<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Vito Tumas</title>
    <description>The latest articles on Forem by Vito Tumas (@vtumas).</description>
    <link>https://forem.com/vtumas</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2703873%2F61a06d79-af13-410e-8674-d9605f327a13.png</url>
      <title>Forem: Vito Tumas</title>
      <link>https://forem.com/vtumas</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/vtumas"/>
    <language>en</language>
    <item>
      <title>A Formal Verification of the XRP Ledger</title>
      <dc:creator>Vito Tumas</dc:creator>
      <pubDate>Wed, 17 Dec 2025 15:52:43 +0000</pubDate>
      <link>https://forem.com/ripplexdev/a-formal-verification-of-the-xrp-ledger-51e4</link>
      <guid>https://forem.com/ripplexdev/a-formal-verification-of-the-xrp-ledger-51e4</guid>
      <description>&lt;p&gt;&lt;em&gt;Summary&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Ripple is working with Common Prefix to specify and formally verify key components of the XRP Ledger: the Payment Engine and the Consensus Protocol. You can read the newly published specification of the Payment Engine on GitHub: &lt;a href="https://github.com/commonprefix/payment-system-docs" rel="noopener noreferrer"&gt;XRP Ledger Payment Engine Specification&lt;/a&gt;.&lt;/p&gt;




&lt;p&gt;In 2012, when the XRP Ledger first went live, its creators had a singular goal: to build a new, more efficient blockchain with the limited resources available. There were no teams of researchers, no formal specifications, and no ecosystem of auditors and academic papers to lean on. The engineers were racing to build a functioning, reliable decentralised ledger. Since those days, the XRP Ledger has become one of the longest-running blockchains, operating for well over a decade without downtime and powering hundreds of millions of ledgers and transactions [&lt;a href="https://xrpscan.com/ledger/100000000" rel="noopener noreferrer"&gt;1&lt;/a&gt;]. However, for the foundational components, the single C++ implementation, xrpld, has served as the only definitive source of truth, creating fundamental challenges:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;The system does not prove that it cannot reach an invalid state. The decade-long track record is a testament to the quality of engineering, however, to prepare the ledger for the next generation of complex features, we must move beyond empirical success to mathematical certainty.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The code tells us, in very precise C++ terms, what it does. It does not always tell us why; in other words, the code doesn't capture intent, making it impossible to distinguish deliberate design from incidental behaviour.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This "specification debt" in the ledger's core is coming due as the blockchain evolves. The XRP Ledger is a dynamic system with new, highly complex features being continuously proposed and added.&lt;/p&gt;

&lt;p&gt;Consider the recent and upcoming features:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://dev.to/ripplexdev/xrp-ledger-lending-protocol-2pla"&gt;The Lending Protocol&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://dev.to/ripplexdev/multi-purpose-tokens-mpt-chronology-and-how-to-test-on-devnet-19nj"&gt;Multi-Purpose Token (MPT) DEX&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://dev.to/ripplexdev/introducing-batch-transactions-on-the-xrp-ledger-more-opportunities-less-friction-50h5"&gt;Batch Transactions&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://ripple.com/insights/unlocking-institutional-access-to-defi-on-the-xrp-ledger/" rel="noopener noreferrer"&gt;Permissioned DEXes&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These are intricate amendments that must weave into the decade-old logic of the ledger, and this raises further questions. How does the new Lending Protocol interact with the rules for frozen assets or clawbacks? How do batch transactions affect the ordering and execution logic of the DEX?&lt;/p&gt;

&lt;p&gt;Each new feature that must fit into an already complex, unspecified system creates an exponential increase in possible states and interactions. Relying on human intuition and traditional testing alone is no longer sufficient to guarantee correctness.&lt;/p&gt;

&lt;h2&gt;
  
  
  A Verifiable Source of Truth
&lt;/h2&gt;

&lt;p&gt;Addressing these challenges requires a formal, abstract specification of the XRP Ledger’s critical components that provides a verifiable source of truth for reasoning about the system’s behaviour, both manually and mechanically. This work produces two distinct but complementary assets:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;A Human-Readable Specification: A clear, unambiguous document describing the system behaviour, serving as the canonical reference for developers and researchers.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;A Machine-Verifiable Model: A formal, mathematical representation of the specification enabling mechanical proofs of system properties, simulating network behaviour, and verifying that new code changes do not violate core safety guarantees.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  The Benefits: A Stronger Foundation
&lt;/h2&gt;

&lt;p&gt;Establishing a formal specification builds a stronger foundation that will deliver compounding benefits across the entire XRP Ledger ecosystem.&lt;/p&gt;

&lt;p&gt;But what are formal methods? In simple terms, formal methods are a set of techniques based on applied mathematics and logic, used to specify, design, and verify complex software and hardware systems.&lt;/p&gt;

&lt;p&gt;Instead of relying solely on traditional testing, which can only prove the presence of bugs, formal methods allow us to prove the absence of certain classes of bugs. They help us answer questions like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;"Is it ever possible for this system to enter an invalid state?"&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;"Does this new Lending Protocol break any of the core invariants of the payment engine?"&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In a system with rising complexity, human intuition fails. Formal methods provide tools to model complex interactions, detect edge-case bugs and provide mathematical certainty about the correctness and soundness of protocol designs.&lt;/p&gt;

&lt;p&gt;First and foremost, a formal specification provides clarity and reduces ambiguity. It acts as a single source of truth, eliminating guesswork for developers building the XRP Ledger. This clarity also leads to faster onboarding, as a structured specification significantly accelerates new developers' and researchers' understanding of the protocol's core mechanics.&lt;/p&gt;

&lt;p&gt;This initiative will also lead to more robust testing and auditing. The specification becomes the canonical benchmark against which to measure the implementation, enabling the creation of more comprehensive test suites and allowing auditors to independently verify an implementation's correctness, not just its internal self-consistency.&lt;/p&gt;

&lt;p&gt;Furthermore, a formal model enables safer protocol evolution. Proposed amendments can be modelled and evaluated with mathematical rigour before a single line of code is written, ensuring predictable, more secure upgrades.&lt;/p&gt;

&lt;p&gt;Finally, this work serves as a foundation for advanced technologies. A formal specification is the essential blueprint for building complex, next-generation features like trustless ZK-bridges. It simplifies the design of cryptographic circuits and dramatically reduces the risk of introducing subtle, critical errors.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Process: From Code to Formal Proof
&lt;/h2&gt;

&lt;p&gt;It's important to understand that turning an existing C++ implementation into a verifiable model is not straightforward. It is an act of modelling and abstraction, not translation. A direct, line-by-line conversion of the C++ codebase into a formal language is not only unfeasible, but it would miss the entire point of the exercise.&lt;/p&gt;

&lt;p&gt;The first stage requires both archaeology and engineering: reviewing design documents and code, and engaging with core developers to understand the system's intent. The output of this stage is a clear, structured document in plain English that describes the protocol's rules without C++-specific details.&lt;/p&gt;

&lt;p&gt;The second stage translates that document into the precise semantics of a formal language, focusing on the areas where the most dangerous bugs hide: concurrency, distributed consensus, and complex state transitions.&lt;/p&gt;

&lt;p&gt;Modelling a system is an iterative process: checking for flaws with tools like &lt;a href="https://lamport.azurewebsites.net/tla/tla.html" rel="noopener noreferrer"&gt;TLA+&lt;/a&gt; and refining the specification based on the results.&lt;/p&gt;

&lt;p&gt;A direct code conversion would inevitably fail for three key reasons:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;State Explosion: The C++ code contains excessive detail. A model that included all of it would have a state space far too large for any computer to analyse effectively.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Implementation Bias: A converted model would be a model of the implementation, not the design. A bug in the code would be faithfully reproduced in the model, defeating the purpose of verification.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Loss of Abstraction: The focus shifts from verifying the high-level correctness of the protocol to checking low-level details such as memory management, losing the crucial design-level insights we aim to gain.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Instead, we model the system as a state machine that expresses its behaviour and the properties we want it to satisfy. This lets model checkers exhaustively search the design for logical flaws, creating a powerful feedback loop: ambiguities in the informal specification and logical errors in the formal model are found and fixed, refining both until they are robust.&lt;/p&gt;
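&lt;p&gt;To make that feedback loop concrete, here is a deliberately tiny sketch in Python (not TLA+, and not the actual XRPL model): a two-account payment system expressed as a state machine, with a model-checker-style exhaustive search verifying that no reachable state violates a conservation invariant.&lt;/p&gt;

```python
# Illustrative toy model: two accounts with integer balances, and
# transfer transitions. Invariant: no balance goes negative and the
# total supply is conserved.

INITIAL = (5, 5)          # balances of accounts A and B
TOTAL = sum(INITIAL)

def transitions(state):
    """Yield every state reachable in one step (a transfer of 1 to 3 units)."""
    a, b = state
    for amount in (1, 2, 3):
        if a - amount >= 0:       # A pays B
            yield (a - amount, b + amount)
        if b - amount >= 0:       # B pays A
            yield (a + amount, b - amount)

def invariant(state):
    return all(x >= 0 for x in state) and sum(state) == TOTAL

def check():
    """Exhaustively explore the state space, as a model checker would."""
    seen = {INITIAL}
    frontier = [INITIAL]
    while frontier:
        state = frontier.pop()
        assert invariant(state), f"invariant violated in {state}"
        for nxt in transitions(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return len(seen)

print(check())  # 11 reachable states, all satisfying the invariant
```

&lt;p&gt;Real model checkers such as TLA+'s TLC apply the same reachability idea to vastly larger, symbolically described state spaces.&lt;/p&gt;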

&lt;h2&gt;
  
  
  Collaboration and Focus
&lt;/h2&gt;

&lt;p&gt;To execute this critical work, Ripple is collaborating with Common Prefix, a firm with deep expertise in formal verification and protocol analysis, including a specialized focus on consensus foundations, interoperability, and mathematically proving core properties of distributed ledger protocols.&lt;/p&gt;

&lt;p&gt;The sheer complexity of the XRP Ledger requires a focused approach. It would be prohibitively expensive and time-consuming to specify the entire system at once. Together, we have identified the two most critical and complex components as our starting point: the Payment Engine and the Consensus Protocol.&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;Payment Engine&lt;/strong&gt; is the system responsible for all value transfer, including complex operations like crossing the decentralised exchange and rippling.&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;Consensus Protocol&lt;/strong&gt; is the heart of the ledger, enabling nodes to reach consensus on a common state. Its correctness is non-negotiable and underpins the safety and liveness of the entire network. We will create a formal model of the consensus mechanism to mathematically prove its core properties of liveness (the network continues to make progress), safety (the network never reaches an invalid state), and finality (transactions are irreversible once confirmed).&lt;/p&gt;
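&lt;p&gt;To see why such properties demand proof rather than intuition, consider one classic safety question in a deliberately simplified form: can two validators with different UNLs ever assemble non-overlapping quorums, each "confirming" a conflicting ledger? The sketch below is a toy illustration only (hypothetical numbers, no Byzantine fault model, and an assumed 80% quorum threshold), not the actual XRPL analysis.&lt;/p&gt;

```python
from math import ceil

# Toy safety check: ignores Byzantine faults and assumes a
# hypothetical 80% quorum threshold.

def quorums_must_intersect(n1, n2, overlap, quorum_fraction=0.8):
    """True if every quorum drawn from UNL 1 must share at least one
    validator with every quorum drawn from UNL 2."""
    q1 = ceil(quorum_fraction * n1)
    q2 = ceil(quorum_fraction * n2)
    union = n1 + n2 - overlap
    # Disjoint quorums exist exactly when q1 + q2 fits inside the union.
    return q1 + q2 > union

# Identical 35-validator UNLs: conflicting quorums are impossible.
print(quorums_must_intersect(35, 35, 35))   # True
# The same UNLs sharing only 10 validators: disjoint quorums exist,
# so two conflicting ledgers could each gather a "quorum".
print(quorums_must_intersect(35, 35, 10))   # False
```

&lt;p&gt;A formal model answers this kind of question for the real protocol, with its actual quorum rules and fault assumptions, rather than for a toy.&lt;/p&gt;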

&lt;p&gt;This initiative marks a crucial step in maturing the XRPL into a platform ready for the next decade of institutional finance and decentralised innovation.&lt;/p&gt;

&lt;p&gt;The shift from code-as-truth to mathematics-as-truth is underway. We invite you to read the &lt;a href="https://github.com/commonprefix/payment-system-docs" rel="noopener noreferrer"&gt;XRP Ledger Payment Engine Specification&lt;/a&gt;. Building on it, we’ll begin formal verification of the Payment Engine and the Consensus Protocol in 2026.&lt;/p&gt;

</description>
      <category>xrpledger</category>
      <category>verification</category>
      <category>safety</category>
    </item>
    <item>
      <title>To Squelch or not to Squelch? Optimising XRP Ledger Validator Communication</title>
      <dc:creator>Vito Tumas</dc:creator>
      <pubDate>Thu, 26 Jun 2025 09:27:33 +0000</pubDate>
      <link>https://forem.com/ripplexdev/to-squelch-or-not-to-squelch-optimising-xrp-ledger-validator-communication-4644</link>
      <guid>https://forem.com/ripplexdev/to-squelch-or-not-to-squelch-optimising-xrp-ledger-validator-communication-4644</guid>
      <description>&lt;p&gt;The XRP Ledger is expanding. As the number of nodes and validators joining the network grows, the Ledger is becoming increasingly resilient and robust. But this success brings a critical challenge: a rising tide of network traffic that, if left unmanaged, can strain the resources of every node operator.&lt;/p&gt;

&lt;p&gt;While innovations like Zero-Knowledge Proofs, Real-World Assets, and DeFi take the spotlight, the essential networking that underpins them is often overlooked. It's the plumbing of the digital house: invisible and forgotten until you turn on the shower and the once-mighty stream weakens to a frustrating trickle. Similarly, a blockchain's networking is invisible until its performance degrades to unacceptable levels of cost and delay.&lt;/p&gt;

&lt;p&gt;Thus, as a preemptive measure to ensure XRP Ledger performance does not falter, we propose optimising the XRP Ledger validator communication. In this article, we will examine the algorithms and steps we are taking to maintain high water pressure.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Background&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Before we dive into the algorithm details, let's briefly recap how communication works in the XRP Ledger. The XRP Ledger network consists of interconnected servers (nodes) running the &lt;a href="https://github.com/XRPLF/rippled" rel="noopener noreferrer"&gt;&lt;code&gt;rippled&lt;/code&gt;&lt;/a&gt; client. A special subset of these nodes are validators, participating directly in the consensus process. Each node maintains several connections to other nodes, known as its peers. Note that these peers are a small subset of all network nodes.&lt;/p&gt;

&lt;p&gt;To &lt;a href="https://xrpl.org/docs/concepts/consensus-protocol/consensus-structure" rel="noopener noreferrer"&gt;achieve consensus&lt;/a&gt;, validators constantly exchange two critical types of messages, which we'll refer to collectively as "validator messages":&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Proposals:&lt;/strong&gt; Messages that contain a set of transactions to be included in the next ledger. Validators use these to agree on a common transaction set.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Validations:&lt;/strong&gt; Messages that serve as a final confirmation, ensuring that a specific ledger version has been agreed upon.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Since validator messages don't have a single destination, they are relayed from node to node across the network. The efficiency of this relay mechanism is critical for the health and performance of the entire XRP Ledger.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Current State: The Great Flood&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Currently, the XRP Ledger employs a &lt;strong&gt;flooding&lt;/strong&gt; (or broadcasting) algorithm to disseminate validator messages. Here's how it operates:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffqvswx981xvylsdqn1u4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffqvswx981xvylsdqn1u4.png" alt="Image description" width="723" height="205"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When a node receives a message from a peer, it first checks if it has encountered this specific message before.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;If the message is new,&lt;/strong&gt; the node:&lt;/li&gt;
&lt;/ul&gt;

&lt;ol&gt;
&lt;li&gt;Caches the message to recognise future duplicates.
&lt;/li&gt;
&lt;li&gt;Processes the message as required.
&lt;/li&gt;
&lt;li&gt;Forwards the message to every peer except the one who sent it.&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;If the message is a duplicate,&lt;/strong&gt; the node drops it.&lt;/li&gt;
&lt;/ul&gt;
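&lt;p&gt;The relay logic above can be sketched in a few lines of Python. This toy simulation uses a hypothetical 30-node peer graph (a ring where each node peers with its three nearest neighbours on either side), not real XRPL topology; it floods one message and counts how many copies arrive versus the 29 that are actually useful.&lt;/p&gt;

```python
# Toy flooding simulation on a deterministic 30-node peer graph.
NUM_NODES = 30

peers = {n: set() for n in range(NUM_NODES)}
for node in range(NUM_NODES):
    for offset in (1, 2, 3):
        peer = (node + offset) % NUM_NODES
        peers[node].add(peer)
        peers[peer].add(node)      # every node ends up with 6 peers

def flood(origin):
    """Flood one message from `origin`; return copies received per node."""
    received = {n: 0 for n in range(NUM_NODES)}
    seen = {origin}                       # origin caches its own message
    queue = [(origin, p) for p in sorted(peers[origin])]
    while queue:
        sender, node = queue.pop(0)       # one in-flight copy arrives
        received[node] += 1
        if node not in seen:              # first copy: cache and forward
            seen.add(node)
            queue.extend((node, p) for p in sorted(peers[node]) if p != sender)
    return received

copies = flood(0)
print(sum(copies.values()), NUM_NODES - 1)  # 151 copies delivered, 29 useful
```

&lt;p&gt;Even though every node forwards only on first receipt, the message is delivered roughly five times per node. That redundancy is exactly what the traffic figures below quantify.&lt;/p&gt;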

&lt;p&gt;The flooding algorithm has a few distinct advantages:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;High Reliability:&lt;/strong&gt; As long as every node has at least one active peer, flooding ensures that all nodes eventually receive every message.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Effectiveness:&lt;/strong&gt; A flooded message will traverse every possible path through the network. Consequently, it is guaranteed to travel the fastest possible route from the sender to every other node.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;However, flooding also has a significant drawback: it is highly inefficient. Because a message travels through all available paths, each node receives the same message multiple times. For example, if a node has 30 peers, it will receive 30 copies of the same validator message—one from each peer. This redundancy is the price paid for reliability, and its scale can be surprising.&lt;/p&gt;

&lt;p&gt;To understand the scale of this inefficiency, let's consider some typical figures for the XRP Ledger:&lt;/p&gt;

&lt;h4&gt;
  
  
  Network &amp;amp; Message Parameters
&lt;/h4&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Total Message Size&lt;/th&gt;
&lt;th&gt;All Validators&lt;/th&gt;
&lt;th&gt;UNL Validators&lt;/th&gt;
&lt;th&gt;Daily Ledgers&lt;/th&gt;
&lt;th&gt;Nodes&lt;/th&gt;
&lt;th&gt;Connections&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;432 bytes&lt;/td&gt;
&lt;td&gt;203&lt;/td&gt;
&lt;td&gt;35&lt;/td&gt;
&lt;td&gt;20,000&lt;/td&gt;
&lt;td&gt;1,015&lt;/td&gt;
&lt;td&gt;~13,000&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h4&gt;
  
  
  Traffic Calculations under Flooding
&lt;/h4&gt;

&lt;p&gt;&lt;em&gt;Note: These calculations assume one proposal and one validation message per validator per ledger. In reality, validators often produce multiple proposals due to differing transaction sets. This assumption provides a conservative lower bound for comparison.&lt;/em&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Each validator generates 432 bytes per ledger. Together, all validators generate:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;203 validators × 432 bytes/validator = &lt;strong&gt;87.7 KB per ledger&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;Under flooding, each peer connection effectively carries 87.7 KB of data for each ledger. Per day, each connection transfers:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;87.7 KB/ledger/connection × 20,000 ledgers/day = &lt;strong&gt;1.754 GB per connection per day&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;Collectively, all connections transfer:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;1.754 GB/connection/day × 13,000 total connections = &lt;strong&gt;22.8 TB per day&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;Each of the 1,015 nodes processes this unique set of messages. Therefore, the 'useful' portion of the total traffic is:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;1.754 GB/node/day × 1,015 nodes = &lt;strong&gt;approx. 1.8 TB per Day&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;Comparing useful traffic to total traffic: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;(1.8 TB unique / 22.8 TB total) × 100% = &lt;strong&gt;7.8%&lt;/strong&gt;, meaning that &lt;strong&gt;92.2%&lt;/strong&gt; of validator traffic is redundant.&lt;/li&gt;
&lt;/ul&gt;
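&lt;p&gt;These figures are easy to reproduce. The snippet below repeats the arithmetic, assuming decimal units (1 KB = 1,000 bytes):&lt;/p&gt;

```python
# Back-of-the-envelope flooding traffic, using the parameters above.
MESSAGE_BYTES = 432
VALIDATORS = 203
LEDGERS_PER_DAY = 20_000
NODES = 1_015
CONNECTIONS = 13_000

per_ledger_kb = VALIDATORS * MESSAGE_BYTES / 1000            # ~87.7 KB per ledger
per_connection_gb = per_ledger_kb * LEDGERS_PER_DAY / 1e6    # ~1.754 GB/connection/day
total_tb = per_connection_gb * CONNECTIONS / 1000            # ~22.8 TB/day network-wide
useful_tb = per_connection_gb * NODES / 1000                 # ~1.8 TB/day of unique traffic

print(round(per_ledger_kb, 1), round(per_connection_gb, 3),
      round(total_tb, 1), round(useful_tb, 1),
      round(100 * useful_tb / total_tb, 1))   # useful share: ~7.8%
```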

&lt;h2&gt;
  
  
  &lt;strong&gt;The bright tomorrow: Squelching&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;The journey to optimise XRP Ledger's validator traffic began in 2020 with an algorithm called "Squelching." The foundational code for this feature has been part of rippled since &lt;a href="https://xrpl.org/blog/2021/rippled-1.7.0" rel="noopener noreferrer"&gt;version 1.7.0&lt;/a&gt; (released in 2021) and was discussed in a previous blog post: &lt;a href="https://xrpl.org/blog/2021/message-routing-optimizations-pt-1-proposal-validation-relaying" rel="noopener noreferrer"&gt;Message Routing Optimizations Pt 1: Proposal &amp;amp; Validation Relaying&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Before we explore the specific mechanics of the squelching algorithm, let's consider the name itself.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Squelch (noun):&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;A soft sucking sound made when pressure is applied to liquid or mud.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;A circuit that suppresses the output of a radio receiver if the signal strength falls below a certain level.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;While the first definition offers a more amusing image, the second is remarkably apt for our algorithm. It directly points to the core mechanism Base Squelching uses to reduce duplicate traffic: &lt;strong&gt;suppression&lt;/strong&gt;. The underlying philosophy is that a server should decide what traffic it wants to receive from its peers and have a mechanism to suppress traffic it is not interested in.&lt;/p&gt;

&lt;p&gt;The squelching algorithm enables a server to select a subset of peers as sources of validator messages and suppress the remaining peers, thereby significantly reducing the duplicate messages it receives. However, despite its availability, this initial version of Squelching did not see widespread adoption across the network. From our observations, we learned about a few key limitations:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Its benefits are only realised when most servers enable the feature.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;It primarily focused on reducing duplicate traffic from &lt;em&gt;trusted&lt;/em&gt; validators.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Crucially, it did not address the growing volume of messages from the ever-increasing number of &lt;em&gt;untrusted&lt;/em&gt; validators.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This last point is particularly important. The recent growth in untrusted validators, while welcome, has increased the processing load on nodes, highlighting the need for a more comprehensive solution.&lt;/p&gt;

&lt;p&gt;With these lessons in mind, we have revisited the original squelching implementation to introduce two key optimisations:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;"Base Squelching"&lt;/strong&gt; is an improved version of the original squelching algorithm, designed to suppress duplicate traffic from both trusted and untrusted validators, thereby ensuring improved interoperability across the network.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;"Enhanced Squelching"&lt;/strong&gt; is a new, complementary algorithm designed to reduce the volume of unique untrusted validator messages.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Base Squelching&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Base Squelching is an improved algorithm designed to drastically reduce duplicate validator messages from all sources—trusted and untrusted. It allows each node to intelligently select its information sources, ensuring seamless operation even when connected to peers that don't use this new logic.&lt;/p&gt;

&lt;h4&gt;
  
  
  How Base Squelching Works
&lt;/h4&gt;

&lt;p&gt;Base Squelching works through a continuous process of source selection and suppression, managed by each node for each validator individually. Think of it as each node constantly interviewing its peers to find the most reliable messengers for a specific validator (let's call it Validator X).&lt;/p&gt;

&lt;p&gt;Here's a detailed look at the process:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw5s68ie404vsgy8h6o3g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw5s68ie404vsgy8h6o3g.png" alt="Image description" width="722" height="296"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Initial Learning Phase – Monitoring All Peers&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Initially, a node listens to all its peers, receiving messages from Validator X from each of them. For each peer, the node maintains a counter that it increments every time it gets a message from Validator X via that peer.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Identifying Potential Sources – The Consideration List&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Once a peer has successfully delivered a certain number of unique messages from Validator X, that peer is deemed a potentially reliable source and is added to a "consideration list" for Validator X. The message threshold introduces a tradeoff: if the threshold is low, peers will be squelched faster, but at the cost of less information about their reliability. Therefore, we chose &lt;strong&gt;20 messages&lt;/strong&gt;, or roughly 20 ledgers, as the criterion.&lt;/p&gt;

&lt;p&gt;This process also includes a timeliness condition: a node resets the peer's progress if it fails to deliver a new message within &lt;strong&gt;8 seconds&lt;/strong&gt;, ensuring that the node considers only fast and well-connected peers.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Selecting Primary Sources – Random Selection&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;When the consideration list has a minimum number of qualified peers, the node conducts a random selection. It chooses &lt;strong&gt;5 peers&lt;/strong&gt; from this list to be its &lt;strong&gt;designated primary sources&lt;/strong&gt; for Validator X's messages. This number provides a good balance between reliability and traffic reduction. If a node selects too few peers, it may not receive sufficient messages from the validator. On the other hand, if it selects too many peers, the benefits of reducing duplicate traffic will diminish. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Squelching Other Peers – Suppressing Duplicates&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;After choosing the primary sources for Validator X, the node sends a "squelch" control message to all its other peers (i.e., those not selected as primary sources for Validator X).&lt;/p&gt;

&lt;p&gt;This squelch message instructs these other peers to temporarily stop forwarding messages from that particular Validator (Validator X) to the node that sent the squelch message. For each peer, the squelch duration is random, between &lt;strong&gt;5 and 10 minutes&lt;/strong&gt;. For nodes with more than 60 peers, the upper bound increases up to an hour. Since the squelch duration is random, this ensures that peers will not start sending messages simultaneously, causing spikes in traffic and load.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5. Dynamic Re-evaluation – Ensuring Adaptability&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The selection of primary sources is not static. The process continuously adapts to network changes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Squelch Expiry:&lt;/strong&gt; When a squelch expires, the peer reenters the learning phase to qualify again.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Selected Peer Disconnects:&lt;/strong&gt; If a chosen primary source disconnects, the node sends an "unsquelch" message to all peers, restarting the entire selection process to find a replacement.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;New Peer Connects:&lt;/strong&gt; A new peer immediately enters the learning phase, competing against the established sources without triggering a complete reset.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
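&lt;p&gt;The selection process in steps 1–4 can be sketched as follows. This is an illustrative Python model, not rippled's C++ implementation: it assumes the consideration list needs at least 5 qualified peers before selection triggers, and it omits the 8-second timeliness reset for brevity.&lt;/p&gt;

```python
import random

MESSAGE_THRESHOLD = 20   # unique messages before a peer is "considered"
MAX_SELECTED = 5         # primary sources chosen per validator

class ValidatorSlot:
    """Tracks, for one validator, how reliably each peer delivers its messages."""

    def __init__(self):
        self.counters = {}        # peer id mapped to delivered-message count
        self.considered = set()   # peers past the threshold
        self.selected = None      # chosen primary sources, once selection happens

    def on_message(self, peer):
        """Record a delivery from `peer`; return the peers to squelch if
        this delivery triggers the selection step, else an empty list."""
        if self.selected is not None:
            return []
        self.counters[peer] = self.counters.get(peer, 0) + 1
        if self.counters[peer] >= MESSAGE_THRESHOLD:
            self.considered.add(peer)
        if len(self.considered) >= MAX_SELECTED:
            self.selected = set(random.sample(sorted(self.considered), MAX_SELECTED))
            return [p for p in self.counters if p not in self.selected]
        return []

# Demo: 8 peers all relay Validator X's messages at the same pace.
slot = ValidatorSlot()
squelched = []
for _ in range(MESSAGE_THRESHOLD):
    for peer in range(8):
        result = slot.on_message(peer)
        if result:
            squelched = result
print(sorted(slot.selected), sorted(squelched))
```

&lt;p&gt;In this demo the first five peers qualify simultaneously, so the "random" selection is deterministic; with more qualified peers than slots, different runs would pick different primary sources.&lt;/p&gt;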

&lt;h3&gt;
  
  
  &lt;strong&gt;Handling Squelch Requests&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;The algorithm also defines how a node (Node R) must respond to squelch requests from its peers (Peer S).&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Maintaining Squelch Records:&lt;/strong&gt; Node R keeps a simple record for each peer, listing which validators that peer has squelched and for how long.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Processing an Incoming Squelch Message:&lt;/strong&gt; When Node R receives a squelch message from Peer S regarding messages from a specific Validator V:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Verify Duration:&lt;/strong&gt; Node R first checks the requested squelch duration, which must be less than a predefined maximum of one hour. This verification is crucial as it ensures that a validator's messages are never indefinitely or excessively silenced by any single peer.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Update Records:&lt;/strong&gt; If the duration is valid, Node R updates its records for Peer S, noting not to send messages from Validator V to Peer S until the squelch duration expires.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Special Case for Own Messages:&lt;/strong&gt; A validator node has a special condition. If Validator A receives a squelch request from one of its peers concerning its own (Validator A's) messages, it ignores this request. The protective measure ensures a validator can always propagate its messages and maintain its network presence.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Clearing a Squelch:&lt;/strong&gt; If Node R receives a squelch message from Peer S for Validator V with a squelch duration of zero, Node R clears any existing squelch entry for Validator V related to Peer S. This effectively acts as an "unsquelch" request.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Applying Squelch Rules During Relaying:&lt;/strong&gt; Before relaying any validator message to a specific peer, Node R consults its records. If Peer S has squelched Validator V, Node R refrains from sending that message to Peer S, while still relaying it to peers for whom no such squelch is active.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
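&lt;p&gt;The recipient-side rules above can be sketched as a small registry. The class and method names below are hypothetical helpers for illustration, not rippled code.&lt;/p&gt;

```python
import time

MAX_SQUELCH_SECONDS = 3600   # the one-hour cap from step 2

class SquelchRegistry:
    """Per-peer records of which validators that peer has squelched."""

    def __init__(self, own_validator_key=None):
        self.own_key = own_validator_key
        self.records = {}   # (peer, validator) mapped to expiry timestamp

    def on_squelch(self, peer, validator, duration, now=None):
        """Apply steps 2-4; return True if the request was honoured."""
        now = time.time() if now is None else now
        if validator == self.own_key:
            return False                              # step 3: never squelch own messages
        if duration == 0:
            self.records.pop((peer, validator), None) # step 4: "unsquelch"
            return True
        if duration > MAX_SQUELCH_SECONDS:
            return False                              # step 2: reject excessive durations
        self.records[(peer, validator)] = now + duration
        return True

    def should_relay(self, peer, validator, now=None):
        """Step 5: relay unless an unexpired squelch is on record."""
        now = time.time() if now is None else now
        expiry = self.records.get((peer, validator))
        return expiry is None or now >= expiry

# Demo, using explicit timestamps instead of the wall clock.
reg = SquelchRegistry(own_validator_key="A")
reg.on_squelch("peer1", "V", 300, now=0)
print(reg.should_relay("peer1", "V", now=100))   # False: squelch active
print(reg.should_relay("peer1", "V", now=400))   # True: squelch expired
print(reg.on_squelch("peer1", "A", 300, now=0))  # False: own messages protected
```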

&lt;p&gt;This recipient-side logic ensures that squelch requests are respected, reducing redundant traffic across the network overall.&lt;/p&gt;

&lt;p&gt;Base Squelching involves each node continuously learning and adapting, identifying useful peers for each specific validator's messages. It randomly chooses a small set of primary sources from a qualified pool and temporarily squelches others for that validator. This dynamic process reduces duplicate messages and keeps the system adaptive to changing network conditions and peer availability.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Quantifying the Impact of Base Squelching&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;While the detailed mechanism of Base Squelching is granular, we can model its aggregate effect as reducing redundant pathways for validator messages, thereby estimating the high-level traffic impact. The result is a projection where each node effectively processes the complete set of validator messages as if receiving them through &lt;strong&gt;5 optimised peer connections&lt;/strong&gt;.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;The total effective connections in the network after squelching:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;1,015 nodes × 5 effective connections/node = &lt;strong&gt;5,075 connections&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;The baseline data rate from the flooding scenario, 1.754 GB/day, is multiplied by the reduced number of network connections. Under Base Squelching, the new daily network traffic is:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;1.754 GB/connection/day × 5,075 connections = &lt;strong&gt;8.9 TB per day&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;Comparing useful traffic to the new total, (1.8 TB unique / 8.9 TB total) × 100% ≈ &lt;strong&gt;20.2%&lt;/strong&gt;, so network redundancy for validator messages drops from 92.2% to &lt;strong&gt;79.8%&lt;/strong&gt;, a substantial reduction in wasted bandwidth just from enabling Base Squelching.&lt;/p&gt;
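&lt;p&gt;These projections can be reproduced with a few lines of arithmetic. The sketch below simply restates the calculation above in Python; the constants come from this article, not from rippled:&lt;/p&gt;

```python
# Back-of-the-envelope projection from the figures above (decimal units).
NODES = 1015
EFFECTIVE_CONNECTIONS_PER_NODE = 5   # peers left unsquelched per validator
GB_PER_CONNECTION_PER_DAY = 1.754    # baseline flooding rate per connection
UNIQUE_TB_PER_DAY = 1.8              # useful (deduplicated) traffic

connections = NODES * EFFECTIVE_CONNECTIONS_PER_NODE           # 5,075
total_tb_per_day = GB_PER_CONNECTION_PER_DAY * connections / 1000
useful_share = UNIQUE_TB_PER_DAY / total_tb_per_day * 100
redundancy = 100 - useful_share

print(f"{connections} connections, {total_tb_per_day:.1f} TB/day, "
      f"{useful_share:.1f}% useful, {redundancy:.1f}% redundant")
```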

&lt;h3&gt;
  
  
  Second Improvement: Enhanced Squelching
&lt;/h3&gt;

&lt;p&gt;While Base Squelching organises the existing validator traffic, Enhanced Squelching fundamentally alters the traffic a node accepts. It takes optimisation a step further by dramatically reducing the &lt;em&gt;volume&lt;/em&gt; of unique messages a node processes, focusing specifically on the growing crowd of untrusted validators.  &lt;/p&gt;

&lt;p&gt;The best way to think of this is like a concierge at an exclusive event. Base Squelching acts like an usher, efficiently guiding accepted guests inside without creating a mob. Enhanced Squelching, however, is the concierge at the velvet rope—it decides who gets in. Its job is to maintain a small, high-quality guest list of untrusted validators and politely turn away the rest.&lt;/p&gt;

&lt;p&gt;Reducing unique untrusted validator messages is crucial because nodes on the XRP Ledger primarily care about messages from trusted validators on their Unique Node List (UNL). They only relay messages from untrusted validators on the off-chance they might be useful to a peer. Enhanced Squelching applies strict "VIP" criteria, monitoring untrusted validators for their activity and network reach. It's worth noting that rippled servers already limit the relay of &lt;em&gt;proposals&lt;/em&gt; from untrusted sources, so Enhanced Squelching focuses on filtering the much more common &lt;em&gt;validation&lt;/em&gt; messages.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;How Enhanced Squelching Works&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Enhanced Squelching enables each node to act like a discerning concierge, identifying a small, active, and well-propagated set of untrusted validators to listen to. It does this by meticulously monitoring their validation messages. Here’s how the algorithm operates:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr0fvs5q2q5fonruordfw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr0fvs5q2q5fonruordfw.png" alt="Image description" width="702" height="274"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Initial Monitoring &amp;amp; New Information:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Initially, a node receives all untrusted validator validation messages from any of its peers. When a node encounters a unique untrusted validation message for the first time, it processes this message and begins tracking the validator's metrics. Tracking is active as long as the node has not yet selected its full complement of primary untrusted validators and has 'open slots' it is looking to fill with qualified candidates.&lt;/p&gt;

&lt;p&gt;Even when a node fills all primary slots, the system remains dynamic. If a currently selected untrusted validator is later deselected, the slot reopens, prompting the node to reconsider other qualifying untrusted validators. This design ensures the network can adapt to new participants while effectively managing load from untrusted sources.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Tracking Untrusted Validator Activity&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;For each untrusted validator (e.g., UntrustedValidatorA), the node maintains several data points based on their validation messages.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A counter of the unique validation messages received from UntrustedValidatorA, to gauge how active the validator is.
&lt;/li&gt;
&lt;li&gt;A record of the distinct peers that relayed validation messages for UntrustedValidatorA, to understand how well-connected and widely seen that validator is.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Finally, the node resets a validator's progress if it does not receive a new unique validation message from that validator within &lt;strong&gt;8 seconds&lt;/strong&gt;, preventing slow or poorly connected validators from being selected.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Qualifying Untrusted Validators for Selection&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;An untrusted validator must meet several criteria to become eligible for selection.&lt;/p&gt;

&lt;p&gt;First, the validator must have originated &lt;strong&gt;20 unique timely messages&lt;/strong&gt;. This threshold introduces a tradeoff similar to the one in Base Squelching: if it is too low, a node may quickly select a less reliable validator.&lt;/p&gt;

&lt;p&gt;Second, at least &lt;strong&gt;5 different peers&lt;/strong&gt; must have sent its validation messages, indicating reasonable network propagation for its validations. If only a single peer were required, a poorly connected validator, or one directly connected to the node, might qualify. On the other hand, if the criterion were much stricter, no validator might meet it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Selecting a Core Set of Untrusted Validators&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;As validators meet the qualification criteria, the node selects them one at a time, on a &lt;strong&gt;first-come, first-served basis&lt;/strong&gt;, up to a maximum of &lt;strong&gt;5&lt;/strong&gt;. These become the primary untrusted validators whose validations the node will accept, process, and relay, subject to Base Squelching.&lt;/p&gt;
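&lt;p&gt;Putting steps 2-4 together, the tracking, qualification, and selection logic can be sketched as follows, using the thresholds from the text (20 unique messages, 5 distinct peers, 8-second idle reset, 5 slots). All names are illustrative assumptions, not rippled identifiers:&lt;/p&gt;

```python
# Illustrative sketch of untrusted-validator tracking and selection.
MIN_UNIQUE_MESSAGES = 20   # unique timely validations required to qualify
MIN_DISTINCT_PEERS = 5     # distinct relaying peers required to qualify
IDLE_RESET_SECONDS = 8     # reset progress if the validator goes quiet
MAX_SELECTED = 5           # primary untrusted validator slots

class UntrustedValidatorTracker:
    def __init__(self):
        self.progress = {}   # validator -> {"count", "peers", "last_seen"}
        self.selected = []   # first-come, first-served, up to MAX_SELECTED

    def on_validation(self, validator, peer, now):
        if validator in self.selected:
            return
        entry = self.progress.setdefault(
            validator, {"count": 0, "peers": set(), "last_seen": now})
        # Idle validators lose their accumulated progress (step 2).
        if now - entry["last_seen"] > IDLE_RESET_SECONDS:
            entry["count"], entry["peers"] = 0, set()
        entry["count"] += 1
        entry["peers"].add(peer)
        entry["last_seen"] = now
        # Qualification (step 3) and first-come, first-served selection (step 4).
        if (entry["count"] >= MIN_UNIQUE_MESSAGES
                and len(entry["peers"]) >= MIN_DISTINCT_PEERS
                and len(self.selected) < MAX_SELECTED):
            self.selected.append(validator)
```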

&lt;p&gt;&lt;strong&gt;5. Squelching Other Untrusted Validators – Instructing Peers&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The node suppresses all other untrusted validators by sending a squelch message to all its peers. Unlike Base Squelching, the duration of an untrusted validator squelch is longer and fixed: &lt;strong&gt;1 hour&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;When a node receives a message from a squelched validator via a new or an existing peer, it responds by sending the peer a squelch message. The node keeps track of the last time it squelched the peer to prevent excessive spam of squelch messages.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;6. Applying Base Squelching to Messages from Selected Untrusted Validators&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The selected validators' messages are processed through the Base Squelching algorithm, ensuring that duplicate transmissions from these untrusted sources are also effectively minimised. If the untrusted validator becomes idle, base squelching deletes the allocated slot, and the enhanced squelching algorithm picks a new validator.&lt;/p&gt;

&lt;p&gt;Enhanced Squelching first narrows the field of untrusted validators to a small, active, and relevant set by meticulously analysing their validation message patterns. Then, Base Squelching deduplicates all messages received from this selected set. This layered approach significantly reduces the processing of less relevant unique untrusted validator traffic and minimises the overall redundant messages a node has to handle.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Quantifying the Impact of Enhanced Squelching&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Combining Enhanced Squelching to select a core set of untrusted validators and Base Squelching to reduce duplicates from all validators significantly reduces overall network traffic. Let's project the traffic based on the parameters defined earlier:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Enhanced Squelching effectively reduces the number of validators from which a node receives messages:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;35 Trusted Validators + 5 Selected Untrusted Validators = &lt;strong&gt;40 Validators&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;By reducing the total number of validators, the algorithm reduces the amount of data generated per ledger:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;40 validators × 432 bytes/validator = &lt;strong&gt;17.3 KB per ledger&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;With this reduced validator set, each connection transfers per day:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;17.3 KB/ledger × 20,000 ledgers/day = &lt;strong&gt;0.346 GB per connection per day&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;Combined with base squelching, the projected total daily traffic is:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;0.346 GB/day per connection × 5,075 connections = &lt;strong&gt;1.78 TB per day&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;This projected traffic of &lt;strong&gt;1.78 TB per day&lt;/strong&gt; is a significant reduction from the &lt;strong&gt;22.8 TB per day&lt;/strong&gt; calculated for the current flooding model, a potential traffic decrease of over &lt;strong&gt;92%&lt;/strong&gt;. Efficiency in processing messages from this selected set of 40 validators approaches optimal levels, significantly improving overall network resource utilisation.&lt;/p&gt;
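&lt;p&gt;As before, the projection is straightforward arithmetic. The sketch below restates it in Python; carrying unrounded intermediate values yields roughly 1.75 TB per day, a small rounding difference from the 1.78 TB figure quoted above:&lt;/p&gt;

```python
# Projection combining Enhanced Squelching (fewer validators) with Base
# Squelching (fewer connections), using the figures from this article.
TRUSTED, SELECTED_UNTRUSTED = 35, 5
BYTES_PER_VALIDATOR = 432
LEDGERS_PER_DAY = 20_000
CONNECTIONS = 5_075                        # 1,015 nodes x 5 effective connections
FLOODING_TB_PER_DAY = 22.8                 # baseline from the flooding model

validators = TRUSTED + SELECTED_UNTRUSTED                      # 40
kb_per_ledger = validators * BYTES_PER_VALIDATOR / 1000        # ~17.3 KB
gb_per_connection_day = kb_per_ledger * LEDGERS_PER_DAY / 1e6  # ~0.346 GB
tb_per_day = gb_per_connection_day * CONNECTIONS / 1000        # ~1.75 TB
reduction_pct = (1 - tb_per_day / FLOODING_TB_PER_DAY) * 100   # over 92%

print(f"{tb_per_day:.2f} TB/day, {reduction_pct:.1f}% reduction")
```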

&lt;h2&gt;
  
  
  &lt;strong&gt;What's Next? The Path to a More Efficient Network&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;The introduction of Base Squelching and the plans for Enhanced Squelching are steps towards a more efficient, scalable, and robust XRP Ledger network. These algorithms enhance network performance and reduce the resource burden on node operators by reducing duplicate and unnecessary unique validator messages.&lt;/p&gt;

&lt;p&gt;Our next step is to start a slow and meticulous rollout plan for these features.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Release of Base Squelching:&lt;/strong&gt; The upcoming rippled version 2.5.0 will include the improved Base Squelching algorithm. Node operators who upgrade to this version can configure and activate the feature; however, it will remain turned off by default. Furthermore, we recommend keeping Base Squelching off initially while we perform canary testing.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Concurrent Canary Testing of Base and Enhanced Squelching:&lt;/strong&gt; Following the release of version 2.5.0, we will initiate a crucial canary testing phase.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Base Squelching:&lt;/strong&gt; We will test it on a controlled set of nodes running version 2.5.0, monitoring its performance and stability in a live environment.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Enhanced Squelching:&lt;/strong&gt; We will conduct canary testing for Enhanced Squelching. While the feature will not be released in version 2.5.0, the necessary code will be available to evaluate it on specific nodes. This will allow us to gather real-world data on its effectiveness early and refine it before a wider public release.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Public Release of Enhanced Squelching:&lt;/strong&gt; Based on its dedicated canary testing outcomes, the Enhanced Squelching algorithm will be publicly available in a subsequent rippled release. This formal release will make the feature available to all node operators.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Gradual Network-Wide Rollout:&lt;/strong&gt; Following respective public releases and successful canary testing of Base and Enhanced Squelching, the final phase will be a gradual, network-wide activation driven by the community. We will collaborate with infrastructure providers to support this process, recommending a cautious approach where operators enable the features on a few servers at a time. Gradual rollout allows them to monitor the real-world impact and ensures that any issues can be addressed with minimal disruption, thereby safeguarding the entire network.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This meticulous, phased strategy of releasing features, testing them in controlled live environments, and then supporting a gradual, community-involved rollout ensures a smooth and safe transition, bringing these powerful optimisations to the entire XRP Ledger ecosystem.&lt;/p&gt;

&lt;p&gt;We are excited about the potential of these improvements and are committed to transparently sharing updates on their progress, deployment, and performance. Stay tuned for more information as we embark on this next phase of network enhancement.&lt;/p&gt;

&lt;p&gt;If you would like to participate in testing or the rollout of the algorithms, please reach out to &lt;a href="mailto:vtumas@ripple.com"&gt;vtumas@ripple.com&lt;/a&gt;. Your contributions are key to a smooth activation! :) &lt;/p&gt;

</description>
      <category>xrpledger</category>
      <category>networking</category>
      <category>optimisation</category>
    </item>
    <item>
      <title>XRP Ledger Lending Protocol</title>
      <dc:creator>Vito Tumas</dc:creator>
      <pubDate>Thu, 16 Jan 2025 16:23:10 +0000</pubDate>
      <link>https://forem.com/ripplexdev/xrp-ledger-lending-protocol-2pla</link>
      <guid>https://forem.com/ripplexdev/xrp-ledger-lending-protocol-2pla</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;The XRP Ledger (XRPL) is a decentralised, open-source blockchain known for its unique consensus mechanism that enables fast, energy-efficient transactions. Unlike proof-of-work systems, XRPL’s consensus algorithm ensures transaction finality in seconds while minimising costs, making it ideal for financial applications. XRPL’s low fees and quick settlement times are especially beneficial for DeFi applications.&lt;/p&gt;

&lt;p&gt;The &lt;a href="https://github.com/XRPLF/XRPL-Standards/pull/239" rel="noopener noreferrer"&gt;XLS-65&lt;/a&gt; and &lt;a href="https://github.com/XRPLF/XRPL-Standards/pull/240" rel="noopener noreferrer"&gt;XLS-66&lt;/a&gt; specifications introduce native primitives to enable uncollateralised, fixed-term loans sourced from a pooled liquidity structure known as a Vault. This article provides an overview of these specifications' key features and functions, highlighting how they contribute to a robust lending framework on the XRP Ledger.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3xdosmny6i00ld0g0e7n.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3xdosmny6i00ld0g0e7n.png" alt="XRP Ledger Lending Protocol Architecture" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  XLS-65: Single Asset Vault Overview
&lt;/h2&gt;

&lt;p&gt;The XLS-65 specification introduces a Single Asset Vault, an on-chain primitive designed to aggregate assets from one or more accounts, making this liquidity accessible to other protocols, such as the Lending Protocol. By decoupling liquidity provision logic from the core protocol or business logic, the Single Asset Vault offers greater flexibility and efficiency in managing pooled assets within the XRP Ledger ecosystem.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Vault Representation and Management&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;The Single Asset Vault is represented on the ledger by a &lt;strong&gt;Vault entry&lt;/strong&gt;. This Vault object is owned and managed by a VaultOwner account responsible for overseeing functionality and asset management. Assets are deposited into the Vault by one or more users, known as &lt;em&gt;depositors&lt;/em&gt;, whose shares represent the proportional ownership of the Vault assets. These shares entitle them to withdraw assets from the Vault.&lt;/p&gt;
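&lt;p&gt;As a rough illustration of proportional share ownership, the sketch below uses the common pooled-vault model in which minted shares equal the deposit scaled by the ratio of total shares to total assets. The class and method names are hypothetical, and XLS-65's exact minting and rounding rules may differ:&lt;/p&gt;

```python
# Minimal sketch of proportional share accounting for a pooled vault.
# Standard shares = deposit * total_shares / total_assets model; illustrative only.
class SingleAssetVault:
    def __init__(self):
        self.total_assets = 0.0
        self.total_shares = 0.0
        self.shares = {}          # depositor -> share balance

    def deposit(self, who, amount):
        if self.total_shares == 0:
            minted = amount       # bootstrap: one share per asset unit
        else:
            minted = amount * self.total_shares / self.total_assets
        self.total_assets += amount
        self.total_shares += minted
        self.shares[who] = self.shares.get(who, 0.0) + minted
        return minted

    def withdraw(self, who, share_amount):
        # Shares are redeemed for a proportional slice of the vault's assets.
        assets = share_amount * self.total_assets / self.total_shares
        self.shares[who] -= share_amount
        self.total_shares -= share_amount
        self.total_assets -= assets
        return assets
```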

&lt;h3&gt;
  
  
  &lt;strong&gt;Withdrawal Policy&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;The withdrawal policy defines how depositors exchange their shares for assets in the vault. The Single Asset Vault currently uses a First-Come, First-Serve policy, where withdrawal requests are processed in the order they are received. While this approach is simple and efficient, it may only suit some DeFi use cases. For example, it can incentivise early withdrawals, making vault liquidity less predictable. Therefore, the vault design is modular, allowing for future implementation of alternative withdrawal policies aligned with specific use cases and offering greater flexibility in managing vault liquidity.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Public and Private Vaults&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;A Vault falls into one of two types: &lt;strong&gt;public&lt;/strong&gt; or &lt;strong&gt;private&lt;/strong&gt;, with the type determined at creation:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Public Vault&lt;/strong&gt;: Any account can deposit and withdraw assets freely, encouraging open participation.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Private Vault&lt;/strong&gt;: Only authorised Depositors, granted permission by the Vault Owner via a Permissioned Domain, can deposit assets. However, any account holding corresponding shares can withdraw assets, ensuring Depositors always retain access to their assets.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This dual structure enables flexible access control, supporting open and restricted configurations within the XRP Ledger ecosystem.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Permissioned Domains for Access Control&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;A &lt;strong&gt;Permissioned Domain&lt;/strong&gt; is an allow-list-based access control mechanism in the XRP Ledger. It enables Vault Owners to specify accepted Credential Authorities and Credential Types. If an account holds a credential of the correct type issued by an approved authority, it is permitted to deposit assets into a Vault and hold the Vault's shares. If the Credential Authority revokes an account's credentials, the account can still redeem the shares it holds; however, it can no longer deposit assets or receive additional shares. For more on Permissioned Domains and Credentials, see the specifications of the &lt;a href="https://github.com/XRPLF/XRPL-Standards/tree/master/XLS-0080d-permissioned-domains" rel="noopener noreferrer"&gt;XLS-80d Permissioned Domain&lt;/a&gt; and &lt;a href="https://github.com/XRPLF/XRPL-Standards/tree/master/XLS-0070d-credentials" rel="noopener noreferrer"&gt;XLS-70d Credentials&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Vault Shares and Transferability&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Vault shares are &lt;strong&gt;first-order assets&lt;/strong&gt; issued directly by the Vault instead of the Vault Owner, with transferability configured at Vault creation:  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Non-transferable shares&lt;/strong&gt;: Cannot be sent to other accounts or traded on the Decentralized Exchange, limiting circulation.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Transferable shares&lt;/strong&gt;: Can be freely traded on the Decentralized Exchange or transferred to other accounts, provided the recipient (or buyer) is authorised to hold the shares. In other words, if a Vault has an associated Permissioned Domain, the recipient must hold credentials accepted in that domain, even when trading on the Decentralized Exchange.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This flexibility allows Vault Owners to tailor liquidity and trading dynamics, with transferable shares offering secondary market opportunities to enhance overall market liquidity.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Compliance Features: Freezing and Clawback&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;The Single Asset Vault supports compliance features, including &lt;strong&gt;asset freezing&lt;/strong&gt; and &lt;strong&gt;clawback&lt;/strong&gt; by the Asset Issuer:  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Account Freeze&lt;/strong&gt;: If an Asset Issuer freezes a Depositor’s account, that Depositor cannot deposit or withdraw assets from the Vault, nor can they transfer or receive shares.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Global Freeze&lt;/strong&gt;: Prevents all Depositors from depositing, withdrawing assets, or transferring shares.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If the Asset Issuer freezes the Vault, all Vault operations and any connected protocols halt. Only the Asset Issuer can recover assets through &lt;strong&gt;Clawback&lt;/strong&gt;, executing a forced withdrawal that burns the Depositor's shares in exchange for the available funds in the Vault.&lt;/p&gt;

&lt;h2&gt;
  
  
  XLS-66: Lending Protocol
&lt;/h2&gt;

&lt;p&gt;The XLS-66 specification introduces the XRP Ledger-native Lending Protocol, which facilitates straightforward, on-chain, uncollateralised fixed-term loans with pre-set interest terms. Loan liquidity is sourced from pooled funds, while the design relies on off-chain underwriting and risk management to assess borrowers’ creditworthiness. In cases of loan default, the First-Loss Capital protection scheme absorbs a portion of losses to protect Vault Depositors.&lt;br&gt;&lt;br&gt;
  The Lending Protocol is represented on the ledger by a &lt;strong&gt;LoanBroker entry&lt;/strong&gt;, created, owned, and managed by the same account as the VaultOwner. Future updates may allow for greater independence and flexibility by decoupling Vault and Lending Protocol components.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Loan Creation and Terms&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Loans in the Lending Protocol are represented on the ledger by a &lt;strong&gt;Loan entry&lt;/strong&gt;—a formalised, on-chain agreement between the Loan Issuer (the owner of the LoanBroker object) and the Borrower. The Loan entry is created through a transaction signed by both parties, binding them to specific terms.&lt;br&gt;&lt;br&gt;
  Since the Lending Protocol relies on off-chain underwriting and risk assessment, it assumes that the Loan Issuer has conducted thorough due diligence before issuing the loan.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Payment Structure and Options&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Loans are issued with a fixed term and a &lt;strong&gt;pre-determined payment schedule&lt;/strong&gt; calculated using an amortisation function, producing fixed-sized payments with varying proportions of principal and interest. XRP Ledger Lending Protocol loans utilise a second-based payment resolution, allowing for intervals as short as 60 seconds, enhancing repayment flexibility.&lt;br&gt;&lt;br&gt;
  Upon issuance, the Borrower can draw down the loan funds and make periodic payments toward the loan. Each payment includes three components:  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Principal&lt;/strong&gt;: Applied toward the loan balance and returned to the Vault.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Interest&lt;/strong&gt;: Also returned to the Vault (minus a LoanBroker fee).
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;LoanBroker Fee&lt;/strong&gt;: Directed to the LoanBroker.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Missed and Late Payments&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If the Borrower misses a scheduled payment, they enter a grace period, during which they can still make a late payment, albeit with a late payment fee and a higher interest rate. After the grace period, the Loan Issuer may default the loan, which:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Recalls any undrawn loan principal to the Loan Issuer.
&lt;/li&gt;
&lt;li&gt;Triggers risk management processes (detailed later in this article).
&lt;/li&gt;
&lt;li&gt;Reduces the Vault’s total value, impacting all Vault Depositors.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Early and Overpayments&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The Borrower may also repay the loan in full before the term ends, with early repayment incurring an early repayment interest rate and fee. Additionally, Borrowers can make overpayments, paying more than the required amount, which is handled based on the Loan configuration:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Non-application&lt;/strong&gt;: Only the minimum payment is applied, disregarding the extra payment.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Application to Principal&lt;/strong&gt;: The overpayment reduces the outstanding principal, lowering future interest. (The Loan Issuer may apply an overpayment interest rate and fee to offset reduced future yield.)&lt;/li&gt;
&lt;/ul&gt;
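&lt;p&gt;The pre-determined, fixed-size payment schedule described above follows standard amortisation maths. Below is a minimal sketch using the usual annuity formula; XLS-66's exact rounding, fee, and rate handling may differ:&lt;/p&gt;

```python
# Standard fixed-payment amortisation, as commonly used for pre-set schedules;
# illustrative only, not the exact XLS-66 formula.
def periodic_payment(principal, annual_rate, n_payments, periods_per_year):
    """Fixed payment such that the loan amortises to zero over n_payments."""
    r = annual_rate / periods_per_year          # interest rate per period
    if r == 0:
        return principal / n_payments
    return principal * r / (1 - (1 + r) ** -n_payments)

def schedule(principal, annual_rate, n_payments, periods_per_year):
    """Yield (interest, principal_part) per period; proportions shift over time."""
    pay = periodic_payment(principal, annual_rate, n_payments, periods_per_year)
    r = annual_rate / periods_per_year
    balance = principal
    for _ in range(n_payments):
        interest = balance * r
        principal_part = pay - interest
        balance -= principal_part
        yield interest, principal_part
```

&lt;p&gt;For example, a 10,000-unit loan at 6% annual interest over 12 monthly payments yields a fixed payment of roughly 860.66 per period, with the interest portion shrinking each period as the principal is repaid.&lt;/p&gt;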

&lt;h3&gt;
  
  
  &lt;strong&gt;Risk Management and First-Loss Capital Protection&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;The Lending Protocol introduces optional first-loss capital protection, where the LoanBroker deposits a fund that can partially cover losses in case of a loan default. This capital protection mitigates risk to Vault Depositors:  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The first-loss capital required is a percentage of the total debt owed to the Vault.
&lt;/li&gt;
&lt;li&gt;Upon default, a portion of this capital is liquidated based on the minimum required cover, and the proceeds are returned to the Vault to cover some of the losses.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Loan Impairment&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If the Loan Issuer anticipates difficulty with a Borrower’s repayment, they can &lt;strong&gt;impair&lt;/strong&gt; the loan, which advances the payment due date to the current time and temporarily reduces the Vault value. This discourages Depositors from withdrawing prematurely, as doing so would shift the burden of losses to the remaining Depositors. Once resolved, the impairment status is lifted.&lt;/p&gt;
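&lt;p&gt;As a purely illustrative numeric sketch of first-loss accounting (the cover rate and the liquidation rule here are hypothetical assumptions, not figures from XLS-66):&lt;/p&gt;

```python
# Hypothetical sketch of first-loss capital cover; values are illustrative.
def required_cover(total_debt, cover_rate):
    """First-loss capital required, as a fraction of total debt owed to the Vault."""
    return total_debt * cover_rate

def absorb_default(loss, deposited_cover, total_debt, cover_rate):
    """On default, liquidate up to the minimum required cover to repay the Vault."""
    liquidated = min(deposited_cover, required_cover(total_debt, cover_rate), loss)
    residual_loss_to_vault = loss - liquidated   # borne by Vault Depositors
    return liquidated, residual_loss_to_vault
```

&lt;p&gt;For instance, with 100,000 of debt and a hypothetical 10% cover rate, a 25,000 default liquidates 10,000 of first-loss capital, leaving Depositors to absorb the remaining 15,000.&lt;/p&gt;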

&lt;h3&gt;
  
  
  &lt;strong&gt;Asset Freezing and Protocol Impact&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Freezing of assets by the Issuer can affect the Lending Protocol. If the Borrower’s account is frozen, preventing them from making a payment, the loan remains active and is not forgiven. After a grace period, the Loan Issuer may:  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Default the Loan&lt;/strong&gt;: Claiming any remaining assets to protect the Vault.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Impair the Loan&lt;/strong&gt;: Accelerating the default process if repayment is unlikely.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Future Development
&lt;/h2&gt;

&lt;p&gt;The next step in developing the Lending Protocol involves introducing a collateral system to enhance risk mitigation and strengthen the security of loan agreements. Borrowers will pledge acceptable assets as collateral, protecting depositors against potential defaults and promoting responsible borrowing practices. The system will support various asset types, providing flexibility in securing loans. A dynamic collateralisation ratio will adjust requirements based on asset volatility and risk profiles, ensuring robust protection for both borrowers and depositors.&lt;br&gt;&lt;br&gt;
  Implementing a robust liquidation mechanism will be crucial to the success of the collateral system. This mechanism will establish clear thresholds for collateral value that trigger liquidation processes, enabling the protocol to recover loan amounts in the event of default. Borrowers will have a grace period to address any shortfalls in collateral, reinforcing the protocol’s commitment to maintaining positive borrower-lender relationships. Real-time collateral valuation through Oracle services (&lt;a href="https://github.com/XRPLF/XRPL-Standards/tree/master/XLS-0047-PriceOracles" rel="noopener noreferrer"&gt;XLS-47 Specification&lt;/a&gt;) and regular audits will ensure accurate collateral values, providing transparency and trust in the lending process. Overall, introducing a collateral system represents a significant advancement in the Lending Protocol's capabilities, contributing to a more secure and efficient lending ecosystem within the XRP Ledger.&lt;/p&gt;

</description>
      <category>xrpledger</category>
      <category>defi</category>
      <category>lendingprotocol</category>
    </item>
  </channel>
</rss>
