<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: lina</title>
    <description>The latest articles on Forem by lina (@lina_lina_lina).</description>
    <link>https://forem.com/lina_lina_lina</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1535130%2Fd73c29f4-eddd-4c46-a79a-d18de6d11db8.png</url>
      <title>Forem: lina</title>
      <link>https://forem.com/lina_lina_lina</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/lina_lina_lina"/>
    <language>en</language>
    <item>
      <title>Exploring Privacy Stack in Anonymous Crypto Wallets</title>
      <dc:creator>lina</dc:creator>
      <pubDate>Mon, 16 Feb 2026 17:53:17 +0000</pubDate>
      <link>https://forem.com/rocknblock/exploring-privacy-stack-in-anonymous-crypto-wallets-4kfb</link>
      <guid>https://forem.com/rocknblock/exploring-privacy-stack-in-anonymous-crypto-wallets-4kfb</guid>
      <description>&lt;p&gt;Privacy is no longer optional in crypto. For Web3 builders and founders, understanding anonymous crypto wallets is essential — not just to protect users, but to design secure, scalable systems. Cake Wallet is one of the most prominent examples of a mobile wallet combining multiple privacy mechanisms across blockchains. In this article, we break down how Cake Wallet works and the architecture patterns that make it truly anonymous.&lt;/p&gt;

&lt;p&gt;This is a short summary of our research on anonymous crypto wallets, highlighting the architecture, privacy features, and technical choices behind Cake Wallet.&lt;/p&gt;

&lt;p&gt;For the full breakdown and detailed insights, read the research: &lt;a href="https://rocknblock.io/blog/how-anonymous-crypto-wallets-work-architecture-privacy-tech-and-features?utm_source=devto&amp;amp;utm_medium=research&amp;amp;utm_campaign=cake+wallet" rel="noopener noreferrer"&gt;How Anonymous Crypto Wallets Work&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;What Is an Anonymous Crypto Wallet?&lt;/h2&gt;

&lt;p&gt;An anonymous crypto wallet is a wallet that reduces linkability between senders, receivers, and transaction amounts. Unlike standard wallets, which are transparent by default, these wallets:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Obscure transaction history&lt;/li&gt;
&lt;li&gt;Hide recipient addresses&lt;/li&gt;
&lt;li&gt;Protect network metadata&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Privacy wallets achieve this through protocol-level cryptography (ring signatures, zk-SNARKs), transaction-level obfuscation (Payjoin), or masking of network connections (Tor routing). The goal is simple: prevent third parties from building a financial profile around a wallet.&lt;/p&gt;

&lt;h2&gt;How Cake Wallet Works&lt;/h2&gt;

&lt;p&gt;Cake Wallet is an open-source, non-custodial wallet built around privacy. Running on iOS, Android, macOS, and Linux with Flutter/Dart and C++ cryptography, it supports multiple blockchains while exposing anonymity features through a unified architecture.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Technical stack highlights:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Flutter/Dart: cross-platform codebase&lt;/li&gt;
&lt;li&gt;C++ FFI: cryptography operations&lt;/li&gt;
&lt;li&gt;flutter_libmonero: Monero integration&lt;/li&gt;
&lt;li&gt;ledger-flutter-plus: hardware wallet support&lt;/li&gt;
&lt;li&gt;reown_flutter: WalletConnect integration&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Supported blockchains &amp;amp; privacy mechanisms:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Monero:&lt;/strong&gt; ring signatures, stealth addresses, RingCT&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Bitcoin:&lt;/strong&gt; Silent Payments, Payjoin&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Litecoin:&lt;/strong&gt; MWEB (hides amounts &amp;amp; addresses)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Zcash:&lt;/strong&gt; zk-SNARK shielded addresses&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Ethereum &amp;amp; Solana:&lt;/strong&gt; transparent, ERC-20/SPL tokens&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;Cake Wallet Architecture Overview&lt;/h2&gt;

&lt;p&gt;Cake Wallet’s architecture is modular, allowing privacy mechanisms to integrate seamlessly:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Presentation:&lt;/strong&gt; UI &amp;amp; reactive state
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Business logic:&lt;/strong&gt; wallet managers &amp;amp; transaction builders
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Domain:&lt;/strong&gt; accounts, keys, transaction rules
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Data:&lt;/strong&gt; encrypted local storage
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Native:&lt;/strong&gt; cryptography &amp;amp; protocol operations
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This design supports cross-chain privacy features while remaining scalable and mobile-friendly.&lt;/p&gt;

&lt;h2&gt;Privacy Features in Cake Wallet&lt;/h2&gt;

&lt;p&gt;Cake Wallet implements multiple layers of privacy to protect transaction origins and destinations, amounts, and network metadata. These mechanisms are designed to break on-chain linkability while keeping transactions verifiable by the network.&lt;/p&gt;

&lt;h3&gt;1. Ring Signatures (Monero)&lt;/h3&gt;

&lt;p&gt;Ring signatures obscure which wallet input is being spent. They allow a user to authorize a transaction without revealing the signer, so observers can only see that someone in a group spent funds.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key Components:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Ring:&lt;/strong&gt; A group of transaction outputs (UTXOs) including 1 real and N−1 decoys. Monero uses a minimum ring size of 16.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Signers:&lt;/strong&gt; Only the sender signs; decoys provide public keys but do not participate.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Key Image:&lt;/strong&gt; Derived from the private key and spent output to prevent double-spending while preserving anonymity.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Pedersen Commitments:&lt;/strong&gt; Hide transaction amounts while allowing network verification of input-output balance.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;RingCT:&lt;/strong&gt; Combines ring signatures with Pedersen commitments to hide both sender and amount.
&lt;/li&gt;
&lt;/ul&gt;
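&lt;p&gt;To make the commitment idea concrete, here is a toy sketch of the homomorphic property Pedersen commitments rely on, using modular integer arithmetic in place of Monero’s ed25519 curve points. The modulus and generators are illustrative choices with no security value:&lt;/p&gt;

```python
# Toy Pedersen-style commitment over integers mod a prime.
# Illustrative only: real Monero commits to values with elliptic-curve
# points, and these small parameters offer no security.

P = 2**61 - 1          # a Mersenne prime as the group modulus (toy choice)
G, H = 3, 7            # two "independent" generators (assumed, not derived safely)

def commit(value, blinding):
    """C = G^value * H^blinding mod P hides `value` behind `blinding`."""
    return (pow(G, value, P) * pow(H, blinding, P)) % P

# Inputs and outputs of a transaction balance (10 = 6 + 4),
# and the output blinding factors sum to the input blinding factor.
c_in  = commit(10, 1234)
c_out = (commit(6, 1000) * commit(4, 234)) % P

# The network can check that the product of output commitments equals the
# input commitment without learning any amount.
assert c_in == c_out
```

&lt;p&gt;The balance check succeeds while the amounts 10, 6, and 4 never appear on chain, which is exactly the property RingCT combines with ring signatures.&lt;/p&gt;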

&lt;p&gt;&lt;strong&gt;Workflow Overview:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Wallet selects unspent outputs (UTXOs) to spend.
&lt;/li&gt;
&lt;li&gt;Random decoys are picked from the blockchain to form a ring.
&lt;/li&gt;
&lt;li&gt;One-time stealth addresses are generated for recipients.
&lt;/li&gt;
&lt;li&gt;Pedersen commitments conceal amounts.
&lt;/li&gt;
&lt;li&gt;MLSAG ring signature is built, proving ownership of one input without revealing which.
&lt;/li&gt;
&lt;li&gt;Transaction is signed and broadcast.
&lt;/li&gt;
&lt;li&gt;Recipient scans blockchain to detect and claim funds.
&lt;/li&gt;
&lt;/ol&gt;
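&lt;p&gt;Steps 1 and 2 above, plus the key-image double-spend check, can be sketched in a few lines. The ring construction and key-image derivation below are simplified stand-ins for Monero’s actual MLSAG math, with hypothetical names throughout:&lt;/p&gt;

```python
import hashlib
import random

# Sketch: form a 16-member ring around one real output, and derive a key
# image that lets the network reject double spends without learning which
# ring member was real. Not Monero's actual construction.

RING_SIZE = 16

def build_ring(real_output, chain_outputs, rng):
    decoys = rng.sample([o for o in chain_outputs if o != real_output],
                        RING_SIZE - 1)
    ring = decoys + [real_output]
    rng.shuffle(ring)            # observers cannot tell which member is real
    return ring

def key_image(private_key, real_output):
    # Deterministic per (key, output): spending the same output twice
    # yields the same image, so the second attempt is rejected.
    return hashlib.sha256(f"{private_key}:{real_output}".encode()).hexdigest()

rng = random.Random(42)
chain = [f"utxo-{i}" for i in range(1000)]
ring = build_ring("utxo-7", chain, rng)
assert len(ring) == RING_SIZE and "utxo-7" in ring

seen = set()
img = key_image("sk-alice", "utxo-7")
seen.add(img)
assert key_image("sk-alice", "utxo-7") in seen   # double spend detected
```

&lt;p&gt;Note that decoy selection in production wallets is far more careful than uniform sampling: decoys are drawn from a distribution matching real spending patterns, which is the trade-off discussed below.&lt;/p&gt;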

&lt;p&gt;&lt;strong&gt;Trade-Offs:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Larger rings increase privacy but also transaction size and fees.
&lt;/li&gt;
&lt;li&gt;Proper decoy selection is critical for maintaining anonymity.
&lt;/li&gt;
&lt;li&gt;Amounts require RingCT for obfuscation.
&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;2. Stealth Addresses&lt;/h3&gt;

&lt;p&gt;Stealth addresses hide who receives funds by generating a unique, one-time destination address for each payment.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How It Works:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Sender derives a one-time public key using elliptic curve operations and a shared secret with the recipient.
&lt;/li&gt;
&lt;li&gt;Recipient scans outputs with a private viewing key and derives the matching spending key.
&lt;/li&gt;
&lt;/ul&gt;
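&lt;p&gt;A toy version of this exchange, with modular Diffie-Hellman standing in for the elliptic-curve operations (all parameters and key values are illustrative):&lt;/p&gt;

```python
import hashlib

# Toy stealth-address sketch: sender and recipient independently derive the
# same one-time address from a shared secret. Modular exponentiation stands
# in for ed25519 point arithmetic; parameters are illustrative only.

P = 2**61 - 1
G = 5

def keypair(secret):
    return secret, pow(G, secret, P)          # (private, public)

recip_view_priv,  recip_view_pub  = keypair(111213)
recip_spend_priv, recip_spend_pub = keypair(141516)
sender_priv,      sender_pub      = keypair(171819)

def one_time_address(shared_point, spend_pub):
    h = int(hashlib.sha256(str(shared_point).encode()).hexdigest(), 16) % P
    return (pow(G, h, P) * spend_pub) % P     # fresh address per payment

# Sender derives the shared secret from the recipient's view key...
addr_sent = one_time_address(pow(recip_view_pub, sender_priv, P),
                             recip_spend_pub)
# ...and the recipient re-derives the same secret while scanning outputs.
addr_scan = one_time_address(pow(sender_pub, recip_view_priv, P),
                             recip_spend_pub)

assert addr_sent == addr_scan
```

&lt;p&gt;Only the view key is needed for scanning, which is why a wallet can watch for incoming payments in the background without ever holding the spend key in memory.&lt;/p&gt;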

&lt;p&gt;&lt;strong&gt;Benefits:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Observers cannot link multiple transactions to the same recipient.
&lt;/li&gt;
&lt;li&gt;Transfers remain non-interactive: no coordination is required.
&lt;/li&gt;
&lt;li&gt;Two-key system (view key + spend key) allows safe background scanning without exposing funds.
&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;3. Bitcoin Silent Payments (BIP-352)&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Each payment uses a unique, one-time address.
&lt;/li&gt;
&lt;li&gt;Public addresses can be shared without compromising incoming payment privacy.
&lt;/li&gt;
&lt;li&gt;Compatible with wallet labeling and tracking without exposing linkability.
&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;4. Payjoin v2 (BIP-77)&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Enhances Bitcoin privacy by mixing sender and receiver inputs.
&lt;/li&gt;
&lt;li&gt;Receiver adds inputs to the transaction, making it hard for outsiders to determine which funds are spent.
&lt;/li&gt;
&lt;li&gt;Uses Oblivious HTTP (OHTTP) to hide IP addresses.
&lt;/li&gt;
&lt;li&gt;Compatible with lightweight wallets; the receiver no longer needs to run an always-on server.
&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;5. MimbleWimble Extension Blocks (Litecoin)&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;MWEB hides addresses and amounts simultaneously.
&lt;/li&gt;
&lt;li&gt;Transactions move into a parallel MWEB layer using cryptographic commitments.
&lt;/li&gt;
&lt;li&gt;Offline receiving is supported.
&lt;/li&gt;
&lt;li&gt;Provides fungibility, opt-in privacy, and scalability while remaining compatible with the base layer.
&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;6. Zcash Shielded Transactions (zk-SNARKs)&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Shielded transactions hide sender, recipient, and amount using zero-knowledge proofs.
&lt;/li&gt;
&lt;li&gt;zk-SNARKs: Zero-Knowledge Succinct Non-Interactive Arguments of Knowledge.
&lt;/li&gt;
&lt;li&gt;Network verifies correctness without seeing sensitive details.
&lt;/li&gt;
&lt;li&gt;Halo 2 upgrades remove trusted setup requirements, enable scalable shielded transactions, and improve proof efficiency.
&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;7. Network-Level Anonymity (Tor Integration)&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Cake Wallet can route all wallet traffic through Tor.
&lt;/li&gt;
&lt;li&gt;Multi-hop onion routing hides IP addresses and geographic location.
&lt;/li&gt;
&lt;li&gt;Protects against ISP monitoring and network-level tracking.
&lt;/li&gt;
&lt;li&gt;Each node in the chain only knows the previous and next hop, never the full path.
&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;Key Design Patterns for Building Anonymous Crypto Wallets&lt;/h2&gt;

&lt;p&gt;Cake Wallet illustrates how to combine protocol-level privacy, transaction-level obfuscation, and network anonymity. Key patterns for builders:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Layer cryptography and network-level protections
&lt;/li&gt;
&lt;li&gt;Modular architecture separating presentation, domain, and native crypto operations
&lt;/li&gt;
&lt;li&gt;Cross-chain abstraction for consistent privacy features across blockchains
&lt;/li&gt;
&lt;li&gt;User-friendly privacy: seamless stealth addresses, ring signatures, and shielded transactions without extra user steps
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These patterns serve as a blueprint for Web3 teams designing secure, scalable, anonymous crypto wallets.&lt;/p&gt;

</description>
      <category>privacy</category>
      <category>web3</category>
      <category>cryptocurrency</category>
    </item>
    <item>
      <title>We Researched Hyperliquid’s Architecture — Here’s What Actually Stands Out</title>
      <dc:creator>lina</dc:creator>
      <pubDate>Tue, 03 Feb 2026 14:23:01 +0000</pubDate>
      <link>https://forem.com/rocknblock/we-researched-hyperliquids-architecture-heres-what-actually-stands-out-3568</link>
      <guid>https://forem.com/rocknblock/we-researched-hyperliquids-architecture-heres-what-actually-stands-out-3568</guid>
      <description>&lt;p&gt;Our team recently conducted a deep technical research into Hyperliquid to understand why it keeps coming up in discussions around high-performance perpetual DEXs. We focused on system design — how trades are executed, where state lives, and how liquidity, custody, and settlement are structured.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Read the full research here: &lt;a href="https://rocknblock.io/blog/how-does-hyperliquid-work-a-technical-deep-dive?utm_source=devto&amp;amp;utm_medium=research&amp;amp;utm_campaign=hl" rel="noopener noreferrer"&gt;How Hyperliquid Works — A Technical Deep Dive&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;In this summary, we cover Hyperliquid’s architecture, the HyperCore order book, HyperEVM, and the mechanisms managing trading, liquidity, and smart contract interactions.&lt;/p&gt;




&lt;h2&gt;DEX Mechanics Background&lt;/h2&gt;

&lt;p&gt;Most DEXs work like this: the market price of one asset relative to another is determined by a predefined curve, and liquidity comes from users called liquidity providers. These providers supply funds but don’t set prices — pricing happens automatically.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Other key features of typical DEXs:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Users retain full control of funds via smart contracts&lt;/li&gt;
&lt;li&gt;Trades can execute without waiting for a counterparty&lt;/li&gt;
&lt;li&gt;Independent pricing, self-custody, and no intermediaries&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Hyperliquid takes a different approach. Its interface resembles a centralized exchange, but all execution is &lt;strong&gt;on-chain&lt;/strong&gt; and fully deterministic.  &lt;/p&gt;




&lt;h2&gt;Hyperliquid Overview&lt;/h2&gt;

&lt;p&gt;Hyperliquid provides a &lt;strong&gt;fully on-chain central limit order book (CLOB)&lt;/strong&gt;. Unlike DEXs that rely on off-chain matching or AMM curves, Hyperliquid performs order creation, matching, and execution directly on-chain.&lt;/p&gt;

&lt;p&gt;It runs on a custom L1 blockchain with two execution environments:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;HyperCore&lt;/strong&gt; — high-performance on-chain trading engine&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;HyperEVM&lt;/strong&gt; — EVM-compatible smart contract environment&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Both operate under &lt;strong&gt;HyperBFT consensus&lt;/strong&gt;.&lt;/p&gt;




&lt;h2&gt;Hyperliquid L1 Blockchain and HyperBFT Consensus&lt;/h2&gt;

&lt;p&gt;Hyperliquid uses &lt;strong&gt;HyperBFT&lt;/strong&gt;, a variant of Delegated Proof-of-Stake (DPoS):&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Leader rotation across multiple rounds&lt;/li&gt;
&lt;li&gt;Validators vote proportionally to stake&lt;/li&gt;
&lt;li&gt;Byzantine fault tolerant: the network stays consistent as long as fewer than one-third of validators are malicious&lt;/li&gt;
&lt;li&gt;Ensures consistent state across all network participants&lt;/li&gt;
&lt;/ul&gt;
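&lt;p&gt;The one-third bound above can be made concrete with the standard BFT arithmetic (this is generic BFT math, not anything HyperBFT-specific):&lt;/p&gt;

```python
# A BFT network of n validators tolerates f Byzantine faults only while
# n is at least 3f + 1; a quorum needs 2f + 1 votes so that any two
# quorums intersect in at least one honest validator.

def max_faulty(n):
    return (n - 1) // 3

def quorum(n):
    return 2 * max_faulty(n) + 1

assert max_faulty(4) == 1 and quorum(4) == 3
assert max_faulty(100) == 33   # 100 validators survive 33 Byzantine ones
assert max_faulty(3) == 0      # 3 validators cannot tolerate any fault
```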




&lt;h2&gt;Hyperliquid Blockchain Architecture&lt;/h2&gt;

&lt;p&gt;Hyperliquid splits blockchain state between &lt;strong&gt;HyperCore&lt;/strong&gt; and &lt;strong&gt;HyperEVM&lt;/strong&gt;, synchronized under HyperBFT.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyaa5r2olvtyz1yw0h2q5.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyaa5r2olvtyz1yw0h2q5.webp" alt="Hyperliquid Architecture" width="" height=""&gt;&lt;/a&gt;&lt;br&gt;&lt;br&gt;
&lt;em&gt;Source: Hyperliquid Community Wiki&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Let’s take a closer look at the diagram. The architecture has four layers:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Frontend layer&lt;/strong&gt; — the interface traders interact with
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Application logic layer&lt;/strong&gt; — trading, margin, and funding rules
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Execution layer&lt;/strong&gt; — HyperCore for the on-chain CLOB, HyperEVM for smart contracts
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Consensus layer&lt;/strong&gt; — HyperBFT ensures state consistency
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Even with two execution environments, &lt;strong&gt;HyperEVM can read and write HyperCore data&lt;/strong&gt;, enabling smart contracts to interact directly with trading and account state.&lt;/p&gt;




&lt;h2&gt;HyperCore On-Chain Order Book&lt;/h2&gt;

&lt;p&gt;HyperCore is the &lt;strong&gt;core of Hyperliquid’s trading engine&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Fully on-chain CLOB&lt;/li&gt;
&lt;li&gt;Handles order creation, cancellation, matching, and execution&lt;/li&gt;
&lt;li&gt;Deterministic price-time priority ensures consistent state across validators&lt;/li&gt;
&lt;/ul&gt;
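&lt;p&gt;Deterministic price-time priority can be sketched as a FIFO queue per price level: resting orders fill strictly in arrival order, so every validator that replays the same order stream reaches the same state. This is an illustrative model, not HyperCore’s actual implementation:&lt;/p&gt;

```python
from collections import deque

# Minimal price-time priority sketch: one FIFO queue of resting orders
# per price level. Deterministic given the same input sequence.

book = {}   # price -> FIFO queue of [order_id, remaining_size]

def rest(price, order_id, size):
    book.setdefault(price, deque()).append([order_id, size])

def match(price, size):
    """Fill an incoming order against resting orders at one price level."""
    fills = []
    queue = book.get(price, deque())
    while size > 0 and queue:
        oid, avail = queue[0]
        take = min(size, avail)
        fills.append((oid, take))
        size -= take
        queue[0][1] -= take
        if queue[0][1] == 0:
            queue.popleft()        # fully filled: time priority moves on
    return fills

rest(100, "A", 5)
rest(100, "B", 5)
# A arrived first, so A fills completely before B is touched.
assert match(100, 7) == [("A", 5), ("B", 2)]
```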

&lt;p&gt;&lt;strong&gt;Sub-second finality and throughput:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Average finality: ~0.2s
&lt;/li&gt;
&lt;li&gt;Throughput: up to 200,000 orders per second
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Order placement types include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Market, Limit, Stop Market, Stop Limit
&lt;/li&gt;
&lt;li&gt;TWAP (time-weighted average price), Scale orders
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Options:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Reduce Only, Good Til Cancel, Post Only, Immediate or Cancel
&lt;/li&gt;
&lt;li&gt;Take Profit, Stop Loss
&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;Margin Management and the Clearinghouse&lt;/h2&gt;

&lt;p&gt;The clearinghouse is Hyperliquid’s &lt;strong&gt;central accounting and risk engine&lt;/strong&gt;. It ensures that all trades, margins, and liquidations are processed deterministically.&lt;/p&gt;

&lt;p&gt;Let’s take a closer look at the diagram below to see how the clearinghouse operates.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq7pgldlck1k99mw88yvx.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq7pgldlck1k99mw88yvx.webp" alt="Clearinghouse Architecture" width="800" height="451"&gt;&lt;/a&gt;&lt;em&gt;Source: Hyperliquid Community Wiki&lt;/em&gt;&lt;/p&gt;

&lt;h3&gt;What the Clearinghouse Does in Hyperliquid&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Tracks balances and positions: user deposits, withdrawals, margin balances, and open positions
&lt;/li&gt;
&lt;li&gt;Enforces leverage and margin requirements: validates margin sufficiency at order submission and before execution
&lt;/li&gt;
&lt;li&gt;Executes liquidations automatically when maintenance thresholds are breached
&lt;/li&gt;
&lt;li&gt;Calculates funding rates for perpetuals to align prices with spot markets
&lt;/li&gt;
&lt;li&gt;Integrates oracle prices from Binance, OKX, Bybit, Kraken, KuCoin, Gate.io, MEXC, and Hyperliquid
&lt;/li&gt;
&lt;li&gt;Supports multiple margin modes: isolated and cross margin
&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;Understanding Funding Rate for Perpetuals&lt;/h2&gt;

&lt;p&gt;Above, we mentioned the concept of funding rate. Let’s break down what it is.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Funding rate&lt;/strong&gt; keeps Hyperliquid perpetuals aligned with the underlying spot market. It balances incentives between long and short traders.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How it works:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Perp price above spot → longs pay shorts
&lt;/li&gt;
&lt;li&gt;Perp price below spot → shorts pay longs
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Formula:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Funding Rate = Average Premium Index (P) + clamp(Interest Rate – P, -0.0005, 0.0005)&lt;/p&gt;

&lt;p&gt;Where:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Premium Index (P):&lt;/strong&gt; difference between impact price and oracle price
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Interest Rate:&lt;/strong&gt; fixed at 0.01%
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Clamp:&lt;/strong&gt; limits sharp deviations to [-0.0005, 0.0005]
&lt;/li&gt;
&lt;/ul&gt;
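&lt;p&gt;The formula above translates directly into code. This is a sketch of the published formula; Hyperliquid’s production calculation may differ in detail:&lt;/p&gt;

```python
def clamp(x, lo, hi):
    return max(lo, min(hi, x))

# Direct transcription of the funding formula (rates per funding interval):
def funding_rate(premium_index, interest_rate=0.0001):   # 0.01% interest
    return premium_index + clamp(interest_rate - premium_index,
                                 -0.0005, 0.0005)

# Perp trading at a 0.1% premium: rate is positive, so longs pay shorts.
assert funding_rate(0.001) > 0
# Perp exactly at spot: only the small interest component remains.
assert funding_rate(0.0) == 0.0001
```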

&lt;p&gt;&lt;strong&gt;Payment frequency:&lt;/strong&gt; hourly, keeping perp prices continuously aligned with spot.&lt;/p&gt;




&lt;h2&gt;HyperEVM Smart Contract Environment&lt;/h2&gt;

&lt;p&gt;HyperEVM is &lt;strong&gt;EVM-compatible&lt;/strong&gt; and runs on the same blockchain:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Supports Ethereum tools and EIP-1559
&lt;/li&gt;
&lt;li&gt;Priority fees burned to zero address
&lt;/li&gt;
&lt;li&gt;Dual block types: small blocks carry 2M gas roughly every second, large blocks 30M gas roughly every minute
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;How it interacts with HyperCore:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftzhgneyuhrz4glhyw4dw.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftzhgneyuhrz4glhyw4dw.webp" alt="HyperCore and HyperEVM Data Flow" width="800" height="474"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Source: Hyperliquid Community Wiki&lt;/em&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Read access: asset prices, open orders, trade history, balances, validator deposits, blockchain metadata
&lt;/li&gt;
&lt;li&gt;Write access: manage orders, execute trades, interact with trading engine in real-time
&lt;/li&gt;
&lt;li&gt;Smart contracts act as active market participants without compromising HyperCore determinism
&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;Key Takeaways&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Hyperliquid&lt;/strong&gt; is a fully on-chain, high-performance perpetual DEX
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;HyperCore:&lt;/strong&gt; on-chain CLOB, deterministic execution, sub-second finality
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;HyperEVM:&lt;/strong&gt; EVM-compatible environment with HyperCore access
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Clearinghouse:&lt;/strong&gt; enforces margin, liquidation, and funding rates using integrated oracle prices
&lt;/li&gt;
&lt;li&gt;Multiple order types and leverage options, all processed on-chain
&lt;/li&gt;
&lt;li&gt;Hourly &lt;strong&gt;funding rate&lt;/strong&gt; aligns perpetuals with spot
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This combination of trading engine and smart contract environment provides transparent, programmable, and high-performance DeFi markets.&lt;/p&gt;

&lt;p&gt;Make sure to &lt;a href="https://x.com/RockNBlockX" rel="noopener noreferrer"&gt;follow us on X&lt;/a&gt; for more deep dives.&lt;/p&gt;

&lt;p&gt;We ❤️ Development&lt;/p&gt;

</description>
      <category>hyperliquid</category>
      <category>web3</category>
      <category>perpetuals</category>
    </item>
    <item>
      <title>Making blockchain data sane with smarter tools</title>
      <dc:creator>lina</dc:creator>
      <pubDate>Fri, 25 Jul 2025 12:00:24 +0000</pubDate>
      <link>https://forem.com/rocknblock/making-blockchain-data-sane-with-smarter-tools-3e7a</link>
      <guid>https://forem.com/rocknblock/making-blockchain-data-sane-with-smarter-tools-3e7a</guid>
      <description>&lt;p&gt;If you’ve ever tried to extract data from a blockchain, you know it’s not exactly plug-and-play. You’re dealing with distributed infrastructure, frequent reorgs, and often incomplete APIs. The data’s all there — somewhere — but getting it out, structured, and production-ready is a project of its own.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;In our &lt;a href="https://rocknblock.io/blog/a-deep-dive-into-how-to-index-blockchain-data?utm_source=devto&amp;amp;utm_medium=blog&amp;amp;utm_campaign=indexer+deep+dive" rel="noopener noreferrer"&gt;full deep dive&lt;/a&gt;, we explore the whole landscape. This is a shorter version — a technical summary of the different approaches and tools we’ve seen work when you need blockchain data at scale.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;The Most Basic Blockchain Data Indexing Solution (And Why It’s Not Enough)&lt;/h2&gt;

&lt;p&gt;Every blockchain indexing setup starts with the node. It exposes an RPC interface that lets you query raw data from the chain. On Ethereum, the most straightforward way to get started is with eth_getLogs.&lt;/p&gt;

&lt;h3&gt;How eth_getLogs works&lt;/h3&gt;

&lt;p&gt;Logs are emitted by smart contracts to provide information for off-chain consumers — they exist for this exact purpose. With eth_getLogs, you can filter by event signature, contract address, and block range. It’s simple, efficient for many use cases, and works reliably over time.&lt;/p&gt;

&lt;p&gt;But once things get more complex, eth_getLogs starts to show its limits.&lt;/p&gt;

&lt;p&gt;Logs don’t contain everything. If you need transaction metadata — like timestamps, sender info, or execution context — you’ll have to make additional RPC calls like eth_getTransactionByHash. This means multiple queries per event, which slows down the pipeline and introduces inefficiencies, especially when working with high volumes of data.&lt;/p&gt;
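&lt;p&gt;The pattern looks like this in practice: one filtered eth_getLogs request, then a follow-up eth_getTransactionByHash per returned log. The contract address and topic hash below are the well-known USDT Transfer(address,address,uint256) event, used purely as an example; pair these payloads with any HTTP client against your own RPC endpoint:&lt;/p&gt;

```python
import json

# Build the two JSON-RPC payloads described above. Offline sketch: we only
# construct requests here, sending them is left to your HTTP client.

TRANSFER_TOPIC = "0xddf252ad1be2c89b69c2b068fc378daa952ba7f163c4a11628f55a4df523b3ef"

def get_logs_request(contract, from_block, to_block, request_id=1):
    return json.dumps({
        "jsonrpc": "2.0", "id": request_id, "method": "eth_getLogs",
        "params": [{
            "address": contract,
            "topics": [TRANSFER_TOPIC],          # filter by event signature
            "fromBlock": hex(from_block),
            "toBlock": hex(to_block),
        }],
    })

def tx_request(tx_hash, request_id):
    # One extra round trip per event: this is the inefficiency in question.
    return json.dumps({"jsonrpc": "2.0", "id": request_id,
                       "method": "eth_getTransactionByHash",
                       "params": [tx_hash]})

req = json.loads(get_logs_request("0xdAC17F958D2ee523a2206206994597C13D831ec7",
                                  18_000_000, 18_000_100))
assert req["params"][0]["fromBlock"] == "0x112a880"
```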

&lt;h3&gt;Using eth_getBlockReceipts&lt;/h3&gt;

&lt;p&gt;To improve on that, Ethereum provides eth_getBlockReceipts, which returns all transaction receipts from a given block. This gives you both the input data (calldata) and the resulting logs in one request. It’s a more complete view of block activity and helps reduce the number of round trips to the node.&lt;/p&gt;

&lt;p&gt;Still, there are trade-offs. eth_getBlockReceipts doesn’t support filtering — you can’t ask for just the receipts related to a specific contract or event. So even though it reduces the number of calls, it increases the amount of data you have to process.&lt;/p&gt;

&lt;p&gt;This can be especially limiting in protocols like Uniswap V3, where swap events trigger deeper state changes that aren’t captured in logs or receipts. To correctly track LP fees, you need access to updated storage values like FeeGrowth, which aren’t emitted as events and aren’t included in receipts. The only way to get them is by querying the contract’s storage directly — and doing that per transaction doesn’t scale, especially on fast chains with hundreds of swaps in a single block.&lt;/p&gt;

&lt;h3&gt;Full execution context with debug_traceBlock&lt;/h3&gt;

&lt;p&gt;For use cases like Uniswap V3 state changes, Ethereum offers a more powerful option: the debug_traceBlock method. It’s part of the debug API, and not all RPC providers expose it — but when available, it gives full execution traces for each transaction in a block. That includes calldata, logs, internal calls, and storage changes.&lt;/p&gt;

&lt;p&gt;This lets you extract values like FeeGrowth directly from the execution trace, without making additional storage queries. It also shows the full call tree across contracts, which is essential when you need to understand how different components of a protocol interact in a single transaction.&lt;/p&gt;
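&lt;p&gt;As a hedged example, Geth exposes this via debug_traceBlockByNumber with the built-in callTracer; availability and tracer options vary by client and provider, so treat this as a sketch of the request shape:&lt;/p&gt;

```python
import json

# Request payload for a full-block trace with the call tree included.
# debug_* methods are typically disabled on public endpoints; you need a
# node (or provider plan) that exposes the debug namespace.

def trace_block_request(block_number, request_id=1):
    return json.dumps({
        "jsonrpc": "2.0", "id": request_id,
        "method": "debug_traceBlockByNumber",
        "params": [hex(block_number),
                   {"tracer": "callTracer"}],   # returns the full call tree
    })

payload = json.loads(trace_block_request(18_000_000))
assert payload["method"] == "debug_traceBlockByNumber"
assert payload["params"][0] == "0x112a880"
```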

&lt;h2&gt;The limitations of polling the node&lt;/h2&gt;

&lt;p&gt;Working directly with the node still comes with two major limitations.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;First&lt;/strong&gt;, there’s no real push model. You can’t subscribe to a stream of historical data from a specific block. WebSockets only give you events from the moment you connect, and if the connection drops, you lose data. This makes real-time indexing fragile unless you implement polling logic on the client side.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Second&lt;/strong&gt;, nodes don’t handle chain reorgs for you. If a block gets orphaned, you won’t be notified. You either have to stick to finalized blocks (which adds delay), or write your own logic to detect and handle reorgs. That’s a significant amount of overhead for something the node already does internally.&lt;/p&gt;
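&lt;p&gt;A minimal sketch of the client-side reorg handling this forces you to write: remember the hashes of recently indexed blocks and verify each new block’s parent pointer against them:&lt;/p&gt;

```python
# Client-side reorg detection: a mismatch between a new block's parent
# hash and the hash we stored one height below means our view of the
# chain was orphaned and the indexer must rewind.

class ReorgDetector:
    def __init__(self):
        self.canonical = {}              # height -> hash of block we indexed

    def observe(self, height, block_hash, parent_hash):
        """Returns True if the block extends our view, False on a reorg."""
        parent = self.canonical.get(height - 1)
        if parent is not None and parent != parent_hash:
            # Our stored block at height-1 was orphaned: drop it and
            # everything above, then let the caller re-poll from there.
            self.canonical = {h: v for h, v in self.canonical.items()
                              if height - 1 > h}
            return False
        self.canonical[height] = block_hash
        return True

d = ReorgDetector()
assert d.observe(1, "0xaa", "0x00")
assert d.observe(2, "0xbb", "0xaa")
assert not d.observe(3, "0xcc", "0xff")   # parent mismatch: reorg at height 2
assert 2 not in d.canonical
```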

&lt;p&gt;So while node-level indexing using RPC and debug methods is the foundation of many tools, it has clear limits — especially for teams building real-time, reliable, or high-volume data pipelines.&lt;/p&gt;

&lt;h2&gt;Solving Polling Limitations with Firehose by The Graph&lt;/h2&gt;

&lt;p&gt;The two main pain points with traditional polling-based blockchain indexers led to a new approach that actually solves them: Firehose from The Graph. Let’s break down how this service works.&lt;/p&gt;

&lt;h3&gt;The first piece is a modified node&lt;/h3&gt;

&lt;p&gt;Running a regular blockchain node like the ones discussed earlier doesn’t make much sense—it just moves us back to the inefficient polling model.&lt;/p&gt;

&lt;p&gt;Instead, Firehose runs a forked node with a streaming patch that the service can read from. Here’s how it works:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;When a new block lands on the node, it’s immediately pushed into a pipe.&lt;/li&gt;
&lt;li&gt;The indexing service reads from this pipe in real time.&lt;/li&gt;
&lt;li&gt;For Ethereum, this requires a custom fork since there’s no official way to patch nodes for streaming. On Solana, it’s simpler—there’s a Geyser plugin that allows hooking into the node’s events. &lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;Adding historical streaming&lt;/h3&gt;

&lt;p&gt;Standard nodes aren’t built to stream historical blockchain data from any point in the past. Here’s why:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Nodes rely on efficient storage, usually on disk, optimized for quick lookups.&lt;/li&gt;
&lt;li&gt;Streaming historical data means constant heavy reads from storage, which can overload the system. &lt;/li&gt;
&lt;li&gt;Streaming live data in memory is one thing, but hitting storage nonstop for older blocks creates unpredictable load.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Because of this, nodes don’t support historical streaming out of the box. Firehose addresses this by providing a service that can stream blockchain data from any block height, letting indexers replay the chain as needed.&lt;/p&gt;

&lt;h3&gt;The second piece is cloud storage&lt;/h3&gt;

&lt;p&gt;Firehose stores data as flat files of blocks (similar to what the node itself uses), the smallest unit that is still efficient to handle. It uses S3-compatible cloud storage, which brings some big benefits:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Cloud-native and serverless, so there is no infrastructure to manage or scale.&lt;/li&gt;
&lt;li&gt;You pay only for what you use.&lt;/li&gt;
&lt;li&gt;No vendor lock-in: almost every cloud provider offers S3-compatible storage with similar APIs, so switching providers is straightforward.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;The final piece is a better API&lt;/h3&gt;

&lt;p&gt;Regular nodes communicate over JSON-RPC via HTTP, returning verbose plain-text JSON, which isn’t very efficient for modern indexing tools.&lt;/p&gt;

&lt;p&gt;Firehose uses gRPC, a binary protocol that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Packs data efficiently before streaming.&lt;/li&gt;
&lt;li&gt;Works across languages: schemas are defined once, then client code is generated in whatever language is needed, which eliminates writing and maintaining separate client libraries per language.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;Firehose blockchain indexing service workflow explained&lt;/h3&gt;

&lt;p&gt;Here’s the basic flow of the Firehose service:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A blockchain node is run and modified to enable real-time streaming.&lt;/li&gt;
&lt;li&gt;The streamed data is pushed into cloud storage buckets (e.g., S3).&lt;/li&gt;
&lt;li&gt;A streaming interface is built that users connect to for blockchain data indexing.&lt;/li&gt;
&lt;li&gt;A key part of this interface is the Joined Block Source—a mechanism that automatically switches between data sources depending on user needs.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;For example&lt;/strong&gt;, if a user wants to stream blocks starting from an hour ago (historical data), the service initially fetches data from historical storage (the buckets). Once the user catches up to the latest block (the current block head), the stream switches automatically to real-time data delivered directly from the modified node.&lt;/p&gt;
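&lt;p&gt;The switching logic above can be sketched as a small generator. This is an illustrative sketch, not the actual Firehose implementation; the function and parameter names are hypothetical:&lt;/p&gt;

```python
# Hypothetical sketch of a "Joined Block Source": replay historical blocks
# from bucket storage, then hand over to the live node stream at the head.
def joined_block_source(start_block, head_block, historical, live):
    """Yield blocks from `start_block`, switching sources at the chain head.

    historical: mapping of block number -> block (stand-in for the buckets)
    live: iterable of blocks from `head_block` onward (stand-in for the node)
    """
    number = start_block
    # Phase 1: replay history from storage until we reach the head.
    while number < head_block:
        yield historical[number]
        number += 1
    # Phase 2: caught up -- continue with real-time blocks from the node.
    for block in live:
        yield block
```

&lt;p&gt;The consumer sees one continuous stream; it never has to know which source a given block came from.&lt;/p&gt;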

&lt;h3&gt;
  
  
  User benefits of Firehose streaming
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Cursor-based streaming lets users specify the exact block to start from, enabling precise indexing.&lt;/li&gt;
&lt;li&gt;Chain-agnostic design works across blockchain networks—the node layer changes, but storage and API remain the same.&lt;/li&gt;
&lt;li&gt;Immediate reorg notifications ensure consistency across indexers.&lt;/li&gt;
&lt;li&gt;Unified stream for both historical and live data—no manual switching needed.&lt;/li&gt;
&lt;li&gt;Reorg logic is fully handled inside Firehose, so clients only need to respond to events.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This architecture removes major pain points in blockchain indexing and delivers a scalable, reliable solution that simplifies how developers and applications consume blockchain data.&lt;/p&gt;

&lt;h2&gt;
  
  
  How Firehose keeps indexing always up and running
&lt;/h2&gt;

&lt;p&gt;To ensure 100% availability, the architecture is built to avoid any single point of failure. Here’s how Firehose handles it:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;At least two nodes stream blocks in parallel. One node is kept as the primary source, and the second acts as a backup. An RPC provider working in polling mode is added as an additional fallback.&lt;/li&gt;
&lt;li&gt;To handle these data streams efficiently, the reader component is split into at least two instances. These readers independently fetch blocks from different sources and write them into a centralized bucket storage.&lt;/li&gt;
&lt;li&gt;Each reader exposes a gRPC interface to stream binary block data.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The Firehose component performs the following for end users:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Subscribes to multiple live sources to get the freshest data.&lt;/li&gt;
&lt;li&gt;Merges incoming data streams and performs deduplication.&lt;/li&gt;
&lt;li&gt;Whichever reader delivers a block first, that block is sent to the user.&lt;/li&gt;
&lt;li&gt;If the primary node fails, the backup continues streaming without disruption.&lt;/li&gt;
&lt;/ul&gt;
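&lt;p&gt;The merge-and-deduplicate step can be sketched as follows (an illustrative simplification, not the real Firehose code): whichever reader delivers a block number first wins, and later copies of the same block are dropped.&lt;/p&gt;

```python
# Merge blocks arriving from several readers, keeping the first copy of
# each block number and discarding duplicates from slower readers.
def merge_dedup(arrivals):
    """arrivals: iterable of (block_number, reader_id, payload) in arrival order.

    Yields each block number exactly once, from the reader that won the race.
    """
    seen = set()
    for number, reader_id, payload in arrivals:
        if number in seen:
            continue  # duplicate from a slower reader -- ignore it
        seen.add(number)
        yield number, reader_id, payload
```

&lt;p&gt;Because every block is accepted from whichever source responds first, a failed primary node simply stops winning the race and the backup takes over with no gap in the stream.&lt;/p&gt;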

&lt;h3&gt;
  
  
  Handling data duplication in storage
&lt;/h3&gt;

&lt;p&gt;Since all readers write blocks to the same bucket, deduplication at the storage level is essential. To solve this, a dedicated merger service is introduced that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Pulls all blocks from the primary bucket (One blocks bucket).&lt;/li&gt;
&lt;li&gt;Optimizes storage of finalized blocks by bundling them into groups of 100.&lt;/li&gt;
&lt;li&gt;Writes these optimized bundles into a separate storage—the Merged blocks bucket.&lt;/li&gt;
&lt;li&gt;Stores all forked blocks separately in the Forked blocks bucket.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now, Firehose works with three buckets:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;One blocks bucket (raw blocks from readers)&lt;/li&gt;
&lt;li&gt;Merged blocks bucket (deduplicated, optimized bundles)&lt;/li&gt;
&lt;li&gt;Forked blocks bucket (fork data)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;When a large historical range is requested, the service delivers blocks in bundles of 100 instead of one by one, making retrieval faster and more efficient.&lt;/p&gt;
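&lt;p&gt;The merger's bundling step can be sketched like this. The object-naming scheme below (bundles aligned to multiples of 100, zero-padded names) is an assumption for illustration, not the exact format Firehose uses:&lt;/p&gt;

```python
# Group finalized blocks into bundles of 100 for the Merged blocks bucket.
BUNDLE_SIZE = 100

def bundle_key(block_number):
    """Hypothetical object name for the bundle a finalized block belongs to."""
    base = (block_number // BUNDLE_SIZE) * BUNDLE_SIZE
    return f"{base:010d}.merged"

def bundle_blocks(block_numbers):
    """Map each bundle object name to the sorted block numbers it contains."""
    bundles = {}
    for number in sorted(block_numbers):
        bundles.setdefault(bundle_key(number), []).append(number)
    return bundles
```

&lt;p&gt;Serving a historical range then means fetching a handful of bundle objects rather than thousands of individual block files.&lt;/p&gt;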

&lt;h2&gt;
  
  
  Remaining challenges
&lt;/h2&gt;

&lt;p&gt;Firehose solves key problems related to fetching data directly from nodes and greatly improves service reliability. However, overfetching remains an issue: Firehose currently streams all data without filtering, which isn’t optimal since different applications require different data subsets.&lt;/p&gt;

&lt;p&gt;Standard filter presets can’t cover every use case because each app’s needs are unique and often complex.&lt;/p&gt;

&lt;p&gt;The simplest and most flexible solution is to let developers write custom filters themselves, streaming only the filtered data their applications actually need and making Firehose more efficient and adaptable. This is where Substreams steps in.&lt;/p&gt;

&lt;h2&gt;
  
  
  Custom Data Filtering with Substreams
&lt;/h2&gt;

&lt;p&gt;Substreams is an engine that allows developers to upload their own code — essentially a function that takes some input, processes it, and returns a result — compiled to WebAssembly.&lt;/p&gt;

&lt;p&gt;In practice, the developer writes a function that takes input (for example, a block) and outputs something specific — like Raydium events. How these Raydium events are extracted from the block depends entirely on the developer’s logic.&lt;/p&gt;

&lt;p&gt;The code is written, compiled, and uploaded to the server — from there, the engine runs that function on every block. This means the stream delivers exactly the custom data the application needs, as defined by the developer’s logic.&lt;/p&gt;

&lt;h2&gt;
  
  
  How Blockchain Data Streaming Service Architecture Evolves with Substreams
&lt;/h2&gt;

&lt;p&gt;When Substreams is introduced, the architecture shifts as follows:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Substreams operates as its own service alongside Firehose.&lt;/li&gt;
&lt;li&gt;It runs developer-supplied WebAssembly (Wasm) modules, processes incoming block data, and streams back only the filtered, application-specific data.&lt;/li&gt;
&lt;li&gt;Developers define exactly what data they need.&lt;/li&gt;
&lt;li&gt;Contracts, events, or on-chain data relevant to the app are specified — no unnecessary data floods the client.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To support this, a Relayer component is introduced:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;In the original Firehose setup, Firehose was the sole consumer of reader streams and handled deduplication itself. Now that both Firehose and Substreams consume block data, deduplication logic is moved into the Relayer.&lt;/li&gt;
&lt;li&gt;The Relayer ensures that whichever node delivers the block first is the one whose data gets streamed to clients.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  How Substreams Blockchain Data Streaming Service Scales
&lt;/h2&gt;

&lt;p&gt;The Substreams service is built around two core components: the Front Tier and the Worker Pool.&lt;/p&gt;

&lt;p&gt;When a user requests processing for a block range — for example, from block 10,000 to 14,999 (5,000 blocks) — the request is sent to the Front Tier.&lt;/p&gt;

&lt;p&gt;The Front Tier manages a group of workers (Substreams Tier 2). Each worker can handle up to 16 concurrent tasks. The Front Tier splits the requested range into smaller segments of about 1,000 blocks each and distributes these segments across the workers.&lt;/p&gt;

&lt;p&gt;Each worker processes its assigned block segment and writes the resulting data into a dedicated Substreams store bucket. This bucket serves as a cache layer that stores processed data for quick access and efficient retrieval — its importance will be covered in more detail when discussing data bundling.&lt;/p&gt;

&lt;p&gt;Instead of streaming data directly back to the Front Tier, the workers stream progress updates. These updates indicate when a segment finishes processing or if an error occurs (e.g., a function revert), since user-defined logic might occasionally fail.&lt;/p&gt;

&lt;p&gt;The Front Tier ensures strict ordering by waiting for the first segment to finish before streaming its data to the user. It then moves sequentially through each segment, waiting for completion before sending its data. This guarantees a reliable, ordered data stream from the start to the end of the requested block range.&lt;/p&gt;
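&lt;p&gt;The Front Tier's scheduling can be sketched in a few lines. Names and the exact segment size are illustrative, based only on the description above:&lt;/p&gt;

```python
# Split a requested block range into ~1,000-block segments for the workers,
# then emit results strictly in segment order.
SEGMENT_SIZE = 1_000

def split_range(start, end):
    """Split the inclusive range [start, end] into (seg_start, seg_end) pairs."""
    segments = []
    s = start
    while s <= end:
        e = min(s + SEGMENT_SIZE - 1, end)
        segments.append((s, e))
        s = e + 1
    return segments

def stream_in_order(segments, results):
    """results: mapping segment -> processed data, filled in by the workers.

    Yields segment results in order; for simplicity we assume all are ready,
    whereas the real Front Tier waits on each segment's progress updates.
    """
    for seg in segments:
        yield results[seg]
```

&lt;p&gt;For the 10,000–14,999 example above, this yields five segments of 1,000 blocks each, streamed back to the user in order.&lt;/p&gt;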

&lt;h2&gt;
  
  
  How Modules Work in Substreams
&lt;/h2&gt;

&lt;p&gt;A breakdown of the functions loadable into Substreams and how they help with scaling:&lt;/p&gt;

&lt;h3&gt;
  
  
  Module Outputs Caching
&lt;/h3&gt;

&lt;p&gt;When writing a module, it can be configured to accept the output of another, already cached module instead of raw blocks. Referencing that cached module in a request lets the server skip work that has already been done.&lt;/p&gt;

&lt;p&gt;For example, an existing module — built previously — takes blocks from the merged blocks bucket as input. Its job is to extract all Uniswap V3 events within each block. It doesn’t modify data, just filters it down, so the output is smaller than the original block data. Essentially, it contains only the Uniswap V3 events, not the entire block data.&lt;/p&gt;

&lt;p&gt;This filtered data is then stored in the Substreams Store Bucket. When writing a module, it can be specified to take another module’s output (the Uniswap V3 events) as input instead of raw blocks. The server recognizes it can pull pre-filtered data directly from the cache, saving compute resources.&lt;/p&gt;

&lt;p&gt;Since billing is based on the amount of data retrieved, accessing already filtered data from the cache not only streamlines the developer’s workflow but also reduces costs.&lt;/p&gt;
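&lt;p&gt;The caching decision described above can be sketched as follows. The field names and cache layout are hypothetical, intended only to show the shape of the logic:&lt;/p&gt;

```python
# Resolve a module's input: serve cached output of an upstream module when
# the module declares one, otherwise fall back to raw block data.
def resolve_input(module, cache, raw_blocks, block_number):
    source = module["input"]  # e.g. "raw_blocks" or an upstream module name
    if source == "raw_blocks":
        return raw_blocks[block_number]
    key = (source, block_number)
    if key in cache:
        return cache[key]  # pre-filtered data -- no recompute, less billed data
    raise KeyError(f"output of {source} for block {block_number} not cached yet")
```

&lt;p&gt;A module consuming, say, a cached Uniswap V3 events module receives only those events per block instead of full block data.&lt;/p&gt;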

&lt;h3&gt;
  
  
  Index Modules
&lt;/h3&gt;

&lt;p&gt;Index modules differ from regular ones in that they produce a standardized kind of output. For every block, they emit a list of keys — markers — that make it quick to check whether the block holds the data needed.&lt;/p&gt;

&lt;p&gt;This means the index module takes raw blocks, scans them, and builds an index showing which contracts were touched or what log topics appeared in that block.&lt;/p&gt;

&lt;h3&gt;
  
  
  How Filters Use Indexes to Cut Down Data
&lt;/h3&gt;

&lt;p&gt;For example, a module called Filtered Transactions uses the index output to narrow down blocks. The module’s manifest specifies “I want to use this index,” adding a filter like “Show me Raydium transactions.”&lt;/p&gt;

&lt;p&gt;The server pulls cached indexes, figures out which blocks contain Raydium transactions, and only sends those blocks to the Filtered Transactions module. This prevents time wasted checking every block.&lt;br&gt;
If someone already filtered Raydium transactions before, that data is likely cached. Instead of re-running the index, the filtered result can be grabbed immediately to start right away.&lt;/p&gt;
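&lt;p&gt;Put together, indexing and filtering look roughly like the sketch below. The block model and key format (&lt;code&gt;program:...&lt;/code&gt;) are assumptions chosen to echo the Raydium example, not the actual Substreams data types:&lt;/p&gt;

```python
# Build a per-block key index, then use it to skip blocks that cannot match.
def build_index(blocks):
    """blocks: dict block_number -> block. Returns block_number -> set of keys."""
    index = {}
    for number, block in blocks.items():
        keys = set()
        for tx in block["transactions"]:
            keys.add("program:" + tx["program"])
        index[number] = keys
    return index

def filtered_blocks(index, wanted_key):
    """Return only the block numbers whose index contains `wanted_key`."""
    return sorted(n for n, keys in index.items() if wanted_key in keys)
```

&lt;p&gt;The filter module then only ever receives the matching blocks, and the index itself is cached so later requests reuse it.&lt;/p&gt;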

&lt;h2&gt;
  
  
  Streaming Blockchain Indexed Data into a Database
&lt;/h2&gt;

&lt;p&gt;At this stage, the goal is to transfer all data processed by Substreams into a database. This is done via SQL Sink, an open-source tool developed by The Graph.&lt;/p&gt;

&lt;h3&gt;
  
  
  Connecting Substreams to a Database via SQL Sink
&lt;/h3&gt;

&lt;p&gt;SQL Sink connects to the Substreams server and consumes data streams. It requires data modules to emit data in a specific format that maps blockchain data to database operations. This format includes commands like insert, upsert, update, and delete along with their primary keys and associated data.&lt;/p&gt;

&lt;p&gt;This design delegates all data transformation logic to Substreams modules, enabling SQL Sink to efficiently execute database operations. Users only need to implement modules that produce data in the required format.&lt;/p&gt;
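&lt;p&gt;The shape of that mapping can be sketched as below. This is a deliberately naive illustration of commands-with-primary-keys becoming SQL; SQL Sink's actual wire format differs, and real code would use parameterized queries rather than string interpolation:&lt;/p&gt;

```python
# Translate a module-emitted database command into a SQL statement.
# op: {"kind": "insert"|"upsert"|"delete", "table": ..., "pk": ..., "fields": {...}}
def to_sql(op):
    table, pk = op["table"], op["pk"]
    if op["kind"] == "delete":
        return f"DELETE FROM {table} WHERE id = '{pk}'"
    cols = ["id"] + list(op["fields"])
    vals = [pk] + [op["fields"][c] for c in op["fields"]]
    placeholders = ", ".join(f"'{v}'" for v in vals)
    sql = f"INSERT INTO {table} ({', '.join(cols)}) VALUES ({placeholders})"
    if op["kind"] == "upsert":
        sets = ", ".join(f"{c} = EXCLUDED.{c}" for c in op["fields"])
        sql += f" ON CONFLICT (id) DO UPDATE SET {sets}"
    return sql
```

&lt;p&gt;Because modules emit these commands directly, the sink stays generic: it executes operations without knowing anything about the application's domain model.&lt;/p&gt;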

&lt;h3&gt;
  
  
  Data Processing Workflow
&lt;/h3&gt;

&lt;p&gt;SQL Sink processes database commands by distributing data across tables as defined by modules.&lt;br&gt;
To handle chain reorganizations (reorgs), every database operation is logged in a History table.&lt;/p&gt;

&lt;p&gt;When a reorg occurs, operations linked to invalid blocks are rolled back using the History table, keeping the database consistent.&lt;br&gt;
While SQL Sink currently supports basic commands (insert, upsert, update, delete), it can be forked and extended to support additional operations like increments. Users can create custom modules and handlers to translate these into SQL commands.&lt;/p&gt;
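&lt;p&gt;The reorg rollback described above can be sketched as replaying inverse operations from the history log. Structure and names here are illustrative, not SQL Sink's actual schema:&lt;/p&gt;

```python
# Roll back operations from blocks invalidated by a reorg, newest first,
# using a history log of applied operations and their undo actions.
def rollback(history, fork_block):
    """history: list of {"block": n, "undo": callable} in apply order.

    Undoes every operation from blocks after `fork_block` and returns the
    surviving history entries.
    """
    keep, invalid = [], []
    for entry in history:
        (invalid if entry["block"] > fork_block else keep).append(entry)
    for entry in reversed(invalid):  # undo in reverse apply order
        entry["undo"]()
    return keep
```

&lt;p&gt;After the rollback, indexing resumes from the fork point on the canonical chain, and the database stays consistent.&lt;/p&gt;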

&lt;p&gt;Users are not limited to SQL Sink alone; they can build custom sinks tailored to their needs using the core data streams and parallel processing provided by Substreams.&lt;/p&gt;

&lt;h3&gt;
  
  
  Comparison with Subgraphs
&lt;/h3&gt;

&lt;p&gt;Subgraphs provide a self-contained package where users supply compiled WebAssembly code defining all logic to handle events and transactions.&lt;/p&gt;

&lt;p&gt;Unlike Substreams, subgraphs do not maintain their own block storage. Instead, they query nodes directly for block data as needed, which keeps setup and deployment simple — a key advantage.&lt;/p&gt;

&lt;p&gt;However, subgraphs lack data parallelization—they must sync blocks sequentially, which can cause bottlenecks. They work well on networks like Ethereum but are less practical for high-throughput chains such as Solana.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Indexing Still Holds Back Blockchain Growth
&lt;/h2&gt;

&lt;p&gt;Despite the rise of new Layer 1 and high-performance chains, indexing infrastructure remains a major bottleneck. Many networks lack native, reliable, and scalable indexing tools.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;This results in significant challenges:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Accessing blockchain data at scale remains complex.&lt;/li&gt;
&lt;li&gt;Developers spend time and resources on infrastructure rather than app development.&lt;/li&gt;
&lt;li&gt;Protocol teams repeatedly solve the same indexing problems.&lt;/li&gt;
&lt;li&gt;Poor indexing slows adoption by making new networks harder to build on.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Substreams is designed as a high-throughput data indexing framework enabling blockchains to natively provide production-grade data infrastructure.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key benefits include:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Real-time and historical blockchain data streaming.&lt;/li&gt;
&lt;li&gt;Cursor-based access enables parallel processing.&lt;/li&gt;
&lt;li&gt;A modular architecture allowing developers to write custom filters.&lt;/li&gt;
&lt;li&gt;Caching and deduplication that reduce costs and improve performance.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;By integrating Substreams, blockchains can provide developers with efficient, structured, and streamable access to blockchain data without sacrificing scalability.&lt;/p&gt;

&lt;h2&gt;
  
  
  About Rock’n’Block
&lt;/h2&gt;

&lt;p&gt;Rock’n’Block is a Web3-native development company. We build backend infrastructure and indexing pipelines for projects and protocols across multiple blockchain ecosystems.&lt;br&gt;
Our work spans real-time and historical data processing, with production-ready systems tailored to handle high throughput and complex queries.&lt;br&gt;
&lt;strong&gt;Focus areas:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Firehose, Substreams, and custom indexing pipelines&lt;/li&gt;
&lt;li&gt;EVM chains, Solana, TON&lt;/li&gt;
&lt;li&gt;Scalable architecture for developer tooling and dApp infrastructure&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Case study:&lt;/strong&gt; How we built a blockchain data streaming service for Blum → &lt;a href="https://rocknblock.io/portfolio/blum" rel="noopener noreferrer"&gt;https://rocknblock.io/portfolio/blum&lt;/a&gt;&lt;br&gt;
We’ve contributed to over 300 projects that collectively reached 71M+ users, raised $160M+, and hit $2.4B+ in peak market cap. Our role is to handle the backend complexity so teams can move faster and ship with confidence.&lt;/p&gt;

</description>
      <category>blockchain</category>
      <category>web3</category>
      <category>webdev</category>
      <category>postgres</category>
    </item>
  </channel>
</rss>
