<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Satori Geeks</title>
    <description>The latest articles on Forem by Satori Geeks (@satorigeeks).</description>
    <link>https://forem.com/satorigeeks</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3842144%2F9173053c-d1d9-4b41-8bb6-080b7771329a.png</url>
      <title>Forem: Satori Geeks</title>
      <link>https://forem.com/satorigeeks</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/satorigeeks"/>
    <language>en</language>
    <item>
      <title>Why TON beat the highest-scoring candidate</title>
      <dc:creator>Satori Geeks</dc:creator>
      <pubDate>Mon, 27 Apr 2026 09:39:41 +0000</pubDate>
      <link>https://forem.com/satorigeeks/why-ton-beat-the-highest-scoring-candidate-464a</link>
      <guid>https://forem.com/satorigeeks/why-ton-beat-the-highest-scoring-candidate-464a</guid>
      <description>&lt;p&gt;Weeks 1–4 each had a defined category. Base was the EVM L2 week. Scroll was ZK. Core DAO was EVM-on-Bitcoin. Solana was non-EVM mainstream. Every week, the category defined the field. The question was never "what kind of chain?" It was "which chain best fits this type?" Clear axis, tractable problem.&lt;/p&gt;

&lt;p&gt;Week 5 is the Wildcard slot. The brief from day one: the unknown, what's at the frontier right now? No category. No axis. A wide field and an open question, and it turns out the open question is harder to answer than the constrained one.&lt;/p&gt;

&lt;h2&gt;
  
  
  The field
&lt;/h2&gt;

&lt;p&gt;Six candidates made the shortlist: Monad, TON, AO on Arweave, Fuel Network, Starknet, Stacks. One parallel-EVM (Monad) and one ZK-VM (Starknet — Cairo makes ZK explicit in the language itself, unlike Scroll where it's invisible plumbing). The most exotic option: AO, which runs Lua on Arweave's permanent log storage. Fuel uses strict VM parallelism via access lists in a UTXO model, with Sway as a Rust-inspired contract language. And TON — Telegram-native, actor-model VM, 900 million monthly active Telegram users already inside the access layer.&lt;/p&gt;

&lt;h2&gt;
  
  
  Two eliminations worth naming
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;AO on Arweave&lt;/strong&gt; scored 27/60. That number undersells it. Lua as the contract language, running on Arweave's permanent log storage — the Holographic State model: the state itself is never stored, only the process logs, and anyone can replay them to derive the current state. An actor-oriented computation model that makes TON look almost conventional. I genuinely wanted to build on it. The problem: no local test framework, documentation that runs dry fast, sparse production examples.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Stacks (Clarity)&lt;/strong&gt; was the harder cut. Clarity is interpreted, not compiled — which is what makes it fully decidable: no infinite loops, no reentrancy by design. The source publishes on-chain rather than bytecode, so there's nothing to exploit at the compiler level. Genuinely interesting language. But Stacks settles on Bitcoin. &lt;a href="https://dev.to/satorigeeks/the-most-exotic-consensus-in-the-series-the-most-vanilla-build-1pf9"&gt;Week 3 was Core DAO&lt;/a&gt;, which also settles on Bitcoin (different consensus model, same anchor). Two Bitcoin-settled chapters in five weeks is one too many.&lt;/p&gt;

&lt;h2&gt;
  
  
  The scored finalists
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Chain&lt;/th&gt;
&lt;th&gt;Language&lt;/th&gt;
&lt;th&gt;Est. Score&lt;/th&gt;
&lt;th&gt;One-line verdict&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Monad&lt;/td&gt;
&lt;td&gt;Solidity (EVM)&lt;/td&gt;
&lt;td&gt;54/60&lt;/td&gt;
&lt;td&gt;Parallel EVM — best rubric ceiling, fourth EVM week&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Starknet&lt;/td&gt;
&lt;td&gt;Cairo&lt;/td&gt;
&lt;td&gt;43.5/60&lt;/td&gt;
&lt;td&gt;ZK in the language, not just the plumbing — strong alt&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;TON&lt;/td&gt;
&lt;td&gt;Tact&lt;/td&gt;
&lt;td&gt;43/60&lt;/td&gt;
&lt;td&gt;Actor model + Telegram-native + language new to the series&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Fuel Network&lt;/td&gt;
&lt;td&gt;Sway&lt;/td&gt;
&lt;td&gt;40/60&lt;/td&gt;
&lt;td&gt;UTXO-parallel architecture, thin ecosystem&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Stacks&lt;/td&gt;
&lt;td&gt;Clarity&lt;/td&gt;
&lt;td&gt;39/60&lt;/td&gt;
&lt;td&gt;Interesting language, too much W3 overlap&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;AO (Arweave)&lt;/td&gt;
&lt;td&gt;Lua&lt;/td&gt;
&lt;td&gt;27/60&lt;/td&gt;
&lt;td&gt;Best story, tooling not build-week-ready&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Full methodology: &lt;a href="https://dev.to/satorigeeks/how-im-scoring-the-chains-clc"&gt;How I'm Scoring the Chains&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why TON, and why not Monad
&lt;/h2&gt;

&lt;p&gt;Monad scored 54/60 — the highest estimate of any candidate in this series. It would have been the easiest build by a long way: same Solidity contract as weeks 1–3, Foundry, MetaMask. The entire tooling delta is adding one RPC URL to the wallet config. The actual frontier story with Monad is optimistic parallel execution and MonadDb — state-heavy apps can see real throughput gains from that architecture. The tip jar doesn't come close to stress-testing it. There's also a practical catch: Monad is still in gated devnet as of this week, so the real friction wouldn't have been the code at all — it would have been getting network access and a working faucet.&lt;/p&gt;

&lt;p&gt;That aside, it still didn't win.&lt;/p&gt;

&lt;p&gt;Using the Wildcard slot to ship a fourth Solidity week would have answered "what's at the frontier?" with "what you already know." Reasonable answer most weeks. Not for the finale.&lt;/p&gt;

&lt;p&gt;Starknet at 43.5/60 was the strongest alternative. Those 0.5 points over TON's 43 come from tooling: Starknet Foundry and Scarb are genuinely mature, closer in feel to the Foundry experience than Blueprint is. TON's score is lower because the tooling is younger and the actor model has a steeper ramp than Cairo does. But this is a Wildcard pick, and that's a narrative call as much as a DX one. TON has things this series hasn't touched: an actor-model VM where every contract is an isolated process and all cross-contract communication goes through async messages — not a pattern layered on top of shared EVM state, a different execution model from the ground up. A language (Tact) that looks like TypeScript and thinks like Erlang. And the chain is literally embedded as the payment layer in a messaging app 900 million people already have on their phones.&lt;/p&gt;

&lt;p&gt;Starknet's rubric ceiling is 0.5 points higher. TON's story ceiling isn't close.&lt;/p&gt;

&lt;h2&gt;
  
  
  What this means for the build
&lt;/h2&gt;

&lt;p&gt;TON isn't a safe pick. The 43/60 estimate is a promise of friction: the async message-passing model is a genuine mental shift from Solidity; TON's Bag of Cells (BoC) architecture stores state as a tree of cells rather than EVM's flat storage slots, which makes cell size and reference limits a real constraint when storing arbitrary-length strings; and contract verification on verifier.ton.org is more manual than &lt;code&gt;forge verify-contract&lt;/code&gt;. If something blocks mid-week, Starknet at 43.5/60 is the fallback — solid tooling, Cairo is genuinely novel, ZK-in-the-language is different enough from Scroll. But the aim is TON.&lt;/p&gt;
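
&lt;p&gt;To make the cell constraint concrete, here is a minimal sketch assuming the standard TON limits of 1023 data bits (roughly 127 usable bytes) and 4 references per cell; &lt;code&gt;chunk_for_cells&lt;/code&gt; is an illustrative helper, not part of Tact or any TON SDK:&lt;/p&gt;

```python
# Illustrative only: a long string cannot fit in one TON cell, so it is
# split across a chain of cells (the "snake" layout). Limit assumed here:
# 1023 data bits per cell, i.e. 127 whole bytes.
CELL_DATA_BYTES = 127

def chunk_for_cells(message):
    raw = message.encode("utf-8")
    # each slice becomes the data of one cell; cells link via a reference
    return [raw[i:i + CELL_DATA_BYTES]
            for i in range(0, len(raw), CELL_DATA_BYTES)]
```

&lt;p&gt;A 300-byte message spans three chained cells, and each hop costs a reference slot. That is the kind of limit that turns "store a string" into a design decision.&lt;/p&gt;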




&lt;p&gt;→ The live app is at &lt;a href="https://proof-of-support.pages.dev"&gt;https://proof-of-support.pages.dev&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;→ Scoring methodology for the series: &lt;a href="https://dev.to/satorigeeks/how-im-scoring-the-chains-clc"&gt;How I'm Scoring the Chains&lt;/a&gt;&lt;/p&gt;

</description>
      <category>blockchain</category>
      <category>buildinpublic</category>
      <category>smartcontract</category>
      <category>web3</category>
    </item>
    <item>
      <title>Week 4: Solana — the view from outside the EVM</title>
      <dc:creator>Satori Geeks</dc:creator>
      <pubDate>Thu, 23 Apr 2026 13:33:30 +0000</pubDate>
      <link>https://forem.com/satorigeeks/week-4-solana-the-view-from-outside-the-evm-46cp</link>
      <guid>https://forem.com/satorigeeks/week-4-solana-the-view-from-outside-the-evm-46cp</guid>
      <description>&lt;p&gt;After three weeks of Solidity — Base, Scroll, Core DAO — week four was the one I kept putting off mentally. Same app, same interface spec. But the contract language changes. The runtime model changes. Where state lives changes. I went in expecting a hard week. It was hard. Also weirder than expected in specific ways.&lt;/p&gt;

&lt;h2&gt;
  
  
  The cost shock
&lt;/h2&gt;

&lt;p&gt;Deploy cost hit first. The full story is in the &lt;a href="https://dev.to/satorigeeks/anchor-vs-pinocchio-the-real-deploy-cost-4ihm"&gt;Anchor vs Pinocchio piece&lt;/a&gt;. Short version: I measured the Anchor binary before pushing it to mainnet. 230 KB. At current SOL prices that's $141. On Base or Scroll, deploy costs a dollar or two and you never think about it again.&lt;/p&gt;

&lt;p&gt;I rewrote the program in Pinocchio. Four hours, 30 KB, $18 actual deploy cost. Worth it, though I hadn't planned to spend day two on a rewrite.&lt;/p&gt;

&lt;h2&gt;
  
  
  The mental model shift
&lt;/h2&gt;

&lt;p&gt;On EVM, state lives in the contract. Mappings, arrays, structs — all inside, implicit. Solana programs are stateless. State lives in separate accounts the program owns. For the tip jar that's three accounts: a &lt;a href="https://solscan.io/account/HEgx3idjyhXfN1CriqTJe4AP1JZVqUhBKBGK5boh3tnn" rel="noopener noreferrer"&gt;MessageBoard PDA&lt;/a&gt; (global state), a &lt;a href="https://solscan.io/account/8SPGaei4h6KrgtMCwECogpQ9T4LWSS1dT9Wu1jt6VDgj" rel="noopener noreferrer"&gt;Vault PDA&lt;/a&gt; (accumulated SOL), and one &lt;a href="https://solscan.io/account/8GH17qSvdMCfA5siu63YtP6C3RYXrg42tFAm2sNu8aNB" rel="noopener noreferrer"&gt;Support PDA&lt;/a&gt; per message.&lt;/p&gt;

&lt;p&gt;Every instruction has to declare explicitly which accounts it touches, before execution — and whether each account is read-only or writable. Sealevel uses that metadata to schedule transactions in parallel: two transactions can run concurrently if they don't share any writable accounts. Touch the same writable account and they're serialised. The verbose account declarations aren't bureaucratic overhead; they're the scheduling contract.&lt;/p&gt;
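
&lt;p&gt;The scheduling rule fits in a few lines. This is a toy model of the lock check, not Sealevel's actual implementation; the account sets stand in for a transaction's declared accounts:&lt;/p&gt;

```python
# Toy model of Sealevel's lock rule: a write lock on an account conflicts
# with any other access to that account; read locks are shared.
def can_run_in_parallel(tx_a, tx_b):
    a_touched = tx_a["writable"].union(tx_a["readonly"])
    b_touched = tx_b["writable"].union(tx_b["readonly"])
    # parallel only if neither tx writes an account the other touches
    return (tx_a["writable"].isdisjoint(b_touched)
            and tx_b["writable"].isdisjoint(a_touched))

# two tips both credit the Vault PDA, so they share a writable account
tip_alice = {"writable": {"vault", "support_pda_7"}, "readonly": {"board"}}
tip_bob = {"writable": {"vault", "support_pda_8"}, "readonly": {"board"}}
```

&lt;p&gt;Two simultaneous tips both write the vault, so they serialise; a read-only query of the board runs alongside either of them.&lt;/p&gt;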

&lt;p&gt;Rent is the other non-EVM thing. Every account holds a SOL deposit proportional to its byte size — a "rent-exempt minimum" it has to maintain to survive. Each message creates a 387-byte Support PDA. The sender pays ~0.00358 SOL for that deposit. At $86/SOL that's $0.31. The transaction fee itself is $0.0004. Worth flagging: unlike EVM storage gas, which is spent and gone, this SOL is a deposit — if the program closes the account it goes back to a specified address. In this implementation, Support PDAs are never closed, so in practice it's permanent. But that's a program design choice, not a chain constraint.&lt;/p&gt;
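
&lt;p&gt;The deposit arithmetic is simple enough to sketch. The constants below are Solana's default rent parameters as I understand them (3,480 lamports per byte-year, a two-year exemption threshold, 128 bytes of per-account storage overhead); treat the dollar figure as price-dependent:&lt;/p&gt;

```python
# Rent-exempt minimum for the 387-byte Support PDA, using Solana's
# default rent constants (approximate; the deposit returns if the
# account is ever closed).
LAMPORTS_PER_BYTE_YEAR = 3_480
EXEMPTION_YEARS = 2
OVERHEAD_BYTES = 128
LAMPORTS_PER_SOL = 1_000_000_000

def rent_exempt_lamports(data_bytes):
    return (data_bytes + OVERHEAD_BYTES) * LAMPORTS_PER_BYTE_YEAR * EXEMPTION_YEARS

deposit = rent_exempt_lamports(387) / LAMPORTS_PER_SOL
print(round(deposit, 5))       # 0.00358 SOL
print(round(deposit * 86, 2))  # 0.31 dollars at $86/SOL
```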

&lt;h2&gt;
  
  
  The thing I didn't see coming
&lt;/h2&gt;

&lt;p&gt;Public Solana RPC nodes disable &lt;code&gt;getProgramAccounts&lt;/code&gt;. HTTP 410.&lt;/p&gt;

&lt;p&gt;I wrote &lt;code&gt;getMessages()&lt;/code&gt; with &lt;code&gt;getProgramAccounts&lt;/code&gt; — that's what every tutorial uses. Tested on devnet: "Could not load messages." The reason &lt;code&gt;getProgramAccounts&lt;/code&gt; is disabled isn't arbitrary rate-limiting — it requires a full scan of the account database, which is expensive enough that most public RPCs (including Helius and Triton's free tiers) simply don't offer it. The fix took an hour once I understood the problem: fetch &lt;code&gt;message_count&lt;/code&gt; from the MessageBoard PDA, derive each Support PDA address by index, batch-fetch with &lt;code&gt;getMultipleAccountsInfo&lt;/code&gt;. That works for a small program — &lt;code&gt;getMultipleAccountsInfo&lt;/code&gt; caps at 100 accounts per call, so it scales fine for a demo. For production with high message volume you'd want a proper indexer: Helius's Digital Asset Standard API or something like Subsquid. But you never see this on a local validator. You only find out at smoke test on a public endpoint.&lt;/p&gt;
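
&lt;p&gt;The shape of the fix, minus the network calls: &lt;code&gt;batch_indices&lt;/code&gt; is a stand-in for the loop that derives each Support PDA by index and groups the addresses for &lt;code&gt;getMultipleAccountsInfo&lt;/code&gt;, respecting the 100-account cap mentioned above. The PDA derivation itself is omitted here:&lt;/p&gt;

```python
# Stand-in for the fallback read path: given message_count from the
# MessageBoard PDA, produce index batches sized for the 100-account
# cap of getMultipleAccountsInfo.
BATCH_LIMIT = 100

def batch_indices(message_count, batch_size=BATCH_LIMIT):
    return [list(range(start, min(start + batch_size, message_count)))
            for start in range(0, message_count, batch_size)]
```

&lt;p&gt;Each batch then maps to one RPC call. Fine for a demo; past a few thousand messages you want the indexer instead.&lt;/p&gt;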

&lt;h2&gt;
  
  
  Rubric scores
&lt;/h2&gt;

&lt;p&gt;&lt;em&gt;(Full methodology: &lt;a href="https://paragraph.com/@0x5f7bd072eadeb2c18f2ada5a0c5b125423a1ea36/how-im-scoring-the-chains" rel="noopener noreferrer"&gt;How I'm scoring the chains&lt;/a&gt;)&lt;/em&gt;&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Dimension&lt;/th&gt;
&lt;th&gt;Estimated&lt;/th&gt;
&lt;th&gt;Actual&lt;/th&gt;
&lt;th&gt;Delta&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Getting Started&lt;/td&gt;
&lt;td&gt;4&lt;/td&gt;
&lt;td&gt;4&lt;/td&gt;
&lt;td&gt;—&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Developer Tooling&lt;/td&gt;
&lt;td&gt;4&lt;/td&gt;
&lt;td&gt;4&lt;/td&gt;
&lt;td&gt;—&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Contract Authoring&lt;/td&gt;
&lt;td&gt;3&lt;/td&gt;
&lt;td&gt;3&lt;/td&gt;
&lt;td&gt;—&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Documentation Quality&lt;/td&gt;
&lt;td&gt;4&lt;/td&gt;
&lt;td&gt;3&lt;/td&gt;
&lt;td&gt;−1&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Frontend / Wallet&lt;/td&gt;
&lt;td&gt;5&lt;/td&gt;
&lt;td&gt;3&lt;/td&gt;
&lt;td&gt;−2&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Deployment Experience&lt;/td&gt;
&lt;td&gt;4&lt;/td&gt;
&lt;td&gt;4&lt;/td&gt;
&lt;td&gt;—&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Transaction Cost&lt;/td&gt;
&lt;td&gt;5&lt;/td&gt;
&lt;td&gt;3&lt;/td&gt;
&lt;td&gt;−2&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Community &amp;amp; Ecosystem&lt;/td&gt;
&lt;td&gt;5&lt;/td&gt;
&lt;td&gt;4&lt;/td&gt;
&lt;td&gt;−1&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Weighted score: 41.5 / 60&lt;/strong&gt; (estimated: 50 / 60)&lt;/p&gt;

&lt;p&gt;The two big drops: Frontend/Wallet and Transaction Cost. Both looked smooth on paper; both had a gotcha that only showed up in practice.&lt;/p&gt;

&lt;h2&gt;
  
  
  Verdict
&lt;/h2&gt;

&lt;p&gt;41.5/60. Solana lands in "Good — solid DX with manageable rough edges," and I'm not going to argue with that. Anchor has real depth behind it — the test loop is as good as Foundry, the community answered almost every blocker I hit. Pinocchio is worth knowing about when you care about binary size, though it's not where you'd start.&lt;/p&gt;

&lt;p&gt;The per-message SOL outlay came in at $0.31–0.54 (rent deposit on the Support PDA, not the tx fee — technically recoverable if the program closes the account, but this one doesn't). The frontend had a public RPC gotcha that nothing in the docs adequately flags. Both are workable. Neither is what the tutorials show.&lt;/p&gt;

&lt;p&gt;One gap the rubric doesn't capture: the developer's one-time deploy cost. On EVM it's irrelevant. On Solana, before the Pinocchio rewrite, it was $141. Full breakdown in the &lt;a href="https://dev.to/satorigeeks/anchor-vs-pinocchio-the-real-deploy-cost-4ihm"&gt;Pinocchio article&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;A week out, I'd build on Solana again. With more SOL in the wallet and &lt;code&gt;getProgramAccounts&lt;/code&gt; crossed off the list of things I'd reach for.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Building on five chains back to back, what I keep noticing is that the friction is never where the docs say it will be.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;→ The live app is at &lt;a href="https://proof-of-support.pages.dev"&gt;https://proof-of-support.pages.dev&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;→ Scoring methodology for the series: &lt;a href="https://dev.to/satorigeeks/how-im-scoring-the-chains-clc"&gt;How I'm Scoring the Chains&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;→ &lt;a href="https://explorer.solana.com/address/CGKRufiP231MMh4QGGCcViH4NWb5wD39NgxJP8ymaiKb" rel="noopener noreferrer"&gt;Program on explorer&lt;/a&gt;&lt;/p&gt;

</description>
      <category>solana</category>
      <category>rust</category>
      <category>retrospective</category>
      <category>buildinpublic</category>
    </item>
    <item>
      <title>Anchor vs Pinocchio: the real deploy cost</title>
      <dc:creator>Satori Geeks</dc:creator>
      <pubDate>Wed, 22 Apr 2026 21:23:53 +0000</pubDate>
      <link>https://forem.com/satorigeeks/anchor-vs-pinocchio-the-real-deploy-cost-4ihm</link>
      <guid>https://forem.com/satorigeeks/anchor-vs-pinocchio-the-real-deploy-cost-4ihm</guid>
      <description>&lt;p&gt;The Solana mainnet deploy came in at $141. For a demo project. I stopped, recalculated, and stared at the number for a minute.&lt;/p&gt;

&lt;p&gt;I'd built the tip jar in Anchor 0.32.1 — the standard framework, the one with real docs, the one everyone reaches for first. It worked. Seven integration tests passed. Then I checked the binary before committing 1.6 SOL to mainnet: 230 KB.&lt;/p&gt;

&lt;h2&gt;
  
  
  Three options, one decision
&lt;/h2&gt;

&lt;p&gt;On Solana, accounts need to hold enough SOL to cover two years of rent to become rent-exempt, otherwise the runtime can garbage-collect them. For a program account, that's about 6,960 lamports per byte. At ~$86/SOL, 230 KB works out to ~1.64 SOL — $141.&lt;/p&gt;
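
&lt;p&gt;A quick sketch of that arithmetic, using the same approximate constants (6,960 lamports per byte over two years, plus 128 bytes of account overhead, at $86/SOL):&lt;/p&gt;

```python
# Program-account rent for the 230 KB Anchor binary, with the constants
# used in this post (approximate and price-dependent).
LAMPORTS_PER_BYTE = 6_960   # two years of rent per byte
OVERHEAD_BYTES = 128
LAMPORTS_PER_SOL = 1_000_000_000
SOL_PRICE_USD = 86

def deploy_rent_sol(binary_kb):
    lamports = (binary_kb * 1024 + OVERHEAD_BYTES) * LAMPORTS_PER_BYTE
    return lamports / LAMPORTS_PER_SOL

print(round(deploy_rent_sol(230), 2))               # 1.64 SOL
print(round(deploy_rent_sol(230) * SOL_PRICE_USD))  # 141 dollars
```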

&lt;p&gt;You can close a Solana program later and reclaim that SOL to a recipient wallet. So it's not technically unrecoverable. But "recoverable in theory" doesn't change what you need in your wallet before you can deploy.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Option&lt;/th&gt;
&lt;th&gt;Binary size&lt;/th&gt;
&lt;th&gt;Approx. cost&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Anchor 0.32.1, unoptimised&lt;/td&gt;
&lt;td&gt;230 KB&lt;/td&gt;
&lt;td&gt;~$141&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Anchor + &lt;code&gt;opt-level = "z"&lt;/code&gt;, &lt;code&gt;lto = true&lt;/code&gt;, &lt;code&gt;strip = "symbols"&lt;/code&gt;
&lt;/td&gt;
&lt;td&gt;~90–100 KB&lt;/td&gt;
&lt;td&gt;~$60&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Pinocchio rewrite&lt;/td&gt;
&lt;td&gt;15–20 KB (estimated)&lt;/td&gt;
&lt;td&gt;~$9&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Anchor with size flags was 15 minutes of work. Those flags strip symbols and merge compilation units — they don't touch Anchor's macro overhead. There's a floor, and $60 is still a lot for a demo. I went with Pinocchio.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Pinocchio actually is
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://github.com/anza-xyz/pinocchio" rel="noopener noreferrer"&gt;Pinocchio&lt;/a&gt; is a zero-dependency Rust framework for Solana programs. No macros, no IDL, no borsh. You write the discriminator logic, the account parsing, the PDA derivation — all of it.&lt;/p&gt;

&lt;p&gt;Where Anchor's &lt;code&gt;#[derive(Accounts)]&lt;/code&gt; generates validation from a handful of attributes with a lot happening out of sight, Pinocchio puts every check in explicit byte-offset reads. Nothing is invisible. The price for that is verbosity. The reason to pay it is binary size.&lt;/p&gt;

&lt;h2&gt;
  
  
  The rewrite
&lt;/h2&gt;

&lt;p&gt;Four hours. Three instructions (initialize, send_support, withdraw) and three account types (MessageBoard PDA, Vault PDA, per-message Support PDA). I wrote the byte layout manually, encoded fields by hand, checked boundaries explicitly.&lt;/p&gt;

&lt;p&gt;The 7/7 tests passed at the end. Worth noting: since Pinocchio generates no IDL, the TypeScript tests couldn't use Anchor's &lt;code&gt;program.methods.sendSupport().rpc()&lt;/code&gt; style. Every instruction had to be built as a raw &lt;code&gt;TransactionInstruction&lt;/code&gt; — manual &lt;code&gt;Buffer.concat&lt;/code&gt;, explicit account list, correct discriminator byte. So the actual scope of the work was the Rust rewrite plus rewriting the test client from scratch.&lt;/p&gt;
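
&lt;p&gt;For flavour, here is the shape of that hand-rolled encoding, sketched in Python rather than the TypeScript client. The layout is assumed for illustration (a 1-byte discriminator, a little-endian u64 amount, a length-prefixed UTF-8 message); the field order and discriminator value are not the program's actual wire format:&lt;/p&gt;

```python
# Illustrative wire format for a send_support instruction; the real
# client built this kind of buffer with Buffer.concat in TypeScript.
SEND_SUPPORT = 1  # hypothetical discriminator byte

def encode_send_support(lamports, message):
    raw = message.encode("utf-8")
    data = bytes([SEND_SUPPORT])            # u8 instruction discriminator
    data += lamports.to_bytes(8, "little")  # u64 tip amount
    data += len(raw).to_bytes(4, "little")  # u32 string length prefix
    return data + raw
```

&lt;p&gt;Get one byte offset wrong and the program rejects the instruction with an error that tells you very little. That is the trade against Anchor's generated client.&lt;/p&gt;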

&lt;p&gt;Tests ran against a live &lt;code&gt;solana-test-validator&lt;/code&gt; with the Pinocchio binary pre-loaded. No Anchor runner.&lt;/p&gt;

&lt;p&gt;Binary: &lt;strong&gt;30 KB&lt;/strong&gt;. Above the 15–20 KB estimate — the manual validation boilerplate adds code that Anchor macros would have inlined. Still 87% smaller than the Anchor build.&lt;/p&gt;

&lt;p&gt;Actual mainnet deploy: 0.214 SOL. ~$18. Within 1% of the rent calculation I'd done upfront.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Build&lt;/th&gt;
&lt;th&gt;Binary size&lt;/th&gt;
&lt;th&gt;Deploy cost&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Anchor 0.32.1&lt;/td&gt;
&lt;td&gt;230 KB&lt;/td&gt;
&lt;td&gt;~$141 (never deployed to mainnet)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Pinocchio 0.9 — estimated&lt;/td&gt;
&lt;td&gt;~15–20 KB&lt;/td&gt;
&lt;td&gt;~$9&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Pinocchio 0.9 — actual&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;30 KB&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;$18&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  The rubric gap
&lt;/h2&gt;

&lt;p&gt;My &lt;a href="https://dev.to/satorigeeks/how-im-scoring-the-chains-clc"&gt;scoring rubric&lt;/a&gt; has a dimension for per-message transaction cost (D7) and one for deployment experience (D6). There's no dimension for the developer's one-time deploy cost.&lt;/p&gt;

&lt;p&gt;On EVM weeks — Base, Scroll — this never came up. Both cost cents. On Solana, after the optimisation, it was $18. Before: $141. That's a real number with no rubric home. Worth flagging, because a methodology that doesn't measure something can't be honest about it.&lt;/p&gt;

</description>
      <category>solana</category>
      <category>rust</category>
      <category>web3</category>
      <category>buildinpublic</category>
    </item>
    <item>
      <title>Eight Non-EVM Chains, One Week — Why Solana Won</title>
      <dc:creator>Satori Geeks</dc:creator>
      <pubDate>Mon, 20 Apr 2026 21:07:55 +0000</pubDate>
      <link>https://forem.com/satorigeeks/eight-non-evm-chains-one-week-why-solana-won-4o5a</link>
      <guid>https://forem.com/satorigeeks/eight-non-evm-chains-one-week-why-solana-won-4o5a</guid>
      <description>&lt;p&gt;Week four was always going to be the hard one.&lt;/p&gt;

&lt;p&gt;Weeks one through three: EVM. Different platforms — Base, Scroll, Core DAO — different security profiles, different deploy quirks. Same language. Solidity, all the way down. Week four is the first time the contract language changes. The question isn't "which non-EVM chain is active?" (most of them are, more or less). It's whether I can actually learn one in a week, starting from Rust-curious-but-not-fluent.&lt;/p&gt;

&lt;h2&gt;
  
  
  The field
&lt;/h2&gt;

&lt;p&gt;The brief started with five candidates. Three more landed when I started digging: Fuel Network (Sway), Algorand (Python), Radix (Scrypto). Altogether: Rust-based languages, a Python chain, a Move chain, a Haskell-inspired language, and one chain where Rust contracts were recently abandoned.&lt;/p&gt;

&lt;h2&gt;
  
  
  Two eliminations worth naming
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Polkadot / ink!:&lt;/strong&gt; ink! development and maintenance ended in January 2026. Funding ran out; v5 is the last release. Polkadot is pivoting toward PolkaVM and JAM (Join-Accumulate Machine), but neither is production-ready yet. If a tutorial still lists ink! as a viable target, it's pointing at a dead end. Good to know before you spend a day on toolchain setup.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Cardano / Aiken:&lt;/strong&gt; Lowest score in the comparison — 34.5/60. Not because Aiken is bad tooling. It isn't. The issue is eUTxO. There is no &lt;code&gt;mapping(address =&amp;gt; Message[])&lt;/code&gt;. Messages would be UTxOs carrying datums, locked at a script address, constructed entirely off-chain. A one-week build on Cardano isn't about shipping features — it's a week-long wrestling match with a new mental model. Wrong scope for a sprint.&lt;/p&gt;

&lt;h2&gt;
  
  
  The scored finalists
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Chain&lt;/th&gt;
&lt;th&gt;Language&lt;/th&gt;
&lt;th&gt;Score&lt;/th&gt;
&lt;th&gt;Verdict&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Solana&lt;/td&gt;
&lt;td&gt;Rust + Anchor&lt;/td&gt;
&lt;td&gt;50/60&lt;/td&gt;
&lt;td&gt;Deepest community, best frontend story, account model is the cliff&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;NEAR Protocol&lt;/td&gt;
&lt;td&gt;Rust or TypeScript&lt;/td&gt;
&lt;td&gt;49/60&lt;/td&gt;
&lt;td&gt;JS contracts make it approachable; wallet deprecation is a gotcha&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Algorand&lt;/td&gt;
&lt;td&gt;Python (Puya/AVM)&lt;/td&gt;
&lt;td&gt;46/60&lt;/td&gt;
&lt;td&gt;Most learnable language, wallet ecosystem is too thin&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Aptos&lt;/td&gt;
&lt;td&gt;Move&lt;/td&gt;
&lt;td&gt;46/60&lt;/td&gt;
&lt;td&gt;Polished tooling, resource model shift smaller than expected&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Fuel Network&lt;/td&gt;
&lt;td&gt;Sway&lt;/td&gt;
&lt;td&gt;39/60&lt;/td&gt;
&lt;td&gt;Interesting architecture, young ecosystem, one wallet option&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Polkadot / ink!&lt;/td&gt;
&lt;td&gt;Rust&lt;/td&gt;
&lt;td&gt;35/60&lt;/td&gt;
&lt;td&gt;⚠ Maintenance ended Jan 2026 — not a safe pick&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Radix&lt;/td&gt;
&lt;td&gt;Scrypto&lt;/td&gt;
&lt;td&gt;35/60&lt;/td&gt;
&lt;td&gt;Foundation transition in 2026 adds uncertainty&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Cardano&lt;/td&gt;
&lt;td&gt;Aiken&lt;/td&gt;
&lt;td&gt;34.5/60&lt;/td&gt;
&lt;td&gt;eUTxO model is a week unto itself&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  Why Solana, and why not NEAR
&lt;/h2&gt;

&lt;p&gt;One point between them. NEAR had something real going for it: JS contracts. With &lt;code&gt;near-sdk-js&lt;/code&gt;, a Solidity developer writes a NEAR contract in TypeScript and deploys same-day. No Rust. I almost picked it on that basis alone. One caveat worth flagging: &lt;code&gt;near-sdk-js&lt;/code&gt; has historically been seen as less production-ready than the Rust SDK — higher gas costs, fewer security audits at scale. For a one-week experiment that's manageable. For production, it matters.&lt;/p&gt;

&lt;p&gt;What actually moved the needle was community depth. Not the kind that shows up in a score — the kind that matters at 11pm when something breaks and you need an answer now. Solana has a Helius blog post for the exact error. A StackOverflow thread. Someone in the Anchor Discord hit this three weeks ago and documented the fix. NEAR is active, just smaller. That gap hits at the worst moment.&lt;/p&gt;

&lt;p&gt;The other NEAR problem: wallet fragmentation. The original wallet (wallet.near.org) is deprecated. The replacement is scattered across Meteor, MyNEAR, HERE Wallet. You run into that friction during connect-wallet testing — not a good time.&lt;/p&gt;

&lt;p&gt;The account model — PDAs, explicit account declarations, rent — is the steepest part. It exists for a reason: Solana's runtime (Sealevel) processes non-conflicting transactions in parallel. Unlike the EVM's single-threaded execution, Sealevel needs to know upfront what each transaction will touch — that's why every account must be declared explicitly. The friction is the feature. Learnable-hard, not paradigm-shift hard. And it's the most transferable thing you take out of the week.&lt;/p&gt;

&lt;h2&gt;
  
  
  The fallback
&lt;/h2&gt;

&lt;p&gt;If Solana hit a wall mid-build — toolchain drift, account model slower than expected — NEAR with JS contracts was the escape hatch. I wrote that down before starting. One-week builds need a named fallback, not a vague "we'll figure it out."&lt;/p&gt;




&lt;p&gt;→ The live app is at &lt;a href="https://proof-of-support.pages.dev" rel="noopener noreferrer"&gt;https://proof-of-support.pages.dev&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;→ Scoring methodology for the series: &lt;a href="https://dev.to/satorigeeks/how-im-scoring-the-chains-clc"&gt;How I'm Scoring the Chains&lt;/a&gt;&lt;/p&gt;

</description>
      <category>blockchain</category>
      <category>solana</category>
      <category>rust</category>
      <category>web3</category>
    </item>
    <item>
      <title>Lean context architecture for multi-agent pipelines</title>
      <dc:creator>Satori Geeks</dc:creator>
      <pubDate>Thu, 16 Apr 2026 19:01:25 +0000</pubDate>
      <link>https://forem.com/satorigeeks/lean-context-architecture-for-multi-agent-pipelines-4a81</link>
      <guid>https://forem.com/satorigeeks/lean-context-architecture-for-multi-agent-pipelines-4a81</guid>
      <description>&lt;p&gt;In a second week of &lt;a href="https://dev.to/satorigeeks/series/37587"&gt;my project&lt;/a&gt; I noticed my agents were getting slower and dumber. When I checked the logs, I realized why: every single agent was loading the entire project history just to write one line of copy.&lt;/p&gt;

&lt;p&gt;That context file was 254 lines, around 3,400 tokens. Then there was a separate AGENTS.md — 267 lines, about 3,050 tokens. Two more root files added another 2,200. Every agent loaded all four. Roughly 8,700 tokens just to orient, most of it irrelevant to whatever that agent actually needed to do.&lt;/p&gt;

&lt;p&gt;This is the refactor that fixed it.&lt;/p&gt;

&lt;h2&gt;
  
  
  255 lines and everyone reads everything
&lt;/h2&gt;

&lt;p&gt;Token waste adds up (8,700 per invocation, across 8 agents, across multiple sessions), but that's not the real problem. The real problem is noise diluting signal. An agent working from clean, relevant context does better work than one working from an everything-file where most of what it reads doesn't apply to its job.&lt;/p&gt;

&lt;p&gt;I should have seen it earlier. I didn't.&lt;/p&gt;

&lt;h2&gt;
  
  
  The load-only-if-needed pattern
&lt;/h2&gt;

&lt;p&gt;The fix was just a tedious afternoon of moving files around. Four root files collapsed into one: &lt;code&gt;CLAUDE.md&lt;/code&gt;, now at ~1,070 tokens. Pipeline overview, agent roster, done criteria, chain lineup, dev commands — the minimum every agent needs to orient.&lt;/p&gt;

&lt;p&gt;Then each agent gets its own spec: &lt;code&gt;agents/[role]/AGENT.md&lt;/code&gt;. Small files on purpose. The Copywriter's runs ~420 tokens. QA is ~235. Each covers only what that role needs — inputs, outputs, what it doesn't do, and references to load only when relevant.&lt;/p&gt;

&lt;p&gt;Every one of them ends the same way: &lt;em&gt;Await your brief. It will contain all week-specific state.&lt;/em&gt; Agents spawn stateless. Stable context (who they are, what they do) lives in the AGENT.md. Week-specific facts arrive in the brief for that session. The separation sounds obvious in retrospect. It wasn't obvious when everything was baked into one file.&lt;/p&gt;

&lt;p&gt;Per-agent context load: ~8,700 tokens → ~1,385. Agent output got more focused. Fewer irrelevant references, fewer moments where the wrong agent wandered into territory that wasn't its job. This is also a Claude Code pipeline, and the refactor improved what the agents actually produced — less noise in means less noise out.&lt;/p&gt;

&lt;h2&gt;
  
  
  Cleaning up the broken links
&lt;/h2&gt;

&lt;p&gt;Every structural rename silently breaks briefs that reference the old path. You move a file, run the pipeline a week later, something fails in a confusing way, you trace it back to a path nobody updated.&lt;/p&gt;

&lt;p&gt;The fix is one habit:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;grep&lt;/span&gt; &lt;span class="nt"&gt;-r&lt;/span&gt; &lt;span class="s2"&gt;"old-filename"&lt;/span&gt; &lt;span class="nb"&gt;.&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After collapsing the four root files, I ran it and found 18 files still pointing to deleted names. 11 active files got updated. 7 were historical records — intentionally left, because those paths were accurate when those files were written. That's not sloppiness. It's a triage decision, and it's worth making deliberately rather than stumbling into it.&lt;/p&gt;

&lt;h2&gt;
  
  
  The checksum refactor workflow
&lt;/h2&gt;

&lt;p&gt;The grep habit catches stale references. The checksum workflow is what makes large-scale moves tractable in the first place.&lt;/p&gt;

&lt;p&gt;Before touching anything structural, snapshot checksums of every file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;find &lt;span class="nb"&gt;.&lt;/span&gt; &lt;span class="nt"&gt;-type&lt;/span&gt; f &lt;span class="nt"&gt;-not&lt;/span&gt; &lt;span class="nt"&gt;-path&lt;/span&gt; &lt;span class="s1"&gt;'*/.*'&lt;/span&gt; | &lt;span class="nb"&gt;sort&lt;/span&gt; | xargs &lt;span class="nb"&gt;md5sum&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; checksums_before.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Do the moves. Then run it again:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;find &lt;span class="nb"&gt;.&lt;/span&gt; &lt;span class="nt"&gt;-type&lt;/span&gt; f &lt;span class="nt"&gt;-not&lt;/span&gt; &lt;span class="nt"&gt;-path&lt;/span&gt; &lt;span class="s1"&gt;'*/.*'&lt;/span&gt; | &lt;span class="nb"&gt;sort&lt;/span&gt; | xargs &lt;span class="nb"&gt;md5sum&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; checksums_after.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Same hash, different path means the same file moved. Diff the two outputs and you have a rename map automatically — no manual tracking of what went where. Then feed each old path into grep.&lt;/p&gt;
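&lt;p&gt;That diff is scriptable. A minimal sketch in Python, assuming &lt;code&gt;md5sum&lt;/code&gt;'s &lt;code&gt;hash  path&lt;/code&gt; line format and no duplicate file contents (duplicate hashes would need extra handling):&lt;/p&gt;

```python
# Build a rename map from two md5sum snapshots: the same hash appearing
# under a different path means the file moved.
def parse_checksums(text: str) -> dict[str, str]:
    # md5sum lines look like: "<hash>  <path>"
    out = {}
    for line in text.splitlines():
        if line.strip():
            digest, path = line.split(maxsplit=1)
            out[digest] = path
    return out

def rename_map(before: str, after: str) -> dict[str, str]:
    # old path -> new path, for every hash whose path changed
    b, a = parse_checksums(before), parse_checksums(after)
    return {b[h]: a[h] for h in b if h in a and b[h] != a[h]}
```

&lt;p&gt;Each key in the result is an old path ready to feed into grep.&lt;/p&gt;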

&lt;p&gt;This project moved 76 files in one session using this approach. Without the checksum diff, correlating old references to new paths across multiple terminal sessions would have been a manual tracking problem. With it, the rename map came out of a script. The combination is: checksum diff builds the map, grep burns through the tree, triage decides what to update and what to leave as historical record.&lt;/p&gt;

&lt;h2&gt;
  
  
  YAML vs Markdown — when each wins
&lt;/h2&gt;

&lt;p&gt;One design rule that came out of this: YAML for status documents, Markdown for instruction documents.&lt;/p&gt;

&lt;p&gt;State passed between pipeline stages — session status, open items, key facts — is short, structured, needs to be machine-readable. YAML fits. Agent briefs with code blocks, nested tables, and ordered research questions need Markdown. Forcing those into YAML adds syntax noise and makes them harder to parse.&lt;/p&gt;

&lt;p&gt;The rule: if an agent reads it to understand what to do, use Markdown. If a pipeline stage produces it for the next stage to consume, use YAML. That distinction maps to a lot of systems beyond this one.&lt;/p&gt;
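&lt;p&gt;As a concrete illustration of the second half of the rule (the field names here are hypothetical, not this project's actual schema), a status document produced for the next stage stays terse and machine-readable:&lt;/p&gt;

```yaml
# status.yaml — hypothetical shape, written by one stage, read by the next
week: 5
chain: TON
stage: research_complete
open_items:
  - confirm bridge liquidity
key_facts:
  gas_token: TON
```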

&lt;h2&gt;
  
  
  The honest tradeoff
&lt;/h2&gt;

&lt;p&gt;This architecture has overhead. For a solo one-week project, one context file is fine. You don't need any of this.&lt;/p&gt;

&lt;p&gt;The argument is only load-bearing when agents specialise and when the pipeline runs across multiple sessions — which is exactly this project's structure. If you're wiring up a couple of agents that each run once, skip it.&lt;/p&gt;

&lt;p&gt;It's better now, but I'm still keeping an eye on that 1,000-token root file. It’s already starting to grow again.&lt;/p&gt;

</description>
      <category>claude</category>
      <category>multiagent</category>
      <category>devtools</category>
      <category>ai</category>
    </item>
    <item>
      <title>The most exotic consensus in the series. The most vanilla build.</title>
      <dc:creator>Satori Geeks</dc:creator>
      <pubDate>Wed, 15 Apr 2026 19:14:17 +0000</pubDate>
      <link>https://forem.com/satorigeeks/the-most-exotic-consensus-in-the-series-the-most-vanilla-build-1pf9</link>
      <guid>https://forem.com/satorigeeks/the-most-exotic-consensus-in-the-series-the-most-vanilla-build-1pf9</guid>
      <description>&lt;p&gt;Week 3 of &lt;a href="https://proof-of-support.pages.dev" rel="noopener noreferrer"&gt;Proof of Support&lt;/a&gt; is live on Core DAO. Chain selection has its own article — &lt;a href="https://dev.to/satorigeeks/finding-life-in-the-experimental-evm-shortlist-1o0h"&gt;I covered why Core won the shortlist here&lt;/a&gt;. Short version: Satoshi Plus consensus, Bitcoin miners voting with hashrate, BTC holders timelocking on L1 with OP_CLTV. Three parties, one validator election, EVM on top.&lt;/p&gt;

&lt;p&gt;Here's what actually happened.&lt;/p&gt;

&lt;h2&gt;
  
  
  The thing that was supposed to bite didn't
&lt;/h2&gt;

&lt;p&gt;Research flagged one config change going in: &lt;code&gt;evm_version = "shanghai"&lt;/code&gt; in &lt;code&gt;foundry.toml&lt;/code&gt;. Core DAO's docs require the Shanghai pin — Cancun opcode support isn't confirmed even after the Hermes fork. Two weeks on Base and Scroll had me defaulting to &lt;code&gt;"cancun"&lt;/code&gt;, and silently wrong bytecode is exactly the kind of wrong that wastes a morning.&lt;/p&gt;
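&lt;p&gt;For reference, the pin is a single profile setting. A minimal &lt;code&gt;foundry.toml&lt;/code&gt; sketch, not this project's full config (the &lt;code&gt;solc&lt;/code&gt; version is an assumption for illustration):&lt;/p&gt;

```toml
# foundry.toml — minimal sketch; evm_version is the line that matters here
[profile.default]
solc = "0.8.24"          # assumed compiler version, for illustration
evm_version = "shanghai" # Core DAO: Cancun opcode support not confirmed
```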

&lt;p&gt;&lt;code&gt;forge build&lt;/code&gt; compiled clean on the first run. OZ Ownable and ReentrancyGuard are fine against the Shanghai target. 14 tests. No issues.&lt;/p&gt;

&lt;p&gt;So. That wasn't the problem.&lt;/p&gt;

&lt;h2&gt;
  
  
  The first L1 in the series — and the CORE token problem
&lt;/h2&gt;

&lt;p&gt;Base is an OP Stack rollup. Scroll is a ZK rollup. Core DAO is just an L1. No sequencer, no proving layer, no 7-day withdrawal window. After two weeks of rollup mechanics, the mental model simplification is real — "confirmed" means confirmed, full stop.&lt;/p&gt;

&lt;p&gt;The catch is the gas token. CORE, not ETH. For a developer it's trivial — you need CORE in the deployer wallet, you get it, you move on. For a casual supporter landing on the app, it's a different story.&lt;/p&gt;

&lt;p&gt;Getting CORE isn't smooth. There's one official bridge at &lt;a href="https://bridge.coredao.org/bridge/" rel="noopener noreferrer"&gt;bridge.coredao.org&lt;/a&gt; (uses LayerZero for its bridging tech!) that works. Symbiosis &lt;a href="https://symbiosis.finance/bridge-core-dao" rel="noopener noreferrer"&gt;also lists a Core DAO bridge&lt;/a&gt; — I checked, it was dry, at least for certain pairs. In the end I swapped on a CEX from my Base wallet and sent CORE directly to the deployer. That's not a bad path if you know your way around exchanges, but it's not something a casual user clicks through in thirty seconds. It's probably the biggest real-world friction point for adoption this week.&lt;/p&gt;

&lt;h2&gt;
  
  
  Hermes finality is noticeable
&lt;/h2&gt;

&lt;p&gt;One quick line on this: post-Hermes (November 2025), Core DAO has 2-block finality — about 6 seconds. Compared to Scroll's ZK proving latency, where confirmation means "in the sequencer, not yet on L1," the Core DAO frontend felt fast. Transactions landed visibly. No sitting and wondering.&lt;/p&gt;

&lt;h2&gt;
  
  
  Vibe Check
&lt;/h2&gt;

&lt;p&gt;&lt;em&gt;(Full methodology: &lt;a href="https://dev.to/satorigeeks/how-im-scoring-the-chains-clc"&gt;how I'm scoring the chains&lt;/a&gt;)&lt;/em&gt;&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Dimension&lt;/th&gt;
&lt;th&gt;Weight&lt;/th&gt;
&lt;th&gt;Est&lt;/th&gt;
&lt;th&gt;Actual&lt;/th&gt;
&lt;th&gt;Delta&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;D1 Getting Started&lt;/td&gt;
&lt;td&gt;×1.0&lt;/td&gt;
&lt;td&gt;4&lt;/td&gt;
&lt;td&gt;4&lt;/td&gt;
&lt;td&gt;0&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;D2 Developer Tooling&lt;/td&gt;
&lt;td&gt;×2.0&lt;/td&gt;
&lt;td&gt;4&lt;/td&gt;
&lt;td&gt;4&lt;/td&gt;
&lt;td&gt;0&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;D3 Contract Authoring&lt;/td&gt;
&lt;td&gt;×2.0&lt;/td&gt;
&lt;td&gt;4&lt;/td&gt;
&lt;td&gt;4&lt;/td&gt;
&lt;td&gt;0&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;D4 Documentation Quality&lt;/td&gt;
&lt;td&gt;×1.5&lt;/td&gt;
&lt;td&gt;4&lt;/td&gt;
&lt;td&gt;3&lt;/td&gt;
&lt;td&gt;−1&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;D5 Frontend / Wallet&lt;/td&gt;
&lt;td&gt;×2.0&lt;/td&gt;
&lt;td&gt;4&lt;/td&gt;
&lt;td&gt;4&lt;/td&gt;
&lt;td&gt;0&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;D6 Deployment Experience&lt;/td&gt;
&lt;td&gt;×1.5&lt;/td&gt;
&lt;td&gt;4&lt;/td&gt;
&lt;td&gt;3&lt;/td&gt;
&lt;td&gt;−1&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;D7 Transaction Cost&lt;/td&gt;
&lt;td&gt;×1.0&lt;/td&gt;
&lt;td&gt;5&lt;/td&gt;
&lt;td&gt;5&lt;/td&gt;
&lt;td&gt;0&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;D8 Community &amp;amp; Ecosystem&lt;/td&gt;
&lt;td&gt;×1.0&lt;/td&gt;
&lt;td&gt;4&lt;/td&gt;
&lt;td&gt;4&lt;/td&gt;
&lt;td&gt;0&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Actual: 46/60. Estimated: 49/60. Delta: −3 weighted points&lt;/strong&gt; &lt;em&gt;(D4 and D6 each drop one raw point at ×1.5 weight: −1.5 each)&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;D4 and D6 are both down one — the verification docs had the wrong flags, and the testnet/mainnet API key split wasn't documented anywhere. Everything else landed where research said it would.&lt;/p&gt;

&lt;p&gt;D5 is the strong spot: &lt;code&gt;coreDao&lt;/code&gt; is built into viem, MetaMask connected immediately, zero custom chain config for mainnet. D7 stays a 5 — the deploy cost 0.083 CORE, roughly $0.002 at ~$0.027/CORE. A &lt;code&gt;sendSupport()&lt;/code&gt; call is a fraction of that. Effectively free for supporters.&lt;/p&gt;
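&lt;p&gt;The D7 figure is a one-liner to sanity-check, using the numbers above:&lt;/p&gt;

```python
# Deploy cost in USD from the article's figures:
# 0.083 CORE spent, at roughly $0.027 per CORE.
deploy_core = 0.083
usd_per_core = 0.027
deploy_usd = deploy_core * usd_per_core
print(f"${deploy_usd:.4f}")  # about $0.002
```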

&lt;h2&gt;
  
  
  The consensus paradox
&lt;/h2&gt;

&lt;p&gt;Satoshi Plus is the most interesting security mechanism I've touched in this series. Bitcoin miners include metadata in coinbase transactions to vote for Core validators. BTC holders timelock coins on Bitcoin mainnet using OP_CLTV — the coins don't move, they just declare a preference. Three independent parties produce a hybrid score that elects 31 validators for the day.&lt;/p&gt;

&lt;p&gt;From inside the house — the Solidity contract, the viem adapter, the MetaMask prompt — I felt none of it. Same EVM. Same Foundry flags. Same gas estimates I'd have on any other EVM L1. The foundations are Bitcoin-native. The walls look identical.&lt;/p&gt;

&lt;p&gt;I'm genuinely uncertain whether that's a feature or a missed opportunity. You want the security model to be invisible to the developer — that's what "it just works" means. But there's something odd about spending a week on the most exotic consensus design in the series and coming away with a build log that reads the same as the previous two weeks.&lt;/p&gt;




&lt;p&gt;The app is live: &lt;a href="https://proof-of-support.pages.dev" rel="noopener noreferrer"&gt;https://proof-of-support.pages.dev&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Week 4 picks up the non-EVM category. The build log will look different.&lt;/p&gt;

</description>
      <category>blockchain</category>
      <category>bitcoin</category>
      <category>ethereum</category>
      <category>foundry</category>
    </item>
    <item>
      <title>Finding life in the experimental EVM shortlist</title>
      <dc:creator>Satori Geeks</dc:creator>
      <pubDate>Mon, 13 Apr 2026 20:40:06 +0000</pubDate>
      <link>https://forem.com/satorigeeks/finding-life-in-the-experimental-evm-shortlist-1o0h</link>
      <guid>https://forem.com/satorigeeks/finding-life-in-the-experimental-evm-shortlist-1o0h</guid>
      <description>&lt;p&gt;"Experimental" EVM. That was the brief for Week 3. For this series, it means finding a chain where the Solidity code stays the same but the neighborhood is completely foreign. No L2s, no rollups, no "rebranded" OP Stack chains. I wanted my code secured by something that isn't an Ethereum validator.&lt;/p&gt;

&lt;p&gt;I expected a playground. I found a graveyard.&lt;/p&gt;

&lt;h2&gt;
  
  
  The ghosts and the marketing plays
&lt;/h2&gt;

&lt;p&gt;Step outside the Ethereum L2 bubble and the landscape changes fast. Most of the names I remembered from previous cycles are either in maintenance mode or actively winding down. &lt;/p&gt;

&lt;p&gt;Evmos, once the poster child for EVM on Cosmos, effectively finished in 2025. Milkomeda—the bridge to Cardano and Algorand—is archived and frozen. Vaulta (the artist formerly known as EOS EVM) is being deprecated in favor of yet another project. (I’ve lost track of how many times EOS has rebranded at this point).&lt;/p&gt;

&lt;p&gt;Then there’s the "Bitcoin EVM" wave. BOB (Build on Bitcoin) sounds perfect, until you realize it’s just another OP Stack rollup on Ethereum where you pay gas in ETH. It’s a marketing play in a trench coat. I’m looking for a new security model, not a new way to pay fees to Ethereum L1. &lt;/p&gt;

&lt;p&gt;Hedera was another candidate, but the "EVM compatibility" has too many asterisks. You have to "activate" new MetaMask addresses manually. The contract size limit is enforced through a non-EVM file service that breaks standard Foundry deploys. It’s a neighborhood with a very restrictive HOA—technically live, but a nightmare to move into. I'm here to write code, not read a 40-page manual on how to use a wallet.&lt;/p&gt;

&lt;h2&gt;
  
  
  The scored finalists
&lt;/h2&gt;

&lt;p&gt;I ran the remaining candidates through &lt;a href="https://dev.to/satorigeeks/how-im-scoring-the-chains-clc"&gt;the rubric&lt;/a&gt;. I wasn't just looking for a chain that worked; I wanted one that was actually alive. &lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Chain&lt;/th&gt;
&lt;th&gt;Platform&lt;/th&gt;
&lt;th&gt;Score&lt;/th&gt;
&lt;th&gt;One-line verdict&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Core DAO&lt;/td&gt;
&lt;td&gt;Bitcoin (Satoshi Plus)&lt;/td&gt;
&lt;td&gt;49/60&lt;/td&gt;
&lt;td&gt;Cleanest DX, stable ecosystem, Bitcoin consensus is the real story&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Moonbeam&lt;/td&gt;
&lt;td&gt;Polkadot parachain&lt;/td&gt;
&lt;td&gt;48/60&lt;/td&gt;
&lt;td&gt;Solid tooling, sharp ecosystem decline&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Flare Network&lt;/td&gt;
&lt;td&gt;Data-centric L1&lt;/td&gt;
&lt;td&gt;48/60&lt;/td&gt;
&lt;td&gt;FTSOv2 oracles are interesting; niche audience&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Berachain&lt;/td&gt;
&lt;td&gt;Cosmos SDK&lt;/td&gt;
&lt;td&gt;45/60&lt;/td&gt;
&lt;td&gt;TVL collapsed from $3.2B to $74M since launch&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Kava&lt;/td&gt;
&lt;td&gt;Cosmos SDK EVM&lt;/td&gt;
&lt;td&gt;41/60&lt;/td&gt;
&lt;td&gt;Maintenance mode, verification friction&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Rootstock / RSK&lt;/td&gt;
&lt;td&gt;Bitcoin (merge-mined)&lt;/td&gt;
&lt;td&gt;41/60&lt;/td&gt;
&lt;td&gt;Oldest Bitcoin EVM, Foundry bug, HTTP-only RPC&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Astar Network&lt;/td&gt;
&lt;td&gt;Polkadot parachain&lt;/td&gt;
&lt;td&gt;41/60&lt;/td&gt;
&lt;td&gt;Team pivoted to Soneium; parachain in caretaker mode&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Neon EVM&lt;/td&gt;
&lt;td&gt;Solana SVM&lt;/td&gt;
&lt;td&gt;36/60&lt;/td&gt;
&lt;td&gt;Exotic, but lacks Ethereum's &lt;code&gt;transfer()&lt;/code&gt; gas stipend / reentrancy protection&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Canto&lt;/td&gt;
&lt;td&gt;Cosmos SDK EVM&lt;/td&gt;
&lt;td&gt;36/60&lt;/td&gt;
&lt;td&gt;No GitHub commits since Sep 2024; team status unknown&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  Why Core DAO
&lt;/h2&gt;

&lt;p&gt;Core DAO won because it offered the cleanest path to the most interesting narrative. While other survivors are in maintenance mode (Moonbeam, Kava, Astar) or dealing with massive TVL drawdowns (Berachain), Core DAO feels stable enough to actually build on. I checked the explorer—the blocks are ticking over, the documentation is current, and the "neighborhood" doesn't feel like it’s being packed into boxes.&lt;/p&gt;

&lt;p&gt;The hook is "Satoshi Plus" consensus. It’s a three-party model: Bitcoin miners delegate their hash power, BTC holders stake natively (via &lt;code&gt;OP_CLTV&lt;/code&gt; time-locks, no custodians involved), and CORE stakers handle the validation. It’s an EVM chain physically integrated into the Bitcoin mining economy.&lt;/p&gt;

&lt;p&gt;Coming from &lt;a href="https://dev.to/satorigeeks/why-im-starting-with-base-and-what-comes-after-1733"&gt;Week 1&lt;/a&gt; (Base, an optimistic rollup) and &lt;a href="https://dev.to/satorigeeks/the-runner-up-chain-won-how-i-chose-scroll-for-week-2-11n5"&gt;Week 2&lt;/a&gt; (Scroll, a ZK rollup), Core DAO is the third major answer to the "what secures my code?" question. This time, the answer is Bitcoin miners. &lt;/p&gt;

&lt;h2&gt;
  
  
  The ones that got away
&lt;/h2&gt;

&lt;p&gt;Flare Network and Rootstock are still on my list. Flare's "enshrined oracles"—where data feeds are part of the L1 protocol itself—is a compelling piece of infrastructure. Rootstock remains the OG Bitcoin EVM, and despite some Foundry-related address bugs, it's still the purist's choice for merge-mining.&lt;/p&gt;

&lt;p&gt;But for Week 3, we're heading to Core. The code is the same, but the security model is a completely different animal.&lt;/p&gt;




&lt;p&gt;→ The live app is at &lt;a href="https://proof-of-support.pages.dev" rel="noopener noreferrer"&gt;https://proof-of-support.pages.dev&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;→ Scoring methodology for the series: &lt;a href="https://dev.to/satorigeeks/how-im-scoring-the-chains-clc"&gt;How I'm Scoring the Chains&lt;/a&gt;&lt;/p&gt;

</description>
      <category>blockchain</category>
      <category>buildinpublic</category>
      <category>web3</category>
    </item>
    <item>
      <title>What "Stage 1 ZK Rollup" actually means for your deployed contract</title>
      <dc:creator>Satori Geeks</dc:creator>
      <pubDate>Sun, 12 Apr 2026 09:08:28 +0000</pubDate>
      <link>https://forem.com/satorigeeks/what-stage-1-zk-rollup-actually-means-for-your-deployed-contract-1cif</link>
      <guid>https://forem.com/satorigeeks/what-stage-1-zk-rollup-actually-means-for-your-deployed-contract-1cif</guid>
      <description>&lt;p&gt;You'll see "Stage 1" on L2Beat and most developers treat it as a good sign — higher is better, move on. But it's a specific security model with specific requirements. Scroll crossed into it with the Euclid upgrade in April 2025.&lt;/p&gt;

&lt;h2&gt;
  
  
  The three stages
&lt;/h2&gt;

&lt;p&gt;L2Beat defines rollup maturity in three stages. The progression isn't about performance — it's about who can stop you from getting your money out.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Stage 0&lt;/strong&gt; is where most rollups start. The validity proof system may be live, but user protections are thin: the operator can censor transactions, the team controls upgrade keys unilaterally, and there's no guaranteed exit path that doesn't depend on the team cooperating. You're trusting that the people running it won't act against you.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Stage 1&lt;/strong&gt; adds three things:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Forced transaction inclusion.&lt;/strong&gt; If the sequencer ignores your transaction, you can submit it directly to the L1 inbox contract. The sequencer can't censor you indefinitely.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Permissionless batch submission.&lt;/strong&gt; If the sequencer fails to post a batch within a defined window — the Liveness Gap — any user can submit a batch and a validity proof directly to the L1 contract to move state forward. The chain doesn't die with its operator.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Security Council with independent majority.&lt;/strong&gt; Upgrade keys move from team-controlled multisig to a council where independent members hold the majority. For Scroll, that's a 9-of-12 multisig — 7 of the 9 required signers are not Scroll employees. The team cannot push a protocol upgrade unilaterally.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Stage 2&lt;/strong&gt; removes trust in the Security Council as well. Exit is fully guaranteed by code, not governance. No rollup has reached Stage 2 yet — Arbitrum has removed the whitelist for fraud proofs, but retains a Security Council emergency override for bug fixes, so most researchers still call it Stage 1.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Euclid actually changed
&lt;/h2&gt;

&lt;p&gt;Before Euclid, Scroll's prover used custom halo2 arithmetic circuits — one circuit gadget per EVM opcode, hard capacity ceiling per batch. Valid state transitions required the proof system, but forced inclusion and permissionless batch submission weren't live.&lt;/p&gt;

&lt;p&gt;Euclid shipped in two phases: Phase 1 on April 16, 2025, Phase 2 on April 22. Forced inclusion went live — users can now submit directly to the L1 inbox contract if censored — and the Security Council replaced team-controlled upgrade keys. Scroll also replaced the prover stack with OpenVM (a RISC-V zkVM built with Axiom and the Ethereum Foundation's PSE team), which delivered a 5× throughput increase and a ~50% fee reduction, and migrated the state commitment from Scroll's custom zktrie to Ethereum's standard Merkle-Patricia Trie.&lt;/p&gt;

&lt;p&gt;The ZK validity proofs were already live before Euclid. What Euclid added was the exit guarantees and independent upgrade control.&lt;/p&gt;

&lt;h2&gt;
  
  
  What this means for a contract you deploy today
&lt;/h2&gt;

&lt;p&gt;For a simple storage contract: every state transition is cryptographically verified, so an adversarial sequencer can't get an invalid state root accepted — no valid proof exists for it. If the sequencer goes rogue, your funds aren't trapped: unlike an optimistic rollup, where exit waits out a 7-day fraud-proof window, a validity proof makes your exit final as soon as the batch is finalized on L1. A protocol upgrade still requires sign-off from independent members who can block it.&lt;/p&gt;

&lt;p&gt;None of this is full decentralization. A single sequencer still handles transaction ordering, and Stage 2 is the actual end state — nobody's there yet. But the trust assumptions are different from a Stage 0 chain where "we're a reputable team" is the primary protection.&lt;/p&gt;

&lt;p&gt;It matters most when evaluating chains for anything with real money in it. For the Proof of Support rubric, it's part of why Scroll's ecosystem score sits where it does — even as Scroll has shifted focus from the points-farming TVL chase of 2024 toward Total Value Secured (TVS) and technical alignment.&lt;/p&gt;




&lt;p&gt;→ The build this came from: &lt;a href="https://dev.to/satorigeeks/research-passed-tests-passed-security-passed-scroll-mainnet-didnt-37l"&gt;Week 2 retrospective on Scroll&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;→ The live app is at &lt;a href="https://proof-of-support.pages.dev" rel="noopener noreferrer"&gt;https://proof-of-support.pages.dev&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;→ Scoring methodology for the series: &lt;a href="https://dev.to/satorigeeks/how-im-scoring-the-chains-clc"&gt;How I'm Scoring the Chains&lt;/a&gt;&lt;/p&gt;

</description>
      <category>blockchain</category>
      <category>ethereum</category>
      <category>zk</category>
      <category>security</category>
    </item>
    <item>
      <title>Why Your Scroll Deployment Cost $25 — And Then $0.04 the Next Attempt</title>
      <dc:creator>Satori Geeks</dc:creator>
      <pubDate>Sat, 11 Apr 2026 11:44:55 +0000</pubDate>
      <link>https://forem.com/satorigeeks/why-your-scroll-deployment-cost-25-and-then-004-the-next-attempt-4m8c</link>
      <guid>https://forem.com/satorigeeks/why-your-scroll-deployment-cost-25-and-then-004-the-next-attempt-4m8c</guid>
      <description>&lt;p&gt;&lt;code&gt;forge script&lt;/code&gt; returned an L1 fee estimate of roughly $25 to deploy a 6,452-byte contract on Scroll mainnet. L1 base fee at the time: 0.23 gwei — historically low. The wallet had ETH. The node rejected the transaction anyway.&lt;/p&gt;

&lt;p&gt;The next morning, the same contract deployed for $0.04 total. The 625× gap comes down to two parameters and who controls them.&lt;/p&gt;

&lt;h2&gt;
  
  
  What the oracle returns
&lt;/h2&gt;

&lt;p&gt;Every Scroll transaction carries an L1 data fee on top of the L2 execution fee. The Curie hardfork (mainnet July 3, 2024) introduced this formula:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;l1Fee = (commitScalar × l1BaseFee + blobScalar × txDataLength × l1BlobBaseFee) / 1e9
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Both parameters are in the &lt;code&gt;L1GasPriceOracle&lt;/code&gt; contract at &lt;code&gt;0x5300000000000000000000000000000000000002&lt;/code&gt;. Call &lt;code&gt;getL1Fee(bytes calldata data)&lt;/code&gt; with your serialised transaction to see what Scroll will estimate.&lt;/p&gt;

&lt;p&gt;At peak: L1 base fee was 0.23 gwei, &lt;code&gt;commitScalar&lt;/code&gt; was &lt;code&gt;6,195,200,000,000&lt;/code&gt;. The oracle returned &lt;code&gt;15,216,334,956,613,434 wei&lt;/code&gt; — roughly $25. The math checks out.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;getL1Fee&lt;/code&gt; is an upper-bound estimate, not the amount charged. The oracle is a push model — a Scroll-operated relayer updates &lt;code&gt;l1BaseFee&lt;/code&gt; and &lt;code&gt;l1BlobBaseFee&lt;/code&gt; when the change crosses a threshold. Once your transaction lands in an L2 block, the L1 fee locks in at whatever value the oracle holds at that moment. The sequencer absorbs the difference between quoted and actual L1 cost. The actual charge is typically ~2× lower than the estimate.&lt;/p&gt;
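&lt;p&gt;The formula is simple enough to sketch directly. The &lt;code&gt;commitScalar&lt;/code&gt; and L1 base fee below are the article's figures; the blob-side values are hypothetical placeholders, since they aren't quoted here:&lt;/p&gt;

```python
# Scroll's post-Curie L1 data fee, in wei:
#   l1Fee = (commitScalar*l1BaseFee + blobScalar*txDataLength*l1BlobBaseFee) / 1e9
def l1_fee(commit_scalar: int, l1_base_fee: int,
           blob_scalar: int, tx_data_len: int, l1_blob_base_fee: int) -> int:
    return (commit_scalar * l1_base_fee
            + blob_scalar * tx_data_len * l1_blob_base_fee) // 10**9

# commitScalar and the 0.23 gwei L1 base fee are from the article;
# the blob-side values are illustrative only.
fee = l1_fee(
    commit_scalar=6_195_200_000_000,
    l1_base_fee=230_000_000,          # 0.23 gwei, in wei
    blob_scalar=2_000_000_000_000,    # hypothetical
    tx_data_len=6_452,
    l1_blob_base_fee=1_000_000_000,   # hypothetical: 1 gwei
)
print(fee / 10**18, "ETH")
```

&lt;p&gt;With the real blob-side values read from the oracle, the same function should reproduce what the node quotes.&lt;/p&gt;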

&lt;h2&gt;
  
  
  What happened to the scalars
&lt;/h2&gt;

&lt;p&gt;The $25 estimate wasn't caused by high Ethereum gas or unusual contract size. Over roughly four days, the Scroll team made six consecutive &lt;code&gt;setCommitScalar&lt;/code&gt; and &lt;code&gt;setBlobScalar&lt;/code&gt; calls from a team multisig — a cumulative &lt;strong&gt;1,280×&lt;/strong&gt; increase.&lt;/p&gt;

&lt;p&gt;There's no onchain governance vote required to update these parameters. No timelock. The &lt;code&gt;L1GasPriceOracle&lt;/code&gt; owner calls the setter directly. Scroll's own SDK docs noted as of early 2025 that tooling to automate updates was "currently being built" — the adjustments are manual operational decisions.&lt;/p&gt;

&lt;p&gt;On April 9, 2026, after developer pushback — notably from the Succinct relayer team — the scalars were rolled back 160×. Transactions that had been costing $20+ returned to fractions of a cent. Total excess fees paid during the spike: roughly $50,000. Automated Ether.fi Cash bots, mid-migration to Optimism, bore about 66% of that. L2BEAT and The Defiant covered the event; the framing is theirs, the numbers here are on-chain.&lt;/p&gt;

&lt;h2&gt;
  
  
  The deploy, after the rollback
&lt;/h2&gt;

&lt;p&gt;Next morning: L1 base fee at 0.092 gwei. &lt;code&gt;getL1Fee&lt;/code&gt; estimated 0.000038 ETH. Actual L1 data fee on-chain: &lt;code&gt;0.000019191719458387 ETH&lt;/code&gt;. Total deploy cost: &lt;code&gt;0.000019357216031371 ETH&lt;/code&gt; — $0.04.&lt;/p&gt;

&lt;p&gt;The 2× gap between estimate and actual is expected, not a discrepancy. The conservative &lt;code&gt;fluctuation_multiplier&lt;/code&gt; baked into the scalars is intentional. The sequencer quoted high, settled lower, kept the margin.&lt;/p&gt;

&lt;p&gt;Gas used: 1,377,898 of 1,791,267 (76.92%). Same 6,452-byte bytecode. Identical contract.&lt;/p&gt;

&lt;h2&gt;
  
  
  Before you deploy on Scroll
&lt;/h2&gt;

&lt;p&gt;Read &lt;code&gt;l1BaseFee()&lt;/code&gt; from the oracle before any large deploy. If it looks elevated relative to L1 mainnet gas at &lt;a href="https://etherscan.io/gastracker" rel="noopener noreferrer"&gt;etherscan.io/gastracker&lt;/a&gt;, the scalars may be set high — not L1 congestion.&lt;/p&gt;

&lt;p&gt;Treat &lt;code&gt;getL1Fee&lt;/code&gt; as a ceiling. A 2× overestimate is normal. During the April 2026 spike it was the difference between $25 and $0.04.&lt;/p&gt;

&lt;p&gt;There's no automated alert when scalars change. The most reliable current signal is L2BEAT's Scroll page or the Scroll team's announcement channels.&lt;/p&gt;

&lt;p&gt;If the estimate looks wrong, wait. The fee model is an operational parameter, not a protocol constant. It changed 1,280× in four days. It rolled back in one.&lt;/p&gt;

&lt;p&gt;The contract deployed. Week 2 is done. The mechanism is documented.&lt;/p&gt;




&lt;p&gt;→ The build this came from: &lt;a href="https://dev.to/satorigeeks/research-passed-tests-passed-security-passed-scroll-mainnet-didnt-37l"&gt;Week 2 retrospective on Scroll&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;→ The live app is at &lt;a href="https://proof-of-support.pages.dev" rel="noopener noreferrer"&gt;https://proof-of-support.pages.dev&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;→ Scoring methodology for the series: &lt;a href="https://dev.to/satorigeeks/how-im-scoring-the-chains-clc"&gt;How I'm Scoring the Chains&lt;/a&gt;&lt;/p&gt;

</description>
      <category>blockchain</category>
      <category>buildinpublic</category>
      <category>solidity</category>
      <category>ethereum</category>
    </item>
    <item>
      <title>Research Passed. Tests Passed. Security Passed. Scroll Mainnet Didn't.</title>
      <dc:creator>Satori Geeks</dc:creator>
      <pubDate>Thu, 09 Apr 2026 14:47:47 +0000</pubDate>
      <link>https://forem.com/satorigeeks/research-passed-tests-passed-security-passed-scroll-mainnet-didnt-37l</link>
      <guid>https://forem.com/satorigeeks/research-passed-tests-passed-security-passed-scroll-mainnet-didnt-37l</guid>
      <description>&lt;p&gt;The wallet had ETH. The transaction kept failing with "insufficient funds."&lt;/p&gt;

&lt;p&gt;I checked the balance: &lt;code&gt;cast balance&lt;/code&gt; confirmed 0.0023 ETH. The error was &lt;code&gt;-32000: invalid transaction: insufficient funds for l1fee + gas * price + value&lt;/code&gt;. That last field was the tell. This wasn't about execution gas.&lt;/p&gt;

&lt;h2&gt;
  
  
  The investigation
&lt;/h2&gt;

&lt;p&gt;Sent 1 wei to myself first — no calldata, just to confirm the account could actually send anything.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;cast send 0x5f7bD072EADeB2C18F2aDa5a0c5b125423a1EA36 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--value&lt;/span&gt; 1 &lt;span class="nt"&gt;--rpc-url&lt;/span&gt; https://rpc.scroll.io &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--account&lt;/span&gt; deployer &lt;span class="nt"&gt;--legacy&lt;/span&gt; &lt;span class="nt"&gt;--gas-price&lt;/span&gt; 200000
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Went through. Receipt showed &lt;code&gt;l1Fee: 127,720,592,525,704 wei&lt;/code&gt; (0.000128 ETH) for zero calldata. At 0.23 gwei Ethereum base fee. Which is historically low.&lt;/p&gt;

&lt;p&gt;That confirmed it. Queried the L1GasOracle at &lt;code&gt;0x5300000000000000000000000000000000000002&lt;/code&gt; for the full picture. Post-Curie hardfork, Scroll's fee model uses a &lt;code&gt;commitScalar&lt;/code&gt; to price L1 data.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;commitScalar: 6,195,200,000,000
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;6.2 trillion. That scalar sets how heavily L1 data costs weigh in the final fee. Then I called &lt;code&gt;getL1Fee&lt;/code&gt; directly with the deployment bytecode — all 6,452 bytes.&lt;/p&gt;
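
&lt;p&gt;For anyone reproducing this: the reads were plain &lt;code&gt;cast call&lt;/code&gt;s against the oracle predeploy. A sketch — the contract name passed to &lt;code&gt;forge inspect&lt;/code&gt; is a stand-in, and &lt;code&gt;commitScalar()&lt;/code&gt; / &lt;code&gt;getL1Fee(bytes)&lt;/code&gt; are the standard post-Curie oracle getters:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Read the commit scalar from the L1GasOracle predeploy
cast call 0x5300000000000000000000000000000000000002 \
  "commitScalar()(uint256)" --rpc-url https://rpc.scroll.io

# Price the full deployment payload (MyContract is a stand-in name)
cast call 0x5300000000000000000000000000000000000002 \
  "getL1Fee(bytes)(uint256)" "$(forge inspect MyContract bytecode)" \
  --rpc-url https://rpc.scroll.io
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;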

&lt;p&gt;Result: &lt;code&gt;15,216,334,956,613,434 wei&lt;/code&gt;. That's 0.01521 ETH in L1 fees before L2 execution. Total roughly 0.016 ETH, about $25 at current prices.&lt;/p&gt;

&lt;p&gt;Week 1, Base: 0.000021 ETH total including L1 fees. Same contract, same bytecode.&lt;/p&gt;

&lt;p&gt;That's 700×. Not a universal constant. Base and Scroll have different underlying architectures, DA models, and fee parameters. It's the number for this contract, on these two chains, at the time of testing.&lt;/p&gt;

&lt;p&gt;Mainnet cancelled. Not a bug, not a configuration error. Economically unsuitable for this use case.&lt;/p&gt;

&lt;p&gt;(The per-transaction L1 fee tells the same story: in this implementation, a user's &lt;code&gt;sendSupport&lt;/code&gt; call would cost more in L1 overhead than the tip itself. Deployment cost is sensitive to bytecode size and compression, so other contracts will see different numbers — but for a small social contract, the math doesn't work.)&lt;/p&gt;

&lt;h2&gt;
  
  
  One thing worth calling out
&lt;/h2&gt;

&lt;p&gt;Scrollscan runs on Blockscout, not Etherscan. On the verified contract page there's a file tree panel in the sidebar: the full project directory structure, each file readable on click. Nothing flattened into a single tab. There's also a UML diagram auto-generated from the Solidity source: inheritance, state variables, function signatures in one view. Best block explorer UX I've seen in this series so far. The UML visualizer especially — Etherscan doesn't have one.&lt;/p&gt;

&lt;p&gt;The verified testnet contract is at &lt;a href="https://sepolia.scrollscan.com/address/0x6d89c4974f8f211ed07b8e8da08177dee627defa" rel="noopener noreferrer"&gt;&lt;code&gt;0x6D89c4974f8f211eD07b8E8DA08177DEE627DeFa&lt;/code&gt;&lt;/a&gt; on Scroll Sepolia. Worth a look at the explorer tab.&lt;/p&gt;

&lt;h2&gt;
  
  
  Community signal
&lt;/h2&gt;

&lt;p&gt;Posted to &lt;a href="https://farcaster.xyz/satorigeeks/0x0ab0373a" rel="noopener noreferrer"&gt;Farcaster&lt;/a&gt;, Scroll Discord, and &lt;a href="https://ethereum.stackexchange.com/questions/172244/scroll-mainnet-is-a-0-015-eth-l1-data-fee-for-deploying-a-6-5-kb-contract-corre" rel="noopener noreferrer"&gt;Ethereum StackExchange&lt;/a&gt; asking about the Curie fee model's impact on deployers. No response so far. If I got the fee mechanics wrong, nobody's corrected me yet. Not bitter — but it's a data point, and the rubric counts it.&lt;/p&gt;

&lt;h2&gt;
  
  
  The vibe check
&lt;/h2&gt;

&lt;p&gt;&lt;em&gt;(Full methodology: &lt;a href="https://paragraph.com/@0x5f7bd072eadeb2c18f2ada5a0c5b125423a1ea36/how-im-scoring-the-chains" rel="noopener noreferrer"&gt;How I'm Scoring the Chains&lt;/a&gt;)&lt;/em&gt;&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Dimension&lt;/th&gt;
&lt;th&gt;Weight&lt;/th&gt;
&lt;th&gt;Estimated&lt;/th&gt;
&lt;th&gt;Actual&lt;/th&gt;
&lt;th&gt;Delta&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;D1 Getting Started&lt;/td&gt;
&lt;td&gt;×1.0&lt;/td&gt;
&lt;td&gt;4&lt;/td&gt;
&lt;td&gt;2&lt;/td&gt;
&lt;td&gt;-2&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;D2 Developer Tooling&lt;/td&gt;
&lt;td&gt;×2.0&lt;/td&gt;
&lt;td&gt;5&lt;/td&gt;
&lt;td&gt;5&lt;/td&gt;
&lt;td&gt;0&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;D3 Contract Authoring&lt;/td&gt;
&lt;td&gt;×2.0&lt;/td&gt;
&lt;td&gt;5&lt;/td&gt;
&lt;td&gt;5&lt;/td&gt;
&lt;td&gt;0&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;D4 Documentation Quality&lt;/td&gt;
&lt;td&gt;×1.5&lt;/td&gt;
&lt;td&gt;4&lt;/td&gt;
&lt;td&gt;3&lt;/td&gt;
&lt;td&gt;-1&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;D5 Frontend / Wallet&lt;/td&gt;
&lt;td&gt;×2.0&lt;/td&gt;
&lt;td&gt;4&lt;/td&gt;
&lt;td&gt;4&lt;/td&gt;
&lt;td&gt;0&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;D6 Deployment Experience&lt;/td&gt;
&lt;td&gt;×1.5&lt;/td&gt;
&lt;td&gt;4&lt;/td&gt;
&lt;td&gt;3&lt;/td&gt;
&lt;td&gt;-1&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;D7 Transaction Cost&lt;/td&gt;
&lt;td&gt;×1.0&lt;/td&gt;
&lt;td&gt;3&lt;/td&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;td&gt;-2&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;D8 Community &amp;amp; Ecosystem&lt;/td&gt;
&lt;td&gt;×1.0&lt;/td&gt;
&lt;td&gt;3&lt;/td&gt;
&lt;td&gt;2&lt;/td&gt;
&lt;td&gt;-1&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Weighted total: 42/60.&lt;/strong&gt; Good band. Research estimated 50/60.&lt;/p&gt;

&lt;p&gt;Every point of that 8-point drop came from economics and ecosystem, not tooling.&lt;/p&gt;

&lt;p&gt;D2, D3: identical Foundry and OZ setup to Week 1. Nothing changed. Score unchanged.&lt;/p&gt;

&lt;p&gt;D1 (2/5): nine faucets tried. Most required login, mainnet ETH balance, or were under maintenance. Resolution: Sepolia ETH from Google Cloud Web3 faucet, bridged via the Scroll Sepolia portal. About 10 minutes waiting for the bridge. The neighbourhood is fine once you're in; getting there is a longer walk than expected.&lt;/p&gt;

&lt;p&gt;D4 (3/5): Scrollscan stopped accepting new API key registrations mid-transition between explorer providers. The fix is Foundry's &lt;code&gt;--verifier blockscout --verifier-url https://sepolia.scrollscan.com/api/&lt;/code&gt; flags. Not mentioned anywhere in the official getting-started docs. The Curie fee model impact on deployers is the same story — research and blog posts exist, but none of it is surfaced where a developer would look before a mainnet deploy.&lt;/p&gt;
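
&lt;p&gt;Spelled out, the verification invocation looks roughly like this — the contract path and name are placeholders; the two flags are the actual fix:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Verify on Scrollscan's Blockscout backend (contract name is a placeholder)
forge verify-contract 0x6D89c4974f8f211eD07b8E8DA08177DEE627DeFa \
  src/ProofOfSupport.sol:ProofOfSupport \
  --verifier blockscout \
  --verifier-url https://sepolia.scrollscan.com/api/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;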

&lt;p&gt;D5 (4/5): wagmi ships &lt;code&gt;scroll&lt;/code&gt; and &lt;code&gt;scrollSepolia&lt;/code&gt; chain imports. Wallet integration was clean. Dropped the Coinbase SDK connector — Scroll supports EIP-1193 so injected wallets including Coinbase Wallet work fine, but the Coinbase SDK's Smart Wallet onboarding path lacks native Scroll support. Integration hurdle, not a chain-level incompatibility. Worth knowing before you wire it up.&lt;/p&gt;

&lt;p&gt;D6 (3/5): testnet was a single command, Blockscout verification worked first attempt. Mainnet blocked on economics, not process.&lt;/p&gt;

&lt;p&gt;D7 (1/5): ~$25 to deploy at historically low L1 prices. That cost would increase significantly under higher L1 gas prices — rollup fees don't scale linearly due to batching and compression, but the direction is clear. For small, frequent transactions the L1 fee overhead is structural and unworkable for this use case.&lt;/p&gt;

&lt;p&gt;D8 (2/5): three channels, zero responses.&lt;/p&gt;

&lt;h2&gt;
  
  
  Testnet is the verdict
&lt;/h2&gt;

&lt;p&gt;The testnet deploy worked cleanly. One command, contract verified on Blockscout, 14/14 tests passing. The decision to stop at testnet was economic, not technical.&lt;/p&gt;

&lt;p&gt;There is no mainnet contract address. That's not an oversight.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Verdict:&lt;/strong&gt; The ZK proof system is genuinely interesting. Foundry, OZ, wagmi all work identically to every other EVM chain. The Blockscout explorer is the best in this series. The fee model, at current parameters, makes Scroll unsuitable for social micro-transactions.&lt;/p&gt;

&lt;p&gt;I wouldn't build a production tip jar here today.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Update — same day, a few hours later&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;It went through.&lt;/p&gt;

&lt;p&gt;Contract live on Scroll mainnet at &lt;code&gt;0x53814B0BB5fe236285843342563213287DFFb674&lt;/code&gt;. First message already sent — &lt;a href="https://scrollscan.com/tx/0xf8cba11c0b2ac1f496c5a14af4bcc8001578cd1d75714ea6168c185b47ae452c" rel="noopener noreferrer"&gt;tx on Scrollscan&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Total deploy cost: 0.000019357 ETH. $0.04. L1 data fee: 0.000019191 ETH, L2 execution: 0.000000165 ETH. The oracle's re-query had estimated ~0.000038 ETH earlier — actual was roughly half. At least on this deploy, it overestimates.&lt;/p&gt;

&lt;p&gt;The number worth keeping: peak L1 fee earlier that day was ~0.01521 ETH (~$25). Actual deploy a few hours later: $0.04. Same contract. Same bytecode. Same chain. Same day. ~625×.&lt;/p&gt;
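
&lt;p&gt;The multiple is straight dollar arithmetic; the raw ETH figures land somewhat higher, since both dollar quotes are rounded:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Peak quote vs. actual deploy, in dollars and in raw ETH
awk 'BEGIN { printf "%.0fx in USD\n", 25 / 0.04 }'
awk 'BEGIN { printf "%.0fx in ETH\n", 0.01521 / 0.000019357 }'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;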

&lt;p&gt;That's not a misconfiguration. That's the fee model responding to Ethereum L1 congestion. You cannot quote a deploy cost on Scroll. You can quote a range. That range, within a single calendar day, spanned nearly three orders of magnitude.&lt;/p&gt;

&lt;p&gt;The week is complete. A failed retro, then a few hours later, not failed. That's the story.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Something about running five weeks of chain comparisons with AI assistance: the gaps in the documentation that the agents flag as "likely resolved by now" keep turning out to be real.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;→ The live app is at &lt;a href="https://proof-of-support.pages.dev" rel="noopener noreferrer"&gt;https://proof-of-support.pages.dev&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;→ Scoring methodology for the series: &lt;a href="https://dev.to/satorigeeks/how-im-scoring-the-chains-clc"&gt;How I'm Scoring the Chains&lt;/a&gt;&lt;/p&gt;

</description>
      <category>blockchain</category>
      <category>solidity</category>
      <category>buildinpublic</category>
      <category>foundry</category>
    </item>
    <item>
      <title>The runner-up chain won: how I chose Scroll for Week 2</title>
      <dc:creator>Satori Geeks</dc:creator>
      <pubDate>Wed, 08 Apr 2026 20:23:46 +0000</pubDate>
      <link>https://forem.com/satorigeeks/the-runner-up-chain-won-how-i-chose-scroll-for-week-2-11n5</link>
      <guid>https://forem.com/satorigeeks/the-runner-up-chain-won-how-i-chose-scroll-for-week-2-11n5</guid>
      <description>&lt;p&gt;Linea scored 51 out of 60 on &lt;a href="https://dev.to/satorigeeks/how-im-scoring-the-chains-clc"&gt;my rubric&lt;/a&gt;. Scroll scored 50. Scroll got the build week.&lt;/p&gt;

&lt;p&gt;That one-point gap wasn't post-rationalized. The pick came down to two things the rubric doesn't score directly: faucet auth requirements and timing. I'll get to both.&lt;/p&gt;

&lt;h2&gt;
  
  
  Two chains that didn't make the shortlist
&lt;/h2&gt;

&lt;p&gt;Five candidates went through the blocker check. Two were eliminated. Polygon zkEVM was announced as sunset in June 2025. PancakeSwap pulled support the following month and developer momentum has been draining out since. Building on it now would produce an article about a chain in hospice care. Kakarot was simpler: the GitHub repo was archived January 9, 2025, the team was acquired by Zama, and they pivoted to FHE. No deployable mainnet. Both gone before we talk scores.&lt;/p&gt;

&lt;h2&gt;
  
  
  The remaining four
&lt;/h2&gt;

&lt;p&gt;zkSync Era made the cut but scored the lowest at 40.5 out of 60. The &lt;code&gt;foundry-zksync&lt;/code&gt; fork is still alpha, tests run roughly 17× slower than mainline Foundry, and the native account abstraction model means &lt;code&gt;msg.sender&lt;/code&gt; behaves differently in smart wallet contexts. Real build risk for a quick-turnaround week.&lt;/p&gt;

&lt;p&gt;Taiko is technically the purest: Type 1 zkEVM, identical EVM, zero code changes, decentralized sequencer through Ethereum L1 validators. It scored 49. TVL sits around $20M, and the Hoodi testnet only replaced Hekla in September 2025, so the wagmi chain definition needs manual configuration. Newer testnet story than I wanted.&lt;/p&gt;

&lt;p&gt;That left Linea at 51 and Scroll at 50.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Linea almost won
&lt;/h2&gt;

&lt;p&gt;Linea is built by Consensys, the same company behind MetaMask, and it ships pre-configured in MetaMask's default network list. Docs are professionally maintained. TVL is the highest of the qualified candidates. The proof system uses lattice-based cryptography (Vortex/Plonk) instead of elliptic-curve SNARKs, which is a genuinely interesting post-quantum angle. If the criterion were "biggest chain with the most traction," Linea wins.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Scroll won
&lt;/h2&gt;

&lt;p&gt;Two things.&lt;/p&gt;

&lt;p&gt;First: faucets. Week 1 on Base went smoothly partly because the testnet faucet only needed a wallet address. I learned to check this before scoring, not after. Linea's cleanest path to Sepolia ETH requires a free Infura account. Not social-gated or ENS-gated, but you still need an account. Scroll's Telegram bot takes a wallet address and sends ETH, nothing else. That difference is small on paper. After last week, I notice it.&lt;/p&gt;

&lt;p&gt;Second: timing. In April 2025, Scroll replaced its entire prover stack. The original architecture used hand-written halo2 arithmetic circuits to prove EVM execution, one circuit gadget per opcode, with a hard capacity ceiling per batch. Euclid swapped it out for OpenVM, a RISC-V zkVM built by Axiom in collaboration with Scroll and the Ethereum Foundation's research team. Instead of proving EVM execution directly, the system compiles to RISC-V and proves that. Throughput went up 5×, transaction costs dropped roughly 50%, Scroll hit Stage 1 ZK Rollup status on L2Beat. They also moved from a custom zktrie to Ethereum's standard Merkle-Patricia Trie, so any tooling that handles Ethereum state proofs now handles Scroll state proofs too. Most developers I've talked to don't know this happened. It's April 2026 and it's still not common knowledge.&lt;/p&gt;

&lt;h2&gt;
  
  
  The TVL question
&lt;/h2&gt;

&lt;p&gt;Ether.fi exited to Optimism in early 2026. They took 300,000 user accounts, 70,000 active cards, and roughly 85% of Scroll's TVL with them. The headline figure fell from over $1 billion to around $27 million. That's the number. Ether.fi's reason: Optimism has larger TVL and a broader app suite, which matters more than ZK rollup maturity for a payments product. That's a real signal about where liquidity goes. But it's a liquidity routing decision, not a judgment on the developer infrastructure. The tooling is solid. The proving stack is active. The GitHub has ongoing commits across 152 repositories as of April 2026. TVL and developer experience are different charts.&lt;/p&gt;

&lt;h2&gt;
  
  
  What the pick says
&lt;/h2&gt;

&lt;p&gt;Picking the second-ranked chain means the rubric is a starting point, not a verdict. A 1-point gap is noise. What mattered was whether the friction points I care about actually lined up: faucet auth on day one, a fresh technical story, proven tooling. They did.&lt;/p&gt;

&lt;p&gt;Scroll was open-source from the first commit. Not retroactively. A governance vote happened before the biggest upgrade shipped. Week 1 was the establishment chain. This is the contrast.&lt;/p&gt;




&lt;p&gt;→ The live app is at &lt;a href="https://proof-of-support.pages.dev" rel="noopener noreferrer"&gt;https://proof-of-support.pages.dev&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;→ Scoring methodology for the series: &lt;a href="https://dev.to/satorigeeks/how-im-scoring-the-chains-clc"&gt;How I'm Scoring the Chains&lt;/a&gt;&lt;/p&gt;

</description>
      <category>blockchain</category>
      <category>zk</category>
      <category>buildinpublic</category>
    </item>
    <item>
      <title>I ran the same smart contract through three AI security audits. The brief was the bug.</title>
      <dc:creator>Satori Geeks</dc:creator>
      <pubDate>Fri, 03 Apr 2026 15:28:24 +0000</pubDate>
      <link>https://forem.com/satorigeeks/i-ran-the-same-smart-contract-through-three-ai-security-audits-the-brief-was-the-bug-dnl</link>
      <guid>https://forem.com/satorigeeks/i-ran-the-same-smart-contract-through-three-ai-security-audits-the-brief-was-the-bug-dnl</guid>
      <description>&lt;p&gt;A smart contract reviewed by the same model that wrote it is a managed risk at best. Models from the same family, given similar prompts, will apply similar reasoning patterns — not because they're "colluding," but because of their shared DNA. If they're trained on the same overlapping datasets (Common Crawl, GitHub, Stack Overflow), they'll likely converge on the same blind spots regarding obscure Solidity vulnerabilities or specific EIPs. The same interpretive pattern that shaped a flaw is the one most likely to miss it.&lt;/p&gt;

&lt;p&gt;The fix isn't to stop using them; it's to increase coverage. Running reviews across different lineages — different training data, different alignment, different fine-tuning — minimises the chance of a shared blind spot. ChatGPT, Gemini, and Qwen are all transformers, but the paths they took to get here are different enough to matter.&lt;/p&gt;

&lt;p&gt;For Week 1 on Base, I ran the audit on three models independently: ChatGPT, Gemini, and a local Qwen instance. Same contract, same checklist, parallel sessions. No "peeking" allowed.&lt;/p&gt;

&lt;h2&gt;
  
  
  What three independent audits found
&lt;/h2&gt;

&lt;p&gt;All three passed. No one found a dealbreaker. That's a useful signal, but it isn't a guarantee. LLM-based reviews can still miss critical vulnerabilities entirely; having three models agree doesn't change the underlying tech's limits. What it does do is make it far less likely that a miss is just a quirk of one model's training.&lt;/p&gt;

&lt;p&gt;Interestingly, all three flagged the same bottleneck: &lt;code&gt;getMessages()&lt;/code&gt; iterates over the entire message array to return results newest-first. This is an O(n) scaling issue. On Base (an L2), gas is cheap, but the block gas limit is still the ceiling. While off-chain view calls would handle the load, any on-chain transaction triggering that iteration would eventually revert — a Gas Limit DoS that grows silently alongside adoption.&lt;/p&gt;

&lt;p&gt;Qwen called it Medium severity. ChatGPT and Gemini treated it as a Note. The resolution: spec-required, acceptable at current scale, no action before mainnet. The finding was consistent; the panic level was the variable.&lt;/p&gt;

&lt;h2&gt;
  
  
  The brief was the real problem
&lt;/h2&gt;

&lt;p&gt;This was the biggest takeaway I didn't see coming.&lt;/p&gt;

&lt;p&gt;My original brief for all three models was: "Audit the contract against the spec." That's a standard request, but it's also a trap. It frames the spec as the ceiling. A model following that instruction will check if the code matches the document, but it won't necessarily ask if the document itself is flawed or missing key security properties.&lt;/p&gt;

&lt;p&gt;I caught the framing error early and updated the briefs: "Perform a full security audit; treat the spec as the correctness baseline, not the audit scope."&lt;/p&gt;

&lt;p&gt;It's a minor wording tweak that produces a massive shift in what the model optimises for. The spec becomes a reference — what the contract is supposed to do — rather than a boundary — the only thing you need to check. The first brief asks for conformance. The second asks for vulnerabilities.&lt;/p&gt;

&lt;p&gt;The lesson: prompts are specifications. The same discipline that goes into writing a contract interface — precise, unambiguous, explicit — has to apply to the security brief. Vague input produces vague output. Not because the model is "lazy," but because the brief didn't ask the right question.&lt;/p&gt;

&lt;h2&gt;
  
  
  The new standing structure
&lt;/h2&gt;

&lt;p&gt;Three-model review is now the standard. Each week, three parallel briefs go out — &lt;code&gt;SECURITY_CHATGPT.md&lt;/code&gt;, &lt;code&gt;SECURITY_GEMINI.md&lt;/code&gt;, &lt;code&gt;SECURITY_QWEN.md&lt;/code&gt; — and the findings consolidate into a single &lt;code&gt;security.md&lt;/code&gt; for the final handoff.&lt;/p&gt;

&lt;p&gt;This isn't just overhead. It's the difference between a single-pass sanity check and a robust coverage strategy. It's not a replacement for formal verification or a professional audit, but it's significantly more reliable than a single-model pass.&lt;/p&gt;




&lt;p&gt;The structural risk of AI blind spots is real. The solution has to be structural, too. Running a parallel process surfaced a flaw in the briefing that a single-model review never would have caught.&lt;/p&gt;

&lt;p&gt;The contract passed. The process stayed. The next brief is already in the works.&lt;/p&gt;

&lt;p&gt;→ The full Week 1 build — deploy experience, faucet reality, rubric scores — is in the retrospective: &lt;a href="https://dev.to/satorigeeks/week-1-base-5660-4bga"&gt;Week 1: Base — 56/60&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;→ The live app is at &lt;a href="https://proof-of-support.pages.dev" rel="noopener noreferrer"&gt;https://proof-of-support.pages.dev&lt;/a&gt;&lt;/p&gt;

</description>
      <category>blockchain</category>
      <category>ai</category>
      <category>security</category>
      <category>buildinpublic</category>
    </item>
  </channel>
</rss>
