<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Satori Geeks</title>
    <description>The latest articles on Forem by Satori Geeks (@satorigeeks).</description>
    <link>https://forem.com/satorigeeks</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3842144%2F9173053c-d1d9-4b41-8bb6-080b7771329a.png</url>
      <title>Forem: Satori Geeks</title>
      <link>https://forem.com/satorigeeks</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/satorigeeks"/>
    <language>en</language>
    <item>
      <title>I ran the same smart contract through three AI security audits. The brief was the bug.</title>
      <dc:creator>Satori Geeks</dc:creator>
      <pubDate>Fri, 03 Apr 2026 15:28:24 +0000</pubDate>
      <link>https://forem.com/satorigeeks/i-ran-the-same-smart-contract-through-three-ai-security-audits-the-brief-was-the-bug-dnl</link>
      <guid>https://forem.com/satorigeeks/i-ran-the-same-smart-contract-through-three-ai-security-audits-the-brief-was-the-bug-dnl</guid>
      <description>&lt;p&gt;A smart contract reviewed by the same model that wrote it is a managed risk at best. Models from the same family, given similar prompts, will apply similar reasoning patterns — not because they're "colluding," but because of their shared DNA. If they're trained on the same overlapping datasets (Common Crawl, GitHub, Stack Overflow), they'll likely converge on the same blind spots regarding obscure Solidity vulnerabilities or specific EIPs. The same interpretive pattern that shaped a flaw is the one most likely to miss it.&lt;/p&gt;

&lt;p&gt;The fix isn't to stop using them; it's to increase coverage. Running reviews across different lineages — different training data, different alignment, different fine-tuning — minimises the chance of a shared blind spot. ChatGPT, Gemini, and Qwen are all transformers, but the paths they took to get here are different enough to matter.&lt;/p&gt;

&lt;p&gt;For Week 1 on Base, I ran the audit on three models independently: ChatGPT, Gemini, and a local Qwen instance. Same contract, same checklist, parallel sessions. No "peeking" allowed.&lt;/p&gt;

&lt;h2&gt;What three independent audits found&lt;/h2&gt;

&lt;p&gt;All three passed. No one found a dealbreaker. That's a useful signal, but it isn't a guarantee. LLM-based reviews can still miss critical vulnerabilities entirely; having three models agree doesn't change the underlying tech's limits. What it does do is reduce the odds that a missed vulnerability is merely a quirk of one model's training.&lt;/p&gt;

&lt;p&gt;Interestingly, all three flagged the same bottleneck: &lt;code&gt;getMessages()&lt;/code&gt; iterates over the entire message array to return results newest-first. This is an O(n) scaling issue. On Base (an L2), gas is cheap, but the block gas limit is still the ceiling. While off-chain view calls would handle the load, any on-chain transaction triggering that iteration would eventually revert — a Gas Limit DoS that grows silently alongside adoption.&lt;/p&gt;
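&lt;p&gt;One standard mitigation is pagination: bound the loop so read cost stops scaling with total message count. A hypothetical Solidity sketch — the struct, array name, and function signature are my assumptions, not the audited contract's code:&lt;/p&gt;

```solidity
// Hypothetical sketch: storage layout and names are assumed.
struct Message { address from; string text; uint256 amount; }
Message[] private messages;

/// Return up to `limit` messages, newest-first, starting `offset`
/// entries back from the end. The loop is bounded by `limit`, so
/// read cost no longer grows with messages.length.
function getMessagesPaged(uint256 offset, uint256 limit)
    external
    view
    returns (Message[] memory page)
{
    uint256 total = messages.length;
    if (offset &gt;= total) return new Message[](0);
    uint256 remaining = total - offset;
    uint256 count = remaining &lt; limit ? remaining : limit;
    page = new Message[](count);
    for (uint256 i = 0; i &lt; count; i++) {
        // newest-first: walk backwards from the end, skipping `offset`
        page[i] = messages[total - 1 - offset - i];
    }
}
```

&lt;p&gt;Off-chain callers page through results instead of pulling the whole array, and any on-chain caller has a hard ceiling on gas per call.&lt;/p&gt;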

&lt;p&gt;Qwen called it Medium severity. ChatGPT and Gemini treated it as a Note. The resolution: spec-required, acceptable at current scale, no action before mainnet. The finding was consistent; the panic level was the variable.&lt;/p&gt;

&lt;h2&gt;The brief was the real problem&lt;/h2&gt;

&lt;p&gt;This was the biggest takeaway I didn't see coming.&lt;/p&gt;

&lt;p&gt;My original brief for all three models was: "Audit the contract against the spec." That's a standard request, but it's also a trap. It frames the spec as the ceiling. A model following that instruction will check if the code matches the document, but it won't necessarily ask if the document itself is flawed or missing key security properties.&lt;/p&gt;

&lt;p&gt;I caught the framing error early and updated the briefs: "Perform a full security audit; treat the spec as the correctness baseline, not the audit scope."&lt;/p&gt;

&lt;p&gt;It's a minor wording tweak that produces a massive shift in what the model optimises for. The spec becomes a reference (what the contract is supposed to do) rather than a boundary (the only thing you need to check). The first brief asks for conformance. The second asks for vulnerabilities.&lt;/p&gt;

&lt;p&gt;The lesson: prompts are specifications. The same discipline that goes into writing a contract interface — precise, unambiguous, explicit — has to apply to the security brief. Vague input produces vague output. Not because the model is "lazy," but because the brief didn't ask the right question.&lt;/p&gt;

&lt;h2&gt;The new standing structure&lt;/h2&gt;

&lt;p&gt;Three-model review is now the standard. Each week, three parallel briefs go out — &lt;code&gt;SECURITY_CHATGPT.md&lt;/code&gt;, &lt;code&gt;SECURITY_GEMINI.md&lt;/code&gt;, &lt;code&gt;SECURITY_QWEN.md&lt;/code&gt; — and are consolidated into a single &lt;code&gt;security.md&lt;/code&gt; for the final handoff.&lt;/p&gt;
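&lt;p&gt;The gathering step can be scripted. A minimal sketch — the per-file headings are my own convention, and de-duplicating overlapping findings still happens by hand:&lt;/p&gt;

```shell
# Gather the three per-model briefs into security.md, labelling each
# section with its source file. Missing files are skipped quietly.
for f in SECURITY_CHATGPT.md SECURITY_GEMINI.md SECURITY_QWEN.md; do
  if [ -e "$f" ]; then
    printf '## Findings from %s\n\n' "$f"
    cat "$f"
    printf '\n'
  fi
done | tee security.md
```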

&lt;p&gt;This isn't just overhead. It's the difference between a single-pass sanity check and a robust coverage strategy. It's not a replacement for formal verification or a professional audit, but it's significantly more reliable than a single-model pass.&lt;/p&gt;




&lt;p&gt;The structural risk of AI blind spots is real. The solution has to be structural, too. Running a parallel process surfaced a flaw in the briefing that a single-model review never would have caught.&lt;/p&gt;

&lt;p&gt;The contract passed. The process stayed. The next brief is already in the works.&lt;/p&gt;

&lt;p&gt;→ The full Week 1 build — deploy experience, faucet reality, rubric scores — is in the retrospective: &lt;a href="https://dev.to/satorigeeks/week-1-base-5660-4bga"&gt;Week 1: Base — 56/60&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;→ The live app is at &lt;a href="https://proof-of-support.pages.dev" rel="noopener noreferrer"&gt;https://proof-of-support.pages.dev&lt;/a&gt;&lt;/p&gt;

</description>
      <category>blockchain</category>
      <category>ai</category>
      <category>security</category>
      <category>buildinpublic</category>
    </item>
    <item>
      <title>Base Is Migrating Off the OP Stack. Here's What I Checked Before Building.</title>
      <dc:creator>Satori Geeks</dc:creator>
      <pubDate>Thu, 02 Apr 2026 10:05:32 +0000</pubDate>
      <link>https://forem.com/satorigeeks/base-is-migrating-off-the-op-stack-heres-what-i-checked-before-building-5f9</link>
      <guid>https://forem.com/satorigeeks/base-is-migrating-off-the-op-stack-heres-what-i-checked-before-building-5f9</guid>
      <description>&lt;p&gt;In February 2026, Coinbase &lt;a href="https://blog.base.dev/next-chapter-for-base-chain-1" rel="noopener noreferrer"&gt;announced&lt;/a&gt; what amounted to a quiet bombshell: Base is moving away from the OP Stack toward an in-house "unified codebase."&lt;/p&gt;

&lt;p&gt;I caught the news right as I was prepping a contract deployment. My entire stack — chain IDs, RPC endpoints, Foundry configs, wagmi imports — had been built for the OP Stack. My first thought wasn't "this is a crisis," but it was: is my documentation about to become obsolete at the exact moment I hit deploy?&lt;/p&gt;

&lt;p&gt;I went down the rabbit hole. Here's what's actually changing.&lt;/p&gt;

&lt;h2&gt;What the migration actually is (and isn't)&lt;/h2&gt;

&lt;p&gt;The new system is the &lt;a href="https://github.com/base/base" rel="noopener noreferrer"&gt;unified stack&lt;/a&gt;, currently living in &lt;code&gt;github.com/base/base&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Don't let the "in-house codebase" framing spook you. This isn't a VM overhaul or a sudden pivot to ZK (yet). It's an infrastructure consolidation — pulling components previously scattered across different repositories into a single Base-managed binary, &lt;a href="https://blog.base.dev/next-chapter-for-base-chain-1" rel="noopener noreferrer"&gt;built on open-sourced components including Reth&lt;/a&gt;, the Rust Ethereum execution client.&lt;/p&gt;

&lt;p&gt;Status as of now: v0.6.0 shipped March 27, 2026, and the repo has over 3,000 commits. We're well past announcement territory. The full mainnet cutover is expected "in the coming months," per the blog. Base Sepolia and mainnet RPCs are unaffected throughout.&lt;/p&gt;

&lt;p&gt;One data point on the Superchain side: &lt;a href="https://www.dlnews.com/articles/defi/optimism-token-price-plunges-as-base-leaves-superchain/" rel="noopener noreferrer"&gt;DL News reports&lt;/a&gt; that revenue sharing with the Optimism Collective is expected to end as part of this move, citing an Optimism spokesperson. The &lt;a href="https://coincentral.com/optimism-op-price-token-drops-4-after-base-announces-move-away-from-op-stack/" rel="noopener noreferrer"&gt;OP token fell roughly 4%&lt;/a&gt; on the day of the announcement, with further declines over the following days. Whether the Superchain model survives that departure is a debate for another day; the price move is just a signal of how it was read.&lt;/p&gt;

&lt;h2&gt;Building on Base this week? Change nothing.&lt;/h2&gt;

&lt;p&gt;I cross-referenced the &lt;a href="https://blog.base.dev/next-chapter-for-base-chain-1" rel="noopener noreferrer"&gt;Base Engineering Blog&lt;/a&gt; with current tooling and repo activity. No changes have been announced to developer-facing tooling, and everything continues to work as-is:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Solidity:&lt;/strong&gt; &lt;code&gt;pragma solidity 0.8.24&lt;/code&gt; — still correct. No EVM or compiler changes.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Foundry:&lt;/strong&gt; Business as usual. &lt;code&gt;forge script --verify&lt;/code&gt; still targets Basescan. Base has explicitly committed to "continue contributing to Foundry."&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;RPCs:&lt;/strong&gt; &lt;code&gt;https://mainnet.base.org&lt;/code&gt; and &lt;code&gt;https://sepolia.base.org&lt;/code&gt; — no changes. Base's stated commitment: "All RPCs, including those in the optimism namespace, will continue to be fully supported."&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Frontend:&lt;/strong&gt; &lt;code&gt;import { base, baseSepolia } from 'wagmi/chains'&lt;/code&gt; — unchanged. Chain IDs 8453 (mainnet) and 84532 (Sepolia) do not change mid-chain. Base commits to "continue contributing to Wagmi."&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;viem, EIP-1559, &lt;code&gt;block.timestamp&lt;/code&gt;&lt;/strong&gt; — no changes announced.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The Research Agent ran the verification in minutes; the conclusion took three seconds to read. Nothing you interact with as a developer has changed.&lt;/p&gt;

&lt;h2&gt;Two threads to watch&lt;/h2&gt;

&lt;p&gt;Two items stayed on my radar after the verification pass.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. The proof system upgrade.&lt;/strong&gt; Base has had &lt;a href="https://base.mirror.xyz/eOsedW4tm8MU5OhdGK107A9wsn-aU7MAb8f3edgX5Tk" rel="noopener noreferrer"&gt;fault proofs live since October 2024&lt;/a&gt;, using the Cannon optimistic system — permissionless challengers, 3.5-day dispute window. Base reached &lt;a href="https://blog.base.org/base-has-reached-stage-1-decentralization" rel="noopener noreferrer"&gt;Stage 1 decentralization in April 2025&lt;/a&gt;. The unified stack roadmap points toward Base V1: a planned swap from optimistic to TEE/ZK proofs for faster finality. No launch date yet. This is an upgrade to a functioning system, not a gap being closed — but worth watching when the spec arrives.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. The exit mechanics.&lt;/strong&gt; Two paths currently protect you from sequencer failure. Withdrawal finalization has a &lt;a href="https://docs.base.org/base-chain/network-information/transaction-finality" rel="noopener noreferrer"&gt;7-day challenge window&lt;/a&gt;. If the sequencer goes offline entirely, you can force transaction inclusion via L1 — &lt;a href="https://l2beat.com/scaling/projects/base" rel="noopener noreferrer"&gt;L2Beat documents&lt;/a&gt; up to a 12-hour delay on that path. Both are confirmed compatible during the current transition. How they change post-Base V1 is not publicly specified — a genuine documentation gap to revisit once the spec is out.&lt;/p&gt;




&lt;p&gt;I checked what I needed to before building. The tools work as expected. Two threads to revisit when Base V1 ships — the proof system upgrade and the exit mechanics spec. The build went ahead on the same configuration the research documented. The &lt;a href="https://dev.to/satorigeeks/week-1-base-5660-4bga"&gt;Week 1 retrospective&lt;/a&gt; covers how it went.&lt;/p&gt;

&lt;p&gt;→ The full Week 1 build — deploy experience, faucet reality, rubric scores — is in the retrospective: &lt;a href="https://dev.to/satorigeeks/week-1-base-5660-4bga"&gt;Week 1: Base — 56/60&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;→ The live app is at &lt;a href="https://proof-of-support.pages.dev" rel="noopener noreferrer"&gt;https://proof-of-support.pages.dev&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;→ Scoring methodology for the series: &lt;a href="https://dev.to/satorigeeks/how-im-scoring-the-chains-clc"&gt;How I'm Scoring the Chains&lt;/a&gt;&lt;/p&gt;

</description>
      <category>blockchain</category>
      <category>base</category>
      <category>buildinpublic</category>
    </item>
    <item>
      <title>Your private key doesn't belong in your terminal. Here's the Foundry fix.</title>
      <dc:creator>Satori Geeks</dc:creator>
      <pubDate>Wed, 01 Apr 2026 13:10:01 +0000</pubDate>
      <link>https://forem.com/satorigeeks/your-private-key-doesnt-belong-in-your-terminal-heres-the-foundry-fix-8me</link>
      <guid>https://forem.com/satorigeeks/your-private-key-doesnt-belong-in-your-terminal-heres-the-foundry-fix-8me</guid>
      <description>&lt;p&gt;You're about to run &lt;code&gt;forge script --broadcast&lt;/code&gt;. The command needs a private key. The options that come to mind first all share the same problem: paste it into the terminal and it ends up in &lt;code&gt;.bash_history&lt;/code&gt; or &lt;code&gt;.zsh_history&lt;/code&gt;. Put it in &lt;code&gt;.env&lt;/code&gt; and it's one accidental &lt;code&gt;git add&lt;/code&gt; away from the repo. Hardcode it in the deploy script and it's in version history the moment the file is committed. These aren't theoretical risks — they're how keys get exposed.&lt;/p&gt;

&lt;p&gt;There is a better way built directly into Foundry.&lt;/p&gt;

&lt;h2&gt;Using Foundry's encrypted keystore&lt;/h2&gt;

&lt;p&gt;Import your private key into an encrypted keystore:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;cast wallet import deployer &lt;span class="nt"&gt;--interactive&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;--interactive&lt;/code&gt; flag prompts for your private key and a password. Foundry stores the key encrypted at &lt;code&gt;~/.foundry/keystores/deployer&lt;/code&gt;. Nothing touches shell history. The name &lt;code&gt;deployer&lt;/code&gt; is arbitrary — use whatever you'll recognise.&lt;/p&gt;

&lt;p&gt;Before running a deploy, confirm the import worked:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;cast wallet list
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This lists all keystores by name. If &lt;code&gt;deployer&lt;/code&gt; appears, the import succeeded. Two seconds, one less thing to debug mid-deploy.&lt;/p&gt;

&lt;p&gt;Now run the deploy referencing the keystore by name:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;forge script script/Deploy.s.sol &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--rpc-url&lt;/span&gt; &lt;span class="nv"&gt;$RPC_URL&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--account&lt;/span&gt; deployer &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--broadcast&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--verify&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;--account deployer&lt;/code&gt; tells Foundry which keystore to use. It prompts for the password at runtime. The private key is not in the command, not in &lt;code&gt;.env&lt;/code&gt;, not anywhere in the repository. The password prompt is the only moment the key decrypts, and it never leaves your machine.&lt;/p&gt;

&lt;p&gt;What this protects against: shell history logs every command you run — &lt;code&gt;.bash_history&lt;/code&gt;, &lt;code&gt;.zsh_history&lt;/code&gt;, your terminal emulator's scrollback. &lt;code&gt;.env&lt;/code&gt; files get committed. Command arguments show up in process listings. The key is encrypted at rest and decrypted only at deploy time. There is no passive exposure surface.&lt;/p&gt;

&lt;h2&gt;One note on the password&lt;/h2&gt;

&lt;p&gt;The password matters. A weak password on an encrypted keystore holding a deploy key for a contract with real funds is not meaningfully safer than &lt;code&gt;.env&lt;/code&gt;. Use a password manager. The keystore adds a layer; the quality of your password determines what that layer is worth.&lt;/p&gt;




&lt;p&gt;This is now the default for every remaining week of this series. First-time setup took about ninety seconds. The instinct — "I don't feel okay putting it in the terminal" — was right; the tooling already had the answer.&lt;/p&gt;

&lt;p&gt;→ The full Week 1 build — deploy experience, faucet reality, rubric scores — is in the retrospective: &lt;a href="https://dev.to/satorigeeks/week-1-base-5660-4bga"&gt;Week 1: Base — 56/60&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;→ The live app is at &lt;a href="https://proof-of-support.pages.dev" rel="noopener noreferrer"&gt;https://proof-of-support.pages.dev&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;→ Scoring methodology for the series: &lt;a href="https://dev.to/satorigeeks/how-im-scoring-the-chains-clc"&gt;How I'm Scoring the Chains&lt;/a&gt;&lt;/p&gt;

</description>
      <category>foundry</category>
      <category>blockchain</category>
      <category>solidity</category>
      <category>security</category>
    </item>
    <item>
      <title>How I'm Scoring the Chains</title>
      <dc:creator>Satori Geeks</dc:creator>
      <pubDate>Wed, 01 Apr 2026 12:58:06 +0000</pubDate>
      <link>https://forem.com/satorigeeks/how-im-scoring-the-chains-clc</link>
      <guid>https://forem.com/satorigeeks/how-im-scoring-the-chains-clc</guid>
      <description>&lt;p&gt;Designing a scoring system after the build is convenient. The dimensions settle around what you happened to encounter; the weights drift toward what turned out to matter. The score feels right because the rubric was shaped around the result.&lt;/p&gt;

&lt;p&gt;This one was fixed before Week 1 started.&lt;/p&gt;

&lt;p&gt;A scoring system designed after the build will justify what happened rather than measure it. Building it first forces the criteria to be general enough to apply to every chain — EVM and non-EVM, polished and experimental — before you know which one you're deploying on.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Eight dimensions. Three weights.&lt;/strong&gt;&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;#&lt;/th&gt;
&lt;th&gt;Dimension&lt;/th&gt;
&lt;th&gt;Weight&lt;/th&gt;
&lt;th&gt;Max&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;D1&lt;/td&gt;
&lt;td&gt;Getting Started&lt;/td&gt;
&lt;td&gt;×1.0&lt;/td&gt;
&lt;td&gt;5&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;D2&lt;/td&gt;
&lt;td&gt;Developer Tooling&lt;/td&gt;
&lt;td&gt;×2.0&lt;/td&gt;
&lt;td&gt;5&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;D3&lt;/td&gt;
&lt;td&gt;Contract Authoring&lt;/td&gt;
&lt;td&gt;×2.0&lt;/td&gt;
&lt;td&gt;5&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;D4&lt;/td&gt;
&lt;td&gt;Documentation Quality&lt;/td&gt;
&lt;td&gt;×1.5&lt;/td&gt;
&lt;td&gt;5&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;D5&lt;/td&gt;
&lt;td&gt;Frontend / Wallet&lt;/td&gt;
&lt;td&gt;×2.0&lt;/td&gt;
&lt;td&gt;5&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;D6&lt;/td&gt;
&lt;td&gt;Deployment Experience&lt;/td&gt;
&lt;td&gt;×1.5&lt;/td&gt;
&lt;td&gt;5&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;D7&lt;/td&gt;
&lt;td&gt;Transaction Cost&lt;/td&gt;
&lt;td&gt;×1.0&lt;/td&gt;
&lt;td&gt;5&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;D8&lt;/td&gt;
&lt;td&gt;Community &amp;amp; Ecosystem&lt;/td&gt;
&lt;td&gt;×1.0&lt;/td&gt;
&lt;td&gt;5&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Maximum score: 60. Each dimension scores 1–5. The weight reflects how much that dimension determines whether you'd actually build a production app on the chain.&lt;/p&gt;

&lt;p&gt;D2, D3, and D5 carry the most weight because they are the daily surface: the tools you use every hour, the language you write in, the wallet integration you fight with on every frontend. Get those wrong and no amount of documentation or community enthusiasm compensates.&lt;/p&gt;

&lt;p&gt;D4 and D6 are mid-weight — important but recoverable. Bad documentation can be worked around with research; a difficult deploy flow can be scripted.&lt;/p&gt;

&lt;p&gt;D1, D7, and D8 are single-weight. Getting started friction matters, but you only experience it once. Transaction cost matters for production apps, but at this scale the variance between chains is more interesting than the absolute number. Community is context, not infrastructure.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What I'm measuring&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Each dimension is scored from the perspective of a developer building this specific app — a social tip jar with a straightforward contract, wallet connect, and a message wall — on this specific chain, this specific week. Not a permanent rating. A snapshot of the developer experience at build time.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;D1: everything before the first line of code — wallet setup, testnet funds, network configuration. The faucet experience lives here.&lt;/li&gt;
&lt;li&gt;D2: the toolchain — Foundry or equivalent, CLI tools, local node, testing. How much of the standard EVM workflow translates unchanged?&lt;/li&gt;
&lt;li&gt;D3: the contract side — EVM equivalence, library support, Solidity version compatibility, anything that required rewriting logic.&lt;/li&gt;
&lt;li&gt;D4: documentation — official quickstarts, deployment guides, API references. Do the official docs get you to a working deploy, or do you end up in forum threads?&lt;/li&gt;
&lt;li&gt;D5: frontend and wallet layer — chain imports in wagmi/viem, wallet connector support, anything that required custom integration.&lt;/li&gt;
&lt;li&gt;D6: deploy and verify workflow — commands, manual steps, verification speed, block explorer quality.&lt;/li&gt;
&lt;li&gt;D7: transaction cost on mainnet — deploy cost and per-transaction cost for the app's primary function. For a tip jar, a transaction that costs more than the tip is broken.&lt;/li&gt;
&lt;li&gt;D8: broader ecosystem — community size, documentation freshness, signs of active development versus stagnation.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;The score bands&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;55–60: Outstanding — build here with confidence, caveats worth naming but none are blockers.&lt;/li&gt;
&lt;li&gt;45–54: Strong — solid foundation, specific gaps worth understanding before committing.&lt;/li&gt;
&lt;li&gt;35–44: Mixed — viable, but with meaningful friction or risk you need to plan around.&lt;/li&gt;
&lt;li&gt;Below 35: Challenged — fundamental issues that affect the build directly.&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;→ The full Week 1 build — deploy experience, faucet reality, rubric scores — is in the retrospective: &lt;a href="https://dev.to/satorigeeks/week-1-base-5660-4bga"&gt;Week 1: Base — 56/60&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;→ The live app is at &lt;a href="https://proof-of-support.pages.dev" rel="noopener noreferrer"&gt;https://proof-of-support.pages.dev&lt;/a&gt;&lt;/p&gt;

</description>
      <category>blockchain</category>
      <category>buildinpublic</category>
      <category>devjournal</category>
      <category>learning</category>
    </item>
    <item>
      <title>Week 1: Base — 56/60</title>
      <dc:creator>Satori Geeks</dc:creator>
      <pubDate>Tue, 31 Mar 2026 11:58:52 +0000</pubDate>
      <link>https://forem.com/satorigeeks/week-1-base-5660-4bga</link>
      <guid>https://forem.com/satorigeeks/week-1-base-5660-4bga</guid>
      <description>&lt;p&gt;The app deployed cleanly. I ran the function. Nothing happened.&lt;/p&gt;

&lt;p&gt;That's &lt;code&gt;cast call&lt;/code&gt;. It simulates — never broadcasts. The fix was &lt;code&gt;cast send&lt;/code&gt;. One word. Thirty seconds once I asked the right question. (The honest version: I knew the difference, once. Muscle memory had just drifted.) Returning to a toolchain after time away means re-learning where the sharp edges are — even when the tooling is genuinely excellent.&lt;/p&gt;
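&lt;p&gt;The difference, for anyone hitting the same wall — the contract address and RPC variables here are placeholders, not the app's real values:&lt;/p&gt;

```shell
# cast call simulates via eth_call: no transaction, no gas, no state
# change. Correct for view functions, a silent no-op for writes.
cast call "$CONTRACT" "getMessages()" --rpc-url "$RPC_URL"

# cast send signs and broadcasts a real transaction. This is the one
# that actually changes state on-chain.
cast send "$CONTRACT" "sendSupport(string)" "hello" \
  --rpc-url "$RPC_URL" --account deployer
```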

&lt;p&gt;Before writing a line of code, I ran a verification pass on Base's OP Stack migration. Infrastructure consolidation, not a developer-surface change. Every tool confirmed unchanged. The full check is in a &lt;a href="https://dev.to/satorigeeks/base-is-migrating-off-the-op-stack-heres-what-i-checked-before-building-5f9"&gt;separate article&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Then the faucet situation. This is where the PM brain kicked in — because the developer brain just wants to deploy, and what I found was a TSA pat-down at every gate. Ethereum Ecosystem required an ENS name. Bware Labs was under maintenance. QuickNode and Alchemy required an ETH balance on Ethereum mainnet. LearnWeb3 wanted wallet connection and GitHub auth. Five faucets, five blocked. The one that worked — Coinbase Developer Platform — required sign-in and delivered 0.0001 Sepolia ETH. Enough to proceed. But the "multiple no-auth faucets" story from the research didn't survive contact with reality. That's a getting-started problem — the kind of friction that doesn't show until someone actually tries to move in.&lt;/p&gt;

&lt;p&gt;Private key: encrypted Foundry keystore, not &lt;code&gt;.env&lt;/code&gt;. Right call. Full method in a &lt;a href="https://dev.to/satorigeeks/your-private-key-doesnt-belong-in-your-terminal-heres-the-foundry-fix-8me"&gt;dedicated article&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The deploy itself:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;forge script script/Deploy.s.sol:Deploy &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--rpc-url&lt;/span&gt; baseSepolia &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--broadcast&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--verify&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--etherscan-api-key&lt;/span&gt; &lt;span class="nv"&gt;$BASESCAN_API_KEY&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;"Pass - Verified" on Basescan in forty-five seconds. No flattening, no manual ABI upload, no second command. That's the D6 story in one run.&lt;/p&gt;

&lt;p&gt;Mainnet followed the same path. Contract address: &lt;code&gt;0x6D89c4974f8f211eD07b8E8DA08177DEE627DeFa&lt;/code&gt;. Same address as Sepolia — the EVM derives a contract's address from the deployer's address and nonce, and both matched across networks. A separate Farcaster thread covers the mechanics. Frontend wiring was one environment variable: &lt;code&gt;VITE_BASE_CONTRACT_ADDRESS&lt;/code&gt;. Wallet connect, form submit, &lt;code&gt;sendSupport()&lt;/code&gt;, on-chain confirmation, message on the wall. End-to-end in one take. First live mainnet message: "First message outside the testnet." Verifiable on Basescan.&lt;/p&gt;

&lt;p&gt;One post-QA fix: the first mainnet message came in at 10,000,000,000,000 wei, and &lt;code&gt;formatAmount&lt;/code&gt; displayed amounts below 0.0001 ETH incorrectly. Tests written, implementation fixed. The dev half wrote the code; the PM half noticed the edge case existed. That split is the whole job.&lt;/p&gt;
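&lt;p&gt;The article doesn't show the fix, but the class of bug is common enough to sketch: formatting wei through floating point loses precision at small amounts. A hypothetical &lt;code&gt;formatAmount&lt;/code&gt; (not the app's actual code) that stays exact by doing the decimal split in &lt;code&gt;BigInt&lt;/code&gt;:&lt;/p&gt;

```javascript
// Hypothetical reimplementation, not the app's actual formatAmount.
// BigInt division and modulo keep sub-0.0001 ETH amounts exact.
function formatAmount(wei) {
  const WEI_PER_ETH = 10n ** 18n;
  const whole = wei / WEI_PER_ETH;
  const frac = (wei % WEI_PER_ETH)
    .toString()
    .padStart(18, "0")     // fixed 18 decimal places
    .replace(/0+$/, "");   // trim trailing zeros only
  return frac ? `${whole}.${frac} ETH` : `${whole} ETH`;
}

console.log(formatAmount(10000000000000n)); // the first mainnet tip: "0.00001 ETH"
```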

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9bkp8wh4oec0jlndn1px.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9bkp8wh4oec0jlndn1px.png" alt="live app on Base mainnet, message wall with first entry visible" width="800" height="454"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Vibe Check.&lt;/strong&gt; &lt;em&gt;(Eight dimensions, three weights, max 60. Full methodology: &lt;a href="https://dev.to/satorigeeks/how-im-scoring-the-chains-clc"&gt;How I'm Scoring the Chains&lt;/a&gt;)&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;D2, D3, D5, D6: all 5/5. The locals know what they're doing here. Foundry runs identically to Ethereum mainnet. OpenZeppelin v5.0.2 installed without ceremony. Wagmi ships a &lt;code&gt;base&lt;/code&gt; chain import out of the box. &lt;code&gt;forge script --broadcast --verify&lt;/code&gt; deploys and auto-verifies in a single run. No translation required — if you've built on EVM before, you already speak this language.&lt;/p&gt;

&lt;p&gt;D7 (5/5): under $0.005 per &lt;code&gt;sendSupport&lt;/code&gt; at current ETH prices, 0.005–0.006 gwei effective gas price. For a tip jar. Practically free.&lt;/p&gt;
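&lt;p&gt;The claim checks out on a napkin. The inputs in this sketch are my assumptions, not measured values: roughly 60,000 gas for a payable call that writes a message, the 0.006 gwei top of the observed range, and an assumed $3,000 ETH price:&lt;/p&gt;

```python
# Back-of-envelope per-transaction cost. All three inputs are
# illustrative assumptions, not measurements from the contract.
gas_used = 60_000          # assumed gas for a sendSupport() call
gas_price_gwei = 0.006     # top of the range quoted above
eth_price_usd = 3_000      # assumed spot price

cost_eth = gas_used * gas_price_gwei * 1e-9   # gwei to ETH
cost_usd = cost_eth * eth_price_usd
print(f"${cost_usd:.5f} per transaction")     # about $0.00108, well under $0.005
```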

&lt;p&gt;Three misses.&lt;/p&gt;

&lt;p&gt;D1 (4/5): the faucet. Five of six blocked — ENS requirements, mainnet balance gates, maintenance, GitHub auth. The one that opened required sign-in and gave 0.0001 ETH. The neighbourhood works fine once you're in. Getting there is harder than the welcome mat suggests.&lt;/p&gt;

&lt;p&gt;D4 (4/5): the official Base quickstart uses &lt;code&gt;forge create&lt;/code&gt;, not &lt;code&gt;forge script&lt;/code&gt;. The &lt;code&gt;[etherscan]&lt;/code&gt; block needed for auto-verification isn't in the primary guide — it came from the research doc. The street signs are in a language only locals speak. Not a blocker for someone who's been here before. A genuine detour for everyone else.&lt;/p&gt;
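&lt;p&gt;For reference, the kind of &lt;code&gt;[etherscan]&lt;/code&gt; block that makes single-pass &lt;code&gt;--verify&lt;/code&gt; work looks like this. This is a sketch: the verifier URL and field names follow commonly documented Foundry conventions, so check them against the current Foundry and Basescan docs before relying on them:&lt;/p&gt;

```toml
# foundry.toml (sketch; verify against current docs)
[rpc_endpoints]
baseSepolia = "https://sepolia.base.org"

[etherscan]
baseSepolia = { key = "${BASESCAN_API_KEY}", chain = 84532, url = "https://api-sepolia.basescan.org/api" }
```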

&lt;p&gt;D8 (4/5): large community, Farcaster-native, Coinbase-backed, highest daily transaction volume of any L2. The score is 4 because Base is departing the Optimism Superchain — the OP token fell roughly 4% on the day of the announcement, with further declines after. Zero practical impact on this deploy. But a chain leaving a shared ecosystem is a signal, and I've been around long enough not to ignore signals.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Weighted total: 56/60.&lt;/strong&gt; Outstanding band.&lt;/p&gt;

&lt;p&gt;The research estimated 56. The build delivered 56. Zero delta is what a baseline is supposed to do — the rubric held up, which means it's functioning. The interesting data starts next week.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Verdict:&lt;/strong&gt; I would build a production app on Base, with the standard caveats — an optimistic proof system still awaiting the Base V1 upgrade, the Coinbase trust assumption, and a Superchain transition still in progress. None of those are blockers at this scale. Live app: &lt;code&gt;https://proof-of-support.pages.dev&lt;/code&gt;. Contract verified at &lt;code&gt;0x6D89c4974f8f211eD07b8E8DA08177DEE627DeFa&lt;/code&gt; on Base mainnet.&lt;/p&gt;

&lt;p&gt;Base is home. If you've built on Ethereum mainnet, you already know this toolchain — you just haven't paid these gas fees.&lt;/p&gt;

&lt;p&gt;Week 2 is coming. The chain is chosen. The toolchain will not be identical. That's the point.&lt;/p&gt;




&lt;p&gt;→ The live app is at &lt;a href="https://proof-of-support.pages.dev" rel="noopener noreferrer"&gt;https://proof-of-support.pages.dev&lt;/a&gt;&lt;/p&gt;

</description>
      <category>blockchain</category>
      <category>solidity</category>
      <category>webdev</category>
      <category>buildinpublic</category>
    </item>
    <item>
      <title>Why I'm Starting with Base (And What Comes After)</title>
      <dc:creator>Satori Geeks</dc:creator>
      <pubDate>Mon, 30 Mar 2026 10:31:47 +0000</pubDate>
      <link>https://forem.com/satorigeeks/why-im-starting-with-base-and-what-comes-after-1733</link>
      <guid>https://forem.com/satorigeeks/why-im-starting-with-base-and-what-comes-after-1733</guid>
      <description>&lt;p&gt;The decision that forced everything else was the Solana week.&lt;/p&gt;

&lt;p&gt;Once I knew there would be a non-EVM chain mid-series — a week using Rust and Anchor instead of Solidity and Foundry — a dozen downstream decisions snapped into place. What frontend framework to use. How to handle wallet connectors. Which hosting platform made sense for weekly chain swaps. Almost nothing about the technical stack was chosen independently of the chain lineup.&lt;/p&gt;

&lt;p&gt;So let's start with the chains.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why Base for Week 1&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Week 1 is Base. It's the only locked decision in the series.&lt;/p&gt;

&lt;p&gt;Base is an Ethereum L2 mid-transition. In February 2026, Coinbase announced it's migrating away from the Optimism OP Stack toward its own unified in-house codebase — consolidating infrastructure under one roof while keeping the protocol open source and EVM-equivalent. The architecture is moving. The developer surface isn't: Foundry works out of the box, &lt;code&gt;wagmi&lt;/code&gt; has a built-in &lt;code&gt;base&lt;/code&gt; chain import, Basescan works like Etherscan. No custom compiler, no exotic account model.&lt;/p&gt;
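&lt;p&gt;That "built-in &lt;code&gt;base&lt;/code&gt; chain import" amounts to a few config lines. A sketch, assuming wagmi v2's &lt;code&gt;createConfig&lt;/code&gt; API:&lt;/p&gt;

```typescript
// Sketch of a wagmi v2 setup for Base. No custom chain object is needed
// because "base" ships with wagmi's bundled chain definitions.
import { createConfig, http } from "wagmi";
import { base } from "wagmi/chains";

export const config = createConfig({
  chains: [base],
  transports: {
    // default public RPC; swap in a dedicated RPC URL for production
    [base.id]: http(),
  },
});
```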

&lt;p&gt;That stability — an infrastructure shift that produces no developer-visible friction — is exactly what makes it the right baseline week. The point of Week 1 is to establish a clean reference point. Starting with something exotic would make the baseline noisy and the comparisons less meaningful.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The five-week arc&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Weeks 2–4 have recommended picks — zkSync Era, Moonbeam, and Solana — confirmed at the start of each week, not in advance. The reason is straightforward: the quality of the research on a chain is only really testable by building on it. A rough Week 1 could change what matters most to investigate in Week 2. Locking every chain now would throw away that feedback loop.&lt;/p&gt;

&lt;p&gt;Week 5 is intentionally unplanned. The slot exists to follow wherever the first four weeks lead — or wherever feedback, a surprising discovery, or a chain worth reviewing points.&lt;/p&gt;

&lt;p&gt;The arc is intentional, though. Think of it as moving into a new neighbourhood each week, starting at the most familiar address on the street:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Week 1 (Base):&lt;/strong&gt; Standard EVM L2. Same toolchain, same patterns, just cheaper gas. The baseline.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Week 2 (plan: zkSync Era):&lt;/strong&gt; ZK-EVM — same Solidity, different execution environment. The ZK proofs themselves are transparent to the developer; the custom compiler and partial EVM equivalence are not.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Week 3 (plan: Moonbeam):&lt;/strong&gt; EVM on Polkadot. Same language, but the chain runs as a parachain on a different underlying platform. The toolchain feels familiar; the infrastructure beneath it doesn't.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Week 4 (plan: Solana):&lt;/strong&gt; Non-EVM. Rust, Anchor, a completely different account model. This is the week that moves to a different city entirely.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Week 5:&lt;/strong&gt; Open — deliberately unplanned. Not another EVM chain. Whatever the first four weeks point toward.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That's a better story than picking chains arbitrarily. Each week changes the frame of reference for the one that follows — and the final slot stays open to follow where the experiment leads.&lt;/p&gt;

&lt;p&gt;Two chains that didn't make the list. Polygon PoS is technically a sidechain, not a rollup — and it's mid-transition: Polygon 2.0 is moving it toward a zkEVM-based architecture, though the final form (rollup, validium, or hybrid) is still evolving. That's the real reason it doesn't fit: too far into transition to serve as a clean EVM L2 baseline, not far enough along to be the ZK-EVM week — a slot Week 2 already covers. Evmos had real potential as a Cosmos EVM option for Week 3, but the team pivoted to building "evmOS" as a framework rather than a destination chain, and ecosystem momentum has been uneven enough to make a one-week build there a higher-risk choice.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why React, and why per-chain adapters&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Back to Solana.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;@solana/wallet-adapter-react&lt;/code&gt; — the only mature, officially maintained Solana wallet adapter — requires React. Once that's true for one week, switching frameworks between weeks adds cognitive and tooling overhead for no gain. React + Vite is the call because of Week 4, not because it's the only option for the EVM weeks.&lt;/p&gt;

&lt;p&gt;The obvious shortcut for wallet connectors was Reown AppKit (formerly Web3Modal), which claims EVM and Solana support in a single React library. The problem: at the time of writing, there are documented open bugs in EVM-to-Solana session switching. For a project that changes chain configurations weekly and already defines a clean per-chain adapter interface, that abstraction layer adds instability without removing meaningful complexity.&lt;/p&gt;

&lt;p&gt;The approach instead: one adapter per week, each around 130 lines of code, each self-contained. EVM weeks (1–3) use &lt;code&gt;wagmi&lt;/code&gt; + &lt;code&gt;viem&lt;/code&gt;. Solana week (4) uses &lt;code&gt;@solana/wallet-adapter-react&lt;/code&gt; + &lt;code&gt;@coral-xyz/anchor&lt;/code&gt;. Week 5 gets its own adapter once the chain is confirmed. Each one implements the same five-method adapter interface — &lt;code&gt;connect&lt;/code&gt;, &lt;code&gt;disconnect&lt;/code&gt;, &lt;code&gt;sendSupport&lt;/code&gt;, &lt;code&gt;getMessages&lt;/code&gt;, &lt;code&gt;getChainMeta&lt;/code&gt; — behind a consistent contract. (&lt;code&gt;sendSupport&lt;/code&gt;, &lt;code&gt;getMessages&lt;/code&gt;, and &lt;code&gt;withdraw&lt;/code&gt; are the smart contract's three functions; the adapter adds wallet lifecycle on top.) The frontend calls the interface; the adapter handles the chain. Clean, auditable, no vendor dependency — and the isolation was a hard requirement, not a convention: FR-027 in the spec prohibited chain-specific imports outside &lt;code&gt;src/adapters/weekN/&lt;/code&gt; in any UI shell file, enforced by spec rather than left to discipline.&lt;/p&gt;
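&lt;p&gt;To make the shape concrete — the five method names come from the spec, but every signature below is an assumption for illustration — a Week 1 adapter might look like this:&lt;/p&gt;

```typescript
// Hypothetical Week 1 (Base) adapter. The five method names match the
// spec's adapter contract; parameter and return shapes are invented for
// this sketch, and the bodies are mocked where wagmi/viem wiring would go.
export const week1Adapter = {
  // Connect a wallet and return the connected address.
  async connect() {
    return "0x0000000000000000000000000000000000000001";
  },
  // Tear down the wallet session.
  async disconnect() {},
  // Submit the sendSupport transaction; returns a (mocked) tx hash.
  async sendSupport(name: string, message: string, amountWei: bigint) {
    return { txHash: "0xmockedtxhash" };
  },
  // Read the on-chain message wall.
  async getMessages() {
    return [{ name: "satori", message: "gm", amountWei: 100n }];
  },
  // Static per-chain metadata the UI shell renders (8453 = Base mainnet).
  getChainMeta() {
    return { chainId: 8453, name: "Base", nativeSymbol: "ETH" };
  },
};
```

&lt;p&gt;The UI shell only ever sees this surface; swapping Base for the Week 2 chain means swapping the file behind it.&lt;/p&gt;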

&lt;p&gt;&lt;strong&gt;Hosting: Cloudflare Pages&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Vercel was the obvious call. It wasn't the right one.&lt;/p&gt;

&lt;p&gt;The Hobby plan includes restrictions around commercial usage, which can be ambiguous for projects involving on-chain transactions. That ambiguity is a constraint, not a scandal — but it's enough to route around when there's a better option available.&lt;/p&gt;

&lt;p&gt;Cloudflare Pages has no such restriction, unlimited bandwidth on the free tier, and built-in preview deployments on every branch push. That last point matters for a project that swaps chain configurations weekly — testing a new adapter without pushing to production is not optional friction. GitHub Pages was ruled out for the same reason: preview deployments require manual setup there.&lt;/p&gt;

&lt;p&gt;One note on decentralisation: publishing on Paragraph.xyz already establishes that ethos clearly. The frontend doesn't need to be on IPFS every week to be consistent with the project's identity. The plan is Cloudflare Pages for weeks 1–4, with an optional Pinata IPFS pin on the final Week 5 deploy. That takes about thirty minutes to configure and adds one step to the final build — not to each weekly build.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The stack follows the chain lineup&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;None of these decisions were made before the chain research was done. React was forced by Solana. Per-chain adapters were forced by the architecture and the adapter instability in universal libraries. Cloudflare Pages was forced by Vercel's ToS.&lt;/p&gt;

&lt;p&gt;That's the honest version of how this stack got chosen. The chain list came first, and everything else fell out of it. The spec pre-committed to this outcome before any code existed: SC-005 required zero UI code changes to add a new chain — a measurable criterion written before the scaffold was built, not a post-hoc observation about how it turned out.&lt;/p&gt;

&lt;p&gt;The selections also serve a purpose beyond getting weeks built. Each chain is a data point in a question I don't have the answer to yet: which part of this space is worth going deeper into, which tooling is worth learning properly, which direction makes sense for what comes next. Five weeks is the structure. What follows depends on what those five weeks actually surface — and the open slot in Week 5 is the first acknowledgement of that.&lt;/p&gt;




&lt;p&gt;→ The full Week 1 build — deploy experience, faucet reality, rubric scores — is in the retrospective: &lt;a href="https://dev.to/satorigeeks/week-1-base-5660-4bga"&gt;Week 1: Base — 56/60&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;→ The live app is at &lt;a href="https://proof-of-support.pages.dev" rel="noopener noreferrer"&gt;https://proof-of-support.pages.dev&lt;/a&gt;&lt;/p&gt;

</description>
      <category>blockchain</category>
      <category>solidity</category>
      <category>solana</category>
      <category>webdev</category>
    </item>
    <item>
      <title>Seven Agents Before a Line of Code: The Week 0 Agentic Planning Pipeline</title>
      <dc:creator>Satori Geeks</dc:creator>
      <pubDate>Fri, 27 Mar 2026 12:59:02 +0000</pubDate>
      <link>https://forem.com/satorigeeks/seven-agents-before-a-line-of-code-the-week-0-agentic-planning-pipeline-o7k</link>
      <guid>https://forem.com/satorigeeks/seven-agents-before-a-line-of-code-the-week-0-agentic-planning-pipeline-o7k</guid>
      <description>&lt;p&gt;Before writing a single line of contract code, I had seven agent specifications. Each one had a defined scope, a list of inputs it consumes, specific outputs it produces, and an explicit list of things it does not do.&lt;/p&gt;

&lt;p&gt;That last part matters. An agent with clear boundaries is more useful than one that tries to cover everything.&lt;/p&gt;

&lt;p&gt;This is what Week 0 actually was: not "I planned the project," but a structured pipeline of specialist AI agents — each given a specific brief and a bounded scope — coordinated by me as the Orchestrator. Most run on Claude. One deliberately doesn't. The pipeline produced concrete artefacts. Those artefacts feed the next agent. Nothing advances until the handoff is ready.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The agents are specialists, not generalists&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The pipeline that runs every week of the build is: &lt;code&gt;Research → Dev → Security → Deploy → QA → Copywriter → Distribution&lt;/code&gt;. Those are hard sequential dependencies. Before Week 1, I needed to establish the infrastructure every agent in that chain would rely on.&lt;/p&gt;

&lt;p&gt;A few examples of how specific the scope definitions get, from &lt;code&gt;AGENTS.md&lt;/code&gt;:&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;Dev Agent&lt;/strong&gt; "takes the research doc and the agreed contract interface and produces working code." It does NOT write the retrospective or the blog post. The devlog it produces is raw material, not polished prose — that's explicitly stated.&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;Security Agent&lt;/strong&gt; "reviews the smart contract for vulnerabilities before it is deployed with real funds." It does NOT rewrite or fix the contract. If it finds a blocker, it raises the issue to the Dev Agent. It also runs on a different model than the Dev Agent — deliberately, and for a specific reason I'll get to.&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;Copywriter Agent&lt;/strong&gt; "turns raw devlogs and research notes into the weekly blog post." It does NOT execute distribution — that's a separate agent with its own scope.&lt;/p&gt;

&lt;p&gt;This isn't fine-grained bureaucracy. It's the difference between a pipeline that runs cleanly and one where something quietly absorbs work it shouldn't be doing, or where a step gets skipped because the previous agent assumed the next one would handle it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What Week 0 produced&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The pipeline generated seven concrete artefacts before any testnet was touched:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Canonical contract interface.&lt;/strong&gt; Three functions: &lt;code&gt;sendSupport&lt;/code&gt;, &lt;code&gt;getMessages&lt;/code&gt;, &lt;code&gt;withdraw&lt;/code&gt;. Two events: &lt;code&gt;SupportSent&lt;/code&gt;, &lt;code&gt;Withdrawn&lt;/code&gt;. OpenZeppelin &lt;code&gt;Ownable&lt;/code&gt; and &lt;code&gt;ReentrancyGuard&lt;/code&gt;. Named constants for name and message length limits. Chain-agnostic — valid for any week's implementation on any chain.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Scoring rubric.&lt;/strong&gt; Eight dimensions with individual weights: Developer Tooling and Contract Authoring both at ×2, Frontend/Wallet Integration at ×2, Documentation and Deployment Experience at ×1.5, and Getting Started, Transaction Cost, and Community each at ×1. Maximum weighted score: 60 points. The Research Agent fills in estimated scores before each week; the Copywriter fills in actuals after. The delta between estimate and reality is part of the story.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Security checklist.&lt;/strong&gt; Four review sections — access control, reentrancy, input validation, state and event integrity — plus chain-specific addenda for standard EVM, ZK-EVM, and non-EVM chains. Non-negotiable gate before mainnet deployment.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;QA checklist.&lt;/strong&gt; Seven verification checks against the live deployment: message wall loads, test transaction goes through end-to-end, block explorer confirms, contract is verified and readable, UI shows the correct chain badge, &lt;code&gt;withdraw()&lt;/code&gt; is callable by owner, mobile layout is usable. Issues block the Copywriter from starting.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Tone guide.&lt;/strong&gt; Voice principles, a "What to Avoid" table, format rules per content type and platform.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Distribution strategy.&lt;/strong&gt; Platform priority order, format requirements per platform, canonical URL discipline (every dev.to cross-post sets &lt;code&gt;canonical_url&lt;/code&gt; to the Paragraph post).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Series intro post.&lt;/strong&gt; This is what readers encounter before Week 1.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
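&lt;p&gt;The weighting arithmetic is easy to sanity-check (dimension names paraphrased; the weights are as listed above):&lt;/p&gt;

```typescript
// Weights as stated above: three dimensions at x2, two at x1.5, three at x1.
// Each dimension is scored out of 5, so the weighted maximum should be 60.
const rubricWeights = {
  developerTooling: 2,
  contractAuthoring: 2,
  frontendWalletIntegration: 2,
  documentation: 1.5,
  deploymentExperience: 1.5,
  gettingStarted: 1,
  transactionCost: 1,
  community: 1,
};

const maxScorePerDimension = 5;
const maxWeightedTotal = Object.values(rubricWeights).reduce(
  (sum, weight) => sum + weight * maxScorePerDimension,
  0,
);
console.log(maxWeightedTotal); // 60
```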

&lt;p&gt;Two of those artefacts go deeper than the bullets suggest. The Dev Agent worked from a structured task list of 35 dependency-ordered tasks — each with an explicit file path, parallelism markers (&lt;code&gt;[P]&lt;/code&gt;), and checkpoint gates between phases. That's "defined inputs and outputs" made concrete, not abstract. Alongside it, a formal adapter contract document — five methods with JSDoc signatures and behavioural constraints — was written before any code existed. The UI shell was designed against that document, not the other way around. That's "no ambiguous handoffs" made concrete. Both were generated with &lt;a href="https://github.com/github/spec-kit" rel="noopener noreferrer"&gt;Speckit&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;That's a lot of infrastructure before a compiler has been invoked. That's the point.&lt;/p&gt;

&lt;p&gt;One more output lands at the end of Week 0 rather than before it: the live frontend. React + Vite, deployed on Cloudflare Pages, accessible at &lt;a href="https://proof-of-support.pages.dev" rel="noopener noreferrer"&gt;https://proof-of-support.pages.dev&lt;/a&gt;. The shared adapter interface sits empty — nothing chain-specific yet. That's Week 1's job. But the URL exists, which means every article and retrospective in this series can link to something real from day one.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The parallelism model&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Not everything in the pipeline runs sequentially. Research for Week N+1 starts while Dev is building Week N. The Marketing Agent works on its own cadence and rarely blocks anyone. Distribution publishes Week N while Dev is already building Week N+1.&lt;/p&gt;

&lt;p&gt;But within a given week, the chain is hard. Research must finish before Dev starts. Security must pass before deployment. QA must pass before the Copywriter starts writing. The retrospective must be approved before Distribution cross-posts it.&lt;/p&gt;

&lt;p&gt;Knowing this upfront means there are no ambiguous handoffs. Every agent knows what it's waiting for and what it's handing off.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Honest about what this is&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Most agents run on Claude. The Security Agent is the exception: it runs on ChatGPT, with a locally-run model as a secondary check. The reason is specific. If the same model that writes the contract also reviews it for vulnerabilities, it may not catch its own errors — the same reasoning patterns that produced a flaw could also rationalise it away. Running security on a different model breaks that loop. It's not distrust of any one tool; it's a structural decision about where single-model risk is unacceptable.&lt;/p&gt;

&lt;p&gt;I am the Orchestrator — the person who reviews all output, rejects what doesn't meet the spec, and approves what does. The AI didn't plan this project. I ran a structured process using AI agents as specialist workers, with model selection made per-agent where it matters.&lt;/p&gt;

&lt;p&gt;Without the structure, every week risks scope creep, missed security steps, retrospectives that drift in quality, or a broken publishing pipeline. The structure enforces discipline. That's the product management brain applied to the development process — which is, honestly, what this whole project is about.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Does the pre-work pay off?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;That's what the coming weeks answer. The rubric will score each chain. The retrospectives will document what actually happened. If the pipeline holds up under a week of real build pressure, the pre-work was worth it.&lt;/p&gt;

&lt;p&gt;If it doesn't, that's also worth writing about.&lt;/p&gt;




&lt;p&gt;→ The full Week 1 build — deploy experience, faucet reality, rubric scores — is in the retrospective: &lt;a href="https://dev.to/satorigeeks/week-1-base-5660-4bga"&gt;Week 1: Base — 56/60&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;→ The live app is at &lt;a href="https://proof-of-support.pages.dev" rel="noopener noreferrer"&gt;https://proof-of-support.pages.dev&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>devjournal</category>
      <category>buildinpublic</category>
      <category>webdev</category>
    </item>
    <item>
      <title>Proof of Support: Five Weeks, Five Chains, One App</title>
      <dc:creator>Satori Geeks</dc:creator>
      <pubDate>Thu, 26 Mar 2026 20:49:58 +0000</pubDate>
      <link>https://forem.com/satorigeeks/proof-of-support-five-weeks-five-chains-one-app-4ion</link>
      <guid>https://forem.com/satorigeeks/proof-of-support-five-weeks-five-chains-one-app-4ion</guid>
      <description>&lt;p&gt;AI tools are doing something specific to the developer-PM divide. Things that used to require years of specialisation are changing fast enough that the old categories — developer, product person, technical lead — don't sit cleanly anymore. I've been in software for over twenty years. I spent eight of them doing product. I'm not entirely sure which one I am now. This project is one way to find out.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What Proof of Support is&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The app is a social tip jar. Visitors come to a page, leave a name and a message, and attach a small amount of the chain's native currency. The message goes permanently on-chain. The funds go to the owner's wallet — mine, as it happens. Which makes this something other than a demo. If someone finds the work useful enough to leave a message and attach something to it, the contract is ready.&lt;/p&gt;

&lt;p&gt;It's simple enough to build in a week. It's complex enough to surface every meaningful difference between blockchain ecosystems: wallet integration, gas costs, frontend tooling, documentation quality, deployment friction. There's a reason I chose this over "hello world."&lt;/p&gt;

&lt;p&gt;The format is a controlled experiment. The same app — identical UI, same contract interface, same three functions — gets built and deployed on a different blockchain each week for five weeks. After each deployment, I score the experience on an eight-dimension rubric covering getting started, developer tooling, contract authoring, documentation, frontend/wallet integration, deployment, transaction cost, and community support. The rubric was built before Week 1 starts, so the scoring is consistent, not retrofitted to whatever happened.&lt;/p&gt;

&lt;p&gt;Five weeks, at least five chains. Categories: EVM L2, ZK-EVM, experimental EVM, non-EVM mainstream, and one wildcard slot — though the wildcard extends to the whole project. If a week surfaces something worth following, or feedback pulls in a direction, I'll add more. The structure is built for that. Chain selection is rolling — I'm picking each week's chain at the start of that week, based on the research, not in advance.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Who's doing this and why now&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;At some point I stopped being able to say "I built that." Not overnight — the transition is gradual. You're still close to the code, then you're reviewing it, then you're writing the brief for it. And one day you realise you've been in rooms where the technical decisions got made and you were the one asking what was feasible, not the one answering.&lt;/p&gt;

&lt;p&gt;The landscape kept changing too. AI tools started doing things that used to require years of specialisation. The lines between developer and product manager started blurring in ways I hadn't expected. I found myself not quite sure which side of the table I should be sitting at — or whether that distinction still makes sense. This project is partly an attempt to find out.&lt;/p&gt;

&lt;p&gt;I got burned. Real money in projects that promised yields, delivered quarterly newsletter updates, and then went quiet. I'd been in this space since the early mining days — BTC, LTC, FTC, whateverTC — then Android apps on Nxt back when that was a serious network (judging by its mostly empty blocks today, it no longer is). None of that made me immune. It just meant I knew exactly what I'd walked into.&lt;/p&gt;

&lt;p&gt;The return happened gradually: Solidity courses, a Lightning Network project, experiments on Gnosis... And then this — a structured experiment with AI as a collaborator, because that's the other part of the story. I'm explicitly using LLM tools throughout this project. Not to avoid the hard parts, but to see what the mode of "one senior plus AI pair programmer" actually produces.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What to expect, and what to do right now&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Every week ends with a working live deployment — real app, real chain, accessible to anyone. And a retrospective: first-person, scored, specific. Not "great developer experience" but "here's the exact error I hit at 11pm and here's what it told me about the toolchain."&lt;/p&gt;

&lt;p&gt;At least five weeks of this. I'll be posting on Farcaster (&lt;a class="mentioned-user" href="https://dev.to/satorigeeks"&gt;@satorigeeks&lt;/a&gt;) as it happens. The full retrospectives live here on Paragraph.&lt;/p&gt;

&lt;p&gt;The series will also answer a different question for me personally: which part of this space is worth going deeper into. Which chain, which tooling, which kind of problem. The retrospectives are honest documentation — but they're also research for what comes next.&lt;/p&gt;

&lt;p&gt;If you're a developer who's curious about blockchain development but suspicious of the noise, you're exactly who I'm writing for. Come back next week — the first chain goes live then, and so does the message wall. Be one of the first entries on it.&lt;/p&gt;

&lt;p&gt;Week 1 starts soon. I've picked the chain. I'm not telling you yet.&lt;/p&gt;




&lt;p&gt;→ The full Week 1 build — deploy experience, faucet reality, rubric scores — is in the retrospective: &lt;a href="https://dev.to/satorigeeks/week-1-base-5660-4bga"&gt;Week 1: Base — 56/60&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;→ The live app is at &lt;a href="https://proof-of-support.pages.dev" rel="noopener noreferrer"&gt;https://proof-of-support.pages.dev&lt;/a&gt;&lt;/p&gt;

</description>
      <category>web3</category>
      <category>devjournal</category>
      <category>buildinpublic</category>
      <category>learning</category>
    </item>
  </channel>
</rss>
