<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Oluwaseun Olajide</title>
    <description>The latest articles on Forem by Oluwaseun Olajide (@oluwaseun_olajide_828e75d).</description>
    <link>https://forem.com/oluwaseun_olajide_828e75d</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3749976%2F22d834d0-3225-4d4f-8d31-a533d14b972f.jpg</url>
      <title>Forem: Oluwaseun Olajide</title>
      <link>https://forem.com/oluwaseun_olajide_828e75d</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/oluwaseun_olajide_828e75d"/>
    <language>en</language>
    <item>
      <title>CIR and the Blockchain Privacy Crisis: Why $2.47 Billion Was Stolen in Six Months</title>
      <dc:creator>Oluwaseun Olajide</dc:creator>
      <pubDate>Tue, 17 Feb 2026 09:14:05 +0000</pubDate>
      <link>https://forem.com/oluwaseun_olajide_828e75d/cir-and-the-blockchain-privacy-crisis-why-247-billion-was-stolen-in-six-months-nc9</link>
      <guid>https://forem.com/oluwaseun_olajide_828e75d/cir-and-the-blockchain-privacy-crisis-why-247-billion-was-stolen-in-six-months-nc9</guid>
      <description>&lt;p&gt;Blockchain was supposed to fix trust.&lt;/p&gt;

&lt;p&gt;The whole pitch was simple: remove the middleman, make everything verifiable, put the rules in code. No banks. No gatekeepers. Just math.&lt;/p&gt;

&lt;p&gt;But math has a problem nobody talks about enough.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;It's completely transparent.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Every transaction. Every price feed. Every oracle update. Every smart contract execution. All of it sitting in the open, visible to anyone who wants to look — including the people who want to exploit it.&lt;/p&gt;

&lt;p&gt;And in the first half of 2025 alone, those people stole &lt;strong&gt;$2.47 billion.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Not because the cryptography was broken. Not because the consensus mechanisms failed. But because the &lt;em&gt;execution layer&lt;/em&gt; — the place where computation actually happens — has no privacy whatsoever.&lt;/p&gt;

&lt;p&gt;This is the blockchain privacy crisis. And it's getting worse.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Numbers Are Staggering
&lt;/h2&gt;

&lt;p&gt;Let me give you the actual data, because the scale of this matters.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Metric&lt;/th&gt;
&lt;th&gt;2024 Full Year&lt;/th&gt;
&lt;th&gt;H1 2025&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Total stolen&lt;/td&gt;
&lt;td&gt;$2.36B&lt;/td&gt;
&lt;td&gt;$2.47B&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Private key compromise&lt;/td&gt;
&lt;td&gt;~80% of losses&lt;/td&gt;
&lt;td&gt;Persistent dominant vector&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Oracle manipulation losses&lt;/td&gt;
&lt;td&gt;$52M&lt;/td&gt;
&lt;td&gt;$8.8B YTD&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Phishing attacks&lt;/td&gt;
&lt;td&gt;Base level&lt;/td&gt;
&lt;td&gt;+40% increase&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Ransomware surge&lt;/td&gt;
&lt;td&gt;Base level&lt;/td&gt;
&lt;td&gt;+60% increase&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;These aren't rounding errors. They're not edge cases. They're the predictable, systematic result of a fundamental architectural flaw.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The blockchain is transparent by design. But transparency at the execution layer creates vulnerabilities that no amount of auditing can fix.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Let me explain why.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Audit Theater Problem
&lt;/h2&gt;

&lt;p&gt;Here's a number that should disturb you:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;In 2024, approximately 70% of major exploits occurred in smart contracts that had been professionally audited.&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Read that again.&lt;/p&gt;

&lt;p&gt;Seven out of ten hacked contracts had been reviewed by security professionals. Someone looked at the code. Someone signed off. Someone said "this is safe."&lt;/p&gt;

&lt;p&gt;And then it got exploited anyway.&lt;/p&gt;

&lt;p&gt;Why? Three structural reasons:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Snapshot Problem&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;An audit reviews code at a specific moment in time. One line changed after the audit? New vulnerability. No one checks again.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Time-Boxing Problem&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Auditors get two to four weeks to review tens of thousands of lines of complex Solidity. They prioritize breadth over depth. Subtle logical errors get missed.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Composability Problem&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A contract can be perfectly secure in isolation but completely vulnerable when it interacts with an oracle, a bridge, and an external liquidity pool simultaneously. No audit catches emergent vulnerabilities across systems.&lt;/p&gt;

&lt;p&gt;But here's the deeper issue audits fundamentally cannot address:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Audits review code. They cannot review execution.&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The difference matters enormously.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Execution Layer Gap
&lt;/h2&gt;

&lt;p&gt;When a smart contract runs, it doesn't just execute in a vacuum. It:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Reads from oracles (external price feeds)&lt;/li&gt;
&lt;li&gt;Interacts with liquidity pools&lt;/li&gt;
&lt;li&gt;Processes transaction data&lt;/li&gt;
&lt;li&gt;Makes decisions based on real-time inputs&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;All of this happens in the open. Anyone watching the mempool can see what's coming. Anyone analyzing timing patterns can infer what the contract is about to do.&lt;/p&gt;

&lt;p&gt;This is where the real attacks happen.&lt;/p&gt;

&lt;h3&gt;
  
  
  Oracle Manipulation: The $8.8 Billion Attack
&lt;/h3&gt;

&lt;p&gt;Here's how a flash loan oracle attack works step by step:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Attacker takes out a massive flash loan (borrow millions, repay in same transaction)&lt;/li&gt;
&lt;li&gt;Uses borrowed capital to manipulate price of a low-liquidity token on a DEX&lt;/li&gt;
&lt;li&gt;Oracle reports the manipulated price to a lending protocol&lt;/li&gt;
&lt;li&gt;Attacker borrows against artificially inflated asset value&lt;/li&gt;
&lt;li&gt;Drains the protocol's liquidity&lt;/li&gt;
&lt;li&gt;Repays flash loan&lt;/li&gt;
&lt;li&gt;Walks away with millions&lt;/li&gt;
&lt;/ol&gt;
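The pivotal moment is in steps 3–4: the lending protocol trusts a spot price that the attacker's own borrowed capital can move. A minimal Rust sketch with made-up pool numbers (the `Pool`, `spot_price`, and `max_borrow` names are illustrative, not from any real protocol):

```rust
// Sketch of why single-source spot-price oracles are manipulable.
// The ratio reserve_quote / reserve_base can be swung within one
// transaction by a large flash-loan swap. All numbers are invented.
struct Pool {
    reserve_base: f64,
    reserve_quote: f64,
}

// Spot price as reported by a constant-product DEX pool.
fn spot_price(p: &Pool) -> f64 {
    p.reserve_quote / p.reserve_base
}

// A lending check that values collateral at the current spot price.
fn max_borrow(collateral_amount: f64, pool: &Pool, ltv: f64) -> f64 {
    collateral_amount * spot_price(pool) * ltv
}

fn main() {
    let mut pool = Pool { reserve_base: 1_000_000.0, reserve_quote: 1_000_000.0 };
    let honest = max_borrow(10_000.0, &pool, 0.8); // 8,000 at price 1.0

    // Flash-loaned capital buys the base token, draining base reserves
    // and inflating the spot price the oracle then reports.
    pool.reserve_base = 200_000.0;
    pool.reserve_quote = 5_000_000.0;
    let manipulated = max_borrow(10_000.0, &pool, 0.8); // 200,000 at price 25.0

    assert!(manipulated > 20.0 * honest);
    println!("honest={honest} manipulated={manipulated}");
}
```

Time-weighted average prices or multi-source medians make this single-transaction swing far harder, which is why single-source spot oracles keep appearing in post-mortems.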

&lt;p&gt;&lt;strong&gt;Total losses from oracle manipulation in 2025: $8.8 billion.&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Recovery rate: less than $100 million.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The exploit works because price data flows &lt;em&gt;visibly&lt;/em&gt; through the system. Attackers can watch it, predict it, and front-run it.&lt;/p&gt;

&lt;p&gt;If price feed ingestion happened inside a confidential execution environment — where the process itself is hidden — this attack vector disappears.&lt;/p&gt;

&lt;p&gt;That's the gap. That's what's missing.&lt;/p&gt;


&lt;h2&gt;
  
  
  The AI Problem Makes Everything Worse
&lt;/h2&gt;

&lt;p&gt;Now layer in what's happening with AI and Web3.&lt;/p&gt;

&lt;p&gt;Decentralized applications are increasingly integrating Large Language Models and autonomous AI agents. AI-powered oracles. AI-driven trading strategies. AI-based governance tools.&lt;/p&gt;

&lt;p&gt;This creates a new class of vulnerabilities that blockchain security wasn't designed to handle.&lt;/p&gt;

&lt;p&gt;The core problem is what researchers call the &lt;strong&gt;Verifiability Trilemma:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A decentralized AI inference system cannot simultaneously achieve:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Computational integrity&lt;/strong&gt; — cryptographic proof the output is correct&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Low latency&lt;/strong&gt; — sub-second response for real-time applications&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Economic efficiency&lt;/strong&gt; — verification costs negligible relative to inference costs&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Current solutions force you to pick two:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ZKML    → High integrity + low cost BUT proving time = minutes to hours 🐌
OpML    → Fast + cheap BUT execution is completely public 👀
FHE     → Private + correct BUT computationally prohibitive 💀
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;None of these work for production AI in Web3 environments.&lt;/p&gt;




&lt;h2&gt;
  
  
  What CIR Actually Solves
&lt;/h2&gt;

&lt;p&gt;This is where &lt;strong&gt;Confidential Inference Runtime (CIR)&lt;/strong&gt; enters.&lt;/p&gt;

&lt;p&gt;CIR uses &lt;strong&gt;Trusted Execution Environments (TEEs)&lt;/strong&gt; — secure hardware enclaves built into modern CPUs and GPUs — to create a protected space where computation happens privately &lt;em&gt;and&lt;/em&gt; verifiably.&lt;/p&gt;

&lt;p&gt;The key insight: TEEs break the Verifiability Trilemma by providing hardware-based integrity instead of cryptographic proofs.&lt;/p&gt;

&lt;h3&gt;
  
  
  Performance Comparison
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Technology&lt;/th&gt;
&lt;th&gt;Speed&lt;/th&gt;
&lt;th&gt;Privacy&lt;/th&gt;
&lt;th&gt;Integrity Mechanism&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;CIR (TEEs)&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Near-native (5–10% overhead)&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Full&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Hardware attestation&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;ZKML&lt;/td&gt;
&lt;td&gt;1400x slower&lt;/td&gt;
&lt;td&gt;High&lt;/td&gt;
&lt;td&gt;Validity proofs&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;OpML&lt;/td&gt;
&lt;td&gt;Native&lt;/td&gt;
&lt;td&gt;None&lt;/td&gt;
&lt;td&gt;Fraud proofs&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;FHE&lt;/td&gt;
&lt;td&gt;Prohibitively slow&lt;/td&gt;
&lt;td&gt;Full&lt;/td&gt;
&lt;td&gt;Mathematical&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;CIR delivers near-native performance with full privacy and hardware-backed integrity.&lt;/p&gt;

&lt;p&gt;But the more important feature for blockchain is what CIR does to &lt;strong&gt;execution behavior.&lt;/strong&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Constant-Time Execution: The Hidden Attack Vector
&lt;/h2&gt;

&lt;p&gt;Here's something most blockchain security discussions miss completely:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Even with memory isolation and encrypted inputs, execution timing leaks information.&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;If a smart contract takes 12ms to execute for one input and 47ms for another, an attacker watching transaction timing can infer:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What decision the contract made&lt;/li&gt;
&lt;li&gt;Which oracle value it used&lt;/li&gt;
&lt;li&gt;Which branch of logic it followed&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is a &lt;strong&gt;timing side-channel attack.&lt;/strong&gt; And it's devastatingly effective against DeFi protocols making time-sensitive decisions.&lt;/p&gt;

&lt;p&gt;CIR addresses this through &lt;strong&gt;constant-time execution guarantees:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight rust"&gt;&lt;code&gt;&lt;span class="c1"&gt;// Every operation takes identical time regardless of input&lt;/span&gt;
&lt;span class="c1"&gt;// No data-dependent branching&lt;/span&gt;
&lt;span class="c1"&gt;// No variable-length operations on secret data&lt;/span&gt;
&lt;span class="c1"&gt;// No timing patterns that leak information&lt;/span&gt;
&lt;span class="k"&gt;fn&lt;/span&gt; &lt;span class="nf"&gt;constant_time_matrix_multiply&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;a&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;Matrix&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;b&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;Matrix&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;Matrix&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="c1"&gt;// All paths execute in identical time&lt;/span&gt;
    &lt;span class="c1"&gt;// Timing is input-independent by construction&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
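As a concrete, compilable instance of the "no data-dependent branching" rule, here is the classic branchless select primitive. This is a sketch of the general technique, not CIR's actual code:

```rust
// Constant-time select: picks `a` or `b` without a branch on the
// secret `choice`. `choice` must be 0 or 1.
fn ct_select(choice: u64, a: u64, b: u64) -> u64 {
    // wrapping_neg turns 1 into an all-ones mask and 0 into all-zeros,
    // so both operands are always touched and no jump depends on `choice`.
    let mask = choice.wrapping_neg();
    (a & mask) | (b & !mask)
}

fn main() {
    assert_eq!(ct_select(1, 7, 9), 7);
    assert_eq!(ct_select(0, 7, 9), 9);
    println!("ok");
}
```

The same mask trick generalizes: instead of `if secret { x } else { y }`, you compute both sides and blend them, so the instruction stream is identical for every input.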



&lt;p&gt;Combined with hardware attestation — a cryptographic proof generated by the CPU itself — you get something the blockchain ecosystem has never had:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Verifiable proof that execution was private.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Not just "the code is correct" (that's what audits try to do). But "the execution itself didn't leak anything."&lt;/p&gt;




&lt;h2&gt;
  
  
  How This Fixes the Core Attack Vectors
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Oracle Manipulation ✅
&lt;/h3&gt;

&lt;p&gt;Price feeds ingested inside a CIR enclave are invisible to the mempool. Attackers can't see what data is being processed. They can't front-run updates they can't observe. The flash loan attack disappears because the timing and data flow are hidden.&lt;/p&gt;

&lt;h3&gt;
  
  
  MEV Extraction ✅
&lt;/h3&gt;

&lt;p&gt;MEV attacks depend on transaction ordering visibility. If the computation determining transaction outcomes happens inside a CIR enclave, the MEV opportunity disappears before it can be exploited.&lt;/p&gt;

&lt;h3&gt;
  
  
  Model Downgrade Attacks ✅
&lt;/h3&gt;

&lt;p&gt;In decentralized AI, a malicious provider might charge for Llama-3-70B but secretly run a cheaper model. CIR prevents this through the &lt;strong&gt;MRENCLAVE measurement&lt;/strong&gt; — a hardware-signed fingerprint of the exact binary running in the enclave.&lt;/p&gt;

&lt;p&gt;The economic math for cheating:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;E[profit from cheating] = (1 - P_caught) × (revenue - cheap_model_cost) 
                        - P_caught × slashing_penalty
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When &lt;code&gt;P_caught → 1&lt;/code&gt; (because TEE signatures are unforgeable), expected profit becomes negative. The system stays honest without requiring trust.&lt;/p&gt;
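Plugging illustrative numbers into the expected-value formula makes the sign flip visible (all values below are invented for the example):

```rust
// Expected profit for a provider that secretly serves a cheaper model.
// Direct transcription of:
//   E[profit] = (1 - P_caught) * (revenue - cheap_model_cost)
//               - P_caught * slashing_penalty
fn expected_cheat_profit(p_caught: f64, revenue: f64, cheap_cost: f64, penalty: f64) -> f64 {
    (1.0 - p_caught) * (revenue - cheap_cost) - p_caught * penalty
}

fn main() {
    // Weak detection: cheating has positive expected value.
    let weak = expected_cheat_profit(0.10, 100.0, 20.0, 500.0);
    // Hardware attestation: P_caught approaches 1, expected value goes negative.
    let strong = expected_cheat_profit(0.99, 100.0, 20.0, 500.0);

    assert!(weak > 0.0);
    assert!(strong < 0.0);
    println!("weak={weak:.1} strong={strong:.1}");
}
```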

&lt;h3&gt;
  
  
  IP Extraction ✅
&lt;/h3&gt;

&lt;p&gt;Developers hesitant to deploy high-value models on decentralized networks because node operators can steal weights? CIR keeps model weights encrypted inside the enclave throughout inference. The node operator never has access to raw weights.&lt;/p&gt;




&lt;h2&gt;
  
  
  What's Already Being Built
&lt;/h2&gt;

&lt;p&gt;This isn't theoretical. Production deployments exist right now.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Phala Network&lt;/strong&gt; — TEEPods running Llama 3.3 70B and DeepSeek R1 with 100% privacy and only 5–10% performance overhead. Over 10,000 daily attestations in production.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Ritual&lt;/strong&gt; — Infernet compute oracle network using TEEs to give smart contracts trustless off-chain AI inference. Making smart contracts "actually smart."&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Marlin&lt;/strong&gt; — Oyster confidential runtime bridging TEEs and decentralized networks.&lt;/p&gt;

&lt;p&gt;The infrastructure for confidential Web3 execution is being assembled. The question is how quickly it becomes the standard.&lt;/p&gt;




&lt;h2&gt;
  
  
  The IP Dimension Nobody Talks About
&lt;/h2&gt;

&lt;p&gt;There's a deeper issue that goes beyond security: &lt;strong&gt;intellectual property.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;AI systems are built on massive, often uncompensated contributions from creators and open-source developers. CIR enables &lt;strong&gt;verifiable attribution&lt;/strong&gt; — a system where the origins of intelligence are tracked on-chain.&lt;/p&gt;

&lt;p&gt;Model weights are fingerprinted. Decision traces are recorded. Every inference preserves the attribution chain.&lt;/p&gt;

&lt;p&gt;This creates what researchers are calling "attribution-backed intelligence units" — a new asset class where AI contributions can be priced, owned, and rewarded.&lt;/p&gt;

&lt;p&gt;For developers hesitant to open-source their models, CIR offers a middle path: deploy openly on decentralized infrastructure while maintaining control and compensation through cryptographically enforced attribution.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Legal Reality (2025)
&lt;/h2&gt;

&lt;p&gt;Recent rulings complicate the picture further.&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;Thaler v. Vidal&lt;/strong&gt; and &lt;strong&gt;Thaler v. Perlmutter&lt;/strong&gt; decisions (March 2025) established that autonomous AI-generated works are not copyrightable or patentable under US law. A natural person must be the author (for copyright) or the inventor (for patents).&lt;/p&gt;

&lt;p&gt;But CIR's verifiable execution trace — what model ran, what inputs were processed, what outputs were generated — creates a digital paper trail demonstrating human involvement in AI-assisted creation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The ledger of computation becomes the ledger of authorship.&lt;/strong&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  The Practical Recommendations
&lt;/h2&gt;

&lt;p&gt;If you're building in Web3 in 2025:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Stop treating audits as your primary security signal&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;They're necessary. They're not sufficient. They review code, not execution. Add runtime monitoring.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Adopt multi-oracle architectures immediately&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Single-source oracles are a documented $8.8B attack vector. There's no justification for them in production protocols.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Think about execution privacy as a first-class concern&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The shift from "did we audit the code?" to "can we prove execution was private and correct?" is not optional. It's where the industry is heading.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Start evaluating TEE-based infrastructure now&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Phala, Ritual, Marlin — production deployments exist. The performance overhead is minimal. The security improvement is substantial.&lt;/p&gt;




&lt;h2&gt;
  
  
  Where I'm Taking This
&lt;/h2&gt;

&lt;p&gt;I've spent the last six weeks building CIR as an inference runtime for AI workloads — starting with healthcare and enterprise AI where HIPAA compliance and side-channel resistance are existential requirements.&lt;/p&gt;

&lt;p&gt;The blockchain application is the natural next layer.&lt;/p&gt;

&lt;p&gt;The core technology — constant-time execution, hardware attestation, CPU-to-GPU encrypted bridging — works regardless of whether the workload is an LLM responding to a medical query or a smart contract processing a DeFi oracle update.&lt;/p&gt;

&lt;p&gt;The execution environment doesn't know what it's protecting. It just guarantees the protection.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;If you're building confidential infrastructure for Web3, or you're a protocol that's been hit by oracle manipulation or MEV extraction and you want to talk architecture — reach out.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The demo is live: &lt;a href="https://youtu.be/3_WAKX_2a6s" rel="noopener noreferrer"&gt;https://youtu.be/3_WAKX_2a6s&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The code is public: &lt;a href="https://github.com/OluwaseunOlajide/CIR-POC" rel="noopener noreferrer"&gt;https://github.com/OluwaseunOlajide/CIR-POC&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The crisis is real. The fix exists. Let's build it.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Building CIR in public. Week 6. Not yet 20.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;X: &lt;a href="https://twitter.com/Oluwase40973634" rel="noopener noreferrer"&gt;@Oluwase40973634&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Email: &lt;a href="mailto:davidseunolajide@gmail.com"&gt;davidseunolajide@gmail.com&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;CIR technical stack&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Current deployment:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Language: Rust (memory safety + performance)&lt;/li&gt;
&lt;li&gt;Cloud: DigitalOcean → Azure SEV-SNP (migration this week)&lt;/li&gt;
&lt;li&gt;Attestation: SHA-256 simulation → AMD hardware signing&lt;/li&gt;
&lt;li&gt;Benchmark: 16ms constant-time execution, &amp;lt;2% variance&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Supported hardware (roadmap):&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AMD SEV-SNP (Confidential VMs)&lt;/li&gt;
&lt;li&gt;Intel TDX (Trust Domain Extensions)&lt;/li&gt;
&lt;li&gt;NVIDIA H100 Confidential Computing GPUs&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;GitHub:&lt;/strong&gt; &lt;a href="https://github.com/OluwaseunOlajide/CIR-POC" rel="noopener noreferrer"&gt;OluwaseunOlajide/CIR-POC&lt;/a&gt;&lt;/p&gt;

</description>
      <category>blockchain</category>
      <category>security</category>
      <category>web3</category>
      <category>cryptography</category>
    </item>
    <item>
      <title>CIR: From Local Development to Production Cloud in One Week</title>
      <dc:creator>Oluwaseun Olajide</dc:creator>
      <pubDate>Tue, 10 Feb 2026 14:35:17 +0000</pubDate>
      <link>https://forem.com/oluwaseun_olajide_828e75d/cir-from-local-development-to-production-cloud-in-one-week-2im3</link>
      <guid>https://forem.com/oluwaseun_olajide_828e75d/cir-from-local-development-to-production-cloud-in-one-week-2im3</guid>
      <description>&lt;p&gt;Last Monday, CIR was running on my laptop. By Friday, it was deployed to DigitalOcean and generating cryptographic attestations remotely. Here’s what that journey looked like.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why does production matter?
&lt;/h2&gt;

&lt;p&gt;While a local demonstration is an essential first step to validate the core logic of your software, it ultimately serves as a controlled experiment that fails to account for the complexities of the real world. Relying solely on a “works on my machine” assurance suggests that the solution is potentially fragile, untested against external variables, and incapable of surviving outside a pristine development environment. In the high-stakes arena of B2B infrastructure, where stability is paramount, a local demo simply cannot provide the necessary evidence that a system is robust enough to handle the friction of actual usage.&lt;/p&gt;

&lt;p&gt;In contrast, a full production deployment serves as the definitive proof of concept, demonstrating that the software is portable, reproducible, and capable of seamless enterprise integration. It signals to potential clients and stakeholders that the engineering has matured beyond a mere prototype and is ready to withstand the rigors of a live ecosystem. By showcasing a deployed solution, you shift the conversation from theoretical functionality to tangible reliability, providing the concrete assurance B2B buyers require to trust your infrastructure with their critical operations.&lt;/p&gt;

&lt;h2&gt;
  
  
  The plan
&lt;/h2&gt;

&lt;p&gt;The initial strategy focused on leveraging Microsoft Azure’s advanced confidential computing capabilities, specifically targeting a deployment on AMD SEV-SNP hardware. To facilitate this, an application was submitted to the Microsoft Founders Hub, which offers $1,000 in Azure credits and would have provided a robust environment for testing secure enclaves at scale. This route was the ideal path for validating the infrastructure on enterprise-grade hardware without incurring immediate overhead costs.&lt;/p&gt;

&lt;p&gt;However, the approval process for the Founders Hub introduced a waiting period that threatened to stall momentum. In an attempt to bridge the gap and start testing immediately, a secondary attempt was made using the Azure for Students program, which provides a $100 credit. Unfortunately, this path hit a dead end due to strict age and identity verification hurdles, effectively locking out access to the necessary Azure resources and forcing a re-evaluation of the deployment strategy.&lt;/p&gt;

&lt;p&gt;To avoid further delays, the focus pivoted to DigitalOcean, utilizing the $200 credit available through the GitHub Student Pack. This decision proved to be the breakthrough needed; the credits were successfully activated on Friday morning, clearing the way for immediate infrastructure provisioning. With the barriers removed, the deployment process moved rapidly, transitioning from a funded account in the morning to a fully deployed live instance by Friday night.&lt;/p&gt;

&lt;h2&gt;
  
  
  The deployment process
&lt;/h2&gt;

&lt;p&gt;The deployment process began with provisioning the cloud infrastructure on DigitalOcean. I spun up a fresh Droplet in the Frankfurt region, selecting the latest Ubuntu 24.04 LTS image to ensure a modern and secure foundation. The instance was configured with 2GB of RAM, a specification chosen to balance cost-efficiency with enough memory to handle the compilation overhead without hitting swap space.&lt;/p&gt;

&lt;p&gt;With the server active, I migrated the codebase from the local environment to the cloud using SCP (Secure Copy Protocol), a secure and direct handoff of the source code. This method avoided the need for intermediate git pulls or credential management on the server side, keeping the initial setup clean and focused solely on the artifacts needed for the build.&lt;/p&gt;

&lt;p&gt;Once the code was transferred, I installed the Rust toolchain directly on the remote instance and triggered the compilation with the release profile to ensure the resulting binary was fully optimized for performance. Despite the modest hardware resources, the build completed in approximately 5 to 10 minutes.&lt;/p&gt;

&lt;p&gt;Finally, I moved to the validation phase by executing the compiled CIR binary. I monitored the standard output and logs to verify that the application initialized correctly and processed data as intended. The output was cross-referenced with local results to confirm that the behavior was identical, successfully marking the transition from a local prototype to a functioning remote deployment.&lt;/p&gt;
&lt;h2&gt;
  
  
  The results
&lt;/h2&gt;

&lt;p&gt;The deployment of the CIR Proof of Concept (PoC) to the remote infrastructure yielded immediate performance gains, with the secure inference engine executing the constant-time calculation on a 200x200 matrix in just 16 milliseconds. This represents a significant improvement over the local development environment, leveraging the superior CPU architecture of the cloud instance to reduce latency. Crucially, the test confirmed that the strict constant-time execution properties required for security were preserved during the migration, validating that the system remains resistant to timing attacks even when running on shared, remote hardware.&lt;/p&gt;

&lt;p&gt;Following the calculation, the system successfully generated a specific cryptographic fingerprint, outputting a result hash to verify the integrity of the computation. The process concluded with the automatic generation and export of the &lt;code&gt;attestation_report.json&lt;/code&gt; file. This successful export demonstrates that the remote instance is fully functional, capable not only of performing secure calculations but also of producing the necessary cryptographic artifacts to prove the validity and privacy of the execution to external verifiers.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;--- CIR PoC: Secure Inference Engine (200x200) ---

[Step 1] Running Constant-Time Calculation...
   &amp;gt; Done in 16ms

[Step 2] Generating Cryptographic Fingerprint...
   &amp;gt; Result Hash: 704dc3569d50486d5b01f77aac85e961320ed4bf33cd611d555cc513b5cdc96a

[Step 3] Exporting Attestation Report...
   &amp;gt; SAVED: 'attestation_report.json' created successfully.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
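The fingerprint-and-export steps in that log can be sketched as follows. This is illustrative only: `DefaultHasher` stands in for the SHA-256 the PoC actually uses, and the JSON field names are invented:

```rust
// Sketch: hash the computation's result and export a small JSON report.
// DefaultHasher is a stand-in for SHA-256; field names are illustrative.
use std::collections::hash_map::DefaultHasher;
use std::fs;
use std::hash::{Hash, Hasher};

fn fingerprint(result: &[u64]) -> u64 {
    let mut h = DefaultHasher::new();
    result.hash(&mut h);
    h.finish()
}

fn main() -> std::io::Result<()> {
    // Stand-in for the output of the secure matrix computation.
    let result = vec![19u64, 22, 43, 50];
    let report = format!(
        "{{\"result_hash\":\"{:016x}\",\"duration_ms\":16}}",
        fingerprint(&result)
    );
    fs::write("attestation_report.json", &report)?;

    // Verify the artifact round-trips.
    let back = fs::read_to_string("attestation_report.json")?;
    assert!(back.contains("result_hash"));
    println!("ok");
    Ok(())
}
```

In a real TEE the report would additionally be signed by the hardware, which is what lets an external verifier trust it.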



&lt;h2&gt;
  
  
  What this proves
&lt;/h2&gt;

&lt;p&gt;A critical advantage demonstrated by this deployment is the inherent portability of the architecture. Because the codebase compiles and runs seamlessly across different environments, transitioning effortlessly from local development machines to standard cloud instances, it establishes that the solution is not brittle or vendor-locked. This flexibility ensures that the secure inference engine can be adopted and scaled across diverse infrastructure setups without requiring significant refactoring or specialized handling.&lt;/p&gt;

&lt;p&gt;This successful software-level validation paves the way for the ultimate goal: hardware-enforced security. The system is now fully primed for integration with Trusted Execution Environments (TEEs), specifically targeting the upcoming migration to Azure’s AMD SEV-SNP nodes. By validating the application logic first, we have de-risked the transition to these hardware-isolated enclaves, ensuring that when the Azure deployment occurs, the focus can remain on hardening the security boundaries rather than debugging the application itself.&lt;/p&gt;

&lt;p&gt;Weeks 1–4: Built it. Week 5: Shipped it. Week 6: Pitching it to enterprise platforms. Building in public at 19. Follow for updates.&lt;/p&gt;

</description>
      <category>cloud</category>
      <category>security</category>
    </item>
    <item>
      <title>Building Constant-Time Algorithms: Why It’s Harder Than It Looks</title>
      <dc:creator>Oluwaseun Olajide</dc:creator>
      <pubDate>Thu, 05 Feb 2026 18:08:42 +0000</pubDate>
      <link>https://forem.com/oluwaseun_olajide_828e75d/building-constant-time-algorithms-why-its-harder-than-it-looks-3ik0</link>
      <guid>https://forem.com/oluwaseun_olajide_828e75d/building-constant-time-algorithms-why-its-harder-than-it-looks-3ik0</guid>
      <description>&lt;p&gt;I spent the last 4 weeks implementing constant-time matrix multiplication in Rust. Sounds simple, right? Just don’t use if-statements on secret data. Turns out the compiler, the CPU, and even Rust’s optimizer are all working against you. Here’s what I learned.&lt;/p&gt;

&lt;h2&gt;
  
  
  What “Constant-Time” Actually Means
&lt;/h2&gt;

&lt;p&gt;In the context of cryptography and secure systems, “constant-time” does not mean the code runs instantaneously or even quickly; rather, it means the execution duration is strictly independent of the value of the secret input data. Whether the system is processing a complex cryptographic key, a large integer, or a string of zeros, the CPU must execute the exact same sequence of instructions and take the exact same number of clock cycles to complete the task. This often requires deliberately bypassing standard compiler optimizations such as “early exits” in loops or conditional branches, resulting in code that may technically be slower than a variable-time counterpart but is mathematically consistent in its performance profile.&lt;/p&gt;

&lt;p&gt;This consistency matters because any variation in execution time, no matter how microscopic, can be measured and exploited by adversaries to reconstruct secret information via “timing attacks.” If a comparison function returns false slightly faster for the first byte of an incorrect password than for the last byte, an attacker can use statistical analysis of those timing differences to guess the password character by character without ever seeing the memory. By enforcing constant-time execution—specifically avoiding data-dependent branching or memory lookups—developers close this "side channel," ensuring that the timing of the operation reveals absolutely nothing about the secrets being processed inside the "black box."&lt;/p&gt;
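
&lt;p&gt;To make the password-comparison example concrete, here is a minimal sketch (my own illustration, not code from any specific library) of a constant-time equality check over fixed-size byte arrays. Instead of returning at the first mismatch, every XOR difference is folded into an accumulator that is inspected only once at the end.&lt;/p&gt;

```rust
use std::hint::black_box;

// Constant-time equality sketch: no early exit, no data-dependent branch.
fn ct_eq(a: [u8; 16], b: [u8; 16]) -> bool {
    let mut diff: u8 = 0;
    for i in 0..16 {
        // XOR is zero only for matching bytes; OR accumulates any mismatch.
        // black_box discourages the compiler from short-circuiting the loop
        // once diff is already nonzero.
        diff |= black_box(a[i]) ^ black_box(b[i]);
    }
    diff == 0
}
```

&lt;p&gt;Whether the mismatch sits at byte 0 or byte 15, the loop always runs all 16 iterations, so an attacker timing the comparison learns nothing about where the inputs diverge.&lt;/p&gt;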

&lt;h2&gt;
  
  
  The Obvious Approach That Doesn’t Work
&lt;/h2&gt;

&lt;p&gt;The most intuitive way to multiply two matrices is the standard algorithm taught in introductory computer science: a triple-nested loop. You iterate through the rows of the first matrix, the columns of the second, and then perform a dot product for each cell. In a mathematical sense, this is perfectly correct and functional: each element of the resulting matrix C is the dot product of the i-th row of A and the j-th column of B. However, this “naive” approach prioritizes logical correctness over execution behavior. It does not account for how the processor handles different values, assuming that multiplying 5 by 5 takes the same effort as multiplying 0 by 0, or that the loop will always run in a predictable rhythm regardless of the data it processes.&lt;/p&gt;

&lt;p&gt;The fundamental issue is that modern compilers (like rustc, gcc, or clang) are designed with a single, overriding goal: optimization for speed. They are built to identify "unnecessary" work and ruthlessly eliminate it. If you attempt to write "constant-time" code by adding dummy operations—such as calculating a value and then discarding it just to burn time—the compiler’s "dead code elimination" pass will see that the result isn't used and delete the instructions entirely. Similarly, if the compiler notices a chance to "short-circuit" a calculation (e.g., skipping a multiplication if one operand is known to be zero), it will insert a branch to jump over that code. While this makes the program faster, it re-introduces the very timing variations you tried to prevent, because the execution time now depends directly on your secret input data.&lt;/p&gt;

&lt;p&gt;A classic example of this vulnerability occurs when a developer tries to optimize sparse matrix operations. In the naive loop below, a standard compiler or a well-meaning developer might add a check to skip the multiplication if an element is zero. While this saves massive amounts of time for sparse matrices, it is catastrophic for security because the total execution time now reveals exactly how many zeros are in your secret matrix.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;fn naive_matmul(a: &amp;amp;[[i32; 2]], b: &amp;amp;[[i32; 2]]) -&amp;gt; [[i32; 2]; 2] {
    let mut c = [[0; 2]; 2];
    for i in 0..2 {
        for j in 0..2 {
            for k in 0..2 {
                // THE SECURITY FLAW:
                // If 'a[i][k]' is 0, the CPU skips the multiply.
                // An attacker measuring time can infer the number of zeros.
                if a[i][k] == 0 {
                    continue; // Compiler/CPU optimization creates a timing leak
                }

                c[i][j] += a[i][k] * b[k][j];
            }
        }
    }
    c
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  How I Made It Actually Work
&lt;/h2&gt;

&lt;p&gt;To stop the compiler from stripping away “unnecessary” work, we use optimization barriers like std::hint::black_box (in Rust). This function acts as an opaque container that tells the compiler, "I am using this value, and you cannot know what it is or how it’s produced." When you wrap a variable or an operation in black_box, the compiler is forced to assume the value is unknown and unpredictable. This prevents it from performing "constant folding" (pre-calculating results at compile time) or "dead code elimination" (deleting code it thinks has no effect), ensuring that the CPU actually executes the instructions you wrote.&lt;br&gt;
Making the control flow rigid requires enforcing fixed iteration counts. In a standard program, you might iterate only up to the length of a string or stop a loop the moment you find a matching search term (an “early exit”). In constant-time programming, this is forbidden. You must iterate through the entire maximum possible size of the data structure every single time. If you are checking a password, you compare every single character against the stored hash, even if the very first character is wrong. This ensures that the loop takes the same number of clock cycles for a valid input as it does for an invalid one.&lt;/p&gt;

&lt;p&gt;To eliminate timing leaks caused by conditional jumps (like if statements), we replace control flow with bitwise arithmetic. Processors are often faster at doing math than they are at predicting branches, and math operations generally take a fixed amount of time. Instead of saying "if x is 1, add y to the total," we create a "mask" using the value of x. If x is 1, the mask becomes all 1s (0xFF...); if x is 0, the mask is all 0s. We then perform a bitwise AND between y and the mask, and add the result to the total. This way, the CPU always performs an ADD instruction, but it effectively adds "0" when the condition is false.&lt;/p&gt;
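
&lt;p&gt;As a concrete sketch of that masking trick (names are my own, and it assumes the secret bit is always 0 or 1):&lt;/p&gt;

```rust
// Branchless conditional add: 'bit' selects whether y is added, with no 'if'.
fn masked_add(total: u32, y: u32, bit: u32) -> u32 {
    // 0 - 1 wraps around to 0xFFFF_FFFF (all 1s); 0 - 0 stays 0x0000_0000.
    let mask = 0u32.wrapping_sub(bit);
    // The ADD always executes; the mask zeroes the operand when bit == 0,
    // so the instruction sequence never depends on the secret condition.
    total.wrapping_add(y & mask)
}
```

&lt;p&gt;The CPU does identical work in both cases; only the masked operand differs.&lt;/p&gt;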

&lt;p&gt;Here is how the previous matrix multiplication looks when hardened. We wrap the inputs in black_box to force the load operations, and we remove the "zero-skip" check entirely, performing the multiplication and addition blindly on every iteration regardless of the values.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;use std::hint::black_box;

fn secure_matmul(a: &amp;amp;[[i32; 2]], b: &amp;amp;[[i32; 2]]) -&amp;gt; [[i32; 2]; 2] {
    let mut c = [[0; 2]; 2];

    // 1. Fixed Iteration: Loops are hardcoded to 0..2.
    // No reliance on dynamic lengths or 'break' statements.
    for i in 0..2 {
        for j in 0..2 {
            for k in 0..2 {
                // 2. Optimization Barrier: black_box forces the compiler to 
                // treat these values as unknown, preventing it from optimizing 
                // based on specific values (like 0 or 1).
                let val_a = black_box(a[i][k]);
                let val_b = black_box(b[k][j]);

                // 3. Branchless Execution: We multiply and add unconditionally.
                // Even if val_a is 0, we pay the cost of the multiplication.
                c[i][j] += val_a * val_b; 
            }
        }
    }
    // Return through black_box to ensure the calculation isn't discarded.
    black_box(c)
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  How I Proved It Works
&lt;/h2&gt;

&lt;p&gt;To empirically verify the constant-time property, we must stress the system with two statistically polar opposite datasets. The first set consists of completely random matrices — high entropy data that forces the CPU to process distinct values in every register. The second set is the “degenerate” case: matrices filled entirely with zeros. In a non-secure implementation, this zero-filled dataset would trigger every possible optimization shortcut (like skipping multiplications), causing the code to fly through the CPU pipeline. By running thousands of iterations of both scenarios back-to-back, we create a comparative baseline to see if the “easy” work is actually finishing faster than the “hard” work.&lt;/p&gt;

&lt;p&gt;The results of this benchmark confirm the efficacy of the black_box barriers and branchless logic. When running the hardened secure_matmul function, both the random-data inputs and the zero-data inputs consistently clocked in at approximately 22ms per batch.&lt;/p&gt;

&lt;p&gt;Crucially, the variance between the two runtimes was less than 2%. This minuscule fluctuation is attributable to standard background “noise” from the operating system (like context switches or cache contention) rather than the code itself. The timing distributions for both datasets effectively overlap, proving that the CPU is performing the exact same amount of labor regardless of the input.&lt;/p&gt;
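
&lt;p&gt;As a sketch of the harness behind those numbers (my own illustration rather than the exact benchmark code; the by-value signature and iteration counts are assumptions for brevity):&lt;/p&gt;

```rust
use std::hint::black_box;
use std::time::Instant;

// Self-contained variant of the hardened multiply, taking arrays by value.
fn secure_matmul(a: [[i32; 2]; 2], b: [[i32; 2]; 2]) -> [[i32; 2]; 2] {
    let mut c = [[0; 2]; 2];
    for i in 0..2 {
        for j in 0..2 {
            for k in 0..2 {
                // Unconditional multiply-add; black_box blocks value-based shortcuts.
                c[i][j] += black_box(a[i][k]) * black_box(b[k][j]);
            }
        }
    }
    black_box(c)
}

// Time a batch of identical multiplications and return elapsed nanoseconds.
fn time_batch(a: [[i32; 2]; 2], b: [[i32; 2]; 2], iters: u32) -> u128 {
    let start = Instant::now();
    for _ in 0..iters {
        black_box(secure_matmul(black_box(a), black_box(b)));
    }
    start.elapsed().as_nanos()
}
```

&lt;p&gt;Comparing time_batch on a zero-filled batch against a random-data batch over many runs, and checking that the two timing distributions overlap, is the whole experiment: in the hardened version the “easy” all-zeros input should no longer finish early.&lt;/p&gt;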

&lt;p&gt;This empirical measurement is the only “proof” that truly validates a side-channel defense. While analyzing the Rust source code or even the assembly instructions is necessary, it is not sufficient; micro-architectural features like branch prediction, speculative execution, and instruction reordering can introduce invisible timing leaks that static analysis misses. By observing that the “fastest possible” input (all zeros) takes the exact same time to execute as the “slowest possible” input (random large numbers), we confirm that the system treats all data as equal burdens. This transforms the security of the engine from a theoretical hope into an observable, physical reality.&lt;/p&gt;

&lt;h2&gt;
  
  
  What’s Harder Than I Expected
&lt;/h2&gt;

&lt;p&gt;In standard high-performance computing, the goal is to do as little work as possible. Techniques like “early exits” (breaking a loop the moment a match is found) and caching (storing frequently accessed data for quick retrieval) are fundamental to speed. However, these are catastrophic for security because they make execution time dependent on the data being processed. Even powerful tools like SIMD (Single Instruction, Multiple Data) can introduce subtle vulnerabilities; if a processor throttles its frequency to handle power-hungry vector instructions or if the latency of an instruction varies based on the operand values, the resulting timing jitter can act as a beacon, signaling the internal state of your secrets to an attacker.&lt;/p&gt;

&lt;p&gt;This creates an unavoidable conflict between the compiler’s purpose and the cryptographer’s needs. Compilers and modern CPUs are aggressively engineered to predict future instructions and skip unnecessary tasks to maximize throughput. Constant-time programming essentially requires you to fight these architectural instincts, forcing the processor to take the “long way” for every calculation. While the performance engineer tries to exploit every shortcut the hardware offers, the security engineer must treat every shortcut as a potential leak, deliberately writing “inefficient” code to ensure the runtime remains perfectly flat and predictable.&lt;/p&gt;

&lt;p&gt;The trade-off for this robustness is a measurable but often acceptable dip in raw speed. Hardening a function to be constant-time typically incurs a performance penalty of roughly 5–10% compared to a naively optimized version. This “security tax” comes from the overhead of processing dummy data, calculating full iterations on sparse matrices, and blocking compiler optimizations. However, this cost buys something invaluable: provable resistance to timing attacks. In high-stakes environments, losing a few milliseconds is a negligible price to pay for the guarantee that your execution patterns reveal absolutely nothing to an adversary.&lt;/p&gt;

&lt;p&gt;This is just one primitive (matrix multiplication). Next: moving this from local simulation to a real AMD SEV-SNP environment and getting hardware-backed attestation working. &lt;br&gt;
Follow for updates.&lt;/p&gt;

</description>
      <category>algorithms</category>
      <category>rust</category>
      <category>security</category>
      <category>performance</category>
    </item>
    <item>
      <title>Inside AMD SEV: How memory encryption works today (and where it is lacking)</title>
      <dc:creator>Oluwaseun Olajide</dc:creator>
      <pubDate>Tue, 03 Feb 2026 14:46:19 +0000</pubDate>
      <link>https://forem.com/oluwaseun_olajide_828e75d/inside-amd-sev-how-memory-encryption-works-todayand-where-it-is-lacking-3n3d</link>
      <guid>https://forem.com/oluwaseun_olajide_828e75d/inside-amd-sev-how-memory-encryption-works-todayand-where-it-is-lacking-3n3d</guid>
      <description>&lt;p&gt;Most engineers deploying confidential AI VMs seem to have almost no idea what’s actually happening under the hood. Here’s what AMD SEV does, and what it doesn’t.&lt;/p&gt;

&lt;p&gt;AMD SEV (Secure Encrypted Virtualization) is a hardware-based security technology integrated into AMD EPYC processors. Its primary purpose is to protect data in use by encrypting the memory of virtual machines (VMs). In traditional virtualization, the hypervisor (the software managing the VMs) has full visibility into the memory of every guest VM. This creates a security risk: if the hypervisor is compromised (or if the cloud provider is untrusted), the data inside your VM can be read or tampered with. AMD SEV solves this by cryptographically isolating the VM from the hypervisor. That sounds great, right? A marvel of computing security, fixing a major issue in computing.&lt;/p&gt;

&lt;p&gt;You might be wondering how AMD SEV differs from Intel SGX. Here is the schtick: AMD SEV is like building a fortress around your entire house. Everything inside the house (the operating system, all applications, and data) is safe from the outside world (the cloud provider/hypervisor), but if a thief somehow manages to get inside the house (e.g., a malware infection in your guest OS), they can steal everything. Intel SGX, on the other hand, is like placing a steel vault inside a room in your house. Even if the house has no walls and thieves (a compromised OS or hypervisor) are roaming freely, they cannot get into the vault. Only the specific items you put in the vault are safe. I bet you can already tell which one is favored by the cloud computing giants (of course it is AMD SEV), though Intel SGX can prove quite difficult to use, since you have to build your entire application around it. Is it worth the trade-off?&lt;br&gt;
The answer is yours to decide, but the big players (Azure, Google Cloud, Nvidia) are pushing for it massively, and not without reason. Model weights are too expensive to risk, and training data is too regulated to expose. AMD SEV (paired with NVIDIA GPUs) provides the hardware-level guarantee that not even the computer’s owner can watch what the computer is thinking. You have to admit that is pretty good. Say you and I each own a bank and we want to train a shared fraud-detection model (I used banks so you understand this is purely hypothetical); we need a clean room where neither of us can peek at what is going on inside, and AMD SEV solves that like a champ. But as stated throughout this blog, it is not all rainbows and sunshine.&lt;/p&gt;

&lt;p&gt;I spent a week staring at AMD SEV’s security model trying to find the weak point. It took longer than I expected; the encryption layer is genuinely solid. But then I looked past the data and started watching the behavior. How long each operation takes. Which memory addresses get touched. The shape of the computation itself. SEV doesn’t hide that. And for AI inference, where the shape of the computation can leak information about the model or the input, that’s a problem nobody in the confidential computing space is really addressing yet. Imagine we launch the fraud-detection model and someone walks up to the room and studies how it was designed and decorated; they could get a pretty good idea of what was going on inside. Put it like this: if I see balloons in a room, I don’t think it’s going to be a funeral.&lt;br&gt;
This is the first of many in a series; if you want to dive deeper, follow.&lt;/p&gt;

</description>
      <category>azure</category>
      <category>aws</category>
    </item>
  </channel>
</rss>
