<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Sameer Khan</title>
    <description>The latest articles on Forem by Sameer Khan (@monkfromearth).</description>
    <link>https://forem.com/monkfromearth</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F422077%2Fc59a6851-ab5b-4629-afc5-f46d01148b32.png</url>
      <title>Forem: Sameer Khan</title>
      <link>https://forem.com/monkfromearth</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/monkfromearth"/>
    <language>en</language>
    <item>
      <title>Zuckerberg Is Writing Code Again. With Claude Code.</title>
      <dc:creator>Sameer Khan</dc:creator>
      <pubDate>Sun, 05 Apr 2026 10:24:18 +0000</pubDate>
      <link>https://forem.com/monkfromearth/zuckerberg-is-writing-code-again-with-claude-code-26b1</link>
      <guid>https://forem.com/monkfromearth/zuckerberg-is-writing-code-again-with-claude-code-26b1</guid>
      <description>&lt;p&gt;&lt;strong&gt;TL;DR:&lt;/strong&gt; Mark Zuckerberg shipped 3 diffs to Meta's monorepo last month, his first code in 20 years. He's a heavy user of Claude Code CLI. One of his diffs got 200+ approvals from engineers who wanted to say they reviewed the CEO's code. He's not the only one. Garry Tan at Y Combinator is doing the same thing. The pattern is clear: AI coding tools are pulling founders back into the codebase.&lt;/p&gt;




&lt;h2&gt;
  
  
  What happened?
&lt;/h2&gt;

&lt;p&gt;Gergely Orosz at The Pragmatic Engineer &lt;a href="https://newsletter.pragmaticengineer.com/p/the-pulse-industry-leaders-return" rel="noopener noreferrer"&gt;reported this week&lt;/a&gt; that Mark Zuckerberg is back to writing code. Three diffs landed in Meta's monorepo in March 2026. His tool of choice: &lt;strong&gt;Claude Code CLI&lt;/strong&gt;, Anthropic's terminal-based AI coding assistant. &lt;sup id="fnref1"&gt;1&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;To put the scale in perspective: Meta's monorepo now has &lt;strong&gt;close to 100 million diffs&lt;/strong&gt;. Back in 2006, the entire Facebook codebase had fewer than 10,000. &lt;sup id="fnref1"&gt;1&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;Zuckerberg's last meaningful code contributions were in 2006. That's a 20-year gap. The fact that he's back, and using an AI tool to do it, says something about where we are.&lt;/p&gt;

&lt;h2&gt;
  
  
  The 2010 diff that got force-merged
&lt;/h2&gt;

&lt;p&gt;This isn't Zuckerberg's first time making waves in code review.&lt;/p&gt;

&lt;p&gt;In 2010, he submitted a diff that made profile photos clickable on the profile page. Michael Novati, a senior engineer who would become the first person to hold Meta's L7 "coding machine" archetype, &lt;a href="https://newsletter.pragmaticengineer.com/p/the-coding-machine-at-meta" rel="noopener noreferrer"&gt;blocked it&lt;/a&gt;. The reason: formatting issues everywhere. &lt;sup id="fnref1"&gt;1&lt;/sup&gt; &lt;sup id="fnref2"&gt;2&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;Zuckerberg overrode the block and force-merged it. &lt;sup id="fnref1"&gt;1&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;Novati spent eight years at Meta and was recognized as the top code committer company-wide for several of them. The Pragmatic Engineer did &lt;a href="https://newsletter.pragmaticengineer.com/p/the-coding-machine-at-meta" rel="noopener noreferrer"&gt;a full episode&lt;/a&gt; with him about what it means to be a "coding machine" at that scale. &lt;sup id="fnref2"&gt;2&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;The 2010 story is funny in hindsight. But the 2026 version is different. This time, Zuckerberg isn't force-merging past reviewers. He's using AI to write code that engineers actually want to approve. &lt;strong&gt;One of his March diffs got more than 200 approvals&lt;/strong&gt;, with devs jumping at the chance to say they'd reviewed the CEO's work. &lt;sup id="fnref1"&gt;1&lt;/sup&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Why this matters beyond the anecdote
&lt;/h2&gt;

&lt;p&gt;Three diffs from the CEO of a 70,000-employee company is a footnote in a 100-million-diff monorepo. The signal isn't the code. It's the behavior.&lt;/p&gt;

&lt;p&gt;Zuckerberg isn't the only founder pulled back into the codebase by AI tools. Garry Tan, CEO of Y Combinator, &lt;a href="https://github.com/garrytan/gstack" rel="noopener noreferrer"&gt;returned to coding&lt;/a&gt; after 15 years and open-sourced gstack, a Claude Code system with 23 specialist agents that turns the CLI into a virtual engineering team: code reviewer, QA lead, security auditor, release engineer. &lt;sup id="fnref3"&gt;3&lt;/sup&gt; &lt;sup id="fnref4"&gt;4&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;Tobias Lütke, CEO of Shopify, has been running experiments with &lt;a href="https://dev.to/blogs/karpathy-autoresearch-explained-ml-to-marketing"&gt;Karpathy's AutoResearch&lt;/a&gt; on internal company data. 37 experiments overnight. 19% performance gain.&lt;/p&gt;

&lt;p&gt;I wrote about &lt;a href="https://dev.to/blogs/karpathy-autoresearch-explained-ml-to-marketing"&gt;how AutoResearch works&lt;/a&gt; a few days ago. The throughline is the same: AI tools are collapsing the gap between "person with ideas" and "person who ships code." Founders used to be the first type. AI is turning them back into the second.&lt;/p&gt;

&lt;h2&gt;
  
  
  Meta's bet: AI writes most of the code
&lt;/h2&gt;

&lt;p&gt;Zuckerberg coding again isn't a hobby. It's a signal of where Meta is heading.&lt;/p&gt;

&lt;p&gt;Leaked internal documents from March 2026 show aggressive targets. Meta's creation org wants &lt;strong&gt;65% of engineers writing 75% or more of their committed code using AI&lt;/strong&gt; by mid-2026. The Scalable Machine Learning org set a target of 50-80% AI-assisted code. &lt;sup id="fnref5"&gt;5&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;Zuckerberg himself said on Dwarkesh Patel's podcast that "in the next year, maybe half the development will be done by AI as opposed to people, and that will kind of increase from there." &lt;sup id="fnref6"&gt;6&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;He's not predicting this from the sidelines. He's using Claude Code in the terminal to ship diffs to his own monorepo. The CEO is the pilot customer.&lt;/p&gt;

&lt;h2&gt;
  
  
  The pattern worth watching
&lt;/h2&gt;

&lt;p&gt;There's a recurring shape here.&lt;/p&gt;

&lt;p&gt;Karpathy builds AutoResearch. Constrains the agent to one file, one metric, one 5-minute cycle. The constraint is the invention. Lütke runs it on Shopify data overnight. Marketers adapt it for landing pages.&lt;/p&gt;

&lt;p&gt;Anthropic builds Claude Code. Tan wraps it in 23 specialist agents. Zuckerberg uses it to ship his first code in 20 years.&lt;/p&gt;

&lt;p&gt;The tools don't just help engineers code faster. They re-open coding to people who stopped. Founders who moved into strategy, management, fundraising. People who haven't touched a codebase in a decade. The barrier to re-entry used to be months of catching up on tooling, frameworks, and conventions. Now it's a terminal and a prompt.&lt;/p&gt;

&lt;p&gt;That's a different kind of disruption than "AI replaces developers." It's closer to: AI brings back the builder-CEO. The person who can see a problem, describe a solution, and ship it before the meeting ends.&lt;/p&gt;

&lt;p&gt;Whether Zuckerberg's 3 diffs were good code is beside the point. The 200 engineers who approved them probably weren't reviewing for correctness. But the fact that a CEO can sit down with Claude Code and produce something that compiles, passes CI, and lands in a 100-million-diff monorepo? That's the new baseline.&lt;/p&gt;

&lt;h2&gt;
  
  
  Key takeaways
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Zuckerberg shipped 3 diffs&lt;/strong&gt; to Meta's monorepo in March 2026, his first code in ~20 years, using Claude Code CLI&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;One diff got 200+ approvals&lt;/strong&gt; from engineers eager to review the CEO's code&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Garry Tan&lt;/strong&gt; (Y Combinator) also returned to coding after 15 years, open-sourcing gstack for Claude Code&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Meta targets 65% of engineers writing 75%+ of their code with AI&lt;/strong&gt; by mid-2026&lt;/li&gt;
&lt;li&gt;AI coding tools are pulling &lt;strong&gt;founders back into codebases&lt;/strong&gt; they left years ago&lt;/li&gt;
&lt;li&gt;The disruption isn't "AI replaces developers," it's &lt;strong&gt;"AI re-opens development"&lt;/strong&gt; to people who stopped&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;I break down things like this on &lt;a href="https://linkedin.com/in/monkfromearth" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;, &lt;a href="https://x.com/monkfromearth" rel="noopener noreferrer"&gt;X&lt;/a&gt;, and &lt;a href="https://instagram.com/monkfrom.earth" rel="noopener noreferrer"&gt;Instagram&lt;/a&gt;. If this resonated, you'd probably like those too.&lt;/p&gt;







&lt;ol&gt;

&lt;li id="fn1"&gt;
&lt;p&gt;&lt;a href="https://newsletter.pragmaticengineer.com/p/the-pulse-industry-leaders-return" rel="noopener noreferrer"&gt;The Pulse: Industry leaders return to coding with AI — The Pragmatic Engineer&lt;/a&gt; ↩&lt;/p&gt;
&lt;/li&gt;

&lt;li id="fn2"&gt;
&lt;p&gt;&lt;a href="https://newsletter.pragmaticengineer.com/p/the-coding-machine-at-meta" rel="noopener noreferrer"&gt;"The Coding Machine" at Meta with Michael Novati — The Pragmatic Engineer&lt;/a&gt; ↩&lt;/p&gt;
&lt;/li&gt;

&lt;li id="fn3"&gt;
&lt;p&gt;&lt;a href="https://github.com/garrytan/gstack" rel="noopener noreferrer"&gt;gstack — Garry Tan's Claude Code setup (GitHub)&lt;/a&gt; ↩&lt;/p&gt;
&lt;/li&gt;

&lt;li id="fn4"&gt;
&lt;p&gt;&lt;a href="https://techcrunch.com/2026/03/17/why-garry-tans-claude-code-setup-has-gotten-so-much-love-and-hate/" rel="noopener noreferrer"&gt;Why Garry Tan's Claude Code setup has gotten so much love, and hate — TechCrunch&lt;/a&gt; ↩&lt;/p&gt;
&lt;/li&gt;

&lt;li id="fn5"&gt;
&lt;p&gt;&lt;a href="https://www.theweek.in/news/sci-tech/2026/03/27/how-aggressive-is-mark-zuckerberg-s-ai-native-push-for-meta-leaked-documents-offer-new-details-on-coding-targets.html" rel="noopener noreferrer"&gt;How aggressive is Mark Zuckerberg's 'AI-native' push for Meta? — The Week&lt;/a&gt; ↩&lt;/p&gt;
&lt;/li&gt;

&lt;li id="fn6"&gt;
&lt;p&gt;&lt;a href="https://www.dwarkesh.com/p/mark-zuckerberg-2" rel="noopener noreferrer"&gt;Mark Zuckerberg — AI will write most Meta code in 18 months — Dwarkesh Patel&lt;/a&gt; ↩&lt;/p&gt;
&lt;/li&gt;

&lt;/ol&gt;

</description>
      <category>ai</category>
      <category>programming</category>
      <category>productivity</category>
      <category>career</category>
    </item>
    <item>
      <title>What OpenAI's $122 Billion Round Tells Us About AI's New Shape</title>
      <dc:creator>Sameer Khan</dc:creator>
      <pubDate>Sun, 05 Apr 2026 10:23:07 +0000</pubDate>
      <link>https://forem.com/monkfromearth/what-openais-122-billion-round-tells-us-about-ais-new-shape-58a7</link>
      <guid>https://forem.com/monkfromearth/what-openais-122-billion-round-tells-us-about-ais-new-shape-58a7</guid>
      <description>&lt;p&gt;On March 31, 2026, &lt;a href="https://openai.com" rel="noopener noreferrer"&gt;OpenAI&lt;/a&gt; closed a &lt;strong&gt;$122 billion&lt;/strong&gt; round at an &lt;strong&gt;$852 billion&lt;/strong&gt; valuation. Amazon put in $50 billion. Nvidia and SoftBank put in $30 billion each. Three billion came from retail investors. &lt;sup id="fnref1"&gt;1&lt;/sup&gt; &lt;sup id="fnref2"&gt;2&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;That single round is larger than every venture dollar raised across India's startup ecosystem in FY26 combined, which totalled $10.1 billion. &lt;sup id="fnref3"&gt;3&lt;/sup&gt; Two ecosystems, two different jobs being funded. More on that later.&lt;/p&gt;

&lt;p&gt;The reflex when you see numbers like $122B is to call it a bubble. I don't think it is. Look at what OpenAI has been doing with the capital and the check starts to make sense. Not because OpenAI will definitely win. Because nobody else is attempting what OpenAI is attempting.&lt;/p&gt;

&lt;h2&gt;
  
  
  What OpenAI Is Actually Doing
&lt;/h2&gt;

&lt;p&gt;&lt;em&gt;The shape of a category being drawn&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;In the past six weeks, OpenAI has moved at every economic layer where AI touches the world.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Media.&lt;/strong&gt; Acquired &lt;a href="https://tbpn.com" rel="noopener noreferrer"&gt;TBPN&lt;/a&gt;, a daily three-hour founder-focused tech show hosted by John Coogan and Jordi Hays, for a reported low hundreds of millions. TBPN did $5M in ad revenue in 2025 and is on track for $30M in 2026. &lt;sup id="fnref4"&gt;4&lt;/sup&gt; OpenAI now owns three hours a day of the tech audience's attention.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Consumer commerce.&lt;/strong&gt; ChatGPT Agent shipped with &lt;a href="https://walmart.com" rel="noopener noreferrer"&gt;Walmart&lt;/a&gt; integration for agentic shopping. Users browse, compare, and buy inside ChatGPT. &lt;sup id="fnref5"&gt;5&lt;/sup&gt; First agentic commerce deployment at national retail scale.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Enterprise data and delivery.&lt;/strong&gt; &lt;a href="https://snowflake.com" rel="noopener noreferrer"&gt;Snowflake&lt;/a&gt; signed a $200 million multi-year partnership putting OpenAI's models directly inside enterprise data warehouses. &lt;sup id="fnref6"&gt;6&lt;/sup&gt; &lt;a href="https://accenture.com" rel="noopener noreferrer"&gt;Accenture&lt;/a&gt; is handling enterprise implementation and delivery. &lt;sup id="fnref7"&gt;7&lt;/sup&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Developer surface.&lt;/strong&gt; Codex now ships as a plugin inside Claude Code, &lt;a href="https://anthropic.com" rel="noopener noreferrer"&gt;Anthropic's&lt;/a&gt; coding agent. &lt;sup id="fnref8"&gt;8&lt;/sup&gt; OpenAI's model, running on their competitor's surface.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Infrastructure.&lt;/strong&gt; The Stargate project is building $500 billion of compute capacity across seven sites and ~7 GW of planned capacity. &lt;sup id="fnref9"&gt;9&lt;/sup&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;No other AI-native company is operating across all five layers. Anthropic stays deep and narrow on models plus Claude Code. &lt;a href="https://google.com" rel="noopener noreferrer"&gt;Google&lt;/a&gt; is retrofitting Gemini into an existing conglomerate. &lt;a href="https://x.ai" rel="noopener noreferrer"&gt;xAI&lt;/a&gt; has one distribution surface, which is X. Chinese players face different constraints and a different market. Microsoft is already a conglomerate, and owns 27% of OpenAI anyway. &lt;sup id="fnref10"&gt;10&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;OpenAI is alone in attempting the breadth.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Edison Pattern
&lt;/h2&gt;

&lt;p&gt;&lt;em&gt;Why building the surround is the innovation&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Matt Ridley makes a quiet argument in &lt;em&gt;How Innovation Works&lt;/em&gt; that's worth sitting with. The light bulb, he writes, was invented &lt;strong&gt;at least 23 times&lt;/strong&gt; before Edison. Joseph Swan had a working version. So did Heinrich Göbel, Hiram Maxim, Alexander Lodygin, and roughly twenty others. &lt;sup id="fnref11"&gt;11&lt;/sup&gt; Edison's genius wasn't the filament. It was understanding that a bulb is useless on its own.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;"He was the first to bring everything together, to combine it with a system of generating and distributing electricity."&lt;/em&gt;&lt;br&gt;
— Matt Ridley, on Edison&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;So Edison built the surround. Generators, copper distribution, meters, fuses, junction boxes, domestic wiring standards. He opened Pearl Street Station in 1882 as the first commercial central power plant because without it, the bulb could not be sold. He didn't invent electricity any more than he invented the bulb. He built the &lt;strong&gt;economy&lt;/strong&gt; that made both useful.&lt;/p&gt;

&lt;p&gt;Ridley's larger claim is that innovation is almost always incremental and collective, not heroic. What looks like one person's breakthrough is usually a decades-long relay. The genius lies in &lt;strong&gt;assembly&lt;/strong&gt;, in drawing together the necessary surrounding pieces so the core idea can actually be used.&lt;/p&gt;

&lt;p&gt;Read OpenAI's $122B through that lens. The frontier model isn't the innovation. Anthropic has one. Google has one. DeepSeek has one. Several companies are, as Ridley would say, thinking simultaneously about similar solutions. What OpenAI is building is the surround. Media, commerce, enterprise data, developer surfaces, compute infrastructure. The things that make the model &lt;em&gt;usable as an economy&lt;/em&gt;, not just as a tool.&lt;/p&gt;

&lt;p&gt;Whether they're drawing the right surround is the open question. That they're drawing it at all is what separates them from everyone else.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Gets Cut
&lt;/h2&gt;

&lt;p&gt;&lt;em&gt;Reading direction by what someone walks away from&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;A large check is easy to turn into sprawl. What keeps this from being sprawl is visible in what OpenAI has walked away from in the same six-week window.&lt;/p&gt;

&lt;p&gt;The Sora consumer video app is shutting down April 26. &lt;sup id="fnref12"&gt;12&lt;/sup&gt; The &lt;a href="https://thewaltdisneycompany.com" rel="noopener noreferrer"&gt;Disney&lt;/a&gt; licensing deal, which included a $1 billion equity investment, never closed. &lt;sup id="fnref13"&gt;13&lt;/sup&gt; Sora's user count had collapsed from 1 million to under 500,000, and the app was burning roughly $1 million a day. &lt;sup id="fnref14"&gt;14&lt;/sup&gt; OpenAI walked from a live $1B check.&lt;/p&gt;

&lt;p&gt;The Stargate Abilene expansion, 600 MW of additional capacity, was cancelled in March. &lt;a href="https://oracle.com" rel="noopener noreferrer"&gt;Oracle&lt;/a&gt; publicly cited OpenAI's "often-changing demand forecasting" as the reason negotiations collapsed. &lt;sup id="fnref15"&gt;15&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;I wrote in an earlier post about how &lt;a href="https://dev.to/blogs/good-products-hard-to-vary"&gt;good products are hard to vary&lt;/a&gt;. Every element load-bearing, nothing extra. That principle has a corporate version. A good strategy, at this scale, is also hard to vary. Every layer of breadth has to earn its place. Sora didn't. The Abilene expansion couldn't. Whether the remaining layers will is the bet.&lt;/p&gt;

&lt;p&gt;You can read a lot about what someone believes by what they refuse to keep paying for.&lt;/p&gt;

&lt;h2&gt;
  
  
  Two Bets, Same Wave
&lt;/h2&gt;

&lt;p&gt;&lt;em&gt;What India's $10B is actually funding&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Back to the opening comparison. India's $10.1B across FY26 and OpenAI's $122B in a single round are not the same job, and the contrast is interesting for that reason, not because one is bigger.&lt;/p&gt;

&lt;p&gt;OpenAI's capital funds &lt;strong&gt;platform creation&lt;/strong&gt;. It flows toward compute, model capability, enterprise partnerships, distribution surfaces, and acquisitions that lock in attention.&lt;/p&gt;

&lt;p&gt;India's capital funds &lt;strong&gt;founders building on top of platforms&lt;/strong&gt;. Early-stage funding jumped 58% year-over-year in Q1 2026, while $100M+ deals hit zero for the first time since 2022. &lt;sup id="fnref16"&gt;16&lt;/sup&gt; The capital is intentionally horizontal: thousands of bets on use cases that assume a platform already exists.&lt;/p&gt;

&lt;p&gt;Both bets are rational. They sit at different layers of the same wave. One is Edison at Pearl Street. The other is the thousands of businesses that came alive the day the grid turned on: factories, streetcars, radios, refrigerators, telegrams. Neither layer makes sense without the other.&lt;/p&gt;

&lt;h2&gt;
  
  
  What To Watch
&lt;/h2&gt;

&lt;p&gt;Watch OpenAI not to see who wins AI, but to see what the new category actually looks like. $122 billion is the price of drawing that shape in real time. OpenAI happens to be holding the pencil.&lt;/p&gt;

&lt;p&gt;Whether this bet works won't be known for three to five years. Meanwhile, the shape itself is the interesting thing: an AI-native attempt at breadth, at sovereign-fund scale, before the category even has settled edges.&lt;/p&gt;

&lt;p&gt;Nobody has tried this before in AI. That's the news.&lt;/p&gt;

&lt;h2&gt;
  
  
  Key Takeaways
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;OpenAI raised $122 billion in one round at an $852 billion valuation, more than India's entire startup ecosystem raised in a year&lt;/li&gt;
&lt;li&gt;The capital services a category-creation bet, not a product bet&lt;/li&gt;
&lt;li&gt;Direction shows up in the cuts: Sora killed, $1B Disney investment walked away from, Stargate Abilene expansion cancelled&lt;/li&gt;
&lt;li&gt;OpenAI is the only AI-native company attempting breadth across media, consumer, enterprise, developer, and infrastructure simultaneously&lt;/li&gt;
&lt;li&gt;Anthropic stays narrow, Google retrofits, xAI has one surface, Chinese players are constrained by market and chip access&lt;/li&gt;
&lt;li&gt;India's $10.1B funds founders building on platforms. OpenAI's $122B funds being the platform. Different jobs, both real.&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;I break down things like this on &lt;a href="https://linkedin.com/in/monkfromearth" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;, &lt;a href="https://x.com/monkfromearth" rel="noopener noreferrer"&gt;X&lt;/a&gt;, and &lt;a href="https://instagram.com/monkfrom.earth" rel="noopener noreferrer"&gt;Instagram&lt;/a&gt;. If this resonated, you'd probably like those too.&lt;/p&gt;







&lt;ol&gt;

&lt;li id="fn1"&gt;
&lt;p&gt;OpenAI, &lt;a href="https://openai.com/index/accelerating-the-next-phase-ai/" rel="noopener noreferrer"&gt;"Accelerating the next phase of AI"&lt;/a&gt; (March 31, 2026). ↩&lt;/p&gt;
&lt;/li&gt;

&lt;li id="fn2"&gt;
&lt;p&gt;Bloomberg, &lt;a href="https://www.bloomberg.com/news/articles/2026-03-31/openai-valued-at-852-billion-after-completing-122-billion-round" rel="noopener noreferrer"&gt;"OpenAI Valued at $852 Billion After Completing $122 Billion Round"&lt;/a&gt; (March 31, 2026). Amazon $50B ($35B contingent on IPO/AGI), Nvidia $30B, SoftBank $30B. Retail investors $3B via TechCrunch. ↩&lt;/p&gt;
&lt;/li&gt;

&lt;li id="fn3"&gt;
&lt;p&gt;Economic Times via LinkedIn News, FY26 India startup funding totals $10.1 billion, down 9% YoY. Moneycontrol/Bain-IVCA reported VC fundraising rebounded to ~$5.4 billion in 2025. ↩&lt;/p&gt;
&lt;/li&gt;

&lt;li id="fn4"&gt;
&lt;p&gt;TechCrunch, &lt;a href="https://techcrunch.com/2026/04/02/openai-acquires-tbpn-the-buzzy-founder-led-business-talk-show/" rel="noopener noreferrer"&gt;"OpenAI acquires TBPN"&lt;/a&gt; (April 2, 2026). TBPN sits within OpenAI's Strategy org under Chris Lehane. Editorial independence preserved. ↩&lt;/p&gt;
&lt;/li&gt;

&lt;li id="fn5"&gt;
&lt;p&gt;Digital Commerce 360, &lt;a href="https://www.digitalcommerce360.com/2026/03/24/openai-agentic-commerce-updates-chatgpt-walmart/" rel="noopener noreferrer"&gt;"OpenAI reveals updates to its agentic commerce experience for ChatGPT"&lt;/a&gt; (March 24, 2026). ↩&lt;/p&gt;
&lt;/li&gt;

&lt;li id="fn6"&gt;
&lt;p&gt;Snowflake, &lt;a href="https://www.snowflake.com/en/news/press-releases/snowflake-and-openAI-forge-200-million-partnership-to-bring-enterprise-ready-ai-to-the-worlds-most-trusted-data-platform/" rel="noopener noreferrer"&gt;"Snowflake and OpenAI Forge $200 Million Partnership"&lt;/a&gt;. ↩&lt;/p&gt;
&lt;/li&gt;

&lt;li id="fn7"&gt;
&lt;p&gt;OpenAI, &lt;a href="https://openai.com/index/accenture-partnership/" rel="noopener noreferrer"&gt;"Accenture and OpenAI accelerate enterprise AI success"&lt;/a&gt;. ↩&lt;/p&gt;
&lt;/li&gt;

&lt;li id="fn8"&gt;
&lt;p&gt;OpenAI Codex Plugin for Claude Code, &lt;a href="https://github.com/openai/codex-plugin-cc" rel="noopener noreferrer"&gt;github.com/openai/codex-plugin-cc&lt;/a&gt;. Commands include &lt;code&gt;/codex:review&lt;/code&gt;, &lt;code&gt;/codex:adversarial-review&lt;/code&gt;, &lt;code&gt;/codex:rescue&lt;/code&gt;. ↩&lt;/p&gt;
&lt;/li&gt;

&lt;li id="fn9"&gt;
&lt;p&gt;OpenAI, &lt;a href="https://openai.com/index/announcing-the-stargate-project/" rel="noopener noreferrer"&gt;"Announcing The Stargate Project"&lt;/a&gt;. $500 billion planned investment over four years. Nearly 7 GW across flagship Abilene site, five new sites, and CoreWeave partnerships. ↩&lt;/p&gt;
&lt;/li&gt;

&lt;li id="fn10"&gt;
&lt;p&gt;OpenAI, &lt;a href="https://openai.com/index/next-chapter-of-microsoft-openai-partnership/" rel="noopener noreferrer"&gt;"The next chapter of the Microsoft-OpenAI partnership"&lt;/a&gt;. Microsoft holds ~27% on as-converted diluted basis, ~$135 billion value post-recap. ↩&lt;/p&gt;
&lt;/li&gt;

&lt;li id="fn11"&gt;
&lt;p&gt;Matt Ridley, &lt;a href="https://en.wikipedia.org/wiki/How_Innovation_Works" rel="noopener noreferrer"&gt;&lt;em&gt;How Innovation Works and Why It Flourishes in Freedom&lt;/em&gt;&lt;/a&gt; (2020). Ridley draws on Robert Friedel, Paul Israel, and Bernard Finn's history of the incandescent bulb, which identifies at least 23 inventors who produced working versions before Edison. Ridley's argument: "Edison was the first to bring everything together, to combine it with a system of generating and distributing electricity." Pearl Street Station opened in Manhattan on September 4, 1882 as the world's first commercial central power plant. ↩&lt;/p&gt;
&lt;/li&gt;

&lt;li id="fn12"&gt;
&lt;p&gt;The Decoder, &lt;a href="https://the-decoder.com/openai-sets-two-stage-sora-shutdown-with-app-closing-april-2026-and-api-following-in-september/" rel="noopener noreferrer"&gt;"OpenAI sets two-stage Sora shutdown"&lt;/a&gt;. App discontinued April 26, 2026. API discontinued September 24, 2026. ↩&lt;/p&gt;
&lt;/li&gt;

&lt;li id="fn13"&gt;
&lt;p&gt;Variety, &lt;a href="https://variety.com/2026/digital/news/openai-shutting-down-sora-video-disney-1236698277/" rel="noopener noreferrer"&gt;"OpenAI Will Shut Down Sora Video App; Disney Drops Plans for $1 Billion Investment"&lt;/a&gt;. Original Disney-OpenAI Sora agreement (December 2025) included $1B equity investment plus warrants, 3-year licensing of 200+ Disney/Marvel/Pixar/Star Wars characters. ↩&lt;/p&gt;
&lt;/li&gt;

&lt;li id="fn14"&gt;
&lt;p&gt;TechCrunch, &lt;a href="https://techcrunch.com/2026/03/29/why-openai-really-shut-down-sora/" rel="noopener noreferrer"&gt;"Why OpenAI really shut down Sora"&lt;/a&gt;. User count peaked near 1 million, fell below 500,000. App burning roughly $1 million per day. Sora research team pivoting to world simulation for robotics. ↩&lt;/p&gt;
&lt;/li&gt;

&lt;li id="fn15"&gt;
&lt;p&gt;Noah Bean, &lt;a href="https://medium.com/@noahbean3396/stargates-first-crack-reveals-the-fault-lines-beneath-ai-s-trillion-dollar-buildout-1a3e5476b760" rel="noopener noreferrer"&gt;"Stargate's first crack reveals the fault lines"&lt;/a&gt; (March 2026). Oracle and OpenAI abandoned plans to expand Abilene from 1.2 GW to ~2.0 GW. Oracle cited financing terms and OpenAI's "often-changing demand forecasting". ↩&lt;/p&gt;
&lt;/li&gt;

&lt;li id="fn16"&gt;
&lt;p&gt;Inc42, &lt;a href="https://inc42.com/reports/indian-tech-startup-funding-report-q1-2026/" rel="noopener noreferrer"&gt;"Indian Tech Startup Funding Report Q1 2026"&lt;/a&gt;. Q1 2026 funding: $2.3 billion (-26% YoY). Zero $100M+ deals, first time since 2022. Early-stage +58% YoY. 48% of investors call AI the most investment-ready sector, fewer than 10% willing to pay premium valuations. ↩&lt;/p&gt;
&lt;/li&gt;

&lt;/ol&gt;

</description>
      <category>ai</category>
      <category>openai</category>
      <category>discuss</category>
      <category>news</category>
    </item>
    <item>
      <title>Axios Supply Chain Attack: How North Korean Hackers Social-Engineered an Open Source Maintainer</title>
      <dc:creator>Sameer Khan</dc:creator>
      <pubDate>Fri, 03 Apr 2026 17:43:44 +0000</pubDate>
      <link>https://forem.com/monkfromearth/axios-supply-chain-attack-how-north-korean-hackers-social-engineered-an-open-source-maintainer-2ae9</link>
      <guid>https://forem.com/monkfromearth/axios-supply-chain-attack-how-north-korean-hackers-social-engineered-an-open-source-maintainer-2ae9</guid>
      <description>&lt;p&gt;&lt;strong&gt;TL;DR:&lt;/strong&gt; North Korean hackers built a fake company, complete with a Slack workspace, LinkedIn activity, and a full team of fake profiles, to trick the lead maintainer of axios into installing malware. One Teams meeting later, they had full control of his machine. They used that access to push malicious versions of a library with &lt;strong&gt;100 million weekly downloads&lt;/strong&gt;. The attack was live for 3 hours. It's the most sophisticated social engineering of an open source maintainer we've seen, and it exposes gaps in npm's security model that no amount of 2FA can fix.&lt;/p&gt;




&lt;p&gt;On March 31, 2026, two versions of axios that had never been through the project's CI pipeline appeared on npm. Versions 1.14.1 and 0.30.4 both carried a new dependency nobody had seen before: &lt;code&gt;plain-crypto-js&lt;/code&gt;. &lt;sup id="fnref1"&gt;1&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;Within six minutes, Socket's automated scanner flagged the package. &lt;sup id="fnref2"&gt;2&lt;/sup&gt; Within three hours, npm pulled both versions. But in those three hours, an unknown number of developers, CI pipelines, and production systems had already installed a cross-platform Remote Access Trojan.&lt;/p&gt;

&lt;p&gt;The story of &lt;em&gt;how&lt;/em&gt; those versions got published is more interesting than the malware itself.&lt;/p&gt;

&lt;h2&gt;
  
  
  How do you trick someone who maintains code for 100 million developers?
&lt;/h2&gt;

&lt;p&gt;Jason Saayman, the lead maintainer of axios, &lt;a href="https://github.com/axios/axios/issues/10636#issuecomment-4180237789" rel="noopener noreferrer"&gt;shared the playbook&lt;/a&gt; in the project's post-mortem. &lt;sup id="fnref1"&gt;1&lt;/sup&gt; It reads less like a hacking story and more like a con movie.&lt;/p&gt;

&lt;p&gt;The attackers reached out masquerading as the founder of a real company. They had cloned the founder's identity and the company itself. Then came the invite to a Slack workspace.&lt;/p&gt;

&lt;p&gt;This wasn't a hastily thrown-together workspace. It was branded with the company's visual identity. It had channels where "team members" shared the company's LinkedIn posts (likely linking to the real company's account). There were fake profiles for the company's team &lt;em&gt;and&lt;/em&gt; for other open source maintainers, giving the whole setup social proof.&lt;/p&gt;

&lt;p&gt;After establishing trust through the Slack workspace, they scheduled an MS Teams meeting. Multiple people appeared to be on the call. During the meeting, something on Saayman's system was flagged as "out of date." He installed the update, thinking it was related to Teams.&lt;/p&gt;

&lt;p&gt;That update was the RAT.&lt;/p&gt;

&lt;p&gt;"Everything was extremely well co-ordinated, looked legit and was done in a professional manner," Saayman &lt;a href="https://github.com/axios/axios/issues/10636#issuecomment-4180237789" rel="noopener noreferrer"&gt;wrote&lt;/a&gt;. &lt;sup id="fnref1"&gt;1&lt;/sup&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Saayman wasn't the only target
&lt;/h2&gt;

&lt;p&gt;Weeks before the axios compromise, &lt;a href="https://github.com/axios/axios/issues/10636" rel="noopener noreferrer"&gt;voxpelli&lt;/a&gt;, a maintainer of packages like Mocha, described a nearly identical approach. &lt;sup id="fnref1"&gt;1&lt;/sup&gt; Someone invited him to be on a "podcast." A week of lead-up followed: social media images, preparatory interview questions, other guests in a group chat. Everything felt real.&lt;/p&gt;

&lt;p&gt;When it came time to "record," the fake streaming website claimed a connection issue and tried to get him to install a non-notarized macOS app. When he refused, they tried a &lt;code&gt;curl&lt;/code&gt; command to download and run something. When that failed too, they went dark and deleted every conversation.&lt;/p&gt;

&lt;p&gt;"It's creepy how they target you, no matter if they are real people or possibly AI," voxpelli wrote. &lt;sup id="fnref1"&gt;1&lt;/sup&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What the malware actually did
&lt;/h2&gt;

&lt;p&gt;The technical chain was clean. &lt;code&gt;plain-crypto-js@4.2.1&lt;/code&gt; used a &lt;code&gt;postinstall&lt;/code&gt; hook to run &lt;code&gt;setup.js&lt;/code&gt;, a 4,209-byte dropper obfuscated with reversed Base64 and XOR cipher (key: &lt;code&gt;OrDeR_7077&lt;/code&gt;). &lt;sup id="fnref2"&gt;2&lt;/sup&gt;&lt;/p&gt;
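The obfuscation itself is commodity stuff, and seeing it in concrete form makes that clear. Below is a hypothetical re-creation of the same two layers; the layering order (XOR, then Base64, then string reversal) is my assumption, since the analysis names only the techniques and the key, and the payload here is a harmless stand-in:

```shell
KEY='OrDeR_7077'
PLAIN='console.log("hello")'   # stand-in payload, not the real dropper

# XOR stdin against a repeating key, one byte at a time (POSIX sh, no deps
# beyond od/cut/printf). od emits each input byte as a decimal number.
xor_with_key() {
  key=$1
  klen=${#key}
  i=0
  for b in $(od -An -tu1 -v); do
    c=$(printf '%s' "$key" | cut -c$((i % klen + 1)))
    k=$(printf '%d' "'$c")                  # numeric value of the key byte
    printf "\\$(printf '%03o' $((b ^ k)))"  # emit XORed byte as octal escape
    i=$((i + 1))
  done
}

# Obfuscate: XOR, Base64-encode, reverse the string.
obfuscated=$(printf '%s' "$PLAIN" | xor_with_key "$KEY" | base64 | rev)

# Deobfuscate: reverse, Base64-decode, XOR again (XOR is its own inverse).
decoded=$(printf '%s' "$obfuscated" | rev | base64 -d | xor_with_key "$KEY")
echo "$decoded"   # → console.log("hello")
```

Roughly twenty lines to undo. The point of layering like this isn't to stop an analyst; it's to slip past casual review and naive string scanners.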

&lt;p&gt;It deployed platform-specific payloads:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;macOS:&lt;/strong&gt; A C++ binary disguised as an Apple system daemon at &lt;code&gt;/Library/Caches/com.apple.act.mond&lt;/code&gt;, supporting remote code execution and process injection &lt;sup id="fnref2"&gt;2&lt;/sup&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Windows:&lt;/strong&gt; Renamed &lt;code&gt;powershell.exe&lt;/code&gt; to &lt;code&gt;wt.exe&lt;/code&gt; (disguised as Windows Terminal), launched via VBScript &lt;sup id="fnref2"&gt;2&lt;/sup&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Linux:&lt;/strong&gt; A Python script at &lt;code&gt;/tmp/ld.py&lt;/code&gt; running as a detached process &lt;sup id="fnref2"&gt;2&lt;/sup&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;All variants beaconed every 60 seconds to &lt;code&gt;sfrclak[.]com&lt;/code&gt;. The dropper then cleaned up after itself: deleted &lt;code&gt;setup.js&lt;/code&gt;, deleted &lt;code&gt;package.json&lt;/code&gt;, and renamed a clean backup to &lt;code&gt;package.json&lt;/code&gt;. The directory looked normal after execution. &lt;sup id="fnref2"&gt;2&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;Google/Mandiant attributes the malware to &lt;strong&gt;UNC1069&lt;/strong&gt;, a North Korea-nexus threat actor active since 2018, based on overlap with the WAVESHAPER backdoor family. &lt;sup id="fnref3"&gt;3&lt;/sup&gt; Microsoft independently attributes it to &lt;strong&gt;Sapphire Sleet&lt;/strong&gt;, also North Korean. &lt;sup id="fnref4"&gt;4&lt;/sup&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  2FA didn't matter. That's the real story.
&lt;/h2&gt;

&lt;p&gt;Saayman had two-factor authentication enabled on his npm account. It didn't help.&lt;/p&gt;

&lt;p&gt;Once a RAT has full control of your machine, software-based TOTP is just another application the attacker can interact with. They changed his npm email to a Proton Mail address under their control (&lt;code&gt;ifstap@proton.me&lt;/code&gt;) and used a long-lived classic npm access token to publish. &lt;sup id="fnref5"&gt;5&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;Here's what makes this worse: axios &lt;em&gt;already had&lt;/em&gt; OIDC-based publishing with provenance attestations since 2023. The last four legitimate v1 releases all went through GitHub Actions with Trusted Publishing. The malicious v1.14.1 had neither provenance nor attestations. Any tool checking for this would have flagged it instantly. &lt;sup id="fnref6"&gt;6&lt;/sup&gt;&lt;/p&gt;
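That signal is already queryable on demand, even though nothing enforces it at install time. npm ships an audit subcommand that verifies registry signatures and provenance attestations across an installed tree (the fallback echo below is only there so the snippet degrades gracefully outside a real project):

```shell
# From a project root with node_modules installed: verify registry signatures
# and provenance attestations for every installed package. A version published
# without the attestations its siblings carry stands out in this output.
npm audit signatures 2>/dev/null || echo "audit unavailable here (needs npm and an installed project)"
```

Run periodically in CI, this is the closest thing to the missing install-time check the paragraph above describes.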

&lt;p&gt;But npm has no setting to enforce OIDC-only publishing. There is no way to tell the registry: "reject anything not published through CI." The strictest option npm offers still allows local &lt;code&gt;npm publish&lt;/code&gt; with a browser-based 2FA prompt, which a RAT can trivially intercept. &lt;sup id="fnref6"&gt;6&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;As contributor shaanmajid &lt;a href="https://github.com/axios/axios/issues/10636" rel="noopener noreferrer"&gt;put it&lt;/a&gt;: "The only mitigation on Axios's end that could have actually prevented this would have been using hardware FIDO2 keys for maintainer npm auth, which can't be hijacked by a RAT." &lt;sup id="fnref6"&gt;6&lt;/sup&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What would have actually prevented this?
&lt;/h2&gt;

&lt;p&gt;Three things, none of which axios alone could control:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Registry-level OIDC enforcement.&lt;/strong&gt; If npm allowed packages to opt in to "reject all non-OIDC publishes," the RAT would have been useless for publishing. Other registries like crates.io already support this. &lt;sup id="fnref6"&gt;6&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Dependency cooldown periods.&lt;/strong&gt; The malicious versions were live for 3 hours. A 3-day cooldown on new versions (Renovate and bun call the setting &lt;code&gt;minimumReleaseAge&lt;/code&gt;; Dependabot and uv offer equivalents) would have meant zero downloads of the poisoned packages. &lt;sup id="fnref6"&gt;6&lt;/sup&gt;&lt;/p&gt;
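As a concrete sketch of the cooldown idea: in Renovate it is a single rule in `renovate.json`. The file below is a minimal example written out via printf, not project-specific advice; the 3-day value just mirrors the window discussed here.

```shell
# Write a minimal Renovate config that holds back npm updates until a release
# is at least 3 days old (Renovate's option name is minimumReleaseAge).
printf '%s\n' \
  '{' \
  '  "$schema": "https://docs.renovatebot.com/renovate-schema.json",' \
  '  "packageRules": [' \
  '    {' \
  '      "matchDatasources": ["npm"],' \
  '      "minimumReleaseAge": "3 days"' \
  '    }' \
  '  ]' \
  '}' > renovate.json
```

The trade-off is deliberate: you accept being 3 days behind on every release in exchange for letting the ecosystem catch poisoned versions before they reach you.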

&lt;p&gt;&lt;strong&gt;3. Provenance verification by default.&lt;/strong&gt; Every legitimate axios v1 release had OIDC provenance. The malicious one didn't. If package managers verified attestations by default instead of opt-in, this would have been caught at install time. &lt;sup id="fnref6"&gt;6&lt;/sup&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The pattern is bigger than axios
&lt;/h2&gt;

&lt;p&gt;This attack follows the playbook Google documented for UNC1069: social engineering that targets individuals in crypto and AI, building elaborate fake identities and companies to establish trust before delivering malware. &lt;sup id="fnref7"&gt;7&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;What's different here is the target. This wasn't a crypto startup founder. It was a maintainer of a general-purpose HTTP library embedded in millions of projects globally. The blast radius isn't one company's treasury. It's the software supply chain itself.&lt;/p&gt;

&lt;p&gt;Feross Aboukhadijeh, founder of Socket, &lt;a href="https://github.com/axios/axios/issues/10636" rel="noopener noreferrer"&gt;summarized it&lt;/a&gt;: "This kind of targeted social engineering against individual maintainers is the new normal. It's not a reflection on Jason or the axios team. These campaigns are sophisticated and persistent. We're seeing them across the ecosystem and they're only accelerating." &lt;sup id="fnref1"&gt;1&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;Singapore's Cyber Security Agency issued a formal advisory. &lt;sup id="fnref8"&gt;8&lt;/sup&gt; Microsoft, Google, SANS, Elastic, Snyk, Datadog, Huntress, and Malwarebytes all published analyses within days.&lt;/p&gt;

&lt;h2&gt;
  
  
  Key takeaways
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Social engineering is the attack vector.&lt;/strong&gt; The malware was simple. The social engineering was extraordinary. Fake companies, branded Slack workspaces, multi-person Teams calls, weeks of relationship building.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Software 2FA is not 2FA when your machine is compromised.&lt;/strong&gt; Hardware keys (FIDO2/WebAuthn) are the only defense against RAT-based credential theft.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;npm's security model has a structural gap.&lt;/strong&gt; There is no way to enforce "publish only from CI." Until registries support OIDC-only publishing, every maintainer's laptop is a viable attack surface.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Provenance attestations work, but nobody checks them.&lt;/strong&gt; The malicious version was missing attestations that every legitimate version had. The signal was there. The ecosystem isn't wired to use it yet.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Dependency cooldowns are free protection.&lt;/strong&gt; Configure &lt;code&gt;minimumReleaseAge&lt;/code&gt; in your dependency tools. A 3-day delay would have neutralized this entire attack.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Open source maintainers are high-value targets for state actors.&lt;/strong&gt; This is the new normal.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  How to check if you're affected
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;grep&lt;/span&gt; &lt;span class="nt"&gt;-E&lt;/span&gt; &lt;span class="s2"&gt;"axios@(1&lt;/span&gt;&lt;span class="se"&gt;\.&lt;/span&gt;&lt;span class="s2"&gt;14&lt;/span&gt;&lt;span class="se"&gt;\.&lt;/span&gt;&lt;span class="s2"&gt;1|0&lt;/span&gt;&lt;span class="se"&gt;\.&lt;/span&gt;&lt;span class="s2"&gt;30&lt;/span&gt;&lt;span class="se"&gt;\.&lt;/span&gt;&lt;span class="s2"&gt;4)|plain-crypto-js"&lt;/span&gt; package-lock.json yarn.lock bun.lock pnpm-lock.yaml 2&amp;gt;/dev/null
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you find a match: downgrade to &lt;code&gt;axios@1.14.0&lt;/code&gt; or &lt;code&gt;0.30.3&lt;/code&gt;, remove &lt;code&gt;plain-crypto-js&lt;/code&gt; from &lt;code&gt;node_modules&lt;/code&gt;, rotate every secret and credential on the affected machine, and check network logs for connections to &lt;code&gt;sfrclak[.]com&lt;/code&gt; or &lt;code&gt;142.11.206.73&lt;/code&gt;. &lt;sup id="fnref1"&gt;1&lt;/sup&gt;&lt;/p&gt;
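One caveat on the grep: it catches lockfiles that store `name@version` strings literally, but `package-lock.json` records the version in a separate field, so a hit there is not guaranteed. Asking npm for the installed tree is a cheap complement (the version pattern is mine, matching the bad releases named above):

```shell
# Ask npm what is actually installed; any hit on the bad versions or on the
# malicious package warrants the rotation and cleanup steps above.
npm ls axios plain-crypto-js 2>/dev/null \
  | grep -E 'axios@(1\.14\.1|0\.30\.4)|plain-crypto-js@' \
  || echo "no match in installed tree"
```

This checks what's on disk rather than what the lockfile promises, which also catches manually installed copies.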




&lt;p&gt;I break down stories like this on &lt;a href="https://linkedin.com/in/monkfromearth" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;, &lt;a href="https://x.com/monkfromearth" rel="noopener noreferrer"&gt;X&lt;/a&gt;, and &lt;a href="https://instagram.com/monkfrom.earth" rel="noopener noreferrer"&gt;Instagram&lt;/a&gt;. If this was useful, you'd probably like those too.&lt;/p&gt;







&lt;ol&gt;

&lt;li id="fn1"&gt;
&lt;p&gt;&lt;a href="https://github.com/axios/axios/issues/10636" rel="noopener noreferrer"&gt;axios post-mortem and maintainer comments, GitHub Issues #10636&lt;/a&gt; ↩&lt;/p&gt;
&lt;/li&gt;

&lt;li id="fn2"&gt;
&lt;p&gt;&lt;a href="https://socket.dev/blog/axios-npm-package-compromised" rel="noopener noreferrer"&gt;Socket technical analysis of axios compromise&lt;/a&gt; ↩&lt;/p&gt;
&lt;/li&gt;

&lt;li id="fn3"&gt;
&lt;p&gt;&lt;a href="https://cloud.google.com/blog/topics/threat-intelligence/north-korea-threat-actor-targets-axios-npm-package" rel="noopener noreferrer"&gt;Google Cloud / Mandiant attribution to UNC1069&lt;/a&gt; ↩&lt;/p&gt;
&lt;/li&gt;

&lt;li id="fn4"&gt;
&lt;p&gt;&lt;a href="https://www.microsoft.com/en-us/security/blog/2026/04/01/mitigating-the-axios-npm-supply-chain-compromise/" rel="noopener noreferrer"&gt;Microsoft Security Blog: Mitigating the Axios npm supply chain compromise&lt;/a&gt; ↩&lt;/p&gt;
&lt;/li&gt;

&lt;li id="fn5"&gt;
&lt;p&gt;&lt;a href="https://thehackernews.com/2026/03/axios-supply-chain-attack-pushes-cross.html" rel="noopener noreferrer"&gt;The Hacker News: Axios Supply Chain Attack&lt;/a&gt; ↩&lt;/p&gt;
&lt;/li&gt;

&lt;li id="fn6"&gt;
&lt;p&gt;&lt;a href="https://gist.github.com/shaanmajid/fa1bb71f063476f3e8fa726f54fd2d37" rel="noopener noreferrer"&gt;shaanmajid's registry evidence analysis&lt;/a&gt; ↩&lt;/p&gt;
&lt;/li&gt;

&lt;li id="fn7"&gt;
&lt;p&gt;&lt;a href="https://cloud.google.com/blog/topics/threat-intelligence/unc1069-targets-cryptocurrency-ai-social-engineering" rel="noopener noreferrer"&gt;Google Cloud: UNC1069 social engineering playbook&lt;/a&gt; ↩&lt;/p&gt;
&lt;/li&gt;

&lt;li id="fn8"&gt;
&lt;p&gt;&lt;a href="https://www.csa.gov.sg/alerts-and-advisories/advisories/ad-2026-002/" rel="noopener noreferrer"&gt;Singapore CSA Advisory AD-2026-002&lt;/a&gt; ↩&lt;/p&gt;
&lt;/li&gt;

&lt;/ol&gt;

</description>
      <category>ai</category>
      <category>cybersecurity</category>
      <category>webdev</category>
      <category>opensource</category>
    </item>
    <item>
      <title>AI Doesn't Replace Thinking. It Replaces Forgetting.</title>
      <dc:creator>Sameer Khan</dc:creator>
      <pubDate>Fri, 03 Apr 2026 05:07:15 +0000</pubDate>
      <link>https://forem.com/monkfromearth/ai-doesnt-replace-thinking-it-replaces-forgetting-1hni</link>
      <guid>https://forem.com/monkfromearth/ai-doesnt-replace-thinking-it-replaces-forgetting-1hni</guid>
      <description>&lt;p&gt;&lt;strong&gt;TL;DR:&lt;/strong&gt; You've read thousands of articles. You can use almost none of them right now. The bottleneck in knowledge work isn't thinking. It's forgetting. Andrej Karpathy just showed a system where an LLM organizes your research into a living wiki, and the questions you ask feed back into it. No elaborate RAG pipelines. Just markdown, folders, and a loop that compounds.&lt;/p&gt;




&lt;p&gt;Think about how many articles you've read this year. Papers you've skimmed. Threads you've bookmarked. Podcasts you half-listened to while cooking.&lt;/p&gt;

&lt;p&gt;Now ask yourself: how many of those insights are available to you &lt;em&gt;right now&lt;/em&gt;, in this moment, for the thing you're working on today?&lt;/p&gt;

&lt;p&gt;The number is embarrassingly close to zero. Not because you're lazy. Not because you're not smart. Because your brain is a leaky bucket, and it always has been. You pour knowledge in, and most of it drains out before you need it. Every new project, every new question, you start from scratch. Even though the insight you need is somewhere in your past. You just can't reach it.&lt;/p&gt;

&lt;p&gt;That's the real problem with knowledge work. Not thinking. Forgetting.&lt;/p&gt;

&lt;h2&gt;
  
  
  What did Karpathy actually build?
&lt;/h2&gt;

&lt;p&gt;Andrej Karpathy, former Senior Director of AI at Tesla and founding member of OpenAI, &lt;a href="https://x.com/karpathy/status/2039805659525644595" rel="noopener noreferrer"&gt;shared a system&lt;/a&gt; this week that sounds almost too simple to be interesting. &lt;sup id="fnref1"&gt;1&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;Raw documents go into a folder. Articles, papers, repos, datasets, anything. An LLM reads them, then compiles everything into a structured markdown wiki. Summaries, backlinks, conceptual categories. Obsidian serves as the frontend. You browse the wiki like a personal Wikipedia.&lt;/p&gt;

&lt;p&gt;That part alone isn't new. People have been building "second brains" in Notion and Obsidian for years. The difference is what happens next.&lt;/p&gt;

&lt;p&gt;When you ask the system a question, the LLM doesn't just answer it. It researches its own wiki and synthesizes a response. Karpathy then often &lt;strong&gt;files that response back into the knowledge base&lt;/strong&gt;. The wiki grows. The next question is easier to answer because the system now knows more than it did an hour ago.&lt;/p&gt;

&lt;p&gt;Karpathy says he's running this at around 100 articles and 400,000 words. No elaborate RAG pipeline. Just organized markdown and an LLM that maintains its own indexes. "I rarely touch it directly," &lt;a href="https://x.com/karpathy/status/2039805659525644595" rel="noopener noreferrer"&gt;he wrote&lt;/a&gt;. &lt;sup id="fnref1"&gt;1&lt;/sup&gt;&lt;/p&gt;
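The moving parts are small enough to sketch as a directory layout. Every name below is mine, not Karpathy's, and stub files stand in for the LLM's output to show the shape of the data:

```shell
# Skeleton of the loop: raw documents in, LLM-compiled wiki out, answers
# filed back into the wiki so the next question starts warmer.
mkdir -p kb/raw kb/wiki/notes kb/wiki/answers

# 1. Drop a raw document into the inbox.
printf 'Long article about context windows vs structured memory...\n' \
  > kb/raw/article-001.md

# 2. (LLM step, stubbed here) compile raw docs into linked markdown notes.
printf '# Context windows\n\nSummary of [[article-001]].\n' \
  > kb/wiki/notes/context-windows.md

# 3. File an answered question back into the wiki.
printf '# Q: Do we need bigger context windows?\n\nSee [[context-windows]].\n' \
  > kb/wiki/answers/q-context-windows.md

find kb -type f
```

Point Obsidian at `kb/wiki` for the browsable frontend; the `[[backlinks]]` are the only structure the loop needs.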

&lt;p&gt;Think of it like a research assistant who doesn't just answer your questions. They reorganize your entire filing cabinet after every conversation, so the next question takes half the time.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5f4ohnjqa9js7kcdbx1n.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5f4ohnjqa9js7kcdbx1n.png" alt="The compounding knowledge loop: raw docs flow into an LLM wiki, questions make the wiki richer, answers get filed back" width="800" height="422"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Why does the loop matter more than the tool?
&lt;/h2&gt;

&lt;p&gt;The tool is markdown files and Obsidian. You could rebuild this in a weekend. The &lt;em&gt;loop&lt;/em&gt; is what makes it work.&lt;/p&gt;

&lt;p&gt;Most "second brain" systems die. You start a Notion workspace, organize it beautifully for two weeks, then life happens and it decays. The organization was the hard part, and it depended entirely on you showing up to maintain it. You were the bottleneck.&lt;/p&gt;

&lt;p&gt;Karpathy's system flips that. The LLM maintains the organization. The LLM runs "health checks" to find inconsistencies and suggest new articles. The system maintains itself. Every time you use it, it gets better. Not because you put in extra effort, but because &lt;em&gt;using it is the maintenance&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;That's compound interest applied to knowledge. Each question doesn't just give you an answer. It makes every future question cheaper. The blank page dies, not because AI writes for you, but because AI &lt;em&gt;remembers&lt;/em&gt; for you.&lt;/p&gt;

&lt;p&gt;I wrote about &lt;a href="https://dev.to/blogs/karpathy-autoresearch-explained-ml-to-marketing"&gt;Karpathy's AutoResearch&lt;/a&gt; two days ago. A loop that runs ML experiments while you sleep. Same pattern showing up again: &lt;strong&gt;the loop is the invention, not the tool&lt;/strong&gt;. A simple cycle that compounds is worth more than a sophisticated tool that doesn't.&lt;/p&gt;

&lt;h2&gt;
  
  
  Do we even need bigger context windows?
&lt;/h2&gt;

&lt;p&gt;Here's the contrarian part. The AI industry is racing toward bigger context windows. 1 million tokens. 10 million. Bigger windows and structured memory aren't mutually exclusive, but the default assumption is clear: if we can fit everything into one prompt, the model will figure it out.&lt;/p&gt;

&lt;p&gt;Karpathy's system uses markdown files and folders.&lt;/p&gt;

&lt;p&gt;Developer &lt;a href="https://x.com/jumperz/status/2039826228224430323" rel="noopener noreferrer"&gt;JUMPERZ put it well&lt;/a&gt;: "Agents that own their own knowledge layer do not need infinite context windows. They need good file organisation and the ability to read their own indexes. Way cheaper, way more scalable, and way more inspectable than stuffing everything into one giant prompt." &lt;sup id="fnref2"&gt;2&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;There's something familiar here. I keep noticing that &lt;a href="https://dev.to/blogs/good-products-hard-to-vary"&gt;constraints beat complexity&lt;/a&gt;. In product design, in engineering, and now in AI architecture. The pneumatic tyre hasn't changed in a century. The iPhone has been the same rectangle since 2017. And maybe the answer to AI's memory problem isn't a bigger brain. It's a better filing cabinet.&lt;/p&gt;

&lt;p&gt;A 10-million-token context window is brute force. An organized knowledge base with good indexes is architecture. One scales with money. The other scales with use.&lt;/p&gt;

&lt;h2&gt;
  
  
  Where does this go?
&lt;/h2&gt;

&lt;p&gt;Karpathy sees the endpoint. "Every question to a frontier-grade LLM spawns a team of LLMs to automate the whole thing," &lt;a href="https://x.com/karpathy/status/2039805659525644595" rel="noopener noreferrer"&gt;he wrote&lt;/a&gt;. "Iteratively construct an entire ephemeral wiki, lint it, loop a few times, then write a full report. Way beyond a &lt;code&gt;.decode()&lt;/code&gt;." &lt;sup id="fnref1"&gt;1&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;Today, it's one person and one loop building a knowledge base over weeks. Tomorrow, a swarm of agents builds an entire wiki &lt;em&gt;per question&lt;/em&gt;. Assembling, cross-referencing, linting for errors, then handing you the distilled result. Not a chat response. A researched report backed by a temporary knowledge base that was purpose-built for your specific question, then discarded.&lt;/p&gt;

&lt;p&gt;The compound interest endpoint isn't just "you never start from zero." It's "you never even have to ask twice."&lt;/p&gt;

&lt;h2&gt;
  
  
  Key takeaways
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;The bottleneck in knowledge work isn't thinking. It's forgetting.&lt;/strong&gt; You've already had most of the insights you need. You just can't connect them to what you're working on now.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Karpathy's system is a loop, not a tool.&lt;/strong&gt; Raw documents → LLM-compiled wiki → Q&amp;amp;A that feeds back into the wiki → compound growth. No elaborate RAG. Just markdown and folders.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Self-maintaining beats self-organizing.&lt;/strong&gt; Traditional second brains decay because you're the maintenance bottleneck. This system maintains itself. Using it &lt;em&gt;is&lt;/em&gt; the upkeep.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Bigger context windows might be the wrong bet.&lt;/strong&gt; Good file organization and LLM-maintained indexes can be cheaper, more scalable, and more inspectable than stuffing everything into one massive prompt.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;The blank page is a symptom.&lt;/strong&gt; The disease is forgetting. The cure is a system where every question makes the next one easier.&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;I break down things like this on &lt;a href="https://linkedin.com/in/monkfromearth" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;, &lt;a href="https://x.com/monkfromearth" rel="noopener noreferrer"&gt;X&lt;/a&gt;, and &lt;a href="https://instagram.com/monkfrom.earth" rel="noopener noreferrer"&gt;Instagram&lt;/a&gt;. Usually shorter, sometimes as carousels. If this resonated, you'd probably like those too.&lt;/p&gt;




&lt;ol&gt;

&lt;li id="fn1"&gt;
&lt;p&gt;&lt;a href="https://x.com/karpathy/status/2039805659525644595" rel="noopener noreferrer"&gt;Andrej Karpathy on X: LLM Knowledge Bases&lt;/a&gt; ↩&lt;/p&gt;
&lt;/li&gt;

&lt;li id="fn2"&gt;
&lt;p&gt;&lt;a href="https://x.com/jumperz/status/2039826228224430323" rel="noopener noreferrer"&gt;JUMPERZ on X: commentary on Karpathy's knowledge base system&lt;/a&gt; ↩&lt;/p&gt;
&lt;/li&gt;

&lt;/ol&gt;

</description>
      <category>ai</category>
      <category>llm</category>
      <category>productivity</category>
      <category>agents</category>
    </item>
  </channel>
</rss>
