<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Viktor Lázár</title>
    <description>The latest articles on Forem by Viktor Lázár (@lazarv).</description>
    <link>https://forem.com/lazarv</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1115191%2Fd5ca33e8-94a4-4cdf-9452-400c95556d9c.jpeg</url>
      <title>Forem: Viktor Lázár</title>
      <link>https://forem.com/lazarv</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/lazarv"/>
    <language>en</language>
    <item>
      <title>Runtime Is Not the Problem</title>
      <dc:creator>Viktor Lázár</dc:creator>
      <pubDate>Wed, 13 May 2026 12:21:36 +0000</pubDate>
      <link>https://forem.com/lazarv/runtime-is-not-the-problem-1b9</link>
      <guid>https://forem.com/lazarv/runtime-is-not-the-problem-1b9</guid>
      <description>&lt;p&gt;The most popular story about modern UI frameworks is wonderfully clean. Svelte is small because it is compiled. React is large because it ships a runtime. One moves work to build time; the other carries a machine into the browser. If the question is why a small Svelte app often starts smaller than a small React app, that story is not wrong.&lt;/p&gt;

&lt;p&gt;It is only too small.&lt;/p&gt;

&lt;p&gt;The important distinction is not compiled vs runtime. The important distinction is &lt;strong&gt;specialized output vs packaged capability&lt;/strong&gt;. A compiler can specialize the program because it sees the component. A runtime can be small if it is packaged as a set of capabilities the application actually uses. The waste appears when a runtime is distributed as a single old monolith: one root API makes the app pay for the whole engine, including paths that only matter to applications much more complex than the one being shipped.&lt;/p&gt;

&lt;p&gt;That is not the inevitable cost of React's model. It is the cost of React's packaging shape.&lt;/p&gt;

&lt;p&gt;React is not large because runtime frameworks must be large. React is large because the browser-facing React we install today is still assembled like a general-purpose engine rather than a capability graph. If that graph were exposed to bundlers and compilers as static structure, dead-code elimination and tree shaking could do much more of the work people currently credit only to compiled frameworks.&lt;/p&gt;

&lt;p&gt;The compiler is not magic. The runtime is not the enemy. The question is where the framework pays for generality, and whether the application is allowed to decline the parts of that generality it does not use.&lt;/p&gt;

&lt;h2&gt;The Clean Story&lt;/h2&gt;

&lt;p&gt;Svelte's pitch has always been easy to understand. The &lt;a href="https://svelte.dev/" rel="noopener noreferrer"&gt;official site&lt;/a&gt; describes Svelte as a framework that uses a compiler so components do minimal work in the browser. &lt;a href="https://v4.svelte.dev/" rel="noopener noreferrer"&gt;Older Svelte copy&lt;/a&gt; made the contrast even sharper: move as much work as possible out of the browser and into the build step. That is a powerful architectural statement because the browser receives code shaped around the application, not a general interpreter for a component model.&lt;/p&gt;

&lt;p&gt;React's browser story is different. A React app calls &lt;code&gt;createRoot&lt;/code&gt; or &lt;code&gt;hydrateRoot&lt;/code&gt; from &lt;a href="https://react.dev/reference/react-dom/client" rel="noopener noreferrer"&gt;&lt;code&gt;react-dom/client&lt;/code&gt;&lt;/a&gt;, and from that moment React owns the tree. The application ships a runtime because the runtime is the thing that keeps React's programming model true after the JavaScript has loaded.&lt;/p&gt;

&lt;p&gt;At the scale of a counter, the contrast is almost unfair.&lt;/p&gt;

&lt;p&gt;A compiler can look at a tiny counter and emit code that changes the text when the number changes. A runtime framework has to make even that counter an instance of a broader component language. That language is the source of React's power, but it also means the smallest program starts by importing a model built for much larger programs.&lt;/p&gt;
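&lt;p&gt;A minimal sketch makes the shape concrete. This is not real Svelte output; it is an invented illustration of what specialized code looks like: the update path is a direct imperative write, because the compiler already knew which output depends on which state.&lt;/p&gt;

```javascript
// Hypothetical sketch of compiler-specialized output for a counter.
// Not actual Svelte output; the point is the shape: a direct, imperative
// update instead of a general diffing machine shipped to the browser.
function createCounter(render) {
  let count = 0;
  render(String(count)); // initial write
  return {
    increment() {
      count += 1;
      render(String(count)); // only this output depends on count
    },
  };
}

// `render` stands in for a DOM text-node write.
let shown = "";
const counter = createCounter(function (text) { shown = text; });
counter.increment();
counter.increment();
console.log(shown); // "2"
```

&lt;p&gt;A runtime framework reaches the same DOM write, but through a general update model that must exist in the bundle before the first click.&lt;/p&gt;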

&lt;p&gt;This is why small examples make compiled frameworks look so good. They are not paying for the general case before the general case appears.&lt;/p&gt;

&lt;p&gt;But the small example also distorts the argument. A real application is not one counter. It is product pressure accumulated over time. Local decisions become global constraints. Dependencies surround the framework. Behavior that began in one place starts to matter somewhere else. At that point, "compiled vs runtime" stops being a binary and becomes a curve.&lt;/p&gt;

&lt;h2&gt;The Size Curve&lt;/h2&gt;

&lt;p&gt;The size curve begins with the floor: the amount of JavaScript you ship before the application has done anything interesting. React's floor is visible because &lt;code&gt;react&lt;/code&gt; and &lt;code&gt;react-dom&lt;/code&gt; are real packages with real client runtime code. Svelte's floor is lower because more of the component model has been consumed by the compiler before the browser ever sees it.&lt;/p&gt;

&lt;p&gt;But the floor is only the beginning. The slope matters just as much, because the application does not stay at its starter size. A framework that starts tiny can grow quickly if its compiled output repeats itself. A framework with a higher floor can become relatively cheaper if its runtime amortizes shared behavior well.&lt;/p&gt;
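&lt;p&gt;The curve can be written down as a toy cost model. The numbers below are illustrative, not measurements; the point is only that a low floor with a steep slope and a high floor with a flat slope cross somewhere.&lt;/p&gt;

```javascript
// Toy cost model with illustrative numbers, not measurements:
// shipped bytes = fixed floor + marginal cost per component.
function shippedKb(floorKb, perComponentKb, components) {
  return floorKb + perComponentKb * components;
}

// A low floor whose output repeats vs a higher floor that amortizes well.
console.log(shippedKb(5, 2.0, 5));    // 15   -- "compiled-style", tiny app
console.log(shippedKb(40, 0.5, 5));   // 42.5 -- "runtime-style", tiny app
console.log(shippedKb(5, 2.0, 100));  // 205  -- "compiled-style", grown app
console.log(shippedKb(40, 0.5, 100)); // 90   -- "runtime-style", grown app
```

&lt;p&gt;Real frameworks do not follow straight lines, but the comparison only makes sense once both the floor and the slope are on the table.&lt;/p&gt;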

&lt;p&gt;Compiled frameworks often have a low floor because they emit specialized code. But specialized code can repeat. If every component carries its own little version of a pattern, the app may pay for the same idea many times. Good compilers avoid this by sharing helpers and by lowering common patterns into reusable runtime pieces, which is another way of saying that compiled frameworks also have runtimes. The difference is that those runtimes have already been filtered through the compiler's view of the app.&lt;/p&gt;

&lt;p&gt;Runtime frameworks often have a higher floor because they ship the general machine once. But after that first payment, repeated components can be cheap because they are data for the same machine. The runtime does not need a new implementation of the update model for every component. The application describes the tree; the runtime interprets the description.&lt;/p&gt;
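&lt;p&gt;The same point as a sketch: one generic interpreter is paid for once, and each additional component is just another description fed into it. This is an invented illustration of the shape, not React's actual implementation.&lt;/p&gt;

```javascript
// Illustrative sketch, not React internals: one shared machine interprets
// component descriptions, so repeated components are cheap data.
function render(node) {
  if (typeof node === "string") return node;
  const children = (node.children || []).map(render).join(" ");
  return "[" + node.tag + " " + children + "]";
}

// Two buttons do not need two implementations, only two descriptions.
const button = { tag: "button", children: ["Click"] };
const panel = { tag: "panel", children: [button, button] };
console.log(render(panel)); // "[panel [button Click] [button Click]]"
```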

&lt;p&gt;So the size question should not be "which framework is smaller?" That question hides the shape of the payment. A framework has an initial fixed cost, and it has a growth rate as features are added. The architecture is healthy when the route mostly pays for what it uses. Repeated behavior should be amortized. Absent capability should remain absent from the output.&lt;/p&gt;

&lt;p&gt;Tiny islands usually favor the compiler. Larger applications make the answer depend on the curve rather than the category. Once product dependencies dominate the bundle, the framework tax may look smaller in percentage terms, but it has become more architectural: it follows the application into every place that wanted to stay small.&lt;/p&gt;

&lt;p&gt;The useful measurement is not the total size of &lt;code&gt;node_modules&lt;/code&gt;. It is not even the total output directory. It is the JavaScript on the critical path for a user action. The important bytes are the ones that stand between the user and the first interactive route. After that, the question is whether new code arrives only when the user moves into new behavior, or whether the framework has forced unrelated capability onto the path.&lt;/p&gt;

&lt;p&gt;That last question is where React becomes interesting.&lt;/p&gt;

&lt;h2&gt;React's Monolith Problem&lt;/h2&gt;

&lt;p&gt;React's public model is beautifully small. Its user-facing idea is compact enough that it survived a decade of ecosystem churn: components make UI feel like ordinary program structure.&lt;/p&gt;

&lt;p&gt;The shipped runtime is not compact in the same way.&lt;/p&gt;

&lt;p&gt;That is not a criticism of the React team's engineering discipline. React carries a lot because React does a lot. It has to keep a very broad rendering contract true across browsers, across rendering modes, and across years of ecosystem assumptions. The weight is not accidental. The question is whether all of that weight belongs on every route.&lt;/p&gt;

&lt;p&gt;The problem is that the browser package is arranged around the general product, not around the current application's capability set.&lt;/p&gt;

&lt;p&gt;A simple client-rendered widget does not ask the same question as a server-rendered application that hydrates, streams, recovers from errors, and coordinates work across a large tree. A tiny island with one click handler should be allowed to stay tiny. A component that only needs local interaction should not inherit the full mental weight of an application root.&lt;/p&gt;

&lt;p&gt;Today, those distinctions are mostly semantic. They matter to the developer. They matter to the runtime once the app is running. But they are not exposed as a clean static import graph that a bundler can prune aggressively.&lt;/p&gt;

&lt;p&gt;The package boundary says: this is React DOM for the client.&lt;/p&gt;

&lt;p&gt;It does not say: this route needs the small DOM-and-state subset, while the machinery for richer rendering modes can stay out of the bundle.&lt;/p&gt;

&lt;p&gt;That second sentence is what a capability-shaped React would need to make visible.&lt;/p&gt;

&lt;h2&gt;The Host Is Part of the Cost&lt;/h2&gt;

&lt;p&gt;There is another asymmetry hidden inside the usual Svelte-vs-React comparison. Svelte's mainstream target is the web DOM. That focus is part of why the compiler can be so effective. It knows the host it is lowering into. It can turn a component into browser-shaped code because the browser is the world the component is meant to inhabit.&lt;/p&gt;

&lt;p&gt;That is not an insult. It is a strength. A compiler gains power when the target is narrow enough to make strong decisions.&lt;/p&gt;

&lt;p&gt;React's abstraction boundary is different. React DOM is not React; it is one renderer for React. The component model sits above the host. PDF and canvas renderers make the point clearly: React's component approach is not inherently a DOM approach. Those targets do not make the browser bundle smaller. But they do explain why React wants to be a component model before it is a DOM compiler.&lt;/p&gt;

&lt;p&gt;This matters because some of React's weight is the price of that separation. A framework tied closely to the DOM can specialize earlier. A framework that treats the DOM as one host among several has to preserve a more abstract contract. That contract is valuable. It lets the same mental model cross output targets in a way a DOM-first compiler does not naturally promise.&lt;/p&gt;

&lt;p&gt;But the conclusion should not be that every DOM app must carry the full cost of host-agnostic generality. The renderer boundary is exactly where capability packaging should help. If an application is only using React as a small DOM island, it should not pay as if it were exercising the entire host-independent model. React's multi-target nature explains the need for abstraction. It does not justify an undifferentiated browser bundle.&lt;/p&gt;

&lt;h2&gt;Tree Shaking Needs Shape&lt;/h2&gt;

&lt;p&gt;Tree shaking is often described as if it were a magic vacuum that removes whatever code the app does not use. It is less magical than that. Bundlers need static structure. &lt;a href="https://webpack.js.org/guides/tree-shaking/" rel="noopener noreferrer"&gt;Webpack's own guide&lt;/a&gt; is blunt about the ingredients: ES module syntax, production optimizations, and accurate side-effect information are what let unused exports and whole modules disappear.&lt;/p&gt;

&lt;p&gt;This is why library shape matters so much.&lt;/p&gt;

&lt;p&gt;If a package exposes independent modules with pure exports, the bundler has something to understand. If a package exposes one entry point whose evaluation may affect the whole runtime, the bundler has to be conservative. In JavaScript, conservatism means bytes. If evaluating a module might matter, the module stays.&lt;/p&gt;
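&lt;p&gt;The shakable shape is concrete and boring, which is the point. A hypothetical utility package might look like this; with ES module syntax and a side-effect-free declaration in its package.json, a bundler can drop whatever the app never imports.&lt;/p&gt;

```javascript
// A hypothetical utility module in the shape bundlers can prune:
// independent, pure, named ESM exports. With "sideEffects": false in the
// package's package.json, evaluating this module is known to be harmless,
// so unused exports and whole unused modules can disappear.
export function formatPrice(cents) {
  return (cents / 100).toFixed(2);
}

export function clampQuantity(n) {
  return Math.max(1, Math.min(99, n));
}

// An app that imports only formatPrice lets the bundler drop clampQuantity:
// import { formatPrice } from "shop-utils";
```

&lt;p&gt;Nothing about this requires a compiler. It only requires that the package boundary be honest about what is independent.&lt;/p&gt;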

&lt;p&gt;React is especially difficult here because many capabilities are not normal userland functions. They are semantics inside the renderer. The app imports the renderer, not a set of isolated implementations the bundler can reason about one by one.&lt;/p&gt;

&lt;p&gt;That is the old monolith shape.&lt;/p&gt;

&lt;p&gt;Not old because the code is bad. Old because the distribution model assumes that the framework is one coherent runtime and the application either uses that runtime or it does not. That assumption made sense when coarse package boundaries were normal and framework competition was mostly about programming model rather than transferred JavaScript. It makes less sense now that applications are expected to move through finer-grained delivery paths and build tools care deeply about static structure.&lt;/p&gt;

&lt;p&gt;Dead-code elimination cannot remove a capability it cannot see.&lt;/p&gt;

&lt;h2&gt;What a Capability Graph Would Mean&lt;/h2&gt;

&lt;p&gt;Imagine React distributed less like a runtime blob and more like a set of capabilities.&lt;/p&gt;

&lt;p&gt;The application root would not mean "give me the whole browser renderer." It would declare a rendering mode, and that mode would imply the runtime capabilities needed to preserve React's semantics for that part of the app.&lt;/p&gt;

&lt;p&gt;Hooks would not be one undifferentiated runtime assumption. The common local primitives would form the base layer; coordination primitives would be added only when the app uses them. Some of that machinery would still be shared, and some of it would be impossible to remove in practice because the component model depends on it. But the graph would at least describe the difference between "this app uses local state" and "this app uses the full coordination model React exposes."&lt;/p&gt;

&lt;p&gt;The DOM event system would follow the same rule. A form route should not pay for event families it never uses. A static island with one button should not inherit the same event surface as a canvas-heavy editor.&lt;/p&gt;

&lt;p&gt;Hydration would be a capability, not a tax hidden behind the same import shape as client rendering. The richer runtime features would be visible in the graph instead of being treated as ambient facts of the renderer. Development diagnostics would remain development-only with a boundary that production bundlers can see without heroic inference.&lt;/p&gt;

&lt;p&gt;The compiler would participate, but it would not replace the runtime. JSX compilation and &lt;a href="https://react.dev/learn/react-compiler" rel="noopener noreferrer"&gt;React Compiler&lt;/a&gt; output could describe what the app actually uses. Framework and bundler layers could then carry that information into the package graph. This is the shape of the missing information: the app already contains the answer, but the framework does not package itself in a way that lets the build pipeline use the answer fully.&lt;/p&gt;

&lt;p&gt;In that world, React would still be a runtime framework. It would still provide the live component semantics people choose React for. But a small app would no longer pay for the whole semantic universe before it had earned it.&lt;/p&gt;

&lt;p&gt;That is the part the compiled-vs-runtime argument misses. A runtime can be tree-shaken if it is designed as something tree-shakable.&lt;/p&gt;

&lt;h2&gt;Compilation Is a Form of Packaging&lt;/h2&gt;

&lt;p&gt;The word "compiled" makes Svelte sound like it lives in a different category, but compilation is also a packaging strategy.&lt;/p&gt;

&lt;p&gt;The compiler looks at the application and decides what code to emit, filtering the runtime surface through what the program can prove at build time. The result is not "no runtime." The result is a runtime surface that has already been filtered through the program.&lt;/p&gt;

&lt;p&gt;That filtering is the real advantage.&lt;/p&gt;

&lt;p&gt;A compiler gets to ask: what does this component actually do?&lt;/p&gt;

&lt;p&gt;A capability-shaped runtime gets to ask almost the same question: what capabilities does this application actually use?&lt;/p&gt;

&lt;p&gt;Those two approaches are closer than the marketing categories suggest. The best future is probably not compiled frameworks on one side and runtime frameworks on the other. It is compiler-assisted runtimes with small, explicit capability graphs. It is frameworks whose core semantics can remain general while their shipped code becomes specific.&lt;/p&gt;

&lt;p&gt;Svelte starts from specialization and adds shared machinery when specialization would repeat too much. React starts from shared machinery and could recover specialization by making the machinery divisible. The direction is different. The destination is similar: the browser should receive the smallest faithful implementation of the app's semantics.&lt;/p&gt;

&lt;h2&gt;The App Size Conversation We Should Have&lt;/h2&gt;

&lt;p&gt;When people compare Svelte and React bundle sizes, they often compare starter apps. Starter apps are useful because they reveal the floor. They are also dangerous because they make the floor feel like the whole building.&lt;/p&gt;

&lt;p&gt;A better comparison would walk the same frameworks through a growth path. It would start with a tiny island and keep adding the kinds of pressure real products accumulate. The point would not be to make the examples impressive. The point would be to see how the framework's fixed cost behaves as the application stops being a toy.&lt;/p&gt;

&lt;p&gt;For each one, the measurement should separate framework cost from app cost and route cost from total build output. A framework that looks expensive at the beginning may disappear behind the product later. A framework that looks tiny at the beginning may duplicate enough specialized output to make the curve less obvious. A framework that lazy-loads well may win on the first route even if its total app output is larger.&lt;/p&gt;

&lt;p&gt;The point of this exercise is not to crown a universal winner. The point is to see the shape of payment.&lt;/p&gt;

&lt;p&gt;Svelte's bet is that many UI programs are better served by paying at build time and shipping specialized code. React's bet has been that many UI programs are better served by a stable runtime model that can express a very wide range of behavior. Both bets are legitimate. The problem is when React's bet is implemented as if every page needs the full runtime model up front.&lt;/p&gt;

&lt;p&gt;That is where React's size becomes less philosophical and more mechanical.&lt;/p&gt;

&lt;h2&gt;The Smaller React That Could Exist&lt;/h2&gt;

&lt;p&gt;There is a smaller React hiding inside React.&lt;/p&gt;

&lt;p&gt;Not Preact. Not a compatibility clone. Not a new framework with React-like syntax. React itself, if its distribution model matched the way modern applications are built.&lt;/p&gt;

&lt;p&gt;That React would have a small root for client-only islands. Server-rendered roots would opt into hydration as a visible capability. More advanced rendering behavior would be added by use, not smuggled in as part of the default client entry point. The important change would be static structure: bundlers would be able to remove entire subtrees without understanding React's internals, and framework adapters could declare the rendering mode they need per route.&lt;/p&gt;
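&lt;p&gt;In import form, the difference might look something like this. None of these entry points exist today; the module names are invented purely to illustrate what capability-shaped packaging would make visible to a bundler.&lt;/p&gt;

```jsx
// Entirely hypothetical entry points -- these modules do not exist.
// The shape is the point: each import names a capability, so a bundler
// can drop whole subtrees the application never references.
import { createIslandRoot } from "react-dom/island";    // tiny client-only root
import { hydrateRoot } from "react-dom/hydration";      // hydration as an opt-in
import { enableStreaming } from "react-dom/streaming";  // richer modes by use
```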

&lt;p&gt;This would not be easy. React's internal semantics are deeply connected. Splitting a renderer after years of integrated design is harder than designing a small library from scratch. Some capabilities that look optional from the outside may share invariants that make them hard to separate safely. The compatibility contract is enormous, and every new boundary is another place where bugs can hide.&lt;/p&gt;

&lt;p&gt;But difficulty is not impossibility, and it is not a rebuttal to the architectural point.&lt;/p&gt;

&lt;p&gt;The React programming model does not require the browser bundle to be a monolith. It requires a runtime capable of preserving React's semantics for the capabilities the application uses. Those are different requirements. One is historical packaging. The other is the actual product.&lt;/p&gt;

&lt;p&gt;If React were designed today for the way applications are delivered now, it is hard to imagine it would expose the same coarse client runtime as the only normal path. It would be capability-first from the beginning because the web now punishes undifferentiated JavaScript more visibly than it did when React's package shape hardened.&lt;/p&gt;

&lt;h2&gt;Runtime Is Still Valuable&lt;/h2&gt;

&lt;p&gt;It is tempting, after all of this, to conclude that runtimes are a regrettable compromise. They are not.&lt;/p&gt;

&lt;p&gt;Runtimes buy consistency. They make dynamic behavior composable across the lifetime of an app and across code-splitting boundaries. They can also see more of the live application than any one compiled component can, which makes the runtime behavior richer than a pile of emitted code.&lt;/p&gt;

&lt;p&gt;Those are real advantages. They are why React won so much mindshare in the first place.&lt;/p&gt;

&lt;p&gt;The mistake is treating runtime value and runtime size as inseparable. A runtime is not a single object by nature. It can be layered, declared, and assisted by a compiler that proves which layers are needed. A framework can keep a high-level programming model without forcing every route to ship every lower-level mechanism.&lt;/p&gt;

&lt;p&gt;The right criticism of React is not "React has a runtime."&lt;/p&gt;

&lt;p&gt;The right criticism is "React's runtime is not packaged according to the capabilities of the app."&lt;/p&gt;

&lt;p&gt;That is a much more useful criticism because it points toward a better React rather than toward a world where every framework has to become Svelte.&lt;/p&gt;

&lt;h2&gt;The Real Divide&lt;/h2&gt;

&lt;p&gt;The real divide is not compiled vs runtime.&lt;/p&gt;

&lt;p&gt;The real divide is &lt;strong&gt;specific vs undifferentiated&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Specific means the browser receives code shaped around the current program. For a compiled framework, that shape comes from emitted output. For a runtime framework, it has to come from capability boundaries. Undifferentiated means the framework ships its generality before the app has asked for it.&lt;/p&gt;

&lt;p&gt;This is the lens that makes the argument clearer.&lt;/p&gt;

&lt;p&gt;Svelte is not small because compilers are holy. It is small because the compiler hands the bundler a more app-shaped output. React is not large because runtimes are doomed. It is large because the output is still too framework-shaped.&lt;/p&gt;

&lt;p&gt;The browser does not care whether a byte came from a compiler or a runtime package. It cares whether the byte is necessary for the current experience. Users do not reward architectural purity. They reward pages that load quickly, become interactive quickly, and stay responsive under real product pressure.&lt;/p&gt;

&lt;p&gt;So the question for any framework should be simple:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Can the smallest app receive the smallest faithful version of your model?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If the answer is yes, the framework scales downward. It can start small without being a toy. If the answer is no, the framework may still be powerful, mature, and worth choosing, but its size problem is not a law of nature. It is a distribution problem.&lt;/p&gt;

&lt;p&gt;React could be small. Not by becoming Svelte. Not by abandoning runtime semantics. By admitting, in its package shape, that applications do not use frameworks all at once.&lt;/p&gt;

&lt;p&gt;They use capabilities.&lt;/p&gt;

&lt;p&gt;And capabilities are exactly the kind of thing a modern build pipeline can remove when they are absent, if only the framework is built to let them be absent.&lt;/p&gt;

</description>
      <category>architecture</category>
      <category>frontend</category>
      <category>javascript</category>
      <category>react</category>
    </item>
    <item>
      <title>RSC Is Not the Input Boundary</title>
      <dc:creator>Viktor Lázár</dc:creator>
      <pubDate>Sat, 09 May 2026 18:57:17 +0000</pubDate>
      <link>https://forem.com/lazarv/rsc-is-not-the-input-boundary-2aao</link>
      <guid>https://forem.com/lazarv/rsc-is-not-the-input-boundary-2aao</guid>
      <description>&lt;p&gt;Every major React Server Components security release seems to trigger the same little ritual. An advisory lands, someone sees the letters RSC, and a few hours later the lesson has already collapsed into: "RSC is bad."&lt;/p&gt;

&lt;p&gt;That lesson is convenient. It is also imprecise.&lt;/p&gt;

&lt;p&gt;The same thing happened around the Next.js security release from May 7, 2026. Vercel shipped fixes for several Next.js and upstream React issues, including a high-severity denial-of-service vulnerability affecting the React Server Components packages. But the interesting part of the advisory was not that rendering a Server Component is inherently dangerous. The interesting part was that specially crafted HTTP requests sent to Server Function endpoints could cause excessive CPU usage or out-of-memory failures while the payload was being processed.&lt;/p&gt;

&lt;p&gt;That distinction is not a footnote. It is the center of the issue.&lt;/p&gt;

&lt;p&gt;A Server Component is not the same attack surface as a Server Function. One sends a representation of a component tree from the server to the client. The other receives a payload from the client, asks the server runtime to deserialize it, and then invokes server-side code. They can both live inside the RSC model. They can both involve the Flight protocol. But from a security perspective, they ask opposite questions.&lt;/p&gt;

&lt;p&gt;The Server Component question is: what are we allowing to leave the server and reach the client?&lt;/p&gt;

&lt;p&gt;The Server Function question is: what are we allowing to enter the server from the client?&lt;/p&gt;

&lt;p&gt;The second one is an input boundary. If that boundary is enforced too late, the failure is not that the RSC model is broken. The failure is that an RPC-shaped input surface was treated as if it were merely a framework ergonomic.&lt;/p&gt;

&lt;h2&gt;The Category Error&lt;/h2&gt;

&lt;p&gt;There is a recurring confusion in RSC discussions. People often talk about "RSC" as one thing, when in practice they are combining several distinct mechanisms:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Server Components: components that run on the server.&lt;/li&gt;
&lt;li&gt;Client Components: components that run in the browser, while still participating in the same React tree.&lt;/li&gt;
&lt;li&gt;Server References / Server Functions: server-side functions for which the client receives a reference and can later issue a call.&lt;/li&gt;
&lt;li&gt;Flight protocol: the serialization format that carries component payloads, references, and a broader set of values between the server and the client.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Together, these form an architecture. They do not share the same risk profile.&lt;/p&gt;

&lt;p&gt;When a Server Component renders, the direction of execution is primarily server to client. The server produces a payload. The client consumes it. The usual class of bug is that the server puts something into that payload that it should not have put there. That can be a data leak, a cache-boundary mistake, or a component-level authorization bug.&lt;/p&gt;

&lt;p&gt;When a Server Function runs, the direction reverses. The client sends something to the server. The server runtime has to understand the payload, identify the action, materialize the arguments, and pass them to the handler.&lt;/p&gt;

&lt;p&gt;That is a very different moment. The browser is no longer just a consumer. The browser, or anything capable of sending an HTTP request, is now providing input to the server.&lt;/p&gt;

&lt;p&gt;An endpoint like that cannot be treated as a React composition detail. It is a public RPC surface. It may look like a function call to the developer. TypeScript may make it feel wonderfully local. Over the network, it is still a hostile input boundary.&lt;/p&gt;

&lt;h2&gt;What Happens in a Server Function Call?&lt;/h2&gt;

&lt;p&gt;In simplified form, a Server Function request looks like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;client event
  -&amp;gt; POST request
    -&amp;gt; identify action id / server reference
      -&amp;gt; deserialize Flight payload
        -&amp;gt; materialize arguments
          -&amp;gt; invoke server function handler
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Most application code focuses on the last step:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight tsx"&gt;&lt;code&gt;&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;updateProfile&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;input&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;use server&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;data&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;schema&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;parse&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;input&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;user&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;update&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;At the application level, this is better than nothing. The handler is not blindly trusting the input. But for the class of problem described in the 2026 advisory, this is already late.&lt;/p&gt;

&lt;p&gt;By the time &lt;code&gt;schema.parse(input)&lt;/code&gt; runs, the runtime has already done much of the risky work: it has read the request, walked the payload, materialized values, built objects, interpreted references, and potentially dealt with streams, binary values, and nested structures. If the goal of the attack is not to smuggle invalid business data into the handler, but to make deserialization itself consume too much CPU or memory, validation inside the handler does not protect the server from the relevant cost.&lt;/p&gt;

&lt;p&gt;So "validate your input" is not specific enough.&lt;/p&gt;

&lt;p&gt;The question is where.&lt;/p&gt;

&lt;p&gt;If validation happens inside the handler, it protects application invariants.&lt;/p&gt;

&lt;p&gt;If validation happens in the Server Function layer after the request has already been deserialized, it gives the developer a better contract, but it may still leave the decoder cost exposed.&lt;/p&gt;

&lt;p&gt;If validation happens while the protocol payload is being deserialized, the runtime can know during the argument walk what it expects, what it should drop, what it should reject, and when it should stop processing the request.&lt;/p&gt;

&lt;p&gt;That is the difference that matters.&lt;/p&gt;
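&lt;p&gt;A plain JavaScript toy makes the contrast concrete. Nothing here is a real framework API; the names and limits are illustrative. The only point is that a walk-time check throws at the first offending node instead of materializing the whole value and handing it to a schema afterwards:&lt;/p&gt;

```javascript
// Hypothetical sketch: enforce ceilings while walking a payload,
// instead of validating the fully materialized value afterwards.
// Names and limits are illustrative, not a real framework API.
function walkWithLimits(value, limits, depth = 0) {
  if (depth > limits.maxDepth) {
    throw new Error("decode limit exceeded: depth");
  }
  if (typeof value === "string") {
    if (value.length > limits.maxStringLength) {
      throw new Error("decode limit exceeded: string length");
    }
  } else if (typeof value === "object") {
    if (value !== null) {
      for (const key of Object.keys(value)) {
        walkWithLimits(value[key], limits, depth + 1);
      }
    }
  }
}
```

&lt;p&gt;With a hostile, deeply nested payload, the walk stops at the node that breaks the ceiling. No business-level schema ever runs.&lt;/p&gt;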

&lt;h2&gt;
  
  
  A WAF Is the Wrong Boundary
&lt;/h2&gt;

&lt;p&gt;One important line in Vercel's release was that these advisories could not be reliably blocked at the WAF layer.&lt;/p&gt;

&lt;p&gt;That should not be surprising.&lt;/p&gt;

&lt;p&gt;A WAF sees HTTP requests. It can inspect headers, size, URLs, known patterns, maybe parts of the body. It does not fully understand the semantics of a Flight payload. It does not know which function a given server reference points to. It does not know how many arguments that function expects. It does not know that slot zero must be a string, slot one must be &lt;code&gt;FormData&lt;/code&gt;, slot two must be a &lt;code&gt;Map&amp;lt;string, number&amp;gt;&lt;/code&gt;, and the file field must be at most five megabytes and either &lt;code&gt;image/png&lt;/code&gt; or &lt;code&gt;image/jpeg&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;If the WAF tried to know all of that, it would effectively be reimplementing the framework protocol at the edge. That is brittle, version-dependent, and it puts the responsibility in the wrong place.&lt;/p&gt;

&lt;p&gt;The right boundary is where the runtime already knows which Server Function it is about to call, but has not yet handed materialized input to the handler.&lt;/p&gt;

&lt;p&gt;That is the protocol layer.&lt;/p&gt;

&lt;h2&gt;
  
  
  This Is Not a TypeScript Problem
&lt;/h2&gt;

&lt;p&gt;Types usually enter the discussion here. A Server Function might look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;savePost&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;post&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;PostInput&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;use server&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="c1"&gt;// ...&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The developer experience suggests that &lt;code&gt;post&lt;/code&gt; is a &lt;code&gt;PostInput&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;From the network's point of view, that is only a hope.&lt;/p&gt;

&lt;p&gt;The TypeScript type does not exist in the request. It does not exist in the Flight payload. It will not tell the decoder when to stop. It will not reject an overly deep structure, an oversized string, an oversized binary value, an unexpected &lt;code&gt;FormData&lt;/code&gt; field, or a &lt;code&gt;Map&lt;/code&gt; whose size is itself enough to become a denial-of-service attempt.&lt;/p&gt;

&lt;p&gt;Types are useful documentation for the contract. But if the contract protects a runtime boundary, runtime information has to exist too.&lt;/p&gt;

&lt;p&gt;That is why the Server Function definition needs metadata that does more than narrow the handler's TypeScript type. The metadata has to reach the protocol decoder.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Late Validation Looks Like
&lt;/h2&gt;

&lt;p&gt;Consider an abstract Server Function API:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;savePost&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;createServerFn&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
  &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;inputValidator&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;postSchema&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;handler&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="nx"&gt;data&lt;/span&gt; &lt;span class="p"&gt;})&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;post&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;create&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is a good direction. Validation lives at the definition site, not scattered through the handler body. The handler receives validated &lt;code&gt;data&lt;/code&gt;. TypeScript inference moves together with the runtime schema. An API like this is much healthier than a bare &lt;code&gt;async function (input: Whatever)&lt;/code&gt; and a comment saying "the client calls this."&lt;/p&gt;

&lt;p&gt;TanStack Start is on this side of the line. Its &lt;code&gt;createServerFn&lt;/code&gt; API makes the Server Function explicit, treats the input validator as part of the function contract, and documents that client-side calls become network calls. That is much better than hiding the request-shaped nature of the operation.&lt;/p&gt;

&lt;p&gt;But it is still a different category.&lt;/p&gt;

&lt;p&gt;A TanStack Start Server Function is not a server reference decoding an RSC Flight payload. Based on the documented API, validation is part of the Server Function layer: the function receives a &lt;code&gt;data&lt;/code&gt; input, the runtime validates that input, and then the handler runs. That is a good application-level contract. But if the runtime has already deserialized the body into a JavaScript value before the validator sees it, then the validator is working in the post-deserialization world.&lt;/p&gt;

&lt;p&gt;This is not a criticism in the sense of "TanStack Start is bad." It is not. A definition-site validator is the right direction for a classic RPC API.&lt;/p&gt;

&lt;p&gt;It is simply not the same protection as giving the Flight decoder the Server Function's argument-slot contract and letting validation happen during the payload walk.&lt;/p&gt;

&lt;p&gt;For an RPC API, the question is how much work the framework's serialization layer has to do before the validator gets control. For an RSC Server Function, the question is even sharper because the Flight payload can carry a richer value space. We are not only talking about JSON objects. The protocol can represent references, form data, binary values, streams, iterables, promises, typed arrays, &lt;code&gt;Map&lt;/code&gt;, and &lt;code&gt;Set&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;The richer the wire format, the less satisfying "we parse it at the top of the handler" becomes.&lt;/p&gt;

&lt;h2&gt;
  
  
  The react-server Approach
&lt;/h2&gt;

&lt;p&gt;The &lt;code&gt;@lazarv/react-server&lt;/code&gt; approach is not merely to provide a convenient validation wrapper around the handler.&lt;/p&gt;

&lt;p&gt;The important part is that the Server Function definition attaches metadata to the server reference, and that metadata reaches the Flight decoder.&lt;/p&gt;

&lt;p&gt;For example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight tsx"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;createFunction&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="nx"&gt;formData&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="nx"&gt;file&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;@lazarv/react-server/function&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;z&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;zod&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;uploadAvatar&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;createFunction&lt;/span&gt;&lt;span class="p"&gt;([&lt;/span&gt;
  &lt;span class="nf"&gt;formData&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="na"&gt;displayName&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;z&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;string&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;min&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;max&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;80&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
      &lt;span class="na"&gt;avatar&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nf"&gt;file&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
        &lt;span class="na"&gt;maxBytes&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;5&lt;/span&gt;&lt;span class="nx"&gt;_000_000&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="na"&gt;mime&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;image/png&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;image/jpeg&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
      &lt;span class="p"&gt;}),&lt;/span&gt;
    &lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;unknown&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;reject&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;),&lt;/span&gt;
&lt;span class="p"&gt;])(&lt;/span&gt;&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;uploadAvatar&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;form&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;use server&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;displayName&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;form&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;displayName&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;avatar&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;form&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;avatar&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

  &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;saveAvatar&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="nx"&gt;displayName&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;avatar&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The point is not only that &lt;code&gt;form.get("avatar")&lt;/code&gt; becomes nicer inside the handler.&lt;/p&gt;

&lt;p&gt;The more important contract is this:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;the first argument to the Server Function is &lt;code&gt;FormData&lt;/code&gt;;&lt;/li&gt;
&lt;li&gt;the allowed fields are known;&lt;/li&gt;
&lt;li&gt;unknown fields can be rejected by default;&lt;/li&gt;
&lt;li&gt;the file has a size limit;&lt;/li&gt;
&lt;li&gt;the MIME allowlist is part of the wire contract;&lt;/li&gt;
&lt;li&gt;the handler only runs if the decoder successfully validates that slot.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That is not business logic. That is an input boundary.&lt;/p&gt;

&lt;h2&gt;
  
  
  Slot-Walk Validation
&lt;/h2&gt;

&lt;p&gt;The technical shape is roughly this.&lt;/p&gt;

&lt;p&gt;When a Server Function export is wrapped with &lt;code&gt;createFunction(...)&lt;/code&gt;, the parse/validate spec associated with that wrapper is registered as server reference metadata. When a request comes in, the runtime first tries to recover the action id. For header-based action calls, that can come from the &lt;code&gt;react-server-action&lt;/code&gt; header. For progressive-enhancement form submissions, it can be encoded in the submitted &lt;code&gt;FormData&lt;/code&gt;. If the token is encrypted, the runtime decrypts it first so it knows which action the request is trying to call.&lt;/p&gt;

&lt;p&gt;Then comes a small but important step: the action module has to be loaded before decoding.&lt;/p&gt;

&lt;p&gt;That may sound incidental. It is not. The server-function metadata registry is populated by the module's top-level &lt;code&gt;registerServerReference(...)&lt;/code&gt; calls. If the runtime deserialized the payload first and only loaded the action module later, the first call to an action could silently skip validation. So react-server preloads the action module first, then calls &lt;code&gt;decodeReply&lt;/code&gt; with the recovered action id.&lt;/p&gt;

&lt;p&gt;From there, the decoder is no longer walking the argument list blindly. It knows which Server Function it is decoding for. It can look up the associated metadata. It can apply parse and validate slot by slot.&lt;/p&gt;
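&lt;p&gt;The load-before-decode rule can be reduced to a toy model. Everything below is hypothetical scaffolding, not the actual react-server internals; it only shows why decoding before the module load would find an empty metadata registry:&lt;/p&gt;

```javascript
// Toy model of the load-before-decode ordering (all names hypothetical).
// The registry is only populated when the action module runs its
// top-level registration, so decoding first would find no metadata.
const metadataRegistry = new Map();

function registerServerReference(actionId, spec) {
  metadataRegistry.set(actionId, spec);
}

async function handleActionRequest(actionId, loadActionModule, decodeReply) {
  await loadActionModule();            // top-level registrations run here
  const spec = metadataRegistry.get(actionId);
  if (spec === undefined) {
    throw new Error("unknown action: " + actionId);
  }
  return decodeReply(actionId, spec);  // decoder now knows the contract
}
```

&lt;p&gt;Swap the first two lines of the handler and the first request to any action would be decoded with no contract at all, which is exactly the silent gap described above.&lt;/p&gt;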

&lt;p&gt;If the first argument is &lt;code&gt;z.string()&lt;/code&gt;, slot zero has to validate as a string.&lt;/p&gt;

&lt;p&gt;If the second argument is &lt;code&gt;arrayBuffer({ maxBytes: 1024 })&lt;/code&gt;, the decoder can reject an oversized buffer based on byte length.&lt;/p&gt;

&lt;p&gt;If a &lt;code&gt;formData(...)&lt;/code&gt; spec uses &lt;code&gt;unknown: "reject"&lt;/code&gt;, an injected extra field does not reach the handler.&lt;/p&gt;

&lt;p&gt;If a &lt;code&gt;file(...)&lt;/code&gt; spec declares a MIME allowlist and a size limit, the runtime does not wait for application code to decide whether the file is acceptable.&lt;/p&gt;

&lt;p&gt;If a &lt;code&gt;map(...)&lt;/code&gt; or &lt;code&gt;set(...)&lt;/code&gt; spec has a maximum size, the collection cannot grow without bound in the pre-handler world.&lt;/p&gt;

&lt;p&gt;If a stream or async iterable has a maximum chunk count or byte limit, the boundary remains active as the handler consumes it.&lt;/p&gt;

&lt;p&gt;That is the key property: the shape of the Server Function input is not only TypeScript inference. It is a decoder contract.&lt;/p&gt;
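&lt;p&gt;A minimal sketch of that slot walk, with a hypothetical spec shape: each slot carries a check that runs as the slot is materialized, so a violation in slot zero stops the walk before slot one is touched:&lt;/p&gt;

```javascript
// Minimal slot-walk sketch (hypothetical spec shape, not a real API):
// a spec is a function that either returns the validated value
// or throws, and the walk stops at the first bad slot.
function decodeArguments(rawSlots, slotSpecs) {
  if (rawSlots.length > slotSpecs.length) {
    throw new Error("too many argument slots");
  }
  const args = [];
  for (let i = 0; i !== rawSlots.length; i += 1) {
    args.push(slotSpecs[i](rawSlots[i]));  // throws on the first violation
  }
  return args;
}

// Slot zero must be a string; slot one must be a small buffer.
const specs = [
  (v) => {
    if (typeof v !== "string") throw new Error("slot 0: string expected");
    return v;
  },
  (v) => {
    if (v.byteLength > 1024) throw new Error("slot 1: buffer too large");
    return v;
  },
];
```

&lt;p&gt;The handler only ever sees the return value of a walk that completed, which is the property the prose above describes.&lt;/p&gt;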

&lt;h2&gt;
  
  
  Failure Is Structural
&lt;/h2&gt;

&lt;p&gt;A protocol-level validation failure is not a business validation failure.&lt;/p&gt;

&lt;p&gt;It is not the same as a user typing a bad email address into a form and receiving a field error. It means the wire payload did not satisfy the contract under which the server was willing to materialize a Server Function call at all.&lt;/p&gt;

&lt;p&gt;The right response is structural rejection.&lt;/p&gt;

&lt;p&gt;In react-server, a validation failure during the slot walk becomes a &lt;code&gt;DecodeValidationError&lt;/code&gt; and is mapped to a 400 response. The handler does not run. The argument list is not bound. The client does not receive detailed schema diagnostics, because those details can reveal useful shape information to an attacker. The operator log can still keep the useful parts: action id, slot index, and failure reason.&lt;/p&gt;
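&lt;p&gt;The split between the client response and the operator log can be sketched like this. &lt;code&gt;DecodeValidationError&lt;/code&gt; is the error named above; the response mapping and logger shape are illustrative, not the actual runtime code:&lt;/p&gt;

```javascript
// Sketch of structural rejection: the client sees a bare 400,
// while the operator log keeps the diagnostic detail.
// The class fields and response shape are illustrative.
class DecodeValidationError extends Error {
  constructor(actionId, slot, reason) {
    super("decode validation failed");
    this.actionId = actionId;
    this.slot = slot;
    this.reason = reason;
  }
}

function toResponse(error, log) {
  if (error instanceof DecodeValidationError) {
    // Detail stays server-side; no schema shape leaks to the client.
    log({ actionId: error.actionId, slot: error.slot, reason: error.reason });
    return { status: 400, body: "" };
  }
  return { status: 500, body: "" };
}
```

&lt;p&gt;The important property is the asymmetry: the log entry is rich, the response body is empty.&lt;/p&gt;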

&lt;p&gt;Again, this is different from application-level validation.&lt;/p&gt;

&lt;p&gt;A form validation error is a user experience concern.&lt;/p&gt;

&lt;p&gt;A decode validation error is a protocol concern.&lt;/p&gt;

&lt;p&gt;If we merge those two paths, we either reveal too much to an attacker or give too little feedback to a real user. They should not be the same path.&lt;/p&gt;

&lt;h2&gt;
  
  
  Bound Arguments Are Not Call Arguments
&lt;/h2&gt;

&lt;p&gt;There is another detail that is easy to lose in RSC Server Function discussions: call arguments and bound captures are not the same thing.&lt;/p&gt;

&lt;p&gt;A Server Function can be created with bound values. These are values carried by a server-side closure or binding and later associated with the server reference. They should not be treated the same way as runtime arguments sent by the client.&lt;/p&gt;

&lt;p&gt;In the react-server model, bound captures are integrity-protected by the action token. That is a different kind of protection than per-argument validation. They do not need the same schema path as client input, because they are not crossing the same trust boundary.&lt;/p&gt;

&lt;p&gt;Arguments sent by the client are hostile input.&lt;/p&gt;

&lt;p&gt;Bound captures are integrity-protected server-side state.&lt;/p&gt;

&lt;p&gt;If both are collapsed into the same "validate the input" bucket, the model becomes muddy again.&lt;/p&gt;

&lt;h2&gt;
  
  
  Global Decode Limits
&lt;/h2&gt;

&lt;p&gt;Per-function contracts need a second layer: global resource ceilings.&lt;/p&gt;

&lt;p&gt;There will be unvalidated legacy actions. There will be code in the middle of migration. There will be Server Functions where some slots are intentionally loose. And there are payload characteristics that should not have to be repeated manually on every function.&lt;/p&gt;

&lt;p&gt;So the runtime needs limits such as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;maximum payload byte size;&lt;/li&gt;
&lt;li&gt;maximum Flight row count;&lt;/li&gt;
&lt;li&gt;maximum materialization depth;&lt;/li&gt;
&lt;li&gt;maximum number of bound arguments;&lt;/li&gt;
&lt;li&gt;maximum BigInt digit count;&lt;/li&gt;
&lt;li&gt;maximum string length;&lt;/li&gt;
&lt;li&gt;maximum stream chunk count.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These do not replace per-function validation. They are the safety floor. The function spec is the precise contract. The global limits are the ceilings that still stop obviously abusive payloads when a function has not yet been declared perfectly.&lt;/p&gt;

&lt;p&gt;Together, they form a more meaningful defense.&lt;/p&gt;
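&lt;p&gt;A hypothetical ceiling configuration mirroring that list might look like the following. The key names are illustrative, not the actual react-server options; the point is that the cheapest ceiling can fire before any decoding starts:&lt;/p&gt;

```javascript
// Hypothetical global decode ceilings mirroring the list above.
// Key names are illustrative, not real react-server configuration.
const decodeLimits = {
  maxPayloadBytes: 1000000,
  maxFlightRows: 2000,
  maxDepth: 32,
  maxBoundArgs: 16,
  maxBigIntDigits: 64,
  maxStringLength: 100000,
  maxStreamChunks: 1000,
};

// The byte-size ceiling needs nothing but the Content-Length header,
// so it can reject a payload before a single byte is interpreted.
function checkContentLength(contentLength, limits) {
  if (contentLength > limits.maxPayloadBytes) {
    return { ok: false, status: 413 };
  }
  return { ok: true, status: 200 };
}
```

&lt;p&gt;The per-function specs stay precise; a floor like this only has to be crude enough to stop the obviously abusive payloads.&lt;/p&gt;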

&lt;h2&gt;
  
  
  Dev-Time Strictness
&lt;/h2&gt;

&lt;p&gt;A runtime's behavior is not only what it does in production. It is also what it teaches during development.&lt;/p&gt;

&lt;p&gt;If a &lt;code&gt;"use server"&lt;/code&gt; export can be called from the client without validation, that is an attack surface that may not be visible at the call site. The developer sees a function. The browser sees an endpoint. The reviewer often reads the handler body, not the wire boundary.&lt;/p&gt;

&lt;p&gt;That is why a dev-time warning for bare Server Functions is useful:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Server function ... called without validation
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The point is not that every function needs a complex schema. Some functions have no input. Some migration paths need a temporary escape hatch. But that should be an explicit decision. The default should not be that a publicly callable Server Function has no runtime input contract and nobody notices until a security release makes the boundary visible.&lt;/p&gt;

&lt;p&gt;In that sense, the no-spec &lt;code&gt;createFunction()&lt;/code&gt; form is useful too. It does not add validation, but it records intent. The runtime can tell that the developer has seen the boundary and chosen not to narrow it yet.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Missing Runtime Contract in Next.js
&lt;/h2&gt;

&lt;p&gt;Next.js fixed the affected upstream React issues, and everyone affected should upgrade. That is not optional. After an advisory like this, the first correct move is always to patch.&lt;/p&gt;

&lt;p&gt;But patching is not the same thing as learning the architectural lesson.&lt;/p&gt;

&lt;p&gt;In the current Next.js Server Function model, there is no documented, general-purpose framework API that gives the runtime a per-function Flight decode contract. There is &lt;code&gt;"use server"&lt;/code&gt;. There is a server-side handler. You can validate inside that handler. You can build your own helper around it. But that is not the same as attaching argument-slot metadata to the server reference so the decoder knows what it is allowed to materialize before the handler runs.&lt;/p&gt;

&lt;p&gt;That is why I consider the react-server approach stronger here.&lt;/p&gt;

&lt;p&gt;Not because it has "schema validation." Schema validation exists in many places.&lt;/p&gt;

&lt;p&gt;Because the validation happens in a better place.&lt;/p&gt;

&lt;p&gt;Because the function contract appears on the Flight protocol decode path.&lt;/p&gt;

&lt;p&gt;Because malformed payloads can be structurally rejected before handler execution.&lt;/p&gt;

&lt;p&gt;Because the wire-aware specs cover not only application data models, but also the richer value space of the protocol: &lt;code&gt;FormData&lt;/code&gt;, &lt;code&gt;File&lt;/code&gt;, &lt;code&gt;Blob&lt;/code&gt;, &lt;code&gt;ArrayBuffer&lt;/code&gt;, typed arrays, &lt;code&gt;Map&lt;/code&gt;, &lt;code&gt;Set&lt;/code&gt;, streams, iterables, and promises.&lt;/p&gt;

&lt;p&gt;And because this defense is not a WAF rule, not a convention, not "remember to parse at the top of the handler," but a runtime boundary.&lt;/p&gt;

&lt;h2&gt;
  
  
  A Server Function Is a Public Endpoint
&lt;/h2&gt;

&lt;p&gt;The simplest way to say all of this is:&lt;/p&gt;

&lt;p&gt;A Server Function is a public endpoint.&lt;/p&gt;

&lt;p&gt;Not because it looks like a REST route. Not because the developer wrote a URL for it. Because the client can send a request that causes the server to attempt to invoke a function.&lt;/p&gt;

&lt;p&gt;Once we accept that, the security consequences become clearer:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;every Server Function should have an input contract;&lt;/li&gt;
&lt;li&gt;the contract should live at the definition site, not be scattered through the handler body;&lt;/li&gt;
&lt;li&gt;the runtime should know the contract as early as possible;&lt;/li&gt;
&lt;li&gt;deserialization cost should be bounded;&lt;/li&gt;
&lt;li&gt;unknown fields should not be treated as harmless by default;&lt;/li&gt;
&lt;li&gt;file and blob inputs should have size and MIME constraints;&lt;/li&gt;
&lt;li&gt;authorization should be explicit in the Server Function, not inferred from the surrounding component tree;&lt;/li&gt;
&lt;li&gt;the WAF should be an extra layer, not the primary interpreter.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is not an anti-RSC position.&lt;/p&gt;

&lt;p&gt;It is the position that takes RSC seriously enough not to treat all of its parts as one mystical box.&lt;/p&gt;

&lt;h2&gt;
  
  
  RSC Is Not the Scapegoat
&lt;/h2&gt;

&lt;p&gt;The interesting thing about RSC is that it gives us a formal boundary between two different execution environments. The server and the client are not the same place. They have different capabilities, different costs, different failure modes, and different security responsibilities.&lt;/p&gt;

&lt;p&gt;That is the strength of the model.&lt;/p&gt;

&lt;p&gt;But a boundary is only useful if we are precise about what crosses it, and in which direction.&lt;/p&gt;

&lt;p&gt;For Server Components, the question is what the server sends to the client.&lt;/p&gt;

&lt;p&gt;For Server Functions, the question is what the server accepts from the client.&lt;/p&gt;

&lt;p&gt;When a Server Function input payload is validated too late, or when the runtime does too much work before it even knows what it expects, the lesson is not that Server Components are a bad idea. The lesson is that an RPC-shaped input surface was treated as framework ergonomics for too long.&lt;/p&gt;

&lt;p&gt;That mistake is fixable. But only if we name it precisely.&lt;/p&gt;

&lt;p&gt;Not "RSC is bad."&lt;/p&gt;

&lt;p&gt;Not "Server Components are insecure."&lt;/p&gt;

&lt;p&gt;This:&lt;/p&gt;

&lt;p&gt;Server Function input payloads need protocol-level validation.&lt;/p&gt;

&lt;p&gt;That sentence is less dramatic. It is also more true.&lt;/p&gt;

&lt;p&gt;And if we want RSC to have a healthy future, that is exactly the kind of sentence we need: less drama around the model, and more attention on the few boundaries where the model actually meets a hostile network.&lt;/p&gt;

&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://vercel.com/changelog/next-js-may-2026-security-release" rel="noopener noreferrer"&gt;Next.js May 2026 security release&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/facebook/react/security/advisories/GHSA-rv78-f8rc-xrxh" rel="noopener noreferrer"&gt;React security advisory: DoS in React Server Components&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://react-server.dev/guide/server-functions#validation" rel="noopener noreferrer"&gt;react-server server functions validation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://tanstack.com/start/latest/docs/framework/react/guide/server-functions" rel="noopener noreferrer"&gt;TanStack Start server functions&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>rsc</category>
      <category>react</category>
      <category>nextjs</category>
      <category>security</category>
    </item>
    <item>
      <title>Dissatisfaction Is a Spark</title>
      <dc:creator>Viktor Lázár</dc:creator>
      <pubDate>Sat, 09 May 2026 18:55:43 +0000</pubDate>
      <link>https://forem.com/lazarv/dissatisfaction-is-a-spark-2ejp</link>
      <guid>https://forem.com/lazarv/dissatisfaction-is-a-spark-2ejp</guid>
      <description>&lt;p&gt;I have a particular relationship with dissatisfaction.&lt;/p&gt;

&lt;p&gt;When something does not feel right, I rarely manage to leave it there. A library feels too heavy. A framework hides the thing I want to touch. A tool solves the wrong half of the problem. A program almost understands its own shape, but not quite. A game has a wonderful idea buried under a system that keeps getting in its own way.&lt;/p&gt;

&lt;p&gt;Most sane people, I think, complain for a minute and move on.&lt;/p&gt;

&lt;p&gt;I complain for a minute and open a new project.&lt;/p&gt;

&lt;p&gt;This is not always wise. It is not always efficient. There is a whole graveyard of half-built answers behind that impulse, each one started with the private conviction that the world would be slightly better if this one irritating thing were different. But I have learned not to distrust the impulse too much, because it has carried me toward almost everything I have cared about building.&lt;/p&gt;

&lt;p&gt;For me, dissatisfaction is not only rejection. It is attention becoming specific.&lt;/p&gt;

&lt;p&gt;There is a kind of annoyance that is just noise. Something is broken, ugly, slow, badly named, overdesigned, underdesigned. Fine. The world is full of those. But sometimes the annoyance has a shape. It keeps returning to the same edge. I can feel, before I can explain, that the problem is not accidental. Something in the design is pointing in the wrong direction. Something wants to be inverted, simplified, pulled apart, made composable, made honest.&lt;/p&gt;

&lt;p&gt;That feeling is dangerous in the best way.&lt;/p&gt;

&lt;p&gt;It turns passive criticism into motion. It moves the question from "why is this like this?" to "could I make something better than this?" and then, eventually, "what would it look like if I did?" And once that question becomes vivid enough, building stops feeling like work and starts feeling like a form of thinking. The project is not a product yet. It is an argument I can run.&lt;/p&gt;

&lt;p&gt;I think this is why so many of my projects begin as irritations. Not because I enjoy being annoyed, but because annoyance gives the mind a surface to push against. Pure satisfaction rarely asks anything of me. It closes the loop. Dissatisfaction leaves the loop open, and an open loop is where imagination gets in.&lt;/p&gt;

&lt;p&gt;Over time, though, I have noticed that the most persistent version of this is not even about other people's tools.&lt;/p&gt;

&lt;p&gt;Most of the time, the thing I am dissatisfied with is my own work. I build something, and then I see the compromise inside it. A decision I made too early. A boundary I drew in the wrong place. A design that seemed clean until real use put pressure on it. An implementation that works, but carries the shape of a mistake I had not yet learned how to name.&lt;/p&gt;

&lt;p&gt;That feeling is sharper, because I cannot blame anyone else for it. The flaw is mine. I put it there, in the design or in the implementation, usually for reasons that made sense at the time. But once I can see it, I want to make the whole thing better. Not slightly patched. Better. So I open the project again. Or I start the next one, carrying the correction forward.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://react-server.dev" rel="noopener noreferrer"&gt;&lt;code&gt;@lazarv/react-server&lt;/code&gt;&lt;/a&gt; came from that kind of place. Not from a clean plan, not from market analysis, not from the abstract desire to make a framework. It came from a series of small refusals. I did not want the boundaries to be there. I did not want the conventions to decide so much. I did not want the runtime to feel like a menu when it could feel like a set of primitives. At some point, the refusals became more than complaints. They became a thing I could build.&lt;/p&gt;

&lt;p&gt;That transformation still feels a little mysterious to me. The same emotion that could have become bitterness becomes a prototype. The same frustration that could have ended in a thread becomes a repository. The same little "no" turns, if I stay with it long enough, into a more interesting "what if?"&lt;/p&gt;

&lt;p&gt;Maybe that is the difference that matters. Dissatisfaction by itself is cheap. Everyone can see what is wrong. Everyone has taste when something fails them. The creative part begins when I let the dissatisfaction obligate me. If I really believe the thing could be better, then for a while I have to stop being only its critic. I have to become responsible for an alternative, even a small one, even a flawed one, even one nobody asked for.&lt;/p&gt;

&lt;p&gt;That responsibility is where the energy is.&lt;/p&gt;

&lt;p&gt;I do not think every irritation deserves a project. Life is too short, and most tools are allowed to be imperfect. But I have stopped treating dissatisfaction as a negative state I need to escape from quickly. Sometimes it is the first draft of care. Sometimes it is the mind noticing a possible world and being unable to unsee it.&lt;/p&gt;

&lt;p&gt;When I am not satisfied, something in me starts looking for a door.&lt;/p&gt;

&lt;p&gt;Sometimes the door is real.&lt;/p&gt;

&lt;p&gt;What does dissatisfaction do in you?&lt;/p&gt;

</description>
      <category>developer</category>
      <category>devjournal</category>
      <category>sideprojects</category>
      <category>softwaredevelopment</category>
    </item>
    <item>
      <title>The Master Builder, Unleashed</title>
      <dc:creator>Viktor Lázár</dc:creator>
      <pubDate>Thu, 07 May 2026 06:40:36 +0000</pubDate>
      <link>https://forem.com/lazarv/the-master-builder-unleashed-48bf</link>
      <guid>https://forem.com/lazarv/the-master-builder-unleashed-48bf</guid>
      <description>&lt;p&gt;There is a particular kind of pain in software work: sitting in a meeting about a thing you already know how to build.&lt;/p&gt;

&lt;p&gt;Not vaguely. Not optimistically. You can see the first version. You can see the shape of the data, the awkward part of the UI, the one integration that will probably hurt, the test that should exist before anyone trusts it, the part that can be ugly for a week, and the part that must be right from the beginning. The work is not done, but the form is already present in your head.&lt;/p&gt;

&lt;p&gt;Then the meeting continues.&lt;/p&gt;

&lt;p&gt;The discussion moves through alignment, ownership, prioritization, stakeholder expectations, dependency mapping, launch risk, follow-up meetings, and the increasingly ceremonial question of who should "drive" the thing. None of those words are fake. Some of them point at real constraints. But the emotional fact remains: the software could have started existing an hour ago.&lt;/p&gt;

&lt;p&gt;This is not the impatience of someone who does not understand organizations. It is the frustration of someone who understands both the work and the organization well enough to feel the gap between them.&lt;/p&gt;

&lt;p&gt;I have spent most of my career building things that were not supposed to fit where I put them: old game engines in the browser, data protocols in JavaScript, React Server Components outside the frameworks that tried to own them.&lt;/p&gt;

&lt;p&gt;That kind of work teaches you something uncomfortable: the hard part is rarely the first line of code. The hard part is keeping the shape of the thing intact while the world asks you to translate it into smaller, safer pieces.&lt;/p&gt;

&lt;p&gt;This is where AI agents change the equation.&lt;/p&gt;

&lt;p&gt;For a long time, the gap between seeing the shape of the thing and getting it built without losing that shape was just the cost of doing serious software. Big products needed big teams. Big teams needed coordination. Coordination needed meetings. The developer who could see the shape of the thing still needed designers, reviewers, frontend engineers, backend engineers, QA, release managers, platform support, security review, product sign-off, and enough calendar space for all of those people to agree that the thing should become real.&lt;/p&gt;

&lt;p&gt;The company owned execution. The individual owned at most a piece of intent.&lt;/p&gt;

&lt;p&gt;AI agents have started to disturb that bargain.&lt;/p&gt;

&lt;h2&gt;The master builder&lt;/h2&gt;

&lt;p&gt;The developer I am talking about is not any developer.&lt;/p&gt;

&lt;p&gt;This is not a beginner with a prompt box. It is not a mid-level engineer asking a model to fill in the parts they do not yet understand. It is not the fantasy that software can now be produced by desire alone, where a person describes an app, accepts the first plausible artifact, and calls the result engineering.&lt;/p&gt;

&lt;p&gt;The person at the center of this shift is closer to the old idea of the master builder.&lt;/p&gt;

&lt;p&gt;A master builder does not merely place bricks. They understand the structure before it exists. They know what can be improvised and what cannot. They know which details are cosmetic, which details are load-bearing, and which shortcuts will become expensive only after the room is full of people. They can work with specialists without being dissolved by specialization, because they carry a model of the whole.&lt;/p&gt;

&lt;p&gt;In software, this is the staff-level engineer, the principal engineer, the technical founder, the experienced IC with taste and ownership, the person who has built enough systems to know that implementation is never just implementation. They can read a product problem and see a system. They can read a system and see the product assumptions hiding inside it. They know when a design is under-specified, when an abstraction is premature, when a test suite is giving false comfort, when the happy path is lying, and when a release is safe enough to learn from.&lt;/p&gt;

&lt;p&gt;That kind of developer was already valuable. AI does not create that value. It gives that value a larger surface to act on.&lt;/p&gt;

&lt;p&gt;The agent is not the builder. The agent is a tool in the builder's workshop.&lt;/p&gt;

&lt;h2&gt;Execution used to be scarce&lt;/h2&gt;

&lt;p&gt;Most software organizations were shaped by a simple historical fact: writing, changing, and maintaining code required human time in large quantities.&lt;/p&gt;

&lt;p&gt;If a roadmap had more work than the current team could do, the answer was usually headcount. More frontend engineers. More backend engineers. More QA. More managers to coordinate the larger group. More process to make sure the larger group did not destroy itself by moving independently. The shape of the organization followed the scarcity of implementation.&lt;/p&gt;

&lt;p&gt;That scarcity made the company powerful. A small team might have a sharper idea, but the large company had the machinery to grind through the implementation. It could assign ten people to a problem, put a manager over them, attach design and product, run research, staff a platform dependency, and push the thing through a release train. The small team could move quickly at the beginning, but the large company could eventually bring mass to bear.&lt;/p&gt;

&lt;p&gt;That is why the old acquisition story made sense. A small company found a shape the market wanted. A large company bought it, copied it, or slowly surrounded it with distribution and resources. The small company had clarity. The large company had execution capacity.&lt;/p&gt;

&lt;p&gt;AI agents do not eliminate the large company's advantages. Distribution still matters. Trust still matters. Compliance, support, procurement, brand, data access, sales channels, regulatory knowledge, and operational maturity still matter. A bank is not replaced by a weekend app. A payments company is not replaced by a clever clone. NASA is not made less capable at space exploration because a web page could be more inspiring.&lt;/p&gt;

&lt;p&gt;But a particular advantage has weakened: the assumption that serious software requires organizational mass before it can be executed.&lt;/p&gt;

&lt;p&gt;That assumption is what &lt;a href="https://www.youtube.com/watch?v=p2aea9dytpE" rel="noopener noreferrer"&gt;Theo was circling in "Software engineering is dead now"&lt;/a&gt;. The provocative title is less interesting than the operational shift underneath it. When code becomes cheaper to produce, the bottleneck moves. The important question stops being "how many engineers can we assign?" and becomes "who understands the problem well enough to direct the work?"&lt;/p&gt;

&lt;p&gt;That is a very different question.&lt;/p&gt;

&lt;h2&gt;The agent changes the unit of leverage&lt;/h2&gt;

&lt;p&gt;The most important thing about AI coding agents is not that they write code.&lt;/p&gt;

&lt;p&gt;It is that they let one coherent intent remain coherent across more of the work.&lt;/p&gt;

&lt;p&gt;Before agents, even a strong engineer had to break their intent apart to get enough capacity. One person could hold the whole shape, but the work had to be distributed across a team. That meant translation. The product shape became tickets. The tickets became implementation slices. The slices moved through people with different contexts, incentives, calendars, and levels of taste. Review tried to recover coherence after the fact.&lt;/p&gt;

&lt;p&gt;Sometimes that worked beautifully. Good teams are real. Collaboration can improve an idea. A second pair of eyes can catch the thing the builder missed. The point is not that teams are bad.&lt;/p&gt;

&lt;p&gt;The point is that teams are expensive, not only in salary but in semantic loss.&lt;/p&gt;

&lt;p&gt;Every handoff risks changing the idea. Every meeting turns part of the artifact back into language. Every approval step asks the work to justify itself before it has had a chance to become visible. Every person added to the loop increases capacity and coordination at the same time. When implementation was scarce, that trade was often worth it. When implementation becomes cheaper, the cost becomes easier to see.&lt;/p&gt;

&lt;p&gt;An AI agent changes the trade because it adds execution without adding a second will.&lt;/p&gt;

&lt;p&gt;That sentence is dangerous if read carelessly, so it needs the adult version immediately: the agent adds mistakes, hallucinations, overconfidence, style drift, security risk, and an endless appetite for plausible wrongness. It must be constrained, reviewed, tested, and corrected. It does not remove engineering discipline.&lt;/p&gt;

&lt;p&gt;But it also does not need to be aligned in the human sense. It does not need a career path, a meeting, a roadmap narrative, a title, a territory, or a week to build context from office politics. It can be pointed at a narrow part of the system, given constraints, corrected when it drifts, and asked to try again. It is not autonomous in the way a teammate is autonomous. That is precisely why it is useful as leverage.&lt;/p&gt;

&lt;p&gt;For the master builder, this is new. The builder can keep the whole artifact in view while delegating pieces of execution to tools that do not dilute the intent. The work still needs judgment. It needs more judgment, not less. But the distance between judgment and execution shrinks.&lt;/p&gt;

&lt;h2&gt;This is not vibe coding&lt;/h2&gt;

&lt;p&gt;This distinction matters because the public language around AI-assisted development has been polluted by "vibe coding."&lt;/p&gt;

&lt;p&gt;Vibe coding is useful as a name for a real phenomenon: someone repeatedly prompts an AI system, accepts whatever looks close enough, and moves forward without deeply understanding the result. It can be fun. It can produce charming prototypes. It can help people explore personal software. It can also produce systems nobody should be asked to maintain.&lt;/p&gt;

&lt;p&gt;Syntax has been good on this distinction. In &lt;a href="https://syntax.fm/show/887/vibe-coding-is-a-problem" rel="noopener noreferrer"&gt;"Vibe Coding Is a Problem"&lt;/a&gt;, the problem is not that AI helps write code. The problem is the absence of close review, the willingness to stay at the surface, and the illusion that running software is the same thing as understood software. Their later episode, &lt;a href="https://syntax.fm/show/998/how-to-fix-vibe-coding" rel="noopener noreferrer"&gt;"How to Fix Vibe Coding"&lt;/a&gt;, points in the better direction: deterministic tools, linting, quality analysis, headless browsers, task workflows, observability, and tighter feedback loops.&lt;/p&gt;

&lt;p&gt;That is the line.&lt;/p&gt;

&lt;p&gt;The future worth taking seriously is not vibe coding. It is developer-led AI engineering.&lt;/p&gt;

&lt;p&gt;The developer supplies the intent. The developer supplies the taste. The developer supplies the constraints. The developer decides where the agent is allowed to roam and where it must stay on rails. The developer reads the diff. The developer runs the tests. The developer notices when the solution is locally correct but globally wrong. The developer decides whether the artifact deserves to exist.&lt;/p&gt;

&lt;p&gt;The agent accelerates the loop. It does not own the loop.&lt;/p&gt;

&lt;p&gt;This is why AI does not flatten all developers equally. It amplifies what is already there. A developer without judgment can now produce more code than before, which mostly means they can produce more unresolved consequence than before. A developer with judgment can produce more finished thought than before.&lt;/p&gt;

&lt;p&gt;The difference is not typing speed. The difference is taste under acceleration.&lt;/p&gt;

&lt;h2&gt;Quality was never guaranteed by size&lt;/h2&gt;

&lt;p&gt;One of the quiet revelations of this era is that large institutions do not automatically produce better artifacts.&lt;/p&gt;

&lt;p&gt;They can produce extraordinary things. They can coordinate missions, operate infrastructure, satisfy regulators, support millions of users, and preserve knowledge across decades. But the artifact in front of the user is not always where that strength appears.&lt;/p&gt;

&lt;p&gt;NASA's &lt;a href="https://www.nasa.gov/ignition/" rel="noopener noreferrer"&gt;Ignition&lt;/a&gt; page is a useful object to look at for this reason. The underlying subject is enormous: Artemis, commercial lunar transportation, moon base capabilities, lunar terrain vehicles, procurement strategy, timelines, technical ambition. The page itself is largely a resource hub: PDFs, videos, advisories, requests for information, presentations, links. That may be the correct institutional shape for NASA's internal and public obligations. It is not the same thing as a product experience that makes the ambition legible.&lt;/p&gt;

&lt;p&gt;This is not a dunk on NASA. NASA can do things that no web developer can do.&lt;/p&gt;

&lt;p&gt;The point is more specific: institutional seriousness does not automatically become interface quality. A large organization can have the facts, the mission, the budget, the experts, and the public mandate, and still produce a web artifact that feels assembled by process rather than shaped by taste.&lt;/p&gt;

&lt;p&gt;That is exactly the kind of gap an AI-amplified master builder can attack. Not because they know more about lunar transportation than NASA. They do not. Because they can take a pile of material, infer the narrative shape, build an explorable interface, tighten the hierarchy, improve the pacing, test the interactions, and iterate before the institutional process has finished deciding which department owns the page.&lt;/p&gt;

&lt;p&gt;The same pattern shows up in developer tooling. &lt;a href="https://pingdotgg-t3code.mintlify.app/introduction" rel="noopener noreferrer"&gt;T3 Code&lt;/a&gt; is interesting not only as a tool for coding agents, but as an artifact of the new workflow. It is a minimal web GUI around agents like Codex, with sessions, git integration, worktrees, runtime modes, and a developer-facing surface designed around actual agent use. Whether or not that particular product becomes the winner is beside the point. Its existence is a sign of the tempo change. A small team can feel a workflow problem, build directly into it, and ship a tool that makes the new loop more usable.&lt;/p&gt;

&lt;p&gt;The old world made this kind of thing harder. The new world makes it common.&lt;/p&gt;

&lt;h2&gt;The small team becomes dangerous again&lt;/h2&gt;

&lt;p&gt;The small team always had one advantage: fewer people had to agree before the work moved.&lt;/p&gt;

&lt;p&gt;That advantage used to be balanced by a brutal limitation: fewer people could build. A small team could choose quickly but execute slowly once the surface area grew. A large team could choose slowly but execute with force once the organization aligned.&lt;/p&gt;

&lt;p&gt;AI changes the ratio. It gives the small team, and sometimes the single master builder, access to execution capacity that used to require organizational size. It does not give them the large company's distribution, trust, legal department, customer base, or operational maturity. But for many software products, the first decisive question is not "who has the biggest organization?" It is "who can turn a clear product judgment into a working artifact fastest?"&lt;/p&gt;

&lt;p&gt;That is where the small team becomes dangerous.&lt;/p&gt;

&lt;p&gt;Not because bureaucracy is stupid. Bureaucracy is often memory. It is risk encoded as procedure. It is how large systems avoid repeating failures that individuals would happily rediscover. But bureaucracy becomes pathological when it continues to price execution as scarce after execution has become abundant.&lt;/p&gt;

&lt;p&gt;That is the source of the meeting pain.&lt;/p&gt;

&lt;p&gt;The master builder is not angry because other people exist. They are angry because the organization is still spending days converting intent into permission while the toolchain has made it possible to convert intent into a prototype, a test, a diff, a demo, or a shipped internal version. The old process insists on discussing the work in the abstract because it was designed for a world where making the work concrete was expensive.&lt;/p&gt;

&lt;p&gt;In the new world, concreteness is cheap enough to be part of the conversation.&lt;/p&gt;

&lt;p&gt;Instead of six meetings to decide whether an idea is viable, the builder can return with a working version. Instead of arguing about a flow in a document, they can put the flow in front of users. Instead of writing a speculative architecture proposal for a small feature, they can branch, build, test, measure, and throw it away if it fails. The artifact can arrive earlier in the decision process.&lt;/p&gt;

&lt;p&gt;That should make organizations better. Often it will make them uncomfortable first.&lt;/p&gt;

&lt;h2&gt;What still belongs to the team&lt;/h2&gt;

&lt;p&gt;There is an easy but wrong conclusion here: if agents give execution back to individuals, teams no longer matter.&lt;/p&gt;

&lt;p&gt;Teams still matter. They matter most where reality is wider than the artifact.&lt;/p&gt;

&lt;p&gt;A master builder can build a remarkable first version, but production software lives in obligations. Security matters. Accessibility matters. On-call matters. Data retention matters. Customer migration matters. Billing matters. Support matters. Legal review matters. Incident response matters. The larger the promise a product makes to the world, the more the work extends beyond the person who first saw the shape.&lt;/p&gt;

&lt;p&gt;The mistake is not having a team. The mistake is using the team as a substitute for clear intent.&lt;/p&gt;

&lt;p&gt;A healthy team around a master builder should sharpen the artifact, not dissolve it. It should bring constraints into the work at the moment those constraints become real. It should catch risks, improve taste, protect users, and make the result operable. It should not turn every act of building into a negotiation over whether building may begin.&lt;/p&gt;

&lt;p&gt;That is the organizational challenge of AI-assisted engineering. The best teams will learn to let artifacts arrive earlier, then apply discipline around them. The worst teams will keep demanding consensus before concreteness, and they will slowly discover that the builders with the clearest intent have stopped waiting.&lt;/p&gt;

&lt;p&gt;Some will leave to start companies. Some will stay and route around the process. Some will become the people inside large organizations who quietly change the operating model. But the psychological shift is already here: the experienced engineer no longer has to accept that execution belongs somewhere else.&lt;/p&gt;

&lt;h2&gt;The work after code gets cheap&lt;/h2&gt;

&lt;p&gt;When code gets cheap, software does not get easy.&lt;/p&gt;

&lt;p&gt;The hard parts move. Understanding users becomes harder to fake. Taste becomes more visible. QA becomes more important, because the amount of code that can be produced now exceeds the amount of code anyone should trust. Architecture becomes less about preventing people from typing the wrong thing and more about preserving coherence under acceleration. Product judgment becomes load-bearing.&lt;/p&gt;

&lt;p&gt;This is why the master builder matters more, not less.&lt;/p&gt;

&lt;p&gt;The builder is the person who can keep asking the questions the agent cannot answer by itself:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Is this the right problem?&lt;/li&gt;
&lt;li&gt;Is this the right shape?&lt;/li&gt;
&lt;li&gt;Did the implementation preserve the intent?&lt;/li&gt;
&lt;li&gt;What did we make harder by making this easy?&lt;/li&gt;
&lt;li&gt;Where is the hidden coupling?&lt;/li&gt;
&lt;li&gt;What would a user misunderstand?&lt;/li&gt;
&lt;li&gt;What will break when the happy path ends?&lt;/li&gt;
&lt;li&gt;Is this good, or merely complete?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Those questions were always part of engineering. AI makes them more central because it makes the lower layers faster. When implementation slows down, weak judgment can hide inside the schedule. When implementation speeds up, weak judgment becomes visible almost immediately.&lt;/p&gt;

&lt;p&gt;That is good news for the kind of developer who has spent years building taste, systems sense, and ownership. It is bad news for organizations that treated those people as interchangeable implementation capacity.&lt;/p&gt;

&lt;p&gt;The master builder was never just a ticket processor. The ticket processor is the part AI threatens most directly. The builder is the person who knows what the tickets should have been, which tickets should not exist, and what artifact the tickets are failing to describe.&lt;/p&gt;

&lt;h2&gt;Permission was the bottleneck&lt;/h2&gt;

&lt;p&gt;The deepest change is not that one person can now write more code.&lt;/p&gt;

&lt;p&gt;The deepest change is that one person can now carry an idea farther before asking an organization to believe in it.&lt;/p&gt;

&lt;p&gt;That changes the emotional contract of software work. A developer with a clear idea used to need permission early, because execution required resources. They needed time from other people. They needed a sprint slot. They needed a team. They needed the machinery. The idea had to survive as language long enough to earn the right to become software.&lt;/p&gt;

&lt;p&gt;Now the idea can become software sooner.&lt;/p&gt;

&lt;p&gt;That does not mean it deserves to ship. It does not mean it is correct. It does not mean the builder gets to ignore everyone else. It means the first artifact no longer has to wait for the full social machinery of production software to assemble around it.&lt;/p&gt;

&lt;p&gt;This is the thing many corporate developers feel before they can name it. The meeting hurts because the artifact is now closer than the organization thinks it is. The work is waiting behind a door that used to require a team to open. The builder now has tools in their hands.&lt;/p&gt;

&lt;p&gt;AI agents do not make developers optional. They make engineering judgment more important. They do not remove the need for teams. They remove the automatic advantage of organizational mass. They do not turn software into vibes. They give execution capacity back to the people who can already see the whole thing.&lt;/p&gt;

&lt;p&gt;The master builder is not unleashed because the machine became smart enough to replace them.&lt;/p&gt;

&lt;p&gt;The master builder is unleashed because the machine became useful enough to follow them.&lt;/p&gt;

</description>
      <category>ai</category>
    </item>
    <item>
      <title>A Framework Is Not a Platform</title>
      <dc:creator>Viktor Lázár</dc:creator>
      <pubDate>Wed, 06 May 2026 18:32:23 +0000</pubDate>
      <link>https://forem.com/lazarv/a-framework-is-not-a-platform-33ef</link>
      <guid>https://forem.com/lazarv/a-framework-is-not-a-platform-33ef</guid>
      <description>&lt;p&gt;For most of the time we have been writing web applications, two different teams answered two different questions. The framework team decided what the application looked like. The platform team decided where it ran. The line between the two questions held quietly for thirty years, and it held because nobody seriously challenged it.&lt;/p&gt;

&lt;p&gt;Rails decided how a controller talked to a model. Spring decided how a bean was wired. Express decided what a route handler looked like. None of them decided what database, proxy, cache, message bus, CDN, or regional topology the organization bought.&lt;/p&gt;

&lt;p&gt;That separation was not an accident. It was a property of how those frameworks were built. They produced a process. The process did its job. The infrastructure around the process — the CDN, the cache, the queue, the database, the function runtime, the regional layout — was someone else's job, and that someone else worked on a different review cycle, with different KPIs, accountable to different parts of the org chart.&lt;/p&gt;

&lt;p&gt;The line is being erased, and the cleanest place to see it being erased is Next.js 16. Cache Components did not just change caching. They moved an infrastructure decision into a framework API.&lt;/p&gt;

&lt;h2&gt;The handshake we used to have&lt;/h2&gt;

&lt;p&gt;A Node.js web application running on Kubernetes is a clean handshake. The application produces a request handler. The platform team picks the cluster, the ingress, the CDN, the cache backend, the secrets store, the regional topology, the function runtime if there is one. They pick those things based on cost, security posture, vendor portfolio, contractual obligations, the team's existing operational expertise, and whatever standards the org has already paid down.&lt;/p&gt;

&lt;p&gt;The framework's job, in that handshake, is to be agnostic about all of it. The same code runs behind any reverse proxy. The same code uses whatever cache the platform team chose to put in front of it. The same code can be moved between vendors without changes that touch the application's source — only the deployment surface changes, and the deployment surface is a thin layer the platform team owns end-to-end.&lt;/p&gt;

&lt;p&gt;This is what Incremental Static Regeneration looked like in practice. A Next.js application built with ISR produced HTML files and a small revalidation loop. A CDN sat in front. The CDN served the file. Occasionally, within the stale-while-revalidate window, a function regenerated the file in the background. The shape was familiar to every CDN-fronted Node host. Vercel hosted it; Netlify hosted it; Kubernetes with Cloudflare in front hosted it; a bare VPS with nginx and a cron job hosted a recognizable version of it. The economics were similar everywhere because the architecture was platform-neutral, built from a CDN-and-function shape every platform team already understood.&lt;/p&gt;
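
&lt;p&gt;That loop is simple enough to sketch. The following is a hypothetical illustration of the stale-while-revalidate shape, not Next.js internals; &lt;code&gt;IsrCache&lt;/code&gt; and its callbacks are invented names.&lt;/p&gt;

```javascript
// Minimal sketch of the ISR serving model: serve the cached output if
// fresh, serve it stale while a background regeneration refreshes it.
// Illustrative only; none of these names come from Next.js itself.
class IsrCache {
  constructor(revalidateMs, render, now = Date.now) {
    this.revalidateMs = revalidateMs;
    this.render = render; // async (path) => rendered output
    this.now = now;       // injectable clock, handy for testing
    this.entries = new Map();
    this.regenerating = new Set();
  }

  // Only the very first request for a path pays the render cost inline.
  async serve(path) {
    const entry = this.entries.get(path);
    if (!entry) {
      const html = await this.render(path);
      this.entries.set(path, { html, generatedAt: this.now() });
      return html;
    }
    const stale = this.now() - entry.generatedAt > this.revalidateMs;
    if (stale) {
      if (!this.regenerating.has(path)) {
        this.regenerating.add(path);
        // Background regeneration: the current request is never blocked.
        this.render(path).then((html) => {
          this.entries.set(path, { html, generatedAt: this.now() });
          this.regenerating.delete(path);
        });
      }
    }
    return entry.html; // stale-while-revalidate: always serve what we have
  }
}
```

&lt;p&gt;The point of the sketch is the economics: after the first render, every request is a cache read, and regeneration happens off the request path. That is the shape any CDN-and-function platform can host.&lt;/p&gt;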

&lt;p&gt;That shape is what Cache Components walks away from.&lt;/p&gt;

&lt;h2&gt;What v16 changed&lt;/h2&gt;

&lt;p&gt;Cache Components, the headline feature of Next.js 16, replaces the route-segment caching model with a directive-based one. A page is dynamic by default. The developer marks regions with &lt;code&gt;'use cache'&lt;/code&gt; to opt those regions into caching. The framework prerenders a static shell where it can, streams the dynamic regions when they resolve, and stitches the response together at request time. Inside the page, the model is elegant. I have written about it from the directive-design angle in &lt;a href="https://dev.to/lazarv/the-cache-belongs-to-the-function-6f5"&gt;The Cache Belongs to the Function&lt;/a&gt; and will not repeat that argument here.&lt;/p&gt;
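
&lt;p&gt;For orientation, the directive model looks roughly like this in application code. The directive and &lt;code&gt;cacheLife&lt;/code&gt; are as described above; the function, the URL, and the import path are illustrative, so check the Next.js docs for your version before copying the shape.&lt;/p&gt;

```javascript
// Sketch of the Cache Components opt-in shape. Everything here except the
// directive and cacheLife (both discussed in the text) is a placeholder.
import { cacheLife } from 'next/cache';

// The page is dynamic by default: nothing is cached unless a region opts in.
export async function getProducts() {
  'use cache';        // opt this function's output into caching
  cacheLife('hours'); // scope the cache lifetime to this boundary
  const res = await fetch('https://api.example.com/products'); // placeholder URL
  return res.json();
}
```

&lt;p&gt;The elegance is real: the cache boundary is the function, and its lifetime travels with it. The rest of this article is about what that boundary demands from the infrastructure underneath.&lt;/p&gt;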

&lt;p&gt;The argument here is not about what &lt;code&gt;'use cache'&lt;/code&gt; looks like to the developer writing it. It is about what the runtime requires of the infrastructure underneath, once the flag is on.&lt;/p&gt;

&lt;p&gt;A page that uses Cache Components is, mechanically, a page whose response is produced per request by the framework's renderer, with cached fragments spliced in. In the general case, the CDN can no longer serve the full response without invoking the renderer. The static parts of the page exist as cached &lt;em&gt;fragments&lt;/em&gt;, not as cacheable artifacts. The renderer must run, even on a request where every fragment is a hit, because the renderer is what knows how to assemble the fragments into a streamed response.&lt;/p&gt;

&lt;p&gt;This is a small architectural change with large consequences. It moves the unit of caching from "a complete response a CDN can serve" to "a piece of a response the renderer assembles." A CDN is the infrastructure that serves complete responses. It is not the infrastructure that assembles responses from pieces. The framework, in choosing the second model, has chosen to be the assembler — which means the framework has become a piece of infrastructure that did not used to exist between the application and the CDN.&lt;/p&gt;
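
&lt;p&gt;A minimal sketch makes the distinction concrete. This is an invented toy, not the Next.js implementation; the shell and slot names are hypothetical.&lt;/p&gt;

```javascript
// A "page" here is a shell: static text pieces with named slots between
// them. Fragments are cached; the joined response is not. Illustrative
// only, not the Next.js renderer.
const fragmentCache = new Map();

// Even on a request where every slot is a cache hit, assemble() must run:
// the stitched response is produced per request, never stored anywhere.
async function assemble(shell, slots) {
  const out = [];
  for (const part of shell) {
    if (part.static !== undefined) {
      out.push(part.static);
    } else {
      let html = fragmentCache.get(part.slot);
      if (html === undefined) {
        html = await slots[part.slot](); // render the fragment once
        fragmentCache.set(part.slot, html);
      }
      out.push(html);
    }
  }
  return out.join("");
}
```

&lt;p&gt;Nothing in the sketch ever stores the joined string. That is exactly the property a CDN needs to serve a response on its own, and exactly the property this model gives up.&lt;/p&gt;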

&lt;p&gt;Once the framework is in the request path on every request, three secondary requirements appear, each of which used to be the platform team's choice and is now the framework's demand. A cache backend has to exist, because the default in-memory cache is per-process; in practice, the framework expects a &lt;code&gt;cacheHandlers&lt;/code&gt; implementation pointing at a real backing store such as Redis. Tag invalidation has to be coordinated across instances, typically by refreshing a local view of shared invalidation state on the request path; in a clustered deployment, that becomes a round trip to shared storage the application did not used to make. The function runtime starts to matter in ways it did not before, because the dynamic-by-default model only amortizes its renderer cost on a platform that multiplexes concurrent requests across warm function invocations; on a platform without that, the cost is paid linearly with traffic.&lt;/p&gt;
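
&lt;p&gt;The tag-coordination requirement can be sketched the same way. The shared &lt;code&gt;Map&lt;/code&gt; stands in for a backing store like Redis, and every name here is hypothetical; only the shape follows the description above.&lt;/p&gt;

```javascript
// Cross-instance tag invalidation, sketched: a shared store holds a version
// per cache tag, and each instance refreshes its local view on the request
// path. Hypothetical names; the Map stands in for Redis.
const sharedTagVersions = new Map();

// Any instance can invalidate a tag by bumping its shared version.
function revalidateTag(tag) {
  sharedTagVersions.set(tag, (sharedTagVersions.get(tag) ?? 0) + 1);
}

class InstanceCache {
  constructor() {
    this.entries = new Map(); // local, per-process view
  }

  set(key, value, tags) {
    const tagVersions = new Map(
      tags.map((tag) => [tag, sharedTagVersions.get(tag) ?? 0])
    );
    this.entries.set(key, { value, tagVersions });
  }

  // The per-request round trip: consult shared state and drop any local
  // entry whose tags were invalidated by another instance.
  get(key) {
    const entry = this.entries.get(key);
    if (!entry) return undefined;
    for (const [tag, version] of entry.tagVersions) {
      if ((sharedTagVersions.get(tag) ?? 0) !== version) {
        this.entries.delete(key);
        return undefined;
      }
    }
    return entry.value;
  }
}
```

&lt;p&gt;Note where the cost lands: the shared-state check sits inside &lt;code&gt;get()&lt;/code&gt;, on the request path. With a real backing store, that check is a network round trip the application did not used to make.&lt;/p&gt;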

&lt;p&gt;None of these requirements are illegitimate as choices. They are illegitimate as &lt;em&gt;framework outputs&lt;/em&gt;. The team did not pick Redis because it wanted Redis; the team did not put a per-request lookup on the request path because it wanted one there; the team did not select a function-runtime billing model because it had a view about how Cache Components should amortize. Redis is not the problem. The problem is when Redis stops being an application choice and becomes part of the framework's performance contract.&lt;/p&gt;

&lt;h2&gt;The escape hatches that closed&lt;/h2&gt;

&lt;p&gt;In Next.js 15, the team that wanted to keep the platform-neutral economics had options. Mark a route &lt;code&gt;force-static&lt;/code&gt;. Enable Partial Prerendering per route with &lt;code&gt;experimental_ppr&lt;/code&gt;. Set a route's &lt;code&gt;revalidate&lt;/code&gt; value. Each of those decisions was visible at the route-segment level, and each one was a way for the developer to opt a route into a model the platform team's existing infrastructure already knew how to host.&lt;/p&gt;

&lt;p&gt;In v16, with &lt;code&gt;cacheComponents: true&lt;/code&gt;, those options are gone. The migration guide tells you to delete &lt;code&gt;force-dynamic&lt;/code&gt; and &lt;code&gt;force-static&lt;/code&gt;. The &lt;code&gt;experimental_ppr&lt;/code&gt; segment configuration is removed. The &lt;code&gt;revalidate&lt;/code&gt; and &lt;code&gt;fetchCache&lt;/code&gt; exports are replaced by &lt;code&gt;cacheLife&lt;/code&gt; inside &lt;code&gt;'use cache'&lt;/code&gt; boundaries. The route-segment escape hatches that used to let an application express "this page is static, please serve it as a file" are no longer in the API.&lt;/p&gt;
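
&lt;p&gt;For reference, the v15-era knobs named above sat at the top of a route file as segment config exports, roughly like this. They are shown together only for illustration; in practice each was a separate per-route choice.&lt;/p&gt;

```javascript
// app/pricing/page.jsx — Next.js 15 route segment config, per the options
// discussed in the text. The file path is a placeholder.

// Opt the route into static generation, so a CDN can serve it as a file.
export const dynamic = 'force-static';

// Regenerate the static output at most every 60 seconds (ISR).
export const revalidate = 60;

// Opt a single route into Partial Prerendering while it was experimental.
export const experimental_ppr = true;
```

&lt;p&gt;Each export was visible to the platform team as a statement about hosting shape. Under &lt;code&gt;cacheComponents: true&lt;/code&gt;, none of them exists.&lt;/p&gt;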

&lt;p&gt;The flag is opt-in, today. A team that wants the v15 economics can leave it off. But the docs already treat Cache Components as the recommended path, the dedicated PPR test suites in the repository are migrating away from a separate identity, and the trajectory of any flag that the framework team owns and recommends is well-known. Within a release or two, the recommended path becomes the default. Within a release or two after that, the legacy path becomes deprecated. The ability to refuse the new model is on a clock, and the clock is the framework team's.&lt;/p&gt;

&lt;h2&gt;
  
  
  Technically portable, economically captive
&lt;/h2&gt;

&lt;p&gt;The runtime is open source. The contract is documented. The adapters work. By the strict definition of vendor lock-in — &lt;em&gt;you cannot leave&lt;/em&gt; — there is no lock-in. Every claim a salesperson would make about the framework's portability is true.&lt;/p&gt;

&lt;p&gt;The honest definition of lock-in is not the strict one. The honest definition is: &lt;em&gt;you can leave, but the cost of leaving is large enough to change the build-vs-buy decision.&lt;/em&gt; Under that definition, Cache Components introduces a soft form of capture that ISR did not have. The runtime runs anywhere; the cost-effectiveness lives on one platform. Off that platform, the same code shape produces a meaningfully worse cost profile, a meaningfully higher operational burden, and a meaningfully lower performance ceiling.&lt;/p&gt;

&lt;p&gt;The performance ceiling is the part that is hardest to recover. On a platform that owns both the proxy and the function runtime, the static shell of a Cache-Components page can be served from the edge before the renderer is even invoked, with the dynamic stream stitched into the same response over a single connection. This is not a standard CDN primitive. It is not the contract a generic CDN signs with the application in front of it — serve a complete response, or proxy through to the origin and serve that. The handoff between a static shell and a function-produced stream, on the same connection, mid-response, is a vendor-aware proxy/runtime product. It can be built; it has not been standardized; and the team that wants it on Kubernetes is not picking it from a menu of CDN features. They are integrating bespoke pieces, or they are accepting a TTFB floor of "pod-reachable plus first render byte" instead of "edge node plus first static byte." The gap is structural, not operational.&lt;/p&gt;

&lt;p&gt;The question is not whether another platform can build the missing machinery. The question is whether an application framework should require that machinery to recover the economics it used to preserve by default.&lt;/p&gt;

&lt;p&gt;None of this is impossible to operate. It is only impossible to operate &lt;em&gt;optimally&lt;/em&gt;, because the optimum has been moved to a place only one vendor lives.&lt;/p&gt;

&lt;h2&gt;
  
  
  The pattern, beyond Next.js
&lt;/h2&gt;

&lt;p&gt;Next.js is the most aggressive case, but it is not the only framework being pulled in this direction, and the direction is a more interesting story than any one framework.&lt;/p&gt;

&lt;p&gt;Remix and React Router 7 sit at the other end of the spectrum, partly by inheritance and partly by deliberate choice. The cache contract has historically been a &lt;code&gt;headers()&lt;/code&gt; function on a loader returning standard &lt;code&gt;Cache-Control&lt;/code&gt; directives. The CDN does what CDNs do; the framework does not need a backing store, a tag manifest, or a request-time invalidation hook. Whether that posture survives future product pressure is an open question, but today the cache story is platform-neutral by construction.&lt;/p&gt;
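
&lt;p&gt;Concretely, the whole contract fits in a route module and speaks standard HTTP — nothing framework-shaped sits behind it:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// a Remix / React Router 7 route module
export function headers() {
  return {
    "Cache-Control": "public, s-maxage=3600, stale-while-revalidate=86400",
  };
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;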

&lt;p&gt;SvelteKit and Astro preserve the older bargain through adapters and static-first output. The application produces a generic artifact; the adapter materializes it into a deployment-specific shape only when the application has earned a dynamic runtime. The specifics stay at the deployment seam rather than seeping into the application source.&lt;/p&gt;

&lt;p&gt;Nuxt sits in the middle. Nitro's caching primitives are function-level and storage-pluggable rather than render-coupled, so a Nuxt application can express a cached value without dragging the rendering pipeline into the request path. The framework has caching, but it has not annexed caching as infrastructure.&lt;/p&gt;
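
&lt;p&gt;A sketch of that shape, using Nitro's &lt;code&gt;defineCachedFunction&lt;/code&gt; — the computation is hypothetical, and the backing storage is a Nitro storage mount the application picks:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Nuxt (Nitro) — server/utils/stats.js
export const getStats = defineCachedFunction(
  async () =&amp;gt; computeStats(), // hypothetical expensive work
  { name: "stats", maxAge: 60, swr: true }
);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;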

&lt;p&gt;TanStack Start sits on a different axis altogether. It is router-and-query first, not renderer-and-cache first. Its primitives — TanStack Router, TanStack Query, server functions, loaders — describe what data should flow where, not what infrastructure should hold the cache. The cache lives with the query, function-level and storage-pluggable, the way TanStack Query has always shipped it. The framework does not need a Redis backing store, a tag manifest, or a request-time invalidation hook to be correct; the application's freshness is a property of its queries, not of the framework's renderer. It is a different architecture from Next.js, not a competing implementation of the same one.&lt;/p&gt;

&lt;p&gt;The structural caution is general, not aimed at any one project: a framework that adopts the renderer-and-cache architecture without the matching platform machinery inherits the hard part without inheriting the economic advantage.&lt;/p&gt;

&lt;p&gt;Some runtimes refuse this trade by construction. That is the line I have tried to hold in &lt;code&gt;@lazarv/react-server&lt;/code&gt; — a cache primitive that lives with the function, a router that is opt-in rather than load-bearing, a deployment story handled at the build seam rather than at the source. Hono, Fastify, Express — the Node frameworks cut in the older mold — never had this problem because they never tried to absorb infrastructure decisions in the first place. They stay frameworks because they stay small.&lt;/p&gt;

&lt;p&gt;The point is not that every framework should look like the smaller ones. The point is that there is a spectrum, the spectrum has been visible for years, and the choice each framework makes about where to sit on it shapes the economics of every team that picks it.&lt;/p&gt;

&lt;h2&gt;
  
  
  What "framework" used to mean
&lt;/h2&gt;

&lt;p&gt;A framework, historically, is a thing you pick up to write an application. The decision is local. The team's senior engineer reads two days of docs, the team's frontend lead does a spike, the team picks one, and the work moves forward. The decision does not require sign-off from security, platform, FinOps, procurement, or an architecture review board. It does not need to, because the framework's blast radius is the application source.&lt;/p&gt;

&lt;p&gt;A platform is a thing you provision. The decision is organizational. It involves vendor risk review, multi-year contracts, integration with the org's authentication and observability, alignment with the org's existing infrastructure, and the long tail of "what happens if this provider gets acquired" thinking. Those reviews exist because the wrong platform decision is hard to walk back, and because the people who feel the consequences are not the same people who made the call.&lt;/p&gt;

&lt;p&gt;When a framework's correctness and performance start to require a specific cache topology, a specific function runtime, a specific proxy behavior, the framework has crossed the category line. Picking it is no longer a local decision. It is a platform decision dressed as a framework decision, and the people who would normally weigh in on a platform decision are not in the room when it is made. The frontend lead picks Next.js because Next.js is what frontend leads pick; the cost of that choice shows up months later, in a Redis bill, in a Lambda invocation count, in a p99 graph that nobody can explain to the CFO without a paragraph of caveats.&lt;/p&gt;

&lt;p&gt;This is the part of the trade that does not recover quickly. Money recovers. A team can switch frameworks; it is painful but bounded. What does not recover is the org's awareness that infrastructure was a thing the org was supposed to choose. The next framework that ships on the same model finds the ground already prepared. Each one normalizes the next.&lt;/p&gt;

&lt;h2&gt;
  
  
  The line we forgot
&lt;/h2&gt;

&lt;p&gt;A framework is not a platform, and a platform should not pretend to be a framework.&lt;/p&gt;

&lt;p&gt;The honest test for any tool wearing the framework label is the one this article has been circling. &lt;em&gt;What infrastructure does it require us to operate? What is the degraded-mode cost if we don't?&lt;/em&gt; A tool whose answers are "your existing Node host, and roughly the same as before" is a framework. A tool whose answers are "vendor-shaped infrastructure, and meaningfully worse" is something else. It does not have to be a worse thing. It does have to be named for what it is, because the people responsible for the answers to those two questions used to be the ones making the decision.&lt;/p&gt;

&lt;p&gt;The dev/ops handshake we used to have was not nostalgia. It was a real division of labor that let frameworks evolve without dragging infrastructure along, and let platforms evolve without rewriting applications. It let teams stay in motion. It let small projects stay small. It let large projects choose where they ran on the basis of their own constraints, not the framework's.&lt;/p&gt;

&lt;p&gt;We are losing that division of labor one framework choice at a time, mostly without noticing, and the cost is showing up in places — bills, latency floors, operational complexity, vendor leverage — that nobody connected to the original decision back when it was just "what should we use to build the app."&lt;/p&gt;

&lt;p&gt;A framework should be replaceable without replacing the infrastructure underneath it. Infrastructure should not become a consequence of the framework. When those two roles invert, the team has stopped owning the most important architectural surface in the system, and the framework's authors have started.&lt;/p&gt;

&lt;p&gt;A framework is not a platform. The two have always known what they were. We are the ones who forgot.&lt;/p&gt;

</description>
      <category>nextjs</category>
      <category>redis</category>
      <category>architecture</category>
      <category>infrastructure</category>
    </item>
    <item>
      <title>Time to Yield</title>
      <dc:creator>Viktor Lázár</dc:creator>
      <pubDate>Sun, 03 May 2026 12:10:32 +0000</pubDate>
      <link>https://forem.com/lazarv/time-to-yield-20m8</link>
      <guid>https://forem.com/lazarv/time-to-yield-20m8</guid>
      <description>&lt;p&gt;&lt;em&gt;An SSG benchmark across five React frameworks, from one thousand&lt;br&gt;
pages to half a million.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;You're building a marketplace. Or a documentation site. A wiki,&lt;br&gt;
a generated archive, any of a dozen things that ship a static&lt;br&gt;
catalogue at scale. Your CMS has a hundred thousand entries.&lt;br&gt;
You've picked your SSG. You run the build.&lt;/p&gt;

&lt;p&gt;Five minutes. Ten. Twenty. Maybe an hour. Maybe a stack trace.&lt;/p&gt;

&lt;p&gt;You don't know in advance — and the public benchmarks won't tell&lt;br&gt;
you. Most stop at a thousand pages, where most real catalogues&lt;br&gt;
start. The gap between what gets measured and what gets shipped&lt;br&gt;
is where the unpleasant surprises live, and the engineer who has&lt;br&gt;
to ship into that gap usually finds out which side of it their&lt;br&gt;
tool was designed for at deploy time.&lt;/p&gt;

&lt;p&gt;So I built a &lt;a href="https://github.com/lazarv/ssg-bench" rel="noopener noreferrer"&gt;benchmark&lt;/a&gt; for the gap.&lt;/p&gt;


&lt;h2&gt;
  
  
  The benchmark
&lt;/h2&gt;

&lt;p&gt;Five frameworks in a pnpm workspace, each rendering one dynamic&lt;br&gt;
route &lt;code&gt;/posts/[id]&lt;/code&gt; from a shared deterministic data source. Same&lt;br&gt;
content, same shape, idiomatic config per tool. The output has to&lt;br&gt;
be pure deployable static HTML — no Node runtime is allowed at&lt;br&gt;
request time, which is the whole point of SSG. The harness sweeps&lt;br&gt;
&lt;code&gt;PAGE_COUNT&lt;/code&gt; across &lt;code&gt;1k → 10k → 100k → 200k → 300k → 400k → 500k&lt;/code&gt;,&lt;br&gt;
measures wall time, time-to-first-page (TTFP), peak RSS, output&lt;br&gt;
size, and validates a sample of generated HTML actually contains&lt;br&gt;
the right &lt;code&gt;Post #N&lt;/code&gt; content. It's all in&lt;br&gt;
&lt;a href="https://github.com/lazarv/ssg-bench/blob/main/bench" rel="noopener noreferrer"&gt;&lt;code&gt;bench/&lt;/code&gt;&lt;/a&gt;.&lt;/p&gt;
&lt;h2&gt;
  
  
  The contestants
&lt;/h2&gt;

&lt;p&gt;Five different bets on what static-site generation should look&lt;br&gt;
like in 2026.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Next.js (&lt;code&gt;apps/next&lt;/code&gt;)&lt;/strong&gt; — Vercel's framework, version 16, App&lt;br&gt;
Router and Turbopack. The most-deployed React tool in the world&lt;br&gt;
and the default reference point for any tooling comparison. Its&lt;br&gt;
strengths are well documented elsewhere; what this benchmark&lt;br&gt;
exercises is one of its many output modes — &lt;code&gt;output: "export"&lt;/code&gt;,&lt;br&gt;
the fully static path with no Node runtime at request time.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;TanStack Start (&lt;code&gt;apps/tanstack&lt;/code&gt;)&lt;/strong&gt; — the youngest entry, from&lt;br&gt;
the team behind TanStack Router and Query. Vite plus a Nitro-&lt;br&gt;
backed prerender plugin, file-system routing, currently in the&lt;br&gt;
1.x line and rapidly evolving. Prerendering takes a materialized&lt;br&gt;
&lt;code&gt;pages&lt;/code&gt; array of paths declared inside the Vite config.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Gatsby (&lt;code&gt;apps/gatsby&lt;/code&gt;)&lt;/strong&gt; — the old guard. GraphQL-driven by&lt;br&gt;
default, Redux-backed build cache, a sprawling plugin ecosystem,&lt;br&gt;
now maintained by Netlify since its 2023 acquisition. It pre-dates every&lt;br&gt;
other entry here by years and has a distinct mental model:&lt;br&gt;
imperative &lt;code&gt;createPage&lt;/code&gt; calls inside a &lt;code&gt;gatsby-node.mjs&lt;/code&gt;&lt;br&gt;
lifecycle hook. People left it for Next.js partly because Gatsby&lt;br&gt;
builds were slow at scale; it's interesting to find out whether&lt;br&gt;
that's still the relevant fact.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Astro (&lt;code&gt;apps/astro&lt;/code&gt;)&lt;/strong&gt; — a static-first multi-framework site&lt;br&gt;
builder. Strictly speaking it isn't running React in this&lt;br&gt;
benchmark; pages are written in Astro's own &lt;code&gt;.astro&lt;/code&gt; template&lt;br&gt;
language with a fast static optimizer. It's included as the&lt;br&gt;
ceiling — the answer to "how fast can a non-React SSG go?" —&lt;br&gt;
against which the React-runtime entries can be measured fairly.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="https://www.npmjs.com/package/@lazarv/react-server" rel="noopener noreferrer"&gt;@lazarv/react-server&lt;/a&gt; (&lt;code&gt;apps/react-server&lt;/code&gt;)&lt;/strong&gt; —&lt;br&gt;
an open React Server Components runtime built on Vite 8's&lt;br&gt;
Environment API with Rolldown as the production bundler.&lt;br&gt;
Disclosure: I wrote it. It's in this comparison because it's the&lt;br&gt;
only React-runtime entry whose static-export pipeline accepts a&lt;br&gt;
streaming path source — which, as the rest of this article will&lt;br&gt;
show, turns out to be the decisive design choice.&lt;/p&gt;
&lt;h2&gt;
  
  
  The headline
&lt;/h2&gt;

&lt;p&gt;At a thousand pages, every modern tool finishes in seconds and&lt;br&gt;
the table is a wash. At ten thousand, the leaders open a small&lt;br&gt;
lead. The interesting story starts at a hundred thousand. The&lt;br&gt;
decisive story starts above two hundred thousand.&lt;/p&gt;

&lt;p&gt;I'll give you the whole thing chart by chart, but here's the&lt;br&gt;
spoiler. At 100,000 pages:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Framework&lt;/th&gt;
&lt;th&gt;wall&lt;/th&gt;
&lt;th&gt;ttfp&lt;/th&gt;
&lt;th&gt;output bytes&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Astro&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;22.6s&lt;/td&gt;
&lt;td&gt;2.18s&lt;/td&gt;
&lt;td&gt;47 MiB&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;&lt;a class="mentioned-user" href="https://dev.to/lazarv"&gt;@lazarv&lt;/a&gt;/react-server&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;26.1s&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;1.63s&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;83 MiB&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;TanStack Start&lt;/td&gt;
&lt;td&gt;36.9s&lt;/td&gt;
&lt;td&gt;2.65s&lt;/td&gt;
&lt;td&gt;172 MiB&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Gatsby&lt;/td&gt;
&lt;td&gt;62.1s&lt;/td&gt;
&lt;td&gt;7.91s&lt;/td&gt;
&lt;td&gt;189 MiB&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Next.js&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;264.5s&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;124s&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;1.84 GiB&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;At 200,000 pages, Next.js's build crashes — exit 1, no HTML.&lt;/p&gt;


&lt;h2&gt;
  
  
  The chart that broke the pattern
&lt;/h2&gt;

&lt;p&gt;Most benchmark charts are roughly parallel lines: the same&lt;br&gt;
ranking from one page count to the next, gaps roughly constant,&lt;br&gt;
nothing that asks you to stop and look. This one isn't.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fllrtu2lp4i13ao3vz6ii.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fllrtu2lp4i13ao3vz6ii.png" alt="Time to first page" width="800" height="467"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;react-server's TTFP is a flat line. From a thousand pages to half&lt;br&gt;
a million, the time between "I started the build" and "the first&lt;br&gt;
HTML file appeared on disk" stays between 1.4 and 3.2 seconds.&lt;br&gt;
Astro and TanStack Start curve gently upward. Gatsby's curve&lt;br&gt;
starts mid-air at 5 seconds and climbs to over a hundred. Next.js&lt;br&gt;
sits between them within its working range, climbing from 2.9s at&lt;br&gt;
1k pages to 124s at 100k.&lt;/p&gt;

&lt;p&gt;What you're looking at is a single architectural decision, made&lt;br&gt;
once, repeated through every layer of each pipeline. One framework&lt;br&gt;
streams its work. The others batch it.&lt;/p&gt;
&lt;h2&gt;
  
  
  Yield, don't return
&lt;/h2&gt;

&lt;p&gt;When you tell an SSG to render &lt;code&gt;/posts/[id]&lt;/code&gt; for many IDs, it has&lt;br&gt;
to ask you for the list. The shape of that question — the API your&lt;br&gt;
config file uses — turns out to determine almost everything else.&lt;/p&gt;

&lt;p&gt;Most frameworks ask you for an array.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// Next.js — apps/next/app/posts/[id]/page.jsx&lt;/span&gt;
&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;dynamicParams&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;generateStaticParams&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nf"&gt;allIds&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;map&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;String&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;}));&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;---
// Astro — apps/astro/src/pages/posts/[id].astro
export async function getStaticPaths() {
  return allIds().map((id) =&amp;gt; ({ params: { id: String(id) } }));
}
---
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// TanStack Start — apps/tanstack/vite.config.mjs&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;pages&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;allIds&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;map&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;`/posts/&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;prerender&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;enabled&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;outputPath&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;`/posts/&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;/index.html`&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
&lt;span class="p"&gt;}));&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The shape is identical: build an array, return an array. The&lt;br&gt;
runtime then has to materialize that array — all hundred thousand&lt;br&gt;
elements of it — before any rendering can start. The first page&lt;br&gt;
of HTML cannot be written before the last entry of the path list&lt;br&gt;
has been allocated.&lt;/p&gt;

&lt;p&gt;react-server asks the same question differently:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// react-server — apps/react-server/src/pages/posts/[id].static.mjs&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;idStream&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;@ssg-test/shared&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="k"&gt;default&lt;/span&gt; &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;for &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;id&lt;/span&gt; &lt;span class="k"&gt;of&lt;/span&gt; &lt;span class="nf"&gt;idStream&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;yield&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;String&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;};&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It's an async generator. The router pulls one descriptor at a&lt;br&gt;
time, when a render worker is free. The path list is never in&lt;br&gt;
memory all at once; peak memory of the path source is &lt;code&gt;O(1)&lt;/code&gt;,&lt;br&gt;
regardless of N. As soon as the first descriptor is yielded, the&lt;br&gt;
first page can render. As soon as the first page renders, it lands&lt;br&gt;
on disk. The rest of the build is just keeping the workers fed.&lt;/p&gt;
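
&lt;p&gt;The pull model fits in a few lines of plain JavaScript — a simplified sketch of the mechanism, not react-server's internals:&lt;/p&gt;

```javascript
// A simplified model of a streaming build: N workers share one async
// generator, so at most N path descriptors are ever in flight and the
// full path list never exists in memory.
async function* idStream(n) {
  let i = 0;
  while (i !== n) {
    i += 1;
    yield { id: String(i) };
  }
}

async function renderAll(paths, render, concurrency) {
  let pages = 0;
  const worker = async () => {
    // Sharing one iterator is safe: an async generator queues its
    // next() calls, so each worker pulls a distinct descriptor.
    for await (const params of paths) {
      await render(params);
      pages += 1;
    }
  };
  await Promise.all(Array.from({ length: concurrency }, worker));
  return pages;
}
```

&lt;p&gt;The first &lt;code&gt;render&lt;/code&gt; call starts after the first yield, not after the last allocation — which is the whole TTFP story in miniature.&lt;/p&gt;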

&lt;p&gt;The runtime documents this contract explicitly at&lt;br&gt;
&lt;a href="https://react-server.dev/router/static#streaming-static-paths" rel="noopener noreferrer"&gt;react-server.dev/router/static#streaming-static-paths&lt;/a&gt;&lt;br&gt;
— and the detection is by &lt;strong&gt;function kind&lt;/strong&gt;: write &lt;code&gt;async&lt;br&gt;
function*&lt;/code&gt; directly as the default export, or fall back to the&lt;br&gt;
legacy array contract. There's no opt-in flag. The shape of your&lt;br&gt;
function is the shape of the build.&lt;/p&gt;

&lt;p&gt;You can chain the same idea at the config level, which is what the&lt;br&gt;
benchmark does to skip RSC payload sidecars (the other frameworks&lt;br&gt;
emit HTML only; we want the bytes column to compare like with like):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// react-server — apps/react-server/react-server.config.mjs&lt;/span&gt;
&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="k"&gt;default&lt;/span&gt; &lt;span class="nf"&gt;defineConfig&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="na"&gt;root&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;src/pages&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="k"&gt;export&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;paths&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="k"&gt;await &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;p&lt;/span&gt; &lt;span class="k"&gt;of&lt;/span&gt; &lt;span class="nx"&gt;paths&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="k"&gt;yield&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="p"&gt;...&lt;/span&gt;&lt;span class="nx"&gt;p&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;rsc&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt; &lt;span class="p"&gt;};&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;},&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Two &lt;code&gt;async function*&lt;/code&gt; shapes — one in the route, one in the&lt;br&gt;
config. The whole streaming property of the build comes from&lt;br&gt;
those two declarations. Look at the TTFP chart again with this in&lt;br&gt;
mind: react-server is renderer-bound; everyone else is array-bound.&lt;/p&gt;
&lt;h2&gt;
  
  
  Things start to fall apart at a hundred thousand
&lt;/h2&gt;

&lt;p&gt;If TTFP is the early-warning signal, total wall time is where the&lt;br&gt;
architecture pays its real bill.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2ircrhbjop5gj1za5jeu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2ircrhbjop5gj1za5jeu.png" alt="Build wall time" width="800" height="467"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;At a thousand pages, every framework here finishes in single-digit&lt;br&gt;
seconds and you'd struggle to feel the difference in a CI log. The&lt;br&gt;
slope of the curves is what matters, and the slope diverges hard&lt;br&gt;
above ten thousand.&lt;/p&gt;

&lt;p&gt;By a hundred thousand pages, react-server has finished in &lt;strong&gt;26&lt;br&gt;
seconds&lt;/strong&gt;. Astro, the leader, in &lt;strong&gt;22.6 seconds&lt;/strong&gt;. TanStack Start&lt;br&gt;
in 37. Gatsby in just over a minute.&lt;/p&gt;

&lt;p&gt;Next.js takes &lt;strong&gt;four and a half minutes&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;For the same work. Same content, same hundred thousand pages on&lt;br&gt;
disk. Next.js's curve is steeper than linear above 50k pages, and&lt;br&gt;
by 100k the wall time is into the "go for a coffee" territory&lt;br&gt;
that distinguishes a benchmark from a real engineering decision.&lt;/p&gt;

&lt;p&gt;The other notable result at this scale: at 100,000 pages, Gatsby&lt;br&gt;
finishes faster than Next.js. 62 seconds versus 264. Gatsby&lt;br&gt;
has a long-standing reputation for slow builds at scale, and&lt;br&gt;
that reputation isn't unfair, but on this specific workload it&lt;br&gt;
crosses the line first. The framework people moved off of for&lt;br&gt;
build performance is now, on this measurement, the faster of&lt;br&gt;
the two.&lt;/p&gt;

&lt;p&gt;The same data reads sharper as throughput: pages produced per&lt;br&gt;
second — the inverse of the per-page work each framework does once&lt;br&gt;
it's warmed up.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fab0nkqs2drpckfhl67kk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fab0nkqs2drpckfhl67kk.png" alt="Throughput" width="800" height="467"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The four frameworks that complete the workload all reach a&lt;br&gt;
plateau somewhere above ten thousand pages — a steady-state&lt;br&gt;
pages-per-second ceiling that holds up the rest of the way.&lt;br&gt;
&lt;a class="mentioned-user" href="https://dev.to/lazarv"&gt;@lazarv&lt;/a&gt;/react-server runs around 3,000–3,800 pages/s; Astro&lt;br&gt;
3,000–4,400; TanStack Start 1,900–2,700; Gatsby 1,500–1,700.&lt;br&gt;
The plateaus tell you how much overhead each framework has&lt;br&gt;
amortized away once the build is steady.&lt;/p&gt;

&lt;p&gt;Next.js never reaches a plateau. Its throughput peaks at 480&lt;br&gt;
pages/s at 10k, drops to 378 pages/s at 100k, and crashes before&lt;br&gt;
it can be measured at higher counts. The build is doing &lt;strong&gt;more&lt;br&gt;
work per page as the page count grows&lt;/strong&gt; — the opposite of what&lt;br&gt;
amortization should produce. That trajectory is what makes the&lt;br&gt;
next section's failure mode predictable in retrospect: a&lt;br&gt;
pipeline whose per-page cost is increasing was always going to&lt;br&gt;
hit a ceiling.&lt;/p&gt;
&lt;h2&gt;
  
  
  The wall
&lt;/h2&gt;

&lt;p&gt;Then I cranked the count to two hundred thousand.&lt;/p&gt;

&lt;p&gt;The build crashed.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;RangeError: Maximum call stack size exceeded
    at ignore-listed frames

&amp;gt; Build error occurred
Error: Failed to collect page data for /posts/[id]
    at ignore-listed frames {
  type: 'Error'
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Three seconds of CPU. No HTML. Exit code 1. Next.js's "collect&lt;br&gt;
page data" phase — the step that runs after Turbopack compiles&lt;br&gt;
your app and before the worker pool starts rendering — overflows&lt;br&gt;
V8's call stack.&lt;/p&gt;

&lt;p&gt;I bumped to 300k, 400k, 500k. Same crash, every time. The error&lt;br&gt;
itself is forthright: stack overflow, here's the phase. What the&lt;br&gt;
error can't tell you is that the input the pipeline cannot handle&lt;br&gt;
is your own page list, and that there is no flag in &lt;code&gt;next.config&lt;/code&gt;&lt;br&gt;
to ask for a different consumer of it.&lt;/p&gt;

&lt;p&gt;A &lt;code&gt;RangeError: Maximum call stack size exceeded&lt;/code&gt; is a recursion&lt;br&gt;
fingerprint. Something in Next's pipeline is walking the params&lt;br&gt;
array via naive recursion — JSON-serializing it, normalizing it,&lt;br&gt;
hashing it for the data cache, building a tree from it, take your&lt;br&gt;
pick — with recursion depth proportional to the array length&lt;br&gt;
itself, not to its log. (A balanced-tree traversal would push&lt;br&gt;
log₂(200,000) ≈ 18 frames; nowhere near any stack limit. The&lt;br&gt;
overflow only makes sense if each entry contributes a roughly&lt;br&gt;
constant number of frames.) At 100k entries that depth still fits&lt;br&gt;
inside V8's default stack. At 200k it doesn't.&lt;/p&gt;
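&lt;p&gt;The failure shape is easy to reproduce in isolation. The sketch below is illustrative only, not Next.js source: linear-depth recursion over a 200,000-entry array overflows a default V8 stack, while a single iterative pass over the same array does not.&lt;/p&gt;

```javascript
// Illustrative sketch, not Next.js internals: one stack frame per array
// entry means recursion depth grows linearly with the params list.
function walkRecursive(list, i) {
  if (i === list.length) return 0;
  return 1 + walkRecursive(list, i + 1); // depth equals list.length
}

// The iterative equivalent touches every entry at constant stack depth.
function walkIterative(list) {
  let count = 0;
  for (const entry of list) count += 1;
  return count;
}

const params = Array.from({ length: 200000 }, (_, i) => ({ id: String(i) }));

try {
  walkRecursive(params, 0);
} catch (err) {
  console.log(err.name); // RangeError: linear depth blows the stack at this size
}
console.log(walkIterative(params)); // 200000, regardless of length
```

&lt;p&gt;No heap flag changes this outcome: the stack limit is separate from &lt;code&gt;--max-old-space-size&lt;/code&gt;, which is exactly the point the crash above makes.&lt;/p&gt;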

&lt;p&gt;This is not something &lt;code&gt;--max-old-space-size=8192&lt;/code&gt; can fix (we&lt;br&gt;
tried). It's not a memory issue at all. It's an &lt;strong&gt;algorithmic&lt;br&gt;
ceiling&lt;/strong&gt;: Next.js's page-data collection is implemented as&lt;br&gt;
recursive traversal over the materialized params array, and that&lt;br&gt;
recursion has a depth limit baked into the JavaScript engine. You&lt;br&gt;
cannot grow your way past it. There is no flag because there is&lt;br&gt;
no knob to turn.&lt;/p&gt;

&lt;p&gt;The runtime &lt;em&gt;requires&lt;/em&gt; the array contract — &lt;code&gt;generateStaticParams&lt;/code&gt;&lt;br&gt;
must return one — and the pipeline that consumes it cannot tolerate&lt;br&gt;
arrays past a certain size. Both halves of that statement are&lt;br&gt;
architecture, not bugs.&lt;/p&gt;

&lt;p&gt;react-server, on the same hardware, with the same content, spent&lt;br&gt;
&lt;strong&gt;155 seconds&lt;/strong&gt; on five hundred thousand pages. First HTML on&lt;br&gt;
disk: 2.87 seconds. The same TTFP it has at a thousand pages.&lt;br&gt;
Nothing in its pipeline ever sees a 500,000-element array, because&lt;br&gt;
nothing in its pipeline is allowed to construct one.&lt;/p&gt;

&lt;h2&gt;
  
  
  What's actually in the output directory
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F651z8ubr6ggoupdghkie.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F651z8ubr6ggoupdghkie.png" alt="Deployable output size" width="800" height="467"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If the wall time is the loud problem, the output bytes are the&lt;br&gt;
quiet one. At 100,000 pages:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Astro emits &lt;strong&gt;47 MiB&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;react-server emits &lt;strong&gt;83 MiB&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;TanStack Start emits &lt;strong&gt;172 MiB&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Gatsby emits &lt;strong&gt;189 MiB&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Next.js emits &lt;strong&gt;1.84 GiB&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;At 100k pages, the deployable Next.js output is roughly forty&lt;br&gt;
times larger than Astro's and twenty times larger than react-&lt;br&gt;
server's. The bulk of it is per-page files: a &lt;code&gt;.txt&lt;/code&gt; RSC payload&lt;br&gt;
sidecar for every route, used to power client-router prefetch on&lt;br&gt;
navigation, plus a runtime bundle the page links to for hydration&lt;br&gt;
even on routes without &lt;code&gt;"use client"&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Both files are part of the App Router's contract: the &lt;code&gt;.txt&lt;/code&gt;&lt;br&gt;
payload exists so the client router can prefetch, the runtime&lt;br&gt;
exists so client components can hydrate. They're features of the&lt;br&gt;
deployment topology Next.js is designed for. The trade-off, when&lt;br&gt;
the deployment is fully static and no client component is ever&lt;br&gt;
going to run, is that the contract still ships. There's no&lt;br&gt;
documented flag to drop either for &lt;code&gt;output: "export"&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;react-server makes the equivalent choice in the opposite&lt;br&gt;
direction: emit HTML only by default for fully static export, and&lt;br&gt;
let the user opt back into RSC payload sidecars per path if they&lt;br&gt;
want them. The benchmark's config-level &lt;code&gt;export()&lt;/code&gt; hook tags every&lt;br&gt;
yielded path with &lt;code&gt;rsc: false&lt;/code&gt; to keep the bytes column comparing&lt;br&gt;
HTML to HTML.&lt;/p&gt;
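&lt;p&gt;A hedged sketch of what that hook might look like. The async-generator contract and the &lt;code&gt;rsc: false&lt;/code&gt; tag are taken from the description above; the config file name and the &lt;code&gt;path&lt;/code&gt; property name are assumptions to verify against react-server's documentation.&lt;/p&gt;

```javascript
// react-server.config.mjs (hypothetical shape): a config-level export()
// hook written as an async generator. Each yielded path is tagged
// rsc: false so the export emits HTML only, with no RSC payload sidecar.
export default {
  export: async function* () {
    for (let id = 1; id !== 100001; id += 1) {
      yield { path: `/posts/${id}`, rsc: false };
    }
  },
};
```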

&lt;h2&gt;
  
  
  Memory: where Gatsby still hurts and &lt;a class="mentioned-user" href="https://dev.to/lazarv"&gt;@lazarv&lt;/a&gt;/react-server stays quiet
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjv319xmw67jji8m3toqs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjv319xmw67jji8m3toqs.png" alt="Peak resident memory" width="800" height="467"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The memory chart is shaped a lot like the wall chart, with one&lt;br&gt;
outlier: Gatsby. Gatsby's build cache is a Redux store that&lt;br&gt;
appends every &lt;code&gt;createPage&lt;/code&gt; call into in-memory state, and it never&lt;br&gt;
sheds that state until the build finishes. At 500k pages, Gatsby's&lt;br&gt;
peak resident set hits &lt;strong&gt;9.55 GiB&lt;/strong&gt;. Long-time Gatsby users will&lt;br&gt;
be unsurprised; this is what &lt;code&gt;gatsby build&lt;/code&gt; has always done.&lt;/p&gt;

&lt;p&gt;react-server holds between &lt;strong&gt;1.2 GiB at a thousand pages and 2.6&lt;br&gt;
GiB at half a million&lt;/strong&gt; — essentially flat above 10k. TanStack&lt;br&gt;
Start ranges from &lt;strong&gt;600 MiB at 1k to 3.6 GiB at 400k&lt;/strong&gt; before&lt;br&gt;
nudging back down to 3.1 GiB at 500k. Astro is the leanest of all&lt;br&gt;
at &lt;strong&gt;0.6 to 1.8 GiB&lt;/strong&gt; across the same range.&lt;/p&gt;

&lt;p&gt;The streaming path source is one reason react-server's memory&lt;br&gt;
curve flattens. The bigger reason is what it doesn't accumulate:&lt;br&gt;
no per-route manifest, no fingerprinted asset graph for every&lt;br&gt;
page, no client-router prefetch index. Whatever doesn't exist&lt;br&gt;
doesn't take memory.&lt;/p&gt;




&lt;h2&gt;
  
  
  A note about Astro
&lt;/h2&gt;

&lt;p&gt;Astro is the fastest tool in this benchmark. It deserves the&lt;br&gt;
credit, with one important asterisk: &lt;strong&gt;Astro isn't running React&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;In &lt;code&gt;apps/astro/src/pages/posts/[id].astro&lt;/code&gt;, the page is written in&lt;br&gt;
Astro's own template language. There's no React reconciler, no&lt;br&gt;
hydration framework, no Server Components flight protocol — it's&lt;br&gt;
closer to JSX-flavored server-side templating with a fast static&lt;br&gt;
optimizer. Astro is the &lt;em&gt;right ceiling&lt;/em&gt; for "what can a static-&lt;br&gt;
site generator do at all," but it isn't an apples-to-apples&lt;br&gt;
comparison with React-runtime tools.&lt;/p&gt;

&lt;p&gt;Which makes the next sentence the actual story.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;a class="mentioned-user" href="https://dev.to/lazarv"&gt;@lazarv&lt;/a&gt;/react-server matches Astro.&lt;/strong&gt; Within ~15% on wall time&lt;br&gt;
at 100k (26s vs 22.6s), and &lt;strong&gt;faster on TTFP&lt;/strong&gt; (1.63s vs 2.18s). And it&lt;br&gt;
does this while running the actual React Server Components&lt;br&gt;
production server — the same one a deployment would serve at&lt;br&gt;
request time, bundled by Vite 8 and Rolldown, driven by a&lt;br&gt;
streaming path source. The HTML on disk after the export is the&lt;br&gt;
HTML the production server would have produced for a real&lt;br&gt;
request. A real React runtime moving at static-template-engine&lt;br&gt;
speed.&lt;/p&gt;

&lt;p&gt;That isn't a result you get by accident.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why this works
&lt;/h2&gt;

&lt;p&gt;A React Server Components runtime keeping pace with a static-&lt;br&gt;
template engine doesn't happen because someone optimized a hot&lt;br&gt;
loop. It happens because the architecture has fewer places for&lt;br&gt;
work to pile up. Five things contribute, and none of them are&lt;br&gt;
clever; they're all just the absence of unnecessary buffers.&lt;/p&gt;

&lt;p&gt;The build phase produces a &lt;strong&gt;real production server&lt;/strong&gt;. Vite 8 and&lt;br&gt;
Rolldown bundle the runtime exactly as it would run at request&lt;br&gt;
time; the static export then starts that bundled server and asks&lt;br&gt;
it to render each yielded path. The thing that produces your HTML&lt;br&gt;
during the export is the same thing that would serve your HTML if&lt;br&gt;
you weren't exporting. There is no separate build-only renderer,&lt;br&gt;
no compile-time-only sandbox, no special static-export pipeline&lt;br&gt;
running its own copy of half the framework. Whatever the&lt;br&gt;
production server can render at request time, the export can&lt;br&gt;
produce. Two phases — bundle, then render — but the second phase&lt;br&gt;
is the production server you'd deploy, not a parallel universe&lt;br&gt;
of build-time machinery.&lt;/p&gt;

&lt;p&gt;The static path source &lt;strong&gt;streams by contract&lt;/strong&gt;. Both&lt;br&gt;
&lt;code&gt;[id].static.mjs&lt;/code&gt; and the config-level &lt;code&gt;export()&lt;/code&gt; are&lt;br&gt;
&lt;code&gt;async function*&lt;/code&gt; shapes that the router pulls from. Memory of the&lt;br&gt;
path source is &lt;code&gt;O(1)&lt;/code&gt;. Rendering can start on the first yielded path.&lt;/p&gt;
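&lt;p&gt;A minimal sketch of that contract. These are illustrative shapes, not the framework's actual source; the consumer stands in for the router.&lt;/p&gt;

```javascript
// A streaming path source: an async generator the renderer pulls from.
// Only the current params object is alive at any moment, so the source's
// memory stays O(1) no matter how many pages it describes.
async function* staticPaths(total) {
  for (let i = 1; i !== total + 1; i += 1) {
    yield { id: String(i) }; // produced on demand, never held in an array
  }
}

// A minimal pull-based consumer (a stand-in for the router): rendering
// begins as soon as the first path is yielded.
async function renderAll(paths, renderOne) {
  let rendered = 0;
  for await (const params of paths) {
    await renderOne(params);
    rendered += 1;
  }
  return rendered;
}
```

&lt;p&gt;The consumer pulls, the source yields, and nothing upstream ever holds more than one params object at a time.&lt;/p&gt;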

&lt;p&gt;Render workers are &lt;strong&gt;driven by the stream&lt;/strong&gt;. The&lt;br&gt;
&lt;code&gt;--export-concurrency&lt;/code&gt; flag forks N child processes; each runs its&lt;br&gt;
own RSC main thread plus an SSR worker thread; the coordinator&lt;br&gt;
dispatches one path per free worker. Output bytes never cross the&lt;br&gt;
IPC boundary — every artifact (HTML, optional &lt;code&gt;.gz&lt;/code&gt; / &lt;code&gt;.br&lt;/code&gt;&lt;br&gt;
sidecars, postponed-fragment cache) is written to disk inside the&lt;br&gt;
child. There is no central "collect page data" buffer because&lt;br&gt;
there is no central buffer.&lt;/p&gt;
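&lt;p&gt;The coordination pattern can be sketched in-process. In the real tool each slot is a forked child process with its own RSC and SSR threads; here a slot is just an async task, so treat this as the shape of the dispatch, not the implementation.&lt;/p&gt;

```javascript
// Stream-driven dispatch (sketch, not the actual --export-concurrency
// implementation): N slots share one async iterator, so each free slot
// pulls exactly one path and at most N paths are in flight at once.
async function dispatch(paths, concurrency, renderOne) {
  const iterator = paths[Symbol.asyncIterator]();
  async function slot() {
    for (;;) {
      const next = await iterator.next(); // a free slot pulls the next path
      if (next.done) return;
      await renderOne(next.value); // artifacts written "inside the child"
    }
  }
  // Concurrent next() calls on an async generator are queued per spec,
  // so no two slots ever receive the same path.
  const slots = [];
  for (let i = 0; i !== concurrency; i += 1) slots.push(slot());
  await Promise.all(slots);
}
```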

&lt;p&gt;There is &lt;strong&gt;no per-page runtime tax&lt;/strong&gt;. Pages without &lt;code&gt;"use client"&lt;/code&gt;&lt;br&gt;
get pure HTML. The runtime doesn't inject bootstrap scripts,&lt;br&gt;
doesn't write &lt;code&gt;_buildManifest.js&lt;/code&gt;, doesn't emit per-page payload&lt;br&gt;
sidecars unless you ask. The 22× output-size delta vs. Next.js&lt;br&gt;
collapses to: emit only what the page needs.&lt;/p&gt;

&lt;p&gt;And there is &lt;strong&gt;no extra compiler in the path&lt;/strong&gt;. No Turbopack-&lt;br&gt;
style parallel compiler stack, no SWC custom plugins, no static-&lt;br&gt;
build renderer that's a different runtime from the production&lt;br&gt;
server. Vite 8, Rolldown, Node, JavaScript — and the runtime&lt;br&gt;
itself. Phase 2 is just the runtime. Fewer moving parts than its&lt;br&gt;
peers, which is precisely why fewer of them break at scale.&lt;/p&gt;

&lt;h2&gt;
  
  
  What this means if you have to pick
&lt;/h2&gt;

&lt;p&gt;If you have a thousand pages, all of these tools work. The&lt;br&gt;
differences are noise. Pick on developer experience.&lt;/p&gt;

&lt;p&gt;If you have ten thousand, Next.js is already five times slower&lt;br&gt;
than the leaders. Worth knowing before your next pitch deck.&lt;/p&gt;

&lt;p&gt;If you have a hundred thousand:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Astro&lt;/strong&gt; is the fastest if you don't need React.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;a class="mentioned-user" href="https://dev.to/lazarv"&gt;@lazarv&lt;/a&gt;/react-server&lt;/strong&gt; is the fastest React runtime that
completes the workload, on par with Astro while running RSC
end-to-end, with the smallest HTML-only output of any React
option.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;TanStack Start&lt;/strong&gt; completes but loses time to the materialized
&lt;code&gt;pages&lt;/code&gt; array.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Gatsby&lt;/strong&gt; completes, slowly, with high memory.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Next.js&lt;/strong&gt; completes but takes about ten times as long as the
leaders and emits roughly twenty times the bytes; both numbers
follow from defaults that aren't configurable away in the
&lt;code&gt;output: "export"&lt;/code&gt; path today.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you have two hundred thousand pages or more of pre-rendered&lt;br&gt;
routes — a CMS-backed catalogue, a docs archive, a programmatically&lt;br&gt;
generated index — &lt;strong&gt;Next.js's static-export pipeline does not&lt;br&gt;
complete.&lt;/strong&gt; The build crashes with a &lt;code&gt;RangeError: Maximum call&lt;br&gt;
stack size exceeded&lt;/code&gt; during page-data collection. The failure is&lt;br&gt;
recursion depth in V8, not heap size, so it isn't fixable by&lt;br&gt;
flags or environment variables. The right framing is that&lt;br&gt;
&lt;code&gt;output: "export"&lt;/code&gt; at this scale isn't a supported topology for&lt;br&gt;
Next.js — its answer for catalogues this large is ISR, which is a&lt;br&gt;
different topology, which is the next section.&lt;/p&gt;

&lt;h2&gt;
  
  
  But what about ISR?
&lt;/h2&gt;

&lt;p&gt;Whenever the sentence "Next.js can't pre-render two hundred&lt;br&gt;
thousand pages" appears in public, someone responds: just use ISR.&lt;/p&gt;

&lt;p&gt;Incremental Static Regeneration is Next.js's answer to large&lt;br&gt;
catalogues. Don't pre-render every page at build time. Build the&lt;br&gt;
app shell, deploy it, and have the runtime generate each page on&lt;br&gt;
first request and cache the result. A &lt;code&gt;revalidate: N&lt;/code&gt; knob handles&lt;br&gt;
freshness. On Vercel it works well; on a Next.js-aware host it&lt;br&gt;
mostly works.&lt;/p&gt;

&lt;p&gt;For a strictly static deployment, it doesn't work at all.&lt;/p&gt;

&lt;p&gt;The unspoken word in "Incremental Static &lt;strong&gt;Regeneration&lt;/strong&gt;" is&lt;br&gt;
the regeneration, and regeneration requires a runtime. ISR turns&lt;br&gt;
your "static site" into an HTTP server that lazily produces HTML&lt;br&gt;
on the way to the browser. If your deployment target is a CDN&lt;br&gt;
that only serves files — GitHub Pages, S3 + CloudFront, an nginx&lt;br&gt;
in front of a directory, Cloudflare Pages without a Worker, the&lt;br&gt;
static-files product on Netlify, an air-gapped intranet, the&lt;br&gt;
classic shared-hosting plan your client insists on — there is no&lt;br&gt;
runtime for ISR to run on. The feature isn't degraded, it's&lt;br&gt;
missing.&lt;/p&gt;

&lt;p&gt;This is the case the benchmark was designed for: pure static HTML&lt;br&gt;
plus assets, no Node runtime at request time. All five tools in&lt;br&gt;
the comparison advertise themselves as supporting that mode. The&lt;br&gt;
point of measuring at 100k+ is to find out whether the advertised&lt;br&gt;
mode survives at the scale a real catalogue produces. ISR doesn't&lt;br&gt;
enter the comparison because it isn't the same product — it's a&lt;br&gt;
different deployment topology that swaps a build-time problem for&lt;br&gt;
a request-time one. Both are valid; they aren't interchangeable,&lt;br&gt;
and the trade-offs should be visible to whoever signs off on&lt;br&gt;
hosting cost, security posture, or operational surface area.&lt;/p&gt;

&lt;p&gt;Three concrete consequences of that swap, worth knowing before&lt;br&gt;
reaching for ISR as a workaround:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The first visitor to every page pays the bill.&lt;/strong&gt; A hundred&lt;br&gt;
thousand product pages and a hundred thousand unique long-tail&lt;br&gt;
visits over a quarter mean each visitor is the unlucky one for&lt;br&gt;
exactly one page. Cold start plus render time plus cache write —&lt;br&gt;
typically a hundred milliseconds to a few seconds, depending on&lt;br&gt;
the page. A static export amortizes that work into one build. ISR&lt;br&gt;
amortizes it into one hundred thousand request-time renders, each&lt;br&gt;
on the critical path of someone's pageview.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;You are now paying for compute you weren't paying for.&lt;/strong&gt; A&lt;br&gt;
static site sits on CDN edge cache and costs essentially nothing&lt;br&gt;
above bandwidth. ISR requires a serverless function (or a long-&lt;br&gt;
running process) that's billable per invocation and per millisecond&lt;br&gt;
of execution. The bigger the catalogue, the more pages enter the&lt;br&gt;
"never visited" tail and the more compute you allocate for HTML&lt;br&gt;
that nobody reads.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Cache invalidation enters your application's design.&lt;/strong&gt; ISR's&lt;br&gt;
freshness story is &lt;code&gt;revalidate: N&lt;/code&gt; plus on-demand revalidation&lt;br&gt;
hooks. Both are reasonable, both are concepts your team now has to&lt;br&gt;
think about, and both are operational surface area that didn't&lt;br&gt;
exist when the deployment was files in a directory. For sites&lt;br&gt;
whose content really doesn't change often, this is purely added&lt;br&gt;
complexity.&lt;/p&gt;

&lt;p&gt;And there's a subtler point. &lt;strong&gt;ISR doesn't fix the underlying&lt;br&gt;
build ceiling.&lt;/strong&gt; If you mark some routes as fully pre-rendered&lt;br&gt;
via the array contract — &lt;code&gt;dynamicParams: false&lt;/code&gt;,&lt;br&gt;
&lt;code&gt;generateStaticParams&lt;/code&gt; returning the full set — you're back in the recursion-&lt;br&gt;
overflow territory from earlier in this article. ISR side-steps&lt;br&gt;
the wall by routing around it. It doesn't move the wall.&lt;/p&gt;

&lt;p&gt;None of this makes ISR a bad feature. It makes ISR an answer to a&lt;br&gt;
different question. "How do I serve a hundred thousand pages&lt;br&gt;
without paying for a build that materializes them all" is a real&lt;br&gt;
problem. "How do I generate a hundred thousand pages of pure&lt;br&gt;
static HTML to a CDN" is a different real problem. You don't&lt;br&gt;
solve the second with the answer to the first.&lt;/p&gt;

&lt;p&gt;react-server, Astro, TanStack Start, and Gatsby answer the second&lt;br&gt;
one. Next.js, in its &lt;code&gt;output: "export"&lt;/code&gt; mode, scales to about&lt;br&gt;
150,000 pages and is designed around ISR for the rest.&lt;/p&gt;




&lt;h2&gt;
  
  
  The contract is the product
&lt;/h2&gt;

&lt;p&gt;&lt;a class="mentioned-user" href="https://dev.to/lazarv"&gt;@lazarv&lt;/a&gt;/react-server didn't win this benchmark with new&lt;br&gt;
technology. The runtime is Node. The bundler is Vite 8 with&lt;br&gt;
Rolldown. The API is &lt;code&gt;async function*&lt;/code&gt; — a primitive that's been&lt;br&gt;
in JavaScript engines since 2018. There's nothing in the build&lt;br&gt;
pipeline you couldn't have shipped seven years ago.&lt;/p&gt;

&lt;p&gt;What's novel is choosing it.&lt;/p&gt;

&lt;p&gt;Most of the React ecosystem has spent the last half-decade&lt;br&gt;
optimizing the wrong layer. The renderer is fast everywhere. The&lt;br&gt;
worker pool is fast everywhere. The compiler — Turbopack, SWC,&lt;br&gt;
take your pick — is fast everywhere. The bottleneck at scale&lt;br&gt;
turns out to be one decision made at the top of your route file:&lt;br&gt;
&lt;strong&gt;does the path source return, or does it yield?&lt;/strong&gt; And the only&lt;br&gt;
way to fix the bottleneck is to change the contract. Nobody else&lt;br&gt;
has.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;generateStaticParams&lt;/code&gt; returns an array. &lt;code&gt;getStaticPaths&lt;/code&gt; returns&lt;br&gt;
an array. TanStack Start's &lt;code&gt;pages&lt;/code&gt; is an array. Gatsby's&lt;br&gt;
&lt;code&gt;createPage&lt;/code&gt; calls are an array smuggled in through a loop. Every layer&lt;br&gt;
downstream of those APIs is forced to assume the worst case lives&lt;br&gt;
in memory at once. At a thousand pages the assumption costs&lt;br&gt;
nothing. At a hundred thousand it costs minutes. At two hundred&lt;br&gt;
thousand, in Next.js, it costs the build — &lt;code&gt;RangeError: Maximum&lt;br&gt;
call stack size exceeded&lt;/code&gt;, exit one, zero pages produced.&lt;/p&gt;
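&lt;p&gt;The two contracts, side by side. These are illustrative shapes, not any framework's actual source.&lt;/p&gt;

```javascript
// Array contract: every params object exists before the first render.
// Memory is O(N) in the page count by construction.
function getAllParams(total) {
  return Array.from({ length: total }, (_, i) => ({ id: String(i + 1) }));
}

// Generator contract: params objects exist one at a time, as the
// renderer pulls. Memory is O(1) in the page count.
function* eachParam(total) {
  for (let i = 1; i !== total + 1; i += 1) yield { id: String(i) };
}
```

&lt;p&gt;Everything downstream of the first shape inherits the materialized array; everything downstream of the second only ever sees one entry.&lt;/p&gt;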

&lt;p&gt;react-server's &lt;code&gt;[id].static.mjs&lt;/code&gt; doesn't return anything. It&lt;br&gt;
yields. The renderer pulls. Memory of the path source is &lt;code&gt;O(1)&lt;/code&gt;.&lt;br&gt;
N is unbounded. The build is the same shape at a thousand pages&lt;br&gt;
as it is at half a million, because the architecture has nothing&lt;br&gt;
that grows with it.&lt;/p&gt;

&lt;p&gt;If you are picking an SSG in 2026 and your roadmap has more than&lt;br&gt;
ten thousand pages in it, look at the path-list API before you&lt;br&gt;
look at anything else. The framework that lets you yield will&lt;br&gt;
scale with your content. The framework that asks for a return&lt;br&gt;
will, eventually, give you back an empty &lt;code&gt;out/&lt;/code&gt; directory and a&lt;br&gt;
stack trace.&lt;/p&gt;

&lt;p&gt;This isn't really a Next.js problem. It's a generation-of-tooling&lt;br&gt;
problem. Static-site generation at scale has been treated as a&lt;br&gt;
build-pipeline optimization for years. It isn't. It's an API&lt;br&gt;
design problem, and the API is the array.&lt;/p&gt;

&lt;p&gt;Change the API. Yield, don't return.&lt;/p&gt;

&lt;p&gt;It's time.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;The full benchmark is open source. &lt;a href="https://github.com/lazarv/ssg-bench/tree/main/apps" rel="noopener noreferrer"&gt;&lt;code&gt;apps/&lt;/code&gt;&lt;/a&gt; for each&lt;br&gt;
framework's setup, &lt;a href="https://github.com/lazarv/ssg-bench/tree/main/bench" rel="noopener noreferrer"&gt;&lt;code&gt;bench/&lt;/code&gt;&lt;/a&gt; for the harness, and&lt;br&gt;
&lt;a href="https://github.com/lazarv/ssg-bench/blob/main/bench/REPORT.md" rel="noopener noreferrer"&gt;&lt;code&gt;bench/REPORT.md&lt;/code&gt;&lt;/a&gt; for the complete table. To&lt;br&gt;
reproduce: &lt;code&gt;pnpm install &amp;amp;&amp;amp; pnpm bench:sweep &amp;amp;&amp;amp; pnpm report &amp;amp;&amp;amp;&lt;br&gt;
pnpm chart&lt;/code&gt;.&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Disagreements welcome
&lt;/h2&gt;

&lt;p&gt;I wrote &lt;a class="mentioned-user" href="https://dev.to/lazarv"&gt;@lazarv&lt;/a&gt;/react-server. The disclosure is at the top of&lt;br&gt;
the article, and I built the bench to keep the comparison honest&lt;br&gt;
despite it — same content per route, fastest successful run wins&lt;br&gt;
per cell, sample HTML validated per build, failed cells reported&lt;br&gt;
as failed rather than dropped, every framework's idiomatic&lt;br&gt;
configuration used as documented. I believe the comparison is&lt;br&gt;
fair.&lt;/p&gt;

&lt;p&gt;But I'm one person reading my own benchmark. If you spot a flag&lt;br&gt;
I should have set, a version I should have tried, an inadvertent&lt;br&gt;
advantage I've handed &lt;a class="mentioned-user" href="https://dev.to/lazarv"&gt;@lazarv&lt;/a&gt;/react-server — open an issue or&lt;br&gt;
send a PR. The harness is in &lt;code&gt;bench/&lt;/code&gt;, the apps are in &lt;code&gt;apps/&lt;/code&gt;,&lt;br&gt;
and any change that produces a fairer comparison wins.&lt;/p&gt;

&lt;p&gt;If the data lands somewhere different in your read than in mine,&lt;br&gt;
that's the conversation worth having. I'd rather the article get&lt;br&gt;
the technical story right than win an argument.&lt;/p&gt;

</description>
      <category>nextjs</category>
      <category>tanstack</category>
      <category>astro</category>
      <category>gatsby</category>
    </item>
    <item>
      <title>A Low Floor Is Not a Low Ceiling</title>
      <dc:creator>Viktor Lázár</dc:creator>
      <pubDate>Fri, 01 May 2026 18:58:19 +0000</pubDate>
      <link>https://forem.com/lazarv/a-low-floor-is-not-a-low-ceiling-2o2f</link>
      <guid>https://forem.com/lazarv/a-low-floor-is-not-a-low-ceiling-2o2f</guid>
      <description>&lt;p&gt;There is a moment at the beginning of using a framework when the framework tells you what kind of developer it thinks you are.&lt;/p&gt;

&lt;p&gt;It rarely says this directly. It says it by what it asks of you before your own idea is allowed to appear. It says it through the scaffold it generates, the folders it names, the configuration files it creates, the conventions it assumes you already understand, and the amount of system you must accept before the smallest useful program can run.&lt;/p&gt;

&lt;p&gt;This first moment matters because it defines the emotional shape of the tool. Some systems begin with a primitive: a function, a component, a request handler, a file. They let the idea arrive first and allow structure to grow around it. Other systems begin with an institution. Before there is behavior, there is a project. Before there is a program, there is a topology.&lt;/p&gt;

&lt;p&gt;We have become used to this, especially in frontend development. A new app is expected to be born as a tree. It has routing before it has routes, build configuration before it has a build problem, lint rules before it has a team, deployment assumptions before it has users, and a package graph before it has a reason to exist. Each piece may be defensible on its own. The problem is not that any one file is absurd. The problem is that the smallest idea is asked to carry the shape of a much larger future.&lt;/p&gt;

&lt;p&gt;That is a strange bargain. It is especially strange now, because the two kinds of developers most exposed to the beginning of a system, &lt;strong&gt;beginners and AI agents&lt;/strong&gt;, are exactly the two least able to separate essential shape from accumulated ceremony.&lt;/p&gt;

&lt;h2&gt;
  
  
  What experts stop seeing
&lt;/h2&gt;

&lt;p&gt;Experienced developers have a skill we do not talk about enough: &lt;em&gt;selective blindness&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;We can open a repository and immediately reduce its apparent size. We know that some files are behavior, some files are policy, some files are boilerplate, some files are generated, and some files are present only because a tool once needed a place to write down its preferences. We know when a folder name is meaningful to the framework and when it is merely organizational. We know when a config file is actively shaping the program and when it is an artifact of the scaffold.&lt;/p&gt;

&lt;p&gt;This is not the same as simplicity. It is &lt;em&gt;familiarity doing compression&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;A beginner does not have that compression. When they open a scaffolded project, the entire tree arrives with equal authority. Every file might matter. Every convention might be something they are already supposed to know. Every import, suffix, folder, generated type, and default export might be part of the lesson. To an expert, the surrounding machinery is background. To a beginner, it is the room.&lt;/p&gt;

&lt;p&gt;That changes what the first lesson becomes. Instead of learning that a program is an idea made executable, the beginner learns that software begins inside a prepared environment whose rules are not yet visible. They learn that making even a small thing requires standing in the correct place, naming files correctly, accepting the correct project shape, and trusting that the framework will interpret the structure as intended.&lt;/p&gt;

&lt;p&gt;Some of that knowledge will eventually be necessary. But "eventually" is the important word. The first encounter with a tool should not require the learner to distinguish core concepts from scaffolding residue. A good beginning should bring the irreducible thing close: data becomes UI, input becomes state, a request becomes a response. Architecture should arrive as a way to preserve clarity as the program grows, not as the admission price for writing the first line.&lt;/p&gt;

&lt;h2&gt;
  
  
  The agent has the same problem
&lt;/h2&gt;

&lt;p&gt;AI agents make this problem visible in a different way. They are not beginners in the usual sense; they have absorbed patterns from more code than any human will read. But when an agent enters a particular repository, it does not bring the local memory of the team. It does not know which conventions are intentional, which are obsolete, which are inherited from the starter template, and which are workarounds nobody likes but everyone is afraid to remove.&lt;/p&gt;

&lt;p&gt;The agent has to discover the system by reading it. That sounds obvious, but it changes the economics of ceremony. What used to be a one-time human annoyance at project creation becomes a recurring cost paid on every AI-assisted change. The model must spend attention on the filesystem, the dependency graph, the framework conventions, the version-specific behavior, and the shape of the surrounding setup before it can safely reason about the user's request.&lt;/p&gt;

&lt;p&gt;It is tempting to reduce this to token count. More files mean more tokens; more tokens mean more cost. That is true, but it is the least interesting part. The deeper issue is that tokens do not all have the same semantic weight. In a real project, some text defines behavior, some configures behavior, some describes behavior that used to exist, some is framework glue, and some is simply the fossil record of how the project began. A human teammate can often point at a file and say, "ignore that." The model has to infer it.&lt;/p&gt;

&lt;p&gt;This is where bloated systems become dangerous for AI. They do not merely give the model more to read. They give it more ways to be plausibly wrong. It can follow a pattern that exists in the repository but no longer represents the intended direction. It can apply a framework rule from the wrong version. It can miss that a file path changes rendering mode, or that a cache option interacts with a parent segment, or that a wrapper exists only because a previous tool could not express the smaller thing directly.&lt;/p&gt;

&lt;p&gt;The beginner asks, "where do I put the code?" The agent asks the same question in another form: &lt;em&gt;"which of these tokens are the program?"&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Systems with too much ceremony answer both questions poorly.&lt;/p&gt;

&lt;h2&gt;
  
  
  Code size is a reasoning surface
&lt;/h2&gt;

&lt;p&gt;We often talk about code size as if it were a maintenance problem that appears after the fact. The project gets larger, so it becomes harder to maintain. That is true, but it misses the more immediate effect: &lt;strong&gt;code size changes the way a system can be understood&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;A small program can be held in the mind. You can read it and keep the whole shape present: inputs, outputs, state, effects, and failure modes. As the program grows, understanding has to move through supports: names, tests, types, boundaries, conventions, documentation, and trust. Those supports are necessary, but they are not free. Each one helps organize the system while also becoming another surface on which a wrong assumption can land.&lt;/p&gt;

&lt;p&gt;The growth is not linear because the problem is not only the number of lines. It is the number of relationships between them. A route can interact with a layout, a cache rule, a bundling boundary, a server/client split, a deployment target, and a default inherited from somewhere the developer is not currently looking. A config file can change the meaning of a component that does not mention it. A directory name can affect runtime behavior even though it looks like organization.&lt;/p&gt;

&lt;p&gt;At small sizes, adding code mostly adds capability. At larger sizes, adding code increasingly adds interaction. The surface the next change has to cross becomes wider, less local, and harder to see at once. That is the familiar moment when &lt;strong&gt;a small change stops being small&lt;/strong&gt; because the system around it must be understood first. You want to add a button, but first you need to know whether it belongs on the client. You want to move data fetching, but first you need to know which cache owns freshness. You want to simplify a file, but first you need to know whether the filename itself is an API.&lt;/p&gt;

&lt;p&gt;For humans, this becomes onboarding time, superstition, fatigue, and the slow accumulation of "don't touch that" knowledge. For AI agents, it becomes larger prompts, weaker locality, pattern matching where understanding should be, and edits that are syntactically reasonable but semantically misplaced.&lt;/p&gt;

&lt;p&gt;This is why "use a bigger context window" is not a complete answer. A bigger context window lets the model carry more of the maze. It does not tell us whether the maze needed to be there.&lt;/p&gt;

&lt;h2&gt;
  
  
  The toy path is not kindness
&lt;/h2&gt;

&lt;p&gt;Once the weight of modern tooling becomes visible, the obvious solution is to give beginners something smaller. A simpler framework. A reduced mode. A teaching tool. A toy environment with fewer concepts and fewer ways to get lost.&lt;/p&gt;

&lt;p&gt;Sometimes this is useful. Teaching often requires choosing a smaller surface. But as an architectural answer, it fails if the small path is not part of the same world as the large path. If the beginner learns one model and then has to abandon it when the application becomes real, the simplicity was not a doorway. It was a waiting room.&lt;/p&gt;

&lt;p&gt;The same is true for small projects. A tiny internal tool should not have to choose between a toy framework that will be outgrown and a production framework that arrives already bloated. A prototype should be allowed to be real. A first file should be allowed to become the first file of the final system. The path from "almost nothing" to "something serious" should be continuous.&lt;/p&gt;

&lt;p&gt;This is the part that is easy to miss: &lt;strong&gt;beginners do not need worse tools&lt;/strong&gt;. They need real tools with lower entry points.&lt;/p&gt;

&lt;p&gt;If the only way to make a framework approachable is to remove its power, then the framework has not solved approachability. It has outsourced it to a different tool. A better framework shape lets the same primitive participate at multiple scales. The first component is not a demo artifact; it is a legitimate member of the system. The first route is not a special tutorial mode; it is the smallest case of the routing model. The first cache is not a global doctrine; it is a local decision next to the computation it affects.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;A low floor is not a low ceiling.&lt;/strong&gt; In the best systems, the low floor is evidence that the ceiling is supported by real structure rather than by ceremony.&lt;/p&gt;

&lt;h2&gt;
  
  
  Almost nothing should work
&lt;/h2&gt;

&lt;p&gt;There is a design principle hiding here that sounds more radical than it is: &lt;em&gt;almost nothing should work&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;A single file should work. A single component should work. No configuration should work. No router should work until there is more than one place to go. No cache policy should exist until freshness has become a question. No deployment adapter should change the meaning of the application before deployment is actually being discussed.&lt;/p&gt;
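&lt;p&gt;The idea scales down to code. As a sketch, assuming nothing about any real framework: the program starts as a single page function, and a routing table only comes into existence when a second page creates the need for one. &lt;code&gt;Page&lt;/code&gt;, &lt;code&gt;router&lt;/code&gt;, and &lt;code&gt;app&lt;/code&gt; are illustrative names, not an API.&lt;/p&gt;

```typescript
// "Almost nothing should work": the whole program begins as one function.
type Page = () => string

const home: Page = () => "Hello"

// With a single page, no router exists; the program is just `home`.
// A second page is the pressure that makes routing appear.
const about: Page = () => "About"

// The router is introduced only now, as the smallest case of a routing model.
function router(routes: { [path: string]: Page }) {
  return (path: string) => {
    const page = routes[path]
    return page ? page() : "404"
  }
}

const app = router({ "/": home, "/about": about })
```

&lt;p&gt;Nothing in the single-page version had to be rewritten to get here; the router was added around it.&lt;/p&gt;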

&lt;p&gt;Absence should be a valid state of the system.&lt;/p&gt;

&lt;p&gt;This is not minimalism for its own sake. It is maintainability in its most practical form. A file that does not exist cannot go stale. A wrapper that was never extracted cannot become a place where names drift. A configuration key that was never introduced cannot be copied into the next project without understanding. A convention that was never required cannot become folklore. The strongest abstraction is often not the clever one, but the missing one.&lt;/p&gt;

&lt;p&gt;Frameworks are usually better at adding capabilities than at preserving absence, because capabilities are easier to demonstrate. A router can be documented. A cache layer can be benchmarked. A deployment adapter can be announced. "You do not have to think about this yet" is harder to turn into a feature page, even though it may be the most important feature for the first hour, the first week, and every AI agent session after that.&lt;/p&gt;

&lt;p&gt;The discipline is not to avoid power. The discipline is to &lt;strong&gt;delay power until the problem asks for it&lt;/strong&gt;. Configuration is good when it changes something the developer has chosen to care about. Project structure is good when the project has enough internal gravity to need one. Defaults are good when they remain defaults. They become bloat when they appear before the program has earned them and then pretend their presence is neutral.&lt;/p&gt;

&lt;h2&gt;
  
  
  Scaling downward
&lt;/h2&gt;

&lt;p&gt;We usually use "scalable" to mean that a system can grow upward. More users, more routes, more teams, more data, more features, more deployment targets. That kind of scale matters, and a framework that cannot grow upward will eventually trap serious applications.&lt;/p&gt;

&lt;p&gt;But there is another kind of scale that is just as important: &lt;strong&gt;a system must scale downward&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;It must scale down to one file, one component, one endpoint, one idea tested before lunch. It must scale down to the beginner trying to see the whole program at once. It must scale down to the AI agent trying to make a narrow change without reconstructing the entire framework context first. A system that scales upward but not downward is not truly scalable. It is only &lt;em&gt;large-capable&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;This distinction changes how we judge architecture. The question is not only whether a framework can host an enormous application. The question is whether it can host a tiny one without making it pretend. Can the smallest useful program be written directly? Can it grow by adding concepts one at a time? Can each new layer explain itself by answering a pressure already present in the code?&lt;/p&gt;

&lt;p&gt;That is what a grown-up framework should feel like. At the beginning, most decisions should be &lt;em&gt;not yet&lt;/em&gt;. Not yet a routing tree. Not yet a cache hierarchy. Not yet a deployment-specific semantic. Not yet a global configuration file. Just the program. Then, when the program needs a second page, routing appears. When it needs shared structure, layout appears. When it needs data freshness control, caching appears next to the data. When it needs background isolation, a worker boundary appears around the work. When it needs deployment specificity, an adapter appears at the edge rather than changing the meaning of the center.&lt;/p&gt;

&lt;p&gt;Each new concept should feel like a door opening from the room you are already standing in.&lt;/p&gt;

&lt;h2&gt;
  
  
  The same world
&lt;/h2&gt;

&lt;p&gt;The deepest mistake is believing that beginners, experts, and AI agents need different worlds. They do not. They need &lt;em&gt;different distances from the same center&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;The beginner needs to stand close to the irreducible idea, where the relationship between code and behavior is visible. The expert needs to move outward into power, performance, specificity, and control without being trapped by the framework author's fixed menu. The AI agent needs the same locality both of them need: code whose meaning is present in the text before it is hidden in conventions that must be inferred.&lt;/p&gt;

&lt;p&gt;These are not competing requirements. They are the same architectural requirement seen from different heights.&lt;/p&gt;

&lt;p&gt;Make the primitive honest. Make the first step real. Make absence valid. Make defaults optional. Make every layer replaceable when it finally appears. &lt;strong&gt;Let the small thing belong to the same world as the large thing.&lt;/strong&gt; Then the beginner is not trapped in a toy path, the expert is not trapped in a convention path, and the agent is not trapped in a fog of scaffolding.&lt;/p&gt;

&lt;p&gt;We should stop admiring systems merely because they can host enormous applications. That is only one kind of strength. The more interesting strength is the ability to be gentle with beginnings: to let an idea exist before it has proved that it deserves architecture, and to let it grow without exile.&lt;/p&gt;

&lt;p&gt;A serious framework should be able to hold almost nothing.&lt;/p&gt;

&lt;p&gt;And if the idea grows, it should not have to leave home.&lt;/p&gt;

</description>
      <category>architecture</category>
      <category>discuss</category>
      <category>programming</category>
      <category>softwaredevelopment</category>
    </item>
    <item>
      <title>A Function Should Know Where It Runs</title>
      <dc:creator>Viktor Lázár</dc:creator>
      <pubDate>Thu, 30 Apr 2026 10:27:03 +0000</pubDate>
      <link>https://forem.com/lazarv/a-function-should-know-where-it-runs-3721</link>
      <guid>https://forem.com/lazarv/a-function-should-know-where-it-runs-3721</guid>
      <description>&lt;p&gt;There is an obvious appeal to a server function you can call from anywhere. The old version of the same idea was not pleasant. You wrote an endpoint, then a client helper for that endpoint, then some shared schema to keep the two sides honest, then error handling in both places, and eventually a small pile of files whose main job was to move one value from the browser to the server and another value back again.&lt;/p&gt;

&lt;p&gt;So when a framework lets you write the server part as a normal function and call it as a normal function, it feels like the right kind of progress.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight tsx"&gt;&lt;code&gt;&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;getUser&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;createServerFn&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;handler&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;user&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;findCurrent&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="p"&gt;})&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Somewhere else:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight tsx"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;user&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;getUser&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is much nicer than wiring an endpoint by hand. The function is typed. The caller is typed. Refactors have a path through the codebase instead of disappearing into a string URL. For a lot of application code, especially small reads and mutations, this is exactly the kind of boilerplate a framework should remove.&lt;/p&gt;

&lt;p&gt;The question is not whether the API is useful. It is. The question is what gets hidden when the call becomes this smooth.&lt;/p&gt;

&lt;h2&gt;
  
  
  The same call is not always the same operation
&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;await getUser()&lt;/code&gt; can mean slightly different things depending on where it appears. If the call happens while the application is already running on the server, it can be a direct path into server code. If it happens in the browser, it has to become a request. If it happens in a route loader, it belongs to the router's data lifecycle. If it happens after a click, it belongs to an interaction that the user is waiting on.&lt;/p&gt;

&lt;p&gt;Those cases can all share the same TypeScript signature, but they are not the same situation. The value that comes back may have the same shape; the act of getting it does not.&lt;/p&gt;

&lt;p&gt;That is the part of isomorphic server functions that makes me uneasy. The abstraction removes a lot of code nobody wanted to write, but it also makes the call site less descriptive. The line looks ordinary in places where the operation behind it may not be ordinary at all.&lt;/p&gt;

&lt;h2&gt;
  
  
  What TanStack makes pleasant
&lt;/h2&gt;

&lt;p&gt;TanStack Start leans into this trade quite naturally. A server function is explicit when it is defined, and then the exported value is designed to be called from the places where application code tends to need it: loaders, components, hooks, event handlers, other server functions. That fits the rest of TanStack's style. The router is central, the data flow is typed, and the application is assembled out of explicit functions rather than a large menu of special filenames. If that is already the way you want to build, the server function API feels consistent.&lt;/p&gt;

&lt;p&gt;There is nothing dishonest about the definition site. &lt;code&gt;createServerFn()&lt;/code&gt; tells you that the handler is server code. It can touch a database. It can read secrets. It can do work the browser cannot do. The ambiguity appears later, when the call has been made deliberately ordinary.&lt;/p&gt;

&lt;p&gt;That ordinariness is useful while you are writing the code. You know where you are. You know whether the call is inside a loader or inside a button handler. You know what the framework is going to do. The problem shows up later, when the code is read without all of that context already loaded into someone's head.&lt;/p&gt;

&lt;h2&gt;
  
  
  A small refactor changes the role
&lt;/h2&gt;

&lt;p&gt;Imagine a settings page that starts like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight tsx"&gt;&lt;code&gt;&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;Route&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;createFileRoute&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;/settings&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)({&lt;/span&gt;
  &lt;span class="na"&gt;loader&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nf"&gt;getUser&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
  &lt;span class="na"&gt;component&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;SettingsPage&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;})&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Later, someone adds a refresh button inside the page:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight tsx"&gt;&lt;code&gt;&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;refresh&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;user&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;getUser&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
  &lt;span class="nf"&gt;setUser&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;user&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Both calls are reasonable. Both may be exactly what the application wants. But they are not playing the same role anymore. The first call belongs to navigation. The second call belongs to an interaction after the page is already on screen. It has a different timing, a different failure shape, probably a different loading state, and possibly a different relationship to invalidation.&lt;/p&gt;

&lt;p&gt;Nothing about &lt;code&gt;getUser()&lt;/code&gt; is wrong here. The issue is that the call is too polite to mention that its role changed. The code moved from one part of the application to another, and the most important difference is now carried by the surrounding framework context rather than by the expression itself.&lt;/p&gt;

&lt;h2&gt;
  
  
  Types do not carry place
&lt;/h2&gt;

&lt;p&gt;Types do not really solve this. They solve an important part of it, but not this part. &lt;code&gt;Promise&amp;lt;User&amp;gt;&lt;/code&gt; tells me what value I will eventually get. It does not tell me why I am waiting. It does not tell me whether the delay is a database query in the same process or a request from the browser to the server. It does not tell me whether cookies are involved, whether middleware runs, whether a rate limit can trip, or whether the user is now staring at a disabled button.&lt;/p&gt;

&lt;p&gt;All of those things can live behind the same return type.&lt;/p&gt;
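&lt;p&gt;The point is easy to demonstrate outside any framework. In this hedged sketch, two invented functions share the same inferred return type, yet one is an in-process lookup and the other simulates a request, with serialization and a timer standing in for the network.&lt;/p&gt;

```typescript
// Two operations with the same signature: both resolve to a User,
// but the return type cannot distinguish them; only the body can.
interface User { id: number; name: string }

const localDb: { [id: number]: User } = { 1: { id: 1, name: "Ada" } }

// In-process: same memory, no serialization, fails like a function call.
async function getUserLocal(id: number) {
  const user = localDb[id]
  if (!user) throw new Error("not found")
  return user
}

// Stand-in for a network call: the value is serialized, time passes, and
// what comes back is a copy, not the object the server held.
async function getUserRemote(id: number) {
  const wire = JSON.stringify(localDb[id]) // serialization happens
  await new Promise((resolve) => setTimeout(resolve, 10)) // latency happens
  return JSON.parse(wire) as User
}
```

&lt;p&gt;Both calls type-check identically at every call site; only the body reveals that one of them can fail like a network interaction.&lt;/p&gt;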

&lt;h2&gt;
  
  
  What RSC keeps visible
&lt;/h2&gt;

&lt;p&gt;This is where React Server Components come from a different direction. RSC does not try to make server code and client code feel like the same kind of code. It lets them participate in the same React tree, but it keeps their environments distinct. Server Components run on the server. Client Components run in the browser. Server Functions are server code that can be referenced across the boundary.&lt;/p&gt;

&lt;p&gt;The same settings page has a different shape in that model:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight tsx"&gt;&lt;code&gt;&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="k"&gt;default&lt;/span&gt; &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;SettingsPage&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;user&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;getUser&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;SettingsForm&lt;/span&gt; &lt;span class="na"&gt;user&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;user&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt; &lt;span class="na"&gt;refreshUser&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;refreshUser&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt; &lt;span class="p"&gt;/&amp;gt;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;getUser&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;user&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;findCurrent&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;refreshUser&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;use server&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;user&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;findCurrent&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight tsx"&gt;&lt;code&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;use client&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;SettingsForm&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="nx"&gt;user&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;refreshUser&lt;/span&gt; &lt;span class="p"&gt;})&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;currentUser&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;setCurrentUser&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;useState&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;user&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

  &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;refresh&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nf"&gt;setCurrentUser&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;refreshUser&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="c1"&gt;// ...&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;There is more ceremony here. The client piece has to be named. The server function has to be passed across the boundary. Depending on the framework, this may also mean another file. But the roles are visible in the shape of the code: the initial read belongs to the Server Component, and the later refresh is a client interaction calling a Server Function.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The split does not necessarily have to be a file split. With function-level boundaries, the same idea could live much closer to the place where it is used:&lt;/p&gt;


&lt;pre class="highlight tsx"&gt;&lt;code&gt;&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="k"&gt;default&lt;/span&gt; &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;SettingsPage&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;user&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;getUser&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

  &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;SettingsForm&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;use client&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;

    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;currentUser&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;setCurrentUser&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;useState&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;user&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;refreshUser&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;use server&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
      &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;user&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;findCurrent&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;refresh&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nf"&gt;setCurrentUser&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;refreshUser&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="c1"&gt;// ...&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;SettingsForm&lt;/span&gt; &lt;span class="p"&gt;/&amp;gt;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;That is the argument in &lt;a href="https://dev.to/lazarv/the-use-client-tax-1ed0"&gt;The "use client" Tax&lt;/a&gt;: the boundary should stay visible, but it should be allowed to live closer to the code it describes.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The current ergonomics of that model are not perfect. Next's file-level &lt;code&gt;"use client"&lt;/code&gt; boundary creates real friction, and small interactive pieces often end up in files that exist mostly because the bundler needs a module boundary. That is not a minor annoyance; it changes how code is organized. But the underlying idea is still important: a piece of code should communicate where it belongs.&lt;/p&gt;

&lt;p&gt;When something is server code, the reader should be able to expect server capabilities. When something is client code, the reader should be able to expect browser capabilities. When a value or reference crosses from one side to the other, the model should have a visible place for that crossing. Not because visible boundaries are beautiful in themselves, but because hidden boundaries tend to come back later as surprises about latency, failure, serialization, or state.&lt;/p&gt;

&lt;p&gt;This is the difference I care about between the two approaches. With an isomorphic server function, the definition says "server", but the call site tries to feel universal. With RSC, the model keeps insisting that server and client are different places, even when they are composed together.&lt;/p&gt;

&lt;h2&gt;
  
  
  The boundary should be cheap, not invisible
&lt;/h2&gt;

&lt;p&gt;I do not think the answer is to give up the convenience of server functions. Hand-written endpoints are not some lost paradise. A framework should make it cheap to invoke server code from the client, and TanStack's version of that idea is useful. The part I would be careful with is the framing. There is a difference between "this is server code with a convenient client invocation mechanism" and "this is just a function you can call from anywhere."&lt;/p&gt;

&lt;p&gt;The first framing keeps the boundary in the reader's mind. The second makes the boundary feel incidental until some operational detail forces it back into view.&lt;/p&gt;

&lt;p&gt;That is not just a matter of taste. It becomes a real maintenance problem.&lt;/p&gt;

&lt;p&gt;It shows up in code review, when a harmless-looking call has moved from a loader into an event handler and the diff does not make the change feel as large as it is. It shows up in debugging, when a line that reads like a function call fails like a network interaction. It shows up in refactors, when moving code across an invisible boundary changes timing, failure, and user-visible behavior without changing the expression that caused it.&lt;/p&gt;

&lt;p&gt;That is why I find the RSC direction healthier, even with its current rough edges. The goal should not be to make every server call dramatic. It should not be to reintroduce ceremony for its own sake. It should be to make the boundary cheap enough that we can keep it visible without resenting it.&lt;/p&gt;

&lt;p&gt;A function does not need to shout where it runs. But if understanding the function requires knowing whether it is local code, server code, or a request in disguise, then that fact should not live only in the reader's memory of the framework. Once the boundary is invisible at the call site, every reader has to rediscover it later.&lt;/p&gt;

</description>
      <category>api</category>
      <category>architecture</category>
      <category>javascript</category>
      <category>webdev</category>
    </item>
    <item>
      <title>The Cache Belongs to the Function</title>
      <dc:creator>Viktor Lázár</dc:creator>
      <pubDate>Wed, 29 Apr 2026 09:38:00 +0000</pubDate>
      <link>https://forem.com/lazarv/the-cache-belongs-to-the-function-6f5</link>
      <guid>https://forem.com/lazarv/the-cache-belongs-to-the-function-6f5</guid>
      <description>&lt;p&gt;A few years ago, the question about caching in modern web frameworks was whether it should be on by default. That question is largely settled. Frameworks that defaulted to caching every fetch and rendering every page statically have walked the defaults back; frameworks that didn't, didn't have to. The argument that caching should be opt-in, and that the developer should be the one who decides where it pays, has won. Anyone arguing it today is arguing against a position the industry has already conceded.&lt;/p&gt;

&lt;p&gt;What is not settled is where the caching primitive &lt;em&gt;lives&lt;/em&gt;. The directive that marks a function as cacheable can be implemented in two structurally different ways, and the difference is not yet obvious to most of the people writing code that uses it.&lt;/p&gt;

&lt;p&gt;The first design treats &lt;code&gt;"use cache"&lt;/code&gt; as a marker on a function. The function carries its own caching contract. Wherever the function runs — on a server, on the edge, in a worker, in a browser — the directive means the same thing. The cache is a property of the function.&lt;/p&gt;

&lt;p&gt;The second design treats &lt;code&gt;"use cache"&lt;/code&gt; as a marker on a region of a rendering tree. The function exists, the directive is on it, but the cache machinery underneath is part of the framework's rendering pipeline. The cached output is a streaming shell that the framework stitches into a partially prerendered response, with dynamic holes carved out of it by &lt;code&gt;&amp;lt;Suspense&amp;gt;&lt;/code&gt; boundaries. The cache is a property of how the page is built.&lt;/p&gt;

&lt;p&gt;Both designs are coherent. Both are reasonable answers to a real engineering problem. They are not the same answer. This article is for the first one.&lt;/p&gt;
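&lt;p&gt;To make the first design concrete, here is a minimal sketch. It deliberately uses plain TypeScript with a stub &lt;code&gt;db&lt;/code&gt;: the directive is just an expression statement, so a host that does not understand &lt;code&gt;"use cache"&lt;/code&gt; runs the function unchanged, while a host that does can honor the contract.&lt;/p&gt;

```typescript
// A stub standing in for a database handle; not a real API.
const db = {
  user: {
    findCurrent: async () => ({ id: 1, name: "Ada" }),
  },
}

async function getUser() {
  "use cache" // the function carries its own caching contract
  return db.user.findCurrent()
}
```

&lt;p&gt;Nothing outside the function has to exist for the directive to mean something, which is the whole point of the first design.&lt;/p&gt;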

&lt;h2&gt;
  
  
  What was settled, and what wasn't
&lt;/h2&gt;

&lt;p&gt;A short reset on the territory is worth the paragraph.&lt;/p&gt;

&lt;p&gt;The case against default caching used to be obvious to anyone who had shipped a production application: you spend more time disabling the cache than enabling it. Routes get marked &lt;code&gt;dynamic&lt;/code&gt;. Fetches get &lt;code&gt;cache: 'no-store'&lt;/code&gt;. Layout segments get tagged &lt;code&gt;force-dynamic&lt;/code&gt;. The defaults were calibrated for a population of pages where staleness is cheap and slowness is expensive, and most production applications are not that population. Every site that mattered ended up annotating its way out of the default.&lt;/p&gt;

&lt;p&gt;The frameworks that shipped this model heard the criticism and inverted the defaults. In Next.js 15, &lt;code&gt;fetch&lt;/code&gt; is no longer cached by default, and segments are no longer static by default. The &lt;code&gt;dynamicIO&lt;/code&gt; mode introduced &lt;code&gt;"use cache"&lt;/code&gt; as the primitive a developer reaches for when caching actually pays. Inside that mode, uncached is the baseline; cache only what you mark. This is the design the critics asked for. They got it.&lt;/p&gt;

&lt;p&gt;So when this article talks about what a caching primitive should look like, it is not arguing against caching by default. The default is gone. The argument that survives is about what the primitive in its place is &lt;em&gt;made of&lt;/em&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Two places the cache can live
&lt;/h2&gt;

&lt;p&gt;A caching primitive has to live somewhere. The two structural choices are &lt;em&gt;with the function&lt;/em&gt; and &lt;em&gt;with the renderer&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;A cache that lives with the function is portable. It travels wherever the function travels. The directive is a contract between a function and a runtime; any runtime that understands the directive can honor it; any runtime that does not, ignores it and runs the function. The cache key is the function's inputs. The cache value is the function's output. Nothing about the surrounding system needs to be present for the cache to work, because the cache &lt;em&gt;is&lt;/em&gt; the function's contract with its host.&lt;/p&gt;
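&lt;p&gt;A minimal sketch makes the contract concrete. Everything below is hypothetical; &lt;code&gt;cached&lt;/code&gt; and &lt;code&gt;store&lt;/code&gt; are illustrative names, not a shipped runtime API. It shows how little a host needs in order to honor the directive: the key is the function plus its inputs, the value is its output.&lt;/p&gt;

```typescript
// Hypothetical sketch of a host honoring "use cache". The key is the
// function's identity plus its serialized inputs; the value is the output.
const store = new Map();

function cached(fn: Function) {
  return async (...args: unknown[]) => {
    const key = fn.name + ":" + JSON.stringify(args);
    if (store.has(key)) return store.get(key);
    const value = await fn(...args);
    store.set(key, value);
    return value;
  };
}

// A host that understands the directive wraps the function once:
let lookups = 0;
const getUser = cached(async function getUser(id: string) {
  lookups += 1; // stands in for a database round trip
  return { id, name: "user-" + id };
});
```

&lt;p&gt;Two calls with the same &lt;code&gt;id&lt;/code&gt; perform one lookup; the second returns the stored value. Nothing in the sketch knows about requests, routes, or rendering.&lt;/p&gt;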

&lt;p&gt;A cache that lives with the renderer is part of a rendering pipeline. This is the design Next.js's Cache Components ships under &lt;code&gt;dynamicIO&lt;/code&gt;. It works because the pipeline is in the loop — the cached value is a stream of bytes representing a region of the rendered tree, the dynamic holes are &lt;code&gt;&amp;lt;Suspense&amp;gt;&lt;/code&gt; boundaries, the streaming response stitches the cached shell back together with the live data. The directive is a marker on a region of the tree the renderer cares about, and the cache is the part of the renderer that remembers what that region produced. Take the function out of the renderer and the cache disappears, because there is nothing to cache.&lt;/p&gt;

&lt;p&gt;The first design is small. The second is integrated. Each one has a thing it is good at and a thing it cannot do.&lt;/p&gt;

&lt;p&gt;The first cannot stitch shells around dynamic holes. It does not know about Suspense. It does not produce streaming responses with prerendered prefixes. If you want partial prerendering, the second design is the one you want.&lt;/p&gt;

&lt;p&gt;The second cannot run outside the renderer. It cannot ship as a primitive in a library. It cannot run on the edge before the framework boots. It cannot run in the browser. It cannot dedupe a database lookup that happens during a worker job that has nothing to do with rendering. If you want a caching primitive you can reach for in any program, the first design is the one you want.&lt;/p&gt;

&lt;p&gt;The two designs are not interchangeable. A cache that requires the framework to be present is a feature of the framework. A cache that requires only a function is a feature of the program. This article is for the function-level one, and the rest of it is the structural case for that choice — four properties the cache acquires when it belongs to the function and gives up when it belongs to the renderer.&lt;/p&gt;

&lt;h2&gt;
  
  
  Atomic, not ambient
&lt;/h2&gt;

&lt;p&gt;A function-level cache is atomic. One function, one cache, one set of inputs, one output. The developer can assert — locally, by inspection — that the output is a function of the inputs and that nothing else in the world matters. The function's inputs are the developer's parameters; there is nowhere else for hidden state to come from.&lt;/p&gt;

&lt;p&gt;Render-coupled caches give some of this up. A region of a rendering tree closes over the request, the user, the route parameters, the surrounding component state — and the cache machinery has to chase those captures and decide what is safe to serialize. The result is a more powerful cache, but the unit of reasoning has moved. The function is no longer what the developer reasons about. The region is.&lt;/p&gt;

&lt;p&gt;The hard part of caching is not the syntax. It is the honesty — marking a function only when its output really is a function of its inputs, and not closing over state the key cannot see. A smaller unit makes that honesty checkable. A larger unit makes it a research project.&lt;/p&gt;

&lt;h2&gt;
  
  
  Caches that compose
&lt;/h2&gt;

&lt;p&gt;A function-level cache composes. Two cached functions written by two different people, in two different libraries, called from a third function that caches nothing of its own, all behave the way the source reads. The outer call is uncached. The inner calls each consult their own caches. Each layer's decision belongs to that layer.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;getOrder&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;use cache&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;order&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;orders&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;find&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;customer&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;getCustomer&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;order&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;customerId&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;   &lt;span class="c1"&gt;// also "use cache"&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;products&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nb"&gt;Promise&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;all&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="nx"&gt;order&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;lineItems&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;map&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;item&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nf"&gt;getProduct&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;item&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;productId&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt; &lt;span class="c1"&gt;// also "use cache"&lt;/span&gt;
  &lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;order&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;customer&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;products&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Three caches stacked, no coordination. The outer cache stores the assembled order. The inner caches store the customer and each product. They have different TTLs, different tags, different lifetimes. None of them know about the others. A request that asks for an order seen recently hits one cache. A request for a fresh order whose customer is well-known hits two. A request for an entirely new order misses everything and fills in three caches at once. Every path through the system is correct, because every cache was a local decision.&lt;/p&gt;

&lt;p&gt;Render-coupled caches compose under the renderer's rules. The shell's lifetime, the hole boundary, the streaming order — these are properties of the pipeline that two cached pieces inherit when they share a tree. The function-level cache carries no surrounding model. The diff is a one-line claim about the function. The blast radius is the function.&lt;/p&gt;

&lt;h2&gt;
  
  
  A function is a function
&lt;/h2&gt;

&lt;p&gt;A directive that marks a single function does not care what runtime is reading it. The contract is between a function and its caller. The caller might be a server rendering RSC, an SSR pipeline streaming HTML, a worker, an edge runtime, a browser tab running the same code on the client side of an isomorphic boundary. The directive's meaning does not change. The function says: my output is determined by my inputs. The runtime, whichever runtime that is, says: I will hold it.&lt;/p&gt;

&lt;p&gt;This is the property the render-coupled cache cannot have, by construction. It works because the renderer is in the loop. Take the same code out of that loop — run it before the framework boots, in a worker job, in a browser tab that does not go through the request lifecycle — and the cache disappears, because the cache is the renderer remembering, and the renderer is not there.&lt;/p&gt;

&lt;p&gt;A function-level cache survives the move. A library can ship cacheable utilities and rely on whatever runtime hosts them to honor the directive. There is no &lt;code&gt;if (server)&lt;/code&gt; branch, no &lt;code&gt;if (browser)&lt;/code&gt; branch, no separate cache wiring per environment. The same function, in any host that understands the directive, has the same contract. A host that does not understand it leaves the function alone.&lt;/p&gt;
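&lt;p&gt;The degradation story falls out of JavaScript itself. A directive is a bare string expression, so a host with no directive support simply evaluates and discards it. This sketch uses a hypothetical function to show that the worst case is the uncached baseline:&lt;/p&gt;

```typescript
// In a host with no directive support, "use cache" is an inert expression
// statement: the function just runs, every time it is called.
async function getRates(base: string) {
  "use cache"; // ignored here; honored by a host that understands it
  return { base, fetchedAt: Date.now() };
}
```

&lt;p&gt;Shipping the directive costs nothing in a host that does not understand it, which is exactly what lets a library write it down without asking who the consumer is.&lt;/p&gt;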

&lt;p&gt;This is what it means for a caching primitive to be &lt;em&gt;portable&lt;/em&gt;. Not that the framework runs in many places — that is a deployment concern, and a different one. The render-coupled cache is a property of the host. The function-level cache is a property of the source. The function carries its own contract.&lt;/p&gt;

&lt;h2&gt;
  
  
  Locality buys you removability
&lt;/h2&gt;

&lt;p&gt;A function-level cache is removable in a one-line diff. Delete the directive. Ship. The function reverts to first principles — it runs every time it is called — and you can investigate the staleness offline, where it is cheap to be wrong.&lt;/p&gt;

&lt;p&gt;Render-coupled caches are removable too, when the unit being uncached is the unit the renderer marks. The harder cases are the surrounding ones: a cached region whose contents vary in ways the renderer's closure analysis did not capture, a tag-based revalidation that turned out to invalidate too much, a &lt;code&gt;cacheLife&lt;/code&gt; profile that turned out to be wrong for one specific function in one specific context. The diff is still small; the diagnosis is not, because the failure isn't in the function — it's in the relationship between the function and the renderer.&lt;/p&gt;

&lt;p&gt;The same property holds for code review. A &lt;code&gt;"use cache"&lt;/code&gt; directive shows up in a diff. A reviewer asks: is this function actually a function of its inputs? Is the TTL right? When the unit being marked is a function, those questions have function-shaped answers. When the unit being marked is a region, the questions also have to ask about Suspense boundaries, about what the renderer captures, about how the streaming response composes. More variables, more places to be subtly wrong.&lt;/p&gt;

&lt;h2&gt;
  
  
  Scope is also a per-function decision
&lt;/h2&gt;

&lt;p&gt;The strongest property of the function-level cache is also the least developed: scope. The clearest way to see it is to look at a page that needs three caches.&lt;/p&gt;

&lt;p&gt;A product page calls &lt;code&gt;getProduct(id)&lt;/code&gt; from three different components in the same render. They should see the same value; the database lookup should run once. This is request-scoped dedup.&lt;/p&gt;

&lt;p&gt;The same page calls &lt;code&gt;getProductCatalog()&lt;/code&gt;, the company's full catalog — refreshed nightly, shared across every request, identical for every user. This is a long-lived in-memory cache.&lt;/p&gt;

&lt;p&gt;The same page calls &lt;code&gt;getInventoryStatus(sku)&lt;/code&gt;, which has to be synchronized across every server in the fleet, because two requests landing on different machines cannot disagree about whether an item is in stock. This is a shared store.&lt;/p&gt;

&lt;p&gt;In current Next.js, those are three primitives. &lt;code&gt;React.cache&lt;/code&gt; for the first. &lt;code&gt;"use cache"&lt;/code&gt; for the second. A custom cache provider, or an external store reached through a server function, for the third. Each has its own API, its own keying, its own invalidation model. A developer who picks the wrong one rewrites the function when they discover the choice was wrong.&lt;/p&gt;

&lt;p&gt;In a function-level design, all three are options on the same directive. &lt;code&gt;"use cache: request"&lt;/code&gt; for the first. &lt;code&gt;"use cache"&lt;/code&gt; for the second. &lt;code&gt;"use cache: shared"&lt;/code&gt; for the third. The function shape does not change. The directive carries the answer to which scope it belongs to. Picking the wrong one is a one-line fix.&lt;/p&gt;
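&lt;p&gt;The &lt;code&gt;request&lt;/code&gt; and &lt;code&gt;shared&lt;/code&gt; suffixes are this article's proposed syntax, not a shipped API. As a sketch, a host that understood them would only need to route each scope to a different backing store:&lt;/p&gt;

```typescript
// Sketch only: routing proposed scope suffixes to backing stores.
// The suffixes and the store choices are illustrative.
const requestStore = new Map(); // recreated for every request
const processStore = new Map(); // lives as long as the server process
const sharedStore = new Map();  // stand-in for Redis or another fleet-wide store

function storeFor(scope: string) {
  if (scope === "request") return requestStore; // "use cache: request"
  if (scope === "shared") return sharedStore;   // "use cache: shared"
  return processStore;                          // plain "use cache"
}
```

&lt;p&gt;The function being cached never changes shape; only the directive's suffix, and therefore the store, does.&lt;/p&gt;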

&lt;p&gt;This is a real, structural advantage, and one of the places where the function-level design has not yet fully shipped in the largest framework that uses the syntax.&lt;/p&gt;

&lt;h2&gt;
  
  
  The shape of the contract
&lt;/h2&gt;

&lt;p&gt;The directive is a contract between the developer and the runtime, and the mandatory part of the contract has only three lines.&lt;/p&gt;

&lt;p&gt;The developer says: this function's output is determined by its inputs.&lt;/p&gt;

&lt;p&gt;The runtime says: I will hold the output and serve it again the next time the inputs match.&lt;/p&gt;

&lt;p&gt;Both parties say: when this is no longer true, the directive comes off.&lt;/p&gt;

&lt;p&gt;That is the surface that has to be there. Everything else — a TTL, a tag, a named profile, a choice of storage, a choice of scope — is an &lt;em&gt;option&lt;/em&gt; the developer attaches to the directive when the function calls for it. Tags are useful when there is something to invalidate by group. TTLs are useful when freshness has a known half-life. Named profiles are useful when several functions share the same caching shape and the shape is worth naming once.&lt;/p&gt;

&lt;p&gt;None of these options are wrong. They are all part of the directive's optional surface, all developer-attached, all visible at the call site. The asymmetry that matters is between options the developer wrote down and options the runtime applied silently. A developer adding &lt;code&gt;ttl=60&lt;/code&gt; or &lt;code&gt;tags=todos&lt;/code&gt; to a directive is making a decision visible in the source. A framework deciding the same thing on the developer's behalf is making the same decision invisible. Only the first kind is in the diff.&lt;/p&gt;
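&lt;p&gt;A sketch of what the optional surface adds to a stored entry. The entry shape and the &lt;code&gt;put&lt;/code&gt;, &lt;code&gt;get&lt;/code&gt;, and &lt;code&gt;invalidateTag&lt;/code&gt; names are illustrative, not any framework's API:&lt;/p&gt;

```typescript
// Sketch: a TTL bounds an entry's lifetime; a tag groups entries so a
// whole family can be invalidated at once.
interface Entry {
  value: unknown;
  expiresAt: number;
  tags: string[];
}

const entries = new Map();

function put(key: string, value: unknown, ttlMs: number, tags: string[]) {
  entries.set(key, { value, expiresAt: Date.now() + ttlMs, tags });
}

function get(key: string) {
  const entry = entries.get(key);
  if (!entry) return undefined;
  if (Date.now() > entry.expiresAt) return undefined; // TTL elapsed
  return entry.value;
}

function invalidateTag(tag: string) {
  for (const [key, entry] of entries) {
    if (entry.tags.includes(tag)) entries.delete(key);
  }
}
```

&lt;p&gt;Both options stay visible at the call site that wrote them down, which is the asymmetry the paragraph above is pointing at.&lt;/p&gt;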

&lt;p&gt;The same argument applies, structurally, to every directive in this family. &lt;code&gt;"use client"&lt;/code&gt; is a marker that asserts a piece of code crosses a runtime boundary; the value of the marker is that you can read the program and see where the boundaries are. I have argued elsewhere that the directive should be allowed at finer granularity than a file — see &lt;a href="https://dev.to/lazarv/the-use-client-tax-1ed0"&gt;The "use client" Tax&lt;/a&gt; — but the underlying point is the same. A directive is the developer telling the runtime something the runtime could not have inferred. That contract scales because it is small.&lt;/p&gt;

&lt;h2&gt;
  
  
  Two kinds of software
&lt;/h2&gt;

&lt;p&gt;Underneath the function-vs-renderer choice is a more general one about what kind of software you are writing.&lt;/p&gt;

&lt;p&gt;A framework is a packaged product. It optimizes for the page — for the user-visible artifact at the end of the rendering pipeline. Coupling caching to rendering is the breakthrough that makes partial prerendering work: a static shell streamed first, dynamic holes filled in afterward, no full server roundtrip on a navigation. That is a real win, and it is the win the render-coupled cache exists to deliver. A framework architect choosing the render-coupled design is making a coherent product decision.&lt;/p&gt;

&lt;p&gt;A runtime is a primitive. It optimizes for the cache — for the contract a developer can hold in their head and reach for in any program. The function-level cache is not better than the render-coupled cache for the page. It is better for the cache. It composes outside a render tree. It runs in any environment. It survives library packaging. It does not require a mode flag to be turned on. A runtime architect choosing the function-level design is making a coherent primitive decision.&lt;/p&gt;

&lt;p&gt;Both choices are defensible. They produce different software. A developer who wants the page wants the render-coupled cache; a developer who wants a caching primitive they can reach for unconditionally wants the function-level one. The directive looks the same in both. The systems underneath it do not.&lt;/p&gt;

&lt;h2&gt;
  
  
  What the ecosystem pays
&lt;/h2&gt;

&lt;p&gt;The decision about where a caching primitive lives looks, inside one application, like a trade-off between two designs. From above the application — at the level of the JavaScript ecosystem the application depends on — it is something else.&lt;/p&gt;

&lt;p&gt;A library author shipping a function cannot assume the consumer is in &lt;code&gt;dynamicIO&lt;/code&gt; mode. They cannot assume the consumer is using a framework at all. So a library that wants to ship a cacheable utility — a database client that should dedupe identical queries, a markdown renderer that should not re-parse the same input twice, an API client that should pool requests — has one option under the render-coupled design: do not provide the cache. The library exposes raw functions; the consumer wires them into their framework's caching themselves; everyone reinvents the same wrappers in slightly different shapes, and the bugs all live in the wiring.&lt;/p&gt;

&lt;p&gt;Under a function-level design, the library author writes &lt;code&gt;"use cache"&lt;/code&gt; at the top of the function and ships. Any consumer whose runtime understands the directive gets the cache. Any consumer whose runtime does not, gets the raw function. The library does not have to know. The consumer does not have to wrap.&lt;/p&gt;

&lt;p&gt;This is a pattern. Every time a framework absorbs a capability that could have been a primitive — caching, server functions, partial hydration, request-scoped state, routing — the ecosystem pays. The capability becomes available only inside that framework. Libraries that want it pick the framework as a hard dependency, ship their own version, or expose it as a configuration surface for the consumer to wire up. None of these are good for the developers a level removed from the framework. Each one moves complexity out of the framework and into a thousand small repositories that did not need to invent it.&lt;/p&gt;

&lt;p&gt;The render-coupled cache is not the only place this happens. It is one place where the trade is unusually clear. The capability — memoizing a function on its inputs — has a canonical shape. The shape is small. It does not need a renderer to be useful. Putting the renderer in the loop trades that universality for an integration the framework can use to power partial prerendering. That trade is fine for the framework. It is paid by everyone else.&lt;/p&gt;

&lt;p&gt;I have made a parallel version of this argument about a different misplaced primitive: a framework exposing a capability as a library API where a directive would have been smaller (&lt;a href="https://dev.to/lazarv/rsc-as-a-serializer-not-a-model-56nj"&gt;RSC as a serializer, not a model&lt;/a&gt;). Misplaced primitives look different in the small. From far enough away they look the same.&lt;/p&gt;

&lt;h2&gt;
  
  
  The smaller point
&lt;/h2&gt;

&lt;p&gt;Where a primitive lives determines what the developer can do with it. A cache that lives in the renderer can do things a function cache cannot — partial prerendering, streamed shells, suspense-aware regions. A cache that lives with the function can do things a render-coupled cache cannot — travel between environments, compose without coordination, ship in a library that does not know what framework will host it.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;"use cache"&lt;/code&gt; is the same five letters in either design. The choice the developer is making by writing the directive is not really a choice about caching. It is a choice about which of those two things the caching primitive should be.&lt;/p&gt;

&lt;p&gt;A cache that lives in the framework belongs to the framework. A cache that lives in the function belongs to the developer. Only one of those travels.&lt;/p&gt;

</description>
      <category>react</category>
      <category>framework</category>
      <category>cache</category>
      <category>architecture</category>
    </item>
    <item>
      <title>The "use client" Tax</title>
      <dc:creator>Viktor Lázár</dc:creator>
      <pubDate>Tue, 28 Apr 2026 17:16:59 +0000</pubDate>
      <link>https://forem.com/lazarv/the-use-client-tax-1ed0</link>
      <guid>https://forem.com/lazarv/the-use-client-tax-1ed0</guid>
      <description>&lt;p&gt;&lt;em&gt;Why React Server Components force small interactive ideas into file-sized boundaries — and why that boundary should be lexical instead.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;There is a moment that every developer who tries React Server Components hits, usually within their first hour. They write a server component. It fetches some data. It renders a list. Beautiful. Then they want a button that toggles a filter, and the compiler stops them: "you can't use &lt;code&gt;useState&lt;/code&gt; here." So they cut the interactive piece out, paste it into a new file, sprinkle &lt;code&gt;"use client"&lt;/code&gt; at the top, import it back into the parent, and move on.&lt;/p&gt;

&lt;p&gt;A week later their &lt;code&gt;components/&lt;/code&gt; directory looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;components/
├── product-list.tsx
├── product-list-filter.tsx
├── product-list-filter-input.tsx
├── product-list-sort.tsx
├── product-list-sort-dropdown.tsx
├── product-card.tsx
├── product-card-actions.tsx
├── product-card-favorite-button.tsx
└── product-card-quantity-stepper.tsx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Nine files for one product list. Each one a thin wrapper. Each one with two or three lines of real logic. Each one named with an increasingly desperate suffix because the original &lt;code&gt;Filter&lt;/code&gt; already exists three directories up.&lt;/p&gt;

&lt;p&gt;This is the "use client" tax, and it is real.&lt;/p&gt;

&lt;h2&gt;
  
  
  Where the tax comes from
&lt;/h2&gt;

&lt;p&gt;The directive is not arbitrary. &lt;code&gt;"use client"&lt;/code&gt; marks a module boundary that the bundler uses to split graphs: everything reachable from a &lt;code&gt;"use client"&lt;/code&gt; entry becomes part of the client bundle; everything else stays on the server. The directive has to live at the top of a file because that is the granularity the bundler operates on. Modules in, modules out.&lt;/p&gt;

&lt;p&gt;That works fine in theory. In practice it forces a one-to-one correspondence between &lt;em&gt;interactive concerns&lt;/em&gt; and &lt;em&gt;files on disk&lt;/em&gt;, and interactive concerns are not file-sized. They are paragraph-sized. A "favorite" button that toggles state is not a module — it is two lines inside the card that displays the product. But the runtime can't see those two lines unless you lift them into their own module, give them a name, export them, import them back, and pass props across the boundary.&lt;/p&gt;

&lt;p&gt;The result is a particular kind of friction that compounds:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;File sprawl.&lt;/strong&gt; Trivial widgets become trivial files. Most of the file is the import header.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Naming fatigue.&lt;/strong&gt; Every extracted leaf needs a name. Names that were unique in their lexical scope are no longer unique once they live in a flat directory. You end up with &lt;code&gt;ProductCardFavoriteButtonInner&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Lost colocation.&lt;/strong&gt; A server function that writes to the database and the form that calls it now live in two files. The relationship between them survives only as an import statement. To understand the feature you alt-tab.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Indirection without abstraction.&lt;/strong&gt; Each extracted client component is a wrapper that accepts everything the parent had in scope, as props. You are manually performing closure conversion — by hand, every time, with no help from the compiler.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Compositions you can't write.&lt;/strong&gt; The pattern that hurts most is the one you cannot express at all: a server function that computes some data and &lt;em&gt;returns&lt;/em&gt; a small interactive component bound to that data. You cannot do this in standard RSC, because the client component has to be a separate module, which means it cannot close over server-side values. You always end up exporting the client component, exporting the data fetch, and re-assembling them at the call site. The expression you wanted to write — a factory — is not available to you.&lt;/p&gt;
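&lt;p&gt;The conversion can be shown in miniature without any framework at all. The names below are illustrative; the point is only that every value a closure captured for free becomes a prop someone must name, pass, and keep in sync:&lt;/p&gt;

```typescript
// Framework-free miniature of the conversion the boundary forces.
// Inline, the handler closes over everything in scope:
function cardInline(productId: string, initial: boolean) {
  return () => ({ productId, favorite: !initial }); // the "click handler"
}

// Extracted across a module boundary, each captured value becomes a prop
// the parent must thread through by hand:
function favoriteHandler(props: { productId: string; initial: boolean }) {
  return { productId: props.productId, favorite: !props.initial };
}
```

&lt;p&gt;The two functions compute the same value. The difference is who performed the closure conversion: the language, or you.&lt;/p&gt;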

&lt;h2&gt;
  
  
  The shape of the pain
&lt;/h2&gt;

&lt;p&gt;Here is what a real fragment looks like under the current rules:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight tsx"&gt;&lt;code&gt;&lt;span class="c1"&gt;// product-card.tsx&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;FavoriteButton&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;./product-card-favorite-button&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;ProductCard&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="nx"&gt;id&lt;/span&gt; &lt;span class="p"&gt;})&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;product&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;product&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;find&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="k"&gt;return &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;article&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
      &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;h3&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;product&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;h3&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
      &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;FavoriteButton&lt;/span&gt; &lt;span class="na"&gt;productId&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;product&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt; &lt;span class="na"&gt;initial&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;product&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;isFavorite&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt; &lt;span class="p"&gt;/&amp;gt;&lt;/span&gt;
    &lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;article&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
  &lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight tsx"&gt;&lt;code&gt;&lt;span class="c1"&gt;// product-card-favorite-button.tsx&lt;/span&gt;
&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;use client&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;useState&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;react&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;toggleFavorite&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;./product-actions&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;FavoriteButton&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="nx"&gt;productId&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;initial&lt;/span&gt; &lt;span class="p"&gt;})&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;favorite&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;setFavorite&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;useState&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;initial&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="k"&gt;return &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;button&lt;/span&gt;
      &lt;span class="na"&gt;onClick&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="nf"&gt;setFavorite&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="nx"&gt;favorite&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
        &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;toggleFavorite&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;productId&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
      &lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
      &lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;favorite&lt;/span&gt; &lt;span class="p"&gt;?&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;★&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;☆&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;button&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
  &lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight tsx"&gt;&lt;code&gt;&lt;span class="c1"&gt;// product-actions.ts&lt;/span&gt;
&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;use server&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;toggleFavorite&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;productId&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;number&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="c1"&gt;// ...&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Three files for a star button. Two of them exist purely as plumbing for the directive system. The actual interesting code — eight lines of state and a database write — is buried under thirty lines of imports, exports, and prop-passing.&lt;/p&gt;

&lt;p&gt;This is what people mean when they say RSC is heavy. It is not the data fetching. It is not the streaming. It is this: the directive system asks you to manually re-architect every interactive idea into a multi-file module graph, and it does so for the smallest possible units of behavior.&lt;/p&gt;

&lt;p&gt;Zoom out one level and the same pressure exists at the project boundary: modern frontend frameworks force entire micro-apps to be scaffolded across directory trees, config files, and &lt;code&gt;node_modules&lt;/code&gt; for the same kind of mechanical reason — tooling that operates at a coarser unit than the developer's idea. I covered that version of the problem in &lt;a href="https://dev.to/lazarv/the-forgotten-joy-of-node-appjs-5761"&gt;The Forgotten Joy of &lt;code&gt;node app.js&lt;/code&gt;&lt;/a&gt;. The fix is structurally the same as the one proposed below: stop letting the file system be the unit of expression.&lt;/p&gt;

&lt;h2&gt;
  
  
  The constraint is in the tool, not in the model
&lt;/h2&gt;

&lt;p&gt;Here is the part that is worth saying out loud: the file-level restriction is a property of how bundlers were built, not a property of what the directive &lt;em&gt;means&lt;/em&gt;. &lt;code&gt;"use client"&lt;/code&gt; is asserting that a piece of code runs on the client and must be serialized across a runtime boundary. That assertion is perfectly meaningful at any function scope. It only has to live at the top of a file because that is what the bundler can see.&lt;/p&gt;

&lt;p&gt;A compiler that knows about RSC directives can do better. Given a server module that contains a nested function marked &lt;code&gt;"use client"&lt;/code&gt;, it can:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Identify the nested function and the variables it captures from its lexical scope.&lt;/li&gt;
&lt;li&gt;Lift the function into a synthetic module that the bundler treats exactly like a regular &lt;code&gt;"use client"&lt;/code&gt; module.&lt;/li&gt;
&lt;li&gt;Replace the original definition with a reference to the lifted module.&lt;/li&gt;
&lt;li&gt;Inject the captured variables as props at the call site.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The developer wrote one file. The bundler sees the module graph it needs. Nothing about the underlying RSC contract changes — the same serialization rules apply, the same boundary is enforced — but the file system stops being the unit of expression. The function does.&lt;/p&gt;
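&lt;p&gt;Stripped of React entirely, steps 1 through 4 are ordinary closure conversion. A minimal sketch in plain JavaScript, with invented names, of what the transform does to a nested function:&lt;/p&gt;

```javascript
// Before: a nested function that captures `product` from its lexical scope.
function cardBefore(product) {
  function label() {
    return product.isFavorite ? "★" : "☆";
  }
  return label();
}

// After the transform: the nested function is lifted to module scope,
// and every variable it captured becomes an explicit parameter.
// This parameter is exactly the "injected prop" of step 4.
function liftedLabel(product) {
  return product.isFavorite ? "★" : "☆";
}

function cardAfter(product) {
  // The call site now passes the captures explicitly.
  return liftedLabel(product);
}
```

&lt;p&gt;Both versions return the same value for any &lt;code&gt;product&lt;/code&gt;; the only change is where the function definition lives, which is precisely the property that lets a bundler treat the lifted function as its own module.&lt;/p&gt;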

&lt;h2&gt;
  
  
  What this should look like
&lt;/h2&gt;

&lt;p&gt;Imagine writing the favorite button like this instead:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight tsx"&gt;&lt;code&gt;&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;ProductCard&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="nx"&gt;id&lt;/span&gt; &lt;span class="p"&gt;})&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;product&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;product&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;find&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

  &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;FavoriteButton&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;use client&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;favorite&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;setFavorite&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;useState&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;product&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;isFavorite&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

    &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;toggle&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;use server&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
      &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;product&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;toggleFavorite&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;product&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="k"&gt;return &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
      &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;button&lt;/span&gt; &lt;span class="na"&gt;onClick&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nf"&gt;setFavorite&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="nx"&gt;favorite&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;toggle&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
        &lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;favorite&lt;/span&gt; &lt;span class="p"&gt;?&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;★&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;☆&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;
      &lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;button&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
    &lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="k"&gt;return &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;article&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
      &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;h3&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;product&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;h3&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
      &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;FavoriteButton&lt;/span&gt; &lt;span class="p"&gt;/&amp;gt;&lt;/span&gt;
    &lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;article&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
  &lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;One file. Server fetch, client interaction, server function — colocated, in the order you would read them, sharing a closure. The compiler does the lifting; the captured &lt;code&gt;product&lt;/code&gt; becomes a prop on the synthesized client module; the inner &lt;code&gt;"use server"&lt;/code&gt; function becomes a bound server function with the right scope. Server → client → server nesting works recursively because the same extraction pass runs until no nested directives remain.&lt;/p&gt;
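&lt;p&gt;The recursive claim is easy to model outside a real compiler. A toy sketch in plain JavaScript (plain objects standing in for AST nodes; every name here is invented for illustration) of an extraction pass that keeps lifting directive-marked functions until none remain nested:&lt;/p&gt;

```javascript
// Toy model: a function node has a name, an optional directive,
// the variable names it captures, and its directly nested functions.
function extractModules(root) {
  const modules = [];
  const queue = [root];
  while (queue.length > 0) {
    const fn = queue.shift();
    modules.push(fn);
    // Hoist each directive-marked child into its own synthetic module,
    // leaving a reference behind; its captures become injected props.
    fn.children = fn.children.filter((child) => {
      if (child.directive) {
        queue.push(child); // the lifted module may itself nest further
        return false;
      }
      return true;
    });
  }
  return modules;
}

// Server -> client -> server, shaped like the ProductCard example:
const tree = {
  name: "ProductCard", directive: null, captures: [],
  children: [{
    name: "FavoriteButton", directive: "use client", captures: ["product"],
    children: [{
      name: "toggle", directive: "use server", captures: ["product"],
      children: [],
    }],
  }],
};
```

&lt;p&gt;Running &lt;code&gt;extractModules(tree)&lt;/code&gt; yields three modules, one per directive boundary, in server, client, server order. A real pass would walk the full AST rather than only direct children, and would emit module code instead of returning nodes.&lt;/p&gt;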

&lt;p&gt;This is what a real RSC ergonomic story looks like. Not a new mental model — the same one — just expressed at the granularity humans actually think in.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why this should be a standard feature
&lt;/h2&gt;

&lt;p&gt;The technique is not exotic. It is closure conversion, a transform compilers have been doing since the seventies. The hard part is wiring it into the RSC plugin chain so that virtual modules generated for inline directives flow through the same client/server graph the rest of the system already uses. That is engineering, not research.&lt;/p&gt;

&lt;p&gt;There is no fundamental reason an RSC-capable runtime cannot support this. The directive system is already a contract between the developer and the compiler; expanding it to cover function scopes in addition to module scopes does not change serialization, bundling, streaming, or the security boundary. It only changes where the developer is allowed to write the directive.&lt;/p&gt;

&lt;p&gt;If you are building an RSC runtime: pick this up. If you are using one that does not have it: ask for it. A &lt;code&gt;"use client"&lt;/code&gt; file is not a feature. It is a workaround for a constraint we no longer need to accept.&lt;/p&gt;

&lt;p&gt;The point of RSC was to let us put server logic and client logic next to each other. The directive system, taken at face value, does the opposite: it forces them apart, file by file, until your repository is ninety percent wrappers. We can fix this. It is time to make the fix standard.&lt;/p&gt;

</description>
      <category>rsc</category>
      <category>react</category>
      <category>bundler</category>
      <category>compiler</category>
    </item>
    <item>
      <title>The Forgotten Joy of `node app.js`</title>
      <dc:creator>Viktor Lázár</dc:creator>
      <pubDate>Tue, 28 Apr 2026 17:16:40 +0000</pubDate>
      <link>https://forem.com/lazarv/the-forgotten-joy-of-node-appjs-5761</link>
      <guid>https://forem.com/lazarv/the-forgotten-joy-of-node-appjs-5761</guid>
      <description>&lt;p&gt;There used to be a moment, ten years or so ago, when you could go from "I have an idea" to "I have a running web server" in about thirty seconds:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// app.js&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;express&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;express&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;app&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;express&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

&lt;span class="nx"&gt;app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;/&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;req&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;send&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;hello&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;

&lt;span class="nx"&gt;app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;listen&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;3000&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;node app.js
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That was the whole thing. One file. One command. You could paste it into a Slack message. You could drop it in a Gist and someone could run it. A tiny webhook receiver, a debug dashboard, an internal tool, a stub API — the entire project lived in a single buffer in your editor.&lt;/p&gt;

&lt;p&gt;Then frontend frameworks happened, and somewhere along the way we collectively decided that "starting a new project" meant something else entirely.&lt;/p&gt;

&lt;h2&gt;
  
  
  The scaffold tax
&lt;/h2&gt;

&lt;p&gt;Today, the canonical first step in starting a new app is no longer writing code. It is running a command that writes code &lt;em&gt;for&lt;/em&gt; you:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npx create-next-app@latest my-app
npx create-react-app my-app
npm create vite@latest
npx create-remix
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;What comes back is not a file. It is a &lt;em&gt;tree&lt;/em&gt;. Configuration files for tooling you have not yet decided to use. A &lt;code&gt;pages/&lt;/code&gt; or &lt;code&gt;app/&lt;/code&gt; directory with conventions you must learn before you can write a single line. A &lt;code&gt;tsconfig.json&lt;/code&gt; you did not write. ESLint rules. Prettier rules. A &lt;code&gt;.gitignore&lt;/code&gt;. A &lt;code&gt;README.md&lt;/code&gt; describing the scaffold itself. A &lt;code&gt;package.json&lt;/code&gt; with twelve dependencies and four scripts you did not pick.&lt;/p&gt;

&lt;p&gt;And, critically, there is no path &lt;em&gt;back&lt;/em&gt; to a single file. The scaffold is the unit of starting. There is no &lt;code&gt;framework dev ./App.jsx&lt;/code&gt;. There is only &lt;code&gt;framework new my-project&lt;/code&gt;, which produces forty files, of which you will edit two.&lt;/p&gt;

&lt;p&gt;This is fine when you are starting a real product. It is absurd when you are not.&lt;/p&gt;

&lt;h2&gt;
  
  
  What we lost
&lt;/h2&gt;

&lt;p&gt;The single-file app is not a relic of a less mature ecosystem. It is a fundamentally different &lt;em&gt;mode&lt;/em&gt; of working — one the modern frontend toolchain has quietly priced out of existence.&lt;/p&gt;

&lt;p&gt;Specifically, we lost:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The throwaway.&lt;/strong&gt; The five-minute hack to verify that an idea works. The "let me just see what this looks like rendered" experiment. With a scaffold, the cost of starting is high enough that you don't bother. You either pollute an existing big project, or you open the browser DevTools console.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The teaching artifact.&lt;/strong&gt; A blog post used to be able to say &lt;em&gt;here, run this file&lt;/em&gt;. Now it says &lt;em&gt;clone this repo&lt;/em&gt;. The reader is no longer reading code; they are operating a project.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The micro-app.&lt;/strong&gt; The three-route admin tool. The internal status page. The webhook that posts a Slack message. Things that should be one file are now twenty, because the framework demands it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The shareable Gist.&lt;/strong&gt; I cannot send you a single &lt;code&gt;.jsx&lt;/code&gt; file and have you run it. I have to send you a repository — or a CodeSandbox URL, which is its own confession that the local toolchain has gotten too heavy.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The curl-and-run.&lt;/strong&gt; Plain Node lets you stream a program straight from a URL into the runtime, no file on disk: &lt;code&gt;curl https://gist.githubusercontent.com/.../app.js | node&lt;/code&gt;. No clone, no install, no project to set up. The source travels over the wire, lands in the interpreter, runs. The same pattern should work for a single-file frontend app — &lt;code&gt;curl https://.../App.jsx | npx some-framework dev -&lt;/code&gt; — and the fact that this is &lt;em&gt;unimaginable&lt;/em&gt; today is the most concrete possible measurement of how heavy "starting a frontend app" has become. We have a JavaScript-shaped hole in our shells that the language used to fit through.&lt;/p&gt;

&lt;p&gt;There is a fractal version of this same pain one level down. Even &lt;em&gt;inside&lt;/em&gt; a project, modern React's &lt;code&gt;"use client"&lt;/code&gt; directive forces single features to be sharded across multiple files for purely mechanical reasons — the same disease, at smaller scale. I wrote about that version separately in &lt;a href="https://dev.to/lazarv/the-use-client-tax-1ed0"&gt;The "use client" Tax&lt;/a&gt;. What follows here is the project-level shape of the same problem: even when the whole app &lt;em&gt;should&lt;/em&gt; be one file, you are not allowed to write it that way.&lt;/p&gt;

&lt;h2&gt;
  
  
  The shape of the fix
&lt;/h2&gt;

&lt;p&gt;Imagine, for a second, that this just worked:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npx next dev ./App.jsx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;One file. One command. The framework picks it up, runs it, hot-reloads it, serves it. No &lt;code&gt;next.config.js&lt;/code&gt;, no &lt;code&gt;pages/&lt;/code&gt;, no &lt;code&gt;app/&lt;/code&gt;, no &lt;code&gt;package.json&lt;/code&gt;. If you decide later that you want a real project, you make the directory, add the config, split the file. The framework grows with you instead of demanding everything upfront.&lt;/p&gt;

&lt;p&gt;The technology to do this is not hard. Frameworks already build on dev servers — Vite, esbuild, Turbopack — that can resolve and bundle a single entry point. The framework conventions (file-based routing, layouts, server components) are conventions &lt;em&gt;over&lt;/em&gt; the bundler, not &lt;em&gt;replacements&lt;/em&gt; for it. There is no fundamental reason a framework's CLI cannot accept a path to a &lt;code&gt;.jsx&lt;/code&gt; file and Just Work, with the conventions kicking in only once you opt into a directory layout.&lt;/p&gt;

&lt;p&gt;The reason it doesn't work is not technical. It's cultural. We have decided, somewhere along the way, that &lt;em&gt;the project&lt;/em&gt; is the unit of frontend code, and the file is merely an implementation detail. Backend frameworks never made that mistake. You can still write a fifteen-line &lt;code&gt;server.js&lt;/code&gt; and run it. You can still write a Flask app in one file. You can still put a Go HTTP handler in &lt;code&gt;main.go&lt;/code&gt; and ship it. Scaffolds are offered as a convenience, not enforced as a precondition.&lt;/p&gt;

&lt;p&gt;Frontend should be no different.&lt;/p&gt;

&lt;h2&gt;
  
  
  One file in, one file out
&lt;/h2&gt;

&lt;p&gt;The single-file dev story is only half of the picture. The other half is what comes out when you build.&lt;/p&gt;

&lt;p&gt;Today, building a frontend project produces another tree. A &lt;code&gt;.next/&lt;/code&gt; directory. A &lt;code&gt;dist/&lt;/code&gt; directory. A &lt;code&gt;.output/&lt;/code&gt; directory. Hundreds of chunked JavaScript files, manifests, server bundles, client bundles, route maps — and a &lt;code&gt;node_modules&lt;/code&gt; you must ship alongside it, or carefully fold into the deployment artifact. Running the result usually means another framework-specific command (&lt;code&gt;next start&lt;/code&gt;, &lt;code&gt;node .output/server/index.mjs&lt;/code&gt;) that depends on the surrounding directory structure being intact.&lt;/p&gt;

&lt;p&gt;It should be possible to do this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;some-framework build ./App.jsx &lt;span class="nt"&gt;-o&lt;/span&gt; app.js
node app.js
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;One file in. One file out. No &lt;code&gt;node_modules&lt;/code&gt;, no config, no manifest, no &lt;code&gt;dist/&lt;/code&gt; to preserve. A single &lt;code&gt;.js&lt;/code&gt; that boots an HTTP server, serves the assets it needs (inlined or referenced), and runs on any Node install with nothing else next to it.&lt;/p&gt;

&lt;p&gt;Backend developers have had this for years, just under different names. Go produces a static binary. Deno compiles to a single executable. esbuild can bundle a Node program into one file. The pattern is universal: take everything the program needs, fold it into one artifact, ship that. Nothing about a React app — even a server-rendered, server-component-heavy React app — fundamentally prevents the same thing.&lt;/p&gt;

&lt;p&gt;What this unlocks is bigger than convenience:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Trivial deployment.&lt;/strong&gt; &lt;code&gt;scp app.js server:/srv/ &amp;amp;&amp;amp; node app.js&lt;/code&gt;. No CI artifact pipelines, no Docker images for a webhook receiver, no Kubernetes for a status page.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Reproducibility.&lt;/strong&gt; The artifact is a file. You can hash it, version it, archive it, email it. Not a directory whose contents quietly differ depending on which &lt;code&gt;npm install&lt;/code&gt; produced it.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Sandboxes.&lt;/strong&gt; A single file is something a sandbox runtime — a serverless platform, a worker, a container — can swallow whole, with no need to mount a &lt;code&gt;node_modules&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Distribution.&lt;/strong&gt; Internal tools become as easy to share as a CLI binary. "Drop this on the server and run it" is a workflow we lost the moment frontends grew a build directory.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The deploy story for a small app should be as small as the app. Right now, even a thirty-line frontend deploys like a monorepo.&lt;/p&gt;

&lt;h2&gt;
  
  
  And then AI showed up
&lt;/h2&gt;

&lt;p&gt;The scaffold tax used to be paid mostly by humans — a one-time annoyance you absorbed at project start, then forgot about. AI coding tools have quietly turned it into a recurring tax, paid on every interaction.&lt;/p&gt;

&lt;p&gt;When you ask an AI to modify a single-file app, it can read the entire program in one shot, hold the whole behavior in its working memory, and reason about a change with confidence. The file &lt;em&gt;is&lt;/em&gt; the project. There is nothing else to discover.&lt;/p&gt;

&lt;p&gt;When you ask an AI to modify a scaffolded project, it has to do archaeology first. Where does routing live? Which &lt;code&gt;tsconfig&lt;/code&gt; paths are aliased? Is that import resolved by a framework convention or by the bundler? Is &lt;code&gt;app/&lt;/code&gt; the routing root, or a coincidentally named folder? What does the project's ESLint config forbid? Half the request gets spent loading context that wasn't actually relevant to the change.&lt;/p&gt;

&lt;p&gt;This shows up as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Worse answers&lt;/strong&gt;, because the model is reasoning under a noisier prompt.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Slower answers&lt;/strong&gt;, because more files have to be read before it can act.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;More expensive answers&lt;/strong&gt;, because tokens are not free, and a fresh agent re-discovers the same project structure on every session.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;More fragile answers&lt;/strong&gt;, because the model has more surface area on which to misread a convention.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A one-file app is, by accident, the ideal substrate for AI-assisted coding: the entire program fits in a single attention window, every symbol resolves locally, and the change you ask for can be made without crawling a directory tree first. The convention overhead we built up to make starting a project "easier" turns out to be overhead we now pay &lt;em&gt;every time&lt;/em&gt; we ask a tool to help us edit one.&lt;/p&gt;

&lt;p&gt;The same things that made the single-file app pleasant to write by hand — small surface, no hidden conventions, nothing to discover — make it the format AI tools handle best. We just stopped producing apps in that shape.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why this matters
&lt;/h2&gt;

&lt;p&gt;There is a subtle compounding effect to all of this. When the cost of starting is high, people start fewer things. When people start fewer things, the ecosystem gets less weird, less experimental, less playful. The thirty-line idea that would have become a beloved internal tool never gets written, because the scaffolding tax was higher than the energy budget for the experiment.&lt;/p&gt;

&lt;p&gt;The modern frontend stack is extraordinarily capable. It can render server components, stream HTML, hydrate selectively, generate static pages, run on the edge, do incremental builds. None of that is at odds with also being able to do &lt;em&gt;this&lt;/em&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;some-framework dev ./App.jsx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It's a small surface area. It's enormously valuable. And it is, conspicuously, missing from almost every option you'd reach for today.&lt;/p&gt;

&lt;p&gt;The good news is hiding in that &lt;em&gt;almost&lt;/em&gt;. If you look around carefully, this capability is starting to reappear in the corners of the ecosystem — runtimes that treat the single file as a first-class entry point, not as a degenerate case of a project. It's worth keeping an eye on.&lt;/p&gt;

&lt;p&gt;The thirty-second app deserves to come back.&lt;/p&gt;

</description>
      <category>discuss</category>
      <category>javascript</category>
      <category>node</category>
      <category>webdev</category>
    </item>
    <item>
      <title>The Two Joys of Coding Before AI</title>
      <dc:creator>Viktor Lázár</dc:creator>
      <pubDate>Mon, 27 Apr 2026 19:51:28 +0000</pubDate>
      <link>https://forem.com/lazarv/the-two-joys-of-coding-before-ai-1pbp</link>
      <guid>https://forem.com/lazarv/the-two-joys-of-coding-before-ai-1pbp</guid>
      <description>&lt;p&gt;There is a particular kind of grief floating around right now. You see it in blog posts, in conference talks, in late-night threads: a mourning for the joy of coding before AI. People describe it as if a forest has been paved over. Something they loved is gone, and something colder has taken its place.&lt;/p&gt;

&lt;p&gt;I think most of these conversations talk past each other because they skip the only question that matters: &lt;strong&gt;what was the joy actually made of?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;"Coding" is not one activity. It is at least two, braided together so tightly that for decades nobody had to separate them. AI pulls on one of those strands and not the other, and whether that feels like loss or liberation depends entirely on which strand you were holding.&lt;/p&gt;

&lt;h2&gt;
  
  
  Two kinds of joy
&lt;/h2&gt;

&lt;p&gt;Strip a programming session down to its emotional core and you find two distinct rewards.&lt;/p&gt;

&lt;p&gt;The first is &lt;strong&gt;the joy of materializing a vision&lt;/strong&gt;. You see something in your head — a tool, an interface, a system, a small clever thing that does not exist yet — and you bring it into the world. The pleasure here is in the gap closing. The thing in your imagination and the thing on the screen converge until they are the same thing. The keyboard, the language, the build system: these are friction. Necessary friction, often beautiful friction, but friction. The joy lives at the moment of arrival.&lt;/p&gt;

&lt;p&gt;The second is &lt;strong&gt;the joy of figuring something out&lt;/strong&gt;. A problem resists you. You sit with it. You pull on threads, build mental models, get them wrong, refine them, and somewhere — sometimes in the shower, sometimes at 2 a.m., sometimes mid-sentence in a meeting — the shape of the answer clicks into place. The pleasure here is not in arrival but in the act of comprehension itself. You understand something now that you did not understand an hour ago, and your brain rewards you for it the way it rewards eating when you are hungry.&lt;/p&gt;

&lt;p&gt;These are not the same feeling. They use different muscles. They satisfy different hungers. And — this is the important part — they leave behind different kinds of memory. A vision-materializer remembers what they built. A problem-solver remembers how the world bent into a new shape inside their head.&lt;/p&gt;

&lt;p&gt;Most working programmers feel both, in different proportions, sometimes in the same hour. But if you ask honestly which one is the &lt;em&gt;core&lt;/em&gt; — the thing that made you a programmer rather than something else — almost everyone has an answer.&lt;/p&gt;

&lt;h2&gt;
  
  
  What AI actually changes
&lt;/h2&gt;

&lt;p&gt;AI coding assistants are extraordinary materializers. That is what they are built to be. You describe the thing, they produce the thing. The friction between vision and artifact collapses dramatically. What used to be an afternoon of plumbing is now a paragraph and a review pass.&lt;/p&gt;

&lt;p&gt;If your joy was in the materialization — in seeing the thing exist — AI is not stealing anything from you. It is &lt;strong&gt;giving you more of what you loved&lt;/strong&gt;. The gap closes faster, which means you can close more gaps, which means more visions per unit of life. The friction you tolerated was never the source of the joy; the arrival was. You can build the second thing now, and the third, and the weird side project you never had time for. The hands-on craft loss is real, but it is a craft loss, not a joy loss. You can still write the loop by hand on a Saturday if you want to. Nothing stops you.&lt;/p&gt;

&lt;p&gt;One thing has to be said clearly here, because it is the most common bad-faith reading of the materializer position: &lt;strong&gt;materialization is not "whatever shipped, shipped."&lt;/strong&gt; A vision is not just a silhouette of a feature; it has internal coherence, a way it behaves under pressure, a quality it carries. A materializer who accepts slop because it superficially resembles the artifact in their head has not closed the gap — they have moved the goalposts to meet the output. That is not the joy of materializing a vision. That is the relief of being done. They are different feelings, and conflating them is how teams end up shipping confident-sounding garbage at unprecedented speed. The AI gives you a draft. The work — the actual materializer's work — is to keep pushing the draft until it matches the thing in your head, including the parts of the thing in your head that have to do with correctness, taste, performance, and how it will read to the next person. Acceleration is acceleration toward the &lt;em&gt;right&lt;/em&gt; artifact, not toward any artifact. A materializer who forgets this is no longer practicing their craft; they are just hitting accept.&lt;/p&gt;

&lt;p&gt;If your joy was in the figuring out, the picture is genuinely different — and the grief is genuinely earned. The AI is not just removing friction; it is removing &lt;strong&gt;the problem itself&lt;/strong&gt;. The puzzle you would have sat with for three days, turning it over in your head on the train and in the shower, building the mental model that becomes part of how you think forever — the AI hands you an answer in twelve seconds. The answer is often correct. And the comprehension that would have grown in you while you struggled does not grow, because you did not struggle.&lt;/p&gt;

&lt;p&gt;This is not a complaint about laziness or skill atrophy, though those are real concerns. It is something more specific: a category of human pleasure, the pleasure of &lt;em&gt;understanding something hard&lt;/em&gt;, requires the hardness. Remove the hardness and you remove the pleasure, even if you keep the answer. You cannot have the satisfaction of a crossword without the crossword.&lt;/p&gt;

&lt;h2&gt;Why the debate is so confused&lt;/h2&gt;

&lt;p&gt;Once you see this split, the public conversation about AI and coding starts to make more sense. The two camps are not actually disagreeing about AI. They are reporting honestly on two different inner experiences.&lt;/p&gt;

&lt;p&gt;The "AI is wonderful, I ship five times faster" camp is overwhelmingly populated by materializers. They are telling the truth. Their joy is intact and amplified.&lt;/p&gt;

&lt;p&gt;The "AI is hollowing out my craft" camp is overwhelmingly populated by problem-solvers. They are also telling the truth. Their joy is, in fact, being eroded — not by malice or hype, but by the specific mechanism of having the puzzles solved before they get to play with them.&lt;/p&gt;

&lt;p&gt;When these two groups argue, they sound like they are arguing about a tool. They are actually arguing about which of two pleasures is the real one. There is no answer to that question, because there are two answers, and they are both correct for the person giving them.&lt;/p&gt;

&lt;p&gt;Notice how each camp uses the same phrase to dismiss the other's grief. To the materializer, the problem-solving was always an &lt;em&gt;implementation detail&lt;/em&gt; — a means, a tax you paid on the way to the artifact, something a sufficiently advanced tool was supposed to absorb eventually. To the problem-solver, the shipped artifact was the implementation detail — the residue, the visible echo of an internal event that had already happened in their head. Each side, in good faith, treats the other side's joy as the boring scaffolding around their own. That is why the conversation goes nowhere: both sides are correctly identifying what is, &lt;em&gt;for them&lt;/em&gt;, incidental.&lt;/p&gt;

&lt;h2&gt;
  
  
  What to do about it
&lt;/h2&gt;

&lt;p&gt;If you are mourning, the first useful move is to ask yourself the honest version of the question. Not "do I miss coding," but: &lt;em&gt;which kind of coding?&lt;/em&gt; When you replay the moments you would call joyful, are you watching yourself ship the thing, or are you watching yourself understand the thing? The answer tells you whether you are facing an opportunity or a loss.&lt;/p&gt;

&lt;p&gt;For materializers, the path is mostly forward. Use the tools. Build more. The thing you loved is more available now, not less.&lt;/p&gt;

&lt;p&gt;For problem-solvers, the answer is harder and more deliberate. The puzzles still exist; they have just stopped arriving on their own. Production code paths now route around them. To keep the joy, you have to &lt;strong&gt;choose the friction back in&lt;/strong&gt; — pick problems the AI cannot solve cleanly, work in domains where the model is weak, build from scratch on weekends, read papers, do the leetcode-equivalent that is actually interesting to you, contribute to runtimes and compilers and other places where the problem space is still deep enough that no autocomplete can shortcut it. The protected hour where you do not ask the assistant is not a Luddite stance; it is a deliberate preservation of the conditions your joy requires.&lt;/p&gt;

&lt;p&gt;Both responses are healthy. Both are grown-up. What is not healthy is conflating them — using a materializer's optimism to dismiss a problem-solver's grief, or using a problem-solver's grief to deny a materializer's genuine, earned acceleration.&lt;/p&gt;

&lt;h2&gt;The thing under the thing&lt;/h2&gt;

&lt;p&gt;The deeper claim hiding inside all of this is that &lt;em&gt;coding was never one thing&lt;/em&gt;. It was a workbench where two very different human pleasures happened to use the same tools. The industry treated them as one because the workflow forced them to be one — you could not materialize a vision without solving a hundred small problems along the way, and you could not solve interesting problems without something to materialize them into.&lt;/p&gt;

&lt;p&gt;AI is the first force strong enough to pull those two pleasures apart. It is doing so cleanly and without asking permission. What we are watching is not the death of the joy of coding. It is the unbundling of two joys that were always separate, finally being forced to admit it.&lt;/p&gt;

&lt;p&gt;Which one was yours?&lt;/p&gt;

</description>
      <category>ai</category>
      <category>programming</category>
      <category>productivity</category>
    </item>
  </channel>
</rss>
