<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: RAXXO Studios</title>
    <description>The latest articles on Forem by RAXXO Studios (@raxxostudios).</description>
    <link>https://forem.com/raxxostudios</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3848289%2Ffd2912c9-5820-4993-8fdc-62ec1e778980.png</url>
      <title>Forem: RAXXO Studios</title>
      <link>https://forem.com/raxxostudios</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/raxxostudios"/>
    <language>en</language>
    <item>
      <title>Claude Connectors Now Reach Into Adobe, Blender, Ableton, Affinity, And Fusion</title>
      <dc:creator>RAXXO Studios</dc:creator>
      <pubDate>Mon, 04 May 2026 06:15:41 +0000</pubDate>
      <link>https://forem.com/raxxostudios/claude-connectors-now-reach-into-adobe-blender-ableton-affinity-and-fusion-5el</link>
      <guid>https://forem.com/raxxostudios/claude-connectors-now-reach-into-adobe-blender-ableton-affinity-and-fusion-5el</guid>
      <description>&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Claude Connectors now reach into Adobe (After Effects, Photoshop, Illustrator), Blender, Ableton Live, Affinity, and Autodesk Fusion via MCP&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Brief-to-board flows, prompt-to-comp animations, MIDI session ideas, and CAD-to-render handoffs without leaving the chat&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;This sits on top of the existing 200+ Claude Connectors directory and is available across Claude tiers&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Workflow tests across motion, design, music, and 3D show real time savings, not press-release theater&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Affinity gets a Photoshop-alternative connector, Fusion gets parametric tweaks via prompt, Ableton gets MIDI scaffolding&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I've spent the last week pointing Claude at the apps I actually open every day. Not browser tabs. Real creative software. The new wave of Claude Connectors hooks directly into Adobe, Blender, Ableton Live, Affinity, and Autodesk Fusion. The first thing that surprised me: it works.&lt;/p&gt;

&lt;h2&gt;What Anthropic Actually Shipped&lt;/h2&gt;

&lt;p&gt;The headline is simple. Claude Connectors, the same plumbing that powers the &lt;a href="https://dev.to/blogs/lab/claudes-200-connectors-changed-how-i-use-ai"&gt;200+ integrations directory&lt;/a&gt;, now reaches into the creative tooling stack. Adobe is the big one. Blender, Ableton Live, Affinity, and Fusion fill in the rest.&lt;/p&gt;

&lt;p&gt;Under the hood it is the same MCP machinery I covered in &lt;a href="https://dev.to/blogs/lab/mcp-servers-are-how-claude-actually-talks-to-everything"&gt;MCP Servers Are How Claude Actually Talks to Everything&lt;/a&gt;. Each app exposes a server. Claude reads the project state, calls actions, gets results back. The connector layer is the consumer surface that hides the YAML and OAuth from anyone who is not into terminal life.&lt;/p&gt;
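&lt;p&gt;For the curious, a connector action is an MCP &lt;code&gt;tools/call&lt;/code&gt; request over JSON-RPC under the hood. A hypothetical call against a Photoshop server might look like this (the tool name and arguments are illustrative, not Adobe's actual schema):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "export_layers",
    "arguments": { "document": "hero.psd", "format": "png", "width": 2000 }
  }
}
&lt;/code&gt;&lt;/pre&gt;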

&lt;p&gt;A few things that matter:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;The connectors are available across Claude tiers, not gated behind Enterprise.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;They run on your existing project files. No re-import, no proprietary format lock.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;They are scoped, so Claude can read a Photoshop document without writing to your Lightroom catalog.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;That is the boring part. The interesting part is what happens when you actually use them.&lt;/p&gt;

&lt;h2&gt;Brief To Board In Adobe&lt;/h2&gt;

&lt;p&gt;I run all my motion work in After Effects. The new Adobe connector reaches into the Creative Cloud apps I touch most: Photoshop, Illustrator, and AE.&lt;/p&gt;

&lt;p&gt;The first test was brief-to-board. I dropped a one-paragraph campaign brief into Claude, told it which Illustrator file to scaffold, and asked for a 6-frame storyboard with placeholder copy. Two minutes later I had an .ai file with 6 artboards, headline text in the brand font, and a rough composition per frame. Not finished art, but a starting point that gets me 80% of the way from a blank canvas.&lt;/p&gt;

&lt;p&gt;The second test was prompt-to-comp in After Effects. I described a 10-second logo reveal. Claude built the comp, added a shape layer logo placeholder, set up two keyframes on scale and opacity, and named the layers in a way I did not have to rename. Easy ease was already applied. The render came out at 1080p in roughly 40 seconds.&lt;/p&gt;

&lt;p&gt;What makes this useful is not the speed. It is that I stayed in the chat to iterate. "Make the logo settle 200ms later. Add a subtle blur on entry. Change the background to brand lime." Each instruction edited the existing comp instead of regenerating from scratch. That is the difference between a toy and a tool.&lt;/p&gt;

&lt;p&gt;For Photoshop, I tried batch edits across a folder of product shots: remove background, normalize crop to 1:1, export at 2000px. Claude did it without me touching the Actions panel. I have spent years writing those Actions by hand. Not anymore.&lt;/p&gt;

&lt;h2&gt;Blender And Fusion: 3D Without The Menu Hunt&lt;/h2&gt;

&lt;p&gt;Blender has been MCP-accessible for a while via community servers. The new connector cleans up the rough edges. I asked Claude to build a procedural shelf system, parametric on width and depth, with chamfered edges and a wood material. It scripted the mesh, set up the modifiers, and exposed the parameters as drivers. Two iterations later I had a usable asset.&lt;/p&gt;

&lt;p&gt;Fusion is the bigger story for product folks. The connector reads the parametric tree, edits dimensions, regenerates the body, and triggers a render. I rebuilt a small CAD part by describing the changes I wanted: "Make the mounting hole 8mm. Add a 2mm fillet on the top edge. Render with the studio HDRI." It did all three.&lt;/p&gt;

&lt;p&gt;The thing that nobody mentions: this kills the menu-hunting tax. In Fusion specifically, you can lose 20 minutes finding the right command. Claude knows where everything lives and just calls it. If you are coming from CAD, that alone is worth the setup time.&lt;/p&gt;

&lt;p&gt;For motion designers using AE alongside 3D, the round-trip is now reasonable. Build the asset in Blender via Claude, export, drop into AE, animate via the AE connector. I did this end-to-end for a 5-second product reveal in under 30 minutes. That used to be a half day.&lt;/p&gt;

&lt;h2&gt;MIDI Sessions In Ableton Live&lt;/h2&gt;

&lt;p&gt;The Ableton connector is the one I expected to be a toy. It is not. It scaffolds. You give Claude a vibe, a tempo, and a key. It returns a session: drum pattern, bass line, two MIDI clips with chord progressions, and a melody sketch.&lt;/p&gt;

&lt;p&gt;I ran three tests. A 90 BPM lo-fi session for a YouTube intro. A 128 BPM techno scaffold for a product launch teaser. A 70 BPM cinematic bed for a tutorial voiceover. All three were usable as starting points. None of them were finished tracks. That is the right level of ambition for a connector. Get me past the blank-session paralysis.&lt;/p&gt;

&lt;p&gt;What I liked specifically: it suggests instrument racks from the stock Ableton library, so I do not have to fight third-party plugin licensing. It also names the clips, which sounds trivial until you have shipped a project full of "MIDI 1, MIDI 2, MIDI 3" and tried to find the kick drum at 1am.&lt;/p&gt;

&lt;p&gt;For sound designers and podcast producers, the prompt-to-pattern flow is real. Stems are still your job. Sketches are not.&lt;/p&gt;

&lt;h2&gt;Affinity Steps In As The Photoshop Alternative&lt;/h2&gt;

&lt;p&gt;Affinity has been the quiet Photoshop alternative for years. The connector makes it competitive in a way it was not before.&lt;/p&gt;

&lt;p&gt;I tested the same batch product-shot workflow I ran in Photoshop. Affinity did it. Slightly different terminology, same result. The connector understands Affinity's persona model, which is the part most ports get wrong. You can ask for "vector adjustments to the logo" and it switches to the Designer persona. Ask for "raster cleanup on the hero shot" and it switches to Photo persona.&lt;/p&gt;

&lt;p&gt;If you are paying 60 EUR a month for Creative Cloud and only using Photoshop and Illustrator, the math just changed. Affinity is a one-time purchase. With a Claude connector, the workflow gap is closed enough for most solo work. I am not telling anyone to ditch Adobe. I am saying the option is now real.&lt;/p&gt;

&lt;p&gt;This is the kind of move that quietly reshapes a stack. Same story as when &lt;a href="https://dev.to/blogs/lab/claude-design-launches-everything-you-need-to-know"&gt;Claude Design launched as a Canva engine&lt;/a&gt;: the AI surface flattens the difference between tools, and the cheaper tool wins on price.&lt;/p&gt;

&lt;h2&gt;What This Means For Solo Creative Studios&lt;/h2&gt;

&lt;p&gt;Three things changed for me this week.&lt;/p&gt;

&lt;p&gt;First, the Adobe tax is now negotiable. If I can run 80% of my Photoshop work through Affinity via Claude, my software bill drops by 30 EUR a month without losing capability. That is 360 EUR a year I can put into &lt;a href="https://referral.magnific.com/mQMIvsh" rel="noopener noreferrer"&gt;Magnific&lt;/a&gt; credits or a render farm subscription.&lt;/p&gt;

&lt;p&gt;Second, the briefing layer collapsed. Brief-to-board used to be 2 hours. Now it is 20 minutes plus polish. The polish part still needs me. The scaffolding does not.&lt;/p&gt;

&lt;p&gt;Third, music and 3D are no longer adjacent skills. They are accessible from the same chat. I am not going to claim I can score a film now. I can put a usable scratch track behind a tutorial in 10 minutes. I can rebuild a CAD part from a verbal description. That is enough to ship more work without hiring out.&lt;/p&gt;

&lt;p&gt;The catch is the same as every connector wave. You still need taste. Claude will scaffold a bad campaign brief into a polished bad storyboard. The connector does not know if your idea is good. That part is on you.&lt;/p&gt;

&lt;p&gt;For scheduling all this new output across platforms, I still run &lt;a href="https://join.buffer.com/raxxo-studios" rel="noopener noreferrer"&gt;Buffer&lt;/a&gt;. The creative pipeline got shorter. The distribution pipeline did not.&lt;/p&gt;

&lt;h2&gt;Bottom Line&lt;/h2&gt;

&lt;p&gt;Claude Connectors for Adobe, Blender, Ableton, Affinity, and Fusion are not a press release. They are working tools. I tested all five this week. Each saved me real time on real projects. Adobe is the headline. Affinity is the sleeper. Fusion is the productivity bomb for anyone doing CAD. Ableton is a useful sketchpad. Blender cleaned up its rough edges.&lt;/p&gt;

&lt;p&gt;If you want the directory-level overview of how Connectors fit together, see &lt;a href="https://dev.to/blogs/lab/claudes-200-connectors-changed-how-i-use-ai"&gt;Claude's 200+ Connectors Changed How I Use AI&lt;/a&gt;. If you want the technical undercurrent, &lt;a href="https://dev.to/blogs/lab/mcp-servers-are-how-claude-actually-talks-to-everything"&gt;MCP Servers Are How Claude Actually Talks to Everything&lt;/a&gt; is the deeper read. If you are a designer trying to keep up with the AI surface that keeps eating your tools, &lt;a href="https://dev.to/blogs/lab/claude-design-launches-everything-you-need-to-know"&gt;Claude Design Launches&lt;/a&gt; sets the context.&lt;/p&gt;

&lt;p&gt;The pattern is clear. The chat is becoming the workspace. The apps are becoming the canvas. The work is still yours.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>productivity</category>
      <category>claudecode</category>
      <category>automation</category>
    </item>
    <item>
      <title>Server-Sent Events Beat WebSockets for 80% of My AI Streaming UIs: 5 Patterns</title>
      <dc:creator>RAXXO Studios</dc:creator>
      <pubDate>Mon, 04 May 2026 06:15:04 +0000</pubDate>
      <link>https://forem.com/raxxostudios/server-sent-events-beat-websockets-for-80-of-my-ai-streaming-uis-5-patterns-49ac</link>
      <guid>https://forem.com/raxxostudios/server-sent-events-beat-websockets-for-80-of-my-ai-streaming-uis-5-patterns-49ac</guid>
      <description>&lt;ul&gt;
&lt;li&gt;&lt;p&gt;SSE handles 80% of AI streaming UIs with one HTTP/2 connection and zero WebSocket plumbing&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;EventSource auto-reconnects after roughly 3 seconds with no client retry logic&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;5 patterns: chat token paint, agent task feeds, cron dashboards, image generation, dev hot reload&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;WebSockets still win for sub-50ms duplex, binary frames, or true bidirectional flows&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;EventSource adds 14 bytes overhead per message vs 2-6 for WS frames, irrelevant under 100 msg/sec&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I shipped 14 AI streaming UIs across raxxo.shop, the Lab tools, and three client projects last quarter. Twelve of them use Server-Sent Events. Two use WebSockets. The split surprised me, because I started every one of them assuming I needed WebSockets.&lt;/p&gt;

&lt;p&gt;LLM inference is one-way streaming. Tokens flow from server to browser. The browser does not interrupt. That is the textbook SSE use case, and yet every "build a Claude clone" tutorial reaches for WebSockets out of habit. Here is what I actually use, with the 5 patterns that cover most of my streaming work.&lt;/p&gt;

&lt;h2&gt;Why Server-Sent Events Beat WebSockets for AI Streaming&lt;/h2&gt;

&lt;p&gt;The full WebSocket dance is a connection upgrade, ping/pong heartbeats, manual reconnect logic, frame queueing, and a parallel auth path because cookies do not always travel through the upgrade. For LLM token streams, none of that earns its keep.&lt;/p&gt;

&lt;p&gt;Server-Sent Events are plain HTTP. The browser opens a &lt;code&gt;text/event-stream&lt;/code&gt; response, the server pushes lines, the connection stays open. EventSource handles reconnection in roughly 3 seconds with no client code. Cookies, CORS, and HTTP/2 multiplexing all work the way the rest of your stack already works.&lt;/p&gt;

&lt;p&gt;Framing costs more. SSE messages carry a &lt;code&gt;data:&lt;/code&gt; prefix and double newline, around 14 bytes per event. WebSocket binary frames cost 2 to 6 bytes. At 60 tokens per second from Claude, that 8-byte difference is 480 bytes per second. Not a problem. At 100,000 messages per second on a trading feed, it is. Pick the right tool for the throughput you actually have.&lt;/p&gt;
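&lt;p&gt;For reference, here is that framing on the wire. Each event is one or more &lt;code&gt;data:&lt;/code&gt; lines terminated by a blank line; a named event adds an &lt;code&gt;event:&lt;/code&gt; line:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;data: The

data:  quick

event: done
data:
&lt;/code&gt;&lt;/pre&gt;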

&lt;p&gt;When WebSockets still win:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Sub-50ms latency duplex (multiplayer games, voice rooms, collaborative cursors)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Binary frames at scale (audio chunks, video, protobuf)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;True bidirectional flows where the client constantly pushes back&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Every other streaming UI I have built fits SSE.&lt;/p&gt;

&lt;h2&gt;Pattern 1: Stream Claude API to Browser With EventSource&lt;/h2&gt;

&lt;p&gt;The Claude API streams via SSE already. You proxy that stream straight to the browser. No queueing, no message broker, no Redis pub/sub.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;
&lt;span class="c1"&gt;// Hono backend on Vercel&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;streamSSE&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;hono/streaming&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;Anthropic&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;@anthropic-ai/sdk&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;

&lt;span class="nx"&gt;app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;post&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;/chat&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;c&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;prompt&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;c&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;req&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;json&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nf"&gt;streamSSE&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;c&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;stream&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;anthropic&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;messages&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;stream&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
      &lt;span class="na"&gt;model&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;claude-opus-4-7&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;max_tokens&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;4096&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;messages&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[{&lt;/span&gt; &lt;span class="na"&gt;role&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;user&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;content&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;prompt&lt;/span&gt; &lt;span class="p"&gt;}]&lt;/span&gt;
    &lt;span class="p"&gt;})&lt;/span&gt;
    &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="k"&gt;await &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;chunk&lt;/span&gt; &lt;span class="k"&gt;of&lt;/span&gt; &lt;span class="nx"&gt;response&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;chunk&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="kd"&gt;type&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;content_block_delta&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;stream&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;writeSSE&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;data&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;chunk&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;delta&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;text&lt;/span&gt; &lt;span class="p"&gt;})&lt;/span&gt;
      &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;stream&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;writeSSE&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;event&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;done&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;data&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;''&lt;/span&gt; &lt;span class="p"&gt;})&lt;/span&gt;
  &lt;span class="p"&gt;})&lt;/span&gt;
&lt;span class="p"&gt;})&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;
&lt;span class="c1"&gt;// React frontend&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;es&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;EventSource&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;/chat&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nx"&gt;es&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;onmessage&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;e&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nf"&gt;setText&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;prev&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;prev&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;e&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nx"&gt;es&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;addEventListener&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;done&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;es&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;close&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That is the full streaming chat flow. 18 lines of server, 3 of client. I covered the broader Hono setup at &lt;a href="https://dev.to/blogs/lab/hono-the-tiny-framework-that-runs-my-entire-backend"&gt;Hono: The Tiny Framework That Runs My Entire Backend&lt;/a&gt;, which pairs perfectly with SSE because Hono's &lt;code&gt;streamSSE&lt;/code&gt; helper handles all the framing.&lt;/p&gt;

&lt;p&gt;A note on POST: standard EventSource only supports GET. For prompts longer than ~2KB I either send a session ID via GET, or use the &lt;code&gt;@microsoft/fetch-event-source&lt;/code&gt; library which adds POST support without giving up auto-reconnect.&lt;/p&gt;
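&lt;p&gt;If you skip the library and hand-roll POST streaming with &lt;code&gt;fetch&lt;/code&gt;, the part you inherit is parsing that framing yourself. A minimal sketch of a data-only parser, my own helper rather than anything from a library, ignoring &lt;code&gt;id&lt;/code&gt; and &lt;code&gt;retry&lt;/code&gt; fields:&lt;/p&gt;

```typescript
// Split a received buffer into SSE events: events end at a blank line.
// Only "data:" fields are kept; comments and other fields are dropped.
function parseSSE(buffer: string): string[] {
  return buffer
    .split('\n\n')
    .map(block =>
      block
        .split('\n')
        .filter(line => line.startsWith('data:'))
        .map(line => line.replace(/^data: ?/, ''))
        .join('\n') // multi-line data fields rejoin with a newline
    )
    .filter(data => data.length > 0)
}
```

&lt;p&gt;In practice you also have to buffer partial chunks across reads, which is exactly why I reach for the library instead.&lt;/p&gt;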

&lt;h2&gt;Pattern 2: Long-Running Agent Task Progress Feeds&lt;/h2&gt;

&lt;p&gt;Claude Code agents run multi-step plans. The user wants to see "Reading file 1 of 12", "Running tests", "Writing patch". Each step is a discrete event, not a token stream, but the shape is the same: server pushes, client paints.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;
&lt;span class="nx"&gt;app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;/agent/run/:id&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;c&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;id&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;c&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;req&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;param&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;id&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nf"&gt;streamSSE&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;c&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;stream&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;events&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;subscribe&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`agent:&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;  &lt;span class="c1"&gt;// Postgres LISTEN, Redis sub, whatever&lt;/span&gt;
    &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="k"&gt;await &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;evt&lt;/span&gt; &lt;span class="k"&gt;of&lt;/span&gt; &lt;span class="nx"&gt;events&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;stream&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;writeSSE&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
        &lt;span class="na"&gt;event&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;evt&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="kd"&gt;type&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;        &lt;span class="c1"&gt;// 'step', 'tool_call', 'error', 'done'&lt;/span&gt;
        &lt;span class="na"&gt;data&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;JSON&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;stringify&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;evt&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
        &lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;evt&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;seq&lt;/span&gt;             &lt;span class="c1"&gt;// for resume&lt;/span&gt;
      &lt;span class="p"&gt;})&lt;/span&gt;
      &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;evt&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="kd"&gt;type&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;done&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;break&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;})&lt;/span&gt;
&lt;span class="p"&gt;})&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;id&lt;/code&gt; field on each event is the killer feature. EventSource sends the last received ID back as &lt;code&gt;Last-Event-ID&lt;/code&gt; after a reconnect. Your backend can replay missed events from a queue. The 1M context window matters for agents like this, because long agent runs can dump huge context payloads in their final event. I wrote about that at &lt;a href="https://dev.to/blogs/lab/the-1m-context-window-actually-changes-how-i-code"&gt;The 1M Context Window Actually Changes How I Code&lt;/a&gt;.&lt;/p&gt;
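&lt;p&gt;The replay side is easy to sketch. Assuming the backend keeps a short per-run event log keyed by &lt;code&gt;seq&lt;/code&gt; (the helper and shapes here are hypothetical, not Hono API), the reconnect handler just filters on the header value:&lt;/p&gt;

```typescript
// Hypothetical replay helper: given a buffered event log and the
// Last-Event-ID a reconnecting EventSource sent, return what it missed.
type AgentEvent = { seq: number; type: string; data: string }

function missedEvents(log: AgentEvent[], lastEventId: string | undefined): AgentEvent[] {
  const lastSeq = lastEventId ? Number(lastEventId) : 0
  return log.filter(evt => evt.seq > lastSeq)
}
```

&lt;p&gt;On connect, read the header with &lt;code&gt;c.req.header('Last-Event-ID')&lt;/code&gt;, write the missed events first, then hand off to the live subscription.&lt;/p&gt;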

&lt;p&gt;Real numbers from a deploy bot I shipped two weeks ago: 47 events per agent run on average, 3.2 seconds for the average reconnect to complete a full replay, zero events lost across 1,200 runs.&lt;/p&gt;

&lt;h2&gt;Pattern 3: Server-Side Cron Dashboards With SSE&lt;/h2&gt;

&lt;p&gt;Build status, deploy events, last-100 syndication results. The data updates every few seconds and every connected dashboard wants the same view. Classic pub/sub, but you do not need WS.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;
&lt;span class="c1"&gt;// Cron writes to a fanout channel&lt;/span&gt;
&lt;span class="nx"&gt;cron&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;schedule&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;*/30 * * * * *&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;status&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;checkBuilds&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
  &lt;span class="nf"&gt;publish&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;builds:fanout&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;status&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;})&lt;/span&gt;

&lt;span class="c1"&gt;// SSE endpoint subscribes per browser&lt;/span&gt;
&lt;span class="nx"&gt;app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;/dashboard/builds&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;c&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt;
  &lt;span class="nf"&gt;streamSSE&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;c&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;stream&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;sub&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;subscribe&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;builds:fanout&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="k"&gt;await &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;status&lt;/span&gt; &lt;span class="k"&gt;of&lt;/span&gt; &lt;span class="nx"&gt;sub&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;stream&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;writeSSE&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;data&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;JSON&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;stringify&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;status&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;})&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;})&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I run six dashboards on this exact pattern. Vercel deploys, Shopify product sync, blog syndication results, GitHub Actions, Cloudflare cache hit rate, and a custom one for raxxo.shop sales pulse. They all hit the same Redis fanout, each browser opens one SSE connection, and the server fans out without per-tab state.&lt;/p&gt;

&lt;p&gt;This was 1,400 lines of WebSocket code in its previous incarnation. It is now 80 lines of SSE.&lt;/p&gt;
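&lt;p&gt;The &lt;code&gt;publish&lt;/code&gt; and &lt;code&gt;subscribe&lt;/code&gt; helpers the snippets lean on are a Redis fanout in my setup, but nothing about the pattern requires Redis. A minimal in-process sketch (my illustration here, not the production helper) looks like this:&lt;/p&gt;

```javascript
// Minimal in-process pub/sub: publish() pushes a message to every open
// subscriber queue, subscribe() returns an async iterator the SSE handler
// can `for await` over. Cleanup on disconnect is omitted for brevity.
const channels = new Map()

function publish(channel, message) {
  const subs = channels.get(channel)
  if (subs) {
    for (const push of subs) push(message)
  }
}

function subscribe(channel) {
  const queue = []
  let wake = null
  const push = (msg) => {
    queue.push(msg)
    if (wake) {
      wake()
      wake = null
    }
  }
  if (!channels.has(channel)) channels.set(channel, new Set())
  channels.get(channel).add(push)
  return (async function* () {
    while (true) {
      // Drain anything already queued, then sleep until the next publish
      while (queue.length > 0) yield queue.shift()
      await new Promise((resolve) => {
        wake = resolve
      })
    }
  })()
}
```

&lt;p&gt;Swap the internals for Redis pub/sub when you need fanout across processes; the SSE endpoints on top do not change.&lt;/p&gt;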

&lt;h2&gt;
  
  
  Pattern 4: AI Image Generation Progress Without Polling
&lt;/h2&gt;

&lt;p&gt;Image gen jobs take 8 to 40 seconds. The classic browser pattern is to poll every 2 seconds, which is wasteful and laggy. SSE is much cleaner.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;
&lt;span class="nx"&gt;app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;post&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;/imagine&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;c&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;prompt&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;c&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;req&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;json&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;jobId&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;crypto&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;randomUUID&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
  &lt;span class="nf"&gt;enqueueJob&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;jobId&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;prompt&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;c&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;json&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="nx"&gt;jobId&lt;/span&gt; &lt;span class="p"&gt;})&lt;/span&gt;
&lt;span class="p"&gt;})&lt;/span&gt;

&lt;span class="nx"&gt;app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;/imagine/:id/stream&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;c&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt;
  &lt;span class="nf"&gt;streamSSE&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;c&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;stream&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;sub&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;subscribe&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`job:&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;c&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;req&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;param&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;id&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="k"&gt;await &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;evt&lt;/span&gt; &lt;span class="k"&gt;of&lt;/span&gt; &lt;span class="nx"&gt;sub&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;stream&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;writeSSE&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
        &lt;span class="na"&gt;event&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;evt&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;phase&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;       &lt;span class="c1"&gt;// 'queued', 'progress', 'preview', 'done'&lt;/span&gt;
        &lt;span class="na"&gt;data&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;JSON&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;stringify&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;evt&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
      &lt;span class="p"&gt;})&lt;/span&gt;
      &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;evt&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;phase&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;done&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;break&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;})&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Phases I push: &lt;code&gt;queued&lt;/code&gt; with queue position, &lt;code&gt;progress&lt;/code&gt; every 5% with a percentage, &lt;code&gt;preview&lt;/code&gt; with a low-res blurhash thumbnail at 30%, &lt;code&gt;done&lt;/code&gt; with the final URL. The browser shows a progressive blur-to-sharp reveal that feels twice as fast as a polling spinner, even though the underlying generation time is identical.&lt;/p&gt;
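&lt;p&gt;On the browser side, each phase arrives as a named SSE event, so the client can register one listener per phase instead of switching inside &lt;code&gt;onmessage&lt;/code&gt;. A sketch (the handler shape and payload field names are my assumptions, not the production client):&lt;/p&gt;

```javascript
// Wire one listener per named phase on the job's SSE stream.
// EventSource dispatches by the `event:` field, so each phase gets its
// own handler. The `handlers` shape and payload fields are illustrative.
function watchImagineJob(jobId, handlers) {
  const source = new EventSource(`/imagine/${jobId}/stream`)
  for (const phase of ['queued', 'progress', 'preview', 'done']) {
    source.addEventListener(phase, (e) => {
      handlers[phase](JSON.parse(e.data))
      // The job is finished after 'done', so release the connection
      if (phase === 'done') source.close()
    })
  }
  return source
}
```

&lt;p&gt;Call it with your four UI callbacks (queue position, progress bar, blurhash preview, final image) and the stream drives the whole blur-to-sharp reveal.&lt;/p&gt;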

&lt;p&gt;Average connection time per job: 22 seconds. Average events received: 14. Average bytes per event: 180 (mostly preview thumbnails encoded as blurhash strings).&lt;/p&gt;

&lt;h2&gt;
  
  
  Pattern 5: Hot Reload And Dev Preview Events Without WebSocket Plumbing
&lt;/h2&gt;

&lt;p&gt;Vite uses WebSockets for HMR because it needs bidirectional flow (the client tells the server which modules to invalidate first). For my own lighter dev tools, the pure server-push pattern is enough.&lt;/p&gt;

&lt;p&gt;I built a preview server for the Lab blog drafts that watches the markdown folder, rebuilds on save, and tells every open preview tab to reload. Three lines of &lt;code&gt;chokidar&lt;/code&gt;, one SSE endpoint, one EventSource on the client.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;
&lt;span class="nx"&gt;chokidar&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;watch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;blog-drafts/&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;on&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;change&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;path&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nf"&gt;publish&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;reload&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;path&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;ts&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;Date&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;now&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;})&lt;/span&gt;
&lt;span class="p"&gt;})&lt;/span&gt;

&lt;span class="nx"&gt;app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;/dev/reload&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;c&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt;
  &lt;span class="nf"&gt;streamSSE&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;c&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;stream&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;sub&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;subscribe&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;reload&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="k"&gt;await &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;evt&lt;/span&gt; &lt;span class="k"&gt;of&lt;/span&gt; &lt;span class="nx"&gt;sub&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;stream&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;writeSSE&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;data&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;JSON&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;stringify&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;evt&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;})&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;})&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;And the client side:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;
&lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;EventSource&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;/dev/reload&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nx"&gt;onmessage&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;location&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;reload&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This replaced a 600-line WebSocket dev server I had been maintaining for two years. It does the same job in 12 lines. No reconnect logic, no ping/pong, no upgrade dance. If the dev server restarts, EventSource reconnects after its default retry interval (about 3 seconds) and the next file save triggers a reload. The Model Context Protocol world has the same shape: request, stream of events, done. I broke that down at &lt;a href="https://dev.to/blogs/lab/mcp-servers-are-how-claude-actually-talks-to-everything"&gt;MCP Servers Are How Claude Actually Talks to Everything&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Bottom Line
&lt;/h2&gt;

&lt;p&gt;If your AI UI is one-way streaming (tokens, progress events, preview pushes), use Server-Sent Events. The code is shorter, the auth is simpler, the reconnect is automatic, and HTTP/2 makes the connection cost negligible. WebSockets earn their place when the client genuinely talks back at high frequency, when frames are binary and large, or when 50ms latency matters more than developer time.&lt;/p&gt;

&lt;p&gt;I keep a running stack of these patterns in the &lt;a href="https://dev.to/pages/claude-blueprint"&gt;RAXXO Blueprint&lt;/a&gt;, the same playbook I use to ship Lab tools each week. The next stream you build, try the EventSource version first. If the patterns above do not cover it, then reach for WebSockets with a clear reason. Most of the time, you will not need to.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>productivity</category>
      <category>claudecode</category>
      <category>automation</category>
    </item>
    <item>
      <title>Claude Security Goes Public Beta: Repo Scanning, Vuln Explainer, Patch Guidance</title>
      <dc:creator>RAXXO Studios</dc:creator>
      <pubDate>Mon, 04 May 2026 06:14:29 +0000</pubDate>
      <link>https://forem.com/raxxostudios/claude-security-goes-public-beta-repo-scanning-vuln-explainer-patch-guidance-4fp1</link>
      <guid>https://forem.com/raxxostudios/claude-security-goes-public-beta-repo-scanning-vuln-explainer-patch-guidance-4fp1</guid>
      <description>&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Claude Security entered public beta in May 2026, scanning repos and explaining vulnerabilities for Claude Enterprise customers only&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Three core capabilities ship now: full repo scans, plain-English vulnerability explainer, and patch guidance with diff-ready suggestions&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Pricing sits inside Claude Enterprise (no standalone tier yet), so solo devs and small studios cannot buy it directly today&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Snyk, GitGuardian, and Dependabot still own the SMB market, but Claude Security pushes the bar on context quality and explanation depth&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;For solo founders: wait for the SMB tier, run free Dependabot now, and route critical CVE triage through Claude Code in the meantime&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Anthropic shipped Claude Security in public beta this week, alongside the May 2026 Claude Code 2.1.126 update wave. Repo scanning, vulnerability explanations that read like a senior engineer wrote them, and patch guidance that lands as a diff. One catch: it is Claude Enterprise only.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Claude Security actually does
&lt;/h2&gt;

&lt;p&gt;Three features ship in the public beta, and each one has a clear job.&lt;/p&gt;

&lt;p&gt;The first is full repo scanning. Point Claude Security at a GitHub or GitLab repo and it walks the dependency tree, the lockfiles, the workflow YAML, and the source itself. It flags known CVEs in your packages, secret leaks in commits, and misconfigurations in CI. So far this is what every other scanner does. Snyk does it. GitGuardian does it. Dependabot does a thinner version of it. The difference shows up in the next two features.&lt;/p&gt;

&lt;p&gt;The second is the vulnerability explainer. When Snyk flags a CVE, you get the CVE ID, a severity score, a one-line summary, and a link. That works if you already know the package, the attack surface, and what an SSRF actually means in your context. If you do not, you copy the CVE into a separate tab and read for 20 minutes. Claude Security writes the explanation inline. It tells you which file in your repo is vulnerable, which call path triggers it, what an attacker would need to exploit it, and whether your specific usage is even reachable. Reachability matters. Most CVE alerts are noise because the vulnerable function never gets called in real code paths. Claude Security shows you the path or tells you it could not find one.&lt;/p&gt;

&lt;p&gt;The third is patch guidance. After the explainer, you get a suggested fix. Not a "bump to version 4.2.1" line. An actual code diff against your file, with the reasoning underneath, and a note on whether the fix is breaking. For a one-person studio that ships every day, that turns a CVE alert from a 90-minute investigation into a 10-minute review.&lt;/p&gt;

&lt;p&gt;The combination is the product. Scan, explain, fix.&lt;/p&gt;

&lt;p&gt;There is one more thing worth flagging: Claude Security runs on the same agentic backbone as Claude Code, which means it does not just surface findings, it can operate on them. In the demos Anthropic ran, the tool opens a draft PR with the patch already applied, the explanation in the description, and the test it ran to confirm the fix did not break the build. That is the full closed loop. For a solo dev who already lives in PR review, the workflow shape is familiar. For a small studio without a security engineer, it removes the hardest part of a CVE response, which is figuring out whether the fix is safe to merge.&lt;/p&gt;

&lt;h2&gt;
  
  
  Public beta means Claude Enterprise only
&lt;/h2&gt;

&lt;p&gt;Here is the part that matters for anyone reading this on a solo or small-team budget.&lt;/p&gt;

&lt;p&gt;Claude Security is gated to Claude Enterprise customers right now. No standalone tier. No add-on for Pro, Team, or Max. The Anthropic announcement positions it as catching risks earlier for "small companies," but the smallest company that can buy it today is one with an Enterprise seat license. That is not a 50 EUR/month decision.&lt;/p&gt;

&lt;p&gt;Anthropic has done this before. Claude Code Ultraplan launched as Enterprise-first and trickled down to Team within a few months. Claude Connectors launched on Free and Pro because they were a consumer-surface play. Security tooling tends to start at the top of the funnel and work downward. The reason is simple: Enterprise security teams pay for context, and context is what an LLM-grade explainer actually delivers. The pricing reflects who values it most today.&lt;/p&gt;

&lt;p&gt;Expect a smaller-tier rollout. The realistic timeline based on past launches sits between three and six months. If you want the deeper context on the rollout pattern, see &lt;a href="https://dev.to/blogs/lab/claude-code-ultraplan-plan-in-the-cloud-run-anywhere"&gt;Claude Code Ultraplan: Plan in the Cloud, Run Anywhere&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  How it compares to Snyk, GitGuardian, and Dependabot
&lt;/h2&gt;

&lt;p&gt;The honest framing: Claude Security is not yet a replacement for the SMB-priced tools. It is a different shape of product. Here is the lay of the land.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Dependabot.&lt;/strong&gt; Free with GitHub. Bumps dependency versions when CVEs land. Zero context, zero explanation, just PRs that say "bump lodash from 4.17.20 to 4.17.21." Solid baseline. Every solo dev should have it on. Claude Security does not replace it for the auto-bump workflow, because Claude Security is human-in-the-loop by design.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Snyk.&lt;/strong&gt; Strong CVE database, decent IDE integration, paid tiers start around 25 EUR per developer per month. The explanations are template-driven and shallow. Reachability analysis exists but is conservative (lots of false positives). Claude Security beats it on explanation quality and context, loses on price, integrations, and language coverage breadth.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;GitGuardian.&lt;/strong&gt; Best-in-class secret detection. Catches API keys, tokens, and credentials in commits and history. Free tier covers solo work. Claude Security flags secrets too, but GitGuardian's database and rule set are years ahead. Keep GitGuardian for secrets, regardless of what else you adopt.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Claude Security.&lt;/strong&gt; Wins on explanation depth, reachability reasoning, and patch quality. Loses on price (Enterprise gate), maturity, and ecosystem integrations. The product is six days old in public beta. It will get better. The question is what to do until it does.&lt;/p&gt;

&lt;p&gt;If you want background on the broader Anthropic security push, &lt;a href="https://dev.to/blogs/lab/project-glasswing-anthropics-claude-mythos-cybersecurity-bet"&gt;Project Glasswing: Anthropic's Claude Mythos Cybersecurity Bet&lt;/a&gt; covers the wider context. Glasswing is the zero-day hunting program. Claude Security is the productized version pointed at customer code instead of the wild internet.&lt;/p&gt;

&lt;h2&gt;
  
  
  What solo devs and small studios should actually do
&lt;/h2&gt;

&lt;p&gt;Three moves, ranked by effort.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;One, turn on Dependabot today.&lt;/strong&gt; It is free, it is one toggle in GitHub, and it covers 60 percent of the value Claude Security delivers for dependency CVEs. If you do not have it on, that is the first 10-minute task this week.&lt;/p&gt;
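&lt;p&gt;That toggle (in the repo's security settings) enables the security-update PRs. If you also want scheduled version bumps, a small &lt;code&gt;.github/dependabot.yml&lt;/code&gt; covers it; here is a typical npm setup, adjust the ecosystem and interval to your stack:&lt;/p&gt;

```yaml
# .github/dependabot.yml: weekly version-update PRs for npm packages
version: 2
updates:
  - package-ecosystem: "npm"
    directory: "/"        # where package.json lives
    schedule:
      interval: "weekly"
```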

&lt;p&gt;&lt;strong&gt;Two, route critical CVE triage through Claude Code.&lt;/strong&gt; When Dependabot flags a high-severity CVE, paste the alert into Claude Code with the relevant file open. Ask for the same three things Claude Security delivers: explain the vuln, show the call path in this repo, suggest a patch. You are doing the orchestration manually, but you get 80 percent of the explainer-and-patch experience for the cost of a Pro or Max plan you probably already have. This is not a long-term workflow. It is a bridge.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Three, layer GitGuardian for secrets.&lt;/strong&gt; Free tier, ten minutes to set up, catches the worst class of leak. Most public repo incidents involve a leaked key, not a complex CVE chain. GitGuardian closes that gap.&lt;/p&gt;

&lt;p&gt;A practical note on the Claude Code triage pattern: keep a short prompt template saved. Something like "Given this CVE alert and this file, tell me if the vulnerable code path is reachable in our usage. If yes, suggest a minimal patch and call out any breaking changes." Paste the alert, paste the file, run. The output sits in your terminal in under a minute. That is roughly what the Claude Security explainer will do once it lands at SMB pricing, just done by hand. If you ship five or six security fixes per quarter, the time savings compared to reading raw CVE pages add up fast, and the skill of writing a tight triage prompt carries over to every other Claude Code workflow you build.&lt;/p&gt;

&lt;p&gt;When the SMB tier of Claude Security drops (and it will), the migration will be a one-day job. The explanation quality is worth waiting for. Until then, the stack above costs zero euros and covers most of the surface.&lt;/p&gt;

&lt;p&gt;For the wider toolkit story, &lt;a href="https://dev.to/blogs/lab/claudes-200-connectors-changed-how-i-use-ai"&gt;Claude's 200+ Connectors Changed How I Use AI&lt;/a&gt; shows how the integration layer has matured. Security is the next surface to absorb that same treatment. The pattern is consistent: Anthropic ships Enterprise-grade context first, productizes it second, opens it to solo budgets third.&lt;/p&gt;

&lt;h2&gt;
  
  
  Bottom Line
&lt;/h2&gt;

&lt;p&gt;Claude Security in public beta is a real upgrade in vulnerability tooling, gated behind Claude Enterprise pricing today. The explainer and patch features beat Snyk on context, beat Dependabot on depth, and complement GitGuardian on secrets. None of that helps a solo founder this week, because the product is not buyable at solo prices yet.&lt;/p&gt;

&lt;p&gt;The play for now: Dependabot on, GitGuardian on, Claude Code as your manual triage layer. When the SMB tier opens, swap the Claude Code workaround for Claude Security and keep the rest. The cost of waiting is low. The cost of paying Enterprise prices for a one-person studio is not.&lt;/p&gt;

&lt;p&gt;Want more weekly breakdowns of new AI dev tooling? Bookmark &lt;a href="https://dev.to/blogs/lab"&gt;the Lab&lt;/a&gt; and check back. Every shipped article goes deep on what changed, what it costs, and what it means for solo and small-team builders.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>productivity</category>
      <category>claudecode</category>
      <category>automation</category>
    </item>
    <item>
      <title>PostHog Error Tracking Killed My Sentry Bill</title>
      <dc:creator>RAXXO Studios</dc:creator>
      <pubDate>Sun, 03 May 2026 06:54:05 +0000</pubDate>
      <link>https://forem.com/raxxostudios/posthog-error-tracking-killed-my-sentry-bill-2eda</link>
      <guid>https://forem.com/raxxostudios/posthog-error-tracking-killed-my-sentry-bill-2eda</guid>
      <description>&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Sentry cost me 80 EUR/month for ~12k errors that PostHog now catches on the free tier&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;posthog-js already loaded on the storefront, so error tracking was a 4-line config change&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Source maps upload in ~3s through a Vite plugin, stack traces stay readable&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Tool count dropped from 2 to 1, and OpenTelemetry handles the gaps PostHog leaves&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I was paying 80 EUR/month to Sentry for a one-person studio shipping ~12k errors per month, while PostHog already ran in the same page for analytics and session replay. Once I noticed PostHog had quietly shipped a real Error Tracking product, the math was insulting. One evening later, Sentry was gone and my observability stack was a single SDK.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Sentry Bill That Stopped Making Sense
&lt;/h2&gt;

&lt;p&gt;I started with Sentry years ago because it was the obvious choice. Errors come in, they get grouped, you fix them, you move on. The Team plan at 80 EUR/month felt fine when I was on a Pro consulting contract. As a solo studio, every recurring line item gets re-evaluated, and Sentry kept losing the argument.&lt;/p&gt;

&lt;p&gt;Here is what I was actually paying for. About 12,000 errors per month across &lt;a href="https://shopify.pxf.io/5k5rj9" rel="noopener noreferrer"&gt;Shopify&lt;/a&gt; storefront JS, a couple of Vercel apps, and a few cron jobs. Most of those 12k were the same five issues, which Sentry of course grouped, but the event quota still ticked down. I was also paying a seat tax for a team of one. Performance monitoring sat unused because I never trusted the sampling.&lt;/p&gt;

&lt;p&gt;The real problem was duplication. PostHog was already on every page for product analytics and session replay. Two SDKs, two dashboards, two billing portals, two sets of data retention rules. When something broke at 23:00 on a Friday, I would jump between the Sentry issue view and the PostHog session replay to figure out what the user was doing when the error fired. Cross-referencing the same incident across two tools is the worst kind of busywork.&lt;/p&gt;

&lt;p&gt;I had been running PostHog Cloud on the generous free tier for a year. The tier covers 1 million events, 5,000 session recordings, and now error tracking events too. My actual usage was nowhere near the cap. So I was paying Sentry 960 EUR per year to do something a free tier could absorb without noticing.&lt;/p&gt;

&lt;p&gt;The trigger was a quota email. Sentry told me I was approaching my error limit because a deploy had introduced a noisy &lt;code&gt;null is not an object&lt;/code&gt; somewhere in the cart drawer. I fixed the bug in 20 minutes, then sat down and asked the obvious question: why am I still here. The answer was nostalgia and switching cost, both bad reasons to keep a recurring bill alive.&lt;/p&gt;

&lt;h2&gt;
  
  
  What PostHog Error Tracking Actually Does
&lt;/h2&gt;

&lt;p&gt;PostHog Error Tracking is not a wrapper. It catches uncaught exceptions, unhandled promise rejections, and manual &lt;code&gt;captureException&lt;/code&gt; calls, then groups them by fingerprint, stores stack traces, and ties each error to the same session replay and analytics events PostHog already has. That last part is the unfair advantage. When I open an issue, the session replay is right there, no UUID copy-paste required.&lt;/p&gt;

&lt;p&gt;The grouping is good enough. Not Sentry-good, I will admit that. Sentry has years of fingerprinting heuristics and they do squeeze duplicates better. PostHog groups by stack trace shape and message, which catches 90% of cases. For the other 10%, I add a manual fingerprint hint and move on.&lt;/p&gt;
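&lt;p&gt;The manual hint is one extra property on the capture call. As I read the current PostHog docs, &lt;code&gt;$exception_fingerprint&lt;/code&gt; is the grouping override; verify the property name against the docs before leaning on it:&lt;/p&gt;

```javascript
// Pin differently-shaped errors to one issue by overriding the grouping
// fingerprint. $exception_fingerprint is the override property as I read
// the PostHog error tracking docs; the value is any stable string.
function reportCartError(posthog, err) {
  posthog.captureException(err, {
    $exception_fingerprint: 'cart-drawer-null-deref',
  })
}
```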

&lt;p&gt;The init is small. Here is the relevant block from my storefront entry:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;posthog&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;posthog-js&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;

&lt;span class="nx"&gt;posthog&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;init&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;import&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;meta&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;VITE_POSTHOG_KEY&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="na"&gt;api_host&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;https://eu.i.posthog.com&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;capture_exceptions&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;capture_pageview&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;session_recording&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;maskAllInputs&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
&lt;span class="p"&gt;})&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;capture_exceptions: true&lt;/code&gt; is the whole feature flip. The same SDK that was already firing pageviews now also catches errors. I did not add a single byte to the bundle.&lt;/p&gt;

&lt;p&gt;Manual capture works the way you expect:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;
&lt;span class="k"&gt;try&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;checkoutMutation&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;cart&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;catch &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;err&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;posthog&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;captureException&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;err&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;cart_id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;cart&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;step&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;checkout_submit&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="p"&gt;})&lt;/span&gt;
  &lt;span class="k"&gt;throw&lt;/span&gt; &lt;span class="nx"&gt;err&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The second argument lands as searchable properties on the issue, so I can filter by &lt;code&gt;step = checkout_submit&lt;/code&gt; in the dashboard. Alerts route through the same PostHog notification system I already use for product metrics. One Slack channel, one alert format, one place to silence noise.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Migration Took One Evening
&lt;/h2&gt;

&lt;p&gt;I budgeted a weekend. It took three hours. The reason it was fast: posthog-js was already loaded on every surface I cared about, so the work was flipping a flag, wiring source maps, swapping function calls, and turning Sentry off.&lt;/p&gt;

&lt;p&gt;Step one was the config flip above. Deploy, wait an hour, watch errors flow in. They did, immediately. The first issue I caught was a Klaviyo embed throwing on a country code I had never seen.&lt;/p&gt;

&lt;p&gt;Step two was source maps. PostHog ships a Vite plugin that uploads on build:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;defineConfig&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;vite&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;sourcemapsPlugin&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;@posthog/vite-plugin&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="k"&gt;default&lt;/span&gt; &lt;span class="nf"&gt;defineConfig&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="na"&gt;build&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;sourcemap&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;hidden&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="na"&gt;plugins&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
    &lt;span class="nf"&gt;sourcemapsPlugin&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
      &lt;span class="na"&gt;project&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;raxxo-storefront&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;apiKey&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;POSTHOG_PERSONAL_API_KEY&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;host&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;https://eu.posthog.com&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;}),&lt;/span&gt;
  &lt;span class="p"&gt;],&lt;/span&gt;
&lt;span class="p"&gt;})&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;sourcemap: 'hidden'&lt;/code&gt; keeps the maps off the public CDN. The plugin uploads them to PostHog at the end of the build. Upload time on my project is around 3 seconds. Stack traces in the dashboard now point at real source lines, not minified &lt;code&gt;t.js:1:42031&lt;/code&gt; nonsense.&lt;/p&gt;

&lt;p&gt;Step three was a find-and-replace. I had 38 call sites using &lt;code&gt;Sentry.captureException&lt;/code&gt;. Most of them looked like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;
&lt;span class="c1"&gt;// before&lt;/span&gt;
&lt;span class="nx"&gt;Sentry&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;captureException&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;err&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;extra&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;userId&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="p"&gt;})&lt;/span&gt;

&lt;span class="c1"&gt;// after&lt;/span&gt;
&lt;span class="nx"&gt;posthog&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;captureException&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;err&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;user_id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;userId&lt;/span&gt; &lt;span class="p"&gt;})&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;A codemod would have been clean, but the sites were spread across three repos and a Shopify theme, so I did it by hand in 40 minutes. I left the Sentry SDK installed for one week as a safety net, comparing issue counts in both dashboards. They tracked within 4%. The 4% gap was Sentry deduping more aggressively, not PostHog missing events.&lt;/p&gt;
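&lt;p&gt;For what it is worth, the dominant shape is mechanical enough that even a crude regex transform covers it. A sketch of what I would have run at a higher call-site count (this is my invention, it only handles the one-property &lt;code&gt;extra&lt;/code&gt; shape, and past that a real codemod is the right tool):&lt;/p&gt;

```javascript
// Rewrites the common one-property Sentry call into the PostHog shape,
// converting the camelCase property name to snake_case along the way.
// Anything fancier than `{ extra: { x } }` is left untouched on purpose.
function migrateCall(source) {
  return source.replace(
    /Sentry\.captureException\((\w+),\s*\{\s*extra:\s*\{\s*(\w+)\s*\}\s*\}\)/g,
    (_match, errName, prop) =>
      `posthog.captureException(${errName}, { ${camelToSnake(prop)}: ${prop} })`
  );
}

function camelToSnake(name) {
  return name.replace(/([a-z0-9])([A-Z])/g, '$1_$2').toLowerCase();
}
```

&lt;p&gt;Run it over each file, eyeball the diff, and hand-fix whatever did not match. The untouched leftovers are exactly the call sites worth looking at anyway.&lt;/p&gt;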

&lt;p&gt;Step four was decommission. I uninstalled &lt;code&gt;@sentry/browser&lt;/code&gt; and &lt;code&gt;@sentry/vite-plugin&lt;/code&gt;, removed the init blocks, deleted the env vars from Vercel, cancelled the Sentry subscription, and exported a year of historical issues to a JSON file in cold storage. Bundle size dropped by 38KB gzipped. The 80 EUR/month line item went to 0.&lt;/p&gt;

&lt;p&gt;If you are running a similar consolidation, my &lt;a href="https://dev.to/blogs/lab/first-party-analytics-without-google-the-2026-stack"&gt;first-party analytics stack write-up&lt;/a&gt; covers the same one-tool philosophy applied to traffic data.&lt;/p&gt;

&lt;h2&gt;
  
  
  Where PostHog Loses, And How I Cover It
&lt;/h2&gt;

&lt;p&gt;I am not going to pretend this is a clean win on every axis. Sentry still has things PostHog does not, and I had to decide which of those things actually mattered to me.&lt;/p&gt;

&lt;p&gt;Release health is the big one. Sentry tracks crash-free session rates per release, regression detection across deploys, and a proper release timeline. PostHog has releases, but the release-health view is shallow. For a one-person studio shipping a few times a week, I do not need cohort-level crash analysis. If you are running a 50-engineer org with a weekly train, this is a real tradeoff.&lt;/p&gt;

&lt;p&gt;Advanced fingerprinting is the second gap. Sentry lets you write fingerprint rules to merge or split issue groups with surgical control. PostHog gives you a manual fingerprint hint and that is it. I have hit two cases in three months where I wanted Sentry-grade grouping. Both times I solved it by adding a custom error class with a stable &lt;code&gt;name&lt;/code&gt; property, which PostHog groups by cleanly.&lt;/p&gt;
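&lt;p&gt;The error-class trick is tiny. A hedged sketch of the shape (the class name here is my invention, not one of the two real cases):&lt;/p&gt;

```javascript
// An explicit, stable `name` survives minification (inferred class names
// do not, reliably), so PostHog groups every instance under one issue.
class CheckoutTimeoutError extends Error {
  constructor(message, properties = {}) {
    super(message);
    this.name = 'CheckoutTimeoutError'; // the stable grouping key
    this.properties = properties;       // extra context for captureException
  }
}
```

&lt;p&gt;Throwing &lt;code&gt;new CheckoutTimeoutError(...)&lt;/code&gt; instead of a bare &lt;code&gt;Error&lt;/code&gt; is the whole intervention; the grouping follows from the name.&lt;/p&gt;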

&lt;p&gt;Server-side performance traces are the third. PostHog does have backend SDKs, but distributed tracing across services is not its strength. I run OpenTelemetry into Vercel Logs and Grafana Cloud for backend tracing. That stack is free at my volume and gives me proper span timing, which I needed for a slow Shopify webhook handler last month.&lt;/p&gt;
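&lt;p&gt;The backend tracing side is standard OpenTelemetry wiring, nothing exotic. A config sketch, not my exact setup (the service name, env var names, and endpoint are placeholders):&lt;/p&gt;

```javascript
// Minimal NodeSDK init that ships spans to an OTLP/HTTP endpoint,
// e.g. a Grafana Cloud gateway. Runs once at process start.
const { NodeSDK } = require('@opentelemetry/sdk-node');
const { OTLPTraceExporter } = require('@opentelemetry/exporter-trace-otlp-http');

const sdk = new NodeSDK({
  serviceName: 'raxxo-backend', // placeholder
  traceExporter: new OTLPTraceExporter({
    url: process.env.OTLP_ENDPOINT,                        // placeholder env var
    headers: { Authorization: `Basic ${process.env.OTLP_AUTH}` }, // placeholder
  }),
});

sdk.start();
```

&lt;p&gt;That plus manual spans around the slow webhook handler was enough to find the bottleneck.&lt;/p&gt;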

&lt;p&gt;The bridge stack ends up looking like this. PostHog handles all browser errors, session replay, product analytics, and feature flags. OpenTelemetry plus Vercel Logs handles backend traces and structured logs. Cron job failures land in PostHog through a tiny wrapper that sends &lt;code&gt;posthog.captureException&lt;/code&gt; from the Node side. For LLM-specific error patterns I am building on top of the eval setup in &lt;a href="https://dev.to/blogs/lab/running-llm-evals-in-production-the-2026-guide"&gt;Running LLM Evals in Production&lt;/a&gt;, where errors and quality regressions live in the same place.&lt;/p&gt;
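&lt;p&gt;The cron wrapper is a few lines. A sketch of the shape, with &lt;code&gt;captureException&lt;/code&gt; injected so the same wrapper takes posthog-node in production and a stub in tests (the job name below is made up):&lt;/p&gt;

```javascript
// Wraps any async job: failures get reported with the job name attached,
// then re-thrown so the scheduler still sees a failed run.
function withErrorCapture(jobName, job, captureException) {
  return async (...args) => {
    try {
      return await job(...args);
    } catch (err) {
      captureException(err, { job: jobName, source: 'cron' });
      throw err;
    }
  };
}
```

&lt;p&gt;In production the third argument is &lt;code&gt;(err, props) =&amp;gt; posthog.captureException(err, props)&lt;/code&gt; from the Node SDK, so cron failures land next to the browser errors.&lt;/p&gt;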

&lt;p&gt;The honest summary: PostHog Error Tracking is 85% of Sentry at 0% of the cost when you already run PostHog. The missing 15% is solvable with a free tracing tier and one custom error class. For a solo studio, that math is not even close.&lt;/p&gt;

&lt;h2&gt;
  
  
  Bottom Line
&lt;/h2&gt;

&lt;p&gt;Solo studios should not pay for two observability tools. If PostHog is already loaded for analytics or session replay, error tracking is a config flag and a Vite plugin away. The migration cost me one evening and saved 960 EUR a year on a stack that now ships fewer bytes and fewer dashboards.&lt;/p&gt;

&lt;p&gt;I will say the obvious thing. If you have a real release-health need, or a team big enough to justify Sentry's grouping rules, stay on Sentry. If you are one person shipping a Shopify storefront and a couple of side apps, the second tool is dead weight. Cut it.&lt;/p&gt;

&lt;p&gt;You can see the rest of how I run a one-person AI studio on minimal infrastructure over at &lt;a href="https://dev.to/pages/studio"&gt;Studio&lt;/a&gt;, where I keep the running list of what is loaded into the stack and what got dropped. The pattern is always the same: one tool that does 85% of the job beats two tools that each do 100%, every single time the bill comes due.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>productivity</category>
      <category>claudecode</category>
      <category>automation</category>
    </item>
    <item>
      <title>Shopify Functions Replaced 8 Apps In One Saturday</title>
      <dc:creator>RAXXO Studios</dc:creator>
      <pubDate>Sun, 03 May 2026 06:48:05 +0000</pubDate>
      <link>https://forem.com/raxxostudios/shopify-functions-replaced-8-apps-in-one-saturday-2ke9</link>
      <guid>https://forem.com/raxxostudios/shopify-functions-replaced-8-apps-in-one-saturday-2ke9</guid>
      <description>&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Cancelled 8 paid apps in one Saturday and dropped 180 EUR/month off the app bill to zero recurring spend&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Shopify Functions ship as compiled wasm with a 256KB binary cap, 5MB sources, and sub-millisecond cold starts&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Ported a bundle builder, a BOGO discount, payment hiding by cart total, and geo-locked shipping using cart-transform, discount, and customization Functions&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Local &lt;code&gt;shopify app function run&lt;/code&gt; plus structured logs replaced four app-vendor dashboards I never wanted to log into&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I spent one Saturday porting eight apps to Shopify Functions. Monthly app spend went from 180 EUR to zero. The merchant gets a faster cart and I get one repo to maintain instead of eight admin panels.&lt;/p&gt;

&lt;h2&gt;
  
  
  The 8 apps I cancelled and what they were costing
&lt;/h2&gt;

&lt;p&gt;The store had eight apps stacked over two years. Each solved a real thing on the day it was installed. Together they were bleeding 180 EUR per month and adding three to four hundred milliseconds of cart logic that ran inside vendor webhooks.&lt;/p&gt;

&lt;p&gt;Here is what the eight did, generically, so I do not name vendors:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;A 19 EUR/month bundle-builder that swapped three line items into a single discounted parent.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;A 24 EUR/month BOGO app for "buy 2 get 1 free" on a single collection.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;A 14 EUR/month app that hid Cash on Delivery for orders above 200 EUR.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;A 19 EUR/month app that hid PayPal for B2B customers tagged &lt;code&gt;wholesale&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;A 29 EUR/month app that geo-restricted certain SKUs from shipping to two specific countries.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;A 19 EUR/month app that re-named shipping methods based on cart weight.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;A 29 EUR/month tiered-discount app that auto-applied 10/15/20 percent at 50/100/200 EUR cart totals.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;A 27 EUR/month gift-with-purchase app that injected a free SKU once a threshold hit.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;That is 180 EUR/month, 2160 EUR/year, for logic that lives inside the cart. Every one of those apps was doing the same dance: receive a webhook, run its own server-side logic, mutate the order or the checkout, write to a vendor database I cannot inspect. Six of the eight had overlapping settings panels, two of them silently disagreed about which discount took priority when both fired, and three asked for permissions I did not want a third party holding (&lt;code&gt;read_customers&lt;/code&gt;, &lt;code&gt;write_orders&lt;/code&gt;, &lt;code&gt;read_inventory&lt;/code&gt;).&lt;/p&gt;


&lt;p&gt;The latency was the part nobody talks about. Each external call adds 200 to 400 milliseconds. Two of them ran in series during checkout init, which I could see in the network panel as a visible pause before the express-pay buttons painted.&lt;/p&gt;

&lt;p&gt;The actual logic in those eight apps, when you write it down, is roughly 600 lines of TypeScript. Six hundred lines and 180 EUR/month versus six hundred lines and zero EUR/month, running inside Shopify's own infrastructure. The decision was not hard. The Saturday was about whether &lt;a href="https://shopify.pxf.io/5k5rj9" rel="noopener noreferrer"&gt;Shopify&lt;/a&gt; Functions could actually replace the eight surfaces, not whether I should bother trying.&lt;/p&gt;

&lt;h2&gt;
  
  
  The 4 Function types that map to 95 percent of cart apps
&lt;/h2&gt;

&lt;p&gt;Shopify Functions is a free feature on Shopify Plus and Advanced plans. It runs your code as compiled WebAssembly inside the platform, not on a server you rent. Four extension points cover almost every cart-side app on the App Store:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;cart-transform&lt;/strong&gt;: Merge multiple line items into one (bundles), split one line item into several (kits), or update line-item properties on the fly. The only function type that actually reshapes what is in the cart.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;discount&lt;/strong&gt;: Apply product, order, or shipping discounts based on cart contents. Replaces 90 percent of "automatic discount" apps. You return a list of discount targets and the platform applies them. Unlike the legacy Script Editor, multiple discount Functions can stack, and you control combine rules per discount.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;payment-customization&lt;/strong&gt;: Hide, re-order, or rename payment methods at checkout. This is what kills the "hide COD over 200 EUR" and "hide PayPal for tag X" apps in one shot.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;delivery-customization&lt;/strong&gt;: Hide, re-order, or rename delivery options. Same shape as payment-customization, different surface. Geo-restrict, weight-based renaming, B2B-only methods, all here.&lt;/p&gt;

&lt;p&gt;What Functions cannot do, on purpose: write to external databases, call third-party HTTP APIs, run on a schedule, modify orders post-checkout, or touch customer PII outside the input you ask for. They are pure functions of &lt;code&gt;(input) -&amp;gt; mutations&lt;/code&gt;. If the logic needs an outside lookup, you need an app or a webhook, not a Function.&lt;/p&gt;

&lt;p&gt;The mental model that finally clicked: Functions are a query against the cart that returns a list of mutations. You ask GraphQL for the fields you need, you return the smallest possible set of changes. That is the whole API surface. Once I stopped trying to do "fetch this product, then check that", and started thinking "what fields do I need declared in the input query, and what mutations do I return", the porting got fast.&lt;/p&gt;
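&lt;p&gt;Concretely, the input query for the BOGO port below needs nothing beyond quantity and collection membership. A hedged sketch of that &lt;code&gt;run.graphql&lt;/code&gt; (the collection id is a placeholder, the exact field names depend on the Functions API version, and the alias is what lets the JS read &lt;code&gt;merchandise.product.inCollection&lt;/code&gt;):&lt;/p&gt;

```graphql
query RunInput {
  cart {
    lines {
      quantity
      merchandise {
        ... on ProductVariant {
          id
          product {
            inCollection: inAnyCollection(
              ids: ["gid://shopify/Collection/0"]
            )
          }
        }
      }
    }
  }
}
```

&lt;p&gt;Whatever the query declares is the entire universe the Function can see. That constraint is the feature.&lt;/p&gt;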

&lt;h2&gt;
  
  
  The actual ports: code that replaced four of the apps
&lt;/h2&gt;

&lt;p&gt;The eight ports broke down into four distinct shapes. Here are three of them as real code.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Bundle builder (cart-transform, Rust)&lt;/strong&gt;, replacing the 19 EUR/month bundle app:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight rust"&gt;&lt;code&gt;
&lt;span class="k"&gt;use&lt;/span&gt; &lt;span class="nn"&gt;shopify_function&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nn"&gt;prelude&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;use&lt;/span&gt; &lt;span class="nn"&gt;shopify_function&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nb"&gt;Result&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="nd"&gt;#[shopify_function_target(query_path&lt;/span&gt; &lt;span class="nd"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"src/run.graphql"&lt;/span&gt;&lt;span class="nd"&gt;,&lt;/span&gt; &lt;span class="nd"&gt;schema_path&lt;/span&gt; &lt;span class="nd"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"schema.graphql"&lt;/span&gt;&lt;span class="nd"&gt;)]&lt;/span&gt;
&lt;span class="k"&gt;fn&lt;/span&gt; &lt;span class="nf"&gt;run&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;input&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nn"&gt;input&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="n"&gt;ResponseData&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;Result&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="n"&gt;bundle_parent&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;input&lt;/span&gt;&lt;span class="py"&gt;.cart_transform.bundle_parent_variant_id&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="n"&gt;children&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;Vec&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;_&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;input&lt;/span&gt;&lt;span class="py"&gt;.cart.lines&lt;/span&gt;&lt;span class="nf"&gt;.iter&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
        &lt;span class="nf"&gt;.filter&lt;/span&gt;&lt;span class="p"&gt;(|&lt;/span&gt;&lt;span class="n"&gt;l&lt;/span&gt;&lt;span class="p"&gt;|&lt;/span&gt; &lt;span class="n"&gt;l&lt;/span&gt;&lt;span class="py"&gt;.merchandise.bundle_child&lt;/span&gt;&lt;span class="nf"&gt;.is_some&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;
        &lt;span class="nf"&gt;.map&lt;/span&gt;&lt;span class="p"&gt;(|&lt;/span&gt;&lt;span class="n"&gt;l&lt;/span&gt;&lt;span class="p"&gt;|&lt;/span&gt; &lt;span class="nn"&gt;output&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="n"&gt;CartOperationMergeOperation&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="n"&gt;cart_line_id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;l&lt;/span&gt;&lt;span class="py"&gt;.id&lt;/span&gt;&lt;span class="nf"&gt;.clone&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt; &lt;span class="n"&gt;quantity&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;l&lt;/span&gt;&lt;span class="py"&gt;.quantity&lt;/span&gt; &lt;span class="p"&gt;})&lt;/span&gt;
        &lt;span class="nf"&gt;.collect&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;children&lt;/span&gt;&lt;span class="nf"&gt;.len&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt; &lt;span class="mi"&gt;3&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nf"&gt;Ok&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nn"&gt;output&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="n"&gt;FunctionRunResult&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="n"&gt;operations&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nd"&gt;vec!&lt;/span&gt;&lt;span class="p"&gt;[]&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="nf"&gt;Ok&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nn"&gt;output&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="n"&gt;FunctionRunResult&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="n"&gt;operations&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nd"&gt;vec!&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nn"&gt;output&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nn"&gt;CartOperation&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;Merge&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nn"&gt;output&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="n"&gt;MergeOperation&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="n"&gt;parent_variant_id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;bundle_parent&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;cart_lines&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;children&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="o"&gt;..&lt;/span&gt;&lt;span class="nn"&gt;Default&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;default&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;})]&lt;/span&gt; &lt;span class="p"&gt;})&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;BOGO 2+1 (discount, JS)&lt;/strong&gt;, replacing the 24 EUR/month BOGO app:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;DiscountApplicationStrategy&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;../generated/api&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;run&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;input&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;eligible&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;input&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;cart&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;lines&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;filter&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;l&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;l&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;merchandise&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;product&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;inCollection&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;totalQty&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;eligible&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;reduce&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nx"&gt;s&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;l&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;s&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;l&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;quantity&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;totalQty&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt; &lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;discounts&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[],&lt;/span&gt; &lt;span class="na"&gt;discountApplicationStrategy&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;DiscountApplicationStrategy&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;First&lt;/span&gt; &lt;span class="p"&gt;};&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;freeCount&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nb"&gt;Math&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;floor&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;totalQty&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt; &lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;discounts&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[{&lt;/span&gt; &lt;span class="na"&gt;targets&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;eligible&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;map&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;l&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;productVariant&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;l&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;merchandise&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;quantity&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;freeCount&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="p"&gt;})),&lt;/span&gt; &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;percentage&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;100.0&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="p"&gt;}],&lt;/span&gt; &lt;span class="na"&gt;discountApplicationStrategy&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;DiscountApplicationStrategy&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;First&lt;/span&gt; &lt;span 
class="p"&gt;};&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Hide COD over 200 EUR (payment-customization, JS)&lt;/strong&gt;, replacing the 14 EUR/month payment-hiding app:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;
&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;run&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;input&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;total&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;parseFloat&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;input&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;cart&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;cost&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;totalAmount&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;amount&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;total&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt; &lt;span class="mf"&gt;200.0&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;operations&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[]&lt;/span&gt; &lt;span class="p"&gt;};&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;cod&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;input&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;paymentMethods&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;find&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;m&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;m&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;toLowerCase&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;includes&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;cash on delivery&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;
  &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="nx"&gt;cod&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;operations&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[]&lt;/span&gt; &lt;span class="p"&gt;};&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;operations&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[{&lt;/span&gt; &lt;span class="na"&gt;hide&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;paymentMethodId&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;cod&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="p"&gt;}]&lt;/span&gt; &lt;span class="p"&gt;};&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The fourth port (geo-restrict shipping for two countries on three SKUs) was the same shape as the COD hide, swapping &lt;code&gt;paymentMethods&lt;/code&gt; for &lt;code&gt;deliveryOptions&lt;/code&gt; and reading &lt;code&gt;input.cart.deliveryGroups.deliveryAddress.countryCode&lt;/code&gt;. Each Function compiled to a wasm binary between 80 and 180 KB. The whole eight-app replacement landed in 540 lines of source plus four &lt;code&gt;run.graphql&lt;/code&gt; query files.&lt;/p&gt;
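&lt;p&gt;As a sketch of what that fourth port looks like in full, here is a hypothetical reconstruction; the country codes, SKU list, and exact field names are illustrative assumptions, not the shipped code:&lt;/p&gt;

```javascript
// Hypothetical sketch of the geo-restrict delivery Function.
// BLOCKED_COUNTRIES and RESTRICTED_SKUS are placeholders; the real
// values were not given in the article.
const BLOCKED_COUNTRIES = ["AA", "BB"];
const RESTRICTED_SKUS = ["SKU-1", "SKU-2", "SKU-3"];

// Exported as `run` in the real Function
function run(input) {
  const operations = [];
  for (const group of input.cart.deliveryGroups) {
    if (!BLOCKED_COUNTRIES.includes(group.deliveryAddress.countryCode)) continue;
    const restricted = group.cartLines.some(
      line => RESTRICTED_SKUS.includes(line.merchandise.sku)
    );
    if (!restricted) continue;
    // Hide every delivery option for the restricted group
    for (const option of group.deliveryOptions) {
      operations.push({ hide: { deliveryOptionHandle: option.handle } });
    }
  }
  return { operations };
}
```

&lt;p&gt;Same shape as the COD hide: bail out early, return an empty operations list when nothing applies, emit one hide per delivery option otherwise.&lt;/p&gt;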

&lt;h2&gt;
  
  
  Deploy, test, and the stuff vendor dashboards never showed me
&lt;/h2&gt;

&lt;p&gt;The local loop is the part that actually made one Saturday possible. &lt;code&gt;shopify app function run --input=fixtures/big-cart.json&lt;/code&gt; runs the Function against a JSON input on my machine in around 50 milliseconds, prints the mutations, and exits non-zero on schema mismatch. I wrote five fixtures per Function (empty cart, single item, threshold-minus-one, threshold-plus-one, edge case) and ran them on save with a file watcher. Eight apps, forty fixtures, all green before I touched the deploy.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;shopify app deploy&lt;/code&gt; pushes every Function in the app as one versioned bundle. Rollback is one CLI flag back to the previous version. No "contact support to revert" emails.&lt;/p&gt;

&lt;p&gt;In production, Shopify Functions emit structured logs to the Partner dashboard with input, output, fuel used, and execution time per run. I can grep by Function ID and timestamp. Four of the apps I cancelled had dashboards I had to log into separately to see whether their logic even fired, two of them charged extra for "advanced logging". Now I have one log surface for all eight surfaces.&lt;/p&gt;

&lt;p&gt;Two limits that bit me, so they will not bite you: the compiled wasm is capped at 256KB and a single Function run is metered in "instructions" (a fuel budget, roughly 11 million instructions per call). The 540-line port came nowhere near either limit, but I caught a regex-heavy first draft of the BOGO Function blowing the fuel budget on a 50-line cart. Replacing the regex with a plain &lt;code&gt;filter&lt;/code&gt; dropped the cost by 90 percent. Cold starts are sub-millisecond because the wasm is pre-loaded; I measured the same Function adding around 8 to 12 ms to checkout init versus the 200 to 400 ms the old apps cost. Shipping fast logic is part of why I keep recommending &lt;a href="https://shopify.pxf.io/5k5rj9" rel="noopener noreferrer"&gt;Shopify&lt;/a&gt; over hosted carts for solo merchants who want the speed without the platform tax.&lt;/p&gt;
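&lt;p&gt;The regex-to-filter swap looked roughly like this; the cart line shape and the &lt;code&gt;bogo&lt;/code&gt; tag are illustrative, not the shipped BOGO code:&lt;/p&gt;

```javascript
// Hypothetical before/after for the BOGO eligibility scan; the line
// shape and the "bogo" tag are illustrative, not the shipped code.
const cartLines = [
  { title: "Enamel Mug (BOGO)", quantity: 2 },
  { title: "Poster", quantity: 1 },
];

// First draft: a regex test per line, expensive under the
// Function's instruction (fuel) budget on a 50-line cart
const viaRegex = cartLines.filter(line => /\bbogo\b/i.test(line.title));

// Shipped version: a plain lowercase substring check, the same
// matches for a fraction of the instructions
const viaFilter = cartLines.filter(
  line => line.title.toLowerCase().includes("bogo")
);

console.log(viaRegex.length, viaFilter.length); // both match one line
```

&lt;p&gt;Both versions return the same matches; the second compiles to far fewer wasm instructions per cart line, which is what kept the Function inside the fuel budget.&lt;/p&gt;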

&lt;p&gt;For the deeper theme work that surrounds these Functions, I keep notes on &lt;a href="https://dev.to/blogs/lab/shopify-section-schema-patterns-editors-actually-love"&gt;Shopify section schema patterns editors actually love&lt;/a&gt; and how I pushed the storefront in &lt;a href="https://dev.to/blogs/lab/shopify-theme-performance-from-62-to-98-lighthouse-in-one-weekend"&gt;Shopify theme performance: from 62 to 98 Lighthouse in one weekend&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Bottom Line
&lt;/h2&gt;

&lt;p&gt;One Saturday, 540 lines of code, eight apps gone. The store now runs cart logic inside the platform, in compiled wasm, with 8 to 12 ms added per call and zero recurring app spend. The vendor dashboards are uninstalled. The webhook chain that used to pause checkout init is gone. The merchant has one repo I can read, version, and roll back.&lt;/p&gt;

&lt;p&gt;If you are running a small Shopify store and your monthly app bill is creeping past 100 EUR for cart-side logic, sit down with the App Store list and mark which apps are doing pure cart math. Most of them are. Those are the ones a Function can replace in an afternoon. The ones that need outside data or scheduled work are not, and that is fine, keep those.&lt;/p&gt;

&lt;p&gt;If you want the project shape I keep reusing for these Saturday ports, I share the full template and the fixtures pattern inside &lt;a href="https://dev.to/pages/studio"&gt;Studio&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>productivity</category>
      <category>claudecode</category>
      <category>automation</category>
    </item>
    <item>
      <title>Bun Shell Replaced Every Bash Script in My Studio</title>
      <dc:creator>RAXXO Studios</dc:creator>
      <pubDate>Sun, 03 May 2026 06:42:04 +0000</pubDate>
      <link>https://forem.com/raxxostudios/bun-shell-replaced-every-bash-script-in-my-studio-2g75</link>
      <guid>https://forem.com/raxxostudios/bun-shell-replaced-every-bash-script-in-my-studio-2g75</guid>
      <description>&lt;ul&gt;
&lt;li&gt;&lt;p&gt;I deleted 1,400 lines of bash and 23 scripts, replaced by 600 lines of Bun Shell TypeScript&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Bun Shell (&lt;code&gt;$`...`&lt;/code&gt;) gives typed args, automatic escaping, .text(), .json(), .lines() out of the box&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;jq, awk, and most GNU coreutils dependencies are gone, scripts now boot in around 30ms&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Bash still wins for interactive REPLs and very long pipelines, everything else moved to Bun&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I used to keep a folder called &lt;code&gt;scripts/&lt;/code&gt; in every project. It always rotted. Quoting bugs, missing &lt;code&gt;set -e&lt;/code&gt;, jq versions that did not match across machines. Last quarter I migrated 23 of them to Bun Shell and deleted 1,400 lines of bash in one afternoon.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why bash scripts always rot in a one-person studio
&lt;/h2&gt;

&lt;p&gt;I write a lot of glue code. Push a build, post the changelog, mirror images to a CDN, diff two env files, tail a log and grep for a regex. For a long time those lived as &lt;code&gt;.sh&lt;/code&gt; files. They worked on the day I wrote them and broke six weeks later for reasons I could never reproduce on the first try.&lt;/p&gt;

&lt;p&gt;Bash has four problems I kept hitting. Quoting is the obvious one. &lt;code&gt;"$file"&lt;/code&gt; versus &lt;code&gt;$file&lt;/code&gt; versus &lt;code&gt;'$file'&lt;/code&gt; matters and the difference is invisible until a path with a space shows up. &lt;code&gt;set -euo pipefail&lt;/code&gt; is mandatory and nobody remembers it on script number 12. Errors are swallowed silently, then you discover three weeks later that your nightly job has been writing empty files. And there are no types, so passing &lt;code&gt;--dry-run&lt;/code&gt; versus &lt;code&gt;-n&lt;/code&gt; versus &lt;code&gt;--dry&lt;/code&gt; is a coin flip every time.&lt;/p&gt;

&lt;p&gt;The portability story is worse. My laptop has GNU coreutils via Homebrew. The CI runner has BSD coreutils. &lt;code&gt;sed -i&lt;/code&gt; takes different arguments. &lt;code&gt;date&lt;/code&gt; formats are different. &lt;code&gt;jq&lt;/code&gt; was 1.6 on one box and 1.7 on another and one of my scripts depended on a 1.7-only flag without me realising it. Every cross-platform fix added another &lt;code&gt;if [[ "$OSTYPE" == "darwin"* ]]&lt;/code&gt; branch.&lt;/p&gt;

&lt;p&gt;The killer for a solo studio is that I cannot afford to debug a 60-line shell script at 23:00. I want a script to either run or fail loudly with a stack trace pointing at the exact line. Bash gives you neither.&lt;/p&gt;

&lt;p&gt;I tried Python for a while. The startup time of a venv plus &lt;code&gt;boto3&lt;/code&gt; plus &lt;code&gt;requests&lt;/code&gt; is around 800ms cold, which is fine until you call it inside a hot loop. I tried zx (Node + JS template literals). It worked but it dragged Node, npm, and a &lt;code&gt;node_modules&lt;/code&gt; directory into every script folder.&lt;/p&gt;

&lt;p&gt;Then &lt;a href="https://dev.to/blogs/lab/bun-1-2-replaced-node-in-every-new-raxxo-project"&gt;Bun 1.2 replaced Node in every new RAXXO project&lt;/a&gt; and I tried Bun Shell almost by accident.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Bun Shell mental model
&lt;/h2&gt;

&lt;p&gt;Bun ships a built-in shell as a tagged template. You import &lt;code&gt;$&lt;/code&gt; from &lt;code&gt;bun&lt;/code&gt; and write commands inline:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;$&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;bun&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;branch&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;$&lt;/span&gt;&lt;span class="s2"&gt;`git rev-parse --abbrev-ref HEAD`&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;text&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`Deploying &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;branch&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;trim&lt;/span&gt;&lt;span class="p"&gt;()}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Three things make this different from spawning bash. Arguments are escaped automatically when you interpolate them, so &lt;code&gt;$`mv ${userInput} /tmp`&lt;/code&gt; cannot be tricked into running &lt;code&gt;; rm -rf /&lt;/code&gt;. The return value is a chainable promise with &lt;code&gt;.text()&lt;/code&gt;, &lt;code&gt;.json()&lt;/code&gt;, &lt;code&gt;.lines()&lt;/code&gt;, &lt;code&gt;.blob()&lt;/code&gt;, and &lt;code&gt;.arrayBuffer()&lt;/code&gt;, plus modifiers like &lt;code&gt;.quiet()&lt;/code&gt;, &lt;code&gt;.nothrow()&lt;/code&gt;, &lt;code&gt;.cwd(path)&lt;/code&gt;, and &lt;code&gt;.env({...})&lt;/code&gt;. And it works the same way on macOS, Linux, and Windows because Bun ships its own minimal shell, not a wrapper around &lt;code&gt;/bin/sh&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;The handful of patterns I use most:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;
// Read command output as an array of lines
const tags = await $`git tag --list`.lines();

// Suppress output, do not throw on non-zero
const result = await $`grep ERROR app.log`.quiet().nothrow();
if (result.exitCode === 0) await alert(result.stdout.toString());

// Pipe between commands, still inside the template
await $`cat data.csv | sort -u &amp;gt; clean.csv`;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Cold startup on my M2 is around 30ms for a Bun script that does one shell call, versus around 250ms for the equivalent zx script and around 800ms for Python with &lt;code&gt;boto3&lt;/code&gt;. For a script that runs inside a &lt;code&gt;watch&lt;/code&gt; loop or a git hook, that gap matters.&lt;/p&gt;

&lt;p&gt;The thing I did not expect: I no longer need &lt;code&gt;jq&lt;/code&gt;, &lt;code&gt;awk&lt;/code&gt;, &lt;code&gt;sed&lt;/code&gt;, &lt;code&gt;cut&lt;/code&gt;, or &lt;code&gt;tr&lt;/code&gt; for most jobs. Bun ships &lt;code&gt;Bun.file()&lt;/code&gt;, &lt;code&gt;JSON.parse&lt;/code&gt;, regex, and &lt;code&gt;.replaceAll()&lt;/code&gt;. A bash one-liner like &lt;code&gt;cat events.json | jq '.[].user' | sort -u | wc -l&lt;/code&gt; becomes:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;
const events = await Bun.file("events.json").json();
const users = new Set(events.map(e =&amp;gt; e.user));
console.log(users.size);

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Same length, no external binaries, runs identically on the CI runner.&lt;/p&gt;

&lt;h2&gt;
  
  
  Five real scripts I migrated
&lt;/h2&gt;

&lt;p&gt;I went through the &lt;code&gt;scripts/&lt;/code&gt; folders across my active repos. 23 candidates, each under 200 lines. I kept 5 representative ones to show what the migration actually looks like.&lt;/p&gt;

&lt;p&gt;The first one was a deploy notifier. It ran after every Vercel deploy, pulled the commit message, posted to a &lt;a href="https://join.buffer.com/raxxo-studios" rel="noopener noreferrer"&gt;Buffer&lt;/a&gt; queue for X and LinkedIn, and dropped a card into a private status page. The bash version was 87 lines with three &lt;code&gt;curl&lt;/code&gt; calls, two &lt;code&gt;jq&lt;/code&gt; filters, and a heredoc for the JSON payload. The Bun version is 34 lines:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;
import { $ } from "bun";

const sha = (await $`git rev-parse HEAD`.text()).trim();
const msg = (await $`git log -1 --pretty=%s`.text()).trim();

await fetch("https://api.bufferapp.com/1/updates/create.json", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Authorization: `Bearer ${process.env.BUFFER_TOKEN}`
  },
  body: JSON.stringify({
    text: `Shipped ${sha.slice(0,7)}: ${msg}`,
    profile_ids: [process.env.BUFFER_PROFILE]
  })
});

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The second was an image batch resizer. Bash version called ImageMagick in a &lt;code&gt;find -exec&lt;/code&gt; loop and crashed on filenames with parentheses. The Bun version uses &lt;code&gt;Bun.Glob&lt;/code&gt; to walk the tree and shells out to &lt;code&gt;magick&lt;/code&gt; per file with proper escaping. 41 lines became 18.&lt;/p&gt;

&lt;p&gt;The third was a log tailer that watched a Caddy access log and posted to a webhook when a 5xx burst happened. Bash needed &lt;code&gt;tail -F&lt;/code&gt;, &lt;code&gt;awk&lt;/code&gt;, &lt;code&gt;grep&lt;/code&gt;, and a state file in &lt;code&gt;/tmp&lt;/code&gt;. Bun does it with &lt;code&gt;for await (const line of $`tail -F access.log`.lines())&lt;/code&gt; and an in-memory counter.&lt;/p&gt;
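&lt;p&gt;The counting core of that tailer can be sketched in isolation; the threshold, window, and log line shape here are illustrative assumptions, and the real script feeds each line in from the &lt;code&gt;tail -F&lt;/code&gt; stream:&lt;/p&gt;

```javascript
// Core of the 5xx burst detector, shown over an in-memory sample.
// WINDOW_MS, THRESHOLD, and the "METHOD STATUS" line shape are
// illustrative assumptions, not the article's actual values.
const WINDOW_MS = 60_000;
const THRESHOLD = 5;
const hits = [];

function onLine(line, now) {
  const status = parseInt(line.split(" ")[1], 10);
  if (status >= 500) hits.push(now);
  // drop hits that fell out of the sliding window
  while (hits.length > 0) {
    if (now - hits[0] > WINDOW_MS) hits.shift();
    else break;
  }
  return hits.length >= THRESHOLD; // true means: post to the webhook
}

let alerting = false;
const sample = ["GET 502", "GET 500", "GET 503", "GET 502", "GET 500"];
sample.forEach((line, i) => { alerting = onLine(line, i * 1000); });
console.log(alerting); // five 5xx lines landed inside one window
```

&lt;p&gt;No state file, no awk: the window lives in a plain array that dies with the process, which is exactly what you want for a tailer that restarts with the machine.&lt;/p&gt;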

&lt;p&gt;The fourth was an S3 mirror. The bash version hardcoded &lt;code&gt;aws-cli&lt;/code&gt; paths. The Bun version uses the &lt;a href="https://shopify.pxf.io/5k5rj9" rel="noopener noreferrer"&gt;Shopify&lt;/a&gt; Files API for theme assets and Bun's native S3 client (&lt;code&gt;Bun.s3.write()&lt;/code&gt;) for everything else. No external CLI, no credentials in argv.&lt;/p&gt;

&lt;p&gt;The fifth was an env diff between dev and prod, which previously needed &lt;code&gt;comm&lt;/code&gt;, &lt;code&gt;sort&lt;/code&gt;, and a temp directory. Now it is two &lt;code&gt;Bun.file().text()&lt;/code&gt; calls and a &lt;code&gt;Set&lt;/code&gt; comparison.&lt;/p&gt;
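&lt;p&gt;The heart of that diff is the &lt;code&gt;Set&lt;/code&gt; comparison. File contents are inlined here for illustration; the real script reads the two files with &lt;code&gt;Bun.file().text()&lt;/code&gt; instead:&lt;/p&gt;

```javascript
// The Set comparison at the heart of the env diff. The env contents
// below are made up for the example.
const devEnv = "API_URL=http://localhost:3000\nDEBUG=1\nDB_HOST=dev";
const prodEnv = "API_URL=https://example.com\nDB_HOST=prod";

// Parse "KEY=value" lines down to a Set of key names
const keyNames = text => new Set(
  text.split("\n")
    .filter(line => line.includes("="))
    .map(line => line.split("=")[0])
);

const devKeys = keyNames(devEnv);
const prodKeys = keyNames(prodEnv);
const missingInProd = [...devKeys].filter(k => !prodKeys.has(k));
console.log(missingInProd); // keys set in dev but absent in prod
```

&lt;p&gt;Swap the two arguments to get the reverse diff; no &lt;code&gt;comm&lt;/code&gt;, no &lt;code&gt;sort&lt;/code&gt;, no temp directory.&lt;/p&gt;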

&lt;p&gt;Total: 1,400 lines of bash, plus a &lt;code&gt;requirements.txt&lt;/code&gt; of &lt;code&gt;jq&lt;/code&gt;, &lt;code&gt;awk&lt;/code&gt;, &lt;code&gt;gnu-sed&lt;/code&gt;, &lt;code&gt;aws-cli&lt;/code&gt;, and &lt;code&gt;imagemagick&lt;/code&gt; shrunk to 600 lines of TypeScript and one dependency (Bun itself). The CI image dropped from 380MB to 95MB once I removed the system tools none of the scripts needed any more. Build time on cold cache went from 51 seconds to 18.&lt;/p&gt;

&lt;h2&gt;
  
  
  Where bash still wins
&lt;/h2&gt;

&lt;p&gt;I am not going to pretend Bun Shell replaces every shell job. Three places where I still reach for bash.&lt;/p&gt;

&lt;p&gt;Interactive REPL work. When I am poking at a server, ssh'd in, exploring file structures, the bash REPL with tab completion is faster than writing a script. Bun is a script runtime, not an interactive shell, and &lt;code&gt;bun repl&lt;/code&gt; is a JavaScript REPL, not a shell one. If I am exploring, I am still in zsh.&lt;/p&gt;

&lt;p&gt;Very long pipelines with niche tools. I have one pipeline that goes &lt;code&gt;dtrace | grep | awk | sort | uniq -c | sort -rn | head&lt;/code&gt; while profiling a slow process on macOS. Rewriting that as TypeScript adds nothing. Six binaries chained with pipes is bash's home turf. I keep that one as a &lt;code&gt;.sh&lt;/code&gt; file and call it from Bun when I need to.&lt;/p&gt;

&lt;p&gt;POSIX-only environments. A few CI runners and the occasional Docker base image do not have Bun installed and I cannot add it. For those I keep a tiny bootstrap script in bash that installs Bun, then hands off. The bootstrap is 12 lines and has not changed in six months.&lt;/p&gt;

&lt;p&gt;The workaround for the long-pipeline case is straightforward: Bun Shell can call any binary, including bash itself. If I really want a 200-character pipeline, I write &lt;code&gt;await $`bash -c "${pipeline}"`&lt;/code&gt; and Bun handles spawning, output capture, and error codes. I keep one or two of those per project, well commented, and treat them as the exception.&lt;/p&gt;

&lt;p&gt;I also keep my pre-commit hook in bash. It has been six lines for two years and rewriting it would be vanity.&lt;/p&gt;

&lt;p&gt;One nice side effect of the migration: my &lt;code&gt;scripts/&lt;/code&gt; folder now ships with the rest of the project's TypeScript. The same &lt;code&gt;tsconfig.json&lt;/code&gt; lints it. The same Biome rules format it. When I rename a function used in a script and a test file, the LSP catches both. Bash never gave me that.&lt;/p&gt;

&lt;h2&gt;
  
  
  Bottom Line
&lt;/h2&gt;

&lt;p&gt;I will not go back. Bun Shell hit the spot Python and zx never quite did: typed arguments, automatic escaping, no &lt;code&gt;node_modules&lt;/code&gt;, ~30ms cold start, and the same script working on macOS, Linux, and the CI runner without conditional branches.&lt;/p&gt;

&lt;p&gt;The migration was not a heroic rewrite. I did one script per coffee, kept the bash version in git history, and stopped when the remaining ones were either trivial pre-commit hooks or genuinely better as pipelines.&lt;/p&gt;

&lt;p&gt;If you are running a solo studio, I would start with your deploy or notifier script. Those are the ones that break at 23:00 on a Friday and cost the most to debug.&lt;/p&gt;

&lt;p&gt;For the rest of how Bun fits into my stack, &lt;a href="https://dev.to/blogs/lab/buns-test-runner-replaced-vitest-in-my-new-projects"&gt;Bun's test runner replaced Vitest in my new projects&lt;/a&gt; covers the testing side, and &lt;a href="https://dev.to/pages/studio"&gt;Studio&lt;/a&gt; shows the projects this stack actually ships.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>productivity</category>
      <category>claudecode</category>
      <category>automation</category>
    </item>
    <item>
      <title>Neon Database Branching Saved Me 200 EUR Every Month</title>
      <dc:creator>RAXXO Studios</dc:creator>
      <pubDate>Sun, 03 May 2026 06:34:37 +0000</pubDate>
      <link>https://forem.com/raxxostudios/neon-database-branching-saved-me-200-eur-every-month-15np</link>
      <guid>https://forem.com/raxxostudios/neon-database-branching-saved-me-200-eur-every-month-15np</guid>
      <description>&lt;ul&gt;
&lt;li&gt;&lt;p&gt;I run 14 Neon database branches per project for the price of one Postgres instance&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Neon copy-on-write branching clones a 12 GB database in under 1 second&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;A GitHub Action spins up a fresh branch per pull request and tears it down on merge&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;My Postgres bill dropped from 240 EUR to 40 EUR per month after switching to Neon Launch&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I used to pay 240 EUR a month for Postgres because every project had a dev DB, a staging DB, and a prod DB, and I had four projects. Then I switched to Neon, kept 14 environments running, and the bill dropped to 40 EUR. Here is exactly how that math works.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why I had 14 database environments anyway
&lt;/h2&gt;

&lt;p&gt;Solo dev, four products, one me. The math gets out of hand fast.&lt;/p&gt;

&lt;p&gt;Every product needs at least three Postgres environments. Local for me, staging for the GitHub preview deploy, prod for the customers. That is twelve. Add two long-running feature branches I keep around (a content migration and an experimental schema for the &lt;a href="https://dev.to/pages/studio"&gt;Studio&lt;/a&gt; backend), and I am at 14.&lt;/p&gt;

&lt;p&gt;On the old setup I had a Supabase instance per environment. Each one cost roughly 25 EUR a month at the smallest paid tier. I needed paid tiers because the free tier auto-pauses after a week of inactivity, and a paused dev DB at 9 PM on a Sunday is the worst kind of papercut. So 14 instances times ~17 EUR average came out to roughly 240 EUR.&lt;/p&gt;

&lt;p&gt;The painful part: 13 of those 14 databases sat idle 95% of the time. I would touch the dev DB twice a day, push to staging maybe four times a week, and prod hummed along on its own. I was paying for 14 always-on compute boxes to use about one and a half of them.&lt;/p&gt;

&lt;p&gt;Two more annoyances pushed me to look elsewhere. First, every time I wanted to test a destructive migration locally, I had to pg_dump prod, restore to dev, and pray the schema lined up. That was 20 minutes per test. Second, when I opened a pull request on the &lt;a href="https://shopify.pxf.io/5k5rj9" rel="noopener noreferrer"&gt;Shopify&lt;/a&gt; backend project, the preview deploy pointed at staging, which meant two PRs in flight would step on each other.&lt;/p&gt;

&lt;p&gt;I needed something where compute scales to zero when I am not looking, and where forking the database is free. Neon does both.&lt;/p&gt;

&lt;p&gt;The other thing I underestimated was how often I avoided risky work because the test loop was slow. If running a destructive migration on a clean copy takes 20 minutes, I do it twice a week. If it takes 1 second, I do it twenty times a day. That changes how I write migrations. I started writing smaller, more reversible ones, because the cost of trying was effectively zero.&lt;/p&gt;

&lt;h2&gt;
  
  
  How Neon branching actually works under the hood
&lt;/h2&gt;

&lt;p&gt;Neon splits Postgres into two pieces: storage and compute. Storage is a shared, content-addressable layer. Compute is a regular Postgres instance that reads and writes against that storage.&lt;/p&gt;

&lt;p&gt;A branch is just a pointer. When I create a branch from &lt;code&gt;main&lt;/code&gt;, Neon does not copy the data. It marks the current LSN (log sequence number), then any new writes on the branch go to fresh pages, and reads fall through to the parent's pages for anything unchanged. Copy-on-write, the same trick ZFS and BTRFS use for snapshots.&lt;/p&gt;

&lt;p&gt;The result: forking a 12 GB production database takes under 1 second. I have timed it. Here is the CLI call I run a few times a day:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;
neon branches create &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--project-id&lt;/span&gt; rough-sun-12345 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--name&lt;/span&gt; pr-247-fix-checkout &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--parent&lt;/span&gt; main

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That returns a connection string in maybe 800 ms. The branch is a full read-write copy of prod as of the moment I ran the command. I can drop tables, run migrations, seed garbage data, whatever, and &lt;code&gt;main&lt;/code&gt; does not notice.&lt;/p&gt;

&lt;p&gt;Two more details that matter for the cost story. Each branch gets its own compute endpoint, but compute autosuspends after 5 minutes of inactivity by default. A suspended branch costs zero compute. Storage is billed once for the parent plus only the diff each branch has written. My 14 branches use about 14.3 GB total because the diffs are tiny.&lt;/p&gt;

&lt;p&gt;When I actually hit a suspended branch, it cold-starts in around 300 ms. Annoying for a single curl, invisible inside any real app session. Worth it for a 200 EUR a month delta.&lt;/p&gt;

&lt;p&gt;One thing that surprised me: the storage layer is genuinely shared, not "shared-ish". I ran a test where I created a branch, dropped a 4 GB table, and checked the parent's storage. No change. The drop only updated the branch pointer. That same 4 GB table existed in 13 other branches at the same time and Neon stored it once. The branch-per-PR workflow only works because of that property, otherwise 14 branches of a 12 GB DB would cost as much as the 14 separate Postgres instances I started with.&lt;/p&gt;
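&lt;p&gt;The pointer-plus-diff behaviour can be modelled in a few lines. This is a toy illustration of copy-on-write, not how Neon's storage layer is actually implemented:&lt;/p&gt;

```javascript
// Toy model of copy-on-write branching: a branch stores only the
// pages it has written and falls through to its parent for the rest.
// Conceptual sketch only, not Neon's actual storage engine.
class Branch {
  constructor(parent) {
    this.parent = parent;
    this.pages = new Map(); // only this branch's diff lives here
  }
  write(pageId, data) { this.pages.set(pageId, data); }
  read(pageId) {
    if (this.pages.has(pageId)) return this.pages.get(pageId);
    return this.parent ? this.parent.read(pageId) : undefined;
  }
}

const main = new Branch(null);
main.write("p1", "orders table");

const pr = new Branch(main); // "forking" is just a pointer, no copy
pr.write("p1", "dropped");   // diverges without touching the parent

console.log(main.read("p1")); // still "orders table"
console.log(pr.read("p1"));   // "dropped", only in the branch's diff
```

&lt;p&gt;The dropped-4-GB-table test above is exactly this picture: the branch's pointer changed, the shared pages did not.&lt;/p&gt;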

&lt;h2&gt;
  
  
  The branch-per-pull-request workflow
&lt;/h2&gt;

&lt;p&gt;The cleanest workflow I built is one branch per GitHub PR, created by CI on PR open and destroyed on merge. The preview deploy points at it. Every reviewer gets an isolated database that mirrors prod schema.&lt;/p&gt;

&lt;p&gt;Here is the GitHub Action fragment that does the create step:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;
&lt;span class="na"&gt;on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;pull_request&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;types&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;opened&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;reopened&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
&lt;span class="na"&gt;jobs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;neon-branch&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;runs-on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ubuntu-latest&lt;/span&gt;
    &lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;neondatabase/create-branch-action@v5&lt;/span&gt;
        &lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;branch&lt;/span&gt;
        &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;project_id&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ vars.NEON_PROJECT_ID }}&lt;/span&gt;
          &lt;span class="na"&gt;branch_name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;pr-${{ github.event.number }}&lt;/span&gt;
          &lt;span class="na"&gt;api_key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ secrets.NEON_API_KEY }}&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Push DB URL to Vercel&lt;/span&gt;
        &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
          &lt;span class="s"&gt;vercel env add DATABASE_URL preview \&lt;/span&gt;
            &lt;span class="s"&gt;--token ${{ secrets.VERCEL_TOKEN }} \&lt;/span&gt;
            &lt;span class="s"&gt;--git-branch ${{ github.head_ref }} \&lt;/span&gt;
            &lt;span class="s"&gt;&amp;lt;&amp;lt;&amp;lt; "${{ steps.branch.outputs.db_url }}"&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;A matching workflow on PR close runs &lt;code&gt;neondatabase/delete-branch-action&lt;/code&gt;. The whole loop costs nothing extra because compute autosuspends within 5 minutes of the preview deploy going quiet.&lt;/p&gt;

&lt;p&gt;For local dev I keep one personal branch per machine, and I switch to it with a tiny env injection block in my shell:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;DATABASE_URL&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;neon connection-string &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--project-id&lt;/span&gt; rough-sun-12345 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--branch-name&lt;/span&gt; local-acme-ltd-fixtures&lt;span class="si"&gt;)&lt;/span&gt;
bun run dev

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The branch &lt;code&gt;local-acme-ltd-fixtures&lt;/code&gt; has my seed data for an Acme Ltd test customer. If I want a fresh copy of prod, I delete that branch and recreate it from &lt;code&gt;main&lt;/code&gt;. 1 second, zero ceremony. Compare that to the old pg_dump-and-restore dance that ate 20 minutes every morning I wanted clean data.&lt;/p&gt;

&lt;p&gt;This is also the workflow I documented in &lt;a href="https://dev.to/blogs/lab/the-5-postgres-extensions-every-shopify-backend-needs"&gt;The 5 Postgres Extensions Every Shopify Backend Needs&lt;/a&gt;, because pg_stat_statements and pgvector both need to be enabled per branch.&lt;/p&gt;

&lt;p&gt;One useful side effect: every PR review now happens against real-shaped data. I seed each PR branch from prod, scrub the personal info with a single SQL script, and reviewers can poke at the preview deploy with realistic counts and edge cases. Bugs that only show up at 50,000 rows actually show up. Bugs that only happen with three special-character customer names also show up. That alone caught two bugs last month that would have shipped to prod under my old "test against an empty seeded DB" workflow.&lt;/p&gt;

&lt;h2&gt;
  
  
  Cost math, the gotchas, and what broke
&lt;/h2&gt;

&lt;p&gt;The Neon Launch plan sits at 50 EUR per month and includes 300 compute hours, 10 GB storage, and unlimited branches. I never hit the branch limit because there is none on Launch. I do flirt with the compute hour cap when I forget to close a long-running psql session against a branch (autosuspend does not kick in while a connection is active).&lt;/p&gt;

&lt;p&gt;Real numbers from my last invoice:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Compute: 247 hours used, under the 300 included, so 0 EUR (the overage rate would have been 0.16 EUR per hour)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Storage: 14.3 GB total, 4.3 GB over the included 10 = 1.50 EUR&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Plan base: 50 EUR&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Total: 51.50 EUR for one project&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Wait, I said 40 EUR. The trick is I consolidated three of my four products into one Neon project, each as a separate database within the project. A branch copies every database in the project, so each product still gets isolated data on its own branches without my paying for four projects. The fourth product runs on the free tier because it is the &lt;a href="https://dev.to/pages/claude-blueprint"&gt;Claude Blueprint&lt;/a&gt; demo data and barely sees traffic.&lt;/p&gt;

&lt;p&gt;Five things broke along the way.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Cold starts on tiny endpoints.&lt;/strong&gt; A 300 ms cold start is fine inside a user request, but painful for a serverless cron firing every 30 seconds against an endpoint that suspends between runs. I switched those crons to a 5-minute cadence that lines up with the autosuspend window, so the endpoint is still warm when the next run lands and autosuspend never gets a chance to bite mid-burst.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Connection pooler quirks.&lt;/strong&gt; Neon's PgBouncer-flavored pooler does not support session-level features like prepared statements by default. I had to swap to the unpooled endpoint for a job runner that uses LISTEN/NOTIFY. Worth knowing before you migrate. I covered the same gotcha pattern in &lt;a href="https://dev.to/blogs/lab/the-7-postgres-indexes-that-took-my-api-from-400ms-to-40ms"&gt;The 7 Postgres Indexes That Took My API From 400ms to 40ms&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Branch sprawl.&lt;/strong&gt; Without a cleanup action, I ended up with 60+ stale PR branches inside a month. The delete-on-merge workflow fixes that, plus a weekly cron that nukes anything older than 14 days.&lt;/p&gt;
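&lt;p&gt;The weekly cleanup is mostly a filter plus a delete loop. Here is a minimal sketch of the selection step; the protected branch names and the Neon v2 API wiring in the comments are illustrative assumptions, not my exact setup:&lt;/p&gt;

```typescript
// Shape of a branch as returned by Neon's v2 API (assumption: the
// list endpoint returns ISO-8601 created_at timestamps).
type Branch = { id: string; name: string; created_at: string };

// Branches that must never be auto-deleted (names are hypothetical).
const PROTECTED = new Set(['main', 'content-migration', 'experimental-schema']);

// Pure selection step: anything not protected and older than maxAgeDays.
function staleBranches(branches: Branch[], now: Date, maxAgeDays = 14): Branch[] {
  const cutoff = now.getTime() - maxAgeDays * 24 * 60 * 60 * 1000;
  return branches.filter(
    (b) => !PROTECTED.has(b.name) && Date.parse(b.created_at) < cutoff
  );
}

// Wiring sketch (untested; endpoint paths per the Neon API docs):
// const res = await fetch(
//   `https://console.neon.tech/api/v2/projects/${PROJECT}/branches`,
//   { headers: { Authorization: `Bearer ${process.env.NEON_API_KEY}` } }
// );
// const { branches } = await res.json();
// for (const b of staleBranches(branches, new Date())) {
//   // DELETE /projects/{project_id}/branches/{branch_id}
// }
```

The pure function is the part worth testing; the fetch calls are just plumbing around it.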

&lt;p&gt;&lt;strong&gt;Schema drift between long-lived branches.&lt;/strong&gt; Two of my 14 branches are not PR-scoped, they are the content migration and the experimental schema I mentioned earlier. After three weeks of parallel work, those branches had drifted from &lt;code&gt;main&lt;/code&gt; enough that merging back was painful. The fix was a Friday ritual: rebase each long-lived branch on top of fresh &lt;code&gt;main&lt;/code&gt;, run the test suite against it, fix what broke. 30 minutes a week, no surprises at merge time.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Region pinning.&lt;/strong&gt; I run prod in eu-central-1. The first time CI created a branch it defaulted to us-east-2 and added 90 ms to every preview deploy round trip. Pin the region in the create call.&lt;/p&gt;

&lt;h2&gt;
  
  
  Bottom Line
&lt;/h2&gt;

&lt;p&gt;I went from 240 EUR a month for 14 always-on Postgres instances to 40 EUR a month for the same 14 environments, plus instant clones of prod whenever I want them. The savings paid for my &lt;a href="https://shopify.pxf.io/5k5rj9" rel="noopener noreferrer"&gt;Shopify&lt;/a&gt; plan with room to spare, but the bigger win is the workflow change. Every PR gets a real database. Every destructive migration gets a real test against real data in 1 second instead of 20 minutes.&lt;/p&gt;

&lt;p&gt;If you are still running one always-on Postgres per environment as a solo dev or tiny team, the math almost always favors a switch. Start with one project, port a single product, and watch what your dev velocity does when forking a fresh DB is free. The first time a teammate (or future me) opens a PR and sees a green preview deploy with isolated data, the migration pays for itself.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>productivity</category>
      <category>claudecode</category>
      <category>automation</category>
    </item>
    <item>
      <title>5 Vercel Edge Config Patterns I Use For Shopify A/B Tests</title>
      <dc:creator>RAXXO Studios</dc:creator>
      <pubDate>Sun, 03 May 2026 06:28:44 +0000</pubDate>
      <link>https://forem.com/raxxostudios/5-vercel-edge-config-patterns-i-use-for-shopify-ab-tests-4c16</link>
      <guid>https://forem.com/raxxostudios/5-vercel-edge-config-patterns-i-use-for-shopify-ab-tests-4c16</guid>
      <description>&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Edge Config reads in ~15ms at the edge, no cold start, no extra request to Shopify&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;I cut a 50 EUR/month LaunchDarkly bill to 0 EUR on Vercel Pro, 4 storefronts share one config&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Five patterns covered: hero copy split, price experiment, geo free-shipping flag, kill-switch, multi-arm bandit&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Each flag is a JSON key, flipped via REST API, picked up by middleware.ts before the page renders&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I run four Shopify storefronts off the same Vercel Pro account. Last quarter I ripped out a feature-flag SaaS and replaced it with Vercel Edge Config. The bill went from 50 EUR a month to 0 EUR, and my A/B reads got faster.&lt;/p&gt;

&lt;p&gt;Here are the five patterns I actually use, in production, on real traffic.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Edge Config (And Not KV, Postgres, Or A SaaS Flag Tool)
&lt;/h2&gt;

&lt;p&gt;I tried three things before landing on Edge Config.&lt;/p&gt;

&lt;p&gt;First was Vercel KV (now Upstash Redis). It works, but every read is a network call to the KV region. From a European edge node hitting an EU KV instance I was seeing 25-40ms per read. Multiply that by three flags on a single page render and you have just added 100ms of TTFB for no good reason. KV is great for session state. It is the wrong tool for "is this hero variant on or off".&lt;/p&gt;

&lt;p&gt;Second was Postgres via Neon. Same problem, worse numbers. Adding pg pool warmup just to read a boolean felt insane. Plus I was paying for compute hours to answer "show banner: yes/no".&lt;/p&gt;

&lt;p&gt;Third was &lt;a href="https://launchdarkly.com" rel="noopener noreferrer"&gt;LaunchDarkly&lt;/a&gt;. It is a proper product, but the entry plan ran 50 EUR/month for the seat I needed, and the SDK adds bundle weight to every storefront. For a solo studio shipping experiments on five-figure-monthly traffic, the math did not work.&lt;/p&gt;

&lt;p&gt;Edge Config solves the specific problem I had. It is a tiny read-only JSON blob (8KB on Hobby, 64KB on Pro) that Vercel replicates to every edge node. Reads from middleware run in ~15ms because the data is already sitting next to the function. Writes go through a REST API and propagate globally in a few seconds.&lt;/p&gt;

&lt;p&gt;It is included on Pro at 0 EUR extra. I share one Edge Config across all four storefronts by referencing the same &lt;code&gt;EDGE_CONFIG&lt;/code&gt; connection string. One flip, four sites updated.&lt;/p&gt;

&lt;p&gt;The mental model that finally clicked: Edge Config is for things you read on every request and change rarely. Feature flags. A/B variants. Kill-switches. Banner copy. Country lists. It is not a database. It is the thing you reach for when you want a config file you can edit without redeploying.&lt;/p&gt;

&lt;p&gt;If you are building on Shopify Hydrogen or a custom Next.js storefront talking to the &lt;a href="https://shopify.pxf.io/5k5rj9" rel="noopener noreferrer"&gt;Shopify&lt;/a&gt; Storefront API, this slots in cleanly. Middleware reads the flag. The page reads the variant. Done.&lt;/p&gt;

&lt;h2&gt;
  
  
  Pattern 1 + 2: Hero Copy Split And Price Display Test
&lt;/h2&gt;

&lt;p&gt;Pattern one is the boring-but-useful one: hero banner copy split. I want 50% of visitors to see headline A and 50% to see headline B, and I want to flip which copy wins live without a deploy.&lt;/p&gt;

&lt;p&gt;Edge Config holds it as one JSON object:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"hero_test"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"active"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"variants"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"bold_promise"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"soft_curiosity"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"split"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;50&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;50&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;

&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In &lt;code&gt;middleware.ts&lt;/code&gt; I read it once, pick a variant deterministically off the visitor cookie, and stamp the choice into a header the page route reads.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="kd"&gt;get&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;@vercel/edge-config&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;NextResponse&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;next/server&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;middleware&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;req&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;test&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;hero_test&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="nx"&gt;test&lt;/span&gt;&lt;span class="p"&gt;?.&lt;/span&gt;&lt;span class="nx"&gt;active&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;NextResponse&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;next&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;id&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;req&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;cookies&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;visitor_id&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)?.&lt;/span&gt;&lt;span class="nx"&gt;value&lt;/span&gt; &lt;span class="o"&gt;??&lt;/span&gt; &lt;span class="nx"&gt;crypto&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;randomUUID&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;variant&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;pickVariant&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;test&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;variants&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;test&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;split&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;res&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;NextResponse&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;next&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
  &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;headers&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;set&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;x-hero-variant&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;variant&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;cookies&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;set&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;visitor_id&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;maxAge&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;60&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="mi"&gt;60&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="mi"&gt;24&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="mi"&gt;30&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
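&lt;p&gt;&lt;code&gt;pickVariant&lt;/code&gt; is not shown above. A minimal sketch of how the deterministic pick can work; FNV-1a is one reasonable hash choice here, not a requirement:&lt;/p&gt;

```typescript
// FNV-1a hash of the visitor id, reduced to a stable bucket in [0, 100).
function bucket(id: string): number {
  let h = 0x811c9dc5;
  for (let i = 0; i < id.length; i++) {
    h ^= id.charCodeAt(i);
    h = Math.imul(h, 0x01000193);
  }
  return (h >>> 0) % 100;
}

// Deterministic assignment: the same visitor id always lands in the
// same bucket, so variants never flicker between page loads.
function pickVariant(id: string, variants: string[], split: number[]): string {
  const b = bucket(id);
  let cumulative = 0;
  for (let i = 0; i < variants.length; i++) {
    cumulative += split[i];
    if (b < cumulative) return variants[i];
  }
  return variants[variants.length - 1]; // guard: splits that sum below 100
}
```

Because the bucket is derived from the cookie value, no assignment state needs to be stored anywhere.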



&lt;p&gt;The whole read is ~15ms. No heavyweight SDK, no initialization. Just &lt;code&gt;get()&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Pattern two is the same shape with higher stakes: price display. On one storefront I tested anchoring (showing a struck-through "was 49 EUR" next to "now 33 EUR") versus a clean single price.&lt;/p&gt;

&lt;p&gt;The variant gate sat in Edge Config:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"price_anchor"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"active"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"split"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;70&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;30&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;

&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Server component reads the header, branches the JSX, ships the variant. Both arms point at the same Shopify product (no risk of price drift between arms because the actual checkout price is the same, only the display differs). I tracked clicks to PDP and add-to-cart events into Shopify customer events.&lt;/p&gt;
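&lt;p&gt;The JSX branch reduces to a tiny pure function. A sketch of the display logic, with illustrative names; the actual checkout price never changes, only what is rendered:&lt;/p&gt;

```typescript
// The anchored arm shows a struck-through compare-at price next to the
// live price; the control arm shows the clean price only.
type PriceProps = { compareAt: string | null; price: string };

function priceDisplay(variant: string, priceEur: number, compareAtEur: number): PriceProps {
  if (variant === 'anchor') {
    return { compareAt: `${compareAtEur} EUR`, price: `${priceEur} EUR` };
  }
  return { compareAt: null, price: `${priceEur} EUR` };
}
```

The server component calls this with the value of the `x-hero-variant`-style header and spreads the result into the markup.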

&lt;p&gt;The 30% arm with the anchor lifted add-to-cart by 11% over two weeks. I flipped the split to 100/0 from a curl call without redeploying. That is the real win: the speed of iteration, not any specific lift number.&lt;/p&gt;

&lt;h2&gt;
  
  
  Pattern 3 + 4: Country-Aware Free Shipping And Emergency Kill-Switch
&lt;/h2&gt;

&lt;p&gt;Pattern three is geo-aware free shipping. Vercel middleware exposes &lt;code&gt;request.geo.country&lt;/code&gt;. I keep a country allowlist in Edge Config and toggle it during campaigns.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"free_shipping_countries"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"DE"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"AT"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"CH"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"NL"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"free_shipping_threshold_eur"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;50&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;

&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Middleware reads both keys, attaches them to a header, the cart UI reads the header and renders the right banner ("Free shipping in DE, AT, CH, NL on orders over 50 EUR" or nothing).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;allow&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;free_shipping_countries&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;country&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;req&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;geo&lt;/span&gt;&lt;span class="p"&gt;?.&lt;/span&gt;&lt;span class="nx"&gt;country&lt;/span&gt; &lt;span class="o"&gt;??&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;XX&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;allow&lt;/span&gt;&lt;span class="p"&gt;?.&lt;/span&gt;&lt;span class="nf"&gt;includes&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;country&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;headers&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;set&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;x-free-ship&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;1&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When a campaign ends I remove a country from the array. Live in seconds.&lt;/p&gt;

&lt;p&gt;Pattern four is the one I sleep better with: a global kill-switch. Every experimental feature reads a master flag first.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"kill_switch"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"experiments"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;

&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If &lt;code&gt;experiments&lt;/code&gt; is &lt;code&gt;false&lt;/code&gt;, middleware short-circuits every A/B branch and serves the control. I have flipped this twice. Once when a third-party script tanked LCP on the test arm and once when a cart bug looked like it might be variant-specific. From spotting the issue to "everything back to control on all four storefronts" was under 30 seconds. I did not have to redeploy anything, I did not have to log into a dashboard, I curl'd a write to the Edge Config REST API.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;
curl &lt;span class="nt"&gt;-X&lt;/span&gt; PATCH &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="s2"&gt;"https://api.vercel.com/v1/edge-config/&lt;/span&gt;&lt;span class="nv"&gt;$ID&lt;/span&gt;&lt;span class="s2"&gt;/items?teamId=&lt;/span&gt;&lt;span class="nv"&gt;$TEAM&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-H&lt;/span&gt; &lt;span class="s2"&gt;"Authorization: Bearer &lt;/span&gt;&lt;span class="nv"&gt;$VERCEL_TOKEN&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-H&lt;/span&gt; &lt;span class="s2"&gt;"Content-Type: application/json"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="s1"&gt;'{"items":[{"operation":"update","key":"kill_switch","value":{"experiments":false}}]}'&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I keep that as a one-liner alias. Killing experiments in 4 storefronts at once is a single command.&lt;/p&gt;

&lt;h2&gt;
  
  
  Pattern 5: Multi-Arm Bandit Rollout Via Edge Middleware
&lt;/h2&gt;

&lt;p&gt;Pattern five is where I get a little fancy. A standard 50/50 A/B test wastes traffic on a losing arm once you have signal. A multi-arm bandit shifts traffic toward the winner as data comes in.&lt;/p&gt;

&lt;p&gt;I run a lightweight Thompson sampling setup. The bandit state lives in Edge Config and gets updated by a Vercel cron every 30 minutes.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"checkout_cta_bandit"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"arms"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"buy_now"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"trials"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;4120&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"wins"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;372&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"get_yours"&lt;/span&gt;&lt;span class="p"&gt;:{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"trials"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;4080&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"wins"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;401&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"claim"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"trials"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;3990&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"wins"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;318&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;

&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Middleware does the assignment. It reads the arm stats, samples each arm from a Beta distribution, picks the highest, stamps a header. The page renders the matching CTA copy.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;bandit&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;checkout_cta_bandit&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;pick&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;sampleThompson&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;bandit&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;arms&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;headers&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;set&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;x-cta-arm&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;pick&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
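&lt;p&gt;&lt;code&gt;sampleThompson&lt;/code&gt; is a few lines once you have a Beta sampler. A sketch using the Marsaglia-Tsang gamma method; any Beta sampler works, this one is just dependency-free:&lt;/p&gt;

```typescript
// Box-Muller standard normal draw.
function randn(): number {
  let u = 0, v = 0;
  while (u === 0) u = Math.random();
  while (v === 0) v = Math.random();
  return Math.sqrt(-2 * Math.log(u)) * Math.cos(2 * Math.PI * v);
}

// Marsaglia-Tsang gamma sampler (valid for shape >= 1, which holds
// here because wins + 1 and losses + 1 are always at least 1).
function randGamma(shape: number): number {
  const d = shape - 1 / 3;
  const c = 1 / Math.sqrt(9 * d);
  for (;;) {
    const x = randn();
    const v = Math.pow(1 + c * x, 3);
    if (v <= 0) continue;
    const u = Math.random();
    if (Math.log(u) < 0.5 * x * x + d - d * v + d * Math.log(v)) return d * v;
  }
}

// Beta(a, b) as a ratio of two gamma draws.
function randBeta(a: number, b: number): number {
  const x = randGamma(a);
  return x / (x + randGamma(b));
}

type Arm = { trials: number; wins: number };

// Thompson sampling: draw a plausible conversion rate per arm from
// Beta(wins + 1, losses + 1) and play the arm with the best draw.
function sampleThompson(arms: Record<string, Arm>): string {
  let best = '';
  let bestSample = -Infinity;
  for (const [name, { trials, wins }] of Object.entries(arms)) {
    const s = randBeta(wins + 1, trials - wins + 1);
    if (s > bestSample) { bestSample = s; best = name; }
  }
  return best;
}
```

With thousands of trials per arm the Beta distributions get tight, which is exactly why traffic drifts toward the winner over time.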



&lt;p&gt;A separate cron job, also on Vercel, queries my events store every 30 minutes, recomputes trials and wins per arm, and PATCHes the new numbers back into Edge Config. The whole job runs in under 5 seconds.&lt;/p&gt;
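&lt;p&gt;The write side reuses the same &lt;code&gt;items&lt;/code&gt; shape as the kill-switch curl. A sketch of building the PATCH body from the recomputed stats; sending it is the same Authorization-header request shown earlier:&lt;/p&gt;

```typescript
type ArmStats = { trials: number; wins: number };

// Builds the Edge Config items payload: a single "update" operation
// replaces the whole bandit object atomically under one key.
function banditPatchBody(key: string, arms: Record<string, ArmStats>) {
  return { items: [{ operation: 'update', key, value: { arms } }] };
}
```

Keeping the whole bandit under one key means a read in middleware always sees a consistent set of arm counts, never a half-updated mix.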

&lt;p&gt;The benefit over fixed-split A/B: by week two, ~70% of traffic was getting the winning arm instead of being stuck at 33%. Lift on add-to-cart jumped accordingly without me touching anything.&lt;/p&gt;

&lt;p&gt;The shape that makes this work: Edge Config is the read path (fast, edge-cached, free), the cron is the write path (slow is fine, runs every 30 min), and the actual event data lives somewhere else (Shopify analytics, in my case). Three jobs, three tools, each doing what it is good at.&lt;/p&gt;

&lt;p&gt;I documented the cron side in &lt;a href="https://dev.to/blogs/lab/the-5-vercel-cron-jobs-that-keep-my-studio-running"&gt;The 5 Vercel Cron Jobs That Keep My Studio Running&lt;/a&gt;. And if you are weighing edge platforms, &lt;a href="https://dev.to/blogs/lab/5-cloudflare-workers-patterns-i-use-for-shopify-edge-logic"&gt;5 Cloudflare Workers Patterns I Use for Shopify Edge Logic&lt;/a&gt; covers the other side of the fence.&lt;/p&gt;

&lt;h2&gt;
  
  
  Bottom Line
&lt;/h2&gt;

&lt;p&gt;Five patterns, one shared Edge Config, four storefronts, 0 EUR added cost on top of the Vercel Pro plan I was already paying for. The thing that surprised me was not the price, it was how often I now ship experiments because the friction is gone. Edit a JSON value, hit save, watch the next request pick it up. No deploy, no SDK, no dashboard.&lt;/p&gt;

&lt;p&gt;If you are running a Shopify storefront on Vercel and you have ever thought "I would test this if it were not such a hassle", Edge Config is probably your missing piece. Start with the kill-switch (pattern 4), it pays for itself the first time something breaks. Add the hero copy split next, you will feel the iteration speed inside a week.&lt;/p&gt;

&lt;p&gt;I keep a running list of these small infra patterns and the actual configs I use over at the &lt;a href="https://dev.to/pages/studio"&gt;Studio&lt;/a&gt; page if you want to see what else is in rotation.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>productivity</category>
      <category>claudecode</category>
      <category>automation</category>
    </item>
    <item>
      <title>The 6 Vite Plugins That Replaced My Webpack Config</title>
      <dc:creator>RAXXO Studios</dc:creator>
      <pubDate>Sun, 03 May 2026 06:22:32 +0000</pubDate>
      <link>https://forem.com/raxxostudios/the-6-vite-plugins-that-replaced-my-webpack-config-g3e</link>
      <guid>https://forem.com/raxxostudios/the-6-vite-plugins-that-replaced-my-webpack-config-g3e</guid>
      <description>&lt;ul&gt;
&lt;li&gt;&lt;p&gt;I cut 11 Webpack plugins down to 6 Vite plugins and dev startup dropped from 4.2s to 0.8s&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;vite-plugin-checker runs TypeScript and ESLint in a worker so the dev server never blocks&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;unplugin-icons and vite-imagetools shrank my asset pipeline by 38% and killed three loaders&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;vite-plugin-pwa plus plugin-legacy gave me offline support and old-Safari fallbacks in 14 lines of config&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I spent four years tuning a Webpack config that nobody else wanted to touch. Last quarter I deleted it. The replacement is a single vite.config.ts with 6 plugins, and the dev server boots in under a second on the same machine.&lt;/p&gt;

&lt;h2&gt;
  
  
  The DX win that finally made me switch
&lt;/h2&gt;

&lt;p&gt;The thing that broke me was waiting. My old Webpack setup booted in 4.2 seconds on a warm cache, 9 seconds cold. Hot reload took 1.8 seconds for a CSS change. I timed it because I started counting how many times a day I stared at the terminal.&lt;/p&gt;

&lt;p&gt;Vite booted in 0.8 seconds on the same project. HMR for CSS is around 90ms. That alone would have been enough, but the real shift was vite-plugin-checker. In Webpack I had fork-ts-checker-webpack-plugin and eslint-webpack-plugin both racing the dev server, blocking compilation, and printing errors in two different formats. vite-plugin-checker runs both in a worker thread, prints a unified overlay in the browser, and never blocks the request pipeline.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;
&lt;span class="c1"&gt;// vite.config.ts&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;defineConfig&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;vite&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;checker&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;vite-plugin-checker&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="k"&gt;default&lt;/span&gt; &lt;span class="nf"&gt;defineConfig&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="na"&gt;plugins&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
    &lt;span class="nf"&gt;checker&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
      &lt;span class="na"&gt;typescript&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;eslint&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;lintCommand&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;eslint "./src/**/*.{ts,tsx}"&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
      &lt;span class="na"&gt;overlay&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;initialIsOpen&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="p"&gt;}),&lt;/span&gt;
  &lt;span class="p"&gt;],&lt;/span&gt;
&lt;span class="p"&gt;})&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The second DX plugin I added is unplugin-auto-import. I was tired of typing &lt;code&gt;import { useState, useEffect, useMemo } from 'react'&lt;/code&gt; at the top of every file. With auto-import I list the libraries once and the plugin generates a &lt;code&gt;.d.ts&lt;/code&gt; file that teaches the editor about the globals. It saved me roughly 200 lines of boilerplate across 80 components in the first repo I migrated.&lt;/p&gt;
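&lt;p&gt;For context, a minimal registration looks something like this. The preset list and the &lt;code&gt;dts&lt;/code&gt; path below are illustrative assumptions, not the exact config from the repo:&lt;/p&gt;

```typescript
// vite.config.ts — minimal unplugin-auto-import sketch.
// The preset list and dts path are illustrative assumptions.
import { defineConfig } from 'vite'
import AutoImport from 'unplugin-auto-import/vite'

export default defineConfig({
  plugins: [
    AutoImport({
      // Exports from these presets become ambient: useState, useEffect, useMemo...
      imports: ['react'],
      // Generated declaration file that teaches the editor about the globals
      dts: 'src/auto-imports.d.ts',
    }),
  ],
})
```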

&lt;p&gt;The combo of these two plugins replaced four Webpack plugins (fork-ts-checker, eslint-webpack-plugin, babel-plugin-import, and a custom alias resolver I wrote in 2023). The migration PR was 41 lines added, 1,103 removed. That number was the whole sales pitch.&lt;/p&gt;

&lt;p&gt;The other thing nobody tells you is that vite-plugin-checker is honest. fork-ts-checker had a habit of caching stale type errors so you would fix a bug, save, get a green dev server, and discover the error was still there in CI. checker writes its cache per file, invalidates on save, and the worker thread is fast enough that a full project type-check finishes before I have switched back to the browser. I have not had a CI surprise from a type error in three months.&lt;/p&gt;

&lt;h2&gt;
  
  
  Assets without the loader graveyard
&lt;/h2&gt;

&lt;p&gt;Webpack asset handling was the part of the config I was most afraid to touch. file-loader, url-loader, image-webpack-loader, svg-sprite-loader, and a custom resolver for icon sets. It worked, but it took 2.4 seconds of build time on its own, and every new icon library asked me to add another loader rule.&lt;/p&gt;

&lt;p&gt;I replaced the whole asset pipeline with two plugins. unplugin-icons handles every icon. I drop a Phosphor or Lucide name in JSX, the plugin tree-shakes the SVG at build time, and the bundle only carries icons I actually used. My icon bundle went from 312 KB (a full sprite I shipped because nobody had time to audit) down to 11 KB.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;Icons&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;unplugin-icons/vite&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;

&lt;span class="nx"&gt;plugins&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
  &lt;span class="nc"&gt;Icons&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;compiler&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;jsx&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;jsx&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;react&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;autoInstall&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt; &lt;span class="p"&gt;}),&lt;/span&gt;
&lt;span class="p"&gt;]&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;vite-imagetools handles photography. I write &lt;code&gt;import hero from './hero.jpg?w=400;800;1600&amp;amp;format=avif;webp;jpg&amp;amp;as=picture'&lt;/code&gt; and get back a srcset object I drop into a &lt;code&gt;&amp;lt;picture&amp;gt;&lt;/code&gt; tag. No loader chain, no separate sharp config, no CI step. Build time for the assets folder dropped from 2.4 seconds to 0.7. I cover the same trick for the runtime layer in &lt;a href="https://dev.to/blogs/lab/hono-the-tiny-framework-that-runs-my-entire-backend"&gt;Hono: The Tiny Framework That Runs My Entire Backend&lt;/a&gt;, where I serve the AVIF variants directly without a CDN middle layer.&lt;/p&gt;

&lt;p&gt;These two plugins replaced five Webpack loaders and a postbuild script. The PR description for that change was three bullets. I read the old asset config one last time, then I deleted 312 lines of it.&lt;/p&gt;

&lt;p&gt;The bonus I did not expect: vite-imagetools query strings live in the source file, not in a config. If a designer hands me a new hero crop, I change the query string, save, and the dev server rebuilds the variants in 280ms. With Webpack I would have had to edit the loader rule, restart the dev server, and pray the cache invalidated. The colocation alone is worth the migration even if you ignore the speed numbers.&lt;/p&gt;
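&lt;p&gt;The registration side of that workflow is tiny, because the directives live in the import query strings rather than in the config. A sketch, assuming default options:&lt;/p&gt;

```typescript
// vite.config.ts — vite-imagetools registration sketch (default options assumed).
// All transforms are driven by the import query strings in source files,
// e.g. width, format, and as=picture directives on the image import itself.
import { defineConfig } from 'vite'
import { imagetools } from 'vite-imagetools'

export default defineConfig({
  plugins: [imagetools()],
})
```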

&lt;h2&gt;
  
  
  Performance and reach without a second build
&lt;/h2&gt;

&lt;p&gt;A Vite setup is fast in dev, but my production target still includes a Shopify theme that gets visited from old iPads, kiosk browsers, and a surprising number of Samsung Internet users. I needed two things: a service worker for the PWA shell, and a legacy bundle for browsers that do not speak modern JS. The Vite config that ships my &lt;a href="https://shopify.pxf.io/5k5rj9" rel="noopener noreferrer"&gt;Shopify&lt;/a&gt; storefront customization does both with two plugins.&lt;/p&gt;

&lt;p&gt;vite-plugin-pwa is the simplest service worker I have used. I tell it which routes to precache, which to network-first, and it generates the manifest, the worker, and the offline fallback. The first time I shipped it, my Lighthouse PWA score went from 47 to 100 with no other changes.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;import { VitePWA } from 'vite-plugin-pwa'

VitePWA({
  registerType: 'autoUpdate',
  workbox: { globPatterns: ['**/*.{js,css,html,svg,woff2}'] },
  manifest: { name: 'Lab', short_name: 'Lab', theme_color: '#1f1f21' },
})
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;@vitejs/plugin-legacy handles the long tail. It builds a second, transpiled bundle, loaded through &lt;code&gt;nomodule&lt;/code&gt; script tags along with the polyfills it needs, for browsers that do not support native ES modules. The modern bundle is 142 KB gzipped. The legacy bundle is 218 KB and only loads if a visitor needs it. I do not pay the cost in 96 percent of sessions, but I do not lose the 4 percent either.&lt;/p&gt;
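&lt;p&gt;A sketch of the legacy setup, with illustrative browserslist targets rather than my exact ones:&lt;/p&gt;

```typescript
// vite.config.ts — @vitejs/plugin-legacy sketch.
// The targets below are illustrative assumptions, not the exact config.
import { defineConfig } from 'vite'
import legacy from '@vitejs/plugin-legacy'

export default defineConfig({
  plugins: [
    legacy({
      // Browsers without native ES module support load the second,
      // transpiled bundle plus its polyfills via nomodule script tags.
      targets: ['defaults', 'not IE 11'],
    }),
  ],
})
```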

&lt;p&gt;Together these two plugins replaced workbox-webpack-plugin, babel-loader with three preset plugins, and an html-webpack-plugin template. Production build time went from 38 seconds to 11 seconds, and the modern-browser bundle is 22 percent smaller because Vite tree-shakes harder than my old Babel chain ever did. I covered why the runtime is faster too in &lt;a href="https://dev.to/blogs/lab/bun-1-2-replaced-node-in-every-new-raxxo-project"&gt;Bun 1.2 Replaced Node in Every New RAXXO Project&lt;/a&gt;, and the same migration logic applies here.&lt;/p&gt;

&lt;p&gt;One quiet detail about plugin-legacy: it ships a &lt;code&gt;modulepreload&lt;/code&gt; polyfill for Safari 14 that I had been forgetting to load manually. After the migration, the time-to-interactive on an old iPad in the kitchen drawer dropped from 2.9s to 1.6s. I did not change a single line of app code. The plugin defaulted me into the right behavior, which is the kind of trade I will take every day of the week.&lt;/p&gt;

&lt;h2&gt;
  
  
  When something breaks, vite-plugin-inspect is the first stop
&lt;/h2&gt;

&lt;p&gt;Six plugins is fewer than 11, but it is not zero, and plugins still fight each other. The plugin that paid for itself the first week was vite-plugin-inspect.&lt;/p&gt;

&lt;p&gt;It mounts a &lt;code&gt;/__inspect/&lt;/code&gt; route in dev. I open it, pick a module, and see every transformation step in order. Which plugin touched the file. What the input was. What the output was. How long each step took. The first time I used it I found that unplugin-icons was being run twice on the same file because I had registered it before and after a custom plugin that called &lt;code&gt;transform&lt;/code&gt; on SVGs. I deleted my custom plugin (it was a 2022 leftover), the duplicate transform vanished, and dev startup dropped another 180ms.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;import Inspect from 'vite-plugin-inspect'

plugins: [
  // ...other plugins
  process.env.NODE_ENV === 'development' &amp;amp;&amp;amp; Inspect(),
]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;I only enable it in dev. It adds nothing to the production bundle. If you have ever stared at a Webpack stats.json file trying to figure out which loader mangled your import, this plugin is the version of that experience that does not make you cry. It is also the only debugging tool I keep installed permanently across every Vite project I run, including the static-site repo behind my &lt;a href="https://dev.to/pages/studio"&gt;Studio&lt;/a&gt; page.&lt;/p&gt;

&lt;p&gt;The second time inspect saved me was on a build that had quietly grown by 90 KB over two weeks. I opened the route, sorted modules by size, and found a date library being pulled in by three different paths because two plugins were resolving the same import differently. Fixing the resolution dropped the bundle back down in under an hour. Without inspect I would have spent a day on it. Bundle analysis tools exist as separate packages, but nothing else shows you the per-plugin transformation order, which is where most surprises actually live.&lt;/p&gt;

&lt;h2&gt;
  
  
  Bottom Line
&lt;/h2&gt;

&lt;p&gt;I replaced 11 Webpack plugins with 6 Vite plugins. Dev startup went from 4.2s to 0.8s, production build from 38s to 11s, the icon bundle from 312 KB to 11 KB, and the config file from 1,103 lines to 41. The migration took me one weekend per repo. The hardest part was not the new tooling, it was reading the old config one last time and admitting how much of it I had been afraid to delete.&lt;/p&gt;

&lt;p&gt;If you are still running Webpack and you have not tried Vite since the early 2.x days, the plugin ecosystem is the part that changed most. checker, auto-import, icons, imagetools, pwa, and legacy cover everything I needed, with inspect as the dev-only debugging layer. Six plugins in the build, one config file, faster everything. That is the whole pitch.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>productivity</category>
      <category>claudecode</category>
      <category>automation</category>
    </item>
    <item>
      <title>Microsoft Agent 365 Launches With Claude Inside: What It Means</title>
      <dc:creator>RAXXO Studios</dc:creator>
      <pubDate>Fri, 01 May 2026 09:47:07 +0000</pubDate>
      <link>https://forem.com/raxxostudios/microsoft-agent-365-launches-with-claude-inside-what-it-means-4hhc</link>
      <guid>https://forem.com/raxxostudios/microsoft-agent-365-launches-with-claude-inside-what-it-means-4hhc</guid>
      <description>&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Microsoft Agent 365 launches today, May 1, 2026, as the enterprise control plane for autonomous agents inside Microsoft 365&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Copilot Cowork brings autonomous multi-step task execution into Word, Excel, Outlook, and Teams, built directly with Anthropic on Claude technology&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;For solo studios, the headline is that Claude is now embedded across Microsoft's product surface, not just in chat windows&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The pricing tier is enterprise-only at launch, but the integrations and patterns will reach Pro and Business plans within months&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;If you already work in Claude Code or Claude Cowork, the workflow will feel familiar: multi-step plans, tool use, and human approval gates&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;For a one-person business this changes which apps you should bother to learn next, and which legacy workflows are about to disappear&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I opened LinkedIn at 8 AM and watched a Microsoft press release roll across half my feed in real time. Microsoft Agent 365 is live. Copilot Cowork is generally available. Claude is now running inside Office for everyone with the right enterprise license. This is the launch that makes the case for "AI as platform" rather than "AI as app." For a solo studio, the implications are bigger than the headline suggests, and they are not all good.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Microsoft actually shipped today
&lt;/h2&gt;

&lt;p&gt;Microsoft Agent 365 is positioned as the dedicated control plane for enterprise agents. In plain language, it is the place inside Microsoft 365 where IT admins create agents, assign them permissions, monitor their actions, audit their decisions, and route their outputs into Word, Excel, Outlook, Teams, SharePoint, and Power Platform. It is not a single product. It is an admin layer for the agent economy that Microsoft has been hinting at since November.&lt;/p&gt;

&lt;p&gt;Copilot Cowork is the user-facing component. It runs autonomous multi-step tasks across Microsoft 365 apps. The example in the demo: an analyst asks Copilot Cowork to prepare a quarterly review. Cowork pulls last quarter's report from SharePoint, queries the financial dataset in Excel via the new agent connector, summarizes the findings in a Word draft, builds the deck in PowerPoint, and emails the stakeholders for review. Each step is logged, each tool call is auditable, and a human approval gate fires before anything is sent externally.&lt;/p&gt;

&lt;p&gt;The piece that made me pay attention: Microsoft confirmed that Cowork is "built in direct collaboration with Anthropic using Claude technology." It is not Microsoft's house model. It is Claude inside Microsoft's product. The same Claude that runs inside Claude Code, Claude Cowork on Mac, and the Claude API. Same model family, same tool-use patterns, same Constitutional AI training. The only thing that changes is the surface.&lt;/p&gt;

&lt;p&gt;That is a strange and interesting position for Microsoft to hold. They have invested heavily in OpenAI, they ship GPT-5.5 inside the standard Copilot, and now they ship Claude inside the agent layer. The cynical read is that Microsoft is hedging. The honest read, which I think is closer to true, is that Microsoft has decided that for autonomous multi-step work, Claude's tool use and refusal patterns are the better fit, and they are willing to ship it under a different brand to keep the consumer Copilot story clean.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why this matters for a one-person studio
&lt;/h2&gt;

&lt;p&gt;The first thing I asked myself was whether this affects me at all. I do not run on Microsoft 365. I run on Shopify, Vercel, Notion, and a Claude Code stack that lives inside my terminal. My initial answer was no. My second answer, after thinking for an hour, was that it absolutely does, indirectly.&lt;/p&gt;

&lt;p&gt;Three reasons.&lt;/p&gt;

&lt;p&gt;The first is that my clients run on Microsoft 365. Most enterprise clients I have worked with in the last two years live in Outlook, Teams, and SharePoint by default. If their internal teams start using Copilot Cowork to draft briefs, generate first-pass content, and triage agency deliverables, the bar for what they expect from me changes. A draft that took me three hours used to look impressive. The same draft will now look unfinished if their Cowork instance is producing competing versions in 90 seconds.&lt;/p&gt;

&lt;p&gt;The second is that the patterns that Cowork ships are the patterns that solo operators will have to match. Multi-step plans, tool calls, human approval gates, audit logs. If you read &lt;a href="https://dev.to/blogs/lab/the-9-claude-code-hooks-that-audit-every-file-i-write"&gt;The 9 Claude Code Hooks That Audit Every File I Write&lt;/a&gt; you know I have been building toward this for months. The Microsoft launch validates the pattern at enterprise scale. The hooks I run locally are the same shape as the audit logs Cowork is producing. The lesson is not that I built it wrong, the lesson is that the pattern is now table stakes.&lt;/p&gt;

&lt;p&gt;The third is that Microsoft just normalized "Claude inside another product." This is the launch that makes Anthropic's enterprise reach larger than just the API and the Claude apps. Claude Cowork on Mac was already generally available earlier this month, but Cowork inside Microsoft 365 is a different kind of distribution. It puts Claude in front of every knowledge worker at every Fortune 500 that runs on Office. The total addressable surface for Claude-shaped workflows just multiplied.&lt;/p&gt;

&lt;h2&gt;
  
  
  What changes inside the agent stack
&lt;/h2&gt;

&lt;p&gt;The technically interesting part of today's launch is the agent connector model. Microsoft Agent 365 introduces a standard pattern for agents to authenticate, list permissions, call tools, and report back. It is not MCP exactly, but it is MCP-shaped. The agent declares which tools it needs, the admin grants or denies, and the agent operates inside the granted scope.&lt;/p&gt;

&lt;p&gt;For studios that have been building on MCP (which we covered in &lt;a href="https://dev.to/blogs/lab/mcp-servers-are-how-claude-actually-talks-to-everything"&gt;MCP Servers Are How Claude Actually Talks to Everything&lt;/a&gt;), the practical question is whether Microsoft's connectors will be MCP-compatible at the protocol level or whether they will require a translation layer. The launch documentation hints at MCP compatibility on the server side but leaves the client side for the next release.&lt;/p&gt;

&lt;p&gt;If MCP wins as the dominant agent protocol, every MCP server I write today (and I have written 9 so far) becomes plug-compatible with the Microsoft agent stack on the day they ship the bridge. If Microsoft ships a proprietary connector format, every studio will have to choose where to invest. The smart bet, in my read, is that Anthropic has been pushing MCP at every venue and Microsoft has too much to gain from compatibility to fork the protocol. That is the bet I am making with my own infrastructure.&lt;/p&gt;

&lt;p&gt;The other shift is that the human-in-the-loop pattern is now the default, not the exception. Cowork demos all show approval gates before any external action. This matches what Anthropic has been doing inside Claude Code with hooks and MCP since 2.1, and it matches the auditability features that enterprise procurement has been demanding since the first wave of agent failures last year. The era of "the agent did a thing and we are not sure why" is ending. Audit logs and approval gates are the new normal.&lt;/p&gt;

&lt;h2&gt;
  
  
  The pricing question and who actually gets this
&lt;/h2&gt;

&lt;p&gt;At launch, Microsoft Agent 365 is enterprise-only. It requires a Microsoft 365 E5 license plus the new Agent 365 add-on, which is rumored to be in the 30 to 50 EUR per user per month range. Copilot Cowork inside that bundle costs nothing additional, but the underlying license cost is the gate.&lt;/p&gt;

&lt;p&gt;That puts it out of reach for a one-person studio at the moment. It also puts it out of reach for most agencies under 50 people. The customers Microsoft is targeting are large enterprises that already pay for E5 and want to add agent capabilities without buying a separate platform.&lt;/p&gt;

&lt;p&gt;The historical pattern, which I expect to repeat, is that Microsoft drops these features down the price tiers within 6 to 12 months. Copilot itself launched at 30 EUR per user per month for E5 customers, and now it is 22 EUR per user per month for Business Standard. The same path is likely for Cowork. By Q3 or Q4 of 2026 I expect the agent layer to be available on Business Premium, which is a license tier that solo operators and small studios actually buy.&lt;/p&gt;

&lt;p&gt;The pragmatic move for me is to learn the patterns now using Claude Cowork on Mac (already GA) and Claude Code (where I already work). When the Microsoft surface lands, the workflow will feel familiar and I will not be learning a new tool, I will be using a tool I already know inside a different chrome.&lt;/p&gt;

&lt;h2&gt;
  
  
  What this signals for the rest of 2026
&lt;/h2&gt;

&lt;p&gt;Three trends are going to compound from today's launch.&lt;/p&gt;

&lt;p&gt;The first is that Claude is becoming infrastructure. It runs inside Claude Code, inside Cowork on Mac and Windows, inside Microsoft 365, inside enterprise agent platforms, and inside the Anthropic API that everyone else builds on. Anthropic does not have to win the consumer chat war to win the agent war, and today is evidence that they are not trying to.&lt;/p&gt;

&lt;p&gt;The second is that audit, approval, and governance are now the boring competitive moats. The agent that wins is the agent that the procurement department signs off on. Six months ago every demo focused on what the agent could do. Today's demos all focus on what the agent cannot do without permission, what it logs, and how the human stays in the loop. That shift will keep happening for the rest of the year.&lt;/p&gt;

&lt;p&gt;The third is that solo operators have a window. The enterprise stack is figuring itself out in real time. Copilot Cowork at 50 EUR per user per month is expensive for a solo studio but reasonable for a Fortune 500. The window is roughly nine months, in my estimate, where a one-person operation can ship work that looks competitive against a Cowork-augmented mid-market team. After that, the pricing tiers come down, the patterns become standardized, and the solo lead shrinks. The work I do with Claude Code and a Shopify stack is well inside that window today, and the &lt;a href="https://dev.to/pages/studio"&gt;studio overview&lt;/a&gt; is the closest thing I have to a manual for keeping that runtime relevant.&lt;/p&gt;

&lt;h2&gt;
  
  
  Bottom line
&lt;/h2&gt;

&lt;p&gt;Microsoft Agent 365 and Copilot Cowork shipping today is not a product launch I can buy as a solo operator. It is a market signal. The signal is that the agent layer is real, that Claude is now infrastructure across two of the three biggest software platforms in the world, and that the operators who learn the multi-step plan plus tool-use plus approval-gate pattern this quarter will be the ones competing on equal terms when the pricing comes down later this year. The work I am doing inside Claude Code today is the same shape as the work that just shipped to enterprise. I would rather be six months early on a pattern than six months late, and today is a good day to commit to that.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>productivity</category>
      <category>claudecode</category>
      <category>automation</category>
    </item>
    <item>
      <title>Magnific vs Topaz vs Krea: Which AI Upscaler Actually Works</title>
      <dc:creator>RAXXO Studios</dc:creator>
      <pubDate>Fri, 01 May 2026 09:42:32 +0000</pubDate>
      <link>https://forem.com/raxxostudios/magnific-vs-topaz-vs-krea-which-ai-upscaler-actually-works-3j87</link>
      <guid>https://forem.com/raxxostudios/magnific-vs-topaz-vs-krea-which-ai-upscaler-actually-works-3j87</guid>
      <description>&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Topaz wins on photo fidelity for portraits, product shots, and any source where the original detail must survive&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Magnific wins on creative invention, where you want a 512px reference turned into a printable hero with new texture&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Krea is the best unified canvas if you want one app that runs Topaz, Magnific, and its own enhancer side by side&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Pricing splits the field: Topaz is 199 EUR per year for desktop, Magnific is 79 EUR per month for the full creative stack, and Krea sits between them&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;For a one-person studio shipping product mockups and blog headers, Magnific is the daily driver and Topaz is the safety net&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The wrong tool on the wrong source costs you the job; the rest of this article is about which tool fits which source&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I had a client call last week where I sent an upscaled product image and the founder said it looked off. The image was sharp, the lighting was right, the resolution was correct. The shape of the bottle was wrong. Magnific had reimagined the curve of the glass into something prettier than the photograph. I had used the wrong tool for the wrong job. This article is the version of that lesson I wish I had read three months ago.&lt;/p&gt;

&lt;h2&gt;
  
  
  The category split that nobody explains clearly
&lt;/h2&gt;

&lt;p&gt;Every "best AI upscaler 2026" article I read before buying treated these tools as if they were doing the same thing with different brand names. They are not. There are two distinct categories, and confusing them costs you billable hours.&lt;/p&gt;

&lt;p&gt;Faithful upscalers reconstruct what was probably in the source. Topaz Gigapixel, Photo AI, and Let's Enhance fall in this camp. The model has been trained on millions of high-resolution images and tries to predict the most likely high-res version of your low-res input. The output stays close to the source. A 1080p photo of a person becomes a 4K photo of the same person, with the same face, the same skin, the same shadows.&lt;/p&gt;

&lt;p&gt;Creative upscalers invent plausible new detail. &lt;a href="https://referral.magnific.com/mQMIvsh" rel="noopener noreferrer"&gt;Magnific&lt;/a&gt; (formerly Freepik), Krea Enhance, and a handful of newer Stable Diffusion based tools sit here. The model uses latent diffusion to hallucinate texture, fabric weave, brushwork, atmosphere. The output is gorgeous when you want it. It is wrong when the source was a photograph that needed to stay a photograph.&lt;/p&gt;

&lt;p&gt;The mistake I made was using Magnific on a real product. The right tool for that job was Topaz Photo AI. The right tool for upscaling a 512px AI render into a 4K hero is Magnific. They are not interchangeable. The headline of every comparison article should be this paragraph, but it almost never is.&lt;/p&gt;

&lt;h2&gt;
  
  
  Topaz Gigapixel and Photo AI in 2026
&lt;/h2&gt;

&lt;p&gt;Topaz still wins on detail recovery for any image where the source contains real information that needs to survive. The 2026 release added six new specialty models, including a portrait skin model that no longer hallucinates pores into people who do not have them, a product shot model trained on e-commerce photography, and a low-light recovery model that I have used to save indoor event photos that were two stops underexposed.&lt;/p&gt;

&lt;p&gt;The pricing changed and people are unhappy. Topaz moved from a 99 EUR one-time license to a 199 EUR per year subscription for the Photo AI bundle, or 12 EUR per month for cloud rendering. The defenders point out that the model improvements are now monthly instead of yearly. The detractors point out that they bought a perpetual license and now feel like they are renting it back. Both are right.&lt;/p&gt;

&lt;p&gt;For my work, the value math is simple. I run roughly 80 product upscales per month for the shop and for client mockups. The cloud tier covers that with headroom. The desktop bundle is overkill unless you also restore old photos or process raw files at scale. If you are a photographer or a hybrid designer who shoots your own product, the desktop bundle is correct. If you are a one-person studio that shoots 10 photos per month and renders 100 hero images, the cloud tier is correct.&lt;/p&gt;

&lt;p&gt;The Topaz limitation that nobody mentions: it cannot make a bad photo into a good one. It can make a good photo bigger. If your source is blurry, badly composed, or wrong-angle, no upscaler fixes that and Topaz will not pretend to. Magnific will, and that is sometimes what you want.&lt;/p&gt;

&lt;h2&gt;
  
  
  Magnific in 2026, post-rebrand
&lt;/h2&gt;

&lt;p&gt;Magnific is the creative upscaler that doubles as a hero-image generator. Since the &lt;a href="https://dev.to/blogs/lab/freepik-rebranded-magnific-what-changes-for-solo-creators"&gt;rebrand from Freepik on April 28&lt;/a&gt;, it now lives inside the unified Magnific platform alongside the asset library, video generation, and 40+ image and video models. The upscaler itself stays at magnific.ai with the same Hallucination, Creativity, HDR, and Resemblance sliders that built its reputation.&lt;/p&gt;

&lt;p&gt;The two controls that matter most are Creativity and Resemblance. Creativity at 0 produces a sharper version of the input. Creativity at 10 produces a new image that vaguely remembers the input was once there. Resemblance pulls the output back toward the source. The trick is not to set either to a number, it is to test both at three settings (low, mid, high) and pick the one that does not break the brief.&lt;/p&gt;

&lt;p&gt;Where Magnific wins for me: any source under 1024px that needs to print at 4K, any AI-generated reference where the model produced a beautiful low-res draft, any concept render where the original had the right vibe but the wrong detail, any background plate that needs more texture and grain than was originally captured. Hero images for blog posts. Product mockups where the product itself is rendered, not photographed. Marketing visuals where the source was a moodboard not a photograph.&lt;/p&gt;

&lt;p&gt;Where Magnific loses: anything that has to remain literally the original. Faces, brand assets, customer-supplied photos, real product shots, food photography, anything legal or evidentiary. The Hallucination property that makes it brilliant for one job makes it dangerous for another.&lt;/p&gt;

&lt;p&gt;The 79 EUR per month Pro tier covers the full stack, not just the upscaler. If you only want the upscaler, the standalone tier at magnific.ai is still cheaper. If you also use the asset library, the video generation, or the audio tools, the unified plan pays for itself in roughly 12 days for me.&lt;/p&gt;

&lt;h2&gt;Krea: the third option that nobody compares correctly&lt;/h2&gt;

&lt;p&gt;Krea is the dark horse. It is the only tool of the three that runs as a unified canvas across multiple models from multiple vendors, including Topaz's enhancer as a paid third-party integration. You can generate in Flux, refine in Imagen 4, upscale in Krea Enhance or Topaz, all without leaving the canvas. That alone makes it worth the trial for anyone who currently switches between five tabs.&lt;/p&gt;

&lt;p&gt;Krea Enhance has its own model that sits between Topaz and Magnific in personality. More creative than Topaz, more faithful than Magnific. The Strength, Clarity, Sharpness, and Color Match sliders are the cleanest UI of the three. I find Krea easier to recommend to designers who do not want to read the documentation, because the defaults usually produce a usable result on the first run.&lt;/p&gt;

&lt;p&gt;Pricing is 35 EUR per month for the Pro tier, which is the cheapest of the three for anyone who only wants the unified canvas. The catch is that some of the models cost extra credits per render, so heavy users end up paying closer to Magnific's all-inclusive 79 EUR. The math depends entirely on which models you actually use.&lt;/p&gt;

&lt;p&gt;Where Krea wins: real-time generation (the live brush that paints Flux as you drag), unified canvas with multi-vendor models, the cleanest upscaler UX, the best collaboration features for small teams. Where it loses: the asset library is thin, video generation is behind both Magnific and Runway, and the platform feels less stable than the other two when under load.&lt;/p&gt;

&lt;p&gt;If I were starting from scratch today, with no habits already in place, I would probably try Krea first because the canvas pattern is the cleanest and you can always swap the underlying models. If I had to pick one tool to keep for a year, I would still pick Magnific because it covers more of the work I actually do.&lt;/p&gt;

&lt;h2&gt;How I actually use all three&lt;/h2&gt;

&lt;p&gt;The honest answer is that I use all three for different jobs, and the cost in absolute terms is roughly 130 EUR per month. The cost in time saved is multiple hours per week.&lt;/p&gt;

&lt;p&gt;For product photography and customer-supplied images: Topaz Photo AI on the cloud tier, no exceptions. The fidelity matters more than the price.&lt;/p&gt;

&lt;p&gt;For hero images and blog headers: Magnific on the unified plan. I generate in Flux 1.1 Pro at 1024px, upscale with Magnific at Creativity 4 and Resemblance 6, export at 1920x1080. The whole process takes under three minutes.&lt;/p&gt;
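&lt;p&gt;The export step is the only part of that pipeline worth scripting, because a square-ish upscale does not divide evenly into 16:9. A minimal Pillow sketch, assuming an in-memory stand-in for the Magnific output and a hypothetical filename; only the 1920x1080 target comes from the workflow above.&lt;/p&gt;

```python
from PIL import Image

# Stand-in for the Magnific upscale output; in real use, Image.open("upscale.png").
src = Image.new("RGB", (4096, 4096))
TARGET_W, TARGET_H = 1920, 1080

# Center-crop to 16:9 first, so the resize scales instead of squashing.
crop_h = src.width * TARGET_H // TARGET_W      # 16:9 height at full width
top = (src.height - crop_h) // 2
hero = src.crop((0, top, src.width, top + crop_h)).resize(
    (TARGET_W, TARGET_H), Image.LANCZOS
)
hero.save("hero-1920x1080.png")
print(hero.size)  # (1920, 1080)
```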

&lt;p&gt;For client review and quick iteration: Krea, mostly because the real-time canvas is the closest thing to "designing alongside the model" that I have used. I do not ship from Krea often, but I prototype in it.&lt;/p&gt;

&lt;p&gt;For everything else: whichever tool is already open. The trick to using three tools is to assign each one a clear role and never argue with the assignment when you are tired and a deadline is close. That is when the wrong-tool-on-the-wrong-source mistake happens.&lt;/p&gt;

&lt;h2&gt;Bottom line&lt;/h2&gt;

&lt;p&gt;The right answer to "which AI upscaler should I buy?" is "which job are you actually doing?" Topaz for fidelity, Magnific for invention, Krea for the unified canvas. The wrong choice on the wrong source ships a hero image with a deformed product silhouette, and you find out about it on the client call.&lt;/p&gt;

&lt;p&gt;If you only want one tool and you do creative work where the source is a render, an AI generation, a moodboard, or a low-res reference, &lt;a href="https://referral.magnific.com/mQMIvsh" rel="noopener noreferrer"&gt;Magnific&lt;/a&gt; is the right pick and the rebrand has made the rest of the unified stack worth the seat by itself. If you need fidelity and you shoot real photographs, Topaz is the right pick and you can pair it with anything else for generation. If you want one canvas and the cleanest UX, Krea is the right pick and the cost depends on which models you lean on. The full breakdown of how this fits into a one-person studio runtime is in &lt;a href="https://dev.to/blogs/lab/how-i-run-a-15-repo-studio-from-one-claude-md-file"&gt;How I Run a 15-Repo Studio From One CLAUDE.md File&lt;/a&gt;, and the studio overview at &lt;a href="https://dev.to/pages/studio"&gt;/pages/studio&lt;/a&gt; shows the rest of the stack I run alongside.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>productivity</category>
      <category>claudecode</category>
      <category>automation</category>
    </item>
    <item>
      <title>Freepik Just Rebranded as Magnific: What Changes for Solo Creators</title>
      <dc:creator>RAXXO Studios</dc:creator>
      <pubDate>Fri, 01 May 2026 09:40:12 +0000</pubDate>
      <link>https://forem.com/raxxostudios/freepik-just-rebranded-as-magnific-what-changes-for-solo-creators-1jac</link>
      <guid>https://forem.com/raxxostudios/freepik-just-rebranded-as-magnific-what-changes-for-solo-creators-1jac</guid>
      <description>&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Freepik renamed itself to Magnific on April 28, 2026, unifying its asset library, upscaler, and 40+ AI models under one brand&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The company hit 230M EUR ARR with over a million paying subscribers, mostly bootstrapped, no venture capital theatre&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;My Freepik affiliate link still works, the same code now points to magnific.com, no action needed for partners&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The rebrand shifts positioning from a stock asset library to a full AI creative stack for the no-collar economy&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Magnific.ai stays put as the upscaler product, magnific.com is the new home for everything else&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;For solo studios it means fewer tabs, one login, and a real challenger to Adobe Creative Cloud&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I clicked my own affiliate link this morning expecting to land on Freepik. Instead I landed on a black homepage that said Magnific. Same login, same files, new face. The 15-year-old design giant that quietly bought Magnific in May 2024 has now adopted the smaller brand's name, swallowed its own legacy, and bet the whole company on the upscaler's positioning. After ten minutes inside the new product I understand why.&lt;/p&gt;

&lt;h2&gt;The rebrand in plain numbers&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://referral.magnific.com/mQMIvsh" rel="noopener noreferrer"&gt;Magnific&lt;/a&gt; (formerly Freepik) announced the change on April 28, 2026. Three numbers do most of the explaining. 230M EUR ARR, more than one million paying subscribers, and over 250 enterprise customers including the BBC, Puma, Carl's Jr, Delivery Hero, Huel, R/GA, Damm, Job&amp;amp;Talent, and Amazon Prime Video's House of David. Roughly half of that ARR now comes from video generation, which did not exist as a Freepik product 18 months ago.&lt;/p&gt;

&lt;p&gt;The company is bootstrapped. Joaquin Cuenca Abela, the CEO, has run it for 15 years without raising the kind of headline rounds you read about every Tuesday. That changes the math. When most AI startups are losing money on every render and hoping growth pays for it later, Magnific is profitable on a stack that includes 40+ frontier image, video, and audio models. They are paying the inference bills with subscriber money. The rebrand is not a fundraising signal. It is a positioning signal.&lt;/p&gt;

&lt;p&gt;The old name carried 15 years of stock assets, 250M+ files, and a mental shortcut: cheap design resources. Useful, but not where the money is going in 2026. The new name carries the upscaler that art directors at Disney and Apple's marketing partners have been quietly using since 2023. It says creative pro, not template farm.&lt;/p&gt;

&lt;h2&gt;What the new Magnific actually contains&lt;/h2&gt;

&lt;p&gt;The product I logged into is meaningfully different from the Freepik I used last week. The home dashboard now opens to a single canvas where I can choose Generate, Upscale, Edit, Video, or Audio without leaving the page. The asset library is still there, two clicks deep, but it is no longer the front door.&lt;/p&gt;

&lt;p&gt;The bundled tools as of the rebrand: text to image with model switching across Flux 1.1 Pro, Imagen 4, Seedream, Recraft V3, Nano Banana, and others. Text to video with Veo 3, Kling 2.5, Hailuo 02, Seedance, and Magnific's own video models. The Magnific upscaler stays as a distinct product at magnific.ai, with the same tier pricing and the same Hallucination and Reimagining controls that built its reputation. Audio includes voice generation, sound effects, and music with internal models. Edit covers inpainting, background removal, relighting, expand, and the new Spaces feature for collaborative review.&lt;/p&gt;

&lt;p&gt;The asset library is no longer the headline product. It is the safety net under a generative pipeline. You generate something, it is not quite right, you grab a stock asset to composite, you upscale, you export. That is the workflow the new homepage assumes.&lt;/p&gt;

&lt;p&gt;For me the surprise was Spaces. Multiple seats inside a single asset board, comments, version history. It is not Figma, but for a one-person studio it removes a class of problem (sharing a render with a client without DM-ing the file).&lt;/p&gt;

&lt;h2&gt;Why this rebrand actually makes sense&lt;/h2&gt;

&lt;p&gt;Cuenca Abela has been talking about a no-collar economy for a year. The phrase is awkward but the idea is clear. Blue collar made things, white collar managed processes, no-collar workers operate AI tools to produce both. The argument is that the bottleneck for creative production has moved from skill to taste, and that the people who win are the ones who can direct AI well, not the ones who can render in After Effects faster than the next person.&lt;/p&gt;

&lt;p&gt;If you accept that framing, Freepik (a stock asset library) is the wrong vehicle. Magnific (a creative AI platform) is the right one. The rebrand says we are not selling templates anymore, we are selling the studio.&lt;/p&gt;

&lt;p&gt;There is a second reason that gets less attention. Adobe is in an awkward spot. Firefly is good but Adobe's pricing model was built for the Creative Cloud era, not the per-render era. Cuenca Abela has the unusual position of running a profitable AI creative tool that does not need to charge Photoshop prices. Magnific Pro is 79 EUR per month for the full stack. That is roughly half a Creative Cloud All Apps plan, and it includes generation, upscaling, video, and audio.&lt;/p&gt;

&lt;p&gt;For solo studios this is the first time I have seen a real Adobe alternative that is not just a price play. The output quality on Flux and Imagen 4 routinely matches anything I get from Photoshop's generative fill, and the upscaler is genuinely better than anything Adobe ships.&lt;/p&gt;

&lt;h2&gt;What this changes for paying users&lt;/h2&gt;

&lt;p&gt;Nothing breaks. That is the headline. If you had a Freepik subscription, it migrates to Magnific Pro at the same tier, same renewal date, same payment method. Existing assets in your library show up in the new dashboard. API keys keep working under their old endpoints for at least 12 months.&lt;/p&gt;

&lt;p&gt;The Magnific upscaler subscription, if you had one separately, continues to bill at magnific.ai until the end of the period. After that it folds into the unified Magnific Pro plan. No double billing, no surprise prorations.&lt;/p&gt;

&lt;p&gt;For affiliate partners (which is me, and possibly you), the affiliate code mQMIvsh now lives at referral.magnific.com instead of referral.freepik.com. The old URL redirects with the same code attached, so existing blog posts keep paying out. New articles should use the new domain. I updated 15 articles on raxxo.shop in five minutes with a sed pass and a redeploy. If you run a similar setup, your update is roughly that small.&lt;/p&gt;
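&lt;p&gt;For anyone repeating that migration, the whole pass is two commands. A minimal sketch: the content/ directory and file name are stand-ins for your own repo layout; only the two domains and the affiliate code come from this article.&lt;/p&gt;

```shell
# Stand-in article; in a real repo, content/ already holds your posts.
mkdir -p content
printf 'Try it: https://referral.freepik.com/mQMIvsh\n' > content/example-post.md

# Rewrite every file still pointing at the old affiliate domain
# (GNU sed shown; on macOS use: sed -i '' ...).
grep -rl 'referral\.freepik\.com' content/ \
  | xargs sed -i 's/referral\.freepik\.com/referral\.magnific\.com/g'

grep -r 'referral.magnific.com' content/   # verify before redeploying
```

&lt;p&gt;Running the verification grep before the redeploy is the part that saves you; a silent miss in one file is exactly the kind of thing a client notices first.&lt;/p&gt;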

&lt;p&gt;The thing I would not do is delete the old domain references in your published archive. Magnific has committed to keeping the redirect alive, and rewriting your historic articles to say Magnific everywhere creates the kind of revisionism that the Wayback Machine catches and Reddit eventually mocks. I left the rebrand context in any article that explicitly compared tools side by side.&lt;/p&gt;

&lt;h2&gt;How Magnific compares to Topaz and Krea now&lt;/h2&gt;

&lt;p&gt;The unified product changes the competitive picture. Topaz remains the photographer's tool. Gigapixel and Photo AI still beat anything else for faithful detail recovery, scanned negatives, restoration work, and product shots that need to stay true to the original capture. Topaz moved to a 199 EUR per year subscription this year, which annoyed long-time users, but the technology is unmatched if your job is making the source bigger without making it different.&lt;/p&gt;

&lt;p&gt;Krea is the closest direct competitor to Magnific now. Both ship a unified canvas with multiple models, both let you upscale, generate, and edit without switching apps. Krea has a slight edge on real-time generation (the brushes that paint in Flux output as you drag) and on third-party model access (it lets you call Topaz's enhancer from inside Krea). Magnific has a deeper asset library, video generation that is genuinely usable for short cuts, and the more mature upscaler.&lt;/p&gt;

&lt;p&gt;For a solo design studio that does product mockups, blog headers, motion graphics, and the occasional client deliverable, Magnific covers more of the surface for less money. For a photographer or illustrator with a strict fidelity requirement, Topaz is still the answer and you pair it with whatever generation tool you prefer. The article I wrote earlier this year on this exact question is up at &lt;a href="https://dev.to/blogs/lab/i-tested-5-ai-image-generators-head-to-head-only-2-shipped"&gt;I Tested 5 AI Image Generators Head to Head (Only 2 Shipped)&lt;/a&gt;, and I am updating it this week to reflect the new positioning.&lt;/p&gt;

&lt;h2&gt;What this signals about AI creative tools&lt;/h2&gt;

&lt;p&gt;Three things stand out to me from this rebrand.&lt;/p&gt;

&lt;p&gt;The first is that the days of single-purpose AI tools are ending. A year ago I had separate logins for Midjourney, Magnific, Runway, ElevenLabs, and Adobe Firefly. Now the single-purpose tool is becoming a feature inside a unified platform. Krea, Magnific, and likely the next wave of Adobe releases will all converge on the same canvas pattern: pick a model, generate, refine, ship.&lt;/p&gt;

&lt;p&gt;The second is that bootstrapped AI companies are quietly outperforming the venture-backed cohort. Magnific is profitable. Most of the AI tools that show up in headlines are not. When the inevitable funding correction happens, the survivors will be the ones who built a business that works at unit-economics level. This is also why I keep paying for Magnific instead of chasing every new launch.&lt;/p&gt;

&lt;p&gt;The third is that the no-collar framing is going to enter the conversation whether we like the phrase or not. The companies pricing AI tools are betting that solo operators with taste will spend 79 EUR per month on a creative stack that used to cost 600 EUR through Adobe and three other vendors. If they are right, the floor for what a one-person studio can produce just dropped a level, and the ceiling for what a designer needs to know just lowered with it. The flip side is that the ones who do not adapt will compete with the ones who did, and that competition shows up in client work fast. We covered this shift in &lt;a href="https://dev.to/blogs/lab/how-i-run-a-15-repo-studio-from-one-claude-md-file"&gt;How I Run a 15-Repo Studio From One CLAUDE.md File&lt;/a&gt;, which is the operating manual for that kind of solo setup.&lt;/p&gt;

&lt;p&gt;If you want to see the actual frontend and developer choices behind a brand that runs on this stack, the &lt;a href="https://dev.to/pages/studio"&gt;RAXXO Studio overview&lt;/a&gt; is the closest thing I have to a cheat sheet.&lt;/p&gt;

&lt;h2&gt;Bottom line&lt;/h2&gt;

&lt;p&gt;Freepik becoming Magnific is not just a logo swap. It is the moment a 15-year-old asset library publicly committed to being an AI creative platform. The product is better, the pricing is more honest than the alternatives, and the affiliate code my readers have been clicking for months still pays out at the same rate.&lt;/p&gt;

&lt;p&gt;Solo studios that already built around the Freepik stack do not need to do anything except notice the change. For studios that have been waiting for a serious Adobe alternative, this is the closest I have seen in 20 years of design work. If you want to test it, the trial covers the full unified stack, and the &lt;a href="https://referral.magnific.com/mQMIvsh" rel="noopener noreferrer"&gt;Magnific entry point&lt;/a&gt; is where I would start. The next article I am publishing today walks through the upscaler comparison in more detail, and the one after that covers what Microsoft Agent 365 means for studios that pair this kind of creative stack with Claude on the operations side.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>productivity</category>
      <category>claudecode</category>
      <category>automation</category>
    </item>
  </channel>
</rss>
