<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: BeanBean</title>
    <description>The latest articles on Forem by BeanBean (@bean_bean).</description>
    <link>https://forem.com/bean_bean</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3849323%2Ff5585719-7c19-4ce0-a6dd-119f5e401fd4.png</url>
      <title>Forem: BeanBean</title>
      <link>https://forem.com/bean_bean</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/bean_bean"/>
    <language>en</language>
    <item>
      <title>How to Handle Background Jobs with BullMQ and Redis in Node.js (2026)</title>
      <dc:creator>BeanBean</dc:creator>
      <pubDate>Thu, 23 Apr 2026 05:00:01 +0000</pubDate>
      <link>https://forem.com/bean_bean/how-to-handle-background-jobs-with-bullmq-and-redis-in-nodejs-2026-k28</link>
      <guid>https://forem.com/bean_bean/how-to-handle-background-jobs-with-bullmq-and-redis-in-nodejs-2026-k28</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Originally published on &lt;a href="https://nextfuture.io.vn/blog/how-to-handle-background-jobs-with-bullmq-and-redis-in-nodejs-2026" rel="noopener noreferrer"&gt;NextFuture&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  The problem
&lt;/h2&gt;

&lt;p&gt;When a user action triggers something slow — sending an email, calling a third-party API, resizing an image — blocking the HTTP response until it finishes is the wrong move. The naive approach, &lt;code&gt;await someSlowThing()&lt;/code&gt; inside the route handler, does exactly that: the request hangs until the slow work completes, and latency balloons under load. You need a queue: accept the request instantly, hand the work to a background worker, and process it reliably with retries. &lt;a href="https://docs.bullmq.io/" rel="noopener noreferrer"&gt;BullMQ&lt;/a&gt; on top of Redis is the production-grade answer for Node.js — it handles retries, concurrency, delayed jobs, and cron scheduling out of the box.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Node.js 22+&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Redis 7+ (local Docker: &lt;code&gt;docker run -p 6379:6379 redis:7-alpine&lt;/code&gt;)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;TypeScript 5 (optional but assumed)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;npm install bullmq ioredis&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Env var: &lt;code&gt;REDIS_URL=redis://localhost:6379&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Step 1: Create a shared queue
&lt;/h2&gt;

&lt;p&gt;A &lt;code&gt;Queue&lt;/code&gt; is the entry point — you add jobs here from anywhere in your app (API routes, webhooks, crons):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// queue/email-queue.ts&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;Queue&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;bullmq&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;IORedis&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;ioredis&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;connection&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;IORedis&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;REDIS_URL&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="na"&gt;maxRetriesPerRequest&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="c1"&gt;// required by BullMQ&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;emailQueue&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Queue&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;email&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;connection&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="c1"&gt;// Add a job: fire-and-forget from your API handler&lt;/span&gt;
&lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;emailQueue&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;add&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
  &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;send-welcome&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;to&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;user@example.com&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Alice&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;attempts&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;backoff&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;exponential&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;delay&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;2000&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
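&lt;p&gt;The &lt;code&gt;backoff&lt;/code&gt; option above means retry &lt;em&gt;n&lt;/em&gt; (1-based) waits roughly &lt;code&gt;delay * 2^(n-1)&lt;/code&gt; ms. A quick sketch of the resulting schedule (&lt;code&gt;backoffSchedule&lt;/code&gt; is an illustrative helper, not part of BullMQ's API):&lt;/p&gt;

```typescript
// Approximate BullMQ's built-in exponential backoff: the n-th retry (1-based)
// is delayed by baseDelayMs * 2^(n - 1). Illustrative helper only.
function backoffSchedule(attempts: number, baseDelayMs: number): number[] {
  // `attempts` counts the first try too, so there are attempts - 1 retries
  return Array.from({ length: attempts - 1 }, (_, i) => baseDelayMs * 2 ** i);
}

// attempts: 3, delay: 2000 → the two retries wait 2s, then 4s
console.log(backoffSchedule(3, 2000)); // [ 2000, 4000 ]
```

&lt;p&gt;So with &lt;code&gt;attempts: 3&lt;/code&gt;, a job that keeps failing is retried about 2 s and then 4 s after each failure before landing in the failed set.&lt;/p&gt;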



&lt;h2&gt;
  
  
  Step 2: Define the worker
&lt;/h2&gt;

&lt;p&gt;A &lt;code&gt;Worker&lt;/code&gt; pulls jobs from the queue and processes them. Run this in a separate process (e.g. &lt;code&gt;node worker.js&lt;/code&gt;) so it doesn't share resources with your web server:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// worker/email-worker.ts&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;Worker&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;Job&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;bullmq&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;IORedis&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;ioredis&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;connection&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;IORedis&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;REDIS_URL&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="na"&gt;maxRetriesPerRequest&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;worker&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Worker&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
  &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;email&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;job&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;Job&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;to&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;name&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;job&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="err"&gt;\&lt;/span&gt;&lt;span class="s2"&gt;`[email] Sending welcome to &lt;/span&gt;&lt;span class="se"&gt;\$&lt;/span&gt;&lt;span class="s2"&gt;{to}&lt;/span&gt;&lt;span class="se"&gt;\`&lt;/span&gt;&lt;span class="s2"&gt;);
    // await resend.emails.send({ to, subject: "Welcome!", ... });
    return { sent: true };
  },
  {
    connection,
    concurrency: 5, // process 5 jobs in parallel
  }
);

worker.on("completed", (job) =&amp;gt;
  console.log(&lt;/span&gt;&lt;span class="se"&gt;\`&lt;/span&gt;&lt;span class="s2"&gt;Job &lt;/span&gt;&lt;span class="se"&gt;\$&lt;/span&gt;&lt;span class="s2"&gt;{job.id} completed&lt;/span&gt;&lt;span class="se"&gt;\`&lt;/span&gt;&lt;span class="s2"&gt;)
);
worker.on("failed", (job, err) =&amp;gt;
  console.error(&lt;/span&gt;&lt;span class="se"&gt;\`&lt;/span&gt;&lt;span class="s2"&gt;Job &lt;/span&gt;&lt;span class="se"&gt;\$&lt;/span&gt;&lt;span class="s2"&gt;{job?.id} failed:&lt;/span&gt;&lt;span class="se"&gt;\`&lt;/span&gt;&lt;span class="s2"&gt;, err.message)
);
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 3: Add recurring scheduled jobs
&lt;/h2&gt;

&lt;p&gt;BullMQ's &lt;code&gt;QueueScheduler&lt;/code&gt; was removed in v3 — use &lt;code&gt;queue.upsertJobScheduler&lt;/code&gt; instead for cron jobs:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// Schedule a daily digest job at 07:00 UTC&lt;/span&gt;
&lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;emailQueue&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;upsertJobScheduler&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
  &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;daily-digest&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;            &lt;span class="c1"&gt;// scheduler ID (idempotent)&lt;/span&gt;
  &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;pattern&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;0 7 * * *&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt; &lt;span class="c1"&gt;// cron expression&lt;/span&gt;
  &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;send-digest&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;data&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;daily&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="na"&gt;opts&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;attempts&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
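&lt;p&gt;The &lt;code&gt;pattern&lt;/code&gt; is a standard five-field cron expression (minute, hour, day of month, month, day of week). A tiny helper makes &lt;code&gt;"0 7 * * *"&lt;/code&gt; easier to read (&lt;code&gt;cronFields&lt;/code&gt; is illustrative only; BullMQ parses patterns internally):&lt;/p&gt;

```typescript
// Label the five fields of a standard cron expression.
// Illustrative helper, not a BullMQ API.
function cronFields(pattern: string) {
  const [minute, hour, dayOfMonth, month, dayOfWeek] = pattern.trim().split(/\s+/);
  return { minute, hour, dayOfMonth, month, dayOfWeek };
}

console.log(cronFields("0 7 * * *"));
// { minute: '0', hour: '7', dayOfMonth: '*', month: '*', dayOfWeek: '*' }
// i.e. minute 0 of hour 7, every day of every month → daily at 07:00
```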



&lt;h2&gt;
  
  
  Full working example
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// ─── queue/index.ts ──────────────────────────────────────────────────────────&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;Queue&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;bullmq&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;IORedis&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;ioredis&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;connection&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;IORedis&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;REDIS_URL&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="na"&gt;maxRetriesPerRequest&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;emailQueue&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Queue&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;email&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;connection&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;scheduleJobs&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;emailQueue&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;upsertJobScheduler&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;daily-digest&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;pattern&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;0 7 * * *&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;send-digest&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;data&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;daily&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt; &lt;span class="na"&gt;opts&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;attempts&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="c1"&gt;// ─── worker/index.ts ─────────────────────────────────────────────────────────&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;Worker&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;Job&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;bullmq&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;IORedis&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;ioredis&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;connection&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;IORedis&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;REDIS_URL&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="na"&gt;maxRetriesPerRequest&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;processEmail&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;job&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;Job&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt; &lt;span class="nb"&gt;Promise&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;to&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="kd"&gt;type&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;job&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

  &lt;span class="k"&gt;switch &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;job&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;case&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;send-welcome&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
      &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="err"&gt;\&lt;/span&gt;&lt;span class="s2"&gt;`Sending welcome email to &lt;/span&gt;&lt;span class="se"&gt;\$&lt;/span&gt;&lt;span class="s2"&gt;{to} (&lt;/span&gt;&lt;span class="se"&gt;\$&lt;/span&gt;&lt;span class="s2"&gt;{name})&lt;/span&gt;&lt;span class="se"&gt;\`&lt;/span&gt;&lt;span class="s2"&gt;);
      // await sendWelcomeEmail(to, name);
      break;
    case "send-digest":
      console.log(&lt;/span&gt;&lt;span class="se"&gt;\`&lt;/span&gt;&lt;span class="s2"&gt;Sending daily digest (type: &lt;/span&gt;&lt;span class="se"&gt;\$&lt;/span&gt;&lt;span class="s2"&gt;{type})&lt;/span&gt;&lt;span class="se"&gt;\`&lt;/span&gt;&lt;span class="s2"&gt;);
      // await sendDigest();
      break;
    default:
      throw new Error(&lt;/span&gt;&lt;span class="se"&gt;\`&lt;/span&gt;&lt;span class="s2"&gt;Unknown job: &lt;/span&gt;&lt;span class="se"&gt;\$&lt;/span&gt;&lt;span class="s2"&gt;{job.name}&lt;/span&gt;&lt;span class="se"&gt;\`&lt;/span&gt;&lt;span class="s2"&gt;);
  }

  return { sent: true };
}

const worker = new Worker("email", processEmail, {
  connection,
  concurrency: 5,
  removeOnComplete: { count: 100 },
  removeOnFail: { count: 50 },
});

worker.on("completed", (job) =&amp;gt; console.log(&lt;/span&gt;&lt;span class="se"&gt;\`&lt;/span&gt;&lt;span class="s2"&gt;✓ &lt;/span&gt;&lt;span class="se"&gt;\$&lt;/span&gt;&lt;span class="s2"&gt;{job.id} &lt;/span&gt;&lt;span class="se"&gt;\$&lt;/span&gt;&lt;span class="s2"&gt;{job.name}&lt;/span&gt;&lt;span class="se"&gt;\`&lt;/span&gt;&lt;span class="s2"&gt;));
worker.on("failed", (job, err) =&amp;gt; console.error(&lt;/span&gt;&lt;span class="se"&gt;\`&lt;/span&gt;&lt;span class="s2"&gt;✗ &lt;/span&gt;&lt;span class="se"&gt;\$&lt;/span&gt;&lt;span class="s2"&gt;{job?.id}:&lt;/span&gt;&lt;span class="se"&gt;\`&lt;/span&gt;&lt;span class="s2"&gt;, err.message));

// ─── api/enqueue/route.ts (Next.js App Router) ───────────────────────────────
import { NextRequest, NextResponse } from "next/server";
import { emailQueue } from "@/queue";

export const dynamic = "force-dynamic";

export async function POST(req: NextRequest) {
  const { to, name } = await req.json();
  if (!to) return NextResponse.json({ error: "to required" }, { status: 400 });

  const job = await emailQueue.add(
    "send-welcome",
    { to, name },
    { attempts: 3, backoff: { type: "exponential", delay: 2000 } }
  );

  return NextResponse.json({ jobId: job.id }, { status: 202 });
}
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Testing it
&lt;/h2&gt;

&lt;p&gt;Start the worker in one terminal (&lt;code&gt;npx tsx worker/index.ts&lt;/code&gt;), then POST a job:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;-X&lt;/span&gt; POST http://localhost:3000/api/enqueue &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-H&lt;/span&gt; &lt;span class="s2"&gt;"Content-Type: application/json"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="s1"&gt;'{"to":"test@example.com","name":"Alice"}'&lt;/span&gt;
&lt;span class="c"&gt;# → {"jobId":"1"}&lt;/span&gt;
&lt;span class="c"&gt;# Worker terminal: ✓ 1 send-welcome&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
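&lt;p&gt;For local development it helps to script both processes; a sketch of the npm scripts, assuming &lt;code&gt;tsx&lt;/code&gt; and the file layout above (script names are illustrative):&lt;/p&gt;

```json
{
  "scripts": {
    "dev": "next dev",
    "worker": "tsx watch worker/index.ts"
  }
}
```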



&lt;h2&gt;
  
  
  Troubleshooting
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;&lt;code&gt;maxRetriesPerRequest must be null&lt;/code&gt;&lt;/strong&gt;: BullMQ requires this IORedis option explicitly — add &lt;code&gt;maxRetriesPerRequest: null&lt;/code&gt; to your connection config.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Jobs stuck in &lt;code&gt;waiting&lt;/code&gt;&lt;/strong&gt;: No worker is connected to the queue. Make sure the worker process is running and connected to the same Redis instance.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Scheduler not firing&lt;/strong&gt;: &lt;code&gt;upsertJobScheduler&lt;/code&gt; requires BullMQ v5+. Check &lt;code&gt;npm ls bullmq&lt;/code&gt; and upgrade if needed.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;{"&lt;a class="mentioned-user" href="https://dev.to/context"&gt;@context&lt;/a&gt;":"&lt;a href="https://schema.org%22,%22@type%22:%22HowTo%22,%22name%22:%22How" rel="noopener noreferrer"&gt;https://schema.org","@type":"HowTo","name":"How&lt;/a&gt; to Handle Background Jobs with BullMQ and Redis in Node.js (2026)","step":[{"@type":"HowToStep","position":1,"name":"Create a shared queue","text":"Instantiate a BullMQ Queue with an IORedis connection and add jobs with retry config."},{"@type":"HowToStep","position":2,"name":"Define the worker","text":"Run a BullMQ Worker in a separate process with a processor function and concurrency setting."},{"@type":"HowToStep","position":3,"name":"Add recurring scheduled jobs","text":"Use queue.upsertJobScheduler with a cron pattern to schedule repeating background tasks."}]}&lt;/p&gt;




&lt;p&gt;&lt;em&gt;This article was originally published on &lt;a href="https://nextfuture.io.vn" rel="noopener noreferrer"&gt;NextFuture&lt;/a&gt;. Follow us for more fullstack &amp;amp; AI engineering content.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>fullstack</category>
      <category>ai</category>
      <category>webdev</category>
      <category>javascript</category>
    </item>
    <item>
      <title>gpt-image-2 API: ship 2K AI images in Next.js for $0.21 (2026)</title>
      <dc:creator>BeanBean</dc:creator>
      <pubDate>Wed, 22 Apr 2026 23:00:00 +0000</pubDate>
      <link>https://forem.com/bean_bean/gpt-image-2-api-ship-2k-ai-images-in-nextjs-for-021-2026-17g6</link>
      <guid>https://forem.com/bean_bean/gpt-image-2-api-ship-2k-ai-images-in-nextjs-for-021-2026-17g6</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Originally published on &lt;a href="https://nextfuture.io.vn/blog/gpt-image-2-api-ship-2k-ai-images-in-nextjs-for-021-2026" rel="noopener noreferrer"&gt;NextFuture&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  What's new this week
&lt;/h2&gt;

&lt;p&gt;OpenAI shipped &lt;a href="https://openai.com/index/introducing-chatgpt-images-2-0/" rel="noopener noreferrer"&gt;ChatGPT Images 2.0&lt;/a&gt; on April 21, 2026, exposing the new &lt;code&gt;gpt-image-2&lt;/code&gt; model in the API, Codex, and ChatGPT on the same day. The model renders up to 2,000 pixels on the long edge, supports seven aspect ratios from 3:1 to 1:3, and produces up to 8 coherent images per call with the same characters and objects preserved across the batch. A new &lt;em&gt;thinking mode&lt;/em&gt; reasons about layout and typography before rendering — which is why gpt-image-2 now handles multilingual text, infographics, slides, and maps that gpt-image-1 used to mangle. TechCrunch called the text rendering "surprisingly good" and the Image Arena leaderboard currently ranks it #1 across every category. The production-tracked alias &lt;code&gt;chatgpt-image-latest&lt;/code&gt; rolls updates forward automatically; pin to &lt;code&gt;gpt-image-2&lt;/code&gt; if you want a fixed version.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why it matters for builders
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Indie makers:&lt;/strong&gt; you can skip the Midjourney → Figma dance for launch assets. Before: generate a square hero in Midjourney, hand-edit typography in Figma, upscale. After: one &lt;code&gt;gpt-image-2&lt;/code&gt; call returns an on-brand landscape hero with legible headline text at 2K — ready to paste into your marketing page. Eight-image batches turn A/B testing your hero copy into a single API call instead of eight prompt iterations.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Web engineers:&lt;/strong&gt; product visuals no longer need a CMS upload flow. Before: designer exports PNG, uploads to S3, copy-pastes the URL into a CMS field. After: a Next.js server action takes the product title, calls &lt;code&gt;images.generate&lt;/code&gt;, streams the base64 PNG straight into a &lt;code&gt;next/image&lt;/code&gt; tag or Vercel Blob. You get on-demand blog covers, og:image defaults, and placeholder product photos from one endpoint.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AI engineers:&lt;/strong&gt; demos that need synthetic screenshots or diagrams stop blocking on design tickets. Before: "let's Photoshop a fake dashboard for the pitch deck." After: one prompt — "a SaaS dashboard showing churn dropping from 8% to 3% over six months, labels in English and Vietnamese, dark theme" — returns a usable PNG in roughly 7 seconds. RAG and eval pipelines that need grounded visual artifacts can now generate them deterministically with a fixed &lt;code&gt;seed&lt;/code&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Hands-on: try it in under 15 minutes
&lt;/h2&gt;

&lt;p&gt;Requirements: Node 20+, the OpenAI Node SDK (&lt;code&gt;npm i openai@^4&lt;/code&gt;), and an API key with image generation enabled. Drop this into a Next.js 16 server action at &lt;code&gt;app/actions/image.ts&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;use server&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;OpenAI&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;openai&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;put&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;@vercel/blob&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;client&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;OpenAI&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;generateCover&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;prompt&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;res&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;images&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;generate&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
    &lt;span class="na"&gt;model&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;gpt-image-2&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="nx"&gt;prompt&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;size&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;1536x1024&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;   &lt;span class="c1"&gt;// landscape; up to 2K long-edge supported&lt;/span&gt;
    &lt;span class="na"&gt;quality&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;high&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;      &lt;span class="c1"&gt;// "low" | "medium" | "high"&lt;/span&gt;
    &lt;span class="na"&gt;n&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;                 &lt;span class="c1"&gt;// bump to 8 for a coherent batch&lt;/span&gt;
    &lt;span class="c1"&gt;// @ts-expect-error — new 2026 param, SDK types lag&lt;/span&gt;
    &lt;span class="na"&gt;thinking&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;auto&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="p"&gt;});&lt;/span&gt;

  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;b64&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="nx"&gt;b64_json&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;url&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;put&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="s2"&gt;`covers/&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nb"&gt;Date&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;now&lt;/span&gt;&lt;span class="p"&gt;()}&lt;/span&gt;&lt;span class="s2"&gt;.png`&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="nx"&gt;Buffer&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;from&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;b64&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;base64&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;access&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;public&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;contentType&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;image/png&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;url&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Call it from an RSC page: &lt;code&gt;const url = await generateCover("Dark hero for a Next.js tutorial, laptop with glowing keyboard, title 'Ship faster'");&lt;/code&gt;. Costs: OpenAI bills images as tokens — $5/M input text, $10/M output text, $8/M input image, $30/M output image. A 1024×1024 high-quality render lands at ~$0.21; a batch of four is ~$0.84. Thinking mode bills extra reasoning tokens, so a strict layout brief (four-column infographic, Vietnamese headings, exact pricing) costs more than a loose scene — budget for it. Free-tier ChatGPT users only get instant mode; thinking, 8-image batches, and web-search grounding require Plus/Pro/Business or any paid API tier. For subject continuity across a batch — four angles of a product, a four-panel comic — set &lt;code&gt;n&lt;/code&gt; to the batch size (up to 8) and describe each variant inline; the model keeps subjects stable, which gpt-image-1 could not.&lt;/p&gt;
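&lt;p&gt;A sketch of that arithmetic as a helper. The per-million rates are the ones quoted above; the token counts per image are assumptions, so read the real numbers from the API response's usage field before trusting a budget:&lt;/p&gt;

```typescript
// Cost estimator built on the per-million-token rates quoted in this post.
// The 7,000-output-token figure below is an assumption implied by the
// ~$0.21 quote for a 1024x1024 high-quality render, not a documented number.
const RATES_PER_TOKEN = {
  inputText: 5 / 1_000_000,
  outputText: 10 / 1_000_000,
  inputImage: 8 / 1_000_000,
  outputImage: 30 / 1_000_000,
};

type Usage = Partial<Record<keyof typeof RATES_PER_TOKEN, number>>;

function estimateCostUSD(usage: Usage): number {
  return (Object.keys(RATES_PER_TOKEN) as (keyof typeof RATES_PER_TOKEN)[])
    .reduce((sum, k) => sum + (usage[k] ?? 0) * RATES_PER_TOKEN[k], 0);
}

const perImage = estimateCostUSD({ inputText: 100, outputImage: 7_000 }); // ≈ $0.21
```

&lt;p&gt;Multiply by &lt;code&gt;n&lt;/code&gt; for batches; thinking mode adds reasoning tokens on top, which is exactly why strict layout briefs cost more.&lt;/p&gt;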

&lt;h2&gt;
  
  
  How it compares to alternatives
&lt;/h2&gt;

&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;gpt-image-2&lt;/th&gt;
&lt;th&gt;Gemini 2.5 Flash Image&lt;/th&gt;
&lt;th&gt;Flux 1.1 Pro&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Starts at&lt;/td&gt;
&lt;td&gt;~$0.21 / 1024² high-quality render&lt;/td&gt;
&lt;td&gt;$0.039 / image&lt;/td&gt;
&lt;td&gt;$0.055 / image&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Best for&lt;/td&gt;
&lt;td&gt;Text-heavy infographics, slides, multilingual signage&lt;/td&gt;
&lt;td&gt;Conversational edits, cheap iteration inside Gemini API&lt;/td&gt;
&lt;td&gt;Photoreal hero shots, stylistic control&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Key limit&lt;/td&gt;
&lt;td&gt;2K max on long edge; thinking mode billed extra&lt;/td&gt;
&lt;td&gt;Weaker at small-font text rendering&lt;/td&gt;
&lt;td&gt;No reasoning step; legibility weak on dense UI copy&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Integration&lt;/td&gt;
&lt;td&gt;&lt;code&gt;openai&lt;/code&gt; SDK, one endpoint, base64 or URL response&lt;/td&gt;
&lt;td&gt;&lt;code&gt;@google/genai&lt;/code&gt; SDK, same call path as text&lt;/td&gt;
&lt;td&gt;Replicate / Fal / BFL REST APIs&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;

&lt;h2&gt;
  
  
  Try it this week
&lt;/h2&gt;

&lt;p&gt;Pick one piece of marketing art on your site — a blog cover, a pricing-page illustration, an empty-state screenshot — and regenerate it with &lt;code&gt;gpt-image-2&lt;/code&gt; in a Next.js server action tonight. Measure three things: total USD, first-render latency, and whether the text stays legible at 2×. If the answer is "cheaper than an hour of Figma," wire it into your publish pipeline as an auto-cover generator. For the audio side of the same UX pattern, see how &lt;a href="https://nextfuture.io.vn/blog/gemini-31-flash-tts-for-nextjs-ship-voice-ux-in-15-min-2026" rel="noopener noreferrer"&gt;Gemini 3.1 Flash TTS ships voice UX in 15 minutes&lt;/a&gt;; if you want the coding agent that now calls this endpoint natively, pair it with the &lt;a href="https://nextfuture.io.vn/blog/openai-codex-april-2026-update-computer-use-memory-plugins-review" rel="noopener noreferrer"&gt;OpenAI Codex April 2026 update&lt;/a&gt;.&lt;/p&gt;
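&lt;p&gt;For the latency number, a generic wrapper is enough. A minimal sketch that assumes nothing beyond an async function; &lt;code&gt;generateCover&lt;/code&gt; is the server action from the snippet above:&lt;/p&gt;

```typescript
// Wall-clock latency probe for any async call (e.g. the generateCover
// server action above). Logs the duration and passes the result through.
async function timed<T>(label: string, fn: () => Promise<T>): Promise<T> {
  const start = performance.now();
  const result = await fn();
  console.log(`${label}: ${Math.round(performance.now() - start)}ms`);
  return result;
}

// const url = await timed("first-render", () => generateCover("Dark hero ..."));
```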




&lt;p&gt;&lt;em&gt;This article was originally published on &lt;a href="https://nextfuture.io.vn" rel="noopener noreferrer"&gt;NextFuture&lt;/a&gt;. Follow us for more fullstack &amp;amp; AI engineering content.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>fullstack</category>
      <category>ai</category>
      <category>webdev</category>
      <category>javascript</category>
    </item>
    <item>
      <title>How to deploy a Hono API to Railway with Postgres in 10 minutes (2026)</title>
      <dc:creator>BeanBean</dc:creator>
      <pubDate>Wed, 22 Apr 2026 05:00:01 +0000</pubDate>
      <link>https://forem.com/bean_bean/how-to-deploy-a-hono-api-to-railway-with-postgres-in-10-minutes-2026-4bcj</link>
      <guid>https://forem.com/bean_bean/how-to-deploy-a-hono-api-to-railway-with-postgres-in-10-minutes-2026-4bcj</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Originally published on &lt;a href="https://nextfuture.io.vn/blog/how-to-deploy-a-hono-api-to-railway-with-postgres-in-10-minutes-2026" rel="noopener noreferrer"&gt;NextFuture&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  The problem
&lt;/h2&gt;

&lt;p&gt;Hono is the fastest Node/Bun-compatible HTTP framework available in 2026 — &lt;a href="https://hono.dev/docs/getting-started/nodejs" rel="noopener noreferrer"&gt;its benchmark suite regularly outperforms Express and Fastify by a factor of three on raw throughput&lt;/a&gt;. The deployment story, however, has historically been messy: you either maintain a custom Dockerfile, wrestle with platform-specific adapters, or fight cold-start limits on serverless edge runtimes that do not support long-lived database connections. &lt;a href="https://dev.to/api/affiliate/click?slug=railway&amp;amp;post=how-to-deploy-hono-api-to-railway-with-postgres-in-10-minutes-2026"&gt;Railway&lt;/a&gt; solves this by treating your Hono service, its Postgres database, and any other add-ons as first-class siblings in one project, injecting connection strings automatically and handling the build pipeline without extra configuration.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Node.js 22+ (confirm with &lt;code&gt;node -v&lt;/code&gt;) or Bun 1.1+&lt;/li&gt;
&lt;li&gt;npm 10+ or Bun package manager&lt;/li&gt;
&lt;li&gt;A &lt;a href="https://dev.to/api/affiliate/click?slug=railway&amp;amp;post=how-to-deploy-hono-api-to-railway-with-postgres-in-10-minutes-2026"&gt;Railway&lt;/a&gt; account — free tier includes $5/month credit, no credit card required&lt;/li&gt;
&lt;li&gt;Railway CLI: &lt;code&gt;npm i -g @railway/cli&lt;/code&gt;, then &lt;code&gt;railway login&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;drizzle-kit&lt;/code&gt; for schema management — installed in Step 2&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Step 1: Bootstrap the Hono project
&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;create-hono&lt;/code&gt; scaffolds a production-ready tsconfig, a &lt;code&gt;dev&lt;/code&gt; watch command via &lt;code&gt;tsx&lt;/code&gt;, and a minimal build pipeline in one command — choose the &lt;em&gt;nodejs&lt;/em&gt; adapter when prompted, because it is the only option that supports persistent Postgres connections on Railway's always-on containers. Edge and Cloudflare Workers adapters do not allow stateful TCP sockets, so they cannot hold a Postgres connection pool.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npm create hono@latest my-api
&lt;span class="c"&gt;# Select "nodejs" template at the prompt&lt;/span&gt;
&lt;span class="nb"&gt;cd &lt;/span&gt;my-api
npm &lt;span class="nb"&gt;install&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 2: Add Drizzle ORM and the Postgres driver
&lt;/h2&gt;

&lt;p&gt;Drizzle's zero-overhead query builder compiles directly to parameterized SQL with no reflection or proxy magic, which means fast cold starts and queries that are straightforward to audit in production logs. The &lt;code&gt;postgres&lt;/code&gt; package is the canonical JavaScript driver that Drizzle's &lt;code&gt;drizzle-orm/postgres-js&lt;/code&gt; dialect targets — it handles connection pooling, backpressure, and prepared statements natively without a separate pool configuration file.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npm &lt;span class="nb"&gt;install &lt;/span&gt;drizzle-orm postgres dotenv
npm &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-D&lt;/span&gt; drizzle-kit tsx @types/node
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 3: Define the schema and wire the routes
&lt;/h2&gt;

&lt;p&gt;Create &lt;code&gt;src/db/schema.ts&lt;/code&gt; using Drizzle's &lt;code&gt;pgTable&lt;/code&gt; helper — column definitions carry full TypeScript inference so route handlers get autocomplete on query results without manual type assertions. Your main &lt;code&gt;src/index.ts&lt;/code&gt; imports both the Drizzle client and the schema, keeping the database layer a plain import rather than a global singleton that leaks across test files.&lt;/p&gt;
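&lt;p&gt;A minimal &lt;code&gt;src/db/schema.ts&lt;/code&gt; for the &lt;code&gt;items&lt;/code&gt; table used by the routes in the full example might look like this (a sketch; adjust columns to your domain):&lt;/p&gt;

```typescript
// src/db/schema.ts -- declarative Drizzle schema; `drizzle-kit push` reads
// this file to create the table, and route handlers import it for typed queries.
import { pgTable, uuid, text, timestamp } from "drizzle-orm/pg-core";

export const items = pgTable("items", {
  id: uuid("id").defaultRandom().primaryKey(),
  name: text("name").notNull(),
  createdAt: timestamp("created_at").defaultNow().notNull(),
});

// Row type inferred from the table definition, no manual assertions needed.
export type Item = typeof items.$inferSelect;
```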

&lt;h2&gt;
  
  
  Step 4: Provision Railway Postgres
&lt;/h2&gt;

&lt;p&gt;In the Railway dashboard, open your project, choose &lt;em&gt;New Service&lt;/em&gt;, then &lt;em&gt;Database&lt;/em&gt;, then &lt;em&gt;PostgreSQL&lt;/em&gt;. Railway provisions a managed Postgres 16 instance and injects &lt;code&gt;DATABASE_URL&lt;/code&gt; as an environment variable into every service in the same project automatically — no secrets panel, no copy-pasting of connection strings, no risk of committing credentials. After linking your repo with &lt;code&gt;railway link&lt;/code&gt;, push the Drizzle schema against the remote database before deploying code that depends on it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;railway login
railway &lt;span class="nb"&gt;link
&lt;/span&gt;railway run npx drizzle-kit push
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 5: Deploy
&lt;/h2&gt;

&lt;p&gt;Set the &lt;em&gt;Start Command&lt;/em&gt; in Railway's service settings to &lt;code&gt;node dist/index.js&lt;/code&gt;, or add a &lt;code&gt;Procfile&lt;/code&gt; at the repo root with &lt;code&gt;web: node dist/index.js&lt;/code&gt; — Railway reads either. Running &lt;code&gt;railway up --detach&lt;/code&gt; ships the current directory, triggers the Nixpacks build, and prints the public HTTPS URL once the health check passes; total time on a cold push is typically under 90 seconds.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;railway up &lt;span class="nt"&gt;--detach&lt;/span&gt;
&lt;span class="c"&gt;# Output: Deployment successful → https://my-api-production.up.railway.app&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Full working example
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// src/index.ts — complete Hono + Drizzle API, deploy-ready for Railway&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;Hono&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;hono&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;serve&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;@hono/node-server&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;drizzle&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;drizzle-orm/postgres-js&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;postgres&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;postgres&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;pgTable&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;uuid&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;text&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;timestamp&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;drizzle-orm/pg-core&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;eq&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;drizzle-orm&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;dotenv/config&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;items&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;pgTable&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;items&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nf"&gt;uuid&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;id&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;defaultRandom&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;primaryKey&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nf"&gt;text&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;name&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;notNull&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
  &lt;span class="na"&gt;createdAt&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nf"&gt;timestamp&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;created_at&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;defaultNow&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;notNull&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;url&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;DATABASE_URL&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="nx"&gt;url&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;throw&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Error&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;DATABASE_URL is not set&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;sql&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;postgres&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;url&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;max&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;5&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;db&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;drizzle&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;sql&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;schema&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;items&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;app&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Hono&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

&lt;span class="nx"&gt;app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;/health&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;c&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;c&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;json&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;status&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;ok&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="p"&gt;}));&lt;/span&gt;

&lt;span class="nx"&gt;app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;/items&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;c&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;rows&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;select&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="k"&gt;from&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;items&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;orderBy&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;items&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;createdAt&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;c&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;json&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;data&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;rows&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="nx"&gt;app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;post&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;/items&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;c&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;body&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;c&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;req&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;json&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
  &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="nx"&gt;body&lt;/span&gt;&lt;span class="p"&gt;?.&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="k"&gt;typeof&lt;/span&gt; &lt;span class="nx"&gt;body&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt; &lt;span class="o"&gt;!==&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;string&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;c&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;json&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;error&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;name is required&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt; &lt;span class="mi"&gt;400&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;row&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;db&lt;/span&gt;
    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;insert&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;items&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;values&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;body&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;trim&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;})&lt;/span&gt;
    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;returning&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;c&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;json&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;data&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;row&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt; &lt;span class="mi"&gt;201&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="nx"&gt;app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;/items/:id&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;c&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;row&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;db&lt;/span&gt;
    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;select&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;from&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;items&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;where&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;eq&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;items&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;c&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;req&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;param&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;id&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)));&lt;/span&gt;
  &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="nx"&gt;row&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;c&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;json&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;error&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;not found&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt; &lt;span class="mi"&gt;404&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;c&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;json&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;data&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;row&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;port&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Number&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;PORT&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="mi"&gt;3000&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`Server listening on port &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;port&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="nf"&gt;serve&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;fetch&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;fetch&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;port&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Prefer a managed option? &lt;a href="https://dev.to/api/affiliate/click?slug=railway&amp;amp;post=how-to-deploy-hono-api-to-railway-with-postgres-in-10-minutes-2026"&gt;&lt;strong&gt;Try Railway&lt;/strong&gt;&lt;/a&gt; — deploy fullstack apps with Postgres, Redis, and auto-SSL in a few clicks, with $5/month free credit and no credit card required.&lt;/p&gt;

&lt;h2&gt;
  
  
  Testing it
&lt;/h2&gt;

&lt;p&gt;Once &lt;code&gt;railway up&lt;/code&gt; completes, copy the URL from the CLI output and run the three commands below. The health endpoint confirms the process started; the POST verifies Postgres connectivity end-to-end; the final GET confirms the row persisted across requests — all three should respond with 2xx status and JSON bodies.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;BASE&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;https://my-api-production.up.railway.app
curl &lt;span class="nv"&gt;$BASE&lt;/span&gt;/health
curl &lt;span class="nt"&gt;-X&lt;/span&gt; POST &lt;span class="nv"&gt;$BASE&lt;/span&gt;/items &lt;span class="nt"&gt;-H&lt;/span&gt; &lt;span class="s2"&gt;"Content-Type: application/json"&lt;/span&gt; &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="s1"&gt;'{"name":"hello railway"}'&lt;/span&gt;
curl &lt;span class="nv"&gt;$BASE&lt;/span&gt;/items
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Troubleshooting
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Build fails: cannot find module postgres&lt;/strong&gt; — Ensure &lt;code&gt;postgres&lt;/code&gt; is under &lt;code&gt;dependencies&lt;/code&gt; (not &lt;code&gt;devDependencies&lt;/code&gt;) in &lt;code&gt;package.json&lt;/code&gt;; Railway installs production deps only by default (override via &lt;code&gt;NIXPACKS_NODE_ENVIRONMENT=development&lt;/code&gt;).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;DATABASE_URL undefined at runtime&lt;/strong&gt; — The Hono service and the Postgres plugin must be in the same Railway project; verify with &lt;code&gt;railway variables&lt;/code&gt; before deploying.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Health check timeout and deployment rollback&lt;/strong&gt; — Railway assigns a dynamic &lt;code&gt;PORT&lt;/code&gt; env var; hardcoding &lt;code&gt;3000&lt;/code&gt; means the health probe never gets a response. Reading &lt;code&gt;process.env.PORT&lt;/code&gt; as shown above is mandatory.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Where to go next
&lt;/h2&gt;

&lt;p&gt;Once the API is live, add a Redis plugin the same way — Railway injects &lt;code&gt;REDIS_URL&lt;/code&gt; automatically, and wiring &lt;code&gt;ioredis&lt;/code&gt; into Hono middleware takes fewer than 10 lines for request-level caching or per-IP rate limiting. For a broader picture of how platform choices — Railway vs. raw VPS vs. Vercel — compare on the cost-versus-DX axis in 2026, the &lt;a href="https://nextfuture.io.vn/blog/the-q1-2026-webai-recap-launches-outages-pricing" rel="noopener noreferrer"&gt;Q1 2026 Web+AI Recap&lt;/a&gt; is the best single read on what actually shifted. The &lt;a href="https://nextfuture.io.vn/blog/top-3-developer-dx-tools-for-shipping-faster-in-2026" rel="noopener noreferrer"&gt;top DX tools for shipping faster in 2026&lt;/a&gt; covers the adjacent tooling — forms, short links, scheduling — that rounds out a modern indie-maker stack built on Railway.&lt;br&gt;
{"&lt;a class="mentioned-user" href="https://dev.to/context"&gt;@context&lt;/a&gt;":"&lt;a href="https://schema.org%22,%22@type%22:%22HowTo%22,%22name%22:%22How" rel="noopener noreferrer"&gt;https://schema.org","@type":"HowTo","name":"How&lt;/a&gt; to deploy a Hono API to Railway with Postgres in 10 minutes (2026)","step":[{"@type":"HowToStep","position":1,"name":"Bootstrap the Hono project","text":"Run npm create hono@latest, select the nodejs adapter at the prompt, then npm install."},{"@type":"HowToStep","position":2,"name":"Add Drizzle ORM and the Postgres driver","text":"Install drizzle-orm, the postgres driver, dotenv, and drizzle-kit as a devDependency."},{"@type":"HowToStep","position":3,"name":"Define the schema and wire the routes","text":"Create a pgTable schema in src/db/schema.ts and import it into your Hono route handlers in src/index.ts."},{"@type":"HowToStep","position":4,"name":"Provision Railway Postgres","text":"Add a PostgreSQL plugin in the Railway dashboard; DATABASE_URL is injected automatically. Run railway run npx drizzle-kit push to apply the schema."},{"@type":"HowToStep","position":5,"name":"Deploy","text":"Set the Start Command to node dist/index.js and run railway up --detach to ship your API and receive a public HTTPS URL."}]}&lt;/p&gt;




&lt;p&gt;&lt;em&gt;This article was originally published on &lt;a href="https://nextfuture.io.vn" rel="noopener noreferrer"&gt;NextFuture&lt;/a&gt;. Follow us for more fullstack &amp;amp; AI engineering content.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>fullstack</category>
      <category>ai</category>
      <category>webdev</category>
      <category>javascript</category>
    </item>
    <item>
      <title>Gemini 3.1 Flash TTS for Next.js: ship voice UX in 15 min (2026)</title>
      <dc:creator>BeanBean</dc:creator>
      <pubDate>Tue, 21 Apr 2026 23:00:00 +0000</pubDate>
      <link>https://forem.com/bean_bean/gemini-31-flash-tts-for-nextjs-ship-voice-ux-in-15-min-2026-8fp</link>
      <guid>https://forem.com/bean_bean/gemini-31-flash-tts-for-nextjs-ship-voice-ux-in-15-min-2026-8fp</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Originally published on &lt;a href="https://nextfuture.io.vn/blog/gemini-31-flash-tts-for-nextjs-ship-voice-ux-in-15-min-2026" rel="noopener noreferrer"&gt;NextFuture&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  What's new this week
&lt;/h2&gt;

&lt;p&gt;On April 15, 2026, Google shipped &lt;a href="https://blog.google/innovation-and-ai/models-and-research/gemini-models/gemini-3-1-flash-tts/" rel="noopener noreferrer"&gt;Gemini 3.1 Flash TTS&lt;/a&gt; as a public preview on AI Studio and Vertex AI. The model ID is &lt;code&gt;gemini-3.1-flash-tts-preview&lt;/code&gt;, and it introduces 200+ inline audio tags (for example &lt;code&gt;[whispers]&lt;/code&gt;, &lt;code&gt;[happy]&lt;/code&gt;, &lt;code&gt;[pause]&lt;/code&gt;), 30 prebuilt voices, native multi-speaker dialogue, and coverage across 70+ languages. The free tier is open for prototyping; paid usage is $1 per million text input tokens and $20 per million audio output tokens — roughly an order of magnitude cheaper than ElevenLabs for the same audio output. Output is 24 kHz mono PCM, returned inline as base64, so there is no webhook dance and no separate voice-studio account to manage.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why it matters for builders
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Web engineer.&lt;/strong&gt; Previously, adding a "listen to this article" button to a Next.js blog meant wiring ElevenLabs or patching &lt;code&gt;tts-1-hd&lt;/code&gt; with custom SSML to fix prosody. With Flash TTS, a single &lt;code&gt;generateContent&lt;/code&gt; call and one inline &lt;code&gt;[slow]&lt;/code&gt; or &lt;code&gt;[excited]&lt;/code&gt; tag produces the same emotional pacing — no SSML build step, no separate voice studio, and the audio streams back as base64 PCM you can buffer straight into an &lt;code&gt;&amp;lt;audio&amp;gt;&lt;/code&gt; element. The full call lives in a server action, so API keys stay on the server.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AI engineer.&lt;/strong&gt; Building a voice agent that reads CRM notes aloud used to need two inference passes (LLM then TTS) plus manual speaker diarization. Flash TTS accepts multi-speaker transcripts inline: your LLM emits &lt;code&gt;Joe: ... Jane: ...&lt;/code&gt; and TTS returns one WAV with two distinct voices. Your agent graph loses a node, latency drops by a full network round-trip, and you skip maintaining a separate speaker-labelling prompt that drifts every model upgrade.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Indie maker.&lt;/strong&gt; A Duolingo-style pronunciation app priced at $4/month barely broke even on Azure TTS at roughly $0.05 per lesson. At $20 per million output audio tokens, a 30-second lesson now costs about $0.003 — gross margin holds above 90% on the same $4 tier. Voice is no longer the line item that kills your side project's unit economics, which means TTS-heavy features like audiobook summaries, podcast previews, or accessibility narration finally pencil out on a free-tier SaaS.&lt;/p&gt;

&lt;h2&gt;
  
  
  Hands-on: try it in under 15 minutes
&lt;/h2&gt;

&lt;p&gt;Grab a free API key from &lt;a href="https://aistudio.google.com" rel="noopener noreferrer"&gt;aistudio.google.com&lt;/a&gt;, store it as &lt;code&gt;GEMINI_API_KEY&lt;/code&gt;, then install the SDK:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npm &lt;span class="nb"&gt;install&lt;/span&gt; @google/genai wav
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Minimal Node/TypeScript call wrapped as a Next.js 16 server action:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;use server&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;GoogleGenAI&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;@google/genai&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;ai&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;GoogleGenAI&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;apiKey&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;GEMINI_API_KEY&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;synthesize&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;text&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;voice&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Kore&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;res&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;ai&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;models&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;generateContent&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
    &lt;span class="na"&gt;model&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;gemini-3.1-flash-tts-preview&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;contents&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[{&lt;/span&gt; &lt;span class="na"&gt;parts&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[{&lt;/span&gt; &lt;span class="nx"&gt;text&lt;/span&gt; &lt;span class="p"&gt;}]&lt;/span&gt; &lt;span class="p"&gt;}],&lt;/span&gt;
    &lt;span class="na"&gt;config&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="na"&gt;responseModalities&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;AUDIO&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
      &lt;span class="na"&gt;speechConfig&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="na"&gt;voiceConfig&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;prebuiltVoiceConfig&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;voiceName&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;voice&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
      &lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="p"&gt;});&lt;/span&gt;

  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;b64&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;candidates&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="nx"&gt;content&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;parts&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="nx"&gt;inlineData&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;Buffer&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;from&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;b64&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;base64&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt; &lt;span class="c1"&gt;// 24kHz mono PCM&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;synthesize&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
  &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Say warmly: [slow] Welcome back, Alex. [happy] You crushed this week.&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
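&lt;p&gt;One gotcha: raw PCM will not play in an &lt;code&gt;&amp;lt;audio&amp;gt;&lt;/code&gt; element until it gets a RIFF/WAVE header. A minimal sketch, assuming the documented 24 kHz mono 16-bit output (the &lt;code&gt;wav&lt;/code&gt; package installed above can also write this header for you):&lt;/p&gt;

```typescript
// Wrap raw 16-bit PCM samples in a minimal 44-byte RIFF/WAVE header so the
// buffer plays in an <audio> element or any media player. Defaults match
// the 24 kHz mono 16-bit output described above.
function pcmToWav(pcm: Buffer, sampleRate = 24000, channels = 1, bitDepth = 16): Buffer {
  const blockAlign = (channels * bitDepth) / 8;
  const header = Buffer.alloc(44);
  header.write("RIFF", 0);
  header.writeUInt32LE(36 + pcm.length, 4);      // total chunk size
  header.write("WAVE", 8);
  header.write("fmt ", 12);
  header.writeUInt32LE(16, 16);                  // PCM fmt chunk size
  header.writeUInt16LE(1, 20);                   // audio format: linear PCM
  header.writeUInt16LE(channels, 22);
  header.writeUInt32LE(sampleRate, 24);
  header.writeUInt32LE(sampleRate * blockAlign, 28); // byte rate
  header.writeUInt16LE(blockAlign, 32);
  header.writeUInt16LE(bitDepth, 34);
  header.write("data", 36);
  header.writeUInt32LE(pcm.length, 40);
  return Buffer.concat([header, pcm]);
}
```

&lt;p&gt;Call it on the buffer returned by &lt;code&gt;synthesize()&lt;/code&gt; before serving the bytes as &lt;code&gt;audio/wav&lt;/code&gt;.&lt;/p&gt;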



&lt;p&gt;The inline &lt;code&gt;[slow]&lt;/code&gt; and &lt;code&gt;[happy]&lt;/code&gt; tags steer pacing and emotion mid-sentence — no separate prosody config. Tags must live inside square brackets, separated by text or punctuation: two adjacent tags will error. For a two-person podcast intro via cURL:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="s2"&gt;"https://generativelanguage.googleapis.com/v1beta/models/gemini-3.1-flash-tts-preview:generateContent"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-H&lt;/span&gt; &lt;span class="s2"&gt;"x-goog-api-key: &lt;/span&gt;&lt;span class="nv"&gt;$GEMINI_API_KEY&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-H&lt;/span&gt; &lt;span class="s2"&gt;"Content-Type: application/json"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="s1"&gt;'{
    "contents":[{"parts":[{"text":"TTS between Joe and Jane: Joe: [excited] The feed dropped. Jane: [amused] Took long enough."}]}],
    "generationConfig":{"responseModalities":["AUDIO"]}
  }'&lt;/span&gt; | jq &lt;span class="nt"&gt;-r&lt;/span&gt; &lt;span class="s1"&gt;'.candidates[0].content.parts[0].inlineData.data'&lt;/span&gt; | &lt;span class="nb"&gt;base64&lt;/span&gt; &lt;span class="nt"&gt;--decode&lt;/span&gt; &gt; podcast.pcm
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The decoded audio is raw 16-bit PCM, so tell ffmpeg the input format explicitly (&lt;code&gt;ffmpeg -f s16le -ar 24000 -ac 1 -i in.pcm -b:a 64k out.mp3&lt;/code&gt;) if you need smaller transfer. Preview rate limits follow Gemini Flash defaults (10 RPM on free, 1,000 RPM on paid) — fine for prototyping. For production, queue synthesis in a BullMQ worker and cache finished clips in S3 or R2 keyed by a hash of &lt;code&gt;text + voice + tagSet&lt;/code&gt;; a cache hit rate above 60% on a changelog feature is common. One caveat: preview model IDs have been renamed twice in the Gemini 3.1 family this quarter, so read the exact ID from an env var rather than hard-coding it.&lt;/p&gt;
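&lt;p&gt;The cache key can be sketched in a few lines (assumptions: SHA-256 as the hash, a &lt;code&gt;TTS_MODEL_ID&lt;/code&gt; env var as suggested above, and a hypothetical &lt;code&gt;tts/&lt;/code&gt; prefix for the bucket):&lt;/p&gt;

```typescript
import { createHash } from "node:crypto";

// Derive a stable object-storage key for a synthesized clip so repeat
// requests hit S3/R2 instead of the TTS API. Reading the model ID from an
// env var means a preview rename is a config change, not a redeploy.
// The env var name and key prefix are illustrative, not an official API.
const MODEL_ID = process.env.TTS_MODEL_ID ?? "gemini-3.1-flash-tts-preview";

function clipKey(text: string, voice: string): string {
  const digest = createHash("sha256")
    .update(`${MODEL_ID}\n${voice}\n${text}`) // newline-delimited to avoid collisions
    .digest("hex");
  return `tts/${digest}.wav`;
}
```

&lt;p&gt;Check the bucket for &lt;code&gt;clipKey(text, voice)&lt;/code&gt; before enqueueing a synthesis job; on a hit, serve the stored clip directly.&lt;/p&gt;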

&lt;h2&gt;
  
  
  How it compares to alternatives
&lt;/h2&gt;

&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;Gemini 3.1 Flash TTS&lt;/th&gt;
&lt;th&gt;OpenAI gpt-4o-mini-tts&lt;/th&gt;
&lt;th&gt;ElevenLabs Flash v2.5&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Starts at&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Free tier; $1/M text + $20/M audio tokens paid&lt;/td&gt;
&lt;td&gt;$0.60 per 1M input chars, no free tier&lt;/td&gt;
&lt;td&gt;$5/mo Starter (30k credits)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Best for&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Multilingual, expressive multi-speaker narration&lt;/td&gt;
&lt;td&gt;Low-latency voice replies inside GPT apps&lt;/td&gt;
&lt;td&gt;Cloned brand voices, audiobook production&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Key limit&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Preview only — no SLA, model ID may change before GA&lt;/td&gt;
&lt;td&gt;~8 voices, fewer expressive tags&lt;/td&gt;
&lt;td&gt;Per-character billing scales fast on free-tier SaaS&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Integration&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;@google/genai&lt;/code&gt; SDK, Vertex AI, REST/cURL&lt;/td&gt;
&lt;td&gt;OpenAI SDK, streaming WebSocket&lt;/td&gt;
&lt;td&gt;REST API, WebSocket streaming, native SDK&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;

&lt;h2&gt;
  
  
  Try it this week
&lt;/h2&gt;

&lt;p&gt;Pick one text-heavy screen in your product — an onboarding intro, a weekly changelog entry, a lesson summary — and wire Flash TTS behind a "Play" button. Ship it behind a feature flag so you can A/B the voice UX on 10% of sessions, then compare time-on-page and replay counts; if replay rate clears 15%, keep it and expand to every long-form page. For wider context on where the Gemini stack sits today, read our &lt;a href="https://nextfuture.io.vn/blog/google-gemma-4-review-2026-the-open-model-that-runs-locally-and-beats-closed-apis" rel="noopener noreferrer"&gt;Gemma 4 review&lt;/a&gt; and the &lt;a href="https://nextfuture.io.vn/blog/the-q1-2026-webai-recap-launches-outages-pricing" rel="noopener noreferrer"&gt;Q1 2026 Web+AI recap&lt;/a&gt; for the pricing shifts since January.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;This article was originally published on &lt;a href="https://nextfuture.io.vn" rel="noopener noreferrer"&gt;NextFuture&lt;/a&gt;. Follow us for more fullstack &amp;amp; AI engineering content.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>fullstack</category>
      <category>ai</category>
      <category>webdev</category>
      <category>javascript</category>
    </item>
    <item>
      <title>AI Web Dev Digest — Apr 20 (Evening)</title>
      <dc:creator>BeanBean</dc:creator>
      <pubDate>Tue, 21 Apr 2026 05:00:00 +0000</pubDate>
      <link>https://forem.com/bean_bean/ai-x-web-dev-digest-apr-20-evening-1037</link>
      <guid>https://forem.com/bean_bean/ai-x-web-dev-digest-apr-20-evening-1037</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Originally published on &lt;a href="https://nextfuture.io.vn/blog/ai-web-dev-digest-apr-20-evening" rel="noopener noreferrer"&gt;NextFuture&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Today's briefing
&lt;/h2&gt;

&lt;p&gt;Today's 48-hour window is dominated by agent tooling graduating from hype to hard numbers. Anthropic shipped a brand-new Claude Design preview that generates exportable prototypes from a single prompt, while two long-form deep dives delivered a grounded Q1 reckoning — cataloguing the real costs, capability gaps, and infra pricing shifts that defined the first quarter for fullstack teams. Developer experience tooling rounds out the day as a quiet but practical counterweight to the AI noise.&lt;/p&gt;

&lt;h2&gt;
  
  
  AI Tooling &amp;amp; Agents
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Claude Design enters Labs preview.&lt;/strong&gt; Anthropic's new feature turns a text prompt into exportable prototypes, pitch decks, and on-brand one-pagers — no Figma or Keynote required. Early previews show coherent layouts and brand-colour adherence that position it as a serious contender alongside v0.dev and Bolt.new. &lt;a href="https://nextfuture.io.vn/blog/claude-design-ship-prototypes-decks-and-one-pagers-by-chat" rel="noopener noreferrer"&gt;Read more&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AI coding agents in 2026: the honest fullstack recap.&lt;/strong&gt; A grounded breakdown of how agent workflows evolved through early 2026 — real cost structures, where context-window limits still bite, and which tasks fullstack teams are genuinely offloading versus which they are still doing by hand. &lt;a href="https://nextfuture.io.vn/blog/ai-coding-agents-in-2026-a-fullstack-engineers-recap" rel="noopener noreferrer"&gt;Read more&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Q1 2026 in one post: Next.js 16, Claude 4.7, and Vercel's pricing pivot.&lt;/strong&gt; The quarterly debrief covers major launches (Claude Managed Agents, Opus 4.7, Gemma 4), the Vercel pricing restructure that rattled ISR-heavy apps, and the cascading outages that rewrote incident-response playbooks for teams on managed infra. &lt;a href="https://nextfuture.io.vn/blog/the-q1-2026-webai-recap-launches-outages-pricing" rel="noopener noreferrer"&gt;Read more&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Web Frameworks &amp;amp; Runtimes
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Three DX tools quietly eating your sprint capacity.&lt;/strong&gt; Fillout (forms), Dub.co (short links), and Cal.com (scheduling) each chip away at integration work that most teams hand-roll. The breakdown covers free-tier cut-offs, self-host viability, and the exact moment each tool earns its place in a Next.js project. &lt;a href="https://nextfuture.io.vn/blog/top-3-developer-dx-tools-for-shipping-faster-in-2026" rel="noopener noreferrer"&gt;Read more&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  What to watch next
&lt;/h2&gt;

&lt;p&gt;Claude Design is preview-only today — watch for an API surface or SDK hook that lets server components invoke it directly, which would make it a first-class tool in AI-assisted content pipelines. The Q1 recap also flags Vercel's ISR pricing as still unresolved for high-traffic apps; a community benchmark or migration guide to a self-hosted alternative is likely before May.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;This article was originally published on &lt;a href="https://nextfuture.io.vn" rel="noopener noreferrer"&gt;NextFuture&lt;/a&gt;. Follow us for more fullstack &amp;amp; AI engineering content.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>fullstack</category>
      <category>ai</category>
      <category>webdev</category>
      <category>javascript</category>
    </item>
    <item>
      <title>Claude Design: ship prototypes, decks and one-pagers by chat</title>
      <dc:creator>BeanBean</dc:creator>
      <pubDate>Mon, 20 Apr 2026 23:00:01 +0000</pubDate>
      <link>https://forem.com/bean_bean/claude-design-ship-prototypes-decks-and-one-pagers-by-chat-pd3</link>
      <guid>https://forem.com/bean_bean/claude-design-ship-prototypes-decks-and-one-pagers-by-chat-pd3</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Originally published on &lt;a href="https://nextfuture.io.vn/blog/claude-design-ship-prototypes-decks-and-one-pagers-by-chat" rel="noopener noreferrer"&gt;NextFuture&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  What's new this week
&lt;/h2&gt;

&lt;p&gt;On April 17, 2026, Anthropic Labs shipped &lt;a href="https://www.anthropic.com/news/claude-design-anthropic-labs" rel="noopener noreferrer"&gt;Claude Design&lt;/a&gt;, a research preview that turns prompts into polished prototypes, pitch decks, slides, and one-pagers. It is powered by Claude Opus 4.7 — Anthropic's most capable vision model as of this week — and is rolling out to Claude Pro, Max, Team, and Enterprise subscribers. Figma's stock slid roughly 7 percent on the announcement, which should tell you where users think it lands on the threat matrix.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why it matters across roles
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Founders and product managers.&lt;/strong&gt; The old loop: scribble an investor deck in Google Slides, ship to a contractor, wait three days for the round trip. The new loop: upload your brand guidelines and a rough outline, get a complete on-brand deck in under ten minutes, export as PPTX or push to Canva for final polish. Opus 4.7's stronger vision model means typography, spacing, and grid usage look like a human designer drafted them — not like a clip-art explosion.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Designers.&lt;/strong&gt; Claude Design reads your codebase and design files during onboarding, so colors, typography, and components are reused automatically. Instead of spending an afternoon wiring static frames, you can generate an interactive prototype from a user story, scrub web elements directly from a live site using the web-capture tool, and hand the result to stakeholders for testing — no code review, no Figma prototype setup. Direct manipulation (comments, inline edits, custom sliders) is still there when you need pixel control.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Writers, marketers, and analysts.&lt;/strong&gt; One-pagers and research briefs used to mean wrestling with PowerPoint templates or outsourcing a $200 Fiverr job. Drop in a DOCX, XLSX, or research summary and Claude Design lays it out in your brand — with chart placements, pull quotes, and call-outs that survive export to PDF, PPTX, or Canva. An analyst with a hypothesis and a table can go from gist to shareable artifact in the same standup they presented the idea in.&lt;/p&gt;

&lt;h2&gt;
  
  
  Hands-on: try it in under 15 minutes
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Sign in to &lt;code&gt;claude.ai&lt;/code&gt; and open the Labs section. If you are on Pro ($20/mo), Max ($100–$200/mo), Team ($30/seat/mo), or Enterprise, Claude Design should appear as a new workspace during the gradual rollout.&lt;/li&gt;
&lt;li&gt;During onboarding, upload brand assets — a logo SVG, style-guide PDF, or a link to your public repo. Claude scans typography, palette, and component library, then stores them as your team's design system so every future project inherits them.&lt;/li&gt;
&lt;li&gt;Start a project with a text prompt. Example: &lt;em&gt;"Build a 5-slide pitch deck for our Series A covering problem, market, demo, traction, and ask. Use our brand palette and keep the cover minimal."&lt;/em&gt; You can also upload DOCX, PPTX, XLSX, or point Claude at a live URL with the web-capture tool.&lt;/li&gt;
&lt;li&gt;Iterate. Add comments the way you would in Figma, or ask Claude in chat: &lt;em&gt;"Make slide 3 denser — use a 2x2 grid of customer logos with quotes underneath."&lt;/em&gt; Custom sliders appear for dimensions like copy density or heading scale.&lt;/li&gt;
&lt;li&gt;Export. PPTX and PDF are native. Canva export keeps a round-trip for brand teams that already live there. Standalone HTML is available for quick microsites.&lt;/li&gt;
&lt;li&gt;For code handoff, Claude packages the design into a bundle you pass to Claude Code with one instruction:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Turn a Claude Design bundle into a real Next.js page&lt;/span&gt;
claude-code run &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="s2"&gt;"Implement the Claude Design bundle at ./design-bundle.zip &lt;/span&gt;&lt;span class="se"&gt;\&lt;/span&gt;&lt;span class="s2"&gt;
   using Next.js 16 App Router, Tailwind 4, and our shadcn components. &lt;/span&gt;&lt;span class="se"&gt;\&lt;/span&gt;&lt;span class="s2"&gt;
   Preserve the design tokens exactly."&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Costs: no separate Claude Design fee — it ships inside your existing Pro/Max/Team/Enterprise plan. Opus 4.7 is metered against your plan quota, so heavy iteration on the xhigh-effort tier will burn through a Max allowance faster than a typical Claude chat session. Budget one deck run at roughly the same token weight as a medium coding task.&lt;/p&gt;

&lt;h2&gt;
  
  
  Where it beats existing alternatives
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Versus Figma AI.&lt;/strong&gt; Claude Design wins the cold start. Figma's generative features assume you already have frames, components, and a file structure. Claude drafts from nothing — text prompt, PDF, or URL — and infers a sane layout that respects your tokens. Figma still wins for multi-person real-time editing and fine vector control, so complex marketing sites stay there.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Versus Canva Magic Design.&lt;/strong&gt; Claude is markedly more consistent with a supplied brand system. Canva tends to recompose your prompt into its own template library; Claude renders against your typography and palette the first time. Canva is cheaper if you only want stock ($15/mo Pro) and has a deeper asset catalog for non-technical teams.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Honest weakness.&lt;/strong&gt; It is a research preview. Don't expect 99.9% uptime and don't ship a client deliverable without eyeballing it. Opus 4.7's rendering can still drop text inside tight grids by a pixel or two, and complex infographics (hand-drawn icons, layered charts, annotated diagrams) are where the human-designer loop still wins. Collaboration is limited to people on the same workspace — you can't invite an outside contractor to co-edit yet.&lt;/p&gt;

&lt;h2&gt;
  
  
  Try it this week
&lt;/h2&gt;

&lt;p&gt;Pick one pitch deck, internal update, or product one-pager that went out in the last quarter and could use a refresh. Upload it, your brand guidelines, and ask Claude Design to re-lay it out on your design system. Compare both side-by-side in your next team sync and note which parts Claude missed — that gap is where human design still earns its keep in your workflow.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;This article was originally published on &lt;a href="https://nextfuture.io.vn" rel="noopener noreferrer"&gt;NextFuture&lt;/a&gt;. Follow us for more fullstack &amp;amp; AI engineering content.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>fullstack</category>
      <category>ai</category>
      <category>webdev</category>
      <category>javascript</category>
    </item>
    <item>
      <title>Top 3 developer DX tools for shipping faster in 2026</title>
      <dc:creator>BeanBean</dc:creator>
      <pubDate>Mon, 20 Apr 2026 17:00:14 +0000</pubDate>
      <link>https://forem.com/bean_bean/top-3-developer-dx-tools-for-shipping-faster-in-2026-4jea</link>
      <guid>https://forem.com/bean_bean/top-3-developer-dx-tools-for-shipping-faster-in-2026-4jea</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Originally published on &lt;a href="https://nextfuture.io.vn/blog/top-3-developer-dx-tools-for-shipping-faster-in-2026" rel="noopener noreferrer"&gt;NextFuture&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;em&gt;This post contains affiliate links — if you sign up through them, we may earn a small commission at no extra cost to you.&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Why picking the right DX tools matters
&lt;/h2&gt;

&lt;p&gt;Every fullstack team rebuilds the same boring plumbing: a contact form, a short link for a campaign, a booking page for calls. Each one eats an afternoon — and over a year, that's weeks gone to problems other people have already solved. Worse, the hand-rolled version often ships with bugs (bad email validation, no rate limiting, broken mobile layout) that a mature tool fixes on day one. The three tools below are narrow, well-scoped, and cheap enough that the build-vs-buy math is almost always "buy." They all play nicely with Next.js, have honest free tiers, and none of them are trying to become a "platform." Pick one, two, or all three depending on how often the underlying problem shows up in your sprint reviews.&lt;/p&gt;

&lt;h2&gt;
  
  
  Fillout
&lt;/h2&gt;

&lt;p&gt;Fillout solves the "I need a form that writes to Notion, Airtable, or my own database and doesn't look like 2012" problem. It's a form builder designed with enough primitives — conditional logic, calculations, file uploads, payments — that it covers roughly 90% of the cases engineers used to hand-roll. The upshot: your founder or PM can build the enterprise contact form themselves, and you get to focus on actual product work. Payments, file uploads, and Stripe integration are first-class, which means request-a-demo and paid-application flows don't need an engineer to babysit them.&lt;/p&gt;

&lt;h3&gt;
  
  
  What's good
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Native integrations with Airtable, Notion, Google Sheets, HubSpot, Salesforce, and webhooks — no Zapier tax for the common cases.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Conditional logic and calculated fields are real — you can build quote calculators or multi-step onboarding flows without code.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Embeds cleanly into Next.js via iframe or the React component; no CSS fights.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Watch-outs
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;The free tier is generous on features but caps submissions — plan on upgrading once the form goes live with real traffic.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;If you need pixel-perfect visual control, you will fight the theme system; it's "polished preset," not "custom CSS."&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Starting price: free for up to 1,000 submissions/month; paid plans begin at $15/month.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://dev.to/api/affiliate/click?slug=fillout&amp;amp;post=top-3-developer-dx-tools-for-shipping-faster-in-2026"&gt;&lt;strong&gt;Try for free&lt;/strong&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Dub.co
&lt;/h2&gt;

&lt;p&gt;Dub is a short-link platform built for developers — think bit.ly if it had been designed by someone who ships code. You point a custom domain at it, create branded links, and get click analytics, UTM builders, and a real API that fits into a deploy pipeline. For anyone managing affiliate links or campaign URLs by hand in a Google Sheet, swapping to Dub pays for itself inside a month.&lt;/p&gt;

&lt;h3&gt;
  
  
  What's good
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Open source core (AGPL) with a hosted plan — you can self-host if compliance requires it, or start on the cloud and move later.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Built-in QR codes, link expirations, password protection, and geo-targeting — all accessible from the API.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Analytics are event-level, not aggregated-only; you can export clicks per link and feed them into your warehouse.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Watch-outs
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;If you only need three links, it's overkill — a manual redirect in Next.js middleware is two lines of code.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The free tier gets you started, but link retention and advanced analytics live on the paid tiers.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
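
&lt;p&gt;For scale, those "two lines" look roughly like this: a minimal sketch of the redirect-map lookup you would call from Next.js middleware. The slugs and destination URLs are hypothetical placeholders.&lt;/p&gt;

```typescript
// Hypothetical redirect map. Slugs and destination URLs are placeholders.
const redirects: Record<string, string> = {
  "/go/launch": "https://example.com/launch?utm_source=newsletter",
  "/go/demo": "https://example.com/demo",
};

// Resolve a pathname to its redirect target, or null when none matches.
// In Next.js middleware you'd return NextResponse.redirect(target) on a hit.
export function resolveRedirect(pathname: string): string | null {
  return redirects[pathname] ?? null;
}
```

&lt;p&gt;The moment that map needs analytics, expiry, or non-engineer edits, you have started rebuilding Dub, which is the point of the comparison.&lt;/p&gt;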

&lt;p&gt;Starting price: free tier with 25 links and 1,000 tracked clicks/month; Pro plan begins at $24/month.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://dev.to/api/affiliate/click?slug=dub-co&amp;amp;post=top-3-developer-dx-tools-for-shipping-faster-in-2026"&gt;&lt;strong&gt;Try for free&lt;/strong&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Cal.com
&lt;/h2&gt;

&lt;p&gt;Cal.com is the scheduling primitive — a Calendly alternative that's fully open source, self-hostable, and API-first. For fullstack teams running demos, interviews, or paid consults, it removes a surprising amount of back-and-forth without locking your availability data inside a SaaS vendor. If your product ever needs embedded booking (think marketplace, course platform, support desk), Cal's API is the shortest path to ship.&lt;/p&gt;

&lt;h3&gt;
  
  
  What's good
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Self-host on a small VPS; the repo is actively maintained, and it deploys cleanly on Docker or Vercel.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Routing forms, team round-robin, and collect-payment workflows — it matches Calendly feature-for-feature on the paid tier.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Public REST API and webhooks, so you can programmatically create event types or drop a booking widget straight into your product.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
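
&lt;p&gt;If you consume those webhooks, verify them before trusting the payload. Below is a generic HMAC-SHA256 check; the exact header name, secret handling, and event shape on Cal.com's side are assumptions to confirm against the webhook docs.&lt;/p&gt;

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Generic webhook signature check: HMAC-SHA256 over the raw request body.
// The signing scheme details are assumptions; confirm against the provider's
// webhook documentation before relying on this.
export function verifySignature(
  rawBody: string,
  signatureHex: string,
  secret: string,
): boolean {
  const expected = createHmac("sha256", secret).update(rawBody).digest();
  const received = Buffer.from(signatureHex, "hex");
  // Length check first: timingSafeEqual throws on mismatched lengths.
  return received.length === expected.length && timingSafeEqual(received, expected);
}
```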

&lt;h3&gt;
  
  
  Watch-outs
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;The managed cloud is solid, but if you self-host, expect to manage your own CalDAV sync edge cases and email deliverability.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;First-time setup of Google Calendar + Zoom integrations takes around 20 minutes — not hard, just fiddlier than Calendly's hosted flow.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Starting price: free forever for individuals; Teams plan starts at $15/user/month; self-host is $0 plus your server cost.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://dev.to/api/affiliate/click?slug=cal-com&amp;amp;post=top-3-developer-dx-tools-for-shipping-faster-in-2026"&gt;&lt;strong&gt;Use for free&lt;/strong&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Quick comparison
&lt;/h2&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ToolBest forStrengthStarts at

&lt;p&gt;FilloutForms that write to real backendsConditional logic + native integrations$0 / $15 paid&lt;br&gt;
Dub.coBranded short links at scaleOpen source + event-level analytics$0 / $24 paid&lt;br&gt;
Cal.comScheduling without vendor lock-inSelf-hostable, API-first$0 / $15 paid&lt;br&gt;
&lt;/p&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  Which one should you pick?
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Pick Fillout&lt;/strong&gt; if you keep rebuilding contact, feedback, or onboarding forms and want them to land in Notion, Airtable, or a webhook without hand-wiring.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Pick Dub.co&lt;/strong&gt; if you ship campaigns, run a newsletter, or manage affiliate links and need branded URLs with real analytics — not a generic bit.ly.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Pick Cal.com&lt;/strong&gt; if booking calls is a recurring part of your workflow and you'd rather own the data than rent it from Calendly.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In practice, many teams end up wiring in all three — they're cheap, they don't overlap, and each one replaces a task engineers shouldn't be writing from scratch in 2026. The rule of thumb: if a tool costs less than an engineering hour per month and solves a problem you've solved before, just buy it. Your roadmap has better things to do.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;This article was originally published on &lt;a href="https://nextfuture.io.vn" rel="noopener noreferrer"&gt;NextFuture&lt;/a&gt;. Follow us for more fullstack &amp;amp; AI engineering content.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>fullstack</category>
      <category>ai</category>
      <category>webdev</category>
      <category>javascript</category>
    </item>
    <item>
      <title>AI Web Dev Digest — Apr 20</title>
      <dc:creator>BeanBean</dc:creator>
      <pubDate>Mon, 20 Apr 2026 05:00:00 +0000</pubDate>
      <link>https://forem.com/bean_bean/ai-x-web-dev-digest-apr-20-52p</link>
      <guid>https://forem.com/bean_bean/ai-x-web-dev-digest-apr-20-52p</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Originally published on &lt;a href="https://nextfuture.io.vn/blog/ai-web-dev-digest-apr-20" rel="noopener noreferrer"&gt;NextFuture&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Today's briefing
&lt;/h2&gt;

&lt;p&gt;The past 48 hours in AI × web dev have been dominated by one signal: coding agents are graduating from demo to production. Two substantial deep-dives landed on NextFuture — one mapping the macro arc of how agentic coding evolved in early 2026, the other giving engineers a practical scorecard for evaluating Cursor replacements. Taken together, they frame a moment where the tooling choices developers make &lt;em&gt;now&lt;/em&gt; will shape team velocity for the rest of the year.&lt;/p&gt;

&lt;h2&gt;
  
  
  AI Tooling &amp;amp; Agents
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;AI coding agents in 2026 — a grounded fullstack recap.&lt;/strong&gt; This in-depth recap covers where Claude Code, Copilot Workspace, and autonomous PR agents are genuinely delivering value, breaks down the real cost-per-PR across team sizes, and outlines the agent-in-the-loop workflows fullstack teams are actually shipping — not just demoing. Essential context if you're evaluating whether to commit budget to agent tooling this quarter. &lt;a href="https://nextfuture.io.vn/blog/ai-coding-agents-in-2026-a-fullstack-engineers-recap" rel="noopener noreferrer"&gt;Read more&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;10 Cursor alternatives ranked by agentic power and real-world DX.&lt;/strong&gt; A hands-on comparison of Claude Code, Windsurf, Zed, Kilo Code, Codeium, and five others — scored on context window handling, multi-file editing, pricing transparency, and production-readiness rather than marketing claims. If Cursor's pricing or vendor lock-in has you looking around, this is the most thorough public scorecard available. &lt;a href="https://nextfuture.io.vn/blog/best-cursor-alternatives-2026" rel="noopener noreferrer"&gt;Read more&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  What to watch next
&lt;/h2&gt;

&lt;p&gt;A Q1 2026 retrospective covering the Vercel pricing restructure, Next.js 16 launch, Claude 4.7, and the major outages that rewrote incident playbooks is currently queued as a draft on NextFuture — expect it to publish soon and provide the broadest industry-wide narrative of the quarter. Meanwhile, the Cursor-alternatives space is moving fast; a follow-up on agentic IDE benchmarks would be timely within the next two weeks.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;This article was originally published on &lt;a href="https://nextfuture.io.vn" rel="noopener noreferrer"&gt;NextFuture&lt;/a&gt;. Follow us for more fullstack &amp;amp; AI engineering content.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>fullstack</category>
      <category>ai</category>
      <category>webdev</category>
      <category>javascript</category>
    </item>
    <item>
      <title>The Q1 2026 Web+AI Recap: Launches, Outages, Pricing</title>
      <dc:creator>BeanBean</dc:creator>
      <pubDate>Sun, 19 Apr 2026 23:00:00 +0000</pubDate>
      <link>https://forem.com/bean_bean/the-q1-2026-webai-recap-launches-outages-pricing-1042</link>
      <guid>https://forem.com/bean_bean/the-q1-2026-webai-recap-launches-outages-pricing-1042</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Originally published on &lt;a href="https://nextfuture.io.vn/blog/the-q1-2026-webai-recap-launches-outages-pricing" rel="noopener noreferrer"&gt;NextFuture&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Opening
&lt;/h2&gt;

&lt;p&gt;Q1 2026 was one of those quarters where a fullstack engineer's RSS feed and on-call pager rang in near lockstep. Framework majors shipped, LLM price charts got redrawn twice, and two high-profile outages reminded everyone that AI gateways are now Tier-1 dependencies. If you skipped the news to ship features, here's the filtered recap that still matters this month.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why it matters
&lt;/h2&gt;

&lt;p&gt;Most "industry recap" posts are noise. But a handful of Q1 shifts directly change how you deploy, how much inference costs, and which escape hatches your team needs. Understanding them isn't optional if your roadmap for the rest of 2026 touches model routing, edge rendering, or AI-assisted coding — and most roadmaps do.&lt;/p&gt;

&lt;p&gt;The common thread: &lt;strong&gt;the AI-native web stack is finally settling into defaults&lt;/strong&gt;. That's good for productivity and brutal for anyone still running early-2024 patterns in production. Three areas moved fast enough to force real migration work: frameworks, coding agents, and infra pricing.&lt;/p&gt;

&lt;h2&gt;
  
  
  Technical deep-dive
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Frameworks: Next.js 16 lands, React compiler becomes default
&lt;/h3&gt;

&lt;p&gt;Next.js 16 hit stable in late January, with 16.2 following in March. The headline change most teams feel day-to-day is the new &lt;code&gt;cacheComponents&lt;/code&gt; semantics — fine-grained, per-component cache control that replaces the older page-level &lt;code&gt;revalidate&lt;/code&gt; dance. The two-argument &lt;code&gt;revalidateTag(tag, "default")&lt;/code&gt; signature added in 16 plugs into the same pipeline, so existing tag-based invalidation carries over.&lt;/p&gt;

&lt;p&gt;React 19.2 made the React Compiler default for new apps. The practical effect: most manual &lt;code&gt;useMemo&lt;/code&gt; and &lt;code&gt;useCallback&lt;/code&gt; calls are now dead weight in new code. The compiler isn't magic — it still bails on dynamic dependencies and non-pure render paths — but code reviews can stop nitpicking memoization.&lt;/p&gt;

&lt;p&gt;Turbopack is now the default dev &lt;em&gt;and&lt;/em&gt; build server. Cold starts on a 50k-file monorepo dropped from ~40s to ~9s in our internal benchmark; your mileage will vary, but the gap is real. If you're still on webpack, schedule the migration before Q3.&lt;/p&gt;

&lt;h3&gt;
  
  
  AI tooling: the coding-assistant field consolidates
&lt;/h3&gt;

&lt;p&gt;Three things happened to AI coding tools in Q1:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Claude Code&lt;/strong&gt; shipped web sessions, hooks, and first-class sub-agents — pushing the "agent as shell" model into mainstream developer workflows.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Cursor&lt;/strong&gt; doubled down on multi-file agent loops and background tasks, blurring the IDE-vs-agent line.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Antigravity&lt;/strong&gt; and &lt;strong&gt;Codex CLI&lt;/strong&gt; made the headless, CI-friendly agent the new baseline expectation.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The Opus 4.7 release in Q1 mattered because it pushed the per-token cost of a fully-tooled coding agent below the psychological "one-coffee-per-feature" threshold for most small tasks. That's why every CI pipeline blog post in March mentioned automated PR triage agents — the economics finally work.&lt;/p&gt;

&lt;h3&gt;
  
  
  Infra and pricing: the reality check
&lt;/h3&gt;

&lt;p&gt;Vercel rolled out its revised Fluid compute pricing in February. The net effect for most Next.js apps: streaming-heavy routes got cheaper, long-polling got more expensive. If you have not re-run your cost projection, do it this weekend — teams that skipped the exercise reported 20-30% surprises in either direction.&lt;/p&gt;

&lt;p&gt;Cloudflare Workers AI expanded its free-tier model roster but clipped cold-start allowances — worth re-benchmarking if you were using it as a RAG retrieval layer behind consumer traffic.&lt;/p&gt;

&lt;p&gt;And then there were the outages. A global Anthropic API degradation on March 11 and an OpenAI router incident on March 28 each lasted 40+ minutes. Any app without a multi-provider fallback lost features or, worse, spent the window retrying into rate-limit cliffs.&lt;/p&gt;

&lt;h2&gt;
  
  
  Engineer's take
&lt;/h2&gt;

&lt;p&gt;The most useful Q1 lesson is boring: &lt;em&gt;stop treating LLM endpoints as infallible synchronous dependencies&lt;/em&gt;. A tiny router with a circuit breaker pays for itself the first time a provider blips. Here's the ~40-line pattern we now bake into every new service:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;Anthropic&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;@anthropic-ai/sdk&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;OpenAI&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;openai&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;type&lt;/span&gt; &lt;span class="nx"&gt;Provider&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;anthropic&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;openai&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;breaker&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nb"&gt;Map&lt;/span&gt;
&lt;span class="p"&gt;();&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;OPEN_MS&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;30&lt;/span&gt;&lt;span class="nx"&gt;_000&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;isOpen&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;p&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;Provider&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;s&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;breaker&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;p&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="nx"&gt;s&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;Date&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;now&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="nx"&gt;s&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;openedAt&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;OPEN_MS&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;breaker&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;delete&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;p&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;s&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;failures&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;=&lt;/span&gt; &lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;trip&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;p&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;Provider&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;s&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;breaker&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;p&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;??&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nl"&gt;failures&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;openedAt&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;Date&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;now&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;};&lt;/span&gt;
  &lt;span class="nx"&gt;s&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;failures&lt;/span&gt; &lt;span class="o"&gt;+=&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="nx"&gt;s&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;openedAt&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nb"&gt;Date&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;now&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
  &lt;span class="nx"&gt;breaker&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;set&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;p&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;s&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;complete&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;prompt&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;order&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;Provider&lt;/span&gt;&lt;span class="p"&gt;[]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;anthropic&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;openai&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;];&lt;/span&gt;
  &lt;span class="k"&gt;for &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;p&lt;/span&gt; &lt;span class="k"&gt;of&lt;/span&gt; &lt;span class="nx"&gt;order&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;isOpen&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;p&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt; &lt;span class="k"&gt;continue&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="k"&gt;try&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;p&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;anthropic&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;c&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Anthropic&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
        &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;r&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;c&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;messages&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;create&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
          &lt;span class="na"&gt;model&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;claude-opus-4-7&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
          &lt;span class="na"&gt;max_tokens&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;1024&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
          &lt;span class="na"&gt;messages&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[{&lt;/span&gt; &lt;span class="na"&gt;role&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;user&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;content&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;prompt&lt;/span&gt; &lt;span class="p"&gt;}],&lt;/span&gt;
        &lt;span class="p"&gt;});&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;r&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;content&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
      &lt;span class="p"&gt;}&lt;/span&gt;
      &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;o&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;OpenAI&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
      &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;r&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;o&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;chat&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;completions&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;create&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
        &lt;span class="na"&gt;model&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;gpt-5-mini&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="na"&gt;messages&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[{&lt;/span&gt; &lt;span class="na"&gt;role&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;user&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;content&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;prompt&lt;/span&gt; &lt;span class="p"&gt;}],&lt;/span&gt;
      &lt;span class="p"&gt;});&lt;/span&gt;
      &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;r&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;choices&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="nx"&gt;message&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;content&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;catch&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nf"&gt;trip&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;p&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="k"&gt;throw&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Error&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;all providers degraded&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This isn't novel — it's the same pattern you'd use for Stripe or Twilio. The Q1 takeaway is that you now &lt;em&gt;need&lt;/em&gt; it for LLM calls too, because they are no longer the reliable 99.9% dependency marketers claimed in 2024. Bonus: once a breaker is in place, you can route by cost tier (Opus for hard prompts, Haiku for classification) with ten extra lines.&lt;/p&gt;
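
&lt;p&gt;Those "ten extra lines" of cost-tier routing can be sketched like this. The length threshold, the keyword heuristic, and the model IDs are illustrative assumptions, not recommendations.&lt;/p&gt;

```typescript
// Crude cost-tier router. Model IDs and the "hard prompt" heuristic are
// placeholders; substitute whatever your provider and traffic call for.
export function pickModel(prompt: string): string {
  const looksHard =
    prompt.length > 2000 || /\bfunction\b|\bclass\b|=>/.test(prompt);
  return looksHard ? "claude-opus-4-7" : "claude-haiku-4";
}
```

&lt;p&gt;Route the cheap tier first where latency matters, and let the breaker's fallback order handle the rest.&lt;/p&gt;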

&lt;h2&gt;
  
  
  Key takeaways
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Upgrade Next.js to 16.2&lt;/strong&gt; if you're on 15.x — the Turbopack and &lt;code&gt;cacheComponents&lt;/code&gt; wins are worth the migration weekend.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Drop manual memoization&lt;/strong&gt; on new React 19.2 code; trust the compiler and measure before optimizing.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Re-run your Vercel / Workers cost projection&lt;/strong&gt; after the February pricing shifts before your next quarterly review.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Treat LLM providers as multi-vendor.&lt;/strong&gt; Add a breaker + fallback now, not after the next 40-minute outage.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Audit your AI coding workflow.&lt;/strong&gt; If your team isn't using at least one headless agent in CI by Q2, you're leaving real productivity on the table.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;&lt;em&gt;This article was originally published on &lt;a href="https://nextfuture.io.vn" rel="noopener noreferrer"&gt;NextFuture&lt;/a&gt;. Follow us for more fullstack &amp;amp; AI engineering content.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>fullstack</category>
      <category>ai</category>
      <category>webdev</category>
      <category>javascript</category>
    </item>
    <item>
      <title>AI Coding Agents in 2026: A Fullstack Engineer's Recap</title>
      <dc:creator>BeanBean</dc:creator>
      <pubDate>Sun, 19 Apr 2026 11:00:02 +0000</pubDate>
      <link>https://forem.com/bean_bean/ai-coding-agents-in-2026-a-fullstack-engineers-recap-m5m</link>
      <guid>https://forem.com/bean_bean/ai-coding-agents-in-2026-a-fullstack-engineers-recap-m5m</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Originally published on &lt;a href="https://nextfuture.io.vn/blog/ai-coding-agents-in-2026-a-fullstack-engineers-recap" rel="noopener noreferrer"&gt;NextFuture&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Opening
&lt;/h2&gt;

&lt;p&gt;The AI coding assistant market looked crowded in 2024. By April 2026 it looks consolidated, specialized, and — for the first time — genuinely predictable in how it slots into fullstack work. If you paused on agents after a few bad autocompletes a year ago, the landscape has changed enough to warrant a second pass.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why it matters
&lt;/h2&gt;

&lt;p&gt;If you are shipping Next.js, Nuxt, or SvelteKit apps in 2026, the question is no longer "can AI write code?" but "which part of the loop should it own?" Three forces converged over the past twelve months: inference cost dropped roughly 6-10× for frontier-class models, agentic loops moved from demos into default workflows, and evaluation tooling matured from vibes into repeatable benchmarks.&lt;/p&gt;

&lt;p&gt;Each of those shifts changes how a small team staffs a feature. Ignoring them is starting to cost real velocity — and, on the other end, over-indexing on agents without guardrails is producing the "AI-generated tech debt" conversations now showing up in engineering postmortems. Neither extreme is rational. The boring middle path — a narrow agent, scoped tools, logged outputs — is where the teams shipping fastest have landed.&lt;/p&gt;

&lt;h2&gt;
  
  
  Technical deep-dive
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Agent orchestration moved into the editor
&lt;/h3&gt;

&lt;p&gt;The biggest structural shift in early 2026 is that the IDE is now the default agent runtime. Claude Code, Cursor's agent mode, OpenAI's Codex CLI, and Google's Antigravity all expose the same rough surface — a long-running agent with tool access to the filesystem, shell, and language server. What was novel in 2024 (the "autonomous PR" demo) is now table stakes.&lt;/p&gt;

&lt;p&gt;The practical consequence for fullstack engineers: context-window strategy beats prompt cleverness. A 200K-token window filled with stale package docs will lose to a 32K window packed with the three files the agent actually needs. Most teams standardized on some form of &lt;code&gt;AGENTS.md&lt;/code&gt; or &lt;code&gt;CLAUDE.md&lt;/code&gt; at repo root — short, stable instructions, explicit paths to check, and a short allowlist of commands the agent may run without asking. Your repo rules file is now as important as your README.&lt;/p&gt;
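
&lt;p&gt;For reference, a repo rules file in that spirit stays short. The shape below is hypothetical; every path and command is a placeholder to swap for your own.&lt;/p&gt;

```markdown
# Agent instructions (example shape — all paths and commands are placeholders)

- App code lives in `src/app`; shared utilities in `src/lib`.
- Before proposing a diff, run `pnpm tsc --noEmit` and `pnpm lint`.
- Allowed without asking: `pnpm test`, `pnpm tsc --noEmit`, `pnpm lint`.
- Never touch `migrations/` or `.env*` files.
```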

&lt;h3&gt;
  
  
  Token economics finally make sense
&lt;/h3&gt;

&lt;p&gt;Pricing for flagship Claude, GPT, and Gemini tiers dropped to a range where sustained agentic work is no longer a CFO conversation. A typical Next.js feature pass — read five files, edit two, run the typechecker, loop — costs pennies to low single-digit dollars depending on the model. That is 5-10× cheaper than the same loop would have cost in early 2025.&lt;/p&gt;

&lt;p&gt;The corollary is that cheap inference rewards verbose, self-correcting prompts. Asking the agent to run &lt;code&gt;tsc --noEmit&lt;/code&gt;, read the errors, and patch until green is now economically rational for 500-line changes. It was not in 2024. Prompt caching on Anthropic and OpenAI APIs — 5-10× cost reduction for the repeated system prompt and tool definitions — pushed the math further in the engineer's favor.&lt;/p&gt;
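
&lt;p&gt;A minimal sketch of that typecheck-until-green loop. &lt;code&gt;askModelForPatch&lt;/code&gt; and &lt;code&gt;applyPatch&lt;/code&gt; are hypothetical stand-ins for whatever agent client and patching mechanism your team uses; only the &lt;code&gt;tsc&lt;/code&gt; invocation and error format are standard:&lt;/p&gt;

```typescript
import { execSync } from "node:child_process";

// Pull the error lines out of `tsc --noEmit` output.
// Assumed format: "path/file.ts(12,5): error TS2345: ..."
export function parseTscErrors(output: string): string[] {
  return output.split("\n").filter((line) => /error TS\d+/.test(line));
}

// Sketch of the self-correcting loop: run the typechecker, hand the
// errors to the model, apply its patch, repeat until green or capped.
export async function fixUntilGreen(
  askModelForPatch: (errors: string[]) => Promise<string>,
  applyPatch: (patch: string) => void,
  maxRounds = 5,
): Promise<boolean> {
  for (let round = 0; round < maxRounds; round++) {
    let errors: string[] = [];
    try {
      // Exit code 0 means the typecheck is green.
      execSync("npx tsc --noEmit", { encoding: "utf8", stdio: "pipe" });
      return true;
    } catch (e: any) {
      errors = parseTscErrors(String(e.stdout ?? ""));
    }
    if (errors.length === 0) return true; // failed for a non-type reason
    applyPatch(await askModelForPatch(errors));
  }
  return false; // did not converge within the round cap
}
```

&lt;p&gt;The round cap matters for the same reason the step cap does below: a loop that pays per iteration needs a hard stop, cheap tokens or not.&lt;/p&gt;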

&lt;h3&gt;
  
  
  Evaluation is the new moat
&lt;/h3&gt;

&lt;p&gt;The less visible shift: eval harnesses went from bespoke to standard. Tools like Braintrust, Langfuse, and OpenAI's evals framework are now the default way teams catch prompt regressions. If your app embeds an LLM call in a critical path — a semantic search, a content classifier, a PR summarizer — not having a regression suite for that call is now the equivalent of not having unit tests for a payment function. Model swaps that used to be silent correctness regressions are now loud CI failures, which is exactly how it should work.&lt;/p&gt;

&lt;p&gt;The workflow most teams have converged on is unglamorous: a golden dataset of 50-200 real inputs, a deterministic scoring function (exact match, JSON-schema validity, embedding similarity), and a CI job that runs the eval on every prompt or model change. Nobody writes blog posts about it because it looks exactly like normal testing — which is the point. The bar for shipping AI features is now the same bar as shipping anything else, and teams that do not accept that are the ones producing the "AI-generated tech debt" threads.&lt;/p&gt;
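
&lt;p&gt;The whole harness fits in a few dozen lines. Here is a minimal sketch with illustrative names (&lt;code&gt;GoldenCase&lt;/code&gt;, &lt;code&gt;runEval&lt;/code&gt;) rather than any specific eval library's API:&lt;/p&gt;

```typescript
type GoldenCase = { input: string; expected: string };

// Deterministic scorer #1: exact match after trimming whitespace.
export function scoreExactMatch(actual: string, expected: string): number {
  return actual.trim() === expected.trim() ? 1 : 0;
}

// Deterministic scorer #2: does the output parse as JSON at all?
export function scoreJsonValidity(actual: string): number {
  try {
    JSON.parse(actual);
    return 1;
  } catch {
    return 0;
  }
}

// Run the golden dataset through the model and gate on a pass rate.
// A CI job calls this on every prompt or model change and fails the
// build when `ok` is false.
export async function runEval(
  cases: GoldenCase[],
  callModel: (input: string) => Promise<string>,
  threshold = 0.9,
): Promise<{ passRate: number; ok: boolean }> {
  let total = 0;
  for (const c of cases) {
    total += scoreExactMatch(await callModel(c.input), c.expected);
  }
  const passRate = total / cases.length;
  return { passRate, ok: passRate >= threshold };
}
```

&lt;p&gt;Swap the scorer per call site: exact match for classifiers, JSON validity for structured extraction, embedding similarity where paraphrase is acceptable.&lt;/p&gt;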

&lt;h2&gt;
  
  
  Engineer's take
&lt;/h2&gt;

&lt;p&gt;The pattern that has worked across shipped projects is treating the agent like a very fast but literal junior engineer. The productive move is to define the tools it has, the checks it must pass, and the boundary of what it is allowed to touch. Everything else is prompt noise.&lt;/p&gt;

&lt;p&gt;Here is the minimal wrapper I now put around any agentic endpoint in a Next.js app — a typed tool registry, a hard iteration cap, and structured messages you can log:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;Anthropic&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;@anthropic-ai/sdk&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;client&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Anthropic&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

&lt;span class="kd"&gt;type&lt;/span&gt; &lt;span class="nx"&gt;Tool&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="nl"&gt;description&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="nl"&gt;input_schema&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;Anthropic&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;Tool&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;InputSchema&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="nl"&gt;handler&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="na"&gt;input&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;unknown&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;Promise&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;runAgent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
  &lt;span class="nx"&gt;prompt&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="nx"&gt;tools&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;Tool&lt;/span&gt;&lt;span class="p"&gt;[],&lt;/span&gt;
  &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;maxSteps&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;8&lt;/span&gt; &lt;span class="p"&gt;}:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nl"&gt;maxSteps&lt;/span&gt;&lt;span class="p"&gt;?:&lt;/span&gt; &lt;span class="kr"&gt;number&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{},&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;messages&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;Anthropic&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;MessageParam&lt;/span&gt;&lt;span class="p"&gt;[]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
    &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;role&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;user&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;content&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;prompt&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="p"&gt;];&lt;/span&gt;

  &lt;span class="k"&gt;for &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kd"&gt;let&lt;/span&gt; &lt;span class="nx"&gt;step&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nf"&gt;step  &lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
        &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;t&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;t&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;description&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="na"&gt;input_schema&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;t&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;input_schema&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="p"&gt;})),&lt;/span&gt;
      &lt;span class="nx"&gt;messages&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;});&lt;/span&gt;

    &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;resp&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;stop_reason&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;end_turn&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;resp&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;use&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;resp&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;content&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;find&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nx"&gt;b&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;b&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="kd"&gt;type&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;tool_use&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="nx"&gt;use&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="nx"&gt;use&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="kd"&gt;type&lt;/span&gt; &lt;span class="o"&gt;!==&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;tool_use&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;resp&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;tool&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;tools&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;find&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nx"&gt;t&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;t&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="nx"&gt;use&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;tool&lt;/span&gt;
      &lt;span class="p"&gt;?&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;tool&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;handler&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;use&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;input&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
      &lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;`Unknown tool: &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;use&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

    &lt;span class="nx"&gt;messages&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;push&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;role&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;assistant&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;content&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;resp&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;content&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;
    &lt;span class="nx"&gt;messages&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;push&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
      &lt;span class="na"&gt;role&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;user&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;content&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
        &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;tool_result&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;tool_use_id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;use&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;content&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;result&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
      &lt;span class="p"&gt;],&lt;/span&gt;
    &lt;span class="p"&gt;});&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="k"&gt;throw&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Error&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`Agent exceeded &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;maxSteps&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt; steps`&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Three things are non-negotiable: a step cap (agents loop), a tool allowlist (agents escalate), and structured messages you can log (you will need to debug them). Everything else — prompt rewriting, model swapping, caching — is a tuning knob you adjust after the first version ships.&lt;/p&gt;
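
&lt;p&gt;For illustration, here is how a single narrowly-scoped tool plugs into a wrapper like the one above. The deploy-status lookup is a hypothetical example handler, not a real API; the point is the shape — read-only handler, explicit schema, one-entry allowlist:&lt;/p&gt;

```typescript
// A single read-only tool: the agent can inspect state but not mutate it.
// The handler and its payload are illustrative.
export const getDeployStatus = {
  name: "get_deploy_status",
  description: "Return the latest deploy status for a service",
  input_schema: {
    type: "object" as const,
    properties: { service: { type: "string" } },
    required: ["service"],
  },
  handler: async (input: unknown): Promise<string> => {
    const { service } = input as { service: string };
    // In a real handler this would query your deploy platform.
    return JSON.stringify({ service, status: "healthy", version: "1.4.2" });
  },
};

// Wiring it into the wrapper from the article (sketch):
// const resp = await runAgent(
//   "Is the api service healthy?",
//   [getDeployStatus],   // allowlist: exactly one tool
//   { maxSteps: 4 },     // cap below the default for a simple query
// );
```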

&lt;h2&gt;
  
  
  Key takeaways
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Pick one IDE-native agent (Claude Code, Cursor, or Codex) and invest in a &lt;code&gt;CLAUDE.md&lt;/code&gt;-style repo rules file. Context beats cleverness every time.&lt;/li&gt;
&lt;li&gt;Treat inference as cheap but not free. A step cap and a tool allowlist are the minimum guardrails before anything ships to production.&lt;/li&gt;
&lt;li&gt;If your product has an LLM in a critical path, ship an eval harness this quarter. Regressions are silent otherwise, and model swaps happen more often than you think.&lt;/li&gt;
&lt;li&gt;Watch the evaluation tooling space — it is where the 2026 moats are actually being built, not in yet another chat wrapper.&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;&lt;em&gt;This article was originally published on &lt;a href="https://nextfuture.io.vn" rel="noopener noreferrer"&gt;NextFuture&lt;/a&gt;. Follow us for more fullstack &amp;amp; AI engineering content.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>fullstack</category>
      <category>ai</category>
      <category>webdev</category>
      <category>javascript</category>
    </item>
    <item>
      <title>AI Web Dev Digest — Apr 19</title>
      <dc:creator>BeanBean</dc:creator>
      <pubDate>Sun, 19 Apr 2026 11:00:01 +0000</pubDate>
      <link>https://forem.com/bean_bean/ai-x-web-dev-digest-apr-19-52pi</link>
      <guid>https://forem.com/bean_bean/ai-x-web-dev-digest-apr-19-52pi</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Originally published on &lt;a href="https://nextfuture.io.vn/blog/ai-web-dev-digest-apr-19" rel="noopener noreferrer"&gt;NextFuture&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Today's briefing
&lt;/h2&gt;

&lt;p&gt;This week brought a flurry of updates to the AI-assisted coding stack that web developers actually use day-to-day. Cursor, Claude Code, Claude Opus, and OpenAI Codex all shipped meaningful changes — some raise the ceiling of what agentic coding can do, others quietly shift the economics of how you run long agent sessions. Here is what matters and what to do about it before your next sprint.&lt;/p&gt;

&lt;h2&gt;
  
  
  AI Coding Tools &amp;amp; Agents
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Ten Cursor alternatives worth a serious look.&lt;/strong&gt; If Cursor's pricing tier changes or reliability blips have you shopping around, there are now at least ten credible contenders — each with different trade-offs across context window, agent autonomy, IDE integration, and workflow fit. The roundup breaks down who each one is actually for, so you can pick a short list in under ten minutes. &lt;a href="https://nextfuture.io.vn/blog/best-cursor-alternatives-2026" rel="noopener noreferrer"&gt;Read more&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Claude Code ships &lt;code&gt;/advisor&lt;/code&gt;.&lt;/strong&gt; A new strategic-reasoning command positions Claude Code as the "brain" of agentic coding — for planning refactors, dependency audits, and architecture review &lt;em&gt;before&lt;/em&gt; writing a single line of code. Useful when you do not want an agent to start editing files until the plan is clear. &lt;a href="https://nextfuture.io.vn/blog/claude-code-advisor-command-deep-dive-2026" rel="noopener noreferrer"&gt;Read more&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Claude Opus 4.7 lands.&lt;/strong&gt; Sharper coding on real-world tasks, roughly 3x vision throughput, and a pricing shift web engineers need to factor in before budgeting long agent sessions or background runs. &lt;a href="https://nextfuture.io.vn/blog/claude-opus-4-7-deep-dive-features-cost-benchmarks-2026" rel="noopener noreferrer"&gt;Read more&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;OpenAI Codex April 2026 update.&lt;/strong&gt; Computer-use mode, persistent memory across sessions, and a 90-plugin ecosystem — but the headline features need a reality check. A field report on what actually ships to production today and what is still rough. &lt;a href="https://nextfuture.io.vn/blog/openai-codex-april-2026-update-computer-use-memory-plugins-review" rel="noopener noreferrer"&gt;Read more&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Developer Productivity Stack
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Five Calendly alternatives — free and open source.&lt;/strong&gt; Self-hostable scheduling tools engineers can run without per-seat pricing, handy when a small team outgrows a free tier or wants data to stay in its own infra. &lt;a href="https://nextfuture.io.vn/blog/5-best-calendly-alternatives-2026" rel="noopener noreferrer"&gt;Read more&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  What to watch next
&lt;/h2&gt;

&lt;p&gt;With two major coding assistants (Cursor, Claude Code) and two major model/tool platforms (Opus 4.7, Codex) all shipping in the same window, expect benchmark comparisons and head-to-head reviews to dominate the next cycle. The practical question for web developers is not "which model is smartest" but "which combination ships the most production code per hour at a cost you can defend in a Monday review?" Try one new pairing this week and log your wall-clock time — the numbers decide faster than the marketing does.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;This article was originally published on &lt;a href="https://nextfuture.io.vn" rel="noopener noreferrer"&gt;NextFuture&lt;/a&gt;. Follow us for more fullstack &amp;amp; AI engineering content.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>fullstack</category>
      <category>ai</category>
      <category>webdev</category>
      <category>javascript</category>
    </item>
    <item>
      <title>OpenAI Codex April 2026 Update Review: Computer Use, Memory &amp; 90+ Plugins — Is the Hype Real?</title>
      <dc:creator>BeanBean</dc:creator>
      <pubDate>Thu, 16 Apr 2026 23:00:01 +0000</pubDate>
      <link>https://forem.com/bean_bean/openai-codex-april-2026-update-review-computer-use-memory-90-plugins-is-the-hype-real-2hnp</link>
      <guid>https://forem.com/bean_bean/openai-codex-april-2026-update-review-computer-use-memory-90-plugins-is-the-hype-real-2hnp</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Originally published on &lt;a href="https://nextfuture.io.vn/blog/openai-codex-april-2026-update-computer-use-memory-plugins-review" rel="noopener noreferrer"&gt;NextFuture&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  TL;DR — Quick Verdict
&lt;/h2&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  Feature
  Rating
  Notes




  Background Computer Use (macOS)
  ⭐⭐⭐⭐
  Genuinely impressive. Runs parallel agents in background.


  Memory &amp;amp; Personalization
  ⭐⭐⭐
  Rolling out to Enterprise/Edu first — not everyone yet.


  90+ New Plugins
  ⭐⭐⭐⭐
  Atlassian, CircleCI, GitLab, Render, Neon — solid coverage.


  In-App Browser
  ⭐⭐⭐
  Only useful for localhost apps right now.


  Image Generation (gpt-image-1.5)
  ⭐⭐⭐⭐
  Useful for mockups directly in dev workflow.


  Pricing
  ⭐⭐
  Heavy use gets expensive fast on ChatGPT plans.


  Platform Support
  ⭐⭐
  macOS only for computer use. EU/UK rollout delayed.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Bottom line up front:&lt;/strong&gt; The April 16 Codex update is the biggest leap OpenAI has made in developer tooling since Codex launched. Background computer use is legitimately novel. Memory and automation scheduling are game-changers — when they actually reach your account. The plugin ecosystem at 90+ is now broader than most developers will ever need. But there are real tradeoffs: macOS-only computer use, staggered rollouts, and a pricing model that punishes heavy automation. Read on for the full breakdown.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Dropped on April 16, 2026
&lt;/h2&gt;

&lt;p&gt;OpenAI announced what it calls &lt;strong&gt;"Codex for (almost) everything"&lt;/strong&gt; — a positioning shift from Codex-as-code-assistant to Codex-as-full-software-partner. The key new capabilities:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Background computer use on macOS:&lt;/strong&gt; Codex can now see, click, and type with its own cursor across any macOS app — running in parallel without interfering with your own work.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;In-app browser:&lt;/strong&gt; A built-in browser where you can comment directly on pages to give the agent precise frontend instructions.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Image generation:&lt;/strong&gt; Codex now uses &lt;code&gt;gpt-image-1.5&lt;/code&gt; to generate and iterate on visual assets (mockups, product concept art, UI designs) directly inside the workflow.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Memory:&lt;/strong&gt; Codex remembers your preferences, corrections, and gathered context across sessions. Reduces repeated setup for recurring tasks.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Automations with scheduling:&lt;/strong&gt; Codex can schedule future work for itself and wake up automatically across days or weeks to continue long-running tasks.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;90+ new plugins:&lt;/strong&gt; Including Atlassian Rovo (JIRA), CircleCI, CodeRabbit, GitLab Issues, Microsoft Suite, Neon by Databricks, Remotion, and Render.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Dev workflow improvements:&lt;/strong&gt; PR review comment handling, multiple terminal tabs, SSH to remote devboxes (alpha), rich file previews (PDFs, spreadsheets, slides).&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is also paired with the April 15 &lt;strong&gt;Agents SDK evolution&lt;/strong&gt;, which adds native sandbox execution (via E2B, Vercel, Cloudflare, Modal, and more), a Manifest abstraction for portable environments, and durable execution so agents can survive container restarts.&lt;/p&gt;

&lt;h2&gt;
  
  
  Background Computer Use: What It Actually Means for Developers
&lt;/h2&gt;

&lt;p&gt;This is the headliner feature — and it earns it. Previously, Codex operated on code files and terminal output. Now it can &lt;em&gt;see your screen&lt;/em&gt;, click buttons, fill forms, and interact with any macOS app — apps that don't expose APIs, GUI-only tools, even games.&lt;/p&gt;

&lt;p&gt;Practical examples from the announcement:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Iterating on frontend changes inside Figma or Sketch while you work in another window&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Testing your desktop app's UI without writing automation scripts&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Operating design tools, spreadsheets, or legacy software that has no API surface&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Multiple agents can run in parallel. You could have one agent running visual regression tests while another is reviewing a GitHub PR and a third is updating a JIRA ticket — simultaneously, without stealing your mouse.&lt;/p&gt;

&lt;h2&gt;
  
  
  Memory: Genuinely Useful, But Still Rolling Out
&lt;/h2&gt;

&lt;p&gt;Codex now preserves context from previous sessions — your coding preferences, project-specific conventions, things you've corrected it on before. Combined with the new proactive suggestions feature (Codex proposes what to work on next based on your project context, open PRs, Slack activity), this starts to feel less like a tool and more like a colleague.&lt;/p&gt;

&lt;p&gt;The practical use case is compelling: if you've spent an hour teaching Codex your preferred state management patterns or file structure conventions, it remembers that next time. No re-explaining.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Catch:&lt;/strong&gt; Memory and personalization are rolling out to Enterprise, Edu, and EU/UK users "soon." If you're on a standard ChatGPT Plus plan, you may not see these features for weeks. OpenAI's staged rollouts have historically been slow.&lt;/p&gt;

&lt;h2&gt;
  
  
  Automations: Scheduling Your Own Agent
&lt;/h2&gt;

&lt;p&gt;One of the most underrated announcements: Codex can now schedule future work for itself and re-use existing conversation threads — preserving context across multi-day tasks. Use cases teams are reportedly already running:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Landing open pull requests nightly&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Following up on tasks across Slack + Notion + Gmail&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Monitoring fast-moving conversations and summarizing for async teams&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This brings Codex closer to what Devin was promising a year ago — a software engineer that keeps working even when you're offline.&lt;/p&gt;

&lt;h2&gt;
  
  
  The 90+ Plugin Ecosystem
&lt;/h2&gt;

&lt;p&gt;The plugin expansion is comprehensive. Here are the ones developers will reach for most:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  Plugin
  What it Adds
  Best For




  Atlassian Rovo
  JIRA ticket management, project context
  Teams on JIRA


  CircleCI
  CI/CD pipeline visibility &amp;amp; control
  Backend / DevOps


  CodeRabbit
  AI-powered code review integration
  Teams wanting automated PR review


  GitLab Issues
  GitLab issue tracking + context
  GitLab shops (finally)


  Neon by Databricks
  Serverless Postgres context + query gen
  Full-stack developers


  Render
  Deploy and manage Render services
  Indie hackers &amp;amp; small teams


  Remotion
  Video generation in code workflows
  Content-heavy apps
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Notably absent: a native &lt;strong&gt;Railway&lt;/strong&gt; plugin. If you're using Railway for deployment (and you probably should be — it's the cleanest zero-config platform for Node.js and full-stack apps right now), you can still use it alongside Codex via the terminal. &lt;a href="https://railway.com?referralCode=Y6Hh9z" rel="noopener noreferrer"&gt;Railway's one-click deploys&lt;/a&gt; pair naturally with Codex-generated code: Codex writes and reviews, Railway ships. It's the workflow stack I'd recommend for indie developers who want Codex-speed development without managing infrastructure.&lt;/p&gt;

&lt;h2&gt;
  
  
  The New Agents SDK: Sandbox-Native Agent Execution
&lt;/h2&gt;

&lt;p&gt;Alongside the Codex desktop update, OpenAI's Agents SDK (updated April 15) gets native sandbox support. This is significant for developers building their own agent systems — not just using the Codex app.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;openai_agents&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;Agent&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;Sandbox&lt;/span&gt;

&lt;span class="c1"&gt;# Define agent with sandbox execution
&lt;/span&gt;&lt;span class="n"&gt;agent&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Agent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
  &lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;review-agent&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="n"&gt;instructions&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Review the PR diff and suggest improvements&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="n"&gt;tools&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;shell&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;apply_patch&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;read_file&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
  &lt;span class="n"&gt;sandbox&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nc"&gt;Sandbox&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;provider&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;e2b&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;  &lt;span class="c1"&gt;# or "vercel", "cloudflare", "modal"
&lt;/span&gt;    &lt;span class="n"&gt;manifest&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;mount&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;./project&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;output&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;./review-output&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;agent&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;run&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Review PR #142 and apply suggested fixes&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;result&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;artifacts&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Key Agents SDK improvements:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Configurable memory&lt;/strong&gt; — agents can persist state across runs&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Sandbox providers&lt;/strong&gt; — E2B, Vercel, Cloudflare, Blaxel, Daytona, Modal, Runloop — pick your stack&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Manifest abstraction&lt;/strong&gt; — portable environment descriptions (mount S3, GCS, Azure Blob, Cloudflare R2)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Durable execution&lt;/strong&gt; — agent state is externalized; container crash ≠ task lost&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Native MCP + skills + AGENTS.md&lt;/strong&gt; — standard agentic primitives built in&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;openai_agents&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;Agent&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;Memory&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;AutomationSchedule&lt;/span&gt;

&lt;span class="c1"&gt;# Agent with memory + scheduled follow-up
&lt;/span&gt;&lt;span class="n"&gt;agent&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Agent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
  &lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;pr-watcher&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="n"&gt;memory&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nc"&gt;Memory&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;scope&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;project&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;  &lt;span class="c1"&gt;# persists across runs
&lt;/span&gt;  &lt;span class="n"&gt;instructions&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Monitor open PRs and flag stale ones daily&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# Schedule to run daily at 9am
&lt;/span&gt;&lt;span class="n"&gt;agent&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;schedule&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;AutomationSchedule&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;daily&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;hour&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;9&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
&lt;span class="n"&gt;agent&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;run&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Check for PRs open &amp;gt; 7 days and notify in Slack&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  ⚠️ The Controversy: What They Don't Tell You
&lt;/h2&gt;

&lt;p&gt;Developer communities have been excited — but not uniformly. Here's what the honest Reddit and HN threads are flagging:&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Computer Use = Screenshot Streaming to OpenAI Servers
&lt;/h3&gt;

&lt;p&gt;Background computer use works by sending screenshots of your screen to OpenAI's models for interpretation. This is &lt;strong&gt;the same fundamental privacy concern&lt;/strong&gt; raised against Recall and other screen-capture AI tools. If you're working with proprietary code, client data, or anything under NDA — be cautious. OpenAI's data usage policies for Codex apply here, and the nuance matters.&lt;/p&gt;
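&lt;p&gt;To make that data flow concrete, here's a minimal sketch of a screenshot-driven agent loop. Every name in it (&lt;code&gt;capture_screen&lt;/code&gt;, &lt;code&gt;interpret&lt;/code&gt;, the &lt;code&gt;Action&lt;/code&gt; shape) is a hypothetical stand-in rather than OpenAI's real API, but it shows why every iteration ships a frame of your screen to the model:&lt;/p&gt;

```python
from dataclasses import dataclass

# Everything below is a hypothetical stand-in, not OpenAI's actual
# computer-use API -- the point is the shape of the data flow.

@dataclass
class Action:
    kind: str  # "click", "type", or "done"

def capture_screen():
    """Stub: a real agent grabs an actual screenshot of your display."""
    return b"raw-pixels"

def interpret(screenshot, goal, step):
    """Stub for the model call. Whatever is visible on your screen
    leaves your machine on every single iteration."""
    if step >= 2:
        return Action("done")
    return Action("click")

def run_computer_use(goal, max_steps=10):
    trace = []
    for step in range(max_steps):
        frame = capture_screen()               # screen contents captured...
        action = interpret(frame, goal, step)  # ...and uploaded to the model
        trace.append(action)
        if action.kind == "done":
            break
    return trace

trace = run_computer_use("Open the billing dashboard")
print([a.kind for a in trace])  # → ['click', 'click', 'done']
```

&lt;p&gt;Three actions, three screenshot uploads. If an NDA-covered document happens to be in another window during any of those frames, it went along for the ride.&lt;/p&gt;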

&lt;h3&gt;
  
  
  2. macOS Only — and EU/UK Are Third-Class Citizens Again
&lt;/h3&gt;

&lt;p&gt;Computer use is macOS-only at launch. No Windows. No Linux. European and UK users are getting memory and computer use "soon" — which, going by OpenAI's track record, means 4-8 weeks at minimum. If you're a developer outside the US or on Windows, the headline feature doesn't exist for you yet.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Cost at Scale Gets Brutal
&lt;/h3&gt;

&lt;p&gt;Automations that run overnight, schedule themselves, and chain tasks sound great — until you see the token bill. Heavy Codex automation use on ChatGPT Pro can easily burn through $50-100/month. OpenAI hasn't published per-task pricing for the automation scheduling features, an omission developers on Hacker News were quick to note. See our earlier post on &lt;a href="https://dev.to/blog/codex-token-pricing-frontend-developers"&gt;Codex's token pricing&lt;/a&gt; for the full breakdown.&lt;/p&gt;
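&lt;p&gt;Before committing, it's worth running your own back-of-envelope estimate. The per-token rates and token counts below are placeholder assumptions (again, OpenAI hasn't published automation pricing), so plug in real numbers from your own usage dashboard:&lt;/p&gt;

```python
# Back-of-envelope automation cost estimate. All rates and token
# counts below are ASSUMPTIONS for illustration, not published pricing.

INPUT_RATE = 2.50 / 1_000_000    # assumed $ per input token
OUTPUT_RATE = 10.00 / 1_000_000  # assumed $ per output token

def monthly_cost(runs_per_day, in_tokens, out_tokens, days=30):
    """Cost of a scheduled automation over a month."""
    per_run = in_tokens * INPUT_RATE + out_tokens * OUTPUT_RATE
    return runs_per_day * days * per_run

# e.g. a few chained agents: 6 runs/day, ~80k tokens in, ~8k tokens out
cost = monthly_cost(runs_per_day=6, in_tokens=80_000, out_tokens=8_000)
print(f"${cost:.2f}/month")  # → $50.40/month
```

&lt;p&gt;Scale &lt;code&gt;runs_per_day&lt;/code&gt; or the context size up and the bill climbs linearly — which is exactly how overnight automation chains get expensive.&lt;/p&gt;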

&lt;h3&gt;
  
  
  4. The "Almost" in "Codex for Almost Everything"
&lt;/h3&gt;

&lt;p&gt;The in-app browser currently only controls localhost apps — it can't fully navigate the open web yet. OpenAI says "over time we plan to expand it so Codex can fully command the browser beyond web applications on localhost." That's a lot of future tense in a launch announcement.&lt;/p&gt;

&lt;h2&gt;
  
  
  Codex vs. The Competition (April 2026)
&lt;/h2&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt;Tool&lt;/th&gt;
      &lt;th&gt;Computer Use&lt;/th&gt;
      &lt;th&gt;Memory&lt;/th&gt;
      &lt;th&gt;Scheduling / Automations&lt;/th&gt;
      &lt;th&gt;Plugin Ecosystem&lt;/th&gt;
      &lt;th&gt;Pricing&lt;/th&gt;
      &lt;th&gt;Best For&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;&lt;strong&gt;OpenAI Codex&lt;/strong&gt;&lt;/td&gt;
      &lt;td&gt;✅ macOS&lt;/td&gt;
      &lt;td&gt;✅ (rolling out)&lt;/td&gt;
      &lt;td&gt;✅ Schedule + wake up&lt;/td&gt;
      &lt;td&gt;90+ plugins&lt;/td&gt;
      &lt;td&gt;ChatGPT Pro $20-200/mo&lt;/td&gt;
      &lt;td&gt;Full-stack devs on macOS&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;&lt;strong&gt;Cursor 3&lt;/strong&gt;&lt;/td&gt;
      &lt;td&gt;❌&lt;/td&gt;
      &lt;td&gt;⚠️ Limited&lt;/td&gt;
      &lt;td&gt;❌&lt;/td&gt;
      &lt;td&gt;Agent-first IDE&lt;/td&gt;
      &lt;td&gt;$20/mo + usage&lt;/td&gt;
      &lt;td&gt;Editor-centric workflows&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;&lt;strong&gt;Claude Code&lt;/strong&gt;&lt;/td&gt;
      &lt;td&gt;❌&lt;/td&gt;
      &lt;td&gt;via MEMORY.md&lt;/td&gt;
      &lt;td&gt;❌&lt;/td&gt;
      &lt;td&gt;MCP ecosystem&lt;/td&gt;
      &lt;td&gt;Per-token (API)&lt;/td&gt;
      &lt;td&gt;Power users, custom stacks&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;&lt;strong&gt;Devin&lt;/strong&gt;&lt;/td&gt;
      &lt;td&gt;✅ (web)&lt;/td&gt;
      &lt;td&gt;✅&lt;/td&gt;
      &lt;td&gt;✅&lt;/td&gt;
      &lt;td&gt;Moderate&lt;/td&gt;
      &lt;td&gt;$500/mo (ACUs)&lt;/td&gt;
      &lt;td&gt;Enterprise teams&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;&lt;strong&gt;GitHub Copilot Workspace&lt;/strong&gt;&lt;/td&gt;
      &lt;td&gt;❌&lt;/td&gt;
      &lt;td&gt;❌&lt;/td&gt;
      &lt;td&gt;❌&lt;/td&gt;
      &lt;td&gt;GitHub native&lt;/td&gt;
      &lt;td&gt;$10-19/mo&lt;/td&gt;
      &lt;td&gt;GitHub-centric teams&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;
&lt;h2&gt;
  
  
  Practical Code Example: Combining Agents SDK + Codex Plugins
&lt;/h2&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;openai_agents&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;Agent&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;Plugin&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;Memory&lt;/span&gt;

&lt;span class="c1"&gt;# Agent that handles daily PR review using CodeRabbit + CircleCI plugins
&lt;/span&gt;&lt;span class="n"&gt;agent&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Agent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
  &lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;daily-dev-agent&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="n"&gt;instructions&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;
    Every morning:
    1. Check for new PRs since yesterday
    2. Run CodeRabbit review on each PR
    3. Check CircleCI status for failing tests
    4. Summarize findings and post to Slack
  &lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="n"&gt;plugins&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;
    &lt;span class="nc"&gt;Plugin&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;coderabbit&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="nc"&gt;Plugin&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;circleci&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="nc"&gt;Plugin&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;slack&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="nc"&gt;Plugin&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;github&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="p"&gt;],&lt;/span&gt;
  &lt;span class="n"&gt;memory&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nc"&gt;Memory&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;scope&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;project&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;retention_days&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;30&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# This agent will now remember your team's review preferences
# from previous runs and adapt its suggestions accordingly
&lt;/span&gt;&lt;span class="n"&gt;agent&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;run&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Daily morning dev review&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Should You Switch to (or Upgrade) Codex?
&lt;/h2&gt;

&lt;h3&gt;
  
  
  ✅ Use It If:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;You're on macOS and want computer use for GUI-only tools&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You have repetitive dev tasks (PR reviews, daily standups, JIRA updates) that could be automated&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Your team is already in the ChatGPT ecosystem and has Pro/Enterprise accounts&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You work on frontend development and want to iterate on visual designs + code in one workflow&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You want the most integrated agent-native coding experience available right now&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  ❌ Don't Use It If:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;You're on Windows or Linux (computer use isn't available yet)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You work with sensitive/proprietary data and are uncomfortable with screen capture streaming&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You're cost-sensitive — heavy automation can get expensive fast&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You're in the EU/UK and want the full feature set today (not "soon")&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You prefer editor-native workflows over a separate app experience (Cursor 3 may suit you better)&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  What This Means for the Broader Dev Stack
&lt;/h2&gt;

&lt;p&gt;The Codex update — combined with the new Agents SDK sandbox support — signals that OpenAI is positioning Codex as the orchestration layer for your entire software development lifecycle. Not just writing code, but understanding codebases, reviewing changes, managing project context, talking to CI/CD, deploying, and iterating on design.&lt;/p&gt;

&lt;p&gt;If you want to see how the Agents SDK compares to managed agent APIs and model-agnostic frameworks, check out our &lt;a href="https://dev.to/blog/claude-managed-agents-deep-dive-anthropic-new-ai-agent-infrastructure-2026"&gt;Claude Managed Agents deep dive&lt;/a&gt; for the alternative architecture perspective.&lt;/p&gt;

&lt;p&gt;For the editor-side story — how Cursor 3's "agent-first" IDE fits alongside (or competes with) Codex — see our &lt;a href="https://dev.to/blog/cursor-3-deep-dive-agent-first-ide-frontend-engineers"&gt;Cursor 3 deep dive&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  For Developers Building Their Own Products
&lt;/h2&gt;

&lt;p&gt;One thing the Codex update underlines: agent-native applications are becoming the default expectation. If you're building a SaaS or developer tool, users will increasingly expect agentic features. The &lt;a href="https://dev.to/products"&gt;AI Frontend Starter Kit ($49)&lt;/a&gt; includes pre-built agent UI patterns and scaffolding for integrating with OpenAI's Agents SDK — so you're not starting from scratch when adding these capabilities to your own product.&lt;/p&gt;

&lt;h2&gt;
  
  
  Verdict
&lt;/h2&gt;

&lt;p&gt;The April 2026 Codex update is legitimately the most significant developer AI release since Claude Code landed. Background computer use alone changes what's possible for automation workflows. The plugin ecosystem at 90+ is now serious infrastructure. Memory and automations, when they fully roll out, will feel transformative.&lt;/p&gt;

&lt;p&gt;The catches are real: macOS only, privacy concerns with screen capture, staggered rollouts, and opaque pricing for automation-heavy use. But if you're a macOS developer and you haven't revisited Codex since it launched — April 2026 is the moment to do that.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Rating: 4.2 / 5&lt;/strong&gt; — Best AI coding assistant update of 2026 so far, with real limitations that prevent a perfect score.&lt;/p&gt;

&lt;h2&gt;
  
  
  FAQ
&lt;/h2&gt;

&lt;h3&gt;
  
  
  What is the OpenAI Codex April 2026 update?
&lt;/h3&gt;

&lt;p&gt;On April 16, 2026, OpenAI released a major Codex update adding background computer use on macOS (Codex can see and click your screen), memory across sessions, scheduling/automation for long-running tasks, 90+ new plugins (Atlassian, CircleCI, GitLab, Render, Neon, etc.), an in-app browser for frontend iteration, and image generation via gpt-image-1.5.&lt;/p&gt;

&lt;h3&gt;
  
  
  Is OpenAI Codex computer use available on Windows?
&lt;/h3&gt;

&lt;p&gt;No. As of the April 2026 launch, Codex computer use is only available on macOS. EU and UK users also face a delayed rollout. Windows support has not been announced.&lt;/p&gt;

&lt;h3&gt;
  
  
  How does Codex computer use work technically?
&lt;/h3&gt;

&lt;p&gt;Codex computer use works by taking screenshots of your screen and sending them to OpenAI's models, which interpret what they see and generate click/type actions. Multiple agents can run in parallel in the background without interfering with your own mouse and keyboard usage.&lt;/p&gt;

&lt;h3&gt;
  
  
  What are the privacy risks of OpenAI Codex computer use?
&lt;/h3&gt;

&lt;p&gt;Since computer use involves streaming screenshots to OpenAI servers, any sensitive data visible on your screen (proprietary code, client data, NDA-protected information) is potentially captured. Developers working with confidential information should review OpenAI's data usage policies for Codex before enabling this feature.&lt;/p&gt;

&lt;h3&gt;
  
  
  How does the new OpenAI Agents SDK differ from before?
&lt;/h3&gt;

&lt;p&gt;The April 2026 Agents SDK update adds native sandbox execution (via E2B, Vercel, Cloudflare, Modal, Runloop, Blaxel, Daytona), configurable memory, durable execution (agent state persists if a container crashes), a Manifest abstraction for portable environments, and built-in support for MCP, skills, and AGENTS.md — making it easier to build production-grade agents without piecing together infrastructure yourself.&lt;/p&gt;

&lt;h3&gt;
  
  
  Is OpenAI Codex worth it compared to Cursor 3 or Claude Code?
&lt;/h3&gt;

&lt;p&gt;For macOS developers wanting computer use, automation scheduling, and the broadest plugin ecosystem, Codex is now the strongest option. Cursor 3 remains better for editor-native, agent-first coding workflows. Claude Code excels for power users who want terminal-native control and custom MCP stacks. The right choice depends on your OS, workflow, and budget.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;This article was originally published on &lt;a href="https://nextfuture.io.vn" rel="noopener noreferrer"&gt;NextFuture&lt;/a&gt;. Follow us for more fullstack &amp;amp; AI engineering content.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>fullstack</category>
      <category>ai</category>
      <category>webdev</category>
      <category>javascript</category>
    </item>
  </channel>
</rss>
