<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Vibe-Start</title>
    <description>The latest articles on Forem by Vibe-Start (@brandon-vibestart).</description>
    <link>https://forem.com/brandon-vibestart</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3909225%2F010839dd-b260-40f5-9703-df13160013fc.png</url>
      <title>Forem: Vibe-Start</title>
      <link>https://forem.com/brandon-vibestart</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/brandon-vibestart"/>
    <language>en</language>
    <item>
      <title>Beyond McKinsey's 46% — 5 Workflow Patterns That Push AI Coding Past Industry Average (2026)</title>
      <dc:creator>Vibe-Start</dc:creator>
      <pubDate>Sun, 03 May 2026 13:02:38 +0000</pubDate>
      <link>https://forem.com/brandon-vibestart/beyond-mckinseys-46-5-workflow-patterns-that-push-ai-coding-past-industry-average-2026-57pg</link>
      <guid>https://forem.com/brandon-vibestart/beyond-mckinseys-46-5-workflow-patterns-that-push-ai-coding-past-industry-average-2026-57pg</guid>
      <description>&lt;p&gt;McKinsey's February 2026 study of 150 enterprises reported AI coding tools cut routine task time by &lt;strong&gt;46%&lt;/strong&gt; on average. In the same period, METR ran a controlled experiment with 16 senior open-source developers across 246 issues — the AI-using group was actually &lt;strong&gt;19% slower&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Both measurements are honest. Both numbers are real. So what should your team expect when adopting a new tool?&lt;/p&gt;

&lt;p&gt;The answer: the average itself is close to meaningless. Two teams using the same tool, say Cursor, can land at 60% faster and 10% slower respectively. The difference isn't the tool. It's the &lt;strong&gt;workflow&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;This article breaks down five concrete workflow patterns that push you past the 46% average.&lt;/p&gt;

&lt;h2&gt;
  
  
  📊 Measure Your Baseline First
&lt;/h2&gt;

&lt;p&gt;Before applying the five patterns, you need a baseline to compare against. Track four things over one week. No fancy tooling required — a simple sheet works.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Metric&lt;/th&gt;
&lt;th&gt;How to measure&lt;/th&gt;
&lt;th&gt;Result&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Task classification&lt;/td&gt;
&lt;td&gt;Tag each task as routine/novel/debug&lt;/td&gt;
&lt;td&gt;N routine, N novel, N debug&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;AI invocation rate&lt;/td&gt;
&lt;td&gt;Count AI tool calls per task&lt;/td&gt;
&lt;td&gt;Avg N per task&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;First-pass acceptance&lt;/td&gt;
&lt;td&gt;% of AI outputs you commit unmodified&lt;/td&gt;
&lt;td&gt;N%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Verification time&lt;/td&gt;
&lt;td&gt;Time from AI output to passing review&lt;/td&gt;
&lt;td&gt;Avg N min&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;After one week, your patterns become visible. Two cases are common. Pattern A: AI hits 80% first-pass acceptance on routine tasks, but verification time triples on novel tasks. Pattern B: uniform AI usage across all task types with constant verification time. Pattern A benefits hugely from all five patterns; Pattern B should start with task classification.&lt;/p&gt;
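&lt;p&gt;If you prefer tallying the sheet with a script, here's a minimal sketch; the &lt;code&gt;TaskLog&lt;/code&gt; shape is an assumption, so adapt the fields to whatever you actually record:&lt;/p&gt;

```typescript
// Hypothetical log shape: one entry per task, matching the four baseline metrics.
type TaskLog = {
  category: "routine" | "novel" | "debug";
  aiCalls: number;             // AI invocations on this task
  acceptedUnmodified: boolean; // committed the AI output as-is?
  verifyMinutes: number;       // time from AI output to passing review
};

function summarize(logs: TaskLog[]) {
  const count = (c: TaskLog["category"]) =>
    logs.filter((l) => l.category === c).length;
  const avg = (xs: number[]) =>
    xs.reduce((a, b) => a + b, 0) / Math.max(xs.length, 1);
  return {
    routine: count("routine"),
    novel: count("novel"),
    debug: count("debug"),
    avgAiCalls: avg(logs.map((l) => l.aiCalls)),
    firstPassAcceptance:
      logs.filter((l) => l.acceptedUnmodified).length / Math.max(logs.length, 1),
    avgVerifyMinutes: avg(logs.map((l) => l.verifyMinutes)),
  };
}
```

&lt;p&gt;Run it once at the end of the baseline week and again after adopting the patterns, and the before/after deltas fall out directly.&lt;/p&gt;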

&lt;h2&gt;
  
  
  🛠 Pattern 1 — Split Routine vs Novel Tasks
&lt;/h2&gt;

&lt;p&gt;This is the biggest lever. AI tools average 60-80% time savings on routine work (boilerplate, refactoring, docs, test cases) but often go negative on novel work (architecture decisions, complex debugging, domain modeling). Much of the METR 19% slowdown plausibly traces to this distinction not being made.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// AI-use heuristic — pin in code or notion&lt;/span&gt;
&lt;span class="kd"&gt;type&lt;/span&gt; &lt;span class="nx"&gt;TaskCategory&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;routine&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;novel&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;debug&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;shouldUseAI&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;task&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;TaskCategory&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;yes&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;no&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;verify-heavy&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;switch &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;task&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;case&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;routine&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
      &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;yes&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;  &lt;span class="c1"&gt;// Boilerplate, refactor, tests, docs&lt;/span&gt;
    &lt;span class="k"&gt;case&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;novel&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
      &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;no&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;   &lt;span class="c1"&gt;// Architecture, domain models, new system design&lt;/span&gt;
    &lt;span class="k"&gt;case&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;debug&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
      &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;verify-heavy&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;  &lt;span class="c1"&gt;// AI possible, but form hypotheses yourself first&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Add a checkbox to your PR template: "AI usage: __% / Task type: routine | novel | debug." Classification crystallizes naturally over a couple of weeks.&lt;/p&gt;
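&lt;p&gt;A minimal version of that checkbox as a PR template fragment; the path is GitHub's convention (&lt;code&gt;.github/pull_request_template.md&lt;/code&gt;), and the wording is just a suggestion:&lt;/p&gt;

```markdown
## AI usage
- AI share of this PR: __%
- Task type: [ ] routine / [ ] novel / [ ] debug
```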

&lt;h2&gt;
  
  
  🔍 Pattern 2 — Automate the Verification Harness
&lt;/h2&gt;

&lt;p&gt;What McKinsey's stat misses: verification time. Manually reviewing each AI output, running tests locally, and checking the result by hand can eat half of the headline time savings. The solution: automate the verification harness.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# .husky/pre-commit — applies equally to AI output&lt;/span&gt;
&lt;span class="c"&gt;#!/usr/bin/env sh&lt;/span&gt;
&lt;span class="nb"&gt;.&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;dirname&lt;/span&gt; &lt;span class="nt"&gt;--&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$0&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;/_/husky.sh"&lt;/span&gt;

pnpm typecheck &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
pnpm lint &lt;span class="nt"&gt;--quiet&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
pnpm &lt;span class="nb"&gt;test&lt;/span&gt; &lt;span class="nt"&gt;--run&lt;/span&gt; &lt;span class="nt"&gt;--silent&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
pnpm build &lt;span class="nt"&gt;--filter&lt;/span&gt; @your-app/web
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Receive code in Cursor or Claude Code, then commit; the pre-commit hook validates all four checks in seconds. Pass: the commit lands. Fail: paste the error message back to the AI and iterate. This loop converts "AI output → 5 min human review" into "AI output → seconds of automated verification."&lt;/p&gt;

&lt;h2&gt;
  
  
  🎯 Pattern 3 — Context Engineering
&lt;/h2&gt;

&lt;p&gt;This is the subtlest area. Even with Claude Opus 4.7's 1M-token context window, dumping the entire codebase degrades response quality: the model loses the signal of "where to look." High-performing teams curate context.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Cursor — @file for exact files only&lt;/span&gt;
@file src/lib/auth.ts @file src/app/api/login/route.ts
&lt;span class="s2"&gt;"Add 2FA to login flow. Match existing auth pattern."&lt;/span&gt;

&lt;span class="c"&gt;# Bad pattern — @codebase dump&lt;/span&gt;
@codebase
&lt;span class="s2"&gt;"Add 2FA somewhere"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The same principle applies in Claude Code. Ask it to read the relevant files first, so they're loaded into context, then request the work. "Look at the entire codebase yourself" vs "look at these 3 files and implement X" produces a 2-3x difference in first-pass acceptance.&lt;/p&gt;

&lt;h2&gt;
  
  
  🛠 Pattern 4 — Tool-Task Alignment
&lt;/h2&gt;

&lt;p&gt;Trying to use one tool for everything is the biggest reason teams stay below average. As of May 2026, optimal tasks per tool are clearly differentiated.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Tool&lt;/th&gt;
&lt;th&gt;Optimal&lt;/th&gt;
&lt;th&gt;Suboptimal&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Cursor&lt;/td&gt;
&lt;td&gt;In-IDE iteration, single-file edits&lt;/td&gt;
&lt;td&gt;Long autonomous work, parallel PRs&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Claude Code&lt;/td&gt;
&lt;td&gt;Autonomous long tasks, multi-file edits, background work&lt;/td&gt;
&lt;td&gt;Quick prototype one-line edits&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;v0.dev&lt;/td&gt;
&lt;td&gt;UI component scaffolding, design mocks&lt;/td&gt;
&lt;td&gt;Backend logic, data models&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;GitHub Copilot&lt;/td&gt;
&lt;td&gt;Line-to-function autocomplete&lt;/td&gt;
&lt;td&gt;Complex multi-step work&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Analyze a month of your team's PRs and the optimal tool per task type emerges. Once a ratio like "Cursor 70% / Claude Code 20% / v0 10%" stabilizes, tool-switching cost drops and each tool spends more of its time in its sweet spot.&lt;/p&gt;
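&lt;p&gt;Computing that ratio from tagged PRs is a few lines; a sketch, assuming you export one tool tag per PR (the sample tags are hypothetical):&lt;/p&gt;

```typescript
// Hypothetical export: one tool tag per PR, e.g. pulled from PR labels.
const prTools = ["cursor", "cursor", "claude-code", "v0", "cursor"];

// Fraction of PRs per tool; once this stabilizes, tool choice stops being ad hoc.
function toolRatios(tags: string[]): Record<string, number> {
  const counts: Record<string, number> = {};
  for (const t of tags) counts[t] = (counts[t] ?? 0) + 1;
  return Object.fromEntries(
    Object.entries(counts).map(([t, n]) => [t, n / tags.length])
  );
}
```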

&lt;h2&gt;
  
  
  📝 Pattern 5 — Prompt Versioning
&lt;/h2&gt;

&lt;p&gt;Writing a fresh prompt each time you ask AI for the same task type is the largest hidden time sink. Top teams version their prompts as templates.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Directory structure&lt;/span&gt;
.cursor/
├── prompts/
│   ├── add-feature.md          &lt;span class="c"&gt;# Standard prompt for new feature&lt;/span&gt;
│   ├── refactor-component.md   &lt;span class="c"&gt;# Standard component refactor&lt;/span&gt;
│   ├── write-test.md           &lt;span class="c"&gt;# Standard test writing&lt;/span&gt;
│   └── debug-runtime-error.md  &lt;span class="c"&gt;# Runtime error diagnosis&lt;/span&gt;
└── rules/
    └── project-conventions.md  &lt;span class="c"&gt;# Project conventions (Cursor always references)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Each prompt file contains four parts: task definition (1 line), context (file paths or function names), constraints (style, libraries, patterns), and output format. The first setup takes 30 minutes; subsequent same-type tasks drop from 5 minutes to 30 seconds. Commit the directory to git so the team can share prompts and A/B test them.&lt;/p&gt;
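&lt;p&gt;As a concrete illustration, here is what a hypothetical &lt;code&gt;add-feature.md&lt;/code&gt; with those four parts might look like; the file paths and placeholder names are assumptions, not part of any standard:&lt;/p&gt;

```markdown
## Task
Add {feature-name} behind a feature flag.

## Context
src/app/{route}/page.tsx, src/lib/flags.ts

## Constraints
Reuse existing shadcn/ui components. No new dependencies.
Follow rules/project-conventions.md.

## Output
One fenced diff per file, no prose.
```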

&lt;h2&gt;
  
  
  ✅ Measuring After Applying the Five Patterns
&lt;/h2&gt;

&lt;p&gt;After two weeks of applying the patterns, re-record the same four baseline metrics. Typical changes:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Metric&lt;/th&gt;
&lt;th&gt;Before&lt;/th&gt;
&lt;th&gt;After (avg)&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;AI usage rate&lt;/td&gt;
&lt;td&gt;Uniform across routine/novel&lt;/td&gt;
&lt;td&gt;80% routine, 20% novel&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;First-pass acceptance&lt;/td&gt;
&lt;td&gt;40-50%&lt;/td&gt;
&lt;td&gt;70-80%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Verification time&lt;/td&gt;
&lt;td&gt;5 min/PR avg&lt;/td&gt;
&lt;td&gt;30 sec/PR avg&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Overall time savings&lt;/td&gt;
&lt;td&gt;20-30%&lt;/td&gt;
&lt;td&gt;60-75%&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Numbers vary by team size, codebase, and language, but direction is consistent. Going past 46% doesn't require a magic tool — it requires five workflow patterns to settle in.&lt;/p&gt;

&lt;h2&gt;
  
  
  🧩 Four Common Snags
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Snag 1 — Pattern 1 is set, but routine vs novel classification feels ambiguous.&lt;/strong&gt; Normal. For the first 1-2 weeks, classification wobbles. For borderline tasks, start as routine and reclassify as novel the moment AI output diverges from your intent. After a month, your team's classification heuristic stabilizes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Snag 2 — Verification harness is too strict, blocking commits frequently.&lt;/strong&gt; Requiring all four checks (typecheck, lint, test, build) to pass on every commit is frustrating in week one. Tier them: typecheck and lint as hard blocks, tests only on changed code, build only before pushing to main. Tighten progressively.&lt;/p&gt;
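&lt;p&gt;One way to tier the same husky hook; a sketch, where the &lt;code&gt;src/&lt;/code&gt; filter is an assumption about your layout:&lt;/p&gt;

```shell
#!/usr/bin/env sh
# .husky/pre-commit, tiered: hard blocks first, expensive checks only when relevant
. "$(dirname -- "$0")/_/husky.sh"

pnpm typecheck || exit 1       # hard block
pnpm lint --quiet || exit 1    # hard block

# Run tests only when staged changes touch source files
if git diff --cached --name-only | grep -q '^src/'; then
  pnpm test --run --silent || exit 1
fi

# Build check belongs before a main push: move it to .husky/pre-push
```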

&lt;p&gt;&lt;strong&gt;Snag 3 — Context engineering tried, but unclear which files to pick.&lt;/strong&gt; Reverse-engineer from your own past PRs. Look at which files were modified together in the last 5 PRs; that's your context curation unit. When the same task type returns, pin the same file bundle with &lt;code&gt;@file&lt;/code&gt;.&lt;/p&gt;
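&lt;p&gt;Git can surface those co-modified bundles directly; a sketch that assumes your team merges via merge commits (drop &lt;code&gt;--merges&lt;/code&gt; if you squash-merge):&lt;/p&gt;

```shell
# Rank files by how often they changed in the last 5 merged PRs.
# The empty --pretty=format: suppresses commit headers, leaving only file paths.
co_modified() {
  git log --merges -n 5 --name-only --pretty=format: | sort | uniq -c | sort -rn
}
```

&lt;p&gt;The top of that list is usually the file bundle worth pinning for the recurring task type.&lt;/p&gt;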

&lt;p&gt;&lt;strong&gt;Snag 4 — Prompt versioning directory gets messy fast.&lt;/strong&gt; Keep notes on outcome for the first 5 prompts, prune low-frequency ones after a month. Policy: only keep prompts the entire team uses 1+ times per week. Natural curation.&lt;/p&gt;

&lt;h2&gt;
  
  
  ⚖️ Where the Five Patterns Don't Apply
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Large legacy codebase migrations.&lt;/strong&gt; Framework or language transitions on 50K+ lines of legacy code see very small or negative AI tool benefits — domain knowledge and decision cost dominate. Use AI as a search/docs aid only; humans make decisions and implementations.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Security-critical code.&lt;/strong&gt; Auth, payments, encryption — verification cost of AI output exceeds writing cost. Without a guard layer like the &lt;a href="https://dev.to/brandon-vibestart/lakera-guard-in-30-lines-production-ready-ai-safety-for-nextjs-route-handlers-2026-4j70"&gt;Lakera Guard integration pattern I covered last week&lt;/a&gt;, don't trust AI output as-is.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Domain models the team hasn't agreed on.&lt;/strong&gt; Domain models form through human consensus and iterative debate. AI quickly producing a plausible model doesn't shorten consensus — it bypasses it. You'll re-architect six months later.&lt;/p&gt;

&lt;h2&gt;
  
  
  🪜 Where to Go From Here
&lt;/h2&gt;

&lt;p&gt;The 46% average is an average — not your team's ceiling. With the five patterns in place, 70-80% becomes a normal result.&lt;/p&gt;

&lt;p&gt;If you're integrating AI tools into a Next.js project, my &lt;a href="https://dev.to/brandon-vibestart/from-v0-output-to-production-nextjs-in-90-minutes-a-6-step-integration-workflow-2026-4c60"&gt;v0 Output to Production Next.js — 6-Step Integration Workflow&lt;/a&gt; covers the production layer that pairs with these workflow patterns.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Originally published on &lt;a href="https://vibe-start.com/en/blog/ai-coding-productivity-patterns-2026" rel="noopener noreferrer"&gt;vibe-start.com&lt;/a&gt;. I'm building &lt;a href="https://vibe-start.com" rel="noopener noreferrer"&gt;VibeStart&lt;/a&gt; — a 30-minute path for non-developers to start AI-assisted coding. Launching on &lt;a href="https://www.producthunt.com" rel="noopener noreferrer"&gt;Product Hunt May 26, 2026&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>productivity</category>
      <category>programming</category>
      <category>ai</category>
      <category>claude</category>
    </item>
    <item>
      <title>From v0 Output to Production Next.js in 90 Minutes — A 6-Step Integration Workflow (2026)</title>
      <dc:creator>Vibe-Start</dc:creator>
      <pubDate>Sun, 03 May 2026 01:32:26 +0000</pubDate>
      <link>https://forem.com/brandon-vibestart/from-v0-output-to-production-nextjs-in-90-minutes-a-6-step-integration-workflow-2026-4c60</link>
      <guid>https://forem.com/brandon-vibestart/from-v0-output-to-production-nextjs-in-90-minutes-a-6-step-integration-workflow-2026-4c60</guid>
      <description>&lt;h2&gt;
  
  
  🤔 Why v0 Output Alone Isn't Production-Ready
&lt;/h2&gt;

&lt;p&gt;If you've used v0.dev to spin up a landing page, you've probably hit the same wall on the next step. The component looks clean inside v0, but the moment you drop it into your Next.js project the design tokens drift, dark mode breaks, metadata is empty, and Lighthouse scores land in the 60s. This isn't a v0 limitation — it's that v0's output is "design-mock React," not "a part of your project."&lt;/p&gt;

&lt;p&gt;Pushing it to production-ready requires touching six additional areas during integration: analyzing the export's dependencies, restructuring routes and components for the App Router, aligning with your design system (typically shadcn/ui), filling in SEO via the Next.js Metadata API, optimizing images, fonts, and bundle size, and wiring up analytics plus A/B testing. This guide walks through those six steps as concrete code patterns.&lt;/p&gt;

&lt;h2&gt;
  
  
  📋 The 6-Step Workflow at a Glance
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Step&lt;/th&gt;
&lt;th&gt;Task&lt;/th&gt;
&lt;th&gt;Time&lt;/th&gt;
&lt;th&gt;Output&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;td&gt;v0 export · dependency analysis&lt;/td&gt;
&lt;td&gt;10 min&lt;/td&gt;
&lt;td&gt;Component list + external library inventory&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;2&lt;/td&gt;
&lt;td&gt;Split into App Router routes and components&lt;/td&gt;
&lt;td&gt;15 min&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;app/(marketing)/page.tsx&lt;/code&gt; + &lt;code&gt;components/landing/*&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;3&lt;/td&gt;
&lt;td&gt;shadcn/ui alignment · design token mapping&lt;/td&gt;
&lt;td&gt;20 min&lt;/td&gt;
&lt;td&gt;Unified &lt;code&gt;tailwind.config.ts&lt;/code&gt; tokens + working dark mode&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;4&lt;/td&gt;
&lt;td&gt;Metadata API · JSON-LD · OG image&lt;/td&gt;
&lt;td&gt;15 min&lt;/td&gt;
&lt;td&gt;SEO score in the 90s&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;5&lt;/td&gt;
&lt;td&gt;Image · font · bundle optimization&lt;/td&gt;
&lt;td&gt;20 min&lt;/td&gt;
&lt;td&gt;LCP under 2.5s, CLS under 0.1&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;6&lt;/td&gt;
&lt;td&gt;Analytics · A/B testing&lt;/td&gt;
&lt;td&gt;10 min&lt;/td&gt;
&lt;td&gt;Vercel Analytics + GrowthBook or Statsig wired&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;About 90 minutes total brings a single page to production standard. v0 gets you the output in 1 hour; this 90 minutes makes it ready for real traffic.&lt;/p&gt;

&lt;h2&gt;
  
  
  🛠 Step 1 — v0 Export and Dependency Analysis
&lt;/h2&gt;

&lt;p&gt;Top-right of v0 → Code → Download gives you a zip. After unzipping you'll see &lt;code&gt;app/page.tsx&lt;/code&gt;, &lt;code&gt;components/&lt;/code&gt;, and &lt;code&gt;package.json&lt;/code&gt;. The first thing to inspect is &lt;code&gt;dependencies&lt;/code&gt; in &lt;code&gt;package.json&lt;/code&gt;. v0 auto-includes shadcn-compatible packages like &lt;code&gt;lucide-react&lt;/code&gt;, &lt;code&gt;class-variance-authority&lt;/code&gt;, and &lt;code&gt;tailwind-merge&lt;/code&gt; — check if your project already has them. Version mismatches cause conflicts.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Compare v0 export deps with your project&lt;/span&gt;
diff &amp;lt;&lt;span class="o"&gt;(&lt;/span&gt;jq &lt;span class="nt"&gt;-r&lt;/span&gt; &lt;span class="s1"&gt;'.dependencies | keys[]'&lt;/span&gt; v0-export/package.json | &lt;span class="nb"&gt;sort&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
     &amp;lt;&lt;span class="o"&gt;(&lt;/span&gt;jq &lt;span class="nt"&gt;-r&lt;/span&gt; &lt;span class="s1"&gt;'.dependencies | keys[]'&lt;/span&gt; package.json | &lt;span class="nb"&gt;sort&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Pull only the genuinely new packages and install them with a single &lt;code&gt;pnpm add&lt;/code&gt;. After this step, the v0 code compiles inside your project without import errors.&lt;/p&gt;
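&lt;p&gt;The same &lt;code&gt;jq&lt;/code&gt; comparison can feed &lt;code&gt;pnpm add&lt;/code&gt; directly; a sketch using &lt;code&gt;comm&lt;/code&gt;, where the temp-file paths are arbitrary choices:&lt;/p&gt;

```shell
# Install only the packages the v0 export adds on top of your project.
# Assumes package.json and v0-export/package.json as in the diff above.
install_new_deps() {
  jq -r '.dependencies | keys[]' package.json | sort -o /tmp/ours.txt
  jq -r '.dependencies | keys[]' v0-export/package.json | sort -o /tmp/theirs.txt
  # comm -13 keeps lines unique to the second file (the v0 export)
  comm -13 /tmp/ours.txt /tmp/theirs.txt | xargs -r pnpm add
}
```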

&lt;h2&gt;
  
  
  📦 Step 2 — App Router Routes and Component Split
&lt;/h2&gt;

&lt;p&gt;v0 puts Hero, Features, Testimonial, FAQ, and Footer all in one &lt;code&gt;app/page.tsx&lt;/code&gt;. For production App Router, split it. Recommended structure:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;app/
├── (marketing)/
│   ├── page.tsx              # Route group, separate marketing layout
│   └── layout.tsx
├── layout.tsx
components/
└── landing/
    ├── hero.tsx
    ├── features.tsx
    ├── testimonial.tsx
    ├── faq.tsx
    └── footer.tsx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;(marketing)&lt;/code&gt; route group exists so your marketing pages (landing, pricing, about) and app pages (&lt;code&gt;app/dashboard&lt;/code&gt;, etc.) carry different layouts. Marketing layout always has header/footer; app layout has sidebar. Splitting v0's monolith component into meaningful pieces under &lt;code&gt;components/landing/&lt;/code&gt; also makes Hero patterns reusable across &lt;code&gt;/pricing&lt;/code&gt;, &lt;code&gt;/about&lt;/code&gt;, and so on.&lt;/p&gt;

&lt;h2&gt;
  
  
  🎨 Step 3 — shadcn/ui Alignment and Design Tokens
&lt;/h2&gt;

&lt;p&gt;This is where things break the most. v0 outputs its own palette (e.g., &lt;code&gt;bg-zinc-900&lt;/code&gt;), but your project likely uses shadcn/ui semantic tokens (&lt;code&gt;bg-background&lt;/code&gt;, &lt;code&gt;text-foreground&lt;/code&gt;, &lt;code&gt;border-border&lt;/code&gt;). Leave v0's classes untouched and the dark mode toggle won't change anything.&lt;/p&gt;

&lt;p&gt;The fix is a bulk substitution from v0 absolute colors to shadcn tokens.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// Mapping examples&lt;/span&gt;
&lt;span class="c1"&gt;// bg-white         → bg-background&lt;/span&gt;
&lt;span class="c1"&gt;// bg-zinc-900      → bg-foreground&lt;/span&gt;
&lt;span class="c1"&gt;// text-black       → text-foreground&lt;/span&gt;
&lt;span class="c1"&gt;// text-zinc-500    → text-muted-foreground&lt;/span&gt;
&lt;span class="c1"&gt;// border-zinc-200  → border-border&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Complex mappings sometimes get applied automatically when you pull components via the &lt;a href="https://ui.shadcn.com/docs/cli" rel="noopener noreferrer"&gt;shadcn/ui CLI&lt;/a&gt; &lt;code&gt;add&lt;/code&gt; command, but for v0 output a direct mapping is faster. Verify the CSS variables (&lt;code&gt;--background&lt;/code&gt;, &lt;code&gt;--foreground&lt;/code&gt;) are defined in &lt;code&gt;globals.css&lt;/code&gt;, then check that the dark mode toggle properly inverts colors. That completes the alignment.&lt;/p&gt;
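&lt;p&gt;The mapping can be applied mechanically; a minimal &lt;code&gt;sed&lt;/code&gt; sketch covering just the pairs above, which you'd extend with your own tokens:&lt;/p&gt;

```shell
# Bulk-substitute v0 absolute colors for shadcn/ui tokens on stdin,
# e.g.: cat components/landing/hero.tsx | map_v0_tokens
map_v0_tokens() {
  sed -e 's/bg-white/bg-background/g' \
      -e 's/bg-zinc-900/bg-foreground/g' \
      -e 's/text-black/text-foreground/g' \
      -e 's/text-zinc-500/text-muted-foreground/g' \
      -e 's/border-zinc-200/border-border/g'
}
```

&lt;p&gt;Run it per file and review the diff; class names embedded in strings or comments also get rewritten, so don't pipe the output straight into place without looking.&lt;/p&gt;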

&lt;h2&gt;
  
  
  🔍 Step 4 — Metadata API · JSON-LD · OG Image
&lt;/h2&gt;

&lt;p&gt;v0 output ships with empty metadata. Use the Next.js 16 App Router Metadata API to fill SEO basics.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight tsx"&gt;&lt;code&gt;&lt;span class="c1"&gt;// app/(marketing)/page.tsx&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="kd"&gt;type&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;Metadata&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;next&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;metadata&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;Metadata&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="na"&gt;title&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Notely — AI notes that turn meetings into action items&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Record meetings, get a 30-second summary with action items and follow-up questions. Notely is the AI assistant built for note work.&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;openGraph&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;title&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Notely — AI notes that auto-organize&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;30-second meeting summaries from voice recording&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;images&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;/og-image.png&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
    &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;website&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="na"&gt;twitter&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;card&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;summary_large_image&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="na"&gt;alternates&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;canonical&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;https://example.com/&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Place a 1200×630 PNG at &lt;code&gt;public/og-image.png&lt;/code&gt;, or generate dynamically with &lt;code&gt;app/opengraph-image.tsx&lt;/code&gt; using Next.js's &lt;code&gt;ImageResponse&lt;/code&gt;. Dynamic generation lets each page produce its own OG image. Add JSON-LD to improve odds of rich snippets in search results.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight tsx"&gt;&lt;code&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;script&lt;/span&gt;
  &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"application/ld+json"&lt;/span&gt;
  &lt;span class="na"&gt;dangerouslySetInnerHTML&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;__html&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;JSON&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;stringify&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
      &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;@context&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;https://schema.org&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;@type&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;SoftwareApplication&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Notely&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;applicationCategory&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;ProductivityApplication&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;operatingSystem&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Web&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;offers&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;@type&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Offer&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;price&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;0&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;priceCurrency&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;USD&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="p"&gt;}),&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;/&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  ⚡ Step 5 — Image · Font · Bundle Optimization
&lt;/h2&gt;

&lt;p&gt;LCP (Largest Contentful Paint) and CLS (Cumulative Layout Shift) directly affect Vercel Analytics scores and search ranking. Three fixes typically move a Lighthouse performance score from the 60s into the 90s.&lt;/p&gt;

&lt;p&gt;First, swap raw &lt;code&gt;&amp;lt;img&amp;gt;&lt;/code&gt; tags for &lt;code&gt;next/image&lt;/code&gt;'s &lt;code&gt;Image&lt;/code&gt; component. Add &lt;code&gt;priority&lt;/code&gt; to the Hero image — LCP improves immediately.&lt;/p&gt;

&lt;p&gt;Second, self-host fonts via &lt;code&gt;next/font/google&lt;/code&gt;. v0 often suggests Inter via external fetch — leaving it that way causes CLS.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight tsx"&gt;&lt;code&gt;&lt;span class="c1"&gt;// app/layout.tsx&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;Inter&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;next/font/google&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;inter&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Inter&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;subsets&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;latin&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="na"&gt;display&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;swap&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="k"&gt;default&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;RootLayout&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="nx"&gt;children&lt;/span&gt; &lt;span class="p"&gt;}:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nl"&gt;children&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;React&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;ReactNode&lt;/span&gt; &lt;span class="p"&gt;})&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;return &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;html&lt;/span&gt; &lt;span class="na"&gt;lang&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"en"&lt;/span&gt; &lt;span class="na"&gt;className&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;inter&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;className&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
      &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;body&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;children&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;body&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
    &lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;html&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
  &lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Third, audit bundle size with &lt;code&gt;@next/bundle-analyzer&lt;/code&gt;. Drop unused libraries v0 pulled in, and dynamic-import heavy ones like &lt;code&gt;framer-motion&lt;/code&gt;.&lt;/p&gt;
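If you haven't wired the analyzer before, a minimal `next.config.ts` sketch (assumes `pnpm add -D @next/bundle-analyzer` has been run):

```typescript
// next.config.ts (sketch): wire @next/bundle-analyzer behind an env flag.
// Assumes the package is installed as a dev dependency.
import bundleAnalyzer from "@next/bundle-analyzer";

const withBundleAnalyzer = bundleAnalyzer({
  enabled: process.env.ANALYZE === "true",
});

export default withBundleAnalyzer({
  // ...your existing Next.js config
});
```

Run `ANALYZE=true pnpm build` and the analyzer opens a treemap of each bundle; anything heavy that only renders below the fold is a candidate for `next/dynamic`.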

&lt;h2&gt;
  
  
  📊 Step 6 — Analytics and A/B Testing
&lt;/h2&gt;

&lt;p&gt;The final step is operations. Traffic without measurement leaves you guessing at your next hypothesis. The best ROI combo is Vercel Analytics + GrowthBook.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight tsx"&gt;&lt;code&gt;&lt;span class="c1"&gt;// app/layout.tsx&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;Analytics&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;@vercel/analytics/next&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;SpeedInsights&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;@vercel/speed-insights/next&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="k"&gt;default&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;RootLayout&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="nx"&gt;children&lt;/span&gt; &lt;span class="p"&gt;})&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;return &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;html&lt;/span&gt; &lt;span class="na"&gt;lang&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"en"&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
      &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;body&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
        &lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;children&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;
        &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;Analytics&lt;/span&gt; &lt;span class="p"&gt;/&amp;gt;&lt;/span&gt;
        &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;SpeedInsights&lt;/span&gt; &lt;span class="p"&gt;/&amp;gt;&lt;/span&gt;
      &lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;body&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
    &lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;html&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
  &lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;@vercel/analytics&lt;/code&gt; collects page views and events; &lt;code&gt;@vercel/speed-insights&lt;/code&gt; automatically gathers Core Web Vitals. For A/B testing, add the GrowthBook or Statsig SDK and serve 2-3 Hero headline variants randomly — compare click-through rates. For the first 1,000 visitors, just watch page views. Statistical significance starts to mean something past that line.&lt;/p&gt;
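If you want to see the mechanics before adopting an SDK, sticky assignment is the core trick: hash a stable visitor ID so the same visitor always sees the same headline. A dependency-free sketch (the variant copy and helper name are made up for illustration; GrowthBook/Statsig handle this, plus the statistics, in production):

```typescript
// Hypothetical sketch of sticky variant assignment.
const heroVariants = [
  "Ship your landing page in 90 minutes",
  "From v0 mock to production Next.js",
  "Launch first, polish later",
];

function pickVariant(visitorId: string, variants: string[]): string {
  // Deterministic hash: the same visitor always lands on the same variant
  let hash = 0;
  for (const ch of visitorId) {
    hash = (hash * 31 + ch.charCodeAt(0)) % 100003;
  }
  return variants[hash % variants.length];
}
```

Log clicks against the chosen variant index and compare click-through rates once you're past the traffic line above.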

&lt;h2&gt;
  
  
  ✅ Integration Completion Checklist
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;[ ] &lt;code&gt;pnpm build&lt;/code&gt; finishes with zero errors&lt;/li&gt;
&lt;li&gt;[ ] All colors invert properly when dark mode toggles&lt;/li&gt;
&lt;li&gt;[ ] Hero image loads instantly with &lt;code&gt;priority&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;[ ] Fonts self-hosted via &lt;code&gt;next/font&lt;/code&gt;, CLS under 0.1&lt;/li&gt;
&lt;li&gt;[ ] Metadata, OG image, and JSON-LD all applied&lt;/li&gt;
&lt;li&gt;[ ] Vercel Analytics and Speed Insights collecting data&lt;/li&gt;
&lt;li&gt;[ ] No layout breaks at mobile viewport 375px&lt;/li&gt;
&lt;li&gt;[ ] Lighthouse score in the 90s&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Seven or more checked = production-ready. Eight checked = Web Vitals likely sending positive search-ranking signals.&lt;/p&gt;

&lt;h2&gt;
  
  
  🧩 Four Common Snags and Their Diagnosis
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Snag 1 — Some text invisible in dark mode (white text on white).&lt;/strong&gt; Absolute color classes like &lt;code&gt;text-white&lt;/code&gt; left over from v0 output. Replace all absolute colors with shadcn tokens (&lt;code&gt;text-foreground&lt;/code&gt;, &lt;code&gt;text-muted-foreground&lt;/code&gt;).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Snag 2 — Hero image loads late, LCP over 4 seconds.&lt;/strong&gt; Raw &lt;code&gt;&amp;lt;img&amp;gt;&lt;/code&gt; tag still in place. Switch to &lt;code&gt;next/image&lt;/code&gt;'s &lt;code&gt;Image&lt;/code&gt; component and add &lt;code&gt;priority&lt;/code&gt;. If the image is from an external URL, register the domain in &lt;code&gt;next.config.js&lt;/code&gt;'s &lt;code&gt;images.remotePatterns&lt;/code&gt;.&lt;/p&gt;
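For reference, a minimal `next.config.ts` sketch for that last point (the hostname below is a placeholder; use your actual image host):

```typescript
// next.config.ts (sketch): allow external hero images through next/image.
// "images.example.com" is a placeholder domain.
import type { NextConfig } from "next";

const config: NextConfig = {
  images: {
    remotePatterns: [{ protocol: "https", hostname: "images.example.com" }],
  },
};

export default config;
```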

&lt;p&gt;&lt;strong&gt;Snag 3 — &lt;code&gt;pnpm build&lt;/code&gt; throws "Module not found."&lt;/strong&gt; v0 imported a library you don't have. Re-run the Step 1 dependency analysis and install missing packages with &lt;code&gt;pnpm add&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Snag 4 — Metadata API doesn't work.&lt;/strong&gt; v0 output dropped into a Pages Router project. Confirm &lt;code&gt;app/&lt;/code&gt; directory structure first. Either migrate to App Router or use &lt;code&gt;next/head&lt;/code&gt; for Pages Router metadata.&lt;/p&gt;

&lt;h2&gt;
  
  
  ⚖️ v0 vs Claude Design vs From-Scratch — When to Pick What
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Tool&lt;/th&gt;
&lt;th&gt;Strengths&lt;/th&gt;
&lt;th&gt;Weaknesses&lt;/th&gt;
&lt;th&gt;Best For&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;v0.dev&lt;/td&gt;
&lt;td&gt;Instant React+Tailwind, shadcn-compatible&lt;/td&gt;
&lt;td&gt;Token reconciliation needed for your project&lt;/td&gt;
&lt;td&gt;Quickly adding a page to a Next.js project&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Claude Design&lt;/td&gt;
&lt;td&gt;Fast prototyping, multiple output formats&lt;/td&gt;
&lt;td&gt;Mixed output formats mean longer integration&lt;/td&gt;
&lt;td&gt;Quick design previews&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;From-scratch&lt;/td&gt;
&lt;td&gt;Maximum customization, zero deps&lt;/td&gt;
&lt;td&gt;Highest time cost&lt;/td&gt;
&lt;td&gt;Teams with strong existing design systems&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;From a developer perspective, "v0 output → Next.js integration" is the most efficient flow — that's the core conclusion of this article.&lt;/p&gt;

&lt;h2&gt;
  
  
  💡 Three Operational Tips
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Tip 1 — Finish one section at a time before exporting.&lt;/strong&gt; Don't ask v0 to generate the whole page at once. Build Hero → preview → Features → preview, exporting only when each section feels right. Integration friction drops dramatically.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Tip 2 — Bulk-replace Tailwind tokens with a script.&lt;/strong&gt; The color classes in v0 output follow a consistent pattern that's easy to handle with &lt;code&gt;sed&lt;/code&gt; or VS Code regex search. Build the mapping sheet once and the next v0 integration takes 5 minutes.&lt;/p&gt;
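As a concrete sketch, here's the pattern run against a scratch file (the two-token mapping is a sample; build yours from your project's `globals.css` tokens, and point the same `sed` at `components/` in practice):

```shell
# Demo on a scratch file; the mapping is a sample, extend from your token sheet.
mkdir -p /tmp/v0-demo
printf 'class="text-white bg-white"\n' > /tmp/v0-demo/Hero.tsx
sed -i -e 's/text-white/text-foreground/g' -e 's/bg-white/bg-background/g' /tmp/v0-demo/Hero.tsx
cat /tmp/v0-demo/Hero.tsx
```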

&lt;p&gt;&lt;strong&gt;Tip 3 — Measure Lighthouse scores after deployment.&lt;/strong&gt; Local scores differ from scores on Vercel's infrastructure. Always test against the deployed preview URL once integration completes. If you're under 90, return to Step 5 optimization.&lt;/p&gt;

&lt;h2&gt;
  
  
  🪜 Where to Go From Here
&lt;/h2&gt;

&lt;p&gt;v0 is a tool for pulling design mocks fast; the real value is in the integration workflow that brings those mocks into your production project. Once you've internalized these six steps, every future landing page, pricing page, and about page becomes a 90-minute job to production standard.&lt;/p&gt;

&lt;p&gt;If you're building AI-powered features into your Next.js app, my &lt;a href="https://dev.to/brandon-vibestart/lakera-guard-in-30-lines-production-ready-ai-safety-for-nextjs-route-handlers-2026-4j70"&gt;Lakera Guard integration article&lt;/a&gt; covers the safety layer that should sit in front of your AI Route Handlers — same 30-line philosophy, applied to AI security.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Originally published on &lt;a href="https://vibe-start.com/en/blog/v0-to-production-nextjs-integration" rel="noopener noreferrer"&gt;vibe-start.com&lt;/a&gt;. I'm building &lt;a href="https://vibe-start.com" rel="noopener noreferrer"&gt;VibeStart&lt;/a&gt; — a 30-minute path for non-developers to start AI-assisted coding. Launching on &lt;a href="https://www.producthunt.com" rel="noopener noreferrer"&gt;Product Hunt May 26, 2026&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>nextjs</category>
      <category>react</category>
      <category>webdev</category>
      <category>tailwindcss</category>
    </item>
    <item>
      <title>Lakera Guard in 30 Lines — Production-Ready AI Safety for Next.js Route Handlers (2026)</title>
      <dc:creator>Vibe-Start</dc:creator>
      <pubDate>Sat, 02 May 2026 15:31:58 +0000</pubDate>
      <link>https://forem.com/brandon-vibestart/lakera-guard-in-30-lines-production-ready-ai-safety-for-nextjs-route-handlers-2026-4j70</link>
      <guid>https://forem.com/brandon-vibestart/lakera-guard-in-30-lines-production-ready-ai-safety-for-nextjs-route-handlers-2026-4j70</guid>
      <description>&lt;h2&gt;
  
  
  🛡 Why Your AI Route Handlers Need a Guard Layer
&lt;/h2&gt;

&lt;p&gt;The moment you ship &lt;code&gt;/api/chat&lt;/code&gt; in Next.js App Router, you have a structural security problem. User input flows directly into your LLM prompt, which means prompt injection, PII leakage, and system-prompt overrides are exposed without a single line of malicious code. OWASP's 2026 Agentic Top 10 (ASI) covers exactly this surface in ASI01 (Goal Hijack) and ASI02 (Memory Poisoning).&lt;/p&gt;

&lt;p&gt;Regex blocklists fall apart against variant inputs (&lt;code&gt;"!gnore previous instructions"&lt;/code&gt;, base64-encoded payloads, newline tricks), and writing "refuse harmful requests" in your system prompt is trivially bypassed. The 2026 standard is a separate validation layer in front of the LLM call: only validated inputs reach the model. Lakera Guard delivers that validation as a one-call SaaS — the lowest-friction option on the market.&lt;/p&gt;

&lt;h2&gt;
  
  
  📋 The 4 Risks Lakera Guard Catches
&lt;/h2&gt;

&lt;p&gt;POST text to the Lakera Guard API and you get back a per-category risk score (0.0 to 1.0). Standard policy: block above 0.5, pass below.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Category&lt;/th&gt;
&lt;th&gt;Risk it catches&lt;/th&gt;
&lt;th&gt;OWASP ASI mapping&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;prompt_injection&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;System-prompt override, mission swap&lt;/td&gt;
&lt;td&gt;ASI01 Goal Hijack&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;jailbreak&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Safety guideline bypass (DAN, "ignore previous")&lt;/td&gt;
&lt;td&gt;ASI01 / ASI06&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;pii&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Emails, phone, SSN, card numbers in input&lt;/td&gt;
&lt;td&gt;ASI02 Memory Poisoning&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;moderation&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Violence, self-harm, hate, sexual content&lt;/td&gt;
&lt;td&gt;ASI05 Cascading Hallucination&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The free tier covers 10,000 calls per month — plenty for personal projects or a side SaaS during validation. Switch to paid when production traffic crosses that line.&lt;/p&gt;
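In code, that policy is one line. A sketch (hypothetical helper; the category-to-score map is a simplified stand-in for the real Lakera Guard response shape):

```typescript
// Sketch of the "block above 0.5" policy described above.
// Scores are per-category risk values between 0.0 and 1.0.
function shouldBlock(
  categories: { [category: string]: number },
  threshold = 0.5
): boolean {
  return Object.values(categories).some((score) => score > threshold);
}
```

Tighten the threshold per category if your product is unusually sensitive to one risk (e.g. PII in a healthcare app).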

&lt;h2&gt;
  
  
  🔑 Setup — 5 Minutes End to End
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. Get an API key
&lt;/h3&gt;

&lt;p&gt;Sign up at &lt;a href="https://www.lakera.ai/" rel="noopener noreferrer"&gt;lakera.ai&lt;/a&gt; → Dashboard → &lt;strong&gt;API Keys&lt;/strong&gt; → create a new key. Keys start with the &lt;code&gt;lak_&lt;/code&gt; prefix.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Add the env var
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# .env.local&lt;/span&gt;
&lt;span class="nv"&gt;LAKERA_GUARD_API_KEY&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;lak_your_key_here
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Don't commit &lt;code&gt;.env.local&lt;/code&gt;. On Vercel, add the same variable in Project Settings → Environment Variables. LLM calls in this guide route through &lt;strong&gt;Vercel AI Gateway (OIDC)&lt;/strong&gt; — no OpenAI/Anthropic provider keys in code. One &lt;code&gt;vercel env pull .env.local&lt;/code&gt; provisions the &lt;code&gt;VERCEL_OIDC_TOKEN&lt;/code&gt; and you're done.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Use fetch directly — zero dependencies
&lt;/h3&gt;

&lt;p&gt;Lakera ships an SDK, but for Edge Runtime compatibility plain &lt;code&gt;fetch&lt;/code&gt; is the safer choice. No &lt;code&gt;node_modules&lt;/code&gt; bloat and the same code runs identically on Edge.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// lib/lakera.ts&lt;/span&gt;
&lt;span class="kd"&gt;type&lt;/span&gt; &lt;span class="nx"&gt;GuardCategory&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;prompt_injection&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;jailbreak&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;pii&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;moderation&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;type&lt;/span&gt; &lt;span class="nx"&gt;GuardResult&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="na"&gt;flagged&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;boolean&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="nl"&gt;categories&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;Record&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;GuardCategory&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="kr"&gt;number&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;lakeraGuard&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;input&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt; &lt;span class="nb"&gt;Promise&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;GuardResult&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;res&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;fetch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;https://api.lakera.ai/v2/guard&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;method&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;POST&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;headers&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Content-Type&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;application/json&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;Authorization&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;`Bearer &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;LAKERA_GUARD_API_KEY&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="na"&gt;body&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;JSON&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;stringify&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;messages&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[{&lt;/span&gt; &lt;span class="na"&gt;role&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;user&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;content&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;input&lt;/span&gt; &lt;span class="p"&gt;}]&lt;/span&gt; &lt;span class="p"&gt;}),&lt;/span&gt;
  &lt;span class="p"&gt;});&lt;/span&gt;

  &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;ok&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;throw&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Error&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`Lakera Guard &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;status&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;json&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="nb"&gt;Promise&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;GuardResult&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That's the entire helper: roughly 20 lines including the types. Reuse this one file from every Route Handler that touches an LLM.&lt;/p&gt;

&lt;h2&gt;
  
  
  💻 30-Line Integration — App Router Route Handler
&lt;/h2&gt;

&lt;p&gt;The simplest one-shot chat endpoint with Lakera Guard wired in. User message arrives → ① Lakera validates → ② if flagged, return 422 → ③ otherwise, the model is called.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// app/api/chat/route.ts&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;NextResponse&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;next/server&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;generateText&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;ai&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;lakeraGuard&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;@/lib/lakera&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;runtime&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;edge&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;POST&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;req&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;Request&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt; &lt;span class="nb"&gt;Promise&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;Response&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;message&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;req&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;json&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;message&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt; &lt;span class="p"&gt;};&lt;/span&gt;

  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;guard&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;lakeraGuard&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;message&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;guard&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;flagged&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;NextResponse&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;json&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
      &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;error&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Input blocked by safety check&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
      &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;status&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;422&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;text&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;generateText&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
    &lt;span class="na"&gt;model&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;openai/gpt-5.4&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;prompt&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;message&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="p"&gt;});&lt;/span&gt;

  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;NextResponse&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;json&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;reply&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;text&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The entire defense is &lt;code&gt;if (guard.flagged) return 422&lt;/code&gt;. Closing the gate before the LLM call prevents wasted tokens, latency, and log pollution all at once. The model is specified as a plain &lt;code&gt;"provider/model"&lt;/code&gt; string — AI SDK v6 routes this through the AI Gateway automatically, with no provider SDK import and no API key in code. In production, omit category names from the 422 body — exposing them gives bypass attempts a free training signal.&lt;/p&gt;
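One more production decision: what happens when the Guard API itself errors? A fail-closed sketch (hypothetical wrapper; `check` stands in for the `lakeraGuard` helper above):

```typescript
// Hypothetical fail-closed wrapper: if the guard call throws (network error,
// rate limit, 5xx), treat the input as blocked rather than passing it through.
type GuardResult = { flagged: boolean; categories: { [k: string]: number } };

async function failClosed(
  check: (input: string) => any, // in practice: the lakeraGuard helper
  input: string
) {
  try {
    return (await check(input)) as GuardResult;
  } catch {
    // Guard unreachable: never let unvalidated input reach the LLM
    return { flagged: true, categories: {} } as GuardResult;
  }
}
```

Whether to fail open or closed is a product call; for anything touching user data, fail closed and alert on the error rate.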

&lt;h2&gt;
  
  
  🌊 Streaming Chat — Vercel AI SDK Integration
&lt;/h2&gt;

&lt;p&gt;Real chat UIs stream. With Vercel AI SDK's &lt;code&gt;streamText&lt;/code&gt;, the question is where to put the guard, and the answer is &lt;strong&gt;before the stream opens&lt;/strong&gt;. Output validation belongs in a separate layer.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// app/api/chat-stream/route.ts&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;streamText&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;convertToModelMessages&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="kd"&gt;type&lt;/span&gt; &lt;span class="nx"&gt;UIMessage&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;ai&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;lakeraGuard&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;@/lib/lakera&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;runtime&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;edge&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;POST&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;req&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;Request&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt; &lt;span class="nb"&gt;Promise&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;Response&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;messages&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;req&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;json&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;messages&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;UIMessage&lt;/span&gt;&lt;span class="p"&gt;[]&lt;/span&gt; &lt;span class="p"&gt;};&lt;/span&gt;

  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;lastUser&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;messages&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;filter&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nx"&gt;m&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;m&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;role&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;user&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;pop&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;lastUserText&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;lastUser&lt;/span&gt;&lt;span class="p"&gt;?.&lt;/span&gt;&lt;span class="nx"&gt;parts&lt;/span&gt;
    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;filter&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nx"&gt;p&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;p&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="kd"&gt;type&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;text&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;map&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nx"&gt;p&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;p&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;text&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;join&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

  &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="nx"&gt;lastUserText&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Response&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;No user text&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;status&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;400&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;

  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;guard&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;lakeraGuard&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;lastUserText&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;guard&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;flagged&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Response&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;JSON&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;stringify&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;error&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;blocked&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="p"&gt;}),&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="na"&gt;status&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;422&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;headers&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Content-Type&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;application/json&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="p"&gt;});&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;streamText&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
    &lt;span class="na"&gt;model&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;openai/gpt-5.4&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;messages&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nf"&gt;convertToModelMessages&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;messages&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
  &lt;span class="p"&gt;});&lt;/span&gt;

  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;result&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;toUIMessageStreamResponse&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Two AI SDK v6 essentials are baked in here. ① The client sends &lt;code&gt;UIMessage[]&lt;/code&gt;, where each message has a &lt;code&gt;parts&lt;/code&gt; array (not a &lt;code&gt;content&lt;/code&gt; string) — extract user text by filtering parts of &lt;code&gt;type: "text"&lt;/code&gt;. ② &lt;code&gt;streamText&lt;/code&gt; returns a result whose &lt;code&gt;toUIMessageStreamResponse()&lt;/code&gt; is what &lt;code&gt;useChat&lt;/code&gt; clients expect (the older &lt;code&gt;toDataStreamResponse()&lt;/code&gt; was renamed in v6). Once a stream opens it's hard to cleanly cut tokens mid-flight, so blocking at the input stage wins on both UX and cost. Output-side risks (model emitting PII, model complying with jailbreak) belong in a downstream post-processing layer.&lt;/p&gt;

&lt;h2&gt;
  
  
  ⚙️ Cost &amp;amp; Latency — Real Numbers
&lt;/h2&gt;

&lt;p&gt;A few numbers worth knowing before you adopt this; they make the sizing and region decisions faster.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Metric&lt;/th&gt;
&lt;th&gt;Value&lt;/th&gt;
&lt;th&gt;Notes&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Average API latency&lt;/td&gt;
&lt;td&gt;80–120ms (us-east)&lt;/td&gt;
&lt;td&gt;Add ~100ms from APAC&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Free tier&lt;/td&gt;
&lt;td&gt;10,000 calls/month&lt;/td&gt;
&lt;td&gt;Enough for solo side projects&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Paid entry&lt;/td&gt;
&lt;td&gt;$99/month (50,000 calls)&lt;/td&gt;
&lt;td&gt;~$0.002 per call&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Edge Runtime&lt;/td&gt;
&lt;td&gt;✅ Fully compatible&lt;/td&gt;
&lt;td&gt;fetch-based, no cold start hit&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Response payload&lt;/td&gt;
&lt;td&gt;~300 bytes&lt;/td&gt;
&lt;td&gt;Negligible&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The 100–200ms of end-to-end guard latency (the table's 80–120ms API time plus network overhead) disappears next to first-token LLM latency (typically 500–1500ms). If you still want to shave it, pin your Edge Function region to us-east-1 to colocate with the Lakera endpoint.&lt;/p&gt;
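&lt;p&gt;In Next.js, that region pin is a one-line route segment config. A sketch assuming deployment on Vercel, where &lt;code&gt;iad1&lt;/code&gt; is the us-east-1 region ID; verify the identifier against your own host's region list:&lt;/p&gt;

```typescript
// app/api/chat-stream/route.ts
// Route segment config: run on the Edge runtime, pinned near the guard endpoint.
export const runtime = "edge";
// "iad1" is Vercel's us-east-1 identifier (an assumption; check your host's region IDs).
export const preferredRegion = "iad1";
```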

&lt;h2&gt;
  
  
  🚀 Production Checklist
&lt;/h2&gt;

&lt;p&gt;Five things to verify before you ship. Five-minute review.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Fail-open or fail-closed?&lt;/strong&gt; What happens if the Lakera API is down? Decide explicitly: security-first → fail-closed (block on error); availability-first → fail-open (pass + log).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Don't leak block reasons&lt;/strong&gt; — Strip categories and scores from the 422 response. Exposing them hands bypass attempts a feedback loop.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Mask blocked input in logs&lt;/strong&gt; — Persisting raw blocked content puts log readers in front of malicious payloads. Hash or truncate.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Track separately from rate limits&lt;/strong&gt; — Lakera blocks are likely intentional attacks. Count them per-IP/per-account distinct from generic rate limits, and ramp blocking duration on repeat offenders.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Alert on quota&lt;/strong&gt; — Wire an alert at 80% of your monthly quota. A traffic spike that you only notice on next month's invoice is an avoidable surprise.&lt;/li&gt;
&lt;/ol&gt;
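&lt;p&gt;Items 1 and 3 of the checklist can be folded into one small wrapper around the guard call. A minimal sketch, not Lakera's API: the &lt;code&gt;guard&lt;/code&gt; parameter stands in for the &lt;code&gt;lakeraGuard&lt;/code&gt; helper used earlier, and &lt;code&gt;FAIL_CLOSED&lt;/code&gt; is a hypothetical name for your policy switch.&lt;/p&gt;

```typescript
import { createHash } from "node:crypto"; // swap for Web Crypto on the Edge runtime

type GuardResult = { flagged: boolean };

// Explicit policy: block on guard-API errors (fail-closed) or wave through (fail-open).
const FAIL_CLOSED = true; // hypothetical flag; choose per your threat model

// Checklist item 3: never log raw blocked input; log a short hash instead.
function maskForLog(text: string): string {
  return createHash("sha256").update(text).digest("hex").slice(0, 16);
}

export async function guardedCheck(
  text: string,
  guard: (t: string) => Promise<GuardResult>, // e.g. the lakeraGuard helper from earlier
): Promise<{ blocked: boolean }> {
  try {
    const verdict = await guard(text);
    if (verdict.flagged) {
      // Checklist item 2: the caller returns a generic 422; categories stay server-side.
      console.warn(`guard blocked input sha256=${maskForLog(text)}`);
      return { blocked: true };
    }
    return { blocked: false };
  } catch (err) {
    console.error("guard API unreachable:", err);
    return { blocked: FAIL_CLOSED }; // checklist item 1: an explicit decision, not an accident
  }
}
```

&lt;p&gt;On the happy path this adds nothing; during a guard outage the behavior is whatever you decided in review, not whatever the unhandled exception happened to do.&lt;/p&gt;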

&lt;p&gt;For the broader OWASP ASI checklist that covers permissions, logging, and human-approval gates, pair this article with the &lt;a href="https://1daymillion.com/owasp-agentic-top-10-2026/" rel="noopener noreferrer"&gt;5-minute audit guide&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  📝 Next Layers
&lt;/h2&gt;

&lt;p&gt;Lakera Guard is the first input-validation layer. Once your runtime is stable, layer in:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Output validation&lt;/strong&gt; — Verify model responses don't contain PII (Lakera can score outputs too)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Call logging&lt;/strong&gt; — Langfuse or Helicone auto-records every call's I/O, cost, and latency (covers ASI09 Untraceability)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Human-approval gates&lt;/strong&gt; — Wire Slack-bot approval for risky tools like payment, external send (covers ASI06 / ASI10)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;NeMo Guardrails&lt;/strong&gt; — Policy-as-code over conversation flow itself. YAML overhead, but strong for complex agents&lt;/li&gt;
&lt;/ul&gt;
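&lt;p&gt;The output-validation layer can reuse the same guard shape on the finished response. A sketch under two assumptions: your guard helper accepts arbitrary strings, and withholding a completed answer is acceptable UX for your product.&lt;/p&gt;

```typescript
type GuardResult = { flagged: boolean };

// Post-hoc output check: score the model's final text, redact it if flagged.
// Wire this into streamText's onFinish callback or a non-streaming path as needed.
export async function redactIfFlagged(
  output: string,
  guard: (t: string) => Promise<GuardResult>, // same shape as the input-side helper
): Promise<string> {
  const verdict = await guard(output);
  return verdict.flagged
    ? "[response withheld: failed output policy check]"
    : output;
}
```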

&lt;p&gt;Stack all four and you cover ~90% of OWASP ASI Top 10 in production.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Originally published on &lt;a href="https://vibe-start.com/en/blog/lakera-guard-nextjs-integration" rel="noopener noreferrer"&gt;vibe-start.com&lt;/a&gt;. I'm building &lt;a href="https://vibe-start.com" rel="noopener noreferrer"&gt;VibeStart&lt;/a&gt; — a 30-minute path for non-developers to start AI-assisted coding.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>nextjs</category>
      <category>ai</category>
      <category>security</category>
      <category>webdev</category>
    </item>
  </channel>
</rss>
