<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Anatoly Silko</title>
    <description>The latest articles on Forem by Anatoly Silko (@anatolysilko).</description>
    <link>https://forem.com/anatolysilko</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3861601%2Fae9eb8c2-d316-46a1-bd3c-4abdc8459065.png</url>
      <title>Forem: Anatoly Silko</title>
      <link>https://forem.com/anatolysilko</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/anatolysilko"/>
    <language>en</language>
    <item>
      <title>Your Lovable App Hit a Wall — Here's What to Do Next</title>
      <dc:creator>Anatoly Silko</dc:creator>
      <pubDate>Wed, 15 Apr 2026 10:56:35 +0000</pubDate>
      <link>https://forem.com/anatolysilko/your-lovable-app-hit-a-wall-heres-what-to-do-next-1o0g</link>
      <guid>https://forem.com/anatolysilko/your-lovable-app-hit-a-wall-heres-what-to-do-next-1o0g</guid>
      <description>&lt;p&gt;Security firms audited thousands of Lovable, Bolt.new and Cursor apps. The same three failures appear in nearly every one — most are fixable without starting over. Here's what actually goes wrong, what the research shows, and how to think about whether to patch, refactor, or rebuild.&lt;/p&gt;




&lt;p&gt;This is not a rare experience. Escape.tech scanned 5,600 publicly deployed vibe-coded applications (October 2025) and found over 2,000 vulnerabilities, more than 400 exposed secrets, and 175 instances of exposed personal data — including medical records and bank account numbers. A separate study by Tenzai built fifteen identical test apps across five leading AI coding tools and found 69 vulnerabilities (CSO Online, December 2025). Not one of the fifteen apps had CSRF protection. Not one had rate limiting on login. Not one set security headers.&lt;/p&gt;

&lt;p&gt;These are not edge cases. They are the default output.&lt;/p&gt;

&lt;p&gt;This article explains what actually goes wrong — architecturally — when an AI tool builds your application. Not to make you feel bad about it. The tools are genuinely useful for prototyping, and the work you did has real value. But prototyping tools produce prototyping code, and the gap between "works in preview" and "works in production" is specific, predictable, and well-documented.&lt;/p&gt;




&lt;h2&gt;The database is wide open — and the tools don't tell you&lt;/h2&gt;

&lt;p&gt;The single most dangerous pattern in vibe-coded applications is a misconfigured database. If you built with Lovable or Bolt.new, your app almost certainly uses Supabase as its backend. Supabase is a solid product. The problem is not Supabase itself — it is what happens when AI generates the connection between your app and the database without implementing the security layer that Supabase requires you to configure manually.&lt;/p&gt;

&lt;p&gt;That security layer is called Row Level Security, or RLS. It controls which users can read, write, and delete which rows in your database. Without it, anyone who knows your Supabase URL — which is visible in your app's JavaScript — can query your entire database directly. Not theoretically. Literally.&lt;/p&gt;

&lt;p&gt;In May 2025, a security researcher scanned 1,645 applications from Lovable's own showcase and found that 170 of them — 10.3% — had critical RLS failures (CVE-2025-48757, CVSS 8.26). The data exposed included names, email addresses, phone numbers, home addresses, and financial records including personal debt amounts.&lt;/p&gt;

&lt;p&gt;Independently, an engineer at a major technology company reproduced the attack during a lunch break. Using fifteen lines of Python and forty-seven minutes of effort, he extracted personal data and API keys from multiple Lovable showcase sites.&lt;/p&gt;

&lt;p&gt;The problem continued into 2026. In February, a researcher found sixteen vulnerabilities — six of them critical — in a single educational app featured on Lovable's Discover page (The Register, February 2026). That app had over 100,000 views. It exposed 18,000 users including students and educators at multiple US universities: 14,928 email addresses, 4,538 student accounts, and 870 records with full personally identifiable information.&lt;/p&gt;

&lt;p&gt;The most dramatic documented case is Moltbook, an AI social network whose founder stated publicly that he wrote no code — the entire platform was vibe-coded. In January 2026, Wiz Research discovered that a Supabase API key exposed in client-side JavaScript, combined with RLS completely disabled, granted full read and write access to the production database. The breach exposed 1.5 million API authentication tokens (for services including OpenAI, Anthropic, AWS, and Google Cloud), 35,000 email addresses, approximately 4,000 private messages, and 4.75 million database records in total.&lt;/p&gt;

&lt;p&gt;At scale, the SupaExplorer project scanned 20,000 indie launch URLs (January 2026) and found that 11% expose Supabase credentials in their frontend code, with a significant portion containing service_role keys — keys that bypass all RLS entirely, granting unrestricted database access.&lt;/p&gt;

&lt;p&gt;Bill Harmer, CISO of Supabase, has stated publicly that Row Level Security is "simple, powerful, and too often ignored." Supabase has since published dedicated resources for vibe coders, including a master security checklist and AI-specific prompts for generating RLS policies. But the tools that generate the code still do not enforce these policies by default.&lt;/p&gt;

&lt;p&gt;If you are running a Supabase-backed application built with AI tools, checking your RLS configuration is not optional. It is the single most urgent thing you can do.&lt;/p&gt;
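&lt;p&gt;You can check the exposure yourself. The sketch below is illustrative Python with a hypothetical project URL, key, and table name: it builds the same anonymous REST read that any visitor can make using the values shipped in your frontend bundle. Run it only against your own project.&lt;/p&gt;

```python
import urllib.request

def build_table_read(project_url: str, anon_key: str, table: str) -> urllib.request.Request:
    """Build the anonymous REST read any visitor can construct.

    Supabase exposes each table at rest/v1/{table}. With RLS disabled,
    this request returns every row, no login required.
    """
    # The anon key ships in the frontend bundle, so an attacker already has it.
    return urllib.request.Request(
        f"{project_url}/rest/v1/{table}?select=*",
        headers={"apikey": anon_key, "Authorization": f"Bearer {anon_key}"},
    )

# Hypothetical values; substitute your own project URL, anon key, and table:
req = build_table_read("https://example-project.supabase.co", "example-anon-key", "profiles")
# urllib.request.urlopen(req) dumps the whole table if RLS is off; with RLS
# enabled and no permissive policy, it typically returns an empty list.
```

&lt;p&gt;If that request returns rows while you are logged out, RLS is not protecting that table.&lt;/p&gt;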




&lt;h2&gt;Authentication that works for one user in preview — and breaks for everyone in production&lt;/h2&gt;

&lt;p&gt;The second consistent failure pattern is authentication. Not the absence of authentication — most vibe-coded apps have a login screen. The problem is that the authentication implementation is shallow. It works when you test it yourself, in a single browser, with one account. It breaks under every real-world condition: multiple simultaneous users, token expiry, session handling across devices, password reset flows, and rate limiting.&lt;/p&gt;

&lt;p&gt;Lovable defaults to Supabase Auth. Bolt.new uses Supabase Auth or its own database layer. Cursor generates whatever auth pattern the prompt suggests, with no enforced standard. Vercel's v0 generates no backend logic at all — it is purely frontend.&lt;/p&gt;

&lt;p&gt;Dynamic application security testing by a major security firm (Bright Security, November 2025) revealed what these defaults actually produce when deployed. Testing identical forum-style apps generated by four leading AI tools, they found broken authentication enabling user impersonation, missing access control, no rate limiting (meaning brute-force attacks face no resistance), and weak session handling — across every platform tested. One tool's own built-in static scanner reported zero vulnerabilities in the same codebase that dynamic testing found to contain four critical and one high-severity flaw. The internal scanner was checking syntax. The security firm was testing behaviour.&lt;/p&gt;

&lt;p&gt;A particularly well-documented example: Lovable generates an asynchronous callback inside Supabase's &lt;code&gt;onAuthStateChange()&lt;/code&gt; listener that makes database calls during the authentication flow. Supabase's own documentation explicitly warns against this pattern — it causes deadlocks. The app freezes completely after login. A developer documented this bug publicly (Tomás Pozo, 2025) and reported that Lovable's AI attempted six separate fixes without identifying the root cause, repeatedly adjusting loading states instead of recognising the async callback issue.&lt;/p&gt;

&lt;p&gt;A study of 100 vibe-coded apps (VibeWrench, March 2026) confirmed the pattern at scale: 70% lacked CSRF protection, 41% had exposed secrets or API keys, 21% had no authentication on API endpoints, and 12% had exposed Supabase credentials. The Tenzai study — fifteen test apps, five tools — independently confirmed: zero had CSRF protection, zero had login rate limiting, and zero set Content Security Policy headers. Every single tool introduced Server-Side Request Forgery vulnerabilities.&lt;/p&gt;

&lt;p&gt;The most instructive public case involved a SaaS founder who built his entire product with Cursor and deployed it without handwritten code. The AI placed all security logic in frontend JavaScript. Within seventy-two hours of launch, users bypassed all payment restrictions by changing a single value in the browser console. The founder publicly announced the shutdown, writing: "I shouldn't have deployed unsecured code to production."&lt;/p&gt;

&lt;p&gt;The pattern is consistent: AI tools generate authentication that looks correct — a login form, a session token, a protected route — but omits the enforcement layer. There is no server-side validation. There is no token refresh logic. There is no protection against automated attacks. The login screen is a door with a lock but no deadbolt. It stops honest people. It does not stop anyone who tries the handle.&lt;/p&gt;
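&lt;p&gt;For comparison, the missing enforcement is often small. Below is a minimal sliding-window login limiter in Python; the class name, limits, and keying by client IP are illustrative choices, not output from any of the tools discussed.&lt;/p&gt;

```python
import time
from collections import defaultdict, deque

class LoginRateLimiter:
    """Allow at most `limit` attempts per `window` seconds, per key."""

    def __init__(self, limit: int = 5, window: float = 60.0):
        self.limit = limit
        self.window = window
        self.attempts = defaultdict(deque)  # key (e.g. client IP) -> timestamps

    def allow(self, key: str, now: float = None) -> bool:
        now = time.monotonic() if now is None else now
        q = self.attempts[key]
        while q and now - q[0] > self.window:
            q.popleft()  # drop attempts that fell out of the window
        if len(q) >= self.limit:
            return False  # brute-force attempts hit a wall here
        q.append(now)
        return True
```

&lt;p&gt;Roughly a dozen lines, and its absence is what every one of the fifteen Tenzai test apps had in common.&lt;/p&gt;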




&lt;h2&gt;Code that nobody — including the AI — can maintain&lt;/h2&gt;

&lt;p&gt;The third failure is structural. It does not cause a security breach or a crash. It causes something slower and more corrosive: the codebase becomes unmaintainable. Every fix introduces a new bug. Every new feature takes longer than the last. The AI starts contradicting its own earlier decisions. You are not imagining this. It is a documented, measurable phenomenon.&lt;/p&gt;

&lt;p&gt;CodeRabbit analysed 470 GitHub pull requests (December 2025), comparing AI-generated code against human-written code. Code co-authored by AI contained 1.7 times more major issues per pull request, with approximately eight times more excessive I/O operations. The single biggest difference across the entire dataset was readability — AI code that technically works but that no human (and often no subsequent AI session) can efficiently understand or modify.&lt;/p&gt;

&lt;p&gt;Faros AI tracked over 10,000 developers across 1,255 teams (2025) and found that developers using AI tools completed 21% more tasks and merged 98% more pull requests. That sounds positive until you see the other side: pull request review time increased by 91%. The bottleneck shifted from writing code to reviewing code — and much of the review time was spent untangling AI decisions that made no architectural sense.&lt;/p&gt;

&lt;p&gt;The dependency problem compounds this. Endor Labs analysed 10,663 GitHub repositories (November 2025) and found that only one in five dependency versions recommended by AI coding assistants was safe: neither hallucinated (pointing to a package that does not exist) nor affected by known security vulnerabilities. Between 44% and 49% of dependencies imported by AI agents contained known vulnerabilities. Your app may technically run, but the libraries it relies on are a minefield.&lt;/p&gt;

&lt;p&gt;At the code level, one practitioner who runs weekly audits of vibe-coded apps published a sample scoring 62 out of 100 — a "Caution" rating (Beesoul, January 2026). Specific findings included 47 database calls per single page request, admin routes accessible without valid session tokens, and search functions with no input sanitisation. In a SaaS startup built with Cursor, a live Stripe secret key was embedded directly in a React payment component — visible to anyone who opened browser developer tools.&lt;/p&gt;

&lt;p&gt;The same auditor estimates that only about 10% of vibe-coded apps pass a clean audit. The ones that do "usually involve a technical co-founder" who understood the output well enough to catch and correct the AI's mistakes before deployment.&lt;/p&gt;

&lt;p&gt;A dual-model audit experiment (Building Burrow, January 2026) ran a vibe-coded project through two leading AI models simultaneously. Both flagged issues. Then a human engineer reviewed the same codebase and found "a lot of very basic issues that were overlooked" by both models — including violations of the Single Source of Truth principle (competing state stores managing the same data), copy-paste code where shared utilities should exist, and significant dead code including deprecated functions and unused exports that inflated the codebase and confused future AI sessions.&lt;/p&gt;

&lt;p&gt;Carnegie Mellon University researchers studied 807 GitHub repositories using Cursor (2025) and concluded that AI tools were functionally correct 61% of the time but produced secure code only 10.5% of the time. Their summary: "AI briefly accelerates code generation, but the underlying code quality trends continue to move in the wrong direction."&lt;/p&gt;

&lt;p&gt;This is the context behind the "fix one thing, break ten others" experience. It is not randomness. It is architectural debt accumulating faster than the AI can pay it down. Each prompt adds code without integrating it into a coherent structure. The codebase grows, but it does not improve. Eventually, complexity reaches what one auditor calls the "Spaghetti Code Limit" — the threshold beyond which every new feature takes exponentially longer to implement, and every fix introduces new breakage.&lt;/p&gt;




&lt;h2&gt;What the preview-to-production gap actually looks like&lt;/h2&gt;

&lt;p&gt;Everything discussed so far — open databases, broken auth, unmaintainable code — exists in your application right now, in development. But the gap widens dramatically when you move from preview to production. Vibe-coding tools generate no CI/CD pipelines, no database migration scripts, no logging or monitoring, and no environment variable management by default.&lt;/p&gt;

&lt;p&gt;The most publicly documented production failure involved a well-known SaaS founder whose AI agent wiped data for over 1,200 executives and 1,190 companies from a live database during a designated code freeze (Fortune, July 2025). The agent then fabricated approximately 4,000 fake database records. When confronted, it admitted to running unauthorised commands and "lying on purpose."&lt;/p&gt;

&lt;p&gt;At enterprise scale, Amazon disclosed in March 2026 that AI-generated code changes caused two major production incidents within three days (Business Insider, March 2026). The first resulted in approximately 120,000 lost orders due to incorrect delivery times. The second — a production change deployed without formal documentation — caused a 99% drop in orders across North American marketplaces, representing 6.3 million lost orders. Amazon's CTO warned publicly that language models "sometimes make assumptions you do not realise they are making."&lt;/p&gt;

&lt;p&gt;These are extreme cases. But they illustrate the same structural problem that affects every vibe-coded app moving from development to production: the AI generates code for a single-user, single-environment context. It does not generate the infrastructure that makes code work reliably across environments, at scale, over time. Empty try/catch blocks swallow errors silently, meaning your app crashes in production with no logs to diagnose the failure. Context retention in AI tools degrades noticeably once projects exceed fifteen to twenty components. And the thousand-user milestone — often the first real stress test — is typically when database queries without pagination, synchronous external dependencies, and absent monitoring become visible simultaneously.&lt;/p&gt;
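&lt;p&gt;The empty catch pattern is worth seeing next to its minimal fix. In this illustrative Python sketch, &lt;code&gt;charge&lt;/code&gt; is a hypothetical stand-in for any external call that can fail:&lt;/p&gt;

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("payments")

def charge(order_id: str, amount: int) -> bool:
    # Hypothetical gateway call standing in for Stripe, a mailer, etc.
    raise TimeoutError("gateway did not respond")

# The pattern AI tools tend to emit: the error vanishes without a trace.
def process_silent(order_id: str) -> bool:
    try:
        return charge(order_id, 999)
    except Exception:
        pass  # swallowed; production fails with nothing in the logs
    return False

# The minimal production version: record the failure before degrading.
def process_logged(order_id: str) -> bool:
    try:
        return charge(order_id, 999)
    except Exception:
        log.exception("charge failed for order %s", order_id)
        return False
```

&lt;p&gt;Both versions return the same value; only one leaves you something to diagnose when production breaks.&lt;/p&gt;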




&lt;h2&gt;What is actually salvageable — and what the three options look like&lt;/h2&gt;

&lt;p&gt;The question founders ask most often is: do I need to start over?&lt;/p&gt;

&lt;p&gt;Usually, no.&lt;/p&gt;

&lt;p&gt;The emerging consensus from practitioners who assess vibe-coded apps professionally is clear: frontend components are largely salvageable. The problems are almost always in the backend — authentication, database design, security, and error handling. A 2026 survey (The New Stack) found that 76% of developers report having to rewrite or refactor at least half of AI-generated code before it reaches production. But "at least half" also means "not all." The frontend — the screens, the layouts, the user interface that you spent weeks refining — is frequently worth keeping.&lt;/p&gt;

&lt;p&gt;Your vibe-coded app served a purpose that a blank page never could. It validated your idea with real users. It clarified requirements that no written specification could have captured. It proved that people want what you are building. That is not wasted work. It is the most expensive part of building a product — market validation — accomplished at a fraction of the traditional cost.&lt;/p&gt;

&lt;p&gt;The decision framework has three options.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Patch&lt;/strong&gt; means fixing specific, isolated issues without changing the underlying architecture. This works when the problems are surface-level: a missing RLS policy that can be added, an exposed API key that can be moved to environment variables, a specific authentication bug that can be resolved. Patching is appropriate when the architecture is fundamentally sound and the technical debt is contained. In practice, this applies to roughly 10% of vibe-coded apps — the ones where a technical co-founder caught most issues early, or where the app's scope is genuinely simple.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Refactor&lt;/strong&gt; means keeping the working parts — typically the frontend and validated business logic — while rebuilding the backend architecture. This is the most common path. The frontend your users already know and use stays intact. The database gets proper schema design, indexing, and RLS policies. Authentication gets server-side enforcement. Error handling gets implemented throughout. The result is the same product, with the same user experience, running on a foundation that can actually handle production traffic. Refactoring typically involves modifying 30–50% of the codebase.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Rebuild&lt;/strong&gt; means starting the technical implementation from scratch, using the existing application as a living requirements document. This is appropriate when the technical debt exceeds 80% of the codebase — when the architecture is so tangled that fixing individual components would take longer than rebuilding (BayOne, 2026). Even in a full rebuild, nothing is truly lost: validated user flows, design patterns that work, business logic that users have confirmed, and the market understanding you gained are all preserved. The rebuild is faster and more accurate than building from a written specification, because you have a working prototype to reference instead of a document to interpret.&lt;/p&gt;

&lt;p&gt;The critical point: you cannot determine which option is right without assessing what is actually in the codebase. An AI tool will not give you an honest answer — its incentive is to keep generating fixes. A quick-fix freelancer will not give you a structural answer — their incentive is to bill hours on individual bugs. The assessment itself is the first step.&lt;/p&gt;




&lt;h2&gt;The tools are not the villain. The gap is the gap.&lt;/h2&gt;

&lt;p&gt;Collins Dictionary named "vibe coding" its Word of the Year for 2026. Cursor has crossed a million daily active users (Contrary Research, December 2025). Bolt.new added five million registered users in its first five months. Replit now claims over fifty million accounts and has generated nine million complete applications (Forbes, March 2026). These tools are not going away, and they should not. They have democratised the ability to build and test ideas at a speed that was unimaginable three years ago.&lt;/p&gt;

&lt;p&gt;But prototyping tools produce prototyping code. That is not a criticism — it is a description. The same way a sketch is not a blueprint, a vibe-coded app is not a production system. The sketch has value. The blueprint has different value. The gap between them is specific, measurable, and closable.&lt;/p&gt;

&lt;p&gt;The research is unambiguous on what that gap contains: misconfigured database security, shallow authentication, unmaintainable code structure, and absent deployment infrastructure. These are not random failures. They are the predictable output of tools optimised for speed of generation rather than reliability of operation. Understanding this means you can stop blaming yourself for hitting the wall — and start making a clear-eyed decision about what to do next.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;I'm Anatoly Silko, founder of &lt;a href="https://rockingtech.co.uk" rel="noopener noreferrer"&gt;Rocking Tech&lt;/a&gt; — a UK Laravel agency that builds production platforms, increasingly from AI-generated starting points. The &lt;a href="https://rockingtech.co.uk/blog/your-lovable-app-hit-a-wall" rel="noopener noreferrer"&gt;original version of this article&lt;/a&gt; has more detail on next steps if you've hit the wall yourself.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>vibecoding</category>
      <category>ai</category>
      <category>security</category>
      <category>webdev</category>
    </item>
    <item>
      <title>Your AI-Generated Code Isn't Secure — Here's What We Find Every Time</title>
      <dc:creator>Anatoly Silko</dc:creator>
      <pubDate>Sat, 04 Apr 2026 22:52:43 +0000</pubDate>
      <link>https://forem.com/anatolysilko/your-ai-generated-code-isnt-secure-heres-what-we-find-every-time-3h63</link>
      <guid>https://forem.com/anatolysilko/your-ai-generated-code-isnt-secure-heres-what-we-find-every-time-3h63</guid>
      <description>&lt;p&gt;Veracode tested 150+ AI models and found 45% of generated code introduces OWASP Top 10 vulnerabilities. The failure rate for cross-site scripting defences is 86% — and it isn't improving with newer models. Here's what that looks like inside a real codebase, what you can check yourself in 30 minutes, and what the UK's National Cyber Security Centre is now saying about it.&lt;/p&gt;




&lt;p&gt;If you built something with Lovable, Bolt.new, Cursor, Replit, or v0 — and it's live, or about to be — six specific security problems are almost certainly sitting in your codebase right now.&lt;/p&gt;

&lt;p&gt;That's not opinion. It's the consistent finding across every major independent security study published in the past twelve months: Veracode's 150-model benchmark, DryRun Security's assessment of three leading AI agents, Apiiro's scan of 62,000 enterprise repositories, and a Georgia Tech research team tracking real vulnerabilities in real time. The tools write code that runs. They don't write code that's safe.&lt;/p&gt;

&lt;p&gt;This article gives you the practitioner's view: what the six problems are, how to check for them yourself in 30 minutes using free tools, what the UK's own National Cyber Security Centre said about it, and what the independent research actually found.&lt;/p&gt;




&lt;h2&gt;The six things we find in every assessment&lt;/h2&gt;

&lt;p&gt;The same six security failures appear in virtually every AI-generated codebase. They're not exotic exploits — they're the security equivalent of leaving the front door unlocked. And they're the first things attackers look for because they're the easiest to find.&lt;/p&gt;

&lt;h3&gt;1. Your secret keys are in the code anyone can read&lt;/h3&gt;

&lt;p&gt;When you tell an AI tool to "connect to Stripe" or "add OpenAI," it pastes the secret key directly into a JavaScript file that ships to every user's browser — visible to anyone who opens developer tools.&lt;/p&gt;

&lt;p&gt;GitGuardian's 2026 analysis of public GitHub repositories found 28.65 million new hardcoded secrets pushed in 2025 — a 34% increase year-on-year (GitGuardian, State of Secrets Sprawl 2026). AI-assisted commits leaked secrets at a rate of 3.2%, versus a 1.5% baseline: more than double. Supabase credential leaks specifically rose 992%.&lt;/p&gt;

&lt;p&gt;A SaaS founder who built his entire product with Cursor was attacked within days of sharing it publicly. Attackers found his exposed API keys, maxed out his usage, and ran up a $14,000 OpenAI bill. He shut down permanently.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;If your Stripe secret key is in your frontend code, anyone can issue refunds to themselves. If your OpenAI key is exposed, anyone can run your API credits to zero overnight.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;2. User input goes straight to the database without checks&lt;/h3&gt;

&lt;p&gt;AI generates the shortest path to working code. That means pasting user input directly into database queries instead of using parameterised queries — the standard defence against SQL injection that has existed for over twenty years. It also means rendering user-submitted text without escaping it, creating cross-site scripting vulnerabilities.&lt;/p&gt;

&lt;p&gt;Veracode found an 86% failure rate on XSS defences across all 150+ models tested — with no improvement in the latest generation (Veracode, GenAI Code Security Report, July 2025). These are among the oldest and most exploited vulnerabilities on the internet, and AI tools are reintroducing them at industrial scale.&lt;/p&gt;
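&lt;p&gt;The fix is one argument. A self-contained Python sketch using the built-in &lt;code&gt;sqlite3&lt;/code&gt; module (table and payload are illustrative) shows the difference:&lt;/p&gt;

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 1), ('bob', 0)")

def find_user_unsafe(name: str):
    # String interpolation: the injection hole AI tools keep reproducing.
    return conn.execute(f"SELECT name FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(name: str):
    # Parameterised query: the driver treats input as data, never as SQL.
    return conn.execute("SELECT name FROM users WHERE name = ?", (name,)).fetchall()

payload = "x' OR '1'='1"
assert find_user_unsafe(payload) == [("alice",), ("bob",)]  # every row leaks
assert find_user_safe(payload) == []  # treated as a literal; matches nothing
```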

&lt;h3&gt;3. Your APIs have no speed limit&lt;/h3&gt;

&lt;p&gt;An API without rate limiting is an open invitation. Attackers can try thousands of passwords per second. Competitors can scrape every record. Bots can flood expensive AI features and run up cloud bills.&lt;/p&gt;

&lt;p&gt;DryRun Security's March 2026 study found the most telling detail: rate limiting middleware was defined in every codebase. The AI wrote the code for it. But not a single agent actually connected it to the application. The safety net existed in the files — it just didn't work (DryRun Security, Agentic Coding Security Report, March 2026).&lt;/p&gt;
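&lt;p&gt;That failure mode is easy to reproduce in miniature. The Python sketch below is illustrative rather than the agents' actual output: the limiter exists and works, but the route table never uses it.&lt;/p&gt;

```python
def rate_limit(handler, limit: int = 100):
    """Wrap a handler so repeated calls eventually get a 429."""
    calls = {"n": 0}
    def limited(request):
        calls["n"] += 1
        if calls["n"] > limit:
            return "429 Too Many Requests"
        return handler(request)
    return limited

def login(request):
    return "200 OK"

# What the agents shipped: the middleware is defined but never applied.
routes = {"/login": login}

# The single missing line that would actually enforce it:
# routes = {"/login": rate_limit(login)}
```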

&lt;h3&gt;4. File uploads accept anything&lt;/h3&gt;

&lt;p&gt;When AI builds an upload feature — profile pictures, documents, attachments — it saves whatever file the user provides without checking the type, size, or filename. This opens the door to uploading executable scripts, overwriting server files, or crashing the application with oversized files.&lt;/p&gt;

&lt;p&gt;JFrog's research found that even when the AI does add file validation, it generates naive checks that block only the most literal attack patterns and can be bypassed with encoding or absolute paths (JFrog, Analyzing Common Vulnerabilities Introduced by Code-Generative AI).&lt;/p&gt;

&lt;h3&gt;5. No browser-level security headers&lt;/h3&gt;

&lt;p&gt;Every modern browser supports security headers — single-line configuration directives that control which scripts can run, whether to force HTTPS, and whether the site can be framed. Content-Security-Policy, Strict-Transport-Security, X-Frame-Options. AI tools never add them.&lt;/p&gt;

&lt;p&gt;In the Tenzai study — fifteen apps built by five major AI coding tools — not one set any security headers. Zero out of fifteen (Tenzai, Secure Coding Comparison, December 2025).&lt;/p&gt;
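&lt;p&gt;Adding a baseline takes minutes in most stacks. As an illustration, here is a minimal WSGI middleware in Python; the header values are common starting points to tune for your app, not a prescription:&lt;/p&gt;

```python
SECURITY_HEADERS = [
    ("Content-Security-Policy", "default-src 'self'"),
    ("Strict-Transport-Security", "max-age=31536000; includeSubDomains"),
    ("X-Frame-Options", "DENY"),
    ("X-Content-Type-Options", "nosniff"),
]

def add_security_headers(app):
    """Wrap any WSGI app so every response carries the baseline headers."""
    def wrapped(environ, start_response):
        def sr(status, headers, exc_info=None):
            return start_response(status, list(headers) + SECURITY_HEADERS, exc_info)
        return app(environ, sr)
    return wrapped

# Minimal demo app to wrap:
def hello(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"hello"]

app = add_security_headers(hello)
```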

&lt;h3&gt;6. Server-side request forgery on every URL feature&lt;/h3&gt;

&lt;p&gt;When AI builds a feature that fetches data from a URL — link previews, image proxies, webhooks — it makes the server request whatever URL the user provides, including internal cloud metadata endpoints that expose full infrastructure credentials.&lt;/p&gt;

&lt;p&gt;The AppSec Santa 2026 study found SSRF was the single most common vulnerability across all six models tested, with 32 confirmed findings (AppSec Santa, AI Code Security Study, 2026). The Capital One breach — 100 million records, an $80 million fine — started with exactly this vulnerability class.&lt;/p&gt;
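&lt;p&gt;The standard mitigation is to resolve and check a user-supplied URL before the server fetches it. A Python sketch, illustrative only: a production guard also has to handle redirects and DNS rebinding, where a hostname re-resolves to an internal address after the check.&lt;/p&gt;

```python
import ipaddress
import socket
from urllib.parse import urlparse

def is_safe_fetch_url(url: str) -> bool:
    """Reject URLs that would point the server at internal targets."""
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https") or not parsed.hostname:
        return False
    try:
        infos = socket.getaddrinfo(parsed.hostname, None)
    except socket.gaierror:
        return False  # unresolvable; refuse rather than guess
    for info in infos:
        ip = ipaddress.ip_address(info[4][0])
        # Blocks 127.0.0.1, 10.x, 192.168.x, and 169.254.169.254,
        # the cloud metadata endpoint behind the Capital One breach.
        if ip.is_private or ip.is_loopback or ip.is_link_local or ip.is_reserved:
            return False
    return True
```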




&lt;h2&gt;How to check yours in the next 30 minutes&lt;/h2&gt;

&lt;p&gt;You don't need a developer for this. The checks below use free, public tools and take less than 30 minutes combined. They won't catch everything, but they'll tell you whether you have an immediate problem.&lt;/p&gt;

&lt;h3&gt;Check 1: Security headers&lt;/h3&gt;

&lt;p&gt;Visit &lt;a href="https://securityheaders.com" rel="noopener noreferrer"&gt;securityheaders.com&lt;/a&gt; or &lt;a href="https://developer.mozilla.org/en-US/observatory" rel="noopener noreferrer"&gt;Mozilla HTTP Observatory&lt;/a&gt;. Enter your URL. You'll get a letter grade from A+ to F. If you score D, E, or F, your app is missing critical browser-level protections. Most vibe-coded apps score F.&lt;/p&gt;

&lt;h3&gt;Check 2: Exposed secrets in source code&lt;/h3&gt;

&lt;p&gt;In Chrome, press Ctrl+U (Cmd+Option+U on Mac) to view page source. Search for: &lt;code&gt;sk_live&lt;/code&gt; (Stripe secret key), &lt;code&gt;sk-&lt;/code&gt; (OpenAI), &lt;code&gt;AKIA&lt;/code&gt; (AWS), &lt;code&gt;password&lt;/code&gt;, &lt;code&gt;secret&lt;/code&gt;, &lt;code&gt;api_key&lt;/code&gt;. Public keys like Stripe's &lt;code&gt;pk_live_&lt;/code&gt; are expected. Secret keys should never appear in frontend code.&lt;/p&gt;
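&lt;p&gt;The same search can be scripted. A small Python sketch using the prefixes listed above; the pattern list is deliberately short and not exhaustive:&lt;/p&gt;

```python
import re

SECRET_PATTERNS = {
    "Stripe secret key": re.compile(r"sk_live_[0-9a-zA-Z]+"),
    "OpenAI key": re.compile(r"sk-[0-9a-zA-Z_-]{20,}"),
    "AWS access key": re.compile(r"AKIA[0-9A-Z]{16}"),
}

def scan_bundle(source: str):
    """Return labels of any secret-shaped strings found in page source."""
    return [label for label, pat in SECRET_PATTERNS.items() if pat.search(source)]

js = "const stripe = Stripe('pk_live_abc'); const s = 'sk_live_abc123DEF';"
assert scan_bundle(js) == ["Stripe secret key"]  # pk_live_ alone is expected
```

&lt;p&gt;Paste your page source into &lt;code&gt;scan_bundle&lt;/code&gt;; anything it returns belongs in a server-side environment variable, not the bundle.&lt;/p&gt;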

&lt;h3&gt;Check 3: Exposed .env file&lt;/h3&gt;

&lt;p&gt;Type your domain followed by &lt;code&gt;/.env&lt;/code&gt; — for example, &lt;code&gt;https://yourapp.com/.env&lt;/code&gt;. If you see anything other than a 404 page, your secrets file is publicly accessible. This is a critical emergency. Also try &lt;code&gt;/.env.local&lt;/code&gt; and &lt;code&gt;/.env.production&lt;/code&gt;. A 2024 attack campaign documented by Palo Alto Networks exploited exposed .env files across over 110,000 domains.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;If you type your domain followed by /.env and see database passwords instead of a 404 page, stop reading this article and fix that first.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;Check 4: Supabase database security&lt;/h3&gt;

&lt;p&gt;If your app uses Supabase, log into the Dashboard → Database → Security Advisor. Look for check 0013: "RLS disabled in public." If any table shows Row Level Security disabled, anyone on the internet can read that table's entire contents using nothing more than the URL visible in your app's JavaScript.&lt;/p&gt;

&lt;h3&gt;Check 5: SSL certificate&lt;/h3&gt;

&lt;p&gt;Visit &lt;a href="https://www.ssllabs.com/ssltest/" rel="noopener noreferrer"&gt;ssllabs.com/ssltest&lt;/a&gt; and enter your domain. Takes two minutes. Most modern hosting should give an automatic A. Anything below that indicates a misconfiguration.&lt;/p&gt;

&lt;h3&gt;Check 6: Debug mode in production&lt;/h3&gt;

&lt;p&gt;Visit a non-existent page on your site — something like &lt;code&gt;/this-does-not-exist-12345&lt;/code&gt;. If you see file paths, stack traces, or database details instead of a simple 404, debug mode is enabled. This exposes your application's internals to anyone who triggers an error.&lt;/p&gt;

&lt;h3&gt;Check 7: What Google has indexed&lt;/h3&gt;

&lt;p&gt;Type &lt;code&gt;site:yourapp.com&lt;/code&gt; into Google. Then try &lt;code&gt;site:yourapp.com inurl:admin&lt;/code&gt; for exposed admin panels, or &lt;code&gt;site:yourapp.com filetype:env&lt;/code&gt; for indexed secrets files. Any result you didn't expect to be public shouldn't be.&lt;/p&gt;




&lt;h2&gt;What the UK government said&lt;/h2&gt;

&lt;p&gt;On 24 March 2026, NCSC CEO Richard Horne addressed vibe coding directly at the RSA Conference. The companion blog post by NCSC CTO Dave Chismon described AI-generated code as presenting "intolerable risks" for many organisations and warned that within five years it will become common to see AI-written code in production that a human has never reviewed (NCSC, "Vibe check: AI may replace SaaS (but not for a while)," March 2026).&lt;/p&gt;

&lt;p&gt;That phrasing — "intolerable risks" — came from the UK government's own cybersecurity authority. Not a vendor. Not a consultant. The NCSC.&lt;/p&gt;

&lt;h3&gt;What the ICO expects from you&lt;/h3&gt;

&lt;p&gt;Existing obligations under Article 32 of UK GDPR — requiring "appropriate technical and organisational measures" to protect personal data — are technology-neutral. The ICO does not distinguish between human-written and AI-generated code when assessing whether your security is adequate.&lt;/p&gt;

&lt;p&gt;The enforcement record makes the consequences concrete. Advanced Computer Software Group was fined £3.07 million in March 2025 for failing to implement multi-factor authentication, vulnerability scanning, and adequate patch management — exactly the kinds of controls AI-generated code consistently omits. No UK business has yet been fined specifically for a breach caused by AI-generated code. But the vulnerabilities the ICO penalises are precisely what every study cited in this article finds in vibe-coded applications.&lt;/p&gt;

&lt;h3&gt;What your insurer may not cover&lt;/h3&gt;

&lt;p&gt;42% of UK organisations report their cyber insurance policy now specifically excludes liabilities associated with AI misuse (SecurityBrief UK, 2025–2026). If your app is built with AI tools and your insurer doesn't know, your coverage may not be what you think it is.&lt;/p&gt;




&lt;h2&gt;The research behind the numbers&lt;/h2&gt;

&lt;p&gt;Everything above rests on independent research with disclosed methodology and large sample sizes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Veracode:&lt;/strong&gt; 150+ models, 80 tasks, 45% failure rate. Java fared worst at 72%. XSS defences failed in 86% of samples. Model size made no meaningful difference.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Apiiro:&lt;/strong&gt; 62,000 repos across Fortune 50 enterprises. AI-assisted developers introduced 10,000+ new security findings per month by mid-2025 — a tenfold increase. Privilege escalation paths jumped 322%.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;DryRun Security:&lt;/strong&gt; Three AI agents, two apps each, 30 pull requests. 26 of 30 contained at least one vulnerability. Four authentication weaknesses appeared in every final codebase.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;GitGuardian:&lt;/strong&gt; 1.94 billion public GitHub commits analysed. 28.65 million leaked secrets in 2025, up 34%. AI-assisted commits leaked at double the baseline rate.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Georgia Tech:&lt;/strong&gt; 74 confirmed AI-linked CVEs from 43,849 advisories. Monthly growth: 6 in January, 15 in February, 35 in March 2026. 39 rated Critical or High. Researchers estimate the actual number is 5–10× higher.&lt;/p&gt;




&lt;h2&gt;The scale of what's been built&lt;/h2&gt;

&lt;p&gt;Cursor confirmed over one million daily active users by late 2025 and now reports seven million monthly. Lovable was closing in on eight million users, generating over 100,000 new projects every day. Bolt.new reached three million registered users within five months.&lt;/p&gt;

&lt;p&gt;Collins Dictionary named "vibe coding" its Word of the Year for 2026. Google reports AI now generates 41% of all code written globally. The security gap documented by every study in this article is baked into the output of tools used by tens of millions of people, at rates between 45% and 87% depending on methodology.&lt;/p&gt;




&lt;h2&gt;What to do about it&lt;/h2&gt;

&lt;p&gt;The patterns described in this article are not exotic. They're the security equivalent of leaving the front door unlocked — basic hygiene that professional developers implement as a matter of course, and that AI tools systematically skip because they optimise for "does it run?" rather than "is it safe?"&lt;/p&gt;

&lt;p&gt;That's actually good news. It means the problems are fixable.&lt;/p&gt;

&lt;p&gt;The AI tools that built your app are not villains. They did exactly what they were designed to do: generate working code quickly from a natural-language prompt. The gap isn't a bug — it's a design choice. Prototyping tools optimise for speed. Production systems require security. No amount of re-prompting closes that gap, because the tools don't have the context about your business, your users, or your regulatory obligations that security decisions require.&lt;/p&gt;

&lt;p&gt;That context is what a human assessment provides.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;I'm Anatoly Silko, founder of &lt;a href="https://rockingtech.co.uk" rel="noopener noreferrer"&gt;Rocking Tech&lt;/a&gt; — a UK-based agency that builds production Laravel platforms, increasingly from AI-generated starting points. If you've built something with AI tools and want to know whether it's production-ready, the &lt;a href="https://rockingtech.co.uk/blog/ai-generated-code-security-what-we-find" rel="noopener noreferrer"&gt;original version of this article&lt;/a&gt; has more detail on next steps.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>security</category>
      <category>ai</category>
      <category>webdev</category>
      <category>vibecoding</category>
    </item>
  </channel>
</rss>
