<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Aditi Bhatnagar</title>
    <description>The latest articles on Forem by Aditi Bhatnagar (@aditi_bhatnagar_0250c01e4).</description>
    <link>https://forem.com/aditi_bhatnagar_0250c01e4</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3798080%2Fab864152-172d-49ba-91af-9bf4061ba47e.jpg</url>
      <title>Forem: Aditi Bhatnagar</title>
      <link>https://forem.com/aditi_bhatnagar_0250c01e4</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/aditi_bhatnagar_0250c01e4"/>
    <language>en</language>
    <item>
      <title>A must read this week</title>
      <dc:creator>Aditi Bhatnagar</dc:creator>
      <pubDate>Mon, 04 May 2026 17:01:04 +0000</pubDate>
      <link>https://forem.com/aditi_bhatnagar_0250c01e4/a-must-read-this-week-517m</link>
      <guid>https://forem.com/aditi_bhatnagar_0250c01e4/a-must-read-this-week-517m</guid>
<description>&lt;p&gt;&lt;a href="https://dev.to/aditi_bhatnagar_0250c01e4/we-scanned-ai-built-apps-and-found-holes-that-would-end-companies-heres-what-we-found-12p4"&gt;We Scanned AI-Built Apps and Found Holes That Would End Companies. Here's What We Found.&lt;/a&gt; by Aditi Bhatnagar, May 4. 5 min read. Tagged #ai, #security, #vibecoding, #mythos.&lt;/p&gt;
</description>
    </item>
    <item>
      <title>We Scanned AI-Built Apps and Found Holes That Would End Companies. Here's What We Found.</title>
      <dc:creator>Aditi Bhatnagar</dc:creator>
      <pubDate>Mon, 04 May 2026 16:54:47 +0000</pubDate>
      <link>https://forem.com/aditi_bhatnagar_0250c01e4/we-scanned-ai-built-apps-and-found-holes-that-would-end-companies-heres-what-we-found-12p4</link>
      <guid>https://forem.com/aditi_bhatnagar_0250c01e4/we-scanned-ai-built-apps-and-found-holes-that-would-end-companies-heres-what-we-found-12p4</guid>
      <description>&lt;p&gt;I want to tell you about a bootstrap endpoint.&lt;/p&gt;

&lt;p&gt;It was in a production app, live, serving real users. The endpoint existed because an AI assistant had helpfully included it during scaffolding. It returned the application's master authentication token. To anyone who asked. No login required. No API key. No nothing.&lt;/p&gt;

&lt;p&gt;One HTTP request and you had every LLM API key, every database password, every third-party credential the app had ever touched. Fourteen secrets, handed over cheerfully by an app that had no idea it was doing anything wrong.&lt;/p&gt;

&lt;p&gt;The developer who built this wasn't careless. They were fast. That's the whole point.&lt;/p&gt;




&lt;h2&gt;How We Got Here&lt;/h2&gt;

&lt;p&gt;Georgetown's CSET ran the numbers on AI-generated code versus hand-written code. The result was 2.74 times more vulnerabilities. Not a little worse. Nearly three times.&lt;/p&gt;

&lt;p&gt;That tracks with what we see. Not because AI is bad at writing code — it's genuinely extraordinary at writing code. But writing code that works and writing code that's secure are two completely different objectives, and AI coding tools are only trained on one of them.&lt;/p&gt;

&lt;p&gt;The model doesn't have a threat model. It has never been attacked. It doesn't know what SQL injection feels like from the inside, or why that particular pattern of URL handling becomes a problem the moment someone points it at an internal metadata service. It knows what working code looks like, and it reproduces those patterns at a speed no human can match — including the insecure ones that have been quietly dangerous in codebases for years.&lt;/p&gt;

&lt;p&gt;This matters more than it used to because the other side has also gotten faster. Time from CVE to working exploit used to be weeks; with a language model it's now under an hour. The cost of building a targeted attack has dropped roughly 100x, and your attack surface is expanding daily. &lt;a href="https://offgridsec.com"&gt;More on that here.&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;What's Actually Sitting in Production&lt;/h2&gt;

&lt;p&gt;We've been scanning AI-built apps. Here's a sample of what we've found this year. Every single one of these teams responded immediately, patched within the same day or same week, and handled disclosure with more professionalism than most companies twice their size. That's worth saying upfront. These are security-conscious teams. The vulnerabilities got in anyway — because that's the nature of AI-generated code at speed, not the nature of the people shipping it.&lt;/p&gt;




&lt;h3&gt;Cognithor · Critical · CVSS 9.8&lt;/h3&gt;

&lt;p&gt;The bootstrap endpoint I opened with. Cognithor is an AI assistant with an active user base, and when we reported this they patched it the same day and credited the disclosure in their release notes, SECURITY.md, and commit message. That's the response of a team that takes this seriously.&lt;/p&gt;

&lt;p&gt;The vulnerability itself: the app generated a master bearer token at startup. The /api/v1/bootstrap route returned it to any caller. No authentication. Not even a rate limit — that route was explicitly exempted. The API server bound to 0.0.0.0 by default, so any host that could reach the port could retrieve the token. With it you had access to every API key, every database password, the ability to wipe the entire configuration. Everything, one unauthenticated request away.&lt;/p&gt;

&lt;p&gt;The AI built authentication that worked perfectly for the frontend. It just also built it for everyone else.&lt;/p&gt;
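&lt;p&gt;A minimal sketch of that flaw and its repair, in Python. This is illustrative only: the route shape, variable names, and session check are our assumptions, not Cognithor's actual code.&lt;/p&gt;

```python
import secrets

# Hypothetical sketch of the vulnerable shape described above.
MASTER_TOKEN = secrets.token_hex(32)  # minted once at startup

def bootstrap_vulnerable(request: dict) -> tuple[int, dict]:
    # /api/v1/bootstrap as shipped: no auth check, no rate limit,
    # so any caller that can reach the port gets the master token.
    return 200, {"bearer_token": MASTER_TOKEN}

def bootstrap_fixed(request: dict) -> tuple[int, dict]:
    # Minimal repair: require an authenticated session before releasing
    # anything (and bind the server to 127.0.0.1, not 0.0.0.0).
    session = request.get("session") or {}
    if not session.get("authenticated"):
        return 401, {"error": "unauthorized"}
    return 200, {"bearer_token": MASTER_TOKEN}

print(bootstrap_vulnerable({})[0], bootstrap_fixed({})[0])  # 200 401
```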

&lt;p&gt;&lt;strong&gt;Fixed in v0.78.2. Same day.&lt;/strong&gt;&lt;/p&gt;




&lt;h3&gt;LiteLLM · Critical · CVSS 9.0&lt;/h3&gt;

&lt;p&gt;LiteLLM is the proxy layer many teams use to manage access to language model APIs. Thousands of deployments. When we reported this they moved fast — patched and shipped within the week across two separate releases that addressed both failure points.&lt;/p&gt;

&lt;p&gt;The bug was subtle. An org admin hitting POST /user/bulk_update could elevate any user on the entire platform to proxy_admin — full access to all model configurations, API keys, spend data, every tenant — in a single request. The authorization check read organization_id from the raw request body, confirmed it, then handed the payload to a Pydantic model that didn't declare that field. Pydantic silently dropped it. The handler never saw it. The scope constraint vanished between middleware and execution.&lt;/p&gt;
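&lt;p&gt;The failure mode is easy to reproduce with the standard library alone. This sketch mimics Pydantic's default ignore-extras behavior with a plain dataclass; the field names are illustrative, not LiteLLM's actual model.&lt;/p&gt;

```python
from dataclasses import dataclass, fields

@dataclass
class BulkUpdatePayload:
    # Hypothetical shape. Note that organization_id is NOT declared.
    user_ids: list
    new_role: str

def parse(raw: dict) -> BulkUpdatePayload:
    # Mimics Pydantic's default "ignore extras" behavior: undeclared
    # fields are silently dropped before the handler ever sees them.
    declared = {f.name for f in fields(BulkUpdatePayload)}
    return BulkUpdatePayload(**{k: v for k, v in raw.items() if k in declared})

raw = {"user_ids": ["u1"], "new_role": "proxy_admin", "organization_id": "org-42"}
# Middleware validates raw["organization_id"]... then the handler gets this:
payload = parse(raw)
print(hasattr(payload, "organization_id"))  # False: the scope constraint is gone
```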

&lt;p&gt;The team identified both root causes immediately and addressed them in separate targeted fixes rather than a band-aid patch. That's engineering maturity.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Patched in v1.83.7 and v1.83.8 within the week.&lt;/strong&gt;&lt;/p&gt;




&lt;h3&gt;Microsoft VibeVoice · High · CVSS 7.8&lt;/h3&gt;

&lt;p&gt;Microsoft acknowledged the report promptly and patched the same week. Their response was exactly what responsible disclosure is supposed to look like.&lt;/p&gt;

&lt;p&gt;The vulnerability was in VibeVoice's checkpoint conversion script, which called torch.load() without the weights_only=True flag. PyTorch's default deserialization executes arbitrary Python during unpickling. The flag has existed since PyTorch 1.13. The AI wrote the code without it because most torch.load() examples in its training data omit it: an unsafe default, reproduced at scale.&lt;/p&gt;

&lt;p&gt;A crafted model file passed to that script executes whatever the attacker embedded, before any application code runs, before any checks happen. In CI that means secrets, tokens, and internal network access exfiltrated silently during a routine pipeline run.&lt;/p&gt;
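&lt;p&gt;You don't need PyTorch to demonstrate the mechanics; stdlib pickle alone shows why loading an untrusted checkpoint is code execution. This sketch uses a harmless payload (creating a file) where an attacker would run a command.&lt;/p&gt;

```python
import os
import pickle
import tempfile

marker = os.path.join(tempfile.gettempdir(), "pickle_rce_demo.marker")

class Payload:
    def __reduce__(self):
        # __reduce__ tells pickle what to CALL at load time. A real
        # attacker would return something like (os.system, ("...",));
        # creating a file is enough to prove arbitrary calls happen.
        return (open, (marker, "w"))

blob = pickle.dumps(Payload())   # the "crafted model file"
pickle.loads(blob).close()       # what torch.load() does without weights_only=True
ran = os.path.exists(marker)
os.remove(marker)
print(ran)  # True: the call ran during unpickling, before any checks
```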

&lt;p&gt;One parameter. The team knew exactly what to do the moment we showed them.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Patched same week. Credited in the commit.&lt;/strong&gt;&lt;/p&gt;




&lt;h2&gt;The Point Isn't That These Teams Made Mistakes&lt;/h2&gt;

&lt;p&gt;They didn't, really. They built fast, used AI to help them build faster, and ended up with vulnerabilities that no reasonable code review process would have caught. The same vulnerabilities are sitting in hundreds of other codebases right now, in teams that haven't had someone look.&lt;/p&gt;

&lt;p&gt;What these teams did right — and why we're naming them — is that they fixed it. Immediately. Professionally. Without drama. Every one of them treated this like the security-conscious organizations they are.&lt;/p&gt;

&lt;p&gt;The vulnerability wasn't a character flaw. It was a structural problem with how AI writes code, and it's a problem every team shipping AI-generated code is carrying whether they know it or not.&lt;/p&gt;




&lt;h2&gt;Why The Old Approach Can't Keep Up&lt;/h2&gt;

&lt;p&gt;Quarterly pentests are a snapshot. You pay $15k, they examine the codebase as it stood when the engagement started, you get a report two weeks later. Your team shipped daily in between.&lt;/p&gt;

&lt;p&gt;One security engineer against ten developers with AI assistants isn't a program. It's a gesture.&lt;/p&gt;

&lt;p&gt;Static scanners generate noise. Most teams stop reading them after a few weeks because the signal-to-noise ratio destroys their usefulness as a daily workflow. The findings above would pass most static scans — they require tracing data flow across the whole stack, not pattern matching on individual files.&lt;/p&gt;

&lt;p&gt;None of these solutions run at the speed you're actually shipping.&lt;/p&gt;




&lt;h2&gt;What Kira Does&lt;/h2&gt;

&lt;p&gt;Kira runs on every commit. Connects to GitHub, GitLab, or Bitbucket in one click, nothing to install.&lt;/p&gt;

&lt;p&gt;It traces data flow across your entire codebase looking for real attack paths — the ones where untrusted input travels through several functions and ends up somewhere it shouldn't, where a permission check happens before a parameter that gets modified afterward, where the combination of individually reasonable decisions creates something exploitable.&lt;/p&gt;

&lt;p&gt;When it finds something it doesn't ask you to investigate. It shows you whether it's actually exploitable and exactly how. Proof of concept. Reproduction steps you can run yourself.&lt;/p&gt;

&lt;p&gt;Findings come out as shareable reports. Clean enough to hand to a customer asking about your security posture or an auditor who wants to see evidence of process.&lt;/p&gt;

&lt;p&gt;The findings above all came from Kira. So did a lot of others we haven't written up yet.&lt;/p&gt;




&lt;h2&gt;One Thing Worth Knowing&lt;/h2&gt;

&lt;p&gt;Every line your AI writes today is code a security engineer didn't review. That's not a criticism; it's arithmetic. You're moving too fast for manual review to keep up, and that gap is exactly where the vulnerabilities that matter tend to live.&lt;/p&gt;

&lt;p&gt;Your first scan is free. No credit card. Connect your repo and see what's actually there.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="https://offgridsec.com" rel="noopener noreferrer"&gt;offgridsec.com&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If it comes back clean, great. If it doesn't, you'll be glad you looked before someone else did.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>security</category>
      <category>vibecoding</category>
      <category>mythos</category>
    </item>
    <item>
      <title>The LiteLLM Supply Chain Attack Just Changed Everything - Here's How to Protect Your AI Stack</title>
      <dc:creator>Aditi Bhatnagar</dc:creator>
      <pubDate>Thu, 26 Mar 2026 10:41:52 +0000</pubDate>
      <link>https://forem.com/aditi_bhatnagar_0250c01e4/the-litellm-supply-chain-attack-just-changed-everything-heres-how-to-protect-your-ai-stack-ff6</link>
      <guid>https://forem.com/aditi_bhatnagar_0250c01e4/the-litellm-supply-chain-attack-just-changed-everything-heres-how-to-protect-your-ai-stack-ff6</guid>
<description>&lt;h2&gt;TL;DR&lt;/h2&gt;

&lt;p&gt;On March 24, 2026, the widely-used LiteLLM Python package was compromised in a supply chain attack. Malicious versions stole credentials from tens of thousands of developers. This post breaks down what happened, why AI tooling is uniquely vulnerable, and how MCP-based security tools like &lt;code&gt;kira-lite-mcp&lt;/code&gt; can catch these threats before they hit production.&lt;/p&gt;




&lt;h2&gt;What Happened: The LiteLLM Compromise&lt;/h2&gt;

&lt;p&gt;Two days ago, the AI development community got a brutal wake-up call.&lt;/p&gt;

&lt;p&gt;LiteLLM - the open-source Python library that provides a unified interface across 100+ LLMs (OpenAI, Anthropic, VertexAI, and more) - was hit by a supply chain attack. Versions &lt;strong&gt;1.82.7&lt;/strong&gt; and &lt;strong&gt;1.82.8&lt;/strong&gt;, published directly to PyPI, contained a multi-stage credential stealer targeting SSH keys, cloud provider tokens, Kubernetes secrets, cryptocurrency wallets, and &lt;code&gt;.env&lt;/code&gt; files.&lt;/p&gt;

&lt;p&gt;The attack was carried out by &lt;strong&gt;TeamPCP&lt;/strong&gt;, a threat group that had been chaining compromises across the open-source ecosystem since late 2025. They gained access to a LiteLLM maintainer's PyPI credentials through a &lt;em&gt;prior&lt;/em&gt; compromise of Aqua Security's Trivy scanner - meaning one breach cascaded into another.&lt;/p&gt;

&lt;p&gt;Here's a timeline of what went down:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;~08:30 UTC&lt;/strong&gt; - Malicious versions 1.82.7 and 1.82.8 published to PyPI&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;10:39-16:00 UTC&lt;/strong&gt; - Compromised packages available for download&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;~11:25 UTC&lt;/strong&gt; - PyPI quarantined the packages&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;By end of day&lt;/strong&gt; - Versions yanked, credentials rotated, external IR engaged&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The payload was embedded in &lt;code&gt;proxy_server.py&lt;/code&gt; and, in version 1.82.8, a &lt;code&gt;.pth&lt;/code&gt; file (&lt;code&gt;litellm_init.pth&lt;/code&gt;) that &lt;strong&gt;executed automatically on every Python process startup&lt;/strong&gt;. Not when you ran your app. Not when you imported litellm. When &lt;em&gt;Python itself started&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;If you ran &lt;code&gt;pip install litellm&lt;/code&gt; without a pinned version during that window, the malware was already running before your code did.&lt;/p&gt;
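&lt;p&gt;The .pth mechanism is standard Python, and a harmless stand-in for litellm_init.pth shows it in a few lines: the interpreter executes any .pth line that begins with &lt;code&gt;import&lt;/code&gt; when it processes a site directory, which site.py does for the real site-packages at every startup.&lt;/p&gt;

```python
import os
import site
import sys
import tempfile

# Write a .pth file whose "import" line carries arbitrary code.
d = tempfile.mkdtemp()
with open(os.path.join(d, "demo.pth"), "w") as f:
    f.write("import sys; sys.pth_payload_ran = True\n")

# site.addsitedir() processes the directory the way startup processes
# site-packages: the import line above is executed immediately.
site.addsitedir(d)
print(getattr(sys, "pth_payload_ran", False))  # True
```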




&lt;h2&gt;Why This Matters More Than a Typical Supply Chain Attack&lt;/h2&gt;

&lt;p&gt;LiteLLM isn't just another package. It sits at the &lt;strong&gt;center of the AI stack&lt;/strong&gt;, acting as a gateway between applications and multiple LLM providers. That means it often has access to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;API keys for OpenAI, Anthropic, Azure, GCP, and other providers&lt;/li&gt;
&lt;li&gt;Environment variables with database passwords and service credentials&lt;/li&gt;
&lt;li&gt;Kubernetes cluster secrets when deployed as a proxy&lt;/li&gt;
&lt;li&gt;CI/CD pipeline tokens when used in automated workflows&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;According to Wiz, &lt;strong&gt;LiteLLM is present in 36% of all cloud environments&lt;/strong&gt;. That's an astonishing blast radius for a three-hour window of exposure.&lt;/p&gt;

&lt;p&gt;The stolen data was encrypted and exfiltrated to a C2 domain designed to look legitimate: &lt;code&gt;models.litellm[.]cloud&lt;/code&gt;. The payload also attempted to deploy privileged Kubernetes pods to every node in a cluster, giving the attackers persistent access even after the initial malware was removed.&lt;/p&gt;

&lt;p&gt;This wasn't an isolated incident. TeamPCP's campaign has now spanned &lt;strong&gt;five ecosystems&lt;/strong&gt; - GitHub Actions, Docker Hub, npm, Open VSX, and PyPI - with each compromise yielding credentials that unlock the next target.&lt;/p&gt;




&lt;h2&gt;The Real Problem: We Don't Scan What We Should&lt;/h2&gt;

&lt;p&gt;Here's the uncomfortable truth: most development teams have &lt;strong&gt;zero automated security checks&lt;/strong&gt; between their AI coding assistant generating code and that code hitting production.&lt;/p&gt;

&lt;p&gt;Think about the typical AI-assisted development workflow:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Developer asks an AI assistant to write a function&lt;/li&gt;
&lt;li&gt;AI generates code, often pulling in dependencies&lt;/li&gt;
&lt;li&gt;Developer reviews it (maybe), copies it, and ships it&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;At no point in that flow did anyone check whether those dependencies have known vulnerabilities. Nobody verified that the packages the AI suggested &lt;em&gt;actually exist&lt;/em&gt; (package hallucination is a real attack vector). And nobody scanned the generated code itself for injection vulnerabilities, hardcoded secrets, or insecure patterns.&lt;/p&gt;

&lt;p&gt;The LiteLLM incident makes this painfully clear. If your project had an unpinned &lt;code&gt;litellm&lt;/code&gt; dependency and your CI pipeline ran &lt;code&gt;pip install&lt;/code&gt; during that three-hour window, you were compromised. Period.&lt;/p&gt;




&lt;h2&gt;Enter kira-lite-mcp: Security Scanning Inside the AI Loop&lt;/h2&gt;

&lt;p&gt;This is where tools like &lt;a href="https://www.npmjs.com/package/@offgridsec/kira-lite-mcp" rel="noopener noreferrer"&gt;&lt;code&gt;@offgridsec/kira-lite-mcp&lt;/code&gt;&lt;/a&gt; become critical.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;kira-lite-mcp&lt;/strong&gt; is an MCP (Model Context Protocol) server that brings real-time security scanning directly into your AI coding workflow. Instead of scanning &lt;em&gt;after&lt;/em&gt; code is written and committed, it scans code &lt;strong&gt;as it's being generated&lt;/strong&gt; - inside the same conversation where your AI assistant is writing it.&lt;/p&gt;

&lt;h3&gt;What Makes It Different&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;It runs entirely on your machine.&lt;/strong&gt; Kira-lite ships with Kira-Core, a compiled Go binary bundled for macOS, Linux, and Windows. Your code never leaves your laptop. For proprietary codebases or regulated industries, this is non-negotiable.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;376 security rules out of the box.&lt;/strong&gt; It covers OWASP Top 10:2025, OWASP API Security Top 10, and - critically - &lt;strong&gt;OWASP LLM Top 10:2025&lt;/strong&gt;. That last category catches vulnerabilities that traditional scanners completely miss: LLM output passed directly to &lt;code&gt;eval()&lt;/code&gt; or &lt;code&gt;exec()&lt;/code&gt;, prompt injection patterns, and user input concatenated into prompt templates.&lt;/p&gt;
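&lt;p&gt;As an illustration of that last category, here is the eval() pattern such rules flag, alongside the literal-only alternative. The snippet and its variable names are ours, not taken from any scanner's rule set.&lt;/p&gt;

```python
import ast

llm_output = "[1, 2, 3]"  # imagine this string came back from a model

# Flagged pattern: model output fed straight to eval() executes
# whatever the model (or a prompt injector) chose to emit.
risky = eval(llm_output)

# Safer: ast.literal_eval() accepts only Python literals and raises
# on anything containing calls, imports, or attribute access.
safe = ast.literal_eval(llm_output)
print(risky == safe)  # True for benign input; only one of them is safe
```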

&lt;p&gt;&lt;strong&gt;Five MCP tools that your AI assistant can invoke contextually:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;scan_code&lt;/code&gt; - scans a snippet &lt;em&gt;before it's written to disk&lt;/em&gt;. The AI literally checks its own work.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;scan_file&lt;/code&gt; - scans an existing file and auto-triggers dependency scanning on lockfiles.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;scan_diff&lt;/code&gt; - compares original vs. modified code and flags only &lt;em&gt;new&lt;/em&gt; vulnerabilities.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;scan_dependency&lt;/code&gt; - checks your lockfiles against CVE databases across 13 formats (npm, PyPI, Go, Maven, crates.io, RubyGems, and more).&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;scan_project&lt;/code&gt; - full project-level scan with configurable severity thresholds.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;How This Would Have Helped With LiteLLM&lt;/h3&gt;

&lt;p&gt;Let's map kira-lite's capabilities directly to the LiteLLM attack scenario:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Dependency scanning catches known-bad versions.&lt;/strong&gt; If your project's &lt;code&gt;requirements.txt&lt;/code&gt; or lockfile pulled in litellm 1.82.7 or 1.82.8 after the CVE was published, kira-lite's dependency scanner would flag it. It checks against live CVE databases, not just a static list - so as advisories are published, your scans pick them up.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Code scanning catches suspicious patterns.&lt;/strong&gt; The LiteLLM payload used &lt;code&gt;os.system()&lt;/code&gt; calls, base64-encoded strings, and network exfiltration patterns. These are exactly the kinds of patterns that SAST rules detect. If an AI assistant generated code that imports from or interacts with a compromised dependency in a suspicious way, &lt;code&gt;scan_code&lt;/code&gt; would catch it before it's saved.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Project-level scanning catches transitive risks.&lt;/strong&gt; Many developers weren't even using LiteLLM directly - it was pulled in as a &lt;em&gt;transitive dependency&lt;/em&gt; through AI agent frameworks, MCP servers, or LLM orchestration tools. A full project scan would surface these hidden dependencies.&lt;/p&gt;




&lt;h2&gt;Setting It Up (It Takes 30 Seconds)&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npx @offgridsec/kira-lite-mcp
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That's it. Zero config. Works with Claude Code, Claude Desktop, Cursor, VS Code, and any MCP-compatible client.&lt;/p&gt;

&lt;p&gt;For Claude Desktop, add to your config:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"mcpServers"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"kira-lite"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"command"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"npx"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"args"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"-y"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"@offgridsec/kira-lite-mcp"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once configured, your AI assistant can call the security scanning tools automatically - catching vulnerabilities in the same conversation where code is being written.&lt;/p&gt;




&lt;h2&gt;Broader Lessons From the LiteLLM Incident&lt;/h2&gt;

&lt;p&gt;The LiteLLM compromise isn't the end of this campaign. As Endor Labs put it: &lt;em&gt;"TeamPCP has demonstrated a consistent pattern: each compromised environment yields credentials that unlock the next target."&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Here's what every team building with AI should take away:&lt;/p&gt;

&lt;h3&gt;1. Pin Your Dependencies&lt;/h3&gt;

&lt;p&gt;If your &lt;code&gt;requirements.txt&lt;/code&gt; says &lt;code&gt;litellm&lt;/code&gt; instead of &lt;code&gt;litellm==1.82.6&lt;/code&gt;, you were exposed. Pin versions. Use lockfiles. Verify hashes. This is basic supply chain hygiene that too many AI projects skip because they move fast.&lt;/p&gt;
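&lt;p&gt;A minimal sketch of what that looks like in a requirements.txt; the hash line is a placeholder to be generated by a lockfile tool you trust, not a real digest.&lt;/p&gt;

```text
# requirements.txt -- exact version, not a floating spec
litellm==1.82.6

# stronger: pip's hash-checking mode rejects any artifact whose digest
# doesn't match, even if an attacker republishes the same version number
# litellm==1.82.6 \
#     --hash=sha256:(digest from a trusted lockfile, e.g. pip-compile --generate-hashes)
```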

&lt;h3&gt;2. Treat AI-Generated Code Like Untrusted Code&lt;/h3&gt;

&lt;p&gt;Your AI assistant doesn't know about zero-day supply chain attacks. It doesn't verify that the packages it suggests are uncompromised. Treat its output the same way you'd treat code from a contractor who doesn't know your security policies.&lt;/p&gt;

&lt;h3&gt;3. Scan In the Loop, Not After the Fact&lt;/h3&gt;

&lt;p&gt;The traditional model - write code, commit, CI scans, fix later - doesn't scale when AI is generating code at 10x the speed of manual development. Tools like kira-lite-mcp shift security scanning &lt;em&gt;left of left&lt;/em&gt;: into the generation step itself.&lt;/p&gt;

&lt;h3&gt;4. Watch Your Transitive Dependencies&lt;/h3&gt;

&lt;p&gt;LiteLLM wasn't just installed directly. It was pulled in by MCP servers, agent frameworks, and orchestration tools. Know your dependency tree. Audit it regularly. Use &lt;code&gt;scan_dependency&lt;/code&gt; on every lockfile in your project.&lt;/p&gt;

&lt;h3&gt;5. Assume Breach, Rotate Fast&lt;/h3&gt;

&lt;p&gt;If you installed litellm 1.82.7 or 1.82.8, assume every credential on that machine is compromised. SSH keys, cloud tokens, API keys, database passwords, crypto wallets - rotate all of them. Check for persistence mechanisms (&lt;code&gt;~/.config/sysmon/sysmon.py&lt;/code&gt;, &lt;code&gt;~/.config/systemd/user/sysmon.service&lt;/code&gt;). In Kubernetes environments, audit &lt;code&gt;kube-system&lt;/code&gt; for pods matching &lt;code&gt;node-setup-*&lt;/code&gt;.&lt;/p&gt;
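&lt;p&gt;The filesystem part of that check is easy to script. Only the two paths come from the advisory; the helper name is ours.&lt;/p&gt;

```python
import os

# Persistence artifacts named in the advisory
SUSPECT_PATHS = [
    "~/.config/sysmon/sysmon.py",
    "~/.config/systemd/user/sysmon.service",
]

def find_persistence(paths=SUSPECT_PATHS) -> list:
    """Return any of the suspect paths that exist on this machine."""
    return [p for p in paths if os.path.exists(os.path.expanduser(p))]

hits = find_persistence()
print(hits if hits else "no known persistence artifacts found")
```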




&lt;h2&gt;The Bottom Line&lt;/h2&gt;

&lt;p&gt;The AI supply chain is now a high-value target. LiteLLM's 3.4 million daily downloads made it an incredibly efficient attack vector - and the attackers knew it.&lt;/p&gt;

&lt;p&gt;We can't prevent every supply chain compromise. But we &lt;em&gt;can&lt;/em&gt; build security into the development workflow itself, catching vulnerabilities before they become incidents. Tools like kira-lite-mcp represent a fundamental shift: from security as a gate at the end of the pipeline to security as a collaborator sitting next to you while you code.&lt;/p&gt;

&lt;p&gt;The LiteLLM incident cost organizations time, trust, and potentially sensitive data. The next supply chain attack - and there will be a next one - doesn't have to.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;If you found this useful, give it a like and follow for more on AI security and developer tooling. Have questions about securing your AI development workflow? Drop them in the comments.&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Tags:&lt;/strong&gt; &lt;code&gt;#security&lt;/code&gt; &lt;code&gt;#ai&lt;/code&gt; &lt;code&gt;#programming&lt;/code&gt; &lt;code&gt;#devops&lt;/code&gt;&lt;/p&gt;




&lt;h3&gt;
  
  
  FAQ: LiteLLM Supply Chain Attack and AI Security
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;What was the LiteLLM supply chain attack?&lt;/strong&gt;&lt;br&gt;
On March 24, 2026, the LiteLLM Python package on PyPI was compromised by a threat group called TeamPCP. Versions 1.82.7 and 1.82.8 contained credential-stealing malware that targeted SSH keys, cloud tokens, API keys, and Kubernetes secrets. The malicious packages were available for approximately three hours before being removed.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How do I know if I was affected by the LiteLLM attack?&lt;/strong&gt;&lt;br&gt;
You were likely affected if you ran &lt;code&gt;pip install litellm&lt;/code&gt; without a pinned version between 10:39 UTC and 16:00 UTC on March 24, 2026, and received version 1.82.7 or 1.82.8. Run &lt;code&gt;pip show litellm&lt;/code&gt; to check your installed version. Users of the official LiteLLM Proxy Docker image were not impacted.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What is kira-lite-mcp?&lt;/strong&gt;&lt;br&gt;
kira-lite-mcp (&lt;code&gt;@offgridsec/kira-lite-mcp&lt;/code&gt;) is an MCP-based security scanner that integrates directly into AI coding assistants like Claude Code, Cursor, and VS Code. It scans code for vulnerabilities, checks dependencies against CVE databases, and detects insecure patterns - all in real-time during the development process, entirely on your local machine.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How does MCP help with security scanning?&lt;/strong&gt;&lt;br&gt;
MCP (Model Context Protocol) allows AI assistants to invoke external tools contextually. With an MCP security scanner, the AI can proactively check its own generated code for vulnerabilities, scan dependencies for known CVEs, and flag insecure patterns before the code is ever saved to disk.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What should I do if I installed a compromised version of LiteLLM?&lt;/strong&gt;&lt;br&gt;
Immediately remove the package, purge your pip cache, rotate all credentials that were accessible on the affected machine (SSH keys, cloud tokens, API keys, database passwords), check for persistence mechanisms, and if running in Kubernetes, audit for unauthorized pods. Consider rebuilding affected systems from a known clean state.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Your AI Coding Assistant is Probably Writing Vulnerabilities. Here's How to Catch Them.</title>
      <dc:creator>Aditi Bhatnagar</dc:creator>
      <pubDate>Sat, 28 Feb 2026 10:40:47 +0000</pubDate>
      <link>https://forem.com/aditi_bhatnagar_0250c01e4/your-ai-coding-assistant-is-probably-writing-vulnerabilities-heres-how-to-catch-them-3k8j</link>
      <guid>https://forem.com/aditi_bhatnagar_0250c01e4/your-ai-coding-assistant-is-probably-writing-vulnerabilities-heres-how-to-catch-them-3k8j</guid>
      <description>&lt;p&gt;Hi there, my fellow people on the internet. Hope you're doing well and your codebase isn't on fire (yet).&lt;/p&gt;

&lt;p&gt;So here's the thing. Over the past year I've been watching something unfold that genuinely worries me. Everyone and their dog is using AI to write code now. Copilot, Cursor, Claude Code, ChatGPT, you name it. Vibe coding is real, and the productivity gains are no joke. I've used these tools myself while building Kira at Offgrid Security, and I'm not about to pretend they aren't useful.&lt;/p&gt;

&lt;p&gt;But I've also spent a decade in security, building endpoint protection at Microsoft, securing cloud infrastructure at Atlassian, and now running my own security company. And that lens makes it impossible for me to look at AI-generated code and not ask my favorite question: what can go wrong?&lt;/p&gt;

&lt;p&gt;Turns out, a lot.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Numbers Don't Lie (And They Aren't Pretty)
&lt;/h2&gt;

&lt;p&gt;Veracode recently published their 2025 GenAI Code Security Report after testing code from over 100 large language models. The headline finding? AI-generated code introduced security flaws in 45% of test cases. Not edge cases. Not obscure languages. Common OWASP Top 10 vulnerabilities across Java, Python, JavaScript, and C#.&lt;/p&gt;

&lt;p&gt;Java was the worst offender with a 72% security failure rate. Cross-Site Scripting had an 86% failure rate. Let that sink in.&lt;/p&gt;

&lt;p&gt;And here's the part that surprised even me: bigger, newer models don't do any better. Security performance has stayed flat even as models have gotten dramatically better at writing code that compiles and runs. They've learned syntax. They haven't learned security.&lt;/p&gt;

&lt;p&gt;Apiiro's independent research across Fortune 50 companies backed this up, finding 2.74x more vulnerabilities in AI-generated code compared to human-written code. That's not a rounding error. That's a systemic problem.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi64ft4s7h2cmfvmty34x.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi64ft4s7h2cmfvmty34x.webp" alt=" " width="800" height="398"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Does This Keep Happening?
&lt;/h2&gt;

&lt;p&gt;If you think about how LLMs learn to code, it makes total sense. They're trained on massive amounts of publicly available code from GitHub, Stack Overflow, tutorials, blog posts. The thing is, a huge chunk of that code is insecure. Old patterns, missing input validation, hardcoded credentials, SQL queries built with string concatenation. If the training data is full of bad habits, the model will confidently reproduce those bad habits.&lt;/p&gt;

&lt;p&gt;The other piece is that LLMs don't understand your threat model. They don't know your application's architecture, your trust boundaries, your authentication flow. When you ask for an API endpoint, the model will happily generate one that accepts input without validation, because you didn't tell it to validate. And honestly, most developers don't include security constraints in their prompts. That's the whole premise of vibe coding: tell the AI what you want, trust it to figure out the how.&lt;/p&gt;

&lt;p&gt;The problem is that "the how" often skips the security bits entirely.&lt;/p&gt;

&lt;p&gt;I categorize these into three buckets:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Obvious Stuff&lt;/strong&gt; - missing input sanitization, SQL injection, XSS. These are the classics that have been plaguing us for two decades, and LLMs are very good at reintroducing them because they're overrepresented in training data.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Subtle Stuff&lt;/strong&gt; - business logic flaws, missing access controls, race conditions. The code looks correct. It passes basic tests. But it's missing the guardrails that a security-conscious developer would add. This is harder to catch because there's no obvious "bad pattern" to scan for.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Novel Stuff&lt;/strong&gt; - hallucinated dependencies (packages that don't exist but an attacker could register), overly complex dependency trees for simple tasks, and the reintroduction of deprecated or known-vulnerable libraries. This one is uniquely AI-flavored and it's growing fast.&lt;/p&gt;
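&lt;p&gt;To make the first bucket concrete, here's the classic concatenation flaw next to the parameterized fix, sketched with &lt;code&gt;sqlite3&lt;/code&gt; and a toy table:&lt;/p&gt;

```python
import sqlite3

# The pattern LLMs keep reproducing: SQL built by string concatenation.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
db.execute("INSERT INTO users VALUES ('alice', 0)")

user_input = "alice' OR '1'='1"

# Vulnerable: attacker-controlled input rewrites the query's structure.
vuln = db.execute(
    "SELECT * FROM users WHERE name = '" + user_input + "'"
).fetchall()
print(len(vuln))  # 1 - the injected OR clause matched every row in the table

# Safe: a parameterized query treats the input as data, never as SQL.
safe = db.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)
).fetchall()
print(len(safe))  # 0 - no user is literally named "alice' OR '1'='1"
```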

&lt;h2&gt;
  
  
  So What Do We Do About It?
&lt;/h2&gt;

&lt;p&gt;Here's where I get to talk about what I've been building.&lt;/p&gt;

&lt;p&gt;At Offgrid Security, the core problem we're solving with Kira is: can we use AI to actually catch the security issues that AI introduces? Fight fire with fire, if you will.&lt;/p&gt;

&lt;p&gt;We recently released &lt;a href="https://www.npmjs.com/package/@offgridsec/kira-lite-mcp" rel="noopener noreferrer"&gt;kira-lite&lt;/a&gt;, an MCP (Model Context Protocol) server that plugs directly into your AI-powered development workflow. If you haven't been following the MCP ecosystem, here's the quick version: MCP is a standard protocol that lets AI assistants connect to external tools and data sources. Think of it as giving your AI coding assistant the ability to call out to specialized services while it's working.&lt;/p&gt;

&lt;p&gt;The idea behind kira-lite is straightforward. Instead of generating code and hoping for the best, your AI assistant can call kira-lite during the development process to scan for security issues before the code is even written to disk. It sits in the workflow, not after it.&lt;/p&gt;

&lt;p&gt;Here's how you'd set it up:&lt;/p&gt;

&lt;p&gt;Claude Code:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;claude mcp add --scope user kira-lite -- npx -y @offgridsec/kira-lite-mcp
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Cursor / Windsurf / Other MCP Clients:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;{
  "kira-lite": {
    "command": "npx",
    "args": ["-y", "@offgridsec/kira-lite-mcp"]
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;No API keys. No accounts. No external servers. One command and you're scanning.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Makes This a One-Stop Solution
&lt;/h2&gt;

&lt;p&gt;I've used a lot of security scanners in my career. Most of them do one thing okay. Some catch secrets, some catch injection flaws, some handle dependency vulnerabilities. Kira-lite was built to be the thing you don't have to supplement with five other tools.&lt;/p&gt;

&lt;p&gt;Here's what ships out of the box:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;376 built-in security rules across 15 languages and formats.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;We're not talking about a toy regex scanner here. This covers JavaScript, TypeScript, Python, Java, Go, C#, PHP, Ruby, C/C++, Shell, Terraform, Dockerfile, Kubernetes YAML, and more. Each language has framework-specific rules too: Django's &lt;code&gt;DEBUG=True&lt;/code&gt; in production, Spring's CSRF protection disabled, Express.js missing helmet middleware, React's &lt;code&gt;dangerouslySetInnerHTML&lt;/code&gt; with unsanitized input. The stuff that generic scanners miss because they don't understand the framework context.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;92 secret detectors.&lt;/strong&gt;&lt;br&gt;
Not just AWS keys and GitHub tokens. We're detecting credentials for Anthropic, OpenAI, Groq, HuggingFace, DeepSeek (relevant given the AI coding boom), plus cloud providers like GCP, DigitalOcean, Vercel, Netlify, Fly.io. CI/CD tokens for CircleCI, Buildkite, Terraform Cloud. SaaS tokens for Atlassian, Okta, Auth0, Sentry, Datadog. Payment keys for Stripe, PayPal, Square, Razorpay. The list goes on. If an AI coding assistant hardcodes a credential (and they love doing this), kira-lite will catch it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Dependency vulnerability scanning across 11 ecosystems.&lt;/strong&gt;&lt;br&gt;
This one is huge. Kira-lite scans your lockfiles against the OSV.dev database (the same data source behind Google's osv-scanner) for known CVEs. It supports npm, PyPI, Go, Maven, crates.io, RubyGems, Packagist, NuGet, Pub, and Hex. Thirteen lockfile formats total. Remember what I said about AI assistants introducing too many dependencies? This is how you catch the ones with known vulnerabilities before they become your problem.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Full OWASP coverage.&lt;/strong&gt;&lt;br&gt;
And I don't mean "we cover a few items from the Top 10." Kira-lite maps to OWASP Top 10:2025, OWASP API Security Top 10, and OWASP LLM Top 10:2025. That last one is particularly relevant. It catches things like LLM output being passed directly to eval() or exec(), prompt injection patterns, and user input concatenated into prompt templates. If you're building AI-powered applications (and who isn't, these days), these are the vulnerabilities that existing scanners completely ignore.&lt;/p&gt;

&lt;p&gt;Five distinct MCP tools that your AI assistant can invoke contextually:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;scan_code&lt;/code&gt; scans a snippet before it's written to disk. The AI literally checks its own work before handing it to you.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;scan_file&lt;/code&gt; scans an existing file and automatically triggers dependency scanning if it hits a lockfile.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;scan_diff&lt;/code&gt; compares original vs modified code and reports only new vulnerabilities. This is incredibly useful during refactors where you don't want noise from pre-existing issues.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;scan_dependencies&lt;/code&gt; does a full dependency audit on demand.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;fix_vulnerability&lt;/code&gt; provides remediation guidance for specific vulnerability IDs or CWEs.&lt;/p&gt;
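&lt;p&gt;The idea behind diff-aware reporting can be sketched in a few lines. This is a toy model, not kira-lite's internals:&lt;/p&gt;

```python
# Toy model: findings are (rule_id, snippet) tuples; report only the
# findings that appear in the modified code but not in the original.
def diff_findings(old_findings: list, new_findings: list) -> list:
    known = set(old_findings)
    return [f for f in new_findings if f not in known]

old = [("SQLI-001", "query = 'SELECT ' + user")]
new = [("SQLI-001", "query = 'SELECT ' + user"), ("XSS-004", "innerHTML = data")]
print(diff_findings(old, new))  # [('XSS-004', 'innerHTML = data')]
```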

&lt;p&gt;And the scanning happens &lt;strong&gt;entirely on your machine&lt;/strong&gt;. Kira-lite ships with Kira-Core, a compiled Go binary bundled for macOS, Linux, and Windows. Your code never leaves your laptop. For anyone working on proprietary codebases or in regulated industries, that's not a nice-to-have, it's a requirement.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why MCP and Why Now?
&lt;/h2&gt;

&lt;p&gt;I've been thinking about this a lot. The traditional security tooling model is built around gates and checkpoints. Write code, commit, run CI pipeline, scanner finds issues, developer goes back to fix. It works, but it's slow and creates friction that developers (understandably) resent.&lt;/p&gt;

&lt;p&gt;With MCP, the security tool becomes a collaborator rather than a gatekeeper. The AI assistant can proactively check its own work. It can call &lt;code&gt;scan_code&lt;/code&gt; before presenting a snippet to you, catch the SQL injection in the Python function or the missing authentication check on the API endpoint, and fix it in the same conversation. No context switch. No waiting for CI. No separate dashboard to check.&lt;/p&gt;

&lt;p&gt;With Claude Code, you can even set it up so that every edit is automatically scanned. Drop a &lt;code&gt;CLAUDE.md&lt;/code&gt; file in your project that tells Claude to call &lt;code&gt;scan_code&lt;/code&gt; before every write operation, and you've essentially got a security co-pilot riding shotgun on every line of AI-generated code.&lt;/p&gt;
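&lt;p&gt;For illustration, such an instruction file might look something like this. The wording is mine, not an official template:&lt;/p&gt;

```markdown
# CLAUDE.md

## Security policy

- Before every Write or Edit operation, call the `scan_code` MCP tool on
  the new content and fix any findings before saving.
- After touching a lockfile, call `scan_dependencies` and report any
  known CVEs in the affected packages.
```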

&lt;p&gt;This isn't a magic bullet. I want to be clear about that. No tool catches everything, and the security landscape for AI-generated code is evolving faster than any single solution can keep up with. But the shift from "scan after the fact" to "scan during generation" is significant. It's the difference between finding the fire after it's spread and catching the spark.&lt;/p&gt;

&lt;h2&gt;
  
  
  Things I'd Recommend Right Now
&lt;/h2&gt;

&lt;p&gt;Whether you use kira-lite or not, here are some things I'd strongly suggest if your team is using AI coding assistants:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Don't trust, verify.&lt;/strong&gt; Treat AI-generated code the same way you'd treat code from a new contractor who doesn't know your codebase. Review it. Question it. Don't assume it's handling edge cases or security concerns just because it compiles.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Add security context to your prompts.&lt;/strong&gt; If you're asking an AI to write an API endpoint, explicitly say "include input validation, authentication checks, and parameterized queries." It won't add these by default.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Automate scanning in the loop.&lt;/strong&gt; Whether it's through an MCP server like kira-lite, a SAST tool in your CI pipeline, or both, don't ship AI-generated code without automated security analysis. The volume of code being generated is too high for manual review alone.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Watch your dependencies.&lt;/strong&gt; AI assistants love adding packages. Check that those packages actually exist, are maintained, and don't have known vulnerabilities. Package hallucination is a real attack vector now. Tools like kira-lite's dependency scanner can automatically check your lockfiles against CVE databases, which saves you from manually auditing every npm install your AI assistant decides to run.&lt;/p&gt;
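&lt;p&gt;A hedged sketch of a minimal hallucination check using PyPI's public JSON endpoint, which returns a 404 for names that were never registered:&lt;/p&gt;

```python
from urllib import request
from urllib.error import HTTPError

def pypi_url(name: str) -> str:
    """PyPI's JSON metadata endpoint for a package name."""
    return f"https://pypi.org/pypi/{name}/json"

def exists_on_pypi(name: str) -> bool:
    """Network call - True if the package name is registered on PyPI."""
    try:
        with request.urlopen(pypi_url(name)) as resp:
            return resp.status == 200
    except HTTPError as e:
        if e.code == 404:
            return False  # never registered: a hallucination candidate
        raise
```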

&lt;p&gt;&lt;strong&gt;Educate your team.&lt;/strong&gt; The developers using AI tools need to understand that "working code" and "secure code" are not the same thing. This isn't about slowing people down. It's about building awareness so they know what to look for.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Road Ahead
&lt;/h2&gt;

&lt;p&gt;I genuinely believe AI is going to transform how we build software. I'm building an AI security company, so clearly I'm bought in on that future. But we're in this weird in-between phase where the tools are powerful enough to generate massive amounts of code and not yet smart enough to make that code secure by default.&lt;/p&gt;

&lt;p&gt;That gap is where the next wave of security work lives. It's where I'm spending all my time right now, and honestly, it's one of the most interesting problems I've worked on in my career.&lt;/p&gt;

&lt;p&gt;If you're working in this space too, or if you're a developer trying to figure out how to use AI tools without accidentally introducing a bunch of CVEs, I'd love to chat. Hit me up on LinkedIn or check out what we're building at Offgrid Security.&lt;/p&gt;

&lt;p&gt;And if you want to try kira-lite, the package is up on npm: @offgridsec/kira-lite-mcp. One npx command, zero config, and 376 rules scanning your code before it ever hits the filesystem. I think you'll find it genuinely useful.&lt;/p&gt;

&lt;p&gt;Will be back soon with more on this topic. There's a lot more to unpack, especially around how agent-based workflows are creating entirely new attack surfaces.&lt;/p&gt;

&lt;p&gt;Keep hacking till then ();&lt;/p&gt;

</description>
      <category>ai</category>
      <category>programming</category>
      <category>security</category>
      <category>vibecoding</category>
    </item>
  </channel>
</rss>
