<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Aditi Bhatnagar</title>
    <description>The latest articles on Forem by Aditi Bhatnagar (@aditi_bhatnagar_0250c01e4).</description>
    <link>https://forem.com/aditi_bhatnagar_0250c01e4</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3798080%2Fab864152-172d-49ba-91af-9bf4061ba47e.jpg</url>
      <title>Forem: Aditi Bhatnagar</title>
      <link>https://forem.com/aditi_bhatnagar_0250c01e4</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/aditi_bhatnagar_0250c01e4"/>
    <language>en</language>
    <item>
      <title>The LiteLLM Supply Chain Attack Just Changed Everything - Here's How to Protect Your AI Stack</title>
      <dc:creator>Aditi Bhatnagar</dc:creator>
      <pubDate>Thu, 26 Mar 2026 10:41:52 +0000</pubDate>
      <link>https://forem.com/aditi_bhatnagar_0250c01e4/the-litellm-supply-chain-attack-just-changed-everything-heres-how-to-protect-your-ai-stack-ff6</link>
      <guid>https://forem.com/aditi_bhatnagar_0250c01e4/the-litellm-supply-chain-attack-just-changed-everything-heres-how-to-protect-your-ai-stack-ff6</guid>
      <description>&lt;h2&gt;
  
  
  TL;DR
&lt;/h2&gt;

&lt;p&gt;On March 26, 2026, the widely used LiteLLM Python package was compromised in a supply chain attack. Malicious versions stole credentials from tens of thousands of developers. This post breaks down what happened, why AI tooling is uniquely vulnerable, and how MCP-based security tools like &lt;code&gt;kira-lite-mcp&lt;/code&gt; can catch these threats before they hit production.&lt;/p&gt;




&lt;h2&gt;What Happened: The LiteLLM Compromise&lt;/h2&gt;

&lt;p&gt;Two days ago, the AI development community got a brutal wake-up call.&lt;/p&gt;

&lt;p&gt;LiteLLM - the open-source Python library that provides a unified interface across 100+ LLMs (OpenAI, Anthropic, VertexAI, and more) - was hit by a supply chain attack. Versions &lt;strong&gt;1.82.7&lt;/strong&gt; and &lt;strong&gt;1.82.8&lt;/strong&gt;, published directly to PyPI, contained a multi-stage credential stealer targeting SSH keys, cloud provider tokens, Kubernetes secrets, cryptocurrency wallets, and &lt;code&gt;.env&lt;/code&gt; files.&lt;/p&gt;

&lt;p&gt;The attack was carried out by &lt;strong&gt;TeamPCP&lt;/strong&gt;, a threat group that had been chaining compromises across the open-source ecosystem since late 2025. They gained access to a LiteLLM maintainer's PyPI credentials through a &lt;em&gt;prior&lt;/em&gt; compromise of Aqua Security's Trivy scanner - meaning one breach cascaded into another.&lt;/p&gt;

&lt;p&gt;Here's a timeline of what went down:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;~08:30 UTC&lt;/strong&gt; - Malicious versions 1.82.7 and 1.82.8 published to PyPI&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;~11:25 UTC&lt;/strong&gt; - PyPI quarantined the packages, closing a roughly three-hour download window&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;By end of day&lt;/strong&gt; - Versions yanked, credentials rotated, external IR engaged&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The payload was embedded in &lt;code&gt;proxy_server.py&lt;/code&gt; and, in version 1.82.8, a &lt;code&gt;.pth&lt;/code&gt; file (&lt;code&gt;litellm_init.pth&lt;/code&gt;) that &lt;strong&gt;executed automatically on every Python process startup&lt;/strong&gt;. Not when you ran your app. Not when you imported litellm. When &lt;em&gt;Python itself started&lt;/em&gt;.&lt;/p&gt;
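&lt;p&gt;You can enumerate that startup hook yourself. Here's a minimal sketch (not part of kira-lite; the &lt;code&gt;audit_pth_files&lt;/code&gt; helper is hypothetical) that lists the &lt;code&gt;.pth&lt;/code&gt; lines your interpreter will execute on startup - the same mechanism &lt;code&gt;litellm_init.pth&lt;/code&gt; abused:&lt;/p&gt;

```python
import site
from pathlib import Path

def audit_pth_files(directory):
    """List .pth lines that Python will execute at interpreter startup.

    The site module runs any line in a .pth file that begins with
    'import' -- a legitimate packaging feature, but also the hook
    the malicious litellm_init.pth used for persistence.
    """
    findings = []
    for pth in Path(directory).glob("*.pth"):
        for lineno, line in enumerate(pth.read_text().splitlines(), 1):
            if line.startswith("import"):
                findings.append((pth.name, lineno, line.strip()))
    return findings

if __name__ == "__main__":
    for site_dir in site.getsitepackages():
        for name, lineno, line in audit_pth_files(site_dir):
            print(f"{site_dir}/{name}:{lineno}: {line}")
```

&lt;p&gt;Most hits will be legitimate (setuptools and coverage tools use this hook), but anything unfamiliar deserves a close look.&lt;/p&gt;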

&lt;p&gt;If you ran &lt;code&gt;pip install litellm&lt;/code&gt; without a pinned version during that window, the malware was already running before your code did.&lt;/p&gt;




&lt;h2&gt;Why This Matters More Than a Typical Supply Chain Attack&lt;/h2&gt;

&lt;p&gt;LiteLLM isn't just another package. It sits at the &lt;strong&gt;center of the AI stack&lt;/strong&gt;, acting as a gateway between applications and multiple LLM providers. That means it often has access to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;API keys for OpenAI, Anthropic, Azure, GCP, and other providers&lt;/li&gt;
&lt;li&gt;Environment variables with database passwords and service credentials&lt;/li&gt;
&lt;li&gt;Kubernetes cluster secrets when deployed as a proxy&lt;/li&gt;
&lt;li&gt;CI/CD pipeline tokens when used in automated workflows&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;According to Wiz, &lt;strong&gt;LiteLLM is present in 36% of all cloud environments&lt;/strong&gt;. That's an astonishing blast radius for a three-hour window of exposure.&lt;/p&gt;

&lt;p&gt;The stolen data was encrypted and exfiltrated to a C2 domain designed to look legitimate: &lt;code&gt;models.litellm[.]cloud&lt;/code&gt;. The payload also attempted to deploy privileged Kubernetes pods to every node in a cluster, giving the attackers persistent access even after the initial malware was removed.&lt;/p&gt;

&lt;p&gt;This wasn't an isolated incident. TeamPCP's campaign has now spanned &lt;strong&gt;five ecosystems&lt;/strong&gt; - GitHub Actions, Docker Hub, npm, Open VSX, and PyPI - with each compromise yielding credentials that unlock the next target.&lt;/p&gt;




&lt;h2&gt;The Real Problem: We Don't Scan What We Should&lt;/h2&gt;

&lt;p&gt;Here's the uncomfortable truth: most development teams have &lt;strong&gt;zero automated security checks&lt;/strong&gt; between their AI coding assistant generating code and that code hitting production.&lt;/p&gt;

&lt;p&gt;Think about the typical AI-assisted development workflow:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Developer asks an AI assistant to write a function&lt;/li&gt;
&lt;li&gt;AI generates code, often pulling in dependencies&lt;/li&gt;
&lt;li&gt;Developer reviews it (maybe), copies it, and ships it&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;At no point in that flow did anyone check whether those dependencies have known vulnerabilities. Nobody verified that the packages the AI suggested &lt;em&gt;actually exist&lt;/em&gt; (package hallucination is a real attack vector). And nobody scanned the generated code itself for injection vulnerabilities, hardcoded secrets, or insecure patterns.&lt;/p&gt;
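&lt;p&gt;The hallucination check in particular is cheap to automate. A hedged sketch - the &lt;code&gt;pypi_exists&lt;/code&gt; helper is hypothetical, but PyPI's public JSON endpoint (&lt;code&gt;pypi.org/pypi/NAME/json&lt;/code&gt;) is real and returns 404 for names that don't exist:&lt;/p&gt;

```python
from urllib.request import urlopen
from urllib.error import HTTPError

def pypi_exists(name, fetch=None):
    """Return True if a package name resolves on PyPI.

    A package an AI assistant hallucinated returns a 404 here -- and an
    attacker can register that name later. `fetch` is injectable so the
    check can be tested offline; by default it hits the real endpoint.
    """
    url = f"https://pypi.org/pypi/{name}/json"
    if fetch is None:
        def fetch(u):
            try:
                with urlopen(u) as resp:
                    return resp.status
            except HTTPError as err:
                return err.code
    return fetch(url) == 200
```

&lt;p&gt;A 200 only proves the name exists, not that the package is trustworthy - but it closes the door on the simplest hallucination-squatting attacks.&lt;/p&gt;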

&lt;p&gt;The LiteLLM incident makes this painfully clear. If your project had an unpinned &lt;code&gt;litellm&lt;/code&gt; dependency and your CI pipeline ran &lt;code&gt;pip install&lt;/code&gt; during that three-hour window, you were compromised. Period.&lt;/p&gt;




&lt;h2&gt;Enter kira-lite-mcp: Security Scanning Inside the AI Loop&lt;/h2&gt;

&lt;p&gt;This is where tools like &lt;a href="https://www.npmjs.com/package/@offgridsec/kira-lite-mcp" rel="noopener noreferrer"&gt;&lt;code&gt;@offgridsec/kira-lite-mcp&lt;/code&gt;&lt;/a&gt; become critical.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;kira-lite-mcp&lt;/strong&gt; is an MCP (Model Context Protocol) server that brings real-time security scanning directly into your AI coding workflow. Instead of scanning &lt;em&gt;after&lt;/em&gt; code is written and committed, it scans code &lt;strong&gt;as it's being generated&lt;/strong&gt; - inside the same conversation where your AI assistant is writing it.&lt;/p&gt;

&lt;h3&gt;What Makes It Different&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;It runs entirely on your machine.&lt;/strong&gt; Kira-lite ships with Kira-Core, a compiled Go binary bundled for macOS, Linux, and Windows. Your code never leaves your laptop. For proprietary codebases or regulated industries, this is non-negotiable.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;376 security rules out of the box.&lt;/strong&gt; It covers OWASP Top 10:2025, OWASP API Security Top 10, and - critically - &lt;strong&gt;OWASP LLM Top 10:2025&lt;/strong&gt;. That last category catches vulnerabilities that traditional scanners completely miss: LLM output passed directly to &lt;code&gt;eval()&lt;/code&gt; or &lt;code&gt;exec()&lt;/code&gt;, prompt injection patterns, and user input concatenated into prompt templates.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Five MCP tools that your AI assistant can invoke contextually:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;scan_code&lt;/code&gt; - scans a snippet &lt;em&gt;before it's written to disk&lt;/em&gt;. The AI literally checks its own work.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;scan_file&lt;/code&gt; - scans an existing file and auto-triggers dependency scanning on lockfiles.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;scan_diff&lt;/code&gt; - compares original vs. modified code and flags only &lt;em&gt;new&lt;/em&gt; vulnerabilities.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;scan_dependency&lt;/code&gt; - checks your lockfiles against CVE databases across 13 formats (npm, PyPI, Go, Maven, crates.io, RubyGems, and more).&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;scan_project&lt;/code&gt; - full project-level scan with configurable severity thresholds.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;How This Would Have Helped With LiteLLM&lt;/h3&gt;

&lt;p&gt;Let's map kira-lite's capabilities directly to the LiteLLM attack scenario:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Dependency scanning catches known-bad versions.&lt;/strong&gt; If your project's &lt;code&gt;requirements.txt&lt;/code&gt; or lockfile pulled in litellm 1.82.7 or 1.82.8 after the CVE was published, kira-lite's dependency scanner would flag it. It checks against live CVE databases, not just a static list - so as advisories are published, your scans pick them up.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Code scanning catches suspicious patterns.&lt;/strong&gt; The LiteLLM payload used &lt;code&gt;os.system()&lt;/code&gt; calls, base64-encoded strings, and network exfiltration patterns. These are exactly the kinds of patterns that SAST rules detect. If an AI assistant generated code that imports from or interacts with a compromised dependency in a suspicious way, &lt;code&gt;scan_code&lt;/code&gt; would catch it before it's saved.&lt;/p&gt;
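&lt;p&gt;To make the pattern-matching idea concrete, here's a deliberately crude sketch of the kind of heuristics involved. This is &lt;em&gt;not&lt;/em&gt; kira-lite's rule engine - real SAST rules use parsing and data-flow analysis, not bare regexes - but it shows the signal classes mentioned above:&lt;/p&gt;

```python
import re

# Toy single-file heuristics for the pattern classes named above.
# Production SAST rules are far more precise; this is illustration only.
SUSPICIOUS = [
    (r"os\.system\s*\(", "shell command execution"),
    (r"base64\.b64decode\s*\(", "encoded payload decoding"),
    (r"exec\s*\(", "dynamic code execution"),
    (r"urllib\.request|requests\.post", "outbound network call"),
]

def flag_suspicious(source):
    """Return (label, offset) pairs for each suspicious match."""
    hits = []
    for pattern, label in SUSPICIOUS:
        for match in re.finditer(pattern, source):
            hits.append((label, match.start()))
    return hits
```

&lt;p&gt;Any one of these patterns alone is common in legitimate code; it's the &lt;em&gt;combination&lt;/em&gt; (decode, then execute, then exfiltrate) that made the LiteLLM payload's shape so recognizable.&lt;/p&gt;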

&lt;p&gt;&lt;strong&gt;Project-level scanning catches transitive risks.&lt;/strong&gt; Many developers weren't even using LiteLLM directly - it was pulled in as a &lt;em&gt;transitive dependency&lt;/em&gt; through AI agent frameworks, MCP servers, or LLM orchestration tools. A full project scan would surface these hidden dependencies.&lt;/p&gt;




&lt;h2&gt;Setting It Up (It Takes 30 Seconds)&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npx @offgridsec/kira-lite-mcp
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That's it. Zero config. Works with Claude Code, Claude Desktop, Cursor, VS Code, and any MCP-compatible client.&lt;/p&gt;

&lt;p&gt;For Claude Desktop, add to your config:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"mcpServers"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"kira-lite"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"command"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"npx"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"args"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"-y"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"@offgridsec/kira-lite-mcp"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once configured, your AI assistant can call the security scanning tools automatically - catching vulnerabilities in the same conversation where code is being written.&lt;/p&gt;




&lt;h2&gt;Broader Lessons From the LiteLLM Incident&lt;/h2&gt;

&lt;p&gt;The LiteLLM compromise isn't the end of this campaign. As Endor Labs put it: &lt;em&gt;"TeamPCP has demonstrated a consistent pattern: each compromised environment yields credentials that unlock the next target."&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Here's what every team building with AI should take away:&lt;/p&gt;

&lt;h3&gt;1. Pin Your Dependencies&lt;/h3&gt;

&lt;p&gt;If your &lt;code&gt;requirements.txt&lt;/code&gt; says &lt;code&gt;litellm&lt;/code&gt; instead of &lt;code&gt;litellm==1.82.6&lt;/code&gt;, you were exposed. Pin versions. Use lockfiles. Verify hashes. This is basic supply chain hygiene that too many AI projects skip because they move fast.&lt;/p&gt;
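&lt;p&gt;A check like this belongs in CI. Here's a minimal sketch (a hypothetical &lt;code&gt;unpinned_requirements&lt;/code&gt; helper, not a full PEP 508 parser) that flags requirement lines not pinned to an exact version:&lt;/p&gt;

```python
def unpinned_requirements(text):
    """Flag requirements.txt lines not pinned to an exact version.

    Lines like 'litellm' or 'litellm>=1.80' float to whatever PyPI
    serves at install time; only 'name==x.y.z' (ideally with --hash
    entries as well) gives you a reproducible install.
    """
    flagged = []
    for line in text.splitlines():
        line = line.split("#")[0].strip()
        if not line or line.startswith("-"):
            continue  # skip blanks, comments, and pip options like -r
        if "==" not in line:
            flagged.append(line)
    return flagged
```

&lt;p&gt;Fail the build if the list is non-empty, and you've closed the exact exposure path the LiteLLM attack exploited.&lt;/p&gt;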

&lt;h3&gt;2. Treat AI-Generated Code Like Untrusted Code&lt;/h3&gt;

&lt;p&gt;Your AI assistant doesn't know about zero-day supply chain attacks. It doesn't verify that the packages it suggests are uncompromised. Treat its output the same way you'd treat code from a contractor who doesn't know your security policies.&lt;/p&gt;

&lt;h3&gt;3. Scan In the Loop, Not After the Fact&lt;/h3&gt;

&lt;p&gt;The traditional model - write code, commit, CI scans, fix later - doesn't scale when AI is generating code at 10x the speed of manual development. Tools like kira-lite-mcp shift security scanning &lt;em&gt;left of left&lt;/em&gt;: into the generation step itself.&lt;/p&gt;

&lt;h3&gt;4. Watch Your Transitive Dependencies&lt;/h3&gt;

&lt;p&gt;LiteLLM wasn't just installed directly. It was pulled in by MCP servers, agent frameworks, and orchestration tools. Know your dependency tree. Audit it regularly. Use &lt;code&gt;scan_dependency&lt;/code&gt; on every lockfile in your project.&lt;/p&gt;

&lt;h3&gt;5. Assume Breach, Rotate Fast&lt;/h3&gt;

&lt;p&gt;If you installed litellm 1.82.7 or 1.82.8, assume every credential on that machine is compromised. SSH keys, cloud tokens, API keys, database passwords, crypto wallets - rotate all of them. Check for persistence mechanisms (&lt;code&gt;~/.config/sysmon/sysmon.py&lt;/code&gt;, &lt;code&gt;~/.config/systemd/user/sysmon.service&lt;/code&gt;). In Kubernetes environments, audit &lt;code&gt;kube-system&lt;/code&gt; for pods matching &lt;code&gt;node-setup-*&lt;/code&gt;.&lt;/p&gt;
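&lt;p&gt;Those persistence paths are easy to sweep for. A small sketch (a hypothetical &lt;code&gt;check_persistence&lt;/code&gt; helper; absence of these files does &lt;em&gt;not&lt;/em&gt; prove a clean machine):&lt;/p&gt;

```python
from pathlib import Path

# Persistence artifacts reported for this campaign (see write-up above).
INDICATORS = [
    "~/.config/sysmon/sysmon.py",
    "~/.config/systemd/user/sysmon.service",
]

def check_persistence(paths=INDICATORS):
    """Return the subset of indicator paths that exist on this machine."""
    return [p for p in paths if Path(p).expanduser().exists()]

if __name__ == "__main__":
    found = check_persistence()
    if found:
        print("Possible persistence artifacts:", found)
    else:
        print("No known persistence paths present.")
```

&lt;p&gt;If anything turns up, treat the host as compromised and rebuild rather than clean in place.&lt;/p&gt;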




&lt;h2&gt;The Bottom Line&lt;/h2&gt;

&lt;p&gt;The AI supply chain is now a high-value target. LiteLLM's 3.4 million daily downloads made it an incredibly efficient attack vector - and the attackers knew it.&lt;/p&gt;

&lt;p&gt;We can't prevent every supply chain compromise. But we &lt;em&gt;can&lt;/em&gt; build security into the development workflow itself, catching vulnerabilities before they become incidents. Tools like kira-lite-mcp represent a fundamental shift: from security as a gate at the end of the pipeline to security as a collaborator sitting next to you while you code.&lt;/p&gt;

&lt;p&gt;The LiteLLM incident cost organizations time, trust, and potentially sensitive data. The next supply chain attack - and there will be a next one - doesn't have to.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;If you found this useful, give it a like and follow for more on AI security and developer tooling. Have questions about securing your AI development workflow? Drop them in the comments.&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Tags:&lt;/strong&gt; &lt;code&gt;#security&lt;/code&gt; &lt;code&gt;#ai&lt;/code&gt; &lt;code&gt;#programming&lt;/code&gt; &lt;code&gt;#devops&lt;/code&gt;&lt;/p&gt;




&lt;h3&gt;FAQ: LiteLLM Supply Chain Attack and AI Security&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;What was the LiteLLM supply chain attack?&lt;/strong&gt;&lt;br&gt;
On March 24, 2026, the LiteLLM Python package on PyPI was compromised by a threat group called TeamPCP. Versions 1.82.7 and 1.82.8 contained credential-stealing malware that targeted SSH keys, cloud tokens, API keys, and Kubernetes secrets. The malicious packages were available for approximately three hours before being removed.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How do I know if I was affected by the LiteLLM attack?&lt;/strong&gt;&lt;br&gt;
You were likely affected if you ran &lt;code&gt;pip install litellm&lt;/code&gt; without a pinned version during the roughly three-hour exposure window on March 24, 2026, and received version 1.82.7 or 1.82.8. Run &lt;code&gt;pip show litellm&lt;/code&gt; to check your installed version. Users of the official LiteLLM Proxy Docker image were not impacted.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What is kira-lite-mcp?&lt;/strong&gt;&lt;br&gt;
kira-lite-mcp (&lt;code&gt;@offgridsec/kira-lite-mcp&lt;/code&gt;) is an MCP-based security scanner that integrates directly into AI coding assistants like Claude Code, Cursor, and VS Code. It scans code for vulnerabilities, checks dependencies against CVE databases, and detects insecure patterns - all in real-time during the development process, entirely on your local machine.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How does MCP help with security scanning?&lt;/strong&gt;&lt;br&gt;
MCP (Model Context Protocol) allows AI assistants to invoke external tools contextually. With an MCP security scanner, the AI can proactively check its own generated code for vulnerabilities, scan dependencies for known CVEs, and flag insecure patterns before the code is ever saved to disk.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What should I do if I installed a compromised version of LiteLLM?&lt;/strong&gt;&lt;br&gt;
Immediately remove the package, purge your pip cache, rotate all credentials that were accessible on the affected machine (SSH keys, cloud tokens, API keys, database passwords), check for persistence mechanisms, and if running in Kubernetes, audit for unauthorized pods. Consider rebuilding affected systems from a known clean state.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Your AI Coding Assistant is Probably Writing Vulnerabilities. Here's How to Catch Them.</title>
      <dc:creator>Aditi Bhatnagar</dc:creator>
      <pubDate>Sat, 28 Feb 2026 10:40:47 +0000</pubDate>
      <link>https://forem.com/aditi_bhatnagar_0250c01e4/your-ai-coding-assistant-is-probably-writing-vulnerabilities-heres-how-to-catch-them-3k8j</link>
      <guid>https://forem.com/aditi_bhatnagar_0250c01e4/your-ai-coding-assistant-is-probably-writing-vulnerabilities-heres-how-to-catch-them-3k8j</guid>
      <description>&lt;p&gt;Hi there, my fellow people on the internet. Hope you're doing well and your codebase isn't on fire (yet).&lt;/p&gt;

&lt;p&gt;So here's the thing. Over the past year I've been watching something unfold that genuinely worries me. Everyone and their dog is using AI to write code now. Copilot, Cursor, Claude Code, ChatGPT, you name it. Vibe coding is real, and the productivity gains are no joke. I've used these tools myself while building Kira at Offgrid Security, and I'm not about to pretend they aren't useful.&lt;/p&gt;

&lt;p&gt;But I've also spent a decade in security, building endpoint protection at Microsoft, securing cloud infrastructure at Atlassian, and now running my own security company. And that lens makes it impossible for me to look at AI-generated code and not ask my favorite question: what can go wrong?&lt;/p&gt;

&lt;p&gt;Turns out, a lot.&lt;/p&gt;

&lt;h2&gt;The Numbers Don't Lie (And They Aren't Pretty)&lt;/h2&gt;

&lt;p&gt;Veracode recently published their 2025 GenAI Code Security Report after testing code from over 100 large language models. The headline finding? AI-generated code introduced security flaws in 45% of test cases. Not edge cases. Not obscure languages. Common OWASP Top 10 vulnerabilities across Java, Python, JavaScript, and C#.&lt;/p&gt;

&lt;p&gt;Java was the worst offender with a 72% security failure rate. Cross-Site Scripting had an 86% failure rate. Let that sink in.&lt;/p&gt;

&lt;p&gt;And here's the part that surprised even me: bigger, newer models don't do any better. Security performance has stayed flat even as models have gotten dramatically better at writing code that compiles and runs. They've learned syntax. They haven't learned security.&lt;/p&gt;

&lt;p&gt;Apiiro's independent research across Fortune 50 companies backed this up, finding 2.74x more vulnerabilities in AI-generated code compared to human-written code. That's not a rounding error. That's a systemic problem.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi64ft4s7h2cmfvmty34x.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi64ft4s7h2cmfvmty34x.webp" alt=" " width="800" height="398"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;Why Does This Keep Happening?&lt;/h2&gt;

&lt;p&gt;If you think about how LLMs learn to code, it makes total sense. They're trained on massive amounts of publicly available code from GitHub, Stack Overflow, tutorials, blog posts. The thing is, a huge chunk of that code is insecure. Old patterns, missing input validation, hardcoded credentials, SQL queries built with string concatenation. If the training data is full of bad habits, the model will confidently reproduce those bad habits.&lt;/p&gt;

&lt;p&gt;The other piece is that LLMs don't understand your threat model. They don't know your application's architecture, your trust boundaries, your authentication flow. When you ask for an API endpoint, the model will happily generate one that accepts input without validation, because you didn't tell it to validate. And honestly, most developers don't include security constraints in their prompts. That's the whole premise of vibe coding: tell the AI what you want, trust it to figure out the how.&lt;/p&gt;

&lt;p&gt;The problem is that "the how" often skips the security bits entirely.&lt;/p&gt;

&lt;p&gt;I categorize these into three buckets:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Obvious Stuff&lt;/strong&gt; - missing input sanitization, SQL injection, XSS. These are the classics that have been plaguing us for two decades, and LLMs are very good at reintroducing them because they're overrepresented in training data.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Subtle Stuff&lt;/strong&gt; - business logic flaws, missing access controls, race conditions. The code looks correct. It passes basic tests. But it's missing the guardrails that a security-conscious developer would add. This is harder to catch because there's no obvious "bad pattern" to scan for.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Novel Stuff&lt;/strong&gt; - hallucinated dependencies (packages that don't exist but an attacker could register), overly complex dependency trees for simple tasks, and the reintroduction of deprecated or known-vulnerable libraries. This one is uniquely AI-flavored and it's growing fast.&lt;/p&gt;
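&lt;p&gt;The first bucket is the easiest to demonstrate. Here's the classic, using Python's built-in sqlite3 (the table and &lt;code&gt;find_user_*&lt;/code&gt; helpers are made up for the example): the concatenated version is exactly the shape LLMs keep generating, and one crafted input dumps every row.&lt;/p&gt;

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user_unsafe(name):
    # The pattern LLMs reproduce constantly: string concatenation.
    # With name = "' OR '1'='1" this returns every row in the table.
    query = "SELECT role FROM users WHERE name = '" + name + "'"
    return conn.execute(query).fetchall()

def find_user_safe(name):
    # Parameterized query: the driver treats name strictly as data.
    return conn.execute(
        "SELECT role FROM users WHERE name = ?", (name,)
    ).fetchall()
```

&lt;p&gt;Both functions compile, both pass a happy-path test with &lt;code&gt;"alice"&lt;/code&gt; - which is exactly why "it works" tells you nothing about whether it's safe.&lt;/p&gt;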

&lt;h2&gt;So What Do We Do About It?&lt;/h2&gt;

&lt;p&gt;Here's where I get to talk about what I've been building.&lt;/p&gt;

&lt;p&gt;At Offgrid Security, the core problem we're solving with Kira is: can we use AI to actually catch the security issues that AI introduces? Fight fire with fire, if you will.&lt;/p&gt;

&lt;p&gt;We recently released &lt;a href="https://www.npmjs.com/package/@offgridsec/kira-lite-mcp" rel="noopener noreferrer"&gt;kira-lite&lt;/a&gt;, an MCP (Model Context Protocol) server that plugs directly into your AI-powered development workflow. If you haven't been following the MCP ecosystem, here's the quick version: MCP is a standard protocol that lets AI assistants connect to external tools and data sources. Think of it as giving your AI coding assistant the ability to call out to specialized services while it's working.&lt;/p&gt;

&lt;p&gt;The idea behind kira-lite is straightforward. Instead of generating code and hoping for the best, your AI assistant can call kira-lite during the development process to scan for security issues before the code is even written to disk. It sits in the workflow, not after it.&lt;/p&gt;

&lt;p&gt;Here's how you'd set it up:&lt;/p&gt;

&lt;p&gt;Claude Code:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;claude mcp add --scope user kira-lite -- npx -y @offgridsec/kira-lite-mcp
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Cursor / Windsurf / Other MCP Clients:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;{
  "kira-lite": {
    "command": "npx",
    "args": ["-y", "@offgridsec/kira-lite-mcp"]
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;No API keys. No accounts. No external servers. One command and you're scanning.&lt;/p&gt;

&lt;h2&gt;What Makes This a One-Stop Solution&lt;/h2&gt;

&lt;p&gt;I've used a lot of security scanners in my career. Most of them do one thing okay. Some catch secrets, some catch injection flaws, some handle dependency vulnerabilities. Kira-lite was built to be the thing you don't have to supplement with five other tools.&lt;/p&gt;

&lt;p&gt;Here's what ships out of the box:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;376 built-in security rules across 15 languages and formats.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;We're not talking about a toy regex scanner here. This covers JavaScript, TypeScript, Python, Java, Go, C#, PHP, Ruby, C/C++, Shell, Terraform, Dockerfile, Kubernetes YAML, and more. Each language has framework-specific rules too: Django's DEBUG=True in production, Spring's CSRF disabled, Express.js missing helmet middleware, React's dangerouslySetInnerHTML with unsanitized input. The stuff that generic scanners miss because they don't understand the framework context.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;92 secret detectors.&lt;/strong&gt;&lt;br&gt;
Not just AWS keys and GitHub tokens. We're detecting credentials for Anthropic, OpenAI, Groq, HuggingFace, DeepSeek (relevant given the AI coding boom), plus cloud providers like GCP, DigitalOcean, Vercel, Netlify, Fly.io. CI/CD tokens for CircleCI, Buildkite, Terraform Cloud. SaaS tokens for Atlassian, Okta, Auth0, Sentry, Datadog. Payment keys for Stripe, PayPal, Square, Razorpay. The list goes on. If an AI coding assistant hardcodes a credential (and they love doing this), kira-lite will catch it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Dependency vulnerability scanning across 11 ecosystems.&lt;/strong&gt;&lt;br&gt;
This one is huge. Kira-lite scans your lockfiles against the OSV.dev database (the same data source behind Google's osv-scanner) for known CVEs. It supports npm, PyPI, Go, Maven, crates.io, RubyGems, Packagist, NuGet, Pub, Hex, and more - thirteen lockfile formats total. Remember what I said about AI assistants introducing too many dependencies? This is how you catch the ones with known vulnerabilities before they become your problem.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Full OWASP coverage.&lt;/strong&gt;&lt;br&gt;
And I don't mean "we cover a few items from the Top 10." Kira-lite maps to OWASP Top 10:2025, OWASP API Security Top 10, and OWASP LLM Top 10:2025. That last one is particularly relevant. It catches things like LLM output being passed directly to eval() or exec(), prompt injection patterns, and user input concatenated into prompt templates. If you're building AI-powered applications (and who isn't, these days), these are the vulnerabilities that existing scanners completely ignore.&lt;/p&gt;

&lt;p&gt;Five distinct MCP tools that your AI assistant can invoke contextually:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;scan_code&lt;/code&gt; scans a snippet before it's written to disk. The AI literally checks its own work before handing it to you.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;scan_file&lt;/code&gt; scans an existing file and automatically triggers dependency scanning if it hits a lockfile.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;scan_diff&lt;/code&gt; compares original vs modified code and reports only new vulnerabilities. This is incredibly useful during refactors where you don't want noise from pre-existing issues.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;scan_dependencies&lt;/code&gt; does a full dependency audit on demand.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;fix_vulnerability&lt;/code&gt; provides remediation guidance for specific vulnerability IDs or CWEs.&lt;/p&gt;

&lt;p&gt;And the scanning happens &lt;strong&gt;entirely on your machine&lt;/strong&gt;. Kira-lite ships with Kira-Core, a compiled Go binary bundled for macOS, Linux, and Windows. Your code never leaves your laptop. For anyone working on proprietary codebases or in regulated industries, that's not a nice-to-have, it's a requirement.&lt;/p&gt;

&lt;h2&gt;Why MCP and Why Now?&lt;/h2&gt;

&lt;p&gt;I've been thinking about this a lot. The traditional security tooling model is built around gates and checkpoints. Write code, commit, run CI pipeline, scanner finds issues, developer goes back to fix. It works, but it's slow and creates friction that developers (understandably) resent.&lt;/p&gt;

&lt;p&gt;With MCP, the security tool becomes a collaborator rather than a gatekeeper. The AI assistant can proactively check its own work. It can call scan_code before presenting a snippet to you, catch the SQL injection in the Python function or the missing authentication check on the API endpoint, and fix it in the same conversation. No context switch. No waiting for CI. No separate dashboard to check.&lt;/p&gt;

&lt;p&gt;With Claude Code, you can even set it up so that every edit is automatically scanned. Drop a CLAUDE.md file in your project that tells Claude to call scan_code before every write operation, and you've essentially got a security co-pilot riding shotgun on every line of AI-generated code.&lt;/p&gt;

&lt;p&gt;This isn't a magic bullet. I want to be clear about that. No tool catches everything, and the security landscape for AI-generated code is evolving faster than any single solution can keep up with. But the shift from "scan after the fact" to "scan during generation" is significant. It's the difference between finding the fire after it's spread and catching the spark.&lt;/p&gt;

&lt;h2&gt;Things I'd Recommend Right Now&lt;/h2&gt;

&lt;p&gt;Whether you use kira-lite or not, here are some things I'd strongly suggest if your team is using AI coding assistants:&lt;/p&gt;

&lt;p&gt;Don't trust, verify. Treat AI-generated code the same way you'd treat code from a new contractor who doesn't know your codebase. Review it. Question it. Don't assume it's handling edge cases or security concerns just because it compiles.&lt;/p&gt;

&lt;p&gt;Add security context to your prompts. If you're asking an AI to write an API endpoint, explicitly say "include input validation, authentication checks, and parameterized queries." It won't add these by default.&lt;/p&gt;

&lt;p&gt;Automate scanning in the loop. Whether it's through an MCP server like kira-lite, a SAST tool in your CI pipeline, or both, don't ship AI-generated code without automated security analysis. The volume of code being generated is too high for manual review alone.&lt;/p&gt;

&lt;p&gt;Watch your dependencies. AI assistants love adding packages. Check that those packages actually exist, are maintained, and don't have known vulnerabilities. Package hallucination is a real attack vector now. Tools like kira-lite's dependency scanner can automatically check your lockfiles against CVE databases, which saves you from manually auditing every npm install your AI assistant decides to run.&lt;/p&gt;

&lt;p&gt;Educate your team. The developers using AI tools need to understand that "working code" and "secure code" are not the same thing. This isn't about slowing people down. It's about building awareness so they know what to look for.&lt;/p&gt;

&lt;h2&gt;The Road Ahead&lt;/h2&gt;

&lt;p&gt;I genuinely believe AI is going to transform how we build software. I'm building an AI security company, so clearly I'm bought in on that future. But we're in this weird in-between phase where the tools are powerful enough to generate massive amounts of code and not yet smart enough to make that code secure by default.&lt;/p&gt;

&lt;p&gt;That gap is where the next wave of security work lives. It's where I'm spending all my time right now, and honestly, it's one of the most interesting problems I've worked on in my career.&lt;/p&gt;

&lt;p&gt;If you're working in this space too, or if you're a developer trying to figure out how to use AI tools without accidentally introducing a bunch of CVEs, I'd love to chat. Hit me up on LinkedIn or check out what we're building at Offgrid Security.&lt;/p&gt;

&lt;p&gt;And if you want to try kira-lite, the package is up on npm: @offgridsec/kira-lite-mcp. One npx command, zero config, and 376 rules scanning your code before it ever hits the filesystem. I think you'll find it genuinely useful.&lt;/p&gt;

&lt;p&gt;Will be back soon with more on this topic. There's a lot more to unpack, especially around how agent-based workflows are creating entirely new attack surfaces.&lt;/p&gt;

&lt;p&gt;Keep hacking till then ();&lt;/p&gt;

</description>
      <category>ai</category>
      <category>programming</category>
      <category>security</category>
      <category>vibecoding</category>
    </item>
  </channel>
</rss>
