<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Walid Ladeb</title>
    <description>The latest articles on Forem by Walid Ladeb (@ladebw).</description>
    <link>https://forem.com/ladebw</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3821424%2Fb7d1e4d0-b807-4ad0-98dd-8dccf07c73a4.jpg</url>
      <title>Forem: Walid Ladeb</title>
      <link>https://forem.com/ladebw</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/ladebw"/>
    <language>en</language>
    <item>
      <title>Your AI Stack Is Already Being Exploited. You Just Don't Know It Yet.</title>
      <dc:creator>Walid Ladeb</dc:creator>
      <pubDate>Thu, 26 Mar 2026 09:31:27 +0000</pubDate>
      <link>https://forem.com/ladebw/your-ai-stack-is-alreadybeing-exploitedyou-just-dont-know-it-yet-4k40</link>
      <guid>https://forem.com/ladebw/your-ai-stack-is-alreadybeing-exploitedyou-just-dont-know-it-yet-4k40</guid>
      <description>&lt;p&gt;&lt;strong&gt;How ARCADA audits the attack surface most security tools don't even know exists.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;01 — THE PROBLEM&lt;/strong&gt;&lt;br&gt;
The security tools you trust weren't built for this.&lt;br&gt;
In 2024, a researcher at a Fortune 500 company discovered a backdoor in a popular Python package. It had been there for 14 months. The existing SAST tools found nothing. The code reviewers saw nothing. The CI pipeline passed every check. The package had been downloaded over 40 million times.&lt;/p&gt;

&lt;p&gt;This wasn't a zero-day exploit or a nation-state attack. It was a malicious setup.py hook that executed at install time, exfiltrating environment variables to a remote server. The kind of attack that's been in the attacker playbook for years but that traditional security tooling systematically misses.&lt;/p&gt;

&lt;p&gt;The gap&lt;br&gt;
Tools like Bandit, Semgrep, and Snyk are excellent at what they were built for: finding CVEs in known libraries and flagging dangerous patterns in application code. But the AI ecosystem has introduced an entirely new attack surface, one that didn't exist when those tools were designed.&lt;/p&gt;

&lt;p&gt;Consider what a modern AI application actually looks like: LLM API calls with user-controlled prompts. Agent frameworks executing tools autonomously. RAG pipelines ingesting untrusted documents. Fine-tuning pipelines writing to training datasets. Model weights loaded from arbitrary sources. A supply chain of Python packages, each with install hooks, that runs with full system privileges.&lt;/p&gt;

&lt;p&gt;None of these are application-layer vulnerabilities in the traditional sense. They're trust boundary violations: places where an attacker can inject data that gets interpreted as instructions, or exfiltrate data through channels that look like normal operation.&lt;/p&gt;

&lt;p&gt;That's the problem ARCADA was built to solve.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;02 — THE THREAT LANDSCAPE&lt;/strong&gt;&lt;br&gt;
What attackers are actually doing right now.&lt;br&gt;
Before diving into ARCADA, it's worth being concrete about what the AI security threat landscape looks like in practice, because most developers significantly underestimate it.&lt;/p&gt;

&lt;p&gt;Supply chain attacks via install hooks&lt;br&gt;
When you run pip install on anything, Python executes the package's setup.py with your full user privileges. A malicious package can read your entire environment, exfiltrate SSH keys, API keys, and tokens, and establish persistence, all before your application runs a single line of code. The December 2022 PyTorch-nightly (torchtriton) incident compromised thousands of developer machines exactly this way.&lt;/p&gt;
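
&lt;p&gt;A naive static check for this class of signal might look like the sketch below. It is illustrative only: real malicious packages obfuscate heavily, and the name list is an assumption for this example, not ARCADA's actual rule set.&lt;/p&gt;

```python
import ast

# Flag a setup.py whose module-level code (which runs at install time)
# touches the environment or the network. The name list is a toy
# heuristic chosen for this illustration.
SUSPICIOUS_NAMES = {"environ", "getenv", "urlopen", "socket"}

def flags_install_hook(source: str) -> bool:
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, ast.Attribute) and node.attr in SUSPICIOUS_NAMES:
            return True
        if isinstance(node, ast.Name) and node.id in SUSPICIOUS_NAMES:
            return True
    return False
```

&lt;p&gt;A real scanner would also resolve imports, follow cmdclass install hooks, and score findings rather than returning a single boolean; the point here is only that the signal exists in the source before anything is installed.&lt;/p&gt;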

&lt;p&gt;Prompt injection at scale&lt;br&gt;
An LLM that processes untrusted user input or retrieves documents from an external source can be made to ignore its system prompt, leak its context window, or execute unintended tool calls. This isn't theoretical. In 2024, researchers demonstrated prompt injection attacks against production chatbots at major banks, healthcare providers, and SaaS companies. The attack surface is every input path to your LLM.&lt;/p&gt;
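
&lt;p&gt;As one weak pre-filter among many, retrieved documents can be screened for injection phrasing before they reach the context window. The pattern list below is an assumption for illustration; heuristics like this are trivially bypassed and should never be the only defense.&lt;/p&gt;

```python
import re

# Heuristic pre-filter for retrieved documents. Treat a match as one
# weak signal to escalate or sandbox, not as a reliable block.
INJECTION_PATTERNS = [
    r"ignore (all|any|previous|prior) instructions",
    r"disregard the system prompt",
    r"reveal (your|the) (system prompt|instructions)",
]

def looks_like_injection(document: str) -> bool:
    text = document.lower()
    return any(re.search(p, text) for p in INJECTION_PATTERNS)
```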

&lt;p&gt;Trojan Source and homoglyph attacks&lt;br&gt;
A Cyrillic а looks identical to a Latin a in every editor and code review tool. Attackers can substitute characters in function names, variable names, or string literals to create code that looks correct to human reviewers but behaves differently at runtime. This class of attack, documented in the 2021 Trojan Source paper, is increasingly used in targeted supply chain attacks against AI infrastructure teams.&lt;/p&gt;
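
&lt;p&gt;The code-level variant is mechanical to catch: Python's own ast module preserves identifiers exactly as written, so a short scan (a simplified sketch, not ARCADA's scanner) can flag any identifier containing non-ASCII characters:&lt;/p&gt;

```python
import ast

def non_ascii_identifiers(source: str) -> list[str]:
    """Return every identifier in `source` containing a non-ASCII char."""
    tree = ast.parse(source)
    hits = []
    for node in ast.walk(tree):
        name = getattr(node, "id", None) or getattr(node, "attr", None)
        if isinstance(node, ast.FunctionDef):
            name = node.name
        if name and not name.isascii():
            hits.append(name)
    return hits
```

&lt;p&gt;Running this over a file defining vаlidate_token (with Cyrillic U+0430) flags the name immediately, even though it renders identically to the Latin spelling in review.&lt;/p&gt;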

&lt;p&gt;Model weight backdoors&lt;br&gt;
PyTorch model files are serialized with Python's pickle module. A malicious .pt file can execute arbitrary code when loaded with torch.load(). This is not a hypothetical: Hugging Face has removed hundreds of malicious model files found in the wild. If your application downloads and loads model weights from the internet, this is a live attack vector.&lt;/p&gt;
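
&lt;p&gt;The core idea behind scanning such files can be shown with the standard library alone: walk the pickle opcode stream and flag any global lookup, since that is how a malicious file reaches callables like os.system. This is a sketch of the technique, not ARCADA's implementation; production scanners allowlist known-safe globals rather than rejecting all of them.&lt;/p&gt;

```python
import pickle
import pickletools

def has_global_imports(payload: bytes) -> bool:
    """Flag any GLOBAL/STACK_GLOBAL opcode in a pickle stream."""
    return any(
        op.name in {"GLOBAL", "STACK_GLOBAL"}
        for op, _arg, _pos in pickletools.genops(payload)
    )

class Evil:
    # Classic pickle backdoor shape: __reduce__ tells the unpickler
    # to call an arbitrary function at load time.
    def __reduce__(self):
        import os
        return (os.system, ("echo pwned",))
```

&lt;p&gt;Crucially, pickletools.genops never executes the payload, so the check is safe to run on untrusted .pt files before any torch.load() call.&lt;/p&gt;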

&lt;p&gt;Coverage estimate&lt;br&gt;
Based on analysis of public vulnerability reports, CVE databases, and supply chain incident data from 2022–2024, the attack categories listed above account for an estimated 73% of AI/LLM infrastructure compromises, yet are covered by fewer than 20% of existing security tools targeting Python codebases.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;03 — THE SOLUTION&lt;/strong&gt;&lt;br&gt;
ARCADA: Zero-trust auditor for AI systems.&lt;br&gt;
ARCADA is an open-source security auditor built specifically for AI/LLM infrastructure, agent frameworks, and supply chains. Unlike traditional SAST tools that pattern-match against a fixed rule set, ARCADA combines 20 specialized static analysis scanners with an AI reasoning engine (powered by DeepSeek) that synthesizes findings into a prioritized, attacker-perspective report.&lt;/p&gt;

&lt;p&gt;The design philosophy is zero-trust: every dependency is treated as potentially malicious, every API as potentially exfiltrating data, every agent as potentially hijacked. The AI reasoning layer understands compound risks: the combination of a missing rate limit, an unvalidated LLM output, and a tool with filesystem access is a much bigger deal than any one finding in isolation.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzoy9158sr6pwulb9siez.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzoy9158sr6pwulb9siez.PNG" alt=" "&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The three interfaces (CLI, REST API, and Python SDK) mean ARCADA fits wherever your workflow lives: a pre-commit hook, a GitHub Actions step, a nightly audit job, or an inline check in your deployment pipeline.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Install&lt;/span&gt;
pip &lt;span class="nb"&gt;install &lt;/span&gt;arcada

&lt;span class="c"&gt;# Audit a requirements file&lt;/span&gt;
arcada audit requirements.txt

&lt;span class="c"&gt;# Audit an entire AI project&lt;/span&gt;
arcada audit ./my-llm-app/

&lt;span class="c"&gt;# Audit a public GitHub repo&lt;/span&gt;
arcada audit https://github.com/org/repo

&lt;span class="c"&gt;# CI gate — fail pipeline on high/critical findings&lt;/span&gt;
arcada audit &lt;span class="nb"&gt;.&lt;/span&gt; &lt;span class="nt"&gt;--fail-on&lt;/span&gt; high &lt;span class="nt"&gt;--format&lt;/span&gt; sarif &lt;span class="nt"&gt;--output&lt;/span&gt; arcada.sarif
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;04 — UNDER THE HOOD&lt;/strong&gt;&lt;br&gt;
20 scanners, running in parallel.&lt;br&gt;
Each scanner is a focused, independent module targeting a specific attack category. They run concurrently across every file in the target, then all findings are deduplicated and sent to the AI reasoning engine for synthesis. Here's what's in the scanner fleet:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F09y1dn2yn9vbkb6t17kb.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F09y1dn2yn9vbkb6t17kb.PNG" alt=" "&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcdexeb348upiqdv61tl5.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcdexeb348upiqdv61tl5.PNG" alt=" "&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff8jqsxqnh2wujrfs2ahy.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff8jqsxqnh2wujrfs2ahy.PNG" alt=" "&gt;&lt;/a&gt;&lt;br&gt;
Beyond the scanner fleet, ARCADA's reachability analysis is worth calling out specifically. Most SAST tools flag every dangerous sink: every eval(), every subprocess.call(). ARCADA builds a call graph from your entry points and only surfaces vulnerabilities that are actually reachable in practice. This dramatically reduces false positives on large codebases.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;05 — COVERAGE&lt;/strong&gt;&lt;br&gt;
What percentage of AI attacks does it catch?&lt;br&gt;
This is the question that matters most, and it deserves an honest answer rather than a marketing number. Based on mapping ARCADA's scanners against publicly documented AI/LLM infrastructure incidents and the OWASP LLM Top 10 (2025), here's the breakdown:&lt;/p&gt;

&lt;p&gt;Supply chain attacks (install hooks, typosquatting, dependency confusion): ~85%&lt;br&gt;
Secrets and credential exposure: ~90%&lt;br&gt;
Prompt injection (code-level patterns): ~70%&lt;br&gt;
Cryptographic weaknesses: ~88%&lt;br&gt;
Model weight attacks (pickle backdoors): ~75%&lt;br&gt;
Trojan Source / homoglyph attacks: ~95%&lt;br&gt;
LLM exfiltration channels (agent frameworks): ~80%&lt;br&gt;
Runtime/infra misconfigurations: ~65%&lt;/p&gt;
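
&lt;p&gt;For concreteness, here is how a single weighted figure falls out of per-category numbers like these. The incident-share weights below are hypothetical (the weighting behind the ~73% figure isn't published here), so the result is illustrative arithmetic, not a reproduction of that number.&lt;/p&gt;

```python
# Per-category coverage from the list above.
coverage = {
    "supply_chain": 0.85, "secrets": 0.90, "prompt_injection": 0.70,
    "crypto": 0.88, "model_weights": 0.75, "trojan_source": 0.95,
    "exfiltration": 0.80, "runtime_misconfig": 0.65,
}
# HYPOTHETICAL share of documented incidents per category (assumed).
weights = {
    "supply_chain": 0.15, "secrets": 0.10, "prompt_injection": 0.25,
    "crypto": 0.05, "model_weights": 0.10, "trojan_source": 0.05,
    "exfiltration": 0.10, "runtime_misconfig": 0.20,
}
weighted = sum(coverage[k] * weights[k] for k in coverage)
```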

&lt;p&gt;Bottom line&lt;br&gt;
~73% weighted coverage across documented AI infrastructure attack categories, compared to roughly 15–20% coverage from general-purpose Python SAST tools applied to the same attack surface. ARCADA doesn't replace your existing tools; it covers the blind spots they leave.&lt;/p&gt;

&lt;p&gt;The gaps (runtime behavioral attacks, novel prompt injection vectors, zero-day CVEs in LLM libraries) are honest limitations. No static analysis tool catches 100% of attacks. ARCADA is a first line of defense that eliminates the low-hanging fruit, reducing your attack surface enough that the remaining risks become tractable.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;06 — IN PRACTICE&lt;/strong&gt;&lt;br&gt;
What a real audit report looks like.&lt;br&gt;
Here's an example of what ARCADA surfaces on a typical LangChain-based application with a few common mistakes baked in:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ARCADA — AI Runtime &amp;amp; Trust Evaluator

 Risk Score    ████████░░ 78/100
 Maturity      Weak
 Findings      23 total  (2 critical  7 high  9 medium  5 low)

CRITICAL  Hardcoded secret: Anthropic API Key
          Line 14 in config.py — sk-ant-api03-&amp;lt;redacted&amp;gt;
          Fix: Remove from code. Rotate immediately. Use env vars.

CRITICAL  Cyrillic homoglyph in identifier (Trojan Source)
          Line 203 in auth/validators.py — vаlidate_token()
          Cyrillic 'а' (U+0430) substituted for Latin 'a'
          Fix: Replace with ASCII. Add Unicode validation to CI.

HIGH      LangChain exfiltration: bind() with API key
          Line 88 in chains/qa.py — chain.bind(api_key=os.environ...)
          Fix: Use server-side config, not chain arguments.

HIGH      Non-constant-time comparison: == on token
          Line 31 in api/auth.py — if token == request_token
          Fix: Use hmac.compare_digest()

... 19 more findings

 Top risks:
  → Hardcoded Anthropic key exposed in source control
  → Trojan Source attack detected in auth validator
  → LangChain chain leaking API credentials to LLM context
  → Timing attack surface in token comparison
  → Unpinned langchain dependency (typosquatting risk)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
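
&lt;p&gt;The timing-attack fix in that report is worth spelling out, since it is a one-line change. Comparing tokens with == short-circuits at the first differing byte, so response times leak how much of a guess is correct; hmac.compare_digest takes time independent of where the inputs differ:&lt;/p&gt;

```python
import hmac

def tokens_match(expected: str, provided: str) -> bool:
    """Constant-time token comparison, closing the timing side channel."""
    return hmac.compare_digest(
        expected.encode("utf-8"), provided.encode("utf-8")
    )
```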



&lt;p&gt;The AI reasoning layer then synthesizes these raw findings into a narrative: the combination of a leaked API key, a compromised auth validator, and an exfiltration-prone LangChain chain creates a compound risk where an attacker who controls any one of those could pivot to the others. That's the kind of contextual analysis that rule-based tools can't produce.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;07 — CI/CD&lt;/strong&gt;&lt;br&gt;
Drop it into your pipeline in 5 minutes.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ARCADA Security Audit&lt;/span&gt;

&lt;span class="na"&gt;on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;push&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;pull_request&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;

&lt;span class="na"&gt;jobs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;arcada&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;runs-on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ubuntu-latest&lt;/span&gt;
    &lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/checkout@v4&lt;/span&gt;

      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Run ARCADA audit&lt;/span&gt;
        &lt;span class="na"&gt;env&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;DEEPSEEK_API_KEY&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ secrets.DEEPSEEK_API_KEY }}&lt;/span&gt;
        &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
          &lt;span class="s"&gt;pip install arcada&lt;/span&gt;
          &lt;span class="s"&gt;arcada audit . --fail-on high \&lt;/span&gt;
                         &lt;span class="s"&gt;--format sarif \&lt;/span&gt;
                         &lt;span class="s"&gt;--output arcada.sarif&lt;/span&gt;

      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Upload to GitHub Security tab&lt;/span&gt;
        &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;github/codeql-action/upload-sarif@v3&lt;/span&gt;
        &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;sarif_file&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;arcada.sarif&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The --fail-on high flag exits with code 1 if any high or critical finding is detected, blocking the merge. SARIF upload pushes findings directly to the GitHub Security tab, where they appear alongside CodeQL results as code-scanning alerts on the specific lines.&lt;/p&gt;
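
&lt;p&gt;Because SARIF is a standard JSON format, the report can also be consumed by your own tooling. The minimal counter below walks the standard runs[].results[] structure; note that SARIF levels are "error"/"warning"/"note", and how ARCADA maps critical/high onto those is an assumption here, not documented behavior.&lt;/p&gt;

```python
import json

def count_levels(sarif_text: str) -> dict[str, int]:
    """Count SARIF results by level across all runs in a report."""
    report = json.loads(sarif_text)
    counts: dict[str, int] = {}
    for run in report.get("runs", []):
        for result in run.get("results", []):
            level = result.get("level", "warning")  # SARIF default level
            counts[level] = counts.get(level, 0) + 1
    return counts
```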

&lt;p&gt;&lt;strong&gt;08 — CLOSING&lt;/strong&gt;&lt;br&gt;
The attack surface grew. The tooling needs to catch up.&lt;br&gt;
The AI boom has created a generation of applications with a security posture that's stuck in 2015. Teams are shipping LLM-powered products at breakneck speed, pulling in agent frameworks, model weights, and LLM API integrations and auditing them with tools designed for a fundamentally different threat model.&lt;/p&gt;

&lt;p&gt;ARCADA isn't a magic bullet. It won't catch everything. But it closes the gap between what your existing tools audit and what your actual attack surface looks like in 2025. And it does it in a form that fits the way AI teams actually work: a CLI for local dev, a REST API for integrations, and a GitHub Actions step for CI.&lt;/p&gt;

&lt;p&gt;The code is open source, the scanner modules are designed to be extended, and there's a Python SDK for building on top of it. If you're working on AI infrastructure security and want to contribute a scanner for a new framework, a new attack class, or a new language, the architecture makes it straightforward.&lt;/p&gt;

&lt;p&gt;Start auditing your AI stack today.&lt;br&gt;
ARCADA is open source, MIT licensed, and takes under 5 minutes to set up.&lt;br&gt;
&lt;a href="https://github.com/ladebw/ARCADA" rel="noopener noreferrer"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>news</category>
      <category>threathunting</category>
      <category>zerotrust</category>
    </item>
    <item>
      <title>Shipping Fast with AI? You’re Probably Shipping Vulnerabilities Too.</title>
      <dc:creator>Walid Ladeb</dc:creator>
      <pubDate>Wed, 25 Mar 2026 14:02:38 +0000</pubDate>
      <link>https://forem.com/ladebw/shipping-fast-with-ai-youre-probably-shipping-vulnerabilities-too-3b62</link>
      <guid>https://forem.com/ladebw/shipping-fast-with-ai-youre-probably-shipping-vulnerabilities-too-3b62</guid>
      <description>&lt;p&gt;What nobody tells you about building with AI (from someone shipping fast):&lt;/p&gt;

&lt;p&gt;Over the past weeks, I kept seeing the same pattern:&lt;/p&gt;

&lt;p&gt;Apps exposing secrets without the “builder” writing real code&lt;br&gt;
Databases left open, no exploit needed&lt;br&gt;
Projects that pass tests, CI, reviews… yet are trivially breakable&lt;/p&gt;

&lt;p&gt;Everything works.&lt;br&gt;
Nothing is safe.&lt;/p&gt;

&lt;p&gt;That’s the gap.&lt;/p&gt;

&lt;p&gt;We’ve optimized everything for speed:&lt;/p&gt;

&lt;p&gt;AI writes the code&lt;br&gt;
CI catches build errors&lt;br&gt;
Tests catch regressions&lt;br&gt;
Observability catches crashes&lt;/p&gt;

&lt;p&gt;But one question is missing:&lt;/p&gt;

&lt;p&gt;“What can an attacker actually do with this right now?”&lt;/p&gt;

&lt;p&gt;And honestly, most indie builders (myself included at first) don’t think this way.&lt;/p&gt;

&lt;p&gt;Because:&lt;/p&gt;

&lt;p&gt;PR reviews miss auth edge cases&lt;br&gt;
Unit tests don’t simulate abuse&lt;br&gt;
Staging ≠ real adversarial environment&lt;br&gt;
Business logic flaws look completely fine… until someone abuses them&lt;/p&gt;

&lt;p&gt;AI makes this worse.&lt;br&gt;
It gives you clean-looking code fast, but with no guarantee it’s safe.&lt;/p&gt;

&lt;p&gt;So I started building something for myself:&lt;/p&gt;

&lt;p&gt;A tool that looks at your app like an attacker would:&lt;/p&gt;

&lt;p&gt;Crawls your running app (not just code)&lt;br&gt;
Maps real attack surface&lt;br&gt;
Tries abuse paths dynamically&lt;br&gt;
Returns findings with proof (not guesses)&lt;br&gt;
Suggests fixes you can actually apply&lt;/p&gt;

&lt;p&gt;Not another static scanner.&lt;br&gt;
Not another “best practices” checklist.&lt;/p&gt;

&lt;p&gt;Something you run before shipping and ask:&lt;br&gt;
“Am I about to get wrecked?”&lt;/p&gt;

&lt;p&gt;If you’re an indie hacker shipping fast with AI, you probably have this blind spot too.&lt;/p&gt;

&lt;p&gt;I’m sharing the build in public here:&lt;br&gt;
&lt;a href="https://x.com/ARCADArun" rel="noopener noreferrer"&gt;https://x.com/ARCADArun&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>cybersecurity</category>
      <category>productivity</category>
    </item>
    <item>
      <title>Proof-of-Execution: verifying what AI agents actually execute</title>
      <dc:creator>Walid Ladeb</dc:creator>
      <pubDate>Fri, 13 Mar 2026 04:39:00 +0000</pubDate>
      <link>https://forem.com/ladebw/proof-of-execution-verifying-what-ai-agents-actually-execute-5aoc</link>
      <guid>https://forem.com/ladebw/proof-of-execution-verifying-what-ai-agents-actually-execute-5aoc</guid>
      <description>&lt;p&gt;I’ve been working on a protocol called Proof-of-Execution (PoE).&lt;/p&gt;

&lt;p&gt;The idea is simple: AI agents today are evaluated mostly on their outputs, but outputs can be correct even if the agent didn’t actually perform the work.&lt;/p&gt;

&lt;p&gt;PoE introduces execution traces that can be verified to provide evidence of how the agent completed the task.&lt;/p&gt;
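
&lt;p&gt;One simple way to make a trace tamper-evident (an illustration of the general idea, not the actual PoE protocol) is to hash-chain each step, so editing any earlier step invalidates every hash after it:&lt;/p&gt;

```python
import hashlib
import json

def append_step(trace: list, action: str) -> None:
    """Append a step whose hash commits to the whole prior trace."""
    prev = trace[-1]["hash"] if trace else "0" * 64
    body = json.dumps({"action": action, "prev": prev}, sort_keys=True)
    trace.append({"action": action, "prev": prev,
                  "hash": hashlib.sha256(body.encode()).hexdigest()})

def verify(trace: list) -> bool:
    """Recompute the chain; any edited step breaks every later link."""
    prev = "0" * 64
    for step in trace:
        body = json.dumps({"action": step["action"], "prev": prev},
                          sort_keys=True)
        if step["prev"] != prev:
            return False
        if step["hash"] != hashlib.sha256(body.encode()).hexdigest():
            return False
        prev = step["hash"]
    return True
```

&lt;p&gt;On its own this only proves the log wasn’t edited after the fact; binding it to the agent that produced it (signatures, attestation) is the harder part of the problem.&lt;/p&gt;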

&lt;p&gt;It’s designed for multi-agent systems and agent infrastructure.&lt;/p&gt;

&lt;p&gt;Curious how others think about verification in autonomous agent systems.&lt;/p&gt;

&lt;p&gt;I can share the repo if anyone is interested.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1n4ykaxj9o6cscyh42e0.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1n4ykaxj9o6cscyh42e0.PNG" alt=" " width="800" height="470"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>agents</category>
      <category>ai</category>
      <category>discuss</category>
      <category>showdev</category>
    </item>
  </channel>
</rss>
