<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Viru Swami</title>
    <description>The latest articles on Forem by Viru Swami (@viru_swami).</description>
    <link>https://forem.com/viru_swami</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2973936%2Fc047f410-73e9-4370-83f5-81d99736e963.png</url>
      <title>Forem: Viru Swami</title>
      <link>https://forem.com/viru_swami</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/viru_swami"/>
    <language>en</language>
    <item>
      <title>Your AI Agent Can Delete Production — Can You Prove It?</title>
      <dc:creator>Viru Swami</dc:creator>
      <pubDate>Sat, 28 Mar 2026 09:41:39 +0000</pubDate>
      <link>https://forem.com/viru_swami/your-ai-agent-can-delete-production-can-you-prove-it-41nh</link>
      <guid>https://forem.com/viru_swami/your-ai-agent-can-delete-production-can-you-prove-it-41nh</guid>
      <description>&lt;p&gt;AI agents are no longer passive.&lt;/p&gt;

&lt;p&gt;They execute shell commands, modify files, call APIs, trigger real-world actions.&lt;/p&gt;

&lt;p&gt;Now consider this:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Your agent deletes production data. You check the logs. Logs say: &lt;em&gt;"No destructive action executed."&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;Now what?&lt;/strong&gt;&lt;/p&gt;




&lt;h2&gt;The Real Problem&lt;/h2&gt;

&lt;p&gt;Logs are not evidence. They are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;editable&lt;/li&gt;
&lt;li&gt;reorderable&lt;/li&gt;
&lt;li&gt;controlled by the same system that produced them&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A log is just a story told after the fact. And with AI agents? That story may not be trustworthy.&lt;/p&gt;




&lt;h2&gt;Failure Scenario&lt;/h2&gt;

&lt;p&gt;Here's what actually executed:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;1. &lt;span class="nb"&gt;read &lt;/span&gt;config
2. call API
3. &lt;span class="nb"&gt;rm &lt;/span&gt;production.db
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here's what the logs showed:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;1. &lt;span class="nb"&gt;read &lt;/span&gt;config
2. call API
&lt;span class="c"&gt;# &amp;lt;missing&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Was step 3 never executed? Removed? Corrupted?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;You cannot prove anything.&lt;/strong&gt;&lt;/p&gt;




&lt;h2&gt;What "Proof" Requires&lt;/h2&gt;

&lt;p&gt;For logs to become evidence, they must be:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;tamper-evident&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;sequential&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;independently verifiable&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;The Idea: Hash-Chained Execution&lt;/h2&gt;

&lt;p&gt;Each action is:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;canonicalized (RFC 8785)&lt;/li&gt;
&lt;li&gt;hashed (SHA-256)&lt;/li&gt;
&lt;li&gt;linked to the previous entry&lt;/li&gt;
&lt;li&gt;signed (Ed25519)
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Entry 0 → Entry 1 → Entry 2 → ...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Modify anything — the chain breaks instantly.&lt;/p&gt;
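
&lt;p&gt;The chain can be sketched in a few lines of Python. This is a minimal illustration, not GuardClaw's API: full RFC 8785 canonicalization and the Ed25519 signature are reduced here to &lt;code&gt;json.dumps&lt;/code&gt; with sorted keys and a bare SHA-256 link.&lt;/p&gt;

```python
import hashlib
import json

def canonical(entry: dict) -> bytes:
    # Stand-in for RFC 8785 canonicalization: sorted keys, no whitespace.
    # (Real JCS also pins number and string serialization; omitted here.)
    return json.dumps(entry, sort_keys=True, separators=(",", ":")).encode()

def append(chain: list, action: str) -> None:
    prev = chain[-1]["hash"] if chain else "0" * 64
    entry = {"seq": len(chain), "action": action, "prev": prev}
    entry["hash"] = hashlib.sha256(canonical(entry)).hexdigest()
    chain.append(entry)

def verify(chain: list) -> bool:
    prev = "0" * 64
    for i, entry in enumerate(chain):
        body = {k: v for k, v in entry.items() if k != "hash"}
        if entry["seq"] != i or entry["prev"] != prev:
            return False  # deleted or reordered entries
        if hashlib.sha256(canonical(body)).hexdigest() != entry["hash"]:
            return False  # edited entry
        prev = entry["hash"]
    return True

chain = []
for action in ["read config", "call API", "rm production.db"]:
    append(chain, action)

assert verify(chain)
chain[2]["action"] = "ls"  # tamper with one entry
assert not verify(chain)   # chain break detected
```

&lt;p&gt;Flipping any field in any entry changes that entry's hash, which no longer matches the next entry's &lt;code&gt;prev&lt;/code&gt;, so verification fails from that point forward.&lt;/p&gt;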




&lt;h2&gt;Demo&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;guardclaw verify ledger.jsonl
✓ VALID — 1024 entries
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Edit one byte:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;guardclaw verify ledger.jsonl
✗ CHAIN BREAK at entry 47
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;No ambiguity.&lt;/p&gt;




&lt;h2&gt;What This Guarantees&lt;/h2&gt;

&lt;p&gt;✅ Order of execution&lt;br&gt;
✅ Integrity of records&lt;br&gt;
✅ No silent modification&lt;/p&gt;

&lt;h2&gt;What It Does NOT Guarantee&lt;/h2&gt;

&lt;p&gt;❌ Correctness&lt;br&gt;
❌ Safety&lt;br&gt;
❌ Truthful inputs&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Integrity ≠ intelligence.&lt;/strong&gt; This is not observability. It's an integrity layer for agent execution — similar to what Git does for code history, or Certificate Transparency does for TLS.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;Open Question&lt;/h2&gt;

&lt;p&gt;If your AI agent deletes data, sends money, or executes infrastructure changes —&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;how do you prove what actually happened?&lt;/strong&gt;&lt;/p&gt;




&lt;h2&gt;What I'm Building&lt;/h2&gt;

&lt;p&gt;I've been exploring this problem with an open-source project:&lt;/p&gt;

&lt;p&gt;👉 &lt;a href="https://github.com/viruswami5511/guardclaw" rel="noopener noreferrer"&gt;github.com/viruswami5511/guardclaw&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Would love feedback from anyone running agents in production.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>cryptography</category>
      <category>security</category>
    </item>
    <item>
      <title>AI agents can run shell commands — how do you prove what actually happened?</title>
      <dc:creator>Viru Swami</dc:creator>
      <pubDate>Wed, 11 Mar 2026 18:03:22 +0000</pubDate>
      <link>https://forem.com/viru_swami/ai-agents-can-run-shell-commands-and-modify-files-but-their-logs-can-be-edited-3m3n</link>
      <guid>https://forem.com/viru_swami/ai-agents-can-run-shell-commands-and-modify-files-but-their-logs-can-be-edited-3m3n</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;AI agents can now run shell commands, modify files, and deploy infrastructure.&lt;br&gt;
But their logs can be edited after the fact.&lt;br&gt;
GuardClaw is an experiment in cryptographic, tamper-evident execution logs for AI agents.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;The Problem: AI agents are gaining real execution power&lt;/h2&gt;

&lt;p&gt;Modern AI assistants are no longer just answering questions.&lt;/p&gt;

&lt;p&gt;They can now:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;run shell commands&lt;/li&gt;
&lt;li&gt;read and modify files&lt;/li&gt;
&lt;li&gt;interact with databases&lt;/li&gt;
&lt;li&gt;call APIs&lt;/li&gt;
&lt;li&gt;execute DevOps workflows&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Frameworks like LangChain and AutoGen, along with protocols like MCP, make it easy to give AI agents real capabilities.&lt;/p&gt;

&lt;p&gt;But this raises a simple question:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;If an AI agent does something dangerous, how do we prove what actually happened?&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Most systems rely on traditional logs.&lt;br&gt;
The problem is that logs are &lt;strong&gt;mutable&lt;/strong&gt;.&lt;br&gt;
Anyone with access can edit them.&lt;/p&gt;
&lt;h2&gt;A simple example&lt;/h2&gt;

&lt;p&gt;Imagine an AI DevOps assistant runs the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nx"&gt;shell&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;exec&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;rm production.db&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now someone edits the log file to hide it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nx"&gt;shell&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;exec&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;ls&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you investigate later, the logs look normal.&lt;br&gt;
The destructive command is gone.&lt;br&gt;
There is no way to prove the log was edited.&lt;/p&gt;
&lt;h2&gt;The idea: tamper-evident execution logs&lt;/h2&gt;

&lt;p&gt;I built an open-source project called GuardClaw to experiment with a different approach.&lt;br&gt;
Instead of normal logs, every action is written to a cryptographically signed ledger.&lt;/p&gt;

&lt;p&gt;Each entry is:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Canonicalized&lt;/li&gt;
&lt;li&gt;Linked to the previous entry using a SHA-256 hash&lt;/li&gt;
&lt;li&gt;Signed with an Ed25519 signature&lt;/li&gt;
&lt;li&gt;Appended to a JSONL ledger&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This means:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;deleting entries breaks the chain&lt;/li&gt;
&lt;li&gt;editing entries breaks the signature&lt;/li&gt;
&lt;li&gt;reordering entries breaks the hash linkage&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If the ledger is modified, verification fails.&lt;/p&gt;
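
&lt;p&gt;The three failure modes above can be sketched with the standard library alone. This is illustrative, not GuardClaw's implementation; in particular, the Ed25519 public-key signature is stood in for by an HMAC, which, unlike Ed25519, requires the signing secret to verify.&lt;/p&gt;

```python
import hashlib
import hmac
import json

# Stand-in for Ed25519: GuardClaw uses public-key signatures, which anyone
# holding the public key can verify without access to the signing secret.
SECRET = b"demo-signing-key"

def sign(payload: bytes) -> str:
    return hmac.new(SECRET, payload, hashlib.sha256).hexdigest()

def write_entry(ledger: list, action: str) -> None:
    prev = ledger[-1]["hash"] if ledger else "0" * 64
    body = {"seq": len(ledger), "action": action, "prev": prev}
    payload = json.dumps(body, sort_keys=True, separators=(",", ":")).encode()
    body["hash"] = hashlib.sha256(payload).hexdigest()
    body["sig"] = sign(payload)
    ledger.append(body)

def verify(ledger: list) -> str:
    prev = "0" * 64
    for i, e in enumerate(ledger):
        body = {k: v for k, v in e.items() if k not in ("hash", "sig")}
        payload = json.dumps(body, sort_keys=True, separators=(",", ":")).encode()
        if e["seq"] != i or e["prev"] != prev:
            return f"CHAIN BREAK at entry {i}"    # deleted or reordered
        if not hmac.compare_digest(e["sig"], sign(payload)):
            return f"BAD SIGNATURE at entry {i}"  # edited
        prev = e["hash"]
    return "VALID"

ledger = []
for a in ['search("AI governance")', 'read_file("config.yaml")']:
    write_entry(ledger, a)

print(verify(ledger))  # VALID
del ledger[0]          # delete an entry
print(verify(ledger))  # CHAIN BREAK at entry 0
```

&lt;p&gt;Deleting an entry trips the sequence/linkage check, editing an entry trips the signature check, and reordering trips the hash linkage, exactly the three properties listed above.&lt;/p&gt;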

&lt;h2&gt;A small demo&lt;/h2&gt;

&lt;p&gt;Install GuardClaw:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip &lt;span class="nb"&gt;install &lt;/span&gt;guardclaw
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Run a simple agent example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;guardclaw&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;init_global_ledger&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;Ed25519KeyManager&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;guardclaw.mcp&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;GuardClawMCPProxy&lt;/span&gt;

&lt;span class="n"&gt;km&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;Ed25519KeyManager&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;generate&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="nf"&gt;init_global_ledger&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;key_manager&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;km&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;agent_id&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;demo-agent&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;proxy&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;GuardClawMCPProxy&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;demo-agent&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;search&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;query&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Results for: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;query&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;

&lt;span class="n"&gt;proxy&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;register_tool&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;search&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;search&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;proxy&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;call&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;search&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;query&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;AI governance&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;GuardClaw writes a ledger automatically:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;.guardclaw/ledger/ledger.jsonl
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;Verifying the ledger&lt;/h2&gt;

&lt;p&gt;Run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;python &lt;span class="nt"&gt;-m&lt;/span&gt; guardclaw verify .guardclaw/ledger/ledger.jsonl
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;VALID
ledger integrity confirmed
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Now try editing the ledger file.&lt;br&gt;
Delete one entry.&lt;/strong&gt;&lt;br&gt;
Run verification again.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;python &lt;span class="nt"&gt;-m&lt;/span&gt; guardclaw verify .guardclaw/ledger/ledger.jsonl
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;INVALID
causal hash mismatch
ledger integrity violated
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The modification is immediately detected.&lt;/p&gt;

&lt;h2&gt;Why this matters for AI agents&lt;/h2&gt;

&lt;p&gt;AI systems are increasingly executing actions autonomously:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;coding assistants modifying repositories&lt;/li&gt;
&lt;li&gt;DevOps agents deploying infrastructure&lt;/li&gt;
&lt;li&gt;security agents running scans&lt;/li&gt;
&lt;li&gt;trading bots executing transactions&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In these environments, it becomes important to answer questions like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What did the agent actually do?&lt;/li&gt;
&lt;li&gt;When did it happen?&lt;/li&gt;
&lt;li&gt;Was the log modified afterward?&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;What an agent execution ledger looks like&lt;/h2&gt;

&lt;p&gt;Example execution recorded by GuardClaw:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Agent actions:

seq 0  tool.search("AI governance")
seq 1  tool.read_file("config.yaml")
seq 2  shell.exec("rm production.db")

GuardClaw ledger → cryptographically signed
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Tamper-evident execution ledgers provide a way to &lt;strong&gt;verify agent actions cryptographically.&lt;/strong&gt;&lt;/p&gt;
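
&lt;p&gt;Concretely, one line of such a JSONL ledger might look like the following. The field names here are illustrative, not GuardClaw's actual schema:&lt;/p&gt;

```json
{
  "seq": 2,
  "ts": "2026-03-11T18:03:22Z",
  "agent_id": "demo-agent",
  "action": {"tool": "shell.exec", "args": ["rm production.db"]},
  "prev_hash": "…sha-256 of entry 1…",
  "sig": "…ed25519 signature over the canonicalized entry…"
}
```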

&lt;h2&gt;Integrating with AI systems&lt;/h2&gt;

&lt;p&gt;GuardClaw already includes adapters for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;LangChain&lt;/li&gt;
&lt;li&gt;CrewAI&lt;/li&gt;
&lt;li&gt;MCP tool calls&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This makes it possible to record:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;tool calls&lt;/li&gt;
&lt;li&gt;agent actions&lt;/li&gt;
&lt;li&gt;execution results&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;while the system runs normally.&lt;/p&gt;
&lt;h2&gt;Example: recording GPT tool calls&lt;/h2&gt;

&lt;p&gt;In a simple demo, an AI assistant calling tools produces a ledger like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;seq 0  search → INTENT
seq 1  search → RESULT
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Every step is signed and chained.&lt;br&gt;
If the ledger is edited later, verification fails.&lt;/p&gt;
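
&lt;p&gt;The INTENT/RESULT pairing is worth dwelling on: when the intent is recorded &lt;em&gt;before&lt;/em&gt; the tool runs, a crash or kill mid-action still leaves evidence that the action was attempted. A minimal sketch of that pattern (illustrative, not GuardClaw's API; signing is omitted for brevity):&lt;/p&gt;

```python
import hashlib
import json

ledger = []

def record(kind: str, tool: str, data: dict) -> None:
    prev = ledger[-1]["hash"] if ledger else "0" * 64
    entry = {"seq": len(ledger), "kind": kind, "tool": tool,
             "data": data, "prev": prev}
    payload = json.dumps(entry, sort_keys=True, separators=(",", ":")).encode()
    entry["hash"] = hashlib.sha256(payload).hexdigest()
    ledger.append(entry)

def call(tool, fn, **kwargs):
    # INTENT is written before the tool executes, so even an interrupted
    # run leaves a chained record that the action was attempted.
    record("INTENT", tool, kwargs)
    result = fn(**kwargs)
    record("RESULT", tool, {"value": result})
    return result

call("search", lambda query: f"Results for: {query}", query="AI governance")
print([(e["seq"], e["tool"], e["kind"]) for e in ledger])
# [(0, 'search', 'INTENT'), (1, 'search', 'RESULT')]
```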

&lt;h2&gt;What I'm exploring&lt;/h2&gt;

&lt;p&gt;GuardClaw is still an early experiment.&lt;br&gt;
I'm interested in exploring ideas like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;verifiable AI execution logs&lt;/li&gt;
&lt;li&gt;agent accountability systems&lt;/li&gt;
&lt;li&gt;cryptographic audit trails for autonomous agents&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you're building AI agents or automation systems, I’d love to hear how you currently handle logging and auditing.&lt;/p&gt;

&lt;h2&gt;Project&lt;/h2&gt;

&lt;p&gt;GitHub: &lt;a href="https://github.com/viruswami5511/guardclaw" rel="noopener noreferrer"&gt;https://github.com/viruswami5511/guardclaw&lt;/a&gt;&lt;br&gt;
PyPI: &lt;a href="https://pypi.org/project/guardclaw/" rel="noopener noreferrer"&gt;https://pypi.org/project/guardclaw/&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;Feedback welcome&lt;/h2&gt;

&lt;p&gt;I'm especially interested in feedback from people building:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AI agents&lt;/li&gt;
&lt;li&gt;automation pipelines&lt;/li&gt;
&lt;li&gt;DevOps tooling&lt;/li&gt;
&lt;li&gt;security infrastructure&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;What kinds of auditing or accountability tools do you wish existed for AI systems?&lt;/p&gt;

</description>
      <category>ai</category>
      <category>devtools</category>
      <category>security</category>
      <category>opensource</category>
    </item>
  </channel>
</rss>
