<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: miwaty</title>
    <description>The latest articles on Forem by miwaty (@miwaty).</description>
    <link>https://forem.com/miwaty</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F875668%2F68f50576-9106-4bed-84d6-c7fb000958a4.jpg</url>
      <title>Forem: miwaty</title>
      <link>https://forem.com/miwaty</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/miwaty"/>
    <language>en</language>
    <item>
      <title>I Built a Fully Local OSINT Agent with Ollama, LangChain, Telegram and Qwen3.5 14B — Running 24/7 on My Homelab, Zero Cloud, Zero Compromises</title>
      <dc:creator>miwaty</dc:creator>
      <pubDate>Sun, 12 Apr 2026 21:45:28 +0000</pubDate>
      <link>https://forem.com/miwaty/i-built-a-fully-local-osint-agent-with-ollama-langchain-telegram-and-qwen35-14b-running-247-1pfk</link>
      <guid>https://forem.com/miwaty/i-built-a-fully-local-osint-agent-with-ollama-langchain-telegram-and-qwen35-14b-running-247-1pfk</guid>
      <description>&lt;h1&gt;
  
  
  I Built a Fully Local OSINT Agent with Ollama, LangChain, Telegram and Qwen3.5 14B — Running 24/7 on My Homelab, Zero Cloud, Zero Compromises
&lt;/h1&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Disclaimer&lt;/strong&gt; — This project was built for &lt;strong&gt;educational purposes only&lt;/strong&gt;, as a hands-on way to learn LangChain, local LLMs and OSINT tooling. Every technique described here should only be used on targets you have explicit written authorisation to analyse. Unauthorised use of OSINT tools may be illegal in your jurisdiction. Always act ethically and responsibly.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;I've been wanting to seriously learn LangChain for a while. Not just tutorials — actually build something real that I'd use.&lt;/p&gt;

&lt;p&gt;So I built &lt;strong&gt;OSINT Marin AI&lt;/strong&gt;: a fully local OSINT agent running on my homelab, accessible from anywhere via Telegram, powered by &lt;strong&gt;Qwen3.5 14B&lt;/strong&gt; running locally via Ollama, with Tor for IP rotation, SQLite for caching, and Matplotlib for statistical charts sent directly to my phone.&lt;/p&gt;

&lt;p&gt;No cloud. No API keys. No data leaving my network. Just Python, open-source tools, and a lot of trial and error.&lt;/p&gt;




&lt;h2&gt;
  
  
  Why local?
&lt;/h2&gt;

&lt;p&gt;If you work anywhere near cybersecurity, you already know the answer. Sending queries about IP addresses, domains, Instagram profiles or email addresses to a cloud API means that data is leaving your network and hitting someone else's servers. In a professional context that's often a hard no.&lt;/p&gt;

&lt;p&gt;Local models solve this completely. Everything stays on your hardware, under your control, with no third-party visibility into what you're analysing.&lt;/p&gt;

&lt;p&gt;For this project I used &lt;strong&gt;Qwen3.5 14B&lt;/strong&gt; in Q6_K quantization, served by Ollama. Thinking mode disabled — it adds latency with no real benefit for this use case.&lt;/p&gt;




&lt;h2&gt;
  
  
  What the agent can actually do
&lt;/h2&gt;

&lt;p&gt;I'll be concrete. These are the Telegram commands that work right now:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Instagram:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;/analizza_profilo @username&lt;/code&gt; — full profile metadata + AI summary&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;/scarica_foto @username&lt;/code&gt; — downloads photos, shows a paginated interactive list with buttons to view each photo or its metadata (likes, comments, date, hashtags, caption) directly in Telegram&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;/scarica_reel @username&lt;/code&gt; — downloads reels&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;/stories @username&lt;/code&gt; — downloads stories (requires auth)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;/foto_profilo @username&lt;/code&gt; — downloads and sends the profile picture&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;/statistiche @username&lt;/code&gt; — generates and sends 7 statistical charts as PNG images&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Web &amp;amp; OSINT:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;/analizza_sito https://...&lt;/code&gt; — full site analysis with AI summary&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;/whois dominio.com&lt;/code&gt; — WHOIS + DNS records + AI interpretation&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;/sherlock username&lt;/code&gt; — username search across platforms + AI digital profile&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;/ip 1.2.3.4&lt;/code&gt; — geolocation, ISP, ASN + AI comment&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;/email info@x.com&lt;/code&gt; — MX records, provider, domain status + AI analysis&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;/telefono +39...&lt;/code&gt; — carrier, country, number type + AI comment&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;/chiedi &amp;lt;domanda&amp;gt;&lt;/code&gt; — free natural language question to the LLM&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Every command that hits an external service first checks the local SQLite database. If the data is already there, it returns it instantly without making a new request.&lt;/p&gt;




&lt;h2&gt;
  
  
  The stack
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;python-telegram-bot 21.6     ← user interface
LangChain 0.3.7              ← agent framework
LangChain-Ollama 0.2.1       ← LLM connector
Ollama                       ← local model server
Qwen3.5 14B (Q6_K)           ← the model (no thinking mode)
Instaloader 4.13.1           ← Instagram scraping
SQLAlchemy 2.0.35 + SQLite   ← local database
Matplotlib 3.9.2             ← statistical charts
Sherlock                     ← username OSINT
python-whois + dnspython     ← WHOIS and DNS
phonenumbers                 ← phone number analysis
stem 1.8.2                   ← Tor integration
Loguru 0.7.2                 ← structured logging
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  Architecture
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;You (Telegram)
      ↓
  Telegram Bot (python-telegram-bot)
      ↓
  handlers.py — command routing + auth check
      ↓
  tools/ — instagram, web, osint, stats
      ↓
  SQLite DB — always checked before external requests
      ↓
  External sources (Instagram, ip-api, Sherlock...)
      ↓ (optional)
  Tor proxy — IP rotation for Instagram requests
      ↓
  agent/llm.py — ChatOllama, generates AI summary
      ↓
  Back to Telegram — structured data + AI block
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The LLM is the last step, not the first. The tools collect the data, the model synthesises it. If Ollama isn't running, the bot works anyway — the AI block is just skipped silently.&lt;/p&gt;
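&lt;p&gt;That graceful degradation is easy to sketch. A hypothetical helper (names are mine, not the project's actual functions): the structured block always goes out, and the &lt;strong&gt;Analisi AI&lt;/strong&gt; section is appended only when the LLM actually answered.&lt;/p&gt;

```python
def componi_risposta(dati_formattati, analisi_ai=None):
    """Build the Telegram reply; skip the AI block when Ollama was unreachable."""
    if analisi_ai is None:
        return dati_formattati
    return dati_formattati + "\n\nAnalisi AI:\n" + analisi_ai

print(componi_risposta("Followers: 1200"))                     # data only
print(componi_risposta("Followers: 1200", "Profilo attivo."))  # data + AI block
```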




&lt;h2&gt;
  
  
  The caching layer — the decision I'm most proud of
&lt;/h2&gt;

&lt;p&gt;Every external request is expensive: rate limits, latency, risk of getting blocked. So before every call to Instagram, ip-api, Sherlock or WHOIS, the agent checks SQLite first.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;analizza_profilo&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;username&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;dict&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="c1"&gt;# Check local DB first — never hit Instagram twice for the same target
&lt;/span&gt;    &lt;span class="n"&gt;profilo_db&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;get_profile&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;username&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;profilo_db&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="n"&gt;logger&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;info&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Profile &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;username&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt; found in local database&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{...}&lt;/span&gt;  &lt;span class="c1"&gt;# instant response
&lt;/span&gt;
    &lt;span class="c1"&gt;# Only here if not cached
&lt;/span&gt;    &lt;span class="n"&gt;loader&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;_get_loader&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;autenticato&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;False&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;profile&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;instaloader&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Profile&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;from_username&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;loader&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;username&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;dati&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="p"&gt;...&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="nf"&gt;save_profile&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;dati&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;dati&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Five tables: &lt;code&gt;profiles&lt;/code&gt;, &lt;code&gt;posts&lt;/code&gt;, &lt;code&gt;websites&lt;/code&gt;, &lt;code&gt;osint_results&lt;/code&gt;, &lt;code&gt;logs&lt;/code&gt;. Every result stored with full metadata. Over time the local database becomes genuinely useful — a growing knowledge base of everything you've ever analysed.&lt;/p&gt;




&lt;h2&gt;
  
  
  Tor integration for IP rotation
&lt;/h2&gt;

&lt;p&gt;Instagram rate-limits aggressively. After a few requests, you start getting 401/403 from their GraphQL API. My solution: route Instagram traffic through Tor and rotate the circuit when blocked.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;brew &lt;span class="nb"&gt;install &lt;/span&gt;tor
&lt;span class="c"&gt;# Add to torrc:&lt;/span&gt;
&lt;span class="c"&gt;# ControlPort 9051&lt;/span&gt;
brew services start tor
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In Python, the &lt;code&gt;utils/tor.py&lt;/code&gt; module handles circuit rotation via &lt;code&gt;stem&lt;/code&gt;. When Instaloader hits a block, the agent automatically requests a new Tor circuit and retries. Not perfect, but it works well enough for homelab use.&lt;/p&gt;
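&lt;p&gt;For reference, the rotate-and-retry idea can be sketched like this. The &lt;code&gt;stem&lt;/code&gt; calls are the standard ControlPort dance (the NEWNYM signal); the retry wrapper and its names are illustrative, not the project's actual code, and it assumes ControlPort 9051 as configured above.&lt;/p&gt;

```python
def nuovo_circuito(control_port=9051):
    """Ask the local Tor daemon for a fresh circuit (and likely a new exit IP)."""
    # stem is imported lazily so the retry logic below is testable without Tor
    from stem import Signal
    from stem.control import Controller

    with Controller.from_port(port=control_port) as controller:
        controller.authenticate()  # cookie auth, or password if torrc sets one
        controller.signal(Signal.NEWNYM)

def con_rotazione(fetch, rotate=nuovo_circuito, tentativi=3):
    """Run fetch(); on failure, rotate the Tor circuit and try again."""
    ultimo_errore = None
    for _ in range(tentativi):
        try:
            return fetch()
        except Exception as exc:  # e.g. an Instaloader 401/403
            ultimo_errore = exc
            rotate()
    raise ultimo_errore
```

&lt;p&gt;One caveat: Tor honours NEWNYM at most roughly every ten seconds, so back-to-back rotations may land on the same circuit.&lt;/p&gt;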




&lt;h2&gt;
  
  
  The LLM integration — keeping it simple
&lt;/h2&gt;

&lt;p&gt;I didn't build a complex ReAct agent for this. The pattern is simpler and more reliable:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Tool runs, collects structured data&lt;/li&gt;
&lt;li&gt;Data is sent to Telegram as formatted text&lt;/li&gt;
&lt;li&gt;Same data is passed to &lt;code&gt;_llm_analizza()&lt;/code&gt; which selects the right prompt for that data type&lt;/li&gt;
&lt;li&gt;Model generates a natural language summary&lt;/li&gt;
&lt;li&gt;Summary sent to Telegram as a separate &lt;strong&gt;Analisi AI&lt;/strong&gt; block
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;llm&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;ChatOllama&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;qwen3.5:14b&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;base_url&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getenv&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;OLLAMA_BASE_URL&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;http://localhost:11434&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="n"&gt;temperature&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mf"&gt;0.1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;      &lt;span class="c1"&gt;# deterministic — no hallucination games
&lt;/span&gt;    &lt;span class="n"&gt;num_predict&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;2048&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Temperature 0.1 because I want the model to interpret data, not invent it. The system prompt instructs it to never speculate beyond what the tools returned.&lt;/p&gt;




&lt;h2&gt;
  
  
  The 7 statistical charts
&lt;/h2&gt;

&lt;p&gt;For Instagram profiles, &lt;code&gt;/statistiche @username&lt;/code&gt; generates and sends 7 PNG charts:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Monthly posting frequency&lt;/strong&gt; — bar chart&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Most active hours&lt;/strong&gt; — heatmap (day of week × hour)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Engagement rate over time&lt;/strong&gt; — line chart ((likes + comments) / followers × 100)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Like evolution&lt;/strong&gt; — line chart&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Top hashtags&lt;/strong&gt; — horizontal bar chart (top 20)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Content type breakdown&lt;/strong&gt; — pie chart (photo / video / reel)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Average caption length by month&lt;/strong&gt; — bar chart&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;All generated with Matplotlib in &lt;code&gt;Agg&lt;/code&gt; mode (no display needed), saved to &lt;code&gt;/tmp/&lt;/code&gt;, sent to Telegram, then deleted. The underlying data comes from SQLite — no additional Instagram requests.&lt;/p&gt;
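&lt;p&gt;The engagement-rate series behind the line chart is simple to reproduce. A sketch, assuming each cached post row carries its like and comment counts (field names are mine, not the real schema):&lt;/p&gt;

```python
def engagement_rate(posts, followers):
    """Per-post engagement: (likes + comments) / followers * 100."""
    return [
        round((p["likes"] + p["comments"]) / followers * 100, 2)
        for p in posts
    ]

posts = [{"likes": 180, "comments": 20}, {"likes": 90, "comments": 10}]
print(engagement_rate(posts, followers=1000))  # [20.0, 10.0]
```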




&lt;h2&gt;
  
  
  Project structure — designed to be extended
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;osint-marin-ai/
├── main.py               ← entrypoint
├── agent/
│   └── llm.py            ← ChatOllama + verifica_ollama + analizza_con_llm
├── bot/
│   ├── handlers.py       ← all Telegram command handlers
│   └── keyboards.py      ← inline keyboards
├── tools/
│   ├── instagram.py      ← Instaloader
│   ├── web.py            ← site analysis
│   ├── osint.py          ← Sherlock, IP, email, phone, WHOIS
│   └── stats.py          ← Matplotlib charts
├── db/
│   ├── models.py         ← SQLAlchemy models
│   └── database.py       ← CRUD + always-check-db-first pattern
└── utils/
    ├── logger.py         ← Loguru
    └── tor.py            ← Tor proxy + circuit rotation
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Adding a new tool takes four steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create a file in &lt;code&gt;/tools/&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Save results via &lt;code&gt;db/database.py&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Register the command in &lt;code&gt;bot/handlers.py&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Add a prompt entry in &lt;code&gt;_llm_analizza()&lt;/code&gt; if you want AI summaries&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;That's the entire extension contract. No hidden coupling; everything is documented in Italian with docstrings.&lt;/p&gt;




&lt;h2&gt;
  
  
  What I actually learned building this
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;LangChain is powerful but you don't always need the full agent loop.&lt;/strong&gt; For most of my commands, a simple tool → LLM → response pattern is faster, more reliable and easier to debug than a full ReAct agent. I'll use LangGraph for the more complex multi-step workflows I'm planning next.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Local LLMs are genuinely usable now.&lt;/strong&gt; A year ago this would have been frustratingly slow. Today, with Q6_K quantization and thinking mode disabled, the model responds fast enough for interactive Telegram use. The ecosystem has caught up.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Caching is not optional.&lt;/strong&gt; Instagram will block you. Sherlock takes 60-120 seconds. Without the SQLite caching layer this would be unusable. Cache everything, always check the database first.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Tor helps, but isn't a silver bullet.&lt;/strong&gt; Instagram's bot detection is sophisticated. Authenticated requests with a dedicated account are still more reliable than Tor rotation for heavy scraping.&lt;/p&gt;




&lt;h2&gt;
  
  
  What's next
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;LangGraph&lt;/strong&gt; for multi-step OSINT workflows (gather → correlate → report)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;SearXNG&lt;/strong&gt; for fully local web search integrated into the agent&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;PDF report generation&lt;/strong&gt; — export a complete OSINT report from Telegram&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Scheduled monitoring&lt;/strong&gt; — alert when a tracked profile changes&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Face recognition&lt;/strong&gt; on downloaded photos&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Final thoughts
&lt;/h2&gt;

&lt;p&gt;I started this to learn LangChain. I ended up with a tool I actually use. That's the best possible outcome from a homelab project built for educational purposes.&lt;/p&gt;

&lt;p&gt;If you're in security and haven't explored local LLMs yet — the barrier is lower than you think. Ollama makes model management trivial, LangChain handles the agent plumbing, and python-telegram-bot gives you a mobile interface in 50 lines of code.&lt;/p&gt;

&lt;p&gt;The stack is mature. The models are capable. The only thing missing is someone building with them.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Written by miwaty — cybersecurity enthusiast, homelab builder, eternal work in progress.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>agents</category>
      <category>cybersecurity</category>
      <category>llm</category>
      <category>python</category>
    </item>
    <item>
      <title>How I Used the Model Context Protocol (MCP) to Coordinate My DIY Smart Home</title>
      <dc:creator>miwaty</dc:creator>
      <pubDate>Sun, 06 Jul 2025 20:50:23 +0000</pubDate>
      <link>https://forem.com/miwaty/how-i-used-the-model-context-protocol-mcp-to-coordinate-my-diy-smart-home-109m</link>
      <guid>https://forem.com/miwaty/how-i-used-the-model-context-protocol-mcp-to-coordinate-my-diy-smart-home-109m</guid>
      <description>&lt;h1&gt;
  
  
  How I Used the Model Context Protocol (MCP) to Coordinate My DIY Smart Home
&lt;/h1&gt;

&lt;p&gt;In recent months, I embarked on a personal project to upgrade my home with a smart automation system. The challenge was to integrate multiple components—motion sensors, temperature sensors, lights, and an IP camera—into a cohesive and context-aware environment.&lt;/p&gt;

&lt;p&gt;Initially, each component was working in isolation. A motion sensor could turn on a light, a camera could start recording when triggered, and the temperature sensor could send its data periodically. But none of these devices had any understanding of the overall context. This limitation became apparent as the system grew more complex.&lt;/p&gt;

&lt;p&gt;This article explores how I introduced a custom implementation of a &lt;strong&gt;Model Context Protocol (MCP)&lt;/strong&gt; to unify the behavior of all these components and create a more intelligent and coordinated smart home system.&lt;/p&gt;

&lt;h2&gt;
  
  
  System Overview
&lt;/h2&gt;

&lt;p&gt;The hardware and software stack included:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A mini PC running Ubuntu acting as the central server&lt;/li&gt;
&lt;li&gt;ESP32 boards equipped with motion and temperature sensors, communicating via MQTT&lt;/li&gt;
&lt;li&gt;A Raspberry Pi connected to an IP camera&lt;/li&gt;
&lt;li&gt;Smart lights controlled via RESTful API&lt;/li&gt;
&lt;li&gt;A Python-based automation controller&lt;/li&gt;
&lt;li&gt;A WebSocket server for context synchronization&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Initially, each component operated using simple event-action scripts. For example, when the motion sensor detected movement, a script would trigger the lights to turn on. However, the logic quickly became fragile and difficult to manage as additional devices were added.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Core Issue: Lack of Shared Context
&lt;/h2&gt;

&lt;p&gt;The primary issue was the absence of a shared understanding of the environment. Consider the following scenarios:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The lights turned on even when I was not home.&lt;/li&gt;
&lt;li&gt;The IP camera kept recording even while I was in the living room.&lt;/li&gt;
&lt;li&gt;Motion detection events were triggering the same reactions repeatedly, regardless of other conditions.&lt;/li&gt;
&lt;li&gt;Environmental conditions such as brightness or time of day were not considered.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Each of these problems stemmed from the lack of a &lt;strong&gt;global state&lt;/strong&gt; that all components could refer to. Each device operated based on its own inputs, without knowledge of what others were sensing or deciding.&lt;/p&gt;

&lt;h2&gt;
  
  
  Introducing the Model Context Protocol (MCP)
&lt;/h2&gt;

&lt;p&gt;To solve this, I implemented a lightweight version of the Model Context Protocol. The goal was to establish a &lt;strong&gt;centralized context server&lt;/strong&gt; that all components could publish to and subscribe from.&lt;/p&gt;

&lt;h3&gt;
  
  
  Key Principles
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Each agent (sensor, light, camera, etc.) publishes its &lt;strong&gt;local state&lt;/strong&gt; to a central context server.&lt;/li&gt;
&lt;li&gt;The server maintains a &lt;strong&gt;global context model&lt;/strong&gt; in memory, updated incrementally as new events arrive.&lt;/li&gt;
&lt;li&gt;When significant changes occur in the context, &lt;strong&gt;interested agents are notified&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Agents react &lt;strong&gt;locally&lt;/strong&gt;, using the latest context to decide whether or not to act.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This approach decouples the behavior logic from hardcoded trigger rules and instead allows decisions to be made based on consistent, shared context.&lt;/p&gt;
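&lt;p&gt;The four principles above fit in a few lines. A minimal in-memory sketch (the real server speaks WebSocket and MQTT; here subscribers are plain callbacks, and the class name is mine):&lt;/p&gt;

```python
class ContextModel:
    """Central context: agents publish local state, subscribers see the merge."""

    def __init__(self):
        self.state = {}
        self.subscribers = []

    def subscribe(self, callback):
        self.subscribers.append(callback)

    def publish(self, update):
        self.state.update(update)          # incremental merge into global context
        for callback in self.subscribers:  # notify interested agents
            callback(dict(self.state))     # each agent then decides locally

ctx = ContextModel()
ctx.subscribe(lambda state: print("light controller sees:", state))
ctx.publish({"motion": True, "source": "motion_sensor_living_room"})
```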

&lt;h2&gt;
  
  
  Technical Implementation
&lt;/h2&gt;

&lt;p&gt;The Context Server was implemented in Python using the &lt;code&gt;asyncio&lt;/code&gt; and &lt;code&gt;websockets&lt;/code&gt; libraries. MQTT was used for communication with ESP32 devices, and TinyDB was used to persist the context state for inspection and recovery.&lt;/p&gt;

&lt;p&gt;Each component sends updates in JSON format. For example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"source"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"motion_sensor_living_room"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"motion"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"timestamp"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2025-07-06T14:21:45Z"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The context server processes this update and modifies the global context accordingly. A simplified context might look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"presence"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"motion"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"is_dark"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"temperature"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mf"&gt;27.8&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"last_motion"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2025-07-06T14:21:45Z"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The context server then notifies all subscribers (e.g., the light controller, the camera module) about the updated context.&lt;/p&gt;

&lt;h2&gt;
  
  
  Real-World Use Case: Automated Presence Management
&lt;/h2&gt;

&lt;p&gt;A particularly useful application of MCP was for &lt;strong&gt;presence detection&lt;/strong&gt;. Here's how the system behaves with MCP:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;When motion is detected while the context indicates that the house is "empty", the context updates &lt;code&gt;presence&lt;/code&gt; to &lt;code&gt;true&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;The camera module, upon detecting &lt;code&gt;presence: true&lt;/code&gt;, automatically stops recording to respect privacy.&lt;/li&gt;
&lt;li&gt;The lighting controller turns on lights only if &lt;code&gt;presence: true&lt;/code&gt; &lt;strong&gt;and&lt;/strong&gt; &lt;code&gt;is_dark: true&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Context updates are logged and stored for historical analysis or future machine learning applications.&lt;/li&gt;
&lt;/ol&gt;
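&lt;p&gt;The lighting rule in step 3 reduces to a pure function over the shared context (a sketch, not the project's actual code):&lt;/p&gt;

```python
def should_turn_on_lights(context):
    """Lights only when someone is home AND it is actually dark."""
    return bool(context.get("presence")) and bool(context.get("is_dark"))

print(should_turn_on_lights({"presence": True, "is_dark": False}))  # False
print(should_turn_on_lights({"presence": True, "is_dark": True}))   # True
```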

&lt;h2&gt;
  
  
  Benefits of Using MCP
&lt;/h2&gt;

&lt;p&gt;Implementing MCP brought several advantages:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Improved consistency&lt;/strong&gt;: All agents operate based on a unified context model.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Reduced false triggers&lt;/strong&gt;: Actions are taken only when multiple conditions are satisfied.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Easier debugging&lt;/strong&gt;: Centralized logging of context changes makes it easier to trace issues.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Extensibility&lt;/strong&gt;: New agents or logic can be added without modifying existing ones, as long as they follow the context protocol.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Security and Reliability Considerations
&lt;/h2&gt;

&lt;p&gt;While this system is not exposed to the public internet, I added basic measures:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Token-based authentication for WebSocket clients&lt;/li&gt;
&lt;li&gt;Input validation using &lt;code&gt;jsonschema&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Context versioning and timestamp tracking&lt;/li&gt;
&lt;li&gt;Manual override controls via command line interface&lt;/li&gt;
&lt;/ul&gt;
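&lt;p&gt;The validation step deserves a concrete example. The project uses &lt;code&gt;jsonschema&lt;/code&gt;; this is a dependency-free sketch of the same shape check for the motion-sensor update shown earlier:&lt;/p&gt;

```python
EXPECTED = {"source": str, "motion": bool, "timestamp": str}

def valid_update(update):
    """True when every expected field is present with the right type."""
    return all(
        isinstance(update.get(field), ftype) for field, ftype in EXPECTED.items()
    )

print(valid_update({"source": "motion_sensor_living_room",
                    "motion": True,
                    "timestamp": "2025-07-06T14:21:45Z"}))  # True
print(valid_update({"source": "motion_sensor_living_room",
                    "motion": "yes"}))                      # False
```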

&lt;p&gt;Future improvements may include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;TLS encryption for WebSocket communication&lt;/li&gt;
&lt;li&gt;Integration with a dashboard for visual monitoring&lt;/li&gt;
&lt;li&gt;Rule-based automation engine using YAML or JSON configuration&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Building a smart home system that behaves intelligently requires more than just connecting devices. The key lies in providing a &lt;strong&gt;shared understanding of context&lt;/strong&gt; that all devices can refer to.&lt;/p&gt;

&lt;p&gt;By implementing a simple Model Context Protocol, I was able to transform a fragile collection of scripts into a robust and extensible system where each component plays its part based on the global state.&lt;/p&gt;

&lt;p&gt;This approach is scalable, reusable, and applicable well beyond home automation—any distributed system can benefit from having a centralized or decentralized context synchronization strategy.&lt;/p&gt;

&lt;p&gt;If you're interested in the source code for the context server or the agents, feel free to reach out. I’d be happy to publish it in a follow-up article.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Getting Started with Podman: My First Splunk Test Lab</title>
      <dc:creator>miwaty</dc:creator>
      <pubDate>Fri, 13 Jun 2025 10:50:39 +0000</pubDate>
      <link>https://forem.com/miwaty/getting-started-with-podman-my-first-splunk-test-lab-4ceg</link>
      <guid>https://forem.com/miwaty/getting-started-with-podman-my-first-splunk-test-lab-4ceg</guid>
      <description>&lt;p&gt;A few weeks ago, I stumbled upon a LinkedIn post that mentioned Podman as a drop-in replacement for Docker—daemonless, rootless, and open-source. I had heard about it before but never gave it much thought. This time, the post got my attention.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Disclaimer: I'm not a professional writer or seasoned blogger.&lt;br&gt;
I mostly use Dev.to as a notebook or public library for my tech experiments.&lt;br&gt;
That said—I genuinely hope you’ll find something useful here that helps you replicate, improve, or build your own version of this small Splunk lab with Podman.&lt;br&gt;
If you have suggestions, I’m always open to learning more.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;I'm working a lot with Splunk, and I often spin up quick labs to test different components like Indexers, Heavy Forwarders, and Search Heads. I figured—why not try doing this with Podman?&lt;/p&gt;

&lt;p&gt;Here’s how I set up a basic Splunk architecture using Podman and podman-compose.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create the podman-compose.yml&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I used a file nearly identical to what I’d write for Docker Compose, since podman-compose is compatible with the Compose Specification.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;version: '3.8'

services:
  idx:
    image: docker.io/splunk/splunk:latest
    container_name: idx
    environment:
      - SPLUNK_START_ARGS=--accept-license
      - SPLUNK_PASSWORD=Splunk@00
      - SPLUNK_ROLE=splunk_indexer
      - SPLUNK_ENABLE_LISTEN=9997
    ports:
      - "8000:8000"
      - "9997:9997"
      - "8089:8089"
    networks:
      - splunk-net

  hf:
    image: docker.io/splunk/splunk:latest
    container_name: hf
    environment:
      - SPLUNK_START_ARGS=--accept-license
      - SPLUNK_PASSWORD=Splunk@00
      - SPLUNK_ROLE=splunk_heavy_forwarder
    ports:
      - "8001:8000"
    networks:
      - splunk-net
    depends_on:
      - idx

  sh:
    image: docker.io/splunk/splunk:latest
    container_name: sh
    environment:
      - SPLUNK_START_ARGS=--accept-license
      - SPLUNK_PASSWORD=Splunk@00
      - SPLUNK_ROLE=splunk_search_head
    ports:
      - "8003:8000"
    networks:
      - splunk-net
    depends_on:
      - idx

networks:
  splunk-net:
    driver: bridge
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Run It with podman-compose&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;code&gt;podman-compose -f podman-compose.yml up -d&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;This brought up the three containers: idx, hf, and sh, running on the same network.&lt;/p&gt;
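&lt;p&gt;One caveat: &lt;code&gt;up -d&lt;/code&gt; returns as soon as the containers start, while Splunk's internal Ansible provisioning takes a few minutes more. To avoid hitting the instances too early, a Compose healthcheck could be added to each service. A minimal sketch (untested; the probe URL and timings are assumptions, and podman-compose's support for these keys may vary by version):&lt;/p&gt;

```yaml
healthcheck:
  test: ["CMD", "curl", "-sf", "http://localhost:8000"]  # Splunk Web answers once provisioning is done
  interval: 30s
  timeout: 10s
  retries: 10
  start_period: 120s
```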

&lt;ul&gt;
&lt;li&gt;Post-Startup Configuration&lt;/li&gt;
&lt;/ul&gt;

&lt;ol&gt;
&lt;li&gt;Search Head
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;podman exec -u splunk -it sh bash
/opt/splunk/bin/splunk add search-server idx:8089 -remoteUsername admin -remotePassword Splunk@00 -auth admin:Splunk@00
exit
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="2"&gt;
&lt;li&gt;Heavy Forwarder
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;podman exec -u splunk -it hf bash
/opt/splunk/bin/splunk add forward-server idx:9997 -auth admin:Splunk@00
exit
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Verify the configuration&lt;br&gt;
in Splunk Web:&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;On the Search Head, go to&lt;br&gt;
  Settings &amp;gt; Distributed Search &amp;gt; Search Peers&lt;br&gt;
  and verify that the indexer appears and is connected.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;On the Heavy Forwarder, check&lt;br&gt;
  Settings &amp;gt; Forwarding and receiving &amp;gt; Forwarded Data&lt;br&gt;
  to confirm that data is being forwarded to the indexer.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;Honestly, I didn’t expect Podman to work this smoothly. The only real change I had to make was adding the full image path (docker.io/splunk/splunk) to avoid name resolution issues. Otherwise, the experience felt familiar and lightweight.&lt;/p&gt;
&lt;/blockquote&gt;

</description>
    </item>
    <item>
      <title>Have Fun with Zeek</title>
      <dc:creator>miwaty</dc:creator>
      <pubDate>Sun, 16 Mar 2025 21:34:18 +0000</pubDate>
      <link>https://forem.com/miwaty/have-fun-with-zeek-4c5</link>
      <guid>https://forem.com/miwaty/have-fun-with-zeek-4c5</guid>
      <description>&lt;h2&gt;
  
  
  What a wonderful tool Zeek is!
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;This is the thought I had after realising the versatility and potential of the tool.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Zeek is a network security monitoring tool, and I needed one for a task at hand. What impressed me most is its extensibility through installable protocol-recognition packages, but let's take two steps back and start from the beginning.&lt;/p&gt;

&lt;p&gt;I am running Zeek on my Raspberry Pi (&lt;strong&gt;Ubuntu&lt;/strong&gt;), connected to a switch port configured in monitor mode; Zeek analyses and catalogues the traffic, and a Universal Forwarder makes my life easier for post-capture data analysis.&lt;/p&gt;

&lt;p&gt;Once the traffic capture begins, the captured packets are sent to an event engine that transforms the flow of packets into a series of events representing network activity in a natural form. For example, an HTTP event records the IP addresses involved, the HTTP version used, and the URI requested; likewise, an SSH event records the authentication, the IPs involved, and the status. All of this extraction and conversion is done by scripts already included in Zeek, so a lot of information can be obtained with just the base tool. Everything is carefully written to .log files for reading.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Are all types of protocols recognised?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The answer is yes&lt;/strong&gt;, but only if we install the appropriate plugins, which I will explain in detail in another post. In my case, I had to perform a discovery assessment to compile a list of everything on that network segment.&lt;/p&gt;

&lt;p&gt;Knowing the domain context, i.e. a small 3D printing company, I could have made a list by hand and then checked Zeek's results against my list. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;But that was no fun!!!&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;So what I did was ask them to mirror a port and then start Zeek.&lt;br&gt;
Zeek records every observed connection, primarily the captured IP addresses, in a file called conn.log, located in the logs/current/ directory.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Main Log: conn.log&lt;/strong&gt;&lt;br&gt;
This file records all observed network connections, including:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Source IP (id.orig_h)&lt;br&gt;
Destination IP (id.resp_h)&lt;br&gt;
Ports, protocol, connection duration, and data transferred&lt;br&gt;
&lt;/p&gt;


&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ts uid id.orig_h id.resp_h proto service
1693401923.01 C5mAqR1T7kl 192.168.1.10 8.8.8.8 udp dns
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This provides a full list of observed IPs in Zeek's network monitoring.&lt;/p&gt;
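&lt;p&gt;As a quick sketch of how you might post-process it outside Splunk: conn.log in Zeek's default TSV output can be parsed with a few lines of Python to collect the unique hosts. The sample lines below are illustrative, not taken from my capture:&lt;/p&gt;

```python
# Collect the unique source/destination IPs from a Zeek conn.log
# (default tab-separated output; assumes the standard "#fields" header line).
def unique_hosts(lines):
    fields, hosts = [], set()
    for line in lines:
        if line.startswith("#fields"):
            fields = line.strip().split("\t")[1:]  # column names after the "#fields" token
        elif line and not line.startswith("#"):
            row = dict(zip(fields, line.strip().split("\t")))
            hosts.add(row["id.orig_h"])
            hosts.add(row["id.resp_h"])
    return sorted(hosts)

sample = [
    "#fields\tts\tuid\tid.orig_h\tid.orig_p\tid.resp_h\tid.resp_p\tproto\tservice",
    "1693401923.01\tC5mAqR1T7kl\t192.168.1.10\t51344\t8.8.8.8\t53\tudp\tdns",
]
print(unique_hosts(sample))  # ['192.168.1.10', '8.8.8.8']
```

&lt;p&gt;The same idea scales to any of Zeek's TSV logs, since they all share the #fields header convention.&lt;/p&gt;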

&lt;p&gt;From there, knowing the context and recognising some services, I was able to do some research and install the exact modules I needed (&lt;a href="https://packages.zeek.org/" rel="noopener noreferrer"&gt;zeek-packages&lt;/a&gt;). One of these was ICSNPP-Modbus. After a second pass I completed the list, and by checking the parameters I reached a good &lt;strong&gt;90%&lt;/strong&gt; coverage.&lt;/p&gt;
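&lt;p&gt;For reference, after installing a package with zkg you typically have to load it in your site policy before redeploying with zeekctl. A sketch for ICSNPP-Modbus (the load path follows the package's own documentation; double-check it for your version):&lt;/p&gt;

```
# site/local.zeek
@load icsnpp/modbus
```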

&lt;p&gt;&lt;strong&gt;What did I learn from this experience?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Definitely a new tool to study in depth, a little more about the world of PLCs,&lt;br&gt;
and the architecture of Zeek itself.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How can you experiment?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If you are in a position to monitor a network legally, you can experiment with anything for as long as you want: since the monitoring is passive, you do not risk doing damage or crashing the network.&lt;/p&gt;

&lt;p&gt;Otherwise, if you cannot set up a small infrastructure at home, you can always analyse pcap files!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Small Example:&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Take a pcap of your interest, I recommend this list:&lt;br&gt;
&lt;a href="https://github.com/automayt/ICS-pcap" rel="noopener noreferrer"&gt;ICS-pcap&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;


&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;run the command:

**zeek -r &amp;lt;pcap&amp;gt;**
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;and at the end of the procedure you will have your files with the data.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How to analyze files more comfortably?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I use Splunk: it allows me to gather all the files under one index and launch SPL queries that make the analysis easier.&lt;/p&gt;
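&lt;p&gt;For example (the index and source names here are assumptions; adjust them to your ingestion setup), the first query below lists the top talker pairs from conn.log, and the second shows which services were observed on the segment:&lt;/p&gt;

```
index=zeek source="*conn.log" | stats count by id.orig_h, id.resp_h | sort -count
index=zeek source="*conn.log" | top limit=20 service
```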

&lt;p&gt;&lt;strong&gt;Some tips&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Perform a diagnosis with &lt;code&gt;zeekctl check&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;When installing modules, never skip the tests&lt;/li&gt;
&lt;li&gt;Do not install zkg as an external package with apt; use the one shipped under the --/zeek/bin folder&lt;/li&gt;
&lt;li&gt;Contact me if you have errors.&lt;/li&gt;
&lt;li&gt;Arm yourself with patience, and keep in mind the infrastructure on which you are operating Zeek&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>security</category>
      <category>zeek</category>
      <category>wireless</category>
    </item>
  </channel>
</rss>
