<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Damir</title>
    <description>The latest articles on Forem by Damir (@damsho92).</description>
    <link>https://forem.com/damsho92</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3822072%2F765cbb0a-93d9-4f74-9c3d-7e344ec32d98.jpg</url>
      <title>Forem: Damir</title>
      <link>https://forem.com/damsho92</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/damsho92"/>
    <language>en</language>
    <item>
      <title>The Developer’s Guide to the EU AI Act (What Actually Breaks Your Code)</title>
      <dc:creator>Damir</dc:creator>
      <pubDate>Wed, 01 Apr 2026 12:51:18 +0000</pubDate>
      <link>https://forem.com/damsho92/the-developers-guide-to-the-eu-ai-act-what-actually-breaks-your-code-10dj</link>
      <guid>https://forem.com/damsho92/the-developers-guide-to-the-eu-ai-act-what-actually-breaks-your-code-10dj</guid>
      <description>&lt;p&gt;If you're building AI features into your SaaS in 2026, you've probably heard the panic about the EU AI Act.&lt;/p&gt;

&lt;p&gt;Lawyers are charging €300–€500/hour to explain it.&lt;/p&gt;

&lt;p&gt;But let’s be honest:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Developers are the ones who actually have to implement compliance.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  🧠 The moment it clicked for me
&lt;/h2&gt;

&lt;p&gt;A few days ago, I was auditing a really well-built Node.js + Cloudflare app for a client.&lt;/p&gt;

&lt;p&gt;From a product perspective, it was great:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;fast
&lt;/li&gt;
&lt;li&gt;clean UI
&lt;/li&gt;
&lt;li&gt;solid architecture
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;But from a compliance standpoint?&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;It was a ticking time bomb.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  🚨 The “innocent” P0 bug
&lt;/h2&gt;

&lt;p&gt;During the audit, I noticed something small:&lt;/p&gt;

&lt;p&gt;The app was storing Google OAuth tokens and sensitive user data.&lt;/p&gt;

&lt;p&gt;The problem?&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;No encryption at rest.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;In a normal startup MVP, you might think:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“We’ll fix it later.”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Under &lt;strong&gt;GDPR (Article 32)&lt;/strong&gt; and the EU AI Act data governance requirements…&lt;/p&gt;

&lt;p&gt;This is a &lt;strong&gt;P0 issue&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;If your AI system processes that data — or if there’s a breach —&lt;br&gt;&lt;br&gt;
“we’re just a startup” is not a defense.&lt;/p&gt;




&lt;h2&gt;
  
  
  ⚙️ The fix was easy (the problem wasn’t)
&lt;/h2&gt;

&lt;p&gt;The engineering team fixed it in a few hours:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;added AES-256 encryption to DB fields
&lt;/li&gt;
&lt;li&gt;updated access patterns
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Simple.&lt;/p&gt;
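&lt;p&gt;In code, the fix can be as small as this sketch (Node.js &lt;code&gt;crypto&lt;/code&gt; with AES-256-GCM; the helper names and key handling are illustrative, and a real deployment would load the key from a secret manager, not generate it in-process):&lt;/p&gt;

```typescript
import { createCipheriv, createDecipheriv, randomBytes } from "node:crypto";

// Illustrative only: in production, load this from a secret manager.
const KEY = randomBytes(32); // 256-bit key for AES-256-GCM

// Encrypt a sensitive DB field; the stored value packs IV + auth tag + ciphertext.
function encryptField(plaintext: string): string {
  const iv = randomBytes(12); // 96-bit IV, as recommended for GCM
  const cipher = createCipheriv("aes-256-gcm", KEY, iv);
  const ciphertext = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  return Buffer.concat([iv, cipher.getAuthTag(), ciphertext]).toString("base64");
}

// Decrypt and verify integrity via the GCM auth tag.
function decryptField(payload: string): string {
  const buf = Buffer.from(payload, "base64");
  const decipher = createDecipheriv("aes-256-gcm", KEY, buf.subarray(0, 12));
  decipher.setAuthTag(buf.subarray(12, 28));
  return Buffer.concat([decipher.update(buf.subarray(28)), decipher.final()]).toString("utf8");
}
```

&lt;p&gt;Anything sensitive (OAuth tokens, PII) goes through &lt;code&gt;encryptField&lt;/code&gt; before it touches the database.&lt;/p&gt;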

&lt;p&gt;But finding it &lt;strong&gt;before&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;a regulator
&lt;/li&gt;
&lt;li&gt;or an enterprise client
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;…made all the difference.&lt;/p&gt;




&lt;h2&gt;
  
  
  📝 The real pain: Annex IV
&lt;/h2&gt;

&lt;p&gt;If your system is classified as &lt;strong&gt;High-Risk&lt;/strong&gt; under the EU AI Act…&lt;/p&gt;

&lt;p&gt;You are legally required to produce &lt;strong&gt;technical documentation&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;This is called:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Annex IV&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h3&gt;
  
  
  What Annex IV actually means for developers
&lt;/h3&gt;

&lt;p&gt;This is not “legal paperwork”.&lt;/p&gt;

&lt;p&gt;It’s engineering documentation.&lt;/p&gt;

&lt;p&gt;You need to describe:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;your architecture and system components
&lt;/li&gt;
&lt;li&gt;data flows (training vs inference)
&lt;/li&gt;
&lt;li&gt;human oversight (Article 14)
&lt;/li&gt;
&lt;li&gt;logging and traceability
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And here’s the catch:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Lawyers can’t write this for you.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;They don’t know:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;your DB schema
&lt;/li&gt;
&lt;li&gt;your API flows
&lt;/li&gt;
&lt;li&gt;your RAG pipeline
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You do.&lt;/p&gt;




&lt;h2&gt;
  
  
  🛠️ How not to lose weeks on this
&lt;/h2&gt;

&lt;p&gt;Instead of reading 300+ pages of regulation, here’s what actually works:&lt;/p&gt;




&lt;h3&gt;
  
  
  1. Use a real Annex IV template
&lt;/h3&gt;

&lt;p&gt;Don’t start from scratch.&lt;/p&gt;

&lt;p&gt;Use a structure that maps directly to the regulation.&lt;/p&gt;

&lt;p&gt;I put together a developer-friendly template based on actual requirements:&lt;/p&gt;

&lt;p&gt;👉 &lt;a href="https://www.complianceradar.dev/annex-iv-template" rel="noopener noreferrer"&gt;https://www.complianceradar.dev/annex-iv-template&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It translates legal language into:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;system sections
&lt;/li&gt;
&lt;li&gt;architecture blocks
&lt;/li&gt;
&lt;li&gt;engineering concepts
&lt;/li&gt;
&lt;/ul&gt;




&lt;h3&gt;
  
  
  2. Audit your architecture early
&lt;/h3&gt;

&lt;p&gt;Most teams do this too late.&lt;/p&gt;

&lt;p&gt;They build → ship → then ask:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“Are we compliant?”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;That’s backwards.&lt;/p&gt;

&lt;p&gt;You should be checking:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;before deployment
&lt;/li&gt;
&lt;li&gt;at architecture level
&lt;/li&gt;
&lt;/ul&gt;




&lt;h3&gt;
  
  
  🔍 What to look for
&lt;/h3&gt;

&lt;p&gt;Typical “P0 compliance bugs”:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;unencrypted sensitive data
&lt;/li&gt;
&lt;li&gt;missing transparency
&lt;/li&gt;
&lt;li&gt;unclear system boundaries
&lt;/li&gt;
&lt;li&gt;no logging / traceability
&lt;/li&gt;
&lt;li&gt;no human oversight
&lt;/li&gt;
&lt;/ul&gt;




&lt;h3&gt;
  
  
  ⚡ Automate the first pass
&lt;/h3&gt;

&lt;p&gt;If you want a quick baseline:&lt;/p&gt;

&lt;p&gt;You can upload your architecture and get:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;risk classification
&lt;/li&gt;
&lt;li&gt;compliance gaps
&lt;/li&gt;
&lt;li&gt;concrete fixes
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;👉 &lt;a href="https://www.complianceradar.dev" rel="noopener noreferrer"&gt;https://www.complianceradar.dev&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  💡 What I learned
&lt;/h2&gt;

&lt;p&gt;The biggest mistake developers make is thinking:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;compliance = legal problem&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;It’s not.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;It’s an architecture problem.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;And like any architecture problem:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;if you catch it early → easy fix
&lt;/li&gt;
&lt;li&gt;if you catch it late → painful refactor
&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  👇 Question for you
&lt;/h2&gt;

&lt;p&gt;Are you checking compliance before you build…&lt;/p&gt;

&lt;p&gt;Or after things are already in production?&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>ai</category>
      <category>architecture</category>
      <category>startup</category>
    </item>
    <item>
      <title>We turned EU AI Act compliance into a marketing feature (and it changed everything)</title>
      <dc:creator>Damir</dc:creator>
      <pubDate>Fri, 20 Mar 2026 11:28:01 +0000</pubDate>
      <link>https://forem.com/damsho92/we-turned-eu-ai-act-compliance-into-a-marketing-feature-and-it-changed-everything-2no3</link>
      <guid>https://forem.com/damsho92/we-turned-eu-ai-act-compliance-into-a-marketing-feature-and-it-changed-everything-2no3</guid>
      <description>&lt;p&gt;Most developers treat compliance as a checkbox.&lt;/p&gt;

&lt;p&gt;Something you deal with at the end.&lt;br&gt;
Something legal handles.&lt;br&gt;
Something you hope doesn’t block your launch.&lt;/p&gt;

&lt;p&gt;I used to think the same.&lt;/p&gt;

&lt;p&gt;Until I started building an AI SaaS for the EU market.&lt;/p&gt;




&lt;h2&gt;
  
  
  🛑 The problem: compliance kills momentum
&lt;/h2&gt;

&lt;p&gt;While building ComplianceRadar (a tool that scans websites and AI systems for EU AI Act + GDPR risks), I kept running into the same issue:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Developers don’t understand the law&lt;/li&gt;
&lt;li&gt;Lawyers are too slow and expensive&lt;/li&gt;
&lt;li&gt;Teams only think about compliance after launch&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;By the time they realize something is wrong, it’s already too late.&lt;/p&gt;

&lt;p&gt;They need to refactor architecture.&lt;br&gt;
Rewrite flows.&lt;br&gt;
Add missing controls.&lt;/p&gt;

&lt;p&gt;It’s painful.&lt;/p&gt;




&lt;h2&gt;
  
  
  🧠 The shift: compliance is not a cost, it's a signal
&lt;/h2&gt;

&lt;p&gt;At some point, something clicked:&lt;/p&gt;

&lt;p&gt;What if compliance isn’t just protection…&lt;/p&gt;

&lt;p&gt;What if it’s positioning?&lt;/p&gt;

&lt;p&gt;In the EU, trust is everything.&lt;/p&gt;

&lt;p&gt;If you can prove your AI is compliant, you immediately:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;reduce buyer friction&lt;/li&gt;
&lt;li&gt;increase conversion&lt;/li&gt;
&lt;li&gt;stand out from competitors&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That’s not legal.&lt;/p&gt;

&lt;p&gt;That’s marketing.&lt;/p&gt;




&lt;h2&gt;
  
  
  🔧 What we built
&lt;/h2&gt;

&lt;p&gt;We decided to flip the model.&lt;/p&gt;

&lt;p&gt;Instead of hiding compliance in PDFs and internal docs, we made it visible.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Pre-launch compliance (PDF scanning)
&lt;/h3&gt;

&lt;p&gt;We added a feature that lets you upload your AI architecture (PDF) and detect risks &lt;strong&gt;before writing code&lt;/strong&gt;.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;No live product required&lt;/li&gt;
&lt;li&gt;Annex IV alignment&lt;/li&gt;
&lt;li&gt;Instant feedback&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This alone changed how teams think about compliance.&lt;/p&gt;




&lt;h3&gt;
  
  
  2. The public trust badge
&lt;/h3&gt;

&lt;p&gt;Then we built something unexpected:&lt;/p&gt;

&lt;p&gt;A &lt;strong&gt;compliance badge&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Something you can embed on your site that says:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“Audited via ComplianceRadar.dev”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Now compliance becomes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;visible
&lt;/li&gt;
&lt;li&gt;shareable
&lt;/li&gt;
&lt;li&gt;part of your product
&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  🚀 The result
&lt;/h2&gt;

&lt;p&gt;Instead of asking:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“Are we compliant?”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Teams now ask:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“How do we show that we are compliant?”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;That’s a completely different mindset.&lt;/p&gt;




&lt;h2&gt;
  
  
  💡 What I learned
&lt;/h2&gt;

&lt;p&gt;If you’re building an AI product for Europe:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Don’t treat compliance as a blocker&lt;/li&gt;
&lt;li&gt;Don’t wait until after launch&lt;/li&gt;
&lt;li&gt;Don’t hide it in documentation&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Turn it into something your users can see.&lt;/p&gt;

&lt;p&gt;Because in 2026:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Trust is a feature.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  🔍 If you want to test this
&lt;/h2&gt;

&lt;p&gt;I built a tool to help with this:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;scan live apps
&lt;/li&gt;
&lt;li&gt;upload architecture (PDF)
&lt;/li&gt;
&lt;li&gt;generate compliance insights
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;👉 &lt;a href="https://www.complianceradar.dev" rel="noopener noreferrer"&gt;https://www.complianceradar.dev&lt;/a&gt;&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>ai</category>
      <category>programming</category>
      <category>saas</category>
    </item>
    <item>
      <title>I Almost Launched an "Illegal" AI SaaS Today. Here’s How I Fixed It (EU AI Act Guide).</title>
      <dc:creator>Damir</dc:creator>
      <pubDate>Tue, 17 Mar 2026 12:47:09 +0000</pubDate>
      <link>https://forem.com/damsho92/i-almost-launched-an-illegal-ai-saas-today-heres-how-i-fixed-it-eu-ai-act-guide-1iji</link>
      <guid>https://forem.com/damsho92/i-almost-launched-an-illegal-ai-saas-today-heres-how-i-fixed-it-eu-ai-act-guide-1iji</guid>
      <description>&lt;p&gt;Building an AI wrapper or SaaS in 2026 is easy.&lt;/p&gt;

&lt;p&gt;Launching it &lt;strong&gt;legally in the EU?&lt;/strong&gt; Not so much.&lt;/p&gt;




&lt;p&gt;Today, I was doing final checks before launching my project, &lt;strong&gt;ComplianceRadar&lt;/strong&gt;, a tool that scans websites for GDPR, ePrivacy, and EU AI Act issues.&lt;/p&gt;

&lt;p&gt;Everything looked green:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;clean code
&lt;/li&gt;
&lt;li&gt;Stripe fully integrated
&lt;/li&gt;
&lt;li&gt;security checks (including IDOR protection)
&lt;/li&gt;
&lt;li&gt;production-ready UI
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I was ready to ship.&lt;/p&gt;

&lt;p&gt;Then I did one last thing.&lt;/p&gt;

&lt;p&gt;I ran my own tool… against my own website.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Result
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Score: 65/100&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Wait… what?&lt;/p&gt;




&lt;h2&gt;
  
  
  The Problem
&lt;/h2&gt;

&lt;p&gt;I was missing an &lt;strong&gt;AI Transparency Page&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Which means:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;I was building a compliance tool… that was technically &lt;strong&gt;not compliant&lt;/strong&gt;.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Specifically, I hadn’t fully addressed transparency expectations for AI deployers under the EU AI Act.&lt;/p&gt;

&lt;p&gt;So I did something most developers hate:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Code freeze.&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;I paused the launch and spent the morning fixing it properly.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Misconception
&lt;/h2&gt;

&lt;p&gt;A lot of developers think:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“AI transparency means open-sourcing your code or exposing your prompts.”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;That’s not true.&lt;/p&gt;

&lt;p&gt;You can be compliant &lt;strong&gt;without exposing your secret sauce&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Here’s what actually matters.&lt;/p&gt;




&lt;h2&gt;
  
  
  What You Actually Need to Document
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. Model Architecture &amp;amp; API Usage
&lt;/h3&gt;

&lt;p&gt;Don’t hide what you’re using.&lt;/p&gt;

&lt;p&gt;In my case:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Google Gemini 2.5 Flash
&lt;/li&gt;
&lt;li&gt;via a secure Enterprise API
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Why this matters:&lt;/p&gt;

&lt;p&gt;B2B clients want to know you’re not running a random open-source model on an unprotected server.&lt;/p&gt;




&lt;h3&gt;
  
  
  2. The “Zero Data Retention” Guarantee
&lt;/h3&gt;

&lt;p&gt;This is one of the strongest trust signals you can have.&lt;/p&gt;

&lt;p&gt;I explicitly documented that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;user inputs (URLs + extracted content) are processed &lt;strong&gt;ephemerally&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;no customer data is used to train models
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is possible because of enterprise API guarantees.&lt;/p&gt;




&lt;h3&gt;
  
  
  3. Prompt Governance (Technical Guardrails)
&lt;/h3&gt;

&lt;p&gt;I didn’t expose my prompts.&lt;/p&gt;

&lt;p&gt;But I documented &lt;strong&gt;how the system is controlled&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;For example:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;forcing strict &lt;code&gt;application/json&lt;/code&gt; outputs
&lt;/li&gt;
&lt;li&gt;limiting free-form responses
&lt;/li&gt;
&lt;li&gt;treating the model as a structured scoring engine
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This reduces hallucination risk and improves determinism.&lt;/p&gt;
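&lt;p&gt;As a sketch, the response-side guardrail can be as simple as refusing anything that doesn't parse into the expected shape (the &lt;code&gt;ScanResult&lt;/code&gt; shape here is illustrative, not my real schema):&lt;/p&gt;

```typescript
// Illustrative guardrail: accept only JSON matching the expected shape,
// so the model behaves as a structured scoring engine, not a chat partner.
type ScanResult = { score: number; findings: string[] };

function parseModelOutput(raw: string): ScanResult {
  let data: unknown;
  try {
    data = JSON.parse(raw);
  } catch {
    throw new Error("Model returned non-JSON output; response rejected");
  }
  const obj = data as { score?: unknown; findings?: unknown };
  const valid =
    typeof obj === "object" && obj !== null &&
    typeof obj.score === "number" && obj.score >= 0 && obj.score <= 100 &&
    Array.isArray(obj.findings) && obj.findings.every((f) => typeof f === "string");
  if (!valid) throw new Error("Model output failed schema validation; response rejected");
  return { score: obj.score as number, findings: obj.findings as string[] };
}
```

&lt;p&gt;If validation throws, you retry or fail closed instead of showing users a free-form answer.&lt;/p&gt;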




&lt;h3&gt;
  
  
  4. Human-in-the-Loop Disclaimer
&lt;/h3&gt;

&lt;p&gt;This one is critical.&lt;/p&gt;

&lt;p&gt;AI is not perfect.&lt;/p&gt;

&lt;p&gt;So I clearly state:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The system is an assistive co-pilot, not a replacement for professional advice.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This is both:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;a &lt;strong&gt;compliance requirement&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;a &lt;strong&gt;liability safeguard&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  The Result (After Fixing It)
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;✅ SaaS launched
&lt;/li&gt;
&lt;li&gt;✅ Transparency implemented
&lt;/li&gt;
&lt;li&gt;✅ Compliance improved
&lt;/li&gt;
&lt;li&gt;✅ Trust factor increased significantly
&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Final Thought
&lt;/h2&gt;

&lt;p&gt;If you’re building AI tools for the European market:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Don’t just build fast, build compliant.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The hardest part isn’t writing code.&lt;/p&gt;

&lt;p&gt;It’s understanding what your system &lt;strong&gt;is allowed to do&lt;/strong&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  Want to See It in Production?
&lt;/h2&gt;

&lt;p&gt;I made the transparency page public:&lt;/p&gt;

&lt;p&gt;👉 &lt;a href="https://www.complianceradar.dev/ai-transparency" rel="noopener noreferrer"&gt;https://www.complianceradar.dev/ai-transparency&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can also scan your own site and see how your system performs.&lt;/p&gt;




</description>
      <category>ai</category>
      <category>webdev</category>
      <category>programming</category>
      <category>saas</category>
    </item>
    <item>
      <title>Is Your AI App High-Risk? A Simple EU AI Act Guide for Developers</title>
      <dc:creator>Damir</dc:creator>
      <pubDate>Mon, 16 Mar 2026 14:14:52 +0000</pubDate>
      <link>https://forem.com/damsho92/is-your-ai-app-high-risk-a-simple-eu-ai-act-guide-for-developers-c29</link>
      <guid>https://forem.com/damsho92/is-your-ai-app-high-risk-a-simple-eu-ai-act-guide-for-developers-c29</guid>
      <description>&lt;p&gt;If you're building an AI wrapper, a chatbot, or integrating LLMs into your SaaS right now, you've probably heard about the &lt;strong&gt;EU AI Act&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;It's the world's first comprehensive AI regulation.&lt;/p&gt;

&lt;p&gt;And yes, it applies to you &lt;strong&gt;even if your servers are in the US&lt;/strong&gt;, as long as your users are in the EU.&lt;/p&gt;

&lt;p&gt;The problem?&lt;/p&gt;

&lt;p&gt;Most developers and indie hackers have &lt;strong&gt;no idea where their product falls under this law&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;We're used to writing code, not reading &lt;strong&gt;300-page legal documents&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;But getting this wrong can become an expensive mistake.&lt;/p&gt;

&lt;p&gt;So here's a &lt;strong&gt;simple, developer-friendly breakdown&lt;/strong&gt; of how to classify your AI product.&lt;/p&gt;




&lt;h2&gt;
  
  
  🔺 The Risk Pyramid: Where Do You Belong?
&lt;/h2&gt;

&lt;p&gt;The EU AI Act doesn't care about &lt;strong&gt;how your AI works&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;It doesn't matter if you're using:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;GPT-4
&lt;/li&gt;
&lt;li&gt;Claude
&lt;/li&gt;
&lt;li&gt;Llama
&lt;/li&gt;
&lt;li&gt;a custom model
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;What the law actually cares about is &lt;strong&gt;the use case&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;In other words:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;What is your AI being used for?&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The Act divides AI systems into &lt;strong&gt;four categories&lt;/strong&gt;.&lt;/p&gt;




&lt;h3&gt;
  
  
  🚫 Prohibited AI (Banned)
&lt;/h3&gt;

&lt;p&gt;These systems are outright illegal.&lt;/p&gt;

&lt;p&gt;Examples include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;social scoring systems
&lt;/li&gt;
&lt;li&gt;biometric manipulation
&lt;/li&gt;
&lt;li&gt;AI exploiting vulnerable populations
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you are building something in this category…&lt;/p&gt;

&lt;p&gt;You should probably stop.&lt;/p&gt;




&lt;h3&gt;
  
  
  ⚠️ High-Risk AI (Heavy Compliance)
&lt;/h3&gt;

&lt;p&gt;This is where things get serious.&lt;/p&gt;

&lt;p&gt;Typical examples include AI used for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;CV screening or hiring decisions
&lt;/li&gt;
&lt;li&gt;credit scoring
&lt;/li&gt;
&lt;li&gt;medical devices
&lt;/li&gt;
&lt;li&gt;law enforcement
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If your system falls here, compliance becomes &lt;strong&gt;a serious engineering task&lt;/strong&gt;.&lt;/p&gt;




&lt;h3&gt;
  
  
  👀 Limited Risk (Transparency Rules)
&lt;/h3&gt;

&lt;p&gt;This category includes things like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;chatbots
&lt;/li&gt;
&lt;li&gt;deepfake generators
&lt;/li&gt;
&lt;li&gt;emotion recognition systems
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The main requirement here is simple:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Users must clearly know they are interacting with AI.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h3&gt;
  
  
  ✅ Minimal Risk (Almost No Restrictions)
&lt;/h3&gt;

&lt;p&gt;This is where most everyday AI lives.&lt;/p&gt;

&lt;p&gt;Examples include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;spam filters
&lt;/li&gt;
&lt;li&gt;AI features in video games
&lt;/li&gt;
&lt;li&gt;recommendation engines
&lt;/li&gt;
&lt;li&gt;inventory prediction
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If your system falls here, you're mostly free to build without heavy regulatory overhead.&lt;/p&gt;




&lt;h2&gt;
  
  
  ⚙️ Why Risk Classification Matters (For Your Codebase)
&lt;/h2&gt;

&lt;p&gt;If your app falls into the &lt;strong&gt;High-Risk category&lt;/strong&gt;, things change dramatically.&lt;/p&gt;

&lt;p&gt;You can't just deploy to production and move on.&lt;/p&gt;

&lt;p&gt;The EU AI Act requires several architectural and operational changes.&lt;/p&gt;

&lt;h3&gt;
  
  
  Risk Management System
&lt;/h3&gt;

&lt;p&gt;You must continuously test for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;bias&lt;/li&gt;
&lt;li&gt;errors&lt;/li&gt;
&lt;li&gt;unintended outcomes&lt;/li&gt;
&lt;/ul&gt;




&lt;h3&gt;
  
  
  Human Oversight
&lt;/h3&gt;

&lt;p&gt;Your UI/UX must allow &lt;strong&gt;a human to intervene&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;That could mean:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;override mechanisms&lt;/li&gt;
&lt;li&gt;manual review&lt;/li&gt;
&lt;li&gt;emergency shutdown features&lt;/li&gt;
&lt;/ul&gt;




&lt;h3&gt;
  
  
  Logging &amp;amp; Traceability
&lt;/h3&gt;

&lt;p&gt;Your infrastructure must keep &lt;strong&gt;clear logs of AI decisions&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Think:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;inputs&lt;/li&gt;
&lt;li&gt;outputs&lt;/li&gt;
&lt;li&gt;model behavior&lt;/li&gt;
&lt;li&gt;timestamps&lt;/li&gt;
&lt;/ul&gt;
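&lt;p&gt;Concretely, that can start as one append-only record per model call. A minimal sketch (the field names are my own, not mandated by the Act):&lt;/p&gt;

```typescript
import { randomUUID, createHash } from "node:crypto";

// Illustrative log entry for one AI decision; the Act requires traceability,
// but the exact record layout is an engineering choice.
interface AIDecisionLog {
  timestamp: string;    // ISO 8601
  requestId: string;
  inputHash: string;    // hash instead of raw input when the data is sensitive
  output: string;
  modelVersion: string;
}

function logDecision(input: string, output: string, modelVersion: string): AIDecisionLog {
  return {
    timestamp: new Date().toISOString(),
    requestId: randomUUID(),
    inputHash: createHash("sha256").update(input).digest("hex"),
    output,
    modelVersion,
  };
}
```

&lt;p&gt;Hashing the input keeps the log useful for audits without turning it into a second copy of sensitive data.&lt;/p&gt;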




&lt;h3&gt;
  
  
  Technical Documentation (Annex IV)
&lt;/h3&gt;

&lt;p&gt;This is the scary one.&lt;/p&gt;

&lt;p&gt;You must produce &lt;strong&gt;formal documentation describing your entire system&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Things like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;system architecture&lt;/li&gt;
&lt;li&gt;training data description&lt;/li&gt;
&lt;li&gt;evaluation procedures&lt;/li&gt;
&lt;li&gt;risk management methods&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  🧩 The Hard Part: Understanding Annex III
&lt;/h2&gt;

&lt;p&gt;Ironically, the hardest part of compliance isn't writing documentation.&lt;/p&gt;

&lt;p&gt;It's figuring out &lt;strong&gt;whether your system is high-risk in the first place&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;The rules (especially &lt;strong&gt;Annex III of the AI Act&lt;/strong&gt;) are complicated.&lt;/p&gt;

&lt;p&gt;For example:&lt;/p&gt;

&lt;p&gt;A simple HR chatbot might be &lt;strong&gt;Limited Risk&lt;/strong&gt; if it only answers FAQs.&lt;/p&gt;

&lt;p&gt;But the moment it &lt;strong&gt;filters candidates based on resumes&lt;/strong&gt;, it suddenly becomes &lt;strong&gt;High-Risk AI&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;The boundary is extremely blurry.&lt;/p&gt;
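&lt;p&gt;The HR chatbot example can be expressed as a tiny decision sketch (a toy model of the logic, not legal advice; the tier names and inputs are heavily simplified):&lt;/p&gt;

```typescript
type RiskTier = "prohibited" | "high-risk" | "limited-risk" | "minimal-risk";

// Toy classification sketch: the same chatbot changes tier the moment
// its use case changes, which is exactly the Annex III trap.
function classifyChatbot(useCase: {
  filtersCandidates: boolean;   // Annex III territory: employment decisions
  interactsWithHumans: boolean; // transparency obligations
}): RiskTier {
  if (useCase.filtersCandidates) return "high-risk";
  if (useCase.interactsWithHumans) return "limited-risk";
  return "minimal-risk";
}
```

&lt;p&gt;The point isn't the code; it's that classification hinges on product decisions you control.&lt;/p&gt;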




&lt;h2&gt;
  
  
  🛠️ A Simple Self-Assessment Tool
&lt;/h2&gt;

&lt;p&gt;I recently ran into this problem while auditing my own AI projects.&lt;/p&gt;

&lt;p&gt;After spending hours reading legal documents, I got frustrated.&lt;/p&gt;

&lt;p&gt;So I did what most developers would do.&lt;/p&gt;

&lt;p&gt;I built a tool to automate the classification.&lt;/p&gt;

&lt;p&gt;It's a simple &lt;strong&gt;EU AI Act Risk Classification Quiz&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;It takes about &lt;strong&gt;2 minutes&lt;/strong&gt; and, based on your use case, estimates which &lt;strong&gt;risk tier&lt;/strong&gt; your AI system falls into.&lt;/p&gt;

&lt;p&gt;🔗 &lt;strong&gt;Check your AI app's risk level here:&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
&lt;a href="https://www.complianceradar.dev/ai-act-risk-classification" rel="noopener noreferrer"&gt;https://www.complianceradar.dev/ai-act-risk-classification&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  💡 Final Thought
&lt;/h2&gt;

&lt;p&gt;When it comes to the EU AI Act, &lt;strong&gt;ignorance won't help you&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;But the real difficulty isn't compliance itself.&lt;/p&gt;

&lt;p&gt;It's understanding &lt;strong&gt;where your product sits on the risk pyramid&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Once you know your classification, the path forward becomes much clearer.&lt;/p&gt;

&lt;p&gt;And then you can go back to doing what you do best:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;building and shipping software.&lt;/strong&gt;&lt;/p&gt;




&lt;p&gt;💬 &lt;strong&gt;What kind of AI app are you building right now?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Drop your use-case in the comments and let's figure out your &lt;strong&gt;risk tier&lt;/strong&gt; together.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>webdev</category>
      <category>saas</category>
      <category>programming</category>
    </item>
    <item>
      <title>SEO is Dead? How I Optimized My Next.js SaaS for ChatGPT &amp; Perplexity (AEO)</title>
      <dc:creator>Damir</dc:creator>
      <pubDate>Sat, 14 Mar 2026 07:27:29 +0000</pubDate>
      <link>https://forem.com/damsho92/seo-is-dead-how-i-optimized-my-nextjs-saas-for-chatgpt-perplexity-aeo-9j7</link>
      <guid>https://forem.com/damsho92/seo-is-dead-how-i-optimized-my-nextjs-saas-for-chatgpt-perplexity-aeo-9j7</guid>
      <description>&lt;p&gt;Everyone is still playing the Google SEO game: stuffing keywords, buying backlinks, and fighting for Page 1.&lt;/p&gt;

&lt;p&gt;But if you are building a &lt;strong&gt;B2B SaaS in 2026&lt;/strong&gt;, your target audience (developers, founders, CTOs) has already changed their behavior.&lt;/p&gt;

&lt;p&gt;They aren't Googling anymore.&lt;/p&gt;

&lt;p&gt;They are asking:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;ChatGPT
&lt;/li&gt;
&lt;li&gt;Perplexity
&lt;/li&gt;
&lt;li&gt;Claude
&lt;/li&gt;
&lt;li&gt;Google AI Mode
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;When I launched my micro-SaaS &lt;strong&gt;ComplianceRadar&lt;/strong&gt; (an automated EU AI Act risk scanner), I realized something interesting:&lt;/p&gt;

&lt;p&gt;Getting to the top of Google might take &lt;strong&gt;6 months&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;But getting cited by an &lt;strong&gt;LLM as an authoritative source&lt;/strong&gt; can happen almost instantly if you structure your site correctly.&lt;/p&gt;

&lt;p&gt;Welcome to &lt;strong&gt;AEO — AI Engine Optimization.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Here are the &lt;strong&gt;three things I implemented on day one&lt;/strong&gt; to make my Next.js SaaS machine-readable for AI systems.&lt;/p&gt;




&lt;h1&gt;
  
  
  1. The Secret Weapon: &lt;code&gt;llms.txt&lt;/code&gt;
&lt;/h1&gt;

&lt;p&gt;Just like &lt;code&gt;robots.txt&lt;/code&gt; tells search engines where to go, the new &lt;strong&gt;llms.txt&lt;/strong&gt; concept helps AI agents understand what your company actually does.&lt;/p&gt;

&lt;p&gt;AI crawlers like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;OpenAI's crawler
&lt;/li&gt;
&lt;li&gt;Anthropic crawlers
&lt;/li&gt;
&lt;li&gt;Perplexity indexing systems
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;prefer &lt;strong&gt;high-signal text over visual layout&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;They don't care about your beautiful Tailwind gradients.&lt;/p&gt;

&lt;p&gt;They want &lt;strong&gt;structured facts&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;I created an &lt;code&gt;llms.txt&lt;/code&gt; file and placed it in the &lt;code&gt;public/&lt;/code&gt; folder so it lives at:&lt;br&gt;
&lt;a href="//complianceradar.dev/llms.txt"&gt;complianceradar.dev/llms.txt&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Example structure:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# ComplianceRadar

&amp;gt; Automated EU AI Act Risk Tier Classification for Developers

## Primary Services

- AI Risk Scanner: Analyzes an AI application's feature set and outputs a strict risk classification.
- Compliance Roadmaps: Technical and legal summaries based on Annex III.

## Target Audience

- Indie Hackers
- AI Startups
- Compliance Officers

## Trust &amp;amp; Methodology

The classification engine maps user inputs directly against the official text of the EU AI Act using a strict decision tree.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h1&gt;
  
  
  2. Injecting Heavy Structured Data (JSON-LD)
&lt;/h1&gt;

&lt;p&gt;LLMs rely heavily on the semantic web.&lt;/p&gt;

&lt;p&gt;Having the usual meta tags is nice.&lt;/p&gt;

&lt;p&gt;But giving the AI a literal JSON object describing your product is much more powerful.&lt;/p&gt;

&lt;p&gt;Inside my Next.js App Router, I injected JSON-LD schemas into core routes.&lt;/p&gt;

&lt;p&gt;Main schemas used:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;Organization&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;WebSite&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;SoftwareApplication&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;FAQPage&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"@context"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"https://schema.org"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"@type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"SoftwareApplication"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"ComplianceRadar"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"applicationCategory"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"BusinessApplication"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"description"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Automated EU AI Act risk classification tool"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"offers"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"@type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Offer"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"price"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"29"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"priceCurrency"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"EUR"&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This explicitly tells AI systems:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;what the product is&lt;/li&gt;
&lt;li&gt;what category it belongs to&lt;/li&gt;
&lt;li&gt;how it is priced&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Structured data = AI-friendly content.&lt;/p&gt;
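&lt;p&gt;For completeness, here's roughly how a schema object becomes the tag that gets rendered into the page (a generic sketch; the helper name is mine):&lt;/p&gt;

```typescript
// Generic sketch: serialize a schema.org object into a JSON-LD script tag.
// Escaping "<" prevents the JSON from prematurely closing the script element.
function jsonLdScript(schema: Record<string, unknown>): string {
  const json = JSON.stringify(schema).replace(/</g, "\\u003c");
  return `<script type="application/ld+json">${json}</script>`;
}
```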

&lt;h2&gt;
  
  
  3. The "Authority Anchor" Technique (Official Citations)
&lt;/h2&gt;

&lt;p&gt;Here is the biggest mistake founders make with content marketing:&lt;/p&gt;

&lt;p&gt;They write great opinion pieces but provide &lt;strong&gt;zero hard sources&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;LLMs are designed to prioritize &lt;strong&gt;authoritative and corroborated information&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;If your blog post says:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;EU AI Act fines are 7% of global revenue&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;without linking to a primary source, an AI model may ignore it.&lt;/p&gt;

&lt;p&gt;To fix this, I added &lt;strong&gt;explicit outbound links to primary legal sources&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;For example:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;official EU law documentation
&lt;/li&gt;
&lt;li&gt;EUR-Lex legislation pages
&lt;/li&gt;
&lt;li&gt;regulatory summaries&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;By doing this, the article becomes a &lt;strong&gt;bridge between complex legislation and developer-friendly explanations&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;This signals to AI systems:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;This source is aggregating verified regulatory information.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;And that increases the chances of being cited.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Result
&lt;/h2&gt;

&lt;p&gt;Building an interactive SaaS is only &lt;strong&gt;half of the battle&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;The other half is &lt;strong&gt;distribution&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;By implementing:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;llms.txt&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;structured JSON-LD&lt;/li&gt;
&lt;li&gt;authoritative citations&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;ComplianceRadar is no longer just waiting for Google indexing.&lt;/p&gt;

&lt;p&gt;It is actively feeding &lt;strong&gt;structured, trustworthy data into the AI models developers use every day.&lt;/strong&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Final Thought
&lt;/h2&gt;

&lt;p&gt;If you are building a SaaS in 2026, especially for developers:&lt;/p&gt;

&lt;p&gt;Stop optimizing &lt;strong&gt;only for Google&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Start optimizing for the machines your users are actually talking to.&lt;/p&gt;




&lt;h2&gt;
  
  
  Try the Scanner
&lt;/h2&gt;

&lt;p&gt;If you're building an AI feature and want to understand potential regulatory risks under the EU AI Act, you can try my free scanner here:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.complianceradar.dev" rel="noopener noreferrer"&gt;ComplianceRadar&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>aeo</category>
      <category>webdev</category>
      <category>nextjs</category>
    </item>
  </channel>
</rss>
