<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: AppZ</title>
    <description>The latest articles on Forem by AppZ (@appz_b0659e1ca24e36738948).</description>
    <link>https://forem.com/appz_b0659e1ca24e36738948</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3889729%2F02734df4-f5f8-410d-9906-0545ae75f2bc.png</url>
      <title>Forem: AppZ</title>
      <link>https://forem.com/appz_b0659e1ca24e36738948</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/appz_b0659e1ca24e36738948"/>
    <language>en</language>
    <item>
      <title>GDPR Tells You to Delete. The EU AI Act Tells You to Archive. Here Is How to Resolve It.</title>
      <dc:creator>AppZ</dc:creator>
      <pubDate>Fri, 24 Apr 2026 04:47:42 +0000</pubDate>
      <link>https://forem.com/appz_b0659e1ca24e36738948/gdpr-tells-you-to-delete-the-eu-ai-act-tells-you-to-archive-here-is-how-to-resolve-it-1b5h</link>
      <guid>https://forem.com/appz_b0659e1ca24e36738948/gdpr-tells-you-to-delete-the-eu-ai-act-tells-you-to-archive-here-is-how-to-resolve-it-1b5h</guid>
      <description>&lt;p&gt;If you are building or deploying AI systems in the EU, you are probably already managing GDPR obligations. Now the EU AI Act is layered on top.&lt;/p&gt;

&lt;p&gt;The two regulations appear to conflict directly on one of the most sensitive data questions: how long do you keep training data?&lt;/p&gt;

&lt;p&gt;GDPR says you delete it when it is no longer necessary. The EU AI Act says you keep it to prove your system is compliant.&lt;/p&gt;

&lt;p&gt;Here is how you resolve it.&lt;/p&gt;

&lt;h2&gt;Why the Tension Exists&lt;/h2&gt;

&lt;p&gt;GDPR's storage limitation principle under Article 5(1)(e) requires that personal data be kept in a form that permits identification of data subjects for no longer than is necessary for the purposes for which the data was collected.&lt;/p&gt;

&lt;p&gt;The EU AI Act under Article 10(5) allows high-risk AI systems to retain special categories of personal data, where strictly necessary, for the purpose of detecting and correcting biases.&lt;/p&gt;

&lt;p&gt;Article 12 of the EU AI Act requires logging capabilities that retain records sufficient to enable post-hoc review of system outputs. For systems used in high-stakes decisions, these logs include input data linked to specific decisions affecting identifiable individuals.&lt;/p&gt;

&lt;p&gt;A pure GDPR application would say: delete when the purpose expires. A pure EU AI Act application would say: retain to demonstrate conformity. Neither regulation explicitly defers to the other.&lt;/p&gt;

&lt;h2&gt;The Reconciliation Framework&lt;/h2&gt;

&lt;p&gt;The resolution lies in purpose specification and proportionality, the same principles that underpin GDPR compliance generally.&lt;/p&gt;

&lt;p&gt;The key is to separate training data retention from operational log retention, and to apply different retention rules to each.&lt;/p&gt;

&lt;p&gt;For training data, the GDPR purpose limitation principle requires a clear, documented legal basis for extended retention. Under Article 6(1)(f) (legitimate interests) or, for special category data, Article 9(2)(g) (substantial public interest), you can justify retaining training data beyond its original collection purpose if you document the necessity for bias detection, can demonstrate that no less privacy-invasive alternative exists, and have completed a legitimate interests assessment.&lt;/p&gt;

&lt;p&gt;This is not a blanket exemption. It requires active documentation and regular review.&lt;/p&gt;

&lt;p&gt;For operational logs under Article 12 of the EU AI Act, the retention period must be proportionate to the risk profile of the system. A general-purpose AI tool used internally has a different risk profile from a high-risk system used in employment screening under Annex III. The former may justify 30-day log retention. The latter may require two years or more.&lt;/p&gt;
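
&lt;p&gt;One way to make this split concrete is to encode the two retention tracks in a single policy table that your deletion jobs read from. A minimal sketch in Python; the tier names, day counts, and legal-basis strings are illustrative assumptions, not values taken from either regulation:&lt;/p&gt;

```python
# Sketch: separate retention rules for training data vs. operational logs.
# Day counts and legal-basis strings are illustrative, not statutory.
RETENTION_POLICY = {
    "training_data": {
        "high_risk": {"days": 730, "legal_basis": "GDPR Art. 9(2)(g) + AI Act Art. 10(5)"},
        "limited_risk": {"days": 90, "legal_basis": "GDPR Art. 6(1)(f)"},
    },
    "operational_logs": {
        "high_risk": {"days": 730, "legal_basis": "AI Act Art. 12"},
        "limited_risk": {"days": 30, "legal_basis": "AI Act Art. 12"},
    },
}

def retention_days(data_class, risk_tier):
    """Look up the documented retention period for a data class and risk tier."""
    return RETENTION_POLICY[data_class][risk_tier]["days"]
```

&lt;p&gt;Keeping both tracks in one structure makes it harder for the log policy and the training-data policy to drift apart over time.&lt;/p&gt;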

&lt;h2&gt;What Your Documentation Must Show&lt;/h2&gt;

&lt;p&gt;The practical resolution requires you to produce documentation that serves both regulatory frameworks simultaneously.&lt;/p&gt;

&lt;p&gt;Your data governance record must show the original collection purpose, the EU AI Act compliance purpose that justifies extended retention, the legal basis under GDPR Article 6 or Article 9 for that extended retention, a defined retention period tied to the EU AI Act conformity assessment cycle, and a deletion schedule activated at the end of that period.&lt;/p&gt;
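
&lt;p&gt;That record can live as a small, versioned data structure rather than a prose document. A hedged sketch; the field names are assumptions mirroring the list above, not a prescribed schema:&lt;/p&gt;

```python
from dataclasses import dataclass

@dataclass
class DataGovernanceRecord:
    # Fields mirror the elements the record must show; names are illustrative.
    original_purpose: str          # why the data was first collected
    ai_act_purpose: str            # the compliance purpose justifying extended retention
    gdpr_legal_basis: str          # e.g. "Art. 6(1)(f)" or "Art. 9(2)(g)"
    retention_period_days: int     # tied to the conformity assessment cycle
    deletion_scheduled: bool = False

    def schedule_deletion(self):
        """Activate the deletion schedule at the end of the retention period."""
        self.deletion_scheduled = True
```

&lt;p&gt;A structure like this can feed both the Annex IV technical documentation and the GDPR Records of Processing Activities from one source of truth.&lt;/p&gt;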

&lt;p&gt;This documentation should sit alongside your EU AI Act technical documentation under Annex IV and your GDPR Records of Processing Activities. In practice, many organisations are creating a unified data governance layer that feeds both.&lt;/p&gt;

&lt;h2&gt;The Special Category Problem&lt;/h2&gt;

&lt;p&gt;The conflict sharpens with special category data. GDPR restricts processing under Article 9 to specific grounds. The EU AI Act under Article 10(5) permits retention of such data for bias detection but only where strictly necessary and where appropriate safeguards are in place.&lt;/p&gt;

&lt;p&gt;"Strictly necessary" is a high bar. You cannot retain sensitive demographic data simply because it might be useful. You need to demonstrate that the bias detection objective cannot be achieved using anonymised or aggregated data.&lt;/p&gt;

&lt;p&gt;In most cases, pseudonymisation provides a workable middle path. You retain the data structure necessary for bias analysis while reducing the re-identification risk that makes GDPR Article 9 processing so constrained.&lt;/p&gt;

&lt;p&gt;Pseudonymisation does not eliminate GDPR obligations. The data remains personal data for regulatory purposes. But it reduces the risk profile and strengthens the proportionality case.&lt;/p&gt;
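
&lt;p&gt;As one illustration of that middle path, a keyed hash can replace the direct identifier while the fields needed for bias analysis stay intact. A minimal sketch, assuming a secret key managed outside the dataset (for instance in a KMS); the field names are hypothetical:&lt;/p&gt;

```python
import hashlib
import hmac

# Hypothetical secret, stored separately from the dataset (e.g. in a KMS).
PSEUDONYM_KEY = b"replace-with-a-managed-secret"

def pseudonymise(record):
    """Replace the direct identifier with a keyed hash, keeping only the
    fields needed for bias analysis. The output remains personal data
    under GDPR; this reduces risk, it does not remove obligations."""
    token = hmac.new(PSEUDONYM_KEY, record["user_id"].encode(), hashlib.sha256).hexdigest()
    return {"pseudonym": token, "age_band": record["age_band"], "outcome": record["outcome"]}
```

&lt;p&gt;Because the mapping is keyed rather than plain-hashed, re-identification requires access to the key, which strengthens the proportionality case.&lt;/p&gt;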

&lt;h2&gt;Practical Steps&lt;/h2&gt;

&lt;p&gt;First, classify your AI systems by risk tier. High-risk systems under Annex III have active retention obligations that justify more extensive GDPR carve-outs. Limited-risk systems have a weaker case for extended retention.&lt;/p&gt;

&lt;p&gt;Second, map your data flows. Know which data feeds your training pipeline, which data populates your operational logs, and which data supports your post-market monitoring obligations under Article 72.&lt;/p&gt;

&lt;p&gt;Third, draft retention schedules that are regulation-specific. Your training data retention policy and your operational log policy should reference both GDPR and EU AI Act obligations explicitly, with legal bases cited for each.&lt;/p&gt;

&lt;p&gt;Fourth, build deletion into your conformity process. When a conformity assessment cycle ends, your deletion triggers should activate. Retention that outlasts its regulatory purpose becomes a GDPR liability.&lt;/p&gt;

&lt;p&gt;The organisations that handle this well are the ones treating data governance as infrastructure, not paperwork. The conflict between these two regulations is real, but it is navigable with deliberate documentation and proportionate retention design.&lt;/p&gt;

</description>
      <category>euaiact</category>
      <category>ai</category>
      <category>gdpr</category>
      <category>compliance</category>
    </item>
    <item>
      <title>What EU AI Act Compliance Actually Costs and Where the Money Goes</title>
      <dc:creator>AppZ</dc:creator>
      <pubDate>Fri, 24 Apr 2026 04:45:39 +0000</pubDate>
      <link>https://forem.com/appz_b0659e1ca24e36738948/what-eu-ai-act-compliance-actually-costs-and-where-the-money-goes-4d71</link>
      <guid>https://forem.com/appz_b0659e1ca24e36738948/what-eu-ai-act-compliance-actually-costs-and-where-the-money-goes-4d71</guid>
      <description>&lt;p&gt;Every conversation I have with a founder building AI products in Europe eventually comes around to the same question: what is this actually going to cost us?&lt;/p&gt;

&lt;p&gt;The EU AI Act is not a fine-on-paper regulation. It has teeth. And the compliance costs are real, spread across people, processes, and infrastructure that most startups have not budgeted for.&lt;/p&gt;

&lt;p&gt;Here is a breakdown of where the money actually goes.&lt;/p&gt;

&lt;h2&gt;The Legal Bill Comes First&lt;/h2&gt;

&lt;p&gt;Before you can comply, you need to understand what applies to you. That means legal counsel who actually knows the EU AI Act, not just GDPR specialists who have skimmed the summary.&lt;/p&gt;

&lt;p&gt;For a startup deploying a high-risk AI system under Annex III, expect to spend between 15,000 and 50,000 euros on initial legal scoping alone. That covers classification analysis, reviewing your data governance arrangements, and mapping your obligations across Articles 9 through 17.&lt;/p&gt;

&lt;p&gt;If you are a provider placing a system on the EU market and also acting as a deployer, that cost doubles because you are subject to two overlapping obligation sets.&lt;/p&gt;

&lt;h2&gt;Conformity Assessment Is Not Free&lt;/h2&gt;

&lt;p&gt;Article 43 requires a conformity assessment before a high-risk system goes live. For most categories, you can do this internally. But internally does not mean cheaply.&lt;/p&gt;

&lt;p&gt;You will need to produce technical documentation under Annex IV. That means logging your training data sources, validation methodology, accuracy metrics across demographic groups, and a full description of the system purpose and logic. A consultant with AI technical audit experience charges between 10,000 and 30,000 euros per engagement for this work.&lt;/p&gt;

&lt;p&gt;If your system falls under Annex III categories that require third-party notified body review, such as biometric categorisation or certain critical infrastructure applications, add another 20,000 to 80,000 euros for the external audit.&lt;/p&gt;

&lt;h2&gt;The Human Capital Cost Is Underestimated&lt;/h2&gt;

&lt;p&gt;The Act requires human oversight of high-risk systems by natural persons under Article 14. That oversight has to be real, documented, and defensible.&lt;/p&gt;

&lt;p&gt;That means hiring or retraining staff. A qualified AI compliance officer in the EU earns between 70,000 and 120,000 euros annually. If you do not have one, you will either hire one or rely on expensive external consultants for each review cycle.&lt;/p&gt;

&lt;p&gt;Technical staff also need upskilling. Your engineers need to understand prohibited practice boundaries, data minimisation requirements under Article 10, and logging obligations under Article 12. Training programmes for a team of 20 typically run 5,000 to 15,000 euros.&lt;/p&gt;

&lt;h2&gt;Infrastructure Adjustments Are Unavoidable&lt;/h2&gt;

&lt;p&gt;Article 12 mandates automatic logging of events during the operation of high-risk AI systems. If your current infrastructure does not capture decision-level logs with timestamps, input parameters, and output records, you need to build that capability.&lt;/p&gt;

&lt;p&gt;For most SaaS products, this means engineering work. Expect one to three months of developer time depending on complexity. At European contractor rates, that is 20,000 to 60,000 euros.&lt;/p&gt;
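
&lt;p&gt;The decision-level log described above can start as one structured record per output. A sketch under stated assumptions: the field names are illustrative rather than mandated by Article 12, and the sink could be any append-only store:&lt;/p&gt;

```python
import json
import uuid
from datetime import datetime, timezone

def log_decision(model_version, inputs, output, log_sink):
    """Append one decision-level record with enough context to reconstruct
    the event later. Field names are illustrative, not statutory."""
    record = {
        "event_id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "inputs": inputs,
        "output": output,
    }
    log_sink.append(json.dumps(record))
    return record
```

&lt;p&gt;The engineering cost is less in the record itself and more in retrofitting every decision path to emit it consistently.&lt;/p&gt;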

&lt;p&gt;You also need to ensure your training and validation data meets the requirements of Article 10. Data from sources that cannot demonstrate relevance, representativeness, and freedom from prohibited biases will need to be replaced or supplemented. Data procurement and cleaning at scale is a real cost that organisations routinely underestimate.&lt;/p&gt;

&lt;h2&gt;The Registration and Ongoing Obligations&lt;/h2&gt;

&lt;p&gt;Once you are compliant, you have ongoing obligations. High-risk systems must be registered in the EU database. That process requires accurate technical documentation and is not a one-time submission.&lt;/p&gt;

&lt;p&gt;Post-market monitoring under Article 72 requires a structured process for collecting and reviewing real-world performance data. If you make a substantial modification to the system, the conformity assessment process restarts.&lt;/p&gt;

&lt;p&gt;Annual compliance maintenance, including documentation updates, monitoring reviews, and retraining on regulatory changes, typically runs 15,000 to 40,000 euros per year for a mid-size organisation.&lt;/p&gt;

&lt;h2&gt;What This Adds Up To&lt;/h2&gt;

&lt;p&gt;For a startup deploying a single high-risk AI system in the EU, realistic first-year compliance costs range from 80,000 to 250,000 euros when you add legal, conformity, staffing, and infrastructure together. For an enterprise with multiple deployments across Annex III categories, total costs can exceed one million euros.&lt;/p&gt;

&lt;p&gt;These are not worst-case figures. They reflect what I am seeing in practice.&lt;/p&gt;

&lt;p&gt;The organisations that will control these costs are the ones that build compliance infrastructure once and reuse it across products, that document as they build rather than retrospectively, and that treat the technical documentation requirement as an engineering discipline rather than a legal afterthought.&lt;/p&gt;

&lt;p&gt;Compliance is expensive. But getting it wrong is more expensive. The fines under Article 99 reach 35 million euros or 7 percent of global annual turnover, whichever is higher. That math makes a robust compliance programme look cheap.&lt;/p&gt;

</description>
      <category>euaiact</category>
      <category>compliance</category>
      <category>ai</category>
      <category>regulation</category>
    </item>
    <item>
      <title>What Your DPO Needs to Know About the EU AI Act Before August 2026</title>
      <dc:creator>AppZ</dc:creator>
      <pubDate>Tue, 21 Apr 2026 02:26:53 +0000</pubDate>
      <link>https://forem.com/appz_b0659e1ca24e36738948/what-your-dpo-needs-to-know-about-the-eu-ai-act-before-august-2026-55lk</link>
      <guid>https://forem.com/appz_b0659e1ca24e36738948/what-your-dpo-needs-to-know-about-the-eu-ai-act-before-august-2026-55lk</guid>
      <description>&lt;p&gt;The EU AI Act is often talked about as a technology problem. It isn't. It's a documentation and governance problem — and that lands squarely on your Data Protection Officer.&lt;/p&gt;

&lt;p&gt;Here's what DPOs and compliance leads need to understand before the August 2026 enforcement deadline.&lt;/p&gt;

&lt;h2&gt;You are now responsible for AI systems, not just data&lt;/h2&gt;

&lt;p&gt;Under the EU AI Act, high-risk AI systems require a designated person accountable for compliance. If your organisation already has a DPO structure, you're likely the closest thing to that person. That means understanding risk classification, maintaining technical documentation, and being able to demonstrate conformity to regulators on demand.&lt;/p&gt;

&lt;h2&gt;Risk classification is the first hurdle — and it's harder than it sounds&lt;/h2&gt;

&lt;p&gt;The Act defines eight categories of high-risk AI (Annex III). Systems used in recruitment, credit scoring, education, critical infrastructure, law enforcement, biometric identification, and certain medical contexts are automatically high-risk. But the classification isn't always obvious. A tool your company uses for "internal HR" might qualify. A scoring model built into your CRM might qualify.&lt;/p&gt;

&lt;p&gt;If you can't classify your AI systems, you can't know what obligations apply to you.&lt;/p&gt;

&lt;h2&gt;What high-risk classification actually requires&lt;/h2&gt;

&lt;p&gt;If a system is classified as high-risk, your obligations include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A conformity assessment&lt;/li&gt;
&lt;li&gt;Technical documentation (Article 11)&lt;/li&gt;
&lt;li&gt;A risk management system (Article 9)&lt;/li&gt;
&lt;li&gt;Data governance requirements (Article 10)&lt;/li&gt;
&lt;li&gt;Logging and auditability (Article 12)&lt;/li&gt;
&lt;li&gt;Transparency requirements for users (Article 13)&lt;/li&gt;
&lt;li&gt;Human oversight mechanisms (Article 14)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is not a checkbox exercise. Regulators will expect you to demonstrate these in practice.&lt;/p&gt;

&lt;h2&gt;The documentation gap is where most organisations will fail&lt;/h2&gt;

&lt;p&gt;In practice, most companies have no centralised record of which AI systems they operate, let alone documentation that meets Article 11 requirements. DPOs who've built GDPR record-of-processing-activities (RoPA) frameworks will recognise this problem — the EU AI Act requires a similar inventory exercise, but for AI systems rather than data.&lt;/p&gt;

&lt;p&gt;Start with an AI system inventory. Map every tool, model, or automated decision system your organisation uses or provides. Then apply the risk classification criteria.&lt;/p&gt;
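
&lt;p&gt;The inventory-plus-screen step can start as a few lines of code. A sketch; the area labels are an abbreviated paraphrase of Annex III, not the statutory text, and a hit means "needs legal review", not a final classification:&lt;/p&gt;

```python
# First-pass Annex III screen for an AI system inventory.
# The area set is abbreviated and illustrative, not the full Annex III wording.
ANNEX_III_AREAS = {
    "recruitment", "credit_scoring", "education", "biometric_id",
    "critical_infrastructure", "law_enforcement", "medical",
}

def screen_system(name, use_areas):
    """Flag a system as potentially high-risk if any use area matches.
    A hit means 'escalate to legal review', not a final determination."""
    hits = ANNEX_III_AREAS.intersection(use_areas)
    return {"system": name, "potential_high_risk": bool(hits), "matched": sorted(hits)}
```

&lt;p&gt;Running every tool in the inventory through a screen like this surfaces the "internal HR" and CRM-scoring cases that are easy to miss.&lt;/p&gt;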

&lt;h2&gt;Practical starting point&lt;/h2&gt;

&lt;p&gt;ActComply (getactcomply.com) automates the classification step and generates draft Article 11 documentation. Free to try, no account required.&lt;/p&gt;

&lt;p&gt;The August 2026 deadline is not a soft launch. Enforcement powers are live from that date. DPOs who start now have time to do this properly. Those who wait until Q2 2026 won't.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Zac is the founder of ActComply, an EU AI Act compliance tool for technical teams and compliance professionals.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>compliance</category>
      <category>legal</category>
      <category>ai</category>
      <category>gdpr</category>
    </item>
    <item>
      <title>What Developers Need to Know About the EU AI Act Before August 2026</title>
      <dc:creator>AppZ</dc:creator>
      <pubDate>Mon, 20 Apr 2026 21:52:44 +0000</pubDate>
      <link>https://forem.com/appz_b0659e1ca24e36738948/what-developers-need-to-know-about-the-eu-ai-act-before-august-2026-33df</link>
      <guid>https://forem.com/appz_b0659e1ca24e36738948/what-developers-need-to-know-about-the-eu-ai-act-before-august-2026-33df</guid>
      <description>&lt;p&gt;If you're building AI systems that touch European users, the EU AI Act is no longer a future problem. Enforcement starts August 2, 2026, and the fines are serious — up to €35 million or 7% of global annual turnover, whichever is higher.&lt;/p&gt;

&lt;p&gt;Most developers are either ignoring it or assuming their legal team has it covered. Neither is a safe bet.&lt;/p&gt;

&lt;p&gt;Here's what you actually need to know.&lt;/p&gt;

&lt;h2&gt;What the EU AI Act actually is&lt;/h2&gt;

&lt;p&gt;The EU AI Act is a product safety regulation, not an ethics framework. Think of it like CE marking for software. If your AI system is deemed "high-risk," you need to document it, test it, monitor it post-deployment, and register it in an EU database before you can deploy it.&lt;/p&gt;

&lt;p&gt;It's not about whether your AI is "good" or "fair." It's about whether you can prove it is.&lt;/p&gt;

&lt;h2&gt;How risk tiers work&lt;/h2&gt;

&lt;p&gt;The Act splits AI systems into four buckets:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Prohibited&lt;/strong&gt; — banned outright. Real-time biometric surveillance in public spaces, social scoring systems, subliminal manipulation. If you're building these, stop.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;High-risk&lt;/strong&gt; — this is where most developers get caught out. Systems used in hiring, credit scoring, education, healthcare triage, law enforcement, critical infrastructure, and border control all fall here. If your product touches these sectors, you're likely high-risk.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Limited risk&lt;/strong&gt; — chatbots and deepfake generators. You mostly just need to tell users they're interacting with AI, or label content as AI-generated.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Minimal risk&lt;/strong&gt; — spam filters, AI in games. No specific obligations, just general good practice.&lt;/p&gt;
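
&lt;p&gt;The four buckets above boil down to a lookup from tier to headline obligations. A sketch that only encodes this article's summary, not the statutory wording:&lt;/p&gt;

```python
# Tier-to-obligations summary; paraphrased from the article, not the Act.
# Assigning the tier itself still requires the full Annex III analysis.
TIER_OBLIGATIONS = {
    "prohibited": ["do not build or deploy"],
    "high_risk": ["conformity assessment", "technical documentation",
                  "logging", "human oversight", "EU database registration"],
    "limited_risk": ["disclose AI interaction or AI-generated content"],
    "minimal_risk": ["general good practice only"],
}

def obligations_for(tier):
    """Return the headline obligations for a risk tier (KeyError if unknown)."""
    return TIER_OBLIGATIONS[tier]
```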

&lt;h2&gt;What high-risk actually requires from your team&lt;/h2&gt;

&lt;p&gt;If you're classified as high-risk, here's the technical checklist:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Risk management system&lt;/strong&gt; — documented throughout the development lifecycle, not just at launch&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Data governance&lt;/strong&gt; — training data must be relevant, representative, and free from errors that could cause bias&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Technical documentation&lt;/strong&gt; — detailed enough for a regulator to assess conformity&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Logging and audit trails&lt;/strong&gt; — automatic logs of operation so incidents can be reconstructed&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Transparency&lt;/strong&gt; — users must know they're interacting with AI and what it can and can't do&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Human oversight&lt;/strong&gt; — the system must be designed so humans can intervene, override, or shut it down&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Accuracy and robustness&lt;/strong&gt; — performance must be validated against adversarial inputs and edge cases&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;EU database registration&lt;/strong&gt; — before deployment, high-risk systems must be registered in the EU's public AI database&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;The timeline most teams are underestimating&lt;/h2&gt;

&lt;p&gt;August 2026 sounds far away until you realise the documentation work for a high-risk system typically takes 3 to 6 months. If you haven't started, you're already behind.&lt;/p&gt;

&lt;h2&gt;How to figure out if your system is high-risk&lt;/h2&gt;

&lt;p&gt;The classification logic in the Act is genuinely complex — it involves cross-referencing Annex III use cases with deployment context and the degree of human oversight. Most teams don't have in-house legal expertise to do this correctly.&lt;/p&gt;

&lt;p&gt;We built &lt;a href="https://www.getactcomply.com" rel="noopener noreferrer"&gt;ActComply&lt;/a&gt; to automate this. You describe your AI system, who it affects, and what sector it operates in, and it classifies you under the Act with exact article references in under 5 minutes. It then generates a compliance checklist and documentation templates specific to your risk tier.&lt;/p&gt;

&lt;p&gt;It won't replace a compliance lawyer for edge cases, but it'll tell you immediately whether you need one — and give you a solid starting point either way.&lt;/p&gt;

&lt;h2&gt;TL;DR&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;EU AI Act enforcement is August 2, 2026&lt;/li&gt;
&lt;li&gt;High-risk AI systems have serious documentation and monitoring requirements&lt;/li&gt;
&lt;li&gt;Classification is non-trivial and getting it wrong is expensive&lt;/li&gt;
&lt;li&gt;Start your compliance assessment now — the documentation pipeline is longer than you think&lt;/li&gt;
&lt;li&gt;Free tool to classify your system: &lt;a href="https://www.getactcomply.com" rel="noopener noreferrer"&gt;getactcomply.com&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Happy to answer questions in the comments about specific use cases or sectors.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>webdev</category>
      <category>security</category>
      <category>europe</category>
    </item>
  </channel>
</rss>
