<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Oleksandr Usenko</title>
    <description>The latest articles on Forem by Oleksandr Usenko (@oleksandr_usenko_8f82e83d).</description>
    <link>https://forem.com/oleksandr_usenko_8f82e83d</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3785208%2F4fd8a369-a652-454a-b247-9a91841f8842.jpg</url>
      <title>Forem: Oleksandr Usenko</title>
      <link>https://forem.com/oleksandr_usenko_8f82e83d</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/oleksandr_usenko_8f82e83d"/>
    <language>en</language>
    <item>
      <title>You Didn't Build the AI — You're Still Liable: Deployer Obligations Under the EU AI Act</title>
      <dc:creator>Oleksandr Usenko</dc:creator>
      <pubDate>Mon, 23 Feb 2026 12:43:03 +0000</pubDate>
      <link>https://forem.com/oleksandr_usenko_8f82e83d/you-didnt-build-the-ai-youre-still-liable-deployer-obligations-under-the-eu-ai-act-113p</link>
      <guid>https://forem.com/oleksandr_usenko_8f82e83d/you-didnt-build-the-ai-youre-still-liable-deployer-obligations-under-the-eu-ai-act-113p</guid>
      <description>&lt;h2&gt;
  
  
  The Uncomfortable Truth About Third-Party AI
&lt;/h2&gt;

&lt;p&gt;There is a widespread misconception circulating in European boardrooms right now: "We don't build AI, so the AI Act doesn't really apply to us."&lt;/p&gt;

&lt;p&gt;This is wrong, and it could be an expensive mistake.&lt;/p&gt;

&lt;p&gt;The EU AI Act distinguishes between &lt;strong&gt;providers&lt;/strong&gt; (those who develop or place AI systems on the market) and &lt;strong&gt;deployers&lt;/strong&gt; (those who use AI systems under their authority). If your company uses ChatGPT for customer support, Microsoft Copilot for document drafting, an AI-powered applicant tracking system for hiring, or any SaaS product with embedded AI features — you are a deployer. And deployers have real, enforceable obligations under the EU AI Act starting &lt;strong&gt;August 2, 2026&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;The penalty for getting this wrong? Up to €15 million or 3% of global annual turnover, whichever is higher.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Exactly Is a "Deployer"?
&lt;/h2&gt;

&lt;p&gt;Article 3(4) of the EU AI Act defines a deployer as any natural or legal person that uses an AI system under its authority, except where the AI system is used in the course of a personal non-professional activity.&lt;/p&gt;

&lt;p&gt;This definition is deliberately broad. You become a deployer the moment your organization uses an AI system in a professional context — regardless of whether you built it, bought it, or simply signed up for a free tier.&lt;/p&gt;

&lt;p&gt;Some examples that make companies deployers without them realizing it:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Using AI-powered recruitment tools&lt;/strong&gt; like HireVue, Pymetrics, or any ATS with AI screening — this is likely high-risk AI under Annex III&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Deploying AI chatbots&lt;/strong&gt; for customer service, even if they run on GPT-4 or Claude under the hood&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Using AI credit scoring&lt;/strong&gt; or fraud detection tools provided by your fintech vendor&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Running AI-assisted medical diagnostics&lt;/strong&gt; through third-party software in clinical settings&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Applying AI for employee monitoring&lt;/strong&gt; or performance evaluation through HR platforms&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In each case, the AI provider has their own set of obligations. But that does not eliminate yours.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Five Core Deployer Obligations
&lt;/h2&gt;

&lt;p&gt;Here is what the EU AI Act actually requires from deployers of high-risk AI systems. These are not suggestions — they are legally binding requirements.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Use the System According to Instructions
&lt;/h3&gt;

&lt;p&gt;Article 26(1) requires deployers to use high-risk AI systems in accordance with the instructions of use provided by the provider. This sounds trivial, but it has teeth.&lt;/p&gt;

&lt;p&gt;If your provider's documentation says their AI hiring tool should only be used as a screening aid with human review of all decisions, and your HR team starts auto-rejecting candidates based solely on AI scores — you are in violation. The provider documented the intended use. You deviated from it. The liability is yours.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Practical step:&lt;/strong&gt; Obtain and actually read the instructions of use for every AI system your organization deploys. Store them centrally. Ensure the people operating these systems know the documented boundaries.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Assign Human Oversight
&lt;/h3&gt;

&lt;p&gt;Article 26(2) requires deployers to assign human oversight of high-risk AI systems to individuals who have the competence, training, and authority to fulfil that role.&lt;/p&gt;

&lt;p&gt;This means someone in your organization must be specifically responsible for monitoring each high-risk AI system's operation. That person needs to understand how the system works at a functional level, be trained on what to watch for, and have the actual authority to override or stop the system when something goes wrong.&lt;/p&gt;

&lt;p&gt;A common failure mode: companies designate a junior employee who technically "monitors" the AI system but has no authority to intervene when it produces questionable outputs. This does not satisfy the requirement.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Practical step:&lt;/strong&gt; For each high-risk AI system, formally document who provides human oversight. Verify they have appropriate training and decision-making authority. This should be part of your &lt;a href="https://aktai.eu/blog/ai-governance-framework-smbs" rel="noopener noreferrer"&gt;AI governance framework&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Ensure Input Data Quality
&lt;/h3&gt;

&lt;p&gt;Article 26(4) states that deployers who exercise control over the input data must ensure that input data is relevant and sufficiently representative in view of the intended purpose of the high-risk AI system.&lt;/p&gt;

&lt;p&gt;If your AI credit scoring tool takes customer data that you collect and manage, you are responsible for the quality and representativeness of that data. Feeding the system incomplete, biased, or outdated data is your liability, not the provider's.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Practical step:&lt;/strong&gt; Audit the data pipelines feeding your high-risk AI systems. Document what data goes in, where it comes from, how it is cleaned, and whether it adequately represents the population the system will affect.&lt;/p&gt;
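
&lt;p&gt;To make this concrete, below is a minimal sketch of a representativeness check that could run before a batch of input data reaches a high-risk system. The field names and the 0.8 tolerance are illustrative assumptions, not values prescribed by the Act:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;from collections import Counter

def representativeness_gaps(records, attribute, reference_shares, tolerance=0.8):
    # records: list of dicts feeding the AI system (e.g. loan applications)
    # reference_shares: expected share per group, e.g. {"female": 0.51, "male": 0.49}
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values()) or 1
    gaps = {}
    for group, expected in reference_shares.items():
        observed = counts.get(group, 0) / total
        # Under-represented if the observed share falls well below the expected one.
        if observed &amp;lt; expected * tolerance:
            gaps[group] = {"observed": round(observed, 3), "expected": expected}
    return gaps

# Example: flag gaps before a batch is sent to a credit-scoring system.
batch = [{"gender": "male"}] * 90 + [{"gender": "female"}] * 10
print(representativeness_gaps(batch, "gender", {"female": 0.51, "male": 0.49}))
&lt;/code&gt;&lt;/pre&gt;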

&lt;h3&gt;
  
  
  4. Monitor Operation and Report Issues
&lt;/h3&gt;

&lt;p&gt;Article 26(5) requires deployers to monitor the operation of high-risk AI systems based on the instructions of use and, when relevant, inform providers or distributors about serious incidents or malfunctions.&lt;/p&gt;

&lt;p&gt;You cannot simply deploy an AI system and forget about it. You need ongoing monitoring — checking whether the system performs as expected, whether its outputs remain within acceptable bounds, and whether any patterns suggest drift, bias, or malfunction.&lt;/p&gt;

&lt;p&gt;When something goes wrong, you have a legal obligation to report it. If you discover a serious incident — meaning an incident that results in the death of a person or serious harm to their health, a serious and irreversible disruption of critical infrastructure, an infringement of Union-law obligations protecting fundamental rights, or serious harm to property or the environment — you must report it to both the provider and the relevant market surveillance authority. The timeline is tight: you must act as soon as you establish a causal link between the AI system and the incident.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Practical step:&lt;/strong&gt; Establish a monitoring protocol for each high-risk AI system. Define what "normal operation" looks like and what triggers an investigation. Create an &lt;a href="https://aktai.eu/blog/eu-ai-act-serious-incident-reporting-playbook" rel="noopener noreferrer"&gt;incident reporting process&lt;/a&gt; before you need one.&lt;/p&gt;
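
&lt;p&gt;A monitoring protocol can start simply: define numeric bounds for "normal operation" and flag anything outside them for investigation. The sketch below uses hypothetical metrics and thresholds for an AI screening tool; calibrate them against your own baseline:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;from dataclasses import dataclass

@dataclass
class Bound:
    metric: str
    low: float
    high: float

# Illustrative bounds; derive real ones from the system's documented baseline.
NORMAL_OPERATION = [
    Bound("auto_reject_rate", 0.05, 0.30),
    Bound("mean_score", 0.40, 0.70),
]

def check_weekly_metrics(metrics):
    # metrics: dict of metric name to observed weekly value
    alerts = []
    for b in NORMAL_OPERATION:
        value = metrics.get(b.metric)
        if value is None or not (b.low &amp;lt;= value &amp;lt;= b.high):
            alerts.append(f"{b.metric}={value} outside [{b.low}, {b.high}]: investigate")
    return alerts

print(check_weekly_metrics({"auto_reject_rate": 0.41, "mean_score": 0.55}))
# -&amp;gt; ['auto_reject_rate=0.41 outside [0.05, 0.3]: investigate']
&lt;/code&gt;&lt;/pre&gt;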

&lt;h3&gt;
  
  
  5. Conduct a Fundamental Rights Impact Assessment
&lt;/h3&gt;

&lt;p&gt;Article 27 requires certain deployers — bodies governed by public law, private entities providing public services, and deployers of certain Annex III systems such as credit scoring and insurance risk pricing — to carry out a fundamental rights impact assessment (FRIA) before deploying high-risk AI systems.&lt;/p&gt;

&lt;p&gt;Even if your organization is not legally required to conduct a FRIA, doing one is good practice. It forces you to think systematically about who your AI systems affect, what could go wrong, and what safeguards are in place. Regulators will likely view a voluntary FRIA favorably if questions arise about your compliance posture.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Practical step:&lt;/strong&gt; Use the &lt;a href="https://aktai.eu/blog/ai-risk-assessment-guide" rel="noopener noreferrer"&gt;risk assessment frameworks&lt;/a&gt; relevant to your sector. Document your assessment and keep it updated as your use of the AI system evolves.&lt;/p&gt;

&lt;h2&gt;
  
  
  Transparency Obligations Apply to All Deployers
&lt;/h2&gt;

&lt;p&gt;Even if your AI systems are not classified as high-risk, you still have transparency obligations under Article 50.&lt;/p&gt;

&lt;p&gt;If you deploy an AI system that interacts directly with people — a chatbot, a virtual assistant, an AI phone agent — you must inform those people that they are interacting with an AI system. The exception is only when this is "obvious from the circumstances and the context of use," which is a higher bar than most companies assume.&lt;/p&gt;

&lt;p&gt;If you use AI to generate or manipulate images, audio, or video content (deepfakes or synthetic media), you must disclose that the content was AI-generated. This applies even to marketing materials that use AI-generated imagery.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Practical step:&lt;/strong&gt; Audit every customer-facing AI touchpoint. Implement clear, visible disclosures. "This conversation is powered by AI" is a reasonable starting point for chatbots.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Supply Chain Problem
&lt;/h2&gt;

&lt;p&gt;Here is where things get genuinely difficult for deployers: you are dependent on your AI providers for much of the information you need to fulfil your own obligations.&lt;/p&gt;

&lt;p&gt;To comply with your deployer obligations, you need:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Instructions of use from the provider&lt;/li&gt;
&lt;li&gt;Information about the system's intended purpose and limitations&lt;/li&gt;
&lt;li&gt;Details about the training data (at least at a high level)&lt;/li&gt;
&lt;li&gt;Technical documentation sufficient to understand how the system works&lt;/li&gt;
&lt;li&gt;Information about the system's accuracy, robustness, and cybersecurity measures&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The AI Act places obligations on providers to supply this information. But in practice, many AI vendors — especially larger ones — are still working out how to deliver this documentation in a usable format. Some SaaS providers with embedded AI features may not even have classified their systems under the AI Act yet.&lt;/p&gt;

&lt;p&gt;This creates a real problem: your compliance depends partly on your vendors' compliance.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What to do about it:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Audit your vendor contracts now.&lt;/strong&gt; Do they include AI Act compliance commitments? If not, negotiate amendments before August 2026.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Request documentation proactively.&lt;/strong&gt; Ask vendors for their AI Act compliance documentation, intended use descriptions, and risk classifications. If they cannot provide it, that is a red flag.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Build contract language&lt;/strong&gt; that requires vendors to notify you of material changes to their AI systems, provide updated documentation, and cooperate in the event of regulatory inquiries.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Have a contingency plan.&lt;/strong&gt; If a critical vendor cannot demonstrate compliance, you need to know your alternatives. Switching AI tools under regulatory pressure is far worse than planning ahead.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  The AI Literacy Requirement Hits Deployers Too
&lt;/h2&gt;

&lt;p&gt;Article 4 of the EU AI Act — which has been enforceable since February 2025 — requires deployers to ensure that their staff and other persons dealing with the operation and use of AI systems on their behalf have a sufficient level of AI literacy.&lt;/p&gt;

&lt;p&gt;This is not a vague aspiration. It is an enforceable obligation, and it already applies today.&lt;/p&gt;

&lt;p&gt;For deployers, AI literacy means your people need to understand what the AI systems they work with actually do, what their limitations are, and how to interpret their outputs appropriately. A recruiter using an AI screening tool needs to understand that the tool's scores are probabilistic, not deterministic. A loan officer using AI credit scoring needs to understand that the system can produce biased outcomes if not properly monitored.&lt;/p&gt;

&lt;p&gt;Generic "AI awareness" training does not satisfy this requirement. The training needs to be tailored to the specific AI systems your people use and the context in which they use them.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Practical step:&lt;/strong&gt; Build an &lt;a href="https://aktai.eu/blog/ai-literacy-training-article-4" rel="noopener noreferrer"&gt;AI literacy program&lt;/a&gt; tied to the specific AI tools your organization deploys. Document the training, track completion, and update it when systems change.&lt;/p&gt;

&lt;h2&gt;
  
  
  Five Months Left: What to Do Now
&lt;/h2&gt;

&lt;p&gt;The August 2, 2026 deadline is approximately five months away. If your organization has not started addressing deployer obligations, here is a practical prioritization:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Month 1: Inventory&lt;/strong&gt;&lt;br&gt;
Complete an &lt;a href="https://aktai.eu/blog/ai-systems-inventory-guide" rel="noopener noreferrer"&gt;AI systems inventory&lt;/a&gt;. Every AI system your organization uses, including embedded AI in SaaS tools. Classify each by risk level.&lt;/p&gt;
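
&lt;p&gt;A minimal sketch of what each inventory record might capture, with illustrative fields (the risk classification itself still requires proper legal analysis):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;from dataclasses import dataclass

@dataclass
class AISystemRecord:
    name: str                 # e.g. "Vendor ATS screening module"
    vendor: str
    business_use: str         # what decisions the system supports
    risk_tier: str            # "high", "limited", or "minimal" per your analysis
    oversight_owner: str      # person accountable for human oversight
    instructions_of_use: str  # where the provider documentation is stored

inventory = [
    AISystemRecord(
        name="Customer support chatbot",
        vendor="ExampleVendor",
        business_use="First-line customer questions",
        risk_tier="limited",
        oversight_owner="Head of Support",
        instructions_of_use="shared-drive/ai/chatbot-instructions.pdf",
    ),
]
high_risk = [r for r in inventory if r.risk_tier == "high"]
&lt;/code&gt;&lt;/pre&gt;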

&lt;p&gt;&lt;strong&gt;Month 2: Gap analysis&lt;/strong&gt;&lt;br&gt;
For each high-risk system, map your current practices against the five deployer obligations above. Identify the gaps.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Month 3: Vendor engagement&lt;/strong&gt;&lt;br&gt;
Contact every AI vendor supplying high-risk or significant AI systems. Request their compliance documentation. Begin contract negotiations where needed.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Month 4: Implementation&lt;/strong&gt;&lt;br&gt;
Implement human oversight assignments, monitoring protocols, incident reporting processes, and data quality procedures for high-risk systems. Roll out AI literacy training.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Month 5: Verification&lt;/strong&gt;&lt;br&gt;
Test your processes. Run a tabletop exercise simulating a regulatory inquiry. Verify your documentation is complete and accessible. Conduct an internal &lt;a href="https://aktai.eu/blog/ai-compliance-checklist-guide" rel="noopener noreferrer"&gt;compliance check&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;This timeline is tight but achievable for organizations that commit resources to it. If you have already started, you are ahead of the majority — &lt;a href="https://ai2.work/economics/eu-ai-act-high-risk-rules-hit-august-2026-your-compliance-countdown/" rel="noopener noreferrer"&gt;research suggests only 18% of organizations have fully implemented AI governance frameworks&lt;/a&gt; despite widespread operational AI use.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Bottom Line
&lt;/h2&gt;

&lt;p&gt;Using third-party AI does not insulate you from regulatory responsibility. The EU AI Act was deliberately designed to create accountability across the entire AI value chain — from the company that builds the model to the organization that deploys it to make decisions affecting real people.&lt;/p&gt;

&lt;p&gt;The good news is that deployer obligations, while real, are manageable. They boil down to: know what AI you use, use it properly, watch it carefully, keep records, and make sure your people understand what they are working with.&lt;/p&gt;

&lt;p&gt;The bad news is that "we assumed our vendor handled all of that" will not be an acceptable answer when a market surveillance authority comes asking questions.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Need help mapping your deployer obligations? &lt;a href="https://aktai.eu/?utm_source=blog&amp;amp;utm_medium=content&amp;amp;utm_campaign=deployer-obligations" rel="noopener noreferrer"&gt;AktAI&lt;/a&gt; helps organizations identify, classify, and manage their AI compliance requirements — including the deployer-specific obligations that most compliance tools overlook. Start with a free assessment.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>compliance</category>
      <category>europe</category>
      <category>saas</category>
    </item>
    <item>
      <title>EU AI Act Serious Incident Reporting: A Practical 15-Day Playbook</title>
      <dc:creator>Oleksandr Usenko</dc:creator>
      <pubDate>Sun, 22 Feb 2026 17:01:16 +0000</pubDate>
      <link>https://forem.com/oleksandr_usenko_8f82e83d/eu-ai-act-serious-incident-reporting-a-practical-15-day-playbook-1h37</link>
      <guid>https://forem.com/oleksandr_usenko_8f82e83d/eu-ai-act-serious-incident-reporting-a-practical-15-day-playbook-1h37</guid>
      <description>&lt;p&gt;When teams discuss EU AI Act readiness, they usually focus on classification, documentation, and conformity assessment. But one of the hardest obligations in practice is operational: &lt;strong&gt;what happens when something goes wrong in production&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Article 73 requires providers of high-risk AI systems to report serious incidents to market surveillance authorities, with strict time windows (up to 15 days, and faster in severe cases). For many organizations, that is not mainly a legal drafting issue. It is a detection, escalation, and evidence-preservation issue.&lt;/p&gt;

&lt;p&gt;This guide is a practical playbook for building a reporting process that works under pressure.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Article 73 Actually Requires (in plain terms)
&lt;/h2&gt;

&lt;p&gt;Based on the legal text and the European Commission's AI Act Service Desk publication, the main requirements are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Report serious incidents to the market surveillance authority in the Member State where the incident occurred.&lt;/li&gt;
&lt;li&gt;Report immediately after establishing a causal link (or reasonable likelihood), and no later than &lt;strong&gt;15 days&lt;/strong&gt; after awareness.&lt;/li&gt;
&lt;li&gt;Report within &lt;strong&gt;2 days&lt;/strong&gt; for a widespread infringement or a serious incident involving critical infrastructure.&lt;/li&gt;
&lt;li&gt;Report within &lt;strong&gt;10 days&lt;/strong&gt; in case of death.&lt;/li&gt;
&lt;li&gt;Initial incomplete reports are allowed, followed by complete reporting.&lt;/li&gt;
&lt;li&gt;Investigate without delay, cooperate with authorities, and avoid altering evidence in ways that could compromise later evaluation.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Sources:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;European Commission AI Act Service Desk, Article 73: &lt;a href="https://ai-act-service-desk.ec.europa.eu/en/ai-act/article-73" rel="noopener noreferrer"&gt;https://ai-act-service-desk.ec.europa.eu/en/ai-act/article-73&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Consolidated AI Act text (Article 73): &lt;a href="https://artificialintelligenceact.eu/article/73/" rel="noopener noreferrer"&gt;https://artificialintelligenceact.eu/article/73/&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Why companies miss deadlines (even when they know the law)
&lt;/h2&gt;

&lt;p&gt;In preparation audits, the same operational gaps appear repeatedly:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;No incident taxonomy for AI-specific harm&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Security teams have incident classes; product teams have bug severity; legal has regulatory risk. They are not aligned.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;No trigger owner&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Teams debate whether the issue is “serious enough” while the reporting clock is running.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Evidence gets overwritten&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Logs are rotated, models are hot-patched, or prompts are not preserved in reproducible form.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Cross-border confusion&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Teams do not know which Member State authority should receive first notice when users are in multiple countries.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Provider vs deployer ambiguity&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Contracts do not clearly define who notifies and who supplies supporting facts.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The result is predictable: delayed notification, incomplete documentation, and inconsistent narratives across legal, engineering, and customer-facing teams.&lt;/p&gt;

&lt;h2&gt;
  
  
  A realistic 15-day playbook
&lt;/h2&gt;

&lt;p&gt;Below is a pragmatic sequence you can implement now.&lt;/p&gt;

&lt;h3&gt;
  
  
  Day 0 (first awareness): stabilize and preserve
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Objective:&lt;/strong&gt; avoid evidence loss and establish accountable ownership.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Appoint an incident lead (name, backup, and on-call path).&lt;/li&gt;
&lt;li&gt;Open a dedicated incident record with immutable timestamps.&lt;/li&gt;
&lt;li&gt;Preserve relevant artifacts immediately:

&lt;ul&gt;
&lt;li&gt;model version and configuration&lt;/li&gt;
&lt;li&gt;prompts/inputs and outputs (where lawful and available)&lt;/li&gt;
&lt;li&gt;human-oversight actions and overrides&lt;/li&gt;
&lt;li&gt;logs, alerts, user complaints, and rollback actions&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Freeze non-essential changes to affected components until legal/technical review confirms safe handling.&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;A critical principle: do not wait for conclusive proof of causation before opening the case. You can submit an initial incomplete report and complete it later if needed.&lt;/p&gt;
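
&lt;p&gt;Evidence preservation can start with something as simple as snapshotting artifacts with content hashes and UTC timestamps, so any later change is detectable. A minimal sketch; the paths and manifest format are assumptions:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import hashlib
import json
import pathlib
from datetime import datetime, timezone

def preserve(artifact_paths, manifest_path="incident_manifest.json"):
    # Record a SHA-256 hash and capture time for each artifact at first awareness.
    entries = []
    for p in artifact_paths:
        data = pathlib.Path(p).read_bytes()
        entries.append({
            "path": p,
            "sha256": hashlib.sha256(data).hexdigest(),
            "captured_at_utc": datetime.now(timezone.utc).isoformat(),
        })
    pathlib.Path(manifest_path).write_text(json.dumps(entries, indent=2))
    return entries

# Example: lock the model config and the relevant log slice.
# preserve(["model_config.yaml", "logs/2026-02-22.log"])
&lt;/code&gt;&lt;/pre&gt;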

&lt;h3&gt;
  
  
  Day 1–2: classify severity and determine the reporting clock
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Objective:&lt;/strong&gt; determine if Article 73 thresholds are likely met.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Apply a pre-defined seriousness matrix (harm impact × scale × reversibility × affected rights/safety).&lt;/li&gt;
&lt;li&gt;Decide whether the incident could fit:

&lt;ul&gt;
&lt;li&gt;standard serious incident (up to 15 days)&lt;/li&gt;
&lt;li&gt;widespread/severe category (2 days)&lt;/li&gt;
&lt;li&gt;death-related incident (10 days)&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Record rationale for classification decisions, including dissenting views.&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;If a threshold is plausibly met, prepare a draft notification immediately. Over-reporting with clear uncertainty notes is generally safer than late reporting after internal debate.&lt;/p&gt;
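
&lt;p&gt;It also helps to encode the reporting clocks directly, so nobody recomputes deadlines under pressure. A sketch of the Article 73 windows listed earlier, anchored to the awareness date:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;from datetime import date, timedelta

# Reporting windows from Article 73, keyed by incident category.
WINDOWS_DAYS = {
    "widespread_or_critical_infrastructure": 2,
    "death": 10,
    "serious_incident": 15,
}

def reporting_deadline(category, awareness_date):
    return awareness_date + timedelta(days=WINDOWS_DAYS[category])

print(reporting_deadline("death", date(2026, 2, 22)))  # 2026-03-04
&lt;/code&gt;&lt;/pre&gt;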

&lt;h3&gt;
  
  
  Day 2–5: submit initial notification (if needed)
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Objective:&lt;/strong&gt; meet timing while facts are still emerging.&lt;/p&gt;

&lt;p&gt;A strong initial report should include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;system identity and intended purpose&lt;/li&gt;
&lt;li&gt;incident date/time window and discovery path&lt;/li&gt;
&lt;li&gt;known/potential harm profile&lt;/li&gt;
&lt;li&gt;affected geography / Member State context&lt;/li&gt;
&lt;li&gt;current containment actions&lt;/li&gt;
&lt;li&gt;known data gaps and expected follow-up timeline&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Article 73 explicitly allows incomplete initial reporting, provided follow-up is timely and substantive.&lt;/p&gt;
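
&lt;p&gt;Since incomplete initial reporting is permitted, it helps to template the submission so known gaps are stated explicitly rather than silently omitted. A sketch with illustrative field names; the actual authority forms will differ:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;initial_report = {
    "system": {"name": "Example scoring system",
               "intended_purpose": "credit risk scoring for consumer loans"},
    "incident_window": {"start": "2026-02-20T09:00Z", "end": "2026-02-20T17:30Z"},
    "discovery_path": "customer complaint escalated by support",
    "harm_profile": "potential wrongful denials; no physical harm identified",
    "member_states_affected": ["DE", "FR"],
    "containment_actions": ["feature disabled", "manual review enabled"],
    "known_gaps": ["affected-user count pending", "root cause unconfirmed"],
    "follow_up_due": "2026-03-01",
}
&lt;/code&gt;&lt;/pre&gt;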

&lt;h3&gt;
  
  
  Day 5–10: investigate and update
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Objective:&lt;/strong&gt; move from hypothesis to defensible cause analysis.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Run technical root-cause analysis (data drift, edge case, oversight failure, integration issue, etc.).&lt;/li&gt;
&lt;li&gt;Quantify affected population and confidence intervals.&lt;/li&gt;
&lt;li&gt;Validate whether corrective actions reduce recurrence risk.&lt;/li&gt;
&lt;li&gt;Coordinate messaging so regulatory, legal, and customer communications are factually consistent.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If death-related criteria apply, ensure the 10-day deadline is handled as a hard stop.&lt;/p&gt;

&lt;h3&gt;
  
  
  Day 10–15: complete report and corrective plan
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Objective:&lt;/strong&gt; provide authorities with decision-grade information.&lt;/p&gt;

&lt;p&gt;Final reporting package should include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;incident chronology&lt;/li&gt;
&lt;li&gt;causal analysis and uncertainty boundaries&lt;/li&gt;
&lt;li&gt;corrective and preventive actions (CAPA)&lt;/li&gt;
&lt;li&gt;deployment restrictions or temporary suspension decisions&lt;/li&gt;
&lt;li&gt;monitoring commitments and review cadence&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Think of this not as a one-off filing, but as the first artifact in your post-market compliance trail.&lt;/p&gt;

&lt;h2&gt;
  
  
  Provider–deployer split: define it before incident day
&lt;/h2&gt;

&lt;p&gt;Many AI products involve one company as provider and another as deployer. Under pressure, this becomes a bottleneck unless contracts are explicit.&lt;/p&gt;

&lt;p&gt;Minimum contractual clauses to include now:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;who files Article 73 notification&lt;/li&gt;
&lt;li&gt;max handoff time for incident facts (for example, 24 hours)&lt;/li&gt;
&lt;li&gt;evidence retention responsibilities&lt;/li&gt;
&lt;li&gt;contact points for regulatory follow-up&lt;/li&gt;
&lt;li&gt;authority for emergency suspension / kill switch decisions&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Without these clauses, your legal position may be clear on paper but unworkable in operations.&lt;/p&gt;

&lt;h2&gt;
  
  
  The evidence package regulators usually expect
&lt;/h2&gt;

&lt;p&gt;No two investigations are identical, but teams should prepare to produce:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;versioned model and release history&lt;/li&gt;
&lt;li&gt;validation/testing evidence relevant to failure mode&lt;/li&gt;
&lt;li&gt;human oversight procedures and operator logs&lt;/li&gt;
&lt;li&gt;incident triage records and decision timestamps&lt;/li&gt;
&lt;li&gt;corrective-action verification data&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you cannot reconstruct what happened from your own records, authorities will assume your controls are weaker than your policy documents claim.&lt;/p&gt;

&lt;h2&gt;
  
  
  Practical controls that reduce reporting risk
&lt;/h2&gt;

&lt;p&gt;You do not need a huge compliance program to improve immediately. Start with these controls:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;AI-specific incident rubric&lt;/strong&gt; integrated into existing security/ops workflows.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;72-hour internal escalation SLA&lt;/strong&gt; from first signal to legal/compliance review.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Evidence lock protocol&lt;/strong&gt; for logs, prompts, and model artifacts.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Pre-mapped authority list&lt;/strong&gt; by country and product footprint.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Quarterly tabletop exercise&lt;/strong&gt; for one realistic serious-incident scenario.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;These five controls usually deliver a larger risk reduction than producing another policy PDF.&lt;/p&gt;

&lt;h2&gt;
  
  
  What to do this week (if you are behind)
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Name your incident lead and backup.&lt;/li&gt;
&lt;li&gt;Create a one-page Article 73 trigger matrix.&lt;/li&gt;
&lt;li&gt;Test evidence preservation on one live model.&lt;/li&gt;
&lt;li&gt;Review provider/deployer language in top customer contracts.&lt;/li&gt;
&lt;li&gt;Run one 60-minute simulation with legal + engineering + support.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If these actions feel basic, that is the point. Basic execution under time pressure is what prevents deadline failure.&lt;/p&gt;

&lt;h2&gt;
  
  
  A sober note on enforcement reality
&lt;/h2&gt;

&lt;p&gt;The AI Act is often discussed as future-state compliance. Incident reporting is different: it is measured against actual operational behavior during real harm events.&lt;/p&gt;

&lt;p&gt;Teams that perform well are not the ones with the longest policies. They are the ones with clear ownership, preserved evidence, and repeatable cross-functional response.&lt;/p&gt;

&lt;p&gt;If you are already preparing for conformity assessment and post-market monitoring, this playbook should be part of that same system, not a separate legal checklist.&lt;/p&gt;




&lt;p&gt;If you want a structured way to track incident-readiness controls and documentation gaps, you can use &lt;a href="https://aktai.eu/dashboard" rel="noopener noreferrer"&gt;AktAI's dashboard&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Related guides:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://aktai.eu/blog/eu-ai-act-enforcement-2026" rel="noopener noreferrer"&gt;https://aktai.eu/blog/eu-ai-act-enforcement-2026&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://aktai.eu/blog/ai-risk-assessment-guide" rel="noopener noreferrer"&gt;https://aktai.eu/blog/ai-risk-assessment-guide&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://aktai.eu/blog/compliance-documentation-best-practices" rel="noopener noreferrer"&gt;https://aktai.eu/blog/compliance-documentation-best-practices&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>ai</category>
      <category>compliance</category>
      <category>europe</category>
      <category>saas</category>
    </item>
    <item>
      <title>EU AI Act Conformity Assessment: A Practical Guide for High-Risk AI Providers</title>
      <dc:creator>Oleksandr Usenko</dc:creator>
      <pubDate>Sun, 22 Feb 2026 16:33:05 +0000</pubDate>
      <link>https://forem.com/oleksandr_usenko_8f82e83d/eu-ai-act-conformity-assessment-a-practical-guide-for-high-risk-ai-providers-2kj7</link>
      <guid>https://forem.com/oleksandr_usenko_8f82e83d/eu-ai-act-conformity-assessment-a-practical-guide-for-high-risk-ai-providers-2kj7</guid>
      <description>&lt;h2&gt;
  
  
  The Obligation Nobody Talks About Enough
&lt;/h2&gt;

&lt;p&gt;Ask most businesses about EU AI Act compliance and they mention risk classification, technical documentation, and transparency requirements. Far fewer mention the step that legally gates everything else: the &lt;strong&gt;conformity assessment&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Under Article 43, providers of high-risk AI systems cannot legally place their systems on the EU market — or put them into service — without completing a conformity assessment first. It is not something you do after launch. It is not a checkbox you add to your release checklist. It is a prerequisite, and as of August 2, 2026, it is enforceable.&lt;/p&gt;

&lt;p&gt;This guide explains what a conformity assessment actually involves, what it produces, when you need a third party and when you can self-assess, and where most organizations are getting it wrong.&lt;/p&gt;

&lt;h2&gt;
  
  
  First: Are You a Provider?
&lt;/h2&gt;

&lt;p&gt;Before worrying about conformity assessment procedures, confirm that you are actually a provider in the legal sense.&lt;/p&gt;

&lt;p&gt;The EU AI Act defines &lt;strong&gt;providers&lt;/strong&gt; as entities that develop an AI system and place it on the market or put it into service under their own name or trademark. If you build an AI-powered product and sell it, license it, or deploy it commercially, you are almost certainly a provider. The same applies if you significantly modify an AI system from another source — you may inherit the provider's obligations.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Deployers&lt;/strong&gt; — organizations that use AI systems built by others — have a different, narrower set of obligations. They must perform fundamental rights impact assessments for certain use cases, maintain human oversight, register in some circumstances, and cooperate with providers. But the heavy conformity machinery falls primarily on the provider.&lt;/p&gt;

&lt;p&gt;If you are a business deploying a third-party AI product (say, an HR vendor's recruitment screening tool), make sure your vendor has completed the conformity assessment. Request a copy of their EU declaration of conformity. If they cannot provide one, treat that as a significant red flag — they are placing a non-compliant system in front of you.&lt;/p&gt;

&lt;h2&gt;
  
  
  What the Conformity Assessment Is Actually For
&lt;/h2&gt;

&lt;p&gt;The conformity assessment is the formal mechanism by which a provider demonstrates that their high-risk AI system meets all the requirements of Chapter III, Section 2 of the AI Act. This includes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Article 9&lt;/strong&gt;: A risk management system covering the full AI lifecycle&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Article 10&lt;/strong&gt;: Data governance requirements for training, validation, and testing datasets&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Article 11&lt;/strong&gt;: Technical documentation that would allow an authority to reconstruct and assess the system&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Article 12&lt;/strong&gt;: Logging and record-keeping capabilities built into the system&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Article 13&lt;/strong&gt;: Transparency and information provision to deployers&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Article 14&lt;/strong&gt;: Human oversight mechanisms designed into the system&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Article 15&lt;/strong&gt;: Accuracy, robustness, and cybersecurity requirements&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Meeting these requirements is one thing. The conformity assessment is how you &lt;em&gt;prove&lt;/em&gt; you meet them — with documentation, testing records, and a formal declaration.&lt;/p&gt;

&lt;h2&gt;
  
  
  Two Paths: Self-Assessment vs. Third-Party
&lt;/h2&gt;

&lt;p&gt;The AI Act provides two routes to conformity assessment, and which one you must take depends on what type of high-risk AI system you have.&lt;/p&gt;

&lt;h3&gt;
  
  
  Internal Control (Annex VI) — Self-Assessment
&lt;/h3&gt;

&lt;p&gt;For most high-risk AI systems, providers can conduct an internal conformity assessment following the procedure in Annex VI. This is self-assessment — no notified body involvement required. You build and document the evidence yourself, then sign a declaration of conformity.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Do not mistake "self-assessment" for "informal assessment."&lt;/strong&gt; Annex VI requires you to:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Document that your quality management system meets Article 17 requirements&lt;/li&gt;
&lt;li&gt;Apply relevant harmonized standards (or demonstrate equivalent compliance without them)&lt;/li&gt;
&lt;li&gt;Review and update technical documentation in line with Article 11&lt;/li&gt;
&lt;li&gt;Maintain internal audit procedures and records&lt;/li&gt;
&lt;li&gt;Sign and store the EU declaration of conformity&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The self-assessment path is rigorous. Regulators can request access to everything you produced during the assessment process. If you signed a declaration of conformity without the substance to back it up, that is the kind of thing that triggers the heaviest enforcement attention.&lt;/p&gt;

&lt;h3&gt;
  
  
  Third-Party Assessment (Annex VII) — Notified Body Involvement
&lt;/h3&gt;

&lt;p&gt;For a narrower set of cases — notably biometric systems under Annex III, point 1, where harmonized standards have not been applied in full, and AI systems that are safety components of products already subject to third-party conformity assessment under other EU harmonisation legislation (such as medical devices or machinery) — the assessment must involve a notified body.&lt;/p&gt;

&lt;p&gt;Notified bodies are accredited organizations designated by member states to assess conformity. They charge fees (often substantial), operate on their own timelines, and are in limited supply. If you fall into this category and have not already engaged a notified body, your August 2026 window is tight.&lt;/p&gt;

&lt;p&gt;Check the NANDO database (the European Commission's official notified body database) to find accredited bodies for your product category. Do this now — lead times can be months.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Conformity Assessment Process in Practice
&lt;/h2&gt;

&lt;p&gt;Regardless of which route applies, the practical work looks similar. Here is a realistic breakdown of the steps involved.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 1: Determine High-Risk Classification (If Not Already Done)
&lt;/h3&gt;

&lt;p&gt;Before you can assess conformity, you need to confirm your system is actually high-risk. Article 6 and Annex III define the categories. Common high-risk domains include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Biometric identification&lt;/li&gt;
&lt;li&gt;Critical infrastructure management&lt;/li&gt;
&lt;li&gt;Education and vocational training&lt;/li&gt;
&lt;li&gt;Employment and worker management&lt;/li&gt;
&lt;li&gt;Access to essential private and public services&lt;/li&gt;
&lt;li&gt;Law enforcement&lt;/li&gt;
&lt;li&gt;Migration and asylum&lt;/li&gt;
&lt;li&gt;Administration of justice&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If your system falls into one of these categories and makes decisions, assists in decisions, or influences decisions affecting individuals, you are very likely in high-risk territory. The &lt;a href="https://aktai.eu/blog/how-to-check-ai-compliance" rel="noopener noreferrer"&gt;EU AI Act compliance checker&lt;/a&gt; can help you work through the classification logic.&lt;/p&gt;

&lt;p&gt;Do not assume you are not high-risk just because you are a small company or because your system is not the "main" decision-maker. The Act explicitly covers systems that assist or influence decisions, not just fully autonomous ones.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 2: Establish (or Audit) Your Quality Management System
&lt;/h3&gt;

&lt;p&gt;Article 17 requires providers to implement a quality management system (QMS) that covers the entire AI system lifecycle. This is not an optional governance nicety — it is a formal requirement that feeds directly into the conformity assessment.&lt;/p&gt;

&lt;p&gt;Your QMS must cover at minimum:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A strategy for regulatory compliance, including how you identify and apply harmonized standards&lt;/li&gt;
&lt;li&gt;Procedures for design, development, and design change review&lt;/li&gt;
&lt;li&gt;Procedures for technical documentation management&lt;/li&gt;
&lt;li&gt;Procedures for post-market monitoring (Article 72)&lt;/li&gt;
&lt;li&gt;Procedures for handling serious incidents and reporting to authorities&lt;/li&gt;
&lt;li&gt;Data management procedures (feeding into Article 10 compliance)&lt;/li&gt;
&lt;li&gt;How you ensure human oversight is implemented (feeding into Article 14)&lt;/li&gt;
&lt;li&gt;How you manage third-party AI component suppliers&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you already have an ISO 9001-certified QMS, that is a starting point, not a finish line. The AI Act QMS requirements go further into AI-specific obligations that most existing QMS frameworks do not cover.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 3: Build the Technical Documentation
&lt;/h3&gt;

&lt;p&gt;Article 11 and Annex IV define what technical documentation must contain. This is the most documentation-intensive part of the conformity assessment, and the most commonly underestimated.&lt;/p&gt;

&lt;p&gt;Required documentation includes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;General description&lt;/strong&gt;: What the system does, its intended purpose, deployment context, and the persons or groups it is intended to be used for or by&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;System design and architecture&lt;/strong&gt;: Functional description, key design choices, system components including software and hardware, the computational graph and input/output specifications&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Training methodology&lt;/strong&gt;: Training data, preprocessing steps, labeling approach, data validation, test and validation procedures&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Performance metrics&lt;/strong&gt;: Accuracy, robustness, and any output limitations — measured across relevant subpopulations and deployment conditions&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Risk management documentation&lt;/strong&gt;: The full record of your Article 9 risk management process — identified risks, risk estimates, mitigations adopted, residual risks accepted&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Changes log&lt;/strong&gt;: A complete record of changes made to the system during development and after deployment&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Standards applied&lt;/strong&gt;: List of harmonized standards or common specifications applied, and how compliance was demonstrated&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Human oversight description&lt;/strong&gt;: How the system provides for operator oversight and intervention, including what override mechanisms exist and how operators are trained&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This documentation needs to be maintained for the full lifecycle of the system plus ten years after it is placed on the market. Build systems for maintaining it from the start — retrofitting documentation years later is painful and produces worse outputs.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 4: Conduct and Document Testing
&lt;/h3&gt;

&lt;p&gt;Your conformity assessment is not credible without empirical evidence from testing. The AI Act requires testing against intended purpose and foreseeable use — including foreseeable misuse — across the relevant population groups.&lt;/p&gt;

&lt;p&gt;Critical testing requirements:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Dataset validation&lt;/strong&gt;: You must be able to demonstrate that training, validation, and test datasets are relevant, representative, free of errors to the extent possible, and have appropriate statistical properties for the intended purpose&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Bias and fairness testing&lt;/strong&gt;: For systems that affect individuals, you must test for bias across protected characteristics and document results — including where bias remains after mitigation&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Robustness testing&lt;/strong&gt;: Testing for behavior under edge cases, adversarial inputs, and operational stresses&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Human oversight validation&lt;/strong&gt;: Demonstrating that human override mechanisms work as intended under realistic conditions&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Keep raw test results, methodology documentation, and test dataset descriptions. These are exactly what a market surveillance authority will request if they investigate.&lt;/p&gt;
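
&lt;p&gt;As one illustration of bias-testing evidence, a common starting metric is the gap in favourable-outcome rates between groups (demographic parity difference). The Act does not mandate a specific metric; this sketch and its example data are purely illustrative:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;def selection_rates(outcomes):
    # outcomes: list of (group, selected) pairs from a test run
    totals, selected = {}, {}
    for group, ok in outcomes:
        totals[group] = totals.get(group, 0) + 1
        selected[group] = selected.get(group, 0) + (1 if ok else 0)
    return {g: selected[g] / totals[g] for g in totals}

def parity_gap(outcomes):
    rates = selection_rates(outcomes)
    return max(rates.values()) - min(rates.values())

# Example test evidence: record the gap alongside the raw results.
runs = ([("group_a", True)] * 60 + [("group_a", False)] * 40
        + [("group_b", True)] * 45 + [("group_b", False)] * 55)
print(round(parity_gap(runs), 2))  # 0.15; document whether this is acceptable and why
&lt;/code&gt;&lt;/pre&gt;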

&lt;h3&gt;
  
  
  Step 5: Sign the EU Declaration of Conformity
&lt;/h3&gt;

&lt;p&gt;Once the assessment is complete, the provider must draw up an EU declaration of conformity in accordance with Article 47. This is a legally binding document declaring that the system meets all applicable requirements of the AI Act.&lt;/p&gt;

&lt;p&gt;The declaration must be kept for ten years and must be updated when the system is substantially modified (which may trigger a new conformity assessment).&lt;/p&gt;

&lt;p&gt;Do not sign this document until the underlying assessment work is actually complete. Signing a declaration of conformity for a system that has not undergone proper assessment is a straightforward path to enforcement action.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 6: Affix CE Marking and Register in the EU Database
&lt;/h3&gt;

&lt;p&gt;Article 48 requires high-risk AI systems to bear CE marking (confirming conformity with the Act and any other applicable EU legislation). Article 49 requires providers to register their systems in the EU database before placing them on the market.&lt;/p&gt;

&lt;p&gt;The EU database is maintained by the Commission and will be publicly accessible. Registration requires providing summary information about the system, its purpose, risk category, and the provider. This creates an accountability trail — registration means you are publicly on record as a provider of a high-risk AI system.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Substantial Modification Problem
&lt;/h2&gt;

&lt;p&gt;One area where organizations frequently get caught out is the &lt;strong&gt;substantial modification&lt;/strong&gt; trigger. If you make significant changes to a high-risk AI system after placing it on the market, the AI Act may require you to repeat the conformity assessment for the modified system.&lt;/p&gt;

&lt;p&gt;The Act does not define "substantial modification" with mathematical precision — it is a judgment call based on whether the change affects the system's compliance with Chapter III requirements. Relevant factors include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Changes to the intended purpose&lt;/li&gt;
&lt;li&gt;Changes to training data or training methodology&lt;/li&gt;
&lt;li&gt;Significant changes to model architecture or components&lt;/li&gt;
&lt;li&gt;Changes that affect the system's accuracy, robustness, or the way human oversight functions&lt;/li&gt;
&lt;li&gt;Changes that affect the risk profile identified during the original assessment&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The practical implication: if you have a high-risk AI system in production and you deploy a significant model update, you need a process to evaluate whether that update triggers a new conformity assessment. Build this into your change management and release processes, not as an afterthought.&lt;/p&gt;
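
&lt;p&gt;One way to operationalize this is a release-gate check evaluated for every significant update. A minimal sketch mirroring the factors above, deliberately conservative: any triggered factor escalates to legal review:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;MODIFICATION_FACTORS = [
    "intended_purpose_changed",
    "training_data_or_method_changed",
    "architecture_or_components_changed",
    "accuracy_robustness_or_oversight_affected",
    "risk_profile_changed",
]

def needs_reassessment_review(change):
    # change: dict of factor name to bool, filled in during release review
    triggered = [f for f in MODIFICATION_FACTORS if change.get(f)]
    return {"escalate_to_legal": bool(triggered), "triggered_factors": triggered}

print(needs_reassessment_review({"training_data_or_method_changed": True}))
&lt;/code&gt;&lt;/pre&gt;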

&lt;h2&gt;
  
  
  What Good Looks Like vs. What We Actually See
&lt;/h2&gt;

&lt;p&gt;Based on what practitioners are encountering across organizations preparing for August 2026, here is an honest picture of where businesses stand.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What good looks like:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Technical documentation is living documentation, maintained as part of the development workflow, not generated retroactively&lt;/li&gt;
&lt;li&gt;Risk management is integrated into sprint cycles and release gates, not a separate annual exercise&lt;/li&gt;
&lt;li&gt;Testing evidence is version-controlled alongside the model itself&lt;/li&gt;
&lt;li&gt;Post-market monitoring is automated where possible, with clear escalation paths for anomalies&lt;/li&gt;
&lt;li&gt;The declaration of conformity is reviewed by both legal and technical staff before signing&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;What we actually see in most organizations:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Technical documentation attempted retroactively, with significant gaps where development decisions were not recorded at the time&lt;/li&gt;
&lt;li&gt;Risk management treated as a document rather than a process — a risk register that was filled in once and never updated&lt;/li&gt;
&lt;li&gt;Testing limited to standard ML metrics (accuracy, F1 score) without the regulatory-specific testing for bias, robustness, and foreseeable misuse&lt;/li&gt;
&lt;li&gt;No process for evaluating whether model updates trigger a new conformity assessment&lt;/li&gt;
&lt;li&gt;Legal staff drafting declarations of conformity without full visibility into the underlying technical evidence&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Where to Start If You Are Behind
&lt;/h2&gt;

&lt;p&gt;If your conformity assessment work has not started or is materially incomplete, here is the prioritized sequence:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Confirm your risk classification&lt;/strong&gt; — You cannot plan a conformity assessment without knowing which procedure applies and what Annex III categories are in scope.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Inventory your AI systems&lt;/strong&gt; — Most organizations do not have a complete picture of all AI systems they operate. The &lt;a href="https://aktai.eu/blog/ai-systems-inventory-guide" rel="noopener noreferrer"&gt;AI systems inventory guide&lt;/a&gt; covers how to build one.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Gap-assess your technical documentation&lt;/strong&gt; — What do you have versus what Article 11 requires? Be ruthless. "We could reconstruct this from our commit history" is not the same as documentation.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Establish your QMS structure&lt;/strong&gt; — Even if you do not have a formal QMS, document your current process and map it against Article 17. Identify the gaps.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Plan and execute testing&lt;/strong&gt; — Prioritize bias and robustness testing if you have not done it. These are the areas most likely to require rework if gaps are found.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Engage legal counsel to review draft declarations&lt;/strong&gt; — Do not treat the declaration of conformity as a boilerplate document. Have qualified legal counsel familiar with the AI Act review it.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;If you want a structured way to track progress across all these dimensions, &lt;a href="https://aktai.eu/dashboard" rel="noopener noreferrer"&gt;AktAI's compliance dashboard&lt;/a&gt; maps your documentation and assessment status against the Article 11 and Article 9 requirements, so you can see gaps without manually cross-referencing regulatory text.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Timeline Pressure Is Real
&lt;/h2&gt;

&lt;p&gt;August 2, 2026 is approximately five months away. A conformity assessment for a complex high-risk AI system — done properly — takes two to four months of sustained effort for an organization that is reasonably well-prepared. Organizations that are starting from scratch are looking at a compressed, intensive process.&lt;/p&gt;

&lt;p&gt;The risk of cutting corners is not abstract. Market surveillance authorities have been explicit that they will request technical documentation and assessment records as part of enforcement investigations. A declaration of conformity unsupported by real assessment work is a liability, not a shield.&lt;/p&gt;

&lt;p&gt;Start the assessment process now. Document everything as you go. And if you identify gaps in your high-risk AI system that cannot be remediated in time, have an honest conversation about whether you can continue operating that system after August — or whether you need to temporarily suspend it until compliance is achieved. That is a difficult business decision, but it is a better outcome than enforcement action.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Further reading: The &lt;a href="https://aktai.eu/blog/ai-act-deadlines-2025-2027" rel="noopener noreferrer"&gt;EU AI Act deadlines timeline&lt;/a&gt; shows all key compliance milestones. The &lt;a href="https://aktai.eu/blog/ai-risk-assessment-guide" rel="noopener noreferrer"&gt;AI risk assessment guide&lt;/a&gt; covers the Article 9 risk management system in depth. The &lt;a href="https://aktai.eu/blog/compliance-documentation-best-practices" rel="noopener noreferrer"&gt;compliance documentation best practices&lt;/a&gt; guide addresses technical documentation structure and maintenance.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>compliance</category>
      <category>europe</category>
      <category>saas</category>
    </item>
    <item>
      <title>EU AI Act: What Every Small Business Owner Needs to Know in 2026</title>
      <dc:creator>Oleksandr Usenko</dc:creator>
      <pubDate>Sun, 22 Feb 2026 16:25:07 +0000</pubDate>
      <link>https://forem.com/oleksandr_usenko_8f82e83d/eu-ai-act-what-every-small-business-owner-needs-to-know-in-2026-5a2g</link>
      <guid>https://forem.com/oleksandr_usenko_8f82e83d/eu-ai-act-what-every-small-business-owner-needs-to-know-in-2026-5a2g</guid>
      <description>&lt;h2&gt;
  
  
  What Is the EU AI Act?
&lt;/h2&gt;

&lt;p&gt;The EU AI Act is the world's first comprehensive law regulating artificial intelligence. Adopted by the European Union in 2024, it sets rules for how AI systems can be developed, sold, and used across Europe. Think of it as the AI equivalent of GDPR — but instead of protecting personal data, it governs how businesses use AI tools.&lt;/p&gt;

&lt;p&gt;If your company operates in the EU and uses any AI-powered tool — from a chatbot to an HR screening system — this law applies to you.&lt;/p&gt;

&lt;h2&gt;
  
  
  Does It Affect My Small Business?
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Yes, almost certainly.&lt;/strong&gt; The EU AI Act does not only target Big Tech. It applies to any organization that develops, deploys, or uses AI systems within the EU. That includes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Businesses using AI-powered recruitment tools&lt;/li&gt;
&lt;li&gt;Companies using chatbots for customer service&lt;/li&gt;
&lt;li&gt;Any organization using AI for decision-making (credit scoring, insurance, etc.)&lt;/li&gt;
&lt;li&gt;Businesses using AI content generators, translation tools, or analytics&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Even if you only use off-the-shelf AI tools like ChatGPT, Microsoft Copilot, or Grammarly, you still have obligations — particularly around &lt;strong&gt;AI literacy&lt;/strong&gt; (Article 4).&lt;/p&gt;

&lt;h2&gt;
  
  
  What Are the Key Dates?
&lt;/h2&gt;

&lt;p&gt;The law is being enforced in phases:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;February 2, 2025&lt;/strong&gt; — Prohibited AI practices banned; AI literacy obligation takes effect (Article 4)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;August 2, 2025&lt;/strong&gt; — Rules for general-purpose AI models (like GPT, Gemini)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;August 2, 2026&lt;/strong&gt; — Full enforcement for high-risk AI systems. This is the big deadline.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;August 2, 2027&lt;/strong&gt; — Existing AI products embedded in regulated goods must also comply&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The Article 4 AI literacy deadline has &lt;strong&gt;already passed&lt;/strong&gt;. If you have not addressed it, you are technically non-compliant today.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Happens if I Don't Comply?
&lt;/h2&gt;

&lt;p&gt;Fines are significant and scaled to company size:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Up to EUR 35 million or 7% of global turnover&lt;/strong&gt; for using prohibited AI&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Up to EUR 15 million or 3% of global turnover&lt;/strong&gt; for violating high-risk rules&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Up to EUR 7.5 million or 1% of global turnover&lt;/strong&gt; for providing incorrect information to authorities&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For SMEs, the lesser of the absolute amount or turnover percentage applies — but even the lower end is serious for a small business.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Should I Do Right Now?
&lt;/h2&gt;

&lt;p&gt;Here is a practical four-step plan:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Inventory your AI tools&lt;/strong&gt; — List every AI-powered tool your organization uses. Include everything from email spam filters to AI recruitment platforms. Our &lt;a href="https://aktai.eu/discover" rel="noopener noreferrer"&gt;free discovery wizard&lt;/a&gt; can help.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Classify the risk&lt;/strong&gt; — The EU AI Act uses a four-tier risk system: unacceptable, high, limited, and minimal. Your obligations depend on which tier your tools fall into. Use our &lt;a href="https://aktai.eu/classify" rel="noopener noreferrer"&gt;free risk classifier&lt;/a&gt; to check.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Train your staff&lt;/strong&gt; — Article 4 AI literacy is already mandatory. Every employee who interacts with AI needs to understand the basics. This does not need to be a PhD-level course — proportionate, practical training is what the law requires.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Document everything&lt;/strong&gt; — Start building records of what AI you use, why, and what safeguards are in place. This documentation is critical for high-risk systems.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  How Can AktAI Help?
&lt;/h2&gt;

&lt;p&gt;AktAI automates the entire compliance workflow: AI system classification, document generation, staff training records, and gap analysis. Instead of hiring a consultant for thousands of euros, you can start at EUR 49/month.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Ready to see where you stand?&lt;/strong&gt; Take our &lt;a href="https://aktai.eu/assess" rel="noopener noreferrer"&gt;free readiness assessment&lt;/a&gt; — it takes less than 2 minutes and requires no signup.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>compliance</category>
      <category>europe</category>
      <category>saas</category>
    </item>
  </channel>
</rss>
