<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Breakthrough Pursuit</title>
    <description>The latest articles on Forem by Breakthrough Pursuit (@breakthroughpursuit).</description>
    <link>https://forem.com/breakthroughpursuit</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3561083%2F15cb8324-13f4-4a55-a12c-2d455bdd1d19.png</url>
      <title>Forem: Breakthrough Pursuit</title>
      <link>https://forem.com/breakthroughpursuit</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/breakthroughpursuit"/>
    <language>en</language>
    <item>
      <title>AI Needs Trust, Not Hype: A Global Governance Blueprint</title>
      <dc:creator>Breakthrough Pursuit</dc:creator>
      <pubDate>Wed, 15 Oct 2025 16:21:11 +0000</pubDate>
      <link>https://forem.com/breakthroughpursuit/ai-needs-trust-not-hype-a-global-governance-blueprint-52n2</link>
      <guid>https://forem.com/breakthroughpursuit/ai-needs-trust-not-hype-a-global-governance-blueprint-52n2</guid>
      <description>&lt;h2&gt;
  
  
  When an Algorithm Shattered Public Trust
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn0pa96lchch28fh6pz78.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn0pa96lchch28fh6pz78.png" alt="AI Needs Trust, Not Hype: A Global Governance Blueprint" width="800" height="448"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;On an August morning in 2020, thousands of British students awoke to shock and heartbreak. With COVID-19 cancelling exams, an algorithm had been tasked with predicting their A-level grades – and it got things disastrously wrong. Nearly 40% of students saw their teacher-predicted grades downgraded&lt;a href="https://www.reuters.com/news/picture/british-students-in-uproar-after-algorit-idUSRTX7Q3F9/?ref=breakthroughpursuit.com#:~:text=A%20student%20burns%20an%20A,leaving%20exams.%20REUTERS%2FHenry%20Nicholls" rel="noopener noreferrer"&gt;[1]&lt;/a&gt;. Top achievers from less-privileged schools suddenly lost university offers; one student recalled, “I logged on at 8am and just started sobbing”&lt;a href="https://www.theguardian.com/commentisfree/2020/aug/19/ditch-the-algorithm-generation-students-a-levels-politics?ref=breakthroughpursuit.com#:~:text=A%20n%20improbable%20nightmare%20that,%E2%80%9D" rel="noopener noreferrer"&gt;[2]&lt;/a&gt;. Within days, outraged teenagers took to the streets. They brandished placards reading “The algorithm stole my future” and even “F$$k the algorithm” – a visceral display of betrayal&lt;a href="https://www.theguardian.com/commentisfree/2020/aug/19/ditch-the-algorithm-generation-students-a-levels-politics?ref=breakthroughpursuit.com#:~:text=Three%20days%20later%2C%20the%20A,focus%20for%20all%20to%20see" rel="noopener noreferrer"&gt;[3]&lt;/a&gt;. The public outcry forced a U-turn: officials scrapped the model and reinstated teacher assessments. A mere piece of code had upended lives and provoked a political crisis. The episode starkly illustrated how, in the real world, &lt;strong&gt;AI failures carry high human stakes and can swiftly erode public trust&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffinpnbn0otqd6oysj4zj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffinpnbn0otqd6oysj4zj.png" alt="AI Needs Trust, Not Hype: A Global Governance Blueprint" width="670" height="457"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Students in London protest an exam grading algorithm that downgraded their scores, August 2020. The public outcry forced the UK government to abandon the algorithm.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;That dramatic fiasco was more than a one-off glitch – it was a wake-up call. Around the world, artificial intelligence systems are making decisions once reserved for humans: who gets a loan or a job interview, which news we see, even how police monitor neighborhoods. Yet time and again, these systems have stumbled or overstepped. In the United States, for instance, a flawed facial recognition match led Detroit police to &lt;strong&gt;wrongfully arrest an innocent Black man&lt;/strong&gt; in 2020&lt;a href="https://www.ataccama.com/blog/ai-fails-how-to-prevent?ref=breakthroughpursuit.com#:~:text=In%202020%2C%20one%20such%20system,Williams" rel="noopener noreferrer"&gt;[4]&lt;/a&gt;&lt;a href="https://www.ataccama.com/blog/ai-fails-how-to-prevent?ref=breakthroughpursuit.com#:~:text=incident,custody%20before%20his%20eventual%20release" rel="noopener noreferrer"&gt;[5]&lt;/a&gt;. In the Netherlands, an algorithm used to flag welfare fraud falsely accused &lt;strong&gt;26,000 parents of cheating&lt;/strong&gt; and pushed families into financial ruin – a scandal so severe it toppled the Dutch government in 2021&lt;a href="https://en.wikipedia.org/wiki/Dutch_childcare_benefits_scandal?ref=breakthroughpursuit.com#:~:text=Between%202005%20and%202019%2C%20approximately,parents%20were%20wrongly%20accused%20of" rel="noopener noreferrer"&gt;[6]&lt;/a&gt;&lt;a href="https://en.wikipedia.org/wiki/Dutch_childcare_benefits_scandal?ref=breakthroughpursuit.com#:~:text=culminated%20in%20the%20resignation%20of,6" rel="noopener noreferrer"&gt;[7]&lt;/a&gt;. 
And in India, an automated system meant to weed out fake welfare beneficiaries instead &lt;strong&gt;canceled 1.86 million legitimate ration cards&lt;/strong&gt; and cut off food aid to some of the poorest citizens&lt;a href="https://www.juscorpus.com/unfair-by-design-fighting-ai-bias-in-e-governance-in-india/?ref=breakthroughpursuit.com#:~:text=Biased%20algorithms%20have%20harmed%20people,with%20little%20or%20no%20notice" rel="noopener noreferrer"&gt;[8]&lt;/a&gt;. From biased hiring algorithms that quietly sideline women to experimental self-driving cars that tragically fail to brake, each failure chips away at the credibility of AI. These stories travel fast, fueling public skepticism. &lt;strong&gt;Why, people ask, should we embrace AI if its judgments seem arbitrary, unaccountable, even dangerous?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The pattern is now familiar. Time after time, shiny new AI tools are deployed with great promise – only to trigger backlash when things go wrong. The hype extolling AI’s potential runs up against the lived reality of communities who feel harmed or powerless. A growing global chorus is essentially echoing those British students’ cry: &lt;em&gt;“Ditch the algorithm”&lt;/em&gt;. The sentiment underscores a crisis of legitimacy. If artificial intelligence is to truly benefit society, &lt;strong&gt;it must be worthy of society’s trust&lt;/strong&gt;. And earning that trust will require far more than optimistic press releases or after-the-fact apologies – it demands an overhaul in how we govern and vet these technologies before they do damage.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Global AI Trust Deficit
&lt;/h2&gt;

&lt;p&gt;AI developers and boosters often insist their innovations will make life better: smarter healthcare, efficient services, safer roads. Investment and adoption have indeed surged – global corporate AI adoption jumped over 100% in just one year&lt;a href="https://www.weforum.org/stories/2025/09/ai-trust-crisis-public-private-partnerships/?ref=breakthroughpursuit.com#:~:text=Artificial%20intelligence%20,deployed%20responsibly%20within%20their%20organizations" rel="noopener noreferrer"&gt;[9]&lt;/a&gt;. Yet public confidence hasn’t kept pace. Inside many companies, barely one-third of decision-makers trust AI outputs in their own operations&lt;a href="https://www.weforum.org/stories/2025/09/ai-trust-crisis-public-private-partnerships/?ref=breakthroughpursuit.com#:~:text=Across%20sectors%2C%20distrust%20has%20visibly,adoption%20is%20%E2%80%9Csafe%20by%20default%E2%80%9D" rel="noopener noreferrer"&gt;[10]&lt;/a&gt;. And among the general public, each high-profile failure feeds a widening trust gap.&lt;/p&gt;

&lt;p&gt;The reasons for wariness cut across regions. &lt;strong&gt;Concerns about bias, privacy, safety and accountability span cultures and continents&lt;/strong&gt; &lt;a href="https://www.weforum.org/stories/2025/09/ai-trust-crisis-public-private-partnerships/?ref=breakthroughpursuit.com#:~:text=The%20very%20reasons%20there%20is,world%20impact" rel="noopener noreferrer"&gt;[11]&lt;/a&gt;. In South Africa and the US alike, civil rights groups worry that AI-powered surveillance could unfairly target minorities. Across Europe, citizens ask who is accountable when an automated system makes a life-altering mistake. In Asian countries, debates rage over balancing rapid AI innovation with safeguards against abuse. The &lt;strong&gt;“global AI trust deficit”&lt;/strong&gt; is now estimated to put trillions in economic benefits at risk, as societies hesitate to fully embrace AI without assurance it will be safe and fair&lt;a href="https://www.weforum.org/stories/2025/09/ai-trust-crisis-public-private-partnerships/?ref=breakthroughpursuit.com#:~:text=government%20legitimacy%2C%20industry%20capability%20and,measurable%20controls%2C%20audits%20and%20redress" rel="noopener noreferrer"&gt;[12]&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Crucially, distrust isn’t just an abstract ethical concern – it has tangible fallout. In business, projects get shelved and innovations lost because employees and customers don’t buy in&lt;a href="https://www.weforum.org/stories/2025/09/ai-trust-crisis-public-private-partnerships/?ref=breakthroughpursuit.com#:~:text=standards%2C%20transparency%2C%20audit%20and%20accountability%2C,adoption%20is%20%E2%80%9Csafe%20by%20default%E2%80%9D" rel="noopener noreferrer"&gt;[13]&lt;/a&gt;. In government, promising AI initiatives face public pushback or lawsuits, as seen in the UK exam debacle and Dutch welfare scandal. Meanwhile, countries without strong governance risk either missing out on AI’s upsides or suffering uncontrolled harms. As Microsoft’s CEO Satya Nadella warned, “I don’t think the world will put up anymore with [AI systems] not thought through on safety, equity and trust”&lt;a href="https://www.weforum.org/stories/2025/09/ai-trust-crisis-public-private-partnerships/?ref=breakthroughpursuit.com#:~:text=Governments%20are%20catching%20up%20too%3A,%E2%80%9D" rel="noopener noreferrer"&gt;[14]&lt;/a&gt;&lt;a href="https://www.weforum.org/stories/2025/09/ai-trust-crisis-public-private-partnerships/?ref=breakthroughpursuit.com#:~:text=line,%E2%80%9D" rel="noopener noreferrer"&gt;[15]&lt;/a&gt;. To move forward, &lt;strong&gt;we must bridge the trust gap&lt;/strong&gt; – and that begins by understanding why current approaches aren’t delivering.&lt;/p&gt;

&lt;h2&gt;
  
  
  Beyond Principles to Legitimacy
&lt;/h2&gt;

&lt;p&gt;It’s not for lack of trying that trust in AI remains fragile. In the past five years, a dizzying array of AI &lt;strong&gt;“Ethical Principles”&lt;/strong&gt; and guidelines has been published – by tech companies, governments, and international bodies. By 2020 there were over &lt;strong&gt;180 sets of AI ethics principles&lt;/strong&gt; circulating globally&lt;a href="https://montrealethics.ai/wp-content/uploads/2020/10/State-of-AI-Ethics-Oct-2020.pdf?ref=breakthroughpursuit.com#:~:text=2%29%20They%20focus%20on%20implementation,is%20where%20critical%20discussions%20on" rel="noopener noreferrer"&gt;[16]&lt;/a&gt;. Almost all emphasize laudable values like transparency, fairness, and accountability. Yet critics note a glaring issue: principles alone don’t enforce themselves&lt;a href="https://montrealethics.ai/wp-content/uploads/2020/10/State-of-AI-Ethics-Oct-2020.pdf?ref=breakthroughpursuit.com#:~:text=2%29%20They%20focus%20on%20implementation,is%20where%20critical%20discussions%20on" rel="noopener noreferrer"&gt;[16]&lt;/a&gt;. They are often voluntary, vague, or ignored in practice – what some call &lt;em&gt;“ethics washing.”&lt;/em&gt; A company might proudly announce an AI Code of Conduct one day, only to quietly sideline its internal ethics team the next.&lt;/p&gt;

&lt;p&gt;Another touted solution has been technical audits and bias testing. Indeed, after facing embarrassment, the developers of systems from &lt;strong&gt;Amazon’s biased hiring tool&lt;/strong&gt; to the UK’s exam algorithm pledged to double down on testing and “fairness fixes.” Such audits are useful, but they typically happen behind closed doors and &lt;strong&gt;lack public accountability&lt;/strong&gt;. It’s all too easy for organizations to mark their own homework, declaring an AI system trustworthy without independent scrutiny. And when independent auditors do find issues, there’s often no legal mandate to act on those findings.&lt;/p&gt;
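&lt;p&gt;To make the idea of a bias audit concrete, here is a minimal, hypothetical sketch of one check an independent auditor might run: comparing a model’s selection rates across two groups against the “four-fifths rule” convention from US employment law. The group labels, example data, and threshold are all invented for illustration – a real audit would involve far more than a single metric.&lt;/p&gt;

```python
# Hypothetical audit sketch: a demographic-parity check on model decisions.
# The groups, data, and the 4/5 threshold (the US "four-fifths rule"
# convention) are illustrative, not a complete or official audit standard.

def selection_rate(decisions):
    """Fraction of positive decisions (e.g. 'interview granted')."""
    return sum(decisions) / len(decisions)

def passes_four_fifths(group_a, group_b, threshold=0.8):
    """True if the lower selection rate is at least `threshold`
    times the higher one (demographic parity, four-fifths rule)."""
    ra, rb = selection_rate(group_a), selection_rate(group_b)
    return min(ra, rb) / max(ra, rb) >= threshold

# Invented example outcomes: 1 = selected, 0 = rejected.
men   = [1, 1, 1, 0, 1, 1, 0, 1, 1, 1]   # selection rate 0.8
women = [1, 0, 0, 1, 0, 0, 1, 0, 0, 0]   # selection rate 0.3

print(passes_four_fifths(men, women))  # ratio 0.3 / 0.8 = 0.375, below 0.8
```

&lt;p&gt;Even a simple disparity ratio like this only becomes meaningful when the results are disclosed and someone is empowered to act on them – which is precisely the accountability gap described above.&lt;/p&gt;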

&lt;p&gt;In short, &lt;em&gt;goodwill gestures aren’t enough.&lt;/em&gt; What’s missing are &lt;strong&gt;legitimacy frameworks&lt;/strong&gt; – robust governance structures to ensure AI systems genuinely uphold society’s values and rights. Legitimacy in this context means people affected by AI decisions have a say, clear protections, and recourse if things go wrong. It means the deployment of AI is not just a private corporate decision but subject to oversight that the public recognizes as valid. In the field of biotechnology, for example, early controversies around IVF were tempered by establishing bioethics commissions and laws – frameworks that gave the public confidence the technology wasn’t running amok. In finance, we don’t rely on banks to “voluntarily” behave ethically; we set rules and regulators to enforce them. &lt;strong&gt;AI now needs a similar maturation of governance&lt;/strong&gt;. Instead of assuming a glossy ethics pledge will prevent harm, we need binding rules and institutions capable of &lt;em&gt;earning&lt;/em&gt; trust.&lt;/p&gt;

&lt;p&gt;What might that look like in practice? First, it requires moving from abstract principles to &lt;strong&gt;clear standards and enforceable regulations&lt;/strong&gt;. Many governments are starting to draw these lines. For instance, the European Union has adopted the &lt;strong&gt;AI Act&lt;/strong&gt;, the world’s first broad AI law, which bans some high-risk practices outright and requires strict safety checks for others&lt;a href="https://artificialintelligenceact.eu/assessment/eu-ai-act-compliance-checker/?ref=breakthroughpursuit.com#:~:text=EU%20AI%20Act%20Compliance%20Checker,Systems%20posing%20unacceptable" rel="noopener noreferrer"&gt;[17]&lt;/a&gt;. The EU effort explicitly aims to ensure AI systems meet “robustness, accuracy and accountability” benchmarks before they reach consumers&lt;a href="https://www.anecdotes.ai/learn/ai-regulations-in-2025-us-eu-uk-japan-china-and-more?ref=breakthroughpursuit.com#:~:text=Security%3A%20The%20Act%20mandates%20that,Providers%20must%20conduct%20risk" rel="noopener noreferrer"&gt;[18]&lt;/a&gt;. Companies deploying AI in sensitive areas – from credit scoring to medical devices – will have to conduct conformity assessments, much as automobiles must pass safety tests. Non-compliance could carry hefty fines. This kind of legal backbone moves the dial from trusting AI developers at their word, to &lt;strong&gt;demanding evidence of trustworthiness&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Second, legitimacy means &lt;strong&gt;independent oversight&lt;/strong&gt;. Just as pharmaceuticals are evaluated by regulators and ethics boards before public use, AI impacting human lives should face external review. Some jurisdictions are inching that way. In Canada, government agencies must perform Algorithmic Impact Assessments and publish the results for public review. In New York City, new rules demand that hiring algorithms be audited for bias and the summaries disclosed to candidates. These are early steps. A more ambitious model is emerging from ideas on the global stage: even the United Nations Secretary-General António Guterres has called for considering an &lt;strong&gt;international AI watchdog agency&lt;/strong&gt; akin to the International Atomic Energy Agency&lt;a href="https://www.reuters.com/technology/un-chief-backs-idea-global-ai-watchdog-like-nuclear-agency-2023-06-12/?ref=breakthroughpursuit.com#:~:text=He%20has%20announced%20plans%20to,of%20law%20and%20common%20good" rel="noopener noreferrer"&gt;[19]&lt;/a&gt;. Such a body could, for example, certify AI systems (much like the IAEA inspects nuclear safety) and facilitate the sharing of best practices worldwide. While a full-fledged “World AI Organization” may be years away, the principle is clear – oversight should not be left solely to those with direct interests in the technology.&lt;/p&gt;

&lt;p&gt;Finally, and critically, legitimacy comes from &lt;strong&gt;including diverse voices&lt;/strong&gt; in shaping AI’s rules. Too often, those writing the algorithms or policies are far removed from those who feel the consequences. The British exam scandal only came to light because students spoke out, revealing biases that engineers and bureaucrats missed. In the Netherlands, it was investigative journalists and affected families who exposed the welfare algorithm’s injustices&lt;a href="https://www.europarl.europa.eu/doceo/document/O-9-2022-000028_EN.html?ref=breakthroughpursuit.com#:~:text=The%20Dutch%20childcare%20benefit%20scandal%2C,individuals%20applying%20for%20childcare" rel="noopener noreferrer"&gt;[20]&lt;/a&gt;&lt;a href="https://en.wikipedia.org/wiki/Dutch_childcare_benefits_scandal?ref=breakthroughpursuit.com#:~:text=Between%202005%20and%202019%2C%20approximately,parents%20were%20wrongly%20accused%20of" rel="noopener noreferrer"&gt;[6]&lt;/a&gt;. We need to bake this “critical audience” into the process from the start. That means engaging civil society, academics, and representatives of impacted communities in AI governance – whether on national AI councils or as part of oversight audits. It also means ensuring &lt;strong&gt;transparency&lt;/strong&gt;: people should have the right to know when an AI system is being used on them, how it works (at least at a basic level), and to contest decisions. Legitimacy flourishes in sunlight; secrecy is the enemy of trust.&lt;/p&gt;

&lt;h2&gt;
  
  
  Learning from Global Efforts
&lt;/h2&gt;

&lt;p&gt;No single country has all the answers, but many are experimenting. &lt;strong&gt;Europe’s approach&lt;/strong&gt; has been to get ahead of the curve with comprehensive regulation. Beyond the AI Act, the EU’s General Data Protection Regulation (GDPR) already grants individuals rights over automated decisions, and various European countries have ethical AI commissions advising governments. This precautionary stance reflects lessons learned from tech scandals and a public that demands strict consumer protections. While critics worry about stifling innovation, Europe argues that &lt;em&gt;trust is a precondition for sustainable innovation&lt;/em&gt;, not an enemy of it.&lt;/p&gt;

&lt;p&gt;In contrast, &lt;strong&gt;the United States has taken a patchwork and industry-led route&lt;/strong&gt;. There is no overarching federal AI law. Instead, we see sector-specific guidelines (like FDA rules for AI-driven medical devices or NHTSA guidance for autonomous vehicles) and a reliance on companies to police themselves under general consumer protection laws. Recently, the White House put out a &lt;em&gt;Blueprint for an AI Bill of Rights&lt;/em&gt; (2022) – a set of five principles such as “safe and effective systems” and “algorithmic discrimination protections.” But tellingly, it was explicitly labeled &lt;em&gt;non-binding&lt;/em&gt;&lt;a href="https://www.privacysecurityacademy.com/wp-content/uploads/2022/09/EXCERPT-Biden-Blueprint-for-AI-Bill-of-Rights.pdf?ref=breakthroughpursuit.com#:~:text=Academy%20www,modify%2C%20or%20direct%20an" rel="noopener noreferrer"&gt;[21]&lt;/a&gt;. The Biden administration has since secured voluntary commitments from AI firms to undergo security testing and share information about risks. These moves signal acknowledgement that something must be done – yet without legal force or new institutions, skeptics remain unconvinced. It may well take a high-profile AI disaster in the U.S. (imagine, for example, an autonomous vehicle flaw leading to mass injuries, or an AI decision system causing a major injustice) to galvanize the kind of regulatory response seen in Europe.&lt;/p&gt;

&lt;p&gt;Meanwhile, &lt;strong&gt;across Asia and the Global South, approaches vary widely&lt;/strong&gt;. China has embraced AI with fervor but has also begun asserting heavy-handed rules to rein in abuses – from requiring recommendation algorithms to be registered and abide by content guidelines, to drafting frameworks for generative AI that mandate security reviews and limit misinformation. Chinese authorities emphasize societal order and state control, which produces a form of &lt;em&gt;trust through tight oversight&lt;/em&gt;, albeit aligned with government interests. Other Asian nations are opting for softer governance: Singapore, for one, issued a Model AI Governance Framework encouraging transparency and accountability as voluntary best practices, and is piloting an AI governance testing hub in collaboration with industry. India, with its sprawling digital public sector, faces a daunting challenge: systems meant to streamline welfare have already shown bias against marginalized communities&lt;a href="https://www.juscorpus.com/unfair-by-design-fighting-ai-bias-in-e-governance-in-india/?ref=breakthroughpursuit.com#:~:text=Biased%20algorithms%20have%20harmed%20people,with%20little%20or%20no%20notice" rel="noopener noreferrer"&gt;[22]&lt;/a&gt;&lt;a href="https://www.juscorpus.com/unfair-by-design-fighting-ai-bias-in-e-governance-in-india/?ref=breakthroughpursuit.com#:~:text=In%20one%20instance%2C%20a%20poor,9" rel="noopener noreferrer"&gt;[23]&lt;/a&gt;. The Indian judiciary and civil society are pushing for safeguards – using existing laws on equality to challenge harmful algorithms in court&lt;a href="https://www.juscorpus.com/unfair-by-design-fighting-ai-bias-in-e-governance-in-india/?ref=breakthroughpursuit.com#:~:text=Aadhaar%20verification%20caused%20countless%20wrongful,exclusions" rel="noopener noreferrer"&gt;[24]&lt;/a&gt;. Though India has yet to pass an AI-specific law, the clamor for responsible AI is growing alongside its rapid adoption. 
In Africa and Latin America, the focus often falls on preventing &lt;strong&gt;“AI colonialism”&lt;/strong&gt; – the import of biased algorithms from abroad – and ensuring local contexts are respected. For example, South Africa’s government has convened multi-stakeholder dialogues on AI ethics, while Brazil introduced a bill of principles for AI aiming to balance innovation with human rights.&lt;/p&gt;

&lt;p&gt;These diverse efforts all underscore a common realization: &lt;strong&gt;trust in AI must be built; it won’t magically materialize&lt;/strong&gt;. Whether through strict laws, collaborative frameworks, or grassroots activism, societies everywhere are groping toward mechanisms that give people confidence in AI-driven systems. Each region brings a piece of the puzzle – and a truly effective solution will likely synthesize elements of all, adapted to local values.&lt;/p&gt;

&lt;h2&gt;
  
  
  A Blueprint for Trustworthy AI Governance
&lt;/h2&gt;

&lt;p&gt;How do we translate these lessons into a workable blueprint for AI governance that can be adopted globally? Rather than a detailed checklist of rules (which would quickly become outdated), think of it as a set of core pillars to anchor legitimate and trusted AI. Three priorities stand out:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;1. Establish Independent Oversight and Auditing:&lt;/strong&gt; We need institutions – whether national AI regulators, ethics boards, or international panels – empowered to vet AI systems above a certain risk threshold &lt;strong&gt;before&lt;/strong&gt; and during their deployment. This includes technical testing (for bias, accuracy, security) by neutral experts and ongoing monitoring of real-world outcomes. For high-stakes uses, companies might be required to obtain a license or certification, analogous to clinical trials for drugs or safety inspections for airplanes. The oversight body must have teeth: authority to halt or recall AI systems that prove unsafe, and to impose penalties for violations. Crucially, it should involve not just technologists and officials but also legal, ethical, and public representatives to ensure well-rounded judgments. As one model, the &lt;strong&gt;Partnership on AI&lt;/strong&gt;, a global multistakeholder group, has developed guidelines for AI technologies like synthetic media by convening tech companies, media, and civil society&lt;a href="https://www.weforum.org/stories/2025/09/ai-trust-crisis-public-private-partnerships/?ref=breakthroughpursuit.com#:~:text=distribution" rel="noopener noreferrer"&gt;[25]&lt;/a&gt;. We can build on such collaborations, scaling them up to formal governance processes.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;2. Mandate Transparency, Accountability and Redress:&lt;/strong&gt; For any AI system with significant impact on people’s lives, &lt;strong&gt;transparency is non-negotiable&lt;/strong&gt;. At a minimum, users deserve to know an algorithm is in play and have access to explanations of how decisions are made (in understandable terms). This doesn’t mean companies must spill proprietary source code; rather, they should provide clarity on data sources, factors considered, and validation results. Accountability means clear lines of responsibility – a company or agency cannot hide behind “the algorithm” as an excuse. If an AI system causes harm or error, those deploying it should be obliged to inform affected individuals and regulators, and to address the issue. Equally, individuals need accessible ways to appeal or seek correction of AI-driven decisions – a human review process or an ombudsperson for algorithmic grievances. Some jurisdictions are moving in this direction: the EU, for example, will require that users can challenge automated decisions under the AI Act’s provisions. Enforcement of these rights is key. It’s one thing to declare people can contest an AI decision; it’s another to ensure the process is navigable and that appeals are actually heard and remedied. Real accountability might also entail &lt;strong&gt;liability frameworks&lt;/strong&gt; so that if AI causes legal harm, victims can get compensation. Such measures press organizations to be careful with AI in the first place, aligning incentives toward safety and fairness.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;3. Embrace Global Cooperation for Standards and Inclusion:&lt;/strong&gt; AI is a borderless technology, and its governance challenges are global in scope. No country can tackle issues like AI-driven disinformation or autonomous weapons in isolation. Therefore a blueprint for trust must include building &lt;strong&gt;international standards and forums&lt;/strong&gt; for cooperation. This could mean expanding the mandate of bodies like the OECD (which has AI principles adopted by dozens of countries) to develop binding standards, or eventually creating that UN-backed AI Agency to coordinate monitoring of extreme risks&lt;a href="https://www.reuters.com/technology/un-chief-backs-idea-global-ai-watchdog-like-nuclear-agency-2023-06-12/?ref=breakthroughpursuit.com#:~:text=He%20has%20announced%20plans%20to,of%20law%20and%20common%20good" rel="noopener noreferrer"&gt;[19]&lt;/a&gt;. Sharing knowledge is also paramount – developing countries should be supported in adopting AI governance best practices so they are not left vulnerable or forced to simply accept whatever tech is imposed on them. Inclusion here also means actively involving voices from the Global South and marginalized communities in drafting global norms, not just letting a few wealthy nations set the rules. The world has learned hard lessons from the digital divide and biases of past tech: this time, there is an opportunity to hard-wire fairness and inclusivity into AI’s global rulebook from the start. If done right, cooperative governance can ensure AI’s benefits are broadly shared while risks are managed in a culturally aware way. When people see that &lt;em&gt;their&lt;/em&gt; values and representatives are part of shaping AI’s future, trust will naturally deepen.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  A Call to Action: Earning the Future’s Trust
&lt;/h2&gt;

&lt;p&gt;Humanity is on the cusp of incredible AI-driven advances – from algorithms that could help cure diseases to intelligent systems tackling climate change. But realizing these gains requires a foundation of trust. As it stands, that foundation is unsteady. Too many communities have already seen reasons to chant “f**k the algorithm,” and too few feel confident that someone has their back in the AI revolution. It doesn’t have to remain this way. By learning from our mistakes and insisting on legitimacy in how AI is designed, deployed, and overseen, we can chart a new path.&lt;/p&gt;

&lt;p&gt;The blueprint outlined here is not about stifling innovation; it’s about &lt;em&gt;safeguarding innovation&lt;/em&gt; by ensuring it serves people’s interests. If we get this right, AI systems could actually enhance trust – think of an AI healthcare tool that patients trust because they know it’s been rigorously tested and doctors can explain its advice, or a credit AI that applicants trust because it’s transparent and comes with an assurance of fairness and recourse. In such a world, AI would have earned its social license much as past technologies did through wise governance.&lt;/p&gt;

&lt;p&gt;The stakes are high and the timeline is urgent. In the words of the UN Secretary-General, the alarm bells around AI’s latest leaps “are deafening”&lt;a href="https://www.reuters.com/technology/un-chief-backs-idea-global-ai-watchdog-like-nuclear-agency-2023-06-12/?ref=breakthroughpursuit.com#:~:text=,must%20take%20those%20warnings%20seriously" rel="noopener noreferrer"&gt;[26]&lt;/a&gt; – even those at AI’s forefront are urging regulation. We should heed those warnings. It is time for leaders in government, industry and civil society across the globe to come together and build the frameworks that will make AI worthy of our confidence. This means passing smart laws, yes, but also innovating in oversight, empowering watchdogs, educating the public, and continually involving new voices. It means moving beyond hype and fear to a mature conversation about accountability and values.&lt;/p&gt;

&lt;p&gt;AI does not have to be an uncontrollable force that society grudgingly tolerates; it can be a trustworthy partner in our collective future. But &lt;strong&gt;trust must be earned&lt;/strong&gt;. The world needs to invest as much in the governance of AI as in the algorithms themselves. The tumult of recent AI failures has shown us what’s at stake. Now, we have a chance – and a responsibility – to put in place the global governance blueprint that ensures AI truly deserves the public’s trust in the years ahead.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Sources:&lt;/strong&gt; &lt;a href="https://www.reuters.com/news/picture/british-students-in-uproar-after-algorit-idUSRTX7Q3F9/?ref=breakthroughpursuit.com#:~:text=A%20student%20burns%20an%20A,leaving%20exams.%20REUTERS%2FHenry%20Nicholls" rel="noopener noreferrer"&gt;[1]&lt;/a&gt;&lt;a href="https://www.theguardian.com/commentisfree/2020/aug/19/ditch-the-algorithm-generation-students-a-levels-politics?ref=breakthroughpursuit.com#:~:text=A%20n%20improbable%20nightmare%20that,%E2%80%9D" rel="noopener noreferrer"&gt;[2]&lt;/a&gt;&lt;a href="https://www.theguardian.com/commentisfree/2020/aug/19/ditch-the-algorithm-generation-students-a-levels-politics?ref=breakthroughpursuit.com#:~:text=Three%20days%20later%2C%20the%20A,focus%20for%20all%20to%20see" rel="noopener noreferrer"&gt;[3]&lt;/a&gt;&lt;a href="https://www.ataccama.com/blog/ai-fails-how-to-prevent?ref=breakthroughpursuit.com#:~:text=In%202020%2C%20one%20such%20system,Williams" rel="noopener noreferrer"&gt;[4]&lt;/a&gt;&lt;a href="https://www.ataccama.com/blog/ai-fails-how-to-prevent?ref=breakthroughpursuit.com#:~:text=incident,custody%20before%20his%20eventual%20release" rel="noopener noreferrer"&gt;[5]&lt;/a&gt;&lt;a href="https://en.wikipedia.org/wiki/Dutch_childcare_benefits_scandal?ref=breakthroughpursuit.com#:~:text=Between%202005%20and%202019%2C%20approximately,parents%20were%20wrongly%20accused%20of" rel="noopener noreferrer"&gt;[6]&lt;/a&gt;&lt;a href="https://en.wikipedia.org/wiki/Dutch_childcare_benefits_scandal?ref=breakthroughpursuit.com#:~:text=culminated%20in%20the%20resignation%20of,6" rel="noopener noreferrer"&gt;[7]&lt;/a&gt;&lt;a href="https://www.juscorpus.com/unfair-by-design-fighting-ai-bias-in-e-governance-in-india/?ref=breakthroughpursuit.com#:~:text=Biased%20algorithms%20have%20harmed%20people,with%20little%20or%20no%20notice" rel="noopener noreferrer"&gt;[8]&lt;/a&gt;&lt;a 
href="https://www.weforum.org/stories/2025/09/ai-trust-crisis-public-private-partnerships/?ref=breakthroughpursuit.com#:~:text=Artificial%20intelligence%20,deployed%20responsibly%20within%20their%20organizations" rel="noopener noreferrer"&gt;[9]&lt;/a&gt;&lt;a href="https://www.weforum.org/stories/2025/09/ai-trust-crisis-public-private-partnerships/?ref=breakthroughpursuit.com#:~:text=The%20very%20reasons%20there%20is,world%20impact" rel="noopener noreferrer"&gt;[11]&lt;/a&gt;&lt;a href="https://www.weforum.org/stories/2025/09/ai-trust-crisis-public-private-partnerships/?ref=breakthroughpursuit.com#:~:text=Governments%20are%20catching%20up%20too%3A,%E2%80%9D" rel="noopener noreferrer"&gt;[14]&lt;/a&gt;&lt;a href="https://montrealethics.ai/wp-content/uploads/2020/10/State-of-AI-Ethics-Oct-2020.pdf?ref=breakthroughpursuit.com#:~:text=2%29%20They%20focus%20on%20implementation,is%20where%20critical%20discussions%20on" rel="noopener noreferrer"&gt;[16]&lt;/a&gt;&lt;a href="https://www.reuters.com/technology/un-chief-backs-idea-global-ai-watchdog-like-nuclear-agency-2023-06-12/?ref=breakthroughpursuit.com#:~:text=He%20has%20announced%20plans%20to,of%20law%20and%20common%20good" rel="noopener noreferrer"&gt;[19]&lt;/a&gt;&lt;a href="https://www.weforum.org/stories/2025/09/ai-trust-crisis-public-private-partnerships/?ref=breakthroughpursuit.com#:~:text=distribution" rel="noopener noreferrer"&gt;[25]&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;&lt;a href="https://www.reuters.com/news/picture/british-students-in-uproar-after-algorit-idUSRTX7Q3F9/?ref=breakthroughpursuit.com#:~:text=A%20student%20burns%20an%20A,leaving%20exams.%20REUTERS%2FHenry%20Nicholls" rel="noopener noreferrer"&gt;[1]&lt;/a&gt; British students in uproar after algorithm decides their final grades&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.reuters.com/news/picture/british-students-in-uproar-after-algorit-idUSRTX7Q3F9/?ref=breakthroughpursuit.com" rel="noopener noreferrer"&gt;https://www.reuters.com/news/picture/british-students-in-uproar-after-algorit-idUSRTX7Q3F9/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.theguardian.com/commentisfree/2020/aug/19/ditch-the-algorithm-generation-students-a-levels-politics?ref=breakthroughpursuit.com#:~:text=A%20n%20improbable%20nightmare%20that,%E2%80%9D" rel="noopener noreferrer"&gt;[2]&lt;/a&gt; &lt;a href="https://www.theguardian.com/commentisfree/2020/aug/19/ditch-the-algorithm-generation-students-a-levels-politics?ref=breakthroughpursuit.com#:~:text=Three%20days%20later%2C%20the%20A,focus%20for%20all%20to%20see" rel="noopener noreferrer"&gt;[3]&lt;/a&gt; Why 'Ditch the algorithm' is the future of political protest | Louise Amoore | The Guardian&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.theguardian.com/commentisfree/2020/aug/19/ditch-the-algorithm-generation-students-a-levels-politics?ref=breakthroughpursuit.com" rel="noopener noreferrer"&gt;https://www.theguardian.com/commentisfree/2020/aug/19/ditch-the-algorithm-generation-students-a-levels-politics&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.ataccama.com/blog/ai-fails-how-to-prevent?ref=breakthroughpursuit.com#:~:text=In%202020%2C%20one%20such%20system,Williams" rel="noopener noreferrer"&gt;[4]&lt;/a&gt; &lt;a href="https://www.ataccama.com/blog/ai-fails-how-to-prevent?ref=breakthroughpursuit.com#:~:text=incident,custody%20before%20his%20eventual%20release" rel="noopener noreferrer"&gt;[5]&lt;/a&gt; 9 AI fails (and how they could have been prevented) | Ataccama&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.ataccama.com/blog/ai-fails-how-to-prevent?ref=breakthroughpursuit.com" rel="noopener noreferrer"&gt;https://www.ataccama.com/blog/ai-fails-how-to-prevent&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://en.wikipedia.org/wiki/Dutch_childcare_benefits_scandal?ref=breakthroughpursuit.com#:~:text=Between%202005%20and%202019%2C%20approximately,parents%20were%20wrongly%20accused%20of" rel="noopener noreferrer"&gt;[6]&lt;/a&gt; &lt;a href="https://en.wikipedia.org/wiki/Dutch_childcare_benefits_scandal?ref=breakthroughpursuit.com#:~:text=culminated%20in%20the%20resignation%20of,6" rel="noopener noreferrer"&gt;[7]&lt;/a&gt; Dutch childcare benefits scandal - Wikipedia&lt;/p&gt;

&lt;p&gt;&lt;a href="https://en.wikipedia.org/wiki/Dutch_childcare_benefits_scandal?ref=breakthroughpursuit.com" rel="noopener noreferrer"&gt;https://en.wikipedia.org/wiki/Dutch_childcare_benefits_scandal&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.juscorpus.com/unfair-by-design-fighting-ai-bias-in-e-governance-in-india/?ref=breakthroughpursuit.com#:~:text=Biased%20algorithms%20have%20harmed%20people,with%20little%20or%20no%20notice" rel="noopener noreferrer"&gt;[8]&lt;/a&gt; &lt;a href="https://www.juscorpus.com/unfair-by-design-fighting-ai-bias-in-e-governance-in-india/?ref=breakthroughpursuit.com#:~:text=Biased%20algorithms%20have%20harmed%20people,with%20little%20or%20no%20notice" rel="noopener noreferrer"&gt;[22]&lt;/a&gt; &lt;a href="https://www.juscorpus.com/unfair-by-design-fighting-ai-bias-in-e-governance-in-india/?ref=breakthroughpursuit.com#:~:text=In%20one%20instance%2C%20a%20poor,9" rel="noopener noreferrer"&gt;[23]&lt;/a&gt; &lt;a href="https://www.juscorpus.com/unfair-by-design-fighting-ai-bias-in-e-governance-in-india/?ref=breakthroughpursuit.com#:~:text=Aadhaar%20verification%20caused%20countless%20wrongful,exclusions" rel="noopener noreferrer"&gt;[24]&lt;/a&gt; UNFAIR BY DESIGN: FIGHTING AI BIAS IN E-GOVERNANCE IN INDIA - Jus Corpus&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.juscorpus.com/unfair-by-design-fighting-ai-bias-in-e-governance-in-india/?ref=breakthroughpursuit.com" rel="noopener noreferrer"&gt;https://www.juscorpus.com/unfair-by-design-fighting-ai-bias-in-e-governance-in-india/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.weforum.org/stories/2025/09/ai-trust-crisis-public-private-partnerships/?ref=breakthroughpursuit.com#:~:text=Artificial%20intelligence%20,deployed%20responsibly%20within%20their%20organizations" rel="noopener noreferrer"&gt;[9]&lt;/a&gt; &lt;a href="https://www.weforum.org/stories/2025/09/ai-trust-crisis-public-private-partnerships/?ref=breakthroughpursuit.com#:~:text=Across%20sectors%2C%20distrust%20has%20visibly,adoption%20is%20%E2%80%9Csafe%20by%20default%E2%80%9D" rel="noopener noreferrer"&gt;[10]&lt;/a&gt; &lt;a href="https://www.weforum.org/stories/2025/09/ai-trust-crisis-public-private-partnerships/?ref=breakthroughpursuit.com#:~:text=The%20very%20reasons%20there%20is,world%20impact" rel="noopener noreferrer"&gt;[11]&lt;/a&gt; &lt;a href="https://www.weforum.org/stories/2025/09/ai-trust-crisis-public-private-partnerships/?ref=breakthroughpursuit.com#:~:text=government%20legitimacy%2C%20industry%20capability%20and,measurable%20controls%2C%20audits%20and%20redress" rel="noopener noreferrer"&gt;[12]&lt;/a&gt; &lt;a href="https://www.weforum.org/stories/2025/09/ai-trust-crisis-public-private-partnerships/?ref=breakthroughpursuit.com#:~:text=standards%2C%20transparency%2C%20audit%20and%20accountability%2C,adoption%20is%20%E2%80%9Csafe%20by%20default%E2%80%9D" rel="noopener noreferrer"&gt;[13]&lt;/a&gt; &lt;a href="https://www.weforum.org/stories/2025/09/ai-trust-crisis-public-private-partnerships/?ref=breakthroughpursuit.com#:~:text=Governments%20are%20catching%20up%20too%3A,%E2%80%9D" rel="noopener noreferrer"&gt;[14]&lt;/a&gt; &lt;a href="https://www.weforum.org/stories/2025/09/ai-trust-crisis-public-private-partnerships/?ref=breakthroughpursuit.com#:~:text=line,%E2%80%9D" rel="noopener noreferrer"&gt;[15]&lt;/a&gt; &lt;a href="https://www.weforum.org/stories/2025/09/ai-trust-crisis-public-private-partnerships/?ref=breakthroughpursuit.com#:~:text=distribution" rel="noopener noreferrer"&gt;[25]&lt;/a&gt; Why public-private 
partnerships key to building AI trust | World Economic Forum&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.weforum.org/stories/2025/09/ai-trust-crisis-public-private-partnerships/?ref=breakthroughpursuit.com" rel="noopener noreferrer"&gt;https://www.weforum.org/stories/2025/09/ai-trust-crisis-public-private-partnerships/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://montrealethics.ai/wp-content/uploads/2020/10/State-of-AI-Ethics-Oct-2020.pdf?ref=breakthroughpursuit.com#:~:text=2%29%20They%20focus%20on%20implementation,is%20where%20critical%20discussions%20on" rel="noopener noreferrer"&gt;[16]&lt;/a&gt; montrealethics.ai&lt;/p&gt;

&lt;p&gt;&lt;a href="https://montrealethics.ai/wp-content/uploads/2020/10/State-of-AI-Ethics-Oct-2020.pdf?ref=breakthroughpursuit.com" rel="noopener noreferrer"&gt;https://montrealethics.ai/wp-content/uploads/2020/10/State-of-AI-Ethics-Oct-2020.pdf&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://artificialintelligenceact.eu/assessment/eu-ai-act-compliance-checker/?ref=breakthroughpursuit.com#:~:text=EU%20AI%20Act%20Compliance%20Checker,Systems%20posing%20unacceptable" rel="noopener noreferrer"&gt;[17]&lt;/a&gt; EU AI Act Compliance Checker | EU Artificial Intelligence Act&lt;/p&gt;

&lt;p&gt;&lt;a href="https://artificialintelligenceact.eu/assessment/eu-ai-act-compliance-checker/?ref=breakthroughpursuit.com" rel="noopener noreferrer"&gt;https://artificialintelligenceact.eu/assessment/eu-ai-act-compliance-checker/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.anecdotes.ai/learn/ai-regulations-in-2025-us-eu-uk-japan-china-and-more?ref=breakthroughpursuit.com#:~:text=Security%3A%20The%20Act%20mandates%20that,Providers%20must%20conduct%20risk" rel="noopener noreferrer"&gt;[18]&lt;/a&gt; AI Regulations in 2025: US, EU, UK, Japan, China &amp;amp; More&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.anecdotes.ai/learn/ai-regulations-in-2025-us-eu-uk-japan-china-and-more?ref=breakthroughpursuit.com" rel="noopener noreferrer"&gt;https://www.anecdotes.ai/learn/ai-regulations-in-2025-us-eu-uk-japan-china-and-more&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.reuters.com/technology/un-chief-backs-idea-global-ai-watchdog-like-nuclear-agency-2023-06-12/?ref=breakthroughpursuit.com#:~:text=He%20has%20announced%20plans%20to,of%20law%20and%20common%20good" rel="noopener noreferrer"&gt;[19]&lt;/a&gt; &lt;a href="https://www.reuters.com/technology/un-chief-backs-idea-global-ai-watchdog-like-nuclear-agency-2023-06-12/?ref=breakthroughpursuit.com#:~:text=,must%20take%20those%20warnings%20seriously" rel="noopener noreferrer"&gt;[26]&lt;/a&gt; UN chief backs idea of global AI watchdog like nuclear agency | Reuters&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.reuters.com/technology/un-chief-backs-idea-global-ai-watchdog-like-nuclear-agency-2023-06-12/?ref=breakthroughpursuit.com" rel="noopener noreferrer"&gt;https://www.reuters.com/technology/un-chief-backs-idea-global-ai-watchdog-like-nuclear-agency-2023-06-12/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.europarl.europa.eu/doceo/document/O-9-2022-000028_EN.html?ref=breakthroughpursuit.com#:~:text=The%20Dutch%20childcare%20benefit%20scandal%2C,individuals%20applying%20for%20childcare" rel="noopener noreferrer"&gt;[20]&lt;/a&gt; The Dutch childcare benefit scandal, institutional racism and ...&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.europarl.europa.eu/doceo/document/O-9-2022-000028_EN.html?ref=breakthroughpursuit.com" rel="noopener noreferrer"&gt;https://www.europarl.europa.eu/doceo/document/O-9-2022-000028_EN.html&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.privacysecurityacademy.com/wp-content/uploads/2022/09/EXCERPT-Biden-Blueprint-for-AI-Bill-of-Rights.pdf?ref=breakthroughpursuit.com#:~:text=Academy%20www,modify%2C%20or%20direct%20an" rel="noopener noreferrer"&gt;[21]&lt;/a&gt; [PDF] Blueprint for an AI Bill of Rights - Privacy + Security Academy&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.privacysecurityacademy.com/wp-content/uploads/2022/09/EXCERPT-Biden-Blueprint-for-AI-Bill-of-Rights.pdf?ref=breakthroughpursuit.com" rel="noopener noreferrer"&gt;https://www.privacysecurityacademy.com/wp-content/uploads/2022/09/EXCERPT-Biden-Blueprint-for-AI-Bill-of-Rights.pdf&lt;/a&gt;&lt;/p&gt;

</description>
      <category>aigovernance</category>
      <category>trustworthyai</category>
      <category>airegulation</category>
      <category>aiethics</category>
    </item>
    <item>
      <title>Why AI Needs a Body</title>
      <dc:creator>Breakthrough Pursuit</dc:creator>
      <pubDate>Sat, 11 Oct 2025 14:53:27 +0000</pubDate>
      <link>https://forem.com/breakthroughpursuit/why-ai-needs-a-body-1gi2</link>
      <guid>https://forem.com/breakthroughpursuit/why-ai-needs-a-body-1gi2</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp0qxifyzqzeqibu0mqfc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp0qxifyzqzeqibu0mqfc.png" alt="Why AI Needs a Body" width="800" height="448"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Introduction – When AI Stumbles Without Senses:&lt;/strong&gt; In 2017, a security robot built to patrol a Washington D.C. office complex made headlines for all the wrong reasons: the autonomous sentry steered itself straight into a fountain and drowned&lt;a href="https://www.theverge.com/tldr/2017/7/17/15986042/dc-security-robot-k5-falls-into-water?ref=breakthroughpursuit.com#:~:text=We%20don%E2%80%99t%20yet%20know%20the,horrifying%20news%3F%20Did%20it%20realize" rel="noopener noreferrer"&gt;[1]&lt;/a&gt;. No malicious hackers were involved – the robot simply lacked the instinct to recognize a flight of steps and a pool of water as mortal hazards. Fast forward to 2023, and a very different kind of AI blunder unfolded in a New York courtroom. A seasoned attorney submitted a brief citing six precedents that &lt;strong&gt;did not exist&lt;/strong&gt;, after trusting an AI language model that confidently fabricated court cases out of thin air&lt;a href="https://www.theguardian.com/technology/2023/jun/23/two-us-lawyers-fined-submitting-fake-court-citations-chatgpt?ref=breakthroughpursuit.com#:~:text=A%20US%20judge%20has%20fined,submitted%20in%20a%20court%20filing" rel="noopener noreferrer"&gt;[2]&lt;/a&gt;. These episodes – one physical, one virtual – capture a common flaw at the cutting edge of artificial intelligence. From driverless cars that fail to notice pedestrians to chatbots that spin plausible falsehoods, today’s most advanced AIs remain oddly &lt;em&gt;out of touch&lt;/em&gt; with reality. They possess formidable computational brains, but no bodies or sensory grounding in the world. And that missing piece can make them clumsy, gullible, or even dangerous.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Disembodied Dilemma:&lt;/strong&gt; Decades of AI research achieved impressive feats in narrow domains – machines that can &lt;strong&gt;master chess&lt;/strong&gt;, &lt;strong&gt;generate fluent text&lt;/strong&gt;, or &lt;strong&gt;recognize faces&lt;/strong&gt; – yet these systems operate in abstraction, detached from the physical context humans take for granted. A child learns that ice is slippery by &lt;strong&gt;skinning their knee on a frozen puddle&lt;/strong&gt;; a disembodied AI, by contrast, might only “know” ice via keywords in a database. Lacking lived experience, such AI can misjudge cause and effect or overlook obvious cues. Technologists often call this the &lt;strong&gt;grounding problem&lt;/strong&gt;: without real sensorimotor feedback, an AI has no true understanding of what its predictions or decisions mean in the physical world. We see the consequences when a chatbot’s advice turns out lethally flawed, or when a warehouse robot grasps at an object with the delicacy of a wrecking ball. However sophisticated their algorithms, disembodied AIs are like brilliant minds in sensory deprivation tanks – intelligent, perhaps, but not truly &lt;strong&gt;aware&lt;/strong&gt;. This is why a growing movement in AI is arguing that &lt;em&gt;real&lt;/em&gt; intelligence needs a body. To move beyond brittle logic and hallucinated answers, AI must step out of the server farm and into the sensory, unpredictable, messy real world.&lt;/p&gt;

&lt;h2&gt;
  
  
  Brains Learn from Bodies: Insights from Cognitive Science
&lt;/h2&gt;

&lt;p&gt;A century of cognitive science and psychology suggests that &lt;strong&gt;minds and bodies form a single, integrated system&lt;/strong&gt;. Human intelligence was never meant to float free of a physical form – from infancy, we learn by doing. Psychologist Jean Piaget noted that babies in the sensorimotor stage discover fundamental concepts like object permanence through hands-on play. In other words, our brains evolved to think &lt;strong&gt;by engaging with the world&lt;/strong&gt;, not by contemplating it in an abstract vacuum. Modern research in embodied cognition reinforces this idea. Perception, motion, and reasoning are deeply intertwined: our understanding of “balance” is rooted in the felt experience of not toppling over; our concept of “distance” is grounded in the time it takes to walk or reach&lt;a href="https://www.nature.com/articles/s41467-021-25874-z?error=cookies_not_supported&amp;amp;code=46e41914-5d0d-4efd-9f7a-b572b84e61ab&amp;amp;ref=breakthroughpursuit.com#:~:text=display%20remarkable%20degrees%20of%20embodied,vision7%20%2C%20or%20games%2032" rel="noopener noreferrer"&gt;[3]&lt;/a&gt;. In AI terms, an algorithm that only ever saw images of a cup might know what a cup looks like, but an embodied AI that has &lt;em&gt;felt&lt;/em&gt; a cup – lifted it, sensed its weight sloshing with liquid – gains a richer, more actionable understanding of “cup-ness.”&lt;/p&gt;

&lt;p&gt;Neuroscientists often point out that &lt;strong&gt;intelligence in nature is intrinsically embodied&lt;/strong&gt;. Every animal brain evolved in tandem with a body, finely tuned to survive in some environment. A bird’s brain is wired together with its wings and eyesight; a dolphin’s intelligence is inseparable from its sleek, swimming form. Even our metaphors for thinking (“grasping” an idea, “tackling” a problem) betray the bodily basis of cognition. This embodied view challenges the old Cartesian notion of mind-body separation. As one landmark philosophy paper put it, &lt;em&gt;our reason itself is shaped by the body’s interactions&lt;/em&gt; – we make sense of abstract concepts by grounding them in physical experience&lt;a href="https://arxiv.org/html/2402.03824v3?ref=breakthroughpursuit.com#:~:text=philosophy%20and%20cognitive%20science%20that,embedded%2C%20and%20extended%20aspects%20of" rel="noopener noreferrer"&gt;[4]&lt;/a&gt;. For artificial intelligence, the implication is profound: algorithms might achieve far greater understanding if they too have sensory-motor loops connecting them to reality. Instead of training solely on text or images, an AI endowed with cameras, microphones, tactile sensors, and locomotion can learn by exploring, by trial-and-error, by direct &lt;strong&gt;experience&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Critically, an AI with a body can learn &lt;em&gt;causality&lt;/em&gt; in a way disembodied models cannot. A large language model might read about how pushing a glass makes it fall and shatter, but a robot can &lt;strong&gt;push the glass&lt;/strong&gt; and see the consequences. That difference matters. Disembodied AIs excel at finding correlations in data, but they struggle with cause-and-effect. As researchers have noted, &lt;strong&gt;LLMs (large language models) are not designed to grasp true causality – they predict words based on statistical patterns – whereas an embodied agent can directly observe and test how its actions change the world&lt;/strong&gt;&lt;a href="https://arxiv.org/html/2402.03824v3?ref=breakthroughpursuit.com#:~:text=along%20with%20the%20pivotal%20ability,the%20reasons%20behind%20those%20outcomes" rel="noopener noreferrer"&gt;[5]&lt;/a&gt;. The embodied AI literally &lt;strong&gt;feels&lt;/strong&gt; the mistake when it takes a wrong step or drops an object, and can adjust its behavior accordingly. This sensorimotor learning creates a feedback loop for common sense. It’s the difference between &lt;em&gt;knowing&lt;/em&gt; and &lt;em&gt;understanding&lt;/em&gt;. In short, giving AI a body isn’t just an academic novelty; it taps into the fundamental way intelligence arises, through continuous cycles of perception and action.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Embodiment Advantage Index: How Physicality Boosts AI
&lt;/h2&gt;

&lt;p&gt;To quantify why having a body makes an AI smarter and safer, consider an &lt;strong&gt;Embodiment Advantage Index&lt;/strong&gt; – a framework for measuring the gains from grounding AI in the physical world. Across multiple dimensions of performance, adding embodiment provides a significant uplift:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Learning Speed and Adaptability:&lt;/strong&gt; An embodied AI can learn through &lt;em&gt;real-time interaction&lt;/em&gt;. A warehouse robot, for instance, improves its grasping technique by physically picking thousands of items, learning from each fumble. Studies show that agents with the right morphology and environment can rapidly learn complex behaviors that static algorithms struggle with&lt;a href="https://www.nature.com/articles/s41467-021-25874-z?error=cookies_not_supported&amp;amp;code=46e41914-5d0d-4efd-9f7a-b572b84e61ab&amp;amp;ref=breakthroughpursuit.com#:~:text=diverse%20agent%20morphologies%20to%20learn,Third%2C%20we" rel="noopener noreferrer"&gt;[6]&lt;/a&gt;&lt;a href="https://www.nature.com/articles/s41467-021-25874-z?error=cookies_not_supported&amp;amp;code=46e41914-5d0d-4efd-9f7a-b572b84e61ab&amp;amp;ref=breakthroughpursuit.com#:~:text=display%20remarkable%20degrees%20of%20embodied,vision7%20%2C%20or%20games%2032" rel="noopener noreferrer"&gt;[3]&lt;/a&gt;. Like animals evolved for their niches, robots with bodies tuned to tasks (wheels, arms, grippers, etc.) pick up new skills faster. Physical trial-and-error, though sometimes messy, teaches lessons in minutes that might take a disembodied simulation endless iterations to discover.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Contextual Understanding and Common Sense:&lt;/strong&gt; Embodied AI has “skin in the game.” A chatbot might blithely recommend a toxic chemical as a household cleaner if its training data has a gap, but a robot working in a kitchen would be constrained by sensors (the acrid smell, the corrosive touch) to know something is off. Being situated in the real world forces AI to align its outputs with reality. In effect, &lt;strong&gt;a physically grounded AI develops an internal model of the world that is more accurate and commonsensical&lt;/strong&gt; – it knows water is wet, fire is hot, and gravity makes things fall down, not because it read it in a textbook, but because it &lt;em&gt;experienced&lt;/em&gt; these truths. This grounded knowledge dramatically cuts down on the absurd errors and “hallucinations” seen in disembodied models. As one group of AI researchers put it, an embodied agent can even learn a &lt;strong&gt;“sense of truth”&lt;/strong&gt; – since an agent tied to real-world survival quickly figures out that accurate beliefs (e.g. which berries are edible) are beneficial&lt;a href="https://arxiv.org/html/2402.03824v3?ref=breakthroughpursuit.com#:~:text=Pursuing%20the%20development%20of%20AGI%2C,and%20evolving%20without%20human%20intervention" rel="noopener noreferrer"&gt;[7]&lt;/a&gt;. While current AIs won’t be foraging for berries, the principle is the same: a bot with real-world feedback is incentivized to get its facts right.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Robustness and Resilience:&lt;/strong&gt; Life is noisy and unpredictable. A robot operating in a busy factory or on a city street must handle fluctuating conditions – moving people, weather changes, random obstacles. Embodied AI, therefore, tends to develop more &lt;strong&gt;robust perception and control&lt;/strong&gt;. Its vision system learns to focus on essential cues (the pedestrian darting across the road) amid distractions. Its decision-making is continually stress-tested by reality, making it less brittle than a model that has only seen perfectly curated data. When conditions shift or something unanticipated occurs, the embodied AI can fall back on its experiential repertoire: “I’ve seen something like this before, here’s what worked.” Over time, these systems build &lt;em&gt;antifragility&lt;/em&gt; – they get better under real-world strain, whereas disembodied AIs often crumble outside the neat bounds of their training set.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Human Compatibility and Trust:&lt;/strong&gt; We humans are embodied creatures, so we instinctively trust intelligence that we can see and feel operating in our world. An AI that can look a person in the eye (through a camera “eye”), navigate our physical spaces, and respond to touch or tone is one that people find more relatable and accountable. Consider how we react differently to a navigation app versus a physical robot guide: if the app errs, we curse the software; if the robot guide makes the same mistake but then visibly “realizes” and corrects itself, we’re more forgiving – we see it learning, almost empathize with it. Giving AI a body opens up channels of non-verbal communication (facial expressions on a humanoid robot, gestures, vocal tone) that can make collaboration with humans more fluid. Importantly, &lt;strong&gt;embodied AI also makes it easier to enforce accountability&lt;/strong&gt; – a robot in the lobby can’t hide its actions in a black box; it either delivered the package or it didn’t. This physical presence creates a natural audit trail and deterrent for undesirable behavior. As AI moves into shared spaces, having a body that humans can observe and interact with will be key to building trust and social acceptance.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In sum, the &lt;em&gt;Embodiment Advantage Index&lt;/em&gt; for AI shows positive scores across learning efficiency, accuracy, robustness, and trust. Real-world grounding isn’t a magic fix for every problem – but it is a powerful accelerator for moving AI from artificial savant to genuine, reliable intelligence.&lt;/p&gt;
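&lt;p&gt;The article defines no formal scoring rule for the index, but the idea can be made concrete. As a purely illustrative sketch, it could be computed as a weighted average over the four dimensions above – all weights, dimension scores, and names here are invented for demonstration:&lt;/p&gt;

```python
# Illustrative "Embodiment Advantage Index": a weighted average of the
# four dimensions discussed above. The dimension names follow the
# article; the weights and example scores are hypothetical.

DIMENSIONS = {
    "learning_speed": 0.25,
    "contextual_understanding": 0.30,
    "robustness": 0.25,
    "human_compatibility": 0.20,
}

def embodiment_advantage_index(scores: dict) -> float:
    """Weighted average of per-dimension gains, each scored 0-10.

    scores[d] is how much embodiment improved dimension d relative to a
    disembodied baseline; the result is a single 0-10 index.
    """
    if set(scores) != set(DIMENSIONS):
        raise ValueError(f"expected scores for {sorted(DIMENSIONS)}")
    return sum(DIMENSIONS[d] * scores[d] for d in DIMENSIONS)

# Hypothetical comparison point: an embodied warehouse robot scored
# against a text-only baseline of 5 on every dimension.
robot = {"learning_speed": 8, "contextual_understanding": 9,
         "robustness": 7, "human_compatibility": 8}
print(round(embodiment_advantage_index(robot), 2))  # -> 8.05
```

&lt;p&gt;Weighting contextual understanding highest mirrors the argument above that grounded common sense is the largest single gain; in practice the weights would be calibrated per deployment.&lt;/p&gt;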

&lt;h2&gt;
  
  
  Policy and Governance: Ground Rules for Grounded AI
&lt;/h2&gt;

&lt;p&gt;As AI systems acquire bodies and venture into the physical world, the stakes of failure rise – and policymakers have taken notice. Around the globe, a consensus is emerging on the &lt;strong&gt;governance principles&lt;/strong&gt; needed to guide AI’s next wave. Transparency, accountability, safety, and human rights are the common pillars. For example, the OECD’s multinational &lt;strong&gt;AI Principles&lt;/strong&gt; (adopted by over 40 countries) emphasize that AI should be &lt;strong&gt;fair, transparent, secure, and accountable&lt;/strong&gt;, all while upholding human rights and democratic values&lt;a href="https://oecd.ai/en/ai-principles?ref=breakthroughpursuit.com#:~:text=,values%2C%20including%20fairness%20and%20privacy" rel="noopener noreferrer"&gt;[8]&lt;/a&gt;&lt;a href="https://oecd.ai/en/ai-principles?ref=breakthroughpursuit.com#:~:text=The%20OECD%20AI%20Principles%20were,robust%20and%20fit%20for%20purpose" rel="noopener noreferrer"&gt;[9]&lt;/a&gt;. This means an embodied AI like a caregiving robot should be able to explain its decisions (why it adjusted a patient’s medication dosage) and must have fail-safes to prevent harm. Likewise, the United States’ &lt;strong&gt;NIST AI Risk Management Framework&lt;/strong&gt; – a voluntary standard influential in industry – calls for techniques to make AI systems &lt;strong&gt;accountable, transparent, and robust against threats, while respecting privacy and civil liberties&lt;/strong&gt;&lt;a href="https://www.paloaltonetworks.com/cyberpedia/nist-ai-risk-management-framework?ref=breakthroughpursuit.com#:~:text=The%20NIST%20AI%20Risk%20Management,respecting%20privacy%20and%20civil%20liberties" rel="noopener noreferrer"&gt;[10]&lt;/a&gt;. In practice, this could involve rigorous testing of a delivery drone’s collision-avoidance algorithms, disclosure of when you’re interacting with a machine rather than a person, and built-in safeguards so robots obey safety regulations.&lt;/p&gt;

&lt;p&gt;Early movers like the European Union are also enacting laws (e.g. the AI Act, adopted in 2024) that classify high-risk AI uses – a category that includes embodied applications like autonomous vehicles or medical robots – and impose requirements for &lt;strong&gt;risk assessments and human oversight&lt;/strong&gt;. The overarching theme is clear: as AI transitions from virtual to embodied, &lt;strong&gt;governance must extend from data ethics into physical ethics&lt;/strong&gt;. How do we certify a robot’s safety the way we certify an airplane’s? Who is liable if an AI-powered device causes an accident? Can an autonomous robot be granted any form of legal personhood, or is it always a tool? These debates are ongoing, but the direction is toward &lt;strong&gt;greater transparency and control&lt;/strong&gt;. The world’s leading AI principles converge on one point above all – AI must remain &lt;strong&gt;“human-centric”&lt;/strong&gt; and serve the public good, even as it gains autonomy. In the context of embodied AI, that translates to something tangible: robots and AI systems should behave in ways that are &lt;strong&gt;understandable, governable, and beneficial&lt;/strong&gt; on human terms. We’re not just teaching AI to walk; we’re setting the ground rules for how it walks alongside us in society.&lt;/p&gt;

&lt;h2&gt;
  
  
  Preparing for Embodied AI: A Roadmap for Business Leaders
&lt;/h2&gt;

&lt;p&gt;For executives and entrepreneurs, the rise of embodied AI presents a strategic inflection point. Just as the internet and mobile computing reshaped business in previous eras, giving AI a physical form promises to redefine industries – from manufacturing and logistics to healthcare, retail, and beyond. Preparing for this next wave isn’t a matter of &lt;strong&gt;distant futurism&lt;/strong&gt;; it’s a competitive imperative starting now. Here are four high-impact actions for leaders to position their organizations for the age of embodied intelligence:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Experiment on the Edge:&lt;/strong&gt; Don’t wait for the technology to fully mature – &lt;em&gt;get hands-on with embodied AI prototypes today&lt;/em&gt;. Companies with physical operations should be piloting projects that integrate AI with sensors, robots, or IoT devices on the factory floor, warehouse, or storefront. These controlled experiments build invaluable understanding of the technology’s capabilities and limitations&lt;a href="https://www.bain.com/insights/humanoid-robots-at-work-what-executives-need-to-know/?ref=breakthroughpursuit.com#:~:text=,clear%20understanding%20of%20the%20market" rel="noopener noreferrer"&gt;[11]&lt;/a&gt;. A forward-thinking firm might set up a “robotics sandbox” in one distribution center or deploy a few service robots in a flagship store. The goal is to learn by doing: discover where embodied AI can add value (and where it can’t yet), train your teams to work alongside intelligent machines, and start collecting real-world data. Early experimentation separates hype from reality and uncovers those practical use-cases where physical AI can boost productivity or enhance customer experience.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Map Your Embodied AI Strategy:&lt;/strong&gt; Just as every company today needs a digital strategy, it’s time to craft your &lt;strong&gt;embodied AI strategy&lt;/strong&gt;. This means scanning the horizon for how rapidly the field is advancing and identifying where your business could leverage it. Major tech players and startups alike are racing ahead – from humanoid warehouse workers to autonomous delivery drones – so stay informed on industry developments. &lt;strong&gt;Conduct scenario planning&lt;/strong&gt;: if general-purpose robots become affordable in five years, which parts of your operations would you augment or automate? Technology companies should pinpoint whether their competitive edge will lie in hardware (e.g. custom robotic arms), software (AI vision algorithms, control systems), or services (integration and maintenance of robots)&lt;a href="https://www.bain.com/insights/humanoid-robots-at-work-what-executives-need-to-know/?ref=breakthroughpursuit.com#:~:text=and%20where%20they%20may%20add,define%20the%20right%20strategic%20approach" rel="noopener noreferrer"&gt;[12]&lt;/a&gt;. Others, like retailers or hospitals, should start outlining policies for deploying robots in customer-facing roles – how to ensure safety, how to brand the experience, how to retrain staff for oversight roles. By embedding embodied AI into your long-range plans, you ensure your organization is ready to ride the wave rather than be washed over by it.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Invest in Skills and Partnerships:&lt;/strong&gt; The coming era will blur lines between traditional IT, data science, and engineering domains. &lt;strong&gt;Build cross-functional teams&lt;/strong&gt; that bring together software engineers, roboticists, UX designers, and experts in the specific physical environment (veteran warehouse managers, surgeons, etc., depending on context). Upskill your workforce with training in robotics and AI – today’s automation technician might need to become tomorrow’s “robot operations” supervisor. Additionally, consider partnerships to accelerate learning. Collaborate with robotics startups, join industry consortia, or fund research at universities. Such partnerships can give you early access to innovation and talent. Much like businesses partnered with cloud providers a decade ago, partnering with an embodied AI platform now (be it for autonomous vehicles, factory robots, or smart sensors) could secure you a critical head start. &lt;strong&gt;Culture-wise&lt;/strong&gt;, prepare your organization for human-machine collaboration. Encourage teams to see robots not as job threats, but as tools that can take over drudgery and augment human creativity – a message that is key for morale and adoption.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Embed Ethics and Safety from Day One:&lt;/strong&gt; With AI literally stepping into the world, &lt;strong&gt;trust and safety are not optional&lt;/strong&gt; – they are foundational to success. Integrate ethical guidelines and risk management into every embodied AI initiative. This might mean establishing an internal review board for new AI deployments, similar to how pharma companies review drug safety. It means consulting legal and compliance early: ensure your robotic product or AI service complies with emerging regulations (for instance, EU requirements on AI transparency or U.S. safety standards for autonomous machines). Proactively engage with employees and customers about what embodied AI will mean for them. Companies trialing humanoid robots in retail, for example, should gauge customer comfort levels and clearly communicate the robot’s purpose and limitations. Cybersecurity also becomes paramount when AI systems can move around – you don’t want a hacker turning your autonomous vehicle into a weapon. By baking in a safety-first mindset and ethical considerations, you not only reduce risks but also signal to the market and regulators that your brand can be trusted in this brave new world of physical AI. In an environment of heightened scrutiny, this can become a competitive advantage.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Grounding Readiness Checklist:&lt;/strong&gt; &lt;em&gt;Is your organization ready to capitalize on embodied AI?&lt;/em&gt; Use this quick checklist as a gauge of your preparedness:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;✅ Real-World Data Streams:&lt;/strong&gt; Do you collect and integrate sensor data from products, equipment, or user environments to train and inform AI models?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;✅ Talent and Training:&lt;/strong&gt; Have you developed in-house expertise (or partnerships) in robotics, IoT, and human-machine interaction, and trained staff to work alongside intelligent machines?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;✅ Ethical Guardrails:&lt;/strong&gt; Are there guidelines or oversight processes in place to ensure AI actions in the physical world meet safety standards and align with company values?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;✅ Pilot Projects:&lt;/strong&gt; Are you running (or planning) small-scale pilots that put AI into real environmental contexts, with metrics to evaluate impact and learnings for scale-up?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;✅ Stakeholder Communication:&lt;/strong&gt; Have you started conversations with employees, customers, and regulators about your plans for embodied AI, addressing concerns about safety, jobs, and data privacy?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you can’t tick most of the boxes yet, you may risk falling behind as the embodied intelligence era unfolds.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion: The Next Wave of AI is Physical
&lt;/h2&gt;

&lt;p&gt;After decades confined to glowing screens and cloud servers, artificial intelligence is &lt;strong&gt;bursting into the physical realm&lt;/strong&gt;. The journey from disembodied software to embodied agent will define the next wave of AI innovation – and it’s a wave that businesses and societies must be ready to surf. The case for embodiment rests on a simple truth: &lt;strong&gt;real intelligence doesn’t float in the ether; it grows from grounding in reality&lt;/strong&gt;. We’ve seen what happens when that grounding is absent – robots that face-plant into fountains, and algorithms that can’t tell fact from fantasy. By contrast, an AI endowed with a body, sensors, and real-world feedback loops has the chance to &lt;strong&gt;learn authentically&lt;/strong&gt;, to understand cause and effect, to earn our trust by acting reliably in our shared world.&lt;/p&gt;

&lt;p&gt;The opportunity is enormous. Analysts project that embodied AI – spanning robotics, autonomous vehicles, and smart machines of all kinds – could unlock a &lt;strong&gt;$5 trillion market by 2050&lt;/strong&gt; &lt;a href="https://www.morganstanley.com.au/ideas/embodied-ai?ref=breakthroughpursuit.com#:~:text=With%20a%20projected%20market%20opportunity,new%20era%20of%20technology%20unfolds" rel="noopener noreferrer"&gt;[13]&lt;/a&gt;, transforming economies and daily life. But beyond the dollars, there is a more human promise. If we build it right, the next generation of AI won’t be an alien intelligence locked in a computer, but a partner we can collaborate with in factories, hospitals, and homes. It will take the form of machines that can see, hear, touch – and &lt;em&gt;learn&lt;/em&gt; in the same environment we do. Such AI will be more transparent, because we can observe its behavior; more accountable, because its mistakes have physical consequences; and more innovative, because it draws inspiration from the full richness of the world.&lt;/p&gt;

&lt;p&gt;In the end, giving AI a body is about closing the loop between &lt;strong&gt;knowledge and experience&lt;/strong&gt;. The robots and intelligent systems of the coming years will increasingly loop sensing, thinking, and acting in continuous harmony. They will drive themselves to work, stock our shelves, care for the elderly, explore disaster zones – all while adapting on the fly. Companies and communities that recognize this shift now, that start grounding their AI ambitions in real-world projects and principled frameworks, will lead the way. We are on the cusp of AI’s &lt;strong&gt;embodied evolution&lt;/strong&gt;. It’s an exciting, occasionally nerve-wracking, but ultimately necessary step in making artificial intelligence more like the best intelligence we know – the kind that lives not just in the head, but in hands, eyes, and feet. The future of AI is out there on solid ground, and it’s time for us to walk forward with it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Sources:&lt;/strong&gt; &lt;a href="https://www.theverge.com/tldr/2017/7/17/15986042/dc-security-robot-k5-falls-into-water?ref=breakthroughpursuit.com#:~:text=We%20don%E2%80%99t%20yet%20know%20the,horrifying%20news%3F%20Did%20it%20realize" rel="noopener noreferrer"&gt;[1]&lt;/a&gt;&lt;a href="https://www.theguardian.com/technology/2023/jun/23/two-us-lawyers-fined-submitting-fake-court-citations-chatgpt?ref=breakthroughpursuit.com#:~:text=A%20US%20judge%20has%20fined,submitted%20in%20a%20court%20filing" rel="noopener noreferrer"&gt;[2]&lt;/a&gt;&lt;a href="https://www.nature.com/articles/s41467-021-25874-z?error=cookies_not_supported&amp;amp;code=46e41914-5d0d-4efd-9f7a-b572b84e61ab&amp;amp;ref=breakthroughpursuit.com#:~:text=display%20remarkable%20degrees%20of%20embodied,vision7%20%2C%20or%20games%2032" rel="noopener noreferrer"&gt;[3]&lt;/a&gt;&lt;a href="https://arxiv.org/html/2402.03824v3?ref=breakthroughpursuit.com#:~:text=along%20with%20the%20pivotal%20ability,the%20reasons%20behind%20those%20outcomes" rel="noopener noreferrer"&gt;[5]&lt;/a&gt;&lt;a href="https://arxiv.org/html/2402.03824v3?ref=breakthroughpursuit.com#:~:text=Pursuing%20the%20development%20of%20AGI%2C,and%20evolving%20without%20human%20intervention" rel="noopener noreferrer"&gt;[7]&lt;/a&gt;&lt;a href="https://oecd.ai/en/ai-principles?ref=breakthroughpursuit.com#:~:text=,values%2C%20including%20fairness%20and%20privacy" rel="noopener noreferrer"&gt;[8]&lt;/a&gt;&lt;a href="https://oecd.ai/en/ai-principles?ref=breakthroughpursuit.com#:~:text=The%20OECD%20AI%20Principles%20were,robust%20and%20fit%20for%20purpose" rel="noopener noreferrer"&gt;[9]&lt;/a&gt;&lt;a href="https://www.paloaltonetworks.com/cyberpedia/nist-ai-risk-management-framework?ref=breakthroughpursuit.com#:~:text=The%20NIST%20AI%20Risk%20Management,respecting%20privacy%20and%20civil%20liberties" rel="noopener noreferrer"&gt;[10]&lt;/a&gt;&lt;a 
href="https://www.bain.com/insights/humanoid-robots-at-work-what-executives-need-to-know/?ref=breakthroughpursuit.com#:~:text=,clear%20understanding%20of%20the%20market" rel="noopener noreferrer"&gt;[11]&lt;/a&gt;&lt;a href="https://www.bain.com/insights/humanoid-robots-at-work-what-executives-need-to-know/?ref=breakthroughpursuit.com#:~:text=and%20where%20they%20may%20add,define%20the%20right%20strategic%20approach" rel="noopener noreferrer"&gt;[12]&lt;/a&gt;&lt;a href="https://www.morganstanley.com.au/ideas/embodied-ai?ref=breakthroughpursuit.com#:~:text=With%20a%20projected%20market%20opportunity,new%20era%20of%20technology%20unfolds" rel="noopener noreferrer"&gt;[13]&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;&lt;a href="https://www.theverge.com/tldr/2017/7/17/15986042/dc-security-robot-k5-falls-into-water?ref=breakthroughpursuit.com#:~:text=We%20don%E2%80%99t%20yet%20know%20the,horrifying%20news%3F%20Did%20it%20realize" rel="noopener noreferrer"&gt;[1]&lt;/a&gt; DC security robot quits job by drowning itself in a fountain | The Verge&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.theverge.com/tldr/2017/7/17/15986042/dc-security-robot-k5-falls-into-water?ref=breakthroughpursuit.com" rel="noopener noreferrer"&gt;https://www.theverge.com/tldr/2017/7/17/15986042/dc-security-robot-k5-falls-into-water&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.theguardian.com/technology/2023/jun/23/two-us-lawyers-fined-submitting-fake-court-citations-chatgpt?ref=breakthroughpursuit.com#:~:text=A%20US%20judge%20has%20fined,submitted%20in%20a%20court%20filing" rel="noopener noreferrer"&gt;[2]&lt;/a&gt; Two US lawyers fined for submitting fake court citations from ChatGPT | ChatGPT | The Guardian&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.theguardian.com/technology/2023/jun/23/two-us-lawyers-fined-submitting-fake-court-citations-chatgpt?ref=breakthroughpursuit.com" rel="noopener noreferrer"&gt;https://www.theguardian.com/technology/2023/jun/23/two-us-lawyers-fined-submitting-fake-court-citations-chatgpt&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.nature.com/articles/s41467-021-25874-z?error=cookies_not_supported&amp;amp;code=46e41914-5d0d-4efd-9f7a-b572b84e61ab&amp;amp;ref=breakthroughpursuit.com#:~:text=display%20remarkable%20degrees%20of%20embodied,vision7%20%2C%20or%20games%2032" rel="noopener noreferrer"&gt;[3]&lt;/a&gt; &lt;a href="https://www.nature.com/articles/s41467-021-25874-z?error=cookies_not_supported&amp;amp;code=46e41914-5d0d-4efd-9f7a-b572b84e61ab&amp;amp;ref=breakthroughpursuit.com#:~:text=diverse%20agent%20morphologies%20to%20learn,Third%2C%20we" rel="noopener noreferrer"&gt;[6]&lt;/a&gt; Embodied intelligence via learning and evolution | Nature Communications&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.nature.com/articles/s41467-021-25874-z?error=cookies_not_supported&amp;amp;code=46e41914-5d0d-4efd-9f7a-b572b84e61ab&amp;amp;ref=breakthroughpursuit.com" rel="noopener noreferrer"&gt;https://www.nature.com/articles/s41467-021-25874-z?error=cookies_not_supported&amp;amp;code=46e41914-5d0d-4efd-9f7a-b572b84e61ab&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://arxiv.org/html/2402.03824v3?ref=breakthroughpursuit.com#:~:text=philosophy%20and%20cognitive%20science%20that,embedded%2C%20and%20extended%20aspects%20of" rel="noopener noreferrer"&gt;[4]&lt;/a&gt; &lt;a href="https://arxiv.org/html/2402.03824v3?ref=breakthroughpursuit.com#:~:text=along%20with%20the%20pivotal%20ability,the%20reasons%20behind%20those%20outcomes" rel="noopener noreferrer"&gt;[5]&lt;/a&gt; &lt;a href="https://arxiv.org/html/2402.03824v3?ref=breakthroughpursuit.com#:~:text=Pursuing%20the%20development%20of%20AGI%2C,and%20evolving%20without%20human%20intervention" rel="noopener noreferrer"&gt;[7]&lt;/a&gt; A Call for Embodied AI&lt;/p&gt;

&lt;p&gt;&lt;a href="https://arxiv.org/html/2402.03824v3?ref=breakthroughpursuit.com" rel="noopener noreferrer"&gt;https://arxiv.org/html/2402.03824v3&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://oecd.ai/en/ai-principles?ref=breakthroughpursuit.com#:~:text=,values%2C%20including%20fairness%20and%20privacy" rel="noopener noreferrer"&gt;[8]&lt;/a&gt; &lt;a href="https://oecd.ai/en/ai-principles?ref=breakthroughpursuit.com#:~:text=The%20OECD%20AI%20Principles%20were,robust%20and%20fit%20for%20purpose" rel="noopener noreferrer"&gt;[9]&lt;/a&gt; AI Principles Overview - OECD.AI&lt;/p&gt;

&lt;p&gt;&lt;a href="https://oecd.ai/en/ai-principles?ref=breakthroughpursuit.com" rel="noopener noreferrer"&gt;https://oecd.ai/en/ai-principles&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.paloaltonetworks.com/cyberpedia/nist-ai-risk-management-framework?ref=breakthroughpursuit.com#:~:text=The%20NIST%20AI%20Risk%20Management,respecting%20privacy%20and%20civil%20liberties" rel="noopener noreferrer"&gt;[10]&lt;/a&gt; NIST AI Risk Management Framework (AI RMF) - Palo Alto Networks&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.paloaltonetworks.com/cyberpedia/nist-ai-risk-management-framework?ref=breakthroughpursuit.com" rel="noopener noreferrer"&gt;https://www.paloaltonetworks.com/cyberpedia/nist-ai-risk-management-framework&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.bain.com/insights/humanoid-robots-at-work-what-executives-need-to-know/?ref=breakthroughpursuit.com#:~:text=,clear%20understanding%20of%20the%20market" rel="noopener noreferrer"&gt;[11]&lt;/a&gt; &lt;a href="https://www.bain.com/insights/humanoid-robots-at-work-what-executives-need-to-know/?ref=breakthroughpursuit.com#:~:text=and%20where%20they%20may%20add,define%20the%20right%20strategic%20approach" rel="noopener noreferrer"&gt;[12]&lt;/a&gt; Humanoid Robots at Work: What Executives Need to Know | Bain &amp;amp; Company&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.bain.com/insights/humanoid-robots-at-work-what-executives-need-to-know/?ref=breakthroughpursuit.com" rel="noopener noreferrer"&gt;https://www.bain.com/insights/humanoid-robots-at-work-what-executives-need-to-know/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.morganstanley.com.au/ideas/embodied-ai?ref=breakthroughpursuit.com#:~:text=With%20a%20projected%20market%20opportunity,new%20era%20of%20technology%20unfolds" rel="noopener noreferrer"&gt;[13]&lt;/a&gt; Embodied AI: Investing in the Future of Humanoids, Robotics, and Autonomous Mobility &lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.morganstanley.com.au/ideas/embodied-ai?ref=breakthroughpursuit.com" rel="noopener noreferrer"&gt;https://www.morganstanley.com.au/ideas/embodied-ai&lt;/a&gt;&lt;/p&gt;

</description>
      <category>embodiedintelligence</category>
      <category>robotics</category>
      <category>aigovernance</category>
      <category>aiethics</category>
    </item>
    <item>
      <title>Ghost Work in the AI Economy: Unveiling the Hidden Labour Behind Intelligent Systems</title>
      <dc:creator>Breakthrough Pursuit</dc:creator>
      <pubDate>Sun, 05 Oct 2025 19:26:28 +0000</pubDate>
      <link>https://forem.com/breakthroughpursuit/ghost-work-in-the-ai-economy-unveiling-the-hidden-labour-behind-intelligent-systems-681</link>
      <guid>https://forem.com/breakthroughpursuit/ghost-work-in-the-ai-economy-unveiling-the-hidden-labour-behind-intelligent-systems-681</guid>
      <description>&lt;h2&gt;
  
  
  Executive Summary
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2jop8m0qeb101qlxlgmi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2jop8m0qeb101qlxlgmi.png" alt="Ghost Work in the AI Economy: Unveiling the Hidden Labour Behind Intelligent Systems" width="800" height="448"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Artificial intelligence (AI) is often marketed as a technology that reduces labour: chatbots replace customer‑service agents, machine‑learning models sort résumés and computer vision powers driverless cars. However, every breakthrough in AI owes its success to human judgment. Millions of people around the world label images, transcribe audio, moderate user‑generated content and rank responses to make algorithms accurate, safe and polite. These “ghost workers” are largely invisible in corporate narratives yet are indispensable to AI’s functionality. Digital‑labour platforms have proliferated from &lt;strong&gt;142 in 2010 to over 777 by 2020&lt;/strong&gt; &lt;a href="https://www.ilo.org/resource/statement/digital-labour-platforms-can-advance-social-justice-focussing-worker?ref=breakthroughpursuit.com#:~:text=migrants,to%2078%20million%20in%202023" rel="noopener noreferrer"&gt;[1]&lt;/a&gt; and have mobilised tens of millions of workers&lt;a href="https://www.ilo.org/resource/statement/digital-labour-platforms-can-advance-social-justice-focussing-worker?ref=breakthroughpursuit.com#:~:text=migrants,to%2078%20million%20in%202023" rel="noopener noreferrer"&gt;[1]&lt;/a&gt;, many of whom are paid poorly and spend substantial time on unpaid overhead&lt;a href="https://arxiv.org/abs/2110.00169?ref=breakthroughpursuit.com#:~:text=work,impacts%20workers%20depending%20on%20their" rel="noopener noreferrer"&gt;[2]&lt;/a&gt;. This article exposes the hidden supply chain that powers AI, explains why ignoring this workforce poses ethical and business risks, and proposes frameworks for companies to measure, disclose and improve their labour practices. By adopting simple metrics and transparent governance, organisations can build AI systems that are trustworthy, resilient and aligned with emerging regulation.&lt;/p&gt;

&lt;h2&gt;
  
  
  1. Hidden Labour Behind AI
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1.1 The Myth of Autonomy
&lt;/h3&gt;

&lt;p&gt;AI systems are frequently portrayed as self‑sufficient machines that learn from data and operate autonomously. Marketing materials depict algorithms that “think” and “decide” on their own, fuelling public fascination with sentient machines. Yet virtually every modern AI model relies on human labour at multiple stages. Anthropologist Mary Gray and computer scientist Siddharth Suri coined the term &lt;strong&gt;ghost work&lt;/strong&gt; for the micro‑tasks — annotation, transcription, moderation and testing — that humans perform to make digital systems appear seamless. Workers draw bounding boxes around pedestrians so self‑driving cars can recognise them, rate chatbot answers to teach polite behaviour and filter violent content to keep social media safe. This fragmented, piece‑rate work is deliberately presented as auxiliary so that the myth of autonomy endures. Recognising that AI’s “magic” is in fact a reflection of human cognition is the first step toward ethical governance.&lt;/p&gt;

&lt;h3&gt;
  
  
  1.2 Scale and Growth of the Workforce
&lt;/h3&gt;

&lt;p&gt;The hidden workforce powering AI is vast and growing. The International Labour Organization reports that the number of digital‑labour platforms worldwide increased from &lt;strong&gt;142 in 2010 to more than 777 in 2020&lt;/strong&gt; &lt;a href="https://www.ilo.org/resource/statement/digital-labour-platforms-can-advance-social-justice-focussing-worker?ref=breakthroughpursuit.com#:~:text=migrants,to%2078%20million%20in%202023" rel="noopener noreferrer"&gt;[1]&lt;/a&gt;. Estimates suggest the number of people earning income on these platforms rose from &lt;strong&gt;43 million in 2018 to roughly 78 million by 2023&lt;/strong&gt; &lt;a href="https://www.ilo.org/resource/statement/digital-labour-platforms-can-advance-social-justice-focussing-worker?ref=breakthroughpursuit.com#:~:text=migrants,to%2078%20million%20in%202023" rel="noopener noreferrer"&gt;[1]&lt;/a&gt;. These figures underrepresent the millions more who work through subcontractors rather than directly on platforms. Workers are distributed across the globe: India and the Philippines host large annotation hubs, while Kenya, Nigeria and Venezuela have become centres for content moderation and safety testing. High‑income countries also contribute significant numbers of workers; a 2019 survey found that out of roughly 250,000 registered crowdworkers on a popular U.S. platform, more than 226,000 were based in the United States&lt;a href="https://www.cloudresearch.com/resources/blog/how-many-amazon-mturk-workers-are-there/?ref=breakthroughpursuit.com#:~:text=In%20a%20recent%20research%20article%2C,are%20based%20in%20the%20US" rel="noopener noreferrer"&gt;[3]&lt;/a&gt;. Such dispersion underscores that ghost work is a global industry embedded in both emerging and advanced economies.&lt;/p&gt;

&lt;h3&gt;
  
  
  1.3 Hidden Time and Unpaid Labour
&lt;/h3&gt;

&lt;p&gt;Ghost work extends beyond the visible tasks performed. A field study of 100 crowdworkers on a large micro‑task platform tracked not only the time spent on assigned tasks but also the minutes spent searching for jobs, completing mandatory training, waiting for tasks to appear and contesting unfair rejections. The study found that &lt;strong&gt;about one‑third of the workers’ time was consumed by these unpaid activities&lt;/strong&gt;, lowering median effective earnings from &lt;strong&gt;US$3.76 to US$2.83 per hour&lt;/strong&gt;&lt;a href="https://arxiv.org/abs/2110.00169?ref=breakthroughpursuit.com#:~:text=work,impacts%20workers%20depending%20on%20their" rel="noopener noreferrer"&gt;[2]&lt;/a&gt;. Unpaid overhead includes reading lengthy instructions, cross‑checking guidelines, managing multiple platform interfaces and performing rework after quality audits. Because these activities are invisible to requesters and not factored into pricing, corporate reports overestimate the cost‑savings from “automated” systems. For workers, the unpaid time translates into uncertainty, longer workdays and incomes that hover around or below local minimum wages. Understanding the hidden time invested in AI development is crucial for accurate cost accounting and fair compensation policies.&lt;/p&gt;
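&lt;p&gt;The adjustment is simple arithmetic: divide total earnings by &lt;em&gt;all&lt;/em&gt; hours worked, not just the compensated ones. A minimal sketch (the 6/2 hours split below is illustrative, chosen to be consistent with the study’s median figures):&lt;/p&gt;

```python
def effective_hourly_rate(piece_rate_earnings: float,
                          paid_hours: float,
                          unpaid_hours: float) -> float:
    """Earnings divided by all time worked, including unpaid overhead
    (task hunting, mandatory training, waiting, contesting rejections)."""
    return piece_rate_earnings / (paid_hours + unpaid_hours)

# Illustrative day: 6 hours of compensated tasks at US$3.76/hour
# plus 2 hours of unpaid overhead.
earnings = 3.76 * 6  # US$22.56 earned on compensated tasks
rate = effective_hourly_rate(earnings, paid_hours=6, unpaid_hours=2)
print(round(rate, 2))  # prints 2.82, close to the study's US$2.83 median
```

&lt;p&gt;Pricing models that use only the paid-task hours in the denominator will therefore overstate hourly earnings by the full overhead margin.&lt;/p&gt;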

&lt;h2&gt;
  
  
  2. Mapping the AI Supply Chain
&lt;/h2&gt;

&lt;h3&gt;
  
  
  2.1 Layers of Human Input
&lt;/h3&gt;

&lt;p&gt;Behind every deployed AI model lies a supply chain of human contributions. This chain can be broken down into several stages:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Data sourcing and collection:&lt;/strong&gt; Workers gather raw material by taking photographs, recording speech, scraping text or translating sentences. Often, they provide personal data that is later anonymised and aggregated.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Annotation and labelling:&lt;/strong&gt; Annotators classify images, transcribe audio snippets, tag entities in text, draw polygons around objects and identify emotional tone. Tasks vary from straightforward (“Does this image contain a dog?”) to complex (“Identify sarcasm in this tweet”).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Content moderation:&lt;/strong&gt; Moderators review user‑generated content to remove hate speech, graphic violence and sexual exploitation. They provide examples to help refine automated filters.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Testing and evaluation:&lt;/strong&gt; Human evaluators compare model outputs against ground truth, note errors and suggest improvements. In generative AI, raters rank alternative responses to teach models preferences through reinforcement learning.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Fine‑tuning and reinforcement learning:&lt;/strong&gt; Workers provide preference scores in reinforcement learning from human feedback (RLHF), offering nuanced judgments on tone, politeness and factuality. These ratings calibrate the models’ behaviours.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Quality assurance and rework:&lt;/strong&gt; Senior annotators or auditors review completed tasks, correct mistakes, ensure consistency across datasets and manage appeals.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;These stages are often spread across multiple companies and countries. A major technology firm might hire a data‑annotation company, which then contracts regional vendors, who recruit freelancers through micro‑task platforms. Each intermediary takes a percentage of the revenue, leaving the workers at the base of the pyramid with the lowest pay and benefits. In one high‑profile case, a vendor billed &lt;strong&gt;US$12.50 per hour&lt;/strong&gt; to a large AI company for content safety work, while Kenyan moderators received only &lt;strong&gt;US$1.32–US$2 per hour&lt;/strong&gt;&lt;a href="https://time.com/6247678/openai-chatgpt-kenya-workers/?ref=breakthroughpursuit.com#:~:text=made%20up%20the%20majority%20of,wage%20for%20a%20receptionist%20in" rel="noopener noreferrer"&gt;[4]&lt;/a&gt;. Such discrepancies highlight the need for supply‑chain transparency and equitable distribution of value.&lt;/p&gt;
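&lt;p&gt;The value captured along the chain can be made explicit with a simple take-rate calculation; a minimal sketch using the rates from the reported case (the function name is illustrative):&lt;/p&gt;

```python
def worker_share(billed_rate: float, worker_rate: float) -> float:
    """Fraction of the client's hourly spend that reaches the worker;
    the remainder is absorbed by intermediaries in the chain."""
    return worker_rate / billed_rate

# Rates from the reported case: the vendor billed US$12.50/hour while
# moderators received US$1.32-US$2.00/hour.
low = worker_share(12.50, 1.32)   # ~0.106, i.e. about 11% of the billed rate
high = worker_share(12.50, 2.00)  # 0.16, i.e. at most 16%
print(f"{low:.0%}-{high:.0%} of the billed rate reached workers")
```

&lt;p&gt;Reporting this ratio per contract would make the pyramid’s pay compression auditable rather than anecdotal.&lt;/p&gt;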

&lt;h3&gt;
  
  
  2.2 Quantifying Human Inputs: The AI Labour Intensity Index
&lt;/h3&gt;

&lt;p&gt;To make the human contributions to AI visible and comparable, organisations need a common metric. We propose an &lt;strong&gt;AI Labour Intensity Index (ALII)&lt;/strong&gt; that captures four dimensions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Annotated hours per model:&lt;/strong&gt; Calculate the total human hours spent collecting, cleaning, labelling and validating data for each AI system. This includes overhead such as task hunting, reading guidelines and rework.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Geographic dispersion:&lt;/strong&gt; Record the share of labour hours performed in each country or region. This shows reliance on low‑income countries and informs assessments of global equity.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Fair‑wage ratio:&lt;/strong&gt; Compare the median wage paid to annotators and moderators against local living wages or statutory minimum wages. A ratio below 1 indicates underpayment relative to the cost of living.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Overhead factor:&lt;/strong&gt; Measure the percentage of unpaid or invisible labour relative to compensated time. High overhead signals inefficiencies in task design or platform features.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;By computing ALII scores for different models and projects, companies can benchmark their practices, identify outliers and set improvement targets. Investors and regulators can use ALII to compare products and evaluate whether claimed efficiency gains reflect hidden labour costs.&lt;/p&gt;
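&lt;p&gt;To make the proposal concrete, here is a minimal sketch of how the four ALII dimensions might be computed from per-cohort labour records; the schema and field names are illustrative, not a published standard:&lt;/p&gt;

```python
from dataclasses import dataclass

@dataclass
class LabourRecord:
    """One worker cohort's contribution to a model (illustrative schema)."""
    region: str
    paid_hours: float
    unpaid_hours: float   # task hunting, training, waiting, rework
    median_wage: float    # actual hourly pay for compensated time
    living_wage: float    # local living-wage benchmark

def alii(records: list[LabourRecord]) -> dict:
    """Compute the four ALII dimensions for one model's labour records."""
    paid = sum(r.paid_hours for r in records)
    unpaid = sum(r.unpaid_hours for r in records)
    total = paid + unpaid
    return {
        "annotated_hours": total,   # all human hours, overhead included
        "geographic_dispersion": {  # share of hours per region
            r.region: (r.paid_hours + r.unpaid_hours) / total for r in records
        },
        # hours-weighted wage vs. living wage; below 1 signals underpayment
        "fair_wage_ratio": sum(
            (r.median_wage / r.living_wage) * (r.paid_hours + r.unpaid_hours)
            for r in records
        ) / total,
        "overhead_factor": unpaid / paid,  # unpaid time per compensated hour
    }

# Two hypothetical vendor cohorts for one model:
scores = alii([
    LabourRecord("KE", paid_hours=8000, unpaid_hours=3000,
                 median_wage=1.50, living_wage=2.50),
    LabourRecord("US", paid_hours=2000, unpaid_hours=600,
                 median_wage=9.00, living_wage=16.00),
])
print(scores["overhead_factor"])  # 0.36: 36 unpaid minutes per paid hour
```

&lt;p&gt;Aggregating vendor cohorts this way yields a single overhead factor and an hours-weighted fair-wage ratio per model, which is what makes cross-project benchmarking possible.&lt;/p&gt;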

&lt;h2&gt;
  
  
  3. Business and Ethical Implications
&lt;/h2&gt;

&lt;h3&gt;
  
  
  3.1 The Efficiency Illusion
&lt;/h3&gt;

&lt;p&gt;Corporate narratives often emphasise AI’s ability to cut costs by replacing human labour. However, the efficiency story is incomplete. As seen in the crowdworker study, unpaid overhead constitutes &lt;strong&gt;about one‑third of total working time&lt;/strong&gt;&lt;a href="https://arxiv.org/abs/2110.00169?ref=breakthroughpursuit.com#:~:text=work,impacts%20workers%20depending%20on%20their" rel="noopener noreferrer"&gt;[2]&lt;/a&gt;. If business leaders calculate labour costs using only paid task time, they underestimate the true human effort embedded in their models. Similarly, a 2025 survey of U.S. data workers found that &lt;strong&gt;66%&lt;/strong&gt; spent at least three hours per week waiting for tasks and were not compensated for this idle time&lt;a href="https://techequity.us/2025/09/30/ghost-workers-in-the-machine/?ref=breakthroughpursuit.com#:~:text=1,to%20annual%20earnings%20of%20%2422%2C620" rel="noopener noreferrer"&gt;[5]&lt;/a&gt;. Such inefficiencies erode worker earnings and may increase turnover. Companies that disregard these costs risk overoptimistic financial projections and hidden liabilities in their supply chains.&lt;/p&gt;

&lt;h3&gt;
  
  
  3.2 Hidden Costs and Reputational Risk
&lt;/h3&gt;

&lt;p&gt;Undercompensated workers pose operational and reputational hazards. Poor pay and lack of support can lead to high turnover, disrupting continuity and quality control. Training new annotators or moderators takes time and resources, especially when tasks require domain knowledge or cultural sensitivity. If moderators suffer psychological harm from exposure to graphic content, they may leave abruptly, forcing companies to scramble for replacements and risking lapses in content safety. Public revelations about exploitative labour practices can trigger consumer backlash, investor pressure and regulatory scrutiny. Reports of Kenyan moderators earning less than $2 per hour while vendors bill several times that amount&lt;a href="https://time.com/6247678/openai-chatgpt-kenya-workers/?ref=breakthroughpursuit.com#:~:text=made%20up%20the%20majority%20of,wage%20for%20a%20receptionist%20in" rel="noopener noreferrer"&gt;[4]&lt;/a&gt; have already spurred calls for fair‑pay standards and stricter oversight. Companies that proactively address these risks by improving labour conditions and disclosing their practices will be better positioned to weather future controversies.&lt;/p&gt;

&lt;h3&gt;
  
  
  3.3 Ethical Displacement
&lt;/h3&gt;

&lt;p&gt;AI’s labour‑saving narrative often masks a geographic displacement of work. Annotation and moderation tasks are outsourced to lower‑wage regions where currency differentials and limited job opportunities make micro‑task earnings attractive. Yet these wages frequently fall below local living standards. In Kenya, for example, content moderators contracted to support a major AI company were paid &lt;strong&gt;US$1.32–US$2 per hour&lt;/strong&gt;, which is roughly equal to or slightly below the local minimum wage&lt;a href="https://time.com/6247678/openai-chatgpt-kenya-workers/?ref=breakthroughpursuit.com#:~:text=made%20up%20the%20majority%20of,wage%20for%20a%20receptionist%20in" rel="noopener noreferrer"&gt;[4]&lt;/a&gt;. Bonuses and performance commissions can raise earnings marginally but are contingent on stringent accuracy and speed targets. The economic benefits of AI thus accrue disproportionately to technology companies and intermediaries in high‑income countries, while the burdens of low pay, job insecurity and psychological stress are borne by workers in the Global South. Addressing this ethical displacement requires recognising the real cost of human labour in AI and ensuring that efficiency gains are shared more equitably.&lt;/p&gt;

&lt;h2&gt;
  
  
  4. Global Distribution and Labour Conditions
&lt;/h2&gt;

&lt;h3&gt;
  
  
  4.1 Geographic Patterns and Wage Disparities
&lt;/h3&gt;

&lt;p&gt;Ghost work is a global phenomenon with distinct regional patterns. Large crowdsourcing platforms recruit workers in countries such as India, Bangladesh, Venezuela and the Philippines, where internet access and English proficiency coexist with lower wage expectations. In Africa, Kenya and Nigeria have emerged as hubs for content moderation and data labelling because of relatively high education levels and time‑zone overlap with Europe and the United States. Workers in these regions may view platform work as a valuable source of income, yet wages often remain below local living standards. Case studies of Kenyan moderators show that even after including performance bonuses, salaries hovered around &lt;strong&gt;US$1.32–US$2 per hour&lt;/strong&gt;&lt;a href="https://time.com/6247678/openai-chatgpt-kenya-workers/?ref=breakthroughpursuit.com#:~:text=made%20up%20the%20majority%20of,wage%20for%20a%20receptionist%20in" rel="noopener noreferrer"&gt;[4]&lt;/a&gt;. At the same time, workers in high‑income countries perform similar tasks but earn higher absolute wages; however, their effective pay can still fall below local minimum wages once unpaid time is accounted for. Understanding these disparities is crucial for designing fair‑wage policies that reflect local costs of living rather than global averages.&lt;/p&gt;

&lt;h3&gt;
  
  
  4.2 Precarity and Worker Voice
&lt;/h3&gt;

&lt;p&gt;Precarity characterises much of the ghost workforce. Workers face instability across multiple dimensions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Wages:&lt;/strong&gt; Compensation is typically piece‑rate, with pay contingent on task acceptance and quality. Accounting for unpaid overhead can reduce effective wages by more than 30%&lt;a href="https://arxiv.org/abs/2110.00169?ref=breakthroughpursuit.com#:~:text=work,impacts%20workers%20depending%20on%20their" rel="noopener noreferrer"&gt;[2]&lt;/a&gt;. Performance‑based bonuses, while appealing, can create pressure to cut corners or work long hours.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Job security:&lt;/strong&gt; Contracts are short‑term, and accounts may be deactivated without explanation. When a subcontractor loses a contract, hundreds of workers can be left without income overnight&lt;a href="https://time.com/6247678/openai-chatgpt-kenya-workers/?ref=breakthroughpursuit.com#:~:text=picture,eight%20months%20earlier%20than%20planned" rel="noopener noreferrer"&gt;[6]&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Benefits:&lt;/strong&gt; Few platform workers receive health insurance, sick leave or pension contributions. A 2025 survey of U.S. data workers found that only &lt;strong&gt;23%&lt;/strong&gt; had employer‑provided health insurance&lt;a href="https://techequity.us/2025/09/30/ghost-workers-in-the-machine/?ref=breakthroughpursuit.com#:~:text=3,health%20insurance%20from%20their%20employer" rel="noopener noreferrer"&gt;[7]&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Working conditions:&lt;/strong&gt; Moderators are exposed to graphic violence, sexual exploitation and hate speech. Without adequate counselling, this exposure can cause burnout and trauma&lt;a href="https://time.com/6247678/openai-chatgpt-kenya-workers/?ref=breakthroughpursuit.com#:~:text=having%20sex%20with%20a%20dog,eight%20months%20earlier%20than%20planned" rel="noopener noreferrer"&gt;[8]&lt;/a&gt;. Annotators often juggle multiple clients with inconsistent guidelines and unstable internet connections.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Voice and bargaining power:&lt;/strong&gt; Workers seldom have access to unions or formal grievance mechanisms. In the U.S. survey, more than half of respondents reported that the estimated times provided for tasks were unrealistic&lt;a href="https://techequity.us/2025/09/30/ghost-workers-in-the-machine/?ref=breakthroughpursuit.com#:~:text=2,of%20respondents%20report%20they%20are" rel="noopener noreferrer"&gt;[9]&lt;/a&gt;, yet they lacked leverage to negotiate.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Improving conditions along these axes is not only a moral imperative but also a strategic investment. Engaged, well‑compensated workers deliver higher‑quality data and lower turnover, which in turn reduces error rates and the need for costly rework.&lt;/p&gt;

&lt;h3&gt;
  
  
  4.3 Intersectional and Gender Dimensions
&lt;/h3&gt;

&lt;p&gt;While comprehensive gender‑disaggregated statistics are scarce, qualitative evidence suggests that women and marginalised communities constitute a significant portion of the ghost workforce. Tasks requiring empathy and relational skills, such as content moderation and conversation rating, are often considered “feminised” and attract more women. Women may prefer flexible micro‑tasks that allow them to juggle domestic responsibilities and paid work. However, the lack of benefits and job security can exacerbate gender inequities, especially for single mothers or caregivers. Additional factors such as disability, language proficiency and access to infrastructure further shape participation in digital labour. A robust human‑labour audit should therefore collect data on gender, disability and other intersecting identities to inform targeted interventions and inclusive policies.&lt;/p&gt;

&lt;h2&gt;
  
  
  5. Governance and Solutions
&lt;/h2&gt;

&lt;h3&gt;
  
  
  5.1 Human‑Labour Audits and Labour Cards
&lt;/h3&gt;

&lt;p&gt;Visibility is a prerequisite for accountability. Companies should conduct &lt;strong&gt;human‑labour audits&lt;/strong&gt; for each major AI system. Such audits would document the number of workers involved, the tasks performed, hours spent, wage ranges, geographic locations and overhead factors. Aggregated data can then be published in &lt;strong&gt;labour cards&lt;/strong&gt; accompanying model releases, much like model cards that describe technical performance. Labour cards could include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Workers involved:&lt;/strong&gt; Approximate number of annotators, moderators and auditors, plus their geographic distribution.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Compensation:&lt;/strong&gt; Average hourly rate, fair‑wage ratio and overhead factor.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Working conditions:&lt;/strong&gt; Task descriptions, exposure to sensitive content, availability of mental‑health support and grievance mechanisms.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Governance:&lt;/strong&gt; Policies on labour standards, audit procedures and worker feedback.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Publishing labour cards would not require revealing proprietary training data. Instead, it would provide stakeholders with sufficient information to evaluate ethical practices. Over time, third‑party organisations or consortia could certify labour cards to standardise reporting across the industry.&lt;/p&gt;
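&lt;p&gt;The labour‑card fields listed above lend themselves to a machine‑readable format that could ship alongside a model card. The sketch below is purely illustrative: the schema, field names and values are assumptions, not an established standard.&lt;/p&gt;

```python
# Hypothetical machine-readable labour card accompanying a model
# release, mirroring the four field groups listed above. All names
# and values are illustrative, not an established schema.
import json

labour_card = {
    "model": "example-model-v1",  # hypothetical model identifier
    "workers": {
        "annotators": 3200,
        "moderators": 450,
        "auditors": 60,
        "regions": ["Kenya", "India", "Philippines", "United States"],
    },
    "compensation": {
        "avg_hourly_usd": 2.10,
        "fair_wage_ratio": 1.05,   # average wage / local living wage
        "overhead_factor": 0.28,   # share of working time left unpaid
    },
    "conditions": {
        "sensitive_content_exposure": True,
        "mental_health_support": True,
        "grievance_mechanism": "third-party hotline",
    },
    "governance": {
        "labour_standard": "ILO-aligned vendor code",
        "last_audit": "2025-06",
    },
}

# Serialise for publication alongside the model card.
print(json.dumps(labour_card, indent=2))
```

&lt;p&gt;A structured format like this would let third‑party certifiers validate disclosures programmatically, in the same way model cards are parsed today.&lt;/p&gt;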

&lt;h3&gt;
  
  
  5.2 Integration into Regulation and ESG Frameworks
&lt;/h3&gt;

&lt;p&gt;Regulators in many jurisdictions are moving toward risk‑based AI governance. The European Union’s AI Act, for example, classifies systems as unacceptable, high‑risk or low‑risk and imposes different obligations accordingly&lt;a href="https://artificialintelligenceact.eu/?ref=breakthroughpursuit.com#:~:text=The%20AI%20Act%20is%20a,risk%20are%20largely%20left%20unregulated" rel="noopener noreferrer"&gt;[10]&lt;/a&gt;. Integrating labour transparency into this framework would ensure that high‑risk systems not only meet technical standards but also respect human rights. Companies could be required to include ALII scores and labour cards in conformity assessments. Similarly, due‑diligence laws such as the EU Corporate Sustainability Due Diligence Directive could be expanded to cover digital labour, obliging companies to trace their labour supply chains and address abuses. The Ada Lovelace Institute stresses that policymakers must assign responsibilities across AI supply chains and ensure that actors with fewer resources receive support&lt;a href="https://www.adalovelaceinstitute.org/resource/ai-supply-chains/?ref=breakthroughpursuit.com#:~:text=Creating%20an%20artificial%20intelligence%20,the%20system%E2%80%99s%20training%20and%20development" rel="noopener noreferrer"&gt;[11]&lt;/a&gt;. This could involve establishing grievance mechanisms for workers and providing resources to smaller vendors to comply with reporting requirements.&lt;/p&gt;

&lt;p&gt;In the investment realm, ESG frameworks are evolving to include labour considerations. Investors increasingly scrutinise working conditions alongside carbon emissions and diversity. Including ALII metrics and labour disclosures in ESG reports would allow capital to flow toward companies that uphold fair labour standards. Such integration aligns ethical imperatives with financial incentives.&lt;/p&gt;

&lt;h3&gt;
  
  
  5.3 Corporate Governance and Investor Incentives
&lt;/h3&gt;

&lt;p&gt;Ethical labour practices should be embedded in corporate governance. Boards and executive committees overseeing AI development must expand their remit beyond technical safety and privacy to include human‑labour risks. They should review ALII scores and require that wages meet or exceed local living standards, overhead factors are minimised and mental‑health support is funded. Linking executive compensation to labour metrics could align incentives. For example, bonuses might depend in part on achieving fair‑wage ratios above 1 or reducing overhead by improving task design.&lt;/p&gt;
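&lt;p&gt;The two metrics named above are straightforward to compute. A minimal sketch follows; the field names and the simple definitions are assumptions, since the article specifies only the intent of the fair‑wage ratio and overhead factor, not a formal formula.&lt;/p&gt;

```python
# Illustrative computation of the fair-wage ratio and overhead
# factor discussed above. Field names and definitions are
# assumptions, not a published specification.
from dataclasses import dataclass

@dataclass
class LabourMetrics:
    hourly_wage: float   # average pay actually received, USD
    living_wage: float   # local living-wage benchmark, USD
    paid_hours: float    # hours that were compensated
    unpaid_hours: float  # task search, reading guidelines, waiting

    @property
    def fair_wage_ratio(self) -> float:
        # A ratio above 1.0 means pay meets or exceeds the
        # local living wage.
        return self.hourly_wage / self.living_wage

    @property
    def overhead_factor(self) -> float:
        # Fraction of total working time that goes uncompensated.
        return self.unpaid_hours / (self.paid_hours + self.unpaid_hours)

# Example: a moderator earning US$1.50/hour against a US$2.00
# living-wage benchmark, with a third of working time unpaid.
m = LabourMetrics(hourly_wage=1.50, living_wage=2.00,
                  paid_hours=40.0, unpaid_hours=20.0)
print(round(m.fair_wage_ratio, 2))  # 0.75
print(round(m.overhead_factor, 2))  # 0.33
```

&lt;p&gt;Tying executive bonuses to thresholds on these quantities (for example, a fair‑wage ratio of at least 1.0) would give boards a concrete, auditable target rather than an aspiration.&lt;/p&gt;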

&lt;p&gt;Investor pressure can accelerate change. Shareholder resolutions and proxy votes increasingly focus on social issues. An AI Labour Transparency Index published by an independent body could rank companies on disclosure, fair‑wage compliance, worker voice and grievance procedures. Procurement officers in government and large corporations could require minimum transparency scores from vendors. Just as environmental indices have spurred competition on sustainability, a labour index would create reputational incentives to treat workers fairly. Companies that lead on labour transparency might attract customers and investors who value responsible innovation.&lt;/p&gt;

&lt;h3&gt;
  
  
  5.4 Collaboration and Multi‑Stakeholder Governance
&lt;/h3&gt;

&lt;p&gt;No single actor can solve the labour challenges in AI. Governments, multilateral organisations, companies, researchers, unions and civil society must collaborate. The International Labour Organization can develop guidelines for digital work, building on existing labour standards. The Organization for Economic Co‑operation and Development (OECD) could harmonise reporting frameworks across countries. National regulators can enforce transparency obligations and provide resources for small and medium‑sized enterprises to comply. Worker cooperatives and unions can offer on‑the‑ground insights and advocate for fair conditions. Academic researchers can refine metrics like ALII and evaluate their impact. Open‑source communities might experiment with community‑owned data‑collection projects that share economic rewards with contributors. Multi‑stakeholder governance ensures that solutions are equitable and grounded in diverse perspectives.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion: Towards Fair and Trusted AI
&lt;/h2&gt;

&lt;p&gt;AI’s promise of autonomy obscures a fundamental reality: machine intelligence is inseparable from human labour. Millions of people across continents — annotators, moderators, testers, auditors — contribute their time, judgement and emotional resilience so that algorithms can function. Yet these contributions remain largely unseen, undervalued and underpaid. Empirical studies reveal that one‑third of crowdworkers’ labour is unpaid&lt;a href="https://arxiv.org/abs/2110.00169?ref=breakthroughpursuit.com#:~:text=work,impacts%20workers%20depending%20on%20their" rel="noopener noreferrer"&gt;[2]&lt;/a&gt;, that workers in emerging economies sometimes earn barely &lt;strong&gt;US$1.50 per hour&lt;/strong&gt;&lt;a href="https://time.com/6247678/openai-chatgpt-kenya-workers/?ref=breakthroughpursuit.com#:~:text=made%20up%20the%20majority%20of,wage%20for%20a%20receptionist%20in" rel="noopener noreferrer"&gt;[4]&lt;/a&gt; and that most data workers lack basic benefits&lt;a href="https://techequity.us/2025/09/30/ghost-workers-in-the-machine/?ref=breakthroughpursuit.com#:~:text=3,health%20insurance%20from%20their%20employer" rel="noopener noreferrer"&gt;[7]&lt;/a&gt;. Such conditions are at odds with the ethical aspirations of AI developers and the expectations of consumers and regulators.&lt;/p&gt;

&lt;p&gt;This article has outlined a roadmap to transform ghost work from an invisible cost into a recognised component of AI development. By adopting the AI Labour Intensity Index, companies can quantify and monitor human inputs. Through labour audits and labour cards, they can disclose their practices and invite scrutiny. By embedding labour considerations into regulation, ESG frameworks and corporate governance, they can align ethical obligations with strategic imperatives. And by collaborating with workers, regulators and civil society, they can build systems that are not only smart but also just.&lt;/p&gt;

&lt;p&gt;Ultimately, recognising the human engine behind AI is not a burden but an opportunity. Fair and transparent labour practices will improve data quality, reduce operational risks and foster trust among users and investors. As the AI economy matures, the most successful organisations will be those that treat human labour not as a disposable input but as an asset worthy of respect and investment. The future of AI depends on making its invisible workforce visible.&lt;/p&gt;




&lt;h3&gt;
  
  
  Frequently Asked Questions
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;1. What is “ghost work”?&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Ghost work refers to the hidden human labour behind AI systems. These tasks include labelling images, transcribing audio, moderating user content, and testing model outputs. They are performed by real people—often via digital platforms or subcontractors—whose contributions are essential for AI but rarely acknowledged.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. How large is the ghost‑work workforce?&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Over the past decade, the number of platforms facilitating micro‑task and crowd work has ballooned, attracting tens of millions of workers worldwide. These workers are spread across regions such as South and Southeast Asia, Africa, Latin America, the United States, and Europe. Many juggle multiple gigs to piece together a living wage.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. What kinds of tasks do ghost workers perform?&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Tasks span several layers of the AI supply chain: sourcing and collecting data, annotating and labelling it, moderating and flagging harmful content, testing model outputs, providing preference feedback for fine‑tuning, and conducting quality assurance. Each layer demands different skills, from basic data entry to cultural expertise and emotional resilience.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Why do ghost workers often earn so little?&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Low earnings stem from piece‑rate pay models and significant unpaid overhead—time spent searching for tasks, reading lengthy guidelines, and waiting for work. Workers also face high rejection rates and short contracts, leaving them with little bargaining power and no benefits.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5. What is the AI Labour Intensity Index (ALII)?&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
The ALII is a proposed metric that quantifies the human labour embedded in an AI model. It measures four factors: total annotated hours, geographic distribution of work, the ratio of wages to local living standards, and the percentage of unpaid overhead. Companies can use it to benchmark their projects and identify where labour practices need improvement.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;6. Why should businesses and investors care about ghost work?&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Ignoring ghost work can lead to poor data quality, high turnover, reputational damage, and regulatory scrutiny. Underpaid, stressed workers may produce inconsistent labelling or leave abruptly, disrupting development schedules. Investors increasingly view fair labour practices as part of environmental, social and governance (ESG) risk management.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;7. How can companies and regulators address ghost work issues?&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Firms can begin by auditing their human labour supply chains, publishing “labour cards” to disclose hours worked and wage levels, and ensuring fair‑pay ratios. Boards should review labour metrics alongside safety and privacy considerations. Regulators can integrate labour disclosures into AI risk assessments and due‑diligence laws, while independent bodies could develop transparency indices to benchmark firms. Collaboration across industry, government, and civil society is essential for lasting change.&lt;/p&gt;




&lt;h2&gt;
  
  
  Resources:
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://www.ilo.org/resource/statement/digital-labour-platforms-can-advance-social-justice-focussing-worker?ref=breakthroughpursuit.com#:~:text=migrants,to%2078%20million%20in%202023" rel="noopener noreferrer"&gt;[1]&lt;/a&gt; Digital labour platforms can advance social justice by focussing on worker welfare | International Labour Organization&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.ilo.org/resource/statement/digital-labour-platforms-can-advance-social-justice-focussing-worker?ref=breakthroughpursuit.com" rel="noopener noreferrer"&gt;https://www.ilo.org/resource/statement/digital-labour-platforms-can-advance-social-justice-focussing-worker&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://arxiv.org/abs/2110.00169?ref=breakthroughpursuit.com#:~:text=work,impacts%20workers%20depending%20on%20their" rel="noopener noreferrer"&gt;[2]&lt;/a&gt; [2110.00169] Quantifying the Invisible Labor in Crowd Work&lt;/p&gt;

&lt;p&gt;&lt;a href="https://arxiv.org/abs/2110.00169?ref=breakthroughpursuit.com" rel="noopener noreferrer"&gt;https://arxiv.org/abs/2110.00169&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.cloudresearch.com/resources/blog/how-many-amazon-mturk-workers-are-there/?ref=breakthroughpursuit.com#:~:text=In%20a%20recent%20research%20article%2C,are%20based%20in%20the%20US" rel="noopener noreferrer"&gt;[3]&lt;/a&gt; How Many Amazon Mechanical Turk Workers Are There in 2019?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.cloudresearch.com/resources/blog/how-many-amazon-mturk-workers-are-there/?ref=breakthroughpursuit.com" rel="noopener noreferrer"&gt;https://www.cloudresearch.com/resources/blog/how-many-amazon-mturk-workers-are-there/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://time.com/6247678/openai-chatgpt-kenya-workers/?ref=breakthroughpursuit.com#:~:text=made%20up%20the%20majority%20of,wage%20for%20a%20receptionist%20in" rel="noopener noreferrer"&gt;[4]&lt;/a&gt; &lt;a href="https://time.com/6247678/openai-chatgpt-kenya-workers/?ref=breakthroughpursuit.com#:~:text=picture,eight%20months%20earlier%20than%20planned" rel="noopener noreferrer"&gt;[6]&lt;/a&gt; &lt;a href="https://time.com/6247678/openai-chatgpt-kenya-workers/?ref=breakthroughpursuit.com#:~:text=having%20sex%20with%20a%20dog,eight%20months%20earlier%20than%20planned" rel="noopener noreferrer"&gt;[8]&lt;/a&gt; OpenAI Used Kenyan Workers on Less Than $2 Per Hour: Exclusive | TIME&lt;/p&gt;

&lt;p&gt;&lt;a href="https://time.com/6247678/openai-chatgpt-kenya-workers/?ref=breakthroughpursuit.com" rel="noopener noreferrer"&gt;https://time.com/6247678/openai-chatgpt-kenya-workers/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://techequity.us/2025/09/30/ghost-workers-in-the-machine/?ref=breakthroughpursuit.com#:~:text=1,to%20annual%20earnings%20of%20%2422%2C620" rel="noopener noreferrer"&gt;[5]&lt;/a&gt; &lt;a href="https://techequity.us/2025/09/30/ghost-workers-in-the-machine/?ref=breakthroughpursuit.com#:~:text=3,health%20insurance%20from%20their%20employer" rel="noopener noreferrer"&gt;[7]&lt;/a&gt; &lt;a href="https://techequity.us/2025/09/30/ghost-workers-in-the-machine/?ref=breakthroughpursuit.com#:~:text=2,of%20respondents%20report%20they%20are" rel="noopener noreferrer"&gt;[9]&lt;/a&gt; Ghost Workers in the AI Machine: U.S. Data Workers Speak Out About Big Tech's Exploitation - TechEquity Collaborative&lt;/p&gt;

&lt;p&gt;&lt;a href="https://techequity.us/2025/09/30/ghost-workers-in-the-machine/?ref=breakthroughpursuit.com" rel="noopener noreferrer"&gt;https://techequity.us/2025/09/30/ghost-workers-in-the-machine/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://artificialintelligenceact.eu/?ref=breakthroughpursuit.com#:~:text=The%20AI%20Act%20is%20a,risk%20are%20largely%20left%20unregulated" rel="noopener noreferrer"&gt;[10]&lt;/a&gt; EU Artificial Intelligence Act | Up-to-date developments and analyses of the EU AI Act&lt;/p&gt;

&lt;p&gt;&lt;a href="https://artificialintelligenceact.eu/?ref=breakthroughpursuit.com" rel="noopener noreferrer"&gt;https://artificialintelligenceact.eu/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.adalovelaceinstitute.org/resource/ai-supply-chains/?ref=breakthroughpursuit.com#:~:text=Creating%20an%20artificial%20intelligence%20,the%20system%E2%80%99s%20training%20and%20development" rel="noopener noreferrer"&gt;[11]&lt;/a&gt;  Allocating accountability in AI supply chains | Ada Lovelace Institute &lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.adalovelaceinstitute.org/resource/ai-supply-chains/?ref=breakthroughpursuit.com" rel="noopener noreferrer"&gt;https://www.adalovelaceinstitute.org/resource/ai-supply-chains/&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ghostwork</category>
      <category>humanlabor</category>
      <category>aiaccountability</category>
      <category>biasfairness</category>
    </item>
    <item>
      <title>Who Owns the Training Set? The Coming Battles Over AI’s Raw Material</title>
      <dc:creator>Breakthrough Pursuit</dc:creator>
      <pubDate>Sat, 27 Sep 2025 19:46:00 +0000</pubDate>
      <link>https://forem.com/breakthroughpursuit/who-owns-the-training-set-the-coming-battles-over-ais-raw-material-166b</link>
      <guid>https://forem.com/breakthroughpursuit/who-owns-the-training-set-the-coming-battles-over-ais-raw-material-166b</guid>
      <description>&lt;h2&gt;
  
  
  Executive Summary
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcasx7xxvqqjczcwg60y6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcasx7xxvqqjczcwg60y6.png" alt="Who Owns the Training Set? The Coming Battles Over AI’s Raw Material" width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;AI promises transformative productivity and wealth, yet the value of its &lt;em&gt;inputs&lt;/em&gt;—the training data that teach models to perceive and generate language, images or music—remains hotly contested. Generative models routinely ingest billions of works: news articles, copyrighted photos, books and songs. Creators argue that this practice “steals from the people who create the content”&lt;a href="https://ipcloseup.com/2025/05/14/news-and-book-publishers-launch-offensive-to-stop-tech-giants-from-stealing-their-content-for-a-i/?ref=breakthroughpursuit.com#:~:text=Ads%20in%20the%20NMA%20campaign,%E2%80%9D" rel="noopener noreferrer"&gt;[1]&lt;/a&gt; and undermines livelihoods, while developers counter that the use is fair and transformative&lt;a href="https://www.copyright.gov/ai/Copyright-and-Artificial-Intelligence-Part-3-Generative-AI-Training-Report-Pre-Publication-Version.pdf?ref=breakthroughpursuit.com#:~:text=which%20the%20use%20of%20copyrighted,with%20the%20harm%20to%20the" rel="noopener noreferrer"&gt;[2]&lt;/a&gt;. Legal frameworks vary widely: the EU allows text and data mining (TDM) unless rights are reserved&lt;a href="https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?ref=breakthroughpursuit.com#:~:text=1,of%20text%20and%20data%20mining" rel="noopener noreferrer"&gt;[3]&lt;/a&gt;, the UK proposes a similar opt‑out regime&lt;a href="https://www.gov.uk/government/consultations/copyright-and-artificial-intelligence/copyright-and-artificial-intelligence?ref=breakthroughpursuit.com#:~:text=75,would%20have%20the%20following%20features" rel="noopener noreferrer"&gt;[4]&lt;/a&gt;, and U.S. 
courts are only beginning to decide whether training constitutes fair use&lt;a href="https://www.reedsmith.com/en/perspectives/2025/03/court-ai-fair-use-thomson-reuters-enterprise-gmbh-ross-intelligence?ref=breakthroughpursuit.com#:~:text=Initially%20in%202023%2C%20Circuit%20Judge,discussed%20in%20more%20detail%20below" rel="noopener noreferrer"&gt;[5]&lt;/a&gt;. Recent litigation—from &lt;em&gt;NYT v. OpenAI &amp;amp; Microsoft&lt;/em&gt; to &lt;em&gt;Thomson Reuters v. ROSS&lt;/em&gt; and &lt;em&gt;Getty v. Stability AI&lt;/em&gt;—has moved the battleground from policy debates to courtrooms. This article maps the landscape, evaluates the cultural and economic stakes for creators, and proposes frameworks to reconcile innovation with respect for human creativity.&lt;/p&gt;

&lt;h2&gt;
  
  
  I. Introduction: The Hidden Raw Material of AI
&lt;/h2&gt;

&lt;p&gt;Generative AI is often described in terms of dazzling outputs—drafted text, synthetic images, or composed music—but the &lt;em&gt;training sets&lt;/em&gt; powering these systems are less visible. These datasets are assembled by crawling the open web and, in some cases, proprietary archives. Their construction raises deep cultural and legal questions: are AI developers simply analyzing works, or are they copying and repurposing expressive content without permission? The &lt;em&gt;LAION‑5B&lt;/em&gt; dataset, for example, scraped billions of image–text pairs from the web and became a backbone for models such as Stable Diffusion. A German court later held that creating the dataset fell within Europe’s scientific research TDM exception&lt;a href="https://communia-association.org/2024/10/11/laion-vs-kneschke-building-public-datasets-is-covered-by-the-tdm-exception/?ref=breakthroughpursuit.com#:~:text=Two%20weeks%20ago%2C%20the%20Landgericht,training%20data%20transparency%20in%20general" rel="noopener noreferrer"&gt;[6]&lt;/a&gt;, yet the discovery of child sexual abuse material (CSAM) within it led LAION to remove 2,236 links and reissue a cleaned dataset&lt;a href="https://the-decoder.com/laion-releases-ai-dataset-re-laion-5b-purged-of-links-to-child-abuse-images/?ref=breakthroughpursuit.com#:~:text=Re,matching%20content%20from%20their%20versions" rel="noopener noreferrer"&gt;[7]&lt;/a&gt;. Similarly, the &lt;em&gt;Books3&lt;/em&gt; dataset contained almost 200,000 pirated books; a class‑action lawsuit alleges that Apple misrepresented these works as “publicly available” and diluted authors’ markets&lt;a href="https://www.classaction.org/news/class-action-lawsuit-alleges-apple-illegally-uses-copyrighted-works-for-ai-training?ref=breakthroughpursuit.com#:~:text=The%20complaint%20alleges%20that%20the,copyright%20law" rel="noopener noreferrer"&gt;[8]&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Beyond safety, training data influence bias, representativeness and economic power. &lt;em&gt;Cloudflare’s radar analysis&lt;/em&gt; reveals that AI crawlers have a vastly different relationship with publishers than search crawlers: Google’s bot crawls roughly 14 pages for every visit it refers back to a publisher, while OpenAI’s GPTBot crawls 1,700 pages for every referral and Anthropic’s ClaudeBot 73,000 pages&lt;a href="https://blog.cloudflare.com/control-content-use-for-ai-training/?ref=breakthroughpursuit.com#:~:text=Cloudflare%20is%20giving%20all%20website,that%20are%20monetized%20through%20ads" rel="noopener noreferrer"&gt;[9]&lt;/a&gt;. Such one‑way extraction erodes the traditional exchange where crawls drive traffic and revenue. In July 2024 Cloudflare responded by offering website owners a single‑click option to block AI scrapers&lt;a href="https://blog.cloudflare.com/control-content-use-for-ai-training/?ref=breakthroughpursuit.com#:~:text=Protecting%20content%20creators%20isn%E2%80%99t%20new,given%20us%20some%20interesting%20data" rel="noopener noreferrer"&gt;[10]&lt;/a&gt; and by July 2025 it introduced tools to automatically manage robots.txt files and allow blocking only on monetized sections&lt;a href="https://blog.cloudflare.com/control-content-use-for-ai-training/?ref=breakthroughpursuit.com#:~:text=Cloudflare%20is%20giving%20all%20website,that%20are%20monetized%20through%20ads" rel="noopener noreferrer"&gt;[11]&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;This section sets the stage: training data are not a neutral resource. They are cultural artefacts, business assets, and personal information. Understanding who owns them—and who benefits from their use—is the key to assessing AI’s legitimacy.&lt;/p&gt;

&lt;h2&gt;
  
  
  II. The Legal Fault Lines
&lt;/h2&gt;

&lt;h3&gt;
  
  
  United States
&lt;/h3&gt;

&lt;p&gt;In the U.S. there is no explicit statutory exception for AI training. Developers rely on fair‑use jurisprudence, arguing that ingestion of copyrighted works is transformative and necessary to teach models. The U.S. Copyright Office’s &lt;em&gt;Report on Copyright and Artificial Intelligence&lt;/em&gt; acknowledges that the legality of unlicensed training is unsettled&lt;a href="https://www.copyright.gov/ai/Copyright-and-Artificial-Intelligence-Part-3-Generative-AI-Training-Report-Pre-Publication-Version.pdf?ref=breakthroughpursuit.com#:~:text=33%20Pre,197" rel="noopener noreferrer"&gt;[12]&lt;/a&gt;. Commenters expressed polarised views: some argued that training without permission “undermines entire markets” and destroys incentives for creation&lt;a href="https://www.copyright.gov/ai/Copyright-and-Artificial-Intelligence-Part-3-Generative-AI-Training-Report-Pre-Publication-Version.pdf?ref=breakthroughpursuit.com#:~:text=33%20Pre,197" rel="noopener noreferrer"&gt;[12]&lt;/a&gt;; others cautioned that mandatory licensing would stifle innovation and entrench incumbents&lt;a href="https://www.copyright.gov/ai/Copyright-and-Artificial-Intelligence-Part-3-Generative-AI-Training-Report-Pre-Publication-Version.pdf?ref=breakthroughpursuit.com#:~:text=which%20the%20use%20of%20copyrighted,with%20the%20harm%20to%20the" rel="noopener noreferrer"&gt;[2]&lt;/a&gt;. In March 2025, Judge Sidney Stein in the Southern District of New York refused to dismiss the &lt;em&gt;New York Times&lt;/em&gt;’ copyright claims against OpenAI and Microsoft, noting “many” examples of ChatGPT copying articles and allowing direct and contributory infringement claims to proceed&lt;a href="https://smithhopen.com/2025/07/17/nyt-v-openai-microsoft-ai-copyright-lawsuit-update-2025/?ref=breakthroughpursuit.com#:~:text=2025%20brings%20big%20steps%20in,contributory%20infringement%2C%20plus%20DMCA%20breaches" rel="noopener noreferrer"&gt;[13]&lt;/a&gt;. 
In May, the same court ordered OpenAI to preserve all ChatGPT user data for discovery—an unprecedented requirement that led OpenAI to appeal on privacy grounds&lt;a href="https://smithhopen.com/2025/07/17/nyt-v-openai-microsoft-ai-copyright-lawsuit-update-2025/?ref=breakthroughpursuit.com#:~:text=A%20heated%20issue%20popped%20up,retention%20deals" rel="noopener noreferrer"&gt;[14]&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;A separate case, &lt;em&gt;Thomson Reuters v. ROSS Intelligence&lt;/em&gt;, addressed AI training in the context of legal research. In February 2025 the U.S. District Court for Delaware granted partial summary judgment for Thomson Reuters, finding that ROSS’s use of Westlaw headnotes to train its AI tool was not transformative and harmed the potential market for Thomson Reuters’s headnotes&lt;a href="https://www.reedsmith.com/en/perspectives/2025/03/court-ai-fair-use-thomson-reuters-enterprise-gmbh-ross-intelligence?ref=breakthroughpursuit.com#:~:text=Decision" rel="noopener noreferrer"&gt;[15]&lt;/a&gt;. The court held that Ross infringed 2,243 headnotes and rejected its fair‑use defense, signalling that non‑transformative AI uses may face liability&lt;a href="https://www.reedsmith.com/en/perspectives/2025/03/court-ai-fair-use-thomson-reuters-enterprise-gmbh-ross-intelligence?ref=breakthroughpursuit.com#:~:text=Decision" rel="noopener noreferrer"&gt;[15]&lt;/a&gt;. Shortly after, U.S. District Judge Eumi Lee denied Universal Music Group (UMG) and other publishers’ request for a preliminary injunction against Anthropic, ruling that the motion was too broad and that the publishers failed to show irreparable harm&lt;a href="https://www.reuters.com/legal/anthropic-wins-early-round-music-publishers-ai-copyright-case-2025-03-26/?ref=breakthroughpursuit.com#:~:text=March%2025%20%28Reuters%29%20,powered%20chatbot%20Claude" rel="noopener noreferrer"&gt;[16]&lt;/a&gt;. The judge observed that determining fair use is the “determinative question”&lt;a href="https://www.reuters.com/legal/anthropic-wins-early-round-music-publishers-ai-copyright-case-2025-03-26/?ref=breakthroughpursuit.com#:~:text=Fair%20use%20is%20likely%20to,not%20specifically%20address%20the%20issue" rel="noopener noreferrer"&gt;[17]&lt;/a&gt;, leaving final resolution for trial.&lt;/p&gt;

&lt;h3&gt;
  
  
  European Union
&lt;/h3&gt;

&lt;p&gt;The EU’s 2019 Copyright Directive introduced a text and data mining exception that permits reproductions “for the purposes of text and data mining” unless rights holders have &lt;em&gt;reserved their rights&lt;/em&gt; in a machine‑readable manner&lt;a href="https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?ref=breakthroughpursuit.com#:~:text=1,of%20text%20and%20data%20mining" rel="noopener noreferrer"&gt;[3]&lt;/a&gt;. This opt‑out mechanism, codified in Article 4, has become the focal point for AI training. However, a 2025 European Parliament study concluded that generative AI training goes beyond the scope of TDM exceptions, which were designed for scientific analysis rather than reproduction of expressive content&lt;a href="https://www.jonesday.com/en/insights/2025/08/european-parliaments-new-study-on-generative-ai-and-copyright-calls-for-overhaul-of-optout-regime?ref=breakthroughpursuit.com#:~:text=Legal%20Mismatch%20Between%20AI%20Training,to%20avoid%20unintended%20licensing%20loopholes" rel="noopener noreferrer"&gt;[18]&lt;/a&gt;. The study recommended revising Article 4 to require an opt‑in for commercial AI training and called for mandatory disclosure of training datasets and traceability via watermarking&lt;a href="https://www.jonesday.com/en/insights/2025/08/european-parliaments-new-study-on-generative-ai-and-copyright-calls-for-overhaul-of-optout-regime?ref=breakthroughpursuit.com#:~:text=Calls%20for%20Transparency%20and%20Equitable,value%20derived%20from%20their%20works" rel="noopener noreferrer"&gt;[19]&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The LAION case demonstrates the ambiguity of EU exceptions. In October 2024 a German regional court held that creating the LAION‑5B dataset was lawful under the EU’s research TDM exception&lt;a href="https://communia-association.org/2024/10/11/laion-vs-kneschke-building-public-datasets-is-covered-by-the-tdm-exception/?ref=breakthroughpursuit.com#:~:text=Two%20weeks%20ago%2C%20the%20Landgericht,training%20data%20transparency%20in%20general" rel="noopener noreferrer"&gt;[6]&lt;/a&gt;. Yet the same dataset triggered safety concerns when researchers found CSAM, prompting LAION to work with the Internet Watch Foundation and release “Re‑LAION‑5B” after removing problematic links&lt;a href="https://the-decoder.com/laion-releases-ai-dataset-re-laion-5b-purged-of-links-to-child-abuse-images/?ref=breakthroughpursuit.com#:~:text=Re,matching%20content%20from%20their%20versions" rel="noopener noreferrer"&gt;[7]&lt;/a&gt;. This juxtaposition highlights the tension between transparency (public datasets enable audits) and harm (unfiltered data may include illegal content).&lt;/p&gt;

&lt;h3&gt;
  
  
  United Kingdom
&lt;/h3&gt;

&lt;p&gt;At the time of writing, UK copyright law allows text and data mining only for non‑commercial research. In December 2024 the UK Government launched a consultation proposing an exception for AI training coupled with a rights reservation mechanism similar to the EU opt‑out&lt;a href="https://www.gov.uk/government/consultations/copyright-and-artificial-intelligence/copyright-and-artificial-intelligence?ref=breakthroughpursuit.com#:~:text=75,would%20have%20the%20following%20features" rel="noopener noreferrer"&gt;[20]&lt;/a&gt;. The consultation argues that both creators and AI developers suffer from legal uncertainty and states that a new framework must reward creators, ensure lawful access and promote trust&lt;a href="https://www.gov.uk/government/consultations/copyright-and-artificial-intelligence/copyright-and-artificial-intelligence?ref=breakthroughpursuit.com#:~:text=Both%20our%20creative%20industries%20and,term%20growth%20in%20both%20sectors" rel="noopener noreferrer"&gt;[21]&lt;/a&gt;. The proposed approach would allow AI developers to train on any material, including for commercial use, unless rights holders have reserved their rights via standardised machine‑readable declarations&lt;a href="https://www.gov.uk/government/consultations/copyright-and-artificial-intelligence/copyright-and-artificial-intelligence?ref=breakthroughpursuit.com#:~:text=75,would%20have%20the%20following%20features" rel="noopener noreferrer"&gt;[4]&lt;/a&gt;. The government emphasises transparency—developers would need to disclose training sources and provide summary information upon request&lt;a href="https://www.gov.uk/government/consultations/copyright-and-artificial-intelligence/copyright-and-artificial-intelligence?ref=breakthroughpursuit.com#:~:text=,explored%20in%20more%20detail%20below" rel="noopener noreferrer"&gt;[22]&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The proposals have faced pushback. The Creative Rights in AI Coalition—a group that includes the Society of Authors, UK Music and the Publishers Association—criticised the rights reservation model, arguing that AI developers should only use copyrighted works with express permission&lt;a href="https://commonslibrary.parliament.uk/research-briefings/cdp-2025-0081/?ref=breakthroughpursuit.com#:~:text=The%20consultation%E2%80%99s%20proposals%20are%20controversial,was%20used%20when%20training%20AI" rel="noopener noreferrer"&gt;[23]&lt;/a&gt;. The coalition welcomes measures to improve transparency but insists that an opt‑out approach would shift the burden onto creators and fail to deter unauthorised use&lt;a href="https://commonslibrary.parliament.uk/research-briefings/cdp-2025-0081/?ref=breakthroughpursuit.com#:~:text=The%20consultation%E2%80%99s%20proposals%20are%20controversial,used%20when%20training%20AI%20models" rel="noopener noreferrer"&gt;[24]&lt;/a&gt;. Parliament’s Culture, Media and Sport Committee echoed these concerns, noting widespread worry among creative industries about unconsented training&lt;a href="https://commonslibrary.parliament.uk/research-briefings/cdp-2025-0081/?ref=breakthroughpursuit.com#:~:text=Caroline%20Dinenage%2C%20Chair%20of%20the,models%20without%20consent%20or%20compensation%E2%80%9D" rel="noopener noreferrer"&gt;[25]&lt;/a&gt;. Despite this, the government continues to explore technical solutions for machine‑readable opt‑outs and emphasises that any system must enable collective licensing and enforcement&lt;a href="https://www.gov.uk/government/consultations/copyright-and-artificial-intelligence/copyright-and-artificial-intelligence?ref=breakthroughpursuit.com#:~:text=,explored%20in%20more%20detail%20below" rel="noopener noreferrer"&gt;[22]&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Global Forums
&lt;/h3&gt;

&lt;p&gt;Beyond national regimes, multilateral organisations are shaping principles for AI training. The World Intellectual Property Organization (WIPO) convenes the “Conversation on IP &amp;amp; Frontier Technologies” series, where governments debate whether training constitutes use or analysis and explore infrastructure for rights reservations and compensation. UNESCO has issued recommendations on AI ethics that emphasise respect for human rights and cultural diversity. However, global consensus remains elusive: in many jurisdictions, courts rather than policymakers are making the first determinations.&lt;/p&gt;

&lt;h2&gt;
  
  
  III. Litigation Heat Map: Courts as the Frontline
&lt;/h2&gt;

&lt;p&gt;The following cases illustrate how courts are addressing AI training disputes. Each case signals how different legal regimes interpret fair use or exceptions and reveals emerging patterns in remedies and procedural orders.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;NYT v. Microsoft &amp;amp; OpenAI (U.S., 2025)&lt;/strong&gt; – Filed in December 2023, this lawsuit alleges that OpenAI and Microsoft copied millions of &lt;em&gt;New York Times&lt;/em&gt; articles to train ChatGPT, harming the publisher by bypassing paywalls and reproducing its stories&lt;a href="https://smithhopen.com/2025/07/17/nyt-v-openai-microsoft-ai-copyright-lawsuit-update-2025/?ref=breakthroughpursuit.com#:~:text=2025%20brings%20big%20steps%20in,contributory%20infringement%2C%20plus%20DMCA%20breaches" rel="noopener noreferrer"&gt;[26]&lt;/a&gt;. In March 2025 Judge Sidney Stein rejected most of the defendants’ dismissal arguments, allowing direct and contributory infringement claims to proceed&lt;a href="https://smithhopen.com/2025/07/17/nyt-v-openai-microsoft-ai-copyright-lawsuit-update-2025/?ref=breakthroughpursuit.com#:~:text=2025%20brings%20big%20steps%20in,contributory%20infringement%2C%20plus%20DMCA%20breaches" rel="noopener noreferrer"&gt;[26]&lt;/a&gt;. In May the court ordered OpenAI to preserve all user logs for discovery&lt;a href="https://smithhopen.com/2025/07/17/nyt-v-openai-microsoft-ai-copyright-lawsuit-update-2025/?ref=breakthroughpursuit.com#:~:text=A%20heated%20issue%20popped%20up,retention%20deals" rel="noopener noreferrer"&gt;[14]&lt;/a&gt;, raising privacy concerns and signalling that courts may compel disclosure of training data and interactions.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Thomson Reuters v. ROSS Intelligence (U.S., 2025)&lt;/strong&gt; – This case involves ROSS’s legal research AI built from Westlaw headnotes. In February 2025 the Delaware district court found ROSS liable for copying 2,243 headnotes and ruled that its training was not transformative, emphasising that ROSS used the material to build a competing product&lt;a href="https://www.reedsmith.com/en/perspectives/2025/03/court-ai-fair-use-thomson-reuters-enterprise-gmbh-ross-intelligence?ref=breakthroughpursuit.com#:~:text=Decision" rel="noopener noreferrer"&gt;[15]&lt;/a&gt;. The decision signals that AI tools offering non‑transformative functions (e.g., replicating a database) may not benefit from fair‑use defenses.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Getty Images v. Stability AI (UK, 2025)&lt;/strong&gt; – Getty claims that Stability AI used millions of its photos and captions to train Stable Diffusion without permission. The case includes copyright, trademark and database right claims. During the trial, Getty dropped its input and output claims, leaving only the issue of whether the trained model itself is an infringing “article”&lt;a href="https://www.finnegan.com/en/insights/articles/getty-images-vs-stability-ai-the-uk-court-battle-that-could-reshape-ai-and-copyright-law.html?ref=breakthroughpursuit.com#:~:text=But%20here%20is%20the%20twist%3A,which%20is%20an%20%E2%80%98infringing%20copy%E2%80%99" rel="noopener noreferrer"&gt;[27]&lt;/a&gt;. Stability argues that training occurred outside the UK and that users, not the company, produce outputs&lt;a href="https://www.finnegan.com/en/insights/articles/getty-images-vs-stability-ai-the-uk-court-battle-that-could-reshape-ai-and-copyright-law.html?ref=breakthroughpursuit.com#:~:text=Stability%20AI%20countered%20that%20training,for%20parody%20or%20stylistic%20imitation" rel="noopener noreferrer"&gt;[28]&lt;/a&gt;. The outcome could determine whether models themselves can be considered infringing copies.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;UMG/Concord/ABKCO v. Anthropic (U.S., 2025)&lt;/strong&gt; – Music publishers sued Anthropic for allegedly using lyrics from over 500 songs to train Claude. In March 2025 a California federal judge denied the publishers’ request for a preliminary injunction, ruling that the proposed order was too broad and that the plaintiffs failed to show irreparable harm&lt;a href="https://www.reuters.com/legal/anthropic-wins-early-round-music-publishers-ai-copyright-case-2025-03-26/?ref=breakthroughpursuit.com#:~:text=March%2025%20%28Reuters%29%20,powered%20chatbot%20Claude" rel="noopener noreferrer"&gt;[16]&lt;/a&gt;. The court noted that defining the licensing market for AI training remains unsettled and that fair use will be a central question&lt;a href="https://www.reuters.com/legal/anthropic-wins-early-round-music-publishers-ai-copyright-case-2025-03-26/?ref=breakthroughpursuit.com#:~:text=Fair%20use%20is%20likely%20to,not%20specifically%20address%20the%20issue" rel="noopener noreferrer"&gt;[17]&lt;/a&gt;. The case continues amid broader negotiations; some publishers reportedly reached partial settlements after Anthropic announced a $1.5 billion settlement with authors in a separate class action&lt;a href="https://www.ropesgray.com/en/insights/alerts/2025/09/anthropics-landmark-copyright-settlement-implications-for-ai-developers-and-enterprise-users?ref=breakthroughpursuit.com#:~:text=With%20the%20plaintiffs%20seeking%20statutory,includes%20the%20following%20key%20provisions" rel="noopener noreferrer"&gt;[29]&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Authors v. Meta/OpenAI (U.S., 2025)&lt;/strong&gt; – Various authors have filed suits alleging that Meta and OpenAI used pirated books, including “shadow library” datasets like Books3, to train language models. Courts have dismissed some claims but allowed others—especially those alleging removal of copyright management information (DMCA CMI) and unfair competition—to proceed. While not yet generating precedent, these cases illustrate the difficulty of policing training data and the potential liability for using illicit sources&lt;a href="https://www.classaction.org/news/class-action-lawsuit-alleges-apple-illegally-uses-copyrighted-works-for-ai-training?ref=breakthroughpursuit.com#:~:text=The%20complaint%20alleges%20that%20the,copyright%20law" rel="noopener noreferrer"&gt;[8]&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  IV. Creators, Journalists and the New Bargaining Map
&lt;/h2&gt;

&lt;p&gt;The litigation spotlight reveals deeper societal tensions. Creators, news organisations and cultural industries fear that AI training will cannibalise their markets. AI developers argue that training is necessary for innovation and emphasise open access to information. This section synthesises stakeholder positions using a &lt;strong&gt;Stakeholder Bargaining Map&lt;/strong&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Authors and Artists
&lt;/h3&gt;

&lt;p&gt;Many authors and artists see AI training as an existential threat. The Society of Authors (SoA) and the Creative Rights in AI Coalition argue that rights reservation models shift the burden onto creators; instead, they advocate for an opt‑in regime where AI companies must obtain express permission&lt;a href="https://commonslibrary.parliament.uk/research-briefings/cdp-2025-0081/?ref=breakthroughpursuit.com#:~:text=The%20consultation%E2%80%99s%20proposals%20are%20controversial,was%20used%20when%20training%20AI" rel="noopener noreferrer"&gt;[23]&lt;/a&gt;. The SoA has publicly criticised proposals for automatic rights reservations, warning that they require expensive content‑recognition systems that creators cannot implement&lt;a href="https://lordslibrary.parliament.uk/copyright-and-artificial-intelligence-impact-on-creative-industries/?ref=breakthroughpursuit.com#:~:text=The%20government%E2%80%99s%20proposal%20to%20create,15" rel="noopener noreferrer"&gt;[30]&lt;/a&gt;. Surveys by UK creative unions suggest that a majority of writers believe AI threatens their livelihoods and that any legal framework must ensure compensation and control.&lt;/p&gt;

&lt;h3&gt;
  
  
  Journalists and Publishers
&lt;/h3&gt;

&lt;p&gt;News publishers have organised under the News/Media Alliance (NMA) to demand compensation, transparency and anti‑monopoly measures. In 2024 the Alliance launched the “Support Responsible AI” campaign, featuring ads like “Keep Watch on AI” and “AI Steals from You Too” to highlight that AI companies scrape publishers’ content without payment&lt;a href="https://ipcloseup.com/2025/05/14/news-and-book-publishers-launch-offensive-to-stop-tech-giants-from-stealing-their-content-for-a-i/?ref=breakthroughpursuit.com#:~:text=Ads%20in%20the%20NMA%20campaign,%E2%80%9D" rel="noopener noreferrer"&gt;[1]&lt;/a&gt;. The campaign calls on governments to require licensing deals, force AI companies to disclose training sources, and prevent tech monopolies from dominating the market&lt;a href="https://ipcloseup.com/2025/05/14/news-and-book-publishers-launch-offensive-to-stop-tech-giants-from-stealing-their-content-for-a-i/?ref=breakthroughpursuit.com#:~:text=The%20ad%20campaign%2C%20which%20has,country%2C%20has%20three%20key%20asks" rel="noopener noreferrer"&gt;[31]&lt;/a&gt;. Publishers argue that unlicensed AI training undermines their subscription model and that data‑scraping erodes the advertising-driven business that funds journalism.&lt;/p&gt;

&lt;h3&gt;
  
  
  Musicians and Photographers
&lt;/h3&gt;

&lt;p&gt;Music publishers UMG, Concord and ABKCO view AI training as both a threat and an opportunity. They have sued Anthropic for using lyrics without permission but also seek to negotiate licensing frameworks. Photographers, represented by Getty Images, worry that generative models can reproduce their watermarks in generated images or otherwise confuse consumers about provenance. Getty’s case against Stability AI emphasises the investment required to curate a photo library and argues that training on such a database without payment amounts to misappropriation&lt;a href="https://www.finnegan.com/en/insights/articles/getty-images-vs-stability-ai-the-uk-court-battle-that-could-reshape-ai-and-copyright-law.html?ref=breakthroughpursuit.com#:~:text=The%20Brief" rel="noopener noreferrer"&gt;[32]&lt;/a&gt;. Conversely, AI companies claim that training constitutes fair dealing or takes place outside the jurisdiction&lt;a href="https://www.finnegan.com/en/insights/articles/getty-images-vs-stability-ai-the-uk-court-battle-that-could-reshape-ai-and-copyright-law.html?ref=breakthroughpursuit.com#:~:text=Stability%20AI%20countered%20that%20training,for%20parody%20or%20stylistic%20imitation" rel="noopener noreferrer"&gt;[28]&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Advocacy and Civil Society
&lt;/h3&gt;

&lt;p&gt;Civil liberties groups like the Electronic Frontier Foundation (EFF) caution against overly restrictive regimes that could hinder research and access to information. Creative Commons advocates for clear machine‑readable licenses that allow creators to choose permissive or restrictive terms. Communia and Open Future call for open datasets to enable accountability, noting that LAION’s transparency allowed researchers to detect harmful content and push for safety improvements&lt;a href="https://communia-association.org/2024/10/11/laion-vs-kneschke-building-public-datasets-is-covered-by-the-tdm-exception/?ref=breakthroughpursuit.com#:~:text=Here%2C%20the%20positive%20impact%20of,problematic%20patterns%20in%20the%20dataset" rel="noopener noreferrer"&gt;[33]&lt;/a&gt;. These groups suggest that rather than banning training, policymakers should mandate transparency and invest in auditing tools to detect misuse.&lt;/p&gt;

&lt;h3&gt;
  
  
  Stakeholder Bargaining Map
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Creators ↔ AI Providers:&lt;/strong&gt; Creators seek compensation and control; AI providers seek access and legal certainty. The bargaining equilibrium may involve collective licensing schemes, revenue sharing and transparent audit logs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AI Providers ↔ Intermediaries:&lt;/strong&gt; Platforms like Cloudflare, search engines and hosting services mediate access to data. Cloudflare’s bots‑blocking tools and robots.txt management illustrate how intermediaries can empower publishers&lt;a href="https://blog.cloudflare.com/control-content-use-for-ai-training/?ref=breakthroughpursuit.com#:~:text=Cloudflare%20is%20giving%20all%20website,that%20are%20monetized%20through%20ads" rel="noopener noreferrer"&gt;[9]&lt;/a&gt;. Conversely, circumventing such controls (e.g., crawling despite robots.txt) has triggered calls for enforcement and has been dubbed “Stop AI Theft.”&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Creators ↔ Intermediaries:&lt;/strong&gt; Creators often rely on intermediaries to enforce rights (e.g., robots.txt or licensing platforms). They advocate for standardised rights reservation mechanisms that intermediaries can implement, while emphasising that enforcement should not fall solely on individuals.&lt;/p&gt;

&lt;h2&gt;
  
  
  V. The Dataset Dilemma: Provenance, Safety and Governance
&lt;/h2&gt;

&lt;p&gt;Training datasets raise questions beyond copyright. They present challenges of provenance (where did the data come from?), safety (are there illegal or harmful materials?), and governance (can rights holders see and control how their works are used?).&lt;/p&gt;

&lt;h3&gt;
  
  
  Open Datasets and Transparency
&lt;/h3&gt;

&lt;p&gt;Open datasets like LAION‑5B provide valuable transparency. Because LAION published its dataset, researchers were able to identify CSAM and biased or harmful content&lt;a href="https://communia-association.org/2024/10/11/laion-vs-kneschke-building-public-datasets-is-covered-by-the-tdm-exception/?ref=breakthroughpursuit.com#:~:text=Here%2C%20the%20positive%20impact%20of,problematic%20patterns%20in%20the%20dataset" rel="noopener noreferrer"&gt;[33]&lt;/a&gt;. LAION responded by collaborating with the Internet Watch Foundation, temporarily removing the dataset and releasing a cleaned version, Re‑LAION‑5B&lt;a href="https://the-decoder.com/laion-releases-ai-dataset-re-laion-5b-purged-of-links-to-child-abuse-images/?ref=breakthroughpursuit.com#:~:text=Re,matching%20content%20from%20their%20versions" rel="noopener noreferrer"&gt;[7]&lt;/a&gt;. The case shows that public datasets, while risky, allow community auditing and improvement. In contrast, proprietary datasets remain opaque; rights holders cannot know whether their works were included unless developers voluntarily disclose.&lt;/p&gt;

&lt;h3&gt;
  
  
  Shadow Libraries and Pirated Data
&lt;/h3&gt;

&lt;p&gt;The Books3 dataset compiled nearly 200,000 pirated books. A class action alleges that Apple used Books3 to train its language models and misrepresented the works as “publicly available”&lt;a href="https://www.classaction.org/news/class-action-lawsuit-alleges-apple-illegally-uses-copyrighted-works-for-ai-training?ref=breakthroughpursuit.com#:~:text=The%20complaint%20alleges%20that%20the,copyright%20law" rel="noopener noreferrer"&gt;[8]&lt;/a&gt;. Plaintiffs argue this practice dilutes markets for authors’ works and deprives them of control over derivative uses. Similar suits target Meta and OpenAI for using other shadow libraries. These cases underscore that training on illicit data not only raises copyright concerns but also threatens trust and compliance. If companies knowingly use pirated works, they face statutory damages, reputational harm and legislative backlash.&lt;/p&gt;

&lt;h3&gt;
  
  
  Robots, Crawlers and Consent
&lt;/h3&gt;

&lt;p&gt;Technical governance is emerging as an important layer. Cloudflare found that only about 37% of top websites have a robots.txt file&lt;a href="https://blog.cloudflare.com/control-content-use-for-ai-training/?ref=breakthroughpursuit.com#:~:text=And%20while%20sites%20can%20use,this%20age%20of%20evolving%20crawlers" rel="noopener noreferrer"&gt;[34]&lt;/a&gt;, meaning that most sites cannot even signal their preferences to AI crawlers. To remedy this, Cloudflare introduced managed robots.txt services and an option to block AI bots on monetized portions of a site&lt;a href="https://blog.cloudflare.com/control-content-use-for-ai-training/?ref=breakthroughpursuit.com#:~:text=Cloudflare%20is%20giving%20all%20website,that%20are%20monetized%20through%20ads" rel="noopener noreferrer"&gt;[11]&lt;/a&gt;. The company’s analysis shows that AI crawlers seldom reciprocate traffic—OpenAI’s crawl‑to‑refer ratio is 1,700:1 and Anthropic’s 73,000:1&lt;a href="https://blog.cloudflare.com/control-content-use-for-ai-training/?ref=breakthroughpursuit.com#:~:text=Cloudflare%20is%20giving%20all%20website,that%20are%20monetized%20through%20ads" rel="noopener noreferrer"&gt;[9]&lt;/a&gt;—highlighting why publishers view unsanctioned scraping as theft. These metrics provide a basis for a &lt;strong&gt;Dataset Provenance &amp;amp; Risk Scorecard&lt;/strong&gt;.&lt;/p&gt;
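&lt;p&gt;The robots.txt signalling described above can be exercised directly from code. A minimal sketch using Python’s standard &lt;code&gt;urllib.robotparser&lt;/code&gt;; the example site and rules are assumptions (GPTBot and ClaudeBot are the user-agents commonly associated with OpenAI’s and Anthropic’s crawlers):&lt;/p&gt;

```python
from urllib.robotparser import RobotFileParser

# Example robots.txt a publisher might serve to keep AI crawlers out while
# still admitting ordinary search crawlers. The rules are illustrative only.
ROBOTS_TXT = """\
User-agent: GPTBot
Disallow: /

User-agent: ClaudeBot
Disallow: /

User-agent: *
Allow: /
"""

parser = RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

# A crawler that honours robots.txt performs this check before every fetch:
print(parser.can_fetch("GPTBot", "https://example.com/article"))     # False
print(parser.can_fetch("Googlebot", "https://example.com/article"))  # True
```

&lt;p&gt;A crawler that skips this check is precisely the behaviour Cloudflare’s managed blocking is designed to stop at the network edge rather than trusting voluntary compliance.&lt;/p&gt;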

&lt;h3&gt;
  
  
  Dataset Provenance &amp;amp; Risk Scorecard
&lt;/h3&gt;

&lt;p&gt;The scorecard below summarizes risks associated with major datasets:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;LAION‑5B / Re‑LAION‑5B&lt;/strong&gt; – Source: scraped web images; &lt;em&gt;Opt‑out status:&lt;/em&gt; supports rights reservation via Spawning’s opt‑out registry; &lt;em&gt;Safety:&lt;/em&gt; the initial dataset contained links to CSAM; LAION removed 2,236 links after review&lt;a href="https://the-decoder.com/laion-releases-ai-dataset-re-laion-5b-purged-of-links-to-child-abuse-images/?ref=breakthroughpursuit.com#:~:text=Re,matching%20content%20from%20their%20versions" rel="noopener noreferrer"&gt;[7]&lt;/a&gt;; &lt;em&gt;Auditability:&lt;/em&gt; high because dataset is public&lt;a href="https://communia-association.org/2024/10/11/laion-vs-kneschke-building-public-datasets-is-covered-by-the-tdm-exception/?ref=breakthroughpursuit.com#:~:text=Here%2C%20the%20positive%20impact%20of,problematic%20patterns%20in%20the%20dataset" rel="noopener noreferrer"&gt;[33]&lt;/a&gt;; &lt;em&gt;Traceability:&lt;/em&gt; moderate; no watermarks but includes URLs.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Books3 / Shadow Libraries&lt;/strong&gt; – Source: pirated books from Library Genesis and similar sites; &lt;em&gt;Opt‑out status:&lt;/em&gt; none; &lt;em&gt;Safety:&lt;/em&gt; includes copyrighted works; &lt;em&gt;Auditability:&lt;/em&gt; low because dataset was hosted via torrent; &lt;em&gt;Traceability:&lt;/em&gt; low; class actions allege misuse&lt;a href="https://www.classaction.org/news/class-action-lawsuit-alleges-apple-illegally-uses-copyrighted-works-for-ai-training?ref=breakthroughpursuit.com#:~:text=The%20complaint%20alleges%20that%20the,copyright%20law" rel="noopener noreferrer"&gt;[8]&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Proprietary training sets (e.g., Google, OpenAI)&lt;/strong&gt; – Source: mixture of licensed data, public web and proprietary corpora; &lt;em&gt;Opt‑out status:&lt;/em&gt; unclear; developers propose using robots.txt signals; &lt;em&gt;Safety:&lt;/em&gt; unknown; &lt;em&gt;Auditability:&lt;/em&gt; low because datasets are confidential; &lt;em&gt;Traceability:&lt;/em&gt; low; rights holders cannot confirm inclusion.&lt;/li&gt;
&lt;/ul&gt;
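&lt;p&gt;For side-by-side comparison, the scorecard above can be encoded as data and given a crude governance score. A minimal sketch mirroring the qualitative ratings in the list; the numeric weighting is an assumption for illustration only:&lt;/p&gt;

```python
# Qualitative ratings copied from the scorecard above; the LEVEL weights
# are illustrative assumptions, not an established scoring methodology.
SCORECARD = {
    "Re-LAION-5B": {"opt_out": "supported", "auditability": "high", "traceability": "moderate"},
    "Books3":      {"opt_out": "none",      "auditability": "low",  "traceability": "low"},
    "Proprietary": {"opt_out": "unclear",   "auditability": "low",  "traceability": "low"},
}

LEVEL = {"supported": 2, "high": 2, "moderate": 1, "unclear": 1, "none": 0, "low": 0}

def governance_score(row):
    """Higher = more consent, audit and traceability signals available."""
    return sum(LEVEL[value] for value in row.values())

ranked = sorted(SCORECARD, key=lambda name: governance_score(SCORECARD[name]), reverse=True)
print(ranked)  # Re-LAION-5B ranks first on these illustrative criteria
```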

&lt;p&gt;These examples illustrate the need for governance that balances transparency with privacy. Public datasets allow scrutiny but may expose harmful material; proprietary datasets protect corporate secrets but raise trust issues.&lt;/p&gt;

&lt;h2&gt;
  
  
  VI. Paths Forward: From Conflict to Constructive Frameworks
&lt;/h2&gt;

&lt;p&gt;The controversies outlined above show that AI training sits at the intersection of copyright, privacy, antitrust and cultural policy. To move from ad‑hoc litigation to sustainable governance, stakeholders need workable frameworks. This section proposes several principles.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Machine‑Readable Opt‑Outs and Registries
&lt;/h3&gt;

&lt;p&gt;Building on the EU’s opt‑out model, policymakers should establish standardised, machine‑readable reservations that are simple to implement. Developers would be permitted to train on works unless rights holders register an opt‑out. To prevent “gotcha” enforcement, the registry should be publicly accessible, and AI companies must regularly sync their training pipelines to respect updates. This approach avoids placing the entire burden on creators: a central database maintained by a neutral body could handle registrations and maintain technical standards. The UK government’s consultation notes that such a system requires technical and organisational infrastructure&lt;a href="https://www.gov.uk/government/consultations/copyright-and-artificial-intelligence/copyright-and-artificial-intelligence?ref=breakthroughpursuit.com#:~:text=,explored%20in%20more%20detail%20below" rel="noopener noreferrer"&gt;[22]&lt;/a&gt;.&lt;/p&gt;
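&lt;p&gt;One way to picture such a registry is as a machine-readable lookup that training pipelines consult before ingesting a URL. A minimal sketch, assuming a hypothetical JSON registry format (the field names, scope values and domains are illustrative, not a proposed standard):&lt;/p&gt;

```python
import json
from urllib.parse import urlparse

# Hypothetical registry entries: domains whose rights holders reserved rights.
# In a real system this would be synced regularly from a neutral central body.
REGISTRY_JSON = """
{
  "reservations": [
    {"domain": "example-news.com",   "scope": "all",        "registered": "2025-01-15"},
    {"domain": "example-photos.com", "scope": "commercial", "registered": "2025-03-02"}
  ]
}
"""

def load_reservations(raw):
    data = json.loads(raw)
    return {r["domain"]: r["scope"] for r in data["reservations"]}

def may_train_on(url, reservations, commercial=True):
    """Return True if this URL may be included in a training set."""
    scope = reservations.get(urlparse(url).netloc)
    if scope is None:
        return True   # no reservation registered
    if scope == "all":
        return False  # reserved against all training
    return not (scope == "commercial" and commercial)

reservations = load_reservations(REGISTRY_JSON)
candidates = [
    "https://example-news.com/story",
    "https://example-photos.com/img/1",
    "https://open-blog.example.org/post",
]
allowed = [u for u in candidates if may_train_on(u, reservations)]
print(allowed)  # only the unreserved blog URL survives the filter
```

&lt;p&gt;Note that a scope value like &lt;code&gt;"commercial"&lt;/code&gt; would still permit non-commercial research ingestion, mirroring the distinction the EU TDM exceptions already draw.&lt;/p&gt;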

&lt;h3&gt;
  
  
  2. Collective Licensing and Compensation
&lt;/h3&gt;

&lt;p&gt;An opt‑out regime alone does not address compensation. Creators need a mechanism to receive royalties when their works are used for training. Collective management organisations (CMOs) could negotiate licensing terms on behalf of rights holders, similar to how performance rights organisations operate in music. News publishers could negotiate blanket deals that license content for training in exchange for revenue sharing, while authors could license books through existing collecting societies. The News/Media Alliance advocates for licensing frameworks and compensation&lt;a href="https://ipcloseup.com/2025/05/14/news-and-book-publishers-launch-offensive-to-stop-tech-giants-from-stealing-their-content-for-a-i/?ref=breakthroughpursuit.com#:~:text=The%20ad%20campaign%2C%20which%20has,country%2C%20has%20three%20key%20asks" rel="noopener noreferrer"&gt;[31]&lt;/a&gt;. Without such schemes, AI companies may continue to rely on fair use arguments and litigation will persist.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Transparency and Audit Logs
&lt;/h3&gt;

&lt;p&gt;Courts and policymakers increasingly demand transparency. The U.S. discovery order requiring OpenAI to preserve user logs&lt;a href="https://smithhopen.com/2025/07/17/nyt-v-openai-microsoft-ai-copyright-lawsuit-update-2025/?ref=breakthroughpursuit.com#:~:text=A%20heated%20issue%20popped%20up,retention%20deals" rel="noopener noreferrer"&gt;[14]&lt;/a&gt; signals that judges may compel disclosure to assess infringement. Policymakers should require AI developers to maintain audit logs showing what data was used, how it was obtained, and whether rights reservations were honoured. Logs should protect personal data through aggregation but allow independent auditors to verify compliance. The European Parliament study recommends mandatory dataset disclosure and traceability via watermarking&lt;a href="https://www.jonesday.com/en/insights/2025/08/european-parliaments-new-study-on-generative-ai-and-copyright-calls-for-overhaul-of-optout-regime?ref=breakthroughpursuit.com#:~:text=Calls%20for%20Transparency%20and%20Equitable,value%20derived%20from%20their%20works" rel="noopener noreferrer"&gt;[19]&lt;/a&gt;, while the UK consultation emphasises that transparency is a prerequisite for any rights reservation system&lt;a href="https://www.gov.uk/government/consultations/copyright-and-artificial-intelligence/copyright-and-artificial-intelligence?ref=breakthroughpursuit.com#:~:text=,explored%20in%20more%20detail%20below" rel="noopener noreferrer"&gt;[22]&lt;/a&gt;.&lt;/p&gt;
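&lt;p&gt;Such an audit log might record, per ingested item, its source, licence basis and a content hash, letting an independent auditor verify inclusion claims without the developer disclosing the content itself. A minimal sketch; the field names and licence labels are assumptions rather than drawn from any cited proposal:&lt;/p&gt;

```python
import datetime
import hashlib
import json

def audit_record(url, content, licence_basis, optout_checked):
    """One append-only log entry per ingested training item. The content hash
    lets an auditor verify what was (or was not) ingested without seeing it."""
    return {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "source_url": url,
        "sha256": hashlib.sha256(content).hexdigest(),
        "licence_basis": licence_basis,  # e.g. "licensed" or "tdm-exception" (illustrative labels)
        "rights_reservation_checked": optout_checked,
    }

log = [audit_record("https://example.com/a", b"article text", "licensed", True)]
print(json.dumps(log[0], indent=2))
```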

&lt;h3&gt;
  
  
  4. Risk‑Based Governance and Safety Reviews
&lt;/h3&gt;

&lt;p&gt;Governance must also address safety. Public datasets should undergo third‑party reviews to detect illegal or harmful content, as in LAION’s collaboration with the Internet Watch Foundation&lt;a href="https://the-decoder.com/laion-releases-ai-dataset-re-laion-5b-purged-of-links-to-child-abuse-images/?ref=breakthroughpursuit.com#:~:text=Re,matching%20content%20from%20their%20versions" rel="noopener noreferrer"&gt;[7]&lt;/a&gt;. AI developers should implement content filtering and allow rights holders to report problematic data. Legislators could mandate audits for datasets above a certain size or used for models deployed to the public. Risk‑based governance—already a feature of the EU’s AI Act—can be extended to training data: high‑risk domains (e.g., health or criminal justice) may require stricter scrutiny and licensing than low‑risk creative uses.&lt;/p&gt;

&lt;h3&gt;
  
  
  5. Harmonisation of Fair‑Use/Exception Standards
&lt;/h3&gt;

&lt;p&gt;The divergence between U.S. fair‑use jurisprudence and EU/UK TDM exceptions creates uncertainty. Companies operating globally face inconsistent obligations. International organisations like WIPO could facilitate dialogue on harmonising exceptions, perhaps by establishing baseline criteria for transformative use, market substitution, and legitimate interests of rights holders. Until then, AI developers may choose to comply with the strictest regime, while lobbying for clarity and limiting liability through settlements and licences.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion: Towards an Equitable AI Economy
&lt;/h2&gt;

&lt;p&gt;The battles over AI training data are about more than legal technicalities; they reflect cultural values and economic power. Creators fear that their work will be devalued; developers worry that innovation will be stifled; policymakers seek to balance these interests while promoting global competitiveness. Litigation in 2025 has begun to define the contours of legality—allowing some claims to proceed, rejecting others, and imposing unprecedented discovery obligations&lt;a href="https://smithhopen.com/2025/07/17/nyt-v-openai-microsoft-ai-copyright-lawsuit-update-2025/?ref=breakthroughpursuit.com#:~:text=2025%20brings%20big%20steps%20in,contributory%20infringement%2C%20plus%20DMCA%20breaches" rel="noopener noreferrer"&gt;[35]&lt;/a&gt;. Regulatory proposals in the EU and UK experiment with opt‑outs and rights reservations&lt;a href="https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?ref=breakthroughpursuit.com#:~:text=1,of%20text%20and%20data%20mining" rel="noopener noreferrer"&gt;[3]&lt;/a&gt;&lt;a href="https://www.gov.uk/government/consultations/copyright-and-artificial-intelligence/copyright-and-artificial-intelligence?ref=breakthroughpursuit.com#:~:text=75,would%20have%20the%20following%20features" rel="noopener noreferrer"&gt;[4]&lt;/a&gt;. Stakeholders across industries advocate for licensing, transparency and fair compensation&lt;a href="https://ipcloseup.com/2025/05/14/news-and-book-publishers-launch-offensive-to-stop-tech-giants-from-stealing-their-content-for-a-i/?ref=breakthroughpursuit.com#:~:text=The%20ad%20campaign%2C%20which%20has,country%2C%20has%20three%20key%20asks" rel="noopener noreferrer"&gt;[31]&lt;/a&gt;. 
Technology intermediaries like Cloudflare are developing tools that allow website owners to control AI crawlers and gather metrics on scraping behaviour&lt;a href="https://blog.cloudflare.com/control-content-use-for-ai-training/?ref=breakthroughpursuit.com#:~:text=Cloudflare%20is%20giving%20all%20website,that%20are%20monetized%20through%20ads" rel="noopener noreferrer"&gt;[9]&lt;/a&gt;, signalling that technical governance will complement legal reforms.&lt;/p&gt;
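&lt;p&gt;In practice, the machine-readable controls mentioned above are most often expressed in robots.txt. A minimal example that disallows several publicly documented AI-training crawlers while leaving other crawlers unaffected (tokens current as of writing; site owners should verify them against each operator's documentation):&lt;/p&gt;

```txt
# Disallow known AI-training crawlers
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: Google-Extended
Disallow: /

# All other crawlers may continue as normal
User-agent: *
Allow: /
```

&lt;p&gt;Managed offerings like Cloudflare's keep such directives up to date as new crawler tokens appear, which is exactly the maintenance burden individual site owners struggle with.&lt;/p&gt;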

&lt;p&gt;The way forward lies in combining these approaches: implement machine‑readable opt‑outs, facilitate collective licensing, mandate transparency and audits, and align fair‑use standards across jurisdictions. Doing so will not only reward human creativity but also provide legal certainty for AI innovators. As this article has shown, the coming battles over AI’s raw material will shape the legitimacy of the technology itself. A sustainable settlement requires respect for the people whose works underlie AI’s capabilities and a recognition that openness and innovation can coexist with fairness and accountability.&lt;/p&gt;




&lt;p&gt;&lt;a href="https://ipcloseup.com/2025/05/14/news-and-book-publishers-launch-offensive-to-stop-tech-giants-from-stealing-their-content-for-a-i/?ref=breakthroughpursuit.com#:~:text=Ads%20in%20the%20NMA%20campaign,%E2%80%9D" rel="noopener noreferrer"&gt;[1]&lt;/a&gt; &lt;a href="https://ipcloseup.com/2025/05/14/news-and-book-publishers-launch-offensive-to-stop-tech-giants-from-stealing-their-content-for-a-i/?ref=breakthroughpursuit.com#:~:text=The%20ad%20campaign%2C%20which%20has,country%2C%20has%20three%20key%20asks" rel="noopener noreferrer"&gt;[31]&lt;/a&gt; News and Book Publishers Launch Offensive to Stop Tech Giants from Stealing Their Content for A.I.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://ipcloseup.com/2025/05/14/news-and-book-publishers-launch-offensive-to-stop-tech-giants-from-stealing-their-content-for-a-i/?ref=breakthroughpursuit.com" rel="noopener noreferrer"&gt;https://ipcloseup.com/2025/05/14/news-and-book-publishers-launch-offensive-to-stop-tech-giants-from-stealing-their-content-for-a-i/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.copyright.gov/ai/Copyright-and-Artificial-Intelligence-Part-3-Generative-AI-Training-Report-Pre-Publication-Version.pdf?ref=breakthroughpursuit.com#:~:text=which%20the%20use%20of%20copyrighted,with%20the%20harm%20to%20the" rel="noopener noreferrer"&gt;[2]&lt;/a&gt; &lt;a href="https://www.copyright.gov/ai/Copyright-and-Artificial-Intelligence-Part-3-Generative-AI-Training-Report-Pre-Publication-Version.pdf?ref=breakthroughpursuit.com#:~:text=33%20Pre,197" rel="noopener noreferrer"&gt;[12]&lt;/a&gt; Copyright and Artificial Intelligence, Part 3: Generative AI Training Pre-Publication Version&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.copyright.gov/ai/Copyright-and-Artificial-Intelligence-Part-3-Generative-AI-Training-Report-Pre-Publication-Version.pdf?ref=breakthroughpursuit.com" rel="noopener noreferrer"&gt;https://www.copyright.gov/ai/Copyright-and-Artificial-Intelligence-Part-3-Generative-AI-Training-Report-Pre-Publication-Version.pdf&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?ref=breakthroughpursuit.com#:~:text=1,of%20text%20and%20data%20mining" rel="noopener noreferrer"&gt;[3]&lt;/a&gt; L_2019130EN.01009201.xml&lt;/p&gt;

&lt;p&gt;&lt;a href="https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?ref=breakthroughpursuit.com" rel="noopener noreferrer"&gt;https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.gov.uk/government/consultations/copyright-and-artificial-intelligence/copyright-and-artificial-intelligence?ref=breakthroughpursuit.com#:~:text=75,would%20have%20the%20following%20features" rel="noopener noreferrer"&gt;[4]&lt;/a&gt; &lt;a href="https://www.gov.uk/government/consultations/copyright-and-artificial-intelligence/copyright-and-artificial-intelligence?ref=breakthroughpursuit.com#:~:text=75,would%20have%20the%20following%20features" rel="noopener noreferrer"&gt;[20]&lt;/a&gt; &lt;a href="https://www.gov.uk/government/consultations/copyright-and-artificial-intelligence/copyright-and-artificial-intelligence?ref=breakthroughpursuit.com#:~:text=Both%20our%20creative%20industries%20and,term%20growth%20in%20both%20sectors" rel="noopener noreferrer"&gt;[21]&lt;/a&gt; &lt;a href="https://www.gov.uk/government/consultations/copyright-and-artificial-intelligence/copyright-and-artificial-intelligence?ref=breakthroughpursuit.com#:~:text=,explored%20in%20more%20detail%20below" rel="noopener noreferrer"&gt;[22]&lt;/a&gt;  Copyright and Artificial Intelligence - GOV.UK&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.gov.uk/government/consultations/copyright-and-artificial-intelligence/copyright-and-artificial-intelligence?ref=breakthroughpursuit.com" rel="noopener noreferrer"&gt;https://www.gov.uk/government/consultations/copyright-and-artificial-intelligence/copyright-and-artificial-intelligence&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.reedsmith.com/en/perspectives/2025/03/court-ai-fair-use-thomson-reuters-enterprise-gmbh-ross-intelligence?ref=breakthroughpursuit.com#:~:text=Initially%20in%202023%2C%20Circuit%20Judge,discussed%20in%20more%20detail%20below" rel="noopener noreferrer"&gt;[5]&lt;/a&gt; &lt;a href="https://www.reedsmith.com/en/perspectives/2025/03/court-ai-fair-use-thomson-reuters-enterprise-gmbh-ross-intelligence?ref=breakthroughpursuit.com#:~:text=Decision" rel="noopener noreferrer"&gt;[15]&lt;/a&gt; Court shuts down AI fair use argument in Thomson Reuters Enterprise Centre GMBH v. Ross Intelligence Inc. | Perspectives | Reed Smith LLP&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.reedsmith.com/en/perspectives/2025/03/court-ai-fair-use-thomson-reuters-enterprise-gmbh-ross-intelligence?ref=breakthroughpursuit.com" rel="noopener noreferrer"&gt;https://www.reedsmith.com/en/perspectives/2025/03/court-ai-fair-use-thomson-reuters-enterprise-gmbh-ross-intelligence&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://communia-association.org/2024/10/11/laion-vs-kneschke-building-public-datasets-is-covered-by-the-tdm-exception/?ref=breakthroughpursuit.com#:~:text=Two%20weeks%20ago%2C%20the%20Landgericht,training%20data%20transparency%20in%20general" rel="noopener noreferrer"&gt;[6]&lt;/a&gt; &lt;a href="https://communia-association.org/2024/10/11/laion-vs-kneschke-building-public-datasets-is-covered-by-the-tdm-exception/?ref=breakthroughpursuit.com#:~:text=Here%2C%20the%20positive%20impact%20of,problematic%20patterns%20in%20the%20dataset" rel="noopener noreferrer"&gt;[33]&lt;/a&gt; LAION vs Kneschke: Building public datasets is covered by the TDM exception&lt;/p&gt;

&lt;p&gt;&lt;a href="https://communia-association.org/2024/10/11/laion-vs-kneschke-building-public-datasets-is-covered-by-the-tdm-exception/?ref=breakthroughpursuit.com" rel="noopener noreferrer"&gt;https://communia-association.org/2024/10/11/laion-vs-kneschke-building-public-datasets-is-covered-by-the-tdm-exception/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://the-decoder.com/laion-releases-ai-dataset-re-laion-5b-purged-of-links-to-child-abuse-images/?ref=breakthroughpursuit.com#:~:text=Re,matching%20content%20from%20their%20versions" rel="noopener noreferrer"&gt;[7]&lt;/a&gt; LAION releases AI dataset Re-LAION-5B purged of links to child abuse images&lt;/p&gt;

&lt;p&gt;&lt;a href="https://the-decoder.com/laion-releases-ai-dataset-re-laion-5b-purged-of-links-to-child-abuse-images/?ref=breakthroughpursuit.com" rel="noopener noreferrer"&gt;https://the-decoder.com/laion-releases-ai-dataset-re-laion-5b-purged-of-links-to-child-abuse-images/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.classaction.org/news/class-action-lawsuit-alleges-apple-illegally-uses-copyrighted-works-for-ai-training?ref=breakthroughpursuit.com#:~:text=The%20complaint%20alleges%20that%20the,copyright%20law" rel="noopener noreferrer"&gt;[8]&lt;/a&gt; Class Action Lawsuit Alleges Apple Illegally Uses Copyrighted Works for AI Training&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.classaction.org/news/class-action-lawsuit-alleges-apple-illegally-uses-copyrighted-works-for-ai-training?ref=breakthroughpursuit.com" rel="noopener noreferrer"&gt;https://www.classaction.org/news/class-action-lawsuit-alleges-apple-illegally-uses-copyrighted-works-for-ai-training&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://blog.cloudflare.com/control-content-use-for-ai-training/?ref=breakthroughpursuit.com#:~:text=Cloudflare%20is%20giving%20all%20website,that%20are%20monetized%20through%20ads" rel="noopener noreferrer"&gt;[9]&lt;/a&gt; &lt;a href="https://blog.cloudflare.com/control-content-use-for-ai-training/?ref=breakthroughpursuit.com#:~:text=Protecting%20content%20creators%20isn%E2%80%99t%20new,given%20us%20some%20interesting%20data" rel="noopener noreferrer"&gt;[10]&lt;/a&gt; &lt;a href="https://blog.cloudflare.com/control-content-use-for-ai-training/?ref=breakthroughpursuit.com#:~:text=Cloudflare%20is%20giving%20all%20website,that%20are%20monetized%20through%20ads" rel="noopener noreferrer"&gt;[11]&lt;/a&gt; &lt;a href="https://blog.cloudflare.com/control-content-use-for-ai-training/?ref=breakthroughpursuit.com#:~:text=And%20while%20sites%20can%20use,this%20age%20of%20evolving%20crawlers" rel="noopener noreferrer"&gt;[34]&lt;/a&gt; Control content use for AI training with Cloudflare’s managed robots.txt and blocking for monetized content&lt;/p&gt;

&lt;p&gt;&lt;a href="https://blog.cloudflare.com/control-content-use-for-ai-training/?ref=breakthroughpursuit.com" rel="noopener noreferrer"&gt;https://blog.cloudflare.com/control-content-use-for-ai-training/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://smithhopen.com/2025/07/17/nyt-v-openai-microsoft-ai-copyright-lawsuit-update-2025/?ref=breakthroughpursuit.com#:~:text=2025%20brings%20big%20steps%20in,contributory%20infringement%2C%20plus%20DMCA%20breaches" rel="noopener noreferrer"&gt;[13]&lt;/a&gt; &lt;a href="https://smithhopen.com/2025/07/17/nyt-v-openai-microsoft-ai-copyright-lawsuit-update-2025/?ref=breakthroughpursuit.com#:~:text=A%20heated%20issue%20popped%20up,retention%20deals" rel="noopener noreferrer"&gt;[14]&lt;/a&gt; &lt;a href="https://smithhopen.com/2025/07/17/nyt-v-openai-microsoft-ai-copyright-lawsuit-update-2025/?ref=breakthroughpursuit.com#:~:text=2025%20brings%20big%20steps%20in,contributory%20infringement%2C%20plus%20DMCA%20breaches" rel="noopener noreferrer"&gt;[26]&lt;/a&gt; &lt;a href="https://smithhopen.com/2025/07/17/nyt-v-openai-microsoft-ai-copyright-lawsuit-update-2025/?ref=breakthroughpursuit.com#:~:text=2025%20brings%20big%20steps%20in,contributory%20infringement%2C%20plus%20DMCA%20breaches" rel="noopener noreferrer"&gt;[35]&lt;/a&gt; The New York Times v. OpenAI and Microsoft - Smith &amp;amp; Hopen&lt;/p&gt;

&lt;p&gt;&lt;a href="https://smithhopen.com/2025/07/17/nyt-v-openai-microsoft-ai-copyright-lawsuit-update-2025/?ref=breakthroughpursuit.com" rel="noopener noreferrer"&gt;https://smithhopen.com/2025/07/17/nyt-v-openai-microsoft-ai-copyright-lawsuit-update-2025/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.reuters.com/legal/anthropic-wins-early-round-music-publishers-ai-copyright-case-2025-03-26/?ref=breakthroughpursuit.com#:~:text=March%2025%20%28Reuters%29%20,powered%20chatbot%20Claude" rel="noopener noreferrer"&gt;[16]&lt;/a&gt; &lt;a href="https://www.reuters.com/legal/anthropic-wins-early-round-music-publishers-ai-copyright-case-2025-03-26/?ref=breakthroughpursuit.com#:~:text=Fair%20use%20is%20likely%20to,not%20specifically%20address%20the%20issue" rel="noopener noreferrer"&gt;[17]&lt;/a&gt; Anthropic wins early round in music publishers' AI copyright case | Reuters&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.reuters.com/legal/anthropic-wins-early-round-music-publishers-ai-copyright-case-2025-03-26/?ref=breakthroughpursuit.com" rel="noopener noreferrer"&gt;https://www.reuters.com/legal/anthropic-wins-early-round-music-publishers-ai-copyright-case-2025-03-26/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.jonesday.com/en/insights/2025/08/european-parliaments-new-study-on-generative-ai-and-copyright-calls-for-overhaul-of-optout-regime?ref=breakthroughpursuit.com#:~:text=Legal%20Mismatch%20Between%20AI%20Training,to%20avoid%20unintended%20licensing%20loopholes" rel="noopener noreferrer"&gt;[18]&lt;/a&gt; &lt;a href="https://www.jonesday.com/en/insights/2025/08/european-parliaments-new-study-on-generative-ai-and-copyright-calls-for-overhaul-of-optout-regime?ref=breakthroughpursuit.com#:~:text=Calls%20for%20Transparency%20and%20Equitable,value%20derived%20from%20their%20works" rel="noopener noreferrer"&gt;[19]&lt;/a&gt; European Parliament's New Study on Generative AI and Copyright Calls for Overhaul of Opt-Out Regime | Insights | Jones Day&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.jonesday.com/en/insights/2025/08/european-parliaments-new-study-on-generative-ai-and-copyright-calls-for-overhaul-of-optout-regime?ref=breakthroughpursuit.com" rel="noopener noreferrer"&gt;https://www.jonesday.com/en/insights/2025/08/european-parliaments-new-study-on-generative-ai-and-copyright-calls-for-overhaul-of-optout-regime&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://commonslibrary.parliament.uk/research-briefings/cdp-2025-0081/?ref=breakthroughpursuit.com#:~:text=The%20consultation%E2%80%99s%20proposals%20are%20controversial,was%20used%20when%20training%20AI" rel="noopener noreferrer"&gt;[23]&lt;/a&gt; &lt;a href="https://commonslibrary.parliament.uk/research-briefings/cdp-2025-0081/?ref=breakthroughpursuit.com#:~:text=The%20consultation%E2%80%99s%20proposals%20are%20controversial,used%20when%20training%20AI%20models" rel="noopener noreferrer"&gt;[24]&lt;/a&gt; &lt;a href="https://commonslibrary.parliament.uk/research-briefings/cdp-2025-0081/?ref=breakthroughpursuit.com#:~:text=Caroline%20Dinenage%2C%20Chair%20of%20the,models%20without%20consent%20or%20compensation%E2%80%9D" rel="noopener noreferrer"&gt;[25]&lt;/a&gt; Impact of AI on intellectual property - House of Commons Library&lt;/p&gt;

&lt;p&gt;&lt;a href="https://commonslibrary.parliament.uk/research-briefings/cdp-2025-0081/?ref=breakthroughpursuit.com" rel="noopener noreferrer"&gt;https://commonslibrary.parliament.uk/research-briefings/cdp-2025-0081/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.finnegan.com/en/insights/articles/getty-images-vs-stability-ai-the-uk-court-battle-that-could-reshape-ai-and-copyright-law.html?ref=breakthroughpursuit.com#:~:text=But%20here%20is%20the%20twist%3A,which%20is%20an%20%E2%80%98infringing%20copy%E2%80%99" rel="noopener noreferrer"&gt;[27]&lt;/a&gt; &lt;a href="https://www.finnegan.com/en/insights/articles/getty-images-vs-stability-ai-the-uk-court-battle-that-could-reshape-ai-and-copyright-law.html?ref=breakthroughpursuit.com#:~:text=Stability%20AI%20countered%20that%20training,for%20parody%20or%20stylistic%20imitation" rel="noopener noreferrer"&gt;[28]&lt;/a&gt; &lt;a href="https://www.finnegan.com/en/insights/articles/getty-images-vs-stability-ai-the-uk-court-battle-that-could-reshape-ai-and-copyright-law.html?ref=breakthroughpursuit.com#:~:text=The%20Brief" rel="noopener noreferrer"&gt;[32]&lt;/a&gt; Getty Images vs. Stability AI: The UK Court Battle That Could Reshape AI and Copyright Law | Articles | Finnegan | Leading IP+ Law Firm&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.finnegan.com/en/insights/articles/getty-images-vs-stability-ai-the-uk-court-battle-that-could-reshape-ai-and-copyright-law.html?ref=breakthroughpursuit.com" rel="noopener noreferrer"&gt;https://www.finnegan.com/en/insights/articles/getty-images-vs-stability-ai-the-uk-court-battle-that-could-reshape-ai-and-copyright-law.html&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.ropesgray.com/en/insights/alerts/2025/09/anthropics-landmark-copyright-settlement-implications-for-ai-developers-and-enterprise-users?ref=breakthroughpursuit.com#:~:text=With%20the%20plaintiffs%20seeking%20statutory,includes%20the%20following%20key%20provisions" rel="noopener noreferrer"&gt;[29]&lt;/a&gt; Anthropic’s Landmark Copyright Settlement: Implications for AI Developers and Enterprise Users | Insights | Ropes &amp;amp; Gray LLP&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.ropesgray.com/en/insights/alerts/2025/09/anthropics-landmark-copyright-settlement-implications-for-ai-developers-and-enterprise-users?ref=breakthroughpursuit.com" rel="noopener noreferrer"&gt;https://www.ropesgray.com/en/insights/alerts/2025/09/anthropics-landmark-copyright-settlement-implications-for-ai-developers-and-enterprise-users&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://lordslibrary.parliament.uk/copyright-and-artificial-intelligence-impact-on-creative-industries/?ref=breakthroughpursuit.com#:~:text=The%20government%E2%80%99s%20proposal%20to%20create,15" rel="noopener noreferrer"&gt;[30]&lt;/a&gt; Copyright and artificial intelligence: Impact on creative industries - House of Lords Library&lt;/p&gt;

&lt;p&gt;&lt;a href="https://lordslibrary.parliament.uk/copyright-and-artificial-intelligence-impact-on-creative-industries/?ref=breakthroughpursuit.com" rel="noopener noreferrer"&gt;https://lordslibrary.parliament.uk/copyright-and-artificial-intelligence-impact-on-creative-industries/&lt;/a&gt;&lt;/p&gt;

</description>
      <category>aitrainingdataowners</category>
      <category>copyrightgenerativea</category>
      <category>litigationpolicybatt</category>
      <category>creativeindustriesai</category>
    </item>
    <item>
      <title>AI and the Digital Divide: Ensuring Every Child Benefits, Not Just the Privileged Few</title>
      <dc:creator>Breakthrough Pursuit</dc:creator>
      <pubDate>Wed, 24 Sep 2025 16:53:00 +0000</pubDate>
      <link>https://forem.com/breakthroughpursuit/ai-and-the-digital-divide-ensuring-every-child-benefits-not-just-the-privileged-few-5e7i</link>
      <guid>https://forem.com/breakthroughpursuit/ai-and-the-digital-divide-ensuring-every-child-benefits-not-just-the-privileged-few-5e7i</guid>
      <description>&lt;h2&gt;
  
  
  Executive Summary
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2wgyswm41e41au591def.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2wgyswm41e41au591def.png" alt="AI and the Digital Divide: Ensuring Every Child Benefits, Not Just the Privileged Few" width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The accelerating adoption of artificial intelligence (AI) in schools and homes promises to transform teaching and learning. Yet it also risks amplifying existing inequalities if access, skills and protections are not shared fairly. &lt;strong&gt;Nearly half of UK households with children (45 %) fall below the minimum digital living standard&lt;/strong&gt;&lt;a href="https://www.theguardian.com/technology/2024/mar/17/half-uk-families-excluded-modern-digital-society-study?ref=breakthroughpursuit.com#:~:text=Almost%20half%20of%20UK%20families,an%20%E2%80%9Camplifier%20of%20other%20exclusions%E2%80%9D" rel="noopener noreferrer"&gt;[1]&lt;/a&gt;, meaning they lack adequate devices, connectivity or skills. Globally, &lt;strong&gt;two‑thirds of school‑age children—around 1.3 billion—have no internet connection at home&lt;/strong&gt; &lt;a href="https://www.unicef.org/press-releases/two-thirds-worlds-school-age-children-have-no-internet-access-home-new-unicef-itu?ref=breakthroughpursuit.com#:~:text=NEW%20YORK%2FGENEVA%2C%C2%A01%20December%C2%A02020%C2%A0%E2%80%93%C2%A0Two%20thirds%C2%A0of%20the,ITU" rel="noopener noreferrer"&gt;[2]&lt;/a&gt;. These gaps mean pupils from disadvantaged backgrounds fall behind just as AI is becoming embedded in homework, assessments and personalised tutoring tools. The moral and educational duty of trusts and policy‑makers is clear: ensure that every child, not only the privileged few, can benefit from AI.&lt;/p&gt;

&lt;p&gt;This article frames equity as a systemic and ethical imperative. It synthesises research from UNESCO, OECD, UNICEF, Ofcom, the Education Endowment Foundation (EEF) and the Good Things Foundation, alongside UK policy developments such as the Online Safety Act and the Department for Education’s (DfE) generative‑AI guidance. We propose a &lt;strong&gt;five‑part Equity‑by‑Design framework&lt;/strong&gt; to close the divide—Access, Ability, Assurance, Application and Accountability—and offer concrete strategies, metrics and case studies. Our goal is to equip multi‑academy trusts, school leaders and policy‑makers with a blueprint to ensure AI strengthens, rather than erodes, educational equity.&lt;/p&gt;

&lt;h2&gt;
  
  
  1. The New Digital Divide: From Devices to Dignity
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1.1 A Global Perspective
&lt;/h3&gt;

&lt;p&gt;UNESCO’s first global guidance on generative AI in education warns that AI systems can exacerbate inequalities unless they are designed with inclusivity at their core. The guidance calls on governments to &lt;strong&gt;ensure universal internet access, eliminate bias in AI systems, monitor and validate AI outputs, build teacher capacity and promote plural opinions&lt;/strong&gt; &lt;a href="https://www.weforum.org/stories/2023/09/generative-ai-education-unesco/?ref=breakthroughpursuit.com#:~:text=What%20are%20UNESCO%E2%80%99s%20guidelines%20for,AI%20in%20education" rel="noopener noreferrer"&gt;[3]&lt;/a&gt;. These recommendations recognise that technology can harm as well as help. Evidence from the OECD supports this caution: excessive use of devices for leisure undermines attention and learning, with students distracted by peers’ phones scoring significantly lower in mathematics&lt;a href="https://www.oecd.org/content/dam/oecd/en/publications/reports/2024/05/students-digital-devices-and-success_621829ff/9e4c0624-en.pdf?ref=breakthroughpursuit.com" rel="noopener noreferrer"&gt;[4]&lt;/a&gt;. These insights underscore that simply placing AI in classrooms without addressing access, pedagogy and safeguards can deepen rather than reduce the divide.&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;digital canyon&lt;/strong&gt; is widest in low‑income countries. UNICEF and the International Telecommunication Union (ITU) report that &lt;strong&gt;two‑thirds of school‑age children—1.3 billion—lack an internet connection at home&lt;/strong&gt; &lt;a href="https://www.unicef.org/press-releases/two-thirds-worlds-school-age-children-have-no-internet-access-home-new-unicef-itu?ref=breakthroughpursuit.com#:~:text=NEW%20YORK%2FGENEVA%2C%C2%A01%20December%C2%A02020%C2%A0%E2%80%93%C2%A0Two%20thirds%C2%A0of%20the,ITU" rel="noopener noreferrer"&gt;[2]&lt;/a&gt;. The digital divide mirrors economic divides: only &lt;strong&gt;16 % of children from poor households have home internet&lt;/strong&gt;, compared with &lt;strong&gt;58 % in rich households&lt;/strong&gt; &lt;a href="https://www.unicef.org/press-releases/two-thirds-worlds-school-age-children-have-no-internet-access-home-new-unicef-itu?ref=breakthroughpursuit.com#:~:text=NEW%20YORK%2FGENEVA%2C%C2%A01%20December%C2%A02020%C2%A0%E2%80%93%C2%A0Two%20thirds%C2%A0of%20the,ITU" rel="noopener noreferrer"&gt;[2]&lt;/a&gt;. Geographic disparities are stark; in sub‑Saharan Africa, &lt;strong&gt;95 % of children remain unconnected&lt;/strong&gt; &lt;a href="https://www.unicef.org/press-releases/two-thirds-worlds-school-age-children-have-no-internet-access-home-new-unicef-itu?ref=breakthroughpursuit.com#:~:text=NEW%20YORK%2FGENEVA%2C%C2%A01%20December%C2%A02020%C2%A0%E2%80%93%C2%A0Two%20thirds%C2%A0of%20the,ITU" rel="noopener noreferrer"&gt;[2]&lt;/a&gt;. Lack of connectivity isolates children during school closures and prevents them from competing in a digital economy&lt;a href="https://www.unicef.org/press-releases/two-thirds-worlds-school-age-children-have-no-internet-access-home-new-unicef-itu?ref=breakthroughpursuit.com#:~:text=NEW%20YORK%2FGENEVA%2C%C2%A01%20December%C2%A02020%C2%A0%E2%80%93%C2%A0Two%20thirds%C2%A0of%20the,ITU" rel="noopener noreferrer"&gt;[2]&lt;/a&gt;.
Addressing these inequalities requires infrastructure investment and innovative financing, such as ITU/UNICEF’s &lt;strong&gt;Giga initiative&lt;/strong&gt;, which has mapped more than &lt;strong&gt;800 000 schools&lt;/strong&gt; and is developing business models to connect them&lt;a href="https://www.unicef.org/press-releases/two-thirds-worlds-school-age-children-have-no-internet-access-home-new-unicef-itu?ref=breakthroughpursuit.com#:~:text=Last%20year%2C%20UNICEF%20and%20ITU,deploy%20digital%20learning%20solutions%20and" rel="noopener noreferrer"&gt;[5]&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  1.2 The UK Picture
&lt;/h3&gt;

&lt;p&gt;Britain is often described as a digital nation, yet its divide is glaring. &lt;strong&gt;Thirty‑four per cent of parents with school‑age children say their child lacks continuous access to an appropriate device at home for online schoolwork&lt;/strong&gt;, and &lt;strong&gt;13 % cannot resolve the problem&lt;/strong&gt; &lt;a href="https://www.ofcom.org.uk/siteassets/resources/documents/research-and-data/media-literacy-research/children/children-media-use-and-attitudes-2024/childrens-media-literacy-report-2024.pdf?ref=breakthroughpursuit.com#:~:text=Online%20access%20and%20use%20%E2%80%A2,at%20home%20was%20not%20possible" rel="noopener noreferrer"&gt;[6]&lt;/a&gt;. A University of Liverpool, Loughborough University and Good Things Foundation study found that &lt;strong&gt;45 % of households with children fall below the minimum digital living standard (MDLS)&lt;/strong&gt;&lt;a href="https://www.theguardian.com/technology/2024/mar/17/half-uk-families-excluded-modern-digital-society-study?ref=breakthroughpursuit.com#:~:text=Almost%20half%20of%20UK%20families,an%20%E2%80%9Camplifier%20of%20other%20exclusions%E2%80%9D" rel="noopener noreferrer"&gt;[1]&lt;/a&gt;. Families from low socio‑economic groups, minority ethnic communities and parents with disabilities are most likely to be excluded&lt;a href="https://www.theguardian.com/technology/2024/mar/17/half-uk-families-excluded-modern-digital-society-study?ref=breakthroughpursuit.com#:~:text=Almost%20half%20of%20UK%20families,an%20%E2%80%9Camplifier%20of%20other%20exclusions%E2%80%9D" rel="noopener noreferrer"&gt;[1]&lt;/a&gt;.
&lt;strong&gt;Nearly one in five households lack necessary equipment or services&lt;/strong&gt; &lt;a href="https://www.theguardian.com/technology/2024/mar/17/half-uk-families-excluded-modern-digital-society-study?ref=breakthroughpursuit.com#:~:text=Almost%20half%20of%20UK%20families,an%20%E2%80%9Camplifier%20of%20other%20exclusions%E2%80%9D" rel="noopener noreferrer"&gt;[1]&lt;/a&gt;, while &lt;strong&gt;38 % of households lack essential online skills&lt;/strong&gt; &lt;a href="https://www.theguardian.com/technology/2024/mar/17/half-uk-families-excluded-modern-digital-society-study?ref=breakthroughpursuit.com#:~:text=Almost%20half%20of%20UK%20families,an%20%E2%80%9Camplifier%20of%20other%20exclusions%E2%80%9D" rel="noopener noreferrer"&gt;[1]&lt;/a&gt;. Digital deprivation amplifies other exclusions: children cannot access homework portals, parents cannot apply for benefits online and families miss out on social tariffs.&lt;/p&gt;

&lt;p&gt;The Good Things Foundation’s &lt;strong&gt;Digital Nation 2025&lt;/strong&gt; offers a recent snapshot. It reports that &lt;strong&gt;3.7 million families&lt;/strong&gt; are below the minimum digital living standard and &lt;strong&gt;7.9 million adults&lt;/strong&gt; lack basic digital skills&lt;a href="https://www.goodthingsfoundation.org/policy-and-research/research-and-evidence/research-2024/digital-nation?ref=breakthroughpursuit.com#:~:text=These%20statistics%20are%20presented%20in,horizon%20of%20the%20left%20bank" rel="noopener noreferrer"&gt;[7]&lt;/a&gt;. &lt;strong&gt;1.9 million households&lt;/strong&gt; struggle to afford mobile contracts and &lt;strong&gt;1.6 million adults&lt;/strong&gt; have no smartphone, tablet or laptop&lt;a href="https://www.goodthingsfoundation.org/policy-and-research/research-and-evidence/research-2024/digital-nation?ref=breakthroughpursuit.com#:~:text=,a%20smartphone%2C%20tablet%20or%20laptop" rel="noopener noreferrer"&gt;[8]&lt;/a&gt;. For those offline, &lt;strong&gt;33 % find it hard to use council services&lt;/strong&gt; and &lt;strong&gt;29 % of older people feel left behind by services moving online&lt;/strong&gt; &lt;a href="https://www.goodthingsfoundation.org/policy-and-research/research-and-evidence/research-2024/digital-nation?ref=breakthroughpursuit.com#:~:text=%2A%2033,behind%20by%20services%20moving%20online" rel="noopener noreferrer"&gt;[9]&lt;/a&gt;. Only &lt;strong&gt;10 % of eligible households have taken up social tariffs&lt;/strong&gt; &lt;a href="https://www.goodthingsfoundation.org/policy-and-research/research-and-evidence/research-2024/digital-nation?ref=breakthroughpursuit.com#:~:text=%2A%2030,signed%20up%20for%20social%20tariff" rel="noopener noreferrer"&gt;[10]&lt;/a&gt;. 
Good Things also reveals that &lt;strong&gt;7000+ community access points&lt;/strong&gt; (digital inclusion hubs) have distributed &lt;strong&gt;64 000 devices&lt;/strong&gt; and saved carbon equivalent to &lt;strong&gt;537 000 trees&lt;/strong&gt; &lt;a href="https://www.goodthingsfoundation.org/policy-and-research/research-and-evidence/research-2024/digital-nation?ref=breakthroughpursuit.com#:~:text=The%20following%20statistics%20are%20on,of%20people%2C%20devices%20and%20buildings" rel="noopener noreferrer"&gt;[11]&lt;/a&gt;. These statistics highlight both the scale of exclusion and the potential of community action.&lt;/p&gt;

&lt;h3&gt;
  
  
  1.3 Affordability &amp;amp; Skills
&lt;/h3&gt;

&lt;p&gt;Affordability is a major barrier. Ofcom’s 2024–25 communications affordability tracker found that &lt;strong&gt;26 % of UK households—around six million—struggled to afford communication services&lt;/strong&gt;, with &lt;strong&gt;9 % of mobile customers&lt;/strong&gt; and &lt;strong&gt;8 % of broadband subscribers&lt;/strong&gt; unable to pay&lt;a href="https://www.ofcom.org.uk/phones-and-broadband/saving-money/affordability-tracker?ref=breakthroughpursuit.com#:~:text=Around%20a%20quarter%20of%20UK,communications%20services%20in%20May%202025" rel="noopener noreferrer"&gt;[12]&lt;/a&gt;. &lt;strong&gt;Only one‑third of eligible decision‑makers were aware of social tariffs&lt;/strong&gt; &lt;a href="https://www.ofcom.org.uk/phones-and-broadband/saving-money/affordability-tracker?ref=breakthroughpursuit.com#:~:text=Around%20a%20quarter%20of%20UK,communications%20services%20in%20May%202025" rel="noopener noreferrer"&gt;[12]&lt;/a&gt;. This low awareness suggests that signposting through schools and trusts could dramatically increase uptake. Meanwhile, the Lloyds Consumer Digital Index 2023 reports that &lt;strong&gt;16 % of adults (about 8.5 million people) lack foundation‑level digital skills&lt;/strong&gt;; among those without these skills, &lt;strong&gt;59 % do not own a device and 62 % lack home internet&lt;/strong&gt; &lt;a href="https://www.ipsos.com/sites/default/files/ct/publication/documents/2023-11/lloyds-consumer-digital-index-2023-report.pdf?ref=breakthroughpursuit.com#:~:text=UK%20Consumer%20Digital%20Index%202023,u" rel="noopener noreferrer"&gt;[13]&lt;/a&gt;. Since parents and teachers provide much of children’s digital guidance, gaps in adult skills hinder children’s progress.&lt;/p&gt;

&lt;h3&gt;
  
  
  1.4 AI Adoption and Emerging Risks
&lt;/h3&gt;

&lt;p&gt;Generative AI is penetrating homes and classrooms at unprecedented speed. Internet Matters’ 2025 study found that &lt;strong&gt;44 % of children actively engage with generative‑AI tools&lt;/strong&gt; and &lt;strong&gt;54 % of child AI users employ them for homework or schoolwork&lt;/strong&gt; &lt;a href="https://www.internetmatters.org/hub/research/generative-ai-in-education-report/?ref=breakthroughpursuit.com#:~:text=Generative%20AI%20in%20education%3A%20Children%E2%80%99s,and%20parents%E2%80%99%20views" rel="noopener noreferrer"&gt;[14]&lt;/a&gt;. Yet &lt;strong&gt;60 % of parents said their child’s school had not informed them about plans to use generative AI&lt;/strong&gt; , and the same proportion of schools had not spoken to pupils about AI&lt;a href="https://www.internetmatters.org/hub/research/generative-ai-in-education-report/?ref=breakthroughpursuit.com#:~:text=Parents%20who%20say%20their%20child%E2%80%99s,AI%20tools%20to%20teach%20students" rel="noopener noreferrer"&gt;[15]&lt;/a&gt;. Vulnerable children are particularly at risk; &lt;strong&gt;41 % of vulnerable children used ChatGPT to complete homework&lt;/strong&gt; &lt;a href="https://www.internetmatters.org/hub/research/generative-ai-in-education-report/?ref=breakthroughpursuit.com#:~:text=41" rel="noopener noreferrer"&gt;[16]&lt;/a&gt;. The research advises government to ensure &lt;strong&gt;digital inclusion and equitable access&lt;/strong&gt; as part of AI guidance&lt;a href="https://www.internetmatters.org/hub/research/generative-ai-in-education-report/?ref=breakthroughpursuit.com#:~:text=Government" rel="noopener noreferrer"&gt;[17]&lt;/a&gt;, and warns that children on free school meals have less access to technology and data&lt;a href="https://www.internetmatters.org/hub/research/generative-ai-in-education-report/?ref=breakthroughpursuit.com#:~:text=Equitable%20access%20to%20AI%20in,education" rel="noopener noreferrer"&gt;[18]&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Ofcom’s &lt;strong&gt;2024 media use report&lt;/strong&gt; (not reproduced here for brevity) echoes this picture, finding that nearly half of children use AI tools, often with little oversight. Without deliberate strategies, AI will become another domain where privileged students gain support while others fall further behind.&lt;/p&gt;

&lt;h2&gt;
  
  
  2. Moral and Policy Duty of Trusts
&lt;/h2&gt;

&lt;h3&gt;
  
  
  2.1 Equity as a Moral Imperative
&lt;/h3&gt;

&lt;p&gt;Education trusts and governing bodies have a legal duty to provide equal opportunities. In the context of AI, this duty extends beyond physical access to safeguarding, curriculum design and long‑term outcomes. UNESCO and UNICEF emphasise that AI must respect human rights, promote inclusion and protect children’s dignity&lt;a href="https://www.weforum.org/stories/2023/09/generative-ai-education-unesco/?ref=breakthroughpursuit.com#:~:text=What%20are%20UNESCO%E2%80%99s%20guidelines%20for,AI%20in%20education" rel="noopener noreferrer"&gt;[3]&lt;/a&gt;&lt;a href="https://www.unicef.org/press-releases/two-thirds-worlds-school-age-children-have-no-internet-access-home-new-unicef-itu?ref=breakthroughpursuit.com#:~:text=NEW%20YORK%2FGENEVA%2C%C2%A01%20December%C2%A02020%C2%A0%E2%80%93%C2%A0Two%20thirds%C2%A0of%20the,ITU" rel="noopener noreferrer"&gt;[2]&lt;/a&gt;. They urge policymakers to address &lt;strong&gt;affordability, safety and skills&lt;/strong&gt; alongside connectivity&lt;a href="https://www.unicef.org/press-releases/two-thirds-worlds-school-age-children-have-no-internet-access-home-new-unicef-itu?ref=breakthroughpursuit.com#:~:text=NEW%20YORK%2FGENEVA%2C%C2%A01%20December%C2%A02020%C2%A0%E2%80%93%C2%A0Two%20thirds%C2%A0of%20the,ITU" rel="noopener noreferrer"&gt;[2]&lt;/a&gt;. Without these measures, AI will reinforce the digital divide.&lt;/p&gt;

&lt;h3&gt;
  
  
  2.2 Regulatory Landscape: Online Safety Act and Ofcom Codes
&lt;/h3&gt;

&lt;p&gt;The UK’s &lt;strong&gt;Online Safety Act (2023)&lt;/strong&gt; imposes a statutory duty of care on online platforms to protect children. Ofcom’s &lt;strong&gt;Protection of Children Codes of Practice&lt;/strong&gt;, finalised in April 2025, translate this duty into concrete measures. Drawing on consultations with &lt;strong&gt;27 000 children and 13 000 parents&lt;/strong&gt;, the codes demand a &lt;strong&gt;safety‑first approach&lt;/strong&gt;&lt;a href="https://www.ofcom.org.uk/online-safety/protecting-children/new-rules-for-a-safer-generation-of-children-online?ref=breakthroughpursuit.com#:~:text=In%20designing%20the%20Codes%20of,1" rel="noopener noreferrer"&gt;[19]&lt;/a&gt;. Key measures include:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Safer feeds:&lt;/strong&gt; Services using recommendation algorithms must filter harmful content so that children’s feeds are not seeded with suicide, self‑harm, eating disorders, pornography or extremist material&lt;a href="https://www.ofcom.org.uk/online-safety/protecting-children/new-rules-for-a-safer-generation-of-children-online?ref=breakthroughpursuit.com#:~:text=The%20steps%20include%20preventing%20minors,online%20bullying%20and%20dangerous%20challenges" rel="noopener noreferrer"&gt;[20]&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Effective age checks:&lt;/strong&gt; High‑risk services must implement robust age assurance to prevent children from accessing inappropriate content&lt;a href="https://www.ofcom.org.uk/online-safety/protecting-children/new-rules-for-a-safer-generation-of-children-online?ref=breakthroughpursuit.com#:~:text=,assume%20younger%20children%20are%20on" rel="noopener noreferrer"&gt;[21]&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Fast action:&lt;/strong&gt; Platforms must quickly review and remove harmful content and have named accountability structures&lt;a href="https://www.ofcom.org.uk/online-safety/protecting-children/new-rules-for-a-safer-generation-of-children-online?ref=breakthroughpursuit.com#:~:text=,management%20of%20risk%20to%20children" rel="noopener noreferrer"&gt;[22]&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Child control &amp;amp; support:&lt;/strong&gt; Children should have tools to block, mute or decline group chats and receive support after encountering harmful content&lt;a href="https://www.ofcom.org.uk/online-safety/protecting-children/new-rules-for-a-safer-generation-of-children-online?ref=breakthroughpursuit.com#:~:text=,so%20children%20can%20understand%20them" rel="noopener noreferrer"&gt;[23]&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Strong governance:&lt;/strong&gt; Services must appoint a named person responsible for children’s safety and periodically review risks&lt;a href="https://www.ofcom.org.uk/online-safety/protecting-children/new-rules-for-a-safer-generation-of-children-online?ref=breakthroughpursuit.com#:~:text=,management%20of%20risk%20to%20children" rel="noopener noreferrer"&gt;[24]&lt;/a&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;These codes complement existing requirements for pornographic sites and create a new era of child‑safety regulation&lt;a href="https://www.ofcom.org.uk/online-safety/protecting-children/new-rules-for-a-safer-generation-of-children-online?ref=breakthroughpursuit.com#:~:text=These%20measures%20build%20on%20the,52%20from%20encountering%20online%20pornography" rel="noopener noreferrer"&gt;[25]&lt;/a&gt;. For schools and trusts, they signal that AI platforms integrated into learning must meet safety standards. They also underline the importance of robust filtering and monitoring systems in schools.&lt;/p&gt;

&lt;h3&gt;
  
  
  2.3 Government Support and SEND Pilot
&lt;/h3&gt;

&lt;p&gt;The DfE recognises AI’s potential to support learning for all pupils. In its 2025 policy paper on generative AI, it argues that &lt;strong&gt;safe AI adoption, accompanied by the right infrastructure, can help every child achieve regardless of background&lt;/strong&gt; &lt;a href="https://www.gov.uk/government/publications/generative-artificial-intelligence-in-education/generative-artificial-intelligence-ai-in-education?ref=breakthroughpursuit.com#:~:text=If%20used%20safely%2C%20effectively%20and,skills%20they%20need%20for%20life" rel="noopener noreferrer"&gt;[26]&lt;/a&gt;. To address barriers, the government launched a &lt;strong&gt;£1.7 million pilot of assistive‑technology lending libraries in June 2025&lt;/strong&gt; , aimed at special educational needs and disabilities (SEND) pupils. &lt;strong&gt;Up to 4 000 schools&lt;/strong&gt; will be able to borrow tools such as reading pens and dictation devices to support dyslexia, autism and ADHD&lt;a href="https://www.gov.uk/government/news/thousands-of-children-with-send-to-benefit-from-assistive-tech?ref=breakthroughpursuit.com#:~:text=Thousands%20of%20children%20with%20special,in%20up%20to%204%2C000%20schools" rel="noopener noreferrer"&gt;[27]&lt;/a&gt;. The pilot uses a &lt;strong&gt;“try before you buy”&lt;/strong&gt; model, enabling schools to test devices before investing&lt;a href="https://www.gov.uk/government/news/thousands-of-children-with-send-to-benefit-from-assistive-tech?ref=breakthroughpursuit.com#:~:text=The%20lending%20libraries%20model%20adopts,the%20risk%20of%20wasted%20expenditure" rel="noopener noreferrer"&gt;[28]&lt;/a&gt;. 
Early results are promising: &lt;strong&gt;86 % of staff reported improved behaviour&lt;/strong&gt; and &lt;strong&gt;89 % saw increased confidence among SEND pupils&lt;/strong&gt; after introducing assistive technology&lt;a href="https://www.gov.uk/government/news/thousands-of-children-with-send-to-benefit-from-assistive-tech?ref=breakthroughpursuit.com#:~:text=The%20impact%20is%20clear%20among,confidence%20amongst%20pupils%20with%20SEND" rel="noopener noreferrer"&gt;[29]&lt;/a&gt;. This pilot illustrates how shared device libraries and targeted interventions can overcome access barriers and support inclusion.&lt;/p&gt;

&lt;h3&gt;
  
  
  2.4 Pedagogy First: Lessons from the EEF
&lt;/h3&gt;

&lt;p&gt;Technology alone does not improve learning; pedagogy does. The EEF’s &lt;strong&gt;Using Digital Technology to Improve Learning&lt;/strong&gt; guidance stresses that schools must &lt;strong&gt;consider how technology will improve teaching and learning before introducing it&lt;/strong&gt; &lt;a href="https://d2tic4wvo1iusb.cloudfront.net/production/eef-guidance-reports/digital/EEF_Digital_Technology_Guidance_Report.pdf?ref=breakthroughpursuit.com#:~:text=overarching%20recommendation%20in%20this%20report,better%20chance%20of%20doing%20so" rel="noopener noreferrer"&gt;[30]&lt;/a&gt;. The report notes that &lt;strong&gt;buying a tablet for every pupil is unlikely to boost attainment&lt;/strong&gt; unless devices are used purposefully to increase practice, feedback and precise assessment&lt;a href="https://d2tic4wvo1iusb.cloudfront.net/production/eef-guidance-reports/digital/EEF_Digital_Technology_Guidance_Report.pdf?ref=breakthroughpursuit.com#:~:text=this%20means%20buying%20a%20tablet,better%20chance%20of%20doing%20so" rel="noopener noreferrer"&gt;[31]&lt;/a&gt;. In other words, digital tools succeed when they are integrated into evidence‑based teaching strategies. The EEF identifies four dimensions where technology can have impact: (1) improving the quality of explanations and modelling; (2) enhancing the quantity and quality of pupil practice; (3) enabling better assessment and feedback; and (4) facilitating data‑driven decision‑making&lt;a href="https://d2tic4wvo1iusb.cloudfront.net/production/eef-guidance-reports/digital/EEF_Digital_Technology_Guidance_Report.pdf?ref=breakthroughpursuit.com#:~:text=computing%20or%20coding%2C%20or%20on,Recommendation%204" rel="noopener noreferrer"&gt;[32]&lt;/a&gt;. 
Importantly, the EEF emphasises that &lt;strong&gt;implementation and teacher training are crucial&lt;/strong&gt; &lt;a href="https://d2tic4wvo1iusb.cloudfront.net/production/eef-guidance-reports/digital/EEF_Digital_Technology_Guidance_Report.pdf?ref=breakthroughpursuit.com#:~:text=reports%2C%20good%20implementation%20is%20crucial,busy%20reality%20of%20their%20classroom" rel="noopener noreferrer"&gt;[33]&lt;/a&gt;. This aligns with OECD findings that unsupervised device use can harm learning&lt;a href="https://www.oecd.org/content/dam/oecd/en/publications/reports/2024/05/students-digital-devices-and-success_621829ff/9e4c0624-en.pdf?ref=breakthroughpursuit.com" rel="noopener noreferrer"&gt;[34]&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  3. Equity‑by‑Design Framework
&lt;/h2&gt;

&lt;p&gt;To close the digital divide while embracing AI, we propose a &lt;strong&gt;five‑part Equity‑by‑Design (EBD‑AI) model&lt;/strong&gt;. Adapted from global guidance and the research pack, this framework helps trusts systematically address access, skills, safety, pedagogy and accountability.&lt;/p&gt;

&lt;h3&gt;
  
  
  3.1 Access: Devices, Data and Connectivity
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Devices and data&lt;/strong&gt; are the entry ticket to AI. Across the UK, &lt;strong&gt;0.6 million young people lack home internet or a suitable device&lt;/strong&gt; (Good Things Foundation figure reported in multiple local authority reports). To address this, trusts should:&lt;/p&gt;

&lt;p&gt;·      &lt;strong&gt;Establish device libraries.&lt;/strong&gt; Draw inspiration from the SEND assistive‑technology pilot and the &lt;strong&gt;Rochdale Digitech Library&lt;/strong&gt; , which lets residents borrow laptops and tablets for up to nine weeks and provides free SIM cards via the National Databank&lt;a href="https://www.rochdale.gov.uk/libraries/digitech-digital-tech-library?ref=breakthroughpursuit.com#:~:text=The%20Digitech%20Library%20is%20a,tackle%20digital%20exclusion%20and%20poverty" rel="noopener noreferrer"&gt;[35]&lt;/a&gt;. Trusts can create similar schemes, prioritising pupil‑premium and SEND learners. A “try before you buy” model reduces waste and ensures devices meet pupils’ needs&lt;a href="https://www.gov.uk/government/news/thousands-of-children-with-send-to-benefit-from-assistive-tech?ref=breakthroughpursuit.com#:~:text=The%20lending%20libraries%20model%20adopts,the%20risk%20of%20wasted%20expenditure" rel="noopener noreferrer"&gt;[28]&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;·      &lt;strong&gt;Partner with refurbishers and councils.&lt;/strong&gt; Good Things Foundation’s digital inclusion network has collected &lt;strong&gt;64 000 devices&lt;/strong&gt; for redistribution&lt;a href="https://www.goodthingsfoundation.org/policy-and-research/research-and-evidence/research-2024/digital-nation?ref=breakthroughpursuit.com#:~:text=The%20following%20statistics%20are%20on,of%20people%2C%20devices%20and%20buildings" rel="noopener noreferrer"&gt;[11]&lt;/a&gt;. Schools can work with local authorities and corporate donors to refurbish and distribute laptops or tablets, reducing e‑waste and bridging gaps.&lt;/p&gt;

&lt;p&gt;·      &lt;strong&gt;Tackle data poverty.&lt;/strong&gt; The &lt;strong&gt;National Databank&lt;/strong&gt; offers &lt;strong&gt;free mobile SIM cards and data packages&lt;/strong&gt; for people without internet access. Since its launch, over &lt;strong&gt;250 000 data packages&lt;/strong&gt; have been distributed and &lt;strong&gt;89 % of recipients feel more digitally able&lt;/strong&gt; &lt;a href="https://www.goodthingsfoundation.org/our-services/national-databank?ref=breakthroughpursuit.com#:~:text=There%E2%80%99s%20now%20over%203%2C500%20Digital,Hubs%20offering%20the%20National%20Databank" rel="noopener noreferrer"&gt;[36]&lt;/a&gt;. Trusts and community hubs should join the Databank and proactively enrol eligible families. Additionally, schools can promote social tariffs; only &lt;strong&gt;10 % of eligible households&lt;/strong&gt; currently use them&lt;a href="https://www.goodthingsfoundation.org/policy-and-research/research-and-evidence/research-2024/digital-nation?ref=breakthroughpursuit.com#:~:text=%2A%2030,signed%20up%20for%20social%20tariff" rel="noopener noreferrer"&gt;[10]&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;·      &lt;strong&gt;Set connectivity standards.&lt;/strong&gt; Following ITU/UNICEF’s &lt;strong&gt;Meaningful School Connectivity&lt;/strong&gt; metrics, trusts should aim for at least &lt;strong&gt;50 Mbps per 30 concurrent learners&lt;/strong&gt; and latency below 50 ms. Where broadband is insufficient, schools can provide vouchers for mobile data or use satellite broadband pilots.&lt;/p&gt;
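&lt;p&gt;As a quick sanity check, the connectivity standard above reduces to simple arithmetic. The Python sketch below shows how a trust might test a site against the 50 Mbps per 30 concurrent learners and sub‑50 ms latency targets; the school figures used are invented purely for illustration:&lt;/p&gt;

```python
# Check a site against the meaningful-connectivity targets described above:
# 50 Mbps per 30 concurrent learners, and latency under 50 ms.
# School figures below are hypothetical, for illustration only.

def required_mbps(concurrent_learners: int, mbps_per_30: float = 50.0) -> float:
    """Bandwidth needed to meet the per-30-learner target."""
    return mbps_per_30 * concurrent_learners / 30

def meets_target(measured_mbps: float, concurrent_learners: int,
                 latency_ms: float) -> bool:
    """True only if both the bandwidth and latency targets are met."""
    return (measured_mbps >= required_mbps(concurrent_learners)
            and latency_ms < 50)

# A hypothetical secondary school with 180 pupils online at peak:
print(required_mbps(180))           # 300.0 Mbps needed
print(meets_target(250, 180, 30))   # False: bandwidth shortfall
print(meets_target(350, 180, 30))   # True
```

&lt;p&gt;On these assumed figures, a school with 180 pupils online at peak would need roughly 300 Mbps of usable bandwidth to stay within the target.&lt;/p&gt;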

&lt;h3&gt;
  
  
  3.2 Ability: AI Literacy and Teacher Capacity
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Teachers&lt;/strong&gt;, &lt;strong&gt;parents&lt;/strong&gt; and &lt;strong&gt;pupils&lt;/strong&gt; need new skills to navigate AI. The Lloyds Consumer Digital Index shows that &lt;strong&gt;16 % of adults lack foundation‑level digital skills&lt;/strong&gt;&lt;a href="https://www.ipsos.com/sites/default/files/ct/publication/documents/2023-11/lloyds-consumer-digital-index-2023-report.pdf?ref=breakthroughpursuit.com#:~:text=UK%20Consumer%20Digital%20Index%202023,u" rel="noopener noreferrer"&gt;[13]&lt;/a&gt;. Without adult expertise, children cannot receive safe guidance. Internet Matters research reveals that while &lt;strong&gt;44 % of children actively use generative AI&lt;/strong&gt;, &lt;strong&gt;60 % of parents and 60 % of schools have not discussed AI use with children&lt;/strong&gt;&lt;a href="https://www.internetmatters.org/hub/research/generative-ai-in-education-report/?ref=breakthroughpursuit.com#:~:text=Parents%20who%20say%20their%20child%E2%80%99s,AI%20tools%20to%20teach%20students" rel="noopener noreferrer"&gt;[15]&lt;/a&gt;. To build ability:&lt;/p&gt;

&lt;p&gt;·      &lt;strong&gt;Provide AI literacy modules&lt;/strong&gt; for pupils, covering how generative models work, prompt engineering, fact‑checking, bias, privacy, mental‑health impacts and ethical use. UNESCO’s guidance emphasises developing AI competencies for learners and avoiding dependency on proprietary systems&lt;a href="https://www.weforum.org/stories/2023/09/generative-ai-education-unesco/?ref=breakthroughpursuit.com#:~:text=What%20are%20UNESCO%E2%80%99s%20guidelines%20for,AI%20in%20education" rel="noopener noreferrer"&gt;[3]&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;·      &lt;strong&gt;Invest in teacher CPD.&lt;/strong&gt; Professional development should include designing AI‑infused lessons aligned with the EEF’s pedagogical recommendations, understanding algorithmic bias, and safeguarding. Teachers should practise with low‑stakes examples before integrating AI into high‑stakes assessments.&lt;/p&gt;

&lt;p&gt;·      &lt;strong&gt;Engage parents.&lt;/strong&gt; Offer workshops using resources from organisations like Internet Matters, which advise parents to talk with children about AI and explore tools together&lt;a href="https://www.internetmatters.org/hub/research/generative-ai-in-education-report/?ref=breakthroughpursuit.com#:~:text=Parents" rel="noopener noreferrer"&gt;[37]&lt;/a&gt;. Parental involvement increases awareness and ensures consistent messaging across school and home.&lt;/p&gt;

&lt;p&gt;·      &lt;strong&gt;Target support to priority groups.&lt;/strong&gt; Vulnerable children on free school meals have less access to AI tools&lt;a href="https://www.internetmatters.org/hub/research/generative-ai-in-education-report/?ref=breakthroughpursuit.com#:~:text=Equitable%20access%20to%20AI%20in,education" rel="noopener noreferrer"&gt;[18]&lt;/a&gt;. Trusts should allocate more devices, data and training to these pupils and monitor usage gaps.&lt;/p&gt;

&lt;h3&gt;
  
  
  3.3 Assurance: Safety, Privacy and Rights
&lt;/h3&gt;

&lt;p&gt;As AI systems become ubiquitous, children’s safety and rights must be central. The &lt;strong&gt;Online Safety Act&lt;/strong&gt; and &lt;strong&gt;Ofcom codes&lt;/strong&gt; require platforms to filter harmful content, implement age checks and provide reporting mechanisms&lt;a href="https://www.ofcom.org.uk/online-safety/protecting-children/new-rules-for-a-safer-generation-of-children-online?ref=breakthroughpursuit.com#:~:text=The%20steps%20include%20preventing%20minors,online%20bullying%20and%20dangerous%20challenges" rel="noopener noreferrer"&gt;[38]&lt;/a&gt;. Trusts must ensure that AI tools used in schools comply with these regulations. Key actions include:&lt;/p&gt;

&lt;p&gt;·      &lt;strong&gt;Implement robust filtering and monitoring.&lt;/strong&gt; Ensure AI chatbots and search functions used by pupils cannot generate harmful or explicit content. Configure recommender algorithms to block hateful or dangerous outputs&lt;a href="https://www.ofcom.org.uk/online-safety/protecting-children/new-rules-for-a-safer-generation-of-children-online?ref=breakthroughpursuit.com#:~:text=,harmful%20content%20from%20children%E2%80%99s%20feeds" rel="noopener noreferrer"&gt;[39]&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;·      &lt;strong&gt;Strengthen data privacy.&lt;/strong&gt; Protect students’ data under GDPR and the Data Protection Act. Limit data collection, anonymise records and demand transparency from AI vendors about data usage.&lt;/p&gt;

&lt;p&gt;·      &lt;strong&gt;Develop risk‑assessment protocols.&lt;/strong&gt; Conduct regular audits of AI tools to assess biases and hallucination risks. Involve safeguarding leads and IT security specialists.&lt;/p&gt;

&lt;p&gt;·      &lt;strong&gt;Align policies with children’s rights.&lt;/strong&gt; UNICEF emphasises that AI must ensure children’s participation, protection and provision rights&lt;a href="https://www.unicef.org/press-releases/two-thirds-worlds-school-age-children-have-no-internet-access-home-new-unicef-itu?ref=breakthroughpursuit.com#:~:text=NEW%20YORK%2FGENEVA%2C%C2%A01%20December%C2%A02020%C2%A0%E2%80%93%C2%A0Two%20thirds%C2%A0of%20the,ITU" rel="noopener noreferrer"&gt;[2]&lt;/a&gt;. Trust policies should include explicit commitments to equity, accessibility and wellbeing.&lt;/p&gt;

&lt;h3&gt;
  
  
  3.4 Application: Pedagogy‑First Use of AI
&lt;/h3&gt;

&lt;p&gt;AI has enormous potential to personalise learning, provide feedback and free teacher time. However, as the EEF warns, &lt;strong&gt;technology must serve pedagogy, not the other way around&lt;/strong&gt; &lt;a href="https://d2tic4wvo1iusb.cloudfront.net/production/eef-guidance-reports/digital/EEF_Digital_Technology_Guidance_Report.pdf?ref=breakthroughpursuit.com#:~:text=overarching%20recommendation%20in%20this%20report,better%20chance%20of%20doing%20so" rel="noopener noreferrer"&gt;[30]&lt;/a&gt;. Trusts should:&lt;/p&gt;

&lt;p&gt;·      &lt;strong&gt;Align AI tools with evidence‑based pedagogy.&lt;/strong&gt; Use generative AI to enhance explanations (e.g., summarising complex concepts), modelling (e.g., scaffolding examples), retrieval practice (e.g., low‑stakes quizzes) and formative assessment&lt;a href="https://d2tic4wvo1iusb.cloudfront.net/production/eef-guidance-reports/digital/EEF_Digital_Technology_Guidance_Report.pdf?ref=breakthroughpursuit.com#:~:text=computing%20or%20coding%2C%20or%20on,Recommendation%204" rel="noopener noreferrer"&gt;[32]&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;·      &lt;strong&gt;Avoid distraction and misuse.&lt;/strong&gt; OECD data show that unsupervised device use leads to distraction and lower performance&lt;a href="https://www.oecd.org/content/dam/oecd/en/publications/reports/2024/05/students-digital-devices-and-success_621829ff/9e4c0624-en.pdf?ref=breakthroughpursuit.com" rel="noopener noreferrer"&gt;[4]&lt;/a&gt;. Schools should set clear rules for device use, integrate AI into structured learning and turn off notifications during lessons.&lt;/p&gt;

&lt;p&gt;·      &lt;strong&gt;Use AI to support differentiation.&lt;/strong&gt; Adaptive tutoring systems can help struggling learners catch up and challenge advanced students. For SEND pupils, tools like reading pens and dictation software, as piloted by the DfE, can transform access to the curriculum&lt;a href="https://www.gov.uk/government/news/thousands-of-children-with-send-to-benefit-from-assistive-tech?ref=breakthroughpursuit.com#:~:text=Thousands%20of%20children%20with%20special,in%20up%20to%204%2C000%20schools" rel="noopener noreferrer"&gt;[27]&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;·      &lt;strong&gt;Collaborate with AI ethically.&lt;/strong&gt; Encourage students to treat AI as a co‑pilot rather than a solution. Teach them to verify outputs, cite sources and understand limitations.&lt;/p&gt;

&lt;h3&gt;
  
  
  3.5 Accountability: KPIs and Public Dashboards
&lt;/h3&gt;

&lt;p&gt;Equity requires transparency and measurement. Trusts should adopt &lt;strong&gt;Key Performance Indicators (KPIs)&lt;/strong&gt; and publish dashboards to track progress. Suggested metrics include:&lt;/p&gt;

&lt;p&gt;·      &lt;strong&gt;Access metrics&lt;/strong&gt;: Device‑to‑pupil ratio; number of devices loaned via libraries; proportion of pupils accessing the National Databank; average bandwidth per learner; social‑tariff uptake rates.&lt;/p&gt;

&lt;p&gt;·      &lt;strong&gt;Ability metrics&lt;/strong&gt;: Percentage of staff completing AI CPD; percentage of pupils completing AI literacy modules; number of parent workshop attendees; skills improvement (measured via digital‑skills assessments).&lt;/p&gt;

&lt;p&gt;·      &lt;strong&gt;Assurance metrics&lt;/strong&gt;: Number of safeguarding incidents related to AI; compliance with Ofcom filtering standards; number of data privacy breaches.&lt;/p&gt;

&lt;p&gt;·      &lt;strong&gt;Application metrics&lt;/strong&gt;: AI tool usage rates by pupil‑premium status; improvement in reading and maths attainment (using EEF’s months‑of‑progress framework); reduction in teacher workload through automation.&lt;/p&gt;

&lt;p&gt;·      &lt;strong&gt;Programme ROI metrics&lt;/strong&gt;: Cost per additional device distributed; cost per gigabyte provided via Databank; improvements in attendance and attainment among participants (based on Digital Poverty Alliance or local evaluations).&lt;/p&gt;

&lt;p&gt;These KPIs can be incorporated into public dashboards to build accountability and demonstrate return on investment. Trusts should aim to conduct randomised evaluations where feasible, following EEF evaluation protocols.&lt;/p&gt;
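&lt;p&gt;To make the dashboard idea concrete, the following minimal Python sketch aggregates a few of the access metrics above from per‑school records. The field names and figures are illustrative assumptions, not a prescribed data schema:&lt;/p&gt;

```python
# Minimal sketch: computing a trust's access KPIs from per-school records.
# Field names and numbers are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class SchoolRecord:
    pupils: int
    devices: int
    devices_on_loan: int
    databank_enrolled: int  # pupils receiving National Databank SIMs

def access_kpis(schools: list) -> dict:
    """Aggregate the suggested access metrics across a trust."""
    total_pupils = sum(s.pupils for s in schools)
    return {
        "device_to_pupil_ratio": sum(s.devices for s in schools) / total_pupils,
        "devices_loaned": sum(s.devices_on_loan for s in schools),
        "databank_uptake_pct": 100 * sum(s.databank_enrolled for s in schools) / total_pupils,
    }

kpis = access_kpis([
    SchoolRecord(pupils=400, devices=120, devices_on_loan=35, databank_enrolled=20),
    SchoolRecord(pupils=600, devices=240, devices_on_loan=50, databank_enrolled=40),
])
print(kpis)  # ratio 0.36, 85 devices loaned, 6.0 % Databank uptake
```

&lt;p&gt;Publishing a handful of such figures each term, broken down by pupil‑premium status, is usually enough to make gaps visible and track them over time.&lt;/p&gt;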

&lt;h2&gt;
  
  
  4. What Works: Case Studies and ROI
&lt;/h2&gt;

&lt;h3&gt;
  
  
  4.1 Device Lending and Libraries
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Rochdale Digitech Library&lt;/strong&gt; provides a model of community‑based lending. Residents can borrow laptops or tablets for up to &lt;strong&gt;nine weeks&lt;/strong&gt; and renew every three weeks&lt;a href="https://www.rochdale.gov.uk/libraries/digitech-digital-tech-library?ref=breakthroughpursuit.com#:~:text=The%20Digitech%20Library%20is%20a,tackle%20digital%20exclusion%20and%20poverty" rel="noopener noreferrer"&gt;[35]&lt;/a&gt;. The library also provides &lt;strong&gt;free SIM cards and data for up to six months&lt;/strong&gt; through the National Databank&lt;a href="https://www.rochdale.gov.uk/libraries/digitech-digital-tech-library?ref=breakthroughpursuit.com#:~:text=The%20Digitech%20Library%20is%20a,tackle%20digital%20exclusion%20and%20poverty" rel="noopener noreferrer"&gt;[35]&lt;/a&gt;. Such programmes reduce digital exclusion and build trust between schools and communities. Impact data from the Digital Poverty Alliance’s Tech4Families programme indicate that providing devices and connectivity improves attendance, homework completion and parental engagement (a detailed evaluation is beyond the scope of this paper).&lt;/p&gt;

&lt;h3&gt;
  
  
  4.2 National Databank and Data Poverty Relief
&lt;/h3&gt;

&lt;p&gt;The &lt;strong&gt;National Databank&lt;/strong&gt; is a flagship digital inclusion scheme. It has distributed &lt;strong&gt;over 250 000 data packages&lt;/strong&gt; to people who cannot afford internet access&lt;a href="https://www.goodthingsfoundation.org/our-services/national-databank?ref=breakthroughpursuit.com#:~:text=There%E2%80%99s%20now%20over%203%2C500%20Digital,Hubs%20offering%20the%20National%20Databank" rel="noopener noreferrer"&gt;[36]&lt;/a&gt;, and &lt;strong&gt;89 % of recipients feel more digitally able or safe&lt;/strong&gt; &lt;a href="https://www.goodthingsfoundation.org/our-services/national-databank?ref=breakthroughpursuit.com#:~:text=There%E2%80%99s%20now%20over%203%2C500%20Digital,Hubs%20offering%20the%20National%20Databank" rel="noopener noreferrer"&gt;[36]&lt;/a&gt;. Trusts can apply to the Databank to secure SIM cards for pupils on free school meals. Partnerships with telecom operators ensure the scheme is scalable.&lt;/p&gt;

&lt;h3&gt;
  
  
  4.3 Assistive‑Technology Lending Pilot
&lt;/h3&gt;

&lt;p&gt;The DfE’s &lt;strong&gt;assistive‑technology pilot&lt;/strong&gt; has shown how targeted interventions yield high impact. Lending libraries will be set up in &lt;strong&gt;up to 32 local authorities&lt;/strong&gt;, allowing &lt;strong&gt;up to 4 000 schools&lt;/strong&gt; to borrow devices such as reading pens and dictation tools&lt;a href="https://www.gov.uk/government/news/thousands-of-children-with-send-to-benefit-from-assistive-tech?ref=breakthroughpursuit.com#:~:text=Thousands%20of%20children%20with%20special,in%20up%20to%204%2C000%20schools" rel="noopener noreferrer"&gt;[27]&lt;/a&gt;. In early trials, &lt;strong&gt;86 % of staff saw improved behaviour&lt;/strong&gt; and &lt;strong&gt;89 % reported increased confidence among SEND pupils&lt;/strong&gt;&lt;a href="https://www.gov.uk/government/news/thousands-of-children-with-send-to-benefit-from-assistive-tech?ref=breakthroughpursuit.com#:~:text=The%20impact%20is%20clear%20among,confidence%20amongst%20pupils%20with%20SEND" rel="noopener noreferrer"&gt;[29]&lt;/a&gt;. These results not only support inclusion but also free teacher time to focus on instruction&lt;a href="https://www.gov.uk/government/news/thousands-of-children-with-send-to-benefit-from-assistive-tech?ref=breakthroughpursuit.com#:~:text=The%20impact%20also%20extends%20to,that%20transforms%20pupils%E2%80%99%20life%20chances" rel="noopener noreferrer"&gt;[40]&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  4.4 Community Hubs and Libraries
&lt;/h3&gt;

&lt;p&gt;Community centres and libraries play an increasing role in digital inclusion. Good Things Foundation’s &lt;strong&gt;National Digital Inclusion Network&lt;/strong&gt; spans &lt;strong&gt;over 3 500 digital inclusion hubs&lt;/strong&gt; &lt;a href="https://www.goodthingsfoundation.org/our-services/national-databank?ref=breakthroughpursuit.com#:~:text=There%E2%80%99s%20now%20over%203%2C500%20Digital,Hubs%20offering%20the%20National%20Databank" rel="noopener noreferrer"&gt;[36]&lt;/a&gt;. These hubs provide device loans, data packages and digital‑skills training, and often host parent workshops. Local examples such as the Rochdale library show how hubs can be integrated into trust strategies.&lt;/p&gt;

&lt;h3&gt;
  
  
  4.5 ROI and Sustainability
&lt;/h3&gt;

&lt;p&gt;Calculating return on investment (ROI) helps justify spending. Suppose a trust invests £100 000 to purchase 200 laptops (£500 each) and uses the National Databank to provide data for 200 pupils (£10 per month). If the programme leads to a &lt;strong&gt;3 percentage‑point increase in attendance&lt;/strong&gt;, &lt;strong&gt;two months’ additional progress in reading&lt;/strong&gt; (valued at roughly £1 000 per pupil in lifetime earnings) and &lt;strong&gt;reduced teacher workload&lt;/strong&gt; (freeing 0.2 FTE per class), the cost per positive outcome is modest. Moreover, refurbished devices and community partnerships can halve hardware costs. While this illustrative example simplifies complex factors, it demonstrates that digital inclusion programmes deliver high social returns.&lt;/p&gt;
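&lt;p&gt;The arithmetic behind this illustrative example can be laid out explicitly. The short Python sketch below works through the figures, adding one assumption of our own (twelve months of data provision at the stated £10 per pupil per month):&lt;/p&gt;

```python
# Working through the article's illustrative ROI example.
# All figures are hypothetical assumptions, not evaluation data.

pupils = 200
device_cost = 500            # £ per laptop
data_cost_monthly = 10       # £ per pupil per month
months = 12                  # assumed duration of data provision

hardware = pupils * device_cost                      # £100,000
connectivity = pupils * data_cost_monthly * months   # £24,000
total_cost = hardware + connectivity                 # £124,000

# Value two months' additional reading progress at ~£1,000 per pupil:
attainment_value = pupils * 1000                     # £200,000
benefit_ratio = attainment_value / total_cost        # ~1.61, before counting
                                                     # attendance and workload gains

# Refurbished devices at roughly half price shift the picture further:
refurb_total = (pupils * device_cost) // 2 + connectivity   # £74,000

print(total_cost, round(benefit_ratio, 2), refurb_total)
```

&lt;p&gt;Even on these simplified assumptions, the valued attainment gain alone exceeds the programme cost, and refurbished hardware lowers the bar further.&lt;/p&gt;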

&lt;h2&gt;
  
  
  5. Roadmap and KPIs for Trusts
&lt;/h2&gt;

&lt;p&gt;Achieving equity is a multi‑year process. Below is a suggested &lt;strong&gt;roadmap&lt;/strong&gt; aligned with the Equity‑by‑Design framework. Trusts should adapt timelines based on resources and context.&lt;/p&gt;

&lt;h3&gt;
  
  
  Stage 1 (Quarter 1–2): Audit and Plan
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Conduct a digital equity audit&lt;/strong&gt;: survey pupils, parents and staff to assess device access, connectivity, digital skills and AI usage.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Map existing resources&lt;/strong&gt;: identify community hubs, local libraries, corporate partners and funding streams (e.g., pupil premium, corporate donations, local authority grants). Engage with the National Databank and refurbishers.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Establish governance&lt;/strong&gt;: appoint a digital equity lead and form a steering group including IT, safeguarding, SEND and curriculum leads.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Set targets&lt;/strong&gt;: define baseline metrics and ambitious but realistic KPIs for each EBD‑AI dimension.&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Stage 2 (Quarter 3–4): Pilot and Build Capacity
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Launch device‑lending pilots&lt;/strong&gt;: start with one or two schools; prioritise pupils without devices and SEND pupils. Integrate evaluation from the outset.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Deliver AI literacy programmes&lt;/strong&gt;: run teacher CPD and pupil modules, using UNESCO guidance and Internet Matters resources.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Implement safety protocols&lt;/strong&gt;: deploy filtering and monitoring tools; ensure AI platforms comply with Ofcom codes. Update policies and consent forms.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Engage parents&lt;/strong&gt;: host workshops on AI safety and digital skills. Provide information about social tariffs and the National Databank.&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Stage 3 (Year 2): Scale and Integrate
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Expand lending schemes&lt;/strong&gt; across the trust, standardising device management and retrieval processes.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Integrate AI into the curriculum&lt;/strong&gt; in line with the EEF’s pedagogical recommendations, focusing on retrieval practice, modelling and feedback.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Enhance connectivity&lt;/strong&gt;: upgrade infrastructure to meet meaningful school connectivity standards and provide remote access solutions for pupils without home broadband.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Public dashboard&lt;/strong&gt;: publish progress against KPIs. Use data to identify persistent gaps and reallocate resources.&lt;/li&gt;
&lt;/ol&gt;
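
&lt;p&gt;The dashboard step can be sketched as a small progress‑against‑KPI computation. The KPI names, baselines and targets below are illustrative placeholders, not drawn from any trust’s data:&lt;/p&gt;

```python
# Minimal sketch of tracking progress against digital-equity KPIs.
# KPI names, baselines, targets and current values are illustrative.

kpis = {
    # kpi: (baseline, target, current)
    "pupils_with_device_pct": (72.0, 95.0, 84.0),
    "attendance_pct":         (91.0, 94.0, 92.5),
    "staff_ai_cpd_pct":       (10.0, 80.0, 45.0),
}

def progress(baseline, target, current):
    """Fraction of the baseline-to-target gap closed so far."""
    if target == baseline:
        return 1.0
    return (current - baseline) / (target - baseline)

for name, (b, t, c) in kpis.items():
    print(f"{name}: {progress(b, t, c):.0%} of gap closed")
```

&lt;p&gt;Expressing each KPI as a fraction of the gap closed, rather than a raw value, makes uneven progress across schools easy to spot and reallocate resources against.&lt;/p&gt;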

&lt;h3&gt;
  
  
  Stage 4 (Year 3 and Beyond): Evaluate and Advocate
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Conduct impact evaluations&lt;/strong&gt;: partner with external researchers or use EEF‑style randomised trials to measure effects on attainment, engagement and wellbeing. Compare outcomes between participants and control groups.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Refine and innovate&lt;/strong&gt;: update AI tools and training to reflect technological advances. Explore emerging models such as open‑source AI or federated learning to reduce vendor lock‑in.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Advocate for systemic change&lt;/strong&gt;: share lessons with policy‑makers; campaign for broadband universal service, device recycling incentives and digital‑skills funding. Collaborate with other trusts and national organisations to influence policy.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  6. Conclusion: AI as a Test of Collective Responsibility
&lt;/h2&gt;

&lt;p&gt;AI is not just a technological innovation; it is a &lt;strong&gt;moral stress test&lt;/strong&gt; for educational systems. It exposes the consequences of neglecting digital inclusion and magnifies existing inequities. As UNESCO, OECD and UNICEF warn, ignoring access, safety and pedagogy risks widening the gap&lt;a href="https://www.weforum.org/stories/2023/09/generative-ai-education-unesco/?ref=breakthroughpursuit.com#:~:text=What%20are%20UNESCO%E2%80%99s%20guidelines%20for,AI%20in%20education" rel="noopener noreferrer"&gt;[3]&lt;/a&gt;&lt;a href="https://www.oecd.org/content/dam/oecd/en/publications/reports/2024/05/students-digital-devices-and-success_621829ff/9e4c0624-en.pdf?ref=breakthroughpursuit.com" rel="noopener noreferrer"&gt;[4]&lt;/a&gt;&lt;a href="https://www.unicef.org/press-releases/two-thirds-worlds-school-age-children-have-no-internet-access-home-new-unicef-itu?ref=breakthroughpursuit.com#:~:text=NEW%20YORK%2FGENEVA%2C%C2%A01%20December%C2%A02020%C2%A0%E2%80%93%C2%A0Two%20thirds%C2%A0of%20the,ITU" rel="noopener noreferrer"&gt;[2]&lt;/a&gt;. However, when designed for equity and implemented thoughtfully, AI can &lt;strong&gt;amplify human agency&lt;/strong&gt;, personalise learning and enable all children to thrive.&lt;/p&gt;

&lt;p&gt;Trusts have the power—and duty—to ensure that AI benefits every child, not just those born into privilege. By adopting an Equity‑by‑Design model, investing in devices and data, building AI literacy, safeguarding children and measuring impact, we can make AI a tool for social mobility. &lt;strong&gt;Nearly half of UK families with children currently fall below the minimum digital living standard&lt;/strong&gt; &lt;a href="https://www.theguardian.com/technology/2024/mar/17/half-uk-families-excluded-modern-digital-society-study?ref=breakthroughpursuit.com#:~:text=Almost%20half%20of%20UK%20families,an%20%E2%80%9Camplifier%20of%20other%20exclusions%E2%80%9D" rel="noopener noreferrer"&gt;[1]&lt;/a&gt;; this statistic should galvanise us. The road ahead requires collaboration among educators, policy‑makers, tech companies, parents and communities. Together, we can bridge the digital divide and build an inclusive digital future.&lt;/p&gt;




&lt;h3&gt;
  
  
  Appendix: Digital Divide Metrics
&lt;/h3&gt;

&lt;p&gt;Below is a horizontal bar chart summarising key digital divide metrics drawn from the Good Things Foundation and Ofcom. Families below the Minimum Digital Living Standard (MDLS), adults lacking basic digital skills, and households struggling to afford mobile contracts or devices illustrate the scale of exclusion. The chart highlights the urgency of interventions in devices, skills and affordability.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdq32g0qvumh6ovzyska9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdq32g0qvumh6ovzyska9.png" alt="AI and the Digital Divide: Ensuring Every Child Benefits, Not Just the Privileged Few" width="420" height="252"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;&lt;a href="https://www.theguardian.com/technology/2024/mar/17/half-uk-families-excluded-modern-digital-society-study?ref=breakthroughpursuit.com#:~:text=Almost%20half%20of%20UK%20families,an%20%E2%80%9Camplifier%20of%20other%20exclusions%E2%80%9D" rel="noopener noreferrer"&gt;[1]&lt;/a&gt; Nearly half of UK families excluded from modern digital society, study finds | Digital Britain | The Guardian&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.theguardian.com/technology/2024/mar/17/half-uk-families-excluded-modern-digital-society-study?ref=breakthroughpursuit.com" rel="noopener noreferrer"&gt;https://www.theguardian.com/technology/2024/mar/17/half-uk-families-excluded-modern-digital-society-study&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.unicef.org/press-releases/two-thirds-worlds-school-age-children-have-no-internet-access-home-new-unicef-itu?ref=breakthroughpursuit.com#:~:text=NEW%20YORK%2FGENEVA%2C%C2%A01%20December%C2%A02020%C2%A0%E2%80%93%C2%A0Two%20thirds%C2%A0of%20the,ITU" rel="noopener noreferrer"&gt;[2]&lt;/a&gt; &lt;a href="https://www.unicef.org/press-releases/two-thirds-worlds-school-age-children-have-no-internet-access-home-new-unicef-itu?ref=breakthroughpursuit.com#:~:text=Last%20year%2C%20UNICEF%20and%20ITU,deploy%20digital%20learning%20solutions%20and" rel="noopener noreferrer"&gt;[5]&lt;/a&gt; Two thirds of the world’s school-age children have no internet access at home, new UNICEF-ITU report says&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.unicef.org/press-releases/two-thirds-worlds-school-age-children-have-no-internet-access-home-new-unicef-itu?ref=breakthroughpursuit.com" rel="noopener noreferrer"&gt;https://www.unicef.org/press-releases/two-thirds-worlds-school-age-children-have-no-internet-access-home-new-unicef-itu&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.weforum.org/stories/2023/09/generative-ai-education-unesco/?ref=breakthroughpursuit.com#:~:text=What%20are%20UNESCO%E2%80%99s%20guidelines%20for,AI%20in%20education" rel="noopener noreferrer"&gt;[3]&lt;/a&gt; Generative AI has disrupted education. Here’s how it can be used for good – UNESCO | World Economic Forum&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.weforum.org/stories/2023/09/generative-ai-education-unesco/?ref=breakthroughpursuit.com" rel="noopener noreferrer"&gt;https://www.weforum.org/stories/2023/09/generative-ai-education-unesco/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.oecd.org/content/dam/oecd/en/publications/reports/2024/05/students-digital-devices-and-success_621829ff/9e4c0624-en.pdf?ref=breakthroughpursuit.com" rel="noopener noreferrer"&gt;[4]&lt;/a&gt; &lt;a href="https://www.oecd.org/content/dam/oecd/en/publications/reports/2024/05/students-digital-devices-and-success_621829ff/9e4c0624-en.pdf?ref=breakthroughpursuit.com" rel="noopener noreferrer"&gt;[34]&lt;/a&gt; Students, digital devices and success (EN)&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.oecd.org/content/dam/oecd/en/publications/reports/2024/05/students-digital-devices-and-success_621829ff/9e4c0624-en.pdf?ref=breakthroughpursuit.com" rel="noopener noreferrer"&gt;https://www.oecd.org/content/dam/oecd/en/publications/reports/2024/05/students-digital-devices-and-success_621829ff/9e4c0624-en.pdf&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.ofcom.org.uk/siteassets/resources/documents/research-and-data/media-literacy-research/children/children-media-use-and-attitudes-2024/childrens-media-literacy-report-2024.pdf?ref=breakthroughpursuit.com#:~:text=Online%20access%20and%20use%20%E2%80%A2,at%20home%20was%20not%20possible" rel="noopener noreferrer"&gt;[6]&lt;/a&gt; Childrens Media literacy report 2024&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.ofcom.org.uk/siteassets/resources/documents/research-and-data/media-literacy-research/children/children-media-use-and-attitudes-2024/childrens-media-literacy-report-2024.pdf?ref=breakthroughpursuit.com" rel="noopener noreferrer"&gt;https://www.ofcom.org.uk/siteassets/resources/documents/research-and-data/media-literacy-research/children/children-media-use-and-attitudes-2024/childrens-media-literacy-report-2024.pdf&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.goodthingsfoundation.org/policy-and-research/research-and-evidence/research-2024/digital-nation?ref=breakthroughpursuit.com#:~:text=These%20statistics%20are%20presented%20in,horizon%20of%20the%20left%20bank" rel="noopener noreferrer"&gt;[7]&lt;/a&gt; &lt;a href="https://www.goodthingsfoundation.org/policy-and-research/research-and-evidence/research-2024/digital-nation?ref=breakthroughpursuit.com#:~:text=,a%20smartphone%2C%20tablet%20or%20laptop" rel="noopener noreferrer"&gt;[8]&lt;/a&gt; &lt;a href="https://www.goodthingsfoundation.org/policy-and-research/research-and-evidence/research-2024/digital-nation?ref=breakthroughpursuit.com#:~:text=%2A%2033,behind%20by%20services%20moving%20online" rel="noopener noreferrer"&gt;[9]&lt;/a&gt; &lt;a href="https://www.goodthingsfoundation.org/policy-and-research/research-and-evidence/research-2024/digital-nation?ref=breakthroughpursuit.com#:~:text=%2A%2030,signed%20up%20for%20social%20tariff" rel="noopener noreferrer"&gt;[10]&lt;/a&gt; &lt;a href="https://www.goodthingsfoundation.org/policy-and-research/research-and-evidence/research-2024/digital-nation?ref=breakthroughpursuit.com#:~:text=The%20following%20statistics%20are%20on,of%20people%2C%20devices%20and%20buildings" rel="noopener noreferrer"&gt;[11]&lt;/a&gt; Digital Nation | The UK's Digital Divide | Good Things Foundation&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.goodthingsfoundation.org/policy-and-research/research-and-evidence/research-2024/digital-nation?ref=breakthroughpursuit.com" rel="noopener noreferrer"&gt;https://www.goodthingsfoundation.org/policy-and-research/research-and-evidence/research-2024/digital-nation&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.ofcom.org.uk/phones-and-broadband/saving-money/affordability-tracker?ref=breakthroughpursuit.com#:~:text=Around%20a%20quarter%20of%20UK,communications%20services%20in%20May%202025" rel="noopener noreferrer"&gt;[12]&lt;/a&gt; Communications Affordability Tracker - Ofcom&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.ofcom.org.uk/phones-and-broadband/saving-money/affordability-tracker?ref=breakthroughpursuit.com" rel="noopener noreferrer"&gt;https://www.ofcom.org.uk/phones-and-broadband/saving-money/affordability-tracker&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.ipsos.com/sites/default/files/ct/publication/documents/2023-11/lloyds-consumer-digital-index-2023-report.pdf?ref=breakthroughpursuit.com#:~:text=UK%20Consumer%20Digital%20Index%202023,u" rel="noopener noreferrer"&gt;[13]&lt;/a&gt; lloyds-consumer-digital-index-2023-report.pdf&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.ipsos.com/sites/default/files/ct/publication/documents/2023-11/lloyds-consumer-digital-index-2023-report.pdf?ref=breakthroughpursuit.com" rel="noopener noreferrer"&gt;https://www.ipsos.com/sites/default/files/ct/publication/documents/2023-11/lloyds-consumer-digital-index-2023-report.pdf&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.internetmatters.org/hub/research/generative-ai-in-education-report/?ref=breakthroughpursuit.com#:~:text=Generative%20AI%20in%20education%3A%20Children%E2%80%99s,and%20parents%E2%80%99%20views" rel="noopener noreferrer"&gt;[14]&lt;/a&gt; &lt;a href="https://www.internetmatters.org/hub/research/generative-ai-in-education-report/?ref=breakthroughpursuit.com#:~:text=Parents%20who%20say%20their%20child%E2%80%99s,AI%20tools%20to%20teach%20students" rel="noopener noreferrer"&gt;[15]&lt;/a&gt; &lt;a href="https://www.internetmatters.org/hub/research/generative-ai-in-education-report/?ref=breakthroughpursuit.com#:~:text=41" rel="noopener noreferrer"&gt;[16]&lt;/a&gt; &lt;a href="https://www.internetmatters.org/hub/research/generative-ai-in-education-report/?ref=breakthroughpursuit.com#:~:text=Government" rel="noopener noreferrer"&gt;[17]&lt;/a&gt; &lt;a href="https://www.internetmatters.org/hub/research/generative-ai-in-education-report/?ref=breakthroughpursuit.com#:~:text=Equitable%20access%20to%20AI%20in,education" rel="noopener noreferrer"&gt;[18]&lt;/a&gt; &lt;a href="https://www.internetmatters.org/hub/research/generative-ai-in-education-report/?ref=breakthroughpursuit.com#:~:text=Parents" rel="noopener noreferrer"&gt;[37]&lt;/a&gt; Generative AI in education: Kids &amp;amp; parents views | Internet Matters&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.internetmatters.org/hub/research/generative-ai-in-education-report/?ref=breakthroughpursuit.com" rel="noopener noreferrer"&gt;https://www.internetmatters.org/hub/research/generative-ai-in-education-report/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.ofcom.org.uk/online-safety/protecting-children/new-rules-for-a-safer-generation-of-children-online?ref=breakthroughpursuit.com#:~:text=In%20designing%20the%20Codes%20of,1" rel="noopener noreferrer"&gt;[19]&lt;/a&gt; &lt;a href="https://www.ofcom.org.uk/online-safety/protecting-children/new-rules-for-a-safer-generation-of-children-online?ref=breakthroughpursuit.com#:~:text=The%20steps%20include%20preventing%20minors,online%20bullying%20and%20dangerous%20challenges" rel="noopener noreferrer"&gt;[20]&lt;/a&gt; &lt;a href="https://www.ofcom.org.uk/online-safety/protecting-children/new-rules-for-a-safer-generation-of-children-online?ref=breakthroughpursuit.com#:~:text=,assume%20younger%20children%20are%20on" rel="noopener noreferrer"&gt;[21]&lt;/a&gt; &lt;a href="https://www.ofcom.org.uk/online-safety/protecting-children/new-rules-for-a-safer-generation-of-children-online?ref=breakthroughpursuit.com#:~:text=,management%20of%20risk%20to%20children" rel="noopener noreferrer"&gt;[22]&lt;/a&gt; &lt;a href="https://www.ofcom.org.uk/online-safety/protecting-children/new-rules-for-a-safer-generation-of-children-online?ref=breakthroughpursuit.com#:~:text=,so%20children%20can%20understand%20them" rel="noopener noreferrer"&gt;[23]&lt;/a&gt; &lt;a href="https://www.ofcom.org.uk/online-safety/protecting-children/new-rules-for-a-safer-generation-of-children-online?ref=breakthroughpursuit.com#:~:text=,management%20of%20risk%20to%20children" rel="noopener noreferrer"&gt;[24]&lt;/a&gt; &lt;a href="https://www.ofcom.org.uk/online-safety/protecting-children/new-rules-for-a-safer-generation-of-children-online?ref=breakthroughpursuit.com#:~:text=These%20measures%20build%20on%20the,52%20from%20encountering%20online%20pornography" rel="noopener noreferrer"&gt;[25]&lt;/a&gt; &lt;a 
href="https://www.ofcom.org.uk/online-safety/protecting-children/new-rules-for-a-safer-generation-of-children-online?ref=breakthroughpursuit.com#:~:text=The%20steps%20include%20preventing%20minors,online%20bullying%20and%20dangerous%20challenges" rel="noopener noreferrer"&gt;[38]&lt;/a&gt; &lt;a href="https://www.ofcom.org.uk/online-safety/protecting-children/new-rules-for-a-safer-generation-of-children-online?ref=breakthroughpursuit.com#:~:text=,harmful%20content%20from%20children%E2%80%99s%20feeds" rel="noopener noreferrer"&gt;[39]&lt;/a&gt; New rules for a safer generation of children online - Ofcom&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.ofcom.org.uk/online-safety/protecting-children/new-rules-for-a-safer-generation-of-children-online?ref=breakthroughpursuit.com" rel="noopener noreferrer"&gt;https://www.ofcom.org.uk/online-safety/protecting-children/new-rules-for-a-safer-generation-of-children-online&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.gov.uk/government/publications/generative-artificial-intelligence-in-education/generative-artificial-intelligence-ai-in-education?ref=breakthroughpursuit.com#:~:text=If%20used%20safely%2C%20effectively%20and,skills%20they%20need%20for%20life" rel="noopener noreferrer"&gt;[26]&lt;/a&gt;  Generative artificial intelligence (AI) in education - GOV.UK&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.gov.uk/government/publications/generative-artificial-intelligence-in-education/generative-artificial-intelligence-ai-in-education?ref=breakthroughpursuit.com" rel="noopener noreferrer"&gt;https://www.gov.uk/government/publications/generative-artificial-intelligence-in-education/generative-artificial-intelligence-ai-in-education&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.gov.uk/government/news/thousands-of-children-with-send-to-benefit-from-assistive-tech?ref=breakthroughpursuit.com#:~:text=Thousands%20of%20children%20with%20special,in%20up%20to%204%2C000%20schools" rel="noopener noreferrer"&gt;[27]&lt;/a&gt; &lt;a href="https://www.gov.uk/government/news/thousands-of-children-with-send-to-benefit-from-assistive-tech?ref=breakthroughpursuit.com#:~:text=The%20lending%20libraries%20model%20adopts,the%20risk%20of%20wasted%20expenditure" rel="noopener noreferrer"&gt;[28]&lt;/a&gt; &lt;a href="https://www.gov.uk/government/news/thousands-of-children-with-send-to-benefit-from-assistive-tech?ref=breakthroughpursuit.com#:~:text=The%20impact%20is%20clear%20among,confidence%20amongst%20pupils%20with%20SEND" rel="noopener noreferrer"&gt;[29]&lt;/a&gt; &lt;a href="https://www.gov.uk/government/news/thousands-of-children-with-send-to-benefit-from-assistive-tech?ref=breakthroughpursuit.com#:~:text=The%20impact%20also%20extends%20to,that%20transforms%20pupils%E2%80%99%20life%20chances" rel="noopener noreferrer"&gt;[40]&lt;/a&gt; Thousands of children with SEND to benefit from assistive tech - GOV.UK&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.gov.uk/government/news/thousands-of-children-with-send-to-benefit-from-assistive-tech?ref=breakthroughpursuit.com" rel="noopener noreferrer"&gt;https://www.gov.uk/government/news/thousands-of-children-with-send-to-benefit-from-assistive-tech&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://d2tic4wvo1iusb.cloudfront.net/production/eef-guidance-reports/digital/EEF_Digital_Technology_Guidance_Report.pdf?ref=breakthroughpursuit.com#:~:text=overarching%20recommendation%20in%20this%20report,better%20chance%20of%20doing%20so" rel="noopener noreferrer"&gt;[30]&lt;/a&gt; &lt;a href="https://d2tic4wvo1iusb.cloudfront.net/production/eef-guidance-reports/digital/EEF_Digital_Technology_Guidance_Report.pdf?ref=breakthroughpursuit.com#:~:text=this%20means%20buying%20a%20tablet,better%20chance%20of%20doing%20so" rel="noopener noreferrer"&gt;[31]&lt;/a&gt; &lt;a href="https://d2tic4wvo1iusb.cloudfront.net/production/eef-guidance-reports/digital/EEF_Digital_Technology_Guidance_Report.pdf?ref=breakthroughpursuit.com#:~:text=computing%20or%20coding%2C%20or%20on,Recommendation%204" rel="noopener noreferrer"&gt;[32]&lt;/a&gt; &lt;a href="https://d2tic4wvo1iusb.cloudfront.net/production/eef-guidance-reports/digital/EEF_Digital_Technology_Guidance_Report.pdf?ref=breakthroughpursuit.com#:~:text=reports%2C%20good%20implementation%20is%20crucial,busy%20reality%20of%20their%20classroom" rel="noopener noreferrer"&gt;[33]&lt;/a&gt; EEF_Digital_Technology_Guidance_Report.pdf&lt;/p&gt;

&lt;p&gt;&lt;a href="https://d2tic4wvo1iusb.cloudfront.net/production/eef-guidance-reports/digital/EEF_Digital_Technology_Guidance_Report.pdf?ref=breakthroughpursuit.com" rel="noopener noreferrer"&gt;https://d2tic4wvo1iusb.cloudfront.net/production/eef-guidance-reports/digital/EEF_Digital_Technology_Guidance_Report.pdf&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.rochdale.gov.uk/libraries/digitech-digital-tech-library?ref=breakthroughpursuit.com#:~:text=The%20Digitech%20Library%20is%20a,tackle%20digital%20exclusion%20and%20poverty" rel="noopener noreferrer"&gt;[35]&lt;/a&gt; Digital Tech (Digitech) Library | Rochdale Borough Council&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.rochdale.gov.uk/libraries/digitech-digital-tech-library?ref=breakthroughpursuit.com" rel="noopener noreferrer"&gt;https://www.rochdale.gov.uk/libraries/digitech-digital-tech-library&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.goodthingsfoundation.org/our-services/national-databank?ref=breakthroughpursuit.com#:~:text=There%E2%80%99s%20now%20over%203%2C500%20Digital,Hubs%20offering%20the%20National%20Databank" rel="noopener noreferrer"&gt;[36]&lt;/a&gt; What Is The National Databank | Free Mobile Data For Digital Inclusion | Good Things Foundation&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.goodthingsfoundation.org/our-services/national-databank?ref=breakthroughpursuit.com" rel="noopener noreferrer"&gt;https://www.goodthingsfoundation.org/our-services/national-databank&lt;/a&gt;&lt;/p&gt;

</description>
      <category>digitaldivide</category>
      <category>aiineducation</category>
      <category>equityinclusion</category>
      <category>ukeducationpolicy</category>
    </item>
    <item>
      <title>Digital Trust as the New Competitive Advantage</title>
      <dc:creator>Breakthrough Pursuit</dc:creator>
      <pubDate>Mon, 15 Sep 2025 14:19:00 +0000</pubDate>
      <link>https://forem.com/breakthroughpursuit/digital-trust-as-the-new-competitive-advantage-43ba</link>
      <guid>https://forem.com/breakthroughpursuit/digital-trust-as-the-new-competitive-advantage-43ba</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgv4bfgbp99zwz8cn5ne0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgv4bfgbp99zwz8cn5ne0.png" alt="Digital Trust as the New Competitive Advantage" width="800" height="448"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In an era of heightened privacy concerns and rapid tech adoption, &lt;strong&gt;trust has emerged as a key intangible asset&lt;/strong&gt; that compounds competitive advantage. Business audiences recognize that &lt;em&gt;customers&lt;/em&gt; gravitate toward trusted brands and &lt;em&gt;stakeholders&lt;/em&gt; reward transparent organizations. Data from the 2025 Edelman Trust Barometer and related research make clear that companies seen as responsible and trustworthy gain a “license to operate” and outperform peers. For example, Forrester finds that only 3% of firms qualify as “customer-obsessed” – those that put customer needs first – yet those companies deliver roughly &lt;strong&gt;41% faster revenue growth and 51% better customer retention&lt;/strong&gt; than competitors&lt;a href="https://www.forrester.com/press-newsroom/forrester-2024-us-customer-experience-index/?ref=breakthroughpursuit.com#:~:text=and%20satisfaction%20at%20the%20forefront,obsessed%20organizations" rel="noopener noreferrer"&gt;[1]&lt;/a&gt;. This aligns with evidence that consumers &lt;strong&gt;spend far more with brands they trust&lt;/strong&gt;: one study coins a “trust premium,” noting that online shoppers are willing to spend on average &lt;em&gt;51% more&lt;/em&gt; with a retailer they trust&lt;a href="https://explore.forter.com/2024-trust-premium-report/p/1?ref=breakthroughpursuit.com#:~:text=Coined%20the%20%E2%80%9CTrust%20Premium%2C%E2%80%9D%20consumers,with%20a%20retailer%20they%20trust" rel="noopener noreferrer"&gt;[2]&lt;/a&gt;. Similarly, Deloitte’s HX TrustID data show that &lt;strong&gt;trusted companies outperform peers by up to 400%&lt;/strong&gt;, and &lt;strong&gt;customers who trust a brand are 88% more likely to buy again&lt;/strong&gt;&lt;a href="https://www.deloittedigital.com/us/en/accelerators/trustid.html?ref=breakthroughpursuit.com#:~:text=%23%20400" rel="noopener noreferrer"&gt;[3]&lt;/a&gt;. These figures illustrate the thesis: high digital trust multiplies growth, customer retention and profitability.&lt;/p&gt;

&lt;p&gt;Moreover, societal expectations reinforce this advantage. The 2025 Edelman Barometer reports business still outpaces government on ethics and competence – &lt;strong&gt;business is seen as 49 points more competent and 29 points more ethical than government&lt;/strong&gt; &lt;a href="https://www.edelman.com/news-awards/2025-edelman-trust-barometer-reveals-high-level-grievance?ref=breakthroughpursuit.com#:~:text=For%20the%20past%20several%20years%2C,a%20sense%20of%20high%20grievance" rel="noopener noreferrer"&gt;[4]&lt;/a&gt; – but only if companies address critical social issues. Investors and regulators now demand evidence of trust-building (for example, through privacy and security practices), effectively granting companies &lt;strong&gt;regulatory goodwill&lt;/strong&gt; only when they are transparent and accountable&lt;a href="https://www.edelman.com/news-awards/2025-edelman-trust-barometer-reveals-high-level-grievance?ref=breakthroughpursuit.com#:~:text=For%20the%20past%20several%20years%2C,a%20sense%20of%20high%20grievance" rel="noopener noreferrer"&gt;[4]&lt;/a&gt;. In this way, trust-building yields &lt;em&gt;higher regulatory goodwill&lt;/em&gt; and fewer compliance frictions. In short, trust is a &lt;em&gt;compounder&lt;/em&gt;: it accelerates top-line growth (via loyalty and premium pricing) and bottom-line results (by reducing churn and risk exposure), while smoothing the path with regulators and investors.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key points:&lt;/strong&gt; Building digital trust drives revenue growth and retention&lt;a href="https://www.forrester.com/press-newsroom/forrester-2024-us-customer-experience-index/?ref=breakthroughpursuit.com#:~:text=and%20satisfaction%20at%20the%20forefront,obsessed%20organizations" rel="noopener noreferrer"&gt;[1]&lt;/a&gt;&lt;a href="https://www.deloittedigital.com/us/en/accelerators/trustid.html?ref=breakthroughpursuit.com#:~:text=%23%20400" rel="noopener noreferrer"&gt;[3]&lt;/a&gt;, creates a measurable “trust premium” on spending&lt;a href="https://explore.forter.com/2024-trust-premium-report/p/1?ref=breakthroughpursuit.com#:~:text=Coined%20the%20%E2%80%9CTrust%20Premium%2C%E2%80%9D%20consumers,with%20a%20retailer%20they%20trust" rel="noopener noreferrer"&gt;[2]&lt;/a&gt;&lt;a href="https://www.pwc.com/us/en/library/trust-in-business-survey.html?ref=breakthroughpursuit.com#:~:text=of%20consumers%20have%20recommended%20a,from%20a%20company%20due%20to" rel="noopener noreferrer"&gt;[5]&lt;/a&gt;, and earns corporations a de facto license in regulators’ eyes&lt;a href="https://www.edelman.com/news-awards/2025-edelman-trust-barometer-reveals-high-level-grievance?ref=breakthroughpursuit.com#:~:text=For%20the%20past%20several%20years%2C,a%20sense%20of%20high%20grievance" rel="noopener noreferrer"&gt;[4]&lt;/a&gt;. These benefits make digital trust a new strategic asset.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Trust–P&amp;amp;L Mechanism
&lt;/h2&gt;

&lt;p&gt;Empirical studies quantify &lt;strong&gt;how trust translates into financial payoff&lt;/strong&gt;. Consumers who trust a brand not only buy more often and in larger quantities, they also pay higher prices. PwC’s U.S. Trust in Business survey (2024) reports that &lt;strong&gt;46% of consumers say they purchased more from companies they trust, and 28% paid a premium&lt;/strong&gt; to trusted brands&lt;a href="https://www.pwc.com/us/en/library/trust-in-business-survey.html?ref=breakthroughpursuit.com#:~:text=of%20consumers%20have%20recommended%20a,from%20a%20company%20due%20to" rel="noopener noreferrer"&gt;[5]&lt;/a&gt;. Conversely, 4 in 10 consumers have &lt;em&gt;stopped buying&lt;/em&gt; from a company simply because they did not trust it&lt;a href="https://www.pwc.com/us/en/library/trust-in-business-survey.html?ref=breakthroughpursuit.com#:~:text=of%20consumers%20have%20recommended%20a,from%20a%20company%20due%20to" rel="noopener noreferrer"&gt;[5]&lt;/a&gt;. Trust also fuels advocacy: 61% of consumers have recommended a trusted company to friends or family&lt;a href="https://www.pwc.com/us/en/library/trust-in-business-survey.html?ref=breakthroughpursuit.com#:~:text=trust%20a%20company%2C%20some%20may,from%20a%20company%20due%20to" rel="noopener noreferrer"&gt;[6]&lt;/a&gt;. In B2B contexts trust matters just as much: Deloitte data show that trusted companies gain &lt;strong&gt;up to 400% better performance metrics&lt;/strong&gt; (higher sales, productivity or efficiency) and enjoy &lt;strong&gt;88% higher repurchase rates&lt;/strong&gt; &lt;a href="https://www.deloittedigital.com/us/en/accelerators/trustid.html?ref=breakthroughpursuit.com#:~:text=%23%20400" rel="noopener noreferrer"&gt;[3]&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;From a &lt;strong&gt;customer-lifetime-value (LTV)&lt;/strong&gt; perspective, trust acts like a tailwind. Forrester notes that even small improvements in customer experience (a proxy for trust) &lt;em&gt;“can reduce churn and increase wallet share,”&lt;/em&gt; adding millions to revenue&lt;a href="https://www.forrester.com/press-newsroom/forrester-2024-us-customer-experience-index/?ref=breakthroughpursuit.com#:~:text=Conducted%20for%20the%20ninth%20year,and%20increasing%20share%20of%20wallet" rel="noopener noreferrer"&gt;[7]&lt;/a&gt;. Indeed, Forrester’s CX Index analysis found that firms rated “customer-obsessed” – those effectively building trust – saw &lt;em&gt;41% faster revenue growth and 51% better retention&lt;/em&gt; than competitors&lt;a href="https://www.forrester.com/press-newsroom/forrester-2024-us-customer-experience-index/?ref=breakthroughpursuit.com#:~:text=and%20satisfaction%20at%20the%20forefront,obsessed%20organizations" rel="noopener noreferrer"&gt;[1]&lt;/a&gt;. In essence, trust increases LTV (customers buy more and stick around longer) and lowers effective CAC (acquisition cost, since reputation eases new sales).&lt;/p&gt;
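&lt;p&gt;&lt;em&gt;Illustration:&lt;/em&gt; the LTV mechanics are easy to sketch. The Python snippet below uses purely hypothetical figures (a $40 monthly spend, 60% margin, $120 acquisition cost), not numbers from the cited studies, to show how a single-point churn reduction lifts the LTV/CAC ratio:&lt;/p&gt;

```python
def ltv(avg_monthly_revenue, gross_margin, monthly_churn):
    # Simple geometric-retention model: monthly margin divided by churn rate
    return avg_monthly_revenue * gross_margin / monthly_churn

cac = 120.0  # hypothetical customer acquisition cost

baseline = ltv(40.0, 0.6, 0.05)  # 5% monthly churn
trusted = ltv(40.0, 0.6, 0.04)   # churn trimmed to 4% by trust initiatives

print(f"Baseline LTV/CAC: {baseline / cac:.1f}")  # Baseline LTV/CAC: 4.0
print(f"Trusted LTV/CAC:  {trusted / cac:.1f}")   # Trusted LTV/CAC:  5.0
```

&lt;p&gt;Even in this toy model, a one-point churn improvement moves LTV/CAC from 4.0x to 5.0x, which is the compounding effect the Forrester and PwC figures describe at market scale.&lt;/p&gt;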

&lt;p&gt;On the &lt;strong&gt;risk side&lt;/strong&gt;, trust acts as insurance against costly failures. Data breaches and security incidents not only erode customer confidence but also incur steep costs. IBM’s Cost of a Data Breach Report 2025 finds an average global breach cost of &lt;strong&gt;$4.44 million&lt;/strong&gt; &lt;a href="https://www.ibm.com/reports/data-breach?ref=breakthroughpursuit.com#:~:text=4" rel="noopener noreferrer"&gt;[8]&lt;/a&gt;. Crucially, faster breach identification and containment (a governance outcome tied to trust in security) &lt;em&gt;directly reduces&lt;/em&gt; these costs&lt;a href="https://www.ibm.com/reports/data-breach?ref=breakthroughpursuit.com#:~:text=4" rel="noopener noreferrer"&gt;[8]&lt;/a&gt;. In short, investing in privacy and security – core components of digital trust – pays off by preventing or minimizing expensive failures.&lt;/p&gt;

&lt;p&gt;Putting these numbers together, executives overwhelmingly see trust as a &lt;strong&gt;bottom-line accelerator&lt;/strong&gt;. PwC’s survey finds &lt;em&gt;93% of business leaders agree that building and maintaining trust improves the bottom line&lt;/em&gt;&lt;a href="https://www.pwc.com/us/en/library/trust-in-business-survey.html?ref=breakthroughpursuit.com#:~:text=and%20employees%20are%20nearly%20as,trust%20improves%20the%20bottom%20line" rel="noopener noreferrer"&gt;[9]&lt;/a&gt;. When trust is high, customers buy more (and are willing to pay more), employees are more productive and loyal, and investors offer better terms. Conversely, the survey shows that lack of customer trust immediately hurts engagement and profitability&lt;a href="https://www.pwc.com/us/en/library/trust-in-business-survey.html?ref=breakthroughpursuit.com#:~:text=of%20consumers%20have%20recommended%20a,due%20to%20lack%20of%20trust" rel="noopener noreferrer"&gt;[10]&lt;/a&gt;. For example, 42% of executives say &lt;em&gt;customer disengagement&lt;/em&gt; is the biggest risk of low trust, and a similar share cite lost profitability&lt;a href="https://www.pwc.com/us/en/library/trust-in-business-survey.html?ref=breakthroughpursuit.com#:~:text=%2A%20Customers%3A%2042,due%20to%20lack%20of%20trust" rel="noopener noreferrer"&gt;[11]&lt;/a&gt;. These data underscore the P&amp;amp;L mechanism: trust converts directly into &lt;strong&gt;growth, retention and risk mitigation&lt;/strong&gt;.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Buy more, pay more:&lt;/strong&gt; Trusted brands command higher wallet share and price premiums&lt;a href="https://www.pwc.com/us/en/library/trust-in-business-survey.html?ref=breakthroughpursuit.com#:~:text=of%20consumers%20have%20recommended%20a,from%20a%20company%20due%20to" rel="noopener noreferrer"&gt;[5]&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Churn less:&lt;/strong&gt; Even modest CX/trust improvements dramatically reduce churn&lt;a href="https://www.forrester.com/press-newsroom/forrester-2024-us-customer-experience-index/?ref=breakthroughpursuit.com#:~:text=Conducted%20for%20the%20ninth%20year,and%20increasing%20share%20of%20wallet" rel="noopener noreferrer"&gt;[7]&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Lower risk costs:&lt;/strong&gt; Strong security/privacy (trust signals) cut breach costs&lt;a href="https://www.ibm.com/reports/data-breach?ref=breakthroughpursuit.com#:~:text=4" rel="noopener noreferrer"&gt;[8]&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Advocacy and retention:&lt;/strong&gt; Trust drives referrals (61% recommend trusted brands&lt;a href="https://www.pwc.com/us/en/library/trust-in-business-survey.html?ref=breakthroughpursuit.com#:~:text=trust%20a%20company%2C%20some%20may,from%20a%20company%20due%20to" rel="noopener noreferrer"&gt;[6]&lt;/a&gt;) and repeat purchases (88% likely to buy again&lt;a href="https://www.deloittedigital.com/us/en/accelerators/trustid.html?ref=breakthroughpursuit.com#:~:text=%23%20400" rel="noopener noreferrer"&gt;[3]&lt;/a&gt;).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Regulatory goodwill:&lt;/strong&gt; Demonstrable trust (e.g. privacy certifications) smooths regulatory approvals and investor relations&lt;a href="https://www.edelman.com/news-awards/2025-edelman-trust-barometer-reveals-high-level-grievance?ref=breakthroughpursuit.com#:~:text=For%20the%20past%20several%20years%2C,a%20sense%20of%20high%20grievance" rel="noopener noreferrer"&gt;[4]&lt;/a&gt;&lt;a href="https://www.iso.org/standard/85819.html?ref=breakthroughpursuit.com#:~:text=,with%20partners%2C%20clients%20and%20regulators" rel="noopener noreferrer"&gt;[12]&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Together, these behavioral payoffs of trust create a &lt;strong&gt;compound effect on P&amp;amp;L&lt;/strong&gt;. Firms that execute trust-building see direct revenue upside and indirect cost savings, while those that neglect trust forfeit these gains and risk falling behind.&lt;/p&gt;

&lt;h2&gt;
  
  
  Ecosystem &amp;amp; Regulation
&lt;/h2&gt;

&lt;p&gt;Digital trust is not just a single-company issue; it is shaped by ecosystem choices and regulatory trends. Leading tech platforms have shown that &lt;strong&gt;privacy-first policies can rewire markets&lt;/strong&gt; – often provoking friction with some stakeholders but ultimately reshaping norms. Apple’s App Tracking Transparency (ATT) feature, for example, forced apps to get explicit user consent before cross-app tracking. Advertisers (notably Facebook/Meta) loudly complained that ATT made iOS marketing &lt;em&gt;“more expensive and difficult”&lt;/em&gt;&lt;a href="https://www.reuters.com/technology/apple-faces-likely-french-antitrust-fine-privacy-tool-sources-say-2025-02-27/?ref=breakthroughpursuit.com#:~:text=Called%20App%20Tracking%20Transparency%20,users%20and%20measure%20their%20impact" rel="noopener noreferrer"&gt;[13]&lt;/a&gt;. Meanwhile regulators in Europe have begun probing ATT for antitrust concerns. In early 2025 the French competition authority signaled it may fine Apple for “abusing its dominant position” with ATT&lt;a href="https://www.reuters.com/technology/apple-faces-likely-french-antitrust-fine-privacy-tool-sources-say-2025-02-27/?ref=breakthroughpursuit.com#:~:text=The%20French%20regulator%20charged%20Apple,user%20data%20for%20advertising%20purposes" rel="noopener noreferrer"&gt;[14]&lt;/a&gt;. (German regulators have launched similar investigations&lt;a href="https://www.reuters.com/technology/apple-faces-likely-french-antitrust-fine-privacy-tool-sources-say-2025-02-27/?ref=breakthroughpursuit.com#:~:text=Apple%20referred%20to%20a%20July,the%20goal%20of%20the%20ATT" rel="noopener noreferrer"&gt;[15]&lt;/a&gt;.) 
Yet Apple defends ATT as a &lt;em&gt;pro-privacy, pro-user-choice&lt;/em&gt; innovation: the company notes it holds its ad business to “a higher standard of privacy” than others and that even regulators and privacy watchdogs have lauded the ATT approach&lt;a href="https://www.reuters.com/technology/apple-faces-likely-french-antitrust-fine-privacy-tool-sources-say-2025-02-27/?ref=breakthroughpursuit.com#:~:text=Apple%20referred%20to%20a%20July,the%20goal%20of%20the%20ATT" rel="noopener noreferrer"&gt;[15]&lt;/a&gt;. This tussle illustrates how a trust-first move (privacy protection) can ruffle market incumbents yet win public trust and, in some cases, regulatory backing.&lt;/p&gt;

&lt;p&gt;Google’s response has likewise been trust-driven. To address privacy concerns and antitrust pressure, Google embarked on the “Privacy Sandbox” initiative for Chrome, planning to phase out third-party cookies in favor of privacy-preserving ad APIs. The explicit aim is “to develop new ways to &lt;strong&gt;strengthen online privacy while ensuring a sustainable, ad-supported internet&lt;/strong&gt;”&lt;a href="https://privacysandbox.com/news/privacy-sandbox-next-steps/?ref=breakthroughpursuit.com#:~:text=The%20goal%20of%20the%20Privacy,serve%20the%20industry%20and%20consumers" rel="noopener noreferrer"&gt;[16]&lt;/a&gt;. Google has engaged regulators (e.g. the UK CMA) and the ad ecosystem in iterative testing of Privacy Sandbox features. The company recently announced it will continue letting users control third-party cookie settings, invest in enhanced incognito protections (e.g. IP address shielding), and emphasize trust and safety innovations in Chrome&lt;a href="https://privacysandbox.com/news/privacy-sandbox-next-steps/?ref=breakthroughpursuit.com#:~:text=The%20goal%20of%20the%20Privacy,serve%20the%20industry%20and%20consumers" rel="noopener noreferrer"&gt;[16]&lt;/a&gt;. This pivot reflects how platform leaders perceive user trust as integral to their ad business; Google explicitly ties Privacy Sandbox to user confidence and industry health.&lt;/p&gt;

&lt;p&gt;Remarkably, these ecosystem shifts can build trust even amid short-term pain. One industry report finds that after four years of ATT, the mobile ad market has &lt;em&gt;adapted to a privacy-first paradigm&lt;/em&gt;. AppsFlyer data (April 2025) show that global iOS user opt-in rates to tracking have climbed to &lt;strong&gt;50% (up 10 percentage points since 2021)&lt;/strong&gt;&lt;a href="https://www.appsflyer.com/company/newsroom/pr/post-att-growth/?ref=breakthroughpursuit.com#:~:text=User%20Opt,grow" rel="noopener noreferrer"&gt;[17]&lt;/a&gt;. In other words, transparency and consent have increased user willingness to participate. “When users understand the value exchange behind data sharing,” the analysis notes, “many are willing to participate in the advertising ecosystem”&lt;a href="https://www.appsflyer.com/company/newsroom/pr/post-att-growth/?ref=breakthroughpursuit.com#:~:text=User%20Opt,grow" rel="noopener noreferrer"&gt;[17]&lt;/a&gt;. Companies have learned to frame tracking requests clearly and offer meaningful benefits. This suggests that &lt;strong&gt;privacy and performance need not conflict&lt;/strong&gt; – trust-building (via clear opt-in flows) can yield broad user acceptance.&lt;/p&gt;

&lt;p&gt;Across sectors, regulators and advocacy groups are also making trust-related demands. The EU’s Digital Services Act and AI Act require demonstrable safety and transparency. Industry groups (e.g. Mozilla, techUK) advocate privacy-first designs and public trust marks. The &lt;strong&gt;lesson&lt;/strong&gt; is that trust-based choices (even if initially controversial) can ultimately shape favorable policy outcomes and healthier markets. Conversely, ignoring trust can draw regulatory ire. Effective leaders watch these ecosystem signals closely: they engage in multi-stakeholder initiatives (like Privacy Sandbox discussions), monitor consumer sentiment on privacy, and align products with emerging trust norms.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key points:&lt;/strong&gt; Major platforms have chosen privacy-first strategies (Apple’s ATT, Google’s Privacy Sandbox) that prioritize user control and trust&lt;a href="https://www.reuters.com/technology/apple-faces-likely-french-antitrust-fine-privacy-tool-sources-say-2025-02-27/?ref=breakthroughpursuit.com#:~:text=Apple%20referred%20to%20a%20July,the%20goal%20of%20the%20ATT" rel="noopener noreferrer"&gt;[15]&lt;/a&gt;&lt;a href="https://privacysandbox.com/news/privacy-sandbox-next-steps/?ref=breakthroughpursuit.com#:~:text=The%20goal%20of%20the%20Privacy,serve%20the%20industry%20and%20consumers" rel="noopener noreferrer"&gt;[16]&lt;/a&gt;. These moves reshape market dynamics – raising short-term costs for some players – but ultimately build user confidence. Data show higher consent rates (50% opt-in for ATT&lt;a href="https://www.appsflyer.com/company/newsroom/pr/post-att-growth/?ref=breakthroughpursuit.com#:~:text=User%20Opt,grow" rel="noopener noreferrer"&gt;[17]&lt;/a&gt;) and industry acceptance of privacy-preserving adtech. Overall, market leaders see that taking a trust-first stance today helps win consumer and regulatory support tomorrow.&lt;/p&gt;

&lt;h2&gt;
  
  
  Operating System for Trust
&lt;/h2&gt;

&lt;p&gt;Building trust at scale requires a formal “operating system” – robust frameworks and standards that govern data and digital product design. Chief executives are increasingly adopting &lt;strong&gt;privacy/security frameworks&lt;/strong&gt; to demonstrate trustworthiness and manage risk. For example, NIST’s &lt;em&gt;Privacy Framework (PF) v1.1&lt;/em&gt; is a voluntary, risk-based tool to help organizations identify and manage privacy risks across their operations&lt;a href="https://www.nist.gov/privacy-framework?ref=breakthroughpursuit.com#:~:text=The%20NIST%20Privacy%20Framework%20,services%20while%20protecting%20individuals%E2%80%99%20privacy" rel="noopener noreferrer"&gt;[18]&lt;/a&gt;. Like the well-known cybersecurity framework, NIST PF provides &lt;strong&gt;profiles and controls&lt;/strong&gt; for privacy engineering, enabling companies to align products with stakeholder expectations. Using NIST PF, a firm can map its practices (from data collection to deletion) against a maturity model and show progress, making privacy a board-level concern rather than an ad hoc fix.&lt;/p&gt;

&lt;p&gt;International standards also play a key role. &lt;strong&gt;ISO/IEC 27701&lt;/strong&gt; is a privacy-specific extension to ISO 27001 (InfoSec) that codifies a Privacy Information Management System (PIMS). It defines requirements for establishing, implementing and improving privacy controls around personally identifiable information. Importantly, ISO/IEC 27701 “provides a structured, internationally recognized framework” helping firms “show accountability, manage risks around PII, and continually improve privacy practices”&lt;a href="https://www.iso.org/standard/85819.html?ref=breakthroughpursuit.com#:~:text=business%20partners%2C%20it%E2%80%99s%20not%20enough,continually%20improve%20their%20privacy%20practices" rel="noopener noreferrer"&gt;[19]&lt;/a&gt;. Certification to ISO 27701 signals to customers, partners and regulators that an organization follows best practices. The standard explicitly strengthens data protection capabilities and “supports trust-building with partners, clients and regulators”&lt;a href="https://www.iso.org/standard/85819.html?ref=breakthroughpursuit.com#:~:text=,with%20partners%2C%20clients%20and%20regulators" rel="noopener noreferrer"&gt;[12]&lt;/a&gt;. In essence, ISO 27701 makes privacy management auditable and verifiable.&lt;/p&gt;

&lt;p&gt;Emerging standards broaden the scope of trust. ISO/IEC 31700 (2023) establishes &lt;strong&gt;Privacy by Design for consumer products and services&lt;/strong&gt;. It is the first ISO standard on privacy by design, providing high-level rules that “integrate privacy into the architecture of goods and services”&lt;a href="https://www.pwc.ch/en/insights/regulation/welcome-to-the-new-iso-31700-standard-for-privacy-by-design.html?ref=breakthroughpursuit.com#:~:text=Privacy%20by%20design%20is%20a,Data%20Protection%20Act%2C%20Article%207" rel="noopener noreferrer"&gt;[20]&lt;/a&gt;. ISO 31700 enshrines principles like &lt;em&gt;empowerment and transparency&lt;/em&gt;, &lt;em&gt;institutional responsibility&lt;/em&gt;, and &lt;em&gt;lifecycle accountability&lt;/em&gt;&lt;a href="https://www.pwc.ch/en/insights/regulation/welcome-to-the-new-iso-31700-standard-for-privacy-by-design.html?ref=breakthroughpursuit.com#:~:text=creation%2C%20collection%20of%20personally%20identifiable,responsibility%3B%20and%20ecosystem%20and%20lifecycle" rel="noopener noreferrer"&gt;[21]&lt;/a&gt;. For any IoT or digital product, using ISO 31700 means embedding user-centric controls (e.g. data collection limits, encryption, breach response) from the earliest design phase. This uniform “privacy by default” guidance helps companies innovate while respecting customer autonomy.&lt;/p&gt;

&lt;p&gt;Similarly, &lt;strong&gt;ISO/IEC 42001 (2023)&lt;/strong&gt; creates a management system standard for artificial intelligence – an &lt;em&gt;AIMS&lt;/em&gt; (AI management system). It outlines requirements for governance of AI development, deployment and usage. KPMG notes that ISO 42001 is “offering a structured framework for AI governance,” helping organizations &lt;em&gt;build trust&lt;/em&gt; and align with regulations&lt;a href="https://kpmg.com/ch/en/insights/artificial-intelligence/iso-iec-42001.html?ref=breakthroughpursuit.com#:~:text=With%20increasing%20regulatory%20scrutiny%2C%20businesses,adoption%20and%20broader%20digital%20transformation" rel="noopener noreferrer"&gt;[22]&lt;/a&gt;. By following ISO 42001, companies institutionalize AI risk management (bias, data security, accountability) and ethical principles. Certification to ISO 42001 demonstrates, to customers and regulators, that the company’s AI systems are transparent, ethical and controlled&lt;a href="https://kpmg.com/ch/en/insights/artificial-intelligence/iso-iec-42001.html?ref=breakthroughpursuit.com#:~:text=ISO%2FIEC%2042001%20certification%20helps%20organizations%3A" rel="noopener noreferrer"&gt;[23]&lt;/a&gt;. This is crucial as AI becomes a focus of regulation (e.g. EU AI Act); compliance with ISO 42001 can serve as proof of responsible AI practice.&lt;/p&gt;

&lt;p&gt;Beyond privacy and AI, industry consortia are defining ethical engineering standards. The IEEE 7000-series offers guidelines for &lt;em&gt;trustworthy technology by design&lt;/em&gt;. For instance, &lt;strong&gt;IEEE 7000™&lt;/strong&gt; sets a model process for embedding ethics in system design, and &lt;strong&gt;IEEE 7001™&lt;/strong&gt; sets criteria for &lt;em&gt;transparency of autonomous systems&lt;/em&gt;&lt;a href="https://standards.ieee.org/initiatives/autonomous-intelligence-systems/?ref=breakthroughpursuit.com#:~:text=IEEE%207000%E2%84%A2" rel="noopener noreferrer"&gt;[24]&lt;/a&gt;. These standards encourage developers to document algorithms, clarify decision logic, and consider the societal impacts of tech. Adopting IEEE 7000/7001 principles enables firms to systematically address “ethics and transparency” – core facets of digital trust – in everything from robotics to software.&lt;/p&gt;

&lt;p&gt;In practice, these frameworks form the governance backbone for trust. Leading companies map their policies and controls onto NIST’s Privacy and Cybersecurity Frameworks, align with ISO 27701 for data privacy, pilot ISO 31700 in new products, and prepare for ISO 42001 audits of AI. They may also align with sector-specific guidelines (e.g. OECD’s privacy principles, or Mozilla/techUK trust initiatives). The &lt;strong&gt;value&lt;/strong&gt; of these frameworks lies in consistency, assurance and communication: they turn vague commitments (we “respect privacy”) into tangible processes and metrics. Auditors can verify compliance, and executives can signal to the market that trust is managed as rigorously as finance or quality.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key point:&lt;/strong&gt; Digital trust demands systematic governance. Use voluntary frameworks (e.g. NIST PF) and international standards (ISO 27701 for privacy, 31700 for product design, 42001 for AI, IEEE 7000/7001 for ethics) as the “OS” of trust. These give a common language for risk management and a basis for external assurance&lt;a href="https://www.nist.gov/privacy-framework?ref=breakthroughpursuit.com#:~:text=The%20NIST%20Privacy%20Framework%20,services%20while%20protecting%20individuals%E2%80%99%20privacy" rel="noopener noreferrer"&gt;[18]&lt;/a&gt;&lt;a href="https://www.iso.org/standard/85819.html?ref=breakthroughpursuit.com#:~:text=business%20partners%2C%20it%E2%80%99s%20not%20enough,continually%20improve%20their%20privacy%20practices" rel="noopener noreferrer"&gt;[25]&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Execution Playbook
&lt;/h2&gt;

&lt;p&gt;Translating strategy into practice requires concrete “trust-building” actions. The following playbook highlights key moves that tech-driven firms are deploying:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;First-Party Data Strategy:&lt;/strong&gt; With third-party tracking fading (e.g. cookies, device IDs), focus on collecting &lt;em&gt;own&lt;/em&gt; user data transparently. Develop robust first-party analytics and CRM platforms. For example, retailers are tying loyalty programs to explicit data-sharing benefits, encouraging customers to consent in exchange for personalization or rewards. First-party data means offering value (better service, relevant offers) in return for data – a trust tradeoff.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Consent-First UX:&lt;/strong&gt; Design &lt;em&gt;consent experiences&lt;/em&gt; that are straightforward and user-friendly. Ahead of regulations, companies now test different permission dialogs, “just-in-time” disclosures, and opt-down (not just opt-out) models. The goal is that a user immediately understands why a permission is requested and what benefit they get. Early studies of “privacy nutrition labels” (akin to food nutrition facts) show that clarity in labeling increases user comfort&lt;a href="https://www.appsflyer.com/company/newsroom/pr/post-att-growth/?ref=breakthroughpursuit.com#:~:text=User%20Opt,grow" rel="noopener noreferrer"&gt;[17]&lt;/a&gt;. In practice, firms embed permissions into onboarding flows rather than hiding them, earning trust by treating consent as a clear choice, not a buried obligation.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Transparency Labels &amp;amp; Dashboards:&lt;/strong&gt; Adopt visible transparency measures – e.g. “privacy fact sheets” for apps, real-time tracker blockers on websites, or data dashboards in apps. Tech giants have led the way (Apple’s App Store privacy labels, Google’s Data Safety section), and other companies can follow suit. Public trustmarks or even third-party audits (e.g. Cloud Security Alliance’s STAR program) can be shared on marketing sites. Some consumer IoT products now display clear data-use diagrams on packaging or online, so buyers know where data flows. These “transparency labels” treat openness as a product feature, bolstering credibility.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Safe-by-Design Engineering:&lt;/strong&gt; Integrate safety and ethics into the engineering cycle. This includes threat modeling, secure defaults, and ethical risk reviews (e.g. how might an AI model be misused?). Teams set up &lt;em&gt;privacy gates&lt;/em&gt; – design reviews focusing on data minimization – and &lt;em&gt;secure development lifecycles&lt;/em&gt; where every project must pass a security review before release. For AI products, this means bias testing and documentation of datasets and model behavior. Over time, such engineering practices create fewer mishaps, reinforcing customer trust.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Adversarial Testing (Red Teaming):&lt;/strong&gt; Proactively probe for failures. Cross-functional “red teams” simulate attacks on systems, including privacy breaches or fraudulent behaviors. For example, a red team might try to infer personal attributes from “anonymous” data, or stress-test AI outputs against adversarial inputs. Additionally, companies run external bug bounty programs and transparency audits, essentially “trying to break trust” in a controlled way so that real issues are fixed. This pre-emptive testing builds confidence that the system will hold up when faced with real threats.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Public Assurance &amp;amp; Artifacts:&lt;/strong&gt; Publish evidence of trust efforts. This can include white papers on data ethics, public bug bounty reports (e.g. number of vulnerabilities found/fixed), or even sanitized logs of security events (as some transparency advocates suggest). Leading companies maintain public trust dashboards – e.g. quarterly reports showing uptime, security metrics (mean time to detect/resolve incidents), and compliance status. For AI, this might involve model cards or impact assessments. The key is &lt;em&gt;externalizing&lt;/em&gt; some of the metrics and processes to show customers and regulators that the company takes trust seriously.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In sum, building digital trust is not an abstract campaign but an &lt;strong&gt;operational discipline&lt;/strong&gt;. It involves governance (e.g. ISO certifications from Section 4) and everyday product tactics (UI/UX, data management, R&amp;amp;D practices). Companies like Apple, Google, and IBM have set early examples: they publish annual trust/security reports, integrate opt-in permission UIs, and invest heavily in secure design. Other sectors (financial services, healthcare, retail) are now catching up, often guided by consulting frameworks (PwC, Deloitte, McKinsey) that prescribe trust audits, training for engineers, and trust KPIs in exec dashboards.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Example:&lt;/em&gt; A retailer might implement a first-party data platform linked to loyalty accounts, design a simple opt-in screen explaining cookies, label its mobile app with clear data usage stats, train engineers on privacy, hire a red team to test new personalization features, and then publicly share a semi-annual “privacy &amp;amp; trust report”. Each of these steps, grounded in user respect, collectively raises the trust quotient with customers and regulators.&lt;/p&gt;

&lt;h2&gt;
  
  
  Scorecards &amp;amp; Benchmarks
&lt;/h2&gt;

&lt;p&gt;To manage trust systematically, organizations deploy &lt;strong&gt;metrics and scorecards&lt;/strong&gt; that tie trust to concrete KPIs. Just as finance uses ROI or safety uses incident rates, trust metrics can include a mix of behavioral and technical indicators. A “menu” of useful metrics includes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Customer Lifetime Value (LTV) / CAC Ratio:&lt;/strong&gt; Track how trust efforts affect LTV/CAC. As noted, trust raises customer LTV (more spend, repeat buys) and lowers churn, so a rising LTV/CAC suggests trust is working. Firms may segment this by customer cohort (e.g. opt-in vs. non-opt-in customers) to see direct impact.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Churn Rate / Retention:&lt;/strong&gt; Measure how many customers leave or stay over time. Benchmark against peers or pre-trust-initiative baselines. Reducing churn is a direct signal of stronger trust, as Forrester and PwC data indicate. (For example, even “tens of millions” of incremental revenue can come from small churn reductions&lt;a href="https://www.forrester.com/press-newsroom/forrester-2024-us-customer-experience-index/?ref=breakthroughpursuit.com#:~:text=Conducted%20for%20the%20ninth%20year,and%20increasing%20share%20of%20wallet" rel="noopener noreferrer"&gt;[7]&lt;/a&gt;.)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Price Premium / Revenue Lift:&lt;/strong&gt; Track willingness to pay: measure if customers pay higher prices or purchase premium tiers. PwC data show a ~28% willingness-to-pay premium for trusted brands&lt;a href="https://www.pwc.com/us/en/library/trust-in-business-survey.html?ref=breakthroughpursuit.com#:~:text=of%20consumers%20have%20recommended%20a,from%20a%20company%20due%20to" rel="noopener noreferrer"&gt;[5]&lt;/a&gt;, and Forter’s e-commerce study finds a 51% increase in spend&lt;a href="https://explore.forter.com/2024-trust-premium-report/p/1?ref=breakthroughpursuit.com#:~:text=Coined%20the%20%E2%80%9CTrust%20Premium%2C%E2%80%9D%20consumers,with%20a%20retailer%20they%20trust" rel="noopener noreferrer"&gt;[2]&lt;/a&gt;. Companies can design experiments or surveys to quantify this trust premium, or simply observe unit price trends after trust announcements.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Breach/Incident Containment Time:&lt;/strong&gt; Record time-to-detect and time-to-contain security/privacy incidents. Shorter response times not only reduce costs (IBM finds 9% lower breach cost for faster containment&lt;a href="https://www.ibm.com/reports/data-breach?ref=breakthroughpursuit.com#:~:text=4" rel="noopener noreferrer"&gt;[8]&lt;/a&gt;) but also minimize customer exposure. A formal goal might be “identify security incident within X hours, resolve within Y hours,” and track it monthly. This serves as a proxy for how well the organization “trust-proofs” its systems.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Certification &amp;amp; Compliance Levels:&lt;/strong&gt; Count the number and scope of certifications (ISO 27701, 42001, SOC 2, CSA STAR, etc.) and regulatory compliance achievements (GDPR, CCPA readiness). Each new certification can be treated as a milestone. External audits (e.g. privacy audits, SOC reports) provide scores or grades that feed into an annual trust index.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Trust Equity Audit Cadence:&lt;/strong&gt; Maintain a regular “trust audit” (quarterly or annual) similar to financial audits. This might involve surveying customers/employees on trust, scanning public sentiment, and reviewing policy compliance. Composite indices can be created (e.g. average trust rating on a 7-point scale, fraction of users opting into data sharing). The World Economic Forum advocates developing &lt;strong&gt;measures of digital trust&lt;/strong&gt; and tracking them as one would any other corporate objective&lt;a href="https://www.weforum.org/publications/measuring-digital-trust-supporting-decision-making-for-trustworthy-technologies/?ref=breakthroughpursuit.com#:~:text=The%20World%20Economic%20Forum%E2%80%99s%20Digital,approach%20to%20earning%20digital%20trust" rel="noopener noreferrer"&gt;[26]&lt;/a&gt;. A formal cadence (e.g. quarterly trust scorecard reviewed by the board) embeds trust in governance.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Price Premium Realized:&lt;/strong&gt; Beyond projected premium, track actual pricing power. Compare ASP (average selling price) on similar products before/after trust-enhancing features, or relative to competitors. If a “trusted” product can sustain higher prices, the premium is real. (PwC’s 28% and Forter’s 51% stats provide benchmarks for what is possible&lt;a href="https://www.pwc.com/us/en/library/trust-in-business-survey.html?ref=breakthroughpursuit.com#:~:text=of%20consumers%20have%20recommended%20a,from%20a%20company%20due%20to" rel="noopener noreferrer"&gt;[5]&lt;/a&gt;&lt;a href="https://explore.forter.com/2024-trust-premium-report/p/1?ref=breakthroughpursuit.com#:~:text=Coined%20the%20%E2%80%9CTrust%20Premium%2C%E2%80%9D%20consumers,with%20a%20retailer%20they%20trust" rel="noopener noreferrer"&gt;[2]&lt;/a&gt;.)&lt;/li&gt;
&lt;/ul&gt;
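&lt;p&gt;&lt;em&gt;Illustration:&lt;/em&gt; a trust scorecard like this can be prototyped in a few lines. The Python sketch below uses hypothetical field names and thresholds (the 3x LTV/CAC target, 3% churn ceiling, and 72-hour containment goal are illustrative choices, not drawn from any cited framework):&lt;/p&gt;

```python
from dataclasses import dataclass

@dataclass
class TrustScorecard:
    # Field names and thresholds are illustrative, not from any standard
    ltv: float                 # average customer lifetime value ($)
    cac: float                 # customer acquisition cost ($)
    monthly_churn: float       # fraction of customers lost per month
    opt_in_rate: float         # share of users consenting to data sharing
    containment_hours: float   # mean time to contain a security incident
    certifications: int        # count held (e.g. ISO 27701, ISO 42001)

    def ltv_cac_ratio(self):
        return self.ltv / self.cac

    def flags(self):
        """Red-flag rules a quarterly board review might apply."""
        issues = []
        if self.cac * 3.0 > self.ltv:  # LTV under a common 3x CAC target
            issues.append("LTV/CAC below 3x")
        if self.monthly_churn > 0.03:
            issues.append("churn above 3% monthly")
        if self.containment_hours > 72:
            issues.append("containment slower than 72h goal")
        return issues

q3 = TrustScorecard(ltv=480.0, cac=120.0, monthly_churn=0.025,
                    opt_in_rate=0.50, containment_hours=48.0,
                    certifications=2)
print(q3.ltv_cac_ratio())  # 4.0
print(q3.flags())          # []
```

&lt;p&gt;The point of such a sketch is the cadence, not the code: once thresholds are codified, the same rules run every quarter and surface regressions before they become headlines.&lt;/p&gt;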

&lt;p&gt;In practice, leading companies publish internal dashboards combining these KPIs. For example, a quarterly trust dashboard might show customer opt-in rates, churn trends, breach metrics, certification status, customer survey NPS vs. trust ratings, and social sentiment. Executives link these metrics to business outcomes: e.g. demonstrating that customers who consented to data sharing had 20% higher LTV, or that certification to ISO 27701 enabled faster market entry in Europe. By codifying trust into scorecards, firms turn a soft concept into measurable progress.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key points:&lt;/strong&gt; Trust metrics should combine customer-behavior KPIs (LTV/CAC, churn, premium captured) with system metrics (breach response time, audit ratings) and compliance scores (certifications achieved)&lt;a href="https://www.ibm.com/reports/data-breach?ref=breakthroughpursuit.com#:~:text=4" rel="noopener noreferrer"&gt;[8]&lt;/a&gt;&lt;a href="https://www.pwc.com/us/en/library/trust-in-business-survey.html?ref=breakthroughpursuit.com#:~:text=of%20consumers%20have%20recommended%20a,from%20a%20company%20due%20to" rel="noopener noreferrer"&gt;[5]&lt;/a&gt;. Benchmark against industry peers or WEF/Deloitte scorecards. Regular trust audits and dashboards make trust’s impact visible to leadership.&lt;/p&gt;
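&lt;p&gt;To make the idea of a composite trust index concrete, here is a minimal sketch of how the KPIs above could be rolled into a single 0–100 scorecard. The KPI names, bounds, and equal weights are illustrative assumptions, not a standard formula:&lt;/p&gt;

```python
# Illustrative quarterly "trust scorecard": each KPI is normalized to 0..1
# against a worst/best range, then combined with (hypothetical) weights.

def normalize(value, worst, best):
    """Map a raw KPI onto 0..1, where `best` -> 1.0 and `worst` -> 0.0.
    Works whether higher or lower raw values are better (swap the bounds)."""
    score = (value - worst) / (best - worst)
    return max(0.0, min(1.0, score))  # clamp values outside the bounds

# KPI: (raw value, worst bound, best bound, weight) -- all numbers hypothetical
KPIS = {
    "opt_in_rate":          (0.62,  0.0,   1.0, 0.25),  # users consenting to data sharing
    "quarterly_churn":      (0.04,  0.15,  0.0, 0.25),  # lower is better, so worst > best
    "breach_mttr_hours":    (36.0, 168.0,  4.0, 0.25),  # mean time to respond to a breach
    "avg_trust_rating_7pt": (5.6,   1.0,   7.0, 0.25),  # customer survey, 7-point scale
}

def trust_score(kpis):
    """Weighted composite on a 0-100 scale, suitable for a quarterly dashboard."""
    total = sum(w * normalize(v, worst, best) for v, worst, best, w in kpis.values())
    weight_sum = sum(w for _, _, _, w in kpis.values())
    return round(100 * total / weight_sum, 1)

print(trust_score(KPIS))
```

&lt;p&gt;Tracking a score like this quarter over quarter, alongside the raw KPIs, is one simple way to turn the "trust audit cadence" into a board-readable number.&lt;/p&gt;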

&lt;h2&gt;
  
  
  Call to Action
&lt;/h2&gt;

&lt;p&gt;Digital trust is no longer an optional strategy – it must be a board-level priority, backed by investment and accountability. We recommend three critical actions for cross-sector organizations:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Adopt Leading Trust Certifications:&lt;/strong&gt; Aim for &lt;em&gt;dual certification&lt;/em&gt; as a starting point. For example, implement ISO/IEC 27701 (Privacy Information Management) &lt;strong&gt;together with&lt;/strong&gt; ISO/IEC 42001 (AI Management System) to cover both data privacy and AI ethics under a unified governance program. This dual certification sends a strong signal: it demonstrates to customers and regulators that you manage personal data and AI responsibly. Achieving ISO 27701 shows adherence to best-practice privacy controls, while ISO 42001 compliance proves your AI systems are governed, accountable and fair&lt;a href="https://www.iso.org/standard/85819.html?ref=breakthroughpursuit.com#:~:text=business%20partners%2C%20it%E2%80%99s%20not%20enough,continually%20improve%20their%20privacy%20practices" rel="noopener noreferrer"&gt;[25]&lt;/a&gt;&lt;a href="https://kpmg.com/ch/en/insights/artificial-intelligence/iso-iec-42001.html?ref=breakthroughpursuit.com#:~:text=With%20increasing%20regulatory%20scrutiny%2C%20businesses,adoption%20and%20broader%20digital%20transformation" rel="noopener noreferrer"&gt;[22]&lt;/a&gt;. (Companies might also layer ISO 27001 for security and IEEE 7000 alignment for ethics.) By publicly holding these certifications, a company lets respected, independent third parties vouch for its trustworthiness.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Implement Quarterly Trust Dashboards:&lt;/strong&gt; Just as finance and risk have regular reporting, institute a &lt;strong&gt;quarterly digital trust dashboard&lt;/strong&gt; at the executive level. This should highlight KPIs from Section 6 (e.g. opt-in rates, churn, breach MTTR, certifications, customer trust survey scores, etc.) and compare them to targets or benchmarks. The dashboard must be visible to the C-suite and board, with clear accountability (e.g. “Chief Privacy Officer: reduce breach response time”). Link incentive structures to these metrics: for example, include trust and customer-satisfaction scores in executive bonuses. Over time, making trust metrics part of the rhythm of the business embeds trust into strategy. As PwC emphasizes, companies that proactively measure and manage trust can gain a “clear edge over competitors”&lt;a href="https://www.pwc.com/us/en/library/trust-in-business-survey.html?ref=breakthroughpursuit.com#:~:text=Our%20data%20highlights%20an%20opportunity,themselves%20a%20clear%20edge%20over" rel="noopener noreferrer"&gt;[27]&lt;/a&gt; – but only if they treat trust data as seriously as sales forecasts or compliance checklists.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Invest in Public-Facing Assurance:&lt;/strong&gt; Allocate resources to transparency initiatives that customers and stakeholders can see. This includes external audits, transparency reports, and open governance. For example, publish an annual or semi-annual &lt;em&gt;Digital Trust Report&lt;/em&gt; detailing your performance on security incidents, privacy practices, and algorithmic fairness. Make your compliance reviews (like GDPR or SOC 2 audits) available in summary form. Engage with multi-stakeholder standards bodies (WEF, OECD, techUK) and showcase your alignment with their trust frameworks. When issues arise, issue prompt public statements outlining remediation steps. These gestures of openness build “trust equity” in the broader community.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In summary, companies must &lt;strong&gt;formalize trust&lt;/strong&gt; in the same way they do quality or sustainability. This means obtaining recognized certifications (ISO 27701 for privacy, ISO 42001 for AI, etc.), embedding trust metrics in management dashboards, and making trust efforts transparent externally. Doing so not only strengthens internal governance but also convinces customers, employees and regulators that your company is a trustworthy steward of data and technology. In today’s landscape – as underscored by the IBM, Deloitte and WEF research cited above – organizations that lead with trust win out in growth and goodwill&lt;a href="https://www.ibm.com/reports/data-breach?ref=breakthroughpursuit.com#:~:text=4" rel="noopener noreferrer"&gt;[8]&lt;/a&gt;&lt;a href="https://www.weforum.org/publications/measuring-digital-trust-supporting-decision-making-for-trustworthy-technologies/?ref=breakthroughpursuit.com#:~:text=The%20World%20Economic%20Forum%E2%80%99s%20Digital,approach%20to%20earning%20digital%20trust" rel="noopener noreferrer"&gt;[26]&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Metrics aside, the ultimate call to action is cultural:&lt;/strong&gt; embed empathy, transparency and responsibility into your digital DNA. Treat trust as a strategic asset to nurture. In practice, this means every CX, IT, and product decision asks: &lt;em&gt;“How does this build or erode trust?”&lt;/em&gt; The data are clear that customers &lt;em&gt;value&lt;/em&gt; companies that care for their data and safety&lt;a href="https://www.pwc.com/us/en/library/trust-in-business-survey.html?ref=breakthroughpursuit.com#:~:text=of%20consumers%20have%20recommended%20a,from%20a%20company%20due%20to" rel="noopener noreferrer"&gt;[5]&lt;/a&gt;&lt;a href="https://kpmg.com/ch/en/insights/artificial-intelligence/iso-iec-42001.html?ref=breakthroughpursuit.com#:~:text=ISO%2FIEC%2042001%20certification%20helps%20organizations%3A" rel="noopener noreferrer"&gt;[23]&lt;/a&gt;. In the coming decade, companies that operationalize trust will not only outperform financially but will also set industry standards for the new social contract of the digital economy.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Recommended Next Steps:&lt;/strong&gt; Achieve &lt;strong&gt;ISO/IEC 27701 + 42001 certification&lt;/strong&gt;, publish a &lt;strong&gt;quarterly trust dashboard&lt;/strong&gt;, and increase &lt;strong&gt;public assurance&lt;/strong&gt; (audits/reports) to signal credibility. These concrete actions will embed trust into the business strategy and differentiate your organization in an increasingly trust-driven market&lt;a href="https://www.iso.org/standard/85819.html?ref=breakthroughpursuit.com#:~:text=,with%20partners%2C%20clients%20and%20regulators" rel="noopener noreferrer"&gt;[12]&lt;/a&gt;&lt;a href="https://kpmg.com/ch/en/insights/artificial-intelligence/iso-iec-42001.html?ref=breakthroughpursuit.com#:~:text=With%20increasing%20regulatory%20scrutiny%2C%20businesses,adoption%20and%20broader%20digital%20transformation" rel="noopener noreferrer"&gt;[22]&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;References:&lt;/strong&gt; The above analysis draws on industry benchmarks and standards – notably the &lt;em&gt;Edelman Trust Barometer&lt;/em&gt;, Forrester CX Index, PwC and Deloitte trust surveys, IBM and Microsoft security reports, WEF digital trust publications, and standards (NIST Privacy Framework, ISO/IEC 27701, 31700, 42001, IEEE 7000/7001) – to quantify how digital trust translates into competitive advantage&lt;a href="https://www.forrester.com/press-newsroom/forrester-2024-us-customer-experience-index/?ref=breakthroughpursuit.com#:~:text=and%20satisfaction%20at%20the%20forefront,obsessed%20organizations" rel="noopener noreferrer"&gt;[1]&lt;/a&gt;&lt;a href="https://www.edelman.com/news-awards/2025-edelman-trust-barometer-reveals-high-level-grievance?ref=breakthroughpursuit.com#:~:text=For%20the%20past%20several%20years%2C,a%20sense%20of%20high%20grievance" rel="noopener noreferrer"&gt;[4]&lt;/a&gt;&lt;a href="https://explore.forter.com/2024-trust-premium-report/p/1?ref=breakthroughpursuit.com#:~:text=Coined%20the%20%E2%80%9CTrust%20Premium%2C%E2%80%9D%20consumers,with%20a%20retailer%20they%20trust" rel="noopener noreferrer"&gt;[2]&lt;/a&gt;&lt;a href="https://www.pwc.com/us/en/library/trust-in-business-survey.html?ref=breakthroughpursuit.com#:~:text=and%20employees%20are%20nearly%20as,trust%20improves%20the%20bottom%20line" rel="noopener noreferrer"&gt;[9]&lt;/a&gt;&lt;a href="https://www.pwc.com/us/en/library/trust-in-business-survey.html?ref=breakthroughpursuit.com#:~:text=of%20consumers%20have%20recommended%20a,from%20a%20company%20due%20to" rel="noopener noreferrer"&gt;[5]&lt;/a&gt;&lt;a href="https://www.ibm.com/reports/data-breach?ref=breakthroughpursuit.com#:~:text=4" rel="noopener noreferrer"&gt;[8]&lt;/a&gt;&lt;a 
href="https://www.reuters.com/technology/apple-faces-likely-french-antitrust-fine-privacy-tool-sources-say-2025-02-27/?ref=breakthroughpursuit.com#:~:text=Called%20App%20Tracking%20Transparency%20,users%20and%20measure%20their%20impact" rel="noopener noreferrer"&gt;[13]&lt;/a&gt;&lt;a href="https://privacysandbox.com/news/privacy-sandbox-next-steps/?ref=breakthroughpursuit.com#:~:text=The%20goal%20of%20the%20Privacy,serve%20the%20industry%20and%20consumers" rel="noopener noreferrer"&gt;[16]&lt;/a&gt;&lt;a href="https://www.appsflyer.com/company/newsroom/pr/post-att-growth/?ref=breakthroughpursuit.com#:~:text=User%20Opt,grow" rel="noopener noreferrer"&gt;[17]&lt;/a&gt;&lt;a href="https://www.iso.org/standard/85819.html?ref=breakthroughpursuit.com#:~:text=business%20partners%2C%20it%E2%80%99s%20not%20enough,continually%20improve%20their%20privacy%20practices" rel="noopener noreferrer"&gt;[19]&lt;/a&gt;&lt;a href="https://www.pwc.ch/en/insights/regulation/welcome-to-the-new-iso-31700-standard-for-privacy-by-design.html?ref=breakthroughpursuit.com#:~:text=Privacy%20by%20design%20is%20a,Data%20Protection%20Act%2C%20Article%207" rel="noopener noreferrer"&gt;[20]&lt;/a&gt;&lt;a href="https://kpmg.com/ch/en/insights/artificial-intelligence/iso-iec-42001.html?ref=breakthroughpursuit.com#:~:text=With%20increasing%20regulatory%20scrutiny%2C%20businesses,adoption%20and%20broader%20digital%20transformation" rel="noopener noreferrer"&gt;[22]&lt;/a&gt;&lt;a href="https://standards.ieee.org/initiatives/autonomous-intelligence-systems/?ref=breakthroughpursuit.com#:~:text=IEEE%207000%E2%84%A2" rel="noopener noreferrer"&gt;[24]&lt;/a&gt;&lt;a href="https://www.weforum.org/publications/measuring-digital-trust-supporting-decision-making-for-trustworthy-technologies/?ref=breakthroughpursuit.com#:~:text=The%20World%20Economic%20Forum%E2%80%99s%20Digital,approach%20to%20earning%20digital%20trust" rel="noopener noreferrer"&gt;[26]&lt;/a&gt;.&lt;/p&gt;




&lt;p&gt;&lt;a href="https://www.forrester.com/press-newsroom/forrester-2024-us-customer-experience-index/?ref=breakthroughpursuit.com#:~:text=and%20satisfaction%20at%20the%20forefront,obsessed%20organizations" rel="noopener noreferrer"&gt;[1]&lt;/a&gt; &lt;a href="https://www.forrester.com/press-newsroom/forrester-2024-us-customer-experience-index/?ref=breakthroughpursuit.com#:~:text=Conducted%20for%20the%20ninth%20year,and%20increasing%20share%20of%20wallet" rel="noopener noreferrer"&gt;[7]&lt;/a&gt; Forrester Releases 2024 US Customer Experience Index&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.forrester.com/press-newsroom/forrester-2024-us-customer-experience-index/?ref=breakthroughpursuit.com" rel="noopener noreferrer"&gt;https://www.forrester.com/press-newsroom/forrester-2024-us-customer-experience-index/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://explore.forter.com/2024-trust-premium-report/p/1?ref=breakthroughpursuit.com#:~:text=Coined%20the%20%E2%80%9CTrust%20Premium%2C%E2%80%9D%20consumers,with%20a%20retailer%20they%20trust" rel="noopener noreferrer"&gt;[2]&lt;/a&gt; Consumer Trust Premium Report 2024&lt;/p&gt;

&lt;p&gt;&lt;a href="https://explore.forter.com/2024-trust-premium-report/p/1?ref=breakthroughpursuit.com" rel="noopener noreferrer"&gt;https://explore.forter.com/2024-trust-premium-report/p/1&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.deloittedigital.com/us/en/accelerators/trustid.html?ref=breakthroughpursuit.com#:~:text=%23%20400" rel="noopener noreferrer"&gt;[3]&lt;/a&gt; TrustID™: A blueprint for building trust | Deloitte Digital&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.deloittedigital.com/us/en/accelerators/trustid.html?ref=breakthroughpursuit.com" rel="noopener noreferrer"&gt;https://www.deloittedigital.com/us/en/accelerators/trustid.html&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.edelman.com/news-awards/2025-edelman-trust-barometer-reveals-high-level-grievance?ref=breakthroughpursuit.com#:~:text=For%20the%20past%20several%20years%2C,a%20sense%20of%20high%20grievance" rel="noopener noreferrer"&gt;[4]&lt;/a&gt; Edelman Trust Barometer Reveals High Level of Grievance Towards Government, Business and the Rich Add to Default shortcuts | Edelman&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.edelman.com/news-awards/2025-edelman-trust-barometer-reveals-high-level-grievance?ref=breakthroughpursuit.com" rel="noopener noreferrer"&gt;https://www.edelman.com/news-awards/2025-edelman-trust-barometer-reveals-high-level-grievance&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.pwc.com/us/en/library/trust-in-business-survey.html?ref=breakthroughpursuit.com#:~:text=of%20consumers%20have%20recommended%20a,from%20a%20company%20due%20to" rel="noopener noreferrer"&gt;[5]&lt;/a&gt; &lt;a href="https://www.pwc.com/us/en/library/trust-in-business-survey.html?ref=breakthroughpursuit.com#:~:text=trust%20a%20company%2C%20some%20may,from%20a%20company%20due%20to" rel="noopener noreferrer"&gt;[6]&lt;/a&gt; &lt;a href="https://www.pwc.com/us/en/library/trust-in-business-survey.html?ref=breakthroughpursuit.com#:~:text=and%20employees%20are%20nearly%20as,trust%20improves%20the%20bottom%20line" rel="noopener noreferrer"&gt;[9]&lt;/a&gt; &lt;a href="https://www.pwc.com/us/en/library/trust-in-business-survey.html?ref=breakthroughpursuit.com#:~:text=of%20consumers%20have%20recommended%20a,due%20to%20lack%20of%20trust" rel="noopener noreferrer"&gt;[10]&lt;/a&gt; &lt;a href="https://www.pwc.com/us/en/library/trust-in-business-survey.html?ref=breakthroughpursuit.com#:~:text=%2A%20Customers%3A%2042,due%20to%20lack%20of%20trust" rel="noopener noreferrer"&gt;[11]&lt;/a&gt; &lt;a href="https://www.pwc.com/us/en/library/trust-in-business-survey.html?ref=breakthroughpursuit.com#:~:text=Our%20data%20highlights%20an%20opportunity,themselves%20a%20clear%20edge%20over" rel="noopener noreferrer"&gt;[27]&lt;/a&gt; Trust in US Business Survey: PwC&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.pwc.com/us/en/library/trust-in-business-survey.html?ref=breakthroughpursuit.com" rel="noopener noreferrer"&gt;https://www.pwc.com/us/en/library/trust-in-business-survey.html&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.ibm.com/reports/data-breach?ref=breakthroughpursuit.com#:~:text=4" rel="noopener noreferrer"&gt;[8]&lt;/a&gt; Cost of a data breach 2025 | IBM&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.ibm.com/reports/data-breach?ref=breakthroughpursuit.com" rel="noopener noreferrer"&gt;https://www.ibm.com/reports/data-breach&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.iso.org/standard/85819.html?ref=breakthroughpursuit.com#:~:text=,with%20partners%2C%20clients%20and%20regulators" rel="noopener noreferrer"&gt;[12]&lt;/a&gt; &lt;a href="https://www.iso.org/standard/85819.html?ref=breakthroughpursuit.com#:~:text=business%20partners%2C%20it%E2%80%99s%20not%20enough,continually%20improve%20their%20privacy%20practices" rel="noopener noreferrer"&gt;[19]&lt;/a&gt; &lt;a href="https://www.iso.org/standard/85819.html?ref=breakthroughpursuit.com#:~:text=business%20partners%2C%20it%E2%80%99s%20not%20enough,continually%20improve%20their%20privacy%20practices" rel="noopener noreferrer"&gt;[25]&lt;/a&gt;  ISO/IEC 27701 - Information security, cybersecurity and privacy protection — Privacy information management systems — Requirements and guidance&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.iso.org/standard/85819.html?ref=breakthroughpursuit.com" rel="noopener noreferrer"&gt;https://www.iso.org/standard/85819.html&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.reuters.com/technology/apple-faces-likely-french-antitrust-fine-privacy-tool-sources-say-2025-02-27/?ref=breakthroughpursuit.com#:~:text=Called%20App%20Tracking%20Transparency%20,users%20and%20measure%20their%20impact" rel="noopener noreferrer"&gt;[13]&lt;/a&gt; &lt;a href="https://www.reuters.com/technology/apple-faces-likely-french-antitrust-fine-privacy-tool-sources-say-2025-02-27/?ref=breakthroughpursuit.com#:~:text=The%20French%20regulator%20charged%20Apple,user%20data%20for%20advertising%20purposes" rel="noopener noreferrer"&gt;[14]&lt;/a&gt; &lt;a href="https://www.reuters.com/technology/apple-faces-likely-french-antitrust-fine-privacy-tool-sources-say-2025-02-27/?ref=breakthroughpursuit.com#:~:text=Apple%20referred%20to%20a%20July,the%20goal%20of%20the%20ATT" rel="noopener noreferrer"&gt;[15]&lt;/a&gt; Apple faces likely French antitrust fine for privacy tool, sources say | Reuters&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.reuters.com/technology/apple-faces-likely-french-antitrust-fine-privacy-tool-sources-say-2025-02-27/?ref=breakthroughpursuit.com" rel="noopener noreferrer"&gt;https://www.reuters.com/technology/apple-faces-likely-french-antitrust-fine-privacy-tool-sources-say-2025-02-27/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://privacysandbox.com/news/privacy-sandbox-next-steps/?ref=breakthroughpursuit.com#:~:text=The%20goal%20of%20the%20Privacy,serve%20the%20industry%20and%20consumers" rel="noopener noreferrer"&gt;[16]&lt;/a&gt; Next steps for Privacy Sandbox and tracking protections in Chrome&lt;/p&gt;

&lt;p&gt;&lt;a href="https://privacysandbox.com/news/privacy-sandbox-next-steps/?ref=breakthroughpursuit.com" rel="noopener noreferrer"&gt;https://privacysandbox.com/news/privacy-sandbox-next-steps/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.appsflyer.com/company/newsroom/pr/post-att-growth/?ref=breakthroughpursuit.com#:~:text=User%20Opt,grow" rel="noopener noreferrer"&gt;[17]&lt;/a&gt; AppsFlyer Shows Mobile Ad Market Thrives 4 Years After ATT&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.appsflyer.com/company/newsroom/pr/post-att-growth/?ref=breakthroughpursuit.com" rel="noopener noreferrer"&gt;https://www.appsflyer.com/company/newsroom/pr/post-att-growth/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.nist.gov/privacy-framework?ref=breakthroughpursuit.com#:~:text=The%20NIST%20Privacy%20Framework%20,services%20while%20protecting%20individuals%E2%80%99%20privacy" rel="noopener noreferrer"&gt;[18]&lt;/a&gt; Privacy Framework | NIST&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.nist.gov/privacy-framework?ref=breakthroughpursuit.com" rel="noopener noreferrer"&gt;https://www.nist.gov/privacy-framework&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.pwc.ch/en/insights/regulation/welcome-to-the-new-iso-31700-standard-for-privacy-by-design.html?ref=breakthroughpursuit.com#:~:text=Privacy%20by%20design%20is%20a,Data%20Protection%20Act%2C%20Article%207" rel="noopener noreferrer"&gt;[20]&lt;/a&gt; &lt;a href="https://www.pwc.ch/en/insights/regulation/welcome-to-the-new-iso-31700-standard-for-privacy-by-design.html?ref=breakthroughpursuit.com#:~:text=creation%2C%20collection%20of%20personally%20identifiable,responsibility%3B%20and%20ecosystem%20and%20lifecycle" rel="noopener noreferrer"&gt;[21]&lt;/a&gt; Welcome to the new ISO 31700 standard for privacy by design | PwC Switzerland&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.pwc.ch/en/insights/regulation/welcome-to-the-new-iso-31700-standard-for-privacy-by-design.html?ref=breakthroughpursuit.com" rel="noopener noreferrer"&gt;https://www.pwc.ch/en/insights/regulation/welcome-to-the-new-iso-31700-standard-for-privacy-by-design.html&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://kpmg.com/ch/en/insights/artificial-intelligence/iso-iec-42001.html?ref=breakthroughpursuit.com#:~:text=With%20increasing%20regulatory%20scrutiny%2C%20businesses,adoption%20and%20broader%20digital%20transformation" rel="noopener noreferrer"&gt;[22]&lt;/a&gt; &lt;a href="https://kpmg.com/ch/en/insights/artificial-intelligence/iso-iec-42001.html?ref=breakthroughpursuit.com#:~:text=ISO%2FIEC%2042001%20certification%20helps%20organizations%3A" rel="noopener noreferrer"&gt;[23]&lt;/a&gt; ISO/IEC 42001: a new standard for AI governance&lt;/p&gt;

&lt;p&gt;&lt;a href="https://kpmg.com/ch/en/insights/artificial-intelligence/iso-iec-42001.html?ref=breakthroughpursuit.com" rel="noopener noreferrer"&gt;https://kpmg.com/ch/en/insights/artificial-intelligence/iso-iec-42001.html&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://standards.ieee.org/initiatives/autonomous-intelligence-systems/?ref=breakthroughpursuit.com#:~:text=IEEE%207000%E2%84%A2" rel="noopener noreferrer"&gt;[24]&lt;/a&gt; IEEE SA - Autonomous and Intelligent Systems (AIS)&lt;/p&gt;

&lt;p&gt;&lt;a href="https://standards.ieee.org/initiatives/autonomous-intelligence-systems/?ref=breakthroughpursuit.com" rel="noopener noreferrer"&gt;https://standards.ieee.org/initiatives/autonomous-intelligence-systems/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.weforum.org/publications/measuring-digital-trust-supporting-decision-making-for-trustworthy-technologies/?ref=breakthroughpursuit.com#:~:text=The%20World%20Economic%20Forum%E2%80%99s%20Digital,approach%20to%20earning%20digital%20trust" rel="noopener noreferrer"&gt;[26]&lt;/a&gt; Measuring Digital Trust | World Economic Forum&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.weforum.org/publications/measuring-digital-trust-supporting-decision-making-for-trustworthy-technologies/?ref=breakthroughpursuit.com" rel="noopener noreferrer"&gt;https://www.weforum.org/publications/measuring-digital-trust-supporting-decision-making-for-trustworthy-technologies/&lt;/a&gt;&lt;/p&gt;

</description>
      <category>digitaltrust</category>
      <category>competitiveadvantage</category>
      <category>customerretentionloy</category>
      <category>regulatorygoodwill</category>
    </item>
  </channel>
</rss>
