<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Riku Lauttia</title>
    <description>The latest articles on Forem by Riku Lauttia (@rikulauttia).</description>
    <link>https://forem.com/rikulauttia</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3285455%2F94be9214-1e77-4ad4-bf71-31317c19f153.jpg</url>
      <title>Forem: Riku Lauttia</title>
      <link>https://forem.com/rikulauttia</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/rikulauttia"/>
    <language>en</language>
    <item>
      <title>A Social Contract for AI</title>
      <dc:creator>Riku Lauttia</dc:creator>
      <pubDate>Sat, 25 Oct 2025 18:35:33 +0000</pubDate>
      <link>https://forem.com/rikulauttia/a-social-contract-for-ai-lah</link>
      <guid>https://forem.com/rikulauttia/a-social-contract-for-ai-lah</guid>
      <description>&lt;p&gt;Responsibility, Competence and Infrastructure — In Practice.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;TL;DR&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AI can propose, but only people can decide and own consequences.&lt;/li&gt;
&lt;li&gt;Explanations must fit each audience — citizens get reasons they understand; experts get verifiable logs.&lt;/li&gt;
&lt;li&gt;Security moves “left”: design it into data, models, supply chains, and operations.&lt;/li&gt;
&lt;li&gt;Capacity must be layered and portable (exit rights tested, not only promised).&lt;/li&gt;
&lt;li&gt;Curated data and guardrails against “model collapse” are non-negotiable.&lt;/li&gt;
&lt;li&gt;Ethically, pursue a dual strategy: partner where power lives, but fund open alternatives to keep freedom to move.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv6wk05zqvxpagmfmy4wu.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv6wk05zqvxpagmfmy4wu.jpg" alt=" " width="800" height="448"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;AI now threads through the economy, government, and everyday life — and shifts power as it goes: who steers development, whose voice counts, and whose risks are tolerated. This essay offers a practical framework any organization can adopt to make defensible, auditable AI decisions today. The core claim is simple: a democratic path requires that we operationalize six ideas — human responsibility, audience-specific explanation, security-by-design, layered and portable capacity, curated data, and an ethics dual strategy — across contracts, architectures, and daily routines. If we don’t, decision power quietly migrates to whoever controls the fastest lane: compute, data, and contracting leverage.&lt;/p&gt;

&lt;p&gt;Machines can support work and decisions; responsibility must remain human. Below I show how to turn principles into checkable routines — procurement clauses, architectural patterns, and training programs — so the “social contract for AI” isn’t just a strategy slide, but something you can verify.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why Only Humans Decide&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Large language models feel fluent, but they do not share human commitments. A model can predict words; only a person can promise, be accountable, and correct a decision. That boundary is crucial in public power, healthcare, and due process.&lt;/p&gt;

&lt;p&gt;Design rule: in every critical application, separate suggestion from decision. Name the human approver. Make the chain auditable. Without this, fluency gets mistaken for understanding, and responsibility blurs.&lt;/p&gt;

&lt;p&gt;Practice checklist:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Name the decision owner in the UI and in logs.&lt;/li&gt;
&lt;li&gt;Require a justification field (what evidence, which data versions, which tests passed).&lt;/li&gt;
&lt;li&gt;Show citizens the decision and appeal path in the same view.&lt;/li&gt;
&lt;/ul&gt;
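&lt;p&gt;As a minimal sketch, an auditable decision record might carry fields like these — the names are illustrative, not a prescribed schema:&lt;/p&gt;

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    """One human decision over one model suggestion (illustrative fields)."""
    suggestion_id: str
    decision_owner: str   # the named approver, shown in the UI and in logs
    outcome: str          # "accepted" | "rejected" | "modified"
    justification: str    # what evidence the approver relied on
    data_versions: list   # dataset/model snapshots behind the suggestion
    tests_passed: list    # which gates cleared before the suggestion surfaced
    appeal_url: str       # the appeal path shown to the citizen
    decided_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

record = DecisionRecord(
    suggestion_id="sug-4711",
    decision_owner="caseworker.jane.doe",
    outcome="accepted",
    justification="Income documents match declared figures.",
    data_versions=["claims_2024Q3@a1b2c3"],
    tests_passed=["unit-tests", "policy-tests"],
    appeal_url="https://example.org/appeal/sug-4711",
)
assert record.decision_owner  # no record without a named human approver
```

&lt;p&gt;The invariant is the point: a record that lacks a named human approver should never be writable.&lt;/p&gt;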

&lt;p&gt;&lt;strong&gt;Competence, Reframed: From “Code Writer” to “Verifier”&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Generative tools shift software work: less manual writing, more problem framing, testing, verification, and safe use. Quality does not emerge by accident.&lt;/p&gt;

&lt;p&gt;What changes in teams:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Prompts are artifacts. Keep them in version control.&lt;/li&gt;
&lt;li&gt;CI for generated output. Treat it like code: unit tests, policy tests, red-team suites.&lt;/li&gt;
&lt;li&gt;Named approver. A human must gate releases and risky actions.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Operational metric ideas:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Escapes per release (errors that slip past tests into production).&lt;/li&gt;
&lt;li&gt;Percentage of critical actions with human approval logged.&lt;/li&gt;
&lt;li&gt;Mean time to correction after citizen/agent appeal.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Minimal “Gen-AI Gate” in CI (concept):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;checks:
  - unit-tests
  - policy-tests      # jailbreaks, PII, safety rails
  - eval-bench        # task-specific accuracy/latency
  - human-approval    # required for risk &amp;gt;= medium; role: Service Owner
artifacts:
  - prompts/                    # versioned prompts
  - evals/                      # reproducible eval sets
  - provenance/manifest.json    # model + data snapshot
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;strong&gt;Explainability That Matters (to Each Audience)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;One diagram rarely justifies a decision. In public use, we need reasons a citizen understands, plus deep logs for auditors. Explanation isn’t a bolt-on; it’s part of the system.&lt;/p&gt;

&lt;p&gt;Two-tier model:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Citizen layer: plain-language rationale, key factors, uncertainty, and an appeal button.&lt;/li&gt;
&lt;li&gt;Expert layer: versioned data/model snapshot, feature contributions, policy rules invoked, and evaluation traces.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Make it measurable:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Maintain a justification budget alongside latency budgets.&lt;/li&gt;
&lt;li&gt;Track comprehension with user tests (do non-experts correctly paraphrase the reason?).&lt;/li&gt;
&lt;li&gt;Version explanation artifacts so they update as data/models change.&lt;/li&gt;
&lt;/ul&gt;
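&lt;p&gt;As one hedged illustration of the two-tier model, an explanation payload could be split like this — every field name here is an assumption, not a standard:&lt;/p&gt;

```python
# A two-tier explanation payload: the citizen layer is plain language,
# the expert layer carries the verifiable trace. All keys are illustrative.
explanation = {
    "citizen": {
        "decision": "Benefit application approved",
        "key_factors": ["household income", "number of dependents"],
        "uncertainty": "low",
        "appeal_url": "https://example.org/appeal/123",
    },
    "expert": {
        "model_version": "assessor-v3.2",
        "data_snapshot": "claims_2024Q3@a1b2c3",
        "feature_contributions": {"income": 0.42, "dependents": 0.31},
        "policy_rules_invoked": ["rule-17", "rule-22"],
        "eval_trace_id": "run-2025-10-01-0042",
    },
}

def citizen_layer_ok(layer: dict) -> bool:
    """Crude comprehension guard: the citizen layer must offer an appeal
    path and avoid internal jargon a non-expert cannot paraphrase."""
    text = " ".join(str(v) for v in layer.values()).lower()
    jargon = ("logit", "embedding", "feature contribution")
    return bool(layer.get("appeal_url")) and not any(t in text for t in jargon)

assert citizen_layer_ok(explanation["citizen"])
```

&lt;p&gt;A guard like this is no substitute for user testing, but it makes “citizens get reasons they understand” a check that can fail a build rather than a slogan.&lt;/p&gt;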

&lt;p&gt;&lt;strong&gt;Security as the Default&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Attackers’ reconnaissance is fast and automated; defenders must raise costs before the first exploit. AI also lowers the attacker’s skill threshold.&lt;/p&gt;

&lt;p&gt;Architecture patterns:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Strict train/test/prod separation; zero standing privileges.&lt;/li&gt;
&lt;li&gt;Minimal metadata retention; role-based access with time-boxed tokens.&lt;/li&gt;
&lt;li&gt;Supply-chain provenance: models, libraries, datasets (SBOM, dataset lineage, signed attestations).&lt;/li&gt;
&lt;li&gt;Continuous LLM red-teaming (prompt injection, data exfiltration, tool abuse).&lt;/li&gt;
&lt;li&gt;Practiced incident drills: backup integrity, failover paths, and clear communications.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Quarterly security routine (example):&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Purple-team exercise (prompt injection + data exfiltration).&lt;/li&gt;
&lt;li&gt;Restore from backup and switch to warm-standby.&lt;/li&gt;
&lt;li&gt;Rotate keys/tokens; verify blast radius limits.&lt;/li&gt;
&lt;li&gt;Publish a short, de-identified internal postmortem.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Compute Is Political: Layered and Portable Capacity&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Specialized accelerators concentrate performance in a few places. That’s not just technical — it’s geopolitical and economic. If critical functions depend on a single vendor, you inherit their pricing and their disruptions.&lt;/p&gt;

&lt;p&gt;Capacity strategy:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Layering: national/regional cloud where needed, with local edge for continuity.&lt;/li&gt;
&lt;li&gt;Exit rights you test: data and model export in usable formats; like-for-like performance tests on alternates.&lt;/li&gt;
&lt;li&gt;Procurement points: open interfaces, portability scoring, energy use in total cost of ownership.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Business-continuity goals:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;RTO (recovery time objective) and RPO (data loss window) defined and tested twice a year.&lt;/li&gt;
&lt;li&gt;Simulate provider lockout and prove you can run elsewhere.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Data Quality, Synthetic Data, and Model-Collapse Risk&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If systems start learning from their own outputs, distributions drift and quality decays. Prevent recursive self-feeding unless a human-in-the-loop review clears it.&lt;/p&gt;

&lt;p&gt;Data governance:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Dataset cards: origin, rights, bias notes, update history.&lt;/li&gt;
&lt;li&gt;Synthetic data controls: document the generator, share, and purpose; cap the synthetic proportion; validate with real-world probes.&lt;/li&gt;
&lt;li&gt;Pre-deployment quality gates — don’t wait for incidents.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Data Registry (illustrative excerpt):&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Dataset: claims_2024Q3&lt;/li&gt;
&lt;li&gt;Sources: municipal systems A/B, OCR pipeline v2.1&lt;/li&gt;
&lt;li&gt;Known risks: under-representation of non-native speakers&lt;/li&gt;
&lt;li&gt;Synthetic share: 12% (generator v0.9, style constraints on)&lt;/li&gt;
&lt;li&gt;Last audit: 2025-09-15 (pass)&lt;/li&gt;
&lt;/ul&gt;
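&lt;p&gt;The registry excerpt can be made machine-checkable. A minimal sketch, assuming an illustrative 20% policy ceiling on synthetic share (the cap value is mine, not a recommendation):&lt;/p&gt;

```python
# Dataset card mirroring the registry excerpt, with a hard gate on
# synthetic share enforced before the data may enter training.
dataset_card = {
    "dataset": "claims_2024Q3",
    "sources": ["municipal system A", "municipal system B", "OCR pipeline v2.1"],
    "known_risks": ["under-representation of non-native speakers"],
    "synthetic_share": 0.12,   # generator v0.9, style constraints on
    "last_audit": "2025-09-15",
    "audit_result": "pass",
}

SYNTHETIC_CAP = 0.20  # illustrative policy ceiling

def admit_to_training(card: dict) -> bool:
    """Quality gate: audited with a pass, and synthetic share under the cap."""
    if card["audit_result"] != "pass":
        return False
    return not card["synthetic_share"] > SYNTHETIC_CAP

assert admit_to_training(dataset_card)
```

&lt;p&gt;Running the gate before deployment, not after an incident, is exactly the “pre-deployment quality gates” routine above.&lt;/p&gt;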

&lt;p&gt;&lt;strong&gt;Ethics Between Power and Freedom: The “Dirty Hands” Dual Strategy&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;We often face a choice: influence from inside partnerships (where decisions are made) or from outside by building alternatives. Both carry risks; the practical answer is to do both.&lt;/p&gt;

&lt;p&gt;What it looks like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Work with major providers and fund open models, test beds, and standards in parallel.&lt;/li&gt;
&lt;li&gt;Independent ethics boards with real stop authority for large procurements.&lt;/li&gt;
&lt;li&gt;Public conflict-of-interest and influence logs.&lt;/li&gt;
&lt;li&gt;Annual impact reports and third-party audits.&lt;/li&gt;
&lt;li&gt;Whistleblower channels that actually protect people.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;The Social Contract, Operationalized: Six Principles → Six Routines&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Responsibility stays human&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Named approver; time-stamped decision log; RACI table.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Explanation by audience&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Citizen rationale and appeal; expert trace logs; comprehension tests.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Security by design&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Minimized metadata; supply-chain provenance; practiced drills.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Layered and portable capacity&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Tested exits; portability score in procurement; energy in TCO.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Curated data, synthetic under control&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Registry and quality gates; recursion guard; bias and drift monitors.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Ethics dual strategy&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Partner plus open alternatives; independent board; public reports.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Public dashboard suggestion:&lt;br&gt;
Publish quarterly: portability test results, data-quality grade, audit findings (de-identified), and time-to-correction for appeals.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Case Example: Municipal Social Services&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A city uses LLMs to draft assessments and summaries.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Responsibility: each suggestion requires a named caseworker’s approval; UI records reasons and data versions.&lt;/li&gt;
&lt;li&gt;Explanation: citizens see plain-language reasons and an appeal link; staff see model/data/version logs.&lt;/li&gt;
&lt;li&gt;Security: separate train/test/prod; metadata minimized; honeytokens detect exfil attempts; regular drills switch to edge capacity during outages.&lt;/li&gt;
&lt;li&gt;Data: documented sources; limited, labeled synthetic share; pre-launch quality gates.&lt;/li&gt;
&lt;li&gt;Ethics and portability: publish de-identified quarterly metrics; in parallel, pilot an open model to keep exit options real.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Result: faster service without sacrificing due process or trust.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Making Hidden Power Visible&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;AI’s strongest effects hide in contracting, data curation, update cadence, and architecture choices. That’s where “quiet power” accumulates. Counter it with routines: decision logs, impact assessments, version histories, public changelogs, and test results. These create a learning loop, not just a compliance tick-box.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Portability is a drill, not a slogan. Practice data/model exports and failovers with real costs attached.&lt;/li&gt;
&lt;li&gt;Curation is a routine, preventing silent decay and keeping models grounded.&lt;/li&gt;
&lt;li&gt;Explanation by audience keeps citizens informed and auditors effective.&lt;/li&gt;
&lt;li&gt;Security by design raises attacker costs and shrinks blast radius.&lt;/li&gt;
&lt;li&gt;Independent oversight verifies the drills and keeps everyone honest.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In Finland and across Europe, this is also about sovereignty: layered capacity (regional cloud plus edge) so we’re not captive to a single vendor or geography.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conclusion: Machines Propose, Society Decides&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Models predict words; people carry duties. A workable social contract for AI hard-wires that reality. When explanation, security, capacity, data, and ethics are embedded in routines, AI strengthens democracy: decisions are justifiable today and correctable tomorrow, without undue delay or cost.&lt;/p&gt;

&lt;p&gt;Two movements, in parallel:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Institutionalize verifiability — traceable suggestions, reproducible evidence, accountable approvals.&lt;/li&gt;
&lt;li&gt;Build sovereign capacity — tested exit rights, portable stacks, and real security.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;These reinforce one another. Without institutions, sovereignty is a slogan. Without sovereignty, institutions are fragile. Measure progress through regular audits and public exercises; otherwise capability remains on paper. An ethics dual strategy keeps influence where power sits while preserving the freedom to leave.&lt;/p&gt;

&lt;p&gt;Bottom line: machines can suggest; we set direction and terms. That’s how innovation becomes fixable — and fair.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Appendix: Ready-to-Use Artifacts&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;RACI for Critical Decisions (template)&lt;/p&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;&lt;th&gt;Activity&lt;/th&gt;&lt;th&gt;Responsible&lt;/th&gt;&lt;th&gt;Accountable&lt;/th&gt;&lt;th&gt;Consulted&lt;/th&gt;&lt;th&gt;Informed&lt;/th&gt;&lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;&lt;td&gt;Model suggestion accepted/rejected&lt;/td&gt;&lt;td&gt;Caseworker&lt;/td&gt;&lt;td&gt;Service Owner&lt;/td&gt;&lt;td&gt;Legal, DPO&lt;/td&gt;&lt;td&gt;Citizen, Team&lt;/td&gt;&lt;/tr&gt;
    &lt;tr&gt;&lt;td&gt;Data update to training set&lt;/td&gt;&lt;td&gt;Data Steward&lt;/td&gt;&lt;td&gt;CDO&lt;/td&gt;&lt;td&gt;Security, Domain Lead&lt;/td&gt;&lt;td&gt;Audit Board&lt;/td&gt;&lt;/tr&gt;
    &lt;tr&gt;&lt;td&gt;Portability drill execution&lt;/td&gt;&lt;td&gt;SRE Lead&lt;/td&gt;&lt;td&gt;CTO/CIO&lt;/td&gt;&lt;td&gt;Vendor, Risk&lt;/td&gt;&lt;td&gt;Public report&lt;/td&gt;&lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;Procurement Clauses (excerpt)&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Portability &amp;amp; Exit: Vendor must support export of data, prompts, embeddings, and fine-tuned weights in documented formats; provide performance baselines for alternative environments; participate in semiannual failover drills.&lt;/li&gt;
&lt;li&gt;Security &amp;amp; Provenance: Provide SBOM for models/libs, dataset lineage, and signed attestations; pass red-team tests twice yearly.&lt;/li&gt;
&lt;li&gt;Explanation: Deliver citizen-facing rationales and expert trace logs via APIs; support appeal integration.&lt;/li&gt;
&lt;li&gt;Data Governance: Maintain dataset cards; cap and document synthetic shares; prevent recursive training on system outputs without human review.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Policy Budgets (keep next to latency SLOs)&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Justification budget: p95 ≤ 500 ms to render citizen rationale and appeal link.&lt;/li&gt;
&lt;li&gt;Correction budget: ≤ 5 business days from appeal to adjudication.&lt;/li&gt;
&lt;li&gt;Portability budget: ≤ 24 h to restore service in alternate environment (RTO), ≤ 1 h data loss (RPO).&lt;/li&gt;
&lt;/ul&gt;
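&lt;p&gt;The budgets above can live next to latency SLOs as testable thresholds. A sketch, with made-up quarterly measurements standing in for real telemetry:&lt;/p&gt;

```python
# Policy budgets as testable thresholds. The limits restate the budgets
# in the text; the measured values are placeholders, not real data.
BUDGETS = {
    "justification_p95_ms": 500,   # render citizen rationale + appeal link
    "correction_days": 5,          # appeal to adjudication
    "portability_rto_hours": 24,   # restore in alternate environment (RTO)
    "portability_rpo_hours": 1,    # maximum data-loss window (RPO)
}

def breached(measured: dict) -> list:
    """Return the budgets this period's measurements exceed."""
    return [k for k, limit in BUDGETS.items() if measured.get(k, 0) > limit]

measured = {
    "justification_p95_ms": 430,
    "correction_days": 4,
    "portability_rto_hours": 20,
    "portability_rpo_hours": 0.5,
}
assert breached(measured) == []  # all budgets met in this sample
```

&lt;p&gt;Wiring a check like this into the quarterly dashboard turns “budget” from a policy word into a red/green signal.&lt;/p&gt;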

&lt;p&gt;&lt;strong&gt;References&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;P.J. Denning &amp;amp; B.S. Rousse, “Can Machines Be in Language?”, Communications of the ACM, 67(3):32–35, 2024.&lt;/p&gt;

&lt;p&gt;S. Greengard, “AI Rewrites Coding,” Communications of the ACM, 66(4):12–14, 2023.&lt;/p&gt;

&lt;p&gt;A. Malizia &amp;amp; F. Paternò, “Why Is the Current XAI Not Meeting the Expectations?”, Communications of the ACM, 66(12):20–23, 2023.&lt;/p&gt;

&lt;p&gt;W. Mazurczyk &amp;amp; L. Caviglione, “Cyber Reconnaissance Techniques,” Communications of the ACM, 64(3):86–95, 2021.&lt;/p&gt;

&lt;p&gt;N. Savage, “The Collapse of GPT,” Communications of the ACM, 68(6):11–13, 2025.&lt;/p&gt;

&lt;p&gt;H. Skaug Sætra, M. Coeckelbergh &amp;amp; J. Danaher, “The AI Ethicist’s Dirty Hands Problem,” Communications of the ACM, 66(1):39–41, 2023.&lt;/p&gt;

&lt;p&gt;N.C. Thompson &amp;amp; S. Spanuth, “The Decline of Computers as a General Purpose Technology,” Communications of the ACM, 64(3):64–72, 2021.&lt;/p&gt;

</description>
      <category>aigovernance</category>
      <category>responsibleai</category>
      <category>datagovernance</category>
      <category>security</category>
    </item>
    <item>
      <title>AI at the Edge: Why Hardware and Embedded AI Will Decide the Next Decade</title>
      <dc:creator>Riku Lauttia</dc:creator>
      <pubDate>Fri, 26 Sep 2025 08:45:39 +0000</pubDate>
      <link>https://forem.com/rikulauttia/ai-at-the-edge-why-hardware-and-embedded-ai-will-decide-the-next-decade-l57</link>
      <guid>https://forem.com/rikulauttia/ai-at-the-edge-why-hardware-and-embedded-ai-will-decide-the-next-decade-l57</guid>
      <description>&lt;p&gt;Over the last few years, artificial intelligence has shifted from experimental to indispensable. What once ran in massive cloud data centers is now moving closer to the devices we hold, wear, and deploy in the field. The next decade of AI will be defined not only by smarter algorithms, but by &lt;strong&gt;where and how&lt;/strong&gt; those algorithms run. Hardware AI and embedded, on-device intelligence are set to become the decisive frontier.&lt;/p&gt;

&lt;h2&gt;1) Why the Edge Matters&lt;/h2&gt;

&lt;p&gt;Cloud-based AI fueled most of the breakthroughs of the 2010s and early 2020s. But as models scale, their cost, latency, and environmental footprint can become unsustainable.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Analysts have projected that AI data centers could consume on the order of &lt;strong&gt;~1 trillion liters of water annually by 2028&lt;/strong&gt;, a step-change from today.&lt;/li&gt;
&lt;li&gt;Global demand for GPUs and accelerators has driven &lt;strong&gt;energy and supply-chain pressure&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;This resource intensity collides with rising demand for &lt;strong&gt;real-time, privacy-sensitive AI&lt;/strong&gt;. From drones to medical devices, users can’t always wait for a round trip to the cloud.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That’s why &lt;strong&gt;AI at the edge&lt;/strong&gt; — running locally on chips inside devices — is becoming a necessity, not a luxury.&lt;/p&gt;

&lt;h2&gt;2) Breakthroughs in AI Hardware&lt;/h2&gt;

&lt;p&gt;2025 has already seen advances that signal where the industry is heading:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Compact on-device models.&lt;/strong&gt; Lightweight multilingual embedding and reasoning models now fit within a few hundred MB of RAM, enabling search, semantic understanding, and ranking on phones, laptops, and IoT devices.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Optical/photonic AI chips.&lt;/strong&gt; Research-grade parts that use light for certain operations report &lt;strong&gt;order-of-magnitude efficiency gains&lt;/strong&gt; (often cited up to ~100×) on tasks like image recognition and pattern detection.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Specialized accelerators.&lt;/strong&gt; From major vendors to startups, &lt;strong&gt;domain-specific chips&lt;/strong&gt; for robotics, defense systems, and autonomous vehicles are reducing reliance on bulky data-center inference.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Together, these innovations point to an AI future where &lt;strong&gt;smaller, faster, greener hardware&lt;/strong&gt; is as important as software algorithms.&lt;/p&gt;

&lt;h2&gt;3) Embedded AI in Action&lt;/h2&gt;

&lt;p&gt;Edge AI is not theoretical — it’s already reshaping industries:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Drones &amp;amp; robotics.&lt;/strong&gt; Autonomous aerial, land, and underwater systems rely on embedded AI to make &lt;strong&gt;split-second decisions&lt;/strong&gt; without constant connectivity. In defense, &lt;strong&gt;swarm coordination&lt;/strong&gt; is being tested for missions that would be impossible with cloud-only control loops.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Healthcare devices.&lt;/strong&gt; Wearables and imaging equipment embed models that can &lt;strong&gt;detect anomalies locally&lt;/strong&gt;, protecting patient privacy while reducing time to diagnosis.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Automotive.&lt;/strong&gt; Modern vehicles integrate on-device AI for &lt;strong&gt;lane detection, collision avoidance, and adaptive cruise control&lt;/strong&gt; — all requiring real-time inference with near-zero latency.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If AI is to be truly ubiquitous, it must &lt;strong&gt;live inside the devices we use&lt;/strong&gt; — not just the servers we rent.&lt;/p&gt;

&lt;h2&gt;4) Commercial &amp;amp; Strategic Implications&lt;/h2&gt;

&lt;p&gt;This shift will reshape technology, business, and geopolitics:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;For companies.&lt;/strong&gt; Adopting edge AI can &lt;strong&gt;reduce cloud costs&lt;/strong&gt;, unlock &lt;strong&gt;new revenue streams&lt;/strong&gt;, and build &lt;strong&gt;more resilient systems&lt;/strong&gt;. Leading AI platforms report &lt;strong&gt;multi-billion-dollar ARR trajectories&lt;/strong&gt;, underscoring how central AI infrastructure has become.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;For defense &amp;amp; security.&lt;/strong&gt; Nations that master efficient AI hardware and &lt;strong&gt;swarm-scale autonomy&lt;/strong&gt; will hold decisive advantages. As with prior dual-use technologies, &lt;strong&gt;hardware capability and governance&lt;/strong&gt; will be strategic levers.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;For sustainability.&lt;/strong&gt; With data-center demand straining energy and water systems, &lt;strong&gt;hardware efficiency&lt;/strong&gt; is the only path to environmentally viable, scaled AI.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;5) Looking Ahead&lt;/h2&gt;

&lt;p&gt;The last decade’s AI conversation was dominated by models: GPTs, diffusion, RL breakthroughs. The next decade will be dominated by &lt;strong&gt;deployment&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;How do we &lt;strong&gt;make AI run everywhere&lt;/strong&gt;?&lt;/li&gt;
&lt;li&gt;How do we &lt;strong&gt;power it sustainably&lt;/strong&gt;?&lt;/li&gt;
&lt;li&gt;How do we &lt;strong&gt;secure it&lt;/strong&gt; in critical applications like defense and healthcare?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The answer is clear: &lt;strong&gt;hardware AI and embedded intelligence&lt;/strong&gt; will determine who leads — and who follows — in the global AI race.&lt;/p&gt;




&lt;h3&gt;Conclusion&lt;/h3&gt;

&lt;p&gt;AI at the edge is not a side story — it’s the &lt;strong&gt;main act&lt;/strong&gt; of the 2020s. From efficient chips and on-device models to swarms of autonomous drones, the future of AI will be measured by &lt;strong&gt;how well we embed intelligence&lt;/strong&gt; into the fabric of our machines.&lt;/p&gt;

&lt;p&gt;For entrepreneurs, engineers, and policymakers, the message is the same: &lt;strong&gt;own the edge, and you own the future of AI.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;#ArtificialIntelligence #EdgeComputing #MachineLearning #DeepLearning #AIHardware&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>edgecomputing</category>
      <category>machinelearning</category>
      <category>deeplearning</category>
    </item>
    <item>
      <title>From San Francisco to Europe: The 2025 Playbook for Building Agentic AI That Scales</title>
      <dc:creator>Riku Lauttia</dc:creator>
      <pubDate>Fri, 26 Sep 2025 08:39:50 +0000</pubDate>
      <link>https://forem.com/rikulauttia/from-san-francisco-to-europe-the-2025-playbook-for-building-agentic-ai-that-scales-aih</link>
      <guid>https://forem.com/rikulauttia/from-san-francisco-to-europe-the-2025-playbook-for-building-agentic-ai-that-scales-aih</guid>
      <description>&lt;p&gt;In December 2024 I spent two weeks in San Francisco talking to builders across labs, clouds, and startups. The same patterns I saw there have crystallized in 2025.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsvrua5nhtyljdc93ivdf.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsvrua5nhtyljdc93ivdf.jpg" alt=" " width="800" height="452"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;The 2025 inflection: agents + reasoning + open models&lt;/h2&gt;

&lt;p&gt;The conversation has moved beyond chat. The hottest race now is for &lt;strong&gt;reliable agents&lt;/strong&gt; — systems that can plan, take multi-step actions, and operate software on our behalf. Even the biggest platforms are saying it out loud: Amazon’s AGI group is prioritizing &lt;strong&gt;agents over raw LLM size&lt;/strong&gt; because doing things matters more than talking about them.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Reasoning-first foundation models&lt;/strong&gt; have arrived too. OpenAI’s &lt;strong&gt;o3&lt;/strong&gt; family is optimized for long, careful thinking and is now broadly available via ChatGPT and API. On the open side, &lt;strong&gt;Meta’s Llama 4 and 3.1 (up to 405B)&lt;/strong&gt; pushed the ceiling for openly available models, and &lt;strong&gt;Qwen&lt;/strong&gt; has been iterating fast with &lt;strong&gt;Qwen2.5/Qwen3&lt;/strong&gt; and large MoE variants. There’s also a wave of open agentic research like &lt;strong&gt;Moonshot’s Kimi K2&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Under the hood, &lt;strong&gt;hardware capacity is exploding&lt;/strong&gt; — NVIDIA &lt;strong&gt;Blackwell&lt;/strong&gt; systems roll out through 2025, enabling cheaper inference and larger context, though demand remains intense. And on the edge, &lt;strong&gt;Apple Intelligence&lt;/strong&gt; is mainstreaming on-device AI with a privacy-by-design architecture developers can tap into across iPhone, iPad, and Mac.&lt;/p&gt;

&lt;h2&gt;Why Europe can win this wave&lt;/h2&gt;

&lt;p&gt;Two structural advantages stand out:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Trust by design.&lt;/strong&gt; The &lt;strong&gt;EU AI Act&lt;/strong&gt; is now in force, with prohibitions/AI-literacy obligations already active (Feb 2025), &lt;strong&gt;GPAI duties applying from Aug 2, 2025&lt;/strong&gt;, and &lt;strong&gt;full applicability by Aug 2, 2026&lt;/strong&gt; (with some transitions to 2027). Building to these standards is a global trust signal.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Industry depth.&lt;/strong&gt; Europe owns complex verticals — &lt;strong&gt;telecom, energy, health, manufacturing&lt;/strong&gt; — where reliable agents plus tight governance beat raw benchmarks every time.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;My 7 rules for AI engineers in 2025 (the playbook I brought home)&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Design for agent reliability, not demos.&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Add eval gates that block deploys when plans/actions regress (schema checks, tool-use validation, safety rails). Benchmarks are nice; passing CI with real tasks is better.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Measure unit economics from day one.&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Track &lt;strong&gt;cost per request&lt;/strong&gt; (input/output tokens × model pricing), &lt;strong&gt;p50/p95/p99 latency&lt;/strong&gt;, &lt;strong&gt;cache hit rate&lt;/strong&gt;, and &lt;strong&gt;error budget&lt;/strong&gt;. This is how you scale without surprises.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Compliance is a feature.&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Treat &lt;strong&gt;EU-AI-Act alignment&lt;/strong&gt; like performance work: data lineage, audit logs, PII handling, human-in-the-loop where risk is high. Teams that can show &lt;em&gt;compliant by construction&lt;/em&gt; will close bigger deals faster.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Leverage open models strategically.&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
&lt;strong&gt;Llama&lt;/strong&gt; and &lt;strong&gt;Qwen&lt;/strong&gt; families give you sovereign options: fine-tune locally, serve on your infra, and mix with closed models when you need peak reasoning (e.g., &lt;strong&gt;o3&lt;/strong&gt; for edge cases).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Be hardware-aware.&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Build for the 2025 stack: &lt;strong&gt;longer contexts, MoE routing, paged KV, quantization&lt;/strong&gt;. If you can show a &lt;strong&gt;30–50%&lt;/strong&gt; latency or cost drop on Blackwell-class nodes (or smart batching on current GPUs), you’re speaking the language of production.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;On-device is a first-class path.&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Privacy-sensitive features belong &lt;strong&gt;on the device&lt;/strong&gt; when possible; use secure fallback to cloud for heavier tasks. &lt;strong&gt;Apple’s model tiers&lt;/strong&gt; and &lt;strong&gt;Private Cloud Compute&lt;/strong&gt; make this an easy story to tell customers.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Ship thin slices into real workflows.&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Pick a vertical (&lt;strong&gt;telecom, energy, health&lt;/strong&gt;), automate one painful multi-step task end-to-end, and instrument the results. Repeat. &lt;strong&gt;Careers are built on shipped systems — not whitepapers.&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
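&lt;p&gt;Rule 2’s unit economics fit in a few lines. A sketch — the token prices below are placeholders, not quotes from any provider:&lt;/p&gt;

```python
# Cost per request from token counts and per-million-token prices,
# plus nearest-rank p95 latency over a sample. All numbers are placeholders.
def cost_per_request(input_tokens: int, output_tokens: int,
                     in_price_per_m: float, out_price_per_m: float) -> float:
    """Dollar cost of one request at the given per-million-token prices."""
    total = input_tokens * in_price_per_m + output_tokens * out_price_per_m
    return total / 1_000_000

def p95(latencies_ms: list) -> float:
    """Nearest-rank p95 over a sample of request latencies."""
    ordered = sorted(latencies_ms)
    rank = max(0, round(0.95 * len(ordered)) - 1)
    return ordered[rank]

cost = cost_per_request(1200, 300, in_price_per_m=3.0, out_price_per_m=15.0)
assert cost == 0.0081  # 1200*3 + 300*15 = 8100 per million tokens
```

&lt;p&gt;Tracking these per deployment, alongside cache hit rate and an error budget, is how cost surprises get caught in review rather than on the invoice.&lt;/p&gt;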

&lt;h2&gt;A note from San Francisco&lt;/h2&gt;

&lt;p&gt;What struck me in San Francisco wasn’t just the pace — it was the clarity. Teams that win &lt;strong&gt;obsess over reliability, cost, and trust&lt;/strong&gt;. In 2025, Europe can add something special to that equation: &lt;strong&gt;deployment in regulated, high-impact industries&lt;/strong&gt;. That’s where agentic AI stops being a demo and starts changing how the world runs.&lt;/p&gt;

&lt;p&gt;If this resonates, let’s connect — I’m building &lt;strong&gt;systems-first AI&lt;/strong&gt;, open to collaborations.&lt;/p&gt;

&lt;p&gt;— &lt;strong&gt;Riku Lauttia&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;#ArtificialIntelligence #MachineLearning&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>machinelearning</category>
      <category>softwareengineering</category>
      <category>startup</category>
    </item>
    <item>
      <title>Europe Must Stay Hungry: Why the Next Decade of AI Will Be Decided Here</title>
      <dc:creator>Riku Lauttia</dc:creator>
      <pubDate>Sat, 16 Aug 2025 23:20:03 +0000</pubDate>
      <link>https://forem.com/rikulauttia/europe-must-stay-hungry-why-the-next-decade-of-ai-will-be-decided-here-1ge5</link>
      <guid>https://forem.com/rikulauttia/europe-must-stay-hungry-why-the-next-decade-of-ai-will-be-decided-here-1ge5</guid>
      <description>&lt;p&gt;&lt;em&gt;Ambition, trust, and scale — the European formula for meaningful AI.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmfhvsx0wumtctsj3540z.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmfhvsx0wumtctsj3540z.jpg" alt="Europe’s mix of world-class research, trust-by-design regulation, and deep industry can turn AI momentum into market leadership—here’s the playbook." width="800" height="452"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Artificial Intelligence is the defining technology of our era. The question is no longer &lt;em&gt;if&lt;/em&gt; it will transform industries, but &lt;em&gt;where&lt;/em&gt; the most meaningful breakthroughs and commercial successes will happen.&lt;/p&gt;

&lt;p&gt;Many point to Silicon Valley or China as the natural leaders. I believe Europe has a unique chance — and responsibility — to define the next decade of AI. The challenge is simple: we must &lt;strong&gt;keep our ambition high&lt;/strong&gt; and &lt;strong&gt;convert momentum into market leadership&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Europe’s Strategic Advantages
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;World-class research.&lt;/strong&gt; Universities and labs across the continent consistently advance ML, systems, and applied AI.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Trust by design.&lt;/strong&gt; With frameworks like the EU AI Act, Europe is positioned to set global standards for safe and responsible AI — a long-term advantage.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Industrial depth.&lt;/strong&gt; Healthcare, energy, telecom, manufacturing, mobility: sectors where AI deployment creates real productivity gains and societal impact.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Talent + Trust + Industry&lt;/strong&gt; is a combination no other region matches at this scale.&lt;/p&gt;

&lt;h2&gt;
  
  
  Turning Momentum Into Market Leadership
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Empower emerging talent.&lt;/strong&gt; Give engineers bold problems, mentorship, and global exposure.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Help startups scale at home.&lt;/strong&gt; Capital, customers, and commercialization support.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Pair research excellence with go-to-market execution.&lt;/strong&gt; Breakthroughs matter when they become products used by millions.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Finland and the Nordics: A Focused Case
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Build &lt;strong&gt;AI-native companies&lt;/strong&gt; that scale globally from day one.
&lt;/li&gt;
&lt;li&gt;Apply AI to &lt;strong&gt;critical infrastructure&lt;/strong&gt; — networks, healthcare, energy — where reliability and trust matter most.
&lt;/li&gt;
&lt;li&gt;Keep a &lt;strong&gt;builder’s mindset&lt;/strong&gt;: practical, systems-oriented, export-ready.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;“Good” is not the goal. &lt;strong&gt;World-class&lt;/strong&gt; is.&lt;/p&gt;

&lt;h2&gt;
  
  
  Developer Takeaways (DEV angle)
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Systems &amp;gt; demos.&lt;/strong&gt; Treat data pipelines, evals, infra, and observability (LLMOps) as first-class features.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Compliance-by-default.&lt;/strong&gt; Design data governance, audit trails, and model risk notes alongside the code.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Vertical focus.&lt;/strong&gt; Pick one domain (health, energy, telecom) and ship a thin-slice product that meets real constraints.&lt;/li&gt;
&lt;/ul&gt;
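&lt;p&gt;The compliance-by-default point can be sketched as a minimal append-only audit trail. This is an illustrative assumption, not a standard: the JSONL path and field names are made up, and a real deployment would add retention policy and access control:&lt;/p&gt;

```python
import hashlib
import json
import time

# Minimal append-only audit trail: every model call is logged with
# content hashes, so reviewers can later verify what went in and out
# without the log itself storing sensitive text.
def audit(log_path, model_name, prompt, response):
    record = {
        "ts": time.time(),
        "model": model_name,
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "response_sha256": hashlib.sha256(response.encode()).hexdigest(),
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record
```

&lt;p&gt;Hashing instead of storing raw text keeps the trail auditable while limiting what the log can leak.&lt;/p&gt;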

&lt;h2&gt;
  
  
  What to Build Next (ideas you can ship in a week)
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Inference cost dashboard&lt;/strong&gt; (per-request cost &amp;amp; latency, with alerts).
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Prompt &amp;amp; eval harness&lt;/strong&gt; (regression tests for prompts/agents with pass/fail gates).
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;PII-aware data loader&lt;/strong&gt; (redact/classify on ingest; keep an audit log).&lt;/li&gt;
&lt;/ul&gt;
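&lt;p&gt;The prompt-and-eval-harness idea above can be sketched in a few lines. The model function here is a hypothetical stand-in; in practice you would substitute your own model call and real gates:&lt;/p&gt;

```python
# Tiny regression harness: each case pairs a prompt with a predicate
# ("gate") on the output; the suite reports every prompt whose gate fails.
def run_suite(model_fn, cases):
    failures = []
    for prompt, gate in cases:
        output = model_fn(prompt)
        if not gate(output):
            failures.append(prompt)
    return failures

# Hypothetical stand-in for a real model call.
def fake_model(prompt):
    return prompt.upper()

cases = [
    ("hello", lambda out: "HELLO" in out),
    ("ship it", lambda out: out.startswith("SHIP")),
]
print(run_suite(fake_model, cases))  # an empty list means all gates passed
```

&lt;p&gt;Wired into CI, a non-empty failure list becomes the pass/fail gate that blocks a prompt or agent change from shipping.&lt;/p&gt;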

&lt;h2&gt;
  
  
  Discuss
&lt;/h2&gt;

&lt;p&gt;What’s the &lt;strong&gt;single action&lt;/strong&gt; that would most accelerate Europe’s AI momentum for developers?&lt;/p&gt;

</description>
      <category>ai</category>
      <category>career</category>
      <category>discuss</category>
      <category>startup</category>
    </item>
  </channel>
</rss>
