<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Sara IHSINE</title>
    <description>The latest articles on Forem by Sara IHSINE (@sara_ihsine_1e30c04c).</description>
    <link>https://forem.com/sara_ihsine_1e30c04c</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3716177%2F18c03457-1093-4ff4-917c-b9da8c89712d.png</url>
      <title>Forem: Sara IHSINE</title>
      <link>https://forem.com/sara_ihsine_1e30c04c</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/sara_ihsine_1e30c04c"/>
    <language>en</language>
    <item>
      <title>Why We Built a Framework Nobody Asked For</title>
      <dc:creator>Sara IHSINE</dc:creator>
      <pubDate>Fri, 30 Jan 2026 10:56:23 +0000</pubDate>
      <link>https://forem.com/sara_ihsine_1e30c04c/why-we-built-a-framework-nobody-asked-for-m7e</link>
      <guid>https://forem.com/sara_ihsine_1e30c04c/why-we-built-a-framework-nobody-asked-for-m7e</guid>
      <description>&lt;p&gt;&lt;em&gt;Originally published on &lt;a href="https://medium.com/@azzeddine.ihsine/why-we-built-a-framework-nobody-asked-for-3b1ef24d82b3" rel="noopener noreferrer"&gt;Medium&lt;/a&gt;. Cross-posted here for the dev.to community.&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Quick context for devs:&lt;/strong&gt; &lt;/p&gt;

&lt;p&gt;Traditional docs and tests weren't built for AI-assisted development. When AI co-creates code, you need continuous, verifiable proof chains, not just documentation. This is the story behind D-POAF®, a proof-oriented framework we built to address that gap.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Moment Everything Became Clear
&lt;/h2&gt;

&lt;p&gt;It was a Thursday afternoon.&lt;/p&gt;

&lt;p&gt;My husband, Azzeddine, and I were in a meeting with a client's compliance team: a financial services company rolling out AI-assisted development to accelerate their engineering pipeline.&lt;/p&gt;

&lt;p&gt;The head of compliance asked what sounded like a straightforward question:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;"Can you show me this feature was delivered as specified?"&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The engineering lead didn't hesitate. They pulled up the usual artifacts:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Requirements doc&lt;/li&gt;
&lt;li&gt;Pull request and code review&lt;/li&gt;
&lt;li&gt;Test results&lt;/li&gt;
&lt;li&gt;Release notes&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;"This is all here," they said. "We've documented it."&lt;/p&gt;

&lt;p&gt;The compliance officer paused.&lt;/p&gt;

&lt;p&gt;"That's documentation," they said. "I'm asking for traceability."&lt;/p&gt;

&lt;p&gt;Then the follow-up that changed the room:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;"Can you walk me through the chain, from intent to implementation to outcome, including what changed along the way, and why?"&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Silence.&lt;/p&gt;

&lt;p&gt;Not because the engineers were incompetent.&lt;br&gt;&lt;br&gt;
Not because they cut corners.&lt;br&gt;&lt;br&gt;
But because they were using tools and practices built for a world where humans write every line of code, make every decision, and own every artifact end-to-end.&lt;/p&gt;

&lt;p&gt;That world doesn't exist anymore.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Pattern We Couldn't Ignore
&lt;/h2&gt;

&lt;p&gt;That meeting wasn't unique. Over the next months, we saw the same tension surface everywhere: regulated environments, critical systems, fast-moving product teams.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0v91cqytbhy4mnoeeo4t.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0v91cqytbhy4mnoeeo4t.png" alt="Infographic showing accountability questions from healthcare, banking, tech, and compliance sectors" width="800" height="288"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;The same accountability question surfaces across industries when AI enters the engineering workflow&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Different people said it differently:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Healthcare teams&lt;/strong&gt; building AI-assisted workflows: &lt;em&gt;"How do we keep a clean trail for submissions?"&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Banking engineering leads&lt;/strong&gt; modernizing legacy systems: &lt;em&gt;"How do we show compliance when AI is in the loop?"&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Tech companies&lt;/strong&gt; shipping AI-integrated features weekly: &lt;em&gt;"How do we keep accountability when code is co-created?"&lt;/em&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Different contexts. Same underlying problem:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Teams were moving faster than their ability to explain, justify, and own what they shipped.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;And when you can't do that, it stops being a "process issue." It becomes an operational, legal, and reputational risk.&lt;/p&gt;




&lt;h2&gt;
  
  
  When AI Becomes Part of Engineering
&lt;/h2&gt;

&lt;p&gt;AI doesn't just "help" anymore. It &lt;strong&gt;participates&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;In modern teams, AI can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;generate code from requirements&lt;/li&gt;
&lt;li&gt;propose architecture options&lt;/li&gt;
&lt;li&gt;refactor modules&lt;/li&gt;
&lt;li&gt;draft tests and documentation&lt;/li&gt;
&lt;li&gt;summarize reviews and decisions&lt;/li&gt;
&lt;li&gt;flag issues in production&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This isn't "AI tools." This is &lt;strong&gt;AI-native software engineering&lt;/strong&gt;, where AI is woven into every phase of the lifecycle and influences both decisions and outputs alongside humans.&lt;/p&gt;

&lt;p&gt;That's where our traditional practices start to break, in ways that feel familiar if you've lived through them.&lt;/p&gt;

&lt;h3&gt;
  
  
  1) The decision that "nobody" made
&lt;/h3&gt;

&lt;p&gt;An engineer asks an assistant for implementation options. The assistant recommends a path. It's phrased confidently. It's fast. It works.&lt;/p&gt;

&lt;p&gt;Two weeks later someone asks:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;"Why did we choose this approach?"&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;And the honest answer is fuzzy:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;"It seemed reasonable at the time."&lt;/li&gt;
&lt;li&gt;"That's what the assistant suggested."&lt;/li&gt;
&lt;li&gt;"I don't remember the tradeoffs."&lt;/li&gt;
&lt;li&gt;"The discussion is somewhere."&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Nobody acted irresponsibly. But the decision trail didn't survive the speed.&lt;/p&gt;

&lt;h3&gt;
  
  
  2) The PR that passed… without anyone truly owning it
&lt;/h3&gt;

&lt;p&gt;The pull request looks clean. Tests are green. The summary is polished, because AI wrote the summary. Reviewers skim and approve.&lt;/p&gt;

&lt;p&gt;Later, an incident happens and the question becomes:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;"Who understood this change well enough to vouch for it?"&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Everyone participated.&lt;br&gt;&lt;br&gt;
No one truly owned it.&lt;/p&gt;

&lt;h3&gt;
  
  
  3) The requirement that quietly drifted
&lt;/h3&gt;

&lt;p&gt;The intent starts clear. But along the way:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;a prompt gets tweaked&lt;/li&gt;
&lt;li&gt;a generated implementation is "slightly adjusted"&lt;/li&gt;
&lt;li&gt;an edge case is rationalized away&lt;/li&gt;
&lt;li&gt;a test is rewritten to match the new behavior&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In the end, everything looks aligned because all artifacts reflect the final state.&lt;/p&gt;

&lt;p&gt;But the business asks:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;"When did we decide to treat that edge case differently?"&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;No one can point to a moment. It just… happened.&lt;/p&gt;

&lt;h3&gt;
  
  
  4) The "same input, different output" problem
&lt;/h3&gt;

&lt;p&gt;A team reruns a workflow that previously generated a stable result. Now the output changes.&lt;/p&gt;

&lt;p&gt;Same repo. Same ticket. Same engineer. &lt;strong&gt;Different behavior&lt;/strong&gt;, because the model updated, the system prompt changed, a tool version shifted, or the retrieval context evolved.&lt;/p&gt;

&lt;p&gt;Now the question isn't only &lt;em&gt;"what changed in the code?"&lt;/em&gt;&lt;br&gt;&lt;br&gt;
It's &lt;em&gt;"what changed in the system that created the code?"&lt;/em&gt;&lt;/p&gt;
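
&lt;p&gt;&lt;em&gt;A purely illustrative sketch (not part of the D-POAF® specification; every name and field below is hypothetical): one way to give that question an answer is to fingerprint the generation context alongside the code, so a model or prompt update shows up as a diffable change rather than a mystery.&lt;/em&gt;&lt;/p&gt;

```python
# Illustrative sketch only: neither D-POAF(R) nor any specific tool
# mandates this data model. The idea: fingerprint the system that
# produced the code, not just the code itself.
import hashlib
import json

def generation_fingerprint(model: str, system_prompt: str,
                           tool_versions: dict, retrieval_ids: list) -> str:
    """Stable hash of everything that shaped an AI-assisted output."""
    payload = json.dumps({
        "model": model,
        "system_prompt": system_prompt,
        "tools": sorted(tool_versions.items()),
        "retrieval": sorted(retrieval_ids),
    }, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()[:16]

# Same repo, same ticket, same engineer; only the model version moved.
before = generation_fingerprint("model-v1", "review the diff",
                                {"linter": "2.1"}, ["doc-17"])
after = generation_fingerprint("model-v2", "review the diff",
                               {"linter": "2.1"}, ["doc-17"])
print(before != after)  # True: the model update is now a visible change
```

&lt;p&gt;&lt;em&gt;Stored next to each generated artifact, a record like this lets "what changed in the system that created the code?" be answered by comparing two fingerprints instead of reconstructing memory.&lt;/em&gt;&lt;/p&gt;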




&lt;h2&gt;
  
  
  What Actually Broke
&lt;/h2&gt;

&lt;p&gt;AI didn't break software engineering. It exposed what was already fragile:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;our ability to connect intent → decisions → implementation → validation → outcomes in a reliable way.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Most teams assume accountability exists because they have artifacts:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;tickets&lt;/li&gt;
&lt;li&gt;docs&lt;/li&gt;
&lt;li&gt;PRs&lt;/li&gt;
&lt;li&gt;reviews&lt;/li&gt;
&lt;li&gt;test runs&lt;/li&gt;
&lt;li&gt;audit logs&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;But these were designed for a &lt;strong&gt;human-centric workflow&lt;/strong&gt;, where decisions are made in meetings, code is written by people, and reasoning is implicit because the author can be asked later.&lt;/p&gt;

&lt;p&gt;When AI participates materially, that assumption fails.&lt;/p&gt;

&lt;p&gt;You can still produce documentation. But your ability to explain and defend the chain becomes inconsistent, especially at scale, across teams, across time.&lt;/p&gt;

&lt;p&gt;In high-stakes environments, &lt;strong&gt;"we think it's right" isn't a durable posture.&lt;/strong&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  The Realization We Couldn't Unsee
&lt;/h2&gt;

&lt;p&gt;We kept hearing versions of the same demand:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;"Show me the chain."&lt;/li&gt;
&lt;li&gt;"Show me what changed and why."&lt;/li&gt;
&lt;li&gt;"Show me who decided."&lt;/li&gt;
&lt;li&gt;"Show me what validated this behavior."&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Not more documentation.&lt;br&gt;&lt;br&gt;
Not more dashboards.&lt;br&gt;&lt;br&gt;
Not another checklist.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;A system that preserves accountability as the work happens, even with AI in the loop.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;That's when the idea crystallized into something simple (and uncomfortable):&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;In AI-native engineering, legitimacy can't rely on authority alone; it has to rely on evidence that stays connected end-to-end.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;So instead of "add governance later," we flipped the order:&lt;/p&gt;

&lt;h3&gt;
  
  
  Proof-first engineering
&lt;/h3&gt;

&lt;p&gt;Start with the question: &lt;em&gt;what would we need to demonstrate this work is aligned, justified, and safe over time?&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Then build the lifecycle so those demonstrations are generated &lt;strong&gt;continuously&lt;/strong&gt;, not retroactively.&lt;/p&gt;

&lt;p&gt;(Yes, "proof" is the word we ended up using for that standard of evidence. Not as a buzzword, but as a requirement.)&lt;/p&gt;




&lt;h2&gt;
  
  
  What We Tried (And Why It Didn't Work)
&lt;/h2&gt;

&lt;p&gt;We looked for existing solutions.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Agile?&lt;/strong&gt; Great for collaboration. Mostly silent on AI participation and decision traceability.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;DevOps?&lt;/strong&gt; Great for automation and monitoring. Not designed to preserve intent-to-outcome accountability.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Compliance frameworks?&lt;/strong&gt; Strong on controls and audits. Often episodic and external to daily engineering flow.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AI governance guidelines?&lt;/strong&gt; Frequently principles and risk taxonomies. Not an operational engineering model for teams shipping weekly.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We could bolt on documentation.&lt;br&gt;&lt;br&gt;
We could add approval gates.&lt;br&gt;&lt;br&gt;
We could create dashboards.&lt;/p&gt;

&lt;p&gt;But none of that created what teams needed most:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;continuous, verifiable linkage from intent → decisions → artifacts → outcomes, even with AI participating.&lt;/strong&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  How D-POAF® Emerged
&lt;/h2&gt;

&lt;p&gt;That's what became &lt;strong&gt;D-POAF®&lt;/strong&gt;: the Decentralized Proof-Oriented AI Framework.&lt;/p&gt;

&lt;p&gt;A reference model for AI-native software engineering built around five principles:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Proof Before Authority:&lt;/strong&gt; decisions are legitimate when justified by verifiable evidence, not hierarchy or automation.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Decentralized Decision-Making:&lt;/strong&gt; authority is distributed across humans and AI with explicit boundaries, not concentrated in a black box.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Evidence-Driven Living Governance:&lt;/strong&gt; rules evolve based on observed outcomes, not static mandates.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Traceability as a First-Class Property:&lt;/strong&gt; intent, decisions, actions, artifacts, and outcomes remain linkable and auditable.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Human Accountability Is Non-Transferable:&lt;/strong&gt; even with AI autonomy, humans retain explicit responsibility for boundaries, escalations, and acceptance.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh9e1uojcjjxnfz35b6zf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh9e1uojcjjxnfz35b6zf.png" alt="Pentagon diagram showing D-POAF's five principles: Proof Before Authority, Decentralized Decision-Making, Evidence-Driven Living Governance, Traceability as First-Class Property, and Human Accountability Is Non-Transferable" width="709" height="581"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Five foundational principles that shift legitimacy from authority to verifiable evidence&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;We structured work into &lt;strong&gt;Waves&lt;/strong&gt;, units of verifiable progress that move through three macro-phases:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Instruct &amp;amp; Scope → Shape &amp;amp; Align → Execute &amp;amp; Evolve&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;And we defined three continuous "proof streams" (the evidence a team can generate and refresh):&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Proof of Delivery (PoD):&lt;/strong&gt; what was built aligns with intent, with a traceable chain across decisions and artifacts&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Proof of Value (PoV):&lt;/strong&gt; what shipped produced measurable value (not just output, but outcome)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Proof of Reliability (PoR):&lt;/strong&gt; the system continues to behave as intended as context changes over time&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcytxq9nm2x3s7d1ekr2z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcytxq9nm2x3s7d1ekr2z.png" alt="Diagram showing three proof streams: PoD (Proof of Delivery with alignment icon), PoV (Proof of Value with target icon), and PoR (Proof of Reliability with shield icon)" width="800" height="342"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;The three continuous proof streams that sustain accountability in AI-native engineering&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The goal is simple to state:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;You can walk the chain in any direction&lt;/strong&gt;, from an outcome back to decisions, from decisions back to intent, from a change back to what validated it, from an artifact back to who accepted accountability.&lt;/p&gt;
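
&lt;p&gt;&lt;em&gt;To make "walk the chain" concrete, here is a minimal sketch under our own assumptions (the data model and all names are hypothetical, not prescribed by the D-POAF® specification): each intent, decision, artifact, and outcome becomes a node with explicit upstream links and an accountable human, so the chain can be traversed in either direction.&lt;/em&gt;&lt;/p&gt;

```python
# Illustrative only: D-POAF(R) does not mandate a data model.
# Field names are hypothetical, chosen to show the idea of a
# chain that can be walked from any node back to intent.
from dataclasses import dataclass, field

@dataclass
class ChainNode:
    """One link in an intent -> decision -> artifact -> outcome chain."""
    node_id: str
    kind: str        # "intent" | "decision" | "artifact" | "outcome"
    summary: str
    accepted_by: str  # the human who owns accountability for this node
    parents: list = field(default_factory=list)  # upstream node ids

def walk_back(nodes: dict, start_id: str) -> list:
    """Walk from any node back to the originating intent(s)."""
    trail, frontier = [], [start_id]
    while frontier:
        node = nodes[frontier.pop()]
        trail.append(node)
        frontier.extend(node.parents)
    return trail

nodes = {
    "REQ-1": ChainNode("REQ-1", "intent", "Reject duplicate payments", "sara"),
    "DEC-7": ChainNode("DEC-7", "decision", "Use idempotency keys", "azzeddine", ["REQ-1"]),
    "PR-42": ChainNode("PR-42", "artifact", "Implementation and tests", "sara", ["DEC-7"]),
    "OUT-3": ChainNode("OUT-3", "outcome", "No duplicates observed", "sara", ["PR-42"]),
}

trail = walk_back(nodes, "OUT-3")
print([n.node_id for n in trail])  # ['OUT-3', 'PR-42', 'DEC-7', 'REQ-1']
```

&lt;p&gt;&lt;em&gt;The point isn't this particular schema; it's that every hop, including who accepted accountability, is a stored edge rather than a memory someone has to reconstruct later.&lt;/em&gt;&lt;/p&gt;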




&lt;h2&gt;
  
  
  Why We're Sharing It
&lt;/h2&gt;

&lt;p&gt;Azzeddine and I spent three years developing D-POAF®. We validated it through research, formalized it with an ISBN-published specification (979-10-415-8736-0), deposited it with the National Library, and open-sourced it.&lt;/p&gt;

&lt;p&gt;But this isn't "our framework."&lt;/p&gt;

&lt;p&gt;The problem it addresses (building accountable, traceable, governed systems in AI-native environments) belongs to everyone:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;engineers trying to ship responsibly&lt;/li&gt;
&lt;li&gt;auditors trying to verify what "good" looks like now&lt;/li&gt;
&lt;li&gt;regulators translating laws into operational reality&lt;/li&gt;
&lt;li&gt;product teams trying to scale AI without losing control&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Nobody asked for another framework.&lt;/p&gt;

&lt;p&gt;But the world does need a reference model for proof-oriented AI-native engineering, because AI is already reshaping how software is built, whether our processes are ready or not.&lt;/p&gt;

&lt;p&gt;So we're building it in public, with the community.&lt;/p&gt;

&lt;p&gt;Because frameworks like this don't get stronger in isolation.&lt;br&gt;&lt;br&gt;
They get stronger when real teams pressure-test them.&lt;/p&gt;




&lt;h2&gt;
  
  
  Explore D-POAF®
&lt;/h2&gt;

&lt;p&gt;📖 &lt;a href="https://d-poaf.org/resources/D-POAF-Canonical-V1.pdf" rel="noopener noreferrer"&gt;Canonical Specification (PDF)&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;📖 &lt;a href="https://d-poaf.org/d-poaf-framework-map/" rel="noopener noreferrer"&gt;Framework Map&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;🌐 &lt;a href="https://d-poaf.org" rel="noopener noreferrer"&gt;Website&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;💻 &lt;a href="https://github.com/inovionix/d-poaf" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;💬 Community: &lt;a href="https://discord.gg/DMZMeHxzNd" rel="noopener noreferrer"&gt;Discord&lt;/a&gt; | &lt;a href="https://www.linkedin.com/groups/16635010/" rel="noopener noreferrer"&gt;LinkedIn Group&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Sara Ihsine is co-creator of D-POAF® alongside Azzeddine Ihsine. Both are research engineers specializing in AI-native software engineering, governance, and proof-oriented practices.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Connect: &lt;a href="https://www.linkedin.com/in/sara-ihsine-34799748/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>softwareengineering</category>
      <category>governance</category>
      <category>opensource</category>
    </item>
  </channel>
</rss>
