<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Vinayprasad</title>
    <description>The latest articles on Forem by Vinayprasad (@indeterminate0).</description>
    <link>https://forem.com/indeterminate0</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3688266%2F3751f0f2-a27a-44b2-b153-6e905452db62.jpg</url>
      <title>Forem: Vinayprasad</title>
      <link>https://forem.com/indeterminate0</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/indeterminate0"/>
    <language>en</language>
    <item>
      <title>From Probabilistic to Repeatable: Using Reflection to Make AI Systems More Reliable</title>
      <dc:creator>Vinayprasad</dc:creator>
      <pubDate>Tue, 14 Apr 2026 06:26:46 +0000</pubDate>
      <link>https://forem.com/indeterminate0/from-probabilistic-to-repeatable-using-reflection-to-make-ai-systems-more-reliable-5cok</link>
      <guid>https://forem.com/indeterminate0/from-probabilistic-to-repeatable-using-reflection-to-make-ai-systems-more-reliable-5cok</guid>
      <description>&lt;p&gt;In production systems, correctness isn’t enough.&lt;/p&gt;

&lt;p&gt;What matters is whether the system behaves the same way every time.&lt;/p&gt;

&lt;p&gt;One thing becomes very clear when you start using AI systems in real workflows: the outputs are good, but not always consistent. You can give the same input multiple times and still get slightly different answers. Sometimes they’re correct, sometimes they’re not. This isn’t a flaw in implementation—it’s how these systems are designed. They are fundamentally probabilistic.&lt;/p&gt;

&lt;p&gt;In production systems, however, “good answers” are not enough. What we really need is consistent behavior, predictable outcomes, and repeatable fixes. We want deterministic systems. Since LLMs are inherently probabilistic, the goal is to push them as close as possible to deterministic—consistent and repeatable in practice.&lt;/p&gt;

&lt;p&gt;Most AI systems today operate in a single-shot manner: input goes in, output comes out, and the process stops there. This approach directly exposes the probabilistic nature of the model. A more practical approach is to introduce iteration. Instead of stopping at the first answer, the system should try a solution, check whether it worked, improve it based on feedback, and repeat if necessary. This simple shift—from single-shot to iterative execution—is where reflection comes in.&lt;/p&gt;

&lt;p&gt;Reflection doesn’t eliminate randomness. Instead, it reduces the impact of incorrect outputs by introducing feedback and correction. Each iteration acts as a filter: weak or incorrect solutions are identified and replaced, while better ones move forward. Over time, this process converges toward a more stable and repeatable outcome.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fagmyj23zd0axc4e17yhp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fagmyj23zd0axc4e17yhp.png" alt=" "&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A simple example makes this clearer. Consider a basic multiplication problem like 27 × 14. A single-shot system might produce an incorrect answer like 328 and stop there. A reflection-based system, however, would re-check the calculation, identify the mistake, and correct it to 378. The improvement here doesn’t come from a better model—it comes from verification and iteration.&lt;/p&gt;
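&lt;p&gt;A minimal sketch of that check-and-correct loop. The hard-coded candidate list stands in for repeated model attempts; in a real system each candidate would come from a fresh model call:&lt;/p&gt;

```python
def verify(a, b, answer):
    # Independent check: recompute with a trusted tool (plain arithmetic here).
    return answer == a * b

def solve_with_reflection(a, b, candidates):
    # 'candidates' simulates successive model attempts (hypothetical stand-in).
    for answer in candidates:
        if verify(a, b, answer):
            return answer
    return None

# A single-shot system would return 328 and stop; reflection rejects it
# and accepts the corrected 378.
result = solve_with_reflection(27, 14, [328, 378])
print(result)
```

&lt;p&gt;The improvement comes entirely from the verifier, not from a smarter generator.&lt;/p&gt;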

&lt;p&gt;There’s a subtle difference in how people use AI. A vibe coder typically prompts the model, takes the answer, and moves on. A software developer approaches it differently—they run the output, test it, question it, and improve it. The model is the same, but the outcome is not. One treats AI as a final answer, while the other treats it as a starting point. Reflection brings that second approach into the system itself, allowing it to continue until the result actually works.&lt;/p&gt;

&lt;p&gt;This becomes even more relevant in real systems. Imagine an alert where API latency suddenly spikes. A first attempt might be to restart the service. If the issue persists, the system observes the logs and notices database timeouts. At that point, it becomes clear that restarting the service didn’t address the root cause. A second attempt focuses on fixing the database connection, after which the system stabilizes. The key difference here is that the system didn’t stop at the first action—it adapted based on feedback.&lt;/p&gt;

&lt;p&gt;Under the hood, a reflection-based system introduces a loop around the model. Instead of a simple input-to-output flow, it follows a cycle: generate a solution, execute it, observe the results, reflect on what happened, and improve the next attempt. This loop is what transforms the system from a one-shot generator into something that can iteratively move toward correctness.&lt;/p&gt;

&lt;p&gt;In practice, this can be implemented with a simple control loop. The system generates a solution, executes it, checks whether it succeeded, and if not, incorporates feedback into the next attempt. Each iteration reduces error and increases confidence in the final outcome.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7gfl81jpdh2elok6b6ho.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7gfl81jpdh2elok6b6ho.png" alt=" "&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;attempt&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="nf"&gt;range&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="n"&gt;solution&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;generate&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;problem&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;execute&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;solution&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;result&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;status&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;success&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="k"&gt;break&lt;/span&gt;

    &lt;span class="n"&gt;feedback&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;analyze&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;result&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;problem&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;enrich&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;problem&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;feedback&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;What makes this approach effective is the presence of strong feedback signals. Logs, metrics, test results, and system states provide a clear indication of whether a solution worked or failed. The stronger and more objective these signals are, the more reliable the reflection process becomes. Without them, the system is essentially guessing.&lt;/p&gt;

&lt;p&gt;Of course, this approach comes with trade-offs. Iteration adds latency, increases compute usage, and introduces additional system complexity. But in most real-world scenarios, especially in production systems, reliability matters more than speed. A slightly slower system that consistently arrives at the correct outcome is far more valuable than a fast system that is unreliable.&lt;/p&gt;

&lt;p&gt;Reflection works best in scenarios where outcomes can be clearly validated—debugging, incident remediation, code execution, and data pipeline recovery are good examples. It is less useful in tasks where correctness is subjective or where immediate responses are required.&lt;/p&gt;

&lt;p&gt;Ultimately, AI systems don’t become reliable just by generating better answers. They become reliable when they can evaluate their own outputs and improve them. Not by eliminating randomness, but by correcting it until the outcome stabilizes.&lt;/p&gt;

&lt;p&gt;That shift—from generating answers to iteratively improving them—is what moves us closer to building systems that are not just intelligent, but dependable.&lt;/p&gt;

&lt;p&gt;Because in the end, it’s not about getting the right answer once—it’s about getting it right, every time it matters.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>programming</category>
      <category>beginners</category>
      <category>automation</category>
    </item>
    <item>
      <title>Where Should AI Actually Sit in Your System?</title>
      <dc:creator>Vinayprasad</dc:creator>
      <pubDate>Mon, 06 Apr 2026 05:04:37 +0000</pubDate>
      <link>https://forem.com/indeterminate0/where-should-ai-actually-sit-in-your-system-3p7g</link>
      <guid>https://forem.com/indeterminate0/where-should-ai-actually-sit-in-your-system-3p7g</guid>
<description>&lt;p&gt;AI is becoming a key part of modern system design. Many teams are exploring how to integrate it across different layers of their architecture. While this opens many possibilities, it also creates a design challenge: finding where AI adds real value versus where simpler approaches work better. Getting this balance right determines whether a system remains reliable and maintainable as it grows.&lt;br&gt;
Start by Breaking the System, Not Choosing the Tool&lt;br&gt;
Before deciding to use AI, rules, or any database strategy, it’s helpful to break the system into logical layers. Most backend systems, whether in fintech, DevOps, or internal tools, tend to fall into three parts: the input layer, the decision layer, and the execution layer.&lt;br&gt;
The input layer handles how data enters the system, such as APIs, UI interactions, or external triggers. The decision layer includes business logic, orchestration, and state transitions. The execution layer is where actual changes occur, like database writes, API calls, or infrastructure actions.&lt;br&gt;
When you view systems this way, the placement of AI becomes clearer.&lt;br&gt;
Where AI Fits Well&lt;br&gt;
AI works best at the edges of the system, especially when dealing with unstructured or human-generated input. For instance, if a user types “restart the failed job for order 123,” AI can turn that into a structured format like { action: restart_job, order_id: 123 }. This is a strong example because the input is unclear and needs interpretation.&lt;br&gt;
AI can also help with decision support by ranking options, classifying inputs, or suggesting actions. Even in these cases, AI should assist rather than take control.&lt;br&gt;
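&lt;/p&gt;

&lt;p&gt;As a rough sketch, the interpretation step can hand off to a deterministic guard. The regex below is a hypothetical stand-in for the model call, and the action names are illustrative:&lt;/p&gt;

```python
import re

def extract_intent(text):
    # Hypothetical stand-in for an LLM call that turns free text into
    # a structured command; a real system would call a model here.
    match = re.search(r"restart the failed job for order (\d+)", text)
    if match:
        return {"action": "restart_job", "order_id": int(match.group(1))}
    return {"action": "unknown"}

# Deterministic guard: the AI may propose, but only known actions pass.
ALLOWED_ACTIONS = {"restart_job", "cancel_job"}

def validate(intent):
    return intent.get("action") in ALLOWED_ACTIONS

intent = extract_intent("restart the failed job for order 123")
assert validate(intent)
```

&lt;p&gt;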
Where AI Becomes Risky&lt;br&gt;
Problems arise when AI moves deeper into the system, especially in decision-making or execution. If a large language model (LLM) directly decides what actions to take and executes them, the system effectively becomes a black box. It becomes difficult to understand why something happened, reproduce issues, or enforce constraints.&lt;br&gt;
What looks simple in a demo—“user → AI → action”—can become hard to manage in production. Small changes in prompts, model versions, or inputs can lead to different outcomes, making debugging significantly more complex.&lt;br&gt;
Think in Terms of Control and Execution&lt;br&gt;
A better way to design systems is to separate control from execution. AI can help interpret input or suggest intent, but execution should remain deterministic. This means any action that changes system state—like updating a database, triggering workflows, or calling external services—should go through validation layers supported by rules and structured data.&lt;br&gt;
This separation ensures that if AI makes a mistake in interpretation, the system can catch it before anything irreversible occurs.&lt;br&gt;
Understanding Your System’s Tolerance for Uncertainty&lt;br&gt;
Every system has a certain tolerance for uncertainty. In areas like payments, infrastructure automation, or order processing, even small mistakes can have serious consequences. These systems need strong guarantees, predictable behavior, and clear audit trails.&lt;br&gt;
On the other hand, systems like chat interfaces, search, or recommendations can handle some level of approximation. In these cases, AI can be used more freely.&lt;br&gt;
The goal is not to eliminate AI, but to control where uncertainty is allowed.&lt;br&gt;
Why Structured Databases Still Matter&lt;br&gt;
As AI adoption rises, there’s a tendency to rely heavily on vector databases for storing and retrieving knowledge. While these databases are powerful, they solve a very specific problem: semantic similarity.&lt;br&gt;
Structured databases provide something different and essential: guarantees.&lt;br&gt;
They enforce constraints like uniqueness and valid relationships. They support transactions, ensuring that operations either complete fully or not at all. Most importantly, they provide predictable and repeatable results. If you query a structured database with a specific key, you will always get the same answer.&lt;br&gt;
In systems where correctness matters—like mapping an error code to a resolution or validating a state transition—this certainty is crucial.&lt;br&gt;
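&lt;/p&gt;

&lt;p&gt;A small illustration of that guarantee, using an in-memory SQLite table with hypothetical error codes:&lt;/p&gt;

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE resolutions (error_code TEXT PRIMARY KEY, action TEXT)")
conn.execute("INSERT INTO resolutions VALUES ('E_TIMEOUT', 'retry_with_backoff')")

def resolve(code):
    # A keyed lookup against a structured store.
    row = conn.execute(
        "SELECT action FROM resolutions WHERE error_code = ?", (code,)
    ).fetchone()
    return row[0] if row else None

# Repeatable by construction: the same key always yields the same answer.
assert resolve("E_TIMEOUT") == resolve("E_TIMEOUT")
```

&lt;p&gt;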
Where Vector Databases Fit&lt;br&gt;
Vector databases are useful when you need to find “something similar” rather than “something exact.” They are effective for searching through unstructured data such as documents, logs, or knowledge bases. They use approximate nearest neighbor algorithms, which trade perfect accuracy for speed.&lt;br&gt;
This approach works well for cases like document retrieval or context enrichment. However, in systems where even a small error could lead to incorrect actions, this approximation becomes a risk.&lt;br&gt;
State Machines vs Generative Decisions&lt;br&gt;
Most backend systems are basically state machines. They progress through well-defined states—created, processing, completed, failed—with clear rules for transitions. Rule-based systems handle this well by enforcing valid transitions and rejecting invalid ones.&lt;br&gt;
AI, however, does not understand or enforce these constraints inherently. It generates outputs based on patterns rather than strict rules, making it less suitable for controlling state transitions directly.&lt;br&gt;
Execution Safety and Reliability&lt;br&gt;
When systems perform actions, they need to be safe to retry, resistant to duplication, and easy to observe. Rule-based systems can enforce conditions like “only retry if the current state is failed,” ensuring predictable behavior.&lt;br&gt;
If AI is used directly for execution decisions without validation, it can lead to unintended actions—duplicate retries, skipped steps, or incorrect operations. Over time, this introduces instability into the system.&lt;br&gt;
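&lt;/p&gt;

&lt;p&gt;One way to make those guarantees concrete is a transition table consulted before any action runs. The state names below are illustrative:&lt;/p&gt;

```python
# Valid transitions for a simple order state machine (illustrative names).
TRANSITIONS = {
    "created": {"processing"},
    "processing": {"completed", "failed"},
    "failed": {"processing"},   # a retry is only legal from 'failed'
    "completed": set(),
}

def can_transition(current, target):
    # Deterministic rule: reject anything not in the table.
    return target in TRANSITIONS.get(current, set())

assert can_transition("failed", "processing")         # retry allowed
assert not can_transition("completed", "processing")  # duplicate retry rejected
```

&lt;p&gt;Whatever an AI layer suggests, a guard like this rejects it unless the transition is explicitly permitted.&lt;/p&gt;

&lt;p&gt;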
Observability and Debugging&lt;br&gt;
Deterministic systems are easier to debug because the path from input to output is clear. You can track what rule was applied and why a decision was made. AI systems require additional layers of observability—tracking prompts, model versions, and retrieved context—and even then, reproducing an issue may not be easy.&lt;br&gt;
This difference becomes significant in production environments where quick diagnosis and resolution are essential.&lt;br&gt;
Cost Beyond Tokens&lt;br&gt;
While AI systems are often evaluated based on token cost, the real cost comes from latency, retries, infrastructure scaling, and operational overhead. Systems that rely heavily on AI may be faster to build at first but can be more expensive to maintain.&lt;br&gt;
In contrast, structured and rule-based systems typically require more upfront design but are generally more predictable and cost-effective over time.&lt;br&gt;
A Practical Architecture That Works&lt;br&gt;
A practical approach that works well is to let AI handle interpretation while keeping execution deterministic. In this model, user input flows through an AI layer that extracts intent, which is then validated using rules and structured data before any action is taken. The response can optionally be formatted using AI again.&lt;br&gt;
Vector databases can be included if needed to retrieve contextual information, but they should be optional and not replace core system logic.&lt;br&gt;
A Simple Way to Decide&lt;br&gt;
When designing a system, a few questions can help guide the decision:&lt;br&gt;
• Do you have a clear identifier or key? Use a structured database.&lt;br&gt;
• Can the logic be expressed as rules or state transitions? Use a rule engine.&lt;br&gt;
• Is the input unstructured or ambiguous? Use AI.&lt;br&gt;
• What happens if the system makes a mistake? If the impact is high, avoid using AI in execution paths.&lt;br&gt;
Final Thought&lt;br&gt;
Strong systems don’t try to replace deterministic logic with AI. Instead, they use AI where it makes sense—at the boundaries where interpretation is needed—while keeping the core of the system grounded in structured data and clear rules.&lt;br&gt;
AI is most effective when it is limited, not when it is given full control.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>agents</category>
      <category>powerfuldevs</category>
      <category>systemdesign</category>
    </item>
    <item>
      <title>How Large Systems Rethink Communication</title>
      <dc:creator>Vinayprasad</dc:creator>
      <pubDate>Fri, 16 Jan 2026 14:37:33 +0000</pubDate>
      <link>https://forem.com/indeterminate0/how-large-systems-rethink-communication-lho</link>
      <guid>https://forem.com/indeterminate0/how-large-systems-rethink-communication-lho</guid>
      <description>&lt;p&gt;Have you ever noticed how systems that worked perfectly fine suddenly start behaving differently as they grow? It’s not because the early decisions were wrong — it’s just that scale introduces new challenges, and some assumptions that felt safe initially start getting stretched. One of the first things teams revisit in this process is how different parts of the system communicate.&lt;br&gt;
Most systems start with synchronous APIs. And why not? They’re easy to reason about, simple to debug, and make the flow of requests and responses clear. One service calls another, gets an answer, and moves on. For a long time, this works beautifully. Latency is predictable, dependencies are few, feedback is immediate, and issues are easy to spot. Teams can move fast, and the system behaves exactly as expected.&lt;br&gt;
But then the system grows. Suddenly, traffic patterns are uneven, some requests spike, and others take longer than anticipated. New consumers join the system, old ones evolve, and processing capacity doesn’t always keep up with the incoming load. Coordinating when work happens across services becomes harder, and timeouts, retries, and monitoring start appearing everywhere. This isn’t a failure. The system is still doing what it was designed to do. It’s just working under new conditions.&lt;br&gt;
At this stage, the question subtly changes. Instead of asking, “Can this service respond right now?” teams start asking, “Can we make sure this work happens reliably, even if it takes time?” That’s where messaging often enters the picture. Messaging allows one part of the system to record intent and another to act on it when it can. Temporary backlogs are expected, and slower components don’t immediately block faster ones.&lt;br&gt;
Messaging doesn’t replace APIs — it complements them. Most modern systems end up using both. APIs remain for interactions that need immediacy, while messaging handles workloads that can tolerate flexible timing.&lt;br&gt;
Enterprise messaging systems like TIBCO EMS have been around for a long time to address these needs. EMS works very well in environments where delivery guarantees matter, consumers are stable, message flows are predictable, and processing happens close to event creation. Many large organisations still rely on EMS for core integrations. But as systems become more distributed and dynamic, additional needs arise — particularly around retaining data longer and allowing multiple consumers to act independently.&lt;br&gt;
This is where Kafka comes in. By treating events as a durable, ordered log, Kafka allows consumers to replay data when needed, multiple teams to read the same events independently, and processing to happen without tight coordination. Recovery becomes more predictable, and the system can handle growing complexity without changing the API-based interactions that already work. Kafka isn’t replacing earlier messaging systems — it’s expanding the architectural toolbox for modern needs, where history matters as much as delivery.&lt;br&gt;
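&lt;/p&gt;

&lt;p&gt;The log idea can be sketched in a few lines. This toy in-memory version is not Kafka, but it shows the property that matters: consumers read from offsets they choose, independently of one another:&lt;/p&gt;

```python
# Toy in-memory event log (not real Kafka) illustrating durable, ordered
# events with offset-based, independent consumption.
log = []

def publish(event):
    log.append(event)

def read_from(offset):
    # Any consumer can re-read from any offset, without coordinating
    # with other consumers.
    return log[offset:]

publish({"type": "order_created", "id": 1})
publish({"type": "order_paid", "id": 1})

assert read_from(0) == read_from(0)   # replay is repeatable
assert len(read_from(1)) == 1         # a late consumer still sees history
```

&lt;p&gt;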
As systems mature, communication choices become more deliberate. Some interactions need agreement on time, some on state, and some on both. There’s no single right way to design it. The best architectures recognise which guarantees each part of the system actually needs and pick the right pattern accordingly.&lt;/p&gt;

&lt;p&gt;So when systems rethink communication, it’s not because something went wrong. It’s because teams now understand the trade-offs better. Synchronous APIs feel natural early on, messaging helps reduce tight coordination later, and durable event streams make complex recovery and replay possible. Large systems evolve because experience teaches the teams what works under different constraints. That evolution is a sign of maturity — not a technical debt.&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>eventdriven</category>
      <category>pubsub</category>
    </item>
    <item>
      <title>Reducing Architectural Drift When Using AI for Small Changes</title>
      <dc:creator>Vinayprasad</dc:creator>
      <pubDate>Mon, 05 Jan 2026 08:00:02 +0000</pubDate>
      <link>https://forem.com/indeterminate0/reducing-architectural-drift-when-using-ai-for-small-changes-4ljg</link>
      <guid>https://forem.com/indeterminate0/reducing-architectural-drift-when-using-ai-for-small-changes-4ljg</guid>
      <description>&lt;p&gt;In my earlier blog, I wrote about architectural drift and how AI, while accelerating delivery, can quietly push systems away from their original intent. This post is not meant to counter that argument or claim a fix. Instead, it tries to answer a more grounded question: if drift is inevitable, how do we reduce it when using AI for small, everyday change requests?&lt;br&gt;
This is written from observation, not authority. Most of what follows comes from noticing patterns—what tends to go wrong, what helps a little, and what at least makes the damage visible sooner.&lt;br&gt;
Architecture, in both the earlier blog and this one, does not mean diagrams or heavyweight documentation. It’s not about how neat the boxes look. Architecture here means the decisions that make change safe: where boundaries exist, who owns what responsibility, how data flows, and which assumptions the system quietly depends on to behave predictably. When those decisions are not preserved, AI doesn’t break the system immediately—it just optimizes locally until the global shape no longer makes sense.&lt;br&gt;
One important shift in mindset helped me personally. Instead of treating AI as a faster way to generate code, I started treating it as something that needs to be constrained before it can be helpful. Most AI-assisted changes in real systems are small. But even small changes, when repeated without architectural awareness, compound into drift. So the goal stopped being “get the correct output” and became “preserve the shape while making the change.”&lt;br&gt;
This is where non-negotiables start to matter. These are not suggestions or preferences. They are constraints that must hold true regardless of the change being requested. Things like not introducing new service boundaries, not duplicating business logic, not altering data contracts, or not crossing architectural layers. When these are left implicit, AI fills in the gaps with assumptions that feel reasonable in isolation but are harmful in aggregate.&lt;br&gt;
Another lesson was where these constraints belong. They cannot be an afterthought. The architectural intent has to be part of the original prompt, not something clarified later. Each change request should start with the same foundation, explain the impact first, and only then move toward implementation. Once a change is applied on shaky intent, fixing it later almost always costs more.&lt;br&gt;
Code duplication deserves special mention because it’s a subtle failure mode. AI is very good at re-implementing logic in slightly cleaner ways. Without explicit instructions, it will create parallel paths that look fine but slowly fragment behavior. If reuse matters—and it usually does—it needs to be said clearly. “Reuse existing logic.” “Do not create parallel implementations.” “Extend, don’t replicate.” These aren’t things the model reliably infers on its own.&lt;br&gt;
Context windows add another layer of complexity. The AI can only reason over what it currently sees. Feeding it large files or entire repositories often backfires, because constraints pushed earlier in the conversation can silently drop out of scope. A more realistic approach is to assume the model only has visibility into a small set of files and to name them explicitly. Architecture then becomes a compact working context, not a document dump.&lt;br&gt;
A simple prompt structure helped more than expected: state the non-negotiables first, define the assumed scope, and ask the model to explain architectural impact before writing code. Not because it makes the AI smarter, but because it forces a pause before execution. That pause alone reduces accidental drift.&lt;br&gt;
It’s important to be honest about the limits of this approach. None of this is a cure. Architectural drift is a natural property of evolving systems. The goal here isn’t to prevent change, but to ensure change happens deliberately rather than by inference. These practices don’t eliminate risk; they just make it visible earlier, while correction is still possible.&lt;br&gt;
AI is exceptionally good at moving systems forward. Architecture exists to make sure we don’t move forward blindly. If architectural intent isn’t carried explicitly into our prompts, the system will still evolve—just not in a direction we consciously chose.&lt;br&gt;
This isn’t about control.&lt;br&gt;
It’s about preserving shape while allowing change.&lt;/p&gt;

&lt;p&gt;Example Architecturally Constrained Change Request Prompt&lt;/p&gt;

&lt;p&gt;You are implementing a small, incremental change to an existing system.&lt;br&gt;
The following architectural constraints are non-negotiable and must be preserved:&lt;br&gt;
The system follows a layered architecture.&lt;br&gt;
Business logic resides exclusively in the domain layer.&lt;br&gt;
Adapters are responsible only for data translation, not decision-making.&lt;br&gt;
Existing service boundaries and data contracts must remain unchanged.&lt;br&gt;
No new abstractions, services, or parallel implementations may be introduced.&lt;br&gt;
Assume your effective context is limited to the following files:&lt;br&gt;
OrderService.py, OrderValidator.py, OrderRepository.py.&lt;br&gt;
Do not infer, recreate, or duplicate logic that may exist outside this scope.&lt;br&gt;
Change request:&lt;br&gt;
Add validation for a new optional field in the order payload.&lt;br&gt;
Reuse existing validation mechanisms wherever applicable.&lt;br&gt;
Avoid duplication and ensure behavior remains centralized.&lt;br&gt;
Before implementing the change, explicitly state:&lt;br&gt;
the architectural layer where the change belongs&lt;br&gt;
whether the change risks violating any constraint listed above&lt;br&gt;
If a violation is identified, stop and explain the risk.&lt;br&gt;
If no violation exists, implement the minimal code change required, ensuring the system’s architectural shape remains intact.&lt;/p&gt;

</description>
      <category>programming</category>
      <category>ai</category>
      <category>githubcopilot</category>
      <category>antigravity</category>
    </item>
    <item>
      <title>Are We Building Software or Letting It Drift?</title>
      <dc:creator>Vinayprasad</dc:creator>
      <pubDate>Thu, 01 Jan 2026 08:54:57 +0000</pubDate>
      <link>https://forem.com/indeterminate0/are-we-building-software-or-letting-it-drift-5el7</link>
      <guid>https://forem.com/indeterminate0/are-we-building-software-or-letting-it-drift-5el7</guid>
<description>&lt;p&gt;Imagine shipping features at a pace that felt impossible just a few years ago: prompt an AI, generate clean code, run the tests, deploy. Everything flows smoothly and progress feels unstoppable. Until, months later, a seemingly minor change triggers a cascade of unexpected behavior that no one can fully explain.&lt;br&gt;
Over the last few years, I’ve been watching a pattern repeat across teams and projects. It shows up in legacy-to-cloud migrations, but it doesn’t stop there. I see it just as often in brand-new systems built almost entirely with AI assistance — fast, confident, and delivered at a pace that would have been unthinkable a few years ago.&lt;br&gt;
The development landscape has changed dramatically. The underlying risk hasn’t.&lt;br&gt;
AI has made it easy to write a lot of code very quickly. Whether we’re migrating old systems or “vibe coding” new ones, the feedback loop feels great. You describe what you want, the AI produces something plausible, tests pass, and the system appears to work. Delivery feels smooth. Progress feels real.&lt;br&gt;
And that’s exactly where things start to drift.&lt;br&gt;
The issue isn’t that AI writes bad code. Often, it writes code that looks cleaner and more structured than what humans would have produced under pressure. The problem is that AI doesn’t just implement designs — it infers them. And every time we clarify, rephrase, or repeat a prompt, we give it another opportunity to reinterpret what the system should be.&lt;br&gt;
Over time, the system quietly becomes something no one explicitly designed.&lt;br&gt;
This is easiest to see during migrations. A system is supposed to be moved, not changed. But AI reorganizes modules for clarity, introduces abstractions that weren’t there before, and removes logic that appears redundant. Each change makes sense locally. Collectively, they reshape the architecture.&lt;br&gt;
What’s more concerning is that the same thing happens in new projects — often faster. When a system is built incrementally through prompts and follow-up corrections, architecture emerges implicitly. There’s rarely a single moment where someone says, “This is the shape of the system.” Instead, the shape is the sum of many small interpretations made by an agent optimizing for the last instruction it received.&lt;br&gt;
Repeated prompting makes this worse, not better. Each clarification nudges the model toward a slightly different understanding of intent. Boundaries blur. Responsibilities shift. What began as a clean mental model slowly fragments across files and layers.&lt;br&gt;
One of the most visible symptoms of this drift is redundant and dead code. AI is remarkably good at adding logic to handle edge cases it’s no longer sure are covered. It duplicates behavior across components because it no longer sees the full picture within its context window. Over time, you end up with multiple versions of “the same thing,” none of which can be safely removed because no one is fully confident what depends on what anymore.&lt;br&gt;
The code still works. That’s the trap.&lt;br&gt;
Understanding it becomes harder with every iteration. Reading it no longer explains why the system behaves the way it does — only how it happens to behave right now.&lt;br&gt;
Accidental refactoring is another quiet failure mode. You ask for a small change. The AI rewrites more than you expected because, from its perspective, consistency is improvement. Execution order shifts. Error handling changes shape. Responsibilities move just enough to matter later, but not enough to break tests today.&lt;br&gt;
Nothing is obviously wrong, so it ships.&lt;br&gt;
These problems don’t announce themselves immediately. Architectural drift rarely causes instant failures. What it does is erode the system’s ability to evolve safely. Months later, when performance degrades, or partial failures cascade, or a seemingly minor change triggers an incident, teams find themselves debugging behavior that no longer maps cleanly to any original design.&lt;br&gt;
At that point, it doesn’t matter whether the system was migrated or greenfield. The problem is the same: the architecture exists only as an emergent property of code generated over time, not as an intentional, shared understanding.&lt;br&gt;
Some teams try to address this with better prompts. Clearer instructions. Tighter constraints. That helps, but it’s not a solution. Large systems don’t fit into a single context window. AI fills gaps by inference. Redundancy creeps in. Dead code accumulates. Drift continues — just more quietly.&lt;br&gt;
The uncomfortable truth is that AI doesn’t cause architectural drift on its own. Drift happens when intent isn’t made explicit and maintained over time. AI simply accelerates the consequences of that omission.&lt;br&gt;
Used carelessly, AI makes it easy to move fast without realizing what’s being changed. Used thoughtfully, it can surface uncertainty and force conversations we’ve been postponing for years. But it cannot hold architectural intent on our behalf.&lt;br&gt;
That responsibility doesn’t go away just because code is easier to produce.&lt;br&gt;
If we treat AI-assisted development as a shortcut to delivery rather than an architectural event, we shouldn’t be surprised when systems become harder to reason about, harder to change, and more fragile over time.&lt;br&gt;
The code will compile.&lt;br&gt;
The tests will pass.&lt;br&gt;
The problems will wait.&lt;br&gt;
Architectural drift doesn’t break systems immediately. It just ensures that when they do break, no one knows what the system was meant to be or which assumptions still hold. Every change becomes risky not due to lack of skill, but because intent was never held steady while everything else accelerated.&lt;/p&gt;

</description>
      <category>programming</category>
      <category>ai</category>
      <category>architecture</category>
    </item>
  </channel>
</rss>
