<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Dhruvi</title>
    <description>The latest articles on Forem by Dhruvi (@dhruvi_21).</description>
    <link>https://forem.com/dhruvi_21</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3894569%2Fe31cc617-f38a-4448-a25e-dfb161e3364d.png</url>
      <title>Forem: Dhruvi</title>
      <link>https://forem.com/dhruvi_21</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/dhruvi_21"/>
    <language>en</language>
    <item>
      <title>What Actually Breaks When You Connect AI to Real Enterprise Data</title>
      <dc:creator>Dhruvi</dc:creator>
      <pubDate>Mon, 27 Apr 2026 13:15:43 +0000</pubDate>
      <link>https://forem.com/dhruvi_21/what-actually-breaks-when-you-connect-ai-to-real-enterprise-data-55ba</link>
      <guid>https://forem.com/dhruvi_21/what-actually-breaks-when-you-connect-ai-to-real-enterprise-data-55ba</guid>
      <description>&lt;p&gt;Connecting AI to real enterprise data sounds straightforward.&lt;/p&gt;

&lt;p&gt;Give it access to your systems.&lt;br&gt;
Let it read data.&lt;br&gt;
Let it take actions.&lt;/p&gt;

&lt;p&gt;In reality, this is where things start breaking.&lt;/p&gt;

&lt;p&gt;Not because the AI is wrong.&lt;br&gt;
Because the data and systems underneath are not stable enough.&lt;/p&gt;

&lt;h2&gt;
  &lt;strong&gt;The assumption that fails&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Most people assume:&lt;/p&gt;

&lt;p&gt;&lt;em&gt;if the data exists, AI can use it&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;In real systems, data exists in inconsistent states.&lt;/p&gt;

&lt;p&gt;Same entity&lt;br&gt;
different systems&lt;br&gt;
different values&lt;/p&gt;

&lt;p&gt;An order might be:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;completed in one system&lt;/li&gt;
&lt;li&gt;pending in another&lt;/li&gt;
&lt;li&gt;duplicated somewhere else&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;AI doesn’t know which one is “correct”. It just sees all of them.&lt;/p&gt;

&lt;h2&gt;
  &lt;strong&gt;1. Inconsistent data&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Enterprise systems are rarely in sync.&lt;/p&gt;

&lt;p&gt;You have:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;ERPs&lt;/li&gt;
&lt;li&gt;CRMs&lt;/li&gt;
&lt;li&gt;spreadsheets&lt;/li&gt;
&lt;li&gt;custom tools&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Each one updates at different times. Some fail silently.&lt;/p&gt;

&lt;p&gt;So when AI queries across them, it gets conflicting answers.&lt;/p&gt;

&lt;p&gt;This leads to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;wrong insights&lt;/li&gt;
&lt;li&gt;incorrect decisions&lt;/li&gt;
&lt;li&gt;broken automations&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The issue isn’t AI accuracy.&lt;br&gt;
It’s data consistency.&lt;/p&gt;
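&lt;p&gt;A minimal sketch of what guarding against this can look like: before the AI layer consumes a record, compare its state across sources and surface conflicts instead of silently picking one. The source names and record shape here are hypothetical:&lt;/p&gt;

```python
# Compare one order's status across several systems and flag conflicts.
# The source names and record shape are hypothetical placeholders.

def reconcile(order_id, sources):
    """Return the agreed status, or raise if the sources disagree."""
    statuses = {}
    for name, lookup in sources.items():
        record = lookup(order_id)
        if record is not None:
            statuses[name] = record["status"]
    distinct = set(statuses.values())
    if len(distinct) == 1:
        return distinct.pop()
    # Conflicting states: surface the disagreement instead of guessing.
    raise ValueError(f"conflict for {order_id}: {statuses}")

# Usage: each source is just a callable that fetches a record by id.
erp = {"A-1": {"status": "completed"}}
crm = {"A-1": {"status": "completed"}}
agreed = reconcile("A-1", {"erp": erp.get, "crm": crm.get})  # "completed"
```

&lt;p&gt;The point is the failure mode: when sources disagree, the safe move is to stop and flag, not to let the AI guess.&lt;/p&gt;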

&lt;h2&gt;
  &lt;strong&gt;2. Missing context&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;AI works on what it can see.&lt;/p&gt;

&lt;p&gt;But a lot of enterprise logic lives outside the data:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;manual processes&lt;/li&gt;
&lt;li&gt;unwritten rules&lt;/li&gt;
&lt;li&gt;team-specific workflows&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Example:&lt;br&gt;
A record looks valid in the system.&lt;br&gt;
But internally, everyone knows it shouldn’t be processed yet.&lt;/p&gt;

&lt;p&gt;AI has no way to infer that unless the logic is formalized.&lt;/p&gt;

&lt;p&gt;So it acts on incomplete understanding.&lt;/p&gt;
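&lt;p&gt;Formalizing that logic can be as small as writing the unwritten rule down as a guard the AI layer must pass. The rule and field names here are invented for illustration:&lt;/p&gt;

```python
# Encode an "unwritten rule" as an explicit check the AI layer must pass.
# The rule and field names below are hypothetical examples.

def is_processable(record):
    """A record that looks valid may still be blocked by internal policy."""
    if record["status"] != "valid":
        return False
    # The unwritten rule, written down: records on legal hold must wait.
    if record.get("legal_hold", False):
        return False
    return True
```

&lt;p&gt;Once the rule exists as code, the AI no longer has to infer it, and neither does the next engineer.&lt;/p&gt;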

&lt;h2&gt;
  &lt;strong&gt;3. Unreliable actions&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Reading data is one problem. Acting on it is another.&lt;/p&gt;

&lt;p&gt;When AI takes actions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;creating orders&lt;/li&gt;
&lt;li&gt;updating records&lt;/li&gt;
&lt;li&gt;sending communications&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It depends on underlying systems behaving predictably.&lt;/p&gt;

&lt;p&gt;But those systems:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;retry&lt;/li&gt;
&lt;li&gt;time out&lt;/li&gt;
&lt;li&gt;partially fail&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Without safeguards, AI actions can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;execute twice&lt;/li&gt;
&lt;li&gt;fail halfway&lt;/li&gt;
&lt;li&gt;create inconsistent states&lt;/li&gt;
&lt;/ul&gt;
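&lt;p&gt;The usual safeguard is an idempotency key on every action. A minimal sketch, using an in-memory dict where a real system would use a durable store:&lt;/p&gt;

```python
# Make AI-triggered actions safe under retries with an idempotency key.
# The in-memory dict is a stand-in for a durable dedup store.

_executed = {}  # maps idempotency key to the first result

def run_once(key, action):
    """Execute action at most once per key; retries get the first result."""
    if key in _executed:
        return _executed[key]
    result = action()
    _executed[key] = result
    return result
```

&lt;p&gt;A retry with the same key returns the original result instead of executing twice, failing halfway again, or creating a second inconsistent state.&lt;/p&gt;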

&lt;h2&gt;
  &lt;strong&gt;4. Timing issues&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Enterprise systems are not real-time in a clean way.&lt;/p&gt;

&lt;p&gt;There are delays:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;sync jobs&lt;/li&gt;
&lt;li&gt;queues&lt;/li&gt;
&lt;li&gt;batch updates&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;AI might:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;read data before it’s updated&lt;/li&gt;
&lt;li&gt;act on stale information&lt;/li&gt;
&lt;li&gt;trigger workflows too early&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Everything looks correct individually.&lt;br&gt;
But the sequence is wrong.&lt;/p&gt;
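&lt;p&gt;One simple defense is a freshness check: refuse to act on data that may predate the latest sync. The window and record shape here are illustrative assumptions, not a recommendation:&lt;/p&gt;

```python
import operator
import time

# Refuse to act on data that may predate the latest sync.
# The freshness window and record shape are illustrative assumptions.

MAX_AGE_SECONDS = 300  # tolerate up to five minutes of sync lag

def is_fresh(record, now=None):
    """True when the record was synced recently enough to act on."""
    now = time.time() if now is None else now
    age = now - record["synced_at"]
    # operator.le(a, b) reads as "a is at most b"
    return operator.le(age, MAX_AGE_SECONDS)
```

&lt;p&gt;It does not fix the sequencing problem, but it stops the AI from confidently acting on data that the batch job has not caught up with yet.&lt;/p&gt;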

&lt;h2&gt;
  &lt;strong&gt;What changed for me&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;I stopped thinking of AI as the hard part.&lt;/p&gt;

&lt;p&gt;The hard part is making the environment predictable enough for AI to operate.&lt;/p&gt;

&lt;p&gt;You need:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;consistent data&lt;/li&gt;
&lt;li&gt;clear state&lt;/li&gt;
&lt;li&gt;reliable execution&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Without that, AI just amplifies existing problems faster.&lt;/p&gt;

&lt;h2&gt;
  &lt;strong&gt;The shift&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;AI doesn’t fix messy systems.&lt;/p&gt;

&lt;p&gt;It exposes them.&lt;/p&gt;

&lt;p&gt;If your data is inconsistent, AI will surface conflicting answers.&lt;br&gt;
If your workflows are fragile, AI will break them faster.&lt;/p&gt;

&lt;p&gt;This is the kind of problem we deal with constantly at BrainPack, turning fragmented and inconsistent systems into something AI can actually operate on. The AI layer only works once the underlying infrastructure becomes predictable enough to trust.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>backend</category>
      <category>systemdesign</category>
      <category>architecture</category>
    </item>
    <item>
      <title>The Code Pattern That Keeps Our Integrations Stable in Production</title>
      <dc:creator>Dhruvi</dc:creator>
      <pubDate>Thu, 23 Apr 2026 16:31:30 +0000</pubDate>
      <link>https://forem.com/dhruvi_21/the-code-pattern-that-keeps-our-integrations-stable-in-production-3ad4</link>
      <guid>https://forem.com/dhruvi_21/the-code-pattern-that-keeps-our-integrations-stable-in-production-3ad4</guid>
      <description>&lt;p&gt;When you connect real systems, ERPs, APIs, AI workflows, things don’t behave cleanly.&lt;/p&gt;

&lt;p&gt;Requests retry.&lt;br&gt;
Webhooks get sent twice.&lt;br&gt;
Sometimes something succeeds, but you don’t get the response.&lt;/p&gt;

&lt;p&gt;And then you see it:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;duplicate orders&lt;/li&gt;
&lt;li&gt;repeated emails&lt;/li&gt;
&lt;li&gt;workflows triggering twice&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is normal in production.&lt;/p&gt;

&lt;p&gt;The pattern that keeps this under control is idempotency.&lt;/p&gt;

&lt;h2&gt;
  &lt;strong&gt;The rule&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Every action should be safe to run more than once.&lt;/p&gt;

&lt;p&gt;Same input → same result.&lt;/p&gt;

&lt;p&gt;If the same request hits your system twice, nothing should break and nothing extra should happen.&lt;/p&gt;
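&lt;p&gt;Concretely, that means deduplicating on a key the caller supplies. A sketch with an in-memory dict standing in for whatever durable store you actually use:&lt;/p&gt;

```python
# Dedup incoming requests by an idempotency key supplied by the caller.
# Names are illustrative; any durable store works in place of the dict.

processed = {}

def handle(request):
    key = request["idempotency_key"]
    if key in processed:
        # Second delivery of the same request: return the stored outcome.
        return processed[key]
    outcome = {"order_id": f"ord-{key}", "created": True}
    processed[key] = outcome
    return outcome
```

&lt;p&gt;Same input, same result: the second delivery returns the first outcome, and nothing extra happens.&lt;/p&gt;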

&lt;h2&gt;
  &lt;strong&gt;Where things usually go wrong&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;1. Partial execution&lt;/strong&gt;&lt;br&gt;
Something starts, then crashes halfway.&lt;br&gt;
A retry comes in and runs everything again.&lt;/p&gt;

&lt;p&gt;If you’re not careful, you create duplicates.&lt;/p&gt;

&lt;p&gt;So instead of “just create”, you always check:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;does this already exist?&lt;/li&gt;
&lt;li&gt;should I update instead?&lt;/li&gt;
&lt;/ul&gt;
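&lt;p&gt;In code, that check-then-write shape is an upsert. The orders table here is a hypothetical stand-in for your real store:&lt;/p&gt;

```python
# "Check, then upsert" instead of blind create, so a retry cannot duplicate.
# The orders dict is a hypothetical stand-in for a real table.

orders = {}

def upsert_order(external_id, fields):
    existing = orders.get(external_id)
    if existing is None:
        orders[external_id] = dict(fields)  # first attempt: create
    else:
        existing.update(fields)  # retry: update in place, no duplicate
    return orders[external_id]
```

&lt;p&gt;Run it twice with the same external id and you still have exactly one order.&lt;/p&gt;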

&lt;p&gt;&lt;strong&gt;2. Multi-step flows&lt;/strong&gt;&lt;br&gt;
Most integrations don’t stop at one system.&lt;/p&gt;

&lt;p&gt;You might:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;create something in one system&lt;/li&gt;
&lt;li&gt;then send it to another&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If it fails in the middle, the retry should continue from where it stopped, not start from zero.&lt;/p&gt;
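&lt;p&gt;One way to get that resume behavior is to checkpoint each completed step. The step names and checkpoint store below are invented for the sketch:&lt;/p&gt;

```python
# Resume a multi-step flow from the last completed step on retry.
# Step names and the checkpoint store are hypothetical.

checkpoints = {}  # flow_id mapped to the set of completed step names

def run_flow(flow_id, steps):
    done = checkpoints.setdefault(flow_id, set())
    for name, step in steps:
        if name in done:
            continue  # already completed on a previous attempt
        step()
        done.add(name)
```

&lt;p&gt;If step two fails, the retry skips step one and continues from where it stopped instead of starting from zero.&lt;/p&gt;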

&lt;p&gt;&lt;strong&gt;3. Side effects&lt;/strong&gt;&lt;br&gt;
This is where it gets visible.&lt;/p&gt;

&lt;p&gt;Things like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;sending emails&lt;/li&gt;
&lt;li&gt;charging payments&lt;/li&gt;
&lt;li&gt;triggering automations&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If these run twice, users notice immediately.&lt;/p&gt;

&lt;p&gt;So you need to control when they run and make sure they don’t fire again on retries.&lt;/p&gt;
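&lt;p&gt;The simplest control is a once-per-event guard in front of the side effect. The sent-log here is an in-memory stand-in for a durable record of emitted effects:&lt;/p&gt;

```python
# Fire a side effect (like an email) at most once per logical event.
# The sent-log set is a stand-in for a durable record of emitted effects.

sent_log = set()

def send_email_once(event_id, send):
    if event_id in sent_log:
        return False  # a retry reached us; the email already went out
    send()
    sent_log.add(event_id)
    return True
```

&lt;p&gt;Retries still happen; the user just never sees them.&lt;/p&gt;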

&lt;h2&gt;
  &lt;strong&gt;What changed for me&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;I stopped assuming things run once.&lt;/p&gt;

&lt;p&gt;Now I assume:&lt;/p&gt;

&lt;p&gt;everything can retry&lt;br&gt;
everything can duplicate&lt;br&gt;
things can fail halfway&lt;/p&gt;

&lt;p&gt;So the question is always:&lt;/p&gt;

&lt;p&gt;what happens if this runs again?&lt;/p&gt;

&lt;p&gt;In systems that run all the time, this isn’t an edge case.&lt;/p&gt;

&lt;p&gt;This is how the system behaves every day.&lt;/p&gt;

&lt;p&gt;And once you build with that in mind, a lot of production issues just stop showing up.&lt;/p&gt;

&lt;p&gt;This is the kind of problem we deal with constantly at BrainPack, making unpredictable systems stable enough to layer AI on top of them. If the underlying operations are not reliable under retries, nothing built above them can be trusted.&lt;/p&gt;

</description>
      <category>api</category>
      <category>architecture</category>
      <category>backend</category>
      <category>systemdesign</category>
    </item>
  </channel>
</rss>
