<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Sanket Parmar</title>
    <description>The latest articles on Forem by Sanket Parmar (@sanket-parmar).</description>
    <link>https://forem.com/sanket-parmar</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3890841%2F039623c0-f21b-458e-9318-3d0fad73936d.jpg</url>
      <title>Forem: Sanket Parmar</title>
      <link>https://forem.com/sanket-parmar</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/sanket-parmar"/>
    <language>en</language>
    <item>
      <title>My First Three Months With Angular: Things I Wish I Knew Earlier</title>
      <dc:creator>Sanket Parmar</dc:creator>
      <pubDate>Mon, 11 May 2026 06:40:30 +0000</pubDate>
      <link>https://forem.com/sanket-parmar/my-first-three-months-with-angular-things-i-wish-i-knew-earlier-4mj5</link>
      <guid>https://forem.com/sanket-parmar/my-first-three-months-with-angular-things-i-wish-i-knew-earlier-4mj5</guid>
      <description>&lt;p&gt;My Angular journey started with React. I thought the transition would take a week or two; it was three weeks just to feel comfortable. And by the second month, I was still hitting walls.&lt;/p&gt;

&lt;p&gt;This is not a tutorial. It's everything I wish someone had told me before I opened the Angular docs for the first time.&lt;/p&gt;

&lt;h2&gt;The Learning Curve Is Real, And It's Steep at the Start&lt;/h2&gt;

&lt;p&gt;Angular is an opinionated framework. It has a specific way of doing almost everything (routing, forms, HTTP calls, state management), and it expects you to learn that way before you try to work around it.&lt;/p&gt;

&lt;p&gt;Coming from React, where you have a lot of freedom in how you structure things, this felt suffocating at first. Why do I need a module for this? Why is this service injected this way? Why does the CLI generate five files when I want a component?&lt;/p&gt;

&lt;p&gt;There are answers. But Angular doesn't explain them to you upfront. You have to learn them by building something real and running into the consequences of not understanding them.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;My advice:&lt;/strong&gt; don't fight the framework in the first month. Just follow its patterns, even when they feel excessive. The reasoning becomes clear later.&lt;/p&gt;

&lt;h2&gt;Modules Confused Me More Than Anything Else&lt;/h2&gt;

&lt;p&gt;The module system was my biggest early frustration.&lt;/p&gt;

&lt;p&gt;In Angular, everything lives inside a &lt;a href="https://dev.to/deepachaurasia1/what-are-modules-in-angular-aj4"&gt;module&lt;/a&gt;: components, services, pipes, directives, you name it. Before Angular 14 introduced standalone components, you couldn't use a component anywhere without declaring it in a module first.&lt;/p&gt;

&lt;p&gt;Forget to import a module, and your component won't work, with an error message that won't tell you why.&lt;/p&gt;

&lt;p&gt;I spent two hours once debugging a template error that turned out to be a missing &lt;code&gt;FormsModule&lt;/code&gt; import in my AppModule. The fix was one line. The diagnosis took two hours because I didn't know which module was responsible for what.&lt;/p&gt;

&lt;p&gt;Once I understood that modules are basically containers that control what's available where, everything clicked. But that understanding took time.&lt;/p&gt;
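&lt;p&gt;To make the container idea concrete, here's a minimal AppModule sketch (the file and component names are illustrative, not from my actual project). Importing &lt;code&gt;FormsModule&lt;/code&gt; here is exactly the kind of one-line fix my two-hour bug needed:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;// app.module.ts - a minimal sketch, not a full app
import { NgModule } from '@angular/core';
import { BrowserModule } from '@angular/platform-browser';
import { FormsModule } from '@angular/forms';

import { AppComponent } from './app.component';

@NgModule({
  // components, pipes, and directives this module owns
  declarations: [AppComponent],
  // other modules whose exports those declarations can use;
  // without FormsModule here, ngModel fails in their templates
  imports: [BrowserModule, FormsModule],
  bootstrap: [AppComponent]
})
export class AppModule {}
&lt;/code&gt;&lt;/pre&gt;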

&lt;h2&gt;Services and Dependency Injection Took a Mindset Shift&lt;/h2&gt;

&lt;p&gt;Angular's &lt;a href="https://dev.to/jagadeeshmusali/angular-dependency-injection-in-depth-1l8n"&gt;dependency injection system&lt;/a&gt; is one of its strongest features. It also confused me completely for the first few weeks.&lt;/p&gt;

&lt;p&gt;The concept is straightforward: instead of creating instances of a class yourself, you tell Angular what you need, and it provides it. Angular manages the lifecycle, the scope, and the sharing of that instance across your app.&lt;/p&gt;

&lt;p&gt;Here's a basic example. You create a service:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import { Injectable } from '@angular/core';

@Injectable({
  providedIn: 'root'
})
export class UserService {
  getUser() {
    return { name: 'Alex', role: 'admin' };
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;And you inject it into a component:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import { Component } from '@angular/core';
import { UserService } from './user.service';

@Component({
  selector: 'app-profile',
  template: `&amp;lt;p&amp;gt;{{ user.name }}&amp;lt;/p&amp;gt;`
})
export class ProfileComponent {
  user = this.userService.getUser();

  constructor(private userService: UserService) {}
}
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Angular handles creating and sharing the &lt;code&gt;UserService&lt;/code&gt; instance.&lt;/p&gt;

&lt;p&gt;What tripped me was &lt;code&gt;providedIn: 'root'&lt;/code&gt;. That line registers the service at the application root level, meaning the same instance gets shared across the entire app. If you provide a service inside a specific module instead, each instance of that module gets its own copy. I learned that distinction the hard way when I couldn't figure out why two components weren't sharing the same data.&lt;/p&gt;
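&lt;p&gt;The module-scoped variant looks like this (the &lt;code&gt;AdminModule&lt;/code&gt; name is illustrative). One nuance I only learned later: the separate-instance behavior shows up with lazy-loaded modules, which get their own injector.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import { NgModule } from '@angular/core';
import { UserService } from './user.service';

@NgModule({
  // When this module is lazy-loaded, it gets its OWN UserService
  // instance, unlike providedIn: 'root', which shares one app-wide
  providers: [UserService]
})
export class AdminModule {}
&lt;/code&gt;&lt;/pre&gt;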

&lt;h2&gt;The Difference Between Template-Driven and Reactive Forms&lt;/h2&gt;

&lt;p&gt;Angular gives you two ways to build forms. I tried both in my first month.&lt;/p&gt;

&lt;p&gt;Template-driven forms are simpler to set up. You write most of the logic in the HTML template using directives like &lt;code&gt;ngModel&lt;/code&gt;. They're quick for basic forms, but harder to test and harder to control programmatically.&lt;/p&gt;

&lt;p&gt;Reactive forms are more verbose upfront, but you control everything in the component class. Validation, value changes, and dynamic fields are all handled in TypeScript, not in the template.&lt;/p&gt;

&lt;p&gt;I started with template-driven because the docs introduced them first. Halfway through building a multi-step form, I realized I needed to change field validation based on other field values. That's painful in template-driven forms.&lt;/p&gt;

&lt;p&gt;I rebuilt the form using reactive forms. It took half a day, but the result was cleaner and easier to reason about.&lt;/p&gt;

&lt;p&gt;If I were starting over, I'd learn reactive forms first and only use template-driven for the simplest cases.&lt;/p&gt;
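&lt;p&gt;For a concrete taste, here's a hedged sketch (the field names are invented) of the cross-field case that hurt me in the template-driven version: one control's value changes another control's validators, entirely in TypeScript.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import { FormControl, FormGroup, Validators } from '@angular/forms';

// Sketch: a business account must provide a company name
const form = new FormGroup({
  accountType: new FormControl('personal'),
  companyName: new FormControl('')
});

// React to one field's changes by rewriting another field's validators
form.get('accountType')!.valueChanges.subscribe(function (type: string | null) {
  const company = form.get('companyName')!;
  if (type === 'business') {
    company.setValidators(Validators.required);
  } else {
    company.clearValidators();
  }
  company.updateValueAndValidity();
});
&lt;/code&gt;&lt;/pre&gt;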

&lt;h2&gt;A Note on Hitting the Wall&lt;/h2&gt;

&lt;p&gt;Around week five, I seriously considered going back to React.&lt;/p&gt;

&lt;p&gt;I was building a data table with filtering, pagination, and sortable columns. In React, I'd built something similar in a day and a half. In Angular, I was still debugging why my pipes weren't updating on filter changes.&lt;/p&gt;

&lt;p&gt;A friend and colleague of mine, who had been using Angular for years, is the reason I didn't quit. He told me the wall is temporary. Once the patterns clicked, the framework's structure became an advantage, not a burden. Teams could onboard faster, code was more predictable, and large apps stayed manageable.&lt;/p&gt;

&lt;p&gt;He was right. By the third month, my efforts started to pay off.&lt;/p&gt;

&lt;h2&gt;Change Detection Caught Me Off Guard&lt;/h2&gt;

&lt;p&gt;Angular's &lt;a href="https://dev.to/subhash_16/angular-change-detection-217o"&gt;change detection&lt;/a&gt; makes it fast at scale. It's also one of the things that confused me most as a beginner.&lt;/p&gt;

&lt;p&gt;Angular tracks changes to your component's data and updates the view accordingly. By default, it checks every component in the tree when anything changes. That works fine for small apps. For larger apps, the &lt;code&gt;OnPush&lt;/code&gt; change detection strategy tells Angular to only check a component when its input references change, rather than on every cycle.&lt;/p&gt;

&lt;p&gt;I didn't understand &lt;code&gt;OnPush&lt;/code&gt; until I built a list component that was noticeably slow with 500 items. Adding &lt;code&gt;changeDetection: ChangeDetectionStrategy.OnPush&lt;/code&gt; to the component decorator improved the rendering performance immediately.&lt;/p&gt;
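&lt;p&gt;The change looked roughly like this (the component and template are simplified sketches, not my actual code):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import { ChangeDetectionStrategy, Component, Input } from '@angular/core';

@Component({
  selector: 'app-item-list',
  template: `&amp;lt;li *ngFor="let item of items"&amp;gt;{{ item }}&amp;lt;/li&amp;gt;`,
  // Only re-check this component when an @Input reference changes
  changeDetection: ChangeDetectionStrategy.OnPush
})
export class ItemListComponent {
  @Input() items: string[] = [];
}
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;One caveat that cost me another debugging session: with &lt;code&gt;OnPush&lt;/code&gt;, you need to replace the &lt;code&gt;items&lt;/code&gt; array with a new reference, not mutate it in place, or the view won't refresh.&lt;/p&gt;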

&lt;p&gt;It's not something you need to worry about on day one. But knowing it exists, and that it's the right tool when performance becomes an issue, would have saved me time searching for what was causing the slowdown.&lt;/p&gt;

&lt;h2&gt;What Month Three Actually Felt Like&lt;/h2&gt;

&lt;p&gt;By the end of month three, I wasn't an Angular expert, but I was getting there.&lt;/p&gt;

&lt;p&gt;I understood the module system well enough to structure a mid-sized app cleanly. I defaulted to reactive forms. I knew when to inject services at the root level and when to scope them to a module. I'd stopped fighting the CLI output and started appreciating the consistency it enforced.&lt;/p&gt;

&lt;p&gt;The things that felt like overhead in month one (the boilerplate, the strict structure, the verbose DI system) started to feel like guardrails. They existed because Angular is built for teams and large codebases, not just solo projects. Once I accepted that, my relationship with the framework changed.&lt;/p&gt;

&lt;p&gt;Angular has a real learning curve. It asks more of you upfront than most frontend frameworks. But what it gives you in return (predictability, structure, and tooling that scales) is worth the investment if you're building something that needs to last.&lt;/p&gt;

</description>
      <category>angular</category>
      <category>beginners</category>
      <category>learning</category>
      <category>developer</category>
    </item>
    <item>
      <title>Why Python Became the Default Language for AI?</title>
      <dc:creator>Sanket Parmar</dc:creator>
      <pubDate>Wed, 06 May 2026 06:35:28 +0000</pubDate>
      <link>https://forem.com/sanket-parmar/why-python-became-the-default-language-for-ai-4m64</link>
      <guid>https://forem.com/sanket-parmar/why-python-became-the-default-language-for-ai-4m64</guid>
      <description>&lt;p&gt;Python did not become the dominant AI language because it was the fastest or the most powerful. It became dominant because it was the most practical. That distinction matters a lot if you work in this space.&lt;/p&gt;

&lt;p&gt;Today, almost every major AI framework (TensorFlow, PyTorch, scikit-learn, Hugging Face Transformers) ships Python as its primary interface. When researchers publish new models, the code is in Python. When companies build AI pipelines, they reach for Python. When developers want to run a model locally in five minutes, Python makes that possible.&lt;/p&gt;

&lt;p&gt;This didn't happen overnight. It's worth understanding how it happened, and more importantly, what it means for professionals building with AI today.&lt;/p&gt;

&lt;h2&gt;How Python Got Here&lt;/h2&gt;

&lt;p&gt;Python has been around since 1991. For most of its early life, it was a general-purpose scripting language used for automation, web development, and system tasks. It was never designed for scientific computing.&lt;/p&gt;

&lt;p&gt;That changed in the mid-2000s when the scientific community started adopting it. Libraries like NumPy (2006) and SciPy gave researchers a way to do fast numerical computing in Python without writing C code by hand. Then came Matplotlib for visualization, and Pandas for data manipulation. By the early 2010s, Python had become the go-to environment for data scientists.&lt;/p&gt;

&lt;p&gt;When deep learning exploded around 2012, with AlexNet winning ImageNet by a wide margin, the research community was already living in Python. Google built TensorFlow in Python. Facebook built PyTorch in Python. The momentum was impossible to reverse.&lt;/p&gt;

&lt;h2&gt;What Python Actually Offers to AI Devs&lt;/h2&gt;

&lt;p&gt;Python is not fast at runtime. A raw loop is orders of magnitude slower than the same loop in C or Rust. But that's not the right comparison.&lt;/p&gt;

&lt;p&gt;Here's what Python actually gives AI practitioners:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Readable code that maps to math.&lt;/strong&gt; Neural networks are mathematical objects: matrix multiplications, activation functions, gradient calculations. Python's syntax lets you write code that looks close to the math, which makes research faster, debugging easier, and sharing code with other researchers practical.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;A library ecosystem that nothing else matches.&lt;/strong&gt; PyTorch alone has thousands of community-built extensions. Hugging Face hosts over 500,000 models with Python-native APIs. The depth of tooling available in Python for AI work does not exist anywhere else. Switching languages means losing this ecosystem, and that's a real cost.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Speed where it counts.&lt;/strong&gt; The heavy computation in AI doesn't run in Python; it runs in C++ and CUDA underneath. PyTorch and TensorFlow are Python interfaces on top of compiled backends. When you call &lt;code&gt;model.forward(x)&lt;/code&gt;, Python hands off to optimized C++ code immediately. Python handles the orchestration; the heavy lifting happens in native code.&lt;/li&gt;
&lt;/ul&gt;
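&lt;p&gt;That last point is easy to demonstrate with nothing but the standard library. The built-in &lt;code&gt;sum()&lt;/code&gt; runs its loop in C, while a hand-written loop runs in the interpreter; the gap below is the same "Python orchestrates, native code computes" pattern that PyTorch applies at far larger scale (exact timings will vary by machine):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import timeit

def py_sum(data):
    # Every iteration of this loop executes in the interpreter
    total = 0.0
    for x in data:
        total += x
    return total

data = [1.0] * 1_000_000

# The built-in sum() does the same loop in C
loop_time = timeit.timeit(lambda: py_sum(data), number=10)
builtin_time = timeit.timeit(lambda: sum(data), number=10)
print(f"interpreter loop: {loop_time:.3f}s  C-backed sum(): {builtin_time:.3f}s")
&lt;/code&gt;&lt;/pre&gt;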

&lt;h2&gt;A Personal Note&lt;/h2&gt;

&lt;p&gt;When I first started working with transformer models, I tried to understand the architecture by reading the original "Attention Is All You Need" paper alongside the code. The PyTorch implementation was almost line-for-line translatable to the paper's equations. That clarity was not an accident; Python's design made that possible. It's one of those things you don't fully appreciate until you try to do the same thing in a less expressive language.&lt;/p&gt;

&lt;h2&gt;The Modern AI Python Stack&lt;/h2&gt;

&lt;p&gt;If you're a practitioner today, this is the core stack you'll encounter across most AI projects:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;&lt;a href="https://dev.to/iamfaham/pytorch-fundamentals-a-beginner-friendly-guide-16h3"&gt;PyTorch&lt;/a&gt; -&lt;/strong&gt; the dominant framework for model training and research. TensorFlow still exists, but PyTorch has taken the lead in both academia and industry.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Hugging Face Transformers -&lt;/strong&gt; the standard library for working with pre-trained language models, vision models, and multimodal models.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;LangChain / LlamaIndex -&lt;/strong&gt; frameworks for building applications on top of LLMs, including RAG pipelines and agents.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;a href="https://dev.to/madzimai/fastapi-28pm"&gt;FastAPI&lt;/a&gt; -&lt;/strong&gt; the most common choice for serving AI models as APIs. It's fast, async-native, and easy to document.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Each of these layers handles a different part of the stack. PyTorch gets the model working. Hugging Face makes pre-trained models accessible. LangChain connects models to real applications. FastAPI exposes those applications to the world.&lt;/p&gt;

&lt;h2&gt;Writing AI Code That Works in Production&lt;/h2&gt;

&lt;p&gt;This is where a lot of practitioners hit a wall. Getting a model to produce good outputs in a notebook is one thing. Deploying it reliably is another.&lt;/p&gt;

&lt;p&gt;Python gives you the tools, but it doesn't enforce discipline.&lt;/p&gt;

&lt;p&gt;Here's a concrete pattern. When serving a model with FastAPI, you need to separate your model loading from your request handling:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;from contextlib import asynccontextmanager

from fastapi import FastAPI
from pydantic import BaseModel
from transformers import pipeline

model = None

@asynccontextmanager
async def lifespan(app: FastAPI):
    # Load the model once at startup, not on every request
    global model
    model = pipeline("text-classification", model="distilbert-base-uncased-finetuned-sst-2-english")
    yield
    model = None  # release it on shutdown

app = FastAPI(lifespan=lifespan)

class PredictRequest(BaseModel):
    text: str

@app.post("/predict")
async def predict(req: PredictRequest):
    # Inference reuses the already-loaded pipeline
    result = model(req.text)
    return {"label": result[0]["label"], "score": round(result[0]["score"], 4)}
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;This pattern loads the model once at startup using FastAPI's lifespan context, not on every request. Loading a transformer model on every API call would make your service unusably slow. This is one of those mistakes that's easy to make and expensive to discover in production.&lt;/p&gt;

&lt;h2&gt;What Python Cannot Do&lt;/h2&gt;

&lt;p&gt;Python's dominance in AI does not mean it's the right tool for every part of an AI system.&lt;/p&gt;

&lt;p&gt;Inference latency is a real problem. When you need to serve tens of thousands of predictions per second, pure Python becomes a bottleneck. Production teams at large companies often rewrite inference pipelines in C++ or use tools like ONNX Runtime to compile models into optimized runtimes that bypass Python entirely.&lt;/p&gt;

&lt;p&gt;Edge deployment is another gap. Running models on edge devices (smartphones, embedded systems, IoT hardware) often requires C, C++, or Rust. Python needs an interpreter to run, and most edge environments can't afford to ship one.&lt;/p&gt;

&lt;p&gt;Concurrency is also a known weak point. Python's Global Interpreter Lock (GIL) limits true parallelism in CPU-bound tasks. For AI workloads that are GPU-bound, this rarely matters. But for CPU-intensive preprocessing at scale, it can become a constraint.&lt;/p&gt;

&lt;p&gt;Knowing these limits doesn't mean abandoning Python. It means knowing when to hand off to something else, and Python's ecosystem makes those handoffs relatively clean.&lt;/p&gt;

&lt;h2&gt;Where This Is Going&lt;/h2&gt;

&lt;p&gt;Python's position in AI looks stable for the foreseeable future. The network effects are too strong. Models, papers, tools, and talent all converge on Python. A new language would need to offer something dramatically better to break that gravity, and nothing on the horizon does.&lt;/p&gt;

&lt;p&gt;What is changing is how Python integrates with faster runtimes. Projects like Mojo, a language designed to be a superset of Python with systems-level performance, are trying to close the gap between Python's usability and C's speed. Whether Mojo or something like it succeeds is an open question, but the direction is clear: the AI community wants Python's interface with native performance underneath.&lt;/p&gt;

&lt;p&gt;For practitioners, the practical takeaway is straightforward. Python fluency is not optional in AI work. Not because it's perfect, but because it's where everything is. The frameworks, the models, the tooling, the community, it all lives in Python. Getting better at Python means getting better at AI, and that's a return on investment that compounds over time.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;More reading:&lt;/strong&gt; Python's reach goes far beyond AI. It powers web development, data visualization, desktop apps, and game production. If you want a complete picture of everything Python can do, check out this guide on &lt;a href="https://www.cmarix.com/blog/what-is-python-used-for/" rel="noopener noreferrer"&gt;what Python is used for&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>python</category>
      <category>beginners</category>
    </item>
    <item>
      <title>Azure Migration Is Not the Problem. Your Plan Is</title>
      <dc:creator>Sanket Parmar</dc:creator>
      <pubDate>Mon, 27 Apr 2026 12:47:14 +0000</pubDate>
      <link>https://forem.com/sanket-parmar/azure-migration-is-not-the-problem-your-plan-is-3g3k</link>
      <guid>https://forem.com/sanket-parmar/azure-migration-is-not-the-problem-your-plan-is-3g3k</guid>
      <description>&lt;p&gt;Six months after migration, your cloud bill runs 40% higher than projected. On top of that, three legacy applications are running on workarounds your team built at 2 a.m. Meanwhile, the engineer who sold the project internally is now defending it in a budget review with a slide deck that no longer reflects reality.&lt;/p&gt;

&lt;p&gt;This is not a horror story. In fact, this is Tuesday for a significant number of enterprise teams that migrate to Azure every year.&lt;/p&gt;

&lt;p&gt;Azure is a mature platform. And almost none of the failures you read about (the blown budgets, the performance regressions, the identity nightmares) are Azure's fault. They are plan failures wearing the costume of platform failures. Your migration is only as good as the plan that preceded it, and most enterprise migration plans get built around vendor timelines and licensing incentives rather than your actual workload reality.&lt;/p&gt;

&lt;h2&gt;What Azure Migration Actually Involves&lt;/h2&gt;

&lt;p&gt;Most vendors present migration as a single motion, but it is not. There are three fundamentally different approaches, and which one applies to each workload determines your timeline, your cost, and your risk profile.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Lift and shift&lt;/strong&gt; moves workloads to Azure as-is. It is the fastest path and the lowest short-term cost, but your technical debt does not disappear. It moves with you.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Re-platforming&lt;/strong&gt; modifies workloads to take advantage of cloud-native capabilities without a full rewrite. It sounds like a compromise and often becomes the longest phase of a migration as a result.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Re-architecting&lt;/strong&gt; rebuilds applications to be genuinely cloud-native. It is the right long-term move for critical workloads and consequently the most expensive and time-intensive option. Vendor timelines rarely account for it.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Most vendor proposals default to lift-and-shift because it is the fastest to quote. For that reason, you need to know which category each workload falls into before you agree to a timeline or sign anything, not after.&lt;/p&gt;

&lt;h2&gt;Where Migrations Actually Break Down&lt;/h2&gt;

&lt;h3&gt;Legacy Workload Compatibility&lt;/h3&gt;

&lt;p&gt;Applications built on older Windows Server versions, SQL Server instances, and custom middleware behave differently in Azure than on-premise, and those differences almost always surface under production load, not during testing. Before you migrate a single workload, run a full &lt;a href="https://dev.to/yayabobi/a-developers-guide-to-dependency-mapping-2apd"&gt;application dependency mapping&lt;/a&gt; exercise. Specifically, audit for hard-coded IP addresses in application configurations, custom authentication integrations that assume on-premise Active Directory proximity, third-party licenses that are not cloud-portable, and applications with undocumented external dependencies. These are precisely the items that create your 2 a.m. workarounds.&lt;/p&gt;

&lt;h3&gt;Cost Modeling&lt;/h3&gt;

&lt;p&gt;Azure's consumption-based pricing is fundamentally different from on-premise capital expenditure, and most projections are built by people who understand one model but not the other.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Three costs consistently blow budgets:&lt;/strong&gt; egress fees that accumulate during hybrid transition periods, right-sizing failures where lift-and-shift migrations overprovision by 30 to 40%, and on-demand rates paid during the 6 to 12 months before workloads stabilize enough for reserved instance commitments.&lt;/p&gt;

&lt;p&gt;Given all of this, build your cost model around three scenarios: optimistic, realistic, and worst-case, before migration starts.&lt;/p&gt;

&lt;h3&gt;Identity and Access Management&lt;/h3&gt;

&lt;p&gt;Hybrid Azure AD and on-premise Active Directory &lt;a href="https://learn.microsoft.com/en-us/entra/identity/devices/how-to-hybrid-join" rel="noopener noreferrer"&gt;coexistence&lt;/a&gt; is technically achievable and yet operationally complex in ways that only surface when something breaks in production. Conditional access policies that work in testing block production users in ways that are difficult to debug. Service accounts with hard-coded credentials do not translate cleanly to managed identities. Group Policy Objects require manual re-implementation. For that reason, treat IAM migration as its own workstream with dedicated ownership and a dedicated testing phase, not as a task buried inside the infrastructure migration timeline.&lt;/p&gt;

&lt;h3&gt;The Hybrid Period&lt;/h3&gt;

&lt;p&gt;Every enterprise migration goes through a hybrid period where some workloads sit on Azure while others keep running on-premises. The assumption going in is always that this period is short. For large enterprises, however, it typically runs 12 to 24 months. During that time, you are paying for both environments simultaneously and managing complexity across both. Beyond that, applications designed to communicate locally now communicate across a WAN connection, introducing latency that was never modeled in the original plan. Design your hybrid architecture deliberately from day one, not as a transitional afterthought you revisit when performance degrades.&lt;/p&gt;

&lt;h2&gt;What a Plan That Actually Works Looks Like&lt;/h2&gt;

&lt;p&gt;Start with a &lt;a href="https://www.cmarix.com/blog/azure-migration-zero-downtime-guide/" rel="noopener noreferrer"&gt;workload inventory&lt;/a&gt;, not a timeline. Your inventory needs to capture business criticality, technical complexity, dependency mapping, compliance requirements, and current performance baselines. Without baselines, you have no way to measure whether migration improved or degraded performance, and consequently, no credible data when costs run over.&lt;/p&gt;

&lt;p&gt;Sequence your migration deliberately. Start with non-critical, low-complexity workloads to build team confidence and surface Azure-specific issues in a low-stakes environment. From there, move to development and test environments for critical applications. Only then tackle production workloads, after your team has real Azure operational experience and your cost model has been validated against actual usage data.&lt;/p&gt;

&lt;p&gt;Build your FinOps practice before the first workload migrates. Assign cost ownership to workload owners rather than a central IT budget, and set up tagging governance from the start. Finally, define your cloud operating model, incident response, patching, monitoring, and on-call before go-live. Teams that skip this discover the gap during their first production incident in Azure, which is the worst possible time to figure it out.&lt;/p&gt;

&lt;h2&gt;The Plan Is the Product&lt;/h2&gt;

&lt;p&gt;The organizations that migrate successfully are not the ones with the biggest budgets or the most experienced vendors. Rather, they are the ones who did the unglamorous work of understanding their own workload reality before moving anything.&lt;/p&gt;

&lt;p&gt;Before your next migration planning meeting, ask yourself one question: Would your current plan survive a line-by-line review against the flaws in this article? If the answer is uncertain, you have found exactly where to start.&lt;/p&gt;

</description>
      <category>azure</category>
      <category>database</category>
      <category>cloud</category>
    </item>
    <item>
      <title>Not Every Button Needs a Brain: The Case Against Reflexive AI Integration</title>
      <dc:creator>Sanket Parmar</dc:creator>
      <pubDate>Wed, 22 Apr 2026 12:24:23 +0000</pubDate>
      <link>https://forem.com/sanket-parmar/not-every-button-needs-a-brain-the-case-against-reflexive-ai-integration-4lb6</link>
      <guid>https://forem.com/sanket-parmar/not-every-button-needs-a-brain-the-case-against-reflexive-ai-integration-4lb6</guid>
      <description>&lt;h2&gt;The Gold Rush Nobody Talks About&lt;/h2&gt;

&lt;p&gt;There is a moment in every technology gold rush where the pickaxe becomes the product. Somewhere between genuine innovation and mass adoption, the tool stops being a means to an end and becomes the end itself. We are living through that moment with artificial intelligence.&lt;/p&gt;

&lt;p&gt;Go through any product roadmap meeting today, and you will hear some version of the same sentence: "We should add AI to this." The reasons vary (competitive pressure, investor expectations, a fear of looking behind the times), but the instinct is almost always the same. AI is the answer, and we are just looking for the questions to match.&lt;/p&gt;

&lt;p&gt;The problem is that reflexive AI integration does not make software smarter. It makes it heavier, harder to trust, and often worse at the one thing users actually opened the app to do.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Recent stats:&lt;/strong&gt; 84% of developers already use or plan to use AI tools, yet &lt;a href="https://www.cmarix.com/blog/software-development-statistics/#:~:text=The%20Productivity%20Paradox%3A%2066%25%20of%20developers%20express%20their%20frustration%20regarding%20the%20use%20of%20AI%20%E2%80%98solutions%E2%80%99%20that%20almost%20work.%E2%80%9D%20Although%20the%20first%20stage%20of%20coding%20is%20aided%20by%20AI%2C%20refactoring%20complex%20code%20remains%20time%2Dconsuming." rel="noopener noreferrer"&gt;66%&lt;/a&gt; report frustration with AI solutions that "almost work," and only 42% fully trust the code they generate. Adoption is not the challenge. Knowing when AI actually belongs in your product is.&lt;/p&gt;

&lt;h2&gt;The Integration Impulse and Where It Comes From&lt;/h2&gt;

&lt;p&gt;To understand why this is happening, it helps to separate the genuine from the performative. AI genuinely solves hard problems. It handles tasks that are probabilistic by nature, such as language understanding, pattern recognition, and anomaly detection, in ways that rule-based systems cannot. The technology is real, and the value in the right context is real.&lt;/p&gt;

&lt;p&gt;But a second force drives a lot of what gets shipped: the market signal. When a competitor launches an "AI-powered" version of a product, product teams feel the pressure to respond. When investors ask whether a product has an AI strategy, the answer shapes funding conversations. When job postings for engineers require experience with large language models, teams start finding reasons to use them. These are not engineering decisions. They are positioning decisions dressed up as engineering decisions.&lt;/p&gt;

&lt;p&gt;The result is a category of software features that exist primarily to exist. AI-generated summaries for content that takes thirty seconds to read. Chatbots layered on top of interfaces that were perfectly navigable before. "Smart" suggestions that fire on every keystroke and interrupt the flow they claim to support. These features do not emerge from user needs. They emerge from the need to have something to announce.&lt;/p&gt;

&lt;h2&gt;When AI Earns Its Place&lt;/h2&gt;

&lt;p&gt;None of this means AI integration is a mistake. It means the bar for integrating it should be higher than "because we can."&lt;/p&gt;

&lt;p&gt;The clearest signal that AI belongs in a feature is when the problem is genuinely ambiguous or open-ended. A search bar that needs to understand natural language, not just keywords, benefits from a language model. A fraud detection system that needs to recognize novel patterns in real-time transaction data benefits from a trained model. A code assistant that needs to reason about context across an entire file benefits from AI in a way that syntax highlighting simply does not. The common thread is that the task is fuzzy, the input space is enormous, or the user is trying to express something that a deterministic function cannot interpret.&lt;/p&gt;

&lt;p&gt;AI also earns its place when it removes a task users never wanted to do in the first place. Automatic meeting transcription is a good example. Nobody sits in a meeting and thinks, "I love manually summarizing this later." The AI step removes real friction from a real workflow. Compare that to an AI that rewrites your email subject line unprompted. The user did not ask for help with their subject line. They are now managing a suggestion they did not request, which is more friction, not less.&lt;/p&gt;

&lt;p&gt;The test worth running before any AI feature ships is a simple one: does this reduce the number of decisions the user has to make, or does it add new ones? If someone has to review, approve, dismiss, re-prompt, or explain themselves to the AI they just encountered, the cognitive overhead has gone up, not down. That is not a UX problem. It is a fundamental mismatch between the tool and the task.&lt;/p&gt;

&lt;h2&gt;The Hidden Costs That Don't Show Up in the Demo&lt;/h2&gt;

&lt;p&gt;Product demos are optimized for the best case. The AI suggests exactly the right thing, the user accepts it, and the workflow looks seamless. What the demo does not show is the 40% of cases where the suggestion is wrong, off-tone, or confidently incorrect in a way that damages trust.&lt;/p&gt;

&lt;p&gt;This is where reflexive AI integration creates lasting damage to a product. Trust in software is largely built on predictability. Users learn how an application behaves, and they develop intuitions about it. When AI introduces probabilistic outputs into a deterministic context (a form field that sometimes autofills correctly and sometimes does not, a filter that sometimes works and sometimes misinterprets the query), users lose their sense of what the software will do next. They start double-checking everything. They work around the AI instead of with it.&lt;/p&gt;

&lt;p&gt;There are also infrastructure costs that compound over time. Language model API calls are not free. Features that previously returned results in milliseconds now return them in seconds. Privacy considerations become significantly more complex when user input is processed by a third-party model. These are solvable problems, but they are problems that did not exist before the AI layer was added, and they deserve honest accounting before the feature ships.&lt;/p&gt;

&lt;h2&gt;The Determinism Default&lt;/h2&gt;

&lt;p&gt;A useful mental model for deciding when AI belongs in a feature is what could be called the determinism default. Start from the assumption that every feature should be deterministic, meaning that given the same input, it produces the same output every time. Violate that assumption only when the problem cannot be solved any other way.&lt;/p&gt;

&lt;p&gt;Most software problems can be solved another way. A well-designed filter beats an AI that tries to infer what you want to filter. A clear information architecture beats a chatbot that helps users find what they need in a confusing interface. A sensible default beats a personalization engine that needs weeks of behavioral data to become useful. Deterministic solutions are cheaper to build, easier to test, faster to debug, and more trustworthy to users. They should be the starting point, not the fallback.&lt;/p&gt;
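&lt;p&gt;As a tiny illustration of the determinism default, consider a filter written as a pure function: given the same input, it returns the same output every time, so its behavior can be pinned down with plain assertions. The catalog records and field names below are hypothetical, a sketch of the principle rather than any particular product's code.&lt;/p&gt;

```python
# A deterministic filter: same input, same output, every time.
# (The catalog records and field names are hypothetical.)

def filter_products(products, max_price, in_stock_only=True):
    """Return products priced at or under max_price, optionally in-stock only."""
    return [
        p for p in products
        if p["price"] <= max_price and (p["in_stock"] or not in_stock_only)
    ]

catalog = [
    {"name": "cable", "price": 9,  "in_stock": True},
    {"name": "dock",  "price": 49, "in_stock": False},
    {"name": "mouse", "price": 19, "in_stock": True},
]

# No model, no latency, no surprises -- and trivially unit-testable.
affordable = filter_products(catalog, max_price=20)
```

&lt;p&gt;The point is not that the function is clever; it is that a user (and a test suite) can form a complete mental model of it. An AI that infers "what you probably want to filter" offers no such contract.&lt;/p&gt;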

&lt;p&gt;This is not an argument against ambition. It is an argument for honesty about what problem is actually being solved. If the problem is genuinely fuzzy, if the solution space is too large for rules to cover, if the user's intent is inherently variable, then AI is the right tool. If the problem is fuzzy because the product design is unclear, adding AI does not fix the design. It hides the design debt under a layer of probabilistic behavior, and that debt compounds.&lt;/p&gt;

&lt;h2&gt;What Good Integration Actually Looks Like&lt;/h2&gt;

&lt;p&gt;The products that use AI well share a common characteristic: the AI is invisible when it is working. GitHub Copilot suggests code completions that feel like a natural extension of what the engineer is already thinking. Spam filters in email clients remove noise so effectively that users forget they exist. Image compression algorithms that use machine learning to reduce file sizes without visible quality loss do their job entirely in the background. In each case, the AI handles the part of the task that is genuinely hard (pattern recognition, probabilistic judgment, optimization across a vast input space) and stays out of the way everywhere else.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;More Reading:&lt;/strong&gt; &lt;a href="https://dev.to/jaideepparashar/when-every-app-uses-ai-what-makes-yours-different-48bb"&gt;When Every App Uses AI, What Makes Yours Different?&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The products that use AI poorly share a different characteristic: the AI is visible when it is failing. Every dismissed suggestion is a reminder that the system guessed wrong. Every correction the user has to make is the friction the AI was supposed to remove. Every loading spinner on a feature that used to be instant is a tax on the interaction that the AI integration introduced.&lt;/p&gt;

&lt;h2&gt;A Simpler Question to Ask First&lt;/h2&gt;

&lt;p&gt;Before adding AI to any feature, ask whether a new engineer joining the team could explain in one sentence why the AI is necessary. Not why AI is interesting, not why the industry is moving in this direction, not why competitors are doing it, but why this specific feature, in this specific product, for this specific user, requires probabilistic reasoning instead of deterministic logic.&lt;/p&gt;

&lt;p&gt;If the answer comes easily, the integration probably makes sense. If the answer requires a paragraph of context and a few assumptions, it is worth pausing. The best AI features are the ones where the question answers itself. The worst are the ones where the team convinced itself the question did not need to be asked.&lt;/p&gt;

&lt;p&gt;Not every button needs a brain. The ones that do are worth building carefully. The ones that do not are worth leaving alone.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>learning</category>
      <category>softwaredevelopment</category>
      <category>software</category>
    </item>
  </channel>
</rss>
