<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Utkarsh Kanwat</title>
    <description>The latest articles on Forem by Utkarsh Kanwat (@ukanwat).</description>
    <link>https://forem.com/ukanwat</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F790795%2Fb4f99316-4ed5-40cf-b6bd-121ce2974ffe.jpg</url>
      <title>Forem: Utkarsh Kanwat</title>
      <link>https://forem.com/ukanwat</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/ukanwat"/>
    <language>en</language>
    <item>
      <title>Why I'm Betting Against AI Agents in 2025 (Despite Building Them)</title>
      <dc:creator>Utkarsh Kanwat</dc:creator>
      <pubDate>Sun, 20 Jul 2025 10:06:37 +0000</pubDate>
      <link>https://forem.com/ukanwat/why-im-betting-against-ai-agents-in-2025-despite-building-them-1c6m</link>
      <guid>https://forem.com/ukanwat/why-im-betting-against-ai-agents-in-2025-despite-building-them-1c6m</guid>
      <description>&lt;p&gt;Everyone says 2025 is the year of AI agents. The headlines are everywhere: "Autonomous AI will transform work," "Agents are the next frontier," "The future is agentic." Meanwhile, I've spent the last year building many different agent systems that actually work in production. And that's exactly why I'm betting against the current hype.&lt;/p&gt;

&lt;p&gt;I'm not some AI skeptic writing from the sidelines. Over the past year, I've built more than a dozen production agent systems across the entire software development lifecycle:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Development agents&lt;/strong&gt;: UI generators that create functional React components from natural language, code refactoring agents that modernize legacy codebases, documentation generators that maintain API docs automatically, and function generators that convert specifications into working implementations.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Data &amp;amp; Infrastructure agents&lt;/strong&gt;: Database operation agents that handle complex queries and migrations, DevOps automation AI systems managing infrastructure-as-code across multiple cloud providers.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Quality &amp;amp; Process agents&lt;/strong&gt;: AI-powered CI/CD pipelines that fix lint issues, generate comprehensive test suites, perform automated code reviews, and create detailed pull requests with proper descriptions.&lt;/p&gt;

&lt;p&gt;These systems work. They ship real value. They save hours of manual work every day. And that's precisely why I think much of what you're hearing about 2025 being "the year of agents" misses key realities.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;TL;DR: Three Hard Truths About AI Agents&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;After building 12+ production systems, here's what I've learned:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Error rates compound exponentially in multi-step workflows. 95% reliability per step = 36% success over 20 steps. Production needs 99.9%+.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Context windows create quadratic token costs. Long conversations become prohibitively expensive at scale.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The real challenge isn't AI capabilities; it's designing tools and feedback systems that agents can actually use effectively.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  The Mathematical Reality No One Talks About
&lt;/h2&gt;

&lt;p&gt;Here's the uncomfortable truth that every AI agent company is dancing around: error compounding makes autonomous multi-step workflows mathematically impossible at production scale.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8lr0rx0r7u601obcu9x2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8lr0rx0r7u601obcu9x2.png" alt=" " width="800" height="443"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let's do the math. If each step in an agent workflow has 95% reliability, which is optimistic for current LLMs, then:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;5 steps = 77% success rate
&lt;/li&gt;
&lt;li&gt;10 steps = 59% success rate&lt;/li&gt;
&lt;li&gt;20 steps = 36% success rate&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Production systems need 99.9%+ reliability. Even if you magically achieve 99% per-step reliability (which no one has), you still only get 82% success over 20 steps. This isn't a prompt engineering problem. This isn't a model capability problem. This is mathematical reality.&lt;/p&gt;
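&lt;p&gt;The compounding is nothing more than exponentiation of per-step reliability. A quick sketch (plain Python, illustrative numbers only):&lt;/p&gt;

```python
# Success probability of an n-step workflow in which every step must
# succeed, for a fixed per-step reliability.
def workflow_success_rate(per_step: float, steps: int) -> float:
    return per_step ** steps

# 0.95 ** 20 is roughly 0.36: a 95%-reliable step chained twenty
# times fails almost two thirds of the time.
twenty_step = workflow_success_rate(0.95, 20)
```

Nothing about better prompts changes this curve; only shortening the chain or verifying between steps does.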

&lt;p&gt;My DevOps agent works precisely because it's not actually a 20-step autonomous workflow. It's 3-5 discrete, independently verifiable operations with explicit rollback points and human confirmation gates. The "agent" handles the complexity of generating infrastructure code, but the system is architected around the mathematical constraints of reliability.&lt;/p&gt;

&lt;p&gt;Every successful agent system I've built follows the same pattern: bounded contexts, verifiable operations, and, where the stakes warrant it, human decision points at critical junctions. The moment you try to chain more than a handful of operations autonomously, the math kills you.&lt;/p&gt;
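&lt;p&gt;The shape of that pattern can be sketched in a few lines. Everything below is a hypothetical illustration, not code from any of my systems: each step is independently verifiable, carries an explicit rollback, and can demand human confirmation before it runs:&lt;/p&gt;

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Step:
    name: str
    run: Callable[[], object]
    verify: Callable[[object], bool]   # independent check of the result
    rollback: Callable[[], None]       # explicit undo path
    needs_confirmation: bool = False   # human gate at critical junctions

def execute(steps: list[Step], confirm: Callable[[str], bool]) -> bool:
    done: list[Step] = []
    for step in steps:
        if step.needs_confirmation and not confirm(step.name):
            return False  # human declined; stop before running this step
        result = step.run()
        if not step.verify(result):
            step.rollback()                # undo the failed step itself
            for prior in reversed(done):   # then unwind completed steps
                prior.rollback()
            return False
        done.append(step)
    return True
```

The point is that failure at any step leaves the system in a known state instead of an unknown one.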

&lt;h2&gt;
  
  
  The Token Economics That Don't Add Up
&lt;/h2&gt;

&lt;p&gt;There's another mathematical reality that agent evangelists conveniently ignore: context windows create quadratic cost scaling that makes long-running conversational agents economically untenable.&lt;/p&gt;

&lt;p&gt;Here's what actually happens when you build a "conversational" agent:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Each new interaction requires processing ALL previous context&lt;/li&gt;
&lt;li&gt;Token costs scale quadratically with conversation length
&lt;/li&gt;
&lt;li&gt;A 100-turn conversation costs $50-100 in tokens alone&lt;/li&gt;
&lt;li&gt;Multiply by thousands of users and you're looking at unsustainable economics&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I learned this the hard way when prototyping a conversational database agent. The first few interactions were cheap. By the 50th query in a session, each response was costing multiple dollars - more than the value it provided. The economics simply don't work for most scenarios.&lt;/p&gt;
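&lt;p&gt;The quadratic blow-up is easy to see on paper. The per-turn token count and price below are assumptions for illustration, not measured figures:&lt;/p&gt;

```python
# If every turn re-sends the full history, cumulative input tokens grow
# quadratically with the number of turns.
def cumulative_input_tokens(turns: int, tokens_per_turn: int) -> int:
    # Turn k must re-process all k-1 prior turns plus its own input.
    return sum(k * tokens_per_turn for k in range(1, turns + 1))

tokens = cumulative_input_tokens(100, 1_000)   # a 100-turn session
cost_usd = tokens / 1_000_000 * 10             # assuming $10 per 1M input tokens
```

At those assumed rates, a 100-turn session burns about five million input tokens, which is where the $50-100 figure above comes from.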

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fykseosvi7g7lrvu76dxb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fykseosvi7g7lrvu76dxb.png" alt=" " width="800" height="478"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;My function generation agent succeeds because it's completely stateless: description → function → done. No context to maintain, no conversation to track, no quadratic cost explosion. It's not a "chat with your code" experience, it's a focused tool that solves a specific problem efficiently.&lt;/p&gt;

&lt;p&gt;The most successful "agents" in production aren't conversational at all. They're smart, bounded tools that do one thing well and get out of the way.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Tool Engineering Reality Wall
&lt;/h2&gt;

&lt;p&gt;Even if you solve the math problems, you hit a different kind of wall: building production-grade tools for agents is an entirely different engineering discipline that most teams underestimate.&lt;/p&gt;

&lt;p&gt;Tool calls themselves are actually quite precise now. The real challenge is tool design. Every tool needs to be carefully crafted to provide the right feedback without overwhelming the context window. You need to think about:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;How does the agent know if an operation partially succeeded? How do you communicate complex state changes without burning tokens?&lt;/li&gt;
&lt;li&gt;A database query might return 10,000 rows, but the agent only needs to know "query succeeded, 10k results, here are the first 5." Designing these abstractions is an art.&lt;/li&gt;
&lt;li&gt;When a tool fails, what information does the agent need to recover? Too little and it's stuck; too much and you waste context.&lt;/li&gt;
&lt;li&gt;How do you handle operations that affect each other? Database transactions, file locks, resource dependencies.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;My database agent works not because the tool calls are unusually reliable, but because I spent weeks designing tools that communicate effectively with the AI. Each tool returns structured feedback that the agent can actually use to make decisions, not just raw API responses.&lt;/p&gt;
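&lt;p&gt;As a hypothetical sketch of that idea: return a compact, structured summary to the model instead of the raw result set. The field names here are illustrative, not from any particular framework:&lt;/p&gt;

```python
# Summarize a large query result for an agent: status, count, and a
# small preview, rather than ten thousand raw rows.
def summarize_query_result(rows: list[dict], preview: int = 5) -> dict:
    return {
        "status": "success",
        "row_count": len(rows),
        "preview": rows[:preview],          # enough to reason about shape
        "truncated": len(rows) > preview,   # tells the agent data was elided
    }
```

The agent sees enough to decide what to do next without the full result set flooding its context window.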

&lt;p&gt;The companies promising "just connect your APIs and our agent will figure it out" haven't done this engineering work. They're treating tools like human interfaces, not AI interfaces. The result is agents that technically make successful API calls but can't actually accomplish complex workflows because they don't understand what happened.&lt;/p&gt;

&lt;p&gt;The dirty secret of every production agent system is that the AI is doing maybe 30% of the work. The other 70% is tool engineering: designing feedback interfaces, managing context efficiently, handling partial failures, and building recovery mechanisms that the AI can actually understand and use.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Integration Reality Check
&lt;/h2&gt;

&lt;p&gt;But let's say you solve the reliability problems and the economics. You still have to integrate with the real world, and the real world is a mess.&lt;/p&gt;

&lt;p&gt;Enterprise systems aren't clean APIs waiting for AI agents to orchestrate them. They're legacy systems with quirks, partial failure modes, authentication flows that change without notice, rate limits that vary by time of day, and compliance requirements that don't fit neatly into prompt templates.&lt;/p&gt;

&lt;p&gt;My database agent doesn't just "autonomously execute queries." It navigates connection pooling, handles transaction rollbacks, respects read-only replicas, manages query timeouts, and logs everything for audit trails. The AI handles query generation; everything else is traditional systems programming.&lt;/p&gt;

&lt;p&gt;The companies promising "autonomous agents that integrate with your entire tech stack" are either overly optimistic or haven't actually tried to build production systems at scale. Integration is where AI agents go to die.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Actually Works (And Why)
&lt;/h2&gt;

&lt;p&gt;After building more than a dozen different agent systems across the entire software development lifecycle, I've learned that the successful ones share a pattern:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;My UI generation agent works because humans review every generated interface before deployment. The AI handles the complexity of translating natural language into functional React components, but humans make the final decisions about user experience.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;My database agent works because it confirms every destructive operation before execution. The AI handles the complexity of translating business requirements into SQL, but humans maintain control over data integrity.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;My function generation agent works because it operates within clearly defined boundaries. Give it a specification, get back a function. No side effects, no state management, no integration complexity.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;My DevOps automation works because it generates infrastructure-as-code that can be reviewed, versioned, and rolled back. The AI handles the complexity of translating requirements into Terraform, but the deployment pipeline maintains all the safety mechanisms we've learned to rely on.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;My CI/CD agent works because each stage has clear success criteria and rollback mechanisms. The AI handles the complexity of analyzing code quality and generating fixes, but the pipeline maintains control over what actually gets merged.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The pattern is clear: AI handles complexity, humans maintain control, and traditional software engineering handles reliability.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;My Predictions&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Here's my specific prediction about who will struggle in 2025:&lt;/p&gt;

&lt;p&gt;Venture-funded "fully autonomous agent" startups will hit the economics wall first. Their demos work great with 5-step workflows, but customers will demand 20+ step processes that break down mathematically. Burn rates will spike as they try to solve unsolvable reliability problems.&lt;/p&gt;

&lt;p&gt;Enterprise software companies that bolted "AI agents" onto existing products will see adoption stagnate. Their agents can't integrate deeply enough to handle real workflows.&lt;/p&gt;

&lt;p&gt;Meanwhile, the winners will be teams building constrained, domain-specific tools that use AI for the hard parts while maintaining human control or strict boundaries over critical decisions. Think less "autonomous everything" and more "extremely capable assistants with clear boundaries."&lt;/p&gt;

&lt;p&gt;The market will learn the difference between AI that demos well and AI that ships reliably. That education will be expensive for many companies.&lt;/p&gt;

&lt;p&gt;I'm not betting against AI. I'm betting against the current approach to agent architecture. But I believe the future is going to be far more valuable than the hype suggests.&lt;/p&gt;

&lt;h2&gt;
  
  
  Building the Right Way
&lt;/h2&gt;

&lt;p&gt;If you're thinking about building with AI agents, start with these principles:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Define clear boundaries.&lt;/strong&gt; What exactly can your agent do, and what does it hand off to humans or deterministic systems?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Design for failure.&lt;/strong&gt; How do you handle the 20-40% of cases where the AI makes mistakes? What are your rollback mechanisms?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Solve the economics.&lt;/strong&gt; How much does each interaction cost, and how does that scale with usage? Stateless often beats stateful.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Prioritize reliability over autonomy.&lt;/strong&gt; Users trust tools that work consistently more than they value systems that occasionally do magic.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Build on solid foundations.&lt;/strong&gt; Use AI for the hard parts (understanding intent, generating content), but rely on traditional software engineering for the critical parts (execution, error handling, state management).&lt;/p&gt;

&lt;p&gt;The agent revolution is coming. It just won't look anything like what everyone's promising in 2025. And that's exactly why it will succeed.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Real Lessons from the Trenches
&lt;/h2&gt;

&lt;p&gt;The gap between "works in demo" and "works at scale" is enormous, and most of the industry is still figuring this out.&lt;/p&gt;

&lt;p&gt;If you're working on similar problems, I'd love to continue this conversation. The challenges around agent reliability, cost optimization, and integration complexity are fascinating engineering problems that don't have obvious solutions yet. &lt;/p&gt;

&lt;p&gt;I regularly advise teams and companies navigating these exact challenges - from architecture decisions to avoiding the pitfalls I've learned about firsthand. Whether you're evaluating build-vs-buy decisions, debugging why your agents aren't working in production, or planning a first implementation, feel free to reach out.&lt;/p&gt;

&lt;p&gt;The more people building real systems and sharing honest experiences, the faster we'll all figure out what actually works. You can find me at &lt;a href="mailto:utkarshkanwat@gmail.com"&gt;utkarshkanwat@gmail.com&lt;/a&gt; or &lt;a href="https://x.com/ukanwat" rel="noopener noreferrer"&gt;X&lt;/a&gt; if you want to dive deeper into any of these topics.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>automation</category>
      <category>agents</category>
      <category>softwareengineering</category>
    </item>
    <item>
      <title>🚀 7 AI Tools to Improve your productivity: A Deep Dive 🪄✨</title>
      <dc:creator>Utkarsh Kanwat</dc:creator>
      <pubDate>Wed, 03 Jan 2024 22:14:56 +0000</pubDate>
      <link>https://forem.com/ukanwat/7-ai-tools-to-improve-your-productivity-a-deep-dive-307</link>
      <guid>https://forem.com/ukanwat/7-ai-tools-to-improve-your-productivity-a-deep-dive-307</guid>
      <description>&lt;p&gt;As someone who has spent a considerable amount of time in the software industry, I've always been on the lookout for tools and techniques that can help me:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Boost my productivity&lt;/li&gt;
&lt;li&gt;Reduce bugs in my code&lt;/li&gt;
&lt;li&gt;Write less but achieve more&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In this post, I'll be sharing some of the AI-powered tools that have helped me improve my JavaScript productivity. I'll be sharing my personal experiences with each tool, including their strengths and weaknesses. So buckle up and let's dive in!&lt;/p&gt;

&lt;h3&gt;
  
  
  1️⃣ &lt;a href="https://github.com/features/copilot"&gt;Copilot&lt;/a&gt; by GitHub 🚁
&lt;/h3&gt;

&lt;p&gt;GitHub Copilot is an AI-powered code assistant that helps you write code faster.&lt;/p&gt;

&lt;p&gt;I've used Copilot with TypeScript, JavaScript, Dart and Python. There were moments when it felt like it read my mind and generated exactly what I wanted - it was amazing! But those moments were rare, maybe a few times a month.&lt;/p&gt;

&lt;p&gt;Most of the time, its performance was hit or miss. It doesn't know your codebase and often guesses function names incorrectly. There were instances where it created code with subtle bugs which forced me to spend extra time analyzing its output.&lt;/p&gt;

&lt;p&gt;Despite these shortcomings, Copilot is pretty decent at generating simple repetitive patterns and auto-completing documentation.&lt;/p&gt;

&lt;h3&gt;
  
  
  2️⃣ &lt;a href="https://github.com/ukanwat/scriptgpt/"&gt;ScriptGPT&lt;/a&gt; 🚀
&lt;/h3&gt;

&lt;p&gt;ScriptGPT is a tool I created that offloads feature development to an AI agent powered by GPT-4. It's tailored specifically for TS/JS projects, automatically installing required libraries, testing code, adding comments, and more.&lt;/p&gt;

&lt;p&gt;Unlike other AI-powered coding tools like GitHub Copilot and GPT-Engineer that struggle with effective code integration and building complex projects, ScriptGPT excels in these areas. It can be used alongside these tools for writing code while offloading specific project features to ScriptGPT.&lt;/p&gt;

&lt;p&gt;As the creator of this project, I might be a bit biased in my assessment. While it's not perfect and there's always room for improvement, I truly believe that ScriptGPT can be a valuable addition to a developer's toolkit. Give it a try and see how it can improve productivity! Github Repo: &lt;a href="https://github.com/ukanwat/scriptgpt/"&gt;ScriptGPT&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  3️⃣ &lt;a href="https://github.com/sourcegraph/cody"&gt;Cody AI&lt;/a&gt; 🤖
&lt;/h3&gt;

&lt;p&gt;Cody AI is an AI-powered coding assistant that I've been using in VSCode for some time now. It's transformed my coding experience in several ways:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;It's excellent at breaking down code blocks into simple summaries. This is super handy when I'm reviewing code from other projects or need a quick refresher on my own work.&lt;/li&gt;
&lt;li&gt;It's clever at filling in the blanks in log statements, error messages or code comments.&lt;/li&gt;
&lt;li&gt;It cuts out the need for copy-pasting by filling in gaps for common patterns.&lt;/li&gt;
&lt;li&gt;Surprisingly, it's pretty good at creating tests.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;However, when writing in markdown, its suggestions tend to be long-winded and relentlessly positive - getting a negative sentence out of it is almost impossible! Its inline suggestions can also be a nuisance, since it doesn't really "get" your code.&lt;/p&gt;

&lt;p&gt;Despite these drawbacks, Cody AI has been a huge help when translating my code into English and constructing tests - making it an indispensable tool in my toolkit.&lt;/p&gt;

&lt;h3&gt;
  
  
  4️⃣ &lt;a href="https://github.com/eylonmiz/react-agent"&gt;React Agent&lt;/a&gt; 🕵️‍♂️
&lt;/h3&gt;

&lt;p&gt;React Agent is an AI tool designed to help with building React components. I tried using it to build a basic state management component for a React app.&lt;br&gt;
It did churn out some code that I could use, but it frequently missed some of my specifications or dropped features it had added earlier. It required a lot of hand-holding and attention to detail, which didn't save me much time.&lt;/p&gt;

&lt;p&gt;As it stands, the code produced by React Agent isn't ready for production and needs a good amount of tweaking before it can be merged into an existing codebase. Still, it increased my productivity overall.&lt;/p&gt;

&lt;h3&gt;
  
  
  5️⃣ &lt;a href="https://v0.dev/"&gt;v0&lt;/a&gt; by Vercel 🎨
&lt;/h3&gt;

&lt;p&gt;v0 is an AI tool that generates UI designs. However, in my experience, it creates mediocre UI with questionable usage of Tailwind CSS.&lt;/p&gt;

&lt;p&gt;Anything built with v0 either needs heavy modification or ends up looking like an amateur product. I applaud the effort, but UI design is intricate and dynamic. We're not quite at the point where AI can consistently produce top-notch UI designs yet, but it does give you a starting point for your UI.&lt;/p&gt;

&lt;h3&gt;
  
  
  6️⃣ &lt;a href="https://github.com/sweepai/sweep"&gt;Sweep AI&lt;/a&gt; 🧹
&lt;/h3&gt;

&lt;p&gt;This tool tackles the biggest issue I've faced with development with AI assist - giving context to the existing app source when making new requests. The feature of delivering the output through a PR is a neat addition. I've already made a few PRs using this. Sure, I had to make minor adjustments manually before merging them, but it certainly saved me a good half an hour.&lt;/p&gt;

&lt;h3&gt;
  
  
  7️⃣ &lt;a href="https://github.com/gpt-engineer-org/gpt-engineer"&gt;GPT-engineer&lt;/a&gt; 🧪
&lt;/h3&gt;

&lt;p&gt;GPT-engineer is an AI tool that promises to speed up the app development process. I decided to test it by trying to create an Express app using GPT-3.5.&lt;/p&gt;

&lt;p&gt;At first, it seemed promising. It laid out a clear architecture, chose the right frameworks, and even structured the code neatly. But the excitement was short-lived as the code it churned out was below par and I couldn't get the app to start.&lt;/p&gt;

&lt;p&gt;I thought upgrading to GPT-4 might help, and while it did give slightly improved results, it still fell short of creating a fully functional app. So, while GPT-engineer shows promise, it's safe to say it's not quite up to handling serious coding tasks just yet.&lt;/p&gt;

&lt;h3&gt;
  
  
  In a Nutshell 🌟
&lt;/h3&gt;

&lt;p&gt;AI tools aren't perfect yet. They sometimes make mistakes, and they can't always understand what you're trying to do. But they're getting better all the time. In the future, they'll be more powerful and helpful: they'll understand your code even better and generate even more creative ideas.&lt;/p&gt;

&lt;h2&gt;
  
  
  Share Your Thoughts 🌟
&lt;/h2&gt;

&lt;p&gt;Missed any cool AI tools? 😅 Tell me your faves or awesome ones I might've missed!&lt;br&gt;
I'd also like to hear your thoughts &amp;amp; suggestions - I'm always looking to improve :)&lt;/p&gt;

</description>
      <category>javascript</category>
      <category>productivity</category>
      <category>ai</category>
      <category>chatgpt</category>
    </item>
  </channel>
</rss>
