<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Tobias Herber</title>
    <description>The latest articles on Forem by Tobias Herber (@herber).</description>
    <link>https://forem.com/herber</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F20731%2F245dcbc0-a87e-48fb-8e16-98839324d919.png</url>
      <title>Forem: Tobias Herber</title>
      <link>https://forem.com/herber</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/herber"/>
    <language>en</language>
    <item>
      <title>Why Your AI Agent Needs MCP (And When It Doesn't)</title>
      <dc:creator>Tobias Herber</dc:creator>
      <pubDate>Sun, 26 Oct 2025 16:42:00 +0000</pubDate>
      <link>https://forem.com/herber/why-your-ai-agent-needs-mcp-and-when-it-doesnt-5bjg</link>
      <guid>https://forem.com/herber/why-your-ai-agent-needs-mcp-and-when-it-doesnt-5bjg</guid>
      <description>&lt;p&gt;You've built an AI agent. It's smart, it's conversational, and it can reason through complex problems. There's just one problem: it lives in a bubble.&lt;/p&gt;

&lt;p&gt;Your agent can't check Slack, pull data from your Postgres database, read files from Google Drive, or update tickets in Linear. Every time you want to add a new integration, you're staring down weeks of custom API work, authentication headaches, and the inevitable "wait, their API changed again?" moments.&lt;/p&gt;

&lt;p&gt;This is the N×M problem, and it's been quietly killing AI agent projects for years. Each of your N agents needs to connect to M services, and every service speaks a different language. The math is brutal: 10 agents × 50 services = 500 custom integrations to build and maintain.&lt;/p&gt;
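&lt;p&gt;The shift from point-to-point integrations to a shared protocol can be sketched as simple arithmetic: each side implements the protocol once instead of every pair getting its own adapter.&lt;/p&gt;

```python
# Integration counts: point-to-point adapters versus a shared protocol.
def point_to_point(agents: int, services: int) -> int:
    # Every agent needs its own adapter for every service.
    return agents * services

def shared_protocol(agents: int, services: int) -> int:
    # Every agent and every service implements the protocol once.
    return agents + services

print(point_to_point(10, 50))   # 500 custom integrations
print(shared_protocol(10, 50))  # 60 protocol implementations
```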

&lt;p&gt;Enter the Model Context Protocol.&lt;/p&gt;

&lt;h2&gt;
  
  
  What MCP Actually Solves
&lt;/h2&gt;

&lt;p&gt;MCP replaces fragmented, per-source integrations with a single open standard: instead of writing a custom implementation for every new data source, you connect AI systems to data through one universal protocol.&lt;/p&gt;

&lt;p&gt;Someone (maybe you, maybe someone who publishes their implementation on GitHub) writes an MCP server once, and any MCP-capable agent can use it. The usual analogy: before USB-C, your laptop needed a different port for everything (power, display, data transfer, peripherals), each with its own cable and its own standard. MCP is the USB-C port for agent integrations.&lt;/p&gt;

&lt;p&gt;But here's what makes MCP different from just another API standard: it's built specifically for how AI agents actually work. MCP enforces consistency with well-defined input schemas per tool and (ideally) deterministic execution. In a perfect world, OpenAPI would do the same for HTTP. But sadly, we don’t live in a utopia.&lt;/p&gt;
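&lt;p&gt;To make the "well-defined input schemas per tool" idea concrete, here is a minimal sketch of an MCP-style tool descriptor and a toy validation step. The tool name, description, and schema fields are hypothetical, and real MCP SDKs do full JSON Schema validation; this only illustrates the shape of the contract a tool advertises to the model.&lt;/p&gt;

```python
# Hypothetical MCP-style tool descriptor: each tool advertises a name,
# a description, and a JSON-Schema-like input schema the model must follow.
TOOL = {
    "name": "query_database",  # hypothetical tool name, for illustration
    "description": "Run a read-only SQL query against the configured database.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "sql": {"type": "string"},
            "max_rows": {"type": "integer"},
        },
        "required": ["sql"],
    },
}

def validate_call(tool: dict, arguments: dict) -> bool:
    """Toy check that a proposed tool call satisfies the tool's schema."""
    schema = tool["inputSchema"]
    if any(key not in arguments for key in schema.get("required", [])):
        return False  # a required argument is missing
    return all(key in schema["properties"] for key in arguments)

print(validate_call(TOOL, {"sql": "SELECT 1"}))    # True
print(validate_call(TOOL, {"query": "SELECT 1"}))  # False: unknown field, 'sql' missing
```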

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxvlx0t3zkukvxv34qq84.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxvlx0t3zkukvxv34qq84.png" alt="MCP" width="800" height="554"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Four Reasons Your Agent Needs MCP
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. Development Velocity
&lt;/h3&gt;

&lt;p&gt;The integration problem compounds quickly. Each service has its own authentication flow, rate-limiting strategy, error-handling patterns, and data models. The teams we talk to often end up integrating the same applications over and over again for different projects, with only about a third of organizations' internal software assets actually available for developers to reuse.&lt;/p&gt;

&lt;p&gt;MCP flips this equation. Build or configure the integration once, use it everywhere. MCP enables AI agents to better retrieve relevant information and produce more nuanced code with fewer attempts.&lt;/p&gt;

&lt;p&gt;The difference shows up in iteration speed. MCP allows quick validation of agent ideas before building anything, enabling comprehensive testing without writing code. You can prototype with real integrations, see what works, and only then commit to building production workflows.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Reusability at Scale
&lt;/h3&gt;

&lt;p&gt;Traditional API integrations don't travel well. The Stripe integration you built for Project A needs substantial rework for Project B. The authentication layer you wrote for the customer dashboard won't work in your internal tools. You're constantly rebuilding variations of the same thing.&lt;/p&gt;

&lt;p&gt;MCP enables a new form of reuse purpose-built for LLMs and agents: tools, data, and workflows are exposed in a format AI models can use directly, without wrappers or custom integration, and LLMs can discover and use existing systems dynamically at run time.&lt;/p&gt;

&lt;p&gt;The ecosystem effect matters here. The MCP ecosystem has grown rapidly with over 16,000 MCP servers available, covering everything from databases and file systems to development tools and productivity applications. When you need a new integration, there's a decent chance someone's already built it. When you build one, others can use it (as long as you give it to them).&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Security and Compliance
&lt;/h3&gt;

&lt;p&gt;Giving AI agents access to your data infrastructure raises legitimate security concerns. With traditional API integrations, you're managing authentication, authorization, and audit logging separately for each service. The attack surface grows linearly with each integration you add.&lt;/p&gt;

&lt;p&gt;MCP standardizes authorization using OAuth 2.1 with mandatory PKCE (Proof Key for Code Exchange), providing enhanced security right out of the box and protecting against common attacks like authorization code interception.&lt;/p&gt;
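&lt;p&gt;The PKCE mechanism mentioned above is worth seeing in miniature. Per RFC 7636, the client generates a random code verifier, sends only its SHA-256 hash (the challenge) with the authorization request, and reveals the verifier when exchanging the code, so an intercepted authorization code is useless on its own. This is a minimal sketch of that pairing, not a full OAuth 2.1 client:&lt;/p&gt;

```python
import base64
import hashlib
import secrets

def make_pkce_pair() -> tuple[str, str]:
    """Generate a PKCE code_verifier and its S256 code_challenge (RFC 7636)."""
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
    return verifier, challenge

def server_check(verifier: str, challenge: str) -> bool:
    """The authorization server recomputes the hash at token exchange."""
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode() == challenge

verifier, challenge = make_pkce_pair()
print(server_check(verifier, challenge))  # True: verifier matches its challenge
```

&lt;p&gt;An attacker who steals the authorization code but not the verifier fails this check, which is exactly the interception attack PKCE exists to stop.&lt;/p&gt;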

&lt;p&gt;More importantly, this approach lets enterprises integrate their existing Single Sign-On infrastructure, so users can access any MCP server with standard corporate credentials while maintaining centralized identity management and audit logging across all deployments.&lt;/p&gt;

&lt;p&gt;The tradeoff? MCP servers represent high-value targets because they typically store authentication tokens for multiple services, creating a "keys to the kingdom" scenario where compromising a single MCP server could grant attackers broad access to all connected services. This isn't unique to MCP—any integration hub faces this problem. The solution is proper token management, rotation policies, and treating your MCP infrastructure with the same security rigor as your authentication service. The classic painful parts of security.&lt;/p&gt;

&lt;h2&gt;
  
  
  When You Might Not Want to Use MCP
&lt;/h2&gt;

&lt;p&gt;Here's the part where we get honest: MCP isn't always the answer.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftv2xc5hvnvzv5og4ow0r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftv2xc5hvnvzv5og4ow0r.png" alt="MCP" width="800" height="575"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Skip MCP If You Only Need 1-2 Simple Integrations
&lt;/h3&gt;

&lt;p&gt;MCP's value is limited when you're integrating only a couple of tools; at that scale, the overhead of an MCP implementation isn't justified.&lt;/p&gt;

&lt;p&gt;If you're just pulling data from a single Postgres database, a direct connection is probably simpler. MCP's power comes from orchestrating multiple services—if you don't need that, you're adding unnecessary abstraction.&lt;/p&gt;
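&lt;p&gt;For that single-database case, the direct connection really is the whole integration. Here is a sketch using Python's stdlib sqlite3 module as a stand-in for a Postgres client like psycopg (the table and rows are made up for illustration):&lt;/p&gt;

```python
import sqlite3

# One database, one driver, one query: no protocol layer, no tool discovery.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'ada'), (2, 'grace')")

def fetch_users(conn: sqlite3.Connection) -> list[tuple]:
    """A direct query; the 'integration' is this one function."""
    return conn.execute("SELECT id, name FROM users ORDER BY id").fetchall()

print(fetch_users(conn))  # [(1, 'ada'), (2, 'grace')]
```

&lt;p&gt;If your agent only ever needs this one call, wrapping it in an MCP server adds a layer without adding capability.&lt;/p&gt;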

&lt;h3&gt;
  
  
  Skip MCP for Ultra-Low Latency Requirements
&lt;/h3&gt;

&lt;p&gt;For high-performance, low-latency applications, direct API calls are more efficient, as MCP adds a reasoning layer that introduces latency as the model decides how to use tools.&lt;/p&gt;

&lt;p&gt;If you're building high-frequency trading algorithms, IoT sensor networks, or anything where sub-100ms latency is critical, the overhead of MCP's reasoning layer will hurt. Direct API calls give you predictable, minimal latency.&lt;/p&gt;

&lt;h3&gt;
  
  
  Skip MCP in Highly Regulated Industries
&lt;/h3&gt;

&lt;p&gt;For regulated industries, MCP currently lacks native support for end-to-end encryption, is not certified under SOC 2, PCI DSS, or FedRAMP, and has sparse documentation that regulators expect.&lt;/p&gt;

&lt;p&gt;If you're in healthcare, finance, or any industry where compliance certifications matter more than velocity, traditional API approaches with established audit trails might be necessary. MCP is maturing fast here, but it's not quite enterprise-certified everywhere yet.&lt;/p&gt;

&lt;p&gt;Here’s the catch though: &lt;a href="https://metorial.com" rel="noopener noreferrer"&gt;Metorial&lt;/a&gt; solves those problems for you. Let’s go through that list step by step.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;You can set up any of our more than 600 MCP servers on Metorial in just a couple of clicks; no matter what, you’re not going to be this fast integrating them yourself.&lt;/li&gt;
&lt;li&gt;Metorial’s open-source MCP engine solves the latency problem by using magic, i.e., our own MCP-compatible protocol, heavy container optimization, our hibernation technology, and more.&lt;/li&gt;
&lt;li&gt;We’re SOC 2 and GDPR compliant. One less thing for you to worry about. One less thing for legal and procurement to bug you about.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Getting Started (The Easy Way)
&lt;/h2&gt;

&lt;p&gt;The MCP ecosystem is growing exponentially, but let's be real: the developer experience is still maturing. Implementing an MCP server involves substantial complexity, basic examples can run to hundreds of lines of code, and testing options are limited.&lt;/p&gt;

&lt;p&gt;This is where platforms like &lt;a href="https://metorial.com" rel="noopener noreferrer"&gt;Metorial&lt;/a&gt; come in. Instead of configuring individual MCP servers, dealing with authentication for each service, and maintaining everything yourself, you get 600+ integrations that just work. A few lines of code, and your agent can talk to Slack, GitHub, Notion, Stripe, Postgres, and hundreds of other services.&lt;/p&gt;

&lt;p&gt;It's the difference between building your own USB-C cable and just ordering one off of Amazon.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Bottom Line
&lt;/h2&gt;

&lt;p&gt;MCP is winning for agentic AI applications. Major developer-focused companies like Zed, Replit, Stripe, Linear, OpenAI, and Cursor jumped on MCP quickly, indicating strong demand for a standardized way to integrate AI with development environments.&lt;/p&gt;

&lt;p&gt;But winning doesn't mean universal. Use MCP when you need to orchestrate multiple services, enable agent autonomy, or ship integrations fast. Use direct APIs when performance, control, or simplicity is paramount.&lt;/p&gt;

&lt;p&gt;The best teams use both strategically. They prototype with MCP, optimize critical paths with direct APIs, and focus their engineering time on what actually differentiates their product.&lt;/p&gt;

&lt;p&gt;Whether you build your own MCP implementation or use a platform like &lt;a href="https://metorial.com" rel="noopener noreferrer"&gt;Metorial&lt;/a&gt;, the important thing is choosing the integration strategy that lets you ship faster. Because at the end of the day, the best integration approach is the one that gets your agent into production.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Building AI agents that need reliable integrations? Check out how &lt;a href="https://metorial.com" rel="noopener noreferrer"&gt;Metorial&lt;/a&gt; makes connecting to 600+ services as simple as a few lines of code.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>mcp</category>
      <category>agents</category>
    </item>
    <item>
      <title>My First Time Vibe Coding: A Skeptic's Journey</title>
      <dc:creator>Tobias Herber</dc:creator>
      <pubDate>Fri, 24 Oct 2025 04:51:05 +0000</pubDate>
      <link>https://forem.com/herber/my-first-time-vibe-coding-a-skeptics-journey-30gd</link>
      <guid>https://forem.com/herber/my-first-time-vibe-coding-a-skeptics-journey-30gd</guid>
      <description>&lt;p&gt;It’s October 2025 and it’s my first time vibe coding. I have mixed feelings.&lt;/p&gt;

&lt;p&gt;Frankly, when vibe coding tools came out I was very skeptical. Don't get me wrong, I like using LLMs and I've gotten so used to GitHub Copilot just completing my thoughts in VSCode and Vim. But generating entire code bases with LLMs? I'm not so sure about that.&lt;/p&gt;

&lt;p&gt;Then from time to time, I looked at code from Lovable, Repl.it, and Cursor that my friends had sent me. I wasn't impressed. It was full of redundancies. It wasn't elegant. It was code that's fine for a beginner, someone making their first strides at building products, not someone I would let touch an actual production code base.&lt;/p&gt;

&lt;p&gt;Also for context, here at Metorial we build infrastructure that's capable of running tens of thousands of MCP servers concurrently. We basically invented hibernation for MCP. Everything is completely custom and there is actual production infrastructure that our customers rely on behind it. Not something to play around with. But the other day I had this idea for a better MCP testing playground, called Starbase. Starbase is an experiment. A toy. Starbase can't take our customers down. Starbase can't expose our customers' data.&lt;/p&gt;

&lt;p&gt;People keep telling me that they vibecode so much now and I'm always like "yeah, nah!". This time, I thought I should give it a shot and Starbase is the right opportunity to do so. Not much at stake, but a lot to learn.&lt;/p&gt;

&lt;p&gt;This isn't a benchmark or anything even close to scientific. It's just my learnings as an anti-vibecoder vibecoding for the first time.&lt;/p&gt;

&lt;h2&gt;
  
  
  Setting the Scene
&lt;/h2&gt;

&lt;p&gt;I used Claude Code. I felt like that's probably the best option for someone like me. I could still use the terminal, don't have to learn too much that's new, and don't have to switch to a new text editor.&lt;/p&gt;

&lt;p&gt;I wanted to give it a fair chance so I actually put some effort into writing the first prompt. Clearly explaining what I want it to build. Giving it technical directions. Very precisely explaining the UI and the functionality.&lt;/p&gt;

&lt;p&gt;I used a stack that it would probably have a lot of data for: Next.js + Prisma + styled-components. I don't want to hear anything about Tailwind! I write CSS, and so should Claude Code.&lt;/p&gt;

&lt;p&gt;I tried to go step by step. I had it build a very basic version initially. Something that barely functions and lacks the features I really wanted. I then iterated feature by feature asking it to add them.&lt;/p&gt;

&lt;p&gt;And you know what? The first thing it gave me was impressive. It had a clear vision of the UI and except for a few minor bugs, it delivered. It looked cool. It worked well enough for a prototype. I was seriously reconsidering my stance on vibecoding.&lt;/p&gt;

&lt;p&gt;Then came the advanced features. Starbase is an MCP playground. Nothing complicated but also more than a todo list. MCP connections aren't super straightforward and I wanted to do the MCP connections client side while of course running the models on the backend (to not leak our API keys). A basic auth system. Some simple tables to store previous server connections. Nothing crazy.&lt;/p&gt;

&lt;h2&gt;
  
  
  The back and forth
&lt;/h2&gt;

&lt;p&gt;And then began the back and forth. It made mistakes, lots of them. Every time I got it to fix one, another one would pop up. It was tedious. Some mistakes were simple and it fixed them well. Some were difficult, like the MCP connection handling, and it really struggled with them.&lt;/p&gt;

&lt;p&gt;The MCP client-side connection logic was where things got messy. Claude Code would generate code that might look right at first glance, but it kept mixing up where the connection should be established versus where it should be used. I'd point out the issue, it would fix that specific instance, and then introduce the same pattern in a different file.&lt;/p&gt;

&lt;p&gt;What frustrated me most was the lack of consistency in error handling. Some functions would throw errors, others would return null, and some would just silently fail. When I asked it to standardize error handling across the codebase, it would update the files I mentioned but leave the rest untouched. It couldn't maintain a holistic view of the project’s architecture (and that shouldn’t be a context problem; the context window is more than big enough to fit the entire codebase).&lt;/p&gt;

&lt;h2&gt;
  
  
  The code quality
&lt;/h2&gt;

&lt;p&gt;The code is far from perfect. It's often very inconsistent. It doesn't adhere to the standards you've given it (or even try to set its own standards). This reaffirmed my initial feelings about vibecoding. The code is like someone trying to learn and play around. Not someone who has years of experience and does stuff the right way. It’s not a senior engineer, and not even a junior.&lt;/p&gt;

&lt;p&gt;The styling was all over the place. Despite my explicit instruction to use styled components consistently, I found inline styles sprinkled throughout and some components using Tailwind. When I called this out, it would fix the issue and then after a while generate some new code with the same issue.&lt;/p&gt;

&lt;p&gt;The code structure showed no understanding of separation of concerns. No modularity. No reuse. Business logic lived in components. API calls were duplicated across files. Utility functions that should have been extracted were copied and pasted. When I asked it to refactor, it would clean up some stuff but underlying problems remained.&lt;/p&gt;

&lt;p&gt;The comments were insane. Most good code is self-documenting. Comments should add extra context; the code explains what (if possible), the comments explain why. Claude commented everything in excruciating detail. Things that are super obvious had a comment explaining what they do.&lt;/p&gt;

&lt;h2&gt;
  
  
  My take on vibe coding
&lt;/h2&gt;

&lt;p&gt;I'm a bit torn. On the one hand I am impressed. What it built works. And I was able to do other stuff while babysitting the model here and there. At the same time, after having read the code, I wouldn't let Claude Code even remotely close to Metorial's primary codebase. There's too much at stake. But that highlights an interesting point. There is code where a lot is at stake. Hell, some people are out there writing code that, if it malfunctions, could kill people (not including the people who intentionally write code for killing people). On the other hand, there are codebases where bugs just don't matter that much. Internal tools, or experiments, like Starbase.&lt;/p&gt;

&lt;p&gt;Add to that the fact that I love writing code. And I know many software engineers do too. There's a reason we sometimes talk about recreational programming. There's a reason why many devs work on OSS and side projects on the weekend. Vibecoding takes that away. Instead of being the maker, you are the supervisor, which isn't fun.&lt;/p&gt;

&lt;p&gt;So my conclusion is that I've learned a lot and become a bit more open about vibecoding. While also having my initial fears and biases against it reinforced. It's an interesting tool that, if used correctly, can help many devs (and of course non-technical people) become more productive. But it's no replacement for humans and it takes away the fun. The real value proposition became clear to me afterwards: vibe coding is perfect for the stuff you don't want to write anyway. Boilerplate. Proof of concepts. Internal tools that five people will use. But for anything that matters, anything where code quality affects maintainability, reliability, or user trust, you still need human developers who care about the craft.&lt;/p&gt;

&lt;p&gt;Would I use it again? Absolutely. For the right project. But for now, keep it far away from anything that actually matters.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Building something with MCP? Check out &lt;a href="https://metorial.com/" rel="noopener noreferrer"&gt;Metorial&lt;/a&gt;. We make it dead simple to integrate 600+ tools with your agents. Open source, great SDKs, and actually maintained by people who understand infrastructure.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>vibecoding</category>
      <category>opinion</category>
    </item>
  </channel>
</rss>
