<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Susanna Wong</title>
    <description>The latest articles on Forem by Susanna Wong (@susanna_wong_4e4478740bdf).</description>
    <link>https://forem.com/susanna_wong_4e4478740bdf</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3674950%2F078114fb-d07f-443a-9095-16e6e0728320.jpg</url>
      <title>Forem: Susanna Wong</title>
      <link>https://forem.com/susanna_wong_4e4478740bdf</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/susanna_wong_4e4478740bdf"/>
    <language>en</language>
    <item>
      <title>Supercharge Your Web Dev Game with MCP - Part 3: From Isolated Tools to End-to-End MCP Workflows</title>
      <dc:creator>Susanna Wong</dc:creator>
      <pubDate>Wed, 31 Dec 2025 19:08:37 +0000</pubDate>
      <link>https://forem.com/susanna_wong_4e4478740bdf/supercharge-your-web-dev-game-with-mcp-part-3-from-isolated-tools-to-end-to-end-mcp-workflows-179c</link>
      <guid>https://forem.com/susanna_wong_4e4478740bdf/supercharge-your-web-dev-game-with-mcp-part-3-from-isolated-tools-to-end-to-end-mcp-workflows-179c</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;This is a three-part series based on a talk I gave under the same title.&lt;/em&gt; &lt;br&gt;
&lt;em&gt;In the &lt;a href="https://dev.to/susanna_wong_4e4478740bdf/supercharge-your-web-devgame-with-chrome-mcp-part-1-9gg"&gt;first post&lt;/a&gt;, I talked about why MCP exists at all — as a way to standardise how models interact with tools. In the &lt;a href="https://dev.to/susanna_wong_4e4478740bdf/supercharge-your-web-dev-game-with-mcp-part-2-chrome-devtools-mcp-ai-driven-web-performance-1ili"&gt;second&lt;/a&gt;, I zoomed in on Chrome DevTools MCP and showed how browser-level instrumentation becomes dramatically more powerful when AI can reason over it.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;In this final post, I want to step back and look at the bigger picture: what happens when you don’t just use MCPs, but start extending and composing them to fit your own workflows.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Chrome MCP is powerful on its own, but most developer workflows don’t fail because we lack data — they fail because we lack accumulated understanding. Using a simple performance-tracker MCP built on top of Chrome MCP as an example, I’ll show how chaining MCPs adds memory and automation, turning isolated tools into an end-to-end workflow.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;




&lt;p&gt;&lt;strong&gt;The missing piece: memory&lt;/strong&gt;&lt;br&gt;
Think about a typical performance investigation.&lt;br&gt;
You run a trace. You spot something slow. You apply a fix. You rerun the trace. You move on.&lt;/p&gt;

&lt;p&gt;The artefacts are valuable:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;traces&lt;/li&gt;
&lt;li&gt;metrics&lt;/li&gt;
&lt;li&gt;screenshots&lt;/li&gt;
&lt;li&gt;network waterfalls&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;But once the task is done, they usually disappear. Some results live in ad-hoc notes. Some stay buried in folders. Some only exist in a developer’s head. And crucially, there’s no consistent way to compare runs, track trends, or answer questions like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Has this page actually improved over the last three releases?&lt;/li&gt;
&lt;li&gt;Which changes made performance worse - and when?&lt;/li&gt;
&lt;li&gt;Are we fixing the same regressions repeatedly?&lt;/li&gt;
&lt;/ul&gt;
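&lt;p&gt;To make that concrete, here’s a minimal sketch of how stored run history could answer the first of those questions. The &lt;code&gt;Run&lt;/code&gt; shape and the “no regressions, net gain” rule are assumptions for illustration, not part of any real MCP server:&lt;/p&gt;

```typescript
// Illustrative sketch: answering "has this page improved over the last
// three releases?" from stored run history. The Run shape and the
// "no regressions, net gain" rule are assumptions, not a real API.
interface Run {
  release: string;
  lcpMs: number; // Largest Contentful Paint, in milliseconds
}

function improvedOverLastThree(runs: Run[]): boolean {
  if (runs.length >= 3) {
    const last = runs.slice(-3);
    // No release may be worse than the one before it...
    const noRegressions = last.every(
      (r, i) => i === 0 || last[i - 1].lcpMs >= r.lcpMs
    );
    // ...and the newest must be strictly better than the oldest.
    const netGain = last[0].lcpMs > last[2].lcpMs;
    return noRegressions ? netGain : false;
  }
  return false;
}
```

&lt;p&gt;The point isn’t this particular heuristic; it’s that the question only becomes answerable at all once runs are stored in one consistent shape.&lt;/p&gt;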

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8cwnfzkxjezju1tpevgs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8cwnfzkxjezju1tpevgs.png" alt="problem-statement" width="800" height="436"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This is the gap an end-to-end MCP workflow is designed to fill.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Composing MCP servers, not building monoliths&lt;/strong&gt;&lt;br&gt;
The instinctive reaction might be: “Let’s just make a smarter Chrome MCP.”&lt;/p&gt;

&lt;p&gt;But that’s exactly the wrong direction.&lt;/p&gt;

&lt;p&gt;Instead of adding more responsibility to one server, MCP encourages composition.&lt;/p&gt;

&lt;p&gt;In the workflow I’ll describe here, we deliberately split responsibilities:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Chrome DevTools MCP&lt;/strong&gt;&lt;br&gt;
Responsible only for measurement.&lt;br&gt;
It observes the browser and returns raw facts.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;A custom Performance Tracker MCP&lt;/strong&gt;&lt;br&gt;
Responsible for memory and comparison.&lt;br&gt;
It stores results, tracks changes, and exposes history.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The LLM (via the host/client)&lt;/strong&gt;&lt;br&gt;
Responsible for reasoning.&lt;br&gt;
It decides what to run, what to store, what to compare, and how to explain outcomes.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzsz8ozrkjmfz1a2xyhbt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzsz8ozrkjmfz1a2xyhbt.png" alt="custom-MCP-architecture" width="800" height="436"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Each piece does one job well. None of them becomes a god object.&lt;/p&gt;
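&lt;p&gt;Composition mostly happens in the client’s configuration rather than in any server’s code. In a typical MCP client config it might look like the sketch below; the &lt;code&gt;perf-tracker&lt;/code&gt; entry is hypothetical, and the exact config shape varies by host:&lt;/p&gt;

```json
{
  "mcpServers": {
    "chrome-devtools": {
      "command": "npx",
      "args": ["chrome-devtools-mcp@latest"]
    },
    "perf-tracker": {
      "command": "node",
      "args": ["./perf-tracker-mcp/index.js"]
    }
  }
}
```

&lt;p&gt;Each server stays independently versioned and replaceable; the host is the only place that knows both exist.&lt;/p&gt;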




&lt;p&gt;&lt;strong&gt;A clean separation of concerns&lt;/strong&gt;&lt;br&gt;
This architecture works because each layer stays honest about its role.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Chrome MCP doesn’t “understand” performance - it just measures it.&lt;/li&gt;
&lt;li&gt;The custom MCP doesn’t interpret metrics - it just remembers and exposes them.&lt;/li&gt;
&lt;li&gt;The LLM doesn’t collect raw data - it reasons over what it’s given.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This separation makes the system:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;easier to evolve&lt;/li&gt;
&lt;li&gt;easier to test&lt;/li&gt;
&lt;li&gt;easier to reuse across tools and teams&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Most importantly, it prevents us from baking intelligence into places where it doesn’t belong.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;What the custom MCP actually does&lt;/strong&gt;&lt;br&gt;
The custom MCP server is intentionally boring - and that’s a good thing.&lt;/p&gt;

&lt;p&gt;It doesn’t talk to Chrome.&lt;br&gt;
It doesn’t run traces.&lt;br&gt;
It doesn’t optimise anything.&lt;/p&gt;

&lt;p&gt;Instead, it focuses on three kinds of capability, exactly as MCP intends.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Tools: actions with side effects&lt;/strong&gt;&lt;br&gt;
Tools represent intentional operations:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;start an experiment&lt;/li&gt;
&lt;li&gt;save a diagnostic run&lt;/li&gt;
&lt;li&gt;log a code change&lt;/li&gt;
&lt;li&gt;compare two runs&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Each tool:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;has a strict schema&lt;/li&gt;
&lt;li&gt;performs one deterministic action&lt;/li&gt;
&lt;li&gt;writes or retrieves data from storage&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;They’re auditable, composable, and predictable.&lt;/p&gt;
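&lt;p&gt;As a sketch of what “strict schema, one deterministic action” can look like in practice — the tool name, schema shape, and in-memory store are all illustrative assumptions here; a real server would register the tool through an MCP SDK:&lt;/p&gt;

```typescript
// Illustrative tool definition for the tracker. The names, schema,
// and in-memory store are assumptions for this sketch; a real server
// would register the tool through an MCP SDK.
interface DiagnosticRun {
  experimentId: string;
  lcpMs: number;
  clsScore: number;
  capturedAt?: string;
}

const runStore: DiagnosticRun[] = [];

const saveDiagnosticRun = {
  name: "save_diagnostic_run",
  description: "Persist one measurement run for later comparison.",
  inputSchema: {
    type: "object",
    required: ["experimentId", "lcpMs", "clsScore"],
  },
  // One deterministic action: validate, write, echo back what was written.
  handler(input: DiagnosticRun): DiagnosticRun {
    if (typeof input.lcpMs !== "number") {
      throw new Error("lcpMs must be a number");
    }
    const run = {
      ...input,
      capturedAt: input.capturedAt ?? new Date().toISOString(),
    };
    runStore.push(run);
    return run;
  },
};
```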

&lt;p&gt;&lt;strong&gt;Resources: structured memory&lt;/strong&gt;&lt;br&gt;
Resources expose read-only views of state:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;experiment metadata&lt;/li&gt;
&lt;li&gt;run history&lt;/li&gt;
&lt;li&gt;change timelines&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This allows the LLM to ground its reasoning in facts:&lt;/p&gt;

&lt;p&gt;&lt;em&gt;“Compare the last three runs.”&lt;br&gt;
“Show me regressions since commit X.”&lt;/em&gt;&lt;/p&gt;
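&lt;p&gt;A resource is essentially a URI-addressable, read-only view over the store. The &lt;code&gt;perf://&lt;/code&gt; scheme below is an assumption I’m using for illustration:&lt;/p&gt;

```typescript
// Illustrative read-only resource lookup. The perf:// URI scheme and
// the stored shapes are assumptions for this sketch.
interface HistoryStore {
  [experimentId: string]: { release: string; lcpMs: number }[];
}

const history: HistoryStore = {
  "checkout-page": [
    { release: "v1", lcpMs: 4100 },
    { release: "v2", lcpMs: 2300 },
  ],
};

function readResource(uri: string): string {
  // e.g. "perf://experiments/checkout-page/runs" resolves to run history
  const match = uri.match(/^perf:\/\/experiments\/([^/]+)\/runs$/);
  if (match === null) {
    throw new Error("unknown resource: " + uri);
  }
  // Read-only: callers get a serialised snapshot, never the live store.
  return JSON.stringify(history[match[1]] ?? []);
}
```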

&lt;p&gt;&lt;strong&gt;Prompts: reusable reasoning scaffolds&lt;/strong&gt;&lt;br&gt;
Prompts capture common analysis patterns:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;explain a regression&lt;/li&gt;
&lt;li&gt;summarise improvements&lt;/li&gt;
&lt;li&gt;identify trends over time&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;They reduce prompt duplication and keep reasoning consistent across sessions and users.&lt;/p&gt;
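&lt;p&gt;A prompt in this sense is just a parameterised template the server hands back to the client; the wording below is an illustrative assumption:&lt;/p&gt;

```typescript
// Illustrative "explain a regression" prompt scaffold. The wording and
// parameters are assumptions for this sketch.
function explainRegressionPrompt(args: {
  page: string;
  beforeRun: string;
  afterRun: string;
}): string {
  return [
    "You are analysing a web performance regression on " + args.page + ".",
    "Compare run " + args.beforeRun + " with run " + args.afterRun + ".",
    "List the metrics that got worse, the most likely causes visible in",
    "the trace data, and one concrete next step per cause.",
  ].join("\n");
}
```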

&lt;p&gt;The overall setup of this MCP is laid out as follows:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbh8xu5n97nxpyrd8jlln.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbh8xu5n97nxpyrd8jlln.png" alt="MCP-server-design" width="800" height="436"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The key idea here is subtle but important:&lt;br&gt;
&lt;strong&gt;A well-designed MCP server exposes actions, memory, and reasoning scaffolds — not intelligence.&lt;/strong&gt;&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;How the end-to-end workflow plays out&lt;/strong&gt;&lt;br&gt;
Once these pieces are in place, the workflow becomes almost boringly smooth.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq12uck2rrirontyvxvlm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq12uck2rrirontyvxvlm.png" alt="MCP-workflow" width="800" height="444"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. A developer asks a high-level question:&lt;/strong&gt;&lt;br&gt;
&lt;em&gt;“Improve performance on the checkout page.”&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. The LLM calls Chrome MCP to:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;launch a browser&lt;/li&gt;
&lt;li&gt;navigate to the page&lt;/li&gt;
&lt;li&gt;run a performance trace&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;3. Chrome MCP returns raw artefacts:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Core Web Vitals&lt;/li&gt;
&lt;li&gt;traces&lt;/li&gt;
&lt;li&gt;network data&lt;/li&gt;
&lt;li&gt;screenshots&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;4. The LLM analyses the data and decides:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;what’s worth keeping&lt;/li&gt;
&lt;li&gt;what changed&lt;/li&gt;
&lt;li&gt;what to fix next&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;5. The LLM calls the custom MCP to:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;save the diagnostic run&lt;/li&gt;
&lt;li&gt;log associated code changes&lt;/li&gt;
&lt;li&gt;compare against previous runs&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;6. Fixes are applied.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;7. The exact same measurements are rerun.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;8. The LLM compares before and after:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;explains improvements&lt;/li&gt;
&lt;li&gt;highlights regressions&lt;/li&gt;
&lt;li&gt;summarises trends over time&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;At no point does any single component need to know the whole system.&lt;/p&gt;
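&lt;p&gt;Mechanically, the host just routes each tool call to whichever server exposes that tool. A minimal sketch of that dispatch, with both servers stubbed as plain functions — every name and return value here is illustrative:&lt;/p&gt;

```typescript
// Minimal sketch of host-side dispatch across two MCP servers, with
// both servers stubbed as plain functions. Tool names loosely mirror
// the workflow above; every value here is illustrative.
interface ToolArgs {
  [key: string]: unknown;
}
type Server = { [tool: string]: (args: ToolArgs) => unknown };

const chromeMcp: Server = {
  navigate_page: (args) => ({ ok: true, url: args.url }),
  performance_start_trace: () => ({ tracing: true }),
  performance_stop_trace: () => ({ lcpMs: 4100, clsScore: 0.12 }),
};

const trackerMcp: Server = {
  save_diagnostic_run: (args) => ({ saved: args }),
  compare_runs: () => ({ lcpDeltaMs: -1800 }),
};

// The host owns routing: the model names a tool, the host finds the
// server that exposes it. Neither server knows the other exists.
function callTool(name: string, args: ToolArgs): unknown {
  for (const server of [chromeMcp, trackerMcp]) {
    if (name in server) return server[name](args);
  }
  throw new Error("no server exposes tool: " + name);
}
```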




&lt;p&gt;&lt;strong&gt;Why this feels different from “AI tools”&lt;/strong&gt;&lt;br&gt;
What’s interesting about this setup is that nothing here is particularly magical.&lt;/p&gt;

&lt;p&gt;There’s no new model capability.&lt;br&gt;
No clever prompt trick.&lt;br&gt;
No opaque automation.&lt;/p&gt;

&lt;p&gt;The leverage comes from structure:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;consistent measurement&lt;/li&gt;
&lt;li&gt;persistent memory&lt;/li&gt;
&lt;li&gt;explicit composition&lt;/li&gt;
&lt;li&gt;clear responsibility boundaries&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is why MCP feels less like an AI feature and more like an architectural shift.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;From one-off assistance to continuous workflows&lt;/strong&gt;&lt;br&gt;
Most AI dev tools today are optimised for moments:&lt;br&gt;
&lt;em&gt;“Generate this code.”&lt;br&gt;
“Explain this error.”&lt;br&gt;
“Summarise this diff.”&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;MCP workflows are optimised for continuity:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;across runs&lt;/li&gt;
&lt;li&gt;across changes&lt;/li&gt;
&lt;li&gt;across time&lt;/li&gt;
&lt;li&gt;across people&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That’s the real shift. Not smarter models - but systems that can remember, compare, and evolve alongside the codebase.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Final thoughts&lt;/strong&gt;&lt;br&gt;
Across these three posts, I’ve tried to make one argument:&lt;br&gt;
AI becomes genuinely useful in developer workflows not when it gets better at guessing, but when it’s embedded into the same systems we already trust - browsers, filesystems, version control, and now, structured protocols like MCP.&lt;/p&gt;

&lt;p&gt;Chrome MCP shows what’s possible when AI can see the browser.&lt;br&gt;
Custom MCPs show what’s possible when AI can remember.&lt;/p&gt;

&lt;p&gt;Together, they point toward workflows that are faster, more reliable, and easier to reason about.&lt;/p&gt;

&lt;p&gt;That’s not a future vision. It’s already buildable - one focused MCP server at a time.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;This is the first time I’ve turned a talk into a blog post series, so I’d genuinely love any feedback — especially on what resonated, what didn’t, or what you’d like me to expand on next, as I’ll be continuing to develop this material for more conference talks throughout the year. Looking forward to hearing from you, and see you at my next post!&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

</description>
      <category>webdev</category>
      <category>ai</category>
      <category>programming</category>
      <category>webperf</category>
    </item>
    <item>
      <title>Supercharge Your Web Dev Game with MCP - Part 2: Chrome DevTools MCP + AI-Driven Web Performance</title>
      <dc:creator>Susanna Wong</dc:creator>
      <pubDate>Tue, 30 Dec 2025 02:35:32 +0000</pubDate>
      <link>https://forem.com/susanna_wong_4e4478740bdf/supercharge-your-web-dev-game-with-mcp-part-2-chrome-devtools-mcp-ai-driven-web-performance-1ili</link>
      <guid>https://forem.com/susanna_wong_4e4478740bdf/supercharge-your-web-dev-game-with-mcp-part-2-chrome-devtools-mcp-ai-driven-web-performance-1ili</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;In &lt;a href="https://dev.to/susanna_wong_4e4478740bdf/supercharge-your-web-devgame-with-chrome-mcp-part-1-9gg"&gt;part 1&lt;/a&gt; of this blog post series, I talked about why MCP exists at all - how it creates a clean contract between language models and the tools developers rely on every day. In this post, I want to zoom in on one MCP server that, in my experience, changes the game for web developers more than most: the Chrome DevTools MCP.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Chrome MCP was released recently, and there’s already plenty of content showing how to install it or wire it up. I want to focus on something slightly different: how it works under the hood, why its architecture matters, and what becomes possible when browser-level performance data is no longer something you manually collect and interpret, but something an AI agent can reason about directly. In case you are interested, here are some of the great materials to get you started on the actual steps of hooking Chrome MCP to your workflow:&lt;/em&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhv69yl9uga7x62g2l2dq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhv69yl9uga7x62g2l2dq.png" alt="chrome_mcp_resources" width="800" height="451"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;


&lt;p&gt;&lt;strong&gt;Why the browser matters more than ever&lt;/strong&gt;&lt;br&gt;
For web engineers, the browser is the truth.&lt;/p&gt;

&lt;p&gt;No matter how elegant your code looks in an editor, what users experience is defined by:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;how fast the page renders&lt;/li&gt;
&lt;li&gt;what blocks rendering&lt;/li&gt;
&lt;li&gt;which scripts monopolise the main thread&lt;/li&gt;
&lt;li&gt;how layout shifts during load&lt;/li&gt;
&lt;li&gt;how interactions feel under real CPU and network constraints&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We already have excellent tools to inspect all of this - Chrome DevTools, Lighthouse, performance traces - but they’re still deeply manual. Running audits, capturing traces, correlating metrics, and explaining why something is slow is work that lives almost entirely in the developer’s head.&lt;/p&gt;

&lt;p&gt;Chrome MCP changes that dynamic by making the browser a programmable, inspectable system that AI can work with directly.&lt;/p&gt;



&lt;p&gt;&lt;strong&gt;Chrome MCP &lt;em&gt;is&lt;/em&gt; CDP, but usable by AI&lt;/strong&gt;&lt;br&gt;
At its core, Chrome MCP is built on top of the Chrome DevTools Protocol (CDP) - the same low-level protocol that powers Chrome DevTools, Puppeteer, and most browser automation tools.&lt;/p&gt;

&lt;p&gt;CDP exposes almost everything happening inside the browser:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;DOM structure and computed styles&lt;/li&gt;
&lt;li&gt;network requests and timing breakdowns&lt;/li&gt;
&lt;li&gt;JavaScript execution and long tasks&lt;/li&gt;
&lt;li&gt;performance traces and Core Web Vitals&lt;/li&gt;
&lt;li&gt;rendering, layout shifts, and paint events&lt;/li&gt;
&lt;li&gt;device, CPU, and network emulation&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnjt7fcy4vh6uys28h2e2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnjt7fcy4vh6uys28h2e2.png" alt="CDP" width="800" height="436"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The problem has never been capability. CDP is extremely powerful.&lt;br&gt;
 The problem is that it’s low-level, verbose, and not AI-friendly.&lt;br&gt;
Chrome MCP solves this by wrapping CDP in high-level, strongly-typed MCP tools - things like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- navigate_page
- performance_start_trace
- performance_stop_trace
- performance_analyze_insight
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Instead of emitting thousands of raw CDP events, the MCP server gathers, structures, and returns the data in a way an LLM can actually reason over.&lt;/p&gt;
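&lt;p&gt;Under MCP, invoking one of these tools is an ordinary JSON-RPC request; schematically, it looks something like this (the argument names shown are illustrative):&lt;/p&gt;

```json
{
  "jsonrpc": "2.0",
  "id": 7,
  "method": "tools/call",
  "params": {
    "name": "performance_start_trace",
    "arguments": { "reload": true, "autoStop": true }
  }
}
```

&lt;p&gt;The response comes back as structured content the model can read directly, rather than as a stream of raw CDP events.&lt;/p&gt;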

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3rf6opwe6nmep1ph4fbs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3rf6opwe6nmep1ph4fbs.png" alt="chrome_mcp_flow" width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In short:&lt;br&gt;
&lt;strong&gt;&lt;em&gt;CDP gives superpowers. Chrome MCP makes those powers usable by AI.&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;



&lt;p&gt;&lt;strong&gt;How to think about Chrome MCP as a developer&lt;/strong&gt;&lt;br&gt;
From an architectural point of view, I find it helpful to think of Chrome MCP as:&lt;/p&gt;

&lt;p&gt;&lt;em&gt;A local microservice that exposes DevTools capabilities as strongly-typed MCP tools, backed by Puppeteer/CDP, with Chrome lifecycle and isolation handled for you.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;This framing matters because it changes how you integrate it:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You don’t treat it like a pile of scripts&lt;/li&gt;
&lt;li&gt;You treat it like a backend service&lt;/li&gt;
&lt;li&gt;The tool schemas are your API surface&lt;/li&gt;
&lt;li&gt;Behaviour is tuned via configuration, not code changes&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Under the hood, Chrome MCP is essentially a Node.js MCP server that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;launches or attaches to a Chrome instance&lt;/li&gt;
&lt;li&gt;controls it via Puppeteer and CDP&lt;/li&gt;
&lt;li&gt;exposes a curated set of browser tools via MCP&lt;/li&gt;
&lt;li&gt;returns structured data back to the client&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Because it’s packaged and distributed as an npm-based MCP server, it behaves like any other dependency in your workflow: versioned, upgradable, and composable.&lt;/p&gt;



&lt;p&gt;&lt;strong&gt;A layered architecture, not a pile of hacks&lt;/strong&gt;&lt;br&gt;
One thing I appreciate about the Chrome MCP implementation is that it’s clearly designed as a system to seamlessly integrate into the web development workflow.&lt;/p&gt;

&lt;p&gt;At a high level, it follows a layered architecture:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;MCP server layer&lt;/strong&gt;:&lt;br&gt;
 Handles protocol, tool registration, permissions, and transport.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Tool adapter layer&lt;/strong&gt;:&lt;br&gt;
 Each MCP tool is a small, well-defined function with validated inputs and structured outputs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Chrome runtime layer&lt;/strong&gt;:&lt;br&gt;
 A real Chrome or Chromium instance — headless or headful — executing browser actions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Data collection layer&lt;/strong&gt;:&lt;br&gt;
 Aggregates traces, metrics, screenshots, network data, and serialises them into MCP responses.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzi4m0kj9an21gvdbol9m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzi4m0kj9an21gvdbol9m.png" alt="chrome_mcp_architecture" width="800" height="436"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Conceptually, every request flows the same way:&lt;br&gt;
&lt;strong&gt;LLM / IDE → MCP client → Chrome MCP server → Puppeteer / CDP → Chrome → structured data back&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F116awhhbsombx0mxe8za.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F116awhhbsombx0mxe8za.png" alt="chrome-mcp-workflow" width="800" height="436"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;That consistency is what makes the system predictable and automatable.&lt;/p&gt;



&lt;p&gt;&lt;strong&gt;Web performance: from manual ritual to closed loop&lt;/strong&gt;&lt;br&gt;
This is where Chrome MCP really shines.&lt;/p&gt;

&lt;p&gt;Traditionally, performance work looks like this:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Run Lighthouse&lt;/li&gt;
&lt;li&gt;Capture a trace&lt;/li&gt;
&lt;li&gt;Stare at flame charts&lt;/li&gt;
&lt;li&gt;Guess which optimisations matter&lt;/li&gt;
&lt;li&gt;Apply fixes&lt;/li&gt;
&lt;li&gt;Re-run everything&lt;/li&gt;
&lt;li&gt;Hope results are comparable&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Chrome MCP turns this into a closed-loop, repeatable workflow.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What Chrome MCP does&lt;/strong&gt;&lt;br&gt;
Chrome MCP is responsible for measurement and instrumentation, not interpretation.&lt;/p&gt;

&lt;p&gt;It can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;launch a controlled browser session&lt;/li&gt;
&lt;li&gt;navigate to a page or run scripted flows&lt;/li&gt;
&lt;li&gt;start and stop performance tracing&lt;/li&gt;
&lt;li&gt;collect:

&lt;ul&gt;
&lt;li&gt;Core Web Vitals (LCP, CLS, INP)&lt;/li&gt;
&lt;li&gt;performance timelines&lt;/li&gt;
&lt;li&gt;network waterfalls&lt;/li&gt;
&lt;li&gt;screenshots and filmstrips&lt;/li&gt;
&lt;li&gt;DOM attribution (e.g. which element caused LCP)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;All of this is gathered programmatically and reproducibly.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flljsiwz4q0341q78rq90.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flljsiwz4q0341q78rq90.png" alt="Chrome-MCP-tools" width="800" height="436"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What the LLM does&lt;/strong&gt;&lt;br&gt;
The LLM - running in your IDE or host - reads that structured data and answers higher-level questions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Why is LCP slow?&lt;/li&gt;
&lt;li&gt;Which requests are render-blocking?&lt;/li&gt;
&lt;li&gt;Is CLS caused by images, fonts, or late hydration?&lt;/li&gt;
&lt;li&gt;Which scripts dominate main-thread time?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This separation is important. Chrome MCP provides evidence.&lt;br&gt;
The LLM provides reasoning.&lt;/p&gt;



&lt;p&gt;&lt;strong&gt;Performance tools that matter in practice&lt;/strong&gt;&lt;br&gt;
The Chrome MCP performance toolset is intentionally small but powerful:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;performance_start_trace
performance_stop_trace
performance_analyze_insight
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;These tools abstract away a lot of fragile timing logic:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;when to start tracing&lt;/li&gt;
&lt;li&gt;when to stop&lt;/li&gt;
&lt;li&gt;how to correlate events&lt;/li&gt;
&lt;li&gt;how to extract meaningful metrics&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The result is something that feels closer to a performance RPC than a browser script.&lt;/p&gt;

&lt;p&gt;Once you have that, new workflows become trivial:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;run a baseline performance trace&lt;/li&gt;
&lt;li&gt;apply a change (code splitting, lazy loading, image optimisation)&lt;/li&gt;
&lt;li&gt;re-run the same trace&lt;/li&gt;
&lt;li&gt;compare before/after metrics&lt;/li&gt;
&lt;li&gt;explain the difference in human terms&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Performance stops being a one-off audit and becomes an iterative feedback loop.&lt;/p&gt;
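&lt;p&gt;The comparison step itself reduces to plain arithmetic once both runs share identical conditions. A sketch, with an assumed metric shape:&lt;/p&gt;

```typescript
// Illustrative before/after comparison of two traces captured under
// identical conditions. The metric shape is an assumption.
interface Metrics {
  lcpMs: number;
  clsScore: number;
}

function compareRuns(before: Metrics, after: Metrics): string {
  const lcpDelta = after.lcpMs - before.lcpMs;
  const direction = lcpDelta >= 0 ? "regressed" : "improved";
  const pct = Math.round((Math.abs(lcpDelta) / before.lcpMs) * 100);
  return "LCP " + direction + " by " + Math.abs(lcpDelta) + "ms (" + pct + "%)";
}
```

&lt;p&gt;What makes the number trustworthy isn’t the subtraction; it’s that both traces were captured by the same machine-driven flow.&lt;/p&gt;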




&lt;p&gt;&lt;strong&gt;Why this changes how teams work&lt;/strong&gt;&lt;br&gt;
What excites me most isn’t that Chrome MCP can collect performance data - we’ve been able to do that for years. It’s that the data becomes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;automated&lt;/li&gt;
&lt;li&gt;repeatable&lt;/li&gt;
&lt;li&gt;explainable&lt;/li&gt;
&lt;li&gt;shareable&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Instead of screenshots and gut feelings, you get:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;concrete metrics&lt;/li&gt;
&lt;li&gt;attributed causes&lt;/li&gt;
&lt;li&gt;reproducible runs&lt;/li&gt;
&lt;li&gt;clear before/after comparisons&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That makes performance conversations easier not just with engineers, but with product and leadership too.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;A concrete example: turning performance into a feedback loop&lt;/strong&gt;&lt;br&gt;
To make this less abstract, let me walk through a real kind of scenario where Chrome MCP fundamentally changes how performance work feels.&lt;/p&gt;

&lt;p&gt;Imagine a product page that looks fine in local development, but users are reporting that it feels slow on mobile. Historically, this is where performance work gets fuzzy. You might run Lighthouse once or twice, glance at a flame chart, and come away with a vague sense that “JavaScript is heavy” or “images are probably too large”.&lt;/p&gt;

&lt;p&gt;With Chrome MCP in the loop, the workflow becomes much more explicit.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8rpdd8l57yebow6ohcdo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8rpdd8l57yebow6ohcdo.png" alt="automated workflow" width="800" height="399"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Baseline: measure, don’t guess&lt;/strong&gt;&lt;br&gt;
The first step is to establish a baseline. Using Chrome MCP, the agent launches a headless Chrome session, navigates to the page, and runs a performance trace under controlled conditions - mobile emulation, throttled CPU, and a constrained network.&lt;/p&gt;

&lt;p&gt;What comes back isn’t a score, but structured data:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;LCP at ~4.1s, attributed to a large hero image&lt;/li&gt;
&lt;li&gt;INP degraded by long main-thread tasks during hydration&lt;/li&gt;
&lt;li&gt;Several render-blocking JavaScript bundles&lt;/li&gt;
&lt;li&gt;A noticeable gap between first paint and meaningful interactivity&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This already changes the conversation. Instead of “the page feels slow”, you now have concrete signals and clear suspects.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Intervention: small, targeted fixes&lt;/strong&gt;&lt;br&gt;
Based on the trace data, the LLM proposes a short list of changes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Preload the hero image and serve a smaller responsive variant&lt;/li&gt;
&lt;li&gt;Lazy-load below-the-fold components&lt;/li&gt;
&lt;li&gt;Split a large JavaScript bundle so non-critical code doesn’t block initial render&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These aren’t generic best practices - they’re directly tied to the observed metrics and trace events. The fixes are applied, committed, and ready for validation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;After: rerun the exact same measurement&lt;/strong&gt;&lt;br&gt;
Here’s where Chrome MCP really earns its place.&lt;br&gt;
The agent reruns the same performance trace, with the same throttling and navigation flow. No manual setup. No “did I click the same thing?”. The comparison is apples-to-apples.&lt;/p&gt;

&lt;p&gt;This time, the results look very different:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;LCP drops to ~2.3s&lt;/li&gt;
&lt;li&gt;Main-thread blocking during hydration is significantly reduced&lt;/li&gt;
&lt;li&gt;Network waterfall shows fewer render-blocking resources&lt;/li&gt;
&lt;li&gt;Visual stability improves, with no unexpected layout shifts&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Because both runs are machine-driven, the before/after comparison is clean. The LLM can now explain why things improved, not just that they did.&lt;/p&gt;
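
&lt;p&gt;That explanation step can lean on simple arithmetic over the two runs. A minimal sketch, again with illustrative field names rather than Chrome MCP’s actual output format:&lt;/p&gt;

```javascript
// Given two runs captured under identical conditions, compute per-metric
// deltas so the "why it improved" story has numbers behind it.
function compareRuns(before, after) {
  const deltas = {};
  for (const metric of Object.keys(before)) {
    const change = after[metric] - before[metric];
    deltas[metric] = {
      before: before[metric],
      after: after[metric],
      changePct: Math.round((change / before[metric]) * 100),
    };
  }
  return deltas;
}

const report = compareRuns({ lcpMs: 4100, inpMs: 460 }, { lcpMs: 2300, inpMs: 180 });
console.log(report.lcpMs.changePct); // → -44
```

&lt;p&gt;A 44% LCP reduction is a very different conversation than “it feels faster now”.&lt;/p&gt;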

&lt;p&gt;&lt;strong&gt;Why this matters&lt;/strong&gt;&lt;br&gt;
None of this strictly requires Chrome MCP. But without it, performance work tends to be:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;manual&lt;/li&gt;
&lt;li&gt;inconsistent&lt;/li&gt;
&lt;li&gt;hard to reproduce&lt;/li&gt;
&lt;li&gt;difficult to explain to others&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;With Chrome MCP, performance becomes a closed loop:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Measure with evidence&lt;/li&gt;
&lt;li&gt;Apply targeted fixes&lt;/li&gt;
&lt;li&gt;Re-measure under identical conditions&lt;/li&gt;
&lt;li&gt;Explain the impact clearly&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That loop is what turns performance from a one-off audit into something you can iterate on confidently - and something AI can genuinely help with, rather than hand-waving about.&lt;/p&gt;




&lt;p&gt;Chrome MCP turns the browser into a first-class execution and measurement engine for AI-driven workflows. It doesn’t replace DevTools; it operationalises them. It takes everything we already trust about browser instrumentation and makes it programmable, composable, and AI-native.&lt;/p&gt;

&lt;p&gt;In the next post, I’ll tie everything together and look at what happens when you combine Chrome MCP with other MCP servers — filesystem, Git, design data, and automation — to create end-to-end developer workflows that go far beyond code generation.&lt;/p&gt;

&lt;p&gt;That’s where MCP stops being interesting infrastructure and starts becoming leverage.&lt;/p&gt;

&lt;p&gt;Stay tuned, and see you in the next post!&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>ai</category>
      <category>programming</category>
      <category>productivity</category>
    </item>
    <item>
      <title>Supercharge Your Web Dev game with Chrome MCP - Part 1: All about the MCP</title>
      <dc:creator>Susanna Wong</dc:creator>
      <pubDate>Sun, 28 Dec 2025 01:18:34 +0000</pubDate>
      <link>https://forem.com/susanna_wong_4e4478740bdf/supercharge-your-web-devgame-with-chrome-mcp-part-1-9gg</link>
      <guid>https://forem.com/susanna_wong_4e4478740bdf/supercharge-your-web-devgame-with-chrome-mcp-part-1-9gg</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Hi everyone! I’m a product engineer who’s worked across very different kinds of organisations, from tight-knit tech teams in large corporates, to startups, to the largest consultancy in the world. Across all of them, I’ve seen how products get built at very different speeds, and how much those speeds shape the way teams work.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;For the past six months, I’ve been working at an AI startup, where speed and iteration aren’t occasional crunch modes - they’re the everyday game. In that environment, developers don’t really have the option to ignore AI tooling. If you want to keep up, you have to embrace it, not just to write code faster, but to iterate, debug, validate, and ship better.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;This post is the first in a three-part series based on a talk I gave under the same title. The talk compressed a lot of ideas into slides; this series is my attempt to slow down and properly elaborate on the core concepts behind it.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;In this first part, I’ll focus on the foundations: what the Model Context Protocol (MCP) is, why it exists, and how different types of MCP servers enable more reliable AI-powered developer workflows. The next posts will go deeper into concrete examples and hands-on use cases.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;




&lt;p&gt;&lt;strong&gt;How MCP changes the way we build with AI&lt;/strong&gt;&lt;br&gt;
When people talk about AI in developer workflows, the conversation often jumps straight to code generation. Faster scaffolding. Better autocomplete. Smarter refactors. All useful, but also a little narrow.&lt;/p&gt;

&lt;p&gt;The real friction in day-to-day web development isn’t usually writing code. It’s everything that happens around it: checking layouts, cross-referencing designs, running tests, inspecting network requests, reproducing bugs, validating performance. None of this is hard in isolation. It’s just fragmented.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fko42yyr5lg482nk5i77k.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fko42yyr5lg482nk5i77k.png" alt="MCP-webdev-usecase" width="800" height="443"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If I’m debugging a UI issue, my mental loop looks something like this: open DevTools, inspect the DOM, check the network panel, jump to Figma to confirm spacing or colors, maybe spin up Puppeteer to reproduce the issue, then go back to the editor to apply a fix. Each step is logical. Together, they form a slow, manual choreography that only exists in my head.&lt;br&gt;
This is the gap MCP fills.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;MCP - the missing connective tissue&lt;/strong&gt;&lt;br&gt;
I like to think of MCP as the love language between you, the LLM, and your tools. That phrase sounds playful, but it’s surprisingly accurate.&lt;/p&gt;

&lt;p&gt;Before MCP, LLMs were isolated. They could reason, but they couldn’t see or do much without bespoke integrations. Every tool required custom glue code. Every AI application hard-coded how it talked to Git, the filesystem, a browser, or a design tool. As the number of tools grew, the complexity didn’t scale linearly - it exploded.&lt;br&gt;
This became a real problem once LLMs started moving beyond chatbots into agents, IDEs, and production developer workflows. The model wasn’t the bottleneck anymore. Integration was.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr99al9uv7q5c2urst6u4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr99al9uv7q5c2urst6u4.png" alt="MCP-history" width="800" height="441"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;MCP emerged in late 2024 as a response to this. Its core idea is simple but powerful: standardise how models receive context and use tools. In the same way HTTP decouples browsers from servers, MCP decouples models from the systems they interact with.&lt;br&gt;
That decoupling turns out to be the key to everything that follows.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;A cleaner separation of responsibilities&lt;/strong&gt;&lt;br&gt;
MCP introduces a clear Host–Client–Server architecture.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F90h0supmu4vvz7ne99mg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F90h0supmu4vvz7ne99mg.png" alt="MCP-architecture" width="800" height="396"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As a developer, I appreciate this not because it’s novel, but because it’s familiar. It follows the same separation-of-concerns principles we already trust in software design.&lt;/p&gt;

&lt;p&gt;The host is where I work - my IDE, editor, or chat interface. It owns the UI and the interaction flow. The host doesn’t need to know how tools work internally.&lt;/p&gt;

&lt;p&gt;The client lives inside the host. It speaks MCP. Its job is to manage context, discover available tools, and orchestrate requests. I think of it as the MCP-aware brain.&lt;/p&gt;

&lt;p&gt;The server is where real capabilities live. An MCP server exposes tools, data, and prompts, and handles the messy details of talking to external systems - APIs, databases, files, browsers, or design tools.&lt;/p&gt;

&lt;p&gt;What I find elegant here is that each piece stays honest about its role. Hosts don’t become integration nightmares. Servers don’t need to care about UI or prompt phrasing. The protocol does the connecting.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1xthwhruhol7x2m9ys6n.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1xthwhruhol7x2m9ys6n.png" alt="MCP-setup" width="800" height="293"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Source: “Model Context Protocol (MCP): Landscape, Security Threats, and Future Research Directions”&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;MCP servers aren’t just APIs&lt;/strong&gt;&lt;br&gt;
One subtle but important shift MCP introduces is how we think about “tools”.&lt;/p&gt;

&lt;p&gt;An MCP server doesn’t merely expose endpoints. It exposes capabilities in a model-friendly way, grouped into three concepts: tools, data, and prompts.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frz2ypf8q005z7trv7gkb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frz2ypf8q005z7trv7gkb.png" alt="MCP-server-components" width="800" height="444"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Tools are actions - things the model can do. Run a database query. Call an API. Edit a file. Execute a shell command. These are explicit, structured, and side-effecting.&lt;/p&gt;

&lt;p&gt;Data (or resources) are context: things the model can read. Documentation, logs, code repositories, configuration files. They don’t execute anything, but they ground the model in reality.&lt;/p&gt;

&lt;p&gt;Prompts are reusable ways of thinking. Instead of rewriting the same instructions over and over, prompts encode best practices: how to review a PR, how to analyse logs, how to summarise changes.&lt;br&gt;
What matters isn’t any single piece, but how they work together. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Prompts guide reasoning. Data provides truth. Tools enable action. This combination is what makes MCP workflows feel reliable instead of fragile.&lt;/strong&gt;&lt;/p&gt;
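
&lt;p&gt;A toy sketch of that three-part shape, using plain objects - this is deliberately not the real MCP SDK API, just the conceptual grouping:&lt;/p&gt;

```javascript
// Illustrative only: the three capability types an MCP server exposes.
// Tools act, resources (data) ground, prompts guide.
const server = {
  tools: {
    // an action the model can invoke, with an explicit input shape
    run_query: {
      inputSchema: { sql: "string" },
      handler: ({ sql }) => `rows for: ${sql}`,
    },
  },
  resources: {
    // read-only context the model can load
    "docs://architecture": () => "System overview",
  },
  prompts: {
    // a reusable way of framing a task
    review_pr: (diff) => `Review this diff for correctness and style:\n${diff}`,
  },
};

// The client discovers these capabilities; the model chooses among them:
const result = server.tools.run_query.handler({ sql: "SELECT 1" });
console.log(result); // → "rows for: SELECT 1"
```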




&lt;p&gt;&lt;strong&gt;Three kinds of MCP servers - and why the distinction matters&lt;/strong&gt;&lt;br&gt;
Over time, I’ve found it useful to group MCP servers into three broad categories. This isn’t just taxonomy for its own sake; it helps clarify what role a server plays in a system.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwldc1nr6e8bk8gu5zluy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwldc1nr6e8bk8gu5zluy.png" alt="MCP-server-types" width="800" height="436"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Tool MCPs give the model hands. They let it act. Browser automation, filesystem access, Git operations, CLI wrappers - these are what turn an LLM from a passive assistant into something that can actually execute work. Debugging a UI bug becomes a closed loop: reproduce, inspect, fix, verify.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhn8dm9f5ywi0x3i7hozx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhn8dm9f5ywi0x3i7hozx.png" alt="tool-mcps" width="800" height="436"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Data MCPs give the model eyes. Without access to real designs, APIs, or schemas, an LLM is forced to guess. With Data MCPs - like Figma, Fetch/HTTP, database queries, or documentation servers - the model can reason with accuracy. Design-to-code stops being aspirational and starts being dependable.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0jph0a31lckskhbeh18b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0jph0a31lckskhbeh18b.png" alt="data-mcps" width="800" height="436"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Automation MCPs are where things get interesting. They don’t just expose a single action or dataset; they orchestrate multi-step workflows. CI pipelines, QA routines, design consistency checks - all become repeatable processes instead of manual rituals. If Tool MCPs are hands and Data MCPs are eyes, Automation MCPs are the coordinator that connects everything.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbl7jd3msdc0xyqb3nc9o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbl7jd3msdc0xyqb3nc9o.png" alt="automation-mcps" width="800" height="441"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;The real power is in composition&lt;/strong&gt;&lt;br&gt;
Individually, each MCP category is useful. Together, they’re transformative.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg1oqibzbfpjedsn219xo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg1oqibzbfpjedsn219xo.png" alt="mcp-workflows" width="800" height="441"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A design change in Figma can be detected via a Data MCP, implemented through Tool MCPs, validated in the browser, committed to Git, and wrapped into a pull request - all orchestrated by an Automation MCP. That entire flow is impossible if any one piece is missing.&lt;/p&gt;
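
&lt;p&gt;Sketched as code, with every MCP stubbed out as a plain function (all names and return values here are hypothetical), that flow is just sequential composition:&lt;/p&gt;

```javascript
// Hedged sketch of the composed flow: each MCP server is stubbed as a
// plain async function so the orchestration shape is visible.
async function designToPullRequest(fileKey) {
  const change = await figma.detectChange(fileKey);            // Data MCP: what changed?
  const edit = await editor.applyStyles(change);               // Tool MCP: implement it
  const check = await browser.verifyRendering(edit);           // Tool MCP: validate in the browser
  if (!check.ok) throw new Error("rendering check failed");
  const branch = await git.commit(edit, `sync: ${change.id}`); // Tool MCP: commit
  return ci.openPullRequest(branch);                           // Automation MCP: wrap it up
}

// Stubs standing in for real MCP servers:
const figma = { detectChange: async (k) => ({ id: k, token: "spacing-md" }) };
const editor = { applyStyles: async (c) => ({ files: ["Button.css"], change: c }) };
const browser = { verifyRendering: async () => ({ ok: true }) };
const git = { commit: async (_e, msg) => ({ branch: "sync/figma", msg }) };
const ci = { openPullRequest: async (b) => `PR opened from ${b.branch}` };

designToPullRequest("file-123").then(console.log); // → "PR opened from sync/figma"
```

&lt;p&gt;Remove any one stub and the chain breaks - which is exactly the point about composition.&lt;/p&gt;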

&lt;p&gt;This is the moment where the LLM stops feeling like a chat interface and starts feeling like a teammate. Not because it’s “smarter”, but because it’s finally embedded into the same systems developers already rely on.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;MCP - the standard contract for your bespoke needs&lt;/strong&gt;&lt;br&gt;
MCP isn’t exciting because it’s flashy. It’s exciting because it’s boring in the best possible way. It introduces structure where there was none. It replaces bespoke glue with standard contracts. And it lets us design AI-powered workflows using the same architectural instincts we already trust.&lt;/p&gt;

&lt;p&gt;In the next post, I’ll zoom in on one specific MCP server - Chrome DevTools MCP - and show what this looks like in a real web development workflow.&lt;/p&gt;

&lt;p&gt;That’s where things get practical.&lt;/p&gt;

&lt;p&gt;Stay tuned - see you in the next post!&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>programming</category>
      <category>ai</category>
      <category>productivity</category>
    </item>
  </channel>
</rss>
