This turned out to be a lot easier than I expected! This is my first experiment with MCP, the Model Context Protocol. My previous experience with agents and tools required telling the LLM how to do things, either in the main context of the conversation in plain English or by relying on whatever LLM application I was using to have some interface that made it easier. MCP provides a standard way to link the LLM application you are using to the agent's instructions: what tools and abilities are available to it and how to use them.
For today's experiment, I wanted to control my web browser (Chrome) by simply telling the LLM what I want to do in a chat. I'm chatting with the LLM using AnythingLLM on my MacBook Pro and have a Chrome window open... wouldn't it be nice to say "Hey, go start a blog post for me"!?
AnythingLLM is becoming my tool of choice for connecting to my local llama.cpp server and they recently added MCP support.
The Google Chrome browser has a debugging mode that runs a server for remote tools to connect to it.
And Aleksey Smolenchuk has written a Chrome MCP Server.
Let's see if we can connect all three and do something cool!
First, let's start Chrome in debugging mode. On my Mac, this is done with the --remote-debugging-port command line option, but it also requires --user-data-dir to point somewhere other than the default profile, so we'll use both options. Be sure to close out all Chrome windows first, then from the command line do something like:
/Applications/Google\ Chrome.app/Contents/MacOS/Google\ Chrome --remote-debugging-port=9222 --user-data-dir=/tmp
It should start and tell you that it is listening on 127.0.0.1 at your specified port.
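If you want to confirm the debugger really is listening before wiring anything else up, the DevTools Protocol exposes a plain HTTP endpoint you can hit with curl. A minimal sketch, assuming the default port 9222 used above:

```shell
#!/bin/sh
# Probe Chrome's remote-debugging endpoint. /json/version is a standard
# DevTools Protocol HTTP endpoint that reports the browser build and
# its WebSocket debugger URL.
if curl -sf http://127.0.0.1:9222/json/version; then
  echo "Chrome debugger is up"
else
  echo "Chrome debugger is not reachable on port 9222" >&2
fi
```

If you get "not reachable", double-check that every Chrome window was closed before you relaunched it with the debugging flags.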
Next I downloaded the Chrome MCP Server and followed the instructions on GitHub, which boiled down to:
Clone the repo:
git clone https://github.com/lxe/chrome-mcp.git
cd chrome-mcp
Install Bun:
npm install -g bun
Install the dependencies and start the MCP server:
bun install
bun start
And you should see the MCP server connect to the Chrome remote debugger on port 9222 and start listening for MCP traffic on port 3000.
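Before touching AnythingLLM, it doesn't hurt to confirm the MCP server itself is answering. A rough sketch (the /sse path is the one from the server's config example; note that an SSE stream never ends on its own, so we cap the read and treat curl's exit code 28, timed out after connecting, as a sign of life):

```shell
#!/bin/sh
# Poke the MCP server's SSE endpoint. --max-time caps the read, since
# a healthy SSE stream stays open forever; exit code 28 (timed out
# after connecting) therefore counts as "reachable" here.
curl -s --max-time 2 -o /dev/null http://localhost:3000/sse
case $? in
  0|28) echo "MCP server reachable on port 3000" ;;
  *)    echo "nothing answering on port 3000" >&2 ;;
esac
```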
All that's left now is to get AnythingLLM to start using the Chrome MCP server. There are instructions on the GitHub page for this server, but they will be slightly different for AnythingLLM.
If you are on a Mac, the AnythingLLM MCP servers setup is in your home directory in:
~/Library/Application Support/anythingllm-desktop/storage/plugins/anythingllm_mcp_servers.json
It probably already exists, but has an empty mcpServers section. We'll copy the example from the Chrome MCP server instructions and add a "type" so that it looks like:
{
  "mcpServers": {
    "chrome-control": {
      "url": "http://localhost:3000/sse",
      "disabled": false,
      "alwaysAllow": [],
      "type": "sse"
    }
  }
}
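JSON is unforgiving about things like trailing commas, so it's worth syntax-checking your edit before restarting AnythingLLM. One way is the json.tool module that ships with Python 3, here checking the snippet inline:

```shell
# Syntax-check the config snippet; python3 -m json.tool exits nonzero
# on invalid JSON, so "config OK" only prints if the JSON parses.
python3 -m json.tool >/dev/null <<'EOF' && echo "config OK"
{
  "mcpServers": {
    "chrome-control": {
      "url": "http://localhost:3000/sse",
      "disabled": false,
      "alwaysAllow": [],
      "type": "sse"
    }
  }
}
EOF
```

You can point the same command at the anythingllm_mcp_servers.json file itself once you've saved it.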
From there we should be good to go! Let's fire up AnythingLLM. We should see the MCP server in Settings->Agent Skills:
Now if we start a chat and include "@agent" in our sentence, we can watch the LLM do its magic! Watch me ask:
@agent
I want to create a new blog post on dev.to. Can you take me there in a new chrome tab?
That was fun! Maybe I'll try to write my own MCP server next!