Building an AI Browser

When you write a protocol spec, you eventually have to test it against reality. The MCP discovery spec I'd been working on needed a client — something that would automatically discover MCP servers on websites and actually use them. No manual configuration, no hardcoded URLs. Just load a page and see what's there.

So I built one.

It Started as a Test Harness

The first version was minimal. A Chrome extension that checked /.well-known/mcp.json and /mcp.txt on every page load, logged what it found, and let me call tools from the DevTools panel. That was enough to validate that the discovery spec worked — sites that implemented it were findable, sites that didn't were correctly ignored.
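The probe itself is tiny. A minimal sketch of the URL construction, assuming hypothetical names (this is not the extension's actual source):

```typescript
// Build the two discovery URLs the probe checks for a given page.
// Discovery files live at the origin root, regardless of the page's path.
function discoveryUrls(pageUrl: string): string[] {
  const origin = new URL(pageUrl).origin;
  return [`${origin}/.well-known/mcp.json`, `${origin}/mcp.txt`];
}
```

A page-load hook then fetches each URL and treats a 200 with parseable content as "site implements discovery"; anything else (404, network error) means the site is ignored, as the spec intends.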

But once discovered MCP servers are surfaced in the browser, the obvious next step is connecting them to an AI. So I added a sidebar.

The sidebar turned into a proper chat interface. The chat interface needed tool permissions. Tool permissions needed an audit log. Before long I had a full browser extension: sidebar AI chat, automatic MCP discovery, a permission system, a DevTools panel showing live discovery state, and a bridge that exposed everything to external MCP clients like Claude Code.

It grew, but not gratuitously. Each addition came from actually using the previous version and running into something missing.

v0: Discovery probe

Chrome extension that hits /.well-known/mcp.json and /mcp.txt on every page load and logs what it finds.

v0.1: DevTools panel

A live view of the discovered servers and their tools. Manual tool calls from the panel.

v0.2: Sidebar AI chat

Connecting discovered tools to a model. Calls happen in your real session.

v0.3: Permission system

Per-site, per-tool consent — needed before letting the AI loose on writes.

v0.4: Audit log

Replayable history of every tool call, who initiated it, and the response.

v0.5: External MCP bridge

Exposes the discovered tools to other clients (Claude Code, etc.) as if they were your own server.
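The per-site, per-tool consent model from v0.3 can be sketched as a small decision store. The names and the "ask by default" policy below are my illustration, not the extension's actual implementation:

```typescript
type Decision = "allow" | "deny" | "ask";

// Permission store keyed by site origin, then tool name.
// "ask" (the default) means prompt the user before the call proceeds,
// so unknown tools are never silently allowed.
class PermissionStore {
  private grants = new Map<string, Map<string, Decision>>();

  set(origin: string, tool: string, decision: Decision): void {
    if (!this.grants.has(origin)) this.grants.set(origin, new Map());
    this.grants.get(origin)!.set(tool, decision);
  }

  check(origin: string, tool: string): Decision {
    return this.grants.get(origin)?.get(tool) ?? "ask";
  }
}
```

Keying on origin rather than full URL matches how browsers scope credentials, which is the boundary that matters once tool calls run in your real session.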

What Existing Solutions Don't Do

There are a few established ways to bring AI into browser interactions. I'd tried the main ones before building this.

playwright-mcp runs a headless Chromium instance and exposes browser control as MCP tools. It works well for scripted automation — filling forms, extracting data, navigating flows. But headless browsers are brittle in practice. Sites that detect automation behave differently. You lose the session state, cookies, and authentication that your real browser already has. And there's no discovery — you're controlling a browser, not browsing with AI.

Claude's official Chrome extension is a sidebar that gives Claude access to the page you're looking at. It's useful, but it's tightly coupled to Claude specifically. No MCP server discovery, no multi-model support, limited configurability.

What I wanted was different: an AI assistant that lives in my actual browser, with access to my real sessions, and that automatically picks up MCP tools from whatever site I'm on. Not headless. Not locked to one model. Not requiring any setup per site.

Three ways to bring AI into a browser

● supported · ◐ partial · ○ missing

| | playwright-mcp (headless Chromium as MCP) | Claude Chrome ext. (Anthropic's official sidebar) | This extension (MCP-aware AI in your real browser) |
|---|---|---|---|
| Real session state | ○ Fresh headless session — no cookies, no auth | ● Lives in your real browser tab | ● Authenticated as you, on every site |
| Multi-model | ◐ Whatever client you wire up | ○ Claude only | ● Switch per task without reconfig |
| Automatic MCP discovery | ○ You hardcode the servers | ○ No site-level discovery | ● Picks up /.well-known/mcp.json + /mcp.txt |
| No per-site setup | ○ Scripts and selectors per target | ◐ Works, but no tools beyond page context | ● Zero-config when the site implements discovery |
| Bridges to external MCP clients | ○ | ○ | ● Exposes discovered tools to Claude Code, etc. |

What Actually Works Well

The flexibility turned out to matter more than I expected.

Having the AI work inside a real browser session means it has access to things a headless browser never would — authenticated state, cookies set by login flows, first-party data that sites only serve to real logged-in users. When I navigate to a site that has an MCP server, the extension discovers it and the AI can call those tools as me, with my session. That's qualitatively different from running a headless browser alongside my real one.

The multi-model support also changes how I use it. Different tasks suit different models. Long research sessions where I want the model thinking carefully benefit from one setup. Quick lookups benefit from another. Being able to switch without leaving the browser or reconfiguring anything makes this practical for daily use in a way a single-model tool isn't.

The automatic discovery piece is what makes the whole thing feel coherent. I don't manage a list of MCP servers. I just browse. If a site has implemented discovery, the tools appear. If it hasn't, nothing changes. It's the difference between a tool you configure and a tool that works.

What I'm Still Figuring Out

The question I haven't answered yet is what the right use cases are. I'm using it for three different shapes of work, and the right focus among them isn't obvious yet.

The other open question is adoption. The discovery spec only delivers value when sites implement it. Right now that means hand-picking sites to test with. At some point it either reaches critical mass or it doesn't.

Current State

The extension works. I use it daily. It's not open source yet — I want to spend more time on the use cases before opening it up, because I'd rather ship something with a clear purpose than something with broad capability and unclear value.

The discovery spec is a separate thing and is already public. If you run MCP servers, you can implement discovery today regardless of what happens with the extension.
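For a sense of what implementing discovery involves: it's a static file at the origin root pointing at your MCP endpoint. The shape below is purely illustrative — the field names are my guess, not the normative schema, which lives in the spec post linked below:

```json
{
  "servers": [
    {
      "name": "example",
      "url": "https://example.com/mcp"
    }
  ]
}
```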


For the protocol details, see MCP Needs a robots.txt Moment.