Feedback from a complex SaaS


Alex McManus

Mar 31, 2026, 4:31:19 AM
to Chrome Built-in AI Early Preview Program Discussions
I've been experimenting with a large, complex SaaS web application. I managed to get some complex agent-driven workflows working pretty reliably, similar to the quality of interactions I have with Claude Code. Thanks to everyone involved so far in getting WebMCP to its current state.

My main difficulty has been around dynamic tool registration. My app has a microfrontend architecture, with different apps mounted and unmounted as the user navigates around. This makes dynamic registration of tools critical, and I'd guess this would be true for most large apps with code-splitting too. I created a "navigate" tool in the header which navigates to the different apps, and when the app is loaded it registers its own specific set of tools.
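As a rough sketch of that pattern: each microfrontend registers its tools on mount and removes them on unmount, and a header-level "navigate" tool swaps apps. The registerTool/unregisterTool names below are placeholders standing in for whatever registration API the extension exposes, not the actual WebMCP surface.

```typescript
// Hypothetical tool registry standing in for the real WebMCP registration
// API (registerTool/unregisterTool are assumed names, not from the spec).
type ToolHandler = (args: Record<string, unknown>) => Promise<string>;

const tools = new Map<string, ToolHandler>();
const registerTool = (name: string, handler: ToolHandler) => tools.set(name, handler);
const unregisterTool = (name: string) => tools.delete(name);

// Each microfrontend registers its tools on mount and returns a cleanup
// function that removes them on unmount.
function mountOrdersApp(): () => void {
  registerTool("orders_list", async () => JSON.stringify({ orders: [] }));
  registerTool("orders_create", async (args) => `created ${args.sku}`);
  return () => {
    unregisterTool("orders_list");
    unregisterTool("orders_create");
  };
}

// A header-level "navigate" tool that mounts the target app, so the
// app-specific tools only exist while that app is on screen.
let unmountCurrent: (() => void) | null = null;
registerTool("navigate", async (args) => {
  unmountCurrent?.();
  if (args.app === "orders") unmountCurrent = mountOrdersApp();
  return `navigated to ${args.app}`;
});
```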

This didn't work at all with the Chrome WebMCP extension connected to Gemini because it would not pick up newly-registered tools while it was still processing a prompt. In other words, it would successfully navigate to the right app, but then couldn't do anything with the tools that appear until the user gives it another prompt.

I had much more success with the MCP-B extension connected to Claude Code, which does discover new tools as it processes a prompt. I reported a couple of bugs with dynamic registration, and I think work is underway to fix them.

There were a few occasions when Claude got confused about where to go to find a tool. I largely solved this by providing a "toolmap" tool, which gave it a JSON representation of all available tools. For each tool:
  • A brief description
  • The URL pattern it is available on
  • The URL pattern it navigates to when executed
Without any special prompting, Claude used and understood the toolmap when it was needed. There was a mention in the spec of a plan for making tools discoverable to search engines; if you propose something similar, you may want to also consider exposing it as a tool to the agent.
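A minimal sketch of what such a toolmap tool might return. The field names (availableOn, navigatesTo) and the example entries are my own illustration, not from any spec:

```typescript
// One always-available "toolmap" tool that returns a JSON index of every
// tool the app can expose, so the agent can plan navigation before the
// app-specific tools are registered.
interface ToolMapEntry {
  description: string;   // brief description of what the tool does
  availableOn: string;   // URL pattern where the tool is registered
  navigatesTo?: string;  // URL pattern the tool navigates to, if any
}

const toolMapIndex: Record<string, ToolMapEntry> = {
  navigate: { description: "Go to another app", availableOn: "/*" },
  orders_create: {
    description: "Create an order",
    availableOn: "/orders/*",
    navigatesTo: "/orders/:id",
  },
};

// The toolmap tool itself just serializes the index.
function toolmap(): string {
  return JSON.stringify(toolMapIndex, null, 2);
}
```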

The other issue I had was due to the asynchronous nature of JavaScript and navigation: the agent navigated somewhere expecting to see some new tools, but checked for them before the app had a chance to render the page and register them. I initially tried a setTimeout(…, 0) before resolving the navigation tool's execution, to give JavaScript a chance to process the route change. In the end, I had to wait 1000 ms to make it reliable.
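The fixed-delay workaround described above can be sketched like this (routeTo is a stand-in for the real router call; a more robust fix would be to resolve only once the new app's registration has actually completed):

```typescript
// Delay resolving the navigation tool so the newly mounted app has time to
// render and register its tools before the agent checks for them.
// 1000 ms was the value that proved reliable in practice.
const SETTLE_MS = 1000;

function routeTo(app: string): void {
  // stand-in for the real client-side router call
  console.log(`routing to ${app}`);
}

async function navigateTool(app: string): Promise<string> {
  routeTo(app);
  await new Promise((resolve) => setTimeout(resolve, SETTLE_MS));
  return `navigated to ${app}; tools should now be registered`;
}
```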

To let the agent understand what I'm currently looking at, I added a "current_view" tool that returns a JSON representation of the data currently on-screen. Components register their current data with the "current_view" tool, so it works across the app (I provide a useCurrentView() hook). This tool supports prompts like "Order this item", where the agent uses "current_view" to understand what "this" means.
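A framework-agnostic sketch of how such a registry could work underneath a useCurrentView() hook. The hook itself and the component integration are not shown, and the function names here are my assumptions:

```typescript
// Registry behind a hypothetical useCurrentView() hook: each component
// registers its on-screen data on mount and gets back a cleanup function
// to call on unmount.
type ViewData = Record<string, unknown>;

const currentView = new Map<string, ViewData>();

function registerView(componentId: string, data: ViewData): () => void {
  currentView.set(componentId, data);
  return () => currentView.delete(componentId);
}

// The "current_view" tool: a JSON snapshot of everything on screen,
// which lets the agent resolve references like "this item".
function currentViewTool(): string {
  return JSON.stringify(Object.fromEntries(currentView));
}
```

For example, a product-detail component might register `{ sku: "A-123", name: "Widget" }` while mounted; when the user says "Order this item", the agent can call current_view to see which item is on screen.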