Hello,
I recently joined the Chrome Built-in AI Early Preview Program and received the welcome message outlining the available resources and discussion channels. Reading through the materials and learning more about the WebMCP direction prompted a question about architectural compatibility.
While building an agent-ready storefront, I ended up independently designing something conceptually similar to what WebMCP appears to be standardizing.
My current architecture includes:
• a deterministic context snapshot injected from the theme (productId, variantId, currency, locale) so agents don’t infer state from the DOM
• DOM contracts and custom events to make interactions machine-readable
• a governance layer for automated actions (confirm-before-write, audit logging, read-only default, kill-switch)
• an agent capability layer exposing structured skills for storefront analysis and guarded admin operations
So the storefront is already designed as an agent-readable interface, although it does not yet expose browser-native tools.
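To make the design concrete, here is a minimal sketch of how the snapshot and governance layers described above could fit together. All names (createGovernedSkill, readPrice, updateInventory) and values are illustrative assumptions for this message, not part of any spec or existing API:

```javascript
// Deterministic context snapshot the theme would inject (e.g. as a JSON
// <script> tag), so agents read state directly instead of inferring it
// from the DOM. Field names mirror the list above; values are examples.
const contextSnapshot = {
  productId: "prod_123",
  variantId: "var_456",
  currency: "EUR",
  locale: "de-DE",
};

// Governance layer: read-only by default, write actions require explicit
// confirmation, and every invocation is audit-logged.
function createGovernedSkill({ name, mutates, handler }) {
  return function invoke(args, { confirmed = false, auditLog = [] } = {}) {
    auditLog.push({ skill: name, args, at: Date.now() });
    if (mutates && !confirmed) {
      // confirm-before-write: surface the pending action instead of executing
      return { status: "needs-confirmation", skill: name };
    }
    return { status: "ok", result: handler(args) };
  };
}

// Example skills: one read-only, one guarded admin write.
const readPrice = createGovernedSkill({
  name: "readPrice",
  mutates: false,
  handler: ({ variantId }) => ({ variantId, price: 1999 }),
});

const updateInventory = createGovernedSkill({
  name: "updateInventory",
  mutates: true,
  handler: ({ variantId, qty }) => ({ variantId, qty }),
});
```

In this shape, a read call succeeds immediately, while a write call returns a `needs-confirmation` status until it is re-invoked with an explicit confirmation, which is the behavior I would hope maps onto whatever tool-invocation contract WebMCP settles on.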
My question is:
From an architectural perspective, would a system like this be considered compatible with the direction WebMCP is proposing, or would adopting WebMCP require restructuring the interaction layer around browser-native tool registration (for example registerTool() or declarative form annotations)?
In other words, is WebMCP mainly a standardized exposure layer for agent-capable architectures, or does it assume a fundamentally different interaction model for web applications?
Thank you for your work on this; it’s exciting to see the web moving toward more agent-native patterns.
Best regards,