Are we evolving from pages to apps to agents?
Recently, Vercel came to London and teamed up with Anthropic’s local chapter to host a skills event. (I am pretty sure the event was held, in Vercel fashion, at the same venue where I saw my last Ladytron concert before the pandemic kicked off.) I also attended an AI Tinkerers event where Tom Occhino, my former director at Meta and now Chief Product Officer at Vercel, gave a fireside chat.
Vercel is repositioning itself as the “AI Cloud/Agent Platform” provider. It’s an interesting pivot from a company that sells hosted infrastructure for websites, essentially admitting the web-as-destination era is winding down. Their response is to build the infrastructure that allows you to deploy “agents” the same way you’d deploy a Next.js app, with easy routing between inference providers, a global API gateway, an open-source framework, and serverless compute retooled for agentic workloads. (Given that the frontier models live in Microsoft, Google, and Anthropic’s clouds, and the model hosting space is already well-established by incumbents like Hugging Face, “AI Cloud” might be a branding overstatement.)
While bespoke agents are taking off internally at companies, the question is whether consumers will use the agents that agent builders deploy to the same extent they visit sites or download apps. The outlook for cloud-hosted agents reaching everyday consumers is complicated by a few structural realities. When people want a personal agent today, they reach for a frontier product like ChatGPT, Gemini, or Claude rather than a bespoke agent. And even that may be a transitional moment: Google and Apple are quietly building the on-device inference story that would let agents run locally, on the consumer’s personal data, without routing through anyone’s cloud. Vercel’s bet requires the cloud-hosted model to win consumer adoption before on-device inference makes it redundant. That’s a race against two of the best-capitalised companies in history, who also happen to own the devices!
The AI browser that never grew up
The last time I posted here, OpenAI had just launched Atlas, their AI browser. And I never did install it. I switched back to the decidedly non-agentic UK Chrome because I worried that OpenAI’s splashy release had put a prompt-injection target on the back of every AI browser user. (I still fire up Perplexity’s Comet when I’m researching things because it’s incredibly handy.)
Perhaps the AI Browser War went out with a whimper because consumers were concerned about prompt injection, but such risks haven’t thwarted OpenClaw enthusiasts, despite that agent’s notorious security risks (as of this writing, it already has more GitHub stars than React). It’s clear that security isn’t the adoption blocker; it’s utility.
It’s 100% true that AI is changing browsing habits, but the change isn’t happening through agentic browser features. Rather, it’s coming from AI layered over search, such as Google’s AI summaries, which replace visits to sites, and from people using personal agents like Claude and Gemini to get things done rather than opening hundreds of tabs.
Where data and inference live together
Providers who see the web in decline would naturally seek to repurpose their infrastructure to host agents orchestrated through their pipes that you access via apps (your apps, web apps, someone else’s apps—I wrote up an intro to all these different agents with the people at web.dev).
But for legal, privacy, and performance reasons, inference and data are best colocated. You can see signs that Big Tech is moving in this direction:
Apple has integrated MCP into Xcode and is adopting Gemini.
Most flagship Android phones ship with a small language model (SLM).
Both Apple and Google have designed chips specifically for running models on the device.
Intelligence needs to live where the data lives, and for consumers, that’s usually on their device or buried in a platform (try building an agent that can talk between X and LinkedIn. I have. And no, scraping isn’t allowed.).
If you don’t have a device story, you have an app story: either one that calls frontier inference controlled by the Big Tech players who can afford the data centers, or one that uses inference already on the device. Web infra and hosting providers are hoping for the former so they can at least sell the wiring. The latter would reduce the web to a collection of hosted markdown files and APIs (although I think that would come as a relief to some providers).
Apps don’t want your agent to access your data
In November 2025, Amazon sued Perplexity for users placing orders online via Comet. Amazon claimed that Perplexity was “damaging the user experience,” but having used Amazon for many years, I find it hard to imagine Comet making that experience worse. (Disclaimer: I’ve also worked at Amazon, albeit at AWS and not on the shopping cart team.) What I think is actually happening is that Amazon wants to set a precedent that will give it a defense when Google or Apple release features that do the same. Amazon doesn’t want to be reduced to an API layer. That would cut into their secondary ads market!
All these companies, from LinkedIn to Facebook, do not want to give your agent access to their surfaces. Users have to visit the site or download the app, feed their usage data into the company data warehouse, see the ads, the upsells, etc. It’s hugely profitable to control the environment a user inhabits, even if only for minutes, because everything from the sale to the behavioural data generated by the visit turns a profit. So expect vertical platforms to fight furiously against the Agentic Web.
WebMCP: the future of the (Intra)net?
I recently interviewed Alex Nahas, the Amazon employee who invented WebMCP to solve an intranet authorization problem. Amazon had built an internal MCP server aggregating thousands of tools. OAuth 2.1, the auth story MCP had settled on, wasn’t implemented anywhere internally. But the browser already had everything needed: session cookies, SSO, and scoped identity. Rather than reinvent the wheel, Alex ran MCP inside the browser, used the existing session, and wrote a custom transport over postMessage. The browser as a pseudo-identity provider solved what MCP working groups were still figuring out.
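Alex’s trick can be sketched in a few lines. This is a hypothetical illustration, not his actual code: the FakePort class stands in for the window/iframe pair exchanging postMessage events, and the message shapes loosely mirror MCP’s JSON-RPC tools/list call.

```typescript
// Hypothetical sketch of MCP-over-postMessage (not Alex's actual code).
// FakePort stands in for window.postMessage between a page and an embedded
// agent; in a real page, the session (cookies, SSO) rides along for free.

type JsonRpc = { jsonrpc: "2.0"; id: number; method?: string; result?: unknown };

class FakePort {
  private peer?: FakePort;
  private handler: (msg: JsonRpc) => void = () => {};

  // Wire two ports together, like a page and its agent frame.
  static pair(): [FakePort, FakePort] {
    const a = new FakePort();
    const b = new FakePort();
    a.peer = b;
    b.peer = a;
    return [a, b];
  }
  onMessage(fn: (msg: JsonRpc) => void) { this.handler = fn; }
  postMessage(msg: JsonRpc) { this.peer!.handler(msg); }
}

const [pagePort, agentPort] = FakePort.pair();

// "Server" side: the page answers MCP-style requests over the port.
pagePort.onMessage((req) => {
  if (req.method === "tools/list") {
    pagePort.postMessage({
      jsonrpc: "2.0",
      id: req.id,
      result: { tools: ["search", "addToCart"] },
    });
  }
});

// "Client" side: the agent asks what tools the page offers.
let toolNames: string[] = [];
agentPort.onMessage((res) => {
  toolNames = (res.result as { tools: string[] }).tools;
});
agentPort.postMessage({ jsonrpc: "2.0", id: 1, method: "tools/list" });

console.log(toolNames); // ["search", "addToCart"]
```

The point is that no new auth layer appears anywhere in the transport: whatever identity the browser session already carries is the identity the agent operates under.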
WebMCP isn’t actually MCP. It’s “MCP inspired” (just the tools). It surfaces the same JS functions your UI calls (think search(term) and addToCart(item, options)) to agents as they visit the site. This is an accessibility boon, but one that will, sadly, likely never take off with vertical web platforms like Amazon, Facebook, and Salesforce for the reasons previously stated.
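The “just the tools” idea can be sketched as follows. The PageTools class and the registerTool/callTool names are assumptions for illustration; the actual WebMCP surface may differ.

```typescript
// Hypothetical sketch of WebMCP-style tool surfacing (names are illustrative,
// not the real API). The page exposes the same functions its UI already
// calls, so an agent can drive the site without scraping the DOM.

type ToolHandler = (args: Record<string, unknown>) => unknown;

class PageTools {
  private tools = new Map<string, { description: string; handler: ToolHandler }>();

  registerTool(name: string, description: string, handler: ToolHandler) {
    this.tools.set(name, { description, handler });
  }

  listTools(): string[] {
    return [...this.tools.keys()];
  }

  callTool(name: string, args: Record<string, unknown>): unknown {
    const tool = this.tools.get(name);
    if (!tool) throw new Error(`unknown tool: ${name}`);
    return tool.handler(args);
  }
}

// The same addToCart the "Add to cart" button invokes, now agent-callable.
const cart: string[] = [];
const page = new PageTools();
page.registerTool("addToCart", "Add an item to the cart", ({ item }) => {
  cart.push(String(item));
  return { cartSize: cart.length };
});

console.log(page.callTool("addToCart", { item: "coffee" })); // { cartSize: 1 }
```

Because the handler is the exact function the button already calls, the agent path and the human path stay in sync by construction.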
But it will likely find an audience among people building internal tooling (like Alex) and among incumbents embracing the agentic web. I would not be surprised to see Shopify and Etsy, who partnered with OpenAI on the Agentic Commerce Protocol, implement WebMCP for exactly that purpose.
The web isn’t going anywhere. It’s becoming infrastructure, the COBOL of human networking. Vertical platforms will fight the agentic web because their business models depend on controlling the surfaces users inhabit. Cloud providers will pitch themselves as the neutral pipes. But the real leaps are coming from engineers solving real problems around the edges with existing tools, from competitors wrapping their existing code in agent-readable interfaces to gain an edge, and from devices getting smarter with each generation. The agentic web won’t be announced. It’ll arrive one step at a time.