The Model Context Protocol passed 97 million combined monthly Python and TypeScript SDK installs in March 2026, and it now lives under neutral governance at the Linux Foundation’s new Agentic AI Foundation (AAIF). Together these two milestones mark the end of the protocol war for how AI agents talk to tools and data. MCP won. The interesting question is no longer which protocol — it’s what you build on top of it.
The Scale
MCP launched in November 2024 with roughly 2 million monthly SDK downloads. By March 2026, monthly installs had hit 97 million. That is a 48× increase in 16 months — a growth curve that, as DEV Community pointed out, makes Kubernetes look slow. Kubernetes took nearly four years to reach a comparable level of community adoption.
The ecosystem numbers are similarly stark:
- 10,000+ active public MCP servers covering developer tools, enterprise systems, and consumer applications.
- Native support in ChatGPT, Cursor, Gemini, Microsoft Copilot, and VS Code — every major AI product you’d name now speaks MCP.
- Fortune 500 deployments running MCP in production, not just hobbyist builds.
Anthropic built MCP, but it stopped being Anthropic’s protocol the moment OpenAI, Google, and Microsoft started shipping first-class support. What’s unusual in the current AI landscape is that an interop protocol reached consensus at all — and that it happened faster than most people noticed.
The Governance Move
On December 9, 2025, the Linux Foundation announced the formation of the Agentic AI Foundation as a directed fund, with three founding project contributions:
- Anthropic’s Model Context Protocol (MCP) — the interop layer for agent-to-tool communication
- Block’s goose — an open-source AI agent framework
- OpenAI’s AGENTS.md — a developer-facing specification for describing agent behavior
The platinum member list is the tell: AWS, Anthropic, Block, Bloomberg, Cloudflare, Google, Microsoft, and OpenAI. Dozens of other firms joined at the gold and silver tiers. When every major AI lab and every major cloud simultaneously donates their agentic-AI specifications to the same neutral foundation, the era of proprietary protocol competition is over.
The governance model matters. MCP’s existing maintainers continue to run the project — the Linux Foundation’s role is neutrality, not control. This is the same pattern that made Kubernetes durable: the foundation owns the trademark and the trust, but the technical direction stays with the community that actually builds it.
Why Neutral Governance Was Necessary
Put yourself in the position of a Fortune 500 CIO in early 2025. Anthropic is proposing MCP as the standard for how your AI agents connect to your systems. Anthropic is also an AI company that competes with OpenAI, Google, and whoever else is trying to sell you an AI platform. Betting your agent infrastructure on a protocol owned by one vendor is the same bet as building on any other proprietary standard — and it has historically ended badly.
Moving MCP under the Linux Foundation removes that objection. The protocol is now governed by the same institution that stewards Linux itself, Kubernetes, Node.js, and PyTorch. No single vendor can unilaterally change the spec, fork the implementation, or hold the ecosystem hostage to a commercial strategy.
The practical effect is that enterprise adoption accelerates. A spec owned by Anthropic is a spec that legal has questions about. A spec owned by a Linux Foundation directed fund backed by every major AI company is a spec that passes procurement on the first review.
What This Means For Builders
Three implications stand out:
If you’re building AI tooling, speak MCP. The protocol is the substrate for the entire agentic ecosystem. Tools that don’t expose an MCP server are tools that can’t be used by the AI products your customers are already running. This is the same dynamic that made HTTP mandatory for B2B APIs a decade ago — you don’t get to opt out of the standard.
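For a concrete sense of what "speaking MCP" looks like on the wire, here is a sketch of the JSON-RPC 2.0 response a server sends to a `tools/list` request, per the MCP specification. The tool name `search_orders` and its schema are hypothetical placeholders, not part of any real server.

```python
import json

# Sketch of an MCP server's response to "tools/list" (JSON-RPC 2.0).
# The envelope and the "tools"/"inputSchema" fields follow the MCP spec;
# the "search_orders" tool itself is a made-up example.
def tools_list_response(request_id: int) -> str:
    """Serialize a tools/list result advertising one hypothetical tool."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "result": {
            "tools": [{
                "name": "search_orders",
                "description": "Search orders by customer ID.",
                "inputSchema": {
                    "type": "object",
                    "properties": {"customer_id": {"type": "string"}},
                    "required": ["customer_id"],
                },
            }]
        },
    })

response = tools_list_response(7)
```

Any MCP client — Cursor, ChatGPT, Copilot — discovers your tool by issuing that one request; no per-product integration is needed.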
If you’re building agents, you can now assume tool availability. A year ago, agent builders had to ship integrations one service at a time. With 10,000+ public MCP servers, the default posture flips: most of the tools you need already have an MCP endpoint, so your agent’s architecture can focus on reasoning and orchestration rather than integration plumbing. Agentic architecture gets more interesting when the connector layer is a commodity.
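The client side of that commodity connector layer is equally thin. Below is a sketch of the `tools/call` request an agent sends to any MCP server — JSON-RPC 2.0, with the method name taken from the MCP spec; the `get_weather` tool and its arguments are hypothetical.

```python
import json

# Sketch of an MCP tools/call request (JSON-RPC 2.0). The "tools/call"
# method and "params" shape follow the MCP spec; the tool "get_weather"
# and its arguments are invented for illustration.
def make_tool_call(request_id: int, tool_name: str, arguments: dict) -> str:
    """Serialize an MCP tools/call request as a JSON-RPC 2.0 message."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    })

request = make_tool_call(1, "get_weather", {"city": "Berlin"})
```

The same message works against every one of those 10,000+ servers, which is exactly why the integration layer stops being where agent teams spend their time.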
The frontier models are tuning for MCP. Claude Opus 4.7 shipped with a best-in-class MCP-Atlas score — Anthropic explicitly optimized the model for orchestrating tools over MCP. Expect OpenAI and Google to match that tuning in their next releases. The model and the protocol are co-evolving, and building against both is the lowest-risk path forward.
The Quiet Victory
Protocol wars are usually protracted and bloody. The browser wars, the container wars, the cloud API wars — all took years to resolve, and all left wreckage. MCP went from Anthropic side project to industry standard in under 18 months, with no credible alternative still in contention. That’s rare enough to be worth marking.
What it means for the next 18 months is simpler than the last 18. Less time arguing about connectors, more time building the systems on top. The plumbing is done.
Sources: Linux Foundation — AAIF announcement · Anthropic — Donating MCP · OpenAI — Joining AAIF · DEV — MCP Hits 97M Installs
