
Anthropic Hits $30B Revenue Run Rate and Secures 3.5 Gigawatts of Compute from Google and Broadcom

Anthropic's revenue has tripled since late 2025, and a new compute deal with Google and Broadcom secures 3.5 gigawatts of processing capacity — the largest AI infrastructure agreement to date.

S5 Labs Team April 7, 2026

Anthropic announced on April 7 that its annualized revenue run rate has surpassed $30 billion — more than tripling from the approximately $9 billion it reported at the end of 2025. Alongside the revenue milestone, the company disclosed a new compute agreement with Google and Broadcom for 3.5 gigawatts of processing capacity, the majority housed in the United States.

The two announcements are connected. Anthropic’s revenue growth is driven by enterprise demand for Claude models, and that demand requires infrastructure at a scale that few companies can provide. The Google-Broadcom deal is how Anthropic plans to keep up.

The Revenue Growth

A jump from $9 billion to $30 billion in annualized revenue in roughly four months is extraordinary, even by the standards of the current AI industry. For context, this growth rate exceeds what OpenAI achieved over a comparable period, despite OpenAI's $852 billion valuation and significantly larger consumer user base.

The growth is concentrated in enterprise. Anthropic now has more than 1,000 business customers spending over $1 million annually on Claude. This is not a consumer chatbot revenue story — it is an enterprise platform story, driven by organizations integrating Claude into production workflows at scale.

The Claude model family has become the backbone for a growing number of enterprise AI deployments, particularly in sectors like financial services, healthcare, legal, and software development where accuracy, reasoning, and safety characteristics matter more than raw throughput.

The Compute Deal

The agreement with Google and Broadcom secures 3.5 gigawatts of compute capacity powered by Google’s tensor processing units (TPUs). This expands on a previous deal struck in October 2025 for more than one gigawatt.

Capacity: 3.5 gigawatts
Hardware: Google TPUs (manufactured by Broadcom)
Location: Majority in the United States
Timeline: New capacity comes online in 2027
Previous deal: 1+ gigawatt (October 2025)

To put 3.5 gigawatts in perspective: that is roughly the power consumption of a mid-sized city. A single modern data center typically consumes 50-100 megawatts. This deal represents the equivalent of 35-70 large data centers dedicated to Anthropic’s workloads.
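The arithmetic behind that comparison is simple enough to check. A minimal sketch, using only the figures stated above (3.5 gigawatts of deal capacity, 50-100 megawatts per large data center):

```python
# Back-of-the-envelope check of the data-center equivalence above.
# Figures come from the article; this is illustrative arithmetic only.

DEAL_CAPACITY_MW = 3_500           # 3.5 gigawatts expressed in megawatts
DATA_CENTER_RANGE_MW = (50, 100)   # typical draw of a single large data center

# Dividing by the high end gives the fewest equivalent sites;
# dividing by the low end gives the most.
fewest = DEAL_CAPACITY_MW // DATA_CENTER_RANGE_MW[1]
most = DEAL_CAPACITY_MW // DATA_CENTER_RANGE_MW[0]

print(f"Equivalent to roughly {fewest}-{most} large data centers")
# Prints: Equivalent to roughly 35-70 large data centers
```

Dividing 3,500 megawatts by the 100 MW and 50 MW endpoints reproduces the 35-70 range cited above.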

The scale reflects both current demand and Anthropic’s expectations for where that demand is headed. Training frontier models requires enormous compute, but inference — actually running those models for customers — is what drives sustained power consumption. With over 1,000 enterprises running Claude at scale, Anthropic needs infrastructure that can handle millions of concurrent requests while maintaining the latency and reliability that enterprise customers expect.

The Multi-Cloud Strategy

Anthropic has been deliberate about distributing its infrastructure across multiple cloud providers. Claude is available through Amazon Bedrock, Google Cloud’s Vertex AI, and Microsoft Foundry. The Google-Broadcom deal strengthens the Google relationship, but Anthropic’s recent announcement of Project Glasswing made Claude Mythos Preview available across all three platforms simultaneously.

This multi-cloud approach is a competitive advantage in enterprise sales. Organizations with existing AWS, Google Cloud, or Azure commitments can adopt Claude without switching infrastructure providers. It also reduces Anthropic’s dependency on any single cloud partner — a strategic consideration given that both Google and Amazon are also competitors in the foundation model market.

The Competitive Landscape

Anthropic’s $30 billion run rate puts it in a different category than it occupied even six months ago. The company recently closed a $30 billion Series G funding round at a $380 billion valuation, providing the capital to sustain its infrastructure expansion and research investment.

The AI infrastructure race is now defined by three dynamics:

Revenue validates demand. Anthropic’s growth demonstrates that enterprise AI spending is accelerating, not plateauing. The market is large enough to support multiple frontier model providers at significant scale.

Compute is the bottleneck. Models are improving faster than infrastructure can be built to run them. The companies that secure compute capacity now will be better positioned to serve demand over the next 2-3 years. Anthropic’s 3.5-gigawatt deal is an explicit bet on this constraint.

Enterprise customers are consolidating. Organizations that started with proof-of-concept AI projects in 2024-2025 are now moving to production deployments that require reliable, scalable infrastructure. The shift from experimentation to production is where the revenue growth is coming from.

What This Means for Businesses

For organizations evaluating AI infrastructure decisions, Anthropic’s growth signals that the enterprise AI market is maturing faster than many expected. The availability of Claude across all major cloud platforms lowers adoption barriers, and the compute expansion suggests that capacity constraints — which have limited some enterprise deployments — will ease as new infrastructure comes online in 2027.

The revenue numbers also validate a specific approach to enterprise AI: models that prioritize accuracy, safety, and reasoning depth over pure speed or cost minimization. Anthropic’s growth has come from customers willing to pay premium prices for capabilities that matter in production environments where errors have real consequences.
