The AI investment cycle reached a new apex on Friday when Google confirmed plans to commit up to $40 billion to Anthropic — hours after Amazon announced its own $5 billion tranche with potential for $20 billion more. Together, the two hyperscalers could pour as much as $73 billion into the Claude maker, in what amounts to the largest infrastructure bet ever placed on a single AI company.
The Numbers, Unpacked
Google’s commitment, first reported by Bloomberg, begins with a $10 billion lead investment. Conditional on Anthropic hitting undisclosed performance targets, Google could inject an additional $30 billion. That structure mirrors a staged approach increasingly common in mega-rounds, where upside is tied to milestones rather than delivered upfront.
Amazon’s fresh commitment follows a different pattern. Having already deployed $8 billion across prior rounds, Amazon confirmed this week an immediate $5 billion injection alongside an option for up to $20 billion more. The announcement was bundled with a separate infrastructure deal: Anthropic has committed to spend more than $100 billion on AWS over the next decade, locking in up to 5 gigawatts of compute capacity to train and serve Claude models on hardware spanning current Trainium2 chips through future Trainium4 silicon.
“Our commitment to run our large language models on AWS Trainium for the next decade reflects the progress we’ve made together on custom silicon,” said Anthropic CEO Dario Amodei.
Revenue Growth Validates the Bet
The investment surge lands against a backdrop of exceptional commercial momentum. Anthropic’s run-rate revenue has surpassed $30 billion, more than triple the roughly $9 billion figure at the end of 2025. More than 100,000 customers now run Claude via Amazon Bedrock, and Anthropic says infrastructure strain has already begun to affect reliability for users on the free, Pro, and Max tiers during peak hours.
That demand trajectory — tripling annualized revenue in under five months — is precisely the signal hyperscalers need to justify capital at this scale. For both Google and Amazon, the strategic calculus goes beyond financial return: Anthropic models run on their respective cloud platforms (Vertex AI and Bedrock), meaning every Claude API call generates compute revenue even before investment returns crystallize.
Geopolitical Subtext
The scale of commitment also reflects something larger than a business decision. OpenAI, Anthropic, and xAI are the three companies most central to U.S. government AI procurement and defense applications. Both Google and Amazon have federal cloud divisions that stand to benefit from Anthropic’s growing government footprint — a market accelerated by executive orders and procurement mandates from successive administrations.
Anthropic remains the only frontier AI model provider available on all three of the world’s largest cloud platforms: Amazon Bedrock, Google Cloud Vertex AI, and Microsoft Azure AI Foundry, a deliberate multi-cloud posture that reduces concentration risk while maximizing distribution reach.
What Comes Next
The fresh capital is earmarked largely for compute. Significant Trainium2 capacity is expected to arrive in Q2 2026, with nearly one gigawatt of Trainium3 capacity coming online before year-end. Anthropic has also signaled expansion of inference capacity in Asia and Europe to serve its growing international user base.
For enterprise buyers evaluating foundation model providers, the practical implication is clear: Anthropic’s infrastructure will scale materially in the next six months, easing the capacity constraints that have periodically disrupted service. For Google, the investment cements its position as a top-tier backer of the company that competes most directly with its own Gemini models — a strategic tension that has become a defining feature of the AI platform era.
(Sources: Anthropic press release, Bloomberg, The Verge, Amazon press release)