Meta Platforms has placed what may be the boldest chip bet in Silicon Valley history: a commitment to deploy 1 gigawatt of custom AI silicon built with Broadcom, as the social media giant accelerates its effort to reduce dependence on Nvidia’s expensive and supply-constrained GPUs.

The announcement, confirmed on April 15, extends Meta’s existing Broadcom partnership through 2029 and anchors a plan to spend between $115 billion and $135 billion on AI infrastructure in 2026 — a capital outlay larger than any single company has previously committed to.

From Pilot to Gigawatt

Meta’s custom chip program, known as MTIA (Meta Training and Inference Accelerator), has moved from an internal experiment to an industrial-scale deployment program. The company unveiled four distinct chip generations in March 2026 — MTIA 300, 400, 450, and 500 — each targeting progressively more demanding inference workloads, including real-time image and video generation.

The MTIA 300 is already deployed across Meta’s data centers. The 400, 450, and 500 variants are on a rolling release schedule, with one new chip arriving approximately every six months. Future generations are slated to be manufactured on a 2nm process node, putting them at the leading edge of semiconductor fabrication.

The scale of the 1-gigawatt commitment is not merely symbolic. One gigawatt of chip capacity represents a decisive shift: Meta is no longer supplementing its Nvidia fleet with custom silicon — it is building a parallel inference layer designed to operate independently.

The Nvidia Dependency Problem

For years, Nvidia has held an effective monopoly on AI training compute, and increasingly on inference as well. Its H100 and B200 GPU families power the majority of frontier AI workloads across every major cloud provider. But that dominance carries a price: Nvidia’s chips are expensive, scarce, and sold at margins that can exceed 70%.

The largest technology companies have been quietly moving to build their own alternatives. Google has its TPU line. Amazon has Trainium and Inferentia. Microsoft is investing in custom silicon through internal programs. Meta’s MTIA program fits this pattern — but the 1-gigawatt scale signals a faster and harder break from Nvidia than any of its peers have announced.

Broadcom, which co-designs custom ASICs for hyperscalers, is the primary beneficiary. The deepened partnership means Broadcom will co-develop four generations of MTIA chips over the next three years, a relationship sufficiently significant that CEO Hock Tan agreed to step off Meta’s board to eliminate any governance conflicts.

What This Means for AI Infrastructure

The commercial model emerging from this deal is likely to become an industry template: hyperscalers use Nvidia for frontier model training, where GPU compute remains unmatched, and Broadcom-designed custom chips for high-volume inference, where cost efficiency and workload specificity matter more than raw performance.

For Meta, the MTIA program also supports a broader product strategy. Its generative AI features — AI-generated images in Instagram, real-time video effects in WhatsApp, and the Meta AI assistant embedded across its apps — require inference at enormous scale. Running that workload on Nvidia hardware at market rates would represent a significant and growing drag on margins.

“This isn’t about replacing Nvidia entirely,” one semiconductor analyst told TechCrunch. “It’s about owning the inference layer, where Meta runs billions of operations per day, so that they’re not writing a check to Nvidia every time a user asks for an AI-generated image.”

Broader Stakes

The Meta-Broadcom announcement lands at a moment when the entire semiconductor industry is reconfiguring around AI demand. With 31 data centers planned for 2026 and capital expenditure commitments of up to $135 billion, Meta is not building for the current moment — it is betting that AI inference will be one of the defining infrastructure costs of the next decade, and that owning that silicon is worth the investment.

For Nvidia, the signal is clear: the company’s biggest customers are building exits, even if those exits will take years to fully arrive.

Lois Vance

Contributing writer at Clarqo, covering technology, AI, and the digital economy.