When Jensen Huang took the stage at Nvidia’s GTC conference in March and called the Blackwell GPU architecture “the engine of the new industrial revolution,” the hyperbole seemed forgivable — even expected. Four weeks later, the financial results have given the boast empirical weight. Nvidia’s fiscal Q1 FY2027 results confirmed what analysts had only modeled: data center revenue reached $39.1 billion for the quarter, up 73% year-over-year, with Blackwell-series chips accounting for the overwhelming majority of that figure.
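For readers sanity-checking the growth figure, the reported numbers imply a year-ago base of roughly $22.6 billion. A quick back-of-envelope check, using only the figures reported above:

```python
# Back-of-envelope: implied year-ago data center revenue from the
# reported $39.1B quarter and 73% year-over-year growth.
current_q = 39.1e9        # reported Q1 FY2027 data center revenue (USD)
yoy_growth = 0.73         # reported year-over-year growth rate

prior_year_q = current_q / (1 + yoy_growth)
print(f"Implied year-ago quarter: ${prior_year_q / 1e9:.1f}B")  # ~$22.6B
```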
Blackwell’s Revenue Ramp Defies the Skeptics
When Nvidia first disclosed Blackwell’s production challenges in late 2024 — yield issues at TSMC’s CoWoS-L packaging lines that caused multi-quarter delays — analysts trimmed estimates and hedge funds shorted the stock. The bet proved catastrophically wrong. By Q2 FY2026, Blackwell shipments had normalized, and by early 2026 the ramp had become one of the fastest in semiconductor history.
The flagship NVL72 rack-scale system — 72 B200 GPUs interconnected via NVLink — now sells for approximately $3 million per unit, and hyperscalers are ordering in quantities that strain credulity. Microsoft, Google, Amazon Web Services, and Meta collectively accounted for an estimated 65% of Blackwell revenue in Q1, according to analyst notes from JPMorgan. Oracle’s AI infrastructure division alone committed to more than $4 billion in Nvidia hardware for 2026, the company disclosed in its own earnings call.
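The scale of those commitments is easier to grasp with rough arithmetic. Assuming, purely for illustration, that Oracle's $4 billion went entirely to NVL72 racks at the quoted price (the real order almost certainly spans a broader product mix, and actual discounts are not disclosed), the implied upper bound looks like this:

```python
# Illustrative only: upper bound on what Oracle's reported $4B
# commitment could buy at the quoted ~$3M per NVL72 rack. Actual
# pricing, discounts, and product mix are not disclosed.
commitment = 4.0e9        # Oracle's reported 2026 commitment (USD)
rack_price = 3.0e6        # approximate NVL72 price cited above (USD)
gpus_per_rack = 72        # B200 GPUs per NVL72 system

racks = commitment / rack_price
print(f"~{racks:,.0f} racks, ~{racks * gpus_per_rack:,.0f} GPUs")
# -> ~1,333 racks, ~96,000 GPUs
```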
Gross margins held at 73.5%, slightly above guidance, driven by pricing power that Nvidia has maintained despite attempts by AMD and Intel to win over enterprise customers.
The Competitive Landscape Narrows — But Doesn’t Vanish
AMD’s MI325X accelerators have found a foothold in second-tier cloud providers and mid-market AI workloads, where Nvidia’s premium pricing creates an opening. AMD reported $3.7 billion in AI data center revenue for Q1 2026 — real money, but still roughly one-tenth of Nvidia’s figure. Intel’s Gaudi 3 continues to underperform its roadmap targets.
The more credible long-term challenge may come not from traditional chipmakers but from internal silicon programs. Google’s TPU v6 (“Trillium”), Meta’s MTIA 2, and Amazon’s Trainium 2 are all in active production at TSMC, collectively designed to reduce hyperscaler dependence on Nvidia. Google executives told investors in February that roughly 40% of their AI training workloads now run on TPUs, up from 25% in 2024.
Still, the transition is slow. Training frontier models at scale (the kind Meta runs for Llama 5 or OpenAI for its next-generation system) overwhelmingly favors Nvidia’s NVLink fabric and mature CUDA software ecosystem, which has nearly two decades of optimization behind it. “Switching costs in ML infrastructure are real and underappreciated,” noted Stacy Rasgon of Bernstein Research. “CUDA lock-in isn’t a myth.”
What Comes After Blackwell
Nvidia has already provided a partial look at Rubin, the next architecture slated for production in late 2026 on TSMC’s N3P process node. Rubin’s headline claim is a 3x improvement in memory bandwidth over Blackwell, critical for large language model inference, where memory capacity and bandwidth, not raw compute, are the primary bottlenecks.
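The emphasis on bandwidth follows from how autoregressive inference works: generating each token requires streaming essentially all of the model’s weights through the GPU, so decode speed is capped by memory bandwidth long before the compute units saturate. A rough roofline sketch, using placeholder figures rather than any published Blackwell or Rubin specification:

```python
# Roofline sketch: why LLM decode speed tracks memory bandwidth.
# All figures below are placeholders, not published Blackwell or
# Rubin specifications.
params = 70e9             # hypothetical 70B-parameter model
bytes_per_param = 1       # FP8 weights
weight_bytes = params * bytes_per_param

def decode_ceiling(bandwidth: float) -> float:
    """Upper bound on batch-1 tokens/sec: each generated token
    requires streaming all weights from HBM at least once."""
    return bandwidth / weight_bytes

base_bw = 8e12            # placeholder HBM bandwidth, 8 TB/s
for label, bw in [("baseline", base_bw), ("3x bandwidth", 3 * base_bw)]:
    print(f"{label}: ~{decode_ceiling(bw):.0f} tokens/s ceiling")
```

On these illustrative numbers, tripling bandwidth triples the decode ceiling, a gain that no amount of additional raw compute would deliver on its own.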
The company also disclosed that Rubin Ultra, a multi-die configuration using TSMC’s advanced chip-on-wafer-on-substrate (CoWoS) packaging, will target the highest-density rack deployments, competing with eventual successors to Google’s Trillium and Amazon’s Trainium lines.
For investors, the remaining question is duration. AI capital expenditure by the five largest US hyperscalers is on track to exceed $300 billion in calendar 2026, according to projections from Bank of America. Whether that investment pace is sustainable into 2027 — or whether a digestion period follows — is the variable that most directly controls Nvidia’s forward revenue trajectory.
The Infrastructure Layer Consolidates
Beyond chips, Nvidia is expanding its software and networking moat. InfiniBand networking, a legacy of the 2020 Mellanox acquisition, now contributes meaningfully to data center revenue as hyperscalers build out high-speed GPU clusters. Nvidia’s NIM (Nvidia Inference Microservices) platform, launched in 2024, has signed agreements with over 200 enterprise software vendors to embed optimized inference runtimes into SaaS products.
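The integration pitch is concrete: NIM packages models as containers that expose an OpenAI-compatible HTTP API, so a vendor can embed inference with a few lines of client code. A minimal sketch, assuming a NIM container is already running locally on port 8000; the model identifier below is hypothetical:

```python
# Minimal client sketch against a NIM container's OpenAI-compatible
# endpoint. Assumes a NIM microservice is already running locally on
# port 8000; the model identifier is hypothetical.
import requests

resp = requests.post(
    "http://localhost:8000/v1/chat/completions",
    json={
        "model": "example/llm-nim",  # hypothetical model ID
        "messages": [{"role": "user", "content": "Summarize Q1 results."}],
        "max_tokens": 128,
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```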
The company’s ambition is clear: to own not just the silicon layer of the AI stack but the networking, software, and deployment tooling that surround it. If successful, Nvidia’s market position in the AI era may prove even more durable than Intel’s in the PC era — which, at its peak in the late 1990s, captured margins that seemed impossible to sustain. Nvidia’s current margins suggest history may be rhyming.
The results that seemed inconceivable two years ago have become the baseline expectation. The harder question is what the ceiling looks like — and whether anyone in the market has the capacity to impose one.