Samsung Electronics has passed Nvidia’s qualification tests for twelve-high HBM4 stacks, according to reporting from Reuters and South Korea’s ChosunBiz on April 24, 2026. The green light ends an eighteen-month stretch in which SK Hynix was the sole supplier Nvidia trusted for its highest-bandwidth parts, and reshapes the competitive balance of the AI memory market just as hyperscalers ramp their 2026 capex cycles.

Nvidia has not issued a public statement, but two supply-chain sources cited by Reuters confirm that Samsung’s 12-Hi HBM4 samples cleared reliability, thermal and signal-integrity benchmarks against Nvidia’s reference design for the Rubin generation of accelerators, which is scheduled to ramp through the second half of 2026. SK Hynix had been the sole HBM3E supplier for Blackwell and Blackwell Ultra since 2024, giving it an outsized share of AI-era memory margins.

Why the qualification matters

High-bandwidth memory is the single most supply-constrained component in modern AI accelerators. Each Nvidia GB300 Blackwell Ultra module carries 288 GB of HBM3E across eight stacks; the Rubin successor moves to HBM4 and is expected to land between 384 GB and 512 GB per socket, with per-pin speeds pushed above 10 Gb/s. Morgan Stanley estimates that HBM demand will reach 27 billion gigabit-equivalents in 2026, up from 14 billion in 2025, and that HBM now accounts for roughly 42 percent of total DRAM industry revenue despite representing under 10 percent of bit output.

According to TrendForce data published this week, SK Hynix held an estimated 53 percent of HBM bit share in Q1 2026, Micron 24 percent, and Samsung just 23 percent, pulled down by the company’s earlier struggles to meet Nvidia’s thermal spec on HBM3E. A validated Samsung HBM4 line directly addresses that gap: Samsung has built capacity in its Pyeongtaek P4 fab to add an incremental 60,000 wafer starts per month of HBM4-class output through the rest of 2026, according to Korean trade press.

Second-source relief for hyperscalers

For Nvidia’s top customers — Microsoft, Meta, Amazon, Google, Oracle and xAI — the qualification is the most material supply-side event of the quarter. A single-source component on the critical path has been flagged as a top procurement risk in every Q1 earnings call that has referenced 2026 GPU deliveries. Counterpoint Research analyst MS Hwang estimates that Samsung’s entry could add 8 to 12 percent to total HBM4 supply by Q4, enough to shift some Rubin shipments forward from 2027 into late 2026.

The financial read-through is concrete. SK Hynix shares fell 4.1 percent in Seoul on Thursday, wiping about 6.2 trillion won (roughly $4.4 billion) off its market capitalization. Samsung closed up 3.7 percent. Micron, which is already qualified on HBM3E and is sampling HBM4, was up 1.9 percent in New York premarket.

Margins, not just share, are now in play

HBM pricing has been elevated precisely because Nvidia’s demand has outrun qualified supply. Industry checks by Bernstein put average HBM3E selling prices roughly 4.5 to 5x standard DDR5 on a per-gigabit basis. A credible second source lets Nvidia re-open pricing leverage it surrendered in 2024, and gives hyperscalers a harder floor in negotiations for 2027 allocations.

SK Hynix is unlikely to lose its leadership position in a single quarter. Its twelve-high HBM4 has a head start on yields, and its co-development agreement with TSMC on the HBM4 base die is a structural advantage in the highest-margin parts. But the premium that came from being indispensable shrinks the moment a viable alternative exists.

What to watch next

Three dates will define whether Samsung’s qualification turns into durable share. First, Nvidia’s Rubin engineering-sample shipments, which TSMC guidance on its Q1 call implied will begin in late Q3 2026. Second, Samsung’s Q2 earnings release on July 31, where the company is expected to disclose its HBM4 mix for the first time. Third, the outcome of Samsung’s ongoing negotiations with AMD and Broadcom, both of which have publicly committed to HBM4 for their next-generation AI parts and have been waiting for a second qualified source to de-risk roadmaps.

For two years, HBM has been the pressure point in the AI supply chain. Samsung’s qualification does not eliminate the bottleneck, but it meaningfully widens it.

Lois Vance

Contributing writer at Clarqo, covering technology, AI, and the digital economy.