
The first operational phase of Stargate — the $500 billion AI infrastructure joint venture between OpenAI, SoftBank, and Oracle — has entered service at a purpose-built campus in Abilene, Texas. The facility, which drew its first sustained production load in Q1 2026, is consuming approximately 100 megawatts of power, with capacity designed to scale to 1.2 gigawatts across multiple phases, according to people familiar with the project’s engineering specifications.

What 100MW of AI Compute Actually Looks Like

The Abilene campus covers approximately 400 acres and hosts tens of thousands of Nvidia GB200 GPUs in NVL72 rack systems, the current-generation Blackwell architecture optimized for large-scale inference. At 100MW of operational draw, the facility consumes roughly as much electricity as a mid-size American city.
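The city comparison holds up to a quick sanity check. The sketch below assumes an average U.S. household consumption of roughly 10,700 kWh per year (a commonly cited EIA ballpark, not a figure from this article):

```python
# Households served by 100 MW of continuous draw, as a rough sanity
# check on the "mid-size American city" comparison.
# Assumes ~10,700 kWh/year average U.S. household consumption (assumed figure).
facility_mw = 100
household_kwh_year = 10_700

facility_kwh_year = facility_mw * 1000 * 8760   # continuous draw for a full year
households = facility_kwh_year / household_kwh_year

print(f"~{households:,.0f} households")  # → ~81,869 households
```

Roughly 80,000 homes is in line with a city of Abilene's own size.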

For context: a single GB200 NVL72 rack — combining 36 Grace CPUs with 72 Blackwell GPUs — draws approximately 120kW at full load. A 100MW facility implies roughly 800 such racks in sustained operation, representing a compute envelope capable of serving inference at frontier model scale for hundreds of millions of simultaneous users.
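The implied rack count is straightforward arithmetic, sketched below using the article's ~120kW per-rack figure. Note this ignores cooling and networking overhead (PUE), so it is an upper bound on racks rather than an exact count:

```python
# Back-of-envelope rack and GPU count for a 100 MW facility.
# Assumes ~120 kW sustained draw per GB200 NVL72 rack (figure from the article);
# cooling/networking overhead would reduce the share available to compute.
facility_mw = 100
rack_kw = 120
gpus_per_rack = 72  # Blackwell GPUs per NVL72 rack

racks = facility_mw * 1000 / rack_kw
gpus = racks * gpus_per_rack

print(f"~{racks:.0f} racks, ~{gpus:,.0f} GPUs")  # → ~833 racks, ~60,000 GPUs
```

That GPU count is consistent with the "tens of thousands" figure cited for the campus.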

Oracle Cloud Infrastructure is the primary operator of the facility, with OpenAI consuming capacity as the anchor tenant. SoftBank's Vision Fund 2 is the primary capital source for the first $100 billion tranche of committed investment.

Texas Grid and Power Procurement Strategy

The Abilene facility operates on ERCOT, Texas’s independent power grid. Unlike regulated utility territories in states like Virginia — where large data center loads require multi-year interconnection queues — Texas’s deregulated market allows direct power purchase agreements with generators.

Stargate has signed long-term PPAs with multiple sources: approximately 60% renewable (wind from West Texas, solar from the Permian Basin corridor), with 40% sourced from natural gas peakers to ensure 24/7 dispatchable power. The deal structure includes a $2.1 billion commitment to new renewable generation capacity, timed to come online as phases 2 and 3 of the Abilene campus ramp.
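Taken at face value, the 60/40 split implies the following annual energy mix. This is a rough sketch assuming a constant 100MW load year-round; real dispatch varies hour to hour, and the renewable share in particular depends on wind and solar availability:

```python
# Rough annual energy split for the stated 60% renewable / 40% gas PPA mix.
# Assumes a constant 100 MW load, which real operations won't match exactly.
load_mw = 100
hours_per_year = 8760

total_mwh = load_mw * hours_per_year      # 876,000 MWh/year
renewable_mwh = total_mwh * 0.60          # wind + solar share
gas_mwh = total_mwh * 0.40                # dispatchable gas share

print(f"total: {total_mwh:,.0f} MWh/yr, "
      f"renewable: {renewable_mwh:,.0f}, gas: {gas_mwh:,.0f}")
```

At these volumes, even the 40% gas slice is a substantial year-round procurement commitment.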

Grid analysts at Wood Mackenzie flagged in a February 2026 report that ERCOT’s planned large-load additions — of which Stargate represents the largest single commitment — are straining transmission capacity in West Texas. ERCOT is fast-tracking $4.7 billion in transmission upgrades, with completion timelines extending to 2028.

Competitive Pressure: Microsoft, Google, Amazon Race to Keep Pace

Stargate’s Phase 1 activation puts pressure on rival hyperscalers, which are simultaneously executing their own unprecedented build cycles. Microsoft has committed $80 billion in 2026 capital expenditure to data centers, with a significant portion designated for AI workloads. Google’s parent Alphabet disclosed $75 billion in capex guidance for 2026 at its Q1 earnings call. Amazon’s AWS is executing investment at a similar scale.

The critical difference is focus: Stargate is purpose-built for AI training and inference, without the competing priorities of general-purpose cloud infrastructure. OpenAI’s Sam Altman has publicly described Stargate as essential to his company’s ability to train what he calls “superintelligent” successors to current frontier models.

“The compute requirements for the next generation of models are an order of magnitude beyond what we’re running today,” Altman said at a January 2026 announcement event in Washington. “This infrastructure is how we get there.”

What Phase 2 Brings

Construction on Phase 2 of the Abilene campus is already underway, targeting an additional 300MW of capacity by Q4 2026. Stargate has also announced greenfield sites in Wisconsin, Georgia, and Pennsylvania — each designed for similar multi-gigawatt eventual capacity — along with international expansions in Japan, the UAE, and the UK.

The full $500 billion commitment spans a decade. Analysts at Morgan Stanley estimate the first $100 billion will be deployed by the end of 2026, reshaping the capital equipment order books of Nvidia, Dell, and a cluster of specialized cooling and networking vendors whose order backlogs have stretched to 18–24 months.

Sources: OpenAI Stargate project disclosures, Wood Mackenzie ERCOT analysis, Bloomberg infrastructure reporting, Altman public statements.

Lois Vance

Contributing writer at Clarqo, covering technology, AI, and the digital economy.