Pre-training finished on March 24, 2026. Sam Altman told staff the same day that launch was “a few weeks” away. The insider-predicted April 14 date came and went without an announcement. As of April 17, GPT-6 — internally codenamed “Spud” — had still not shipped.

What’s taking so long? And why does it matter more than any previous OpenAI release?

What We Know

The timeline is unusually transparent for OpenAI. Pre-training completion is a defined milestone — it means the base model weights are done. What follows is alignment work, safety evaluations, red-teaming, and deployment infrastructure. For GPT-4, that process took months. For GPT-4o, considerably less. For Spud, Altman set a “few weeks” expectation in late March, which pointed to a mid-April window.

Polymarket bettors are not alarmed by the miss. As of mid-April, the prediction market placed 72% odds on a GPT-6 release before April 30, and 93% odds before June 30. That tracks with a model in final polish rather than one in trouble — but the silence is still conspicuous at a moment when OpenAI’s competitors are loud and active.

This Is Not an Incremental Model

What makes the delay worth scrutinizing is the nature of what’s coming. Greg Brockman, OpenAI’s president, described Spud as having “a big model feel — not incremental… a significant change in the way we think about model development.” Altman called it a model that could “really accelerate the economy.” These are not phrases OpenAI reaches for when shipping a capability bump.

The architecture behind GPT-6 reportedly goes beyond the transformer-plus-fine-tuning stack that has defined the GPT series. More importantly, the product vision around it does. OpenAI is building GPT-6 not as a standalone model but as the intelligence layer of a unified “super app” — one that integrates ChatGPT, Codex, and the AI browser the company has been developing under the Operator initiative. This is a platform play: a single product surface that handles conversation, code, web interaction, and task automation under one roof.

That ambition raises the stakes considerably. Shipping a platform is harder than shipping a model.

The IPO Pressure

Context matters here. OpenAI is navigating the restructuring of its capped-profit arm into a conventional for-profit corporation — a contentious process that sets up a path toward a potential IPO at a valuation in the $1 trillion range. Investors and analysts are watching GPT-6 as the first major product milestone of the new era.

A delayed launch does not signal collapse. But a bungled launch — or a launch that underwhelms after months of expectation management — would be more damaging than a clean delay. OpenAI appears to be choosing precision over punctuality.

What Changes When It Ships

If GPT-6 delivers on what Brockman and Altman have described, the competitive landscape shifts in a specific way: OpenAI stops competing on model benchmarks and starts competing on platform lock-in.

The unified super app strategy — ChatGPT plus Codex plus browser automation — creates a surface that rivals cannot match by shipping a better model alone. Anthropic’s Claude, Google’s Gemini, and Meta’s Llama are all capable language models. None of them currently has the integrated product surface that OpenAI is building toward.

That is the real story behind Spud’s delay. OpenAI is not just finishing a model. It is assembling a product that reframes what the AI race is actually about — and it needs to be right the first time.

The wait, for now, continues.

Lois Vance

Contributing writer at Clarqo, covering technology, AI, and the digital economy.