OpenAI’s Compute Dream Is Too Big for the World

OpenAI is on a compute spree, securing multi-billion-dollar deals over the past few months. The AI startup has locked in unprecedented chip partnerships with NVIDIA, AMD, and Broadcom, totalling more than 26 gigawatts of AI compute capacity and financial commitments potentially exceeding $1 trillion.

The Broadcom deal stands out because OpenAI will develop in-house chips through the partnership. The startup will develop and deploy 10 gigawatts of AI accelerators and networking systems.

For Broadcom CEO Hock Tan, the partnership with OpenAI is the beginning of a new era in chip design. “GPT-5, 6, 7, on and on and each of them will require a different chip, a better chip, a more developed chip,” he said in a podcast while announcing the partnership. 

Tan added that the industry is entering exciting times, with Broadcom pushing the limits of what’s possible as it moves toward 2-nanometer technology and next-generation architectures.

OpenAI CEO Sam Altman said that the company is optimising the entire AI stack, from transistor design and chips to racks, networking, inference algorithms, and end products. By designing chips for specific workloads instead of using off-the-shelf solutions, he said the company aims to achieve better performance, greater cost-efficiency, and faster model speeds.

OpenAI president Greg Brockman recalled that the company initially believed AGI would emerge from breakthrough ideas rather than sheer computational power. However, by 2017, it became clear that scale was the real differentiator, as doubling compute consistently made its models twice as capable.

Can OpenAI Actually Pull It Off?

AS Rajgopal, CEO & MD at NxtGen Cloud Technologies, told AIM that one of the biggest hurdles for OpenAI’s compute expansion isn’t just the cost but the operational infrastructure. 

“The electrical industry is not geared up for this rapid expansion,” he said, pointing out that datacenter completion may lag behind deployment plans. He also warned that recovering investments could make services extremely expensive, and prolonged losses may create financial strain across the ecosystem.

In a recent podcast, Dylan Patel, founder and CEO of SemiAnalysis, said that OpenAI cannot monetise its AI models without first securing enough compute. Compute, he added, is needed to train models good enough to unlock adoption and create real business use cases, which means high upfront capital requirements before revenue can be generated.

He further explained that the game is being played among trillion-dollar tech giants, including Google, Meta, Microsoft, and Elon Musk’s xAI. If OpenAI doesn’t secure enough compute quickly, it could be outpaced by rivals with bigger balance sheets.

OpenAI’s survival depends on forming alliances that can absorb financial risk and provide compute ahead of revenue. 

On OpenAI’s $300 billion deal with Oracle, Patel said that the latter will earn a margin on the deal but is taking on huge financial risk, betting that OpenAI’s future revenue will justify the investment. If the AI market succeeds, Oracle could make tens or even hundreds of billions in profit; if not, it might need to raise debt to cover its commitments.

Circular Economy 

It remains to be seen how OpenAI will scale its massive compute ambitions. The company currently doesn’t operate its own data centres at a meaningful scale, relying instead on rented or co-owned infrastructure, mostly Microsoft’s Azure, and soon Oracle Cloud.

OpenAI generated about $4.3 billion in revenue in the first half of 2025, roughly 16% more than its total revenue for 2024. Analysts estimate the company could reach around $13 billion for the year, driven by ChatGPT subscriptions, API usage, and enterprise products like ChatGPT Deep Research. Despite this growth, OpenAI ran a loss of about $2.5 billion in the first half of 2025, mainly due to heavy research and development spending and stock-based compensation.
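A quick back-of-envelope check of the figures above: if $4.3 billion in first-half 2025 revenue is roughly 16% more than OpenAI’s entire 2024 revenue, the implied 2024 total comes out near $3.7 billion. The sketch below only rearranges the numbers reported in this article; the 16% figure is an analyst estimate, not an audited result.

```python
# Back-of-envelope check on the revenue figures cited above.
# Inputs are the numbers reported in the article, not audited financials.
h1_2025_revenue = 4.3e9        # first-half 2025 revenue, USD
growth_vs_2024_total = 0.16    # "roughly 16% more" than full-year 2024

# If H1-2025 = (1 + 16%) * FY-2024, then FY-2024 = H1-2025 / 1.16
implied_2024_revenue = h1_2025_revenue / (1 + growth_vs_2024_total)
print(f"Implied 2024 full-year revenue: ${implied_2024_revenue / 1e9:.1f}B")
```

That ~$3.7 billion figure is consistent with widely reported estimates of OpenAI’s 2024 revenue, which suggests the article’s percentages hang together.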

By announcing its multi-billion-dollar chip and cloud deals, OpenAI has created a sort of circular economy, where money flows within the ecosystem rather than generating entirely new revenue. Funds from NVIDIA, for instance, will largely be used to buy NVIDIA chips.

In the case of the AMD deal, OpenAI has agreed to buy up to 6 gigawatts of Instinct AI chips over several years. In return, AMD has given OpenAI the option to buy up to 160 million AMD shares, which is about 10% of the company, at a very low price. This arrangement creates a self-reinforcing loop where OpenAI’s chip purchases increase AMD’s sales and stock value, and the rise in AMD’s stock makes OpenAI’s shares more valuable, helping it invest further in AMD chips.
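The warrant numbers in the AMD deal can be sanity-checked with simple arithmetic: if 160 million shares correspond to about 10% of the company, AMD must have roughly 1.6 billion shares outstanding. A minimal sketch, taking the article’s “about 10%” at face value (actual share counts fluctuate with buybacks and issuance):

```python
# Rough arithmetic behind the AMD warrant figures reported above.
# Assumption: "about 10%" is taken at face value; real share counts vary.
warrant_shares = 160_000_000   # shares OpenAI has the option to buy
stake_fraction = 0.10          # ~10% of AMD, per the report

implied_shares_outstanding = warrant_shares / stake_fraction
print(f"Implied AMD shares outstanding: {implied_shares_outstanding:,.0f}")
```

AMD’s actual share count is in the neighbourhood of 1.6 billion, so the two reported figures are mutually consistent.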

Similarly, under a $300 billion agreement with Oracle to build new US data centres, Oracle purchases large numbers of NVIDIA chips for the facilities, meaning part of the money eventually flows back to NVIDIA. 

Meanwhile, OpenAI’s custom chip plans with Broadcom are still in development and depend on external manufacturers such as TSMC and partner funding. 

Rajgopal believes this push into custom AI accelerators could reshape the chip market, though he sees operational challenges in managing multiple technology deployments simultaneously. He added that OpenAI appears to be hedging its bets—running training on NVIDIA and AMD clusters while relying on custom chips for inference.

Too Big, Too Fast?

While the compute expansion could give OpenAI an edge, Rajgopal questioned whether it translates into a long-term advantage. “Unless OpenAI delivers on AGI, the investments are not justified for the current promise of generative AI,” he said. 

Rajgopal compared the scenario to industries that typically move toward duopolies, such as Airbus and Boeing, or Walmart and Flipkart, and suggested that AI needs more players for stable growth.

On energy and scaling, Rajgopal said that while OpenAI’s funding is secure, the timelines are highly ambitious. “Money is secured; however, timelines are aspirational and will not be met,” he said, adding that the weight of expectations may slow down execution.

The post OpenAI’s Compute Dream Is Too Big for the World appeared first on Analytics India Magazine.
