

Even at 81, Larry Ellison is still in builder mode.
As Oracle’s executive chairman and CTO, he is betting on AI infrastructure with the urgency of a founder chasing a new gold rush, pouring billions into data centres to train and serve large models.
Ellison has ridden many such cycles at Oracle before, turning databases, enterprise software and cloud into growth engines through conviction and timing.
This time, however, as the economics of AI and data centres come under strain, the latest bet that carries his imprint is being tested and increasingly questioned.
In December, Oracle’s stock fell more than 10% in a single session, wiping out tens of billions of dollars in market value after the company warned that AI-related capital spending would weigh on earnings.
Shares reached an all-time high of ~$328 in September but have since fallen to ~$197 as of late December, a drop of ~40%.
The sell-off followed reports of uncertainty about Oracle’s funding for multi-billion-dollar data centre projects.
Oracle has signed a ~$300 billion deal to supply OpenAI with cloud services and AI data-centre capacity, leasing it massive GPU clusters inside its data centres.
Internal documents showed Oracle generated ~$900 million in revenue from renting servers powered by NVIDIA chips, but only about $125 million in gross profit, a ~14% gross margin on that business.
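Those figures imply strikingly thin unit economics for a capital-heavy business. A quick back-of-the-envelope check, using only the rounded numbers reported above:

```python
# Rough check of the reported Oracle GPU-rental economics.
# Inputs are the approximate figures cited above, not audited financials.
revenue = 900e6        # ~$900M revenue from renting NVIDIA-powered servers
gross_profit = 125e6   # ~$125M gross profit on that business

print(f"Gross margin: {gross_profit / revenue:.1%}")
# -> 13.9%, matching the ~14% figure cited above.
```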
So, where do we go from here?
A Fragile Boom Built on Fast-Moving Capital
“LLMs can get better by leaps and bounds, and you could still have a very financially fragile sector,” Advait Arun, senior associate for capital markets at the Centre for Public Enterprise (CPE), told AIM in an interaction.
Arun recently authored Bubble or Nothing at CPE, a comprehensive report that examines how the AI boom is being financed.
Capital continues to concentrate and round-trip among a small set of players.
Oracle’s commitment to OpenAI is only one example. NVIDIA and OpenAI have announced plans to deploy at least 10 gigawatts of NVIDIA systems, with up to $100 billion in planned investment.

OpenAI has committed up to $1.4 trillion in long-term infrastructure and compute investments with partners including Microsoft, Oracle and NVIDIA, even as it remains unprofitable. Those commitments are explicitly intended to be funded out of future revenues, money that would flow straight back to the same firms.
Alphabet, Meta, Microsoft and Amazon are expected to spend over $350 billion on AI infrastructure in 2025, with that figure projected to cross $400 billion in 2026.
HSBC estimates that OpenAI alone will require at least $207 billion in computing capacity through 2030, even as it expects the company to remain unprofitable.
CPE’s report states that hyperscalers invested more than $560 billion into AI technology and data centres between 2024 and 2025, while generating just $35 billion in associated revenues.
Analysts at Bain & Company estimate the sector needs $2 trillion in new revenue just to profitably fund the data centres planned for 2030.
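Taken at face value, the cited estimates frame the gap. The sketch below simply juxtaposes them; the CPE and Bain figures are scoped differently, so the ratios are rough:

```python
# Rough sense of the revenue gap, using the rounded estimates cited above.
# CPE's capex/revenue figures and Bain's 2030 requirement are scoped
# differently, so this is a juxtaposition, not a precise ratio.
capex_2024_25 = 560e9        # hyperscaler AI/data-centre investment, 2024-25
associated_revenue = 35e9    # revenue CPE associates with that investment
revenue_needed_2030 = 2e12   # Bain: new revenue needed by 2030

print(f"Revenue per dollar invested so far: ${associated_revenue / capex_2024_25:.2f}")
print(f"Multiple of today's AI revenue needed by 2030: {revenue_needed_2030 / associated_revenue:.0f}x")
# ~$0.06 of revenue per capex dollar to date; ~57x growth to reach Bain's figure.
```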

The circular ecosystem pushes losses around rather than recognising them—so when one node weakens, stress propagates through everyone connected.

Usage Is Not Demand
Part of the disconnect between technical advancement and financial fragility lies in how demand is measured. AI revenue is rarely separated cleanly from broader cloud or software sales.
Usage is bundled into consumer products and enterprise suites, making it difficult to tell whether customers are willing to pay prices that justify the capital locked in upstream.
Many companies now claim unprecedented levels of AI adoption, often comparing it to the uptake of automobiles, smartphones or the internet.
But being considered a “user” does not necessarily mean a product is delivering sustained value or changing how work is actually done.
If the AI assistant embedded into Gmail gives a user an auto prompt—or simply a cue to use AI—does hovering the mouse over it, or accidentally clicking on it, count as a ‘use’?
Of ChatGPT’s roughly 800 million weekly active users, only about 5% subscribe to a paid plan.
Leslie Joseph, a principal analyst at Forrester Research, explained to AIM that much of what is being counted as “AI adoption” today is shallow and fragile.
“Most companies just woke up yesterday, [integrated a] Copilot tool — and that’s it,” he said, pointing to how many deployments stop at surface-level assistants instead of re-engineering data pipelines, workflows, and decision systems that would be required for AI to deliver durable productivity gains.
Joseph’s point is not that enterprises lack interest in AI, but that adoption has been reduced to surface-level tooling rather than structural change.
Adding an AI layer to existing workflows, he argues, is not the same as rebuilding how work actually gets done. “It’s a failure of imagination and execution,” he said.
Infrastructure spending, however, is being justified as if that transformation has already occurred.
“When so much use is also personal rather than enterprise, it’s not really clear that any of this can be sustained at a price point that’s reasonable for the sector’s health,” Arun explained.
Why the ‘Productive Bubble’ Analogy Breaks Down
Defenders of the boom often argue that even if AI is in a bubble, it is a productive one.
Capital overshoot, they say, builds infrastructure that remains useful long after weaker firms fail—much like fibre laid during the dot-com era or railroads constructed in the nineteenth century.
Much of the fibre laid down 25 years ago is still in use, as are railroads built more than a century earlier.
But will the GPUs being installed today still be in use even five years from now? And, more importantly, will they still be growing the revenue of whoever owns them?
CPE states that the economic life of modern AI GPUs is estimated at two to four years, even as new designs arrive annually.
That matters because GPUs are increasingly used as collateral for the loans financing AI expansion.
In 2023, high-end GPUs such as NVIDIA’s H100 rented for about $8 an hour. By 2024, rates for the same class of hardware had fallen to close to $1 per hour, a drop of roughly 80-85% within a year.
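To see what that collapse means for the asset’s economics, consider the gross revenue a single GPU earns at each rate. A minimal sketch, using the cited hourly rates and an assumed utilisation:

```python
# What the cited rental rates imply for one GPU's annual gross revenue.
# The hourly rates are those cited above; the utilisation figure is an
# illustrative assumption.
HOURS_PER_YEAR = 8760
utilisation = 0.70   # assumed average fraction of hours actually rented

for year, rate in [("2023", 8.00), ("2024", 1.00)]:
    annual_revenue = rate * HOURS_PER_YEAR * utilisation
    print(f"{year} at ${rate:.2f}/hr: ~${annual_revenue:,.0f} per GPU per year")
# ~$49,000/yr at 2023 rates versus ~$6,100/yr a year later -- before power,
# cooling, floorspace and the financing cost of the chip itself.
```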
Jordan Nanos, an analyst at SemiAnalysis, told AIM that GPU lifecycle decisions are governed more by economics than by capabilities.
“If the performance per dollar [of the] GPU is higher than the cost to keep it running, people will keep using it,” he added. “GPUs get replaced when the power and floorspace in the datacenter can be used for something else.”
That threshold is being reset every year by NVIDIA’s rapid upgrade cycle.
The tipping point arrives, Nanos said, “if new GPUs are so much more performant than the old ones that it no longer makes economic sense for buyers to rent the old ones.”
As a result, demand drains from older variants even if they still work. Meanwhile, buyers chase the newest chips because, in an intensely competitive market, marginal gains in speed and efficiency often decide who can train faster, serve cheaper and win customers.
In practice, contracts and depreciation schedules, not hardware failure, determine when capacity exits the system.
That depreciation dynamic sits directly underneath how today’s AI infrastructure is financed.
Debt’s Role in Shaping the Boom
Today’s AI build-out is being financed through two distinct debt stacks, each with its own risks. Both are also increasingly intertwined.
Data centre developers raise capital much like commercial property firms.
Short-term construction loans and mini-perm facilities fund builds, which are then refinanced into longer-term debt once tenants are secured and cash flows stabilise.
These longer-term loans are bundled into asset-backed securities (ABS) or commercial mortgage-backed securities (CMBS) and sold to institutional investors.
Data centres already account for roughly 61% of the $79-billion digital-infrastructure securitisation market, embedding AI build-out risk into pensions, insurers and banks.
Most AI-focused data centres built in 2024-25 rely on structures that require refinancing in 2027-28, even though the debt can run 10 to 20 years while tenant leases typically last just three to five.
And what if, at renewal, tenants downsize, renegotiate aggressively, or simply walk away because GPU economics have shifted or their AI workloads no longer justify the cost?
Buildings underwritten as long-lived assets are left chasing shorter, weaker cash flows, just as mini-perm loans roll into term refinancing. This forces owners either to inject fresh equity, accept punitive terms or hand the keys to lenders.
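The arithmetic of that squeeze is simple to sketch. In the toy example below, every figure is an illustrative assumption rather than a number from any specific deal: the new term loan is sized off the tenant’s reduced rent, and the owner must bridge whatever the old balance exceeds it by:

```python
# Sketch of the mini-perm rollover described above. At refinancing, the new
# term loan is sized off the tenant's current rent, not the original build
# cost. All figures are illustrative assumptions.
outstanding_debt = 500e6   # balance remaining on the construction/mini-perm loan
new_rent = 45e6            # annual rent after the tenant downsized at renewal
dscr_target = 1.25         # debt-service coverage lenders demand on the term loan
debt_constant = 0.09       # assumed annual debt service per dollar borrowed

supportable_service = new_rent / dscr_target     # debt service the rent can cover
new_loan = supportable_service / debt_constant   # loan size that service supports
equity_gap = outstanding_debt - new_loan

print(f"New loan the rent supports: ${new_loan / 1e6:.0f}M")   # ~$400M
print(f"Gap the owner must plug:    ${equity_gap / 1e6:.0f}M") # ~$100M in fresh
# equity -- or punitive terms, or handing the keys to the lender.
```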
Sitting on top of this is a second layer of leverage: GPU-backed loans taken on by neoclouds and AI operators.
CPE’s report highlights, for example, the triangle between Google, TeraWulf and Fluidstack.
TeraWulf, a data centre developer, issued $3.2 billion in high-yield bonds to build GPU-heavy facilities that would be leased to Fluidstack, a neocloud operator.
Fluidstack’s ability to pay those rents, in turn, was effectively guaranteed by Google, which committed to minimum payments for the capacity.
So while Google did not issue the debt itself, its promise of revenue is what made TeraWulf’s ‘junk’ bonds viable in the first place.
The bonds sit on TeraWulf’s balance sheet. Fluidstack runs the GPUs. But the cash flows ultimately depend on Google.
The result is a three-layer structure where leverage is raised against Google-backed demand, even though Google keeps the debt off its own books.
Neoclouds such as CoreWeave and Lambda have taken on multi-billion-dollar GPU-backed loans, often through special purpose vehicles. This includes pledging NVIDIA chips as collateral on facilities priced well above investment-grade credit, despite the GPUs’ estimated economic life of just two to four years.
NVIDIA’s relationship with CoreWeave goes further than supply.
It has committed to buy back up to $6.3 billion of unused compute capacity through 2032 if demand falls short. In effect, NVIDIA is acting as a buyer of last resort for capacity built on its own chips. This helps CoreWeave raise debt and expand today, while limiting its own downside if customers fail to materialise.
For CoreWeave’s lenders, that promise improves near-term confidence in utilisation. For the system as a whole, it means risk is being propped up by the chipmaker itself.

But even with that backstop, depreciation still bites: GPUs pledged as collateral can lose economic value long before the loans written against them mature.
Arun calls the resulting pattern an “extend and pretend” dynamic, where refinancing delays recognition of losses rather than resolving them, as long as lenders keep rolling exposure forward.
The model only works if two things hold at once: that utilisation stays high enough to service debt, and that lenders remain willing to refinance even as hardware ages.
Accounting choices further soften the picture. CoreWeave depreciates GPUs over six years, Nebius over four, even though engineers and project-finance lawyers put true economic life closer to three to four.
Analysts at Cerno Capital estimate that if Microsoft, Alphabet, Meta and Amazon stretched data-centre asset lives to six years, reported depreciation would fall by roughly 45%, from $51 billion to $28 billion. That flatters earnings today but does nothing to stop asset values eroding.
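The mechanics are plain straight-line arithmetic. A sketch with an illustrative asset base, not the companies’ actual figures:

```python
# How stretching the assumed asset life flatters reported earnings.
# Straight-line depreciation of the same (illustrative) asset base.
asset_base = 200e9   # assumed pool of data-centre hardware, in dollars

for life_years in (3, 4, 6):
    annual_expense = asset_base / life_years
    print(f"{life_years}-year life: ${annual_expense / 1e9:.0f}B/yr depreciation expense")
# 3yr -> $67B/yr, 4yr -> $50B/yr, 6yr -> $33B/yr. The hardware loses value
# at the same pace regardless; only the reported expense changes.
```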

In short, debt, which belongs with predictable cash flows and durable assets, is being layered onto uncertain demand and fast-depreciating hardware.
The Neocloud Squeeze
Neoclouds are specialised cloud providers built almost entirely around renting high-performance GPUs.
Unlike hyperscalers, they do not bundle compute with profitable software, advertising or enterprise services.
CoreWeave, a leading neocloud, derives over 60% of its revenue from just two customers—Microsoft and NVIDIA.
That concentration matters because those customers can shift workloads, renegotiate pricing or build capacity elsewhere, exposing most of CoreWeave’s cash flows to decisions it does not control.
At the same time, hyperscalers can afford to undercut the market.
AWS has cut GPU pricing by 30-45%, absorbing margin pressure through higher-margin services that neoclouds lack. In debt terms, that price pressure directly hits coverage ratios.
With GPU-backed loans sized for much higher rental assumptions, these price cuts can push projects below debt-service thresholds, forcing renegotiation or restructuring.
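A minimal sketch of that coverage arithmetic, assuming a loan underwritten at a higher rental rate; the 30-45% cuts are the ones cited above, and every other input is an illustrative assumption:

```python
# How the cited 30-45% price cuts land on debt-service coverage for a loan
# underwritten at a higher rental assumption. All inputs are illustrative.
HOURS_PER_YEAR = 8760
utilisation = 0.80
underwritten_rate = 2.50          # $/GPU-hour assumed when the loan was sized
debt_service_per_gpu = 12_000.0   # assumed annual payment per GPU financed

for cut in (0.00, 0.30, 0.45):
    revenue = underwritten_rate * (1 - cut) * HOURS_PER_YEAR * utilisation
    dscr = revenue / debt_service_per_gpu
    print(f"cut {cut:.0%}: ~${revenue:,.0f}/GPU-yr, coverage {dscr:.2f}x")
# 0% -> 1.46x, 30% -> 1.02x, 45% -> 0.80x. With covenants typically set
# around 1.2x-1.3x, the cuts alone can tip a project into breach.
```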
When Private Risk Becomes Public Exposure
Household leverage is not the epicentre. But the exposure is not benign.
Roughly 30% of American household wealth sits in equity markets. AI-related capital expenditure accounted for over 40% of US GDP growth in 2025.
A sharp correction would still hit consumption and growth. That exposure is already visible. Microsoft disclosed that its OpenAI investment reduced net income by $3.1 billion in a single quarter, implying total OpenAI losses of roughly $11.5 billion.
Losses at the frontier do not stay contained. They move outward—into earnings, pensions and portfolios.
Barclays analysts cut earnings estimates for Alphabet, Microsoft and Meta by as much as 10%, arguing that GPU depreciation is being materially underestimated and that consensus models overprice these companies by 5-25%.

Risk also travels through credit.
With data centres dominating digital-infrastructure ABS and CMBS issuance, AI exposure now sits on the balance sheets of insurers, pension funds and regional banks—often concentrated among a small number of developers.
Take, for example, the Meta-Blue Owl Capital deal to finance, develop and operate Hyperion, a massive data-centre campus in Richland Parish, Louisiana.
Meta owns only 20% of the SPV, while Blue Owl owns 80%.
The debt (financed by PIMCO) stays off Meta’s books, but Meta provides a residual value guarantee.
This means that if the data centre’s value drops due to tech obsolescence, Meta is still on the hook to reimburse investors.
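The guarantee works like a floor under the asset’s terminal value. A sketch of the payoff with hypothetical figures, not the Hyperion deal’s actual terms:

```python
# Payoff of a residual value guarantee: the guarantor tops investors up to a
# floor if the asset is worth less at the horizon. Figures are hypothetical,
# not the terms of the Hyperion deal.
guaranteed_value = 20e9   # assumed floor Meta guarantees at the horizon

for market_value in (25e9, 20e9, 12e9):
    top_up = max(0.0, guaranteed_value - market_value)
    print(f"asset worth ${market_value / 1e9:.0f}B -> Meta pays ${top_up / 1e9:.0f}B")
# The debt stays off Meta's balance sheet, but the obsolescence risk does not.
```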
Meanwhile, a recent report from Coatue, an investment firm, noted that the Nasdaq-100’s next-12-months (NTM) price-to-earnings (P/E) multiple stands at 28x in 2025, compared with 89x in 1999.
That suggests valuations are more reasonable than the wild overpricing of the dot-com bubble.
However, that doesn’t change the fact that a majority of these companies—from Oracle to Blue Owl, Meta, Microsoft and others—are publicly traded, and a correction will undoubtedly affect household capital.
What a Correction Looks Like
Tenants consolidate workloads, renewals slip and capacity that looked full on paper slides into partial utilisation.
Cash flows weaken just as debt rolls over. Projects that cannot refinance disappear, and others survive at lower margins.
The likely survivors are hyperscalers with diversified cash flows that can cross-subsidise losses.
The casualties are neoclouds and GPU-heavy tenants whose only product is compute.
At the macro level, the pattern is familiar. Tomohiro Hirano, an economist at Royal Holloway, University of London, told AIM that bubbles often arise during periods of uneven technological growth.
“I can easily expect that if stock bubbles collapse, it will generate a recession,” he said. “Then, to stimulate the economy, the central bank will lower the policy rate, as the [Federal Reserve Board] did after the burst of IT bubbles.”
Lower interest rates often follow, pushing capital toward leverage-heavy sectors such as housing. One of the hidden spillovers of the AI build-out is its impact on energy.
Utilities are committing to 15-year cost-recovery contracts for gas turbines and grid upgrades sized for AI data-centre loads, even though tenant demand may only be contractually secure for three to five years.
If utilisation falls, Arun warns, the gap does not vanish—it is socialised onto ratepayers.
So, are we in a reckless wave of financing?
“It will only look reckless in hindsight,” Arun said.