
Among the leading cloud providers, Oracle remains the only one that has yet to develop its own AI chip; AWS, Google, and Microsoft have all built custom silicon. During a Q&A with the media at AI World 2025 in Las Vegas, Karan Batta, senior vice-president at Oracle Cloud Infrastructure (OCI), revealed that Oracle is open to the idea of building its own AI chip.
“Never say never,” Batta said. “But right now, when you think about AI chipsets, it’s not just the hardware itself, it’s a software ecosystem play.” He added that even if Oracle built top-tier hardware, the real challenge would be getting AI models and software libraries to actually adopt it. “It’s a multi-decade journey to be able to do that,” he said.
Discussing Oracle’s strategy, Batta said the company is partnering with established hardware vendors rather than creating a proprietary chip. “We are still collaborating with AMD, NVIDIA, all of our other vendors, like Ampere, to actually put in a lot of the things that we need for our customers in their next generation hardware architecture,” he explained.
Notably, Oracle and AMD recently expanded their multi-generation partnership to help customers scale AI capabilities. OCI will be a launch partner for the first publicly available AI supercluster powered by AMD Instinct MI450 Series GPUs, with an initial deployment of 50,000 GPUs planned for Q3 2026, set to expand further in 2027 and beyond.
While Oracle is not building a custom chip, the company still ensures that customer needs influence hardware development. “We are putting a lot of that feedback back into our hardware partners and deploying that in our cloud,” Batta said.
He added that Oracle’s strategy is focused on choice and scale rather than on competing head-to-head with chipmakers such as NVIDIA and AMD. “Our benefit comes from the fact that we provide customer choice and flexibility,” he said.
Moreover, Batta highlighted that AI infrastructure is more than just GPUs. The broader ecosystem, including storage, network connectivity, and optics, is equally critical. “It’s not just about GPUs… You could not have enough optics or cables. You could be deploying a data centre in the middle of nowhere, but you need network connectivity, storage… It’s a much broader conversation than just GPU supply,” he said.
At AI World 2025, Oracle announced Oracle Acceleron, a set of new networking capabilities in OCI that improve performance, security, and efficiency across a range of workloads.
Oracle Acceleron combines dedicated network fabrics, converged NICs, host-level zero-trust packet routing, and multi-planar network designs. These features provide direct data paths, lower latency, higher throughput, and improved security for workloads ranging from web applications to AI and high-performance computing (HPC) clusters.
What Other Cloud Providers Are Doing
Unlike Oracle, major cloud providers such as Microsoft, Amazon Web Services (AWS), and Google Cloud are investing heavily in developing custom AI chips.
Google is known for its Tensor Processing Units (TPUs), custom-built chips made to speed up machine learning tasks. Companies like Safe Superintelligence, Reliance, and OpenAI use them to run their AI models.
Most recently, Anthropic announced plans to expand its compute capacity with up to one million TPUs as part of a multibillion-dollar deal with Google.
Microsoft has launched its own custom silicon, the Azure Maia 100 and Azure Cobalt 100, to accelerate workloads in the Azure cloud. The Maia 100 is an AI accelerator built for training and running large language models (LLMs), while the Cobalt 100 is an Arm-based CPU for general-purpose computing tasks.
Similarly, AWS has developed its own AI chips, Inferentia and Trainium, to provide high-performance, cost-effective solutions for AI workloads. Inferentia is optimised for inference tasks, while Trainium is designed for training large-scale models.
Interestingly, Anthropic uses AWS’s Trainium and Inferentia chips to train and run its AI models. Apple also relies on AWS’s Graviton and Inferentia chips for AI and machine learning tasks, and is looking into using Trainium2 for pretraining its models.
In contrast to these approaches, Batta revealed that Oracle is finding ways to repurpose older GPU hardware for inference and smaller-scale AI models. Customers continue to use previous-generation NVIDIA GPUs, such as those based on the Ampere and Volta architectures, for verticalised models that do not require massive LLM-scale resources.
“Repurposing existing capacity for inference is actually a big help,” Batta said, highlighting how the company democratises AI resources beyond hyper-scale deployments.
Oracle may not have its own chip yet, but it is quietly laying the foundation for the next wave of AI infrastructure, one built on choice, flexibility, and real-world performance.