“We want to take the cloud to customers, not the other way around,” said Pradeep Vincent, senior VP and chief technical architect for Oracle Cloud Infrastructure (OCI), at Oracle AI World 2025, held from October 13-16 in Las Vegas, US.
On the sidelines of the event, Vincent sat down with AIM to discuss the company’s evolving cloud strategy, its AI infrastructure ambitions, and India’s positioning and probable gains in the next wave of AI adoption.
Having witnessed cloud computing evolve over two decades, Vincent described this moment as the most transformative in his career. “This is by far the period of the highest change rate,” he said, adding, “it’s the most spectacular phase of technology evolution.”
Oracle’s engineering teams continue to adapt to this rapid evolution.
Bringing the Cloud Closer to Customers
Vincent, who has been with Oracle for over a decade, leads the technical direction across OCI’s core services, which include compute, storage, networking, data centre engineering, and security. He said that Oracle’s long-term strategy has always centred on price, performance and proximity.
“We don’t believe in asking customers to come to giant public regions. We want to package our cloud and put it close to where customers are — in their data centres or within another hyperscaler’s facilities,” Vincent explained. He added that this gives customers full OCI capability while meeting their data residency, compliance, and latency needs.
This hybrid and multi-cloud approach, placing Oracle Cloud Infrastructure within enterprise environments or alongside hyperscalers like AWS, Azure, and GCP, allows companies to run their workloads seamlessly across different environments.
Vincent added that, in a multi-cloud setup, customers running an app in AWS can use an OCI database service without noticing any difference.
Building the Foundation for AI Infrastructure
Vincent highlighted Oracle’s early investments in high-performance networking, particularly RDMA (Remote Direct Memory Access), as the backbone of its AI infrastructure.
“We started deploying RDMA back in 2019 for Exadata before AI was even a big deal,” he said, adding that this gave the company the architectural building block for its AI superclusters.
These learnings culminated in Oracle’s Zettascale and Zettascale10, large-scale compute clusters purpose-built for AI workloads.
The company unveiled Zettascale10 at Oracle AI World 2025. A cloud-based AI supercomputer that connects hundreds of thousands of NVIDIA GPUs across multiple data centres, Zettascale10 delivers up to 16 zettaFLOPS of peak performance.
Vincent said that the company is already constructing campus-wide AI superclusters, massive data centre campuses with gigawatt-scale power capacity. “For context, the city of San Jose uses less than one gigawatt,” he said. “We’re building campuses of that scale to power training and inference for AI workloads.”
The first such mega-campus is already under development in Abilene, Texas, with more planned worldwide.
Making AI Invisible for Enterprises
When asked how Oracle is integrating AI into its stack, Vincent said that customers can be divided into three types: those who want AI to disappear into applications, those who need managed AI services, and those who demand bare-metal GPU clusters.
“Some customers don’t want to deal with AI directly. They just want applications with AI built in,” he said. For these customers, Oracle offers NetSuite and Fusion.
He added that some customers prefer managed Kubernetes or Slurm services for GPU clusters, while others want raw infrastructure for their internal AI workflows. Vincent believes the future of enterprise software lies in AI becoming indistinguishable from the application layer.
“Apps as we know them will disappear,” he said. “They’ll be so AI-centric that you won’t even know what’s an app anymore.”
The Data–Intelligence Balance
On the emerging concept of AI operating systems, Vincent offered a pragmatic perspective. While intelligence can be centralised, he said, data must remain distributed and secure.
He said putting data inside a model creates risks: the model is too large, hard to control, and, unlike a database, cannot reliably delete data.
Vincent said Oracle doesn’t focus on building base AI models, but on fine-tuning models and integrating them with enterprise data management systems. The company focuses on agentic workflows, which automatically combine inferencing and data management steps.
India’s AI Moment
Turning to India, Vincent emphasised that the country has a massive opportunity to improve productivity. “It’s not just about building large models, but about applying AI meaningfully across industries.”
He added that India should selectively invest in sovereign and linguistically relevant AI models, particularly for sectors like education and governance. Security, as well as local, linguistic, and cultural contexts should be prioritised, as generic models may not be trained on the relevant data or suited to specific use cases, he said.
Oracle’s existing multi-cloud partnerships and dedicated regions in India, he said, are already helping enterprises deploy AI workloads closer to their data and users.
“We recently announced multi-cloud regions with GCP, AWS, and Azure in India,” he noted. Placing workloads close to the data is a key part of Oracle’s plan to support inference and fine-tuning operations.
[With inputs from Amit Raja Naik]
The post Oracle’s Secret to Building AI Superclusters appeared first on Analytics India Magazine.