While NVIDIA has dominated the AI market with its GPU offerings, the company faces mounting competition. A growing threat comes from application-specific integrated circuits (ASICs).
Bondcap, a US-based venture capital firm, said in a report that demand for NVIDIA's GPUs has outpaced supply, and that companies are also seeking AI-specific hardware to train and deploy AI models more efficiently.
“Unlike GPUs, which are designed to support a wide range of workloads, ASICs are purpose-built to handle specific computational tasks with maximum efficiency. In AI, that means optimised silicon for matrix multiplication, token generation, and inference acceleration.”
Recently, an internal memo from global finance giant JPMorgan revealed that the bank had raised its forecast for the total addressable market of ASICs from $25 billion to $30 billion.

Jukan Choi, a semiconductor market analyst, shared the memo on X, which revealed a 40-50% CAGR in the ASIC market that caters to compute acceleration, serving customers like Google, Microsoft, Meta, Amazon, and other AI-focused companies.
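To put those figures in perspective, here is a rough back-of-the-envelope projection that compounds JPMorgan's revised $30 billion base at the memo's 40-50% CAGR. The 5-year horizon is an assumption chosen purely for illustration and does not come from the memo.

```python
# Rough projection of the ASIC compute-acceleration market, using
# JPMorgan's revised $30B total addressable market as the base and
# the memo's 40-50% CAGR range. The 5-year horizon is assumed.

BASE_TAM_B = 30.0   # $30 billion (JPMorgan's revised forecast)
YEARS = 5           # assumed horizon, not from the memo

for cagr in (0.40, 0.50):
    projected = BASE_TAM_B * (1 + cagr) ** YEARS
    print(f"At {cagr:.0%} CAGR, ${BASE_TAM_B:.0f}B grows to "
          f"~${projected:.0f}B in {YEARS} years")
```

At those growth rates, the market would roughly quintuple to septuple within five years, which helps explain the attention from hyperscalers and their suppliers.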
Furthermore, a report from Taiwanese media outlet United Daily News noted that the supply chain of ASICs is set to grow faster than NVIDIA in 2026, citing another report from Macquarie Securities.
This is due to the growing demand for chip-on-wafer-on-substrate (CoWoS), a packaging technology that stacks multiple chips together to improve performance for customers such as AWS, Google, and Meta. These companies are also set to develop chips in-house to reduce reliance on NVIDIA.
AWS and Google Are Turning to In-House Chips Instead of NVIDIA
For instance, Google has established itself well in the ASIC market with its tensor processing units (TPUs), and it recently released its sixth generation.
These TPUs have played an instrumental role in developing and serving its Gemini family of AI models. Google's technical report for the Gemini 2.5 models revealed that they were trained on a massive cluster of the company's fifth-generation TPUs.
Furthermore, Amazon Web Services (AWS) senior director for customer and product engineering, Gadi Hutt, told CNBC that the company wants to reduce AI training costs and provide an alternative to NVIDIA’s GPUs.
The CNBC report added that Project Rainier, AWS's initiative to build an AI supercomputer, will now contain half a million of the company's Trainium2 chips. This order would have traditionally gone to NVIDIA.
Hutt also said that while NVIDIA's Blackwell offers better raw performance than Trainium2, the latter wins on cost. AWS claims Trainium2 delivers a 30-40% better price-performance ratio than the current generation of GPUs.
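It is worth noting what a 30-40% price-performance advantage actually translates to in spend. Price performance is performance per dollar, so a 35% improvement does not mean a 35% lower bill; the sketch below works through the arithmetic with an arbitrary $100 baseline assumed for illustration.

```python
# Translating AWS's claimed 30-40% price-performance advantage for
# Trainium2 into cost savings. Price performance = work per dollar,
# so "x% better" means the same work costs 1/(1+x) of the baseline.
# The $100 baseline cost is an arbitrary assumption for the example.

gpu_cost_per_unit_work = 100.0  # assumed baseline GPU cost ($) per unit of work

for improvement in (0.30, 0.40):
    trainium_cost = gpu_cost_per_unit_work / (1 + improvement)
    savings = 1 - trainium_cost / gpu_cost_per_unit_work
    print(f"{improvement:.0%} better price performance -> "
          f"~{savings:.0%} lower cost for the same work")
```

By this arithmetic, the claimed 30-40% price-performance edge corresponds to roughly 23-29% lower spend for the same training workload.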
In March, Reuters reported that Meta is testing its first in-house chip for training its AI models to reduce reliance on NVIDIA. The report stated that the company is working with Taiwan Semiconductor Manufacturing Company (TSMC) to produce the chip.
The chip is part of the Meta Training and Inference Accelerator (MTIA) series, and the company plans to start using it to train its models by 2026.
Furthermore, Commercial Times, a Taiwanese media outlet, reported on Wednesday that OpenAI is building its own training chip, which is expected to be launched in the fourth quarter of this year.
Moreover, companies like Marvell and Broadcom are among the leading players in the custom ASIC market, assisting companies in building AI infrastructure.
Marvell has partnered with AWS to develop custom chips, whereas Broadcom is said to assist Meta and OpenAI with their upcoming hardware.

Source: x.com/Jukanlosreve
Additionally, companies like Cerebras, SambaNova, and Groq have developed ASIC-based hardware systems that significantly increase the speed at which AI models generate outputs.
Recently, Cerebras announced that Meta's newest large model, running on its hardware, outperformed NVIDIA's Blackwell systems, breaking a record previously set by the latter.
‘I Believe Most ASIC Projects Will Be Cancelled’
While big companies like Google and AWS have found success with their ASIC projects, newer entrants such as Cerebras and Groq, along with the various startups that have emerged, face a tougher road in challenging NVIDIA.
Citing a former Microsoft employee with expertise in cloud computing technology, AlphaSense, a market intelligence firm, said in a post on X that third-party ASIC companies will face a “steep uphill battle” against NVIDIA due to the lack of a mature software stack like CUDA.
“These companies often must directly assist clients in adapting models to their chips, making scalability difficult,” AlphaSense noted.
For context, CUDA is NVIDIA's software stack, which lets developers program the company's GPUs for their specific workloads.
Furthermore, even Groq CEO Jonathan Ross stated that NVIDIA will continue to maintain its position in the market, despite his company offering hardware systems that outperform NVIDIA's GPUs in inference, the process of generating outputs from a trained AI model.
“Training should be done on GPUs,” Ross said in an interview earlier this year. “I think NVIDIA will sell every single GPU they make for training.”
He also added that inference-specific hardware will work hand-in-hand with NVIDIA’s GPUs. Ross said that if Groq deployed large volumes of lower-cost inference chips, the demand for training would increase.
Thus, he said that it works best for developers to train their models on NVIDIA GPUs and then use Groq’s hardware for inference. “They [NVIDIA] don’t offer fast tokens and low-cost tokens. It’s a very different product, but what they do very well is training, and they do it better than anyone else,” he said.
NVIDIA CEO Jensen Huang has repeatedly acknowledged that the company's biggest challenge is high-speed inference, recently calling it the "ultimate extreme computing problem".
However, for obvious reasons, he doesn't seem bullish on these ASIC projects. In a Q&A session at NVIDIA's GTC 2025 event, Huang said, "I believe most of them (ASIC projects) will get cancelled."
"Listen to Jensen explaining why most ASIC projects are likely to be cancelled and the promising future of NVIDIA's NVLink ecosystem." — The AI Investor (@The_AI_Investor), June 11, 2025
He stated that most will not surpass the NVIDIA hardware already available to customers, particularly given the pace at which the company advances. These ASIC projects would first have to catch up with NVIDIA before offering anything remotely superior.
The post Is NVIDIA’s AI Market Dominance Under a Threat? appeared first on Analytics India Magazine.