

The competition in healthcare AI is heating up. Just days after OpenAI launched ChatGPT Health, Anthropic has rolled out Claude for Healthcare, accelerating the race to embed generative AI deeper into medical workflows.
Unlike ChatGPT Health, which operates as a separate, sandboxed space within ChatGPT, Claude for Healthcare is woven directly into Anthropic’s Claude chatbot. According to the company, the new features allow Claude to securely access trusted medical and insurance databases to assist with medical-related queries and routine healthcare tasks.
For hospitals and insurers, Claude can verify whether a treatment is covered by insurance or assist with preparing documentation when claims are rejected. For patients, it can simplify complex lab reports and medical histories into understandable language.
ChatGPT Health, by contrast, offers a dedicated environment for health and wellness queries, where users can optionally connect medical records, fitness trackers or nutrition apps, so responses can be grounded in personal data rather than generic information.
Both offerings are compliant with the US Health Insurance Portability and Accountability Act, enabling hospitals, medical providers, insurers and consumers to handle protected health information securely. Anthropic has also integrated scientific databases into Claude and enhanced its capabilities for biological research.
Beyond OpenAI and Anthropic, startups such as Abridge and Sword Health have attracted multibillion-dollar valuations as investor interest in AI-powered medical tools continues to surge.
Winning the Race
The failures of healthcare products launched by Google and Microsoft in the pre-generative-AI era serve as cautionary lessons for today’s AI leaders, particularly regarding privacy concerns.
In 2008, Google launched Google Health, a personal health record (PHR) service that allowed users to upload, store, manage and share their medical information, such as health conditions, medications and allergies. However, it was shut down in 2012 due to poor adoption. Microsoft’s HealthVault, another PHR platform launched in 2007 that let users store and manage health information from various sources, met a similar fate: it was discontinued in 2019 after years of low engagement.
“Between Anthropic and OpenAI, the more effective tool will be the one that combines strong reasoning capabilities with rigorous safeguards, clinical validation and deep integration into existing healthcare workflows,” Jaspreet Bindra, co-founder of AI&Beyond, told AIM. “Accuracy, explainability and trust matter far more than speed or novelty in this space.”
OpenAI’s push into healthcare comes as it reveals that health and wellness queries are already among ChatGPT’s most common use cases, with over 230 million people worldwide asking health-related questions.
Google’s recent experience highlights the risks of moving too fast. Its AI Overviews feature, launched in May 2024, faced widespread backlash after delivering inaccurate—and in some cases dangerous—health advice. Errors included suggesting users add non-toxic glue to pizza or eat “at least one small rock a day” for minerals. Health experts flagged instances of the medical guidance as “completely incorrect” or “very dangerous”. Google later restricted health-related triggers and refined its systems to avoid satirical or unreliable sources.
“These errors highlight a broader challenge with deploying generative AI at internet scale without sufficient domain-specific checks,” Bindra said. “In healthcare, especially, companies must slow down, strengthen validation layers, and be transparent about uncertainty and source reliability. The next phase of AI adoption won’t be about who launches first, but who earns trust, particularly when human lives are involved.”
Arsh Goyal, an AI and engineering expert, agrees. “The rush among the Silicon Valley giants to make it big in healthcare is also because whoever earns trust in healthcare essentially earns trust everywhere. The race is more about credibility than speed. With regulatory conversations picking up globally, the time seems to be ripe for them to venture into healthcare.”
Can Bharat’s Own Health Bot Help?
In India, IPO-bound Fractal launched Vaidya.ai in 2024, a health assistant now available in beta as the “Vaidya–AI Health Advisor” app on the Google Play Store. Among the early multimodal AI tools in the medical domain, Vaidya.ai has received largely positive feedback from users and the tech community, particularly on LinkedIn and app platforms. Users cite ease of use, security, and the ability to get quick, helpful responses as key strengths.
Fractal has consistently positioned Vaidya.ai as a health companion rather than a diagnostic tool, with a full public release expected soon.
In a country where preventive healthcare often takes a back seat, can AI chatbots meaningfully shift behaviour? Dr Manav Suryavanshi, HOD of urology and section in-charge of uro oncology and robotic surgery at Amrita Hospital, believes they can, but within limits.
“AI tools are outstanding for explaining medical reports in plain language, listing possible causes of symptoms, summarising treatment options, checking drug interactions, preparing you for a doctor’s visit and helping doctors not miss rare possibilities,” he told AIM. Dr Suryavanshi believes that while AI will become a permanent part of medicine, it must never be used as a diagnostic tool.
“For patients, the safest model is: AI for understanding. Doctors for decisions,” he cautioned.
Agreeing with Dr Suryavanshi, Dr Kingshuk Ganguly, an orthopaedic and joint replacement surgeon in Mumbai, underlined how respecting boundaries while using any AI healthcare tool is critical.
“AI is evolving rapidly and can be a useful adjunct to conventional medical care,” he said. “It can quickly give patients an overview of available treatment modalities. However, AI still struggles to understand human emotions and interactions, which is where a good clinician remains indispensable.” He also pointed to AI’s growing role in imaging technologies such as MRI, X-rays and CT scans.
As OpenAI and Anthropic position their tools as trusted allies to healthcare professionals, focused on reducing administrative burden and improving efficiency rather than delivering personalised diagnoses, the obvious question remains: what comes next? Gemini Health, DeepSeek Health or something else entirely?
The post The Billion Dollar Battle to Become Your AI Doctor appeared first on Analytics India Magazine.


