America’s Largest Hospital System Ready to Start Replacing Radiologists With AI, Its CEO Says

Just weeks after the largest nurses' strike in New York City history, the CEO of NYC Health + Hospitals has a bold vision for a future in which AI, not human radiologists, examines and diagnoses X-rays.

At a panel held by Crain’s New York Business, Mitchell Katz, president and CEO of New York’s 11-hospital public benefit corporation, openly signaled his desire to replace highly trained radiology experts with vision-language AI models, Radiology Business reported.

“We could replace a great deal of radiologists with AI at this moment, if we are ready to do the regulatory challenge,” Katz said at the panel.

One example he gave, according to Radiology Business, would affect women’s healthcare in particular: automating breast cancer screening with AI tools. By sidelining radiologists unless an AI system flags a reading as abnormal, Katz declared, hospitals could achieve “major savings.”

Mohammed Suhail, a radiologist at North Coast Imaging in San Diego, told Radiology Business that Katz’s comments are “undeniable proof that confidently uninformed hospital administrators are a danger to patients,” and that such administrators are “easily duped by AI companies that are nowhere near capable of providing patient care.”

“Any attempt to implement AI-only reads would immediately result in patient harm and death, and only someone with zero understanding of radiology would say something so naive,” Suhail continued. “But in some sense, they’re correct: hospitals are happy to cut costs even if it means patient harm, as long as it’s legal.”

Indeed, a growing body of research suggests that AI in the X-ray room is a disaster waiting to happen.

In a yet-to-be-peer-reviewed study, Stanford researchers found that AI chest X-ray tools built on frontier models can ace medical benchmark tests without ever seeing an actual X-ray image. Rather than admit the images are missing, the highest-scoring AI systems engage in what amounts to a cheap parlor trick: constructing elaborate explanations for findings on X-rays they never had access to in the first place.

This goes a step beyond mere AI hallucination into what the researchers call an AI “mirage.” Unlike the generative AI errors we’ve come to expect, a mirage reads as perfectly rational from start to finish. The problem is that these mirages aren’t grounded in any input at all, which means the usual hallucination safeguards aren’t enough to catch them.

“In this epistemic mimicry, the model simulates the entire perceptual process that would have led to the answer,” the Stanford scientists wrote. “This helps explain why reasoning traces, on their own, cannot certify visual reasoning: the trace may be fluent, coherent, and apparently image-based while being anchored to no image at all.”

On top of reinforcing previous research suggesting vision-language AI models are functionally blind, this study has major implications for any hospital turning to AI to trim its radiology unit — not to mention any patient unlucky enough to be on the receiving end of a medical imaging mirage.

More on AI in healthcare: ChatGPT Health Is Staggeringly Bad at Recognizing Life-Threatening Medical Emergencies

The post America’s Largest Hospital System Ready to Start Replacing Radiologists With AI, Its CEO Says appeared first on Futurism.
