Smart View: Complete news, in brief
Anup Kumar Singh, New Delhi. Artificial Intelligence (AI) is being rapidly adopted in healthcare as a revolutionary technology of the future. AI is touted as a major solution for speeding up diagnostics, bridging the shortage of specialists and extending health services to remote areas.
But at the same time, the uncontrolled and irresponsible use of AI and its conclusions is posing serious threats to patient safety. These dangers have been highlighted by healthcare AI researcher Dr. Subhrankar Dutta, a former radiologist at Delhi AIIMS, in his critique of a viral AI 'diagnosis'.
Double standards are the biggest danger of medical AI
Questions are being raised that if a human radiologist had produced such a report, it would have invited Medical Council action, legal investigation and professional accountability. But when AI makes the same mistakes, they are celebrated as 'breakthroughs'. According to Dr. Dutta, this double standard is the biggest danger of medical AI.
The case in question is a viral social media post in which Google's open-source medical AI model MedGemma, looking at a brain MRI, confirmed a serious disease like cancer and advised an immediate neurosurgery referral. The post is being touted on internet media as a 'game-changer' and a 'revolution in remote clinic triage'.
AI identifies wrong lobe of brain
Dr. Subhrankar Dutta, former president of FAIMA, an organization of doctors, researches the use of Artificial Intelligence in radiology, especially AI-driven imaging and AI benchmarking, and calls this claim wrong. He alleges that what is being presented as a game-changer and a revolution in remote clinic triage in fact contains several fundamental and dangerous mistakes.
He says that when a trained radiologist examined the same MRI, the findings were completely different. The AI identified the wrong lobe of the brain and drew conclusions such as emergency, malignancy (cancer) and surgery from just one MRI slice, whereas in actual medical practice an opinion is formed only by considering the complete MRI sequences, the patient's clinical history and other tests together.
Citing these examples, Dr. Subhrankar Dutta said that this work can be done only by a qualified doctor, not by AI. He also noted that MedGemma, the model that went viral on social media, advised getting a contrast scan done again even though the scan was already a contrast MRI, which suggests that the model was unable to understand the image in context and was drawing wrong conclusions.
Serious concern regarding India
According to Dr. Subhrankar Dutta, radiologist and healthcare AI researcher, this concern becomes even more serious in the Indian context, because AI models trained on foreign data do not properly account for the diversity, disease patterns and resource limitations of Indian patients. The fear is that if such open-source medical AI is used without regulation amid a limited pool of specialists and a rapidly growing digital health system, patients may suffer direct harm.
He said that this debate is not AI versus doctors, but uncontrolled technology versus patient safety, and advised that until medical AI is subjected to rigorous clinical validation, testing in the Indian context, clear legal accountability and a human-in-the-loop system, strict monitoring of it is what matters most.