Dr. AI is ready to see you…

People say that AI will eventually replace all our jobs, with the exception of doctors and a few others. While this is obvious hyperbole, it has some merit: AI-powered robots cannot perform surgery alone, nor can they deliver a life-threatening diagnosis with the empathy of a doctor. Yet as AI is experimentally incorporated into more and more fields, I wonder where the line between algorithmic and human medicine begins to blur. Should doctors use AI tools to diagnose patients? And if AI makes a mistake, who takes the blame – the doctor or the technology developer?

Without a doubt, AI shows remarkable promise. Studies in the New England Journal of Medicine report that machine-learning models can detect breast cancer more accurately than traditional screening methods (NEJM, 2020). The World Health Organization similarly notes that AI could improve accuracy, efficiency, and access within global health systems (WHO, 2021). These aren’t small claims – they hint at a future where diagnostic error, one of medicine’s persistent failings, may finally shrink.

Still, like all powerful tools, AI reflects the hands that shape it. Algorithms are trained on past medical data collected by humans, which means they inherit human flaws. In 2019, Science published a landmark study showing that a widely used healthcare algorithm systematically underestimated the needs of Black patients: the model used past healthcare spending as a proxy for health need, and because historical inequities in access meant less was spent on Black patients, the algorithm scored them as healthier than equally sick white patients. AI built on biased data does not correct inequity; it cements it behind a facade of technological precision.
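That failure mode is worth pausing on, because every step of the math can be "correct" while the outcome is not. The toy simulation below (a Python sketch with entirely invented numbers and a deliberately simplified model, not the study's actual data or code) shows how choosing spending as the label quietly builds the access gap into the risk score.

```python
# Toy sketch of proxy bias (hypothetical numbers, not the study's data).
# Two groups have identical true health need, but one historically
# received ~30% less care spending.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

need = rng.normal(50, 10, n)             # true health need, same for both groups
group = rng.integers(0, 2, n)            # 0 = group A, 1 = group B
access = np.where(group == 1, 0.7, 1.0)  # group B's historical access gap

# Past and future spending both reflect need *and* unequal access.
past_cost = need * access + rng.normal(0, 2, n)
future_cost = need * access + rng.normal(0, 2, n)

# "Train" the risk model: predict future cost from past cost (least squares).
slope, intercept = np.polyfit(past_cost, future_cost, 1)
risk_score = slope * past_cost + intercept

# Flag the top 10% of risk scores for extra care resources.
flagged = risk_score >= np.quantile(risk_score, 0.9)
for g, name in [(0, "group A"), (1, "group B")]:
    print(f"{name}: flagged for extra care = {flagged[group == g].mean():.1%}")

# The model is accurate at its stated task (predicting cost), yet group B
# is flagged far less often despite identical need: the bias lives in the
# label, not in the arithmetic.
```

Nothing in that code is numerically wrong; the harm enters entirely through the choice of what the model is asked to predict.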

So physicians find themselves pulled forward by innovation but bound by ethics and their oath to do no harm. Some argue that if AI improves outcomes, we should use it boldly. Others fear it may slowly erode the physician's core role: listening, interpreting, and understanding the human behind the symptoms.

I believe AI should remain a limited tool for now. The AMA emphasizes that physicians must “retain ultimate responsibility” for decisions made with AI tools (AMA, 2022). Patients deserve transparency: when AI is used, how their data feeds it, and what its limitations are. Most importantly, empathy and critical thinking must remain at the center of care. Algorithms can detect tumors, but they cannot comfort fear. As we continue to improve AI models by training them on less biased data, we should approach a point where physicians and technology can rely on each other for the best results.
