Here’s something most people don’t realize: right now, an AI algorithm might be reading your medical scans before your doctor even sees them.
Over a thousand AI health tools have been authorized by the FDA, and more than two out of every three doctors now use AI in some capacity. We're at a fascinating inflection point, a moment when machines are changing medicine in ways both remarkable and concerning. But here's the question nobody's asking loudly enough: What happens when we let algorithms shape life-or-death decisions?
The Promise: AI as a Second Set of Eyes
Consider this: AI can lighten crushing doctor workloads by drafting patient notes, spot abnormalities in scans that human eyes might miss, and speed drug discovery through protein structure prediction so advanced it earned a Nobel Prize. It can match patients to clinical trials, monitor health data in real time, and even schedule appointments while answering basic medical questions.
That’s the promise. And it’s real.
The Problem: When Machines Make Mistakes
But now consider this: Researchers at Duke University tested an FDA-cleared AI tool designed to detect brain swelling and microbleeds in Alzheimer’s patients. The tool helped radiologists find subtle spots they might have missed—but it also raised frequent false alarms, often mistaking harmless image blurs for dangerous abnormalities.
The takeaway? The AI was helpful, but only when used as a second opinion after a human expert’s careful review—not as a shortcut.
And that’s just one example. Few hospitals independently test AI tools before deploying them. Many assume FDA clearance means the tool will work flawlessly in their specific setting with their specific patient population. That’s not how it works.
Former FDA Commissioner Robert Califf has urged continuous post-market monitoring of medical AI to ensure reliability and safety in real-world use, because algorithms don't perform the same way across different populations and contexts.
The Deskilling Danger
Here’s where it gets really interesting—and concerning.
In Europe, gastroenterologists were given an AI system to help spot polyps during colonoscopies. With the AI, they initially found more polyps—a good thing. But when they returned to performing colonoscopies without AI assistance, they detected fewer pre-cancerous polyps than before they’d ever used the system.
The researchers believe doctors had become so reliant on AI that, in its absence, they became less focused and less able to spot polyps on their own. This phenomenon—called “deskilling”—is the medical equivalent of losing your sense of direction because you always follow GPS.
Another study showed that overreliance on computerized aids can narrow our gaze, making our eyes less likely to scan the peripheral areas of an image. We see what the machine tells us to see, and we miss what it doesn't.
The Cognitive Off-Loading Effect
A researcher recently surveyed more than 600 people across diverse backgrounds and found that the more someone used AI tools, the weaker their critical-thinking skills tended to be. This “cognitive off-loading” means we're outsourcing our thinking to machines, and our brains are getting lazier as a result.
If used uncritically, AI doesn’t just propagate wrong information—it erodes our very ability to fact-check it.
A Better Path: Intelligent Choice Architecture
So what’s the solution?
Instead of replacing human judgment, AI should be designed to augment it. This approach is called Intelligent Choice Architecture (ICA). With ICA, AI systems nudge doctors to think more carefully, not less.
For example, instead of declaring “here is a bleed,” an ICA tool might highlight an area and prompt: “check this region carefully.” It’s a subtle but crucial difference.
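To make the contrast concrete, here is a minimal sketch in Python of the two framings. Everything in it (the Region class, the wording, the confidence value) is a hypothetical illustration, not any real vendor's interface.

```python
# A toy illustration of the ICA idea above: one model finding, two framings.
# "Region" and its fields are hypothetical, not a real radiology API.
from dataclasses import dataclass

@dataclass
class Region:
    location: str      # e.g., "left parietal lobe"
    confidence: float  # model score between 0 and 1

def declarative_output(region: Region) -> str:
    # The shortcut style: the tool asserts a diagnosis outright.
    return f"Bleed detected at {region.location}."

def ica_output(region: Region) -> str:
    # The nudge style: highlight and prompt, leaving judgment to the reader.
    return (
        f"Possible abnormality near {region.location} "
        f"(model confidence {region.confidence:.0%}). "
        "Check this region carefully before signing off."
    )

finding = Region(location="left parietal lobe", confidence=0.72)
print(declarative_output(finding))  # invites a rubber stamp
print(ica_output(finding))          # invites a second look
```

Same finding, same confidence; only the framing changes, and with it the clinician's incentive to look again.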
Apollo Hospitals in India recently began using an ICA tool to help prevent heart attacks. Rather than providing a single risk score, the new system offers a personalized breakdown of what that score means for each patient and what contributed to it—so patients know exactly which risk factors to address.
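As a rough sketch of that difference (with invented factors and numbers, not Apollo's actual model or output), compare a bare score with a per-factor breakdown:

```python
# Hypothetical per-factor contributions to a cardiac risk score. The factors
# and numbers are invented for illustration; this is not Apollo's AICVD output.
contributions = {
    "smoking": 0.12,
    "elevated blood pressure": 0.08,
    "LDL cholesterol": 0.06,
    "family history": 0.04,
}
total_risk = sum(contributions.values())

# The single-number version: a verdict, but not actionable.
print(f"Estimated 10-year cardiac risk: {total_risk:.0%}")

# The breakdown version: the same score, unpacked into targets a patient
# can act on, largest contributor first.
for factor, share in sorted(contributions.items(), key=lambda kv: -kv[1]):
    print(f"  {factor}: adds {share:.0%} to your risk")
```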
It’s guidance without replacement. Support without substitution.
The Human Element
The stethoscope amplified the human ear but didn’t replace the need for clinical examination. Blood tests provided new diagnostic information but didn’t eliminate the need for a thorough medical history. AI should follow the same principle.
We must ask: Does this AI tool make doctors more thoughtful or less? Does it encourage a second look or invite a rubber stamp?
If we commit to designing systems that sharpen rather than replace human abilities, we can combine the extraordinary power of AI with the critical thinking, compassion, and real-world judgment that only humans bring to medicine.
The Bottom Line
AI isn’t on the verge of matching human intelligence, despite what some evangelists claim. And it shouldn’t try to. The future of medicine isn’t about choosing between doctors and algorithms—it’s about humans and AI working together, each playing to their strengths.
Because at the end of the day, medicine isn’t just about diagnosing diseases. It’s about understanding people. And that’s something no algorithm can replace.
Sources:
- TIME Magazine: “AI Is Revolutionizing Health Care. But It Can’t Replace Your Doctor” by Murali Doraiswamy and Marc Benioff – https://time.com/7315960/ai-healthcare-murali-doraiswamy-marc-benioff-essay/
- American Medical Association: “2 in 3 physicians are using health AI” Survey Report – https://www.ama-assn.org/practice-management/digital-health/2-3-physicians-are-using-health-ai-78-2023
- Duke University/AJNR Study: “Testing FDA-cleared AI tool for brain MRI analysis in Alzheimer’s disease” – https://www.ajnr.org/content/early/2025/07/30/ajnr.A8946.long
- The Lancet: European colonoscopy study on AI-assisted polyp detection and deskilling effects
- NIH/PMC: “Overreliance on computerized aids and peripheral vision scanning” – https://pmc.ncbi.nlm.nih.gov/articles/PMC9500006/
- Apollo Hospitals: “AI-CVD cardiovascular risk assessment case study” – https://www.apollohealthaxis.com/case-studies/apollos-aicvd-redefining-cardiovascular-risk-assessment-globally-beyond-conventions/