Today's AI algorithms are powerful tools that recognize patterns, make predictions, and even reach decisions. But they are not infallible, all-knowing oracles. Nor are they on the verge of matching human intelligence, despite what some evangelists of so-called artificial general intelligence suggest. A handful of recent studies illustrate both the possibilities and the pitfalls, showing how medical AI tools can misdiagnose patients and how doctors' own skills can weaken when they lean too heavily on AI.
A team at Duke University (including one of us) tested an FDA-cleared AI tool meant to detect swelling and microbleeds in the brain MRIs of patients with Alzheimer's disease. The tool improved expert radiologists' ability to find these subtle spots in an MRI, but it also raised false alarms, often mistaking harmless blurs for something dangerous. We concluded that the tool is helpful, but radiologists should read MRIs carefully first and then use the tool as a second opinion, not the other way around.
All of this underscores that AI in medicine, as in every field, works best when it augments the work of humans. The future of medicine isn’t about replacing health care providers with algorithms—it’s about designing tools that sharpen human judgment and amplify what we can accomplish. Doctors and other providers must be able to gauge when AI is wrong, and must maintain the ability to work without AI tools if necessary. The way to make this happen is to build medical AI tools responsibly.
Read more | TIME