Non-human intelligence will soon be a standard part of your medical care – if it isn’t already. Can you trust it?
By Kayt Sukel
THE doctor’s eyes flit from your face to her notes. “How long would you say that’s been going on?” You think back: a few weeks, maybe longer? She marks it down. “Is it worse at certain times of day?” Tough to say – it comes and goes. She asks more questions before prodding you, listening to your heart, shining a light in your eyes. Minutes later, you have a diagnosis and a prescription. Only later do you remember that fall you had last month – should you have mentioned it? Oops.
One in 10 medical diagnoses is wrong, according to the US Institute of Medicine. In primary care, one in 20 patients will get a wrong diagnosis. Such errors contribute to as many as 80,000 unnecessary deaths each year in the US alone.
These are worrying figures. Behind them lies the complex nature of diagnosis, a process that can involve incomplete information from patients, missed hand-offs between care providers, biases that cloud doctors' judgement, overworked staff, overbooked systems and more. It is riddled with opportunities for human error. This is why many want to harness the constant, unflappable power of artificial intelligence to deliver more accurate diagnoses, prompter care and greater efficiency.
AI-driven diagnostic apps are already available. And it’s not just Silicon Valley types swapping clinic visits for diagnosis via smartphone. The UK National Health Service (NHS) is trialling an AI-assisted app to see if it performs better than the existing telephone triage line.