The Death Predictor: A Helpful New Tool or an Ethical Morass?

A senior in hospice care.

(© bilderstoeckchen/Fotolia)


Whenever Eric Karl Oermann has to tell a patient about a terrible prognosis, their first question is always: "How long do I have?" Oermann would like to offer a precise answer, to provide some certainty and help guide treatment. But although he's one of the country's foremost experts in medical artificial intelligence, Oermann is still dependent on a computer algorithm that's often wrong.

Doctors are notoriously terrible at guessing how long their patients will live.

Artificial intelligence, in the form now known as deep learning or neural networks, has radically transformed language and image processing. It has allowed computers to play chess better than the world's grandmasters and outwit the best Jeopardy players. But it still can't tell a doctor precisely how long a patient has left – or how to help that person live longer.

Someday, researchers predict, computers will be able to watch a video of a patient to determine their health status. Doctors will no longer have to spend hours inputting data into medical records. And computers will do a better job than specialists at identifying tiny tumors, impending crises, and, yes, figuring out how long the patient has to live. Oermann, a neurosurgeon at Mount Sinai, says all that technology will allow doctors to spend more time doing what they do best: talking with their patients. "I want to see more deep learning and computers in a clinical setting," he says, "so there can be more human interaction." But those days are still at least three to five years off, Oermann and other researchers say.

Doctors are notoriously terrible at guessing how long their patients will live, says Nigam Shah, an associate professor at Stanford University and assistant director of the school's Center for Biomedical Informatics Research. Doctors don't want to believe that their patient – whom they've come to like – will die. "Doctors overestimate survival many-fold," Shah says. "How do you go into work, in say, oncology, and not be delusionally optimistic? You have to be."

But patients near the end of life will get better treatment – and even live longer – if they are overseen by hospice or palliative care, research shows. So, instead of relying on biased human judgment to identify patients nearing the end of life, Shah and his colleagues showed that they could use a deep learning algorithm, trained on medical records, to flag incoming patients with a life expectancy of three months to a year. They use that flag to indicate who might need palliative care. Then, the palliative care team can reach out to treating physicians proactively, instead of relying on their referrals or taking the time to read extensive medical charts.

But, although the system works well, Shah isn't yet sure if such indicators actually get the appropriate patients into palliative care. He's recently partnered with a palliative care doctor to run a gold-standard clinical trial to test whether patients who are flagged by this algorithm are indeed a better match for palliative care.

"What is effective from a health system perspective might not be effective from a treating physician's perspective and might not be effective from the patient's perspective," Shah notes. "I don't have a good way to guess everybody's reaction without actually studying it." Whether palliative care is appropriate, for instance, depends on more than just the patient's health status. "If the patient's not ready, the family's not ready and the doctor's not ready, then you're just banging your head against the wall," Shah says. "Given limited capacity, it's a waste of resources" to put that person in palliative care.

The algorithm isn't perfect, but "on balance, it leads to better decisions more often."

Alexander Smith and Sei Lee, both palliative care doctors, work together at the University of California, San Francisco, to develop predictions for patients who come to the hospital with a complicated prognosis or a history of decline. Their algorithm, they say, helps decide if this patient's problems – which might include diabetes, heart disease, a slow-growing cancer, and memory issues – make them eligible for hospice. The algorithm isn't perfect, they both agree, but "on balance, it leads to better decisions more often," Smith says.

Bethany Percha, an assistant professor at Mount Sinai, says that an algorithm may tell doctors that their patient is trending downward, but it doesn't do anything to change that trajectory. "Even if you can predict something, what can you do about it?" Algorithms may be able to offer treatment suggestions – but not what specific actions will alter a patient's future, says Percha, also the chief technology officer of Precise Health Enterprise, a product development group within Mount Sinai. And the algorithms remain challenging to develop. Electronic medical records may be great at her hospital, but if the patient dies at a different one, her system won't know. If she wants to be certain a patient has died, she has to merge Social Security death records with her system's medical records – a time-consuming and cumbersome process.

An algorithm that learns from biased data will be biased, Shah says. Patients who are poor or African American historically have had worse health outcomes. If researchers train an algorithm on data that includes those biases, they get baked into the algorithms, which can then lead to a self-fulfilling prophecy. Smith and Lee say they've taken race out of their algorithms to avoid this bias.

Age is even trickier. There's no question that someone's risk of illness and death goes up with age. But an 85-year-old who breaks a hip running a marathon should probably be treated very differently than an 85-year-old who breaks a hip trying to get out of a chair in a dementia care unit. That's why the doctor can never be taken out of the equation, Shah says. Human judgment will always be required in medical care and an algorithm should never be followed blindly, he says.

Experts say that the flaws in artificial intelligence algorithms shouldn't prevent people from using them – carefully.

Researchers are also concerned that their algorithms will be used to ration care, or that insurance companies will use their data to justify a rate increase. If an algorithm predicts a patient is going to end up back in the hospital soon, "who's benefiting from knowing a patient is going to be readmitted? Probably the insurance company," Percha says.

Still, Percha and others say, the flaws in artificial intelligence algorithms shouldn't prevent people from using them – carefully. "These are new and exciting tools that have a lot of potential uses. We need to be conscious about how to use them going forward, but it doesn't mean we shouldn't go down this road," she says. "I think the potential benefits outweigh the risks, especially because we've barely scratched the surface of what big data can do right now."

Karen Weintraub
Karen Weintraub, an independent health and science journalist, writes regularly for The New York Times, The Washington Post, Scientific American and other news outlets. She also teaches journalism at Boston University, MIT and the Harvard Extension School, and she's writing a book about the history of Cambridge, MA, where she lives with her husband and two daughters.