Computers Beat Oncologists In Predicting Death From Cancer: Now What?

A new study raises some difficult questions about the doctor-patient relationship.

F. Perry Wilson, MD MSCE
Jun 2, 2022 · 5 min read

--

This is Cassandra.

You remember Cassandra. Daughter of King Priam of Troy, she had the gift of prophecy — everything she foresaw would come true — but was cursed that no one would believe her. She warned her brother Paris that the abduction of Helen would bring about the destruction of Troy. But did he listen? Well, the rest is history.

We live in an age of digital Cassandras. With advancements in artificial intelligence and machine learning, our ability to predict outcomes in our patients is getting scarily good. Will we listen to those predictions and change our ways? We may want to, because, according to a new study from JAMA Network Open — the machines are now better at this than we are.

Metastatic cancer is a devastating diagnosis that comes with a lot of difficult decisions. Should we continue to the next line of chemotherapy, go for a clinical trial, or aim for palliation? Informing all of those decisions is a simple prediction, based on a question every doctor has heard before: “How much time do I have left?”

This is not a conversation any of us like to have with patients, but it is an important one, and one where honesty and accuracy are key. By and large, though, doctors are overly optimistic when they try to predict the life expectancy of patients with advanced cancer and other serious diseases. I don’t think this is due to sugarcoating so much as wishful thinking, but the fact remains: when we discuss these issues with patients, we may not be accurate.

In the face of that inaccuracy, many of us default to agnosticism — the old “I don’t have a crystal ball” trope. My impression is that patients don’t like this very much.

But what if you could have a crystal ball? Or, perhaps a silicon one. This is exactly what this paper promises.

Briefly, researchers led by Dr. Finly Zachariah at City of Hope National Medical Center used data from nearly 30,000 patients with metastatic cancer to build a machine-learning model that took in all sorts of clinical variables, from lab values to ICD-9 codes, to predict 3-month mortality.
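For readers who like to see the general shape of such a model, here is a minimal, purely illustrative sketch. It uses synthetic stand-in data and a generic gradient-boosting classifier, not the study’s actual features, cohort, or architecture:

```python
# Illustrative sketch only: the study's actual model and training
# procedure are not reproduced here. This shows the general shape of a
# 3-month mortality classifier built on tabular clinical features,
# using synthetic stand-in data.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Stand-in for ~30,000 patients with lab values, coded diagnoses, etc.
X, y = make_classification(n_samples=30_000, n_features=50,
                           weights=[0.88, 0.12], random_state=0)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=0)

model = GradientBoostingClassifier(random_state=0)
model.fit(X_train, y_train)

# The model outputs a continuous probability of death within 3 months,
# not a yes/no "surprise" answer.
risk = model.predict_proba(X_test)[:, 1]
print("AUC:", round(roc_auc_score(y_test, risk), 3))
```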

They then built this model into the electronic health record, allowing it to make predictions in real time, at the same moment that clinical oncologists answered a simple question: “Would you be surprised if this patient were to die within the next 3 months?”

This question yields a simple binary answer: yes, I would be surprised, or no, I wouldn’t. The oncologists said “not surprised” about 13% of the time, and the patients in that “not surprised” group had substantially higher mortality than those in the “surprised” group.

But note that, by 3 months, more than 10% of the people in the “surprised” group had died. And since the “surprised” group is much bigger than the “not surprised” group, it turns out that 70% of all the deaths in this patient population in the first 3 months occurred in the “surprised” group.

Put bluntly, an oncologist saying they would not be surprised if a patient died within 3 months is a poor prognostic sign. But an oncologist saying they would be surprised is not terribly reassuring. Oncologists are optimistic. They correctly identified only about 30% of the patients who died within that 3-month period.
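If that 30% figure seems hard to square with only 13% of patients being flagged, a quick back-of-the-envelope calculation shows how it falls out. The numbers below are illustrative, chosen only to be consistent with the percentages quoted above; they are not the study’s actual counts:

```python
# Purely illustrative numbers, consistent with the figures quoted above
# but not taken from the study itself.
n_patients = 10_000
not_surprised = int(0.13 * n_patients)       # 1,300 flagged by oncologists
surprised = n_patients - not_surprised       # 8,700 not flagged

deaths_in_surprised = int(0.10 * surprised)  # ~10% of the "surprised" group died
# If those deaths make up ~70% of all deaths, the total is:
total_deaths = round(deaths_in_surprised / 0.70)
deaths_in_not_surprised = total_deaths - deaths_in_surprised

sensitivity = deaths_in_not_surprised / total_deaths
print(f"Deaths caught by the surprise question: {sensitivity:.0%}")  # ~30%
```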

But what about the cold, unfeeling machine? Well, one wrinkle is that the machine doesn’t give a binary prediction — surprised or not — it gives a number — think of it like a percent chance of death — from zero to 100.

To level the playing field, the researchers picked a cut-point for the model’s predictions that would also identify only 30% of the people who would die within a 3-month period. What’s interesting is that, at that cut-point, the people flagged as high risk of dying are much more likely to die. In other words, there are far fewer false positives.

Putting it together: if an oncologist says he or she would not be surprised if a patient died within 3 months, it’s a poor prognostic sign. If the computer says so, it is an exceptionally poor prognostic sign.

The computer’s cut-point was set artificially so that it, too, would miss 70% of the deaths, just like the oncologists, but that constraint isn’t necessary. Depending on where you set the cut-point, you could capture more of the deaths at the cost of more false positives, or fewer of the deaths with fewer false positives, a tradeoff captured by the study’s precision-recall curve.
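To make that tradeoff concrete, here is a small sketch of how one might trace a precision-recall curve and pick a cut-point at a target recall. The risk scores are synthetic stand-ins, not the study’s data, and the 30% target simply mirrors the “match the oncologists” comparison described above:

```python
# Minimal sketch of the cut-point tradeoff, using synthetic predictions.
# The study's actual precision-recall curve is not reproduced here.
import numpy as np
from sklearn.metrics import precision_recall_curve

rng = np.random.default_rng(0)
y_true = rng.binomial(1, 0.12, size=5_000)             # ~12% 3-month mortality
# Fake risk scores: higher on average for patients who die
risk = np.clip(rng.normal(0.2 + 0.4 * y_true, 0.2), 0, 1)

precision, recall, thresholds = precision_recall_curve(y_true, risk)

# Pick the threshold whose recall is closest to 30%: capture the same
# share of deaths the oncologists did, and compare false positives.
idx = np.argmin(np.abs(recall[:-1] - 0.30))
print(f"threshold={thresholds[idx]:.2f}  "
      f"recall={recall[idx]:.0%}  precision={precision[idx]:.0%}")
```

Slide the threshold down and recall rises while precision falls; slide it up and the reverse happens. That is the whole tradeoff in one knob.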

And of course, with the computer model, you don’t have to set a cut-point at all. You could, in theory, tell a patient that they have a 65% chance of dying within three months, although I think the stark precision of a number like that might be particularly distressing.

In any case, the authors have shown fairly convincingly that the model outperforms physicians at this prediction. And at the pace artificial intelligence is improving, the models will only get better. So… now what? What do we do with Cassandra’s prediction?

Multiple studies have shown that early referral to hospice actually extends the life of patients with metastatic cancer. Should models like this be used to target hospice services? Or perhaps to flag patients who might do better with more therapy? Will physicians allow these models to override their own judgment? These questions represent the next frontier of AI in medicine. The predictions, at this point, are good. It’s time to figure out whether we’ll listen to them.

A version of this commentary first appeared on Medscape.com.

--

F. Perry Wilson, MD MSCE

Medicine, science, statistics. Associate Professor of Medicine and Public Health at Yale. New book “How Medicine Works and When it Doesn’t” available now.