June 20, 2018

Suicide risk prediction is not a self-driving car

‘Nice warning light,’ Dr. Greg Simon says, ‘but my hands stay on the wheel’: balancing predictive models with clinical judgment.

By Greg Simon, MD, MPH, a Washington Permanente Medical Group psychiatrist and Kaiser Permanente Washington Health Research Institute senior investigator.

To intervene effectively and prevent as much disease and death as possible, we must first predict people’s risk more precisely. We can’t predict the future perfectly, so we need predictive analytics: using information from electronic health records to spot trends and predict likely outcomes.

But how can we clinicians best use the burgeoning information we get from predictive analytics to inform our own judgment? It’s a big question, which I recently had a chance to consider in microcosm, thanks to a new paper… and a new car.

My colleagues and I in the Mental Health Research Network published an American Journal of Psychiatry paper about predicting risk of suicidal behavior following outpatient visits. We created and validated two new models for predicting suicide risk: one for mental health visits and one for primary care visits with a mental health diagnosis. Both models combine electronic health record data with responses to self-report questionnaires, and both substantially outperform existing suicide risk prediction tools.
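For readers curious about the shape of such a model, here is a minimal sketch of the general recipe, not the published models: combine coded EHR features with a questionnaire response and fit a penalized classifier to a binary outcome. The synthetic data, feature names, and choice of an L1-penalized logistic regression below are my own illustrative assumptions.

```python
# Illustrative sketch only -- not the published AJP models.
# Assumptions (mine): a binary outcome (suicidal behavior within 90 days of a
# visit), a few EHR-derived features, and one questionnaire item standing in
# for the self-report data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000

# Synthetic stand-ins for EHR features and a self-report response.
prior_attempt = rng.binomial(1, 0.05, n)     # any prior recorded attempt
inpatient_days = rng.poisson(0.3, n)         # recent psychiatric inpatient days
questionnaire_item = rng.integers(0, 4, n)   # 0-3 severity rating

# Synthetic outcome generated from the same features (demonstration only).
logit = -4.5 + 1.2 * prior_attempt + 0.4 * inpatient_days + 0.8 * questionnaire_item
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = np.column_stack([prior_attempt, inpatient_days, questionnaire_item])
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# An L1-penalized logistic regression: one common choice for sparse clinical models.
model = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
model.fit(X_train, y_train)

risk_scores = model.predict_proba(X_test)[:, 1]  # predicted risk per visit
print(f"AUC on held-out visits: {roc_auc_score(y_test, risk_scores):.2f}")
```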

Our paper prompted many questions from clinicians and health system leaders about the practical utility of risk predictions: 

  • “Are machine learning algorithms accurate enough to replace clinicians’ judgment?” our clinical partners asked.
  • “No,” I answered, “but they are accurate enough to direct clinicians’ attention.”

My family’s 12-year-old Subaru has been passed down to our son, so we bought a 2018 model. The new car has a “blind spot” warning system built in. When it senses another vehicle in my likely blind spot, a warning light appears on the outside rearview mirror. If I start to merge in that direction anyway, the light starts blinking and a warning bell chimes.

I like this feature a lot. Even if I already know about the other car behind and to my right, the warning light isn’t too annoying or distracting. The warning light may go on when no car is in my blind spot: a false positive. Or it may fail to light up when a car is in my blind spot: a false negative. But unlike a human driver, the warning system doesn’t fall asleep or get distracted by something up ahead. It keeps its “eye” on one thing, all the time.

I hope that machine learning-based suicide risk prediction scores can work the same way. A high-risk score would alert my clinical colleagues and me to a potentially unsafe situation that we might not have noticed. If we’re already aware of the risk, the extra notice doesn’t hurt. If we weren’t aware, we’ll be prompted to pay attention and ask some additional questions. False positives and negatives are inevitable. But risk prediction scores don’t get distracted or forget relevant facts.
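To make the analogy concrete, here is one way a calculated score could be mapped to graded, advisory alerts, like the steady light versus the blinking light and chime. This is a sketch under assumed thresholds and messages, not a description of any deployed system.

```python
# Graded, advisory alerts from a risk score. The thresholds and messages are
# invented for illustration; they are not from any deployed system.
def alert_level(risk_score: float) -> str:
    """Map a predicted probability to an advisory message, never an automated action."""
    if risk_score >= 0.10:   # the blinking light and chime
        return "flag the chart and prompt a detailed suicide risk assessment"
    if risk_score >= 0.03:   # the steady warning light
        return "show an unobtrusive indicator to the treating clinician"
    return "no alert"

for score in (0.005, 0.04, 0.15):
    print(f"score={score:.3f}: {alert_level(score)}")
```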

Like a driverless car?

We are clear that suicide risk prediction models are not the mental health equivalent of a driverless car. We anticipate that alerts or warnings based on calculated risk scores would always be delivered to actual human beings: receptionists who would notice when a high-risk patient cancels an appointment, nurses who would call when a high-risk patient fails to attend a scheduled visit, or treating clinicians who would be alerted before a visit that a more detailed clinical assessment is probably indicated.
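A rough sketch of that human-in-the-loop routing, using the roles above; the event names, threshold, and data structure are hypothetical assumptions of mine.

```python
# Human-in-the-loop routing. The roles come from the examples above; the event
# names, threshold, and data structure are my own assumptions.
from dataclasses import dataclass

@dataclass
class Alert:
    patient_id: str
    event: str         # "appointment_cancelled", "visit_missed", or "visit_upcoming"
    risk_score: float

# Every route ends at a person; nothing here triggers a clinical action by itself.
ROUTES = {
    "appointment_cancelled": "receptionist: notice and offer to reschedule",
    "visit_missed": "nurse: place an outreach call",
    "visit_upcoming": "treating clinician: consider a more detailed assessment",
}

def route(alert: Alert, threshold: float = 0.05) -> str:
    if alert.risk_score < threshold:
        return "no notification"
    return ROUTES.get(alert.event, "care team: review")

print(route(Alert("pt-001", "visit_missed", 0.12)))
```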

A calculated risk score alone would not be enough to prompt any “automated” clinical action — especially a clinical action that might be intrusive or coercive. I hope and expect that our risk prediction methods will improve. But I doubt that they will ever replace a human driver.

My new car’s blind spot warning system does not have control of the steering wheel, the accelerator, or the brake pedal. If the warning is a false positive, I can choose to ignore or override it. But that blinking light and warning chime will get my attention. If that alarm goes off, I’ll take a close look behind me before deciding that it’s actually safe to change lanes.