In the 1990s, I worked for a company named APACHE Medical Systems. (It is now named APACHE Outcomes.) APACHE was an acronym for Acute Physiology and Chronic Health Evaluation. This tool was designed to measure the severity of disease for adult patients admitted to intensive care units. In effect, we developed systems designed to predict the clinical outcomes for ICU patients. The software was a decision support package that gave health care providers and payers risk-adjusted predictive analytics on mortality, length of stay, the amount of resources needed to sustain life, and more. The premise was that, given predictions based on a patient's ICU health status, families, physicians and insurers would have an idea of whether continuing treatment of the ICU patient would result in a viable-life outcome.
The unique thing about this system was that it was like a prism: what you thought of it depended on the angle from which you examined it. From the perspective of a developer, this was a great piece of software. You input patient demographic information, severity of injury and physiological measurements, and you get a pretty reliable statistical prediction of what the clinical outcome would be. From the perspective of a shock trauma nurse or surgeon, the software helped you design the best course of treatment with a minimum amount of information, in the shortest period of time. From the perspective of the insurers, it allowed one to know whether the costs of sustaining the patient would result in the benefit of recovery. Without question, this was a great piece of critical care software; this was applied analytics at its best.
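For readers curious about the mechanics, the shape of such a calculation can be sketched in a few lines. APACHE-style systems combine an acute physiology score with age and chronic health points, then feed the total into a logistic regression that outputs a predicted probability of in-hospital mortality. The coefficients below are hypothetical, chosen purely for illustration; the real published models are far more detailed.

```python
import math

def mortality_risk(aps: float, age_points: float, chronic_points: float) -> float:
    """Illustrative APACHE-style risk estimate (hypothetical coefficients).

    A combined severity score feeds a logistic model that returns a
    predicted probability of in-hospital mortality, in the range (0, 1).
    """
    score = aps + age_points + chronic_points  # combined severity score
    # Hypothetical logistic model: logit(p) = intercept + slope * score
    logit = -3.5 + 0.15 * score
    return 1.0 / (1.0 + math.exp(-logit))  # sigmoid maps logit to a probability

# A sicker patient (higher score) yields a higher predicted risk.
low_risk = mortality_risk(aps=10, age_points=3, chronic_points=0)
high_risk = mortality_risk(aps=30, age_points=6, chronic_points=5)
```

The point of the sketch is only that the output is a probability, not a verdict; everything downstream of that number is human interpretation.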
But let's think about this from another perspective -- suppose it is your loved one who is the subject of critical care analysis. There is an algorithm in the background informing opinions on whether efforts should be exerted to save your loved one's life. Let's consider the following scenario:
Your 84-year-old diabetic mother was driving when her glucose level dropped. She passed out and drove into a tree. You arrive at the hospital, contact her primary physician and give the attending physician her background information. Your family history includes late-onset diabetes in prior generations, with an average life expectancy of 86 years. Her blood test shows a low white blood cell count, which might indicate a viral infection or acute leukemia. The nurse has entered all of the information into the critical care analytics system so that the attending physician can recommend recovery, palliative or hospice care. The question in your mind is "Will they do enough?"
The question in the mind of the health care providers is "Are we doing what is right?"
The questions in the mind of the insurer are "Are they doing more than what is necessary? Will the patient have an unreasonable length of stay? Do we have to reimburse every line item used in this episode of care?"
When this is your personal scenario, the 'cool factor' of the software is overshadowed by the fact that a human life depends on the precision of the calculations and the accurate interpretation of the results. The public popularity of the software is dwarfed by the fact that the work for which someone received a 'Good Job Award' may lead to a decision to end the life of your mom. Your impartial and dispassionate use of analytics will be forgotten when you are told that choices about your mom's care will be based on a regression model that you neither designed nor tested. In that moment, your profession just got real.
So in that light:
- Are you willing to trust software developed by an analytics peer unknown to you?
- Does having a personal stake in a patient's outcome make it easier or harder for you to be an advocate for critical care analytics?
- Should software be used as a decision support tool in matters of life and death?
Please share your thoughts.