Automating Bad Decisions and the Ladder of Inference


There’s more than one way to make a poor decision. Bad data, inappropriate assumptions and flawed logic are just three of the missteps you can take on your climb up the Ladder of Inference, a concept first developed by Chris Argyris, professor of business at Harvard, in 1974, and later popularized by Peter Senge in his 1990 book, “The Fifth Discipline”. If we’re not mindful of these mental pitfalls, we’re likely to use our automated business processes to simply make bad decisions faster.

The Ladder of Inference, an oldie-but-goody, is likely familiar to you, although you may not have run across it in some time. A quick summary of the ladder’s seven rungs would be (starting at the bottom):

  • Observation: The world of observable data and experience
  • Filtering: The selection of a subset of this data for further processing
  • Meaning: Assigning meaning/interpretation to the data, through semantics or culture
  • Assumptions: The context you bring to bear, often from your Framework (below), combining what you already know with the new meanings you’ve assigned
  • Conclusions: Drawn based on the assumptions and meaning applied to the filtered data
  • Framework: You alter, adjust, or adapt your belief system/knowledge framework based on your conclusions
  • Action: You take action based on the meaning of the data and your updated belief system

A simple example of the ladder in action might be: While you are walking the streets of a large city, you observe a familiar face amongst the crowd, which catches your attention (filter). Once identified (meaning) as a friend you haven’t seen in years, and based on your memory, you assume a mutual, lasting friendship and conclude that they would be pleased to see you again. Your now-updated belief system moves this old friendship into the present, and you walk across the street to say hello (action).
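
If it helps to see the climb spelled out, here is a quick, purely illustrative sketch of that same street-corner example, one rung per step. This is my own illustration in Python, not something from Argyris or Senge, and every name and value in it is made up:

```python
# The seven rungs of the Ladder of Inference, applied to the street-corner
# example above. Purely illustrative; all data and names are hypothetical.

# Observation: the world of observable data and experience
observations = ["crowd", "traffic", "storefronts", "a familiar face"]

# Filtering: a subset of the data is selected for further processing
filtered = [o for o in observations if o == "a familiar face"]

# Meaning: the selected data is interpreted
meaning = "an old friend I haven't seen in years"

# Assumptions: the new meaning is combined with what I already believe
beliefs = {"old friendships": "mutual and lasting"}
assumption = f"{meaning}; our friendship is still {beliefs['old friendships']}"

# Conclusions: drawn from the assumptions and the interpreted data
conclusion = "they would be pleased to see me again"

# Framework: the belief system is updated based on the conclusion
beliefs["this friendship"] = "current, not just a memory"

# Action: taken on the basis of the updated beliefs
action = "cross the street and say hello"
print(action)
```

Notice how much gets discarded or invented between the first step and the last; that gap is where the trouble starts.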

That was a rather benign example. I hardly need to elaborate that there is much that can go wrong with this process, which was precisely the reason it was espoused by Argyris and Senge in the first place. One only needs to consider the present day nastiness we are seeing with respect to people observed, filtered, and labeled based on their race, religion, nationality, clothing, or language, the assumptions made and conclusions drawn, all in a rapid ascent up the Ladder to an often regrettable action.

One antidote to racing up the Ladder is to climb back down and take the process more deliberately, one rung at a time:

  • What data was chosen and why? Was it rigorously selected?
  • What are you assuming, and why? Are those assumptions valid?
  • When you said "[your inference]," did you mean "[my interpretation of it]"?
  • Run through your reasoning process again.
  • What belief led to that action? Was it well-founded?
  • Why have you chosen this course of action? Are there other actions that should be considered?

One of the better real-world examples I can think of to illustrate this point is the story of how the flawed Hubble Telescope primary mirror was allowed to be installed and launched even though tests showed that the curvature was incorrect. Several rungs of the Ladder were hastily jumped over as technicians assumed that, having gotten to the final stage, the mirror couldn’t possibly be incorrect; it had to be their own measurements instead, so nothing was said. In the end it required a separate, expensive repair mission to install corrective optics to mitigate what otherwise would have been a billion-dollar disaster and embarrassment.

Analytics interacts with the Ladder in two ways. The first is in a generally positive way with analytics as a tool to augment our decision process. Analytics can support better decisions by:

  • Providing better filters, overcoming many of our flawed selection biases (e.g. confirmation and hindsight bias)
  • Assigning better meanings, by separating the signal from the noise (e.g. text analytics, data mining)
  • Making your assumptions explicit, by coding them into the analytic model (see the sketch after this list)
  • Drawing only the conclusions that are warranted by the data
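
On that third point, here is a rough sketch of what “making your assumptions explicit” can look like in practice. It is a hypothetical toy model, not SAS code; the parameter names and numbers are invented for illustration:

```python
# A hypothetical toy forecast where the assumptions are named, visible
# parameters rather than constants buried in the formulas. Nothing here
# is real data.
from dataclasses import dataclass

@dataclass
class ForecastAssumptions:
    annual_revenue_growth: float = 0.05   # assumed, not measured
    adoption_rate: float = 0.20           # assumed, not measured
    annual_churn: float = 0.08            # assumed, not measured

def revenue_forecast(current_revenue: float, years: int,
                     a: ForecastAssumptions) -> float:
    """Toy forecast whose output is driven entirely by the stated assumptions."""
    net_growth = a.annual_revenue_growth + a.adoption_rate - a.annual_churn
    return current_revenue * (1 + net_growth) ** years

# Because the assumptions are explicit, a reviewer can challenge them directly
# instead of blaming "the model" after the fact.
baseline = ForecastAssumptions()
optimistic = ForecastAssumptions(annual_revenue_growth=0.12, annual_churn=0.03)
print(round(revenue_forecast(1_000_000, 3, baseline)))
print(round(revenue_forecast(1_000_000, 3, optimistic)))
```

The point isn’t the arithmetic; it’s that the assumptions now sit out in the open where someone can question them.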

However, the IoT, event stream processing, AI, and Analytics for Agency all create an entirely new path for making bad decisions, quickly, through our fully automated processes. The Ladder of Inference no longer pertains to just what goes on in our heads, but also to what goes on in our automated, rules-based, business models. When it comes to deploying automated rules-based models and systems at scale, there are two primary concerns:

  • First, what biases, assumptions and belief system have you baked into your model, either deliberately or inadvertently? If your automated business process is taking actions based on data inputs, it is racing through its very own Ladder of Inference, and therefore prone to the same decision-making mistakes as humans.
  • Speaking of humans, they likely still have an important role to play in your otherwise automated process. As I described in this post, The Man who saved the World, we tend to get into trouble, BIG trouble, when our processes are too tightly coupled. A more loosely coupled process, perhaps with human monitoring or intervention at key inference rungs, can prevent a runaway calamity (a rough sketch follows this list).
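
To make that second point concrete, here is a rough sketch of one way a loosely coupled checkpoint might look. It is only an illustration; the threshold, queue, and function names are all hypothetical:

```python
# A hypothetical loosely coupled decision step: the automated process acts on
# its own only when it is confident, and otherwise parks the case for a human
# before any action is taken. All names and numbers are made up.

REVIEW_THRESHOLD = 0.90     # below this confidence, a person gets involved
human_review_queue = []     # stand-in for whatever escalation channel you use

def score_case(case: dict) -> float:
    # Placeholder for the automated rules/analytics climbing their own ladder
    return case.get("confidence", 0.0)

def take_action(case: dict) -> str:
    return f"automated action taken on {case['id']}"

def decide(case: dict) -> str:
    score = score_case(case)
    if score >= REVIEW_THRESHOLD:
        return take_action(case)                 # fully automated path
    human_review_queue.append((case, score))     # loosely coupled path
    return f"{case['id']} escalated for human review"

print(decide({"id": "case-42", "confidence": 0.97}))   # automated
print(decide({"id": "case-43", "confidence": 0.55}))   # escalated
```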

Kevin Slavin’s TED Talk on Algorithms provides several other great examples of automated processes run amok, including the one where two competing pricing algorithms bid a used biology textbook up into the millions of dollars on Amazon. You don’t want to be the one responsible for the next version of the Wall Street “flash crash” at your company. There is a continued role for the proper use of human intervention and exception handling to prevent situations like those.

The Ladder of Inference was conceived as a cautionary tale, a reminder of the many different paths we can take to poor decisions, actions and outcomes. As we continue to automate and scale our decision processes, as we inevitably will, let’s make certain that we are scaling GOOD decisions. Because when it comes to bad decisions, you can't make it up on volume.

Leo Sadovy, Performance Management Marketing, SAS

Leo Sadovy handles marketing for Performance Management at SAS, which includes the areas of budgeting, planning and forecasting, activity-based management, strategy management, and workforce analytics. He advocates for bringing SAS’s best-in-class analytics capability into the offices of finance across all industry sectors. Before joining SAS, he spent seven years as Vice President of Finance for Business Operations for a North American division of Fujitsu, managing a team focused on commercial operations, customer and alliance partnerships, strategic planning, process management, and continuous improvement. During his 13-year tenure at Fujitsu, he developed and implemented the ROI model and processes used in all internal investment decisions, and also held senior management positions in finance and marketing.

Prior to Fujitsu, Sadovy was with Digital Equipment Corp. for eight years in sales and financial management. He started his management career in laser optics fabrication for Spectra-Physics and later moved into a finance position at the General Dynamics F-16 fighter plant in Fort Worth, Texas. He has an MBA in Finance and a Bachelor’s degree in Marketing. He and his wife Ellen live in North Carolina with their three college-age children, and among his unique life experiences he can count a run for US Congress and two singing performances at Carnegie Hall.



Re: Solving wrong solutions
  • 12/29/2015 10:54:41 AM

The bad decision tree problem makes me wonder if we might eventually need social scientists, psychologists and similar experts to consult to IT departments about how our minds work and how to avoid false premises and other quirks of the mind that tend to lead us in wrong directions.

Re: Bias and Assumptions
  • 12/27/2015 11:57:25 AM

Wow, there's so much evidence, personal and professional, from my own experience that corroborates Leo's explanation of this Ladder of Inference. However, I was particularly intrigued by Leo's observation that

"One only needs to consider the present day nastiness we are seeing with respect to people observed, filtered, and labeled based on their race, religion, nationality, clothing, or language, the assumptions made and conclusions drawn, all in a rapid ascent up the Ladder to an often regrettable action."

With a hefty percentage of the adult U.S. population filtering all their information through reliable sources like Fox News, the Wall Street Journal, World Net Daily, InfoWars, and reputable political leaders like Trump, Carson, and Cruz, it's not difficult to understand how the "nastiness" is spreading and "regrettable" action becoming more commonplace ...

Re: Watch your assumptions
  • 12/25/2015 3:13:35 PM

Terry, assumptions sometimes are cloaked in validity because we have held them for so long. The length of time we've held them seems to imply that they must be correct, validated by time rather than scrutiny. Very hard to police ourselves on these.

Re: Bias
  • 12/25/2015 3:05:03 PM

Ariella, House is a very entertaining fictional character that is far from the real world of medical practice. My daughter the physician scoffs at any reference made to the contents of the episodes.

Re: Bias and Assumptions
  • 12/22/2015 3:55:54 PM

@Seth - I probably should have taken more space to explain the Hubble mistake(s). The mirror construction folks were using a flawed measuring device (flawed because it was constructed in haste and not double-checked by anyone else), so they shipped the mirror to NASA not knowing it was flawed. At NASA, either a single technician or perhaps a group of them (I'm not sure) tested it with their own equipment and discovered the defect, but he / they couldn't believe that the mirror could have gotten this far (for mounting) having been flawed, so he / they assumed their measurements must be wrong and never told their superiors. All of this only came to light AFTER the first blurry images came back.

I love the online speed dating example - wish I'd thought of it myself. I can see it as a prime-time sitcom that gets canceled mid-season, but with a few really funny episodes before that.

I once wrote a post entitled "Your Risks are in your Assumptions", and I was going to reference it in this post, but decided to cut it out to keep the length down. The idea was that with all our complex modeling and spreadsheets and calculations and such, yes, there is some risk that the formulas could be corrupted, but the bigger risks are in the assumptions we use in selecting and populating the data for the model, and in the relationships we code into the formulas in the first place. I've sat through so many post-mortem types of meetings where everyone wants to blame the "model" but not even question their assumptions about revenue growth or time-to-market or adoption rates or quality yields or whatever.

Watch your assumptions
  • 12/22/2015 2:50:04 PM

Another thoughtful piece, Leo... this statement leapt out at me: "Making your assumptions explicit, by coding them into the analytic model."

In life and in business, I have learned the hard way to watch my assumptions. It's a good practice that helps short-circuit miscommunication or bad decisions/results. It's great advice for those dealing with statistics and analytics.

Re: Bias
  • 12/22/2015 9:37:26 AM

@SaneIt that could be a Dilbert cartoon.  A guidebook for managers entitled, "How to Make Bad Decisions Really Quickly"

Re: Bias
  • 12/22/2015 9:36:08 AM

@Seth I would hope not. He could try a much less invasive placebo treatment. In one episode that even worked on House; he believed he was getting a drug in the shot, and that made him feel less pain in his leg.

Re: Bias
  • 12/22/2015 8:26:16 AM

"When I think of making bad decisions faster I think of match making dating sights."  

I never really thought of that, I guess that there are several levels that could be used to make bad decisions really quickly.  I sort of assumed that the matches were made by location and the lists of must haves/deal breakers so really the sites are only working with what they have.  It takes the human filtering out of reading profiles but that doesn't make the data any more reliable. 

Re: Bias
  • 12/21/2015 10:40:48 PM

When I think of making bad decisions faster, I think of matchmaking dating sites.

I was aware that the Hubble telescope's mirror was flawed, but wasn't aware that every level knew it was flawed but put it up there anyway. That's the problem with flaws: people just do their part and pass it along until it causes disaster. Also, it can be difficult to tell someone that their data is flawed, possibly invalidating all their work.

@Ariella, an acquaintance of mine is suffering from a mystery illness that is causing her pain. Her doctor suggested that they could amputate her healthy leg, as that may or may not stop the pain. I only pray that was the only option he had and that he seriously did not expect her to take it.
