First, I needed to know more about these metrics, understand why they were important to her and the business, how they were defined, and how well they performed as business indicators. Guess what? They were the unique creations of the organization, each a mélange of several factors. There was no obvious justification for their use as a means of assessing the health of the business. Yet they were routinely used as a basis for decision making. In fact, the executive was compensated based on these metrics, even though they were not measures of anything that could be directly influenced by public relations.
People stick with metrics that are familiar and accepted even when they don't know how or why the metric originated, whether it performs well for the intended purpose, or whether there might be a better alternative. In short, we sometimes use metrics that have become sentimental favorites rather than effective tools. When these ineffective metrics are used as a basis for decision making, it's bad for the business.
Would it surprise you to hear that I could tell many more stories like these? Let's look at another, and explore how this happens and what you can do to address the problem.
An engineering consultant delivered a final report to a client, who then asked me about the sample size used in the project. The report didn't explain how the sample size had been selected. Was it appropriate?
The details hadn't been explained because the consultant didn't know the proper techniques for estimating sample sizes for statistical analysis. He had heard, somewhere, that a certain number was a good sample size, and he had used that number ever after. As it happened, the number he had learned was a good one, for a specific type of analysis, under specific circumstances. Alas, it was not at all appropriate for the work he had just delivered to his client.
How does this happen? Often, it starts with something reasonable. Take the example of that engineering consultant. No doubt it started with someone explaining, and perhaps even showing calculations for, the sample size for a specific situation. Then someone tells someone else that this is a good sample size, but without the explanation of the relevant conditions. And so on, and so on, until the number becomes a guideline for all sorts of things, with no real understanding of reasons or requirements.
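To see why a single remembered number can't serve every study, here is a minimal sketch using the standard formula for the sample size needed to estimate a population proportion, n = z²·p·(1 − p)/e². The confidence levels and margins of error below are illustrative assumptions, not figures from the consultant's report; other kinds of analysis have entirely different formulas.

```python
import math

def sample_size_for_proportion(margin_of_error, confidence_z=1.96, p=0.5):
    """Minimum sample size to estimate a population proportion.

    Uses the textbook formula n = z^2 * p * (1 - p) / e^2.
    p = 0.5 is the most conservative choice (it maximizes n).
    confidence_z = 1.96 corresponds to 95% confidence.
    """
    n = (confidence_z ** 2) * p * (1 - p) / margin_of_error ** 2
    return math.ceil(n)

# The "right" number shifts sharply with the conditions:
print(sample_size_for_proportion(0.05))                      # 95% conf., ±5% -> 385
print(sample_size_for_proportion(0.03))                      # 95% conf., ±3% -> 1068
print(sample_size_for_proportion(0.05, confidence_z=2.576))  # 99% conf., ±5% -> 664
```

A sample of 385 is perfectly defensible under one set of assumptions and badly undersized under another; without knowing the assumptions, the number alone tells you nothing.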
He had gotten away with this for years. Why? Because his clients didn't ask enough questions!
So what can you do to avoid sentimental metrics? Ask questions!
Ask many questions, and ask in different ways. Before a project begins, ask about methods. Ask why. Ask for documentation. Ask about assumptions. Ask about these same things as the work progresses, and again when you review the final report. When confronted with a new metric, ask how it is calculated. Ask what the metric is meant to measure. Ask for evidence that it does what it should. Ask about statistical analysis of the metric's performance. Ask about audits of the analysis process.
Got the picture? Ask away, and don't let up until you get the evidence and are able to understand it.
Do you think your company uses sentimental metrics? Share your examples below.