It matters WHAT we measure, not just THAT we measure
Consensus on the importance of “measuring” has never been higher, and it continues to gain traction as organizations invest more in engagement surveys, culture assessments, performance evaluations, balanced scorecards, organizational health tracking, and other data strategies.
We are not just data-hungry; we are data-hoarders, craving more inputs, more information, more insights. It borders on obsession: by the time we find a new way to get our data “fix”, there’s another question waiting to be solved by the magic of numbers.
Time and again, however, we are reminded that not all data are created equal, and the risks of choosing the wrong measures range from inconvenient (hiring, then quickly exiting, a poor fit), to expensive (misallocating resources to the wrong customer engagement variables), to downright devastating (misunderstanding our profit drivers).
What we do with that data once it has been selected adds another layer of vulnerability: misinterpretation can lead us to misunderstand our business and to choose activities that detract from, rather than add to, the bottom line.
All this to say: it matters WHAT, HOW, and WHY we measure, not just THAT we measure.
We know this, right? After all, we are data-lovers.
Why do most organizations still lack adequate measurement strategies?
Why do many leaders have a poor understanding of their team’s performance and profitability?
Why is it SO HARD to see the whole picture?
There are four typical mistakes organizations make that undermine their measurement efforts.
#1 We don’t link measures to our strategies
While this might seem obvious, most organizations assume the metrics they track are automatically aligned with their strategies. The problem with this assumption is that it ignores when, and why, those broader metrics were selected in the first place. Moreover, higher-order business metrics are rarely the most direct measures of a strategy.
For example, while we can all agree that the purpose of any business is to be profitable, the priority of most businesses is to drive stakeholder value. What drives stakeholder value differs between businesses, resulting in very different strategies. In that case, is “profit” really the best measure of success?
#2 We don’t validate the causal links between measures
“Useful measures are persistent (they show an outcome at one time will be similar to the outcome of that same action at a later time); and predictive (they show a causal relationship between the action and the outcome being measured).”
Behind every measure lies a set of variables that drives its behaviour. Most organizations fail to validate the assumed relationships between measures, which means:
We measure too many things, resulting in a chaotic mix of irrelevant, peripheral measures.
We cannot tell which measures provide valuable information about progress, and which are just noise.
We cannot weigh the relative importance of each measure, and so cannot allocate resources accordingly.
We do not set the right performance metrics.
#3 We start collecting data before knowing what we want to find out
The value of data is a function of WHAT we are collecting and WHY we are collecting it. Data is meaningless (and potentially invalid) without context. Collecting data without knowing WHY leads to an overflow of information with little differentiation in quality, and to systematic bias introduced by the way the data is collected.
Do you start driving if you don’t know where you’re going?
This practice leaves organizations force-fitting data collected early on to problems they uncover later.
It’s like realizing you should have flown because the route to your destination crosses an ocean.
*Note: Sometimes it’s impossible to know in advance which relationships will need to be explored. That is a different problem, requiring a different solution from the kind of measurement discussed in this post.
#4 We measure incorrectly
Validity is the extent to which a metric captures what it is supposed to capture.
Reliability is the degree to which a measurement technique produces consistent results, revealing actual changes without introducing errors of its own.
HOW we measure should be informed by WHAT we are measuring and WHY. It affects both validity and reliability, and thereby determines the accuracy and utility of our results.
Too often, HOW we measure is determined by factors outside of WHY we are measuring.
This last point is at the crux of all four of these measurement mistakes.
We rationalize compromising quality, citing the need to ACT NOW.
We don’t ask enough questions.
We put too much stock in the credibility of “objective” data.
And we perpetuate the patterns that prompted the need to measure in the first place.