Every organization on the planet is pivoting; the familiar refrain of ‘change is the only constant’ is taking on new meaning as the pandemic heralds a new age of organizational experimentation. As we move deeper into worldwide lockdowns, and estimates extend the need for social distancing from weeks to years, companies are balancing triage activities with longer-term, strategic responses.
Measurement is on the radar of most organizations: ‘insights’ are listed among desired skills, absorbed into competency models, and sometimes assigned entire teams. Most, however, still use measurement as an outputs-focused activity; a way to evaluate an end goal. But the scope of measurement offerings is much, much greater – arguably, its greatest value is as a generative activity. And in our world of forced agility, this expanded understanding has never been more pertinent.
An era of experimentation
Without a ‘right’ way to navigate this new world, we have latitude to experiment; to try, to learn, and to try something else. Measurement, as a generative activity, brings structure to experiments by providing real-time insight into what is or is not working, and why.
Measurement goes from being evaluative to generative when you draw insight from leading indicators and connect them to planned actions. Rather than considering these actions as ‘corrective’, they are designed to add information to your growing understanding of how the organization is working. This creates an Insight Loop that permits continuous improvement, while refining the context against which all subsequent feedback can be understood.
Quick refresher: leading indicators are input measures that offer predictive information by focusing on the activities that lead to desired business outcomes. Lagging indicators are output measures that evaluate the extent to which business outcomes actually occurred.
Organizations tend to overvalue lagging indicators for a number of reasons: they are quantifiable, are naturally built in to revenue-based business models, and appeal to our quantification bias through the illusion of objectivity. The problem is that while they’re ‘easy’ to measure, they are very difficult to influence; and by the time lagging indicators change, the cascade of events that led to the shift has long passed. They are a prolonged look in the rear-view mirror: any attempt to change course begins while the (undesirable) upstream processes are still running, and the correction takes time to work its way through.
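The delay described above can be sketched in a few lines. This is an illustrative toy model, not a measurement tool: a lagging indicator (output) is modeled as depending on a leading indicator (activity) from a few periods earlier, so a change in activity only surfaces in the output after the lag.

```python
# Illustrative sketch: a lagging indicator trails its leading indicator.
# Output in month i depends on activity from `lag` months earlier, so a
# change in activity takes `lag` months to show up in the output.
def lagging_series(leading, lag=2):
    return [leading[max(i - lag, 0)] for i in range(len(leading))]

activity = [10, 10, 10, 5, 5, 5]           # activity drops in month 3
output = lagging_series(activity, lag=2)   # the drop appears only in month 5
# output == [10, 10, 10, 10, 10, 5]
```

By the time the output series moves, the upstream change happened months ago; watching the activity series directly gives you that information immediately.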
People are the golden metrics
People are the most useful leading indicators for organizational health.
Although vulnerability to bias causes many organizations to devalue self-reported information, this direct feedback contains essential information about how people are doing. In fact, the subjectivity of individual feedback can be enormously valuable, especially now – we need to understand how different people are conceiving of and adapting to the situation, how they’re using the supports offered, the ways these supports are differentially effective, and why.
We also need to understand how these things change over time, because they will. As this new normal becomes the norm, we are all experiencing the ambiguous loss of a past life and moving through the associated grief. People’s needs are going to change, which means the needs of the organization will change. People-related measures should anchor all others.
Measuring in real-time
There are two major steps to enable real-time measurement:
Creating an Insight Loop that connects leading indicators (people measures) to organizational measures you’re already keeping track of.
Determining how you will interpret and respond to insights using a what-if plan.
Creating an Insight Loop
An Insight Loop outlines a series of connected measures that contribute to a desired output. The goal of the Insight Loop is to provide a snapshot of how a desired result (output) is generated, enabling you to visualize when, where and how those results might be enabled or compromised.
When using an Insight Loop to support new ideas, ways of working and other experiments, it’s important to prioritize direct measures. The more you privilege direct measures, the sooner you will have usable information about the intervention. Less direct measures give the problem a longer incubation period, taking longer to reveal symptoms. This is where asking people directly and trusting what they tell you becomes so important.
A number of organizations are offering virtual ‘wellness’ services (e.g., online meditation, yoga, etc.), as a means of reducing stress and building community.
What measures might we use to see whether these supports are helping?
Productivity is a relevant organizational KPI, but waiting to see whether and how much productivity changes would mean waiting weeks or months, and any problem would surface only after its consequences were felt (e.g., a disengaged workforce, suffering employees).
An Insight Loop helps you identify measures that provide crucial information earlier in the sequence. Using the wellness example:
Start by measuring the activities these interventions support – whether, who, when and why people are participating in the sessions.
Since the wellness offerings are thought to support productivity by helping to manage stress, build community and encourage work-life balance, it’s important to gain feedback about stress levels, community and balance.
Additionally, it is important to keep an eye on factors that can interfere or compete with the intervention and its intended outcomes (i.e., other sources of stress, procedural barriers, environmental factors).
Once the Insight Loop is developed, it is operationalized by establishing how you’ll use the feedback.
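The wellness example above can be sketched as a simple data structure: an ordered chain of measures running from the most direct (participation, fast feedback) to the most lagging (productivity, slow feedback). All names and sources below are illustrative, not prescribed.

```python
from dataclasses import dataclass, field

@dataclass
class Measure:
    """One measure in the loop, with a data source and recorded readings."""
    name: str
    source: str                               # where the feedback comes from
    readings: list = field(default_factory=list)

    def record(self, value):
        self.readings.append(value)

# Hypothetical Insight Loop for the wellness example, ordered from
# direct (fast feedback) to indirect (slow, lagging feedback).
insight_loop = [
    Measure("session participation", "platform logs"),
    Measure("self-reported stress", "pulse check"),
    Measure("sense of community", "pulse check"),
    Measure("work-life balance", "pulse check"),
    Measure("productivity", "existing KPI"),  # lagging indicator
]

# Record a week of feedback for the most direct measure.
insight_loop[0].record({"attended": 42, "invited": 120})
```

The ordering is the point: when something goes wrong, the measures at the top of the list should move first, giving you time to explore before the lagging KPI at the bottom ever shifts.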
Planning your response
An important feature of the Insight Loop is that it is dynamic. As a generative form of measurement, the ‘results’ are indicators of where you might need to explore further. This means looking at patterns and directions of change, rather than understanding results as absolute evaluations (right/wrong, effective/ineffective).
Bring meaning to your data ahead of time by establishing what you expect to observe, what it might mean if you see something different, and the actions you can take to learn more or respond to feedback.
What should you see if the intervention is ‘working’?
What might you see if it isn’t?
If we aren’t seeing the results we expect, what other factors should we check?
What might these things tell us about our workforce, the environment, and the intervention?
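One lightweight way to capture the answers to these questions ahead of time is a lookup from expected patterns to planned responses. The patterns and actions below are illustrative placeholders for whatever your own answers turn out to be.

```python
# Illustrative what-if plan: observed pattern -> planned response.
what_if_plan = {
    "participation rising, stress falling": "continue; widen the offering",
    "participation rising, stress flat": "ask participants what else is driving stress",
    "participation flat or falling": "check timing, awareness, and workload barriers",
    "stress rising despite participation": "look for competing stressors outside the intervention",
}

def respond(observation: str) -> str:
    """Return the planned action, or a prompt to investigate further."""
    return what_if_plan.get(observation, "unexpected pattern: explore further before acting")
```

Note the default: an unanticipated pattern is not a failure, it is a cue to explore, which is exactly the generative stance described above.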
Be creative about your sources!
One of the main reasons organizations do not use measurement more regularly is the perceived difficulty of collecting data. While it does require some upfront work (i.e., deciding what to measure, coming up with questions), the task of measuring does not require the development of a platform or investment in a new system. Provided the data are being treated ethically (i.e., anonymity and/or confidentiality assured, proper storage, limiting access), you can be creative about data sources.
In fact, the more you break free of traditional notions of what ‘counts’ as data (e.g., surveys), the more accurate your insights will become.
The recent move to virtual comes with the additional benefit of embedded data collection: whatever system(s) you use, they are collecting information about engagement (who, what, when, how). You can add meaning to this ‘big’ data by integrating other, real-time data sources. Here are a few ideas:
Pulse check questionnaires
Leader insights (team meetings, coaching sessions, 1:1s)
Virtual fireside chats and town-halls
Virtual ‘always open’ comment boxes
Non-work coffee chats
Review of support uptake (use of wellness offerings, ergonomics, etc.)
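Sources like the ones listed above can be merged without any new platform. A minimal sketch, assuming each source is just a list of (theme, note) pairs, grouped into one weekly snapshot; the source names and notes are illustrative.

```python
from collections import defaultdict

def weekly_snapshot(*sources):
    """Each source is a list of (theme, note) pairs; group notes by theme."""
    snapshot = defaultdict(list)
    for source in sources:
        for theme, note in source:
            snapshot[theme].append(note)
    return dict(snapshot)

# Hypothetical entries from three of the sources listed above.
pulse = [("stress", "3 of 5 teams report higher stress")]
leader_notes = [("stress", "deadline pressure in product team"),
                ("community", "new hires feel isolated")]
uptake = [("community", "coffee-chat attendance up 20%")]

snapshot = weekly_snapshot(pulse, leader_notes, uptake)
```

Even this crude grouping turns scattered observations into themes you can read against the Insight Loop each week.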
Above all, the priority of the Insight Loop is usability. It’s easy to collect a LOT of data; but just because you can, doesn’t mean you should. The size of the Insight Loop should be based on how you plan to use the insights: major organizational changes should be represented by a broad and deep loop that connects measures to KPIs. For new supports, ways of working, smaller interventions and ideas you’re trying and testing, a narrow loop that focuses on the outcomes of the specific change may be appropriate.
This is not about proving hypotheses or demonstrating repeatability; it’s about getting critical information in real-time that allows the organization to pivot and support its people.