Assessment-supported decision-making is gaining popularity, with broadening application in talent selection and people development, culture profiling, engagement surveys, change adoption, and more.
As we attempt to measure the intangible, we place faith in quantification:
We want to be successful
We want to be fair
We want to capitalize on potential
We don’t want to lose
We believe the math will point to the “right” answers, and that a scientific method will ensure objectivity. We trust “validated” processes over the vulnerabilities of human judgment, remaining cautiously attuned to the risks of making the “wrong” choices.
The assessment community has responded with an ever-growing collection of alternatives. There seems to be a “validated” test for just about everything; and if there isn’t, one can be built.
Confronted with such breadth, knowing where to start can feel overwhelming. The less comfortable we are in the assessment space, the more we are weighed down by the risk of choosing poorly. Very quickly, this excess of choice adds complication instead of offering a way forward.
In the midst of determining which assessment(s) to leverage, we are often confronted by an even broader question:
“Should we buy access to a reputable assessment, or build an assessment to fit our needs?”
At this, many organizations look to claims of “validity” to inform their course of action.
Established assessments tend to lean heavily on claims of validity, emphasizing their predictive value, cross-industry benchmarking, and generalizability.
In contrast, there is a common perception that custom-built assessments are not valid, or at least have not undergone the rigorous validation process that would confirm their legitimacy.
Consequently, organizations tend to favour buying assessments to support higher-stakes decisions (e.g., hiring, promotion), and to build assessments for less risky, or “softer” imperatives (e.g., engagement).
The trouble with these patterns is that “validity” does not guarantee confidence the way we think it does; and this problem starts with our understanding of validity.
In assessment, “validity” is not singular – there are several types, some being more meaningful than others under certain conditions. To answer the question of “buy or build?”, therefore, we must first understand what makes an assessment valid.
Hint: Validity is not a property exclusive to the assessment itself.
Let’s Talk About Validity
Did the assessment do what it claimed to?
Do the results reflect what we’re seeing in real life?
At its most basic, validity refers to the extent to which an assessment corresponds to the real world.
While research and academia might not agree on an absolute number of types of validity, some are more widely accepted than others; and these are the types of validity that vendors, organizations, and other measurement people talk about. Before you can determine which assessment(s) fit your needs, and whether to buy or build those assessments, you should understand what these types of validity mean for the assessments you choose.
On the surface, it would appear that the three most widely accepted types of validity refer to the assessment instrument itself. While it is true that an assessment must (1) measure the construct it claims to measure (construct validity), (2) cover all essential content for that construct (content validity), and (3) produce results that correspond to real-world outcomes or performance (criterion validity), all three types of validity can also be compromised by the way we USE the assessment:
Myers-Briggs is a popular, established, “validated” assessment of personality type.
Personality type refers to your natural tendencies, motivations, reactions, and things you’re inclined to pay attention to. This means that Myers-Briggs does not measure*:
Behaviour
Intelligence
Emotional stability (vs. reactivity)
Effectiveness
Trustworthiness
Level of motivation
Ability to maintain social contracts
Learning tendencies
Mindset (e.g., growth vs. fixed)
Stress management
*Not an exhaustive list
Misapplied as a hiring tool (for example), Myers-Briggs would run into issues with:
construct validity (it would not be a good predictor of emotional stability, or the way a future leader might respond to stress)
content validity (items are unlikely to reflect the specific expectations of a leader in your organization)
criterion validity (results are unlikely to match the current behavioural performance of leaders in your organization, and cannot predict the future behaviours of potential leader candidates)
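To make the criterion validity point concrete: checking it typically means correlating assessment scores against a real-world criterion, such as performance ratings collected later. Here is a minimal sketch using entirely hypothetical data (the candidate scores and ratings below are invented for illustration, not drawn from any real assessment):

```python
from statistics import mean, pstdev

def criterion_validity(scores, outcomes):
    """Pearson correlation between assessment scores and a
    real-world criterion (e.g., later performance ratings).
    Values near 1.0 suggest strong criterion validity; values
    near 0 suggest the assessment does not track the criterion."""
    mx, my = mean(scores), mean(outcomes)
    cov = sum((x - mx) * (y - my) for x, y in zip(scores, outcomes)) / len(scores)
    return cov / (pstdev(scores) * pstdev(outcomes))

# Hypothetical data: five candidates' assessment scores and
# their performance ratings one year into the role.
scores = [62, 71, 55, 80, 68]
ratings = [3.1, 3.8, 2.9, 4.4, 3.5]

r = criterion_validity(scores, ratings)
print(f"criterion validity (r) = {r:.2f}")
```

Note that the correlation is a property of the pairing between the assessment and a specific criterion in a specific context: the same instrument correlated against a different outcome (or used with a different population) can yield a very different result, which is exactly why validity cannot be treated as a fixed badge on the instrument.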
Validity is a matter of the instrument and its application.
What does that mean for the selection of assessments and whether we buy or build?
In our next blog, we’ll examine a decision matrix to facilitate choosing the right assessment(s).