Lindsay

Choosing to Survey…or Not.

“Do a survey!” they said. “It will be easy!” they said.


Surveys are used as research multitools by most organizations – they’re easy, quick, inexpensive, and can be applied to practically anything (e.g., engagement, culture, performance, learning, leadership, feedback). In some ways, they seem like the unicorn of research tools: we can ask quantitative and qualitative questions and generate data that can be analyzed (which is critical, given the importance of ‘representative samples’, ‘confidence intervals’, and ‘validity’). Surveys are basically the perfect mixed-methods tool…right?


Kind of, but not really.


The trouble with surveys is that they are a wolf in sheep’s clothing. Their relative accessibility and ease of use disguise the complexity of generating high-quality, reliable data, a mission that begins well before any questions are written. The first gate in survey data quality is determining whether a survey is the right method to begin with:

  • What’s your big (research) question?

  • What do you want to ask participants (and why)?

  • What kinds of data are you looking for (and why)?

  • What do you hope to do with the data?

With all of this in mind, is a survey the best way to pose these question(s) to this audience, under these circumstances, and in this context?


Although questions about the appropriateness of a method are fundamental to good research (i.e., good question asking), we rarely question the use of surveys to gather basically any kind of data.


The Slippery Slope


Quantification Bias

The most pervasive risk in survey use derives from the false belief that quantitative (measurable) data are naturally superior to qualitative (descriptive) data, and that this superiority means the data will produce a clear answer that tells you what to do. This leads us to ignore the inevitably human aspects of surveys (i.e., creating questions, interpreting answers, determining implications), often deterring us from questioning the ‘answers’ we believe the data have supplied.


We defer to the genius of numbers, forgetting that measurability is not an antiseptic, and that anything involving humans involves bias:

  • Asking biased questions will generate biased (faulty) results (numbers!).

  • Structuring surveys in ways that provoke bias will skew results.

  • Interpreting numbers with bias in the background leads to misunderstanding and misinformation (but with confidence!).

Just like bad data, bias doesn’t stink, emit flares, or skulk around in a dark cloak. We usually don’t know when it’s at work, and it’s safe to say it usually is. There is no inherent safety ‘in the numbers’.


Misinformation AT SCALE

Armed with our quantification bias, we make confident statements about our findings, their meanings, and what we should do about them. We feel even more confident when these findings come from and/or are supported by surveys, because surveys can generate enormous amounts of data, which most of us interpret as a good thing – no, a GREAT thing.


More data is representative! More data is reliable! More data is trustworthy!


Unless that data is bad.


Then all we have is A LOT of bad data. Collecting more of it can smooth out some participant errors and outliers, but systemic problems rooted in survey design cannot be fixed by including more people – that just produces more bad data.
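
To make this concrete, below is a minimal sketch (in Python, with made-up numbers – not from any real survey) of what a leading question that inflates agreement by 15 points does to the results. As the sample grows, the spread shrinks, but the estimate settles confidently on the wrong value:

```python
import random

random.seed(42)

TRUE_AGREEMENT = 0.50  # hypothetical true share of people who agree
DESIGN_BIAS = 0.15     # hypothetical skew added by a leading question

def biased_response() -> int:
    """Simulate one respondent answering the leading question."""
    return 1 if random.random() < TRUE_AGREEMENT + DESIGN_BIAS else 0

for n in (100, 1_000, 100_000):
    estimate = sum(biased_response() for _ in range(n)) / n
    print(f"n={n:>7,}: estimated agreement = {estimate:.3f} "
          f"(truth = {TRUE_AGREEMENT})")

# The estimate converges to ~0.65, not 0.50: a bigger sample only
# makes us more confident in a systematically distorted number.
```

No number of extra respondents moves the estimate back toward the truth; only fixing the question does.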


Unfortunately, the quantification bias keeps us from recognizing bad data (mostly because we don’t question it). Bad data doesn’t stink, so we forge ahead with analyses that lead to big claims and ‘answers’. The result can be significant misinformation that is viewed as particularly trustworthy, given the volume of data supporting it. This is bad enough when the ‘answers’ support a single decision (e.g., funding a particular program), but it’s much, much worse when bad data is embedded into the nucleus of the organization (e.g., building people strategy on an annually repeating survey).


Proceed with Caution

Bearing in mind that a survey forfeits the ‘why’ behind participants’ responses, surveys can still be useful when they’re designed thoughtfully, ideally combined with other methods, and when we’re honest about what we can (and can’t) do with the resulting data.


Surveys might be appropriate when:


The topic being researched can actually be counted

  • Consider analysis before survey design – how do you intend to use the results? What responses might you expect, and how will you analyze them?

  • A common error in survey use is asking people questions they cannot answer accurately. Questions about mental states, habits from more than a few days ago, values without context, or hypothesized future behaviours all lead to low-quality data.

  • Considering how you plan to use the data and what you hope to ‘say’ about the results can clarify whether a survey format will get you the ‘answers’ you’re looking for.

You’re doing a preliminary exploration of a broad concept/topic area

  • Surveys can be useful when trying to narrow focus on a big topic or concept (e.g., “learning” or “leadership”).

  • Critically, the intention with such a survey is not to report firm ‘results’ or to conduct elaborate analysis. Rather, an exploratory survey probes different aspects of a big concept at a high level, and the responses are used to determine where to focus further investigative methods (e.g., interviews, focus groups, or a more targeted survey).

You want specific feedback on the mechanics of a recent experience

  • Key terms here are ‘specific feedback’, ‘mechanics’, and ‘recent’.

  • Specific feedback means you are asking targeted questions.

  • The mechanics of an experience are its countable, (dare I say) objective aspects: materials provided, duration of session, size of room, number of follow-ups, format of content, etc.

  • Recent means within the last few days, because anything beyond that is subject to confabulation (telling lies honestly).

You are evaluating observable behaviours and/or actions

  • Behaviouralizing or operationalizing concepts is always a good idea.

  • This means describing an action or other phenomenon by what is observable. If you are inviting feedback on experiences, activities, behaviours, etc., it is best not to assume that participants share your ideas about what they’re reporting on.

  • It’s better to provide participants with a definition of ‘what good looks like’, and of what a concept is and is not, to ensure everyone is reporting on the same thing (because different understandings of a concept mean they’re actually measuring different things).

You have other sources of data to add texture to the survey data

  • Survey data is less risky when you have other sources of information about the concept you are exploring. In research, this is often referred to as triangulation.

  • Although this is not a cure-all (because bias can truly be introduced anywhere), considering the ways your survey results align with and differ from related data will prompt questions that can help you examine the quality of the survey data collected.

  • Impact mapping is useful for clarifying where and how the data you collect by survey ‘fits’ with related sources of information.

You’re checking in on a topic for which you already have qualitative data

  • Surveys are a great complementary method of inquiry.

  • Remember, numbers need a backdrop, and insights you have already gathered through interviewing, observation, etc., can provide a more reliable background against which to interpret the survey data.

  • Maybe you have already gathered robust feedback about your coaching program, implemented some changes, and now want to see if coaching practices are moving in the right direction (e.g., by asking whether the number of coaching sessions per month has increased).

  • In this case, it is especially important to remain aware of confirmation bias (the tendency to seek evidence that confirms an existing belief) in the analysis, and to stay open to the ways the qualitative and quantitative data do/do not ‘fit’ together (and to explore why!).

You need to cover a very broad and/or large population

  • Sometimes, inclusivity is more important than depth of insight. In this case, surveys can be a useful way to collect large amounts of data in a single effort, provided you recognize their limitations.

  • Keep these surveys exploratory, recognize you are collecting high-level feedback, be specific, use observable behaviours/actions whenever possible, and be careful about the claims you make with this high-level data.

  • Given these limits, it’s a good idea to plan for targeted follow-up using other methods (e.g., interviews, focus groups); the survey results present a great opportunity to help you decide what to explore further and with whom.

Anonymity and confidentiality are the biggest priorities

  • Not all survey considerations are about content; sometimes, the conditions of collecting feedback take priority.

  • When participants may feel threatened, exposed, or particularly vulnerable sharing feedback, surveys can be a way to protect their identities.

  • A word of caution: the anonymity of a survey might be comforting, but it does not eliminate the sensitivity of some topics. The sterile nature of surveys can be damaging when empathy and connection are needed to establish psychological and personal safety. Ask yourself why anonymity and confidentiality are necessary, and determine the best way to provide that security.
