When it comes to gaining organizational insights, the most important feedback comes from our most valuable asset: our people. Qualitative survey/research questions help us access the why and the how behind reported and observed experiences, behaviours, emotions, attitudes, habits, and opinions in ways that quantitative questions cannot match.
Setting up your survey to gain usable qualitative insights opens the door to an even bigger job: using the data and communicating results.
As the researcher, you have the important responsibility of sharing findings objectively*, while protecting the (psychological, personal) safety of respondents. Although this might seem obvious, stakeholder motivations, pressures to deliver ‘answers’, and organizational imperatives to self-protect can put these accountabilities at odds.
*Objectivity is an invention – not practically possible, because we are the creators and curators of the data. Here, objectivity is about integrity: being CAREful and reflexive (the practice of examining one’s own role and contribution to the research context, and as an actor in the analysis process) about the biases we introduce.
Rather than feeling stifled by the weight of this responsibility, we see the sharing of ‘results’ as an opportunity to teach the organization how to understand and engage with data. A great deal of the pressure faced in delivering engagement, performance, culture, or change ‘results’ comes from leaders’, participants’, and other stakeholders’ misunderstandings about what the data can and cannot say (i.e., the purpose of the survey/research is unclear). Using the share-back of qualitative insights to build capability not only protects the research; it also supports the future collection of better-quality data (because more people know what good looks like).
Here are some things to think about when sharing qualitative data.
Qualitative responses are intimate self-reports
Qualitative responses are intimate self-disclosures: participants are sharing and documenting their personal experiences (feelings, beliefs, stories). Once shared, those data are out permanently, susceptible to interpretation, reconfiguration, and use by others. By self-reporting, participants are relinquishing control over their own narratives. They are giving you their stories, usually in hopes that something will get better. They are vulnerable, trusting, and deserve your commitment and CARE. Your responsibility, first and foremost, is to your participants. Always.
This commitment starts with careful survey design, and becomes more urgent when considering how you will (and will not) use the data:
Be clear about why you are collecting the data and the ways it will and absolutely will not be used.
State who will and who will not see the qualitative responses, and in what format (i.e., will the responses be treated before they are shared, or will they be distributed verbatim? What demographic/identifying information will be shared or tied to responses? Will individual responses or only themes be communicated?).
Detail any differences between how you will use/share qualitative versus quantitative results, and whether the qualitative responses shared will be a selection or all of the responses collected.
Provide a backdrop so that qualitative responses are interpreted in context, and be transparent about what this backdrop will be before asking participants to self-disclose.
Confidentiality vs censorship
Confidentiality and anonymity are commitments made to PARTICIPANTS (this is who you are protecting). Depending on the commitments made about sharing qualitative responses, any editing of those responses should be approached through the frames of participant confidentiality and anonymity. This usually means concealing names and other revealing statements that might expose the participant’s identity.
Confidentiality does not mean censorship.
Censorship involves the intentional suppression of communication in order to prevent dissemination of information that may be valuable or harmful to the censor, the individual or group censored, or the intended recipient of the communication.
The best way to determine whether your data editing is about confidentiality or censorship is to consider the motivation – who does the edit ‘protect’? Who is at risk of exposure, what kind of exposure, and what are the implications?
When editing is necessary, it is important to find ways to share the insights that would otherwise be disguised. This might mean highlighting themes or the ‘essence’ of a collection of responses alongside the edited examples, to clarify what they are ‘saying’.
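If you are working with large volumes of free-text responses, part of this editing can be supported by a simple, automated first pass before careful human review. The sketch below is a minimal, hypothetical Python example – the names, patterns, and placeholder tokens are assumptions for illustration only – and it will not catch the indirect details (roles, locations, events) that can also expose a participant, so it supplements rather than replaces your own judgement.

```python
import re

# Hypothetical redaction pass over free-text survey responses.
# The names and patterns below are illustrative assumptions, not a complete
# anonymization solution; indirect identifiers still need a human eye.

KNOWN_NAMES = ["Priya", "Daniel", "Marta"]  # e.g., names appearing in responses
EMAIL_PATTERN = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")

def redact(response: str) -> str:
    """Replace known names and email addresses with neutral placeholders."""
    redacted = EMAIL_PATTERN.sub("[email removed]", response)
    for name in KNOWN_NAMES:
        redacted = re.sub(rf"\b{re.escape(name)}\b", "[name removed]", redacted)
    return redacted

if __name__ == "__main__":
    sample = "Priya told me the rollout was rushed; email her at priya@example.com."
    print(redact(sample))
    # -> "[name removed] told me the rollout was rushed; email her at [email removed]."
```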
Create engagement guidelines
Although mystifying to those of us who spend our days immersed in data, most people do not spend their time knee-deep in analysis. This means that most of your audience will not have practice engaging with research results; they might feel confused about what the data do/don’t say, overwhelmed by how much was collected, and/or intimidated about what they are supposed to do with it. They need your help.
Telling people how to interpret the data (even in a report) is as important as delivering the results – maybe even more important. Why? Because if people can’t interpret the results, they can’t use them; and you aren’t doing research for the joy of creating colourful reports that sit on a shelf.
Provide your audiences with guidelines that will help them engage with the data:
Present a backdrop: set the context and give them a frame through which to understand the results (e.g., remind people of why and how the research was done, the results expected, and conditions that influenced the kinds of responses received).
Offer questions people might consider to help them think about the data and understand the results in context.
Anticipate and respond to chatter proactively – what assumptions, fears, and biases might be present for your audience? Responding ahead of time to these internal dialogues can not only prevent misunderstanding but also build trust by showing your audience that you understand their perspective (empathy!).
Provide a focus
For really big data sets, it can be helpful to give people a focus. Tell them where to look and why by connecting results to organizational priorities or strategy communications that will give meaning to what they’re looking at. You might even consider a staggered release of specific results (e.g., delivering an overall summary right away, and sharing deep-dive insights over a period of time, to align with what is happening in the organization).
Make the numbers real
Make the data “real” by placing them in conversation with observations, experiences, and other knowledge that your audiences are already familiar with. Help them convert ‘the numbers’ into day-to-day practices (i.e., what did humans have to do to create these data? What activities create these experiences?), and provide parameters for understanding quantitative results: which numbers are and aren’t significant (and why)? What thresholds should they be concerned with? What are “good” scores?
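One way to ‘provide parameters’ is to publish the interpretation bands alongside the scores, so everyone reads the same numbers the same way. The sketch below is a hypothetical Python illustration – the 1–5 scale, the cut-offs, and the labels are assumptions you would replace with your own thresholds and the rationale behind them.

```python
# Hypothetical interpretation bands for a mean score on a 1-5 agreement scale.
# The cut-offs and labels are illustrative assumptions; document and share
# the rationale for whatever thresholds your organization actually uses.
BANDS = [
    (4.2, "strength - sustain and share what is working"),
    (3.5, "steady - monitor, no immediate action required"),
    (0.0, "attention - discuss with the team before acting"),
]

def interpret(mean_score: float) -> str:
    """Return the interpretation band for a team's mean item score."""
    for cutoff, label in BANDS:
        if mean_score >= cutoff:
            return label
    return "no data"

if __name__ == "__main__":
    for score in (4.4, 3.7, 2.9):
        print(f"{score:.1f} -> {interpret(score)}")
```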
Sharing results is equally an opportunity to educate the organization about how to think about assessment and research, so be transparent about your rationale when providing these guidelines.
Equip your leaders
Although you might be responsible for sharing results, your leaders are the ultimate gatekeepers when it comes to engaging with the data. A common assumption is that leaders are more comfortable with data by virtue of their roles and experience. This assumption is often flawed. While they might be comfortable handling sales data and other organizational metrics, response-data (i.e., data created by people responding to a survey, in focus groups, or in interviews) are influenced by contextual and circumstantial variables they often aren’t used to considering (and might not even have access to).
Significantly, leaders can make or break participants’ trust in the process by supporting (or not) the commitments made about how the results will (and won’t) be used. This means the time invested in equipping your leaders to engage with, understand, and take action in response to the results can deliver a high return.
Preparing leaders might look like:
Briefing leaders on what was collected, how it’s presented, and where you would like them to focus.
Providing team-specific insights to help place their results in the context of organizational results.
Brainstorming the implications of the results on their team, including how they might take action / leverage the research insights.
Helping leaders anticipate the internal chatter, vulnerabilities and questions their teams might have.
Differentiating between trust-building and trust-compromising behaviours related to data collection (i.e., behaviours that will reinforce participants’ trust and behaviours that will promote uncertainty and discomfort).
Remembering that leaders might also feel exposed and/or vulnerable and need support navigating their own reactions.