Reducing Bias in CX Surveys for Accurate Decision-Making

The Gist
- Address bias. Tackle “survivorship” bias by surveying customers at multiple journey points.
- Minimize anchoring. Reduce anchoring effects by asking questions entailing uncertainty first.
- Evaluate promoters. Gain insights from promoters by following up on NPS high-scorers.
We rely heavily on the inferences we draw from survey data when shaping decisions about product and service design. Because CX is a multifaceted concept, it is virtually impossible to detect the patterns behind success with precision without the aid of data analytics.
However, when our data are plagued by hidden biases arising from the way we gather our customers’ feedback, not even the most sophisticated analytic tools can reveal what is really going on. How can we protect the accuracy and reliability of our data against the most common sources of bias in VoC surveys?
1. Understand How Survivorship Bias Affects Your Customer Data
Survivorship bias is one of the core sources of distortion in survey-based CX data. Surveys are typically triggered after a customer makes a purchase or a service request is fulfilled. Therefore, the feedback we gather is strictly confined to the customers who somehow manage to accomplish their goals within the journey we have designed for them.
Let’s say we’re tracking CX at a physical clothing store. Only the customers who make a purchase receive a survey and have the chance to respond to it. Those who suffer poor CX during their visit, however, don’t make a purchase and leave the store without a trace. Or suppose we’re tracking the CX of a mobile app: those who decide not to continue using the app, for whatever reason, are also less likely to respond to in-app or follow-up surveys. People who struggle to accomplish their goals either leave prematurely, before ever becoming customers, or have shorter customer life cycles, and hence a drastically lower share of voice in survey results.
Is there value in the data gathered from such a biased sample? Of course there is! To the extent that this data is used strictly to extract insights about what our existing customers are happy with or complaining about, it is truly useful. However, the moment we assume it tells a bigger story about how well we designed the end-to-end customer journey, or about what can be done to improve our conversion funnel, we step into a very dangerous trap, because those who felt heavy friction along the journey have little voice, if any at all, in this type of data.
What can be done?
- Do not wait for a successful check-out or a purchase to deliver a survey. Identify natural stops along the customer journey to gather additional feedback.
- Triangulate VoC data with behavioral data to cross-validate insights. For instance, if you are tracking CSAT of a vehicle lease application with a follow-up survey, pair this data with the completion rate of the funnel that starts with the number of customers inquiring about financing options and ends with the number of completed applications (see the sketch after this list).
- Triangulate VoC data with VoE data to cross-validate insights. Do your VoC survey results reflect your employees’ sentiment about individual touchpoints or policies? Discrepancies between VoC and VoE data are a red flag for survivorship bias.
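Below is a minimal sketch, in Python, of the first kind of triangulation. All numbers, field names and thresholds are hypothetical; the point is simply to compare what survey respondents say against what the full funnel actually did, and to flag the combination that signals survivorship bias: high satisfaction reported by a sample that excludes most of the journey.

```python
# A minimal sketch of triangulating VoC survey data with behavioral funnel
# data. All numbers, names and thresholds below are hypothetical.

# CSAT scores (1-5 scale) from survey respondents only, i.e., customers
# who completed the lease application and received the follow-up survey.
csat_scores = [5, 4, 5, 5, 4, 5, 4, 5]

# Behavioral funnel counts covering *every* customer, not just respondents.
funnel = {
    "financing_inquiries": 1200,
    "applications_started": 640,
    "applications_completed": 210,
}

avg_csat = sum(csat_scores) / len(csat_scores)
completion_rate = funnel["applications_completed"] / funnel["financing_inquiries"]

print(f"Average CSAT (respondents only): {avg_csat:.2f} / 5")
print(f"Funnel completion rate: {completion_rate:.1%}")

# High reported satisfaction combined with a low completion rate suggests
# the survey sample is dominated by "survivors" of the journey.
if avg_csat >= 4.0 and completion_rate < 0.5:
    print("Red flag: CSAT reflects only the minority who finished the funnel.")
```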
2. Look for Anchoring Effects in Customer Surveys
When making a decision, we are hardwired to rely heavily on the first piece of information offered to us, a cognitive bias known as the “anchoring effect.” When responding to a survey, the first question can act as a cognitive anchor for our judgments about the subsequent questions. Let’s say that, after spending an amazing afternoon at a restaurant, we received a survey. The first question asked how we liked the taste of the food, and we didn’t really like it.
Even though our overall experience might not have been severely impacted by the mediocre taste of the food, our judgment on the subsequent question about the likelihood of us recommending the restaurant to others would be influenced by our negative response to this first question. We simply (insufficiently) adjust our response away from the cognitive anchor that was set by the previous question.
If we analyze the resulting data collected via the aforementioned survey, we might mistakenly end up thinking the taste of the food is a significant driver of Net Promoter Score for this particular restaurant.
The best way to minimize anchoring bias is to ask the questions that entail more uncertainty, such as CSAT or NPS, before asking more concrete questions for which the respondent can easily recall the pertinent factual information, such as whether or not the respondent was greeted by an employee in the lobby. It is also good practice to systematically shuffle the order of questions measuring satisfaction with potentially related aspects of a service, as in the sketch below.
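As an illustration, here is a minimal Python sketch of both practices: the uncertainty-laden questions (NPS, CSAT) are pinned to the front, and the more concrete, potentially related questions are shuffled per respondent. The question wording and the seeding scheme are illustrative assumptions, not a prescription.

```python
import random

# A minimal sketch of ordering a survey to reduce anchoring effects.
# The question texts and the per-respondent seeding are hypothetical.

# Questions that entail more uncertainty go first, in a fixed position.
UNCERTAIN_FIRST = [
    "How likely are you to recommend us to a friend? (0-10)",  # NPS
    "Overall, how satisfied were you with your visit? (1-5)",  # CSAT
]

# Concrete, potentially related questions to be systematically shuffled.
CONCRETE_POOL = [
    "How did you like the taste of the food?",
    "Were you greeted by an employee in the lobby?",
    "How satisfied were you with the speed of service?",
]

def build_survey(respondent_id: int) -> list[str]:
    # Seeding the shuffle with the respondent ID makes the ordering
    # reproducible for analysis while varying it across respondents,
    # spreading any residual anchoring evenly over the question pool.
    rng = random.Random(respondent_id)
    shuffled = CONCRETE_POOL.copy()
    rng.shuffle(shuffled)
    return UNCERTAIN_FIRST + shuffled

for rid in (101, 102):
    print(rid, build_survey(rid))
```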
3. Follow Up on High-Scorers as Well: Met or Surpassed Your Customers’ Expectations?
On an NPS survey, when a customer selects a score of 6 or lower, we label that customer a detractor and follow up with either an open-ended “why” question or a set of predetermined items, the usual suspects for dissatisfaction. However, we rarely ask a follow-up question of positive scorers. This creates a negativity bias, in which we focus exclusively on what detractors say about our service or product.
In another article, I elaborated on how positive scores on self-report surveys measuring overall satisfaction or NPS don’t necessarily signal a positive feeling during an experience. The punchline of that article was that customers tend to report high satisfaction in the face of a merely standard service. However, the subtle nuance between being merely satisfied and being delighted is strategically important. Even a simple follow-up question with two choices, such as “met expectations” versus “surpassed expectations,” would add invaluable depth to our survey results. Asking for the resulting emotion on a range of possible positive emotions is another viable follow-up option.
If we can identify and segment our customers into different shades of promoters (instead of a single bulk of positive scorers), we can better understand what drives these nuances in the resulting positive evaluation and leverage these insights to improve our service and product designs.
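A minimal sketch of this segmentation, with hypothetical response records and follow-up wording, might look like the following:

```python
from collections import Counter

# A minimal sketch of splitting promoters into shades via a two-choice
# follow-up question. The records and field names below are hypothetical.

responses = [
    {"score": 10, "follow_up": "surpassed expectations"},
    {"score": 9,  "follow_up": "met expectations"},
    {"score": 9,  "follow_up": "surpassed expectations"},
    {"score": 7,  "follow_up": None},  # passive: no follow-up asked
    {"score": 3,  "follow_up": None},  # detractor
]

def segment(resp: dict) -> str:
    score = resp["score"]
    if score <= 6:
        return "detractor"
    if score <= 8:
        return "passive"
    # Promoters (9-10) are split into shades by the follow-up answer.
    if resp["follow_up"] == "surpassed expectations":
        return "delighted promoter"
    return "merely satisfied promoter"

print(Counter(segment(r) for r in responses))
# e.g. Counter({'delighted promoter': 2, 'merely satisfied promoter': 1,
#               'passive': 1, 'detractor': 1})
```

The key design choice is keeping the follow-up lightweight, a single forced choice stored alongside the score, so that segmenting promoters afterward is a one-line aggregation.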