Reducing Bias in CX Surveys for Accurate Decision-Making

The Gist

  • Address bias. Tackle “survivorship” bias by surveying customers at multiple journey points.
  • Minimize anchoring. Reduce anchoring effects by asking questions entailing uncertainty first. 
  • Evaluate promoters. Gain insights from promoters by following up on NPS high-scorers.

We rely heavily on the inferences we make based on survey-based data in shaping our decisions about product and service designs. Because CX is a multifaceted concept, it is virtually impossible to detect patterns for success with precision without the aid of data analytics.

However, when our data are plagued by hidden biases arising from the way we gather our customers’ feedback, not even the most complex analytic tools can help us see what is really going on. How can we fortify the accuracy and reliability of our data against the most common sources of bias in VoC surveys?

1. Understand How Survivorship Bias Affects Your Customer Data

Survivorship bias is one of the core sources of distortion in survey-based CX data. Surveys are typically triggered after a customer makes a purchase or a service request is fulfilled. Therefore, the feedback we gather is strictly confined to the customers who somehow manage to accomplish their goals within the journey we have designed for them.

Let’s say we’re tracking CX at a physical clothing store. Only the customers who make a purchase receive a survey and have the chance to respond to it. Those who suffer from poor CX during their visit, however, don’t make a purchase and leave the store without ever leaving a trace.

Or let’s say we are tracking the CX of a mobile app. Those who decide not to continue using the app, for whatever reason, are also less likely to respond to in-app or follow-up surveys. People who struggle to accomplish their goals either leave prematurely before even becoming a customer or have shorter customer life cycles, and hence have a drastically lower share of voice in survey results.

Is there value in the data gathered from such a biased sample? Of course there is! To the extent that this data is used strictly to extract insights about what our existing customers are happy or complaining about, it is truly useful. However, the moment we assume that it tells a bigger story about how well we designed the end-to-end customer journey, or what can be done to improve our conversion funnel, we fall into a very dangerous trap, because those who felt heavy friction along the journey have very little voice, if any at all, in this type of data.
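The distortion described above can be made concrete with a toy simulation. The sketch below is purely illustrative (the population size, satisfaction scale and the assumed purchase-probability relationship are all hypothetical, not from the article): if only purchasers are surveyed, and the likelihood of purchasing rises with satisfaction, the surveyed average ends up well above the true average across all visitors.

```python
import random

random.seed(42)

# Hypothetical population: 10,000 store visitors with true satisfaction
# drawn uniformly from a 1-5 scale.
visitors = [random.uniform(1, 5) for _ in range(10_000)]

def makes_purchase(satisfaction):
    # Assumed relationship: purchase probability rises linearly with
    # satisfaction, from 0 (score 1) to 1 (score 5).
    return random.random() < (satisfaction - 1) / 4

# Only purchasers receive (and are counted in) the survey.
surveyed = [s for s in visitors if makes_purchase(s)]

true_mean = sum(visitors) / len(visitors)
observed_mean = sum(surveyed) / len(surveyed)

print(f"True mean satisfaction (all visitors): {true_mean:.2f}")
print(f"Surveyed mean satisfaction (buyers):   {observed_mean:.2f}")
```

Under these assumptions the surveyed mean lands well above the population mean, even though nothing about the survey questions themselves is biased; the bias sits entirely in who gets asked.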

What can be done?

  • Do not wait for a successful check-out or a purchase to deliver a survey. Identify natural stops along the customer journey to gather additional feedback.
  • Triangulate VoC data with behavioral data to cross-validate insights. For instance, if you are tracking CSAT of a vehicle lease application with a follow-up survey, make sure to pair this data with the completion rate of the funnel that starts with the number of customers inquiring about financing options and ends with the number of completed applications.
  • Triangulate VoC data with VoE data to cross-validate insights. Do your VoC survey results reflect your employees’ sentiment around individual touchpoints or policies? Discrepancies between VoC and VoE data are a red flag for survivorship bias.
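The vehicle-lease example in the bullets above can be sketched as a simple triangulation check. All stage names and figures below are hypothetical placeholders, not numbers from the article; the point is the pattern of pairing survey scores with funnel completion.

```python
# Hypothetical funnel counts for a vehicle lease application journey.
funnel = {
    "financing_inquiries": 1_200,
    "applications_started": 640,
    "applications_completed": 410,
}

# CSAT responses come only from customers who completed the application.
csat_responses = [5, 4, 5, 3, 4, 5, 4]  # illustrative 1-5 scores

completion_rate = funnel["applications_completed"] / funnel["financing_inquiries"]
avg_csat = sum(csat_responses) / len(csat_responses)

print(f"Funnel completion rate:      {completion_rate:.0%}")
print(f"Average CSAT (completers):   {avg_csat:.1f} / 5")

# Strong survey scores paired with a leaky funnel suggest the survey
# sample only reflects customers who made it through the journey.
if avg_csat >= 4.0 and completion_rate < 0.5:
    print("Red flag: high CSAT from a low-completion funnel; "
          "investigate drop-off points for survivorship bias.")
```

Here the completers rate the experience highly, yet roughly two thirds of inquiring customers never finish; read together, the two signals tell a very different story than the survey alone.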


2. Look for Anchoring Effects in Customer Surveys

When making a decision, we are hardwired to rely heavily on the first piece of information offered to us. This cognitive bias is known as the “anchoring effect.” When responding to a survey, the first question can act as a cognitive anchor for our judgments about the subsequent questions. Let’s say that after spending an amazing afternoon at a restaurant, we received a survey. The first question was about how we liked the taste of the food, and we didn’t really like it.

Even though our overall experience might not have been severely impacted by the mediocre taste of the food, our judgment on the subsequent question about the likelihood of us recommending the restaurant to others would be influenced by our negative response to this first question. We simply (insufficiently) adjust our response away from the cognitive anchor that was set by the previous question.
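One practical way to detect this effect, sketched below under stated assumptions, is to randomize which question respondents see first and compare recommendation scores across the two orderings. The survey scores here are fabricated for illustration; a real check would use your own response data split by question order.

```python
import random

random.seed(7)

questions = ["taste", "recommend"]

def assign_order():
    # Randomize question order per respondent so no single question
    # systematically anchors the rest of the survey.
    order = questions[:]
    random.shuffle(order)
    return order

# Illustrative (fabricated) 0-10 recommendation scores, grouped by which
# question each respondent saw first.
recommend_scores = {
    "taste_first": [6, 5, 7, 6, 5, 6],
    "recommend_first": [8, 7, 9, 8, 7, 8],
}

means = {k: sum(v) / len(v) for k, v in recommend_scores.items()}
gap = means["recommend_first"] - means["taste_first"]

print(f"Mean recommendation by ordering: {means}")
print(f"Ordering gap: {gap:.2f}")
```

A sizeable gap between the two orderings is evidence that the taste question is anchoring the recommendation score; randomizing order spreads that influence evenly instead of letting one fixed first question skew every response.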
