
Voice of the Customer: How much are we really hearing?

How much of the customer’s voice are we hearing in our voice of the customer programs? Customer Experience (CX) has been a major focus of investment for organizations over the last ten years and is poised for continued growth. Driven by customers empowered by social media channels that amplify their praise and complaints, and by ever-expanding consumer choice, it’s good to be the customer these days. Despite this investment and focus, I’m consistently shocked by how much (or how little) of actual customers’ voices is being incorporated into customer feedback analysis. How, you may ask? Let me explain.

Customer feedback is naturally central to CX programs, and surveys are one of the primary means of capturing this feedback. In many cases, this comes in the form of customer satisfaction surveys, often a Net Promoter Score (NPS) survey. Additionally, many customer experience programs focus on cultivating customer empathy throughout the customer lifecycle, including product development, concept testing, and product feedback. All of these customer feedback surveys typically have one thing in common: they include at least one open-ended question. Just to be clear on terms (and you probably already know this), open-ended questions allow respondents to answer a survey question in their own words, without predetermined choices. By their nature, open ends provide the space for consumers to express themselves (within the limits of the character count, anyway).

A typical open end in a customer satisfaction survey might be phrased something like: 

“What is the primary reason for your score?” 


“Why did you provide that rating?” 

Unfortunately, a surprisingly high proportion of the feedback generated from these open-ended questions is never incorporated into CX analysis. There are several reasons for this. First, there’s the volume of data. Many brands are sending a feedback survey after every “transaction” with the customer. Following my last trip, for example, I received no fewer than five surveys: two from the airline (one for each flight!), one from the hotel, one from a restaurant, and several from apps I used along the way.

Here’s a confession (shh… don’t tell my insights friends): I didn’t respond to all of these surveys. But I did complete a few of them, and you can be sure I utilized the open ends, particularly related to the issue I had with my hotel room not being cleaned. Getting back to the point, you can imagine the volume of open-ended text data brands are receiving from these surveys – even if just a fraction of the audience responds.

The second reason so many of these responses go to waste is the nuance, variation, and complexity of human expression. We’ve invented many ways of saying the same or similar things, and even more ways of spelling them :-)! That makes analyzing open ends challenging, particularly at scale. As a result, many organizations only analyze a sample of the open-ended responses they receive and/or use rudimentary tools to generate word cloud visualizations.
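To make the limitation concrete, here is a minimal sketch of the word-cloud-style frequency count those rudimentary tools rely on (the responses and the theme word list are invented for illustration). Notice how one theme – room cleanliness – gets split across several variants, so no single word reflects the theme’s true weight:

```python
from collections import Counter
import re

# Hypothetical open-ended survey responses (illustrative only).
responses = [
    "My room was not cleaned before check-in.",
    "Housekeeping never came, the room was dirty.",
    "Loved the spotless room and friendly staff!",
    "Room cleanliness was a real problem.",
]

# Naive word-cloud-style analysis: lowercase everything, split on
# non-letters, and count raw word frequencies.
words = re.findall(r"[a-z']+", " ".join(responses).lower())
counts = Counter(words)

# "cleaned", "dirty", "spotless", and "cleanliness" all speak to the same
# theme, but a raw frequency count treats each as a separate token.
theme_variants = ["cleaned", "dirty", "spotless", "cleanliness"]
theme_total = sum(counts[w] for w in theme_variants)

print(counts.most_common(3))  # top tokens are filler words like "room", "was"
print(theme_total)            # -> 4 mentions, yet each variant appears only once
```

A word cloud built from these counts would surface “room” and “was” while the cleanliness theme, mentioned in every single response, stays fragmented and invisible.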

Put these two forces together, and you can quickly reach the conclusion that a meaningful percentage of the customer feedback brands receive through open-ended text is NOT being incorporated into the CX program. And we haven’t even considered other sources of feedback like product reviews and, of course, social media, all of which come in the form of unstructured text data. 

I don’t write this to cast aspersions. The spirit of CX is more than willing. But I know as a business leader that I would suffer from major CX FOMO if I couldn’t unlock the insights from a significant source of my customers’ voice. Companies are certainly able to learn much from closed-ended questions and even from the sample set of verbatims they may analyze, but may still wonder: what are we missing?

Going back to my personal travel experience for a moment, one thing I noticed about the surveys I responded to was the length. These surveys start out innocently enough, asking about likelihood to recommend, and fifteen minutes later you’ve finally completed the survey. I suspect the length is intended to better understand the “why” behind the rating (was it the airport lounge, the friendliness of the flight attendants, or the perceived value?), perhaps overcompensating for underutilized open ends. What if my “why” wasn’t provided as a choice, or there was a meaningful nuance that couldn’t be captured by the closed-ended questions? If a brand is not thoroughly analyzing the open ends, it’s likely to miss important factors impacting the customer experience.

Finally, open-ended questions are the best source for understanding the emotions being expressed by customers (and the corresponding sentiment derived from those emotions). As I recently wrote, empathy is a differentiator for leading brands, no doubt because, as AdAge recently noted: “Emotion has overtaken reason in driving brand choice.”

If customers are taking the time to complete our (potentially lengthy) NPS surveys, we owe it to those customers to listen to their voice, open ends and all. In today’s competitive, customer-centric markets, CX FOMO could lead to truly scary outcomes.


Schedule a demo

See how Canvs automates text analysis, saves time, and improves insights from open-ended text!

