Friday, December 2, 2016

Follow up Boosts Survey Response Rates

This is a topic I've written about several times over the last three to four years. It's been my contention that response rates for customer surveys are much higher when there is an expectation of follow-up action on the part of the company issuing the survey.

Clearly, there are ways to boost survey response rates that don't include follow-up. Survey length, survey design, number of reminders and availability of mobile device support all affect response rates. But, in my experience doing customer surveys, about the best response rate that can be expected for customer survey processes not employing direct follow-up is 25%. In other words, if a business surveying customers does everything else right and doesn't follow up, 25% is a reasonable expectation for maximum response. Even so, I expect few businesses attain that level of response very often.

In my experience, for businesses employing customer survey processes where follow-up is included, the maximum response level can be much higher than 25%.  I've worked with businesses for several years implementing customer survey processes built around follow-up. What I have seen is response rates that tend to rise over time (versus declining over time as is generally the norm in surveying). And, I routinely see response rates above 40%, with some even above 50%.

For me, the evidence is in: expectations of follow-up on the part of customers drive response rates higher over time. It would seem obvious that this would be the case. But, most businesses in my experience don't act on feedback often enough to give their customers this expectation.

The chart below shows response rates from one of my clients, for whom I've conducted eight surveys over the last five years. All the surveys used an NPS approach, employed what I've described above as "good" design practices and included multiple reminders plus follow-up. Follow-up was facilitated by the "Notifications" process embedded within the QuestBack Essentials platform I use to deliver the surveys (meaning that follow-up is triggered by response profile). You'll see the early surveys tended to have lower response rates and the later surveys higher response rates.

[Chart: response rates for Surveys 1-8, rising over time]
Two things have been going on with this particular client. First, customers have developed the expectation that their feedback is going to be followed up. Second, the business has actually implemented follow-up processes and responds to people giving feedback. Notice that in the early surveys, customers did not yet have the expectation of follow-up. And, frankly, the business really wasn't all that committed to following up, at least at first. But, between Survey 3 and Survey 4 this client began to face a lot more competition. They reacted by paying much more attention to survey responses, acting on them individually and thereby improving customer experience across the business.
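To make the trend concrete, here's a quick sketch of the response-rate arithmetic. The invitation and response counts below are hypothetical, not the client's actual data; they just illustrate the kind of rising trend described above.

```python
# Hypothetical illustration of a rising response-rate trend -- these
# invitation/response counts are made up, not the client's actual data.
# Response rate is simply responses divided by invitations.
surveys = [
    ("Survey 1", 1000, 210),
    ("Survey 2", 1000, 240),
    ("Survey 3", 1000, 260),
    ("Survey 4", 1000, 330),
    ("Survey 5", 1000, 380),
    ("Survey 6", 1000, 420),
    ("Survey 7", 1000, 460),
    ("Survey 8", 1000, 500),
]

rates = {name: responded / invited * 100 for name, invited, responded in surveys}
for name, rate in rates.items():
    print(f"{name}: {rate:.0f}% response")
```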

Beyond higher response rates, this business has improved customer retention and overall profitability, and it survived the entrance of major competition into its local market. In many ways it has become a stronger competitor by paying attention to customer experience through the survey feedback / follow-up process.

A last note: in this customer's current survey process, response rate is at 53% and still rising.

- Stewart Nash
LinkedIn: www.linkedin.com/in/stewartnash





Tuesday, November 29, 2016

Verbatim Analysis - A crucial technology to use in business

My friends at Etuma recently put out a post titled "9 Reasons why you shouldn't wait to implement a verbatim analysis solution." Click here to read the blog post in its entirety. I liked the article enough that I thought I should reiterate the points and expound upon the underlying need for actionable data that companies have today. Summarizing the Etuma post:

1. Leading companies in many industries have well-developed customer listening skills. Most customer listening remains survey driven, but it is increasingly becoming omni-channel (meaning surveys plus social media). Your competitors are learning and transforming how they react to the customer's voice. Your company needs to as well.

2. Customer needs are becoming harder to predict. You need to have data that can feed analytical and predictive analysis tools in order to more rapidly detect emerging trends. Verbatim feedback is one of the data sources predictive analysis relies upon.

3. Paying people to decipher and report on customer interactions produces expensive and inconsistent data. Feedback enters the call center from phone, email, web forms and chat logs. Once calls are transcribed, it's all unstructured open-text data and can be fed to verbatim analysis systems.

4. Front-line staff turnover is often quite high in contact centers - compounding the issues in #3 above by requiring constant training and monitoring to provide some consistency of results.

5. Many companies do not get customer feedback directly. When most of their products are sold indirectly, feedback is only as good as the filtering mechanisms in their distribution networks, i.e. usually not good. Verbatim feedback lets product vendors monitor third-party review and retail web sites and detect issues and trends without feedback being filtered by the distribution network.

6. New products and services are introduced constantly. Detecting the presence of a new product or its impact on existing offerings is a lot easier when that information is extracted from customer comment automatically.

7. Just like new products, new competitors are much easier to detect when data sources are being analyzed regularly.

8. Omni-channel marketing and sales makes the customer journey complex. Shopping has become more complicated. The customer journey can now involve many interactions with the company. Capturing those interactions and understanding them consistently and quickly is important and much easier with automated verbatim analysis.

9. Social media complicates communications and crisis management.  Getting quicker understanding of issues makes reacting to them a lot easier and the reactions can be much better planned.

Etuma makes the general point in the post that topic / issue detection and sentiment change detection are critical capabilities for organizations to have these days, especially where a company doesn't get direct access to feedback. The point I would make about the urgency of implementing text analysis capabilities is the "don't know what you don't know" factor: without tools to detect issues and sentiment changes, businesses don't know what they don't know.

In today's world, not knowing something important for any length of time tends to have associated costs. By the time a business learns what it needs to know about something, it may be too late to fix a problem, create a new product, add a new service or otherwise react to customer needs in effective ways.

The urgency of adopting automated text analysis solutions is clearly high.  Hopefully more businesses will do so sooner rather than later.








Tuesday, August 2, 2016

No Text Analysis = No Voice of the Customer

A bit of a provocative title to this post.  That said....

Having been part of the customer survey business for a number of years, I believed that quantitative customer surveys, and analyses of survey data, could provide a reasonable approximation of the customer's "voice". In particular, Net Promoter surveys did a good job of that. Over time, my opinion on the subject has changed, to the point where I think that quantitative customer surveys can no longer be exclusively relied on to approximate the "voice of the customer" (VOC). In my opinion there are two reasons why quantitative approaches don't work as well as they did: the necessity of concise surveys, and the increase in alternative non-quantitative channels being used by customers to provide feedback.

"Back in the day", when customers could be relied on to complete questionnaires with fairly large numbers of questions, and would do so in numbers large enough to give a reasonable sample, surveys could in fact approximate VOC. Today, though, customer surveys have to be VERY concise in order to generate reasonable response rates, and verbatim questions have to be substituted for the questions we used to ask but no longer can. Customer surveys have become so concise that many businesses today use surveys with a single question and a single verbatim option. With such short surveys I don't think it's possible to approximate VOC unless some sort of text analysis is being applied to the verbatim responses being received.
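For readers new to the mechanics, the quantitative side of that single question reduces to very simple arithmetic on the standard 0-10 Net Promoter scale (9-10 promoters, 7-8 passives, 0-6 detractors):

```python
def nps(scores):
    """Net Promoter Score on the standard 0-10 scale:
    % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

# 5 promoters, 3 passives (7-8), 2 detractors out of 10 responses -> NPS 30
print(nps([10, 10, 9, 9, 9, 8, 7, 7, 5, 3]))  # 30.0
```

The point of the post is that this number alone, without text analysis of the accompanying verbatim comment, is a thin slice of the customer's voice.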

Additionally, social media has been adopted by many businesses (particularly in retail) as a channel for customer interaction. Here again, verbatim comments are the mechanism customers use to provide opinions, make complaints, suggest changes, share experiences, etc. Without text analysis there's no easy way to aggregate and understand what they are talking about, which topics are trending, which are positive and which are negative.

Needless to say, for people in the VOC space, none of what I'm saying is a revelation. In fact, text analytics has been around for a while now. Customer surveys are shorter today and rely on verbatim responses more than ever before. Survey response rates are generally lower across the board than they've been in the past. All the conditions exist for a sea change in how businesses ascertain VOC. So, to me, it's clear that text analysis is something more and more businesses will need to adopt going forward.

For large businesses, with the necessary resources, implementing a combined social media, customer survey and text analytics based VOC process isn't difficult. It costs a lot of money, but ROI is good given the scale of their needs. Smaller businesses though have to consider lower cost and easier-to-implement approaches. Fortunately those exist, often as component based multi-vendor solutions. The key in these situations is easy-to-implement APIs so that customer feedback can flow through various channels into the text analysis tool and then on to customer dashboards or other visualization environments, as needed.

Voice of the Customer today without text analysis isn't representing the customer's voice very well. But text analysis is getting easier to do and lower in cost all the time. Businesses that don't yet use text analysis in their VOC process should consider doing so.


- Stewart Nash (www.linkedin.com/in/stewartnash)








Thursday, May 19, 2016

4 Reasons to try Etuma Text Analysis

It's been a while now since I last posted on Etuma's value proposition. Needless to say, since that time the folks at Etuma have made boatloads of product enhancements. Though I've always felt Etuma offered huge value to American businesses, I am more convinced of its value proposition today than ever before. For companies doing business in the Americas, Europe and other places where European languages prevail, Etuma can provide insights, enable better decision making and reduce the costs of customer experience analysis.

4 Reasons Etuma Text Analysis is really good for your business....

Reason #1. Most importantly, Etuma shows you what your customers are talking about, as well as how they feel about each topic. In today's world of text analytics this is, so to speak, table stakes. But Etuma "antes up", with multiple "Viewpoints" (industry and function categorization models) and automated topic / sentiment analysis. For those who don't know, doing topic and sentiment analysis together offers unique value because it helps pinpoint areas where problems exist, opportunities present themselves or change is occurring. Etuma's analytics allow views based on daily, weekly, monthly, quarterly or annual changes in topic sentiment across entire sets of verbatim comments, or for any filtered subset of the data.

Reason #2. Etuma is very user friendly. It doesn't require data scientists, data analysts, or other expensive experts to use the product in order to produce hugely valuable insights for your business. Once you feed data into Etuma, you can start analyzing results. This makes Etuma extremely cost effective from an operational standpoint. You don't need expensive consultants in order to create and use verbatim insights. Needless to say, this ability to avoid $200 per hour consultants saves money.  

Reason #3. Etuma's dashboard-like presentation of feedback results lets you share and act on insights rapidly and easily. For more in-depth analyses, Etuma's connectors to Tableau and other dashboard systems let you easily incorporate all your verbatim feedback into your business analyses. And once data is in Etuma, any report you create can be shared with dozens of colleagues simply by assigning them access to it. Reports, once set up, automatically update as new feedback comes into the system. Again, operationally, Etuma saves you lots of money because your reports don't have to be manipulated by data experts in order to provide value to colleagues.

Reason #4. Etuma is multi-language out-of-the-box. ZERO additional effort is required to have Etuma analyze verbatim feedback in any of 10 European languages, and it reports results in any of the 10 languages it supports. So, if your CX team has people in Germany, France, the US, Spain, etc., each can see their reports in the local language, even though feedback came through in any of the other nine languages. Any comment, in any analysis, can be immediately translated to the local language at the click of a button. Etuma's language support is enormously cost effective as it facilitates collaboration among CX staff in multiple countries and virtually eliminates the need for translation services in CX analysis.

Etuma is a low cost solution. Our customers typically pay well under 1 cent per verbatim comment analyzed.  

Etuma's Value Proposition - High quality topic / sentiment analysis in a user friendly package offering an operationally efficient implementation with low requirements for data or language experts. All in a low cost product.

Lastly and best of all, Etuma offers a free trial. You load some data, then start analyzing. It's that simple. If you'd like to try Etuma, just contact me.

Stewart Nash
Etuma USA
stewart@etuma.com
LinkedIn: www.linkedin.com/in/stewartnash






Tuesday, May 3, 2016

Mobile / On-line delivery optimizes Transactional survey response

I've been working on transactional surveys with a new QuestBack customer for the last couple of months, where QuestBack is replacing another vendor's product. What's been very encouraging is the response rate improvement QuestBack's mobile adaptive surveys are receiving. The surveys we've implemented thus far are receiving substantially greater response rates. The difference, in my opinion, is "device adaptive" survey forms.

Device adaptive simply means that QuestBack surveys present equally well on mobile, tablet or standard laptop / desktop devices. Obviously, this allows respondents to respond from wherever they happen to be when they receive an invitation. It has absolutely increased response rates for the client. 

How much better are QuestBack's device adaptive surveys? Transactional surveys often receive lower response rates, 8-10% being typical. Mobile only survey companies discuss "mid-teens" response rates as being high. On one survey using QuestBack, the customer is receiving over 20% response versus ~12% before. On another they are approaching 30% versus 25% before.    

QuestBack's device adaptive, multi-channel surveys with closed-loop follow-up produce high response rates and provide detailed insights about customers. In fact, one client I do surveys for using QuestBack is receiving a nearly 50% response rate on a relationship survey delivered via e-mail. I've discussed this phenomenon in other posts (see http://close-the-loop.blogspot.com/2015/06/mobile-support-increases-survey.html for more information), but wanted to point out that device adaptability can have a powerful effect on transactional survey response rates too.

Stewart Nash
LinkedIn: www.LinkedIn.com/in/stewartnash/



Friday, April 15, 2016

3 Cautions with NPS Benchmarks

I'm a big fan of Net Promoter and have used NPS surveys for the last eight years, doing surveys for my clients and helping my QuestBack customers implement NPS surveys in their businesses. So this post isn't a "ding" on NPS, nor should it be construed as a reason not to use it. In my opinion, quite the contrary: almost every company benefits from NPS surveying. That said, on to the post....

Benchmarks are popular in management circles. They're used to defend performance versus competitors, compare performance against industry standards and justify planned actions to improve the business. This isn't an argument against benchmarks; they have their place. I just think Net Promoter Score (NPS) should not be used to benchmark against industries or competitors. Lots of folks use NPS benchmarks. I think they shouldn't, or at minimum should be really cautious about how they use them.

On its surface, NPS looks like a great candidate for a benchmark. It's based on a standard model. And, successful businesses tend to have high NPS scores, make larger profits and retain customers longer than those with lower scores. So in theory, it should generally work well as a benchmark. I believe that, in practice, it tends not to. Here's why:
  1. Customer management process. In my experience, NPS scores as given by survey respondents are largely about person-to-person relationships. In my own use of NPS I've found that scores are almost always driven by how well a key relationship is working. Of course, other aspects of a customer's relationship with a company are contributing factors to NPS ratings, but they tend to be secondary; the key relationship tends to override the other stuff. This makes using NPS as a benchmark very tricky, because customer relationship management processes may differ substantially between companies. For example, take two hypothetical competitors, Company A and Company B, where "A" has dedicated account and support people but charges more money, and "B" has neither but charges less. Company A has high NPS ratings and Company B has lower ratings. Can we compare them both to the same NPS benchmark? Probably not. Unless CRM processes, product pricing and products/services are so similar as to be interchangeable, an NPS benchmark isn't insightful.
  2. Observed promoter behavior and the definition of NPS for your company. This is a HUGE issue for understanding how to use NPS survey data. Most companies don't understand that unless they correlate their NPS survey scales to observed promoter behaviors, they don't really have an optimal tool for utilizing it. Important promoter behaviors include referral activity, retention and openness to up-sell / cross-sell. For our hypothetical companies: "A" has very high retention for customers rating it a "10" on the NPS scale and very low retention for those at "6 and under". "B" has the same retention rate as Company A, but for customers giving them a "7 or higher", and low retention for 3's and under. Clearly the standard NPS survey scale needs adjustment for both of these companies in order for them to properly focus on their promoters, passives and detractors. Again, benchmarking either of these companies' NPS scores to a standard would be comparing apples to oranges.
  3. Effectiveness of the NPS survey follow-up process. In my observation, the effectiveness of closing-the-loop actions on NPS survey feedback varies widely among businesses. Some follow up on every survey response, some only on detractor responses and some on none at all. Since "closing the loop" actions tend to improve NPS scores, benchmarking NPS scores where a loop-closing process is non-existent, or not comparable to the benchmarked loop-closing practices, means scores are not comparable. In other words, as a benchmark NPS wouldn't be relevant, at least until loop-closing was done to the same extent and effectiveness as in the benchmarked group. Most companies hold their feedback follow-up process data closely, so it's difficult to discern whether your follow-up process is comparable to the benchmark. Another reason not to rely on NPS as a benchmark.
In my opinion, NPS should be used as a benchmark only internally (i.e. against itself and over time). And even then, it should only be used as a benchmark when the definitions of promoters, passives and detractors are well understood in their NPS context for your customers.
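One way to ground those definitions is to correlate scores with an observed promoter behavior such as retention. The sketch below uses made-up (score, retained) pairs; the idea is simply to tabulate retention by score and see where retention actually falls off for your customers, rather than assuming the standard 9-10 / 0-6 cut-offs.

```python
from collections import defaultdict

# Made-up (score, retained_after_12_months) history, for illustration only.
history = [
    (10, True), (10, True), (9, True), (8, True), (8, True),
    (7, True), (7, False), (6, False), (5, False), (3, False),
]

by_score = defaultdict(list)
for score, kept in history:
    by_score[score].append(kept)

# Retention rate per score: the band where retention drops is where YOUR
# promoter/detractor cut-offs actually sit, whatever the standard says.
for score in sorted(by_score, reverse=True):
    kept = by_score[score]
    print(f"score {score:2d}: {sum(kept) / len(kept):.0%} retained ({len(kept)} customers)")
```

With real data, a company like the hypothetical "B" above would see high retention persisting down to 7, and adjust its promoter band accordingly.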

Stewart Nash
www.linkedin.com/in/stewartnash



Friday, April 1, 2016

"ViewPoints" improve Text Analysis Usability

"It depends on your point of view". In debates or discussion this is a phrase used to suggest different interpretations are available. As human beings, of course, we know that the meaning of spoken words changes with context. With written words though, we often don't have the same ability to infer context. And, as anyone who's analyzed a set of verbatim comments can tell you, how words are interpreted matters a lot to the quality of the analysis.

At Etuma, we've always understood the need for "globalized" views in our analyses. In our product, these globalized "views" are called "Lexicons". Etuma offers a number of Lexicons for Voice of the Customer (VoC) and Voice of the Employee (VoE) in various industries (retail, e-commerce, air travel and others), for instance.

Recently though, Etuma has added a new capability for much more granular views of verbatim comments. We call these "ViewPoints". A ViewPoint is a specific topic subset within a Lexicon. Some examples of different Etuma "ViewPoints" are:
  • VoC, Retail, Customer Service
  • VoC, Air Travel, Food Service
  • VoC, e-commerce, Purchase Experience
ViewPoints give Etuma customers the ability to quickly and easily "tune" their analyses so that text is mapped to topics according to the needs of text analysis users, typically end users of data. Once a ViewPoint is implemented, users simply select it as a background variable for their report and the data is automatically re-organized so that non-relevant topics are excluded from the analysis and relevant topics included, regardless of their overall importance to the text stream being evaluated. In other words, without a great deal of effort, ViewPoints let Etuma users quickly see what they want or need to see in their feedback data to better do their jobs. A very useful capability, and one I am looking forward to implementing for customers.

Learn more about Etuma's solution set at www.etuma.com

Stewart Nash
linkedin: www.linkedin.com/in/stewartnash




Tuesday, March 15, 2016

Surveys make Text Analysis Better

A few months back I wrote a post titled: "Text Analysis makes Surveys Better" (click here to read the post). The genesis for the post was my perception that organizations needed a better way to collect customer feedback than just long and involved surveys. Among others, I made three main points in the post: 
  • Functional customer surveys (those short enough to have high response rates) rely upon open answer comments for key insights into the customer experience. 
  • Comment categorization and analysis therefore is critical to a successful process. 
  • For low volume processes (under 1,000 comments / month) analysis can be manual, but in higher volume surveys automated verbatim analysis adds a lot of value.
As businesses increasingly employ transaction-based feedback processes, they are coming to rely almost entirely on customer comments for insights. Social media is a key driver of this phenomenon, as it promotes a "quick hit" type of process (i.e. select a star and make a comment). Some businesses have implemented single-question, transaction-based surveys using Net Promoter. Not surprisingly, text analysis tools are being used to gain insight from all these comment streams.

As a result, many businesses have moved away from "research" oriented customer surveys, choosing instead to use single question Net Promoter / Customer Satisfaction surveys with open answer comment fields. In effect, these businesses have chosen to rely on verbatim feedback analysis, almost exclusively, for generating insights about their customers. 


This kind of feedback management approach has the advantage of being simple to implement, and it can be effective for insight generation when feedback volumes are small. The NPS metric, or a satisfaction metric for that matter, provides base-level context for feedback interpretation and analysis. For instance, topics with negative sentiment in comments coming from detractors are generally assumed to have some impact on NPS scores. Where feedback volumes are small, time can be taken to validate the "truth" of that assumption. Without going into lots of depth on this, in my experience the things people talk about in their comments (topics) are often the same across NPS categories (i.e. promoters often experience many of the same issues that detractors experience). So, validating the "truth" associated with comments is quite important to building improved processes. NPS or CSAT by themselves are typically not enough to ensure this, as they don't provide enough context to the feedback.
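A simple cross-tab makes that point visible. The (NPS category, topic) pairs below are hypothetical output from a text-analysis step that tags each comment with a topic; counting them shows the same topic turning up in more than one NPS category.

```python
from collections import Counter

# Hypothetical (NPS category, topic) pairs, as might come out of a
# text-analysis step that tags each verbatim comment with a topic.
tagged = [
    ("promoter", "delivery"), ("promoter", "pricing"),
    ("passive", "pricing"),
    ("detractor", "delivery"), ("detractor", "delivery"),
    ("detractor", "support"),
]

crosstab = Counter(tagged)
for (category, topic), n in sorted(crosstab.items()):
    print(f"{category:9s} {topic:9s} {n}")
# "delivery" shows up for promoters AND detractors -- the same topic can
# cut across NPS categories, which is why the score alone isn't enough.
```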


However, when feedback volumes expand in different ways the need for additional context to customer comments also expands. Some examples:  

  • Differences in regional or country specific comments 
  • Operational differences about how customers are handled (i.e. which call center handled the customer) 
  • Does the same NPS or Satisfaction scale even apply across regions or countries?
It's easy to see that a simple feedback process could become problematic when comment volumes rise and interpretation complexity increases. Some things ameliorate these challenges, at least to a degree. Automated text analysis solutions, for instance. Text analysis tools (www.etuma.com) deal quite effectively with high volumes of comments. And, if there is background data behind the surveys (region or country, for example), these tools can use the background data to provide additional context and better analyses.

But, even in a scenario where automated text analysis is applied to single-question NPS surveys, and background data is available, there is often a need for additional context in order to understand how to best take action on feedback.  Some types of additional context include: 

  • Expectations - What is reasonable vs. unreasonable in the customer's mind for any given challenge highlighted in their comments?
  • Alternatives - Are alternatives available to customers either from the business itself or competitors?  Are alternatives reasonable if available?
  • Costs - Are customers willing to absorb higher costs for improved processes?
  • Business opportunities - Would more customers actually recommend if problems or issues are better dealt with? Would they buy more? Or more often?
These are the types of contextual "truths" that must be learned via an interactive process with customers. Customer surveys (www.questback.com) are by far the easiest and lowest cost way of getting this type of data.  

The value added by driving customer insight generation from customer feedback is, in my view, substantial. First, a lot of data becomes available to the insight generation process because of the feedback process. This enables insight generation to take the form of a short, easy follow-up survey to the initial feedback survey (which was itself short and easy). With the automation available today via APIs, filtered data can emerge from the feedback process and be used to trigger insight generation.
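The trigger logic itself can be very small. Here is a minimal sketch of the idea: a rule inspects each incoming response and queues a follow-up survey when it matches a filter. The function name, field names and template name are all hypothetical; a real implementation would call whatever API your survey platform exposes.

```python
# Minimal sketch of API-triggered insight generation. send_followup_survey
# and the field/template names are hypothetical, for illustration only.
queued = []

def send_followup_survey(email, template):
    queued.append((email, template))  # stand-in for a real API call

def on_feedback(response):
    # Example filter: detractors who mention pricing get a deeper survey.
    if response["nps"] <= 6 and "pricing" in response["comment"].lower():
        send_followup_survey(response["email"], template="pricing-deep-dive")

on_feedback({"nps": 4, "comment": "Pricing feels unfair", "email": "a@example.com"})
on_feedback({"nps": 9, "comment": "Great service", "email": "b@example.com"})
print(queued)  # only the detractor's follow-up is queued
```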

Of course, a process of automated feedback, automated text analysis and automated insight generation requires a single integrated system, or a group of them. The system(s) would also require some kind of analytic "back end" to help make sense of all the data. I am currently working with customers who are putting together this kind of optimized feedback gathering / data analysis / insight generation process. The platforms my customers are using are relatively low cost and easy to use. So, businesses that want to improve their processes by using more automation for feedback, analysis and insight can do so without breaking the bank or disrupting their operations.

At the end of the day, I find it fascinating how technology is changing the way businesses gather and analyze customer feedback and generate insights from it.

Stewart Nash
LinkedIn: https://www.linkedin.com/in/stewartnash