Tuesday, May 3, 2016

Mobile / On-line delivery optimizes Transactional survey response

I've been working on transactional surveys with a new QuestBack customer for the last couple of months, where QuestBack is replacing another vendor's product. What's been very encouraging is the improvement in response rates QuestBack's mobile-adaptive surveys are receiving. The surveys we've implemented thus far are getting substantially higher response rates than their predecessors. The difference, in my opinion, is "device adaptive" survey forms.

Device adaptive simply means that QuestBack surveys present equally well on mobile, tablet, and standard laptop or desktop devices. Obviously, this allows respondents to reply from wherever they happen to be when they receive an invitation. It has absolutely increased response rates for this client.

How much better are QuestBack's device-adaptive surveys? Transactional surveys often receive low response rates, with 8-10% being typical. Mobile-only survey companies describe "mid-teens" response rates as high. On one survey using QuestBack, the customer is receiving over 20% response versus roughly 12% before; on another, they are approaching 30% versus 25% before.

QuestBack's device-adaptive, multi-channel surveys, supported by closed-loop follow-up, produce high response rates and provide detailed insights about customers. In fact, one client I do surveys for using QuestBack is receiving a nearly 50% response rate on a relationship survey delivered via e-mail. I've discussed this phenomenon in other posts (see http://close-the-loop.blogspot.com/2015/06/mobile-support-increases-survey.html for more information), but wanted to point out that device adaptability can have a powerful effect on transactional survey response rates too.

Stewart Nash
LinkedIn: www.LinkedIn.com/in/stewartnash/



Friday, April 15, 2016

3 Cautions with NPS Benchmarks

I'm a big fan of Net Promoter and have used NPS surveys for the last eight years, doing surveys for my clients and helping my QuestBack customers implement NPS surveys in their businesses. So this post isn't a "ding" on NPS, nor should it be construed as a reason not to use it. In fact, in my opinion, quite the contrary: almost every company benefits from NPS surveying. That said, on to the post....

Benchmarks are popular in management circles. They're used to defend performance versus competitors, compare performance against industry standards, and justify planned actions to improve the business. This isn't an argument against benchmarks; they have their place. I just think Net Promoter Score (NPS) should not be used to benchmark against an industry or competitors. Lots of folks use NPS benchmarks. I think they shouldn't, or at minimum should be really cautious about how they use them.

On its surface, NPS looks like a great candidate for a benchmark. It's based on a standard model. And successful businesses tend to have high NPS scores, make larger profits and retain customers longer than those with lower scores. So in theory it should work well as a benchmark. In practice, I believe it tends not to. Here's why:
  1. Customer Management Process. In my experience, NPS scores as given by survey respondents are largely about person-to-person relationships. In my own use of NPS I've found that scores are almost always driven by how well a key relationship is working. Of course, other aspects of a customer's relationship with a company contribute to NPS ratings, but they tend to be secondary; the key relationship tends to override the other stuff. This makes using NPS as a benchmark very tricky, because customer relationship management processes may differ substantially between companies. Consider, hypothetically, two competitors, Company A and Company B, where "A" has dedicated account and support people but charges more, and "B" has neither but charges less. Company A has high NPS ratings and Company B has lower ratings. Can we compare them both to the same NPS benchmark? Probably not. Unless CRM processes, product pricing and products/services are so similar as to be interchangeable, an NPS benchmark isn't insightful.
  2. Observed Promoter behavior and the definition of NPS for your company. This is a HUGE issue for understanding how to use NPS survey data. Unless companies correlate their NPS survey scales to observed promoter behaviors, they don't really have an optimal tool for using the data, and most don't realize this. Important promoter behaviors include referral activity, retention and openness to up-sell / cross-sell. For our hypothetical companies: "A" has very high retention for customers rating it a "10" on the NPS scale and very low retention for those at "6 and under". "B" has the same retention rate as Company A, but for customers giving them a "7 or higher", and low retention for "3 and under". Clearly the standard NPS survey scale needs adjustment for both of these companies in order for them to properly identify their promoters, passives and detractors (see the sketch after this list). Again, benchmarking either company's NPS score against a standard would be comparing apples to oranges.
  3. Effectiveness of NPS Survey follow-up process. In my observation, the effectiveness of closing-the-loop actions on NPS survey feedback varies widely among businesses. Some follow up on every survey response, some only on detractor responses, and some on none at all. Since "closing the loop" actions tend to improve NPS scores, benchmarking NPS scores where a loop-closing process is non-existent, or not comparable to the benchmark group's practices, means the scores are not comparable. In other words, as a benchmark NPS wouldn't be relevant, at least until loop-closing was done to the same extent and with the same effectiveness as in the benchmarked group. Most companies hold their feedback follow-up process data closely, so it's difficult to discern whether your follow-up process is comparable to the benchmark's. Another reason not to rely on NPS as a benchmark.
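To make point 2 concrete, here is a minimal sketch (Python, with hypothetical ratings and cutoffs) of how the same responses yield different NPS values once the promoter / detractor thresholds are calibrated to observed retention instead of the standard 9-10 / 0-6 bands:

```python
def nps(scores, promoter_min=9, detractor_max=6):
    """Net Promoter Score: % promoters minus % detractors.

    promoter_min / detractor_max default to the standard NPS bands,
    but can be recalibrated to whatever thresholds actually predict
    retention for your customers.
    """
    n = len(scores)
    promoters = sum(1 for s in scores if s >= promoter_min)
    detractors = sum(1 for s in scores if s <= detractor_max)
    return 100.0 * (promoters - detractors) / n

# Hypothetical responses, scored three ways.
ratings = [10, 9, 9, 8, 7, 7, 6, 5, 3, 10]

print(nps(ratings))                                    # standard bands: 10.0
print(nps(ratings, promoter_min=10, detractor_max=6))  # "Company A" calibration: -10.0
print(nps(ratings, promoter_min=7, detractor_max=3))   # "Company B" calibration: 60.0
```

The same ten responses produce three different scores, which is exactly why comparing either company's number to an industry benchmark built on the standard bands is misleading.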
In my opinion, NPS should be used as a benchmark, but only internally (i.e., against itself and over time). And even then, it should only be used as a benchmark once the definitions of promoters, passives and detractors are well understood in their NPS context for your customers.

Stewart Nash
www.linkedin.com/in/stewartnash



Friday, April 1, 2016

"ViewPoints" improve Text Analysis Usability

"It depends on your point of view". In debates or discussion this is a phrase used to suggest different interpretations are available. As human beings, of course, we know that the meaning of spoken words changes with context. With written words though, we often don't have the same ability to infer context. And, as anyone who's analyzed a set of verbatim comments can tell you, how words are interpreted matters a lot to the quality of the analysis.

At Etuma, we've always understood the need for "globalized" views in our analyses. In our product, these globalized "views" are called "Lexicons". Etuma offers a number of Lexicons for Voice of the Customer (VoC) and Voice of the Employee (VoE) across various industries (retail, e-commerce, air travel and others), for instance.

Recently though, Etuma has added a new capability for much more granular views of verbatim comments. We call these "ViewPoints". A ViewPoint is a specific topic subset within a Lexicon. Some examples of different Etuma "ViewPoints" are:
  • "VoC Retail, Customer Service. 
  • "VoC, Air Travel, Food Service.   
  • "VoC" e-commerce, purchase experience 
ViewPoints give Etuma customers the ability to quickly and easily "tune" their analyses so that text is mapped to topics according to the needs of text-analysis users, typically the end users of the data. Once a ViewPoint is implemented, users simply select it as a background variable for their report, and the data is automatically re-organized so that non-relevant topics are excluded from the analysis and relevant topics included, regardless of their overall prominence in the text stream being evaluated. In other words, without a great deal of effort, ViewPoints let Etuma users quickly see what they want or need to see in their feedback data to do their jobs better. A very useful capability, and one I am looking forward to implementing for customers.
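Etuma's internals aren't public, but conceptually a ViewPoint behaves like a topic filter layered on top of a Lexicon. A minimal sketch (Python; the topic names, keywords and mapping logic below are hypothetical, purely to illustrate the idea):

```python
# Hypothetical illustration of a ViewPoint as a topic subset of a Lexicon.
LEXICON_TOPICS = {
    "staff friendliness": ["friendly", "rude", "helpful"],
    "checkout speed":     ["queue", "line", "wait", "slow checkout"],
    "food quality":       ["meal", "snack", "stale", "tasty"],
    "seat comfort":       ["legroom", "cramped", "recline"],
}

# A ViewPoint picks out the topics one group of users cares about.
VIEWPOINT_CUSTOMER_SERVICE = {"staff friendliness", "checkout speed"}

def tag_comment(comment, viewpoint):
    """Return only the viewpoint's topics whose keywords appear in the comment."""
    text = comment.lower()
    return [
        topic
        for topic, keywords in LEXICON_TOPICS.items()
        if topic in viewpoint and any(kw in text for kw in keywords)
    ]

print(tag_comment("Staff were friendly but the queue was slow", VIEWPOINT_CUSTOMER_SERVICE))
# ['staff friendliness', 'checkout speed'] -- topics outside the ViewPoint never surface
```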

Learn more about Etuma's solution set at www.etuma.com

Stewart Nash
linkedin: www.linkedin.com/in/stewartnash



Tuesday, March 15, 2016

Surveys make Text Analysis Better

A few months back I wrote a post titled "Text Analysis makes Surveys Better" (click here to read the post). The genesis for that post was my perception that organizations needed a better way to collect customer feedback than long, involved surveys. Among other things, I made three main points:
  • Functional customer surveys (those short enough to have high response rates) rely upon open-answer comments for key insights into the customer experience.
  • Comment categorization and analysis are therefore critical to a successful process.
  • For low-volume processes (under 1,000 comments per month), analysis can be manual; in higher-volume surveys, automated verbatim analysis adds a lot of value.
As businesses increasingly employ transaction-based feedback processes, they are coming to rely almost entirely on customer comments for insights. Social media is a key driver of this phenomenon, as it promotes a "quick hit" type of process (i.e. select a star and make a comment). Some businesses have implemented single-question, transaction-based surveys using Net Promoter. Not surprisingly, text analysis tools are being used to gain insight from all these comment streams.

As a result, many businesses have moved away from "research"-oriented customer surveys, choosing instead to use single-question Net Promoter / customer satisfaction surveys with open-answer comment fields. In effect, these businesses have chosen to rely almost exclusively on verbatim feedback analysis for generating insights about their customers.

This kind of feedback management approach has the advantage of being simple to implement, and it can be effective for insight generation when feedback volumes are small. The NPS metric, or a satisfaction metric for that matter, provides base-level context for interpreting and analyzing the feedback. For instance, topics with negative sentiment in comments coming from detractors are generally assumed to have some impact on NPS scores. Where feedback volumes are small, time can be taken to validate the "truth" of that assumption. Without going into great depth here: in my experience, the things people talk about in their comments (topics) are often the same across NPS categories (i.e. promoters often experience many of the same issues that detractors do). So validating the "truth" associated with comments is quite important to building improved processes. NPS or CSAT scores by themselves are typically not enough to ensure this, as they don't provide enough context to the feedback.
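One practical way to validate that assumption is to cross-tabulate comment topics against NPS categories. A minimal sketch (Python, with hypothetical pre-tagged responses; in practice the topic tags would come from a text analysis tool):

```python
from collections import Counter

def nps_category(score):
    """Standard NPS bands: 9-10 promoter, 7-8 passive, 0-6 detractor."""
    if score >= 9:
        return "promoter"
    if score >= 7:
        return "passive"
    return "detractor"

# Hypothetical responses: (NPS score, topics tagged in the comment).
responses = [
    (10, ["delivery speed"]),
    (9,  ["delivery speed", "packaging"]),
    (8,  ["delivery speed"]),
    (4,  ["delivery speed", "support wait time"]),
    (2,  ["support wait time"]),
]

# Count topic mentions within each NPS category.
crosstab = {}
for score, topics in responses:
    cat = nps_category(score)
    crosstab.setdefault(cat, Counter()).update(topics)

for cat, counts in crosstab.items():
    print(cat, dict(counts))
# "delivery speed" shows up for promoters AND detractors -- the topic alone
# doesn't tell you whether it is driving the score up or down.
```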

However, when feedback volumes expand in different ways, the need for additional context around customer comments also expands. Some examples:

  • Differences in regional or country specific comments 
  • Operational differences about how customers are handled (i.e. which call center handled the customer) 
  • Does the same NPS or Satisfaction scale even apply across regions or countries?
It's easy to see that a simple feedback process could become problematic as comment volumes rise and interpretation complexity increases. Some things ameliorate these challenges, at least to a degree; automated text analysis solutions, for instance. Text analysis tools (www.etuma.com) deal quite effectively with high volumes of comments. And if there is background data behind the surveys (region or country, for example), these tools can use it to provide additional context and better analyses.
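As an illustration of that last point, here is a minimal sketch (Python, hypothetical data) of using a background variable such as region to split topic sentiment, so the same topic can be read in its regional context:

```python
from collections import defaultdict

# Hypothetical analyzed comments: each carries background data (region)
# plus a topic and sentiment produced by a text-analysis tool.
analyzed = [
    {"region": "EMEA", "topic": "delivery speed", "sentiment": -1},
    {"region": "EMEA", "topic": "delivery speed", "sentiment": -1},
    {"region": "APAC", "topic": "delivery speed", "sentiment": +1},
    {"region": "APAC", "topic": "packaging",      "sentiment": -1},
]

# Average sentiment per (region, topic) pair.
totals = defaultdict(lambda: [0, 0])  # (sum of sentiment, count)
for row in analyzed:
    key = (row["region"], row["topic"])
    totals[key][0] += row["sentiment"]
    totals[key][1] += 1

for (region, topic), (s, n) in sorted(totals.items()):
    print(f"{region:5} {topic:15} avg sentiment {s / n:+.2f}")
# "delivery speed" reads negative in EMEA but positive in APAC -- context
# that a global average would hide.
```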

But even in a scenario where automated text analysis is applied to single-question NPS surveys and background data is available, there is often a need for additional context in order to understand how best to take action on feedback. Some types of additional context include:

  • Expectations - What is reasonable vs. unreasonable in the customer's mind for any given challenge highlighted in their comments?
  • Alternatives - Are alternatives available to customers either from the business itself or competitors?  Are alternatives reasonable if available?
  • Costs - Are customers willing to absorb higher costs for improved processes?
  • Business opportunities - Would more customers actually recommend if problems or issues are better dealt with? Would they buy more? Or more often?
These are the types of contextual "truths" that must be learned via an interactive process with customers. Customer surveys (www.questback.com) are by far the easiest and lowest-cost way of getting this type of data.

The value of driving customer insight generation from customer feedback is, in my view, substantial. First, a lot of data becomes available to the insight-generation process because of the feedback process. This lets insight generation take the form of a short, easy follow-up survey to the initial feedback survey (which was itself short and easy). And with the automation available today via APIs, filtered data can emerge from the feedback process and be used to trigger insight generation, as sketched below.
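As a sketch of what such an API-driven trigger might look like (the endpoint, survey ID and field names here are hypothetical; this is not QuestBack's or Etuma's actual API):

```python
import json
from urllib import request

# Hypothetical endpoint of a survey platform's invitation API.
SURVEY_API = "https://api.example-survey-platform.com/v1/invitations"

def trigger_followup(feedback):
    """If analyzed feedback matches a filter, invite the customer to a
    short follow-up survey probing expectations, alternatives and costs."""
    is_detractor = feedback["nps_score"] <= 6
    mentions_support = "support wait time" in feedback["topics"]
    if not (is_detractor and mentions_support):
        return  # doesn't match the filter; no follow-up needed

    payload = json.dumps({
        "survey_id": "followup-support-expectations",  # hypothetical survey
        "email": feedback["email"],
    }).encode()
    req = request.Request(SURVEY_API, data=payload,
                          headers={"Content-Type": "application/json"})
    request.urlopen(req)  # fire the invitation

# Example:
# trigger_followup({"nps_score": 3, "topics": ["support wait time"],
#                   "email": "customer@example.com"})
```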

Of course, a process of automated feedback, automated text analysis and automated insight generation requires a single integrated system, or a group of them. The system(s) would also require some kind of analytic "back end" to help make sense of all the data. I am currently working with customers who are putting together this kind of optimized feedback-gathering / data-analysis / insight-generation process. The platforms my customers are using are relatively low cost and easy to use. So businesses that want to improve their processes by using more automation for feedback, analysis and insight can do so without breaking the bank or disrupting their operations.

At the end of the day, I find it fascinating how technology is changing the way businesses gather and analyze customer feedback and generate insights from it.

Stewart Nash
LinkedIn: https://www.linkedin.com/in/stewartnash



Thursday, October 29, 2015

Feedback Action Management makes QuestBack Essentials - Essential

Two years ago I wrote a post here about the importance of Action Management to customer feedback processes. It was titled "Turning customer feedback into action is the number one challenge for customer strategists." The post referenced some work done by Walker Information Systems and pointed to an article written by them (click here to read my earlier post and the link to Walker's article).

The point: Customer Feedback is a lot more useful if it is immediately actionable.

One of QuestBack Essentials' main advantages is that it makes customer feedback immediately actionable.
Kudos to QB for thinking that way as far back as 15 years ago. Their Essentials product uses a couple of different process mechanisms to make feedback immediately actionable. One of them is the equivalent of Walker's "Hot Alert" process; QuestBack calls it a "Notification": an e-mail automatically triggered to a given person based on criteria (a response profile) coming through the customer feedback instrument, which could be an e-mailed survey, a feedback form, a pop-up survey or any other QuestBack-created feedback instrument. Other mechanisms include routing all responses and manual inspection / selection of responses to be acted on. As a result, QB Essentials is very flexible in how it helps organizations create action on received feedback.
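QuestBack's internal implementation isn't published, but conceptually a Notification is a rule that matches a response profile and fires an e-mail. A minimal sketch in Python (the rule fields, addresses and mail relay are all hypothetical assumptions, not QuestBack's API):

```python
import smtplib
from email.message import EmailMessage

# Hypothetical notification rule: fire when a response matches this profile.
RULE = {
    "question": "nps_score",
    "max_value": 6,              # detractors only
    "notify": "account.manager@example.com",
}

def check_notification(response, rule=RULE):
    """Send a hot-alert e-mail if the survey response matches the rule."""
    if response.get(rule["question"], 11) > rule["max_value"]:
        return  # profile not matched; no alert

    msg = EmailMessage()
    msg["To"] = rule["notify"]
    msg["From"] = "feedback@example.com"
    msg["Subject"] = f"Hot alert: detractor response from {response['email']}"
    msg.set_content(response.get("comment", "(no comment)"))

    with smtplib.SMTP("localhost") as smtp:  # assumes a local mail relay
        smtp.send_message(msg)

# Example:
# check_notification({"nps_score": 2, "email": "c@example.com",
#                     "comment": "Support was too slow"})
```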

In any case, the main issue most organizations have when trying to implement action processes on feedback is determining the combination of "feedback to be actioned" and "how to action it". In other words, hot-alerting feedback for action only works if the "action takers" for that feedback are empowered to act on it in ways that help with the issue. This simply is not the case in most scenarios. So businesses continue to struggle with applying action management to their feedback, even when they have great tools like QuestBack.

So what's different now? Well, QuestBack is different. Action Management has been improved in QuestBack Essentials and combined with a Case Management capability that allows customer feedback "issues" to be highlighted, statused and actioned, all within the QuestBack Essentials platform. New issues can be highlighted, internally discussed and disseminated, decisions made and actions determined, all within short periods of time (potentially minutes or hours versus weeks or months today). To my mind, this is really cool stuff, and potentially very valuable to call centers, sales forces, HR teams, etc. Really, any group of people in a business who have to react to the concerns of another group of people. And it has "hot-alert" automated actioning too. This combination allows an organization to standardize action-taking on certain kinds of issues, while manually intervening on others and inspecting / organizing yet others, all at the same time.
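QuestBack's actual data model isn't public, so purely to illustrate the general pattern, here is a minimal sketch of a feedback case as a small state machine (all states and transitions hypothetical):

```python
# A generic sketch of a feedback case lifecycle; states and transitions
# are illustrative, not QuestBack's actual model.
TRANSITIONS = {
    "highlighted": {"in_discussion", "closed"},
    "in_discussion": {"action_decided", "closed"},
    "action_decided": {"actioned"},
    "actioned": {"closed"},
}

class FeedbackCase:
    def __init__(self, response_id):
        self.response_id = response_id
        self.status = "highlighted"
        self.history = [self.status]

    def move_to(self, new_status):
        """Advance the case, rejecting moves the workflow doesn't allow."""
        if new_status not in TRANSITIONS.get(self.status, set()):
            raise ValueError(f"cannot go from {self.status} to {new_status}")
        self.status = new_status
        self.history.append(new_status)

case = FeedbackCase(response_id=4711)
case.move_to("in_discussion")
case.move_to("action_decided")
case.move_to("actioned")
case.move_to("closed")
print(case.history)  # every step is auditable for reporting / dashboards
```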

Combined with QuestBack's dashboarding capabilities, this lets companies inexpensively seek feedback, organize and implement follow-up actions and processes, report on feedback and actions, and manage that feedback effectively based on the surveys they run in QuestBack Essentials.

I think anyone looking to implement a closed-loop customer feedback process today would do very well to consider QB Essentials and its action management tools.

Stewart Nash
www.linkedin.com/in/stewartnash/