Monday, April 1, 2013

Three Problems with NPS

Before anyone gets the wrong idea...

I'm a big fan of the Net Promoter Score (NPS) and of process improvement systems that use NPS as a primary indicator of customer relationship quality.  And, I'm a regular user of NPS feedback for various projects I undertake (both my own and customers').

For as long as NPS has been around, there have been criticisms of the metric and methodology.  In my opinion, though, none of the issues critics raise outweighs NPS' value to a business.  Having said that, NPS implementations do seem to run into certain problems consistently.  In research I've seen and in forums I follow, people report problems with: consistently "closing the loop", understanding their internal distribution of respondents (promoters / passives / detractors), and acquiring organizational commitment to "acting" on feedback.  So, I thought I'd talk about those three issues and suggest ways to address them.

#1.  Consistently "Closing the Loop".

Almost every article ever written about NPS makes the point that closing the loop is critical to the success of the process.  Yet it still doesn't happen for much of the NPS feedback businesses collect, and businesses that don't close the loop will always have problems with NPS.  Solving the loop-closing problem doesn't have to be terribly difficult, especially for detractor feedback.  In an era of web surveys and e-mail responses, filtering responses (in a variety of ways) and responding to each piece of feedback is easy (and surprisingly affordable) if you have the right tools, processes and messages.

One reason people find closing the loop difficult is the feedback tool itself.  Typical "survey" tools are not feedback management systems.  If an NPS process is going to rely on a survey tool rather than a feedback management tool (a good tell is that the product's name starts with the word "survey" or "question"), your loop-closing process is likely to be ad hoc at best.

A second tool-related issue with closing the loop is CRM integration.  Lots of EFM vendors base their loop-closing processes on CRM integration, and for the most part it's an effective approach.  However, because it's based on integration, the loop-closing process is tied to that integration: should scenarios change (i.e., different conditions require different loop-closing responses), the CRM programming has to change and the feedback tool's integration programming has to change with it.  Both sets of changes cost money and take time.  So they often don't get done, and loop closing suffers for it.

In my view, basic loop closing is largely a technology issue for NPS processes.  If you have the right tools, you can pretty much always close the loop.
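To make that concrete, here's a minimal sketch of the kind of automated loop closing I mean: filter the detractor responses out of a batch of feedback and send each one a follow-up e-mail.  The records, addresses and message text are all hypothetical, and a real EFM tool would handle this for you; the point is just how simple the core logic is.

```python
import smtplib
from email.message import EmailMessage

# Hypothetical feedback records; in practice these come from your EFM tool.
responses = [
    {"email": "pat@example.com", "score": 3, "comment": "Billing was confusing."},
    {"email": "lee@example.com", "score": 9, "comment": "Great support!"},
]

def follow_up_message(resp):
    """Build a loop-closing e-mail for a detractor response (score 0-6)."""
    msg = EmailMessage()
    msg["To"] = resp["email"]
    msg["From"] = "feedback@acme.example.com"  # hypothetical sender address
    msg["Subject"] = "Thank you for your feedback"
    msg.set_content(
        "You told us: {!r}\n"
        "We're sorry we fell short. A member of our team will contact "
        "you shortly to put things right.".format(resp["comment"])
    )
    return msg

# Filter the batch down to detractors and close the loop with each one.
detractors = [r for r in responses if r["score"] <= 6]
with smtplib.SMTP("localhost") as smtp:  # assumes a local mail relay
    for resp in detractors:
        smtp.send_message(follow_up_message(resp))
```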

#2.  Respondent distribution amongst NPS categories.

Businesses like to adopt standards, and with NPS it's been no different.  But when using NPS, it's a mistake to dogmatically assume that 7's and 8's are always passives.  In your company, "passives" may actually be 5's, 6's and 7's, or some other combination of scores.  There have always been differences in detractor behavior (defection) across the industries using NPS (think utilities, banks, cable companies, etc.).  In my opinion, most businesses are too dogmatic about implementing NPS and don't adjust their internal scoring to reflect customer behavior over time.

To illustrate, here's a quote from Fred Reichheld in a recent NPS-oriented LinkedIn forum: "if your prior estimate of the right category for a customer is passive---because they scored seven on LTR (likely to recommend) question--but then you observe that subsequent to their survey response, they referred three good customers, doubled their own purchases and complimented your service rep, then you really should recategorize them as a promoter".

The lesson: raw NPS scores are just indicators of end-state status.  They need to be matched up with other data, especially behavioral data, in order to know whether a given customer (or group of customers) qualifies as a promoter, passive or detractor.
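As a sketch of what matching scores with behavior data could look like in practice, here's a small Python example that starts from the standard survey buckets and lets observed behavior override the raw category, along the lines of Reichheld's example above.  The field names and thresholds are illustrative assumptions, not a prescribed rule.

```python
def survey_category(score):
    """Standard NPS buckets from the likely-to-recommend score."""
    if score >= 9:
        return "promoter"
    if score >= 7:
        return "passive"
    return "detractor"

def behavioral_category(score, referrals, purchase_growth):
    """Let observed behavior override the raw survey bucket.

    A "passive" who refers new customers and buys more is acting
    like a promoter, so categorize them that way.
    """
    category = survey_category(score)
    if category == "passive" and referrals >= 1 and purchase_growth > 0:
        return "promoter"
    return category

# The customer from Reichheld's example: scored 7, referred three
# customers, and doubled their own purchases.
print(behavioral_category(7, referrals=3, purchase_growth=1.0))  # promoter
```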

About ten years ago, I participated in a project to define customer loyalty for an industry vertical of our customers.  NPS was a new concept at the time (so we didn't employ it), with Reichheld's first book only recently published.  We categorized customer loyalty along two dimensions: willingness to refer and willingness to "buy more".  Loyal customers ("promoters") scored high on both dimensions.  The Reichheld comment above brought back some memories.  But it's always been clear to me that loyalty is ultimately a behavior.

Solving the categorization issue is simply a matter of analysis.  If your 8's exhibit referral and purchase behavior similar enough to your 9's and 10's, categorize them as promoters and treat them that way (I have one customer who has always done just that).  If your detractors are really just the 0's through 3's, act on them that way.  One of the advantages of NPS is its 11-point scale, which makes it easier to adjust the "buckets" based on customer data analysis, or for industry or cultural differences, than a smaller scale would.
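Here's a minimal sketch of that idea: an NPS calculation where the bucket boundaries are parameters rather than hard-coded constants, so an analysis like the one above can simply shift them.  The sample scores are made up.

```python
def nps(scores, promoter_min=9, detractor_max=6):
    """Net Promoter Score with adjustable bucket boundaries.

    The conventional buckets are promoters 9-10, passives 7-8 and
    detractors 0-6, but the thresholds are parameters so they can be
    tuned to what your own customers' behavior actually shows.
    """
    promoters = sum(1 for s in scores if s >= promoter_min)
    detractors = sum(1 for s in scores if s <= detractor_max)
    return 100.0 * (promoters - detractors) / len(scores)

scores = [10, 9, 8, 8, 7, 6, 4, 10, 9, 2]
print(nps(scores))                                   # standard buckets: 10.0
print(nps(scores, promoter_min=8, detractor_max=3))  # 8's as promoters, 0-3 as detractors: 50.0
```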

#3.  Getting organizational commitment to "acting" on all feedback

This is the most challenging problem NPS users tend to have.  Typically, NPS processes are "owned" by a single business area: often a customer support function, other times marketing or even sales.  Any time NPS (an enterprise-level process) is owned by a single business area, acting on feedback that requires someone outside of that area to engage a customer is going to be a challenge and a place where the process can break down.  When the process breaks down, opportunities to "build" promoters get passed up or ignored, and issues can fester.  A good example of this type of situation is when a business changes its billing practices.  Finance organizations often aren't closely connected to sales and support organizations, so changes in billing or collections policies often aren't vetted by sales.  If a change in these policies is driving down NPS, that information has to get back to the finance department if the policy is going to change.  If finance doesn't see the effect of a policy on customer relationships, they aren't likely to change it.

In my last blog post I talked about how some customer feedback can be categorized as "not obviously actionable".  I should have stated it as "not obviously actionable to the business unit sending the survey".  In NPS surveys, there's always a bunch of feedback that isn't obviously actionable, either in terms of what to do or who should do it.  Sometimes this kind of feedback shows up in "open answer" questions.  Sometimes it's because one or more loyalty drivers (product capability, billing practices, etc.) correlate highly with low NPS scores.

Solving the problem requires a management-level commitment to high-quality customer relationships and a mechanism, call it a "larger loop", that integrates NPS feedback with other kinds of data (behavioral data in particular).  Analyses (and their visualizations) need to flow in near real time to the appropriate company departments, so those departments can see how their actions affect NPS feedback.

Clearly, there are more than three challenges that NPS practitioners face.  These are just three I have observed on more than one occasion.  Yet, NPS remains a great tool for understanding customers and what makes them tick.  

Sunday, March 17, 2013

Fixing Customer Feedback's Non-Action Problem

Gartner research continues to show that "actioning" of customer feedback is still not a widely adopted business practice, with only about 15% of customer feedback receiving follow-up action.  Five years ago the percentage was roughly 10%.  Clearly, businesses have somehow been unable to make full use of the technology.  Being personally aware of companies that do take full and continuing advantage of enterprise feedback management (EFM), I know that today it generally requires a large, ongoing investment of time (both management and staff) and money, but that it is ultimately worthwhile.

Since EFM's value proposition has always been largely based on enabling value-creating actions to be taken on received feedback, the question is: why is "actioning" received feedback such a big challenge for businesses implementing EFM?  I think it's an important question because becoming "customer centric" is such a major goal for businesses today, and customer centricity relies largely on being able to act on issues and concerns customers express through feedback.

After several years of working with businesses implementing EFM, I think I understand the challenge.  I see the actionability of customer feedback as being of two kinds: the "immediately obvious" and the "not so immediately obvious".  Both kinds are typically represented in any batch of received customer feedback.

Immediately obvious feedback equates to an NPS "detractor" or a "very dissatisfied" customer in a customer survey, or to a problem "ticket" post, request for information, request for contact, etc. submitted via a web-site input form.  It's immediately obvious that some kind of follow-up needs to take place with a specific person.  Contacting the person and trying to "fix" the source of their dissatisfaction, or otherwise acting on their feedback, is operationally easy: it's a matter of getting data to the right people with instructions on how to follow up.  QuestBack, for instance, automates this process out-of-the-box.  Other tools do it through CRM integration.  The point is that it isn't difficult to achieve in most instances.

Not so immediately obvious feedback falls into the category of things like low ratings for product functionality, business practices, business partners, sales/support people and the like.  Taking action on these kinds of issues, and many others besides, requires cross-functional decision making and, more importantly, additional data.  For instance: is a product functionality issue the result of a poorly designed product, improperly trained users, new capabilities introduced by a competitor, or something else?  It usually isn't clear until more data analysis is performed.

Actioning the "not immediately obvious" kinds of customer feedback requires additional data in order to put the feedback into context and identify what the action should be.  CRM, ERP, HRM, finance or other systems are usually the source of this additional data.  Pulling data from multiple sources and doing the analyses typically takes time and resources, so when it is done, it's a "project" that someone has to budget for, staff and fund.  This overhead of performing the additional analyses and presenting the results explains why businesses largely fail to act on "not immediately obvious" customer feedback: it just takes too long to develop the analyses, present them and make decisions about how to act.
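As an illustration of what that contextualization can look like, here's a small sketch using pandas to join survey feedback with behavior data from a CRM extract and flag the detractors who deserve urgent attention.  The customer ids, fields and thresholds are all hypothetical.

```python
import pandas as pd

# Hypothetical extracts: survey feedback plus behavior data from a
# CRM or ERP system, keyed on a shared customer id.
feedback = pd.DataFrame({
    "customer_id": [101, 102, 103],
    "nps_score":   [4, 9, 6],
    "comment":     ["Hard to use", "Love it", "Billing issues"],
})
crm = pd.DataFrame({
    "customer_id":         [101, 102, 103],
    "annual_revenue":      [120_000, 45_000, 310_000],
    "support_tickets_90d": [7, 1, 12],
})

# Context turns a low score into something actionable: a detractor
# with high revenue and many recent tickets is an urgent case.
combined = feedback.merge(crm, on="customer_id")
urgent = combined[(combined.nps_score <= 6)
                  & (combined.annual_revenue > 100_000)
                  & (combined.support_tickets_90d > 5)]
print(urgent)
```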

Real-time Data Analysis and Visualization may solve the feedback non-action problem

Web-based data analysis and visualization tools have become popular over the last couple of years.  These solutions let a business pull together flows of customer feedback, behavior and other data (like benchmarks) and extract the key information from each into dashboards that present what a manager needs to see in order to determine action.

Analysis dashboards also allow data to flow to managers in virtually real time.  With web-services APIs into EFM, CRM, ERP and other systems, data can flow straight into the dashboard, making analyses rapidly visible to the correct audience.

In my opinion, these new data analysis and visualization tools will combine with EFM to ultimately "fix" EFM's non-action problem.  In all the companies I know where EFM is a success, a lot of work has gone into analyses that combine customer feedback with behavior, benchmarks and other data, but only with heavy I/T involvement, something that isn't needed with these products.

At QuestBack we already partner with a couple of data analysis and dashboard vendors and we may partner with more of them over time (as a reseller that won't be my call).  But, contextualizing "voice of the customer" using dashboards makes so much sense to me that I'm surprised more companies aren't looking for this capability as part of their EFM solutions.

Thursday, February 14, 2013

Lowering "Break Even" for justifying Text Analytics

In a world where most businesses doing customer and employee feedback still "code" their verbatim survey responses and other text feedback manually, the standard for "break even" on automated text analysis solutions generally seems to be on the order of 10,000 text items per month.

Why is 10,000 the number?  In my experience, I've seen many instances where smaller volumes would justify investment in an automated solution.  Yet, to a large degree, only very large businesses and government agencies with big flows of text-based feedback have adopted automated text analysis solutions.

I think there are two reasons for this.  First is the price of automated text analysis solutions, which typically have minimum costs of $100,000 per year.  So, for a business with lots of text to be coded, it only makes sense to invest in an automated solution when "people costs" exceed $100K per year.  The second reason is that people rarely do just verbatim coding in businesses today.  Typically, groups of people do the work in different departments as part of their regular jobs (VOC analyst, market research manager, etc.), so there's often no single FTE that can be "replaced" by an automated solution.  Only when the volume of feedback becomes overwhelming do businesses consider automating the analysis process.  By then, the costs of manual coding are large enough to justify a large investment.
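A quick back-of-the-envelope model shows where that 10,000-items-per-month figure can come from.  All the inputs below (coding time per item, loaded hourly rate) are illustrative assumptions; plug in your own numbers.

```python
# Rough break-even model: manual verbatim coding vs. an automated
# text analysis license. All figures are illustrative assumptions.
items_per_month = 10_000
minutes_per_item = 1.0       # assumed manual coding time per verbatim
loaded_hourly_rate = 50.0    # assumed fully loaded analyst cost, $/hour

annual_manual_cost = (items_per_month * 12
                      * (minutes_per_item / 60) * loaded_hourly_rate)
print(f"Manual coding: ${annual_manual_cost:,.0f}/year")  # $100,000/year

# At a $50K license, the break-even volume halves.
license_cost = 50_000
breakeven_items = license_cost / (12 * (minutes_per_item / 60)
                                  * loaded_hourly_rate)
print(f"Break-even at ${license_cost:,}: {breakeven_items:,.0f} items/month")  # 5,000
```

On these inputs, manual coding of 10,000 items per month costs exactly $100,000 per year, and a $50K license breaks even at 5,000 items per month.  Assume slower manual coding (say 2.5 minutes per item) and the $50K break-even drops to 2,000 items per month, the volume mentioned in the closing paragraph below.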

But what would happen if the annual software cost of an automated text analysis solution could be lowered to $50,000 per year?  I think the potential market for automated text analysis would become exponentially larger.  After all, in most businesses it's a lot easier to find half an FTE doing manual text analysis than it is to find a full FTE.

In my opinion, there are additional reasons to consider automating text analysis at volumes below 10K items per month.  One is the ability of an automated solution to identify new topics.  As someone who runs a number of feedback projects that employ survey-based open-answer questions, I regularly evaluate verbatim responses both manually and via automation.  Whenever I've used etuma360, the etuma analysis has identified topics I had not considered during my manual inspection.  And since people doing manual coding have a propensity to map all incoming feedback onto the existing coding structure and categories, manual processes will tend to miss new topics, while automated solutions will typically pick them up.  Valuing this capability is difficult, but it's something to consider when weighing text analysis and its cost benefit.

Etuma has a number of pricing plans that let businesses get into automated text analysis for less than the $100K/year price point.  I would think that anyone with 2,000 pieces of text feedback per month is a candidate for an etuma360 implementation on FTE considerations alone.

Tuesday, February 5, 2013

Surveying for Feedback/Response Action Management

Periodically I see discussions in articles and LinkedIn forums about the "Death of Surveys".  But, in my view, the on-line survey business is simply transforming from a focus on surveying for data collection to one of surveying for feedback and response action management (F/RAM).  This is particularly true, I think, in the case of relationship surveys (customer, partner, employee, alumni, union member, donor or "membership" types of surveys).  In short, where "relationships" exist between an entity and a population of people, something more than data collection is now necessary. 

In my view, surveying for relationship management purposes is growing because of social media, on-line chat and mobile device technologies, all of which help businesses collect huge amounts of customer data.  So much so that businesses are almost overwhelmed by it.  It's no coincidence that data analysis, "big data" and data storage vendors are doing well: all that data needs to be analyzed, correlated, cross-referenced and stored.  Yet none of it really prompts businesses to build better relationships with the people they interact with.  Somewhere, somehow, somebody has to ask customers how they feel in order to assess relationship quality.  If a business has lots of customers, a feedback/response action management survey is the best way to do that, because the feedback automatically propagates dialogue in an F/RAM process.

Feedback/response action management is a process many businesses are unfamiliar with.  It's a fair bit more complex than traditional market research.  It relies on customer data to guide how response action management should be implemented, and it necessitates a methodological approach (NPS, CSAT, CxM or something similar).  In addition, F/RAM requires that feedback scenarios be modeled, or at least thought through, so that appropriate responses can be formulated (i.e., who responds, and how, when a customer from country X, with product Y and issue Z, triggers a response action based on their survey feedback).
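To show what modeling feedback scenarios might look like, here's a hypothetical routing sketch: a list of rules that map a survey response, combined with customer data, to an owner and a follow-up action.  The fields, owners and rules are invented for illustration; in practice this logic would live inside your feedback platform or CRM.

```python
# A minimal sketch of scenario routing for an F/RAM process.
# Rules are checked in priority order; the first match wins.
RULES = [
    # (predicate, owner, action) -- all hypothetical
    (lambda r: r["score"] <= 6 and r.get("issue") == "billing",
     "finance-team@acme.example.com", "call within 24h"),
    (lambda r: r["score"] <= 6 and r.get("country") == "DE",
     "emea-support@acme.example.com", "e-mail in German within 48h"),
    (lambda r: r["score"] >= 9,
     "marketing@acme.example.com", "invite to reference program"),
]

def route(response):
    """Return the first matching (owner, action) for a survey response."""
    for predicate, owner, action in RULES:
        if predicate(response):
            return owner, action
    return None  # no follow-up scenario defined for this response

print(route({"score": 4, "issue": "billing", "country": "US"}))
# -> ('finance-team@acme.example.com', 'call within 24h')
```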

A number of on-line survey platforms today can implement an F/RAM process, though some are expensive to acquire.  My admittedly incomplete list of F/RAM-capable tool sets includes: QuestBack (all platforms), Vovici, Medallia, Allegiance and ConFirmIT.  ClickTools and KeySurveys, to my understanding, only implement F/RAM processes through CRM integration (and ClickTools only for SalesForce).  In my experience, almost all the other tools out there are primarily focused on just data collection and analysis.

In my experience, there are two critical capabilities a tool needs in order to implement an F/RAM process.  First, the tool needs to be able to trigger a real-time follow-up action based on a survey response, customer data or a combination of both.  Second, it has to be able to link customer data to the survey, in real time, at the respondent level.  Without these two capabilities, F/RAM processes require lots of I/T intervention to get survey responses to trigger actions at the respondent level.

Tuesday, January 15, 2013

Organizing on-line surveys for action taking

People love to give feedback.  But they hate to take surveys.

I see lots of surveys that start out with "please give us feedback" or "your opinion is important to us", followed by "click here to take our short survey".  After clicking, I get something that indicates the survey will take anywhere from 10 to 25 minutes to complete.  Like most people, I immediately quit the survey.  In my mind, any time commitment beyond 5 minutes makes me a research subject, which I didn't sign up for when I clicked on the survey link.

Because most surveys are designed to collect data and employ long questionnaires, lots of customer feedback never gets collected by the companies that want and need it.  Surprisingly to me, even many surveys asking the Net Promoter (likely to recommend) question aren't designed to generate immediate follow-up actions.

What companies should be doing is designing surveys that customers want to take and that have built-in triggering mechanisms for enabling responses to feedback.  In my experience, key constituencies want good relationships and will take time to give feedback when asked, provided the process is respectful of their time and provides a value add (better relationship) for them.  These things become easily achievable with short, follow-up supported surveys.

Feedback surveys should use short questionnaires with 10 or fewer questions presented and a 2-5 minute maximum time commitment.  They should only ask questions that are meaningful to the customer relationship (one reason I like Net Promoter).  They should never ask for information the company already has in a database somewhere.  And they should always include a follow-up process for everyone who takes the survey, even if that follow-up comes later and is general in nature.

How questions are asked should also be taken into consideration when doing feedback surveys.  Here's an example of a typical product-oriented satisfaction question designed to collect data:

[Image: a satisfaction rating question for ACME Company Product XYZ, with answer choices ranging from "Very dissatisfied" to "Very satisfied".]

Here is the same question designed for feedback:

[Image: the same satisfaction question, redesigned for feedback with follow-up branching.]

On the surface the two examples look very similar, except that here the question is followed by a more specific question based on the answer chosen.  If the answer chosen is Very dissatisfied, Somewhat dissatisfied or Neither dissatisfied nor satisfied, the customer gets:

You indicated you are less than satisfied with ACME Company Product XYZ.  Please tell us why.  We will contact you shortly by e-mail to follow up.


If the customer indicates Somewhat satisfied or Very satisfied, they get:

Please tell us what you like best about ACME Company Product XYZ.   We will contact you shortly by e-mail to follow up.

In addition to triggering a question branch, each set of answer alternatives in the above example triggers an alert or notification to someone to take follow-up action.  In QuestBack, we generate an e-mail to a designated person.  In other feedback management systems (and also in QuestBack, if needed), triggering is done through a CRM system.  Either way, the survey is optimized for feedback: no response or branching is triggered if Not Applicable is selected, and specific questions and triggers are set for specific answer alternatives.
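For readers who like to see the logic spelled out, here's a sketch of the branching and alerting described above.  The answer handling mirrors the example; the alert function is a stub standing in for whatever e-mail or CRM mechanism your feedback system actually provides, and the owner addresses are hypothetical.

```python
def alert(owner, respondent_email, answer):
    """Stub: a real system would send an e-mail or open a CRM task."""
    print(f"Notify {owner}: follow up with {respondent_email} ({answer})")

def handle_satisfaction_answer(answer, respondent_email):
    """Branch to a follow-up question and trigger an alert by answer."""
    dissatisfied = {"Very dissatisfied", "Somewhat dissatisfied",
                    "Neither dissatisfied nor satisfied"}
    satisfied = {"Somewhat satisfied", "Very satisfied"}

    if answer in dissatisfied:
        follow_up = ("You indicated you are less than satisfied with "
                     "ACME Company Product XYZ. Please tell us why.")
        alert("support-lead@acme.example.com", respondent_email, answer)
    elif answer in satisfied:
        follow_up = ("Please tell us what you like best about "
                     "ACME Company Product XYZ.")
        alert("account-team@acme.example.com", respondent_email, answer)
    else:
        return None  # "Not Applicable": no branch, no alert
    return follow_up

print(handle_satisfaction_answer("Very dissatisfied", "pat@example.com"))
```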

Good feedback-oriented survey processes should have at least a couple of trigger questions: one for loyalty or advocacy, one for general satisfaction or experience, and possibly one or more based on product or service attributes that can be boiled down to an actionable response.

Surveying key constituencies with a goal of creating dialogue vs. data is a trend not to be ignored.  As people get more mobile and more "Social", surveys will have to be more feedback oriented. And, designing surveys for follow up action is a great way to collect feedback, increase customer dialogue and ultimately build better and more persistent customer relationships.