Monday, April 1, 2013

Three Problems with NPS

Before anyone gets the wrong idea...

I'm a big fan of the Net Promoter Score (NPS) and of process improvement systems that use NPS as a primary indicator of customer relationship quality.  And, I'm a regular user of NPS feedback for various projects I undertake (both my own and customers').

For as long as NPS has been around, there have been criticisms of the metric and methodology.  In my opinion though, none of the issues critics present outweighs NPS' value to a business.  Having said that, NPS implementations seem to consistently experience certain problems.  In research I've seen and in forums I follow, people report problems with: consistently "closing the loop", understanding their internal distribution of respondents (promoters / detractors / passives), and acquiring organizational commitment to "acting" on feedback.  So, I thought I'd talk about those three issues and suggest ways to address them.

#1.  Consistently "Closing-the-Loop".

Almost every article ever written about NPS makes the point that closing the loop is critical to the success of the process.  Yet, it still doesn't happen for much of the NPS feedback businesses collect.  Businesses not "closing the loop" will always experience problems with NPS.  Solving the "loop-closing" problem doesn't have to be terribly difficult, especially for detractor feedback.  In an era of web surveys and e-mail responses, filtering responses (in a variety of ways) and responding to each piece of feedback is easy (and surprisingly affordable) if you have the right tools, processes and messages.
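To make "filtering responses and responding to each piece of feedback" concrete, here's a minimal sketch of automated feedback triage. Everything here (the `Feedback` class, `route_feedback`, the action names) is illustrative and assumed, not any particular vendor's API; the point is simply that every response gets mapped to a follow-up action, so nothing falls through the cracks.

```python
from dataclasses import dataclass

@dataclass
class Feedback:
    customer_id: str
    score: int    # 0-10 answer to the likelihood-to-recommend question
    comment: str

def route_feedback(fb: Feedback) -> str:
    """Pick a follow-up action for each response, so no feedback goes unanswered."""
    if fb.score <= 6:
        # Detractor: personal outreach from someone who owns the relationship
        return "escalate_to_account_owner"
    elif fb.score <= 8:
        # Passive: thank them and ask what would earn a 9 or 10
        return "send_thank_you_and_probe"
    else:
        # Promoter: amplify the goodwill
        return "invite_referral_or_review"
```

A real feedback management tool would attach templates, owners and deadlines to each action; the routing logic itself is this simple.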

One reason people find "closing the loop" difficult is their feedback tool itself.  Typically, "survey" tools are not feedback management systems.  If an NPS process is going to rely on a "survey tool" versus a feedback management tool (a good way to tell is that the product's name starts with the word "survey" or "question"), it's likely that your "loop closing" process is going to be ad-hoc at best.

A second tool-related issue with "closing the loop" is CRM integration.  Lots of EFM vendors base their loop-closing processes on CRM integration.  And, for the most part, it's an effective approach.  However, because it's based on integration, the loop-closing process also has to be integration based, meaning that should scenarios change (e.g. different conditions require different loop-closing responses), the CRM programming has to change and the feedback tool integration programming needs to change with it.  Both of these sets of changes cost money and take time.  So, they're often not made, and "loop closing" suffers for it.

In my view, basic "loop closing" is largely a technology issue for NPS processes.  If you have the right tools, you can pretty much always "close the loop".

#2.  Respondent distribution amongst NPS categories.

Businesses like to adopt standards.  With NPS, it's been no different.  When using NPS, it's a mistake to dogmatically assume that 7's & 8's are always passives.  In your company, "Passives" may actually be "5's", "6's" and "7's", or other combinations of scores.  There's always been some level of difference in detractor behavior (defection) amongst the various industries using NPS (think utilities, banks, cable companies, etc.).  In my opinion, most businesses are too dogmatic about implementing NPS and don't adjust their internal scoring to reflect customer behavior over time.

To illustrate, here's a quote from Fred Reichheld in a recent NPS-oriented LinkedIn forum: "if your prior estimate of the right category for a customer is passive---because they scored seven on LTR (likely to recommend) question--but then you observe that subsequent to their survey response, they referred three good customers, doubled their own purchases and complimented your service rep, then you really should recategorize them as a promoter".

The lesson: raw NPS scores are just indicators of a customer's end state.  They need to be matched up with other data, especially behavior data, in order to know whether a given customer (or group of customers) qualifies as a promoter, passive or detractor.
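That matching of survey score with behavior data can be sketched in a few lines. The function name, the behavior signals and the thresholds below are all illustrative assumptions; the idea is that the raw score only sets the starting category, and observed behavior can override it.

```python
def classify(score: int, referrals: int = 0, purchase_growth: float = 0.0) -> str:
    """Categorize a customer from their survey score plus observed behavior.

    Thresholds and behavior signals here are illustrative, not prescriptive.
    """
    # Start from the standard survey-score buckets...
    if score >= 9:
        category = "promoter"
    elif score >= 7:
        category = "passive"
    else:
        category = "detractor"
    # ...then let behavior override the raw score: a customer who refers
    # others and is buying more is acting like a promoter, whatever they
    # answered on the survey.
    if referrals >= 2 and purchase_growth > 0:
        category = "promoter"
    return category
```

In practice the behavior signals would come from your CRM or order history, not the survey tool, which is exactly why the two data sources need to be joined.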

About ten years ago, I participated in a project to define customer loyalty for an industry vertical of our customers.  NPS was a new concept at the time (so we didn't employ it), with Reichheld's first book only recently published.  We categorized customer loyalty along two dimensions: willingness to refer and willingness to "buy more".  Loyal customers ("Promoters") scored high on both dimensions.  The Reichheld comment above brought back some memories.  But, it's always been clear to me that loyalty is ultimately a behavior.

Solving the categorization issue is simply a matter of analysis.  If your "8's" exhibit referral and purchase behavior similar enough to your "9's" and "10's", categorize them as "promoters" and treat them that way (I have one customer who has always done just that).  If your detractors are just "0's" - "3's", act on them that way.  One of the advantages of NPS is that it uses an 11-point scale.  This makes it easier to adjust the "buckets" based on customer data analysis, or for industry or cultural differences, than it would be with a smaller scale.
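Adjustable buckets are easy to express in code. The sketch below computes NPS the standard way (% promoters minus % detractors) but takes the category boundaries as parameters, so once your analysis justifies it, shifting the cutoffs is a one-argument change. The parameter names and sample scores are mine, for illustration.

```python
def nps(scores, promoter_min=9, detractor_max=6):
    """Net Promoter Score (% promoters - % detractors) with movable buckets.

    Defaults are the standard buckets: 9-10 promoter, 7-8 passive,
    0-6 detractor.  Pass e.g. promoter_min=8 if your "8's" behave
    like promoters.
    """
    n = len(scores)
    promoters = sum(1 for s in scores if s >= promoter_min)
    detractors = sum(1 for s in scores if s <= detractor_max)
    return 100.0 * (promoters - detractors) / n

scores = [10, 9, 8, 7, 6, 3, 10]
standard = nps(scores)                   # standard buckets
adjusted = nps(scores, promoter_min=8)   # treat "8's" as promoters
```

With the sample scores above, widening the promoter bucket raises the score, which is exactly why any re-bucketing should be driven by behavior analysis rather than a desire for a better-looking number.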

#3.  Getting organizational commitment to "acting" on all feedback

This is the most challenging problem NPS users tend to have.  Typically, NPS processes are "owned" by a single business area.  Often it's a customer support function; other times it's marketing or even sales.  Any time NPS (an enterprise-level process) is owned by a single business area, acting on feedback that requires someone outside of that area to engage a customer is going to be a challenge and a place where the process can break down.  When the process breaks down, opportunities to "build" promoters get passed up or ignored, and issues can fester.  A good example of this type of situation is when a business changes its billing practices.  Finance organizations aren't often closely connected to sales and support organizations.  So, changes in billing or collections policies aren't often vetted by sales.  If a change in these policies is driving down NPS, that information has to get back to the finance department if the policy is going to be changed.  If finance doesn't see the effect of a policy on customer relationships, they aren't likely to change it.

In my last blog post, I talked about how some customer feedback can be categorized as "Not Obviously Actionable".  I should have stated it as "not obviously actionable to the business unit sending the survey".  In NPS surveys, there's always a bunch of feedback that isn't obviously actionable, either from the perspective of what to do or who should do it.  Sometimes this kind of feedback is present in "open answer" questions.  Sometimes it's because one or more loyalty drivers (product capability, billing practices, etc.) correlate highly with low NPS scores.

Solving this problem requires a management-level commitment to high quality customer relationships and a mechanism, call it a "Larger Loop", that integrates NPS feedback data with other kinds of data (behavior data in particular).  Analyses (and their visualizations) need to reach the appropriate company departments in near real time, so that those departments can see how their actions impact NPS feedback.

Clearly, there are more than three challenges that NPS practitioners face.  These are just three I have observed on more than one occasion.  Yet, NPS remains a great tool for understanding customers and what makes them tick.