Wednesday, December 11, 2013

An alternative to CRM based feedback - QuestBack Respondent Data

I am a big proponent of CRM Integrated Customer Feedback processes.

There is no question that customer feedback initiatives (especially in B2B) benefit greatly from CRM database integration. Just some of the benefits include:
  • Personalization of questionnaires, e-mail invitations, reminders and follow-ups
  • Optimized survey taking experience via data driven customization
    • "Piping" of individualized data into questions or answer alternatives
    • "Branching" the survey based on data (Customer has product X and not Y so only ask about X)
  • Shorter surveys and resultant higher response rates
  • Easier to implement and more precise follow-up processes
  • Much better reporting of results
Integrating customer surveys with CRM tends to be a win-win for both businesses doing the surveys and their customers who take the surveys. 
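
To make the piping and branching ideas above concrete, here is a minimal sketch in Python of how a CRM record can drive both. The field names, products and question wording are hypothetical, not QuestBack functionality; the point is simply that the customer's record determines what gets asked and how.

# Hypothetical CRM record; in a real project this would come from the CRM database.
crm_record = {
    "first_name": "Pat",
    "account_manager": "Alex Jones",
    "products_owned": ["Product X"],   # owns X but not Y
}

def build_survey(record):
    questions = []
    # Piping: individualized data embedded directly into the question text
    questions.append(
        f"{record['first_name']}, how satisfied are you with the support you "
        f"receive from {record['account_manager']}?"
    )
    # Branching: only ask about products the customer actually has
    for product in ("Product X", "Product Y"):
        if product in record["products_owned"]:
            questions.append(f"How satisfied are you with {product}?")
    return questions

for q in build_survey(crm_record):
    print(q)

The same record can also pre-fill answer alternatives, which is a big part of what keeps data-driven surveys short.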

But CRM-integrated feedback only covers a small part of a business's overall feedback needs

Anyone who has been involved with system integration projects knows that they can be costly, that integration can limit an application's functionality, and that integrations carry a maintenance burden (like any other software).  In practice, these issues limit the use of CRM-integrated feedback to only the most critical feedback applications, such as transactional Net Promoter or customer satisfaction.

Since most feedback in most businesses is not about NPS or CSAT, feedback applications typically have to operate without the benefit of CRM integration.  Bigger businesses realize this, of course, and they organize many of their more regular feedback processes around customer / user "panels" or "communities".  Product user panels are particularly prevalent in software companies, for instance.  But panels and communities also generally require significant resources and, as a result, aren't practical for many businesses.

Importing data as an alternative.

Without a budget for integration, user panels and the people to operate those systems, businesses can be forced into doing what I call "bad feedback" projects.  "Bad" projects feature surveys that ask too many questions (often seeking data the business already possesses), aren't personalized or optimized for the customer, don't trigger actions based on feedback, and so on.  These projects produce feedback with lots of gaps, suffer from low response rates, and make follow-up activities difficult to organize.

In most of my customer feedback projects I use an import process to embed customer data into my surveys.  The products I work with (QuestBack Ask/Act and QuestBack EasyResearch) both let me import substantial amounts of customer data into my surveys.  They both also integrate with CRM systems.  QuestBack calls imported information "Respondent data".  I have found Respondent data to be a very powerful and easy-to-use capability that gets me the benefits of CRM integration without integration's costs.

QuestBack Respondent Data

Respondent data lets me do most things in a survey that integration would let me do. With it I can:
  • Personalize invitations, reminders and follow up notifications
  • Embed imported data into follow ups, allowing people to respond quickly without need for research
  • Optimize question flow and user experience
  • Create multiple and distinct follow up processes from the same survey instrument
  • Create "prototype" feedback / follow up processes that can then be automated through integration.
  • Work with multiple CRM data sources concurrently
With Respondent data I am able to help customers quickly attain their immediate feedback management objectives while helping them determine what should be automated and how best to deploy an automated solution.  Customers get feedback benefits quickly, evolve into more sophisticated feedback uses, and then move to a database-integrated feedback process in a very deliberate manner.
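
As a rough illustration of the import-based approach (and not of QuestBack's actual import format, which I won't reproduce here), here is a small Python sketch that reads respondent data and mail-merges it into a personalized invitation. The columns and the invitation text are invented for the example.

import csv
import io

# Hypothetical respondent-data export; in practice this comes from CRM as a file.
SAMPLE = """first_name,company,product,account_manager
Dana,ACME Corp,Product X,Alex Jones
Lee,Globex,Product Y,Sam Lee
"""

INVITATION = (
    "Dear {first_name},\n"
    "As a {company} customer using {product}, we would value your feedback. "
    "The survey takes about 3 minutes, and {account_manager} will follow up "
    "personally on anything you raise.\n"
)

for row in csv.DictReader(io.StringIO(SAMPLE)):
    # Each row is the "respondent data" used to personalize the invitation
    # and, later, the survey questions and follow-up notifications.
    print(INVITATION.format(**row))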

At the end of the day, CRM integrated feedback is great to have. But by using QuestBack Respondent data first, integration can cost less and feedback processes can be more effective sooner, in my opinion.

Stewart Nash
www.linkedin.com/in/stewartnash






Wednesday, October 2, 2013

Using Net Promoter to turn Feedback into Action


Ask and Act - a new approach to NPS

As readers of this blog know, I'm a big fan of the Net Promoter methodology for using customer feedback to intelligently manage customer relationships, build customer loyalty and grow new business.  Readers here also know that I sell and support QuestBack's feedback management tools. 

Just for the record, I was a fan of NPS a long time before becoming a QuestBack reseller. 

Having said that, what's always appealed to me about QuestBack is its ability to set up customer surveys with real-time follow-up processes for Detractors, Passives and Promoters, without requiring CRM integration.  Until last week, though, setting up NPS categories inside QuestBack was something of a manual process.  And, reporting NPS relied on similar use of manually set-up (but automatically run) filters on the response data.

QuestBack has changed all that with their latest release of the product. QuestBack's new version now does some really useful things when used for NPS surveys.  First, it implements the "Likely to recommend" question as a special question type.  This implementation lets QuestBack's analytics, notifications (automated follow-up) and reporting tools automatically set themselves up for follow-up, data analysis and reporting based on NPS category.

Other features include: automated category based question branching (so different questions can easily be asked of detractors, passives or promoters), automated report setup for secure viewing, and inclusion of data from multiple survey sources in the NPS calculation.

A capability that QuestBack has implemented, and which I believe will be really useful, is the ability to redefine how survey respondents fit into NPS categories.  In my experience, many organizations, for different reasons, need to define their NPS categories differently than the standard scale ("0-6 = Detractor", "7-8 = Passive", "9-10 = Promoter").  Reichheld pointed out in a recent LinkedIn forum post that if a customer gives an "8" on an NPS survey but otherwise behaves like a "promoter", that customer is a "promoter".  If enough "8's" behave like promoters, your NPS scale needs to change to reflect that reality.  QuestBack's new NPS tools let you do this while maintaining all of the other automation it implements for follow-up, analytics and reporting.
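
To show what a configurable categorization means in practice, here is a minimal Python sketch of an NPS calculation with adjustable category boundaries. It is illustrative only, not QuestBack's implementation, and the sample scores are invented.

def nps(scores, promoter_min=9, passive_min=7):
    """Return (NPS, category counts) for a list of 0-10 'likely to recommend' scores."""
    counts = {"promoter": 0, "passive": 0, "detractor": 0}
    for s in scores:
        if s >= promoter_min:
            counts["promoter"] += 1
        elif s >= passive_min:
            counts["passive"] += 1
        else:
            counts["detractor"] += 1
    score = 100.0 * (counts["promoter"] - counts["detractor"]) / len(scores)
    return round(score, 1), counts

responses = [10, 9, 8, 8, 7, 6, 5, 10, 9, 3]
print(nps(responses))                   # standard 9-10 / 7-8 / 0-6 categories
print(nps(responses, promoter_min=8))   # treat 8s as promoters, as discussed above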

I don't often write about QuestBack on this blog.  It's always been my belief that providing more general information about "closed loop" customer feedback processes would be more interesting and valuable to readers.  But, so many companies still struggle with effective closed loop feedback processes, even after they adopt NPS, that I thought writing about an easy, affordable and powerful new way to implement NPS would be, at a minimum, interesting to people. 

Stewart Nash











Friday, September 20, 2013

Turning customer feedback into action is the number one challenge for customer strategists


"Turning customer feedback into action is the number one challenge for customer strategists." - Walker Information, Inc.

Anyone who has visited this blog knows that I've written numerous posts about the value-add of alert-based survey follow-up processes, and that QuestBack's Ask & Act toolset supports implementation of rules-based online survey follow-up processes.

I don't normally highlight marketing materials distributed by competitors (even if they aren't really competitors). Walker and QuestBack are both in the business of helping companies take effective action on customer feedback using online surveys, analytics, follow-up, reporting, etc.  In practice, though, I have never really found myself selling against them.  Our solution methodologies and price points are very different.

Walker recently released a marketing piece that I thought worthy of writing about.  It's a two-page blurb about one of their customers who received great value from implementing a "hot alert" process, which let the client follow up with customer survey respondents who reported problems or issues, and also identify and act on new sales leads from the data.  The blurb states that, through this hot alert process, "In one year they (Walker's customer) were able to convert thousands of sales leads into millions in revenue."

For anyone interested, the Walker piece is titled "Taking Action Produces Big Pay-Off".  The Walker site is: www.walkerinfo.com


At QuestBack, our mantra has always been "ASK & ACT".  We've been talking about this topic for almost ten years.  We have customers who've been doing alert based follow up for at least that long.  And, it works.  This kind of recognition by another vendor like Walker is great news for us at QuestBack because it's another validation of our approach. 

Since QuestBack solutions implement alert-based survey follow-up processes at a fraction of the cost of a Walker-implemented process, I would think this kind of publicity will be good for business going forward.

Stewart Nash
s.nash@questback.com
LinkedIn: www.linkedin.com/in/stewartnash/




 

Thursday, August 1, 2013

Analyzing Chat Logs for Topics and Sentiment

I've recently been working with a client who uses online chat as a sales facilitation tool.  Sales chats as a use case for text analysis fascinated me from the beginning because the sheer number of people who shop via the internet is so vast and the number of companies using sales chat is growing so rapidly. If Etuma360 could do a good job of analyzing sales chats, a lot of companies might be interested in Etuma's solution.

The client knew that they were having thousands of chat dialogs every month, knew that some sales agents were better at selling via chat than others, and knew that some agents had many more chat dialogs than other agents.  But, without extensively studying all the chats, there was no way to know what topics were being discussed, whether more successful agents chatted about different things than less successful agents, how customers felt about the different issues being discussed, whether repeat customers dialoged differently than new customers, and so on.

After running a set of the client's data (several thousand chat dialogues) it was clear that Etuma was going to be able to help the client with both topic and sentiment readings on their chat data.  And, because Etuma can take background variables like agent number, region, website, or other data, its analysis was going to be able to tell the client which topics and sentiments were expressed for successful versus less successful agents, as there were clear differences.  Some other valuable insights came out of the analysis, including certain topics that started trending positively or negatively at different points in time.  As chats are going on constantly, picking up newly trending topics, whether positive or negative, gives the client real-time insight into issues that might be driving business.
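
For readers curious about the mechanics, here is an illustrative Python sketch (my own, not Etuma's algorithm) of how trending topics can be flagged once each chat has been tagged with a topic and a sentiment: count mentions per period and flag topics that grow sharply. The tagged data below is invented.

from collections import Counter

tagged_chats = [
    {"week": "2013-W30", "topic": "delivery time", "sentiment": "negative"},
    {"week": "2013-W30", "topic": "pricing",       "sentiment": "neutral"},
    {"week": "2013-W31", "topic": "delivery time", "sentiment": "negative"},
    {"week": "2013-W31", "topic": "delivery time", "sentiment": "negative"},
    {"week": "2013-W31", "topic": "pricing",       "sentiment": "positive"},
]

def trending(chats, prev_week, this_week, factor=2.0):
    prev = Counter(c["topic"] for c in chats if c["week"] == prev_week)
    curr = Counter(c["topic"] for c in chats if c["week"] == this_week)
    # Flag topics whose mention count grew by at least `factor` week over week;
    # unseen topics get a small prior so brand-new topics are flagged too.
    return [t for t, n in curr.items() if n >= factor * prev.get(t, 0.5)]

print(trending(tagged_chats, "2013-W30", "2013-W31"))   # ['delivery time']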

Lastly, the topics Etuma identified were interesting in and of themselves.  Being able to see sentiment-rated chat topics by itself showed the client what its customers were interested in, what they felt positive or negative about, and how those topics related to sales.

All in all, a very interesting exercise, with the takeaway being that analyzing sales chats is a valuable thing to do, and one that is important to do well.

Stewart Nash
www.linkedin.com/in/stewartnash

Friday, July 26, 2013

More on Feedback for Improving Sales Process Effectiveness



In my last blog post I talked about different ways of employing lead, prospect and customer feedback to improve understanding of customer needs (defined here as including leads and prospects too).  One of the points I made in the post was that studies show business buyers are often 60% of the way through the buying process by the time they contact you.  Salesforce recently blogged about this phenomenon in a post titled: "Why is Your Sales Message Irrelevant? (And 5 Other Questions to Close Deals)".   Article link here:  http://blogs.salesforce.com/company/2013/07/why-is-your-sales-message-irrelevant-5-big-questions.html.

I wanted to share their perspective, as I think it buttresses my thesis on why sales organizations should be acquiring structured feedback on leads, prospects and customers.  An excerpt follows:

"Your sales message is irrelevant. Today's customers are better informed and more connected than ever before and to sell effectively, your message needs to be tailored. And your sales team--as well as your entire organization--needs to be plugged into what matters most to your customer (emphasis mine).  According to CEB, “Today’s business buyers do not contact suppliers directly until 57 percent of the purchase process is complete.That means for nearly two thirds of the buying process, your customers are out in the ether: Forming opinions, learning technical specifications, building requirements lists.”

Today’s informed customer presents greater challenges, as well as exciting opportunities.

Traditionally, salespeople have relied on call scripts and data sheets. In today’s world, the empowered customer demands a tailored message as relevant to their needs as the results of a Google search.  But to improve sales performance, your salespeople must understand their prospects and customers including what they know (emphasis mine), and provide them with the best customer experience during the buying process--it’s a tall order!

The demands of the customer experience require tools that allow your reps to be in tune with the market, your customer’s buying profile, and the tactics to win."

--------------------------------------------------------------------------------------------------------------



My primary point in my earlier post was that sales organizations need feedback processes (illustrated above) in place to quickly assess and distribute "customer understanding" to sales reps.  And those processes should tie directly to the work individual sales people are tasked with.  For instance, individual results of a lead qualification survey should be delivered directly to sales reps making phone calls - prior to those calls being made.  The lead qualification data the survey provides would allow sales reps to better position themselves as consultative sellers, have more individualized customer knowledge at hand (making calls more efficient and effective) and to be more effective users of customer time in the selling process.
----------------------------------------------------------------------------------------------------------------

Stewart Nash
www.linkedin.com/in/stewartnash/

 

Tuesday, July 16, 2013

7 Ways Sales Process Feedback Can Help Sellers Sell


Customer loyalty research (NPS in particular) has definitively shown that soliciting and acting on customer feedback increases loyalty and retention rates.  The results include more sales and more profitable sales.

I contend that by applying the same type of feedback management principles (at QuestBack we call it Ask / Act), selling processes can be improved such that selling in all phases of the sales cycle is more effective.  This would mean higher rates of leads generated per suspect, higher rates of qualified leads per lead, higher rates of prospects converted from qualified leads, and higher rates of customer acquisitions per prospect.  I've listed seven different areas where this kind of sales process feedback may offer benefits, though there are potentially more.
------------------------------------------------------------------------------------------

In my experience, the biggest flaw in sales processes is their reliance on sales people to record their activities in a CRM system.  Since sales people largely do only the required minimum data entry, at every sales process stage some customer insight is missing, inaccurate, out of date, or a combination thereof.  This causes tremendous waste of energy across the organization, missed sales opportunities, poorly chosen sales pursuits and forgone opportunities to increase sales generally.

The problem is compounded by the progressively more difficult selling environment engendered by ever better educated prospects and customers.  Losing deals because of missing information drives sales managers crazy, and always has.  But, today where a new contact might be as much as 60% of the way through their buying process, having an inexperienced lead gen person contacting them might keep you out of a winnable sales cycle altogether.  Asking new leads to estimate their buying process stage by itself might help sales organizations perform better.


In my opinion, Feedback management can improve sales processes in the following ways:
  • Lead Generation - Validating Lead Quality and Buying process
  • Lead Nurturing - Is the lead still a lead?
  • Prospect Mining - Have we missed potential prospects?
  • Prospect Validation - Is the prospect a good fit for our solution(s)?
  • Customer Entry Profiling - Validate solution "fit", document business objectives
  • Customer Satisfaction / Loyalty - How are we doing vis-a-vis business objectives?
  • Exit  / Win Back - Can we get a lost customer back?
Lead Generation: 
Many companies take lists of "suspects" and have their inside sales team make phone calls.  "Leads" generated this way are often not good fits and cost money to validate.  An e-mail based lead validation survey sent to each "lead" within a feedback process would provide an additional automated step that ensures a quality lead is being generated and would improve data quality in the CRM.  Surveying inbound leads in a similar fashion would do the same for them.  Fakes, frauds and the indifferent would rapidly drop off (many by simply ignoring your request for feedback).  Those leads who provide feedback will be "qualified" as being interested.  More importantly, they have engaged with you and are, at least, likely to be a good fit.  Using follow-up based on answers to key questions would allow sales executives to immediately get involved with high-priority leads.

Lead Nurturing:
Any lead that meets the criteria for requiring nurturing should be periodically tested for continued interest, relevant business need, timeline, or other characteristic that might indicate a need to engage, or disengage with them.  Periodically surveying "nurture" status leads is a great way to determine if those leads are valid and if they need actions beyond additional nurturing.

Prospect Validation:
After a lead has been designated as a prospect, whether because of a sales meeting, webinar, product demonstration, etc., feedback should be sought from them to validate their "fit" with your solution.  Any discrepancies between the "fit" articulated by the customer and the perceptions logged by sales personnel should be documented and researched.  If sales meetings are used in the process, those meetings can also be evaluated using feedback surveys.  Sales meetings should offer value for customers; if they don't, it is important to know why.

Prospect Mining:
In any sales process that deals with lots of leads, some that should become prospects do not.  Either via oversight, misunderstanding or other reason, some leads are overlooked or mischaracterized as not being nurturable.  Whenever a lead is moved out of the sales process an Exit Survey should be offered to them.  Within the Exit survey, there should be an opportunity for the lead to requalify themselves for re-entry into the sales process.

Customer Entry Profiling:
After a deal is done with a customer, one would think that the sales organization knows and has documented the most information it will ever have on the customer.  Amazingly, that turns out not to be the case in many instances.  Again, for a variety of reasons, sales people neglect to record data on which customers are actually buying and why they bought.  Surveying customers upon entry allows your sales process to correct for human fault and acquire key information that will be needed later to retain the customer.

Customer Satisfaction / Loyalty:
Satisfied, and more importantly loyal, customers stay longer, buy more and are more profitable than other customers.  Surveying for CSAT / loyalty today is a "no-brainer".  If you aren't doing it, you should be.

Exit / Win Back:
In similar vein to Exit surveys for prospect mining, Exit / Win Back surveys are a great way to ensure that customers who should be retained are retained.  Or, if not retained, then enough is learned from them about reasons for defecting so that other customers later on can be retained based on process or product changes.
------------------------------------------------------------------------------------------

By incorporating more feedback / actioning mechanisms into sales processes it's possible to capture and institutionalize knowledge about why some leads transition to prospects and others don't, and why some prospects transition to customers and others do not.  By actioning feedback you'll start increasing those ratios immediately.  Over time, better information will help increase the share of leads that transition to prospects and the share of prospects that transition to customers.  As we have seen through the use of NPS, increasing the share of customers who stay customers has a profound effect on company profits.  The same effect can be had by applying a feedback / actioning process to leads, prospects and customers.


Thursday, June 6, 2013

Is It Time For an EFM 2.0 ?

 





Enterprise Feedback Management (EFM) was a term originally coined to describe a lot of products designed mainly for doing web surveys.  Many of the products were little more than automated data collection forms.  Some, however, were more sophisticated, offering built-in follow-up processes, respondent analysis and reporting.  As time has passed, more and more large businesses have adopted EFM tools that support follow-up, analytics, dashboards and other advanced capabilities.  They have recognized that data collection by itself isn't enough any more.  In fact, a business looking at EFM tools today wouldn't recognize a lot of the products from the 2005-2006 era as real EFM tools at all.

Is it time for EFM 2.0?

So what's changed?  I believe there are five major changes in EFM worth discussing:  Process support, Business Analytics / Visualization, Integration, Text Analysis and Social media / Mobile computing. 
  • Process Support.  A key difference between research and operational intelligence (feedback) is that insights are operationally relevant immediately, often in real time.  Business processes operate at an accelerated pace today.  Timely, relevant and actionable feedback lets a business process operate more effectively and more efficiently.  Taking feedback from customers, partners, employees or suppliers and feeding it in real time to key "actors" like account managers, support staff, or others is now a critical capability for EFM vendors because it supports improved operational efficiency and effectiveness at the business process level.
  • Business Analytics.  Because corporate organizational structures have become so much "flatter", data that was once developed and acted upon by local mid-level managers today must bubble up to higher-level managers.  Removing the mid-level manager's data organization, interpretation and action definition role and replacing it with analytics, dashboards and automated follow-up processes allows flatter organizations to function better.  Many EFM tools originally did not do any feedback reporting, just "dumping" out MS Excel files for users to analyze themselves.  EFM 2.0 products not only possess their own analytical capabilities, but often they can generate dashboards or feed data (filtered or raw) to integrated BI / Dashboard solutions for further analysis / presentation.  Since this is also often a real-time or near real-time capability, senior managers can become aware of issues and trends at nearly the same time as the operational "actors" who deal with feedback data at an operational level.
  • Integration.  EFM 2.0 products use web services to integrate with a lot of different products (CRM in particular).  But they also integrate with e-mail, business analytics, data warehouses, etc.  The point of integration, especially for CRM, is that two processes can be automated quickly through rules applied at the CRM level: 1. deciding who gets solicited for what feedback, and when; and 2. routing completed survey and alert data to internal actors for action.  Integrations speed up processes by automating data transfer to operational systems (a rough sketch of this kind of hand-off appears below).
  • Text Analysis.  With so much text-based data being presented to businesses through survey open-end questions, website feedback forms, Facebook pages, chat processes, etc., being able to analyze this source of feedback is an increasingly key capability.  It is still somewhat of a challenge to act on in real time for many businesses, because methodologies (like Net Promoter) don't really exist for purely text-based feedback yet.  However, advanced text analysis tools allow some keyword-based actioning, and new topics that require actioning tend to bubble up through text analysis fairly quickly.
  • Social Media / Mobile Computing.  When EFM was first defined, there was no real social media or mobile computing being done.  Today it is ubiquitous.  EFM solutions have to be able to integrate with Facebook, websites or other social media sites.  And, they must be able to utilize mobile devices for their feedback gathering and follow up functions. 
Traditional EFM vendors are all busy incorporating mobile, social media and text analysis capabilities into their solution sets.  I see this continuing.  EFM vendors are also trying to create integration capabilities that allow their platforms to connect with CRM, ERP and Data warehouse solutions.  I see that continuing too. 
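
Here is the kind of web-services hand-off the Integration point above refers to, sketched in Python: a completed survey response, already categorized, is pushed to another system as an alert. The endpoint URL and payload fields are placeholders I've invented for illustration.

import requests

# A completed survey response, already categorized, ready to be routed for action.
alert = {
    "respondent_id": "C-10442",
    "survey": "Transactional NPS",
    "score": 4,
    "category": "detractor",
    "comment": "Support response times have gotten worse.",
    "assigned_to": "account_manager",
}

# Placeholder endpoint; in a real deployment this would be the CRM's or
# dashboard vendor's web-services API.
response = requests.post(
    "https://crm.example.com/api/feedback-alerts",
    json=alert,
    timeout=10,
)
response.raise_for_status()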

At QuestBack, our surveys distribute equally well via e-mail, web and Facebook today.  They display and operate equally well on either desktop or mobile devices.  We're integrated with the Etuma360 text analysis engine, several CRM systems and a number of BI / Dashboard solutions (while also improving our internal analytics and reporting capabilities dramatically).  Other EFM vendors are trying to do the same.

What's clear to me though is that closed loop feedback processes are more important than ever and need to be process integrated more than ever before.  Simple survey / data collection tools really no longer fit the bill for these kinds of needs. 

Maybe it is time for an EFM 2.0.

-------------------------------------------------------------------------------------------------------------------
About the author - Stewart Nash is a USA based reseller for QuestBack AS and Etuma Ltd.  For over six years he has supported clients large and small with feedback management projects and EFM implementations.

LinkedIn: www.linkedin.com/in/stewartnash
E-mail: stew.nash2010@gmail.com



 

Monday, April 1, 2013

Three Problems with NPS

Before anyone gets the wrong idea.......

I'm a big fan of the Net Promoter Score (NPS) and of process improvement systems that use NPS as a primary indicator of customer relationship quality.  And, I'm a regular user of NPS feedback for various projects I undertake (both my own and customers').

For as long as NPS has been around, there have been criticisms of the metric and methodology.  In my opinion though, none of the issues critics present outweighs NPS' value to a business.  Having said that, NPS implementations seem to consistently experience certain problems.  In research I've seen and in forums I follow, people report problems with:  Consistently "closing-the-loop", understanding their internal distribution of respondents (promoters / detractors / passives) and acquiring organizational commitment to "acting" on feedback.  So, I thought I'd talk about those three issues and suggest ways to address them.

#1.  Consistently "Closing-the-Loop".

Almost every article ever written about NPS makes the point that closing-the-loop is critical to the success of the process. Yet, it still doesn't happen for much of the NPS feedback businesses collect.  Businesses not "closing the loop" will always experience problems with NPS.  Solving the "loop-closing" problem doesn't have to be terribly difficult, especially for detractor feedback.  But, in an era of web surveys and e-mail responses, filtering responses (in a variety of ways) and responding to each piece of feedback is easy (and surprisingly affordable) if you have the right tools, processes and messages. 

One reason people find "closing-the-loop" difficult is their feedback tool itself.  Typically "survey" tools are not feedback management systems.  If an NPS process is going to rely on a "survey tool" versus a feedback management tool (a good tell is that the product's name starts with the word "survey" or "question"), it's likely that your "loop closing" process is going to be ad-hoc at best.

A second tool-related issue with "closing the loop" is CRM integration.  Lots of EFM vendors base their loop closing processes on CRM integration.  And, for the most part, it's an effective approach.  However, because it's based on integration, the loop closing process also has to be integration based, meaning that should scenarios change (i.e. different conditions require different loop closing responses), the CRM programming has to change and the feedback tool integration programming needs to change with it.  Both of these sets of changes cost money and take time.  So, they are often not done, and "loop closing" suffers for it.

In my view, basic "loop closing" is largely a technology issue for NPS processes.  If you have the right tools you can pretty much always "close the loop".

#2.  Respondent distribution amongst NPS categories.

Businesses like to adopt standards.  With NPS it's been no different.  When using NPS it's a mistake to dogmatically assume that 7's & 8's are always passives.  In your company, "Passives" may actually be "5's", "6's" and "7's", or other combinations of scores.  There's always been some level of difference in detractor behavior (defection) amongst the various industries using NPS (think utilities, banks, cable companies, etc.).  In my opinion, most businesses are too dogmatic about implementing NPS and don't adjust their internal scoring to reflect customer behavior over time.

To illustrate, here's a quote from Fred Reichheld in a recent NPS oriented LinkedIn forum: "if your prior estimate of the right category for a customer is passive---because they scored seven on LTR (likely to recommend) question--but then you observe that subsequent to their survey response, they referred three good customers, doubled their own purchases and complimented your service rep, then you really should recategorize them as a promoter".

The lesson: Raw NPS scores are just indicators of end state status.  They need to be matched up with other data, especially behavior data, in order to know if a given customer (or group of customers) classifies as a promoter, passive or detractor. 

About ten years ago, I participated in a project to define customer loyalty for an industry vertical of our customers.  NPS was a new concept (so we didn't employ it), with Reichheld's first book only recently published.  We categorized customer loyalty in two dimensions: willingness to refer and willingness to "buy more". Loyal customers ("Promoters") scored high on both dimensions.  The Reichheld comment above brought back some memories.  But it's always been clear to me that loyalty is ultimately a behavior.

Solving the categorization issue is simply one of analysis. If your "8's" exhibit similar enough referral and purchase behavior to your "9's" and "10's" categorize them as "promoters" and treat them that way (I have one customer who has always done just that).  If your detractors are just "0's" - "3's", act on them that way.  One of the advantages of NPS is that it uses an 11 point scale.  This makes it easier to adjust the "buckets" based on customer data analysis, or for industry or cultural differences, than it would be with a smaller scale.
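
A small Python sketch of that analysis step, using invented thresholds and field names: the raw score sets a provisional category, and behavior data (referrals, purchase growth) is allowed to promote a "passive" who acts like a promoter.

def classify(score, referrals=0, purchase_growth=0.0):
    """Provisional NPS category from the 0-10 score, adjusted by behavior data."""
    if score >= 9:
        return "promoter"
    if score >= 7:
        # A passive-by-score customer who refers others and buys more is
        # behaving like a promoter, so categorize and treat them that way.
        if referrals >= 1 and purchase_growth > 0:
            return "promoter"
        return "passive"
    return "detractor"

print(classify(8))                                     # passive
print(classify(8, referrals=3, purchase_growth=1.0))   # promoter by behavior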

#3.  Getting organizational commitment to "acting" on all feedback

This is the most challenging problem NPS users tend to have.  Typically, NPS processes are "owned" by a single business area.  Often it's a customer support function; other times it's marketing or even sales.  Any time NPS (an enterprise-level process) is owned by a single business area, acting on feedback that requires someone outside of that area to engage a customer is going to be a challenge and a place where the process can break down.  When the process breaks down, opportunities to "build" promoters get passed up or ignored and issues can fester.  A good example of this type of situation is when a business changes its billing practices.  Finance organizations aren't often closely connected to sales and support organizations, so changes in billing or collections policies aren't often vetted by sales.  If a change in these policies is driving down NPS, that information has to get back to the finance department if the policy is going to be changed.  If finance doesn't see the effect of a policy on customer relationships, they aren't likely to change it.

In my last blog post I talked about how some customer feedback can be categorized as "Not Obviously Actionable".  I should have stated it as "not obviously actionable to the business unit sending the survey".  In NPS surveys, there's always a bunch of feedback that isn't obviously actionable, either from the perspective of what to do or who should do it.  Sometimes this kind of feedback is present in "open answer" questions.  Sometimes it's because one or more loyalty drivers (product capability, billing practices, etc.) correlate highly with low NPS scores.

Solving the problem requires a management-level commitment to high quality customer relationships and a mechanism, call it a "Larger Loop", that integrates NPS feedback data with other kinds of data (behavior data in particular).  Analyses (and their visualizations) need to occur in near real time and be visible to the appropriate company departments, so that they can see how their actions impact NPS feedback data.

Clearly, there are more than three challenges that NPS practitioners face.  These are just three I have observed on more than one occasion.  Yet, NPS remains a great tool for understanding customers and what makes them tick.  

Sunday, March 17, 2013

Fixing Customer Feedback's Non-Action Problem

Gartner research continues to show that "actioning" of customer feedback is still not a widely adopted business practice, with only about 15% of customer feedback generally receiving follow-up action.  Five years ago the percentage was roughly 10%.  Clearly, businesses have been somehow unable to make full use of the technology.  Being personally aware of companies who do take full and continuing advantage of EFM, I know that, generally speaking, it requires a large ongoing investment of time (both management and staff) and money, but that it is ultimately worthwhile.

Since EFM's value proposition has always been largely based on enabling value-creating actions to be taken on received feedback, the question is: why is "actioning" of received feedback such a big challenge for businesses implementing EFM?  I think it's an important question because becoming "customer centric" is such a major goal for businesses today.  And, customer centricity relies largely on being able to act on issues or concerns expressed by customers through feedback.

After several years now working with businesses implementing EFM I think I understand the challenge.  I see the actionability of customer feedback as being of two kinds, the "immediately obvious" and "not so immediately obvious".  Both kinds of feedback are typically represented in any batch of received customer feedback.

Immediately obvious feedback equates to an NPS "detractor" or a "very dissatisfied" customer in a customer survey.  Or, to a problem "ticket" post, request for information, request for contact, etc. from a person via a web-site input form.  It's "immediately obvious" that some kind of follow-up needs to take place with a specific person.  Contacting the person and trying to "fix" their source of dissatisfaction, or otherwise act on their feedback, is operationally easy.  It's a matter of getting data to the right people with instructions on how to follow up.  QuestBack, for instance, automates this process out-of-the-box.  Other tools do it through CRM integration. The point is that it isn't difficult to achieve in most instances.

Not so immediately obvious feedback falls into the category of things like low ratings for product functionality, business practices, business partners, sales/support people and the like.  Taking action on these kinds of issues, and many others besides, requires cross-functional decision making and, more importantly, additional data.  For instance: is a product functionality issue the result of a poorly designed product, improperly trained users, new capabilities introduced by a competitor, or something else?  It usually isn't clear until more analysis of data is performed.

Actioning "not immediately obvious" kinds of customer feedback requires additional data in order to put customer feedback into context and to identify what the action should be.  CRM, ERP, HRM, Finance or other systems are usually the source for this additional data.  Pulling data from multiple data sources and doing analyses typically requires time and resources.  So, when it is done, its a "project" that someone has to budget for, staff and fund. This process of performing the additional analyses and presenting results explains why businesses largely fail to act on "not immediately obvious" customer feedback. It just takes to long to develop the analyses, present them and make decisions about how to act.

Real-time Data Analysis and Visualization may solve the feedback non-action problem

Web-based data analysis and visualization tools have become popular over the last couple of years.  These solutions allow a business to pull together data flows of customer feedback, behavior and other data (like benchmarks) and extract the key information from each into dashboards that present the information a manager needs to see in order to determine action.

Analysis dashboards also allow data to flow to managers in virtually real time.  With web services APIs into EFM, CRM, ERP and other systems, data can just flow into the dashboard.  This makes analyses rapidly visible to the correct audience.

In my opinion, these new data analysis and visualization tools will combine with EFM to ultimately "fix" EFM's non-action problem.  In all the companies I know where EFM is a success, a lot of work has gone into these types of analyses that combine customer feedback with behavior, benchmarks and other data, just with heavy I/T involvement, something that isn't needed with these products.

At QuestBack we already partner with a couple of data analysis and dashboard vendors and we may partner with more of them over time (as a reseller that won't be my call).  But, contextualizing "voice of the customer" using dashboards makes so much sense to me that I'm surprised more companies aren't looking for this capability as part of their EFM solutions.
 

Thursday, February 14, 2013

Lowering "Break Even" for justifying Text Analytics

In a world where most businesses doing customer and employee types of feedback still "code" their verbatim survey responses and other text feedback manually, the standard for "break even" on automated text analysis solutions generally seems to be on the order of 10,000 text items per month.

Why is 10,000 the number?  In my experience, I've seen many instances where smaller volumes would justify investment in an automated solution.  Yet, to a large degree only very large businesses and government agencies with big flows of  text based feedback have adopted automated text analysis solutions. 

I think there are two reasons for this.  First is the price of the automated text analysis solutions, which typically have minimum costs of $100,000 per year.  So, for a business with lots of text to be coded, only when "people costs" exceed $100k/year does it make sense to invest in an automated solution.  The second reason is that people rarely do just verbatim coding in businesses today. Typically groups of people do the work in different departments as part of their regular jobs (VOC Analyst, Market Research manager, etc.). There's often no single FTE that can be "replaced" by an automated solution.  Only when the volumes of feedback become so large as to be overwhelming do businesses consider automating the analysis process.  By then, the costs of manual coding are large and they justify a large investment.

But what would happen if the annual software cost of an automated text analysis solution could be lowered to $50,000 per year?  I think that the potential market for automated text analysis would become exponentially larger.  After all, in most businesses it's a lot easier to find half of an FTE doing text analysis manually than it is to find full FTEs manually doing text analysis.
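
A rough back-of-the-envelope calculation makes the point. The assumptions here are mine (about one minute of manual coding per text item and a $90,000 fully loaded annual cost per FTE); plug in your own numbers.

# Break-even sketch: cost of manual verbatim coding versus an annual software fee.
minutes_per_item = 1                  # assumed manual coding effort per text item
fte_cost_per_year = 90_000            # assumed fully loaded annual cost of one FTE
fte_minutes_per_year = 220 * 8 * 60   # roughly 220 working days of 8 hours

def manual_coding_cost(items_per_month):
    minutes_per_year = items_per_month * 12 * minutes_per_item
    return fte_cost_per_year * minutes_per_year / fte_minutes_per_year

for volume in (2_000, 5_000, 10_000):
    print(f"{volume:>6} items/month -> about ${manual_coding_cost(volume):,.0f}/year of coding labor")

Under those assumptions, roughly 10,000 items per month is where manual coding labor approaches a $100K annual software fee, and about half that volume is where a $50K fee starts to pay for itself.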

In my opinion, there are additional reasons to consider automating text analysis at volumes lower than 10K items per month.  Just one is the ability of an automated solution to identify new topics.  As someone who does a number of feedback projects that employ survey-based open answer questions, I regularly evaluate verbatim responses both manually and via automation.  Whenever I've used Etuma360, I've found that the Etuma analysis identifies topics which I had not considered based on my manual inspection process.  And, since people doing manual coding have a propensity to map all the incoming feedback onto the existing coding structure and categories, manual processes will tend to miss new topics.  Automated solutions will typically pick up the new topics.  Valuing this capability is difficult, but it's something to consider when looking at text analysis and its cost benefit.

Etuma has a number of pricing plans that allow businesses to get into automated text analysis for less than the $100K/year price point.  I would think that anyone with 2,000 pieces of text feedback per month would be a candidate for an Etuma360 implementation based on FTE considerations alone.

Tuesday, February 5, 2013

Surveying for Feedback/Response Action Management

Periodically I see discussions in articles and LinkedIn forums about the "Death of Surveys".  But, in my view, the on-line survey business is simply transforming from a focus on surveying for data collection to one of surveying for feedback and response action management (F/RAM).  This is particularly true, I think, in the case of relationship surveys (customer, partner, employee, alumni, union member, donor or "membership" types of surveys).  In short, where "relationships" exist between an entity and a population of people, something more than data collection is now necessary. 

In my view, surveying for relationship management purposes is occurring more today because of the growth of social media, online chat and mobile device technologies, all of which help businesses collect huge amounts of customer data. So much so that businesses are almost overwhelmed by it. It's not a coincidence that data analysis, "big" data and data storage vendors are doing well.  All that data needs to be analyzed, correlated, cross-referenced and stored.  Yet none of it really triggers businesses to build better relationships with the people they interact with.  Somewhere and somehow, somebody has to ask customers how they feel in order to assess relationship quality.  If a business has lots of customers, a feedback/response action management survey is the best way to do that, because the feedback automatically propagates dialogue in an F/RAM process.

Feedback/response action management is a process that many businesses are unfamiliar with.  It's a fair bit more complex than traditional market research.  It relies on customer data to guide how response action management should be implemented, and it necessitates the use of a methodological approach (NPS, CSAT, CxM or something similar).  In addition, F/RAM requires that feedback scenarios be modeled, or at least thought through, so that appropriate responses can be formulated (i.e. who responds, and how, when a customer from country x, with product y and issue z, triggers a response action based on their survey feedback).

A number of online survey platforms today can implement an F/RAM process.  Some of the platforms, though, are expensive to acquire.  My admittedly incomplete list of F/RAM-capable tool sets includes: QuestBack (all platforms), Vovici, Medallia, Allegiance and ConFirmIT.  ClickTools and KeySurveys, to my understanding, only implement F/RAM processes through CRM integration (and ClickTools only for SalesForce). In my experience, almost all the other tools "out there" are primarily focused on just data collection and analysis.

In my experience there are two critical capabilities that a tool needs to have in order to implement an F/RAM process.  First a tool needs to be able to trigger a real-time follow up action based on a survey response, customer data or a combination of both.  Second, a tool has to be able to link, in real-time, customer data to the survey at a respondent level.  Without these two capabilities F/RAM processes require lots of I/T intervention in order to get survey responses to trigger actions at a respondent level.
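
To make those two capabilities concrete, here is a minimal Python sketch: customer data is linked to the response at the respondent level, and a small rule table decides which follow-up actions fire. The rules, fields and thresholds are hypothetical.

RULES = [
    # (condition on the combined response + customer data, action to trigger)
    (lambda r: r["nps_score"] <= 6 and r["segment"] == "key account",
     "notify the account manager within one hour"),
    (lambda r: r["nps_score"] <= 6,
     "queue a callback from support"),
    (lambda r: r["nps_score"] >= 9,
     "send a referral / reference invitation"),
]

def actions_for(response, customer):
    combined = {**customer, **response}   # respondent-level link of customer data to the answer
    return [action for condition, action in RULES if condition(combined)]

customer = {"customer_id": "C-231", "country": "DE", "segment": "key account", "product": "Product Y"}
response = {"nps_score": 5, "comment": "Issue z with billing is still unresolved."}
print(actions_for(response, customer))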




 

Tuesday, January 15, 2013

Organizing on-line surveys for action taking

People love to give feedback.  But, hate to take surveys. 

I see lots of surveys that start out with "Please give us feedback" or "your opinion is important to us" followed by "click here to take our short survey".  After clicking, I get something that indicates the survey will take anywhere from 10-25 minutes to complete.  Like most people, I immediately quit the survey.  In my mind, any time commitment beyond 5 minutes makes me a research subject, which I didn't sign up for when I clicked on the survey link.

Because most surveys are designed to collect data and employ longer questionnaires, lots of customer feedback doesn't get collected by companies that want and need it.  Surprisingly to me, even many surveys asking the Net Promoter (likely to recommend) question aren't designed to generate immediate follow up actions. 

What companies should be doing is designing surveys that customers want to take and that have built-in triggering mechanisms for enabling responses to feedback.  In my experience, key constituencies want good relationships and will take time to give feedback when asked, provided the process is respectful of their time and provides a value add (better relationship) for them.  These things become easily achievable with short, follow-up supported surveys.

Feedback surveys should use short questionnaires with 10 or fewer questions (presented) and a 2-5 minute maximum time commitment.  They should only ask questions that are meaningful to the customer relationship (one reason I like Net Promoter).  They should never ask for information that the company already has in a database somewhere.  And, they should always have a follow-up process for everyone who takes the survey, even if that follow-up comes later on and is general in nature.

How questions are asked should also be taken into consideration when doing feedback surveys.  Here's an example of a typical product-oriented satisfaction question designed to collect data:


Here is the same question designed for feedback:


On the surface these two examples look very similar.  The difference is that in the feedback version, the question is followed by a more specific question based upon the answer chosen.  If the answer chosen is Very dissatisfied, Somewhat dissatisfied or Neither dissatisfied nor satisfied, the customer gets:

You indicated you are less than satisfied with ACME Company Product XYZ. Please tell us why.  We will contact you shortly by e-mail to follow up.


If the customer indicates Somewhat or Very Satisfied, they get:

Please tell us what you like best about ACME Company Product XYZ.   We will contact you shortly by e-mail to follow up.

In addition to triggering a question branch, in the above example, each set of answer alternatives triggers an alert or notification to someone to take follow up action.  In QuestBack we generate an e-mail to a designated person.  In other feedback management systems (and also in QuestBack if needed) triggering is done through a CRM system.  But, in either case, the survey is optimized for feedback because no response or branching is triggered if Not Applicable is selected, and specific questions and triggering are set based on specific answer alternatives. 
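
A compact Python sketch of that logic, with placeholder e-mail addresses: the answer selects the follow-up prompt and the person to notify, and "Not Applicable" triggers nothing. A real process would send the notification by e-mail or create a CRM task rather than print it.

FOLLOW_UP = {
    "dissatisfied": ("You indicated you are less than satisfied with ACME Company "
                     "Product XYZ. Please tell us why.",
                     "product-support@example.com"),
    "satisfied":    ("Please tell us what you like best about ACME Company Product XYZ.",
                     "account-team@example.com"),
}

NEGATIVE = {"Very dissatisfied", "Somewhat dissatisfied", "Neither dissatisfied nor satisfied"}
POSITIVE = {"Somewhat satisfied", "Very satisfied"}

def handle_answer(answer):
    if answer in NEGATIVE:
        prompt, notify = FOLLOW_UP["dissatisfied"]
    elif answer in POSITIVE:
        prompt, notify = FOLLOW_UP["satisfied"]
    else:
        return None                      # "Not Applicable": no branch, no alert
    print("Follow-up question:", prompt)
    print("Alert e-mail to:   ", notify)

handle_answer("Somewhat dissatisfied")
handle_answer("Not Applicable")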

Good feedback-oriented survey processes should have at least a couple of trigger questions: one for loyalty or advocacy, one for general satisfaction or experience, and possibly one or more based on product or service attributes that can be boiled down to some actionable response.

Surveying key constituencies with a goal of creating dialogue vs. data is a trend not to be ignored.  As people get more mobile and more "Social", surveys will have to be more feedback oriented. And, designing surveys for follow up action is a great way to collect feedback, increase customer dialogue and ultimately build better and more persistent customer relationships.