Thursday, December 8, 2011

Some characteristics of "Great" feedback management programs



Many organizations today have programs, initiatives and technology (EFM) for gathering and managing customer, employee or other constituency feedback.  As with many new technologies, many of these organizations haven't realized the benefits they expected.  In my opinion this is because they are still doing "research" and not "feedback management".  I think these organizations have simply swapped EFM in for something else they were doing (mostly outsourced market research) without changing their feedback approach to one of "engagement" with customers, employees or other constituents.  As a result, they don't realize their feedback management programs could be giving them a lot more than they get today.

So how can you tell if a feedback management program is working well?  Some anecdotal indicators:
  • Business users clamor for the data coming from the EFM platform and use it, unprompted, to interact with customers, prospects, partners, employees, etc.  This is especially the case if no "top down" management edicts are required.  If "line" managers, directors and VPs want the data, use it and act on it, it's probably a good program.
  • Processes simply wouldn't function nearly as effectively without the feedback.  Whether it's customer support, lead generation, learning management or anything else, if the objective of the process can't be accomplished without feedback, it's a sign of "good" feedback management.
  • Customers (or other constituencies) get anxious about not being asked for feedback and say so to sales or support people.  This indicates the feedback management program is perceived as valuable by customers.
  • Incentives.  If you don't need them to achieve reasonable response rates (25%+), your feedback program is probably delivering value to your constituents and is "good".
In my opinion good feedback management programs have some defining and yet easy-to-evaluate attributes:
  • Win-Win.  Good feedback initiatives provide "wins" for everybody involved.  Those providing feedback benefit as much as, or more than, those receiving it.
  • Closed Loops.  Good feedback programs continually "close the loop".  Moreover, it's done on a one-to-one basis. 
  • Time investment.  Organizations with good feedback management programs invest substantial amounts of staff time to respond individually to each person giving them feedback. 
  • Expectations met. All feedback gathering activity (as opposed to research) creates some  expectation of action or dialogue.  Good feedback initiatives take this into consideration and ensure any expectations are met. 
If you aren't sure about the benefits your feedback management initiative is generating, it's easy to ask some pertinent questions: Does your customer "win" by giving you feedback?  Are you always closing loops? Does your staff invest time regularly to respond to feedback?  Are you meeting the expectations your respondents have for follow up (are you closing the loop in the right way)?

The benefits of doing feedback management "right" are compelling, with increased profitability typically accruing to organizations that do it well.  It seems to me that more organizations could be doing it well using these simple guideposts.








Friday, November 18, 2011

More on Operationalized Research


In my last post I talked about the concept, and reality, of operationalized research and how it's changing the EFM marketplace.  But I was a bit disjointed in my presentation, so I thought a little clarification was in order.

The key concept: The more rapidly an insight can be discovered and acted upon, the more valuable the discovery/actioneering mechanism is.  My contention is that EFM tools that automate insight discovery and action processes are more "valuable" to a business than those that require manual intervention in insight discovery or insight actioneering.

I closed the post with the statement: "By applying operationalized research capabilities to social media based feedback, businesses will be able to accelerate their understanding of prospects, customers and markets".  My thought was that the speed of insight discovery and actioneering is even more important when the feedback source is a social media channel.

As many of my customers can attest, I've been very excited about QuestBack's new ability to implement its Ask & Act process on Facebook-based feedback (more info on QuestBack Social Insight).  Businesses that use QuestBack to implement "Act" processes on Facebook feedback can designate staff to receive "rules"-based insights using QuestBack surveys running inside Facebook via a Facebook application.  Those people can then immediately act on those insights, with actions taken being delivered via Facebook messaging or e-mail.

The words "research" and "operational" are not typically used in conjunction.  They in fact connote different processes.  But, QuestBack, and to be fair, a few other EFM vendors have succeeded in developing solutions that allow research to be "operationalized" using social media.  Businesses or membership organizations with Facebook strategies could benefit from deploying this kind of solution, as it will help them utilize staff more effectively, respond more rapidly to issues and opportunities in their markets and promote their responsiveness to customers.

Wednesday, November 16, 2011

Operational Feedback as Strategic Input


I've been posting recently on the topic of Operationalizing Market Research.  My focus has been on how research approaches and techniques can be applied to generating operationally significant insights (meaning useful for solving day-to-day business challenges).  In this post, I thought I'd take the opposite perspective and talk about how tactical or operational feedback processes can provide strategically valuable insights.

I work regularly with businesses to help them build and benefit from "closed-loop" stakeholder feedback processes.  In my experience, with my own clients as well as with other organizations whose feedback management processes I'm familiar with, I see the primary goal for feedback being operational in nature; specific feedback is desired for a specific purpose.  And granted, much of the value that stems from that feedback, whether customer or employee oriented, is tactically useful, identifying a need for some type of short-term response and providing enough data to contextualize the action required.

Yet, I believe that operational feedback, if correctly designed and implemented, can and should be a strategic input too.  Take for instance customer satisfaction surveys.  In C-Sat surveys, lots of operational data is collected, disseminated and acted on, usually by account managers or customer support, with action triggers being based on responses to satisfaction or loyalty questions.  C-Sat surveys also normally ask other questions about the "drivers" of loyalty or satisfaction: product or service "quality", "effectiveness vs. competition", "value" or other characteristics are asked about so as to provide context regarding possible reasons for a satisfaction or loyalty rating.  This additional contextual data provides the required insight about customer issues or opportunities and dictates how a response to feedback should be formulated and implemented at an operational level.  However, in this example, the operational feedback becomes strategically valuable if two things occur:
  1. Feedback data on "product/service quality", "effectiveness vs. competition", "value", etc. is connected with customer information like "account category", "industry", "geography", etc. 
  2. If feedback data is gathered (and connected) regularly over time.
By doing these two things with operational feedback, strategic insights result.  For instance, "unsatisfied customers" can become "unsatisfied large customers in industry A and geography B who find 'quality' to be poor, 'competitiveness' neutral and 'value' low".  Though there is operational insight in this example (we need to talk to these customers right now!), if this data were to persist over multiple survey cycles and across more industries or geographies, a strategic challenge (or challenges) would be highlighted: Are we marketing to the right customers? Are we devoting the right mix of resources to the product or service we sell?  Etc.
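To make the mechanics concrete, here's a minimal sketch in Python of connecting feedback to account attributes and then checking whether a pattern persists across survey cycles. The field names, records and thresholds are all invented for illustration; they aren't any particular EFM platform's schema.

```python
# Hypothetical sketch: connecting operational survey feedback to CRM
# attributes, then checking whether a pattern persists across cycles.
# All field names and records are invented for illustration.
feedback = [
    {"customer_id": 1, "cycle": "2011-Q2", "sat": 3, "quality": "poor", "value": "low"},
    {"customer_id": 2, "cycle": "2011-Q2", "sat": 8, "quality": "good", "value": "high"},
    {"customer_id": 1, "cycle": "2011-Q3", "sat": 2, "quality": "poor", "value": "low"},
    {"customer_id": 3, "cycle": "2011-Q3", "sat": 9, "quality": "good", "value": "high"},
]
accounts = {
    1: {"category": "large", "industry": "A", "geography": "B"},
    2: {"category": "small", "industry": "B", "geography": "B"},
    3: {"category": "small", "industry": "C", "geography": "A"},
}

# Step 1: connect each piece of feedback to the customer's attributes
joined = [dict(f, **accounts[f["customer_id"]]) for f in feedback]

# Step 2: isolate the segment of interest and see how many cycles it spans
segment = [r for r in joined
           if r["sat"] <= 6 and r["category"] == "large"
           and r["industry"] == "A" and r["geography"] == "B"
           and r["quality"] == "poor" and r["value"] == "low"]
cycles_affected = {r["cycle"] for r in segment}
print(sorted(cycles_affected))  # ['2011-Q2', '2011-Q3'] -> recurring, i.e. strategic
```

If the same unhappy segment shows up cycle after cycle, you're no longer looking at an account management problem; you're looking at a strategy question.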

Strategic insights have long been the province of market research organizations.  With the broad deployment of EFM tools for operational feedback, data is now being collected on an operational basis that with a small amount of effort can be transformed into a strategic input to be used by senior management.  All that has to happen is for companies to pay attention to it.  And, use products like QuestBack that facilitate the transformation.

Monday, October 24, 2011

Operationalizing Research - EFM's Changing Landscape

The EFM marketplace has recently undergone lots of change.  Companies are merging, the technology itself is evolving to new platforms like mobile and social media, the model is migrating increasingly to "do-it-yourself" vs. outsourced, new technologies like text analytics are being incorporated into EFM and integration with other "cloud" technologies is accelerating.
  

All this has me thinking about the nature of feedback, how it's being used by businesses and how that is changing.  In my experience businesses work with two types of feedback, operational feedback, and for lack of a better term, "research" feedback.  I think that the changes I've outlined have blurred distinctions between the two.  And, that this will have a substantial impact on how businesses work with feedback data of all kinds in the future.

To me, operational feedback differs from research in two key ways:
  • The business need to take follow-up actions on it.
  • Its usefulness in building KPIs.

Otherwise, operational and research feedback don't differ much.  Of course, it's arguable (and maybe true) that operational feedback is not accurate enough for scientific conclusions.  But, it seems to me that it can be made much more accurate by adding more "background" variables prior to solicitation (to establish population differentiation), and by managing respondent populations with filters afterwards (on the assumption that enough respondents participate to provide enough data).
At the end of the day both types of feedback are simply data.  What typically differentiates research from operational feedback is not the feedback, it's the depth of analysis applied to the data. 
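As a rough illustration of those two accuracy levers, here's a small Python sketch: background variables attached to each respondent before solicitation, and a post-hoc filter with a minimum population size afterwards. The field names, records and threshold are assumptions for illustration only.

```python
# Illustrative sketch of the two accuracy levers described above:
# (1) attach "background" variables before sending the survey, and
# (2) filter the respondent pool afterwards, only reporting on a
# filtered population if it is big enough. Field names are assumptions.
respondents = [
    {"id": 1, "segment": "enterprise", "region": "EU", "answer": 9},
    {"id": 2, "segment": "smb",        "region": "US", "answer": 4},
    {"id": 3, "segment": "enterprise", "region": "US", "answer": 7},
    {"id": 4, "segment": "enterprise", "region": "EU", "answer": 8},
]

MIN_N = 2  # minimum respondents before a filtered population is reported

def filtered_population(data, **criteria):
    """Post-hoc filter: keep respondents matching all background criteria."""
    subset = [r for r in data
              if all(r.get(k) == v for k, v in criteria.items())]
    return subset if len(subset) >= MIN_N else []

eu_enterprise = filtered_population(respondents, segment="enterprise", region="EU")
print(len(eu_enterprise))  # 2 -> enough respondents to report on
```

The same filter applied to a one-respondent segment returns nothing, which is the point: small populations get excluded rather than over-interpreted.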
Looking at the many recent marketplace changes, EFM vendors focused on operationalizing research appear to be having success.  I think the business need to quickly understand and act on insights derived from complex combinations of feedback and "background data" is driving that success.
 

I believe social media is extending this market trend.  By applying operationalized research capabilities to social media based feedback, businesses will be able to accelerate their understanding of prospects, customers and markets.  And, to therefore "compete" more effectively.  If I had to guess, I would say that businesses will migrate to EFM vendors who can help them operationalize social media derived feedback.

Monday, October 10, 2011

Facebook based - Closed Loop Feedback

A couple of weeks ago I posted a note about QuestBack's new add-on module for collecting feedback from an organization's Facebook friends.  As I have been learning more about the new QuestBack Social Insight product for Facebook, I've become more impressed with its capabilities.  In particular, QuestBack's ability to "layer" a closed-loop process on top of Facebook-derived feedback.

Think about it: today you need to have people paying attention to your Facebook page simply to respond to feedback.  With the new QuestBack product you can ask people to give you feedback, then rely on QuestBack to automatically route that feedback to the correct people in your company, and to provide the mechanism for actually "closing the loop" with Facebook-based feedback providers.  In my mind, a pretty cool new capability.

So how might this new Facebook feedback mechanism be deployed?  A couple of ideas immediately come to mind: customer support, lead generation and new product development.  With QuestBack running on a Facebook page, customer support requests can be taken and processed via a quick web survey.  The loop can be quickly and easily closed when a support rep gets the QuestBack notification and replies by e-mail from within QuestBack.  Same thing with leads; your sales guys will love it.  New product or service ideas can be collected and presented quickly and easily to product management people, who can then "close the loop" by entering into a QuestBack-based dialogue with the Facebook "friend" who provided the feedback.
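As a thought experiment, the routing piece might look something like the Python sketch below. The topics, rules and team names are all invented for illustration; QuestBack's actual routing is configured inside the product, not coded like this.

```python
# Hypothetical sketch of rules-based routing of Facebook-collected
# feedback to the right internal team. Topics and team names invented.
def route_feedback(response):
    """Return the team that should 'close the loop' on this response."""
    topic = response.get("topic")
    if topic == "support_request":
        return "customer_support"
    if topic == "sales_inquiry":
        return "sales"
    if topic == "product_idea":
        return "product_management"
    return "community_manager"  # fallback: a human still sees everything

incoming = {"topic": "product_idea", "from": "facebook_friend_42",
            "text": "Please add an export-to-CSV feature."}
print(route_feedback(incoming))  # product_management
```

The value isn't in the code, it's in the discipline: every piece of feedback has a named owner, so nothing sits unanswered on the page.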

Up until now, closing the loop on Facebook-based feedback was time consuming and expensive.  QuestBack makes it just another part of your closed-loop feedback process.

Thursday, September 8, 2011

Feedback Management and Social Media

A lot of pundits have positioned social media almost as an alternative to feedback management for "Voice-of-the-Customer" (VOC) initiatives, especially when combined with text analytics solutions.  It's always been my theory that social media and feedback management are entirely complementary technologies that, when properly integrated, add real value for businesses, and especially for B2C businesses.

To my mind, for businesses, the strength of social media is its ability to "find" new customer populations, new needs for existing products, new problems to address, etc.  And, to have those communities self identify their needs and wants.  It gives companies a real-time mechanism to monitor opinions and ideas.

The challenge has always been in qualifying those needs, wants, issues, challenges, etc., as well as quantifying them within prospect and customer profiles.  i.e. Does a Facebook fan's "like" really mean anything to the business?

QuestBack may have a solution to the qualification/quantification problems with social media.  They have introduced a "Social Insight" product, which allows companies to combine social media and feedback management to collect structured feedback from within social media platforms.  The press announcement can be found here: http://www.questback.com/about-us/activities/press-release-questback-launches-social-media-insight-tool/

Should be very interesting to see the effect that this product has on the market.

Thursday, August 25, 2011

Is Your I/T Department the Key to Your Feedback Management Solution?

All two way communication is feedback

One of the things I've come to realize about feedback management is that almost any two-way communication is potentially feedback generating in some way.  Under this standard, hardly any feedback that could be captured in a business actually is.  Now, I grant that it would be overwhelming, not to mention pointless, to try and capture all the feedback that a business could conceivably capture.  I merely point out that most organizations don't think about feedback from a "Big Picture" standpoint.  Most only look at feedback in the context of a very specific business problem, often one caused by a lack of specific feedback.

Very few  businesses (I've never heard of any) try to break down their feedback into all the different kinds of feedback they get (or should get).  They don't collect and organize enough feedback to really understand what's happening with their businesses.  As a result, most still react too slowly to change. And, when they do react they often overreact. The lesson: more frequent and timely feedback helps an organization learn sooner and react better to business change. 

What's needed is a methodology for understanding an organization's feedback management needs.

Given that no one I've seen has proposed a comprehensive methodology for understanding a business's needs for feedback, I think the approach QuestBack has come up with is worthy of discussion.

QuestBack has coined the term "Event Driven Feedback Management".  Event Driven is a way to break down the "Big Picture" - the overwhelming amount of feedback a business receives - into highly manageable and very valuable nuggets of knowledge distillable from the daily life of a process.  The theory behind event driven feedback is that any business process lifecycle can be broken down into a series of events that define its life history.  Each event is a key point at which feedback is highly relevant and value producing.  In an event driven world, collecting the right amount of feedback is a simple matter of defining your events, then collecting feedback after each of those events during the normal operation of the process.
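A minimal Python sketch of the event-driven idea: map each lifecycle event to the survey it should trigger, then fire the request when the event occurs. The event names, survey names and delivery stub are all invented for illustration.

```python
# Minimal sketch of "event driven" feedback: define the events in a
# process lifecycle, then trigger a short survey after each one.
# Event and survey names are illustrative assumptions.
LIFECYCLE_EVENTS = {
    "lead_created":      "lead_qualification_survey",
    "deal_closed":       "onboarding_expectations_survey",
    "support_case_done": "support_satisfaction_survey",
    "contract_renewed":  "relationship_review_survey",
}

sent = []

def send_survey(survey_name, contact):
    # Stand-in for the real delivery mechanism (e-mail, web, etc.)
    sent.append((survey_name, contact))

def on_event(event_name, contact):
    """Fire the feedback request tied to this lifecycle event, if any."""
    survey = LIFECYCLE_EVENTS.get(event_name)
    if survey:
        send_survey(survey, contact)

on_event("deal_closed", "customer@example.com")
print(sent)  # [('onboarding_expectations_survey', 'customer@example.com')]
```

The hard part, of course, is not the dispatch table; it's agreeing on which events in your processes actually merit a feedback request.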

Interestingly, most businesses (and particularly large businesses with I/T architects in their  I/S departments) already understand their key business events.  Large businesses have mostly mapped their business events as part of their enterprise architecture models.  They've had to, in order to develop databases and data warehouses that support their key business processes and store the information relevant to each event occurring within those processes.   

I think the same methodology used to break down a business's data and processes can be applied to breaking down its feedback collection and analysis needs.

An interesting thought indeed: your corporate Information Architect may be the key to an effective resolution of your company's feedback management challenges.

Saturday, July 30, 2011

Customer Engagement and Net Promoter

I recently worked on a feedback management project for a large local club soccer organization, helping them assess their relations with members.  In the survey we asked the "likely to recommend" question and came up with over 50% promoters and an NPS of 23.  Without going into detail about the survey, what struck me was how engaged this club's promoters were in its success.  We asked promoters how often they had recommended the club in the last year.  Over 95% indicated one or more recommendations.  Over half recommended more than five (5) times.  And, about a quarter recommended more than ten (10) times.  Many promoters indicated, via open answers, that they were continuing to work on behalf of the club by actively referring people.  Clearly, a very engaged group.
This particular club is premium priced (2x the cost of many other club programs), operating in a very competitive market, and is a relatively new entrant.  So, it seems to me that the engagement of their promoters has largely been the driver of their successful growth.
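For anyone unfamiliar with the NPS arithmetic: the score is the percentage of promoters (9-10 on the 0-10 "likely to recommend" scale) minus the percentage of detractors (0-6). A quick Python illustration, using made-up scores that roughly match the survey above (just over 50% promoters, NPS of 23):

```python
# NPS = % promoters (scores 9-10) minus % detractors (scores 0-6).
# The scores below are fabricated to match the rough shape of the
# club survey discussed above; they are not the actual survey data.
def nps(scores):
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

scores = [10] * 52 + [8] * 19 + [5] * 29  # 100 hypothetical respondents
print(nps(scores))  # 52% promoters - 29% detractors = 23
```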
  
In thinking about engagement by this group of promoters, it occurs to me that the "likely-to-recommend" question is a natural "trigger" question for identifying engaged customers.  But, based on open answer responses from "neutrals" and even "detractors" at the high end of the detractor scale, I think engagement questions should be posed to more than just promoters.  In this survey, as well as others I've done, I've seen enough similarity in open answer responses from neutrals and "high end" detractors to believe that substantial levels of leverageable engagement exist amongst those customer subsets.  At a minimum, it's worth asking them how often they are recommending, as well as one or more other questions that indicate how engaged they are.

The next challenge, of course, would be to leverage engagement amongst these "lower tier" engaged customers (i.e. neutrals and high-end detractors with similar characteristics to promoters).  Doing that requires acting on their feedback quickly and effectively to deal with the issue(s) that put them into the neutral or detractor bucket in the first place.  It would, I think, also require that something else be done that encourages the "promoter" behaviors they are already partially exhibiting.

Saturday, July 23, 2011

Vovici / Verint Merger - Some thoughts

About a week ago Verint Systems acquired Vovici for $76 million.  Verint is a supplier of Workforce optimization solutions for contact centers and has a Customer Analytics platform that it seems Vovici products are going to fit into.

How will this affect the market for EFM products?
 
My opinion: Verint's acquisition of Vovici will be very good for a lot of the other vendors and potentially quite bad for a few.  Almost inevitably, Verint is going to be looking for ways to recapture its up-front cash investment of $56 million as well as its downstream payments of nearly $20 million.  Vovici has been by far the largest spender on marketing in the EFM space to date.  So, over the next year or two, I would expect lower marketing spend by Vovici, meaning less mind share and ultimately less competition, particularly in the middle of the EFM tools market.

Needless to say, I expect the middle market for EFM solutions to be where the largest changes will occur.  Organizations where the annual fees charged by Vovici are between $5K and $45K per year will begin to find that Vovici is less and less inclined to work with them on pricing and support issues as resources are diverted to large-account pursuit initiatives based on Verint account relationships.  In addition, the reduced Vovici marketing spend targeted at these accounts means that more organizations will look at non-Vovici solutions.  Companies that are likely to benefit include QuestBack, ConFirmIt, Qualtrics and KeySurveys, among others.

The low end (under $5K/year) of the market shouldn't really be affected at all by Verint's acquisition of Vovici.  Vovici competes here at times, largely as a result of old ex-Perseus accounts.  But I would expect that Vovici will not devote (and really hasn't devoted) any sales effort to competing at these price points any longer.  Beneficiaries will be Zoomerang, SurveyMonkey, QuestionPro and others.

Competition at the high end of the market (contracts above $50K/year) will likely increase substantially.  Satmetrix, Medallia and MarketTools (for its CustomerSat product) will find themselves needing to defend an increasing number of their accounts as the Verint sales organization introduces programs incorporating Vovici into existing Verint customer contracts.

Vovici also has a well developed partner program.  Consulting groups like Walker and Omega may find that Verint wants a larger chunk of the consulting revenue pie, and that Verint has products that compete with the add-ons these partners offer.  This too may open up opportunities for other high-end or middle-tier competitors.

All in all, if I had to guess I'd say that all the high end EFM vendors are soon going to be looking for larger partners who operate in the contact center so that they can fight off Verint using their own consolidated contracts and solutions. But, it seems to me that the rest of the vendors at the middle and lower end of the market should find growth easier going forward.

Having said all that though, I've observed and participated in the EFM market for 4+ years and been wrong before.  And, beyond observational acumen, I have no special insights into Verint's plans for Vovici going forward.  So, as usual, caveat emptor.

Saturday, July 16, 2011

EFM's Value Proposition Revisited

What is the real value of Enterprise Feedback Management?

Bruce Temkin's recent article titled "EFM is Dead" (www.customerexperiencematters.com) stimulated a fair amount of discussion in various blogs and forums devoted to Voice of the Customer issues.  His essential point is that customer-oriented feedback and analysis EFM platforms need to evolve so as to assimilate more streams of feedback data than they currently handle (i.e. mainly survey-based feedback).  My thought on the matter: it's a fair point, and it's likely to prove true for some businesses and applications today, as well as for most going forward.

My issue with Bruce's line of reasoning is its focus on customer-oriented applications of EFM, where admittedly, vendors are working (struggling?) to incorporate new streams of social media-based feedback into their feedback management models.  Yet, EFM's value proposition has always been based on more than just customer feedback.  In many businesses, EFM's primary value proposition has more to do with workforce / talent management than it does with customer experience.  So, to say that EFM should become "Customer Insight and Action" (CIA) when EFM is about customer, partner, employee, investor, training and other applications of feedback (as well as the intersection of insights from those different data sources) seems somewhat premature and maybe just a tad parochial, at least to me.

Given the debate about acronyms, I thought I might add something to the dialogue by revisiting the value proposition offered by EFM platforms.

First.  EFM's value proposition to a business is directly proportional to the number of processes where it is employed.  If you employ feedback management only in your VOC process, your payback only comes from improvements based on VOC (which of course can be many).  If you employ EFM in lots of processes, the value proposition can come from lots of insights from many parts of your business.  I've included a graphic (courtesy QuestBack) showing application areas where EFM is applied in business processes.
As you can see, EFM can be applied to Sales, Marketing, Administration, I/T, Training and HR processes.  And, in multiple ways within each.

Second.  EFM's value proposition is directly proportional to the value of key business relationships.  Obviously customer relationships are important, even critical, for many businesses.  Yet, for many businesses, partner relationships, regulator relationships, employee relationships, etc. may have equal or even greater value.  The point: What does it cost you as a business when a key business relationship goes sour, especially if you don't know it is souring?  The larger a business gets, the more key relationships it has.  Those relationships should be paid attention to regularly.  EFM, and especially closed-loop EFM (C-EFM?), helps you do this.

Third.  EFM's value proposition is enhanced by its ability to take feedback streams from multiple, divergent data sources and put key types of data into concise metric based form.  An example being Customer Satisfaction juxtaposed with Employee Satisfaction and / or Partner Satisfaction.  Most serious EFM platforms have mechanisms that let businesses "see" the effects of changes in one set of metric data on other sets of metric data.

In short EFM may be a dying acronym.  But EFM's benefit set goes way beyond CIA for most businesses.

Thursday, June 23, 2011

Etuma - Affordable Text Analytics

Like a lot of people who help businesses with customer feedback, these days I'm frequently asked about text analytics.  Sentiment analysis in particular seems to be the buzzword du jour.
Clearly, the growth of social media as a customer-connecting technology has made text analytics a more important technology for categorizing and assessing large volumes of free-text feedback.  More importantly, as more and more companies actively solicit feedback via e-mail and web surveys, more open text comments need to be evaluated.  Text analytics, of course, is very valuable in this regard as well.  This is particularly true if your survey tools can't filter responses on the basis of sentiment-based answers (as in Net Promoter surveys, for instance).

As usual in emerging markets and technologies, the leading vendors are typically quite expensive and often underperform when objectively compared to solutions from less well known vendors.  That appears to be happening again with text analytics.

Having said that, I think I've found a text analytics vendor that is both reasonably priced and very effective at processing text into usable insights.  The company is Etuma.  They are from Finland and, impressively, can assess textual feedback in 10 different languages with 90%+ accuracy and categorize that feedback seamlessly into a single report in a single language (very useful for US companies doing business in Europe).  They also do sentiment analysis at a more granular level than the leading text analytics vendors.

To my mind, Etuma offers a really great tool for customer support organizations that operate worldwide, or for US companies looking to do a better job assessing their foreign-sourced (non-English language) customer feedback.

With 10 languages "out of the box", the ability to pull all feedback into a single report and granular sentiment analysis, all in an affordable tool, Etuma would seem to be a vendor that lots of North American companies would want to talk to.

Check them out at http://www.etuma.com/

Wednesday, June 8, 2011

Customer Feedback and the "Dialogue Dilemma"

Just published my first post on QuestBack's "Friends of Feedback" blog site.  If you're interested you can check it out at:

http://www.friendsoffeedback.com/customer-feedback-and-the-dialogue-dilemma/

An excerpt follows:

Generally speaking, businesses seek customer feedback for purposes of identifying customer issues and taking actions that improve their satisfaction and loyalty. Yet, the majority of solicited customer feedback isn’t acted upon. And worse, there are indications that if feedback isn’t acted upon, negative customer experiences result. An exact opposite effect to the one intended. I call it the “Dialogue Dilemma”.


There's a bunch of other good information on customer feedback management on the F-of-F blog too....





Friday, June 3, 2011

My 7 "Rules" For B2B Customer Surveys - updated with some expert input

I was recently surfing the web and found myself at http://www.netpromoter.com/.  I visited their blog and found a post by Satmetrix CEO Richard Owen that talked about my Rule #7 for customer surveys, discussing the effects on him of a survey he took that failed to incorporate known information (thus violating the rule).  Rule #7 states: "Never ask a customer a question you already have the answer to."  Richard did a great job documenting why this is a great rule for customer surveys, so I thought I'd share his post here.
http://www.netpromoter.com/netpromoter_community/blogs/richard_and_laura/2011/04/16/hello-old-friend-who-are-you-again

For anyone who wants to read my original post, it's here on the site.

Wednesday, May 25, 2011

New EFM Blog from QuestBack

QuestBack has a cool new Blog site called: "Friends of Feedback".  The name is a little hokey, but it has pretty good content designed for people who use or plan to use enterprise feedback management technology in their businesses. 

The content, so far anyway, is very much focused on the value-add of feedback, tips for doing feedback correctly, etc.  It's not a site that "flogs" QuestBack.

I encourage people to have a look...

www.friendsoffeedback.com


Sunday, May 22, 2011

Feedback vs Research - Thoughts on doing both better using EFM

Customer feedback and customer research are two phrases that used to mean the same thing but are now differentiated because EFM technology allows dialogue to be spawned from electronic surveys.  But, like many change agents, users of EFM have created some new business problems through their use of the technology.  With customer research, virtually anyone today can freely and easily overburden their customers with survey after survey, all delivered over the internet or via e-mail.  So, there is a new business problem: negative customer experiences stemming from survey responses that are not followed up.

I've been thinking about how companies can manage around the divergent goals of customer research and customer feedback so that negative customer experience does not result.  I'm not sure I've got great answers to the problem, but thought I'd jot down some ideas.

Fundamentally, a customer feedback survey is distinct from a research survey in that its solicitation of feedback is an invitation to dialogue, whereas a research survey is an information-gathering exercise.

Most companies' customer survey processes share three characteristics:
  • Both types of surveys target many of the same people
  • The same internal entity (market research) often develops and administers the surveys 
  • And the same survey tools are often used
It's not surprising to me, therefore, that feedback and research efforts "look and feel" a lot alike to the customers being surveyed.  As a result, I think there is a significant amount of confusion amongst customers as to when their feedback will produce a dialogue and when it won't.  My guess is this confusion is at the heart of the negative-experiences issue.

Furthermore, research shows that even today most customer feedback doesn't result in a dialogue.  And, survey participation rates are generally declining, except for companies who go to great pains to "close-the-loop" on customer feedback. 

"Fixing" the problem is, of course, easy.  Just always follow up on every survey response given by every customer.  Needless to say, that "solution" is impractical for most companies; most can't even consider responding directly to all the feedback they solicit.  I think, though, that with the right tools and processes they could easily respond to a lot more of it than they do now.

So, what I think companies need to do is create a set of customer survey "rules of engagement".  Rules that would govern how they should approach both customer feedback and customer research surveys.  My ideas about what those rules ought to be are outlined below:

Customer Survey “Rules of Engagement”

For Customer feedback:

• Try to integrate surveys with CRM, ERP, HRM and other systems where practicable. This can work especially well for short transaction based surveys, which are relatively easy to integrate.  Just be sure to impose rules on the users of those systems to pursue follow up on all feedback received.

• For customer relationship surveys (usually done once or twice per year) create a follow up plan for all customers and response scenarios that you can reasonably think of.
  - Make sure your survey system generates an “alert” or “notification” to a specific person for each foreseeable response scenario.
  - Any feedback that doesn’t “fit” a response scenario should be directed to the most senior available company officer.
  - This ensures an action plan will get created for almost everything that can fit into a response bucket.
  - Keep in mind that response scenarios will need to involve people other than the sales or customer support staff. Ensure those other groups understand the obligation to customers to respond to feedback alerts that will be generated.

• Relationship surveys tend to be difficult to integrate with other systems, as they tend to change over time and often blend in research questions. So these surveys should only be done in systems that provide follow up mechanisms that are independent of CRM, ERP or HRM systems.

• Keep in mind that if you are soliciting information from all of your customers, or all customers within a segment, you’re probably seeking feedback, not doing research. So plan for a dialogue.
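The alert-routing idea above can be sketched in a few lines of code.  This is a minimal illustration only, not any particular EFM product's API; the scenario rules, addresses and field names are all hypothetical.

```python
# Route each survey response to an owner based on foreseeable response
# scenarios; anything that doesn't fit a scenario escalates to the most
# senior available officer. All names and rules here are hypothetical.

SCENARIOS = [
    # (predicate over a response, owner to notify)
    (lambda r: r["score"] <= 6, "support_manager@example.com"),
    (lambda r: r["score"] >= 9 and r["wants_contact"], "account_manager@example.com"),
]
ESCALATION_OWNER = "coo@example.com"  # most senior available officer

def route_alert(response: dict) -> str:
    """Return the address that should receive an alert for this response."""
    for predicate, owner in SCENARIOS:
        if predicate(response):
            return owner
    return ESCALATION_OWNER  # no scenario fits: escalate

print(route_alert({"score": 4, "wants_contact": False}))  # support manager
print(route_alert({"score": 8, "wants_contact": False}))  # no match: escalates
```

The important property is the fall-through: every response gets an owner, so nothing goes unanswered by default.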

For Customer Research:

• Try to use anonymous surveys for research. Preferably deliver them to a sample subset of your customers or customer segments. People don’t expect follow up on anonymous surveys and don’t feel guilty about abandoning them in mid-stream.

• Try to employ “research panels” containing customer volunteers. People who you have a profile for, who know that surveys may not be followed up and don’t mind acting as research fodder.

• Label research surveys as RESEARCH. Avoid using the word “FEEDBACK”, which connotes dialogue. Be sure to tell research survey participants that no follow up is planned and that the research won’t result in a dialogue.
  - This may hurt response rates
  - And, may force you to incentivize your research surveys

• Apply strict contact rules for all research-oriented surveying. My guess is that no customer should receive more than one research survey in a given year.
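That contact rule is easy to enforce mechanically before a mailing goes out.  A minimal sketch, with hypothetical field names:

```python
from datetime import date, timedelta

# Skip anyone who received a research survey within the last 365 days.
# The contact-record fields here are hypothetical.

def eligible_for_research(contacts, today=None):
    today = today or date.today()
    cutoff = today - timedelta(days=365)
    return [c for c in contacts
            if c.get("last_research_survey") is None
            or c["last_research_survey"] < cutoff]

contacts = [
    {"email": "a@example.com", "last_research_survey": date(2011, 1, 10)},
    {"email": "b@example.com", "last_research_survey": None},
]
# As of May 22, 2011, only "b" is eligible; "a" was surveyed too recently.
print(eligible_for_research(contacts, today=date(2011, 5, 22)))
```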

These are ideas and thoughts.  I'd welcome feedback from readers on other "rules of engagement" with customers being surveyed.

Thursday, April 21, 2011

Does a "C.I.A." equal an "E.F.M." ?

 Some comments on Temkin's Six "D's" of Closed Loop VOC systems

"20 best practices across these 6 Ds that will increasingly push companies to invest in Customer Insight and Action (CIA) platforms" - Bruce Temkin.

I'm a big fan of Bruce Temkin and his Customer Experience Matters blog.  So, when he released his recent report titled: "Voice Of The Customer Programs Grow Up" and blogged that businesses will be forced to invest in CIA platforms - what I call Enterprise Feedback Management (EFM) systems,  I took notice. 

As someone who personally uses, sells and supports a robust EFM system (called QuestBack) it is nice to hear an eminent customer experience figure like Bruce talk about the need for businesses to acquire EFM systems to help them manage their feedback. 

As it happens, QuestBack (http://www.questback.com/) offers out-of-the-box support for (but maybe not a complete solution to) five of Temkin's six "D's" (Detect, Disseminate, Diagnose, Design and Deploy).  And, I'm not sure any system supports the sixth "D" (Discuss).  More important though, I think, is Bruce's assertion that market research organizations need to change or risk becoming obsolete. 

My own view is that market research organizations need to change their perspective on feedback, from a "research" perspective to a "feedback" perspective.  This isn't to say that market research tools should be abandoned, just that market and customer data needs to be viewed as feedback contributing to a greater, holistic knowledge of "the customer".  Today, research is often developed in isolation from that holistic customer viewpoint, "siloed" by product or department.

It also seems to me that with so much new data available to organizations via social media, organizations would be well served to monitor those data streams, extract key findings regularly and seek to validate (or invalidate) those findings via structured ad-hoc feedback (surveys) to their existing stakeholders.  By doing so, they would create a process that takes maximum advantage of the available technologies for listening and diagnosing market and customer trends, as well as the rapid action management tools available to them through EFM systems.

At any rate, here's a "shout out" to Bruce for another very interesting report.  The report costs money ($195.00), but you can read his executive summary and a couple of his posts about it on his site.  Link is:  http://experiencematters.wordpress.com/2011/04/17/6-ds-for-voice-of-the-customer-programs/

Tuesday, April 12, 2011

The Survey is Dead - "Long Live the Survey"

This has become a hot topic in the LinkedIn forums I follow as well as in some of the voice-of-the-customer focused media. The headline would lead you to believe that social media based feedback will over time replace customer surveys both for Market Research and Customer Management purposes.

In my opinion, social media may actually increase both the frequency of customer surveys and the value received from them.  Here's my logic.  When an issue is surfaced via a social media channel, organizations need to answer at least four questions when determining a response:
  • What is the issue's potential effect on the business?
  • What is its relative importance?
  • Who owns it?
  • Who's going to respond?
For some issues, the answers will come easily through the social media dialogue mechanism itself.  For more substantial issues, though, much of the data needed to answer those four questions will, I believe, have to come from surveys.  So as issues surface, surveys will have to be sent out to put some structure and context behind the social media generated data.
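The four questions above amount to a small triage record per issue.  A sketch, with hypothetical fields and an assumed importance threshold for when a structured follow-up survey is warranted:

```python
from dataclasses import dataclass

# A record capturing the four triage questions for an issue surfaced
# via social media. Fields, scale and threshold are all hypothetical.

@dataclass
class IssueTriage:
    issue: str
    business_effect: str   # what is the issue's potential effect?
    importance: int        # relative importance, say 1 (low) to 5 (high)
    owner: str             # who owns it?
    responder: str         # who's going to respond?

    def needs_survey(self) -> bool:
        # Substantial issues warrant a structured survey to validate
        # what the social media stream suggests.
        return self.importance >= 4

t = IssueTriage("checkout complaints on Twitter", "possible lost sales",
                4, "ecommerce team", "community manager")
print(t.needs_survey())  # True
```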

Secondly, surveys and especially web-based customer surveys, simply provide organizations with too much valuable, actionable feedback (at a very low cost) to stop being the core feedback management tool for most companies. As EFM technology improves and cost-of-ownership declines this technology will be deployed by more organizations, more often than it is now.

Having said all that, social media is a wonderful new tool for listening to customer (and other constituency) feedback.  It allows for the almost immediate gathering of qualitative feedback, which alone, and especially when used with survey-based feedback, can provide very rich insights.  In the absence of structured feedback, though, this unstructured feedback might just as easily prove not valuable or, worse, resource-consuming while adding no value.

Lastly, it seems to me that social media, because of its interactive nature, could be expensive to rely on as a feedback mechanism.  People, after all, have to monitor it, decide whether it should be acted on at all and how, and potentially do all of that quickly, at all hours of the day or night. 

In the end, I think social media will be additive to the toolset used for listening to customers and creating dialogues with them.  What its best uses will eventually be is, I think, still an open question.

Friday, April 1, 2011

Convergence of CFM and VOC - Some additional thoughts...

A couple of weeks ago I wrote a post on this topic with the idea in mind that technology convergence - mainly feedback management and text analytics - would eventually change the way organizations manage their feedback by providing a single application where feedback of different types could reside.  After doing some more reading (and thinking) on the subject, it seems to me that this convergence will take a long time to have any real organizational impacts.  Mainly, because entrenched business functions within organizations don't easily change the way they do things.

A couple of my thoughts on how converged Enterprise Feedback Management and Text Analytics solutions will actually operate in the real world.

I suspect that for organizations that employ EFM today, text analytics will be an add-on capability with two main uses:  to provide some automation for analyzing verbatims from surveys and, possibly, to provide a mechanism to merge at least some of their unstructured feedback with the structured feedback they get now. More importantly, though, for EFM vendors facing increased competition from lower cost feedback management tools, text analytics will provide some "glue" to bind customers more closely to them.

For organizations that mainly rely on unstructured feedback mechanisms today, convergence may mean easier access to, or possibly more intelligent approaches to, structured data gathering. Text analytics can help to create a structure for unstructured data.  But because it's hard to control for respondent profile in unstructured data streams (ensuring the responses aren't all from the "wrong" sets of people), whatever structure and results come out of the unstructured data analysis will have to be validated by surveys, which provide more controlled sets of results.

In the end little real change is likely to occur in organizations from this technology convergence. But lots of expensive software should get sold along the way.

Monday, March 28, 2011

Customer Effort Score - Is it another way to measure employee engagement?

I've recently been doing some reading about Customer Effort Score (CES) and the relationship it appears to have with customer satisfaction (CSAT) and customer loyalty (CL), as measured by Net Promoter Scores (NPS).  If I understand the literature, as customer effort goes up in service engagements, so does the company's "detractor" rating within the NPS metric.  And NPS therefore goes down.  This got me thinking about the theory espoused by Heskett et al. in the Service Profit Chain (http://hbr.org/2008/07/putting-the-service-profit-chain-to-work/ar/1), where employee engagement is postulated to be a driver of CSAT.  Though I've always believed that employee engagement directly affects CSAT / CL, I had difficulty finding companies that were mapping employee engagement metrics against CSAT / CL metrics.  So, how could anyone really tell?  It obviously made sense, but how much of an increase in employee engagement was needed to improve CSAT / CL meaningfully?  And at what cost?

Anyone who's done customer support work knows that he or she can often place more of the effort of problem resolution on to the customer, if they want to.  Or, they can take more of the effort on to themselves.  Highly engaged employees try to shift effort on to themselves in the full knowledge that the effort avoided by the customer increases that customer's loyalty and satisfaction.  Less engaged employees do the opposite, shifting effort to customers where possible, with the concurrent side effect of lower satisfaction and loyalty over time.

So, in my mind, Customer Effort Scores are a proxy for an employee engagement metric.  One which can be implemented by the support organization itself.  And, if done correctly, it can be tracked back to the support and account people who are ultimately responsible for the revenue associated with the affected customers. 

CES is, in effect, the missing link between employee engagement and customer loyalty.

Monday, March 21, 2011

Achieving high value feedback from short customer surveys

Web survey developers constantly strive to balance survey length against the data captured.  An optimum survey length ensures a low drop-off rate while meeting the survey's data acquisition objectives.  In practice, most feedback projects sacrifice either data capture objectives or response / drop-off rates.  With customer surveys in particular, it's important both to get high response and to capture the required data.

So how can you achieve both high response and a large quantity of gathered data?  I offer these three techniques:
  • The single best technique for keeping surveys both concise and high value is to "pre-load" data into your survey database.  In the customer survey context, lots of data is typically available about customers.  Their names, purchases, account managers, regions, etc. are all known.  Often this information has already been synthesized into reporting elements in the company's customer data warehouse.  Pre-loading some of this information into your survey database ahead of time means it can be used to filter your survey responses into more useful information.  It can also be used to pre-answer some questions or to automatically "route" or "branch" the questionnaire, helping to shorten it.  But most of all, any data you can pre-load from your customer databases is data you don't have to ask questions to acquire. 
  • Question routing or branching is another great way to shorten surveys.  Branching lets you ask questions only of those people who can or should answer them, thus shortening the questionnaire for all participants. 
  • Data Piping is also a great technique for shortening questionnaires.  By inserting pre-loaded data into survey questions or answer alternatives, piping saves you from the need to ask for data in order to answer a question.  If your survey system can both pipe in data and automatically branch / route based on piped in data it makes the survey doubly efficient from a time utilization perspective.
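The three techniques work together.  Here is a toy sketch of all three in one pass, assuming a hypothetical question list and pre-loaded customer record; real EFM tools provide branching and piping declaratively rather than in code.

```python
# Pre-loading, branching and piping in miniature. The "survey engine"
# below is purely illustrative; all data and question text are made up.

preloaded = {"name": "Pat", "product": "WidgetPro", "region": "EMEA"}

questions = [
    # (id, question text template, ask-condition over pre-loaded data)
    ("q1", "How satisfied are you with {product}?", lambda d: True),
    ("q2", "Which product do you use?", lambda d: "product" not in d),
    ("q3", "How was EMEA support this quarter?", lambda d: d.get("region") == "EMEA"),
]

asked = []
for qid, text, should_ask in questions:
    if should_ask(preloaded):                      # branching on pre-loaded data
        asked.append((qid, text.format(**preloaded)))  # piping data into the text

print(asked)  # q2 is skipped: the answer was pre-loaded, so it's never asked
```

Note that q2 never reaches the customer at all; that is the core payoff of pre-loading.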

By using data you already have, along with "branching / routing" and "piping" techniques you can design your questionnaires to be concise while gathering lots of actionable and useful data, and do it without annoying your customers to the point where they won't give you the feedback you need.

Monday, March 14, 2011

Convergence of CFM and VOC

A number of research organizations, including Gartner and Forrester, have recently written about how Customer Feedback Management (CFM) and Voice of the Customer (VOC) are converging in businesses, based on the spread of new social media technologies for gathering qualitative feedback.  I came across a great article written by Leslie Ament of Hypatia Research (http://www.hypatiaresearch.com/) that gives a good description of how and why this is happening, as well as what to think about if you are considering incorporating the new social media feedback channels into your customer feedback processes.

Customer Feedback Management (CFM) has traditionally been almost entirely based upon customer surveys.  Mainly, customer satisfaction surveys.  But, in the last several years also on customer loyalty surveys.  CFM's purpose was to gather operationally useful customer information for customer retention, sales intelligence (i.e. prospect identification) and marketing input (largely by IDing super loyal customers to do case studies on).

Voice of the Customer (VOC) programs, though also reliant upon customer surveys - often customer satisfaction surveys or sometimes CSAT questions in broader customer surveys, also included customer feedback via market research studies, data from call centers, bulletin boards, chat rooms and the like.  A variety of analytical techniques are used to distill the customer's "voice" from the data.  Today, the number of on-line feedback sources includes twitter feeds, blog commentary, Facebook, LinkedIn and other on-line communities, as well as all the other stuff.  VOC initiatives were organized around the need to maintain a clear understanding of company value proposition, competitive strength and weakness, new market opportunities, new requirements for existing products and the like. 

Technology is beginning to allow a blending of customer feedback channels such that a fuller picture of the customer can be achieved by incorporating the best elements of CFM and VOC. 

CFM was driven mostly by the customer service and sales organizations.  VOC by Marketing.  So it would seem that some new "customer centric" structure needs to emerge in organizations to manage the convergence of all the customer data as well as the reporting and business process dynamics that will result. This is a point Bruce Temkin has been making in his commentary at his blog "Customer Experience Matters" (http://experiencematters.wordpress.com/).  Anyway, the article follows below.  Enjoy!

http://www.b-eye-network.com/view/14784

Monday, February 28, 2011

Feedback Enhanced Selling

Improving Sales Processes using Pre-Lead, Lead and Prospect feedback

As a career business development guy, I've always been intrigued by the potential of feedback enhanced selling.  And, in particular since I began representing QuestBack, a feedback management tool.  Yet even for me, until recently it seemed like collecting feedback from leads and prospects was just as easily done via direct e-mails or phone calls.  After all, what else are sales people supposed to do except directly communicate with leads, prospects and customers? 

When I look at sales jobs today, often the expectation is for sales people to contact 100 people per day by phone.  Now, no one expects anyone to have conversations with 100 people daily.  But these firms want their sales folks to dial 100 times a day in order to have maybe five meaningful conversations.  Every time I see this it bothers me.  It would appear that companies are willing to pay people to engage in behavior that results in essentially a 95% failure rate.  And, with buyers not answering their phones at all, will the number of calls be 200 a day next year or 500 five years from now?  I don't know.  But ten years ago expectations were for sales people to contact 50 leads per day.  So, the number is rising and, unless a new process is found, it's likely to continue to do so.

So the question is:  Is there a way to get from 5 conversations in a day to 6, 7 or 8?  I think possibly there is.  Using web or e-mail based surveys may offer a solution.  Some reasons leads don't engage with sales people are: the timing is wrong (no plans to buy for awhile), the solution "fit" is bad, or more preferable alternatives are available and "on the table".  Hence, no desire to talk to your sales people and no contact is made.  Done well, lead feedback will help sales reps with timing their calls to contacts, with understanding the contacts' needs and with avoiding contacting leads where no fit exists.  The result: less stressed sales reps, more targeted calling, more prospects in the funnel, and happier, more qualified leads and prospects.  All good stuff.

Some thoughts on when to gather feedback from leads or prospects:

  • During website visits.  Surveys should be offered to visitors regarding the information being presented.  Idea is to identify if the information provided is sufficient and helpful or if they want more or even a contact from you.
  • After a lead enters the sales funnel.  A survey should be done to ascertain sales cycle stage, needs analysis and timeline perception. Idea is to gauge interest level, understand budgets, determine competition and understand perception of your company's offering.
  • After a sales person has made direct contact with a lead.  Idea is to determine if the lead learned what they needed from you to move forward.  And, did his perception of you improve or deteriorate?
  • After an inside sales representative designates a lead as a prospect.  Idea is to validate the action and learn what the new prospect perceives about your offering that causes him to be interested in additional dialogue.
  • After a "key account manager" has been in contact with a prospect.  Idea is to validate that the prospect sees the product fit and sales cycle in the same way your KAM has reported it.  Collecting feedback 30, 60 or 90 days after a relationship has been started should help with understanding the possibilities for a sale occurring.
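The trigger points above lend themselves to event-driven automation.  A minimal sketch follows; the event names, survey names and dispatch mechanism are hypothetical, not QuestBack's actual API.

```python
# Trigger a short feedback survey at defined points in the sales cycle.
# In a real deployment these events would come from a CRM; here the
# "send" is just recorded in a list so the flow is visible.

TRIGGERS = {
    "lead_entered_funnel": "needs-and-timeline survey",
    "first_sales_contact": "contact-quality survey",
    "promoted_to_prospect": "offering-perception survey",
}

sent = []

def on_sales_event(event: str, lead_email: str) -> None:
    survey = TRIGGERS.get(event)
    if survey:
        sent.append((lead_email, survey))  # stand-in for an e-mail send

on_sales_event("first_sales_contact", "lead@example.com")
on_sales_event("unrelated_event", "lead@example.com")  # no survey mapped: ignored
print(sent)
```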
Technology already exists to automate a great deal of this kind of effort. QuestBack has even built a series of templates to use for automating feedback collection from leads and prospects during the sales cycle.  An example of QuestBack's sales process feedback approach can be found here: 

http://www.questback.com/areas-of-use/private-sector/lead-generation/

Tuesday, February 22, 2011

A Repost of a Good Net Promoter discussion with pros and cons...

I've written in the past about the Net Promoter methodology for measuring customer satisfaction and loyalty.  If you've read any of my Blog posts, you'll realize that I'm a big fan of net promoter. And, I frequently recommend its use to my customers.

Earlier today I was browsing a LinkedIn group where I'm a member, when I came across this post (which was a repost) of an article originally published in MarketingWeek magazine. 

I thought the article gave a pretty good account of the Pros and Cons associated with Net Promoter and its use.  So thought I'd share it here.  Enjoy.  The url is:

http://www.linkedin.com/news?viewArticle=&articleID=374834583&gid=1772348&type=member&item=44620162&articleURL=http%3A%2F%2Fwww%2Emarketingweek%2Eco%2Euk%2Fanalysis%2Ffeatures%2Fhow-to-get-more-from-your-score%2F3023551%2Earticle&urlhash=iHgy&goback=%2Egmp_1772348%2Egde_1772348_member_44620162

Thursday, February 3, 2011

Customer Surveys - Better results by Linking Responses with Business Data

In my last Blog post I talked about some things not to do in B2B customer surveys.  A big one for me is asking questions where the answer is already known to your company.  Business data is a huge area of customer questioning that fits the "Don't Ask" profile. Some examples of survey questions I've seen where business data is asked for:

- "How long have you been a customer?"
- "What region are you located in?"
- "Please select the [Company] products you have purchased or use?"
- "When did you make your last purchase from us?"

When asking this kind of question, the surveyor is taking a "shortcut" by having the customer fill in or validate data that already exists in the company's databases.

The solution is to create linkage between customer surveys and business data.  By doing so, you can avoid asking your customers for business data and focus on the information you actually need.  Fortunately, it's usually fairly easy to get business data for the customers you plan to survey.  And, by organizing your survey process to take advantage of business data, your surveys can be shorter yet still effective from an insight perspective, while being considerate of customer time.

Four techniques for incorporating business data into customer surveys

Needless to say, linking survey responses to business data is not a new challenge.  And, companies often go to great lengths to do it.  Larger firms often integrate customer surveys with other systems, so that the information derived from surveys becomes part of the customer record and can then be extracted, aggregated and analyzed along with other customer data via the company's normal reporting mechanisms.  But even in this scenario, surveys that are not integrated with other systems (because I/T has to do the integration on a survey-by-survey basis) still suffer from the linkage challenge.  So, companies have evolved four techniques to link business data with customer survey responses.

1. Brute force. In this popular approach (often deployed when using low-end survey tools) a customer list is developed in a spreadsheet, with business data included for each individual to be surveyed.  An identifier is "coded" to each individual customer in the spreadsheet. Each e-mail to be sent is also coded with the same identifier (hopefully) to match the data in the spreadsheet.  Most survey tools allow this kind of coding and will "kick out" resulting survey response data in a spreadsheet file. If coded correctly, matching up survey responses with business data later is pretty easy.  Reporting can then be done via spreadsheets or other data manipulation tools.  The downside is that this is a time consuming and error prone process.  Data has to be assembled, coding assigned and used properly, responses re-matched after the survey, then handed off to a spreadsheet guru to generate the analyses and reports.  The time investment often outweighs the cost savings of using a low-end survey tool.

2. The multiple mailing method. This is a similar but slightly more sophisticated approach to brute force. Instead of coding respondents and then matching responses later, the spreadsheet is filtered in advance by the needed business data.  Each filtered subset is then sent the survey. When responses come in, you already know that batch #1 is from "Region A large customers", batch #2 is from "Government accounts in California", etc. Survey responses organized and sent this way are easy to interpret and report on, but hard to do subsequent analyses on.  Needless to say, this method is also somewhat cumbersome in that several or possibly many e-mailings must be set up and scheduled.  And, care must be taken to ensure that no individuals are in multiple subsets (or they'll receive multiple survey invitations).

3. The customer panel.  In this method business data is stored within the survey tool in a "panel" (a separate database).  Surveys are sent to panel members and responses are automatically tagged with the information stored about them in the panel.  This is generally a good approach. Its main flaw is that the panel needs to be refreshed or updated periodically so that its business data stays relevant.  A second potential flaw is that survey tools with built-in panel support are often at the high end of the market, or panel support is an extra-cost feature of the tool.

4. Pre-load business data into the survey. In this method business data and customer names are loaded into the survey tool, the survey is designed and e-mailed out.  When responses are received they are already tagged with business data, and responses are filterable based on the business criteria loaded to the survey.  Slicing the data becomes fast, easy and fairly painless. If the survey tool has good analytics and reporting, this approach can save lots of time and provide immediately actionable data for follow up.  It also doesn't require panels, integration with CRM or other systems, or spreadsheet gurus for data analysis, and it's not subject to data matching errors post survey.
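Method #4 in miniature, as a sketch with illustrative field names.  The point is that responses are tagged with business data at capture time, so no post-survey matching step exists to get wrong.

```python
# Business data pre-loaded per respondent; each response comes back
# already tagged and is immediately filterable. All names are made up.

business_data = {
    "cust1": {"region": "West", "segment": "Enterprise"},
    "cust2": {"region": "East", "segment": "SMB"},
}

def record_response(cust_id, answers):
    # Tag the response with the pre-loaded business data as it arrives
    return {"cust_id": cust_id, **business_data[cust_id], **answers}

responses = [
    record_response("cust1", {"nps": 9}),
    record_response("cust2", {"nps": 6}),
]

# Slice with no post-hoc matching: the tag is already on the record
west = [r for r in responses if r["region"] == "West"]
print(west)
```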

I view method #4 as having the best combination of affordability, flexibility, analytics and reporting power, time conservation for customers and time conservation for the surveying company.  I am not aware of many survey tools that support method #4.  QuestBack is a feedback management system that does.

Wednesday, January 26, 2011

My 7 "Rules" For B2B Customer Surveys


Being in the business of selling, supporting and using web survey tools mostly for supporting customer surveys, I’ve seen hundreds of them. I’ve found that customer surveys and particularly B2B surveys need to be clear, concise and sensitive to the customer’s time. So, I’ve developed a set of “rules” that I try to follow when helping my customers with their customer surveys.

Rule #1 – Clearly define the objective for the survey

Many customer surveys I've run across have multiple (and sometimes conflicting) objectives. This happens a lot to businesses that survey their customers just once a year. In this situation, the customer survey ends up being everybody's vehicle for capturing "something". In the end it often results in too many questions, low response and completion rates, unclear follow-up actions and ultimately, annoyed customers who don't want to take your surveys.

So, I always try to define the “one critical data point” the survey must develop. And, only ask questions that provide qualification to that one data point. If that task can be achieved with fewer than 10 questions, secondary – but related – topics can be introduced.

A survey methodology that does this well is the net promoter method developed by Fred Reichheld. What I like about net promoter is that it's based on one question - "How likely are you to recommend [company] to your family and friends?" Other questions in a net promoter survey are designed to qualify responses to the "recommend" question. Done well, net promoter surveys are short and take less than a couple of minutes, while also providing insights, clearly interpretable data and clear follow up actions that have a purpose (enhancing relationship quality).
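For reference, the net promoter arithmetic itself is simple: on the 0-10 "recommend" scale, promoters score 9-10, detractors 0-6, and NPS is the percentage of promoters minus the percentage of detractors.  A small sketch:

```python
# Compute a Net Promoter Score from 0-10 "likely to recommend" answers.
# Promoters score 9-10, detractors 0-6; passives (7-8) are ignored.

def nps(scores):
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

print(nps([10, 9, 8, 7, 6, 3]))  # 2 promoters, 2 detractors of 6 -> 0
```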

Rule #2: Survey customers regularly.

It may seem counterintuitive to survey regularly when people complain about getting too many surveys. Yet, I think people mainly complain about "bad" surveys (i.e. too long, irrelevant questions, etc.). If you get a reputation as a "bad" surveyor, your response rates will be low and "opt outs" will be high. But, if your customer surveys are well done, i.e. short, easy to complete and always followed up, your customers will interact with you in relatively high numbers, even with quarterly surveys.

Rule #3: Always keep surveys short.

Personally, I don’t like to ever ask more than 10 – 15 questions in a customer survey.  And I try to avoid long matrix type questions. If you are using a regular survey process, after asking your 10 core questions you can ask some secondary related questions.  In a quarterly feedback process different sets of secondary questions can be rotated into the questionnaire.

Rule #4: Only send surveys to those people who can give you the data you need.

Blasting a survey to people who can’t give you the information you need just guarantees that you’ll annoy most of them.

Rule #5: Always act on each customer's survey response at an individual level.

Acting on survey responses is critical. Customers take their valuable time to give you feedback. You have an obligation to tell them what, if anything, you are doing in response to it. Not following up tells customers there’s no point to giving you future feedback.

Rule #6: Avoid irrelevant questions at all costs.

A common mistake: asking a purchasing contact to evaluate a product's technical capabilities. It happens all the time, but it shouldn't, and it makes you look bad when it does. After all, a customer almost always has to provide some information about who they are, where they're located, what they do for the company, etc. If multiple roles are being surveyed, use question branching, data piping or question routing to present only the relevant questions to each person. Most good feedback management systems support question branching, routing and piping, so there's no reason not to use those capabilities.
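To illustrate the branching idea in the abstract (this is not any particular EFM product's API, and the questions are invented), role-based routing amounts to tagging each question with the respondent roles it applies to and filtering at presentation time:

```python
# Each question is tagged with the respondent roles it is relevant to.
QUESTIONS = [
    {"text": "How easy was the ordering and invoicing process?",
     "roles": {"purchasing"}},
    {"text": "How would you rate the product's technical capabilities?",
     "roles": {"technical"}},
    {"text": "How likely are you to recommend us?",
     "roles": {"purchasing", "technical"}},
]

def questions_for(role):
    """Present only the questions relevant to this respondent's role."""
    return [q["text"] for q in QUESTIONS if role in q["roles"]]

print(questions_for("purchasing"))
```

With this structure, the purchasing contact never sees the technical-capabilities question, while shared questions (like the "recommend" question) go to everyone.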

Rule #7: Never ask a customer a question you already have the answer to.

It amazes me how often surveys ask me for data I've supplied to the company many times in the past. I hate doing it every time. I feel that whoever built the survey was too lazy to look up relevant information about me and, worse, was willing to waste my time gathering it again. So, when a survey crosses my desk asking "dumb" questions like what my name is, what products I have or how long I've been a customer, I simply abandon the survey.

More importantly, it's not considerate of a customer's time to ask questions you should already have the answers to. Think about it: you are asking customers to spend their time giving you information you could have looked up yourself.

I help businesses with customer surveys all the time.  If you'd like help with a survey project you're planning, please feel free to ask me for advice.  

My e-mail is: stew.nash2010@gmail.com
I use QuestBack for my survey projects. 



Tuesday, January 11, 2011

Action Management & Tracking Survey Results Available

I've been running a non-scientific survey (see sidebar) targeted at people who use customer feedback processes and also CRM systems. Additionally, I've been collecting information via LinkedIn posts and conversations with serious practitioners of Voice of the Customer (VOC) processes.  To get survey results just click here: http://www.nash-efm-consult.com/Action_Tracking_and_Management_Survey_Results.pdf

I've found that most companies that employ feedback management systems and a case management approach face two challenges. The first is verifying that follow-up occurred and was in some way effective (i.e., contributed to boosting NPS, CSAT or another metric). The second is evaluating, at a higher level, the types of issues front-line people are responding to and, of course, creating and communicating the strategy or higher-level actions that are the "real" organizational response.

I found that a large minority of organizations (48%) don't get much value from their Alert processes. This finding bothered me because 52% report receiving significant value, and 75% report that Action Management & Tracking are very important to the success of their customer feedback initiatives.  So, I think this number points to one of several possible issues, including (but not limited to):
  • Taking follow up actions is too resource intensive to justify
  • Follow up actions aren't perceived as effective (possibly because people aren't empowered to fix issues), so aren't being pursued.
  • The Action Management process isn't well supported by tools (CFM tools un-integrated with CRM?)
  • Others?
Another issue that I think should get looked into, but rarely does, is staff perceptions of their follow-up tasking. That is, do Account Managers and Support people:

- mostly give lip service when doing survey follow-up,
- solve real issues that benefit the business, and
- feel they are given the tools and resources to "add value" for customers?

I think companies need a metric that reflects employees' perception of the enterprise's effectiveness at dealing with and acting on customer feedback. It could be quite useful to gauge that sentiment on feedback "response effectiveness" alongside CSAT or NPS.

Some of the issues with Action Management and Tracking can be addressed by using surveys to revisit feedback customers provided 30 or 60 days earlier, and by periodically (quarterly?) surveying employees on the "response effectiveness" of their actions.