Wednesday, January 26, 2011

My 7 "Rules" For B2B Customer Surveys


Being in the business of selling, supporting and using web survey tools, mostly for customer surveys, I've seen hundreds of them. I've found that customer surveys, and particularly B2B surveys, need to be clear, concise and sensitive to the customer's time. So, I've developed a set of "rules" that I try to follow when helping my customers with their customer surveys.

Rule #1: Clearly define the objective for the survey.

Many customer surveys I've run across have multiple (and sometimes conflicting) objectives. This happens a lot to businesses that survey their customers just once a year. In this situation, the customer survey ends up being everybody's vehicle for capturing "something". In the end it often results in too many questions, low response and completion rates, unclear follow-up actions and ultimately, annoyed customers who don't want to take your surveys.

So, I always try to define the "one critical data point" the survey must develop, and only ask questions that qualify that one data point. If that task can be achieved with fewer than 10 questions, secondary but related topics can be introduced.

A survey methodology that does this well is the Net Promoter method developed by Fred Reichheld. What I like about Net Promoter is that it's based on one question: "How likely is it that you would recommend [company] to a friend or colleague?" Other questions in a Net Promoter survey are designed to qualify responses to the "recommend" question. Done well, Net Promoter surveys are short and take less than a couple of minutes, while also providing insights, clearly interpretable data and follow-up actions with a clear purpose (enhancing relationship quality).
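For reference, here's a minimal sketch (in Python) of how a Net Promoter Score is conventionally computed from that one 0-10 question; the function name and sample data are mine, not from any particular survey tool:

    def net_promoter_score(responses):
        # Promoters score 9-10, detractors 0-6; passives (7-8) count
        # toward the total but not the score
        total = len(responses)
        promoters = sum(1 for r in responses if r >= 9)
        detractors = sum(1 for r in responses if r <= 6)
        return 100.0 * (promoters - detractors) / total

    # Example: ten customers' "recommend" scores yield an NPS of 30
    print(net_promoter_score([10, 9, 9, 8, 7, 7, 6, 5, 9, 10]))  # 30.0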

Rule #2: Survey customers regularly.

This may seem counterintuitive in a world where people complain about getting too many surveys. Yet, I think people mainly complain about "bad" surveys (i.e. too long, irrelevant questions, etc.). If you get a reputation as a "bad" surveyor, your response rates will be low and your "opt outs" will be high. But if your customer surveys are well done, i.e. short, easy to complete and always followed up, your customers will respond in relatively high numbers even to quarterly surveys.

Rule #3: Always keep surveys short.

Personally, I don't like to ask more than 10-15 questions in a customer survey, and I try to avoid long matrix-type questions. If you are running a regular survey process, after asking your 10 core questions you can ask some secondary related questions. In a quarterly feedback process, different sets of secondary questions can be rotated into the questionnaire, as in the sketch below.
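One simple way to implement that rotation (a Python sketch; the block names and question text are invented for illustration):

    import datetime

    # Hypothetical secondary question blocks, one per quarter
    SECONDARY_BLOCKS = [
        ["How satisfied are you with support response times?"],
        ["How easy is it to find product documentation?"],
        ["How well do our account reviews meet your needs?"],
        ["How would you rate our billing and invoicing process?"],
    ]

    def secondary_questions(today=None):
        # Pick the block for the current quarter (0..3)
        today = today or datetime.date.today()
        quarter = (today.month - 1) // 3
        return SECONDARY_BLOCKS[quarter]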

Rule #4: Only send surveys to those people who can give you the data you need.

Blasting a survey to people who can’t give you the information you need just guarantees that you’ll annoy most of them.

Rule #5: Always act on each customer's survey response at an individual level.

Acting on survey responses is critical. Customers take their valuable time to give you feedback, so you have an obligation to tell them what, if anything, you are doing in response to it. Not following up tells customers there's no point in giving you future feedback.

Rule #6: Avoid irrelevant questions at all costs.

A common mistake: asking a purchasing contact to evaluate a product's technical capabilities. It happens all the time, but it shouldn't, and it makes you look bad when it does. After all, a customer almost always has to provide some level of documentation about who he is, where he's located, what he does for the company, etc. If multiple roles are being surveyed, use question branching, data piping or question routing to present only the relevant questions to each person, as in the sketch below. Most good feedback management systems support question branching, routing and piping, so there's no reason not to use those capabilities.
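Conceptually, branching is just a mapping from respondent roles to question sets. A minimal Python sketch (the roles and question text are invented for illustration):

    # Hypothetical role-specific question sets
    QUESTIONS_BY_ROLE = {
        "purchasing": ["How clear is our pricing?",
                       "How smooth is our ordering process?"],
        "technical":  ["How reliable is the product?",
                       "How complete is our documentation?"],
    }
    CORE = ["How likely are you to recommend us to a colleague?"]

    def questions_for(role):
        # Every respondent gets the core question; role-specific items
        # are shown only to people who can actually answer them
        return CORE + QUESTIONS_BY_ROLE.get(role, [])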

Rule #7: Never ask a customer a question you already have the answer to.

It amazes me how often I'm asked in surveys for data that I've supplied to the company many times in the past. I hate doing it every time. I feel that whoever built the survey was too lazy to look up relevant information about me and, worse, was willing to waste my time collecting it again. So, when a survey crosses my desk that asks "dumb" questions like what my name is, what products I have, how long I've been a customer or the like, I simply terminate the survey.

More importantly, it's not considerate of a customer's time to ask questions you should already have the answers to. Think about it: you are asking your customer to spend his time giving you information that you could have spent your own time developing.
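Data piping makes this avoidable. The idea, in a Python sketch (the CRM record and merge-field names here are hypothetical, not from any particular tool):

    from urllib.parse import urlencode

    def build_invitation(crm_record, survey_url):
        # Pipe values you already hold into the survey link as hidden
        # parameters, so the questionnaire never has to ask for them
        known = {
            "name": crm_record["name"],
            "products": ",".join(crm_record["products"]),
            "customer_since": crm_record["customer_since"],
        }
        return survey_url + "?" + urlencode(known)

    invite = build_invitation(
        {"name": "Jane Doe", "products": ["Widget Pro"],
         "customer_since": "2008"},
        "https://example.com/survey")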

I help businesses with customer surveys all the time.  If you'd like help with a survey project you're planning, please feel free to ask me for advice.  

My e-mail is: stew.nash2010@gmail.com
I use QuestBack for my survey projects. 



Tuesday, January 11, 2011

Action Management & Tracking Survey Results Available

I've been running a non-scientific survey (see sidebar) targeted at people who use customer feedback processes and CRM systems. Additionally, I've been collecting information via LinkedIn posts and conversations with serious practitioners of Voice of the Customer (VOC) processes. To get the survey results, just click here: http://www.nash-efm-consult.com/Action_Tracking_and_Management_Survey_Results.pdf

I've found that most companies that employ feedback management systems and a case management approach face two challenges. The first is verifying that follow-up occurred and was in some way effective (i.e. that it contributed to boosting NPS, CSAT or another metric). The second is evaluating, at a higher level, the types of issues that front-line people are responding to and, of course, creating and communicating the strategy or higher-level actions that are the "real" organizational response.

I found that a large minority of organizations (48%) don't get much value from their alert processes. This finding bothered me because 52% report receiving significant value, and 75% report that Action Management & Tracking is very important to the success of their customer feedback initiatives. So, I think this number points to one or more of a variety of possible issues, including (but not limited to):
  • Taking follow-up actions is too resource-intensive to justify
  • Follow-up actions aren't perceived as effective (possibly because people aren't empowered to fix issues), so they aren't being pursued
  • The Action Management process isn't well supported by tools (CFM tools not integrated with CRM?)
  • Others?
Another issue that I think should get looked into, but rarely does, is staff perceptions of their follow-up tasking. That is, do Account Managers and Support people:

- provide mostly lip service when doing survey follow-up
- solve real issues that benefit the business
- feel they are given the tools and resources to "add value" for customers?

I think companies need a metric that reflects employees' perception of the enterprise's effectiveness at dealing with and acting on customer feedback. It could be quite useful to gauge that "response effectiveness" sentiment alongside CSAT or NPS.
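One way to operationalize this, purely as a sketch: average employee ratings on a 1-5 scale and rescale them to 0-100 so the number can sit next to NPS or CSAT on a dashboard. The scale and rescaling are my assumptions, not a standard metric definition:

    def response_effectiveness(ratings):
        # Mean of 1-5 employee ratings, rescaled to 0-100 (an assumed
        # scale; nothing here is a standard metric definition)
        mean = sum(ratings) / len(ratings)
        return 100.0 * (mean - 1.0) / 4.0

    print(response_effectiveness([4, 5, 3, 4, 2]))  # 65.0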

Some of the issues with Action Management and Tracking can be dealt with by using surveys to revisit feedback customers provided 30 or 60 days earlier, and by periodically (quarterly?) surveying employees on the "response effectiveness" of their actions.