Tuesday, August 25, 2015

Surveying in support of unsolicited verbatim feedback

I've written a number of posts here focusing on ways to employ outbound processes for soliciting customer feedback. But, by definition, outbound feedback collection requires enough customer knowledge to determine which customers to engage, when to engage them, and what topics to engage them on. Some of that data can be drawn from CRM databases. But what about when no solid information exists about a customer?

In my most recent post I discussed how text analytics can be employed to categorize verbatim feedback from transactional surveys, and how this categorized data might then be used to trigger follow-on, topic-driven surveys. This post elaborates on that idea and extends it into other arenas where verbatim customer feedback can be analyzed, categorized and potentially used as input to survey processes.

One important type of unsolicited feedback comes from social media and company websites, where customers show up without being asked to, or are responding to some level of marketing. The feedback businesses get from these sources can be difficult to act on unless it is reviewed at the individual comment level, and that takes a lot of resources. Text analysis tools are a logical way to reduce the manpower needed for follow-up. Since topical categorization of social-media-based feedback is a simple matter for today's text analysis systems, using them to create feedback categories is a logical step. However, most verbatim feedback, taken without the context of customer data, is still hard to act on. So surveying subsets of customers based on their topic and sentiment profiles also makes sense. The same way drill-down surveys can help with simple transactional feedback (last month's post), they can help with understanding unsolicited feedback.
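To make this concrete, here is a minimal sketch of what the categorization and sentiment-profiling step might look like. It is illustrative only: the keyword dictionary, the Comment record and the topic names are assumptions standing in for whatever text analysis tool and data model a business actually uses.

```python
# Hypothetical sketch: tag verbatim comments with topics and NPS segments so
# that subsets of commenters can later be targeted with drill-down surveys.
from dataclasses import dataclass

# Stand-in topic dictionary; a real text analysis tool would replace this.
TOPIC_KEYWORDS = {
    "product quality": ["defect", "broken", "quality", "stopped working"],
    "shipping": ["late", "delivery", "shipping", "package"],
    "support": ["agent", "phone", "wait time", "support"],
}

@dataclass
class Comment:
    comment_id: str
    nps_score: int   # 0-10 rating from the single NPS question
    verbatim: str    # free-text comment

def nps_segment(score: int) -> str:
    """Standard NPS buckets: 0-6 detractor, 7-8 neutral (passive), 9-10 promoter."""
    if score <= 6:
        return "detractor"
    if score <= 8:
        return "neutral"
    return "promoter"

def categorize(comment: Comment) -> list[str]:
    """Return every topic whose keywords appear in the verbatim text."""
    text = comment.verbatim.lower()
    return [topic for topic, words in TOPIC_KEYWORDS.items()
            if any(w in text for w in words)]

if __name__ == "__main__":
    comments = [
        Comment("c1", 4, "The unit arrived broken and quality seems poor."),
        Comment("c2", 9, "Fast delivery, great support agent."),
    ]
    for c in comments:
        print(c.comment_id, nps_segment(c.nps_score), categorize(c))
```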

It seems to me that getting survey questions out to people interacting with social media, or feeding survey-like data into social data streams, would clearly add value to the data. An example of how this might work:
- "Business A" had 10,000 comments come in last month through their web site, where they offered a feedback form with a single NPS question and a verbatim comment box. The 10K comments were run through text analysis. 1000 comments were categorized as referencing "product quality". And, of those, 200 were from "detractors" and 200 from "neutrals". 
- In a situation like this, the product management (PM) team would want to ascertain whether product quality was in fact a driver of NPS score. By sending a follow-up survey to this specific group of detractors and neutrals, Business A's PM team can gather more detailed data about any product quality issues customers are experiencing (see the sketch below for how this selection step might be automated).
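Building on the earlier sketch, the selection step for Business A might look something like this. It reuses the hypothetical Comment, categorize() and nps_segment() helpers from above; trigger_survey() is a placeholder for whatever survey platform would actually deliver the follow-up questionnaire.

```python
# Continuation of the earlier sketch: select the detractors and neutrals whose
# comments were tagged "product quality" and queue a follow-up survey for each.
# Reuses Comment, categorize() and nps_segment() from the sketch above.

def select_follow_up_targets(comments, topic="product quality"):
    """Return comments on the given topic whose authors are detractors or neutrals."""
    return [c for c in comments
            if topic in categorize(c)
            and nps_segment(c.nps_score) in ("detractor", "neutral")]

def trigger_survey(comment, survey_name="product-quality-drill-down"):
    # Placeholder only: a real implementation would call the survey platform's
    # API, using contact details pulled from the CRM for this commenter.
    print(f"Queueing '{survey_name}' survey for comment {comment.comment_id}")

if __name__ == "__main__":
    last_month = [
        Comment("c1", 4, "The unit arrived broken and quality seems poor."),
        Comment("c2", 7, "Quality is fine but shipping was slow."),
        Comment("c3", 10, "Love it."),
    ]
    for c in select_follow_up_targets(last_month):
        trigger_survey(c)  # c1 (detractor) and c2 (neutral) are queued
```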

The benefit of a process like the one described above is that it eliminates the need for a person to read through all the "product quality" related feedback to generate the needed information about product quality issues. People taking the follow-up survey would provide quantitative data about the specifics of product quality, and the qualitative data in the text would then make a lot more sense. A process like this would help businesses react to the right issues, and not over-react to non-issues.

With more and more customer feedback being unsolicited in nature, it's important for businesses to begin thinking about ways to attach specific quantitative data to the streams of text entering their processes. Triggering surveys based on the results of text analysis can help.

Stewart Nash
www.linkedin.com/in/stewartnash
