Friday, August 1, 2014

Text Analysis with Almost No Manual Effort

Last week I posted what was, essentially, a critique of manual, word-cloud-based topic mapping processes. These manual processes are very common among survey tool vendors trying to incorporate text analysis into their solutions. My critique rested on the notion that doing lots of manual work on word clouds and topic maps is unnecessary. In this week's post, I'm going to try to show a better way, using Etuma360.

All text analysis has to start with a stream of text. The text can come from web forms, chat logs, forums, or surveys. Etuma360 builds a word cloud from that text, representing how words are used across the text stream. Rather than manually mapping words to topics, what if the text analysis system already possessed a topic database to which the words in the word cloud were mapped? That would eliminate almost all of the manual effort involved in mapping words to topics. This is exactly how the Etuma360 product works.

A text stream is fed into Etuma360 using either a file upload or an API process; in this case, the text comes from a customer survey uploaded to Etuma360. A word cloud is then generated automatically.
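To make the API path concrete, here is a minimal sketch of what the upload step amounts to. The endpoint URL, token, and field names below are hypothetical placeholders of mine, not Etuma360's published API; the point is only that pushing a text stream into a hosted analysis service is a one-request job.

    # Minimal sketch of pushing a text stream to a hosted analysis service.
    # NOTE: the URL, token, and field names are hypothetical placeholders,
    # not Etuma360's actual API.
    import requests

    API_URL = "https://api.example-textanalysis.com/v1/feedback"  # hypothetical
    API_TOKEN = "your-api-token"

    with open("survey_verbatims.csv", "rb") as f:
        response = requests.post(
            API_URL,
            headers={"Authorization": f"Bearer {API_TOKEN}"},
            files={"file": ("survey_verbatims.csv", f, "text/csv")},
        )
    response.raise_for_status()
    print("Upload accepted:", response.json())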


And a topic / sentiment list is also generated automatically, with no manual effort required.


Because Etuma360 does the topic mapping automatically, the analyst can focus on the visualizations their end users need.

As I said: easy, fast, and effective. Consider how much money a survey analyst makes and how much time they have to spend mapping words to topics. If it's 100 hours a year at $100 per hour, that's $10,000 in cost, and that's before buying software or generating any useful analyses. Etuma360 saves you all of that labor cost and more.

To take a free trial of Etuma360, click the link below:

 https://response.questback.com/demo_nash_stewart_qbbostonusa/etuma90daytrial/





Saturday, July 26, 2014

Analyzing Survey Open Ended Questions

I've been on the distribution list for Survey Magazine (electronic version) for several years now. Recently, they published an article (sponsored, I believe, by Cvent) about using text analysis on survey open-ended questions. Though the article offers some useful advice, it seems to me that there are better ways to do text analysis on survey open-ended questions than the methodology it proposes. So, I thought I'd discuss some of the main points and offer some thoughts. For those interested, the URL below will bring you to the entire article. For clarity, material quoted from the article appears in the numbered points below, with my remarks following each one.

http://viewer.epaperflip.com/Viewer.aspx?docid=b3b90dd4-b2bd-4945-8af1-a36100b7c43e#?page=36


The Survey Magazine article begins with the basics of why it's important to do text analysis. It goes on to make two points: first, that analyzing text is difficult, and why ("The hardest part about including open-ended questions in your survey is analyzing responses. Unlike close-ended questions, it’s doubtful that any two open-ended responses will be exactly the same."); second, that a text analysis plan is necessary. The article then explains how to create such a plan.

I agree that analyzing text is difficult, especially using a manual inspection method coupled with word or phrase searches. I also agree that text analysis offers lots of value and insight in the right scenarios. But in my opinion, the methodology proposed in the article is "the hard way" to do text analysis, and even then it will really only be useful on shorter, ad-hoc surveys.

The article outlines five steps in a proposed text analysis process.  

1. Use word cloud technology. The best way to begin your text analysis is by using word cloud technology. The technology sifts through responses and creates a visual representation of the most frequently used words or phrases. The larger the font of the word in your cloud, the more relevant it is to your data. Once you've seen which words pop up the most, you can start to make categories to group responses and analyze trends.

Almost all verbatim analysis technology uses word clouds in one way or another. More sophisticated products combine words or phrases with usage context. For instance, in retail e-commerce situations, words like "web site" and "click" show up all the time, but analyzing them is valueless without more context. Manually ascribing that context is a major endeavor if any real response volume is involved. So the process outlined is very manual and ultimately subjective to the analyst mapping each word or phrase to a category. Another analyst, at another time, might map the same data to a different category based on their own interpretation (so there are multiple dimensions of subjectivity). Word clouds are useful tools, but they are not the "end-all" of text analysis.
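For illustration, the frequency counting that underlies any word cloud is easy to reproduce; what it cannot supply is the context discussed above. A minimal sketch (the example responses are invented):

    # Minimal sketch of the word counting behind a word cloud.
    # Frequency is easy to compute; the context around each word is not.
    import re
    from collections import Counter

    responses = [
        "The web site was slow and the checkout failed",
        "Great web site, easy to click through and order",
        "Could not click the submit button on the web site",
    ]

    STOPWORDS = {"the", "and", "was", "to", "on", "a", "of"}

    words = []
    for text in responses:
        words += [w for w in re.findall(r"[a-z']+", text.lower()) if w not in STOPWORDS]

    # The most common words would be drawn largest in the cloud.
    print(Counter(words).most_common(5))

Note how "web" and "site" dominate the counts while telling you nothing about what, specifically, customers struggled with; that missing context is exactly the manual gap described above.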

2. Establish categories. The next step in analyzing your open-ended responses is creating categories. Use your word cloud for insights into the range of thoughts and feelings articulated by your respondents. For example, if you asked customers how they think your organization can improve its product, and the words “cost”, “size”, and “color” loom the largest, create categories for those words. Once you begin to read your responses file them under the appropriate categories. If any of your responses fit more than one category, put them in both.

This is largely good advice. But again, it's a lot of manual work that would have to be repeated on a survey-by-survey basis. Several commercial text analysis systems I am aware of (including Etuma360) will do this kind of work automatically and then let you tweak the topic analysis produced, saving boatloads of time. And again, building subjective categories can be problematic, for the reasons discussed previously.
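For a sense of what the article's manual bookkeeping looks like in practice, here is a sketch of a hand-maintained keyword-to-category lookup (categories and keywords invented for illustration); every entry in it is a subjective choice someone has to make, maintain, and repeat per survey:

    # Sketch of manual category mapping: each analyst-chosen keyword points
    # to a category, and a response can land in several categories at once.
    CATEGORY_KEYWORDS = {
        "cost": ["cost", "price", "expensive", "cheap"],
        "size": ["size", "large", "small", "fit"],
        "color": ["color", "colour", "shade"],
    }

    def categorize(response):
        """Return every category whose keywords appear in the response."""
        text = response.lower()
        return [
            category
            for category, keywords in CATEGORY_KEYWORDS.items()
            if any(kw in text for kw in keywords)
        ]

    print(categorize("The price is fine but the size runs small"))  # ['cost', 'size']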

3. Review and refine. As you begin to inspect responses more closely, you will probably find that you have to make adjustments to your categories. If responses used similar words to describe conflicting sentiments, you’ll have to create new categories; if the reverse is true, you can combine categories.

If you've implemented a manually constructed, word-cloud-based approach, this is good advice. Language is a living, dynamic construct; interpreting it is always a "tweaking" process. Still, there's a better way to do it than the process proposed. At Etuma, we use a set of layered ontologies to map language meanings to our topic database. Effectively, this lets us use input from hundreds of our users to improve everyone's language interpretation, largely eliminating the need for each customer to manage that process on their own. Other text analysis vendors employ statistical mapping models that they tweak for individual scenarios and customers. The point is, "topics" found in text streams should be largely auto-identifiable, especially in known contexts like customer service or e-commerce.
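To give a feel for the layered idea, here is a toy illustration, emphatically not Etuma's actual ontology: one layer normalizes surface variants to a canonical term, and a second layer maps canonical terms to topic database entries. Improvements to either layer benefit every customer at once.

    # Toy illustration of layered mapping: surface words -> canonical terms -> topics.
    # Both layers here are invented; a production ontology is far richer.
    SYNONYMS = {  # layer 1: normalize language variants
        "site": "website", "web site": "website", "webpage": "website",
        "busted": "broken", "smashed": "broken", "cracked": "broken",
    }
    TOPICS = {  # layer 2: canonical terms -> topic database entries
        "website": "E-commerce / Web experience",
        "broken": "Shipping / Damaged goods",
    }

    def topic_for(term):
        """Resolve a surface term through the synonym layer to a topic."""
        canonical = SYNONYMS.get(term, term)
        return TOPICS.get(canonical)

    print(topic_for("smashed"))   # Shipping / Damaged goods
    print(topic_for("web site"))  # E-commerce / Web experience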

4. Make correlations. Now it’s time to examine the text within the framework of your overall survey. Start to couple open-ended responses with corresponding close-ended responses to draw conclusions about why respondents gave the answers that they did. If you used an open-ended question as an avenue for respondents to give an “other” answer to a multiple choice question, try and determine if there is a clear winner. You should also cross tabulate your data by demographic to see if any patterns emerge. Find out if certain groups within your sample tended to answer open-ended questions in the same way.

Clearly, this is something that should be done. And again, in my opinion, there are better ways to do it than the one proposed. At Etuma, we simply connect the entire survey (and background data set) by API or upload, and automatically connect topics with filtered data subsets based on survey response categories. It's a lot less work, and the analyses produced simply auto-update over time.
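For the correlation step itself, once topics are already attached to each response, the cross-tabulation the article asks for is nearly a one-liner. A sketch using pandas (column names and data invented for illustration):

    # Sketch: cross-tabulate auto-detected topics against a demographic column
    # to see whether certain groups raise the same topics disproportionately.
    import pandas as pd

    df = pd.DataFrame({
        "topic":  ["cost", "size", "cost", "color", "size", "cost"],
        "region": ["East", "East", "West", "West", "East", "West"],
    })

    # Share of each topic within each region.
    print(pd.crosstab(df["topic"], df["region"], normalize="columns"))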

5. Summarize your results. After you analyze your results, summarize your findings and include any quotes from the text that were especially illustrative of your conclusions.

Of course, results should be summarized, but for ongoing surveys this has to be done regularly, since the topics people talk about will change over time.

--------------------------------------------------------------------------------------------
Lots of web survey platforms are implementing word-cloud-based text analysis. As someone who's used text analysis tools (Etuma360 primarily) for a couple of years now, I find that this approach is of limited value unless certain criteria are met. Some of those are:
  • Larger surveys with lots of open-ended responses
  • Permanence: surveys that run over long periods of time are better suited to coupled text analysis than ad-hoc surveys
  • Lots of background data about survey respondents
The article in Survey Magazine offers some useful advice. In my opinion, it is most useful for smaller, research-oriented surveys (ad-hoc, with a few hundred responses). For larger, long-running operational surveys, the text analysis approach it outlines will be a lot of work. Using something like Etuma360 (www.etuma.com) for text analysis on larger and ongoing surveys makes a lot more sense.

To try Etuma360 click here:


Wednesday, July 16, 2014

A cool new QuestBack Video - check it out.


If you follow this blog at all, you know that I represent QuestBack AS. QuestBack has some really good products but is far from a household name here in the U.S. And every now and again, they do something really "cool" from a marketing perspective, too. The video below is one such example. Click here and check it out.

Stewart Nash
LinkedIN: www.linkedin.com/in/stewartnash

Saturday, June 21, 2014

Etuma "Dashboards" Make Verbatims Actionable



Easy-to-build, easy-to-share, topic-based reporting.

I work with Etuma Ltd. and their fine verbatim analysis solution, Etuma360. Over the last year or so, Etuma has been building out its "Dashboard" tool. Each enhancement to the dashboard tool has improved its capabilities, usefulness, and value.
With the Etuma360 dashboard, I can now look at a broad topic, let's say negative comments from Detractors. Inside that data set there will be additional topics (maybe shipping damage, late arrival of goods, poor service quality, or any number of other potential issues). At a high level this is great information, but for someone to start acting on it, it needs further analysis. Etuma's dashboard tool makes this easy.

For example: Etuma works with several retailers, both of the "bricks and mortar" and internet varieties. In retail, damaged goods tend to be something that affects customer perceptions as well as margins and profits. So retailers want to know about, and pay attention to, feedback regarding damaged goods their customers receive. Customers often provide this kind of feedback in surveys or in contact forms on customer service websites. Using Etuma360, a business can automatically monitor the feedback coming through these channels. They can then easily create a dashboard that puts all comments referencing topics related to "damage" into one report. That report can be shared with the relevant managers, who can see specific feedback (including things like order number or customer number) and use it to assess shippers, packaging, or other aspects of the logistics chain that contribute to damaged shipments.
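In code terms, the aggregation behind such a dashboard is just a topic filter over the feedback stream. A rough sketch (field names and records invented for illustration):

    # Sketch of the filtering behind a "damaged goods" dashboard: keep every
    # piece of feedback tagged with a damage-related topic, along with the
    # identifiers a manager needs in order to act on it.
    DAMAGE_TOPICS = {"shipping damage", "broken on arrival", "damaged packaging"}

    feedback = [
        {"order": "A-1001", "customer": "C-77", "topics": {"shipping damage"},
         "text": "Box arrived crushed, product cracked."},
        {"order": "A-1002", "customer": "C-12", "topics": {"late delivery"},
         "text": "Two weeks late."},
    ]

    damage_report = [
        item for item in feedback
        if item["topics"] & DAMAGE_TOPICS  # any overlap with damage topics
    ]

    for item in damage_report:
        print(item["order"], item["customer"], "-", item["text"])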

This kind of topic aggregation is very useful for turning big volumes of feedback into actionable data. With Etuma's new dashboard report sharing capability, getting topic dashboards built and distributed to operational managers is simple. And because topic dashboards tend to be very specific, even large amounts of verbatim feedback filter down into manageable "bites" of topic-based feedback, making it much, much easier to act on.

Etuma360's built-in topic database allows even the newest customer to quickly start filtering and aggregating relevant feedback. Etuma's price model lets even smaller businesses get started.

Stewart Nash - stewart@etuma.com
LinkedIn: www.linkedin.com/in/stewartnash



Monday, June 16, 2014

Starting an NPS program from scratch - Some Thoughts


I've been working with businesses and non-profits for a number of years now, helping them implement Net Promoter Score (NPS) survey processes. A conclusion I've come to is that, especially early in a company's evolution with the NPS methodology, simple is best. I've found this to be particularly true for businesses that have never tried to measure customer feedback in a structured program before. If I were advising a new client today, one starting an NPS program from scratch with no customer survey process in place, I'd largely recommend a very simple two-question NPS survey that would run for maybe six months or a year as a pilot program.

The purpose of the simple approach is to acquire a broad data set that can be segmented by transaction point, relationship type, geography, product, or other variables. That segmentation lets the client identify its loyalty drivers and define the processes through which it can affect them.

The two questions I would want to ask:  #1. How likely would you be to recommend company XYZ to colleagues and friends?

Then, depending on the answer to #1, one of three distinct follow-up questions (a simple routing sketch follows the list below):

#2 for 0-6 scores: "What, in your opinion, could we do better?"
#2 for 7s and 8s: "What are the three things you like best, and least, about us?"
#2 for 9s and 10s: "Please explain your rating."
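The routing logic behind this is a simple score bucket; a minimal sketch:

    # Sketch of the follow-up question routing for a two-question NPS survey.
    def follow_up_question(score):
        """Pick the second question based on the 0-10 recommend score."""
        if not 0 <= score <= 10:
            raise ValueError("NPS scores run from 0 to 10")
        if score <= 6:   # Detractors
            return "What, in your opinion, could we do better?"
        if score <= 8:   # Passives
            return "What are the three things you like best, and least, about us?"
        return "Please explain your rating."  # Promoters (9-10)

    print(follow_up_question(4))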

Businesses new to NPS typically need to learn what their customers really value, and dislike, most about them. Customers may value transaction efficiency or personal relationships; they may value cost effectiveness or product characteristics; or they may value something else entirely. Generally, businesses have some understanding of what their customers value. But often they fail to understand, or (more likely) fail to create, the action processes that mitigate issues affecting their loyalty drivers.

NPS surveys can start out simple, with just two questions delivered to each respondent. The challenge for businesses is ensuring that they understand who their survey respondents are, and what they're reacting to, for each completed response. Collecting NPS data by source or transaction point is therefore important. And in the case of invited surveys, it's important that each respondent's "background" data (role, geography, product/service owned, etc.) be captured as part of the survey process. If the business does this, it learns which transactions or relationships generate the most "likely to recommend" responses, and why or why not (through question #2). After this data has been studied, a second-generation NPS survey can be developed that tracks specific loyalty drivers for each transaction type, data source, or relationship type.
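Once responses carry that background data, scoring each segment is mechanical. A sketch of computing NPS per transaction point (data invented), using the standard definition of NPS as the percentage of promoters (scores 9-10) minus the percentage of detractors (scores 0-6):

    # Sketch: compute NPS per segment from responses that carry background data.
    from collections import defaultdict

    responses = [
        {"segment": "checkout", "score": 9},
        {"segment": "checkout", "score": 3},
        {"segment": "support",  "score": 10},
        {"segment": "support",  "score": 8},
    ]

    by_segment = defaultdict(list)
    for r in responses:
        by_segment[r["segment"]].append(r["score"])

    for segment, scores in by_segment.items():
        promoters = sum(s >= 9 for s in scores)
        detractors = sum(s <= 6 for s in scores)
        nps = 100 * (promoters - detractors) / len(scores)
        print(f"{segment}: NPS {nps:+.0f}")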

In third-generation NPS survey processes, building action-taking processes around real-time survey data becomes important. The reason? Only by acting on feedback does a business mitigate, or otherwise impact, the issues affecting its loyalty drivers. This is the real pay-off of NPS processes. Acting on feedback in ways that mitigate loyalty-affecting issues has a long track record of improving business success.

When starting down the NPS path there are many ways to go astray. Taking a simple approach helps avoid unnecessary resource expenditure, while allowing the development of a plan that ultimately lets the business affect loyalty drivers through closed-loop follow-up at the right times and in the right places. It may take a year or so to go from NPS start-up to full NPS implementation, but it's a year worth investing in. Done correctly, it doesn't have to cost a lot to get started, either.

Stewart Nash
LinkedIn: www.linkedin.com/in/stewartnash