Mining for meaning
Understanding customer satisfaction research
Many MRO distributors invest time and money in customer research. They design surveys and collect customer responses. But some are unsure what to do next: unless someone on staff has a background in statistics or research, the prospect of analyzing customer survey data and turning it into action can be intimidating.
Progressive Distributor magazine recently spoke with Randall Stutman of Communications Research Associates in Valley Forge, Penn., about customer satisfaction research.
Q. You've spoken to I.D.A. and ASMMA groups about measuring customer satisfaction. How have they responded to your ideas?
A. Interest in customer satisfaction research is continually building. Because of changing conditions in the marketplace, more and more businesses are striving to convert to a market-driven orientation, putting customer needs first in order to build and maintain loyalty.
As a result, many distributors are receptive to the idea of building a model of customer satisfaction, or a customer-generated inventory of the factors that drive satisfaction and loyalty. When they build customer surveys around those models, distributors can be confident they're measuring what's really important to customers.
Q. In your next ASMMA/I.D.A. convention presentation, you'll talk about mining for meaning in your customer satisfaction data. What do you mean by that?
A. In 1999, I don't know of any organization that can afford to conduct research simply for the sake of research; it's not an academic exercise. Research needs to be actionable; it needs to justify itself by providing useful insights. You find insights and answer the "so what?" questions by analyzing your data.
But in order to get the most bang for your research buck, you need the ability to do more than simply count up the number of customers who offered each of the possible responses to any particular survey question.
Q. In other words, you need to have statistical expertise?
A. Some background and experience with statistics is helpful. But you don't have to be a statistical expert to look for meaning in your data. In fact, the statistical experts out there often get so caught up in their analytical capabilities that they lose sight of the true objective.
For example, in an effort to be thorough, statistical experts sometimes produce stacks and stacks of numbers that overwhelm the research audience and obscure the key findings. Or they report their numerical findings to the second decimal place, which lends a sense of precision to the findings that is unwarranted, given the size of the sampling error involved.
Q. So how should someone who is not a statistical expert begin to analyze their customer satisfaction data?
A. As I suggested before, calculating frequencies (the percentage of customers who offered each possible response to a question) is the simplest way to summarize results. But in addition to computing the frequencies for each survey question, you may gain insights from looking at groups of questions.
For example, you might calculate the percentage of customers who gave you high ratings on every overall satisfaction and loyalty question, or the percentage of customers who gave no dissatisfied ratings on any survey item.
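For readers who keep their survey responses in a spreadsheet export or a simple data file, a rough sketch of both calculations in Python might look like the following; the question names, the 10-point scale and the "high rating" threshold of 8 are assumptions for illustration, not details from the interview.

```python
from collections import Counter

# Each response is one customer's answers, keyed by question name
# (question names and the 10-point scale are hypothetical).
responses = [
    {"overall_satisfaction": 9, "loyalty": 8, "delivery_speed": 7},
    {"overall_satisfaction": 4, "loyalty": 3, "delivery_speed": 5},
    {"overall_satisfaction": 10, "loyalty": 9, "delivery_speed": 9},
]

# Frequencies: what percentage of customers gave each rating to one question.
ratings = [r["overall_satisfaction"] for r in responses]
counts = Counter(ratings)
for rating, count in sorted(counts.items()):
    print(f"Rating {rating}: {100 * count / len(ratings):.0f}% of customers")

# Group summary: percentage of customers rating 8 or higher on BOTH
# overall satisfaction and loyalty (the threshold of 8 is an assumption).
high_on_both = sum(
    1 for r in responses
    if r["overall_satisfaction"] >= 8 and r["loyalty"] >= 8
)
print(f"Highly satisfied and loyal: {100 * high_on_both / len(responses):.0f}%")
```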
Another good way to describe your data is to calculate an average score, assuming that the response categories are numbers (such as ratings on a 10-point scale) or can be converted to numbers ("Excellent" equals 5, "Good" equals 4, etc.).
As you know, you calculate the average by simply adding up all the numerical responses to a question, then dividing by the total number of responses to the question. Of course, if your survey uses 9 or 99 to represent "don't know," make sure you exclude these responses.
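As a minimal sketch of that calculation, assuming a hypothetical question whose ratings use 99 as the "don't know" code:

```python
# Ratings for one survey question; 99 marks "don't know" (assumed code).
ratings = [8, 9, 99, 7, 10, 99, 6]

# Drop the "don't know" responses before averaging.
valid = [r for r in ratings if r != 99]
average = sum(valid) / len(valid)
print(f"Average rating: {average:.1f} (based on {len(valid)} valid responses)")
```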
You can also use averages to create composite ratings of similar survey questions. For example, items that refer to responsiveness can be averaged together to create an overall responsiveness rating. You may want to create a customer satisfaction index for your company by averaging together all the performance items on your survey. Composite ratings provide an easy way to describe and track changes in your overall performance.
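A composite rating of this kind could be sketched roughly as follows; the item names and their grouping under "responsiveness" are illustrative assumptions:

```python
# One customer's ratings on related performance items (names are hypothetical).
customer = {
    "returns_calls_promptly": 8,
    "resolves_problems_quickly": 7,
    "answers_questions_clearly": 9,
    "on_time_delivery": 6,
}

# Composite "responsiveness" rating: average of the responsiveness items.
responsiveness_items = [
    "returns_calls_promptly",
    "resolves_problems_quickly",
    "answers_questions_clearly",
]
responsiveness = sum(customer[q] for q in responsiveness_items) / len(responsiveness_items)

# Customer satisfaction index: average of all performance items on the survey.
csi = sum(customer.values()) / len(customer)

print(f"Responsiveness rating: {responsiveness:.1f}")
print(f"Customer satisfaction index: {csi:.1f}")
```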
Q. What else can the non-expert do?
A. There are a couple of things you can do to help identify how to leverage customer perceptions or where to focus your improvement efforts. Suppose that your research identifies that customers are less than fully satisfied with various aspects of your organization's performance. You can't fix everything at once. Time and budget dollars are limited.
Q. Where do you start?
A. At CRA, we would probably address this dilemma by conducting multiple regression analysis, which suggests to us which individual survey items, when improved, will have the greatest overall impact on some global variable, such as overall customer satisfaction or loyalty. For the non-expert who doesn't have the ability to conduct multiple regression, there's an informal technique:
First, select the 10 to 20 most satisfied customers and the 10 to 20 least satisfied customers based on their ratings of overall satisfaction and loyalty. Then, across each of the individual survey items that measure aspects of your organization's performance you'd like to improve, calculate an average rating for your group of highly satisfied customers and an average rating for your group of dissatisfied customers.
Next, for each performance survey item, compare the satisfied customers' average rating with the dissatisfied customers' average rating. A large gap between the two ratings for a particular item suggests that this factor exerts a strong influence on overall satisfaction or loyalty, because satisfied and dissatisfied customers have markedly different perceptions of your performance. Accordingly, you should target your improvement efforts on the factors that most strongly differentiate your satisfied and dissatisfied customers.
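A rough sketch of this informal comparison, assuming each customer's overall satisfaction score and performance-item ratings are stored together (the field names, group size and data are illustrative):

```python
# Each customer: an overall satisfaction score plus ratings on performance items
# (field names and the 10-point scale are assumptions for illustration).
customers = [
    {"overall": 9, "items": {"delivery": 8, "pricing": 9, "tech_support": 9}},
    {"overall": 10, "items": {"delivery": 9, "pricing": 8, "tech_support": 10}},
    {"overall": 3, "items": {"delivery": 7, "pricing": 8, "tech_support": 2}},
    {"overall": 2, "items": {"delivery": 6, "pricing": 7, "tech_support": 3}},
]

GROUP_SIZE = 2  # the article suggests 10 to 20; 2 keeps this example small

ranked = sorted(customers, key=lambda c: c["overall"], reverse=True)
most_satisfied = ranked[:GROUP_SIZE]
least_satisfied = ranked[-GROUP_SIZE:]

def item_average(group, item):
    """Average rating a group of customers gave one performance item."""
    return sum(c["items"][item] for c in group) / len(group)

# The largest gaps point to the items most likely to drive overall satisfaction.
for item in customers[0]["items"]:
    gap = item_average(most_satisfied, item) - item_average(least_satisfied, item)
    print(f"{item}: gap of {gap:.1f} between satisfied and dissatisfied customers")
```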
Another valuable gap analysis tool that will help prioritize your improvement efforts requires the inclusion of importance items in your survey. Companies that want customer input for ranking improvement efforts can ask customers to rate the "ideal company" on each survey item along with their satisfaction ratings. The larger the gap between the ideal rating and your actual rating, the more attention that attribute should receive.
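A minimal sketch of that ideal-versus-actual comparison, with hypothetical item names and average ratings:

```python
# Average "ideal company" rating and average actual rating per survey item
# (item names and values are hypothetical).
ideal = {"delivery": 9.5, "pricing": 9.0, "tech_support": 9.2}
actual = {"delivery": 8.9, "pricing": 7.1, "tech_support": 8.8}

# Rank items by gap: the bigger the shortfall, the higher the priority.
gaps = {item: ideal[item] - actual[item] for item in ideal}
for item, gap in sorted(gaps.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{item}: ideal {ideal[item]:.1f}, actual {actual[item]:.1f}, gap {gap:.1f}")
```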
Q. Do you need specialized statistical software to mine for meaning in your customer satisfaction data?
A. The techniques I've described (frequencies, averages and gap analyses) do not require statistical software. You could use a calculator or, preferably, spreadsheet software such as Excel.
But more and more organizations now use statistical software, and in particular, SPSS, which is one of the programs we use. A key advantage of SPSS is that a non-statistical expert can use it. It's a Windows-based program with drop-down menus, and you can calculate frequencies and averages more quickly than with spreadsheet software. SPSS also provides access to a variety of advanced statistical analyses. In 90 seconds, a novice using SPSS can run an analysis that 15 years ago would have taken a statistical expert all afternoon to solve.
However, the key disadvantage of SPSS is, again, that non-statistical experts can use it (and abuse it). It's deceptively easy, so novice users often succumb to temptation and get in over their heads. They end up damaging their credibility and the credibility of the customer satisfaction research process when they run the wrong analysis or misinterpret the findings.
Q. So, in summary . . .
A. There are some analytical methods that you should entrust only to a research expert. At the same time, we want to encourage non-experts to use some of the simple techniques I've talked about to make sure they're earning the maximum return on their research investment.
This article originally appeared in the April 1999 Progressive Distributor ASMMA/I.D.A. convention planner. Copyright 1999.