ALTERNATIVE DESIGNS AND METHODS FOR CUSTOMER SATISFACTION MEASUREMENT

Jeff T. Israel
Chief Satisfaction Officer
SatisFaction Strategies, LLC
Portland, OR 97229

SUMMARY

The purpose of this paper is to help customer survey process stakeholders understand some of the inherent tradeoffs of alternative survey methods. The scope addresses factors including size of the customer population, strengths and weaknesses of alternate methods, survey response rates and resource constraints. When taken with the information needs of the organization, these factors converge to suggest appropriate survey methods and designs that will facilitate an effective customer satisfaction measurement (CSM) process.

KEY WORDS

Customer Satisfaction Measurement (CSM), research design, survey methods, response rates

INTRODUCTION

Research design and survey method selection comprise an important part of creating an effective CSM process. In addition to understanding the purpose and objectives for CSM (Israel, 2000), we can create a more effective CSM process by understanding the differences (tradeoffs and implications) between alternative research designs and survey methods. In CSM design, there is no standard “one-size-fits-all” approach. However, we can choose a method well suited to a particular situation and an approach that ensures the value of the feedback system exceeds its costs.

The principal focus of this paper is survey method selection given specific resource and customer population considerations. Topics such as identifying customer requirements and integrating them into CSM questionnaires are also key research design elements, but are only addressed briefly here. More information on these elements is available from other sources (Vavra, 2002; Israel, 2000; ASQ Quality Management Division, 1999, 235-246; Israel, 1994; Israel, 1992).

RESEARCH DESIGN ELEMENTS

The phrase “research design” refers to all aspects of translating customer survey requirements and objectives into the process to be deployed. In addition to clearly stating CSM objectives, the major research design elements include: qualitative evaluation; type of customer survey; sample design; survey method selection; and questionnaire design.
Qualitative evaluation normally follows the initial statement of CSM objectives. Qualitative methods most commonly entail depth interviews (one-on-one) or focus groups conducted with various external customer groups (segments). Qualitative customer data gathering is used to identify and clarify customer requirements and the primary components of value exchange. Results from qualitative research cannot be projected to all customers, but they are fundamental in determining which aspects of product and service delivery should be included as metrics in the quantitative CSM survey. Internal qualitative evaluation – targeted at employees who “own” key service delivery processes – is another helpful way to identify customer requirements, and also provides focus on areas critical to customer satisfaction. In addition, internal evaluation can often lead to significant service process improvements (Israel, 1994).

The types of customer surveys most often used for measuring customer satisfaction include general customer satisfaction tracking and transaction satisfaction tracking, determined by whether the population is defined in terms of customers or transactions. Other types of CSM surveys include new customer surveys and lost customer surveys. New customer surveys help ensure customer relationships get off on the right foot (i.e., high initial quality), while lost customer surveys can help identify root causes of problems driving customers into the arms of the competition.

Sample design refers to how we define who the customer is (the population), how we can contact them (the sample frame) and the actual sampling method to be used. The population may be all customers (N); selected segments of “core customers” (N_C); or the universe of all qualified transactions in a certain time period (N_QT). The sample frame is the list of customers or transactions used to represent the population. Accurate customer databases and effective Information Technology (IT) capabilities are highly desirable in deploying CSM. Actual survey samples are drawn from the lists of customers or transactions contained in the sample frame. Simple random samples are used when the population is viewed as homogeneous. When distinct customer segments are the focus, stratified random sampling is more appropriate, as the sketch below illustrates. Sample frequency may range from real-time continuous (transaction surveys) to once every two years.
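To make the distinction between the two sampling approaches concrete, here is a minimal Python sketch. The segment labels and frame size are illustrative assumptions, not data from the paper.

    import random

    # Illustrative sample frame: (customer_id, segment) pairs. The
    # "core" / "non-core" labels and 1,000-record size are assumed
    # for the example only.
    frame = [(i, "core" if i % 4 == 0 else "non-core") for i in range(1, 1001)]

    def simple_random_sample(frame, n, seed=1):
        """Simple random sample: appropriate when the population is
        viewed as homogeneous."""
        return random.Random(seed).sample(frame, n)

    def stratified_random_sample(frame, n_per_stratum, seed=1):
        """Stratified random sample: draw separately from each customer
        segment so every stratum is represented."""
        rng = random.Random(seed)
        strata = {}
        for record in frame:
            strata.setdefault(record[1], []).append(record)
        sample = []
        for members in strata.values():
            sample.extend(rng.sample(members, min(n_per_stratum, len(members))))
        return sample

    print(len(simple_random_sample(frame, 100)))     # 100, segments by chance
    print(len(stratified_random_sample(frame, 50)))  # 100, exactly 50 per segment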
Survey method selection (whether electronic, mail, phone, in-person, or some combination) may be made based on a number of factors. Population size, likely response rates, core vs. non-core supplier relationship with customers, CSM resource requirements (budget / staff resources), and desired data quality are all important factors in deciding which survey method to use. In the next few sections of the paper, the relative advantages and disadvantages of the alternative survey methods are presented and the tradeoffs of important factors are explored.

Questionnaire design and construction is one area where special expertise (whether internal or external) is called for. Care must be taken to ask the right questions; ensure questions accurately reflect customer requirements; use the right types of scales; and avoid biased wording or question order. It is important that the survey conveys professionalism and sincerity to your customers. Regardless of the type of CSM survey, questionnaires should include: quantitative metrics for both satisfaction outcomes and processes; qualitative questions to clarify improvement opportunities and customer requirements; and questions to aid meaningful customer segmentation.
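As a minimal sketch of those three question groups, the structure below shows one way a CSM questionnaire might be represented. The question wording, the 5-point scale and the field names are illustrative assumptions, not prescriptions from the paper.

    # Hypothetical CSM questionnaire skeleton covering the three
    # recommended groups of questions.
    questionnaire = {
        "satisfaction_metrics": [
            # Quantitative metrics for outcomes and key processes
            {"id": "overall", "text": "Overall, how satisfied are you with us?",
             "scale": (1, 5)},  # 1 = very dissatisfied, 5 = very satisfied
            {"id": "delivery", "text": "How satisfied are you with on-time delivery?",
             "scale": (1, 5)},
        ],
        "qualitative": [
            # Open-ended questions to clarify improvement opportunities
            {"id": "improve", "text": "What one thing could we do better?"},
        ],
        "segmentation": [
            # Questions that aid meaningful customer segmentation
            {"id": "role", "text": "What is your role?",
             "choices": ["purchasing", "manufacturing", "quality"]},
        ],
    }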
COMPARISON OF ALTERNATIVE SURVEY METHODS

Several survey methods may be used to collect CSM data. The most commonly used include mail, electronic, telephone, in-person, or some combination of methods (hybrid). Each method has inherent advantages and disadvantages. The distinctions between methods usually impact the suitability of a particular survey method relative to the organization’s specific CSM information needs. The following table highlights key advantages and disadvantages of alternative survey methods across a number of key comparison categories.

Comparison category         Electronic          Mail                Phone         In-person      Hybrid
Likely response rate        Low-medium          Low-medium          High          Very high      High
                            (10 to 50%)         (10 to 50%)         (35 to 85%)   (65 to 100%)   (35 to 85%)
Effectiveness for           Low-medium          Low-medium          High          High           Depends on
non-core suppliers                                                                               methods
When target respondent      Poor                Poor-fair           Very good     Very good      Depends on
unknown                     (excluded)          (rerouted)                                       methods
Value in building           Fair                Fair                Good          Excellent      Depends on
relationships                                                                                    methods
Survey length               Short, 5-10 min;    Short, 5-10 min;    Medium,       Long,          Short /
limitations                 comment questions   comment questions   10-20 min     30-90 min      medium
                            limited             limited
Qualitative data quality    Fair-poor           Fair-poor           Very good     Excellent      Depends on
(comments)                                                                                       methods
Quantitative data quality   Good                Good                Very good     Excellent      Depends on
                                                                                                 methods
Cost per survey             Lowest              Moderate            High          Highest        Blended

On review of the information in the table, in-person surveys are ranked best in all categories except cost. Because costs are very high, in-person surveys are often only practical when the desired sample size is relatively small, or when the value of a particular customer population warrants the additional expense. In-person surveys can add extraordinary value in customer relationship management (CRM) initiatives (Israel, 1997).

Phone surveys are probably used more often than any other method. While response rates can vary widely, non-response bias is less of a concern than it is for mail or electronic surveys (ASQ Quality Management Division, 1999). Like in-person methods, quality for both quantitative and qualitative data (comments) is very high.
Customers answer a higher percentage of questions in general, and interviewers are able to probe and clarify any vague or incomplete responses. While still fairly expensive, phone surveys cost considerably less than in-person surveys. When the customer perceives the products or services provided by your company as “less critical” than those of other key suppliers, phone surveys will be more successful than less obtrusive methods (mail / electronic).

Poorly executed mail and electronic surveys commonly yield disappointing response rates (10% to 15%). However, there are many tactics that may be employed to improve response rates, both for mail surveys (Dillman, 1978) and electronic survey methods. Electronic surveys are probably the easiest to administer and also the lowest in total cost (even when making additional efforts to secure higher response rates). Mail surveys are similar in being simple to administer and highly cost effective. Perhaps the biggest negatives for these methods relate to data quality. Customers may skip some questions (on purpose or by accident). If provided, their comments may be vague or nonspecific. Survey length must be kept short in order to maintain reasonable response rates.

Electronic surveys have some other limitations. Not all customers have access to email or the web at work, so it may not be practical to use this method for all customers. Even if they do have access to email and the web, it is fairly common for company databases to be inaccurate or incomplete in fields like email address. If email address information is lacking or not up-to-date, some customers will be excluded and survey results will be biased accordingly.

Hybrid methods present some interesting alternatives. Some companies use hybrid methods to accommodate different sales channels (in-person, phone and web). Hybrid methods may also be used to give the customer choices in how to respond. For example, customers can be sent a mail survey with the URL of the survey’s web address included in the cover letter. Another application is to begin with unobtrusive methods (email or mail); for core customers who do not respond, follow-up phone surveys can be initiated to obtain needed sample sizes and minimize non-response bias.

The key point of this discussion is that CSM method selection (and the resulting survey process design) should be driven by a number of factors. While the above factors may suggest a particular approach, it is very important to factor in some other key parameters. Population size, desired sample size and required response rates may further influence the method selection decision.

IMPACTS OF POPULATION, SAMPLE SIZE AND RESPONSE RATES

In the section detailing research design elements, a brief overview of sample design was presented. In our choice of sample designs, we shouldn’t presume that compiling a single list of all customers and drawing a simple random sample for the customer survey is the most desirable sampling approach. It is often more beneficial to target specific customer segments or groups according to the most critical information needs and specific survey objectives. For example, given budget constraints, a company may have to choose between a CSM process that obtains a statistically valid sample of all customers or a statistically valid sample of core customers, but not both. Which approach would you choose?
This raises several sample design questions. First, how do we define the population? Second, should we attempt a sample or a census? Third, if a sample, how big should the sample size be? Finally, how will method response rates affect the achieved sample?

The table below illustrates important relationships between population size (N), desired sample size (n) and the actual sample required to achieve the desired sample. Please note that desired sample size depends on the needed precision (expected variation) and the level of acceptable sampling error. For illustration purposes, conservative sampling requirements have been assumed. Also, the response rate estimates are meant to illustrate the relative differences between methods. Actual response rates will vary depending on the ways methods are deployed and should be expected to vary from one organization to another.

Impacts of Population Size, Sample Size and Response Rates on Method Selection

Population size (N)       Desired sample size (n)    Sample required, considering probable response rates:
                          (±5% precision, α = .05)   Electronic / Mail (33%)    Telephone (50%)          In-person (80%)
Very small (N=30)         n=28                       Need 85 *; max n=10        Need 56 *; max n=15      Need 35 *; max n=24
Small (N=500)             n=216                      Need 648 *; max n=165      Need 432; meets target   Need 270; meets target
Medium (N=1,000)          n=275                      Need 825; meets target     Need 550; meets target   Need 344; meets target
Large (N=10,000)          n=364                      Need 1,092; meets target   Need 728; meets target   Need 455; meets target
Very large (N=100,000)    n=377                      Need 1,142; meets target   Need 754; meets target   Need 472; meets target

Cells marked with an asterisk (*) highlight situations where it is unlikely that the desired sample size could be achieved. For very small populations we may need to relax precision and acceptable sampling error to achieve a statistically valid sample (e.g., ±7% precision and α = 0.10). In other words, if we accept more variation in our results (expand the allowable precision range) and accept a higher risk that our statistical inferences are incorrect (where α is the probability of a wrong conclusion), the effect is to reduce the required sample size.

The table also shows that electronic and mail surveys with small populations are unlikely to attain desired sample sizes, even when a census is attempted. We can conclude that when dealing with small customer populations (including core segments) we may have no choice but to select a method that facilitates higher survey response rates. In addition, this table underscores the value of making extra effort to increase response rates, especially in the case of small populations.
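The desired sample sizes above can be approximated with the standard Cochran formula plus a finite population correction; the gross samples (“Need” figures) then follow by dividing by the expected response rate. The sketch below is a minimal version of that textbook calculation. The paper assumed “conservative sampling requirements,” so a few of its figures differ slightly from this output.

    import math

    def required_sample(population, precision=0.05, z=1.96, p=0.5):
        """Cochran sample size with finite population correction.
        precision is the margin of error (0.05 for +/-5%); z is the
        normal critical value for the chosen alpha (1.96 for alpha = .05);
        p = 0.5 is the most conservative variance assumption."""
        n0 = (z ** 2) * p * (1 - p) / precision ** 2
        return math.ceil(n0 / (1 + (n0 - 1) / population))

    def invitations_needed(sample, response_rate):
        """Gross sample to contact, given an expected response rate."""
        return math.ceil(sample / response_rate)

    for N in (30, 500, 1_000, 10_000, 100_000):
        n = required_sample(N)
        print(N, n, {rate: invitations_needed(n, rate)
                     for rate in (0.33, 0.50, 0.80)})
    # N=30 reproduces the table exactly: n=28, needing 85 / 56 / 35
    # contacts at 33% / 50% / 80% response rates.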
NON-RESPONSE BIAS CONCERNS

With all surveys, non-response bias should be a concern. Even if we are able to attain a statistically valid sample, we must recognize that the results from our survey sample may or may not reflect the perceptions of the entire population. The most obtrusive methods (telephone and in-person) consistently enjoy the highest response rates. Passive methods (mail / electronic) do not convey the same sense of urgency.

The conventional wisdom among survey researchers suggests that customers who respond to mail or electronic surveys are more likely to feel a strong connection with the supplier than non-respondents, or are more likely to be either very satisfied or very dissatisfied. Those in the middle may feel less compelled to respond. Phone and in-person surveys remove most of this potential non-response bias. Hybrid designs that begin with mail surveys, followed by phone surveys of mail non-respondents, are a useful way to determine whether mail non-response bias is present, as the sketch below illustrates.
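One standard way to formalize that comparison (my addition, not a procedure from the paper) is to compare mean scores between the mail wave and the phone follow-up wave, e.g., with Welch’s t statistic. The scores below are hypothetical.

    from statistics import mean, variance

    def welch_t(sample_a, sample_b):
        """Welch's t statistic for comparing two independent means."""
        na, nb = len(sample_a), len(sample_b)
        va, vb = variance(sample_a), variance(sample_b)
        return (mean(sample_a) - mean(sample_b)) / ((va / na + vb / nb) ** 0.5)

    # Hypothetical overall-satisfaction scores (1-5 scale):
    mail_wave = [5, 4, 5, 3, 4, 5, 4, 5, 2, 5, 4, 4]   # mail respondents
    phone_followup = [3, 4, 3, 4, 2, 4, 3, 3, 4, 3]    # former non-respondents

    t = welch_t(mail_wave, phone_followup)
    # As a rough screen, |t| much greater than ~2 suggests mail
    # respondents differ systematically from non-respondents, i.e.,
    # non-response bias is present and results should be blended.
    print(f"mail mean={mean(mail_wave):.2f}, "
          f"follow-up mean={mean(phone_followup):.2f}, t={t:.2f}")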
Customers who perceive your company as a critical (core) supplier in their supply chain are much more likely to respond. If your customer spends a lot of money doing business with you, you are a sole source supplier, or you provide a highly differentiated product / service, you will find them more willing to participate in your customer surveys. For non-core suppliers, short telephone surveys may be the best approach.

HIGH RESPONSE RATE RESEARCH DESIGNS

In-person surveys can often yield response rates of 80% to 100%. However, because of costs these may only be practical on a small scale (e.g., with core customers). The in-person methodology is efficient when there are multiple stakeholders (purchasing, manufacturing, quality) whose perceptions should be included to assess the satisfaction of the organization at large. Since out-of-town travel will be required, it is ideal if all stakeholder interviews can be scheduled on the same day or on consecutive days. Using internal, high-level (autonomous) staff – with proper training in research interviewing techniques – is an excellent way to demonstrate your commitment to the customer and foster customer relationship management (CRM) initiatives (Israel, 1997).

With telephone surveys, the highest response rates are attained with surveys of less than ten minutes. Of course, it also helps if you are viewed as an important supplier. In our experience, the key to higher response rates is the amount of effort the research supplier is willing to expend to complete an interview. It is common research industry practice to replace a target respondent after three attempts for consumer surveys and after five attempts for business surveys. Our business surveys have often attained response rates of 60% to 85%. If a high response rate is important, be willing to make 10 to 15 attempts. Other tips include asking survey questions at the respondent’s convenience (schedule an appointment and call back) and spreading attempts out over a few weeks, calling on different days and at different times of day. If a target respondent is out for a few weeks, be prepared to follow up when they return. With very small samples, it is also helpful to send an advance letter to target respondents so they will be expecting the call for the survey interview.

High response rate mail surveys (35% to 50%) are possible, but require both time and effort to execute. Advance communication about the survey (company newsletters for customers, bill stuffers, etc.) helps build awareness. There are many possible variations, but we have found a fairly efficient process is to plan two flights of survey mailings. The first flight includes a personalized cover letter, a short questionnaire (limited to three pages) and a prepaid addressed return envelope. Customer identification numbers are placed on the questionnaire. As returns come in, keep track of which customers have replied. About four weeks following the first mailing, send the second flight of surveys only to non-respondents. Variations that beat a single flight of mail surveys include either sending an advance letter one week before sending the survey, or sending a reminder postcard three to four weeks following the survey mailing. While less productive than sending two flights of surveys, these variations are easier to implement because respondent tracking is not required. The use of incentives (trinkets, lotteries for bigger prizes, or charitable donations) can also help increase response rates, especially if your customers tend to view you as a non-core supplier.

Use of electronic surveys has grown rapidly in just the last five years. In the past, the biggest limitation has been the lack of universal access among customers. If equal access (to email and the internet) by your customers is not an issue, this method has huge advantages in both cycle time and cost. Our experience suggests that response rates will be slightly lower than for similarly executed mail designs. As with mail surveys, keep the questionnaire short and try to do some advance public relations so customers are expecting it. The most common approach is to send customers email messages that appeal to their interests, providing the URL link to your web-based survey. If they are so inclined, a single click will bring the survey up on their computer. Five to ten minutes later, their responses may be in your email inbox. If you build a customer identification code into the online questionnaire, you can identify non-respondents (see the sketch below). Seven to ten days following the first wave of emails is generally sufficient time before sending the second survey request. Within another week you will have received most of the surveys that will be returned. Total cycle time from start to finish for data collection is only two to three weeks! Again, incentives may help boost response rates.
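The respondent tracking that both the two-flight mail design and the two-wave email design rely on reduces to one set operation: contact only the frame members whose identification codes have not come back. A minimal sketch follows; the IDs and addresses are hypothetical.

    # Hypothetical sample frame keyed by customer identification code.
    sample_frame = {"C001": "ann@example.com", "C002": "bob@example.com",
                    "C003": "carla@example.com", "C004": "dev@example.com"}

    returned_ids = {"C002", "C004"}  # codes seen on completed questionnaires

    def second_flight(frame, returned):
        """Customers who have not yet responded: the only ones who should
        receive the second mailing (or second email wave)."""
        return {cid: contact for cid, contact in frame.items()
                if cid not in returned}

    for cid, contact in second_flight(sample_frame, returned_ids).items():
        print(f"send reminder to {cid} at {contact}")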
RESOURCE CONSTRAINTS

Resource constraints (time and money) are important aspects of CSM design. It is implicit that the value of the CSM system needs to exceed the costs to deploy and sustain it. However, sometimes decision makers see only the cost side of CSM. Benefits of CSM include: the ability to track performance over time; identification of root causes of systemic problems (prevention); ideas for value-added continuous improvement; and tactical information to foster corrective action and customer retention. CSM should be considered an investment. If CSM surveys can help retain just a few key customer accounts – likely to otherwise be lost – how valuable would that be to the organization?

With respect to the CSM budget, there are a few ways to tailor the survey design to minimize costs without completely sacrificing data quality. Data collection costs are usually the biggest CSM budget line item. Some ideas to reduce data collection costs include: taking smaller sample sizes; shortening survey length; and reducing data collection frequency. One tactic to reduce sample size (without impacting statistical validity) is to focus on core customer populations instead of all customers. With smaller populations we can conduct fewer surveys and still draw meaningful conclusions.
When CSM is outsourced to external research suppliers, there may be a tendency to go with low bidders. On the surface, all proposals may look comparable (survey method, sample sizes, etc.), but the reality may be quite different. In addition to their specific industry experience, data collection protocols vary widely from one supplier to the next. For example, not all research suppliers emphasize high response rate research designs. It takes considerably more effort to achieve high response rates. If you feel response rates are important for your customer population, ask suppliers about expected response rates and what steps they take to minimize non-response.

Whether you outsource all CSM activities or implement all phases with internal resources, take time in your resource planning to understand the skill sets and stakeholders that will be involved in the CSM process. Unless you have in-house survey research expertise, consider professional assistance for the qualitative and survey design phases. If you plan to conduct telephone or in-person interviews internally, be sure to provide the training and autonomy needed to obtain unbiased and professional results. Remember that once the survey data has been collected, the important part is just beginning. The focus of analysis should always go back to the CSM objectives, but in particular should aid prevention, corrective action, continuous process improvement and strategic planning processes (ASQ Quality Management Division, 1999, 237).

CONCLUSION

There is no one “best method”, or for that matter, one “best research design” approach for customer satisfaction measurement. In the end, the cost versus the value of the information should dictate CSM form and function for each organization. It is critically important to be clear on both the information needs and how results will be used for improvement. In addition, clarity on the key factors impacting CSM designs and methods (and the resulting tradeoffs) facilitates the creation of an effective approach, one that will succeed in attaining the objectives of the organization.

REFERENCE LIST

ASQ Quality Management Division. 1999. The Certified Quality Manager Handbook. Milwaukee, WI: ASQ Quality Press.

Dillman, D. A. 1978. Mail and Telephone Surveys: The Total Design Method. John Wiley & Sons.

Israel, J. T. 2000. “Enhance Your Quality System With Customer Satisfaction Measurement.” ASQ Customer-Supplier Division Fall Conference. Phoenix, AZ: CSD Proceedings.

Israel, J. T. 1997. “Feedback To Improve Core Customer Relationships: A Framework To Implement Face-to-Face Surveys.” ASQ 51st Annual Quality Congress. Orlando, FL: ASQ Proceedings.

Israel, J. T. 1994. “Increasing the Power of Customer Satisfaction Measurement: Barrier Surveys Facilitate Improvement Breakthroughs And Customer Focus.” ASQC Customer-Supplier Division Fall Conference. Denver, CO: CSD Proceedings.
Israel, J. T. 1992. “Steps to Create A Successful Customer Satisfaction Measurement System.” ASQC Customer-Supplier Division Fall Conference. San Diego, CA: CSD Proceedings.

Vavra, T. G. 2002. Customer Satisfaction Measurement Simplified: A Step-by-Step Guide for ISO 9001:2000 Certification. Milwaukee, WI: ASQ Quality Press.

SPEAKER BIOGRAPHY

Jeff Israel is the founding Principal of SatisFaction Strategies, LLC, a consulting firm specializing in the design and improvement of customer satisfaction measurement and feedback systems. He also helps his clients remove performance barriers, aligning service processes with customer expectations. He has more than 20 years of experience in customer satisfaction and service quality research, serving a broad range of industries. Before starting SatisFaction Strategies, Jeff worked as a market research consultant. At Market Trends he held positions including Office Principal, Vice President, and Data Collection Manager. Prior to that he was a Field Coordinator with Gilmore Research Group. Jeff received his M.B.A. from Washington State University and his B.A. in Business (Marketing) from WSU.

Jeff has been an active volunteer with ASQ’s Customer-Supplier Division since 1992. He has served as Division Chair, Past Chair, Chair Elect, Vice Chair Marketing, Treasurer and Secretary. Currently he is CSD’s Webmaster. Jeff is also a member of the American Marketing and Business Marketing Associations. Jeff frequently speaks on customer satisfaction topics and has written and presented six papers on customer satisfaction, performance improvement and improving customer relationships. Recently, Jeff authored several chapters of The Certified Quality Manager Handbook (Quality Press), published by the Quality Management Division in 1999. In 2000, he received the Customer-Supplier Division’s R. A. Maass Award for his publications and presentations in the area of customer-supplier relations.

The papers authored and cited by Jeff in this paper may be accessed and downloaded from www.satisfactionstrategies.com.