has collected SurveyScore data for more than 10,000 surveys, with over 2.6 million completes. These surveys span product categories (such as food and beverage, financial, technology, entertainment, health and beauty, health care and travel) and research methods (such as concept screening, line and package optimization, and attitude and usage studies).

Our team sought to determine whether certain survey design variables could reliably predict the composite engagement measure of respondent behavior and perception that comprises the TrueSample SurveyScore. We built a model to predict engagement using survey design variables and the TrueSample SurveyScore database as inputs. Predictability is an indication that survey design impacts engagement in a consistent way, implying that we could recommend adjustments to the design variables that would minimize adverse effects on engagement. Specifically, we modeled the impact of more than 20 survey design variables (independent variables) that are within the control of survey designers - such as survey length and total word count - on several respondent engagement measures (dependent variables) reflecting the respondents’ perception of the survey and their behavior during the survey.

Clear indication
The research revealed that a multivariate model that captures the complex interaction among design variables is able to predict overall engagement, comprising both experiential and behavioral variables. The fact that the impact of these variables is predictable provides a clear indication that survey design directly influences respondent perception and behavior, i.e., engagement, in a consistent way. This means that survey designers do have some degree of control in improving engagement. It also means that the SurveyScore can be predicted prior to deploying a survey, to help guide design modifications.

We uncovered another interesting finding when we examined the influence of particular survey design elements on specific aspects of engagement, such as survey rating or partial rates. While survey length proved to be generally predictive of most respondent engagement measures, there was wide variation in the design variables that were most influential in driving various measures of engagement. For example, for the survey rating measure, one of the most predictive design variables was the elapsed time per page of the survey. For the speeding measure, however, elapsed time per page was not even in the top five most important design variables.

Thus, adjusting just one parameter may not be sufficient to elicit desirable behavior from respondents, nor will it singlehandedly improve their perception of the survey-taking experience. Instead, the findings reveal that engagement is driven by a complex interaction among design variables. This means that simple survey design guidelines or rules are inadequate for motivating the desired respondent engagement. There is no axiom that applies in all cases, such as, “Surveys that require more than 20 minutes result in poor respondent engagement.” In fact, our researchers uncovered several examples of long surveys that had a higher-than-normal survey rating as well as a lower-than-normal partial rate, which would run contrary to what one would expect if length alone were a deciding variable. Conversely, we found examples of short surveys that had a lower-than-normal survey rating because of the design of other variables.

An effect on quality
With the impact of survey design on respondent engagement established, the research team endeavored to determine whether engagement had an effect on data quality. The TrueSample SurveyScore database allowed us to test this hypothesis. MarketTools fielded three surveys with varying levels of complexity, categorized as moderate, medium and high. We analyzed 1,000 completes for each survey. The experimental surveys had the same series of questions about demographics, products purchased, etc., but differed based on the number of products respondents said they purchased. The level of complexity increased as more products were chosen and more brand attribute questions were displayed. In the moderate category, respondents were asked one question per product. In the medium-complexity category, respondents received 17 brand attribute questions per product. In the high-complexity category, respondents were asked 17 questions for every product chosen, plus additional open-ended questions.

We computed and compared the SurveyScore for the three surveys. Predictably, it dropped precipitously at the higher complexity levels. The medium- and high-complexity surveys received an extremely low score, as shown in Table 1.

Table 1: Survey Complexity (High to Low Engagement)

Design Attributes                  Moderate Complexity  Medium Complexity  High Complexity
                                   (SurveyScore = 35)   (SurveyScore = 9)  (SurveyScore = 4)
Survey length (min)                         9                  16                 17
Total survey pages                         38                  39                 43
Total number of questions                  40                  41                 45
Avg. number of rows/matrix                  4                  13                 13
Avg. number of columns/matrix               5                   6                  6
Total number of matrix questions            8                   8                  8

Next, we conducted a series of statistical tests to evaluate the effect of respondent engagement on data quality. By conducting different analyses, we were able to examine data quality from various angles for a more comprehensive review. Specifically, we investigated the following. Will unengaging surveys:

• Increase the odds of sample bias?
• Make respondents more apt to answer the same question inconsistently?
• Make respondents more prone to random answer choices?
• Make respondents more likely to provide inconsistent answer choices?
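As a rough illustration of the kind of length effect discussed earlier, the following sketch fits a one-variable least-squares line to the three (survey length, SurveyScore) pairs from Table 1. This is our own simplification for illustration only: the actual model was multivariate, spanning more than 20 design variables, and this single-predictor fit is not MarketTools’ implementation.

```python
# Illustration only: one-variable least-squares fit of SurveyScore on survey
# length, using the three (length, score) pairs published in Table 1. The
# real engagement model was multivariate (20+ design variables); this sketch
# just shows the direction and rough size of the length effect in that data.
lengths = [9, 16, 17]   # survey length in minutes (Table 1)
scores = [35, 9, 4]     # TrueSample SurveyScore (Table 1)

n = len(lengths)
mean_x = sum(lengths) / n
mean_y = sum(scores) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(lengths, scores)) / sum(
    (x - mean_x) ** 2 for x in lengths
)
intercept = mean_y - slope * mean_x

# Each extra minute of length is associated with a lower predicted score.
print(f"slope: {slope:.2f} score points per minute")   # about -3.82
print(f"intercept: {intercept:.2f}")                   # about 69.42
```

Of course, as the text notes, no single variable decides engagement on its own; this fit only summarizes one association in the Table 1 data.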
opposed to one of the available products. Once we removed the “none” option from our model, the prediction accuracy dropped significantly for the high-complexity survey. In addition, the lower-scoring surveys had more violations in price selection order, meaning the respondents tended to violate the expected order of selecting a lower unit price over a higher one. The net result: surveys with a low SurveyScore translated to lower predictability and thus to lower data quality.

Take responsibility
Our conclusion? Researchers must take responsibility for data quality by removing bad respondents and designing surveys that keep good respondents engaged. Research professionals now have evidence that survey design not only influences whether respondents abandon a survey but also impacts the data for those who complete it.

The ability to predict the effect of various survey design variables on respondent engagement will help survey designers maximize engagement to increase the reliability of their data. Researchers no longer have to assume that a long survey will jeopardize the quality of the results, since we have shown that it is possible to compensate for the adverse effects of certain design variables by adjusting others. By using engagement measurement and prediction tools, researchers can know that survey design affects data quality, can measure engagement to help improve survey design and can optimize design to enhance the reliability of results.
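The price-selection-order check mentioned above could be sketched as follows. The function name, the data shape (a list of unit prices in the order the respondent selected them) and the definition of a violation (a later selection priced higher than an earlier one) are our own illustrative assumptions, not the authors’ actual quality metric.

```python
# Hypothetical sketch of a price-selection-order check. Assumption (not the
# authors' metric): a respondent's selections arrive as a list of unit prices
# in the order chosen, and a "violation" is any later selection whose unit
# price is higher than an earlier one.
def count_price_order_violations(selected_prices):
    """Count pairs (i, j) with i < j where the later price exceeds the earlier."""
    return sum(
        1
        for i in range(len(selected_prices))
        for j in range(i + 1, len(selected_prices))
        if selected_prices[j] > selected_prices[i]
    )

# A respondent drifting toward pricier options accumulates violations.
print(count_price_order_violations([1.99, 2.49, 2.29]))  # 2 violations
print(count_price_order_violations([2.49, 2.29, 1.99]))  # 0 violations
```

Aggregating such per-respondent counts across a survey would give one way to compare order violations between low- and high-scoring surveys, in the spirit of the comparison reported above.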