Prior to reading these lecture notes, you will find it helpful to finish the assigned readings and review the sites listed. The secret to a good survey is in the design: the design of both the research method and the survey instrument itself is critical. So far this semester we’ve spent a lot of time writing the research question and determining a population to study. We’ve also discussed validity and reliability. All of these are critical to the success of any survey.
In Chapter 15 of Portney, we see that surveys can be descriptive (describing populations), exploratory (finding relationships), or experimental (exploring cause and effect). As you have realized, they can also be a combination; very often you seek to describe the state of the art or the practices of a set population, but also seek to explore relationships between variables. Our surveys may be designed to address or describe a small, targeted group of individuals (e.g., RDs with the CNSC credential) or to use a sample that can be generalized to a broader population (e.g., RDs with greater than 3 years of experience). Much of the thinking behind the “w” questions (who, what, where, when…and how) goes back to your research question and hypotheses: what are you trying to answer, and what is the best way to answer your research questions? “Surveys are designed to produce statistics about a target population. The process by which this is done rests on inferring the characteristics of the target population from the answers provided by a sample of respondents” (Fowler FJ, 2009, p. 11).
While there are many types of survey research methods, including interview, phone, Delphi technique, nominal group technique, etc., this lecture will focus on mail and internet survey research, as these are the most frequently used survey research methods in this course. If you are using a different approach to your survey research, you should explore the literature to develop the best methods for constructing your methodology. We’ll get back to this in more detail…
Before we get into the nitty-gritty of survey design, let’s review the concepts of validity and reliability in survey research. Critical to a successful survey is minimizing sources of error. If the survey is not truly measuring what you intended it to measure (internal validity), is not generalizable (external validity), or is not repeatable (i.e., do the respondents rate the variable consistently – reliability), then your survey will likely not tell you what you want to know. There are 4 basic sources of error that can occur in survey research: coverage error, sampling error, non-response error, and measurement error. Review each on the upcoming slides, and think about the who, what, where, when, and how questions of survey research presented in the previous slide and how you could limit error in each area.
This picture demonstrates where in the process of survey research error can occur. We’ll go into each type of error in the next few slides. You will never be able to control for all error, but it is important to control for as much error as possible and to recognize where it may occur and how to reduce it.

The target population is the population to which your sample and survey results should generalize. For example, if you survey a general sample of RDs from the Northwest, you hope that your results are representative of all RDs in the Northwest. The goal is that by describing the sample that responds to the survey, you are also able to describe the target population. After you identify the target population you want to study, you need to choose your sampling frame. The sampling frame details the sources you will use to capture the target population, with the goal of using all possible sources so the sample is representative of the population, and then using sampling techniques to assure that each element (or source) of your sampling frame has an equal chance of being included in the sample. It is defined as “a list, map, or conceptual specification of the units comprising the survey population from which respondents can be selected. For example, a telephone or city directory, or a list of members of a particular association or group” (http://www.statcan.gc.ca/edu/power-pouvoir/glossary-glossaire/5214842-eng.htm#frame). The sampling frame reflects those within the target population who have a chance of being selected for your sample. Your sampling frame for a general survey of RDs credentialed in 2009 would include the Commission on Dietetic Registration list of Registered Dietitians credentialed in 2009, as it is the most comprehensive list. For other surveys, the sampling frame may be more complex. Think about how you would develop a sampling frame for NICU dietitians in the US.
What sources would you use to contact this target population? Factors such as the feasibility and practicality of obtaining all possible sources for your sample need to be weighed against the value of their inclusion. Sometimes barriers such as time or economic resources may limit the ability to use all potential sample sources (e.g., the cost of purchasing an e-mail list). The sample comprises those who are targeted once the sampling frame has been established (using sampling techniques like random sampling or stratified random sampling), and the respondents are those who actually complete the survey. Let’s talk now about the errors that can occur at each step along the process. In defining the frame, practical, economic, ethical, and technical issues need to be addressed. The need to obtain timely results may prevent extending the frame far into the future. http://www.socialresearchmethods.net/tutorial/Ojwaya/sampling.html
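To make the frame-to-sample step concrete, here is a minimal Python sketch of drawing a simple random sample from a sampling frame. The frame contents, sizes, and seed are invented purely for illustration:

```python
import random

# Hypothetical sampling frame: identifiers drawn from a registry list (invented)
sampling_frame = [f"RD_{i:04d}" for i in range(1, 1201)]  # 1200 frame members

random.seed(42)  # fixed seed so this illustrative draw is reproducible
# Simple random sample: every frame member has an equal chance of selection
sample = random.sample(sampling_frame, k=300)

print(len(sample), len(set(sample)))  # 300 300 (no duplicates)
```

Stratified random sampling would work the same way, except you would call `random.sample` once per stratum (e.g., per practice area) so each subgroup is represented proportionally.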
This type of error occurs when the sampling strategy excludes a certain segment of your population, or when the survey method chosen minimizes the possibility of a certain group responding. For example, if you conduct a web-based survey of community members in a specific town and only 50% of community members have access to a computer, you will have coverage error. One way to reduce this is to use a “mixed-mode” survey design, in which you might send those who don’t have a computer a mail survey or contact them by telephone.
This type of error addresses the precision of the survey sample. It is not possible to survey everyone in a population; sampling error addresses just how precise your sampling is. It occurs when you use only a subset of the larger population sample without randomization or coverage consideration. For more information on sampling errors, review: http://www.socialresearchmethods.net/tutorial/Mugo/tutorial.htm and http://www.socialresearchmethods.net/kb/sampstat.php

Question – Can you think of an example of when sampling bias could occur?
This type of error occurs when members of your sample do not respond to your survey and the non-responders differ from those who responded. For example, if more females than males responded to your survey, you would need to determine whether the results are biased. Would males perhaps answer the questions differently than females? You won’t know unless you somehow survey some of these non-responding males to see their answers. Hence, a non-response bias check is conducted to explore whether select variables differ between responders and non-responders. The investigator chooses key questions and calls or surveys a sample of non-responders to ask them those questions, to determine whether their collective responses differ from those gathered from the larger sample. Reducing non-response error is important and can be addressed by motivating participants to respond, so that you get a good representation of all members of your sample. Using Dillman’s tailored design method (more on this coming up) can improve response rates.
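The non-response bias check described above boils down to comparing responders and a follow-up sample of non-responders on a key variable. A hand-rolled Python sketch using a two-proportion z-test, with all counts invented for illustration:

```python
from math import sqrt

# Invented counts: "yes" answers to one key survey question
responders_yes, responders_n = 180, 300   # from the main survey
nonresp_yes, nonresp_n = 12, 30           # from follow-up calls to non-responders

p1 = responders_yes / responders_n        # 0.60
p2 = nonresp_yes / nonresp_n              # 0.40
p_pool = (responders_yes + nonresp_yes) / (responders_n + nonresp_n)
se = sqrt(p_pool * (1 - p_pool) * (1 / responders_n + 1 / nonresp_n))
z = (p1 - p2) / se                        # two-proportion z statistic

# |z| > 1.96 suggests responders and non-responders differ on this item (alpha = .05)
print(round(z, 2))  # 2.12
```

Here the gap between 60% and 40% would flag possible non-response bias; in practice you would run this comparison for each key question chosen for the bias check.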
Measurement Error: This type of error often occurs when survey questions are poorly constructed. If participants misinterpret a question, their answers will likely be inaccurate or imprecise. For example, if a survey asked participants how many servings of dairy products they consume daily but did not define a serving size, measurement error will occur. This type of error can also occur if the measurement scales used (e.g., Likert scales) are not clear.
Your choice of a sample from your target population depends to a large extent on the nature of the research question. For example, a survey of “Dietitians’ personal and professional practices regarding soy protein” required a sample of dietitians. The decisions to be made were: do we use a local sample of NJ dietitians, a national sample, or a sample of only one age group, gender, practice group, or nationality? The decision rested on how generalizable we wanted the results to be. If, in fact, we wished to discuss only the practices of dietitians age 35–45 in the Northeast, we’d use an age- and state-specific sample. Since our goal was to have generalizable results, we wanted a representative sample of dietitians in the U.S. That required a power calculation to determine the necessary ‘N’ for this survey so that we could claim our results were generalizable. However, if we wished to survey “herbal medicine practices of the elderly,” how would we go about it? Those decisions require consideration of the age of the population and therefore the HOW and WHERE of the survey. The elderly (depending on which cohort – over 65 vs. over 85) are less likely than younger adults to respond to a mail or internet survey. So the next question becomes how to get a representative sample. That would depend to some degree on whether you were looking at the population of a specific state or country, and whether you are looking at free-living elders or those in long-term care facilities and/or assisted living centers. How would you approach the challenge of finding a sample for this survey (to survey herbal medicine practices of the elderly)? You also need to consider access to your sample. Will you have access to an email list of individuals or a listserv? What potential issues would you anticipate if you only had access to a listserv?
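The ‘N’ calculation mentioned above depends on your analysis goals. For a simple descriptive proportion, a standard precision-based formula (a generic textbook formula, not necessarily the one used in the soy study) can be sketched in Python; the margin of error and assumed response rate here are chosen only for illustration:

```python
from math import ceil

def needed_sample(p=0.5, margin=0.05, z=1.96):
    """Sample size to estimate a proportion p within +/- margin at 95% confidence."""
    return ceil(z**2 * p * (1 - p) / margin**2)

n = needed_sample()        # worst case p = 0.5 -> 385 completed surveys needed
# Inflate for an assumed 40% response rate to decide how many surveys to send
to_mail = ceil(n / 0.40)   # -> 963
print(n, to_mail)
```

This is why typical response rates (discussed later in these notes) matter at the design stage: a lower expected response rate directly inflates how many surveys you must distribute.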
If you were trying to survey RDs who work in bariatric centers and did not have a list of these professionals, how would you go about finding them? Do you oversample all RDs in the Weight Management DPG, assuming that those working with bariatric patients are also members of the DPG, or do you contact all bariatric centers in the US and request contact information for the RD working in each center? These are all questions you will need to consider when selecting your sample.
WHAT – What type of survey method do you use?
- An interview: in person, telephone, or internet (Skype/Elluminate)
- A paper mail survey
- An email survey
- An internet survey
- A mixed-mode survey

Surveys can be conducted in several different ways. Mail surveys are the most common method, although web-based e-mail surveys and mixed-mode surveys (giving participants more than one option for response) are growing in frequency. I am sure many of you have received surveys of one or more types. How often do you respond? What improves your chances of responding? Many factors must be considered when choosing a method: Is your sample literate (reading and writing) or computer literate? Will your sample respond better to self-administered or interview-based surveys? What are the cost factors involved with each method of survey administration? Do you have access to internet-based survey software (yes, you do)? Reviewing the literature to find survey research similar to yours will help you understand the advantages and disadvantages of each survey method and which will work best for your study. A classic example of the telephone survey is the Eagleton Poll. Another method is interviews, either group interviews or individually administered interviews such as in NHANES. Telephone and in-person interview surveys require consistency in interview techniques and questioning among interviewers. Alterations in voice, body language cues (in-person only), and other non-verbal cues can affect responses. Interviews work well in select situations. Some examples of when to use interviews include diet intake surveys when you want to determine specific portion sizes or methods of preparation, psycho-social interviews, and situations where you want to be sure to get complete responses to all questions. An excellent example of the last situation is surveys of the elderly.
Interview strategies allow you to check responses to all questions and clarify any misunderstanding of survey questions. Telephone interviews work only when the list of questions is short, the questions themselves are short and easy to understand, and you are asking about an issue that is popular with or important to that consumer market. For example, a short questionnaire on uses of herbal supplements, which works well with most adult populations by phone, may not work as a phone interview strategy with elders due to hearing issues. E-mail and internet surveys generally take less time to conduct than mail or interview surveys because you do not have to account for the time required to mail and receive surveys or to set up and conduct interviews in person or by phone. Also, internet and e-mail surveys often save time on data entry, as programs like SurveyMonkey can be set up to download data directly into SPSS or other statistical software. However, as you read in the meta-analyses by Cook et al (2000) and Hoonakker et al (2009), response rates tend to be slightly lower than for mail surveys. Also, if survey software is available (as it is for UMDNJ students), web-based surveys can be less costly and less time-consuming with regard to data input. What are the differences between an internet-based survey and an email-based survey? One crucial difference is the ability to track responses individually. If you send a web-based survey to an individual’s email address, the software can track whether that respondent completed the survey. So, if you are sending reminder emails, you only need to send them to those who did not respond. Web-based surveys sent via a listserv or posted on a web page do not offer the ability to track responses or distinguish respondents from non-respondents.
Internet surveys can be delivered via 4 methods:
- Survey link embedded in a targeted email (to one individual)
- Survey questionnaire as a Word document attached to an email
- Survey questionnaire attached as an .EXE (self-executing program) within an email message
- Web-based survey (link listed on a web page or as a pop-up message)

Hoonakker and Carayon, 2009
Responses to mail surveys may vary with the seasons, the day of the week received, and holiday seasons. While there is no empirical evidence to support the statements that follow, time and experience support them. Never mail surveys during holiday times (such as between Thanksgiving and New Year’s), over a holiday weekend, during the summer if you can avoid it, or on Mondays, Tuesdays, or Wednesdays. It’s best to mail surveys on Fridays, so that they tend to arrive early in the week, on a Monday or Tuesday. Responses may also vary seasonally depending on the method you use. For example, I would not send a survey to educators in July; many baccalaureate and graduate-level educators may take time off during the summer or visit their offices less frequently. It would be best to survey this group in mid-fall or early spring, after a semester is in session but before or after mid-semester points.
Mail surveys require direction and some form of consent. Typically there is a letter of introduction that explains the purpose of the survey, why the participant was selected, who is doing the survey, and why their responses are important. The letter MUST include a statement indicating that “their return of a completed survey indicates their consent to use this information in aggregate form.” WHAT DOES THIS MEAN? It means that by completing and returning the survey, they give permission to use the information as long as they are not identified by name. What else should a mail survey include? A stamped, addressed return envelope! Without it, your response rate may be negatively impacted. The survey and envelope make it much easier for an individual to simply complete and return it. Sometimes mail surveys also include a pencil or a gratuity. While I have never included a gratuity or pencil myself or with a student, some feel it adds to the response rate. Examples of gratuities I have seen include a crisp one-dollar bill, free access to continuing education programs, a tea bag with a note telling me to relax with a cup of tea while I complete the survey, and a shiny coin. The letter should also include a phone and email contact for questions about the survey. Typically, we also ask respondents to include a business card if they want to be mailed a copy of the results. ALL SURVEYS must be coded. The principal investigator (you) must keep a list of the names and addresses (or phone numbers, if a phone survey) of respondents (called a code list or participant list). Why is this important? Simply for determining who needs a second survey or second call, and for examining the response rate.
Dillman’s Tailored Design Method
Dillman’s Tailored Design Method includes five steps for mail surveys (Dillman, 2009): a brief pre-notice letter to alert participants of the survey’s arrival; the questionnaire mailing (with cover letter), a pre-paid postage envelope, and a token of appreciation (if desired); a thank-you postcard/reminder notice; a replacement questionnaire sent to non-respondents ~2–4 weeks after the first mailing; and finally, a last contact made via a different mode of delivery 2–4 weeks after the last mailing. Now, if you do the math, that could get very expensive! This is the ideal, and it is what Dillman has studied as the most effective method of increasing response rates. Modifications have been made that eliminate various steps. Modifications can also be made when the survey is conducted via email. What must be present in all communication, though, is your gratitude for their completion. Tips like using brightly colored postcards and positive messaging can help improve your response rate.
5 contacts are ideal; however, slight modifications to the 5 steps have resulted in good response rates (e.g., eliminating the final contact if there is a good response rate after the 4th contact). Mixed modes can be used with positive results. For example, step one can be sent via email, step 2 via mail, step 3 via email, etc. If you are conducting survey research, Dillman’s Chapter 7 (Implementation Procedures) is extremely helpful; it details each step, its justification, and how to construct your letters and reminders to maximize the return rate.
Where to survey depends on the WHO and the WHAT of the survey. We discussed the fact that in-person surveys work well with the elderly, but you either need to go to where they congregate or do individual interviews, both of which have unique challenges. The WHERE will certainly influence your population sampling. Even in a shopping mall, the sample will be limited to those who typically go to the mall and, while giving you some diversity, will also shape the socio-economic profile of the respondents. It will not include those who don’t mall shop, can’t afford to mall shop, or don’t like the stores in the particular mall you are using. Is it best to survey RDs during a large gathering (i.e., FNCE), when you have a captive audience, or via a mail or internet survey? When deciding the WHERE question, take into consideration the WHO, WHAT, HOW, and WHEN, and then discuss with colleagues and population-specific experts (geriatric health professionals if a geriatric survey, etc.) what might work best for the survey you have in mind.
WHAT QUESTIONS WILL YOU ASK?
The question of what to ask on the survey depends on what you want to find out. When a survey is designed, it is critical to be sure that each of your research questions is addressed on the survey. Another important consideration is to limit extraneous questions that will produce data you will not use for the research. Unless you are planning a sizable bonus or gratuity for completing the survey, you want to be as time-sensitive as possible. In most cases, surveys are given to individuals to complete without any gratuity. We tend to rely on ‘the kindness of others’ – hoping they will complete and return a survey just because you gave it to them. When you determine the questions to ask, be consistent in your questionnaire design. Asking questions in several different formats (multiple choice, fill-in, Likert scales, etc.) can confuse the survey recipient. Try to limit the question formats to one or two to avoid any confusion. If you are writing multiple-choice questions, be sure the question and possible responses are clearly stated and the directions indicate whether one or multiple responses may be checked. Try to limit questions that have “mark all that apply” as an option; analysis of this type of question is difficult. It is better to phrase these questions as Y/N for each option of interest. It is best to avoid open-ended questions whenever possible. Open-ended questions are difficult to analyze when you are trying to summarize all possible responses to a given question. They require interpretation on the part of the surveyor and can subsequently alter the true intended response. Multiple-choice responses, while they limit options, are easily quantified. There can always be an ‘other’ option on any multiple-choice question, which provides the option for write-in responses. A benefit of using internet surveys is the ability to use skip logic.
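The advice to phrase “mark all that apply” items as Y/N per option also pays off at analysis time, because each option becomes its own variable. A minimal Python sketch, where the option names and the respondent’s answer are hypothetical:

```python
# Hypothetical options for one "mark all that apply" item
options = ["counseling", "tube_feeding", "parenteral_nutrition"]

# What one respondent checked (invented)
raw_answer = {"counseling", "parenteral_nutrition"}

# Recode as one Y/N (1/0) variable per option for easy tabulation
recoded = {opt: int(opt in raw_answer) for opt in options}
print(recoded)  # {'counseling': 1, 'tube_feeding': 0, 'parenteral_nutrition': 1}
```

With one 1/0 column per option, you can report the percentage of respondents selecting each option directly, instead of untangling free-form combinations.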
The survey can be set up to automatically route participants to different questions depending on their responses. This is a very efficient and error-eliminating process. While it takes some skill to set up skip logic in SurveyMonkey, careful review and testing can produce a well-constructed and organized survey. When constructing internet surveys, it is also recommended to limit the number of questions per screen; you don’t want to overwhelm the respondent with too much to look at at one time. If you are using a survey that has already been validated in a similar population, you may be able to avoid PILOT TESTING. However, even if the survey has been used with one population, if you use a demographically different population you need to pilot test it to determine its validity in that population. If you are designing a new survey, the best strategy is to use 2 pilots. First, an expert-opinion pilot: gather experts from the area of practice or topic under study, and ask them to compare the research question to the proposed survey and comment; their expertise will help determine the validity of the survey tool itself for the research topic and population. Once edits are done, do a second, population pilot: choose a small group (10–15) of people who represent (but are not included in) the population under study. Ask them to complete the survey and comment on how long it took, which questions were confusing vs. easy to read, and any language that should be changed. Once these edits are done, the survey is ready to mail. What you ask also depends on the population you are surveying. Cultural issues, level of education, age, and gender should all be considered in question design and format. If you are surveying a culturally diverse population, you will need to consider having the survey translated into other languages.
If you do select this option, be sure to have the translation completed by a native speaker or professional translator; do not try to do it yourself with a dictionary – the results, while literal, may be terrible.
Survey Questions: Constructing your survey questions is a very important process. There are many types of questions, and you need to examine each piece of information you want to know before you write your questions.

Open-Ended Questions:
- Useful in probing respondents’ feelings and opinions, without biases or limits imposed by the researcher
- Most often used in qualitative research
- Useful when the researcher is not sure of all potential responses to the question
- Some researchers will use open-ended questions in pilot studies to determine the range of responses, which can then be converted into a multiple-choice closed-ended question
- Can be very difficult to code and analyze

Closed-Ended Questions:
- Easily coded and provide uniformity across responses
- May not allow respondents to express their own personal viewpoint and, therefore, may produce a biased response set
- When constructing a closed-ended question, the responses should be exhaustive and mutually exclusive

The type of question that you use can impact your results. You can ask the same question in many different ways. Perhaps you are conducting a survey of dialysis patients regarding their bone health, and you want to know about their phosphate binder compliance. The following are some ways that you can ask them about this:

Do you regularly take your phosphate binders? Yes / No

If yes, do you ever forget to take your phosphate binders as prescribed? No / Yes

If you replied Yes, how often? every day / several times per week / once a week / several times per month / once a month / other: _______________________________

To review more about formatting survey questions, please review Chapter 15 (Surveys) in your text.
Some key points on tool development and cover letters follow. Questions you want to be able to answer about your survey questions include (adapted from Dillman 2009, Chapter 4):
- Does the question necessitate an answer? If yes, what type? Multiple choice, Y/N, open?
- To what degree do respondents have an accurate, easy-to-provide answer available for what they are being asked to report (e.g., age or gender vs. the need for thought or opinion)?
- If you are asking about behavior, can you expect respondents to accurately recall and honestly report on prior behaviors?
- Is it reasonable or feasible for the respondent to provide the information requested?
- Will the respondent be motivated or want to answer each question?
- Will the layout or answer types influence their response? Or, as Dillman says: “Is the respondent’s understanding of response categories likely to be influenced by more than just words?”

As you develop your survey tool, be careful how you word your questions. Aim for clear, specific, bias-free questions that are short and concise, and that are neither hypothetical nor objectionable in nature. Use simple words (from Dillman D): tired vs. exhausted, honest vs. candid, most important vs. priority, free time vs. leisure, work vs. employment, care vs. supervision, help vs. assistance, protect vs. preserve, correct vs. rectify, nearly vs. virtually. Keep your scales reasonable in length – an 11-point Likert scale will be very confusing to the respondent (and to the researcher in analysis!). Order your questions so there is a logical flow and to reduce respondent confusion. Is your survey appropriate for the average reading level of your sample? Do you need to translate your survey?
In general, mail surveys have a better response rate than internet surveys. Several meta-analyses have explored this concept (Hoonakker & Carayon, 2009; Cook et al, 2004). The average response rate among mail surveys is ~55–60%. Cook (2004) found an average response rate of 37% across 68 surveys.

Examples of return rates:

2002 Dietetics Compensation & Benefits Survey, ADA: An outside research company was contracted to collect data via mail survey from July 10 to August 26, 2002. The survey was sent to 30,000 RDs and DTRs, with a usable response rate of 46% (n=13,694).

2005 Dietetics Compensation & Benefits Survey, ADA & CDR: An outside research company was contracted to collect data via mail survey from May 11 to July 5, 2005. The survey was sent to 30,000 RDs and DTRs, with a usable response rate of 40% (n=12,016).

E.T. McDonnell, et al., Perceptions of School Foodservice vs. Administrative Personnel Related to Childhood Obesity and Wellness Policy Development: A 27-item survey, developed to assess school employees’ perceptions about childhood obesity and wellness policies, was distributed to 907 school employees attending a mandatory training on new federal regulations requiring all schools sponsoring school meals programs to develop wellness policies. The response rate was 69% (n=628).

2006: Tatum et al., Educational Needs Assessment for RDs Interested in Advanced Clinical Nutrition: An email was sent to 1166 RDs (a randomized sample of 1000 from CDR + 166 RDs who were matriculated students or alumni of the UMDNJ-SHRP Graduate Programs in Clinical Nutrition). The email served as the “cover letter” and provided a link to the survey via SurveyMonkey.com. Of the emails sent, 8.2% (n=96) were “undeliverable” secondary to outdated or incorrect email addresses. Of the 1070 usable surveys, I received a total of 432 responses, 7 of which were unusable secondary to inadequate completion, leaving 425 usable responses (39.7%) from the potential sample (n=1070).
Of the 1000 surveys sent to the randomized sample of RDs, 34.3% (n=343) were completed. Of the 166 surveys sent to the UMDNJ group, 49.4% (n=82) were completed.

ADA/CDR Needs Assessment: 12,000 mail surveys were sent to ADA member students, RDs, DTRs, and others in the field of food and nutrition (stratified sample) between Sept and Dec 2008, with a 58% usable response rate.

2009 Brody (ongoing), Advanced Level Dietetics Practice Survey: 119 advanced-level practice RDs were identified and recruited to participate in a 3-round pen-and-paper survey to explore advanced-level practice attributes and activities in clinical nutrition. The first-round survey was 23 pages, with 3 pages of open-ended questions. A modified Dillman tailored design method was followed, with 3 contacts in Round 1 (pre-notice, survey, email reminder with an electronic copy of the survey attached) and an incentive offered to those who completed all three rounds. First-round response rate = 71.4%. The Round 2 survey was 24 pages (limited open-ended questions), and 4 contacts were made, including 1 phone-call reminder and a mixed-mode option for survey return. Response rate = 94.1%.

When designing your survey and estimating the number needed, you will need to take into consideration typical response rates for the type of survey you are doing. For example, if you want to survey RDs’ knowledge of evidence-based practice, you would need to consider the typical response rate of RDs in the U.S. to surveys (electronic or mail) by looking at prior research. In our case, we’d look at the past 3 years of electronic surveys we’ve done of RDs, find the mean response rate, and from there estimate our response rate.
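The Tatum et al. figures above illustrate the arithmetic you will repeat for your own survey: subtract undeliverables to get the usable sample, subtract incomplete surveys to get usable responses, then divide. Checking that arithmetic in Python:

```python
emails_sent = 1166
undeliverable = 96                           # the 8.2% that bounced
usable_sample = emails_sent - undeliverable  # 1070 potentially reachable RDs

responses = 432
unusable = 7                                 # inadequately completed
usable_responses = responses - unusable      # 425

rate = usable_responses / usable_sample
print(f"{rate:.1%}")                         # 39.7%
```

Note that the denominator is the usable sample (1070), not the number of emails sent; counting undeliverable addresses against yourself would understate the true response rate.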
This session has provided an overview of survey basics; be sure to complete the readings as well and thoughtfully respond to the discussion questions. The success of any survey tool is heavily dependent on the quality of the survey itself: the questions, their wording, the order of the questions, and the simplicity of completion. If you get the reader to ‘partner’ with you at the beginning, enticing them to want to do the survey (without any offer of money or compensation), they will complete it. If you are doing survey research as your design, your advisor will likely assign additional readings and encourage you to look at the Dillman Tailored Design text.
Session Objectives
1. Understand sources of error in survey research
2. Explore types of survey methodologies (web-based, telephone, mail, mixed mode)
3. Determine best practices for survey design (who, what, where, when, and how)
Your readings for the week are posted in the session (under Lessons Table – Survey Research) along with your discussion questions. Some questions are targeted to students planning to conduct survey research. If you are conducting survey research, Dillman's Internet, Mail, and Mixed-Mode Surveys: The Tailored Design Method (2009) by Don Dillman is a valuable resource, as is his website: http://www.sesrc.wsu.edu/dillman/default.ASP
Surveys and Questionnaires (Portney, 2009)
Descriptive: describe populations
Exploratory: find relationships
Experimental: explore cause and effect
WHO to survey
WHAT to survey about – What questions will you ask?
HOW to survey – What method is best for your sample: interview, mail, internet, mixed modes?
WHEN to survey – Timing is critical; consider your population (i.e., educators may be off during the summer, so that is not the best time to survey this group)
WHERE to survey
Target Population → (coverage error) → Sampling Frame → (sampling error) → Sample → (non-response error) → Respondents → (measurement error) → Survey Results
Adapted from Sampling Frames and Coverage Error: www.idready.org/courses/2005/spring/survey_SamplingFrames.pdf
Coverage error occurs when the sampling frame excludes or erroneously includes a certain segment of your population: omission = undercoverage; duplication or wrongful inclusion = overcoverage. It also occurs when the survey method selected minimizes the possibility of a certain group responding, e.g., a web-based survey sent to elderly subjects without computer skills, or a mail survey sent to dietetic internship alumni (10 years post-graduation) using an address list from the year they graduated.
Sampling error is the difference between the sample and the target population due to the characteristics represented in the sample. For example, in a sample of 100 RDs exploring personal weight perception, 60 of the RDs responding to the survey were obese; is this representative of the RD population? Sampling error can make a sample unrepresentative of the target population. Causes: chance, and sampling bias (the tendency to favor selection of subjects with certain characteristics due to a poor sampling plan). See http://www.socialresearchmethods.net/tutorial/Mugo/tutorial.htm and http://www.socialresearchmethods.net/kb/sampstat.php
Non-response error occurs when responders to your survey are different from non-responders. Types of non-response errors: complete (the survey fails to measure some of the characteristics of the sample) and partial (specific information is not completed by respondents). You must design and test surveys well to reduce partial non-response errors. Causes of non-response error: chance (random error), a respondent being unable or refusing to participate in the survey, and sampling bias. A non-response bias check can help assess the impact.
Measurement error is due to poor survey design and question construction: a participant misinterprets a question; unclear definitions or survey instructions; poorly designed survey scales (e.g., a Likert scale with 11 possible options and little differentiation between points on the scale); respondents' tendency to exaggerate or underplay events (e.g., reporting they are taller and thinner than they actually are); respondents answering in a more socially desirable way; and interviewer bias (too friendly, not friendly enough) if delivering the survey in person. Interviewers must be trained!
Must consider: Do you want a generalizable sample or a targeted sample? Access to the sample (email list versus listserv); cost to access the sample; availability of the sample; and how representative the sample is. For example, a list of Dietitians in Nutrition Support DPG members may include clinicians, managers, entry-level RDs, advanced-level RDs, specialists in adult or pediatric nutrition support, etc. Do you want all of these characteristics represented, or do you need a more targeted sampling frame?
Interview: in-person, telephone, internet (Skype / Elluminate), or paper and pen
Mail survey
Electronic survey: web-based or email survey
Mixed-mode survey: a combination of methods (i.e., a paper survey with the option of a web-based version)
Mail surveys (Fowler, Survey Research Methods, 4th Ed, 2009)
Advantages: limit labor required; easy access to a large sample; respondents have time to consider answers; high response rates
Disadvantages: may not be the most effective way to reach certain populations; need accurate mailing addresses; cost of supplies; time for survey return
Internet surveys (Hoonakker and Carayon, 2009)
Advantages: easy to access large populations; reduced cost; reduced time for data entry; reduced error in data entry; less time to conduct the survey; flexibility in design; ease of administration; skip logic
Disadvantages: coverage error; sampling error; measurement error; non-response error; computer illiteracy; high number of non-deliverable emails; computer security (firewalls, spam, etc.); response rates lower than mail surveys
Avoid mailing during holidays (Thanksgiving and Christmas). Select mail-out dates based on the characteristics of your population: educators may not be available in the summer; families with children may vacation in the summer; a survey of high school coaches should be done while school is in session.
Letter of introduction: purpose of the survey, why the participant was selected, who is conducting the research, and all of the elements of consent. For mail surveys: include a self-addressed stamped envelope; code the envelope to link the survey to the participant; and include the name and address for survey return within the survey itself (in case they lose the envelope). (Dillman's Tailored Design Method)
"Tailored design is the development of survey procedures that work together to form the survey request and motivate various types of people to respond to the survey by establishing trust and increasing the perceived benefits of completing the survey while decreasing the expected costs of participation." (Dillman, 2009, p. 38)
Dillman D, Smyth JD, Melani Christian L. Internet, Mail, and Mixed-Mode Surveys: The Tailored Design Method, 3rd Ed. John Wiley & Sons, Inc. 2009.
A scientific approach designed to reduce coverage, sampling, non-response, and measurement errors, with survey procedures customized based on survey type, sample, and budget and time.
Implementation
1. Brief pre-notice letter or e-mail sent a few days before the survey arrives
2. Survey mailing: includes cover letter, survey, stamped and addressed return envelope, and incentive (if desired)
3. Thank-you postcard sent ~1 week later as a reminder to complete the survey
4. Second survey mailing to non-responders 2-4 weeks after the previous mailing
5. Final contact in a different delivery mode (i.e., e-mail, certified mail) 2-4 weeks after the previous mailing
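A contact schedule like the one above is easy to lay out from a planned first-mailing date. The sketch below is illustrative only: the exact day offsets are one possible choice within the "a few days before" and "2-4 week" windows Dillman describes, not prescribed values.

```python
from datetime import date, timedelta

# Hypothetical sketch of a Dillman-style five-contact schedule.
# Offsets are illustrative choices within the windows described above.
def contact_schedule(survey_mailing: date) -> dict:
    return {
        "pre-notice letter": survey_mailing - timedelta(days=4),        # a few days before
        "survey mailing": survey_mailing,
        "thank-you postcard": survey_mailing + timedelta(weeks=1),      # ~1 week later
        "second survey mailing": survey_mailing + timedelta(weeks=4),   # 3 weeks after postcard
        "final contact (different mode)": survey_mailing + timedelta(weeks=7),
    }

for contact, day in contact_schedule(date(2024, 3, 4)).items():
    print(f"{day:%Y-%m-%d}  {contact}")
```

Laying the dates out in advance also makes it easy to check the whole sequence against holidays and population-specific timing constraints (see the mail-out timing notes earlier in this session).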
To: email@example.com
Sent: Tuesday, October 16, 2007 4:01 PM
Subject: Please take our survey
You have been selected to participate in a brief research survey being conducted as part of a Masters thesis. I would greatly appreciate your input. Please respond by the end of the month (10/31/07).
Your responses to this survey will be used to evaluate the current utilization and implementation of technology, internet applications, computer software/programs for education (Blackboard, WebCT), etc. in the Didactic curriculum. All results will be used for research purposes only, and by submitting this survey, you are agreeing to these terms.
Internet applications refer to blogs, web and podcasts, online discussion groups, social networking sites, interactive multimedia, wikis, website development and maintenance, etc. that are used for educational purposes.
Thank you,
xxxx, Graduate student at University X
Click on the following link to take the survey: <a href=h
If conducting in-person interviews or self-administered surveys, consult with experts to determine the best place (and time) to conduct the survey. Possible options: a school setting (parents, teachers, coaches); health care settings (health care professionals, patients); a grocery store or other public setting. Weigh the pros and cons of mail vs. internet vs. in-person…
Use validated survey instruments when available. Only ask questions relating to your research questions. Limit the use of multiple question types (i.e., Likert, multiple choice, open-ended). Limit 'mark all that apply' questions, as these are difficult to analyze. Use 'other' options to give respondents the opportunity to write in answers. Pre-test and pilot test your survey for face and content validity.
Does the question necessitate an answer? If yes, what type? Multiple choice, yes/no, open ended? If asking about behavior, can you expect accurate and honest responses? Is it reasonable to request the information? Will the respondent be motivated to answer each question? Will the layout or answer types influence their responses?
Use simple words. Keep scales reasonable in length. Order questions with a logical flow. Adjust the reading level of the survey to your sample. Do you need to translate your survey? Develop questions free from bias that are clear, specific, and concise. Keep your survey length appropriate for the sample. Avoid double-barreled questions (asking about 2 concepts in one question).
"Are you satisfied with the knowledge and trustworthiness of your nutrition counselor?"
"I don't agree that snack foods such as candy bars and soda should not be available in school vending machines."
"When did you complete your master's degree?"
"Are you satisfied with the knowledge and trustworthiness of your nutrition counselor?" Separate into 2 questions; each could be asked as a Likert-type question if you want scaled data.
"I don't agree that snack foods such as candy bars and soda should not be available in school vending machines." A confusing double negative; could be asked as true/false instead: "Snack foods such as…should be allowed…" True or False.
"When did you complete your master's degree?" Needs clarification: which year? Pre- or post-RD?
Return rates affect how generalizable results are. They are affected by: timing of the survey; survey methodology; length of the survey / time required to take it; quality and professionalism of the cover letter and survey; recognition of the affiliated institution; incentives; and pertinence of the survey topic to participants.
Positive impact: saliency; understanding the target population; prenotification; personalized cover letter; incentives; sponsorship; reminders.
Negative impact: lack of anonymity; e-mail is easy to ignore and discard; confusion related to computer illiteracy; fewer incentives to respond; design/connection speed; nondeliverable emails.
(Hoonakker, 2009)
Average mail survey response rate: 52-60%, vs. average internet survey response rate: 30-40%. Average time to survey response: email (7.7 days) vs. mail (16.1 days). Average response rate for internet-based dietetic surveys (from UMDNJ): 15-30%. (Hoonakker, 2009; Cook, Heath & Thompson, 2000)
"There is nothing like looking, if you want to find something. You certainly usually find something, if you look, but it is not always quite the something you were after." – J.R.R. Tolkien
By following the best practices of survey research, you should be able to find what you are truly looking for in your research.
Fowler FJ. Survey Research Methods, 4th Ed. Sage Publications, Inc., Los Angeles, 2009.
Dillman D, Smyth JD, Melani Christian L. Internet, Mail, and Mixed-Mode Surveys: The Tailored Design Method, 3rd Ed. John Wiley & Sons, Inc., 2009.
Hoonakker P, Carayon P. Questionnaire survey nonresponse: a comparison of postal mail and internet surveys. Intl Journal of Human-Computer Interaction 2009; 25: 348-373.
Cook C, Heath F, Thompson RL. A meta-analysis of response rates in web- or internet-based surveys. Educational and Psychological Measurement 2000; 60: 281.