The document discusses different types of research designs: cohort studies, case-control studies, and cross-sectional studies. It also covers developing survey instruments, including steps to take, ensuring reliability and validity, components of a survey, administration modes, and sources of error. Case-control studies sample individuals with and without a disease and examine exposure variables, while cohort studies follow groups over time to analyze exposure-disease relationships. Cross-sectional studies provide a snapshot of a population at a single time point through surveys.
5. Prospective versus Retrospective Cohort Studies
- Prospective: exposure is assessed at the beginning of the study, and participants are followed into the future for the outcome.
- Retrospective: exposure was assessed at some point in the past, and the outcome has already occurred.
Before showing the slides, have the class brainstorm possible pros and cons of a cohort study. After going through them in class, have students pair up and try to think of ways to avoid some of the cons.
Steps in developing a survey instrument:
1. Identify the purpose and focus of the study: Why are you creating the survey? Review the literature on the topic and other existing measures; obtain feedback from stakeholders and experts to clarify the purpose and focus.
2. Identify the research methodology and type of instrument to use for data collection: Self-report or observational? What type of instrument is best?
3. Begin to formulate questions or items: Create survey items (there are many ways to do this).
4. Pretest items and prepare a preliminary draft: Give the preliminary items to experts and/or the population of interest, then revise the instrument based on their feedback.
5. Pilot test and revise: Give the revised instrument to the population of interest; revise based on feedback; conduct analyses (PCA, Cronbach's alpha, etc.) to create the final instrument.
6. Administer the final instrument, then analyze and report results: Collect data using the final instrument and assess the reliability and validity of the survey.
- Content Validity: refers to the representativeness of the questions in the instrument, i.e., how well the items cover the construct's content domain.
- Face Validity: whether the instrument appears, on its surface, to measure what it is intended to measure.
- Criterion-Based Validity: determined against established criteria; looks at the degree of relationship between two measures of the same phenomenon.
  - Concurrent Validity: both measurements are focused on the present (e.g., exercise self-efficacy and current exercise participation).
  - Predictive Validity: the measurement is correlated with a future measurement of the same phenomenon (e.g., MCAT score used to predict medical school GPA).
- Construct Validity: refers to the extent to which a scale measures the construct, or theoretical framework, it is designed to measure.
  - Convergent Validity: the extent to which an instrument's output is associated with that of other instruments intended to measure the same construct of interest.
  - Discriminant/Divergent Validity: based on the idea that two instruments should not correlate highly if they measure different concepts.
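Convergent and discriminant validity claims above rest on correlations between instruments. A quick way to check them is a Pearson correlation between two sets of scores; this is a sketch with invented toy data (the scale names and numbers are hypothetical):

```python
import numpy as np

# Hypothetical scores from two instruments meant to measure the same construct
scale_a = [12, 15, 9, 20, 17, 11]
scale_b = [14, 16, 10, 21, 18, 12]

# Pearson correlation between the two instruments
r = np.corrcoef(scale_a, scale_b)[0, 1]  # close to 1.0 here
```

A high r between instruments targeting the same construct supports convergent validity; a low r between instruments targeting different constructs supports discriminant validity.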
Components of a survey:
- Title: should convey to the user what the survey sets out to measure. Avoid an overly detailed title when giving the survey to participants, since it can bias responses.
- Introduction: should convey the purpose of the instrument, how it is used, and what information the survey collects from participants.
- Directions or instructions: should be detailed for participants; explain how to answer the questions and define any constructs or terms used in the survey. Directions are needed for both self-report and observational surveys.
- Items: selection items give participants choices (e.g., rating scales) and are closed-ended questions; supply items allow participants to write their own responses and are open-ended questions.
- Demographics: include some demographics, but only those necessary for the study. Use them to see whether your sample is representative and to make comparisons between groups of participants.
- Closing section: not always necessary. Thank participants and provide information on where to go for more information or the contact information of the data collectors.
Ways to generate survey items:
- Literature review: review the published literature for other measures of the construct of interest. Don't reinvent the wheel; use existing surveys for ideas on how to create yours. There should be a deficiency in the literature, which is why you are creating a new survey.
- Use of existing processes: sometimes the policies and procedures of an organization dictate how a survey can be created. For example, the curriculum of a teenage pregnancy prevention program dictates the knowledge questions for the survey you create.
- Brainstorming: usually a group activity. Think about the topic and jot down ideas individually, then discuss as a group; organize the ideas and review new iterations for group feedback. Example: a round-robin activity in which each participant puts forth an idea and everyone discusses it.
- Delphi technique: use experts in the area to generate items. Send the generated lists of items to each expert, then review their recommendations and revise the list. Experts are less likely to be influenced by others than in a group setting, and the process can easily be done over the internet.
Guidelines for writing survey items:
- Avoid double-barreled questions
- Use an appropriate reading level
- Keep sentences short
- Use simple language
- Use clear, specific terminology
- Make response sets exhaustive
- Provide instructions for questions
- Be culturally appropriate
- Handle sensitive items carefully
- Guard against social desirability bias
- Avoid biased wording
- Limit negatively worded items
- Don't include superfluous items
Ask the class: Why don't people respond? Why is nonresponse a problem?