General Research Design Issues in Psychology

Notes
  • It is important to have a scientific attitude in our approach to research. Systematic: means giving serious thought to what you are doing, and how and why you are doing it; in particular, being explicit about the nature of the observations that are made, the circumstances in which they are made, and the role you take in making them. Skeptical: refers to subjecting your ideas to possible disconfirmation; it also involves considering alternative hypotheses and interpretations, and subjecting the observations and preliminary conclusions to scrutiny (by yourself initially, then by others). Ethical: means you follow a code of conduct for the research which ensures that the interests and concerns of those taking part in, or possibly affected by, the research are safeguarded.
  • River crossing analogy: Research focus is the general goal or objective of crossing the river. Research questions are the information you need to know: how many people will be crossing the river? How often? How deep is the river? How far? What is the current? Research strategy is the way you choose to cross the river: walk (bridge), swim, fly, or take a boat/ferry. Research tactics are the specific type of boat/ferry, bridge, aircraft, etc. Hakim (1987): the designer of a research project needs to think like an architect; the person carrying out the research needs to think like the builder or contractor. These may be the same person (for small studies) or may be separate. Design deals with aims, purposes, intentions and plans within the practical constraints of time, location, budget and staff. It also depends on the style of the designer (innovative vs. conservative), the style of whoever is paying for the research, and the audience and consumer of your final product (always consider your audience).
  • Design concerns the various things that should be considered and kept in mind when doing research. Robson proposes 5 primary components to consider: 1) Purpose(s): What is the study trying to achieve? What is your objective? Why is it being done? Are you seeking to describe something, or to explain or understand something? Are you trying to assess the effectiveness of something? Is it in response to some problem or issue for which solutions are sought? Is it hoped to change something as a result of the study? 2) Theory: What theory will guide or inform your study? How will you understand the findings? What conceptual framework links the phenomena you are studying? 3) Research questions: What questions are you trying to answer? What do you need to know to achieve the purpose(s) of the study? What is feasible to ask given the time and resources that you have available? 4) Methods: What specific techniques (e.g., questionnaires, instruments, measures, interviews, participant observation) will you use to collect data? How will the data be analyzed? How do you show that the data are trustworthy? (Use validated instruments; consult the literature to see what is being used.) 5) Sampling strategy: From whom will you seek data? Identify your population: how will you attain a representative sample (convenience sample, simple random, stratified random sample, cluster sample, etc.)? How will you balance the need to be selective with the need to collect all the data required? All these aspects need to be inter-related and kept in balance. In flexible designs there should be repeated revisiting of all these elements; the detailed framework emerges during the study.
  • A good research design framework should have high compatibility among purposes, theory, research questions, methods and sampling strategies. Purpose and theory should inform your research questions, which should inform your methods and sampling strategy. If the only research questions you can get answers to are not directly relevant to the purpose of the study, then something needs to change (probably the research question). If your research questions do not link to theory, it is unlikely that you will produce answers of value (ch. 3, p. 61); in this case, developing theory needs to take place or the research questions need to change. If the methods and/or sampling strategy are not providing answers to the research questions, something should change: collect additional data, extend the sampling, or cut down or modify the research question (example: using a university convenience sample of primarily young females, you could try to recruit a more diverse sample, or pare down the study to look at young females). In fixed design research, you have to get this all right before you start your project (hence the importance of pilot work: always run a pilot when possible). In flexible designs, you have to sort this all out by the conclusion of your study. In the real world, research isn't always as neat and tidy as this; some research questions may remain stubbornly unanswerable given limitations in sampling, data collection and resources (example: the effectiveness of interventions to prevent suicide, at 10.3 deaths per 100,000 person-years, 2002 statistics).
  • Can provide perspective
  • This slide provides an overview of what is involved in choosing a research strategy, and seeks to sensitize you to the pertinent issues to consider. Is a FIXED, FLEXIBLE or MULTI-STRATEGY design appropriate? A FIXED design calls for tight prespecification before you reach the main data-collection stage; if you can't prespecify the design, don't use the fixed approach. Data are almost always in the form of numbers, hence this type is commonly referred to as a QUANTITATIVE strategy. A FLEXIBLE design evolves during data collection. Data are typically non-numerical (usually in the form of words), hence this type is often referred to as a QUALITATIVE strategy. A MULTI-STRATEGY design combines substantial elements of both fixed and flexible design; a common type has a flexible phase followed by a fixed phase. Note: flexible designs can include the collection of small amounts of quantitative data, and similarly, fixed designs can include the collection of small amounts of qualitative data. Is your proposed study an EVALUATION? Are you trying to establish the worth or value of something such as an intervention, innovation or service? This could be approached using either a fixed, flexible or multi-strategy design depending on the specific purpose of the evaluation. If the focus is on an OUTCOME, a fixed design is probably indicated; if it is on a PROCESS, a flexible design is probably preferred. Many evaluations have an interest in both outcomes and process, and use a multi-strategy design. Do you wish to carry out action research (p. 188, Robson)? Is an action agenda central to your concerns? This typically involves direct participation in the research by others likely to be involved, coupled with an intention to initiate change; a FLEXIBLE approach is almost always used (see chap. 8, p. 188). If you opt for a FIXED design strategy, which type is most appropriate? Two broad interpretations are widely recognized: experimental and non-experimental designs (box 4.1 on p. 78 summarizes). If you opt for a FLEXIBLE design strategy, which type is most appropriate? Flexible designs have developed from a wide range of very different traditions; three of these are widely used in real world studies: case studies, ethnographic studies and grounded theory studies (see box 4.2 on p. 79). If you are considering a MULTI-STRATEGY design, which type is most appropriate? It may well be that a strategy which combines fixed and flexible design elements seems appropriate for the study with which you are involved: one or more case studies might be linked to an experiment, or alternatively a small experiment might be incorporated within a case study. Issues involved in carrying out multi-strategy designs are discussed in chap. 7.
  • G. The purpose helps in selecting the strategy: the strategies discussed above represent different ways of collecting and analysing empirical evidence, and each has its particular strengths and weaknesses. It is also commonly suggested that there is a hierarchical relationship between the different strategies, related to the purpose of the research: flexible (qualitative) strategies are appropriate for exploratory work; non-experimental fixed strategies are appropriate for descriptive studies; experiments are appropriate for explanatory studies. H. The research questions have a strong influence on the strategy to be chosen: how many? how much? who? where? questions suggest the use of a non-experimental fixed design (survey research); what? how? why? questions are often best addressed with flexible designs. I. Specific methods of investigation need not be tied to particular research strategies. The methods or techniques used to collect information, what might be called the tactics of inquiry, such as questionnaires or various kinds of observation, are sometimes regarded as necessarily linked to particular research strategies. Thus, in fixed non-experimental designs, surveys may be seen as being carried out by structured questionnaire, and experiments through specialized forms of observation, often requiring the use of measuring instruments of some sophistication. In flexible designs, grounded theory studies were often viewed as interview-based, and ethnographic studies as entirely based on participant observation. However, this is not a tight or necessary linkage.
  • Another way to think about this issue is that a moderator variable is one that influences the strength of a relationship between two other variables, and a mediator variable is one that explains the relationship between the two other variables. As an example, let's consider the relation between social class (SES) and frequency of breast self-exams (BSE). Age might be a moderator variable, in that the relation between SES and BSE could be stronger for older women and less strong or nonexistent for younger women. Education might be a mediator variable in that it explains why there is a relation between SES and BSE. When you remove the effect of education, the relation between SES and BSE disappears.
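To make the distinction concrete, here is a minimal Python sketch (not from the slides) that tests for moderation via an interaction term and for mediation via the Baron & Kenny logic of a shrinking direct effect. The data are simulated, and the variable names (ses, age, education, bse_freq) are hypothetical stand-ins for the SES/BSE example.

```python
# Illustrative sketch with simulated data; variable names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 500
ses = rng.normal(size=n)
age = rng.normal(size=n)
education = 0.6 * ses + rng.normal(size=n)        # SES -> education
bse_freq = 0.5 * education + rng.normal(size=n)   # education -> BSE frequency
df = pd.DataFrame({"ses": ses, "age": age,
                   "education": education, "bse_freq": bse_freq})

# Moderation: does age change the strength of the SES -> BSE relation?
# A significant ses:age interaction term would indicate moderation.
mod = smf.ols("bse_freq ~ ses * age", data=df).fit()
print(mod.params[["ses", "age", "ses:age"]])

# Mediation (Baron & Kenny logic): the SES effect on BSE should shrink
# toward zero once the mediator (education) enters the model.
total = smf.ols("bse_freq ~ ses", data=df).fit()
direct = smf.ols("bse_freq ~ ses + education", data=df).fit()
print("total SES effect:", round(total.params["ses"], 3))
print("direct SES effect (education controlled):",
      round(direct.params["ses"], 3))
```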
  • Equivalent doses? Sometimes a medication is administered at a subtherapeutic dose; for example, X mg is the therapeutic dose of imipramine, but dose Y is used in the study. Sample size? Length of study (maybe not long enough for an antidepressant to have an effect)? Comparing apples and oranges. Placebo effect.
  • A common fallacy: "p < .0001 is really, really significant!" (NHST gives a yes/no decision; a smaller p does not mean a bigger or more important effect.)
  • P. 70: Variability in the procedures. If variability is minimized, the likelihood of detecting a true difference between the treatments, or between treatment and control conditions, is increased. In terms of our formula for effect size, the difference between groups is divided by a measure of variability; this measure will be larger when there is more uncontrolled variation and smaller when there is less. The larger the variability, the lower the effect size evident for a given difference between groups.
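A minimal sketch of this point, assuming simulated data: the same one-point mean difference produces a shrinking Cohen's d as uncontrolled variability grows.

```python
# The same mean difference yields a smaller standardized effect size
# as uncontrolled ("noise") variability increases.
import numpy as np

def cohens_d(a, b):
    """Cohen's d using a pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * a.var(ddof=1) +
                  (nb - 1) * b.var(ddof=1)) / (na + nb - 2)
    return (a.mean() - b.mean()) / np.sqrt(pooled_var)

rng = np.random.default_rng(0)
for sd in (1.0, 2.0, 4.0):                    # increasing procedural noise
    treat = rng.normal(loc=1.0, scale=sd, size=200)   # means differ by 1
    control = rng.normal(loc=0.0, scale=sd, size=200)
    print(f"sd = {sd}: d = {cohens_d(treat, control):.2f}")
```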
  • History: things that have changed in the participants' environments other than those forming a direct part of the enquiry (e.g., occurrence of a major air disaster during a study of the effectiveness of a desensitization programme for persons with fear of air travel). Testing: changes occurring as a result of practice and experience gained by participants on any pre-tests (e.g., asking opinions about factory farming of animals before some intervention may lead respondents to think about the issues and develop more negative attitudes). Instrumentation: some aspect(s) of the way participants were measured changed between pre-test and post-test (e.g., raters in an observational study using a wider or narrower definition of a particular behavior as they get more familiar with the situation). Regression (to the mean): if participants are chosen because they are unusual or atypical (e.g., high scorers), later testing will tend to give less unusual scores ("regression to the mean"); e.g., in an intervention programme with pupils with learning difficulties where the ten highest-scoring pupils in a special unit are matched with ten of the lowest-scoring pupils in a mainstream school, regression effects will tend to show the former performing relatively worse on a subsequent test (see further details on p. 142). Mortality: participants dropping out of the study (e.g., in a study of an adult literacy programme, selective drop-out of those who are making little progress). Maturation: growth, change or development in participants unrelated to the treatment under enquiry (e.g., in an evaluation of an extended athletics training programme with teenagers, intervening changes in height, weight and general maturity). Selection: initial differences between groups prior to involvement in the enquiry (e.g., the use of an arbitrary non-random rule to produce two groups ensures they differ in one respect which may correlate with others). Selection-by-maturation interaction: predisposition of groups to grow apart (or together, if initially different); e.g., use of groups of boys and girls initially matched on physical strength in a study of a fitness programme. Experimenter bias: can influence research results in subtle ways. Ambiguity about causal direction: does A cause B, or B cause A? (e.g., in any correlational study, unless it is known that A precedes B, or vice versa, or some other logical analysis is possible). Diffusion of treatments: when one group learns information or otherwise inadvertently receives aspects of a treatment intended only for a second group (e.g., in a quasi-experimental study of two classes in the same school). Compensatory equalization of treatments: if one group receives "special" treatment, there will be organizational and other pressures for a control group to receive it (e.g., nurses in a hospital study may improve the treatment of a control group on grounds of fairness). Compensatory rivalry: as above, but an effect on the participants themselves; referred to as the "John Henry" effect after the steel worker who killed himself through overexertion to prove his superiority to the new steam drill (e.g., when a group in an organization sees itself under threat from a planned change in another part of the organization and improves performance).
  • Transcript

    • 1. General Research Design Issues. PYC 5040: Advanced Research. Grant M. Heller, Ph.D.
    • 2. The Scientific Attitude (Robson, 2002) • Systematic • Skeptical • Ethical
    • 3. Proof, Disproof & Scientific Progress (Leary, 2004) • The logical impossibility of proof o Theories cannot be proved b/c obtaining empirical support for a hypothesis does not necessarily mean that the theory from which the hypothesis was derived is true. • The practical impossibility of disproof o Unlike proof, disproof is a logically valid operation o Absence of evidence is not necessarily evidence of absence. • If not proof or disproof, then what? o The scientific filter
    • 4. Scientific filter, adapted by Leary (2004) from Bauer (1992): 1. All ideas: scientific training, concern for professional reputation, availability of resources (filters out nonsense) 2. Initial research projects: self-judgment of viability (filters out dead ends & fringe topics) 3. Research programs: peer review (filters methodological biases & errors, unimportant contributions) 4. Published research: use, replicability & extension by others (filters out nonreplication, uninteresting & nonuseful stuff) 5. Secondary research literature: established knowledge
    • 5. "Real World" vs."Academic" ResearchReal World Emphasis1. Solving problems2. Robust results3. Finding basis foraction4. Often "in the field"(e.g., hospital, business, school)5. Constraints offunding & timeAcademic Emphasis1. Advancing thediscipline (aka "basicresearch")2. Est. relationships3. Developing theory4. Often "in the lab"5. Potentially lessfunding & timeconstraints?(Robson, 2002)
    • 6. "Real World" vs."Academic" Research (cont.)Real World Emphasis1. High consistency oftopic from 1 study toanother2. Generalist researcher3. Oriented to clientneeds4. Viewed as dubious bymany academics5. Need highly developedsocial skillsAcademic Emphasis1. Little consistency oftopic from 1 study toanother2. Specialist researcher3. Oriented to academicpeers4. High academicprestige5. Need some socialskills(Robson, 2002)
    • 7. Main steps when carrying out a research project (Robson, 2002): 1) Start a research journal 2) Determine focus of project 3) Develop research questions 4) Choose research design 5) Select method(s) 6) Arrange practicalities 7) Collect data 8) Prep data for analysis 9) Analyze & interpret data 10) Report & disseminate findings
    • 8. General Design Issues • Focus on turning research questions into projects. • Strategies & tactics based on questions you want to answer (Manstead & Semin, 1988). o Research focus, questions, strategy & tactics o River crossing analogy • Hakim (1987): o think like an architect
    • 9. A framework for research design • Components to consider (Robson, 2002): 1. Purpose(s) 2. Theory 3. Research questions 4. Methods 5. Sampling strategy
    • 10. A framework for research design [diagram linking Research Questions, Purpose(s), Conceptual framework, Sampling strategy and Methods]
    • 11. Research Questions • Specific questions, framed in testable ways, that provide impetus for studies. • Direct comparisons are used; conceptual models are implied by these comparisons
    • 12. Types of Research Questions: 1. Descriptive (what, when) 2. Explorative (correlations, descriptive) 3. Evaluative (applied, outcomes research) 4. Predictive (underlying causal model, correlations) 5. Explanatory (explicit causal model, experiments, regressions) 6. Control (change cause, observe effect). To formulate research questions, focus on variables (not measures), and focus on predictive or explanatory or control questions.
    • 13. Theory • Selection of research problem & design should be based on theory • THEORY: a conceptualization & explanation of the phenomenon of interest (usually a causal model) o A theoretical framework guides the interpretation of research results and the generation of further studies o Explains a set of relationships among concepts
    • 14. Importance of Theory (Kazdin, 2003) • Can bring order to areas where findings are diffuse or multiple • Can explain the basis of change & unite diverse outcomes • Can direct our attention to which moderators to study • Application and extension of knowledge to the world beyond the laboratory
    • 15. Theory building [figure from Black, 1999]
    • 16. Cyclic life & evolution of a theory [figure from Black, 1999]
    • 17. Operational Definitions (Kazdin, 2003) • Defining a concept on the basis of the specific operations used in the study. • Allows us to measure & quantify • Concept: anxiety • Operational definition: participants scoring at or above the 75th percentile on a measure of anxiety (e.g., GAD-7)
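A toy sketch of that operational definition, using simulated GAD-7 totals (the scale's real range is 0–21; the cutoff logic, not the data, is the point here):

```python
# Classify participants as "high anxiety" if they score at or above
# the sample's 75th percentile. Scores are simulated.
import numpy as np

rng = np.random.default_rng(1)
gad7_scores = rng.integers(0, 22, size=100)   # simulated GAD-7 totals (0-21)
cutoff = np.percentile(gad7_scores, 75)
high_anxiety = gad7_scores >= cutoff
print(f"cutoff = {cutoff}, n classified high = {high_anxiety.sum()}")
```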
    • 18. Operational Definitions: Limitations (Kazdin, 2003) • May oversimplify the concept of interest or limit scope/focus • May include features that are irrelevant or not central to the original concept o Unnecessary error or noise • Use of single measures to define a concept o May impede drawing general relationships among concepts
    • 19. Hypotheses (Leary, 2004) • A hypothesis is a specific proposition that logically follows from the theory. • Deriving hypotheses from theory involves deduction. • Logical implications of a theory • If the theory is true, what would we expect to observe? • Virtually all hypotheses can be reduced to an if-then statement.
    • 20. A well-written hypothesis should: • Be stated in declarative form • Posit a relationship between variables • Reflect a theory or body of literature upon which it is based • Be brief & to the point • Be testable • Null vs. alternative (research) hypotheses
    • 21. Choosing a research strategy (Robson, 2011, pp. 74–77) A. FIXED, FLEXIBLE, or MULTI-STRATEGY? B. Is your proposed study an EVALUATION? Focus on: outcome → fixed; process → flexible; both → multi-strategy. C. Do you wish to carry out ACTION RESEARCH (focus on improvement of practice, increased understanding of practice & improvement of the situation)? If so, a flexible approach is almost always used. D.–F. If you opt for a fixed, flexible or multi-strategy design, which type is most appropriate?
    • 22. Choosing a research strategy (Robson, 2011, pp. 74–77) cont. G. The purpose(s) helps in selecting a strategy H. The research questions have a strong influence on the strategy I. Specific methods need not be tied to particular research strategies
    • 23. Research Nuts & Bolts: Variables • Anything that varies • A concept that can be measured • Represents a class of outcomes that can take on more than one value • Types: o Independent o Dependent
    • 24. Difference between concepts & variables. Concepts: • Subjective impression • No uniformity as to their understanding among different people o as such, cannot be measured • Examples: effectiveness, satisfaction, impact, excellence, achievement, domestic violence, self-esteem, etc. Variables: • Measurable, though the degree of precision varies from scale to scale & variable to variable • Examples: gender (male/female), age, income, weight, height, religion, attitude (subjective), etc.
    • 25. Levels of measurement • Nominal o categorical (names or categories) • Ordinal o ordered categories (rank order) • Interval o equal distance between points • Ratio o equal distance + an absolute zero point, e.g., degrees Kelvin, distance, mass, time
    • 26. Research Design: Key Concepts • Independent Variable (IV) • Dependent Variable (DV) • What effect does ___ (IV) have on ___ (DV)? • Experimental groups vs. control groups [diagram: Independent Variable (IV) → Dependent Variable (DV)]
    • 27. Mediating Variables [diagram: Ice Cream Sales (IV) → Temperature (mediator) → Violent Crime (DV)]
    • 28. Moderating Variables [diagram: Therapy (IV; Tx vs. wait list) → Symptom Reduction (DV), with Intelligence as moderator]
    • 29. Conceptual Models • Graphical representation of a theory o diagram of proposed relationships among theoretical concepts o Concept (abstract, theoretical) → Variable (more detailed, specifically defined meaning) → Measure (operational definition) • Conceptual models are useful for clarifying a research question, aims & hypotheses • Distinct from research design (a later step in planning)
    • 30. Mediating & ModeratingEffects• Moderating variable(s)o age• Mediating variable(s)o Motivationo Coping Skills (Wykes & Spaulding, 2011)
    • 31. Law of Parsimony / Occam's Razor • Are we looking for zebras? • Can we explain the data with concepts & models we already know? • Adopting the simpler of two solutions that account equally well for the data.
    • 32. Plausible Rival Hypotheses • Other plausible explanations for our results? • Example: imipramine (antidepressant) vs. St. John's Wort (herbal) in tx of moderate depression. o Woelk (2000) found equal improvement after tx • Equal effect? • What would you want to know? • What do you think is really going on? o Placebo effect
    • 33. Correlation & CausationCorrelation does not mean causation...
    • 34. Correlation & Causation...but is a necessary component of a causalmodel.• Inferring causality (Leary, 2004):1. Covariation (correlation)2. Temporal precedence (A then B)3. All extraneous factors that might influencethe relationship between the variables ofinterest are controlled or eliminated.
    • 35. Basic Statistics Review • Descriptive statistics o Measures of central tendency: e.g., mean, median, mode o Measures of variability: e.g., range, standard deviation, standard error of the mean (SEM) • Inferential statistics o Goal: to make inferences about a population from a sample o Parametric: e.g., t-tests, ANOVA, regression o Nonparametric: e.g., chi-square
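A quick sketch of the descriptive measures named above, computed with NumPy/SciPy on a made-up sample:

```python
# Descriptive statistics: central tendency and variability.
import numpy as np
from scipy import stats

sample = np.array([4, 7, 7, 8, 10, 12, 15])
print("mean:", sample.mean())
print("median:", np.median(sample))
print("mode:", stats.mode(sample, keepdims=False).mode)
print("range:", sample.max() - sample.min())
print("SD (sample):", sample.std(ddof=1))     # ddof=1 -> n-1 denominator
print("SEM:", stats.sem(sample))
```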
    • 36. Population vs. Sample
    • 37. The Normal Distribution
    • 38. Null Hypothesis Significance Testing (NHST) • If p < .05, then... o Reject the null hypothesis in favor of the alternative hypothesis. • If p > .05, then... o Fail to reject the null hypothesis (retain the null)
    • 39. What does NHST tell us? • Assuming that the null hypothesis is true, p gives the probability of having obtained results this extreme or more extreme by chance alone (assuming a normal distribution). • Tells us if our results are statistically significant (as distinct from practically or clinically significant). o E.g., the difference between two groups reaches a level of statistical significance. • The outcome is either/or; it does not inform us of the strength of the effect. o p = .001 is not "more significant" than p = .01.
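A minimal sketch of this decision rule, assuming simulated data for an independent-samples t-test:

```python
# NHST decision rule on simulated data. The p-value is the probability
# of results this extreme or more, given a true null; it is not an
# effect size.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
group_a = rng.normal(loc=0.3, scale=1.0, size=50)
group_b = rng.normal(loc=0.0, scale=1.0, size=50)

t, p = stats.ttest_ind(group_a, group_b)
print(f"t = {t:.2f}, p = {p:.3f}")
if p < .05:
    print("Reject the null hypothesis.")
else:
    print("Fail to reject (retain) the null hypothesis.")
```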
    • 40. NHST: Outcomes
    • 41. Types of error
    • 42. Threats to Statistical Conclusion Validity (Kazdin, 2003) • Low statistical power • Variability in the procedures • Subject heterogeneity • Unreliability of the measures • Multiple comparisons & error rates
    • 43. Null Hypothesis Significance Testing (NHST): Cohen (1994), "The Earth Is Round (p < .05)"
    • 44. Alpha inflation & Type I error (false positives)
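A simulation sketch of alpha inflation, assuming 20 independent t-tests on data where the null is true for every test: the chance of at least one false positive climbs well past .05, and a Bonferroni-adjusted threshold (alpha / k) pulls the familywise error rate back down.

```python
# Alpha inflation across multiple comparisons, with and without a
# Bonferroni correction. All tests are run on null data (no true effect).
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
k, alpha, n_sims = 20, 0.05, 1000
uncorrected = corrected = 0
for _ in range(n_sims):
    # k independent t-tests where the null is true for every one
    ps = [stats.ttest_ind(rng.normal(size=30), rng.normal(size=30)).pvalue
          for _ in range(k)]
    uncorrected += min(ps) < alpha        # any test "significant"?
    corrected += min(ps) < alpha / k      # Bonferroni-adjusted threshold
print("familywise error, uncorrected:", uncorrected / n_sims)  # ~1-.95**20 = .64
print("familywise error, Bonferroni:", corrected / n_sims)     # ~.05
```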
    • 45. Statistical Power • Probability of rejecting the null hypothesis when it is in fact false (failing to do so is a Type II error, a false negative) • Sensitivity to effects of the IV • Power = 1 − Beta • Conventionally set at 0.80 (Cohen, 1988) • Powerful designs are able to detect effects of the IV more easily than less powerful designs.
    • 46. Methods to Increase Power (Shadish, Cook & Campbell, 2002) • Use matching, stratifying, blocking • Measure & correct for covariates • Use larger sample sizes • Use equal cell sample sizes • Improve measurement • Increase the strength of treatment • Increase the variability of treatment • Use within-participants designs • Use homogeneous participants selected to be responsive to tx • Reduce random setting irrelevancies • Ensure powerful statistical tests are used & their assumptions met
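One concrete planning step is an a priori power analysis. A sketch using statsmodels, with the conventional values named above (alpha = .05, power = .80) and a medium effect of d = 0.5:

```python
# A priori power analysis for an independent-samples t-test.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Per-group n needed to detect d = 0.5 at alpha = .05 with power = .80
n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.80)
print(f"n per group: {n_per_group:.0f}")   # ~64

# Achieved power if a study is stuck with only 30 participants per group
achieved = analysis.power(effect_size=0.5, nobs1=30, alpha=0.05)
print(f"power at n = 30: {achieved:.2f}")  # well below the .80 convention
```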
    • 47. Effect Size (E.S.) • A way of expressing the difference between conditions (e.g., treatment vs. control). • A common metric that can be used between studies • Magnitude of effect o Often classified as small, medium or large
    • 48. 4 Categories of E.S. (Ferguson, 2009) 1. Group difference indices a. magnitude of difference between 2 or more variables b. e.g., Cohen's d 2. Strength of association indices a. magnitude of shared variance between variables b. e.g., Pearson's r 3. Corrected estimates a. estimates correcting for sampling error b. e.g., adjusted R² 4. Risk estimates a. more commonly used in medical outcomes research b. e.g., relative risk (RR) & odds ratio (OR)
    • 49. Suggested E.S. Interpretation (Ferguson, 2009)

      Type of E.S. estimate       | Included indices                     | RMPE                       | "Moderate" effect | "Strong" effect
      Group difference            | Cohen's d, Glass's delta, Hedges' g  | 0.41                       | 1.15              | 2.70
      Strength of association     | r, R, partial r, rho, tau            | 0.2                        | 0.5               | 0.8
      Squared association indices | r², R², eta squared, adj. R²         | 0.04                       | 0.25              | 0.64
      Risk estimates              | RR, OR                               | 2.0 (interpret w/ caution) | 3.0               | 4.0

      (RMPE = recommended minimum effect size representing a practically significant effect)
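A sketch computing two of the index families above from the same simulated data set, for comparison against Ferguson's thresholds:

```python
# Cohen's d (group-difference index) and point-biserial r / r^2
# (association indices) computed on simulated treatment/control data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
treatment = rng.normal(loc=0.6, scale=1.0, size=80)
control = rng.normal(loc=0.0, scale=1.0, size=80)

# Group-difference index: Cohen's d via pooled SD (equal n here)
pooled_sd = np.sqrt((treatment.var(ddof=1) + control.var(ddof=1)) / 2)
d = (treatment.mean() - control.mean()) / pooled_sd
print(f"Cohen's d = {d:.2f}")

# Association indices: point-biserial r between group membership and score
groups = np.concatenate([np.ones(80), np.zeros(80)])
scores = np.concatenate([treatment, control])
r, _ = stats.pearsonr(groups, scores)
print(f"r = {r:.2f}, r^2 = {r**2:.2f}")
```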
    • 50. Reliability: a reliable measure is consistent or repeatable. More on different types of reliability later...
    • 51. ...but is reliability enough? Phrenology was reliable... but was it valid?
    • 52. Validity • Internal validity o the extent to which changes in the study DV can be attributed to changes in the IV • External validity o the extent to which the results can be generalized
    • 53. Validity (cont.) • Example o Progress in therapy • Did therapy cause the improvement? [diagram: Measure at Intake → Therapy → Measure at Termination]
    • 54. Threats to Internal Validity • History • Maturation • Testing • Instrumentation • Statistical regression • Differential selection • Experimental mortality (attrition) • Selection × maturation interaction • Statistical conclusion validity (lack of power) • Subject heterogeneity
    • 55. Threats to Internal Validity • Remember the acronym MRS SMITH: o Maturation o Regression to the mean o Selection of subjects o Selection by maturation interaction o Mortality o Instrumentation o Testing o History
    • 56. Threats to External Validity • Reactive effect of testing • Reactive effect of experimental arrangements • Interaction between selection bias and the IV • Multiple treatment interference
    • 57. Defenses against threats to validity • For external validity o random selection of subjects • For internal validity o random assignment to conditions • Various research designs have stronger internal or external validity o often a balancing act: external vs. internal o Mook (1983), "In Defense of External Invalidity"
    • 58. Regression to the Mean (RTM): how to reduce RTM 1. Random allocation to comparison groups 2. Selection of Ss based on multiple measurements 3. Estimate the size of RTM a. can be subtracted from observed change to give an adjusted estimate 4. Statistical control of covariates (i.e., ANCOVA) (Barnett, van der Pols & Dobson, 2005)
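A sketch of option 4, assuming simulated data and hypothetical variable names: an ANCOVA-style model that enters the baseline score as a covariate, so the group effect is estimated net of regression toward the mean.

```python
# ANCOVA-style adjustment: follow-up score modeled on group membership
# with the baseline score as a covariate. Data are simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(11)
n = 200
baseline = rng.normal(size=n)
group = rng.integers(0, 2, size=n)                 # 0 = control, 1 = treatment
followup = 0.5 * baseline + 0.4 * group + rng.normal(size=n)
df = pd.DataFrame({"baseline": baseline, "group": group,
                   "followup": followup})

# The C(group) coefficient is the treatment effect with baseline held constant
ancova = smf.ols("followup ~ C(group) + baseline", data=df).fit()
print(ancova.params)
```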
    • 59. Other Forms of Validity • Face validity • Content validity • Construct validity o Convergent o Discriminant • Criterion validity o Predictive validity
    • 60. Range Restrictions • Beware of range restrictions in data • You can miss the big picture • Beware of floor and ceiling effects as well
    • 61. Fixed or Flexible design? • Some projects using social research methods are pre-planned in detail: they have FIXED designs (commonly referred to as quantitative research). • Others expect the plan to change or evolve while the project is underway: their design is FLEXIBLE (commonly referred to as qualitative research).
    • 62. Fixed Designs • Pre-specify exactly what you plan to happen BEFORE (a priori) the main data collection. • Examples are experiments and surveys. • They typically rely almost exclusively on quantitative data collection (and are often referred to as quantitative research). • More to come...
    • 63. Flexible Designs • Initial planning is limited to the focus of the research and (possibly) to setting out some general research questions. • Details of the design change depending on the initial findings. • Examples are grounded theory and ethnographic studies. • They typically rely largely on the collection of qualitative data (and are often referred to as qualitative research), though some quantitative data are often also collected. • More to come...
