- 1. General Research Design Issues. PYC 5040: Advanced Research. Grant M. Heller, Ph.D.
- 2. The Scientific Attitude (Robson, 2002)
  • Systematic
  • Skeptical
  • Ethical
- 3. Proof, Disproof & Scientific Progress (Leary, 2004)
  • The logical impossibility of proof
    o Theories cannot be proved because obtaining empirical support for a hypothesis does not necessarily mean that the theory from which the hypothesis was derived is true.
  • The practical impossibility of disproof
    o Unlike proof, disproof is a logically valid operation.
    o Absence of evidence is not necessarily evidence of absence.
  • If not proof or disproof, then what?
    o The scientific filter
- 4. The Scientific Filter (adapted by Leary, 2004, from Bauer, 1992)
  1. All ideas: scientific training, concern for professional reputation, availability of resources (filters out nonsense)
  2. Initial research projects: self-judgment of viability (filters out dead ends & fringe topics)
  3. Research programs: peer review (filters out methodological biases & errors, unimportant contributions)
  4. Published research: use, replicability & extension by others (filters out nonreplication, uninteresting & nonuseful work)
  5. Secondary research literature: established knowledge
- 5. "Real World" vs. "Academic" Research (Robson, 2002)
  Real-world emphasis:
  1. Solving problems
  2. Robust results
  3. Finding a basis for action
  4. Often "in the field" (e.g., hospital, business, school)
  5. Constraints of funding & time
  Academic emphasis:
  1. Advancing the discipline (aka "basic research")
  2. Establishing relationships
  3. Developing theory
  4. Often "in the lab"
  5. Potentially fewer funding & time constraints?
- 6. "Real World" vs. "Academic" Research (cont.) (Robson, 2002)
  Real-world emphasis:
  1. High consistency of topic from one study to another
  2. Generalist researcher
  3. Oriented to client needs
  4. Viewed as dubious by many academics
  5. Needs highly developed social skills
  Academic emphasis:
  1. Little consistency of topic from one study to another
  2. Specialist researcher
  3. Oriented to academic peers
  4. High academic prestige
  5. Needs some social skills
- 7. Main steps when carrying out a research project (Robson, 2002)
  1) Start a research journal
  2) Determine the focus of the project
  3) Develop research questions
  4) Choose a research design
  5) Select method(s)
  6) Arrange practicalities
  7) Collect data
  8) Prepare data for analysis
  9) Analyze & interpret data
  10) Report & disseminate findings
- 8. General Design Issues
  • Focus on turning research questions into projects.
  • Strategies & tactics are based on the questions you want to answer (Manstead & Semin, 1988).
    o Research focus, questions, strategy & tactics
    o River-crossing analogy
  • Hakim (1987): think like an architect.
- 9. A Framework for Research Design
  • Components to consider (Robson, 2002):
    1. Purpose(s)
    2. Theory
    3. Research questions
    4. Methods
    5. Sampling strategy
- 10. A framework for research design (diagram): research questions at the center, linked to purpose(s), conceptual framework, methods & sampling strategy.
- 11. Research Questions
  • Specific questions, framed in testable ways, that provide the impetus for studies.
  • Direct comparisons are used; conceptual models are implied by these comparisons.
- 12. Types of Research Questions
  1. Descriptive (what, when)
  2. Explorative (correlations, descriptive)
  3. Evaluative (applied, outcomes research)
  4. Predictive (underlying causal model, correlations)
  5. Explanatory (explicit causal model, experimental regressions)
  6. Control (change the cause, observe the effect)
  To formulate research questions, focus on variables (not measures), and favor predictive, explanatory, or control questions.
- 13. Theory
  • Selection of the research problem & design should be based on theory.
  • THEORY: a conceptualization & explanation of the phenomenon of interest (usually a causal model)
    o A theoretical framework guides the interpretation of research results and the generation of further studies.
    o Explains a set of relationships among concepts.
- 14. Importance of Theory (Kazdin, 2003)
  • Can bring order to areas where findings are diffuse or multiple
  • Can explain the basis of change & unite diverse outcomes
  • Can direct our attention to which moderators to study
  • Supports the application and extension of knowledge to the world beyond the laboratory
- 15. Theory building (figure; Black, 1999)
- 16. Cyclic life & evolution of a theory (figure; Black, 1999)
- 17. Operational Definitions (Kazdin, 2003)
  • Defining a concept on the basis of the specific operations used in the study.
  • Allows us to measure & quantify.
  • Concept: anxiety
  • Operational definition: participants scoring at or above the 75th percentile on a measure of anxiety (e.g., GAD-7)
- 18. Operational Definitions: Limitations (Kazdin, 2003)
  • May oversimplify the concept of interest or limit its scope/focus
  • May include features that are irrelevant or not central to the original concept
    o Unnecessary error or noise
  • Use of a single measure to define a concept
    o May impede drawing general relationships among concepts
- 19. Hypotheses (Leary, 2004)
  • A hypothesis is a specific proposition that logically follows from the theory.
  • Deriving hypotheses from theory involves deduction.
  • Hypotheses are the logical implications of a theory: if the theory is true, what would we expect to observe?
  • Virtually all hypotheses can be reduced to an if-then statement.
- 20. A well-written hypothesis should:
  • Be stated in declarative form
  • Posit a relationship between variables
  • Reflect the theory or body of literature upon which it is based
  • Be brief & to the point
  • Be testable
  • Null vs. alternative (research) hypotheses
- 21. Choosing a research strategy (Robson, 2011, pp. 74-77)
  A. FIXED, FLEXIBLE, or MULTI-STRATEGY?
  B. Is your proposed study an EVALUATION? Focus on:
     • Outcome: fixed
     • Process: flexible
     • Both: multi-strategy
  C. Do you wish to carry out ACTION RESEARCH (focused on improving practice, increasing understanding of practice & improving the situation)?
     • If so, a flexible approach is almost always used.
  D. E. F. What design strategy is most appropriate?
- 22. Choosing a research strategy (Robson, 2011, pp. 74-77), cont.
  G. The purpose(s) help in selecting a strategy.
  H. The research questions have a strong influence on the strategy.
  I. Specific methods need not be tied to particular research strategies.
- 23. Research Nuts & Bolts: Variables
  • Anything that varies
  • A concept that can be measured
  • Represents a class of outcomes that can take on more than one value
  • Types:
    o Independent
    o Dependent
- 24. Difference between Concepts & Variables
  Concepts:
  • Subjective impressions
  • No uniformity in how different people understand them; as such, they cannot be measured
  • Examples: effectiveness, satisfaction, impact, excellence, achievement, domestic violence, self-esteem, etc.
  Variables:
  • Measurable, though the degree of precision varies from scale to scale & variable to variable
  • Examples: gender (male/female), age, income, weight, height, religion, attitude (subjective), etc.
- 25. Levels of Measurement
  • Nominal: categorical (names or categories)
  • Ordinal: ordered categories
  • Interval: equal distance between points
  • Ratio: equal distance + an absolute zero point (e.g., degrees Kelvin, distance, mass, time)
- 26. Research Design: Key Concepts
  • Independent variable (IV)
  • Dependent variable (DV)
  • What effect does ___ (IV) have on ___ (DV)?
  • Experimental groups vs. control groups
  (Diagram: IV → DV)
- 27. Mediating Variables (diagram): ice cream sales (IV) → temperature (mediator) → violent crime (DV)
- 28. Moderating Variables (diagram): therapy (IV; treatment vs. wait list) → symptom reduction (DV), moderated by intelligence
- 29. Conceptual Models
  • A graphical representation of a theory
    o A diagram of proposed relationships among theoretical concepts
  • Concept (abstract, theoretical) → variable (more detailed, specifically defined meaning) → measure (operational definition)
  • Conceptual models are useful for clarifying a research question, aims & hypotheses
  • Distinct from research design (a later step in planning)
- 30. Mediating & Moderating Effects
  • Moderating variable(s): age
  • Mediating variable(s): motivation, coping skills (Wykes & Spaulding, 2011)
- 31. Law of Parsimony / Occam's Razor
  • Are we looking for zebras?
  • Can we explain the data with concepts & models we already know?
  • Adopt the simpler of two solutions that account equally well for the data.
- 32. Plausible Rival Hypotheses
  • Are there other plausible explanations for our results?
  • Example: imipramine (antidepressant) vs. St. John's Wort (herbal) in the treatment of moderate depression.
    o Woelk (2000) found equal improvement after treatment.
  • An equal effect? What would you want to know? What do you think is really going on?
    o Placebo effect
- 33. Correlation & Causation: correlation does not mean causation...
- 34. Correlation & Causation: ...but it is a necessary component of a causal model.
  • Inferring causality (Leary, 2004) requires:
    1. Covariation (correlation)
    2. Temporal precedence (A, then B)
    3. All extraneous factors that might influence the relationship between the variables of interest are controlled or eliminated.
- 35. Basic Statistics Review
  • Descriptive statistics
    o Measures of central tendency (e.g., mean, median, mode)
    o Measures of variability (e.g., range, standard deviation, standard error of the mean (SEM))
  • Inferential statistics
    o Goal: to make inferences about a population from a sample
    o Parametric (e.g., t-tests, ANOVA, regression)
    o Nonparametric (e.g., chi-square)
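The descriptive measures listed above can all be computed with Python's standard-library `statistics` module; a minimal sketch (the sample scores are invented for illustration):

```python
import statistics

# Hypothetical sample of test scores (invented for illustration).
scores = [4, 8, 6, 5, 3, 7, 8, 9, 5, 8]

# Central tendency
mean = statistics.mean(scores)            # 6.3
median = statistics.median(scores)        # 6.5
mode = statistics.mode(scores)            # 8

# Variability
value_range = max(scores) - min(scores)   # 6
sd = statistics.stdev(scores)             # sample SD (n - 1 denominator)
sem = sd / len(scores) ** 0.5             # standard error of the mean
```

Inferential procedures (t-tests, ANOVA, regression, chi-square) are not in the standard library; packages such as scipy.stats are typically used for those.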
- 36. Population vs. Sample
- 37. The Normal Distribution
- 38. Null Hypothesis Significance Testing (NHST)
  • If p < .05: reject the null hypothesis in favor of the alternative hypothesis.
  • If p > .05: fail to reject (retain) the null hypothesis.
- 39. What does NHST tell us?
  • Assuming the null hypothesis is true, p is the probability of having obtained results this extreme or more extreme by chance alone (assuming a normal distribution).
  • It tells us whether our results are statistically significant (which is different from practically or clinically significant).
    o E.g., the difference between two groups reaches a level of statistical significance.
  • The outcome is either/or; it does not tell us the strength of the effect.
    o p = .001 is not "more significant" than p = .01.
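As a concrete illustration of what a p-value is, here is a hedged sketch of a two-sided one-sample z-test in plain Python (the IQ-style numbers are invented; a real analysis would more often use a t-test from a stats library):

```python
import math

def normal_cdf(z: float) -> float:
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def z_test_p_value(sample_mean: float, mu0: float, sigma: float, n: int) -> float:
    """Two-sided p-value for H0: population mean == mu0,
    assuming a normal population with known standard deviation sigma."""
    z = (sample_mean - mu0) / (sigma / math.sqrt(n))
    return 2.0 * (1.0 - normal_cdf(abs(z)))

# Invented example: n = 100 with sample mean 103 vs. a null mean of 100
# (sigma = 15) gives z = 2.0 and p ~= .0455. "Significant" at alpha = .05,
# but this says nothing about whether a 3-point difference matters practically.
p = z_test_p_value(sample_mean=103.0, mu0=100.0, sigma=15.0, n=100)
```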
- 40. NHST: Outcomes
- 41. Types of error
- 42. Threats to Statistical Conclusion Validity (Kazdin, 2003)
  • Low statistical power
  • Variability in the procedures
  • Subject heterogeneity
  • Unreliability of the measures
  • Multiple comparisons & error rates
- 43. Null Hypothesis Significance Testing (NHST): see Cohen (1994), "The Earth Is Round (p < .05)"
- 44. Alpha inflation & Type I error (false positives)
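Alpha inflation is easy to quantify: if each of k independent tests uses α = .05, the familywise chance of at least one false positive is 1 − (1 − α)^k. A minimal sketch (illustrative numbers):

```python
def familywise_alpha(alpha: float, k: int) -> float:
    """P(at least one Type I error) across k independent tests."""
    return 1.0 - (1.0 - alpha) ** k

# One test: .05; ten independent tests: ~.40; twenty: ~.64.
rates = {k: familywise_alpha(0.05, k) for k in (1, 10, 20)}

# A common (conservative) fix is the Bonferroni correction:
# run each comparison at alpha / k instead of alpha.
bonferroni_alpha = 0.05 / 10   # .005 per test for ten comparisons
```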
- 45. Statistical Power
  • The probability of rejecting the null hypothesis when it is in fact false, i.e., of detecting a true effect (failing to do so is a Type II error, a false negative).
  • Sensitivity to the effects of the IV
  • Power = 1 − β, where β is the Type II error rate
  • Conventionally set at 0.80 (Cohen, 1988)
  • Powerful designs are able to detect effects of the IV more easily than less powerful designs.
- 46. Methods to Increase Power (Shadish, Cook & Campbell, 2002)
  • Use matching, stratifying, blocking
  • Measure & correct for covariates
  • Use larger sample sizes
  • Use equal cell sample sizes
  • Improve measurement
  • Increase the strength of treatment
  • Increase the variability of treatment
  • Use within-participants designs
  • Use homogeneous participants selected to be responsive to treatment
  • Reduce random setting irrelevancies
  • Ensure powerful statistical tests are used & their assumptions are met
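Several of the items above (larger n, stronger treatment, better measurement) work through the same arithmetic: power rises with the effect size scaled by the square root of the sample size. A rough sketch for a two-sided one-sample z-test at α = .05 (a normal-approximation shortcut, not a substitute for a proper power analysis):

```python
import math

def normal_cdf(z: float) -> float:
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def z_test_power(d: float, n: int) -> float:
    """Approximate power of a two-sided one-sample z-test at alpha = .05
    for a standardized effect size d with n participants."""
    z_crit = 1.959964            # two-sided critical z for alpha = .05
    shift = d * math.sqrt(n)     # effect size scaled by sqrt(n)
    return (1.0 - normal_cdf(z_crit - shift)) + normal_cdf(-z_crit - shift)

# A medium effect (d = 0.5) needs roughly n = 32 for power near .80;
# larger n or larger d raises power, smaller d lowers it.
```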
- 47. Effect Size (ES)
  • A way of expressing the difference between conditions (e.g., treatment vs. control).
  • A common metric that can be compared between studies.
  • Magnitude of effect
    o Often classified as small, medium, or large
- 48. 4 Categories of Effect Size (Ferguson, 2009)
  1. Group difference indices
     a. Magnitude of difference between 2 or more groups
     b. e.g., Cohen's d
  2. Strength of association indices
     a. Magnitude of shared variance between variables
     b. e.g., Pearson's r
  3. Corrected estimates
     a. Estimates correcting for sampling error
     b. e.g., adjusted R²
  4. Risk estimates
     a. More commonly used in medical outcomes research
     b. e.g., relative risk (RR) & odds ratio (OR)
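For instance, Cohen's d for two groups is simply the mean difference divided by the pooled standard deviation; a self-contained sketch (the outcome scores are invented for illustration):

```python
import math

def cohens_d(group1: list, group2: list) -> float:
    """Cohen's d: mean difference over the pooled standard deviation."""
    n1, n2 = len(group1), len(group2)
    m1, m2 = sum(group1) / n1, sum(group2) / n2
    var1 = sum((x - m1) ** 2 for x in group1) / (n1 - 1)
    var2 = sum((x - m2) ** 2 for x in group2) / (n2 - 1)
    pooled_sd = math.sqrt(((n1 - 1) * var1 + (n2 - 1) * var2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

# Invented outcome scores for a treatment and a control group.
treatment = [12.0, 14.0, 11.0, 15.0, 13.0]
control = [10.0, 9.0, 11.0, 8.0, 12.0]
d = cohens_d(treatment, control)   # ~1.9: a large group-difference effect
```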
- 49. Suggested Effect Size Interpretation (Ferguson, 2009)
  Type of ES estimate         | Included indices                     | RMPE                       | "Moderate" | "Strong"
  Group difference            | Cohen's d, Glass's delta, Hedges' g  | 0.41                       | 1.15       | 2.70
  Strength of association     | r, R, partial r, rho, tau            | 0.2                        | 0.5        | 0.8
  Squared association indices | r², R², eta squared, adj. R²         | 0.04                       | 0.25       | 0.64
  Risk estimates              | RR, OR                               | 2.0 (interpret w/ caution) | 3.0        | 4.0
  (RMPE = recommended minimum effect size representing a practically significant effect.)
- 50. Reliability: a reliable measure is consistent or repeatable. (More on the different types of reliability later...)
- 51. But is reliability enough? Phrenology was reliable... but was it valid?
- 52. Validity
  • Internal validity: the extent to which changes in the study's DV can be attributed to changes in the IV
  • External validity: the extent to which the results can be generalized
- 53. Validity (cont.)
  • Example: progress in therapy. Did therapy cause the improvement?
  (Diagram: measure at intake → therapy → measure at termination)
- 54. Threats to Internal Validity
  • History
  • Maturation
  • Testing
  • Instrumentation
  • Statistical regression
  • Differential selection
  • Experimental mortality (attrition)
  • Selection × maturation interaction
  • Statistical conclusion validity (lack of power)
  • Subject heterogeneity
- 55. Threats to Internal Validity: remember the acronym MRS SMITH
  o Maturation
  o Regression to the mean
  o Selection of subjects
  o Selection-by-maturation interaction
  o Mortality
  o Instrumentation
  o Testing
  o History
- 56. Threats to External Validity
  • Reactive effect of testing
  • Reactive effect of experimental arrangements
  • Interaction between selection bias and the IV
  • Multiple-treatment interference
- 57. Defenses against Threats to Validity
  • For external validity: random selection of subjects
  • For internal validity: random assignment to conditions
  • Various research designs have stronger internal or external validity
    o Often a balancing act: external vs. internal
    o Mook (1983), "In Defense of External Invalidity"
- 58. Regression to the Mean (RTM): how to reduce RTM (Barnett, van der Pols & Dobson, 2005)
  1. Random allocation to comparison groups
  2. Selection of subjects based on multiple measurements
  3. Estimate the size of RTM
     a. It can be subtracted from the observed change to give an adjusted estimate
  4. Statistical control of covariates (i.e., ANCOVA)
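Regression to the mean is easy to demonstrate by simulation: give everyone a stable true score plus independent measurement noise on each occasion, then select the extreme scorers on the first test. A hedged sketch (all numbers invented; seed fixed for reproducibility):

```python
import random

random.seed(42)
n = 10_000

# True scores ~ N(100, 10); each test adds independent noise ~ N(0, 10).
true_scores = [random.gauss(100, 10) for _ in range(n)]
test1 = [t + random.gauss(0, 10) for t in true_scores]
test2 = [t + random.gauss(0, 10) for t in true_scores]

# Select the top 10% on test 1, as if screening for "high" scorers.
cutoff = sorted(test1)[int(0.9 * n)]
selected = [i for i in range(n) if test1[i] >= cutoff]

mean_t1 = sum(test1[i] for i in selected) / len(selected)
mean_t2 = sum(test2[i] for i in selected) / len(selected)
# With no intervention at all, mean_t2 falls well below mean_t1,
# drifting back toward 100, because part of each extreme test-1
# score was noise that does not recur on retest.
```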
- 59. Other Forms of Validity
  • Face validity
  • Content validity
  • Construct validity
    o Convergent
    o Discriminant
  • Criterion validity
    o Predictive validity
- 60. Range Restrictions
  • Beware of range restrictions in your data; you can miss the big picture.
  • Beware of floor and ceiling effects as well.
- 61. Fixed or Flexible Design?
  • Some projects using social research methods are pre-planned in detail: they have FIXED designs (commonly referred to as quantitative research).
  • Others expect the plan to change or evolve while the project is underway: their design is FLEXIBLE (commonly referred to as qualitative research).
- 62. Fixed Designs
  • Pre-specify exactly what you plan to have happen BEFORE (a priori) the main data collection.
  • Examples are experiments and surveys.
  • They typically rely almost exclusively on quantitative data collection (and are often referred to as quantitative research).
  • More to come...
- 63. Flexible Designs
  • Initial planning is limited to the focus of the research and (possibly) to setting out some general research questions.
  • Details of the design change depending on the initial findings.
  • Examples are grounded theory and ethnographic studies.
  • They typically rely largely on the collection of qualitative data (and are often referred to as qualitative research), though some quantitative data is often also collected.
  • More to come...
