Reliability & validity

Transcript

  • 1. Reliability and validity
  • 2. Why do we need Reliability & Validity? (Measurement Error) A participant's score on a particular measure consists of two components: Observed score = True score + Measurement error.
      • True score: the score the participant would have obtained if measurement were perfect, i.e., if we were able to measure without error.
      • Measurement error: the component of the observed score that results from factors distorting the score from its true value.
  • 3. Factors that Influence Measurement Error
      • Transient states of the participants (transient mood, health, fatigue level, etc.)
      • Stable attributes of the participants (individual differences in intelligence, personality, motivation, etc.)
      • Situational factors of the research setting (room temperature, lighting, crowding, etc.)
  • 4. Characteristics of Measures and Manipulations
      • Precision and clarity of operational definitions
      • Training of observers
      • Number of independent observations on which a score is based (more is better?)
      • Measures that induce fatigue or fear
  • 5. Actual Mistakes
      • Equipment malfunction
      • Errors in recording behaviors by observers
      • Confusing response formats for self-reports
      • Data entry errors
      Measurement error undermines the reliability (repeatability) of the measures we use.
  • 6. Reliability
      • The reliability of a measure is an inverse function of measurement error: the more error, the less reliable the measure
      • Reliable measures provide consistent measurement from occasion to occasion
  • 7. Estimating Reliability
      Total variance in a set of scores = Variance due to true scores + Variance due to error
      Reliability = True-score variance / Total variance
      Reliability can range from 0 to 1.0. When a reliability coefficient equals 0, the scores reflect nothing but measurement error.
      Rule of thumb: measures with reliability coefficients of .70 or greater have acceptable reliability. (A simulation sketch follows below.)
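The variance-ratio definition above can be illustrated with a small simulation; this is a sketch, not part of the original slides, and it assumes Python with NumPy and invented true-score and error distributions:

    import numpy as np

    rng = np.random.default_rng(42)
    n = 10_000
    true_scores = rng.normal(loc=100, scale=15, size=n)  # hypothetical true scores
    error = rng.normal(loc=0, scale=5, size=n)           # random measurement error
    observed = true_scores + error                       # observed = true + error

    # Reliability = true-score variance / total (observed) variance
    reliability = true_scores.var() / observed.var()
    print(f"Estimated reliability: {reliability:.2f}")   # roughly 225 / 250 = 0.90

With an error standard deviation one third the size of the true-score standard deviation, the reliability works out to about .90, comfortably above the .70 rule of thumb.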
  • 8. Different Methods for Assessing Reliability
      • Test-retest reliability
      • Inter-rater reliability
      • Internal consistency reliability
  • 9. Test-Retest Reliability
      Test-retest reliability refers to the consistency of participants' responses over time (usually a few weeks; why?). It assumes the characteristic being measured is stable over time and is not expected to change between test and retest. (A sketch follows below.)
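A minimal sketch of estimating test-retest reliability, assuming NumPy and simulated scores from two administrations of the same measure (all values hypothetical):

    import numpy as np

    rng = np.random.default_rng(0)
    n = 200
    trait = rng.normal(size=n)                     # a stable characteristic
    time1 = trait + rng.normal(scale=0.5, size=n)  # first administration
    time2 = trait + rng.normal(scale=0.5, size=n)  # retest a few weeks later

    # Test-retest reliability is commonly estimated as the Pearson
    # correlation between the two administrations
    r = np.corrcoef(time1, time2)[0, 1]
    print(f"Test-retest reliability: r = {r:.2f}")

Because the trait is stable and only the error varies between occasions, the correlation directly reflects the measure's consistency over time.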
  • 10. Inter-rater Reliability
      If a measurement involves behavioral ratings by an observer/rater, we would expect consistency among raters for a reliable measure. It is best to use at least two independent raters, 'blind' to the ratings of the other observers. Precise operational definitions and well-trained observers improve inter-rater reliability. (A sketch follows below.)
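The slides do not name a specific agreement statistic; Cohen's kappa is one common choice for categorical ratings because it corrects raw agreement for chance. A minimal sketch, assuming scikit-learn and hypothetical codes from two blind observers:

    from sklearn.metrics import cohen_kappa_score

    # Hypothetical codes assigned to the same six behaviors by two blind raters
    rater_a = ["hit", "push", "none", "hit", "none", "none"]
    rater_b = ["hit", "push", "hit", "hit", "none", "none"]

    # Kappa = (observed agreement - chance agreement) / (1 - chance agreement)
    print(f"Cohen's kappa = {cohen_kappa_score(rater_a, rater_b):.2f}")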
  • 11. Internal Consistency Reliability
      • Relevant for measures that consist of more than one item (e.g., total scores on scales, or when several behavioral observations are used to obtain a single score)
      • Internal consistency refers to inter-item reliability: the degree of consistency among the items in a scale, or among the different observations used to derive a score
      • We want to be sure that all the items (or observations) measure the same construct
  • 12. Estimates of Internal Consistency
      • Item-total score consistency
      • Split-half reliability: randomly divide the items into two subsets and examine the consistency in total scores across the two subsets (any drawbacks?)
      • Cronbach's alpha: conceptually, the average consistency across all possible split-half reliabilities
      • Cronbach's alpha can be computed directly from the data (see the sketch below)
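Cronbach's alpha has a standard formula: alpha = k/(k-1) * (1 - sum of item variances / variance of total scores), where k is the number of items. A short sketch of computing it, assuming NumPy and hypothetical item responses:

    import numpy as np

    def cronbach_alpha(items):
        # items: 2-D array, rows = participants, columns = scale items
        items = np.asarray(items, dtype=float)
        k = items.shape[1]                          # number of items
        item_vars = items.var(axis=0, ddof=1)       # variance of each item
        total_var = items.sum(axis=1).var(ddof=1)   # variance of total scores
        return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

    # Hypothetical responses of five participants to a four-item scale
    scores = [[4, 5, 4, 5],
              [2, 2, 3, 2],
              [3, 4, 3, 3],
              [5, 5, 5, 4],
              [1, 2, 1, 2]]
    print(f"Cronbach's alpha = {cronbach_alpha(scores):.2f}")

Because the four items rise and fall together across participants, alpha comes out high here (about .97), well above the .70 threshold.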
  • 13. Estimating the Validity of a Measure
      • A good measure must not only be reliable but also valid
      • A valid measure measures what it is intended to measure
      • Validity is not a property of a measure, but an indication of the extent to which an assessment measures a particular construct in a particular context; thus a measure may be valid for one purpose but not another
      • A measure cannot be valid unless it is reliable, but a reliable measure may not be valid
  • 14. Estimating Validity
      Like reliability, validity is not absolute. Validity is the degree to which variability (individual differences) in participants' scores on a particular measure reflects individual differences in the characteristic or construct we want to measure.
      Three types of measurement validity:
      • Face validity
      • Construct validity
      • Criterion-related validity
  • 15. Face Validity
      • Face validity refers to the extent to which a measure 'appears' to measure what it is supposed to measure
      • It is not statistical; it involves the judgment of the researcher (and the participants)
      • A measure has face validity 'if people think it does'
      • Face validity does not ensure that a measure is valid (and measures lacking face validity can still be valid)
  • 16. Construct Validity
      Most scientific investigations involve hypothetical constructs: entities that cannot be directly observed but are inferred from empirical evidence (e.g., intelligence). Construct validity is assessed by studying the relationships between the measure of a construct and scores on measures of other constructs; we see whether a particular measure relates as it should to other measures.
  • 17. Self-Esteem Example
      • Scores on a measure of self-esteem should be positively related to measures of confidence and optimism
      • But negatively related to measures of insecurity and anxiety
  • 18. Convergent and Discriminant Validity
      To have construct validity, a measure should both:
      • Correlate with other measures it should be related to (convergent validity)
      • Not correlate with measures it should be unrelated to (discriminant validity)
      (A simulated illustration follows below.)
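A simulated illustration of both checks, assuming NumPy; the constructs, effect sizes, and the deliberately unrelated variable are all invented for the sketch:

    import numpy as np

    rng = np.random.default_rng(1)
    n = 300
    self_esteem = rng.normal(size=n)                                # construct of interest
    confidence = 0.7 * self_esteem + rng.normal(scale=0.7, size=n)  # related construct
    shoe_size = rng.normal(size=n)                                  # unrelated construct

    print(f"Convergent:   r = {np.corrcoef(self_esteem, confidence)[0, 1]:.2f}")  # substantial
    print(f"Discriminant: r = {np.corrcoef(self_esteem, shoe_size)[0, 1]:.2f}")   # near zero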
  • 19. Criterion-Related Validity
      • Refers to the extent to which a measure distinguishes participants on the basis of a particular behavioral criterion
      • The Scholastic Aptitude Test (SAT) is valid to the extent that it distinguishes between students who do well in college and those who do not
      • A valid measure of marital conflict should correlate with behavioral observations (e.g., number of fights)
      • A valid measure of depressive symptoms should distinguish between people in treatment for depression and those who are not in treatment
  • 20. Two Types of Criterion-Related Validity
      • Concurrent validity: the measure and the criterion are assessed at the same time
      • Predictive validity: a relatively long period (e.g., months or years) elapses between the administration of the measure to be validated and the assessment of the criterion; predictive validity refers to a measure's ability to distinguish participants on a relevant behavioral criterion at some point in the future
      (A simulated sketch of both follows below.)
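A sketch contrasting the two, assuming NumPy; the data are simulated and the variable names simply echo the SAT example on the next slide:

    import numpy as np

    rng = np.random.default_rng(7)
    n = 500
    test_score = rng.normal(size=n)                                    # measure being validated
    gpa_now = 0.5 * test_score + rng.normal(scale=0.9, size=n)         # criterion assessed now
    gpa_in_4_years = 0.4 * test_score + rng.normal(scale=1.0, size=n)  # criterion assessed later

    print(f"Concurrent validity: r = {np.corrcoef(test_score, gpa_now)[0, 1]:.2f}")
    print(f"Predictive validity: r = {np.corrcoef(test_score, gpa_in_4_years)[0, 1]:.2f}")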
  • 21. SAT Example
      • High school seniors who score high on the SAT are better prepared for college than low scorers (concurrent validity)
      • Probably of greater interest to college admissions administrators, SAT scores predict academic performance four years later (predictive validity)
  • 22. Thank you.
