Assessment and Individual Differences
 Sullivan Turner
EDTC 610/Fall 2010
Psychometric Model
Assumes that personal traits, including
knowledge and cognitive abilities, can be
measured quantitatively, much as weight
and distance can
Has tremendous power to influence life
decisions
Classifies children as gifted, learning
disabled, or emotionally disturbed based
on test performance
Reliability
Replicability of a test score
True Scores and Observed Scores
  • Perfect reliability is impossible
  • Measurement Error
  • True Score
  • Observed Scores
  Observed Score = True Score ± Measurement Error
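  A minimal Python sketch of this true-score model (the true
  score and error size below are made-up values): retesting the
  same examinee scatters observed scores around the true score,
  which is why perfect reliability is impossible.

    import random

    random.seed(1)
    true_score = 80   # the examinee's (unknowable) true score
    sem = 3.0         # assumed standard error of measurement

    # Observed Score = True Score ± Measurement Error, five retests
    observed = [true_score + random.gauss(0, sem) for _ in range(5)]
    print([round(x, 1) for x in observed])  # scores scatter around 80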
Reliability
Confidence Interval
• True scores fall within the
  confidence interval with a known
  probability
Number of Items
• High reliability is desirable
• Increasing the number of questions
  boosts test reliability (see the
  sketch below)
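The slides give no formula for this, but the standard
Spearman-Brown prophecy formula shows how lengthening a test
raises its reliability; a sketch with hypothetical numbers:

    def spearman_brown(r, k):
        # Predicted reliability after lengthening a test by factor k
        return (k * r) / (1 + (k - 1) * r)

    # Hypothetical example: doubling a test with reliability 0.70
    print(round(spearman_brown(0.70, 2), 2))  # -> 0.82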
Validity
Is concerned with the meaning of what is
measured
 A completely valid test measures fully and
accurately what it is intended to measure
Validity
What Does the Score Mean?
• Construct Validity: concerned with
  whether a test measures what it is
  intended to measure.
Validity
What Does the Score Mean?
• Concurrent Validity: evidence that test
  scores agree with an established measure
  of the same construct given at about the
  same time.
• Predictive Validity: evidence that test
  scores forecast future performance on a
  relevant criterion.
Validity
Construct Under-Representation
• A test falls short of representing
  all of the construct it is intended
  to measure
Construct Over-Representation
• A test measures something other than
  the construct that it is intended to
  measure
Validity

Construct Over-Representation
• Measurement Contamination
 • Response-elimination strategy
 • Testwiseness
 • Test anxiety
Validity
Measurement Variance: Variation in test
scores among examinees can be expressed
quantitatively
          s² = Σ(X − X̄)² / (n − 1)
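Computed directly for a hypothetical set of five scores:

    scores = [72, 85, 90, 78, 88]   # made-up test scores
    n = len(scores)
    mean = sum(scores) / n          # X-bar = 82.6

    # Sample variance: sum of squared deviations over n - 1
    s2 = sum((x - mean) ** 2 for x in scores) / (n - 1)
    print(round(s2, 2))             # -> 55.8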
Validity
Measurement Variance
• Construct-Irrelevant Variance
  • Every test is contaminated
  • Response-elimination strategy used in
    multiple-choice testing
How Tests Influence Learning
Washback Effects: Anticipation of test
consequences can feed back to influence the
processes of learning and teaching that lead up
to the test.
• Teaching to the Test
Measurement-Driven Instruction
• Minimal Competency testing
• Consequential validity
Performance Assessment
Assessment
• Asks for complex responses and yields
  diagnostic information
Performance Assessments
• Give educational value to “teaching to
  the test”
Authentic Assessments
• Lead to products and outcomes with
  intrinsic value
Classroom Assessment
Everyday Assumptions of Testing
Designing Tests
• Multiple-Choice Questions
• Constructed-Response Items
 1. Scoring rubrics
 2. Holistic scoring
 3. Analytical scoring
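A hypothetical sketch of the difference between the two scoring
approaches named above: holistic scoring assigns one overall
judgment, while analytical scoring rates each rubric criterion
separately and sums the results (the criteria and point ranges
below are invented):

    rubric = {"content": 4, "organization": 3, "mechanics": 5}  # 0-5 each

    analytical_score = sum(rubric.values())  # 12 of 15: per-criterion
    holistic_score = 4                       # 4 of 5: one overall judgment
    print(analytical_score, holistic_score)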
Formative Assessment
Summative Assessment
• Summarizes the effects of past
  educational experience
Formative Assessment
• Guides and matches ongoing teaching and
  learning experiences
Assessment for Learning
• Promotes student learning
Standardized Testing
Raw score
• Point value given on a particular test
Normal Distribution
• Mean
• Mode
• Standard Deviation
Standard Scores
• Percentile rank
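A sketch of how a raw score becomes a standard score and a
percentile rank, assuming normally distributed scores with a
hypothetical mean of 100 and standard deviation of 15:

    from statistics import NormalDist

    raw, mean, sd = 115, 100, 15
    z = (raw - mean) / sd            # standard (z) score
    pct = NormalDist().cdf(z) * 100  # percentile rank under the curve
    print(z, round(pct))             # 1.0, ~84th percentile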
Quantitative Research
Qualitative Research
• Emphasizes detailed description rather
  than numerical measurement
Quantitative Research
• Emphasizes numerical measurements of
  constructs
Descriptive Analysis
• States factual information
Aptitude-Treatment Interactions: ATIs
Reflect the common intuition that different
students learn best under different conditions.
Aptitude
• General cognitive ability
Treatment
• Identifiable educational experience
Interaction
• Matching treatment to aptitude
Diversification of Instruction
Cognitive Styles
• Field dependence vs field independence
• Impulsivity vs reflectivity
Learning Styles
• Multiple Intelligences (MI) theory
• Time and Learning
• Mastery Learning
Group Differences
Gender Differences
Socioeconomic Differences
Racial-Ethnic Differences
• The Achievement Gap
• Test Bias
Learning Strategies
Increase the number of test items
Use a full representation of the
construct
Widen the process dimension of
test design
Use a variety of testing formats
Use performance assessment
Learning Strategies
Be cautious about learning styles
Consider aptitude-treatment
interactions
Give learning sufficient time
Guard against test bias
Close the achievement gap
