
Assessment and individual differences




  1. Assessment and Individual Differences, Sullivan Turner, EDTC 610/Fall 2010
  2. Psychometric Model
     • Assumes that personal traits, including knowledge and cognitive abilities, can be measured much as physical quantities like weight and distance are
     • Has tremendous power to influence life decisions
     • Classifies children as gifted, learning disabled, or emotionally disturbed based on test performance
  3. Reliability
     • Replicability of a test score
     • True Scores and Observed Scores
       • Perfect reliability is impossible; every measurement includes some error
       • Observed Score = True Score ± Measurement Error
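The observed-score relation above can be illustrated with a minimal Python sketch. The error standard deviation of 3 points is an arbitrary assumption chosen for illustration, not a value from the slides:

```python
import random

def observed_score(true_score, error_sd=3.0, seed=None):
    """Simulate one observed score as the true score plus
    normally distributed measurement error."""
    rng = random.Random(seed)
    return true_score + rng.gauss(0, error_sd)

# With a fixed seed the simulated error is reproducible,
# so the same "administration" always yields the same observed score.
score = observed_score(80, error_sd=3.0, seed=42)
```

Running the simulation many times with different seeds shows observed scores scattering around the true score, which is exactly why a single test score should be read as an estimate rather than an exact value.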
  4. Reliability
     • Confidence Interval
       • The true score falls within the interval around the observed score with a known level of probability
     • Number of Items
       • High reliability is desirable
       • Increasing the number of questions boosts test reliability
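The slide's advice that more items boost reliability is commonly quantified with the Spearman-Brown prophecy formula. The formula is standard in psychometrics, though it is not named on the slide:

```python
def spearman_brown(reliability, length_factor):
    """Predicted reliability after changing test length by length_factor,
    via the Spearman-Brown prophecy formula:
        r_new = k * r / (1 + (k - 1) * r)
    where k is the length factor and r the current reliability."""
    return (length_factor * reliability) / (1 + (length_factor - 1) * reliability)

# Doubling a test whose reliability is .70:
r_doubled = spearman_brown(0.70, 2)  # ~0.82
```

The formula assumes the added items are comparable in quality to the existing ones; padding a test with poor items will not deliver the predicted gain.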
  5. Validity
     • Concerned with the meaning of what is measured
     • A completely valid test measures fully and accurately what it is intended to measure
  6. Validity: What Does the Score Mean?
     • Construct Validity: concerned with whether a test measures what it is intended to measure
  7. Validity: What Does the Score Mean?
     • Concurrent Validity: evidence that test scores agree with an established measure of the same construct taken at about the same time
     • Predictive Validity: evidence that test scores predict future performance on a criterion
  8. Validity
     • Construct Under-Representation: the test falls short of representing everything the construct includes
     • Construct Over-Representation: the test measures something beyond the construct it is intended to measure
  9. Validity
     • Construct Over-Representation and Measurement Contamination
       • Response-elimination strategy
       • Testwiseness
       • Test anxiety
  10. Validity
     • Measurement Variance: variation in test scores among examinees can be expressed quantitatively as the sample variance
       s² = Σ(Xᵢ − X̄)² / (n − 1)
  11. Validity
     • Measurement Variance and Construct-Irrelevant Variance
       • Every test is contaminated to some degree
       • Example: the response-elimination strategy used in multiple-choice testing
  12. How Tests Influence Learning
     • Washback Effects: anticipation of test consequences can feed back to influence the processes of learning and teaching that lead up to the test
       • Teaching to the test
     • Measurement-Driven Instruction
       • Minimal competency testing
       • Consequential validity
  13. Performance Assessment
     • Assessment: asking for complex responses that yield diagnostic information
     • Performance Assessments: have educational value in themselves, so "teaching to the test" is less of a problem
     • Authentic Assessments: lead to products and outcomes with intrinsic value
  14. Classroom Assessment
     • Everyday Assumptions of Testing
     • Designing Tests
       • Multiple-choice questions
       • Constructed-response items
         1. Scoring rubrics
         2. Holistic scoring
         3. Analytical scoring
  15. Formative Assessment
     • Summative Assessment: summarizes the effects of past educational experience
     • Formative Assessment: guides and matches ongoing teaching and learning experiences
     • Assessment for Learning: promotes student learning
  16. Standardized Testing
     • Raw Score: point value earned on a particular test
     • Normal Distribution
       • Mean
       • Mode
       • Standard deviation
     • Standard Scores
       • Percentile rank
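The standard-score ideas on this slide can be sketched in Python, assuming scores are normally distributed; `statistics.NormalDist` is part of the standard library:

```python
from statistics import NormalDist

def z_score(raw, mean, sd):
    """Standard score: how many standard deviations a raw score
    lies above or below the mean."""
    return (raw - mean) / sd

def percentile_rank(z):
    """Percentile rank of a z-score under a normal distribution:
    the percentage of examinees scoring at or below this score."""
    return round(NormalDist().cdf(z) * 100)

# A raw score of 85 on a test with mean 75 and SD 5
# is two standard deviations above the mean:
z = z_score(85, 75, 5)   # 2.0
percentile_rank(z)       # 98
```

The mean of 75 and SD of 5 here are made-up illustration values; a real standardized test publishes its own norms.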
  17. Quantitative Research
     • Qualitative Research: emphasizes detailed description rather than numerical measurement
     • Quantitative Research: emphasizes numerical measurement of constructs
     • Descriptive Analysis: states factual information
  18. Aptitude-Treatment Interactions (ATIs)
     • Common intuition that different students learn best under different conditions
     • Aptitude: general cognitive ability
     • Treatment: identifiable educational experience
     • Interaction: matching treatment to aptitude
  19. Diversification of Instruction
     • Cognitive Styles
       • Field dependence vs. field independence
       • Impulsivity vs. reflectivity
     • Learning Styles
       • Multiple Intelligences (MI) theory
     • Time and Learning
     • Mastery Learning
  20. Group Differences
     • Gender differences
     • Socioeconomic differences
     • Racial-ethnic differences
       • The achievement gap
       • Test bias
  21. Learning Strategies
     • Increase the number of test items
     • Use a full representation of the construct
     • Widen the process dimension of test design
     • Use a variety of testing formats
     • Use performance assessment
  22. Learning Strategies
     • Be cautious about learning styles
     • Consider aptitude-treatment interactions
     • Give learning sufficient time
     • Guard against test bias
     • Close the achievement gap