Assessment and Individual Differences
Sullivan Turner
EDTC 610 / Fall 2010
Psychometric Model
• Assumes that personal traits, including knowledge and cognitive abilities, can be measured as precisely as physical quantities such as weight and distance
• Has tremendous power to influence life decisions
• Classifies children as gifted, learning disabled, or emotionally disturbed based on test performance
Reliability
Replicability of a test score
True Scores and Observed Scores
• Perfect reliability is impossible
• Measurement Error
• True Score
• Observed Score
Observed Score = True Score ± Measurement Error
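The equation above can be illustrated with a short simulation. This is a minimal sketch, not part of the slides: the true score, the error standard deviation, and the number of administrations are all illustrative assumptions.

```python
import random

# Hypothetical illustration: an examinee's true score is fixed, but each
# administration adds random measurement error, so the observed score
# varies from one testing occasion to the next.
random.seed(0)

TRUE_SCORE = 80   # assumed true score (illustrative value)
ERROR_SD = 4      # assumed standard deviation of measurement error

def observe():
    """Observed Score = True Score +/- Measurement Error."""
    return TRUE_SCORE + random.gauss(0, ERROR_SD)

observed = [observe() for _ in range(10)]
print([round(x, 1) for x in observed])

# The observed scores cluster around the true score but never reproduce
# it exactly: perfect reliability is impossible.
mean_observed = sum(observed) / len(observed)
print(round(mean_observed, 1))
```

Averaging the ten observed scores lands much closer to the true score than any single administration does, which is the intuition behind the confidence-interval idea on the next slide.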
Reliability
Confidence Interval
• The true score will fall within the confidence interval with a known level of probability
Number of Items
• High reliability is desirable
• Increase the number of questions to boost test reliability
Validity
Is concerned with the meaning of what is measured
A completely valid test measures fully and accurately what it is intended to measure
Validity
What Does the Score Mean?
• Construct Validity: concerned with whether a test measures what it is intended to measure
Validity
What Does the Score Mean?
• Concurrent Validity: evidence that test scores correlate with an established measure of the same construct taken at about the same time
• Predictive Validity: evidence that test scores predict future performance on a relevant criterion
Validity
Construct Under-Representation
• The test falls short of representing everything the construct is intended to include
Construct Over-Representation
• The test measures something other than the construct it is intended to measure
Validity
Construct Over-Representation
• Measurement contamination
• Response-elimination strategy
• Testwiseness
• Test anxiety
Validity
Measurement Variance: variation in test scores among examinees can be expressed quantitatively
s² = Σ(X − X̄)² / (n − 1)
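The variance formula above transcribes directly into code. The score list below is a hypothetical example, not data from the slides.

```python
# Direct transcription of the sample-variance formula,
# s^2 = sum((X - X_bar)^2) / (n - 1), using illustrative scores.
scores = [72, 85, 90, 68, 95, 80]   # hypothetical test scores

n = len(scores)
x_bar = sum(scores) / n             # mean score, X_bar
s_squared = sum((x - x_bar) ** 2 for x in scores) / (n - 1)

print(round(s_squared, 2))          # prints 108.27
```

Dividing by n − 1 rather than n gives the sample variance, which matches what Python's standard `statistics.variance` computes.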
Validity
Measurement Variance
• Construct-Irrelevant Variance
• Every test is contaminated to some degree
• Example: the response-elimination strategy used in multiple-choice testing
How Tests Influence Learning
Washback Effects: anticipation of test consequences can feed back to influence the processes of learning and teaching that lead up to the test
• Teaching to the test
Measurement-Driven Instruction
• Minimal competency testing
• Consequential validity
Performance Assessment
Assessment
• Asking for complex responses yields diagnostic information
Performance Assessments
• Give educational value to "teaching to the test"
Authentic Assessments
• Lead to products and outcomes with intrinsic value
Formative Assessment
Summative Assessment
• Summarizes the effects of past educational experience
Formative Assessment
• Guides and matches ongoing teaching and learning experiences
Assessment for Learning
• Promotes student learning
Standardized Testing
Raw Score
• Point value earned on a particular test
Normal Distribution
• Mean
• Mode
• Standard Deviation
Standard Scores
• Percentile rank
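The path from raw score to standard score to percentile rank can be sketched as follows. The norm-group mean and standard deviation are illustrative assumptions, not values from any real test.

```python
from statistics import NormalDist

# Hedged sketch: converting a raw score to a standard (z) score and a
# percentile rank under a normal distribution. The mean and standard
# deviation here are assumed, not taken from a real norm group.
MEAN, SD = 100, 15          # assumed norm-group parameters
raw_score = 115             # hypothetical raw score

z = (raw_score - MEAN) / SD              # standard (z) score
percentile = NormalDist().cdf(z) * 100   # percentile rank

print(z)                  # 1.0: one standard deviation above the mean
print(round(percentile))  # 84: about the 84th percentile
```

A score one standard deviation above the mean always lands near the 84th percentile under a normal distribution, which is why standard scores let examinees be compared across different tests.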
Quantitative Research
Qualitative Research
• Emphasizes detailed description rather than numerical measurement
Quantitative Research
• Emphasizes numerical measurement of constructs
Descriptive Analysis
• States factual information
Aptitude-Treatment Interactions (ATIs)
Common intuition that different students learn best under different conditions
Aptitude
• General cognitive ability
Treatment
• Identifiable educational experience
Interaction
• Matching treatment to aptitude
Diversification of Instruction
Cognitive Styles
• Field dependence vs. field independence
• Impulsivity vs. reflectivity
Learning Styles
• Multiple Intelligences (MI) theory
• Time and learning
• Mastery learning
Group Differences
Gender differences
Socioeconomic differences
Racial-ethnic differences
• The achievement gap
• Test bias
Learning Strategies
• Increase the number of test items
• Use a full representation of the construct
• Widen the process dimension of test design
• Use a variety of testing formats
• Use performance assessment
Learning Strategies
• Be cautious about learning styles
• Consider aptitude-treatment interactions
• Give learning sufficient time
• Guard against test bias
• Close the achievement gap