On the Horizon: Superintendent's Report to the BOE


A January 2012 presentation explaining planned changes in testing in Connecticut and how these changes will impact decision-making over the next few years.

Published in: Education, Technology


  1. Introduction – Changes in Testing on the Horizon – Impact
  2. This year’s kindergarteners will never take the CMT! SBCAT (?) will soon be as well known to you as CMT.
  3. In June 2010, 31 states, including Connecticut, joined to form the SMARTER Balanced Assessment Consortium (SBAC) and submitted an application in the ‘Race to the Top’ Assessment Competition.
  4. The USDE awarded two ‘Comprehensive Assessment Systems’ grants in September 2010: one to SBAC, the other to the Partnership for Assessment of Readiness for College and Careers (PARCC).
  5. » Through-course Assessments #1 and #2: ELA and Math; 1–3 tasks in a class period
     » Through-course Assessment #3: ELA and Math; taken over several sessions or classes
     » Through-course Assessment #4: ELA Speaking and Listening; each student presents; teacher scores
     » End-of-Year Assessment: on computer; 45–60 questions
     » Field testing 2012–2014; operational 2014
  6. » CT is a GOVERNING state in SBAC
  7. An Adaptive Test is a test that dynamically adjusts to the trait level of each examinee as the test is being administered.
     • Get it right, it moves you up
     • Get it wrong, it moves you down
     • Makes continuous adjustments as you work
  8. Developed in France around 1905, Alfred Binet’s IQ test is still used in schools today, is the standard against which IQ tests are compared, and incorporates all the elements of an adaptive test.
  9. It uses an adaptive item-selection procedure based on a calibrated bank of 43,000 items. A different starting point can be used for each child. The scoring method allows a common score to be obtained from different subsets of items. Test length can vary by child with the use of a variable termination rule.
  10. A CAT consists of:
      • A starting rule for selecting the first item
      • A procedure for scoring item responses and estimating trait level
      • A method of selecting the next item
      • A rule for ending the test
      • A pre-calibrated 43,000-item bank
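The four rules above can be sketched in a few lines of code. This is a toy illustration under an assumed Rasch (one-parameter logistic) response model, not SBAC's actual engine; the function names and the crude up/down scoring rule are inventions for this sketch (operational CATs use maximum-likelihood estimation).

```python
import math
import random

def p_correct(ability, difficulty):
    # Rasch model: probability of a correct response depends only on the
    # gap between the examinee's ability and the item's difficulty.
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

def run_cat(item_bank, true_ability, rng, start=0.0, max_items=20):
    """Toy CAT loop: starting rule, item selection, scoring, stopping rule."""
    estimate = start                     # starting rule: initial estimate
    used = set()
    for n in range(1, max_items + 1):    # stopping rule: fixed length here
        # item-selection rule: unused item closest to the current estimate
        idx = min((i for i in range(len(item_bank)) if i not in used),
                  key=lambda i: abs(item_bank[i] - estimate))
        used.add(idx)
        # simulate the examinee's response to the chosen item
        correct = rng.random() < p_correct(true_ability, item_bank[idx])
        # scoring rule: shrinking steps up on right, down on wrong
        estimate += (1.0 / n) if correct else -(1.0 / n)
    return estimate

rng = random.Random(42)
bank = [d / 10.0 for d in range(-30, 31)]   # difficulties from -3.0 to 3.0
print(round(run_cat(bank, true_ability=1.5, rng=rng), 2))
```

Each wrong answer pulls the estimate (and hence the next item served) down, each right answer pulls it up, which is exactly the "moves you up / moves you down" behavior described on slide 7.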
  11. The same item at different difficulty levels:
      • Mike is using cubes that measure ¼ inch on each side to fill a box that has a height of 5 ¼ inches, width of 3 inches, and length of 2 ½ inches. How many ¼-inch cubes will Mike need to fill the box?
      • Mike is using cubes that measure ½ inch on each side to fill a box that has a height of 5 ½ inches, width of 3 inches, and length of 2 ½ inches. How many ½-inch cubes will Mike need to fill the box?
      • Mike is using cubes that measure ½ inch on each side to fill a box that has a height of 5 ¼ inches, width of 3 inches, and length of 2 ½ inches. How many ½-inch cubes will Mike need to fill the box?
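A quick worked check of the arithmetic behind these variants: count the cubes along each edge, then multiply. The helper name `cubes_needed` is ours, not the test's; exact fractions avoid floating-point surprises.

```python
from fractions import Fraction

def cubes_needed(edge, height, width, length):
    # Cubes per edge must come out whole for the box to fill evenly.
    counts = [dim / edge for dim in (height, width, length)]
    assert all(c.denominator == 1 for c in counts), "cubes don't fit evenly"
    return int(counts[0] * counts[1] * counts[2])

quarter, half = Fraction(1, 4), Fraction(1, 2)

# ¼-inch cubes in a 5¼ × 3 × 2½ box: 21 × 12 × 10 cubes
print(cubes_needed(quarter, Fraction(21, 4), Fraction(3), Fraction(5, 2)))  # 2520

# ½-inch cubes in a 5½ × 3 × 2½ box: 11 × 6 × 5 cubes
print(cubes_needed(half, Fraction(11, 2), Fraction(3), Fraction(5, 2)))     # 330
```

Note that the third variant (½-inch cubes in a 5¼-inch-tall box) would trip the even-fit check, since 5¼ ÷ ½ is 10.5, not a whole number of cube layers.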
  12. CAT equalizes the psychological environment of the test across all ability levels.
      • High-ability students will get about 50% of the questions correct.
      • Low-ability students will get about 50% of the questions correct.
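The ~50%-correct property can be seen in a small simulation. This again assumes a Rasch response model (our assumption, not SBAC's published design): once item difficulty tracks the examinee's ability, each item is close to a coin flip regardless of how strong or weak the student is.

```python
import math
import random

def proportion_correct(true_ability, n_items, rng):
    # Serve each item at the current ability estimate, updating as we go.
    estimate = 0.0                      # every examinee starts at the same point
    correct = 0
    for n in range(1, n_items + 1):
        difficulty = estimate           # item matched to the current estimate
        p = 1.0 / (1.0 + math.exp(-(true_ability - difficulty)))
        right = rng.random() < p
        correct += right
        # crude score update; the estimate homes in on the true ability,
        # after which p sits near 0.5 for every remaining item
        estimate += (1.0 / n) if right else -(1.0 / n)
    return correct / n_items

rng = random.Random(0)
for ability in (-2.0, 0.0, 2.0):
    print(round(proportion_correct(ability, 10_000, rng), 2))
```

All three examinees, from well below average to well above, land near a 50% success rate, which is the "equalized psychological environment" the slide describes.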
  13. Efficiency: CATs are more efficient than conventional tests; they generally reduce test length by 50% or more.
      Precision: A properly designed CAT can measure all examinees with the same degree of precision.
      Reporting: More accurate placement of students who previously scored ‘advanced’ and ‘below basic.’
  14. Reporting: Results can be made available more quickly (computer-based).
      Test Security/Item Exposure: Students are presented with different test items.
      More Flexibility for Computer Capacity: Students do not need to be assessed on the same schedule.
  15. Students cannot change an answer to an item once they have submitted it; test prep will need to include this. Because CAT is dynamic, it can recover from an occasional student error in answering an item. The literature shows little or no gain from answer changing.
  16. Animations, simulations, on-line access to information, video or audio stimulus, moveable models; test prep will need to include this. Items elicit a response from the student (e.g., selecting one or more points on a graphic, dragging and dropping a graphic from one location to another, manipulating a graph).
  17. All constructed-response items in the CAT will be AI scored. Items not scored with AI are delivered outside of the CAT ‘engine’ (e.g., some elements of performance tasks). SBAC will require a 10–20% read-behind to ensure accuracy. AI scoring is nearly 100% reliable.
  18. Summative Assessment (CAT): mandatory comprehensive assessment in grades 3–8 and 11; testing window within the last 12 weeks of the instructional year. Item types: selected response, short constructed response, extended constructed response, technology enhanced, and performance tasks.
  19. Interim Assessment (CA): optional; available throughout the year; content-cluster assessment and learning progressions. Item types: selected response, short constructed response, extended constructed response, technology enhanced, and performance tasks.
  20. The teacher shares the learning goals with students and provides opportunities for students to monitor their ongoing progress.
  21. Importance of focusing time, energy, and resources on implementing the CCSS beginning this coming school year: teachers must read the standards.
  22. FOCUS, FOCUS, FOCUS – deeper understanding of fewer concepts.
      COHERENCE – one year builds to the next.
      FLUENCY – standards expect speed and accuracy.
      DEEP UNDERSTANDING – fewer standards allow for this.
      APPLICATION – ability to apply what they know.
  23. Spring 2011–2012: Smarter Balanced will pilot SBCAT with some schools; email to superintendents asking about participation.
      Spring 2012–2013: larger SBCAT pilot.
      Spring 2013–2014: every district will be required to pilot a portion of the SBCAT test.
      CT has applied for a CMT/CAPT moratorium for 2013–2014.
  24. 12-week testing window – to meet 1-to-1 computer requirements.
      End-of-year window – to allow for interim assessments.
      Less time required; more precise.
      43,000-item test bank.
  25. Innovative test items to match ‘real-world’ applications:
      • You wouldn’t really use a protractor on a computer, rotate it, etc.
      • You really would click on a word to get its definition or hear it pronounced.
  26. Text Complexity – students encounter appropriately complex texts at each grade level to develop the skills and conceptual knowledge they need for success in school and life. Includes short texts that require close reading.
      Range and Quality of Texts – text selection: 50% literary, 50% informational (K–6); 50% literary non-fiction (7–12).
      High-Quality Text-Dependent Questions and Tasks – 80–90% of all questions should be text-dependent questions which require close reading, vs. skimming.
      Academic Vocabulary – words that readers will find in all types of complex texts from different disciplines.
      Students’ Ability To Read Complex Texts Independently.
  27. Now: join the limited pilot in 2012, if possible; examine existing CATs.
      Now: study and align to the CCSS; develop new instructional strategies.
      Now: plan for a 1-to-1 solution by 2015; define a new set of test-taking strategies.