EasyCBM was designed in an RTI (Response to Intervention) framework. From the start, we’ve worked to ensure that the system will help facilitate good instructional decision-making. We began with a grant from the federal Office of Special Education Programs in 2006, but have continued to expand our offerings with the help of our school district partners in Eugene 4j, Portland Public, Sisters, Bend/Redmond, Springfield, and Junction City. Explaining the chronology of how we developed the system might help you understand where we are now as well as where we’re headed in the next few years.

Between January and September of 2006, we created the K-4 assessments and our initial website, easycbm.com. At that time, the system was only in use in Eugene 4j, but people started hearing about it, and by the end of the first year (June 2006), we had a few hundred teacher users from across the country. During the 2006-2007 school year, we added comprehension measures in 2nd grade and passage reading fluency and comprehension in 5th grade. We began to get requests to extend the system to middle schools and started developing tests for those grades as well. In addition, we completed the first round of changes to the website in response to requests from teachers. By the end of 2007, we had almost a thousand teacher users.

During the 2007-2008 school year, we added passage reading fluency for grades 6, 7, and 8 and piloted a small number of comprehension measures at these grade levels as well, adding them to the system during the summer so they would be ready to use in the fall of 2008. By the end of the 2007-2008 school year, we had reached 3,000 teacher users. This year, 2008-2009, has seen two major changes: we added math (K-5, with 6-8 to be added as soon as we finish piloting the items), and we began beta-testing a District EasyCBM system that will become commercially available this summer.
This commercial version includes both benchmarking and progress monitoring assessments, offers districts the ability to give different levels of data access to different users, and facilitates the transfer of data between the EasyCBM system and district data warehouses. Final details about the commercial version are still being worked out, but we anticipate it costing no more than $1 per student per year. Our user group continues to grow: Regular EasyCBM is averaging over 50 new teacher sign-ups each day, with a current user base of just under 10,000 teachers across the United States. In addition, we’ve begun collaborating with the Center for Teaching and Learning (CTL) and the folks at the DIBELS Data System (DDS), discussing ways to work together in the future. You can expect to hear more about these collaborations as the details get worked out.
This is a screenshot of the homepage of the District EasyCBM. If you’re familiar with the Regular EasyCBM site, you’ll see that it looks very similar. We are committed to continuing to offer Regular EasyCBM free to teachers and will continue to upgrade that system as we further develop the commercial District EasyCBM.
One of the first questions we had to ask ourselves when we were creating EasyCBM involved the measures we would include in the system. After reviewing a variety of published studies on the different types of progress monitoring measures, we decided to focus on these. In Kindergarten and First Grade, we offer Phonemic Segmentation, Letter Names (LN), Letter Sounds (LS), and Word Reading Fluency (WRF). We also start offering Passage Reading Fluency (PRF) in First Grade, continuing throughout the rest of the grades. Our Comprehension (MCRC) measures begin in Grade 2, where the stories are 700 words long, with 12 questions (6 test students’ literal comprehension, 6 test their inferential comprehension). In Grades 3-8, the stories are 1500 words long, with 20 questions (7 test students’ literal comprehension, 7 test their inferential comprehension, and 6 test their evaluative comprehension). The Comprehension tests are designed to be computer administered, so they are scored by the computer, with the results instantly placed on the student performance graphs. The stories we use for these tests are all original works of narrative fiction. We’ll be piloting some non-fiction comprehension measures next year.

We frequently get asked how our assessments differ from those offered on DIBELS. The main difference is that we used different statistical techniques when developing the measures. Rather than use classical statistics (which depend on comparing the average performance of one group to another), we used a more sophisticated statistical approach called Item Response Theory, or IRT. An IRT analysis allows a test developer to calculate the ‘difficulty’ of each individual item, relative to all other items in the item bank. We use this information to create alternate forms of each measure that are comparable from one form to the next within a particular measure type. This way, the score a student gets on Form 4 should be essentially the same as the score she would get on Form 8, 9, or 16.
If there is a difference in performance, it is more likely to reflect a true difference in student skill level rather than simply a form that is easier or more difficult than the others.
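The alternate-forms idea can be sketched with the one-parameter (Rasch) IRT model. This is a minimal illustration, not easyCBM’s actual calibration: the item difficulties, ability value, and function names below are invented for the example.

```python
import math

def p_correct(theta, b):
    """Rasch model: probability that a student with ability theta
    answers an item of difficulty b correctly (both on the logit scale)."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def expected_score(theta, difficulties):
    """Expected raw score on a form: the sum of the per-item
    probabilities of a correct response."""
    return sum(p_correct(theta, b) for b in difficulties)

# Two hypothetical alternate forms whose item difficulties are matched
# in distribution, so the forms are comparable.
form_a = [-1.0, -0.5, 0.0, 0.5, 1.0]
form_b = [-0.9, -0.6, 0.1, 0.4, 1.0]

theta = 0.2  # a hypothetical student's ability estimate
score_a = expected_score(theta, form_a)
score_b = expected_score(theta, form_b)
```

Because the difficulty distributions are matched, the two expected scores come out nearly identical, which is what makes a score difference across forms interpretable as a change in student skill rather than a change in form difficulty.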
For our Math measures, we decided to use the National Council of Teachers of Mathematics Focal Point Standards. There are 3 Focal Point standards per grade. Our system has a total of 10 progress monitoring measures and 3 benchmark measures per focal point, with each measure comprised of 16 math items. They are designed to be taken online, so the computers automatically score responses and instantly graph student performance. Again, we used IRT analyses to create comparable alternate forms within each Focal Point. By September 2009, we will have added the following enhancements to the math part of our system. In Grades K and 1, any item with words will have the option for a ‘read aloud’ administration, where students can click on a button on the screen and have the item read aloud to them. Grades K and 1, which currently have the measures listed as ‘mixed math’ rather than three unique Focal Points, will become more like the other grades, with three distinct Focal Points, each with 10 progress monitoring probes and 3 benchmark assessments. Grades 6-8 will be added (provided, of course, enough students participate in the piloting to allow us to complete our analyses). Over the course of next year, we’ll be adding the percentile guidelines to the graphs for the math measures. The reason the lines are not there now is that until we have sufficient numbers of students taking the tests, we cannot reliably predict what the percentile lines will be for the year.
This is a screenshot of the Measures tab from the District EasyCBM. You can see that there are three benchmark testing windows (Fall, Winter, and Spring). From this particular screen, teachers can print out the test booklets and student materials.
On the next slide, you see the Measures tab from the District EasyCBM site, but this time we’ve clicked over to the Progress Monitoring side of things. As you can see, we are looking at the measures available in Grade 5 Reading. If I were to scroll further down the page, I would see the full list of comprehension tests, then the math measures.
And here are the math measures.
Schools using District EasyCBM set, on the system’s Admin tab, the specific percentiles that will show on student graphs. These cut points also determine how the color coding is applied on the benchmark class list report.
To access class lists, users go to the Reports tab and select ‘Benchmarks’ from the options.
In this case, I have accessed the data as a person with District Level Access. If I select a School, but not a Teacher, then I get a list of all students at that school in the grade I’ve selected. Here, for example, we’re looking at the Winter Benchmark school-wide report for Kindergarten students. In this particular example, the green highlighting of the score indicates a student who scored at or above the 90th percentile on that measure. The yellow highlighting indicates a student who scored between the 11th and 20th percentile, and the red highlighting indicates a student who scored at or below the 10th percentile. These lists can be sorted by clicking on the column header, so you can sort by score on any of the measures, as well as by total z-score (standard score) averaged across all three measures, or by percentile rank.
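The highlighting and sorting rules just described can be sketched in a few lines. Only the percentile thresholds come from the report description; the function names and the unhighlighted return value are my own for illustration.

```python
def highlight(percentile):
    """Map a benchmark percentile rank to the class-list color coding:
    green at/above the 90th percentile, yellow from the 11th through
    the 20th, red at/below the 10th; other scores are unhighlighted."""
    if percentile >= 90:
        return "green"
    if 11 <= percentile <= 20:
        return "yellow"
    if percentile <= 10:
        return "red"
    return None

def mean_z_score(scores, means, sds):
    """Average z-score (standard score) across the benchmark measures,
    one way the school-wide class list can be sorted."""
    zs = [(s - m) / sd for s, m, sd in zip(scores, means, sds)]
    return sum(zs) / len(zs)

# A hypothetical student's raw scores on three measures, with
# made-up grade-level means and standard deviations.
avg_z = mean_z_score([40, 22, 31], [35, 25, 30], [5.0, 4.0, 6.0])
```

Sorting the class list by `avg_z` surfaces students who are consistently low (or high) across all three measures, rather than on any single one.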
For the system’s potential to be realized fully, it’s important that teachers have buy-in. If they don’t use the intervention screens, for example, you’ll be missing a lot of really useful data that can help guide instructional decision-making.
Once student data are in the system, teachers can access a variety of reports using the Reports tab. Here, we see a screenshot with the Groups tab selected. When I select the ‘Main Group’ option, in this case a group with 33 students, the next screen appears, where I can see average scores and other group-level results.
Because we have fully integrated our Benchmarking and Progress Monitoring assessments, they are listed in the same table. Here, I’ve added the blue arrows to call attention to the two Benchmarking tests. All others in this table are Progress Monitoring measures.
Although group views can be helpful in terms of evaluating the effectiveness of a particular approach for your class as a whole, the individual graphs are at the heart of easyCBM. Here, you see a series of scores, from the middle of September through the middle of January.
On this screen, I’ve added the blue arrows to indicate which of the scores are the Benchmark assessments. All others are progress monitoring measures. You should also note the key at the bottom of the graph that shows the color coding for the lines; this is the same as the color coding on the class list from the benchmark reports tab. The vertical lines (‘Triumphs’ and ‘Double Dose’) indicate interventions. At the bottom of the graph, you’ll see a brief description of what each of those interventions includes. Teachers enter this information through the “Interventions” tab, and it becomes a rich source of data to use in grade-level meetings, parent conferences, and professional development sessions with teachers at the school. Many schools print these reports and include them in their annual IEP reports as well.
On this graph, the two data points (for the Comprehension test) show where the student scored, relative to his classmates, at the fall and winter benchmark assessments. This graph shows a similar progression in skill, where we see the student progress from the 20th percentile to near the 50th percentile. In this case, the teacher used the passage reading fluency (PRF) measure for progress monitoring because it was much shorter (1-minute administration time), and student performance on that measure was strongly correlated with his performance on the comprehension measure, as we would expect. It’s worth pointing out that we recommend teachers use one and only one measure type for progress monitoring (selecting one that will be sensitive to measuring growth, based on skill level at the first benchmark test), stair-stepping up to the more challenging type of progress monitoring measure as the student masters the easier ones.
easyCBM: Benchmarking and Progress Monitoring System
Assessments to Facilitate Instruction
What does it take for a seamless integration of a benchmarking and progress monitoring system?
• Assess the right things
• Scale the measures for sensitivity and reliability
• Make it easy for schools to use
• Get teacher buy-in
• Align with important criterion measures
Assess the Right Things: Reading Measures
Early Literacy
• Phonemic Segmentation
• LN, LS
• WRF, PRF
• Comprehension:
                Gr. 2     Gr. 3-8
  Literal       6 items   7 items
  Inferential   6 items   7 items
  Evaluative    —         6 items
Assess the Right Things: Math Measures
NCTM Focal Point Standards
3 Focal Points per grade; each PM measure has 16 items aligned to a single Focal Point. Each Benchmark measure has 48 items (16 from each Focal Point).
School Use
Benchmark data need to guide decision-making. In many districts using an RTI framework, all students get Tier 1 and 2; those scoring below a particular percentile on the benchmark and/or non-responders get Tier 3 instruction.
Instructional Decisions: who, what, when, how, where, how often, with whom?
Teacher Buy-In
• Teachers need to input intervention info & modify instruction based on findings.
• Selection of appropriate measure for the individual student
  – Current skill level
  – Specific deficits
  – Sensitivity to growth
  – Aligned to intervention
Aligned to Other Criterion Measures
Upcoming studies: criterion-validity studies examining the relationship between benchmark measures and state tests in reading and math.
Free site already open for use: http://easycbm.com
District site will be available for use starting in September 2009.
For more information: contact firstname.lastname@example.org