Two Percent Flexibility
Presentation for EDSP 6303, Texas Tech University


  • The 2% rule is where we currently stand in evaluating students with disabilities who do not meet grade-level testing standards despite high-quality instruction. The CEC's vision is for this flexibility option to move the assessment of these students in the right direction: toward a longitudinal growth model and, ultimately, a way to measure performance in relation to past academic proficiency, thereby demonstrating growth and achievement on an individual basis.

Two Percent Flexibility: Presentation Transcript

  • CONTEMPORARY ISSUE IN SPECIAL EDUCATION: TWO PERCENT FLEXIBILITY. M. Kalene Meeks, Rebecca Sheffield
  • Presentation Contents
    • What is the “Two Percent Flexibility” issue?
    • Five recent studies: explanations and evaluations of research quality
    • Implications of this research for special education policy and practice
    • March 2011 update
    • Logic model for a possible project
  • Two Percent Flexibility: THE ISSUE
  • The Issue:
    • The Two Percent Rule, effective in 2006, applies to No Child Left Behind assessment requirements, Sec. 200.6(a)(3), and gives states some leeway in assessing students with disabilities by allowing them to develop and administer alternate assessments based on modified achievement standards (AA-MAS).
    • Student scores on an AA-MAS may account for up to two percent of the scores used in states’ and districts’ Adequate Yearly Progress determinations. (Kettler et al., 2011; Two Percent Flexibility, 2011)
  • The Issue:
    • States should determine how their accommodation policies allow for students’ participation in the regular assessments before developing an AA-MAS.
    • Flexibility: AA-MAS are not mandatory.
    • Also, more than two percent of AA-MAS scores may count toward proficiency if less than one percent of AA-AAS (alternate assessment based on alternate achievement standards) scores are counted toward proficiency. Thus: AA-AAS + AA-MAS ≤ 3% of students taking tests. (Kettler et al., 2011; Two Percent Flexibility, 2011)
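To make the cap arithmetic concrete, here is a minimal Python sketch of how the 1% / 2% / 3% caps described above might be checked for a district. The function, variable names, and the simple counting logic are our own illustration, not an official AYP calculation.

```python
def capped_proficient_counts(total_tested, aa_aas_prof, aa_mas_prof):
    """Hypothetical sketch: how many AA-AAS and AA-MAS proficient scores
    may count toward AYP under the 1% / 2% caps, with unused AA-AAS
    allowance shifting to AA-MAS (combined cap of 3%)."""
    aa_aas_cap = 0.01 * total_tested      # AA-AAS proficient cap: 1%
    combined_cap = 0.03 * total_tested    # combined cap: 3%

    counted_aas = min(aa_aas_prof, aa_aas_cap)
    # AA-MAS may exceed 2% only to the extent AA-AAS stays under 1%.
    counted_mas = min(aa_mas_prof, combined_cap - counted_aas)
    return counted_aas, counted_mas

# Example: 10,000 students tested; 50 AA-AAS proficient (0.5%) leaves room
# for up to 2.5% (250) of AA-MAS proficient scores to count toward AYP.
print(capped_proficient_counts(10_000, 50, 280))  # -> (50, 250.0)
```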
  • The Issue: Who?
    Students with disabilities who have difficulty meeting grade-level standards as judged by regular state tests:
    • must have an Individualized Education Plan (IEP)
    • must have access to grade-level curriculum
    • disability must be responsible for test difficulties (not sub-par instruction)
    [Continuum graphic, from difficult to easy: Regular Test or Test w/ Accommodations → AA-MAS → AA-AAS]
    Note: The AA-MAS is different from the AA-AAS (alternate assessment based on alternate achievement standards). Students taking AA-AAS tests are not necessarily instructed in the general curriculum. (Kettler et al., 2011; Two Percent Flexibility, 2011)
  • The Issue: AA-MAS
    • must be aligned with grade-level content but may differ in breadth and depth.
    • cannot preclude a student from receiving a diploma.
    • In Texas: AA-MAS = STAAR-M; AA-AAS = STAAR-ALT.
    • Regardless of the tests available, testing decisions must be made on an individual basis by the student’s IEP team (ARD committee). (Kettler et al., 2011; Two Percent Flexibility, 2011)
  • The Issue: The Two Percent Rule is a good place to start, but there is no consensus yet on its implementation.
    • Arguments: the two percent cap permits either too many or too few students, and it is not feasible to develop validated assessments in a short period of time.
    • Many states are trying to modify their existing assessments rather than develop new ones.
  • The Issue: Historical focus on student-achievement-based evaluations → Two Percent Rule → Flexibility for states to develop assessments → Longitudinal Growth Model
  • The way we see it… The Two Percent Rule:
    • acknowledges that the Department of Education recognizes the need for alternative assessments based on modified academic standards.
    • provides a framework in which states can develop and administer these assessments.
  • Lazarus, Cormier, & Thurlow (2011): STATES’ ACCOMMODATIONS POLICIES AND DEVELOPMENT OF ALTERNATE ASSESSMENTS BASED ON MODIFIED ACHIEVEMENT STANDARDS: A DISCRIMINANT ANALYSIS
  • Lazarus, Cormier, & Thurlow (2011)
    • Restrictive accommodation policies might affect students’ ability to participate in regular assessments and might, therefore, affect states’ decisions about using AA-MAS.
    • States’ policies vary specifically in the accommodations that can be provided to students on the regular assessments, across the following categories: presentation, equipment and materials, response, scheduling and timing, and setting.
    • These researchers compared AA-MAS use data and attempted to relate it to the states’ accommodation policies.
  • Lazarus, Cormier, & Thurlow (2011). The Questions:
    • Do differences in the number of allowed accommodations on regular assessments differentiate states that plan to offer an AA-MAS from those that do not?
    • Can the number of allowable accommodations permitted on regular assessments predict (1) which states plan to use an AA-MAS and (2) the likelihood that a state will decide to develop an AA-MAS?
  • Lazarus, Cormier, & Thurlow (2011). Design:
    • Discriminant analysis; correlational design.
    • Participants: all 50 states.
    • Gathered state accommodation-policy data from the National Center on Educational Outcomes accommodation policies database (online).
    • Survey: completed by state directors of special education; collected a dichotomous variable (plan to / do not plan to develop an AA-MAS).
    • Subjected these variables to discriminant analysis.
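As a rough illustration of the kind of analysis described on this slide, the sketch below runs a linear discriminant analysis on synthetic data (not the study’s actual dataset); the toy labeling rule, variable names, and use of scikit-learn are all our own assumptions.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)

# One row per state: number of allowed accommodations in five policy
# categories (presentation, equipment/materials, response,
# scheduling/timing, setting).
X = rng.integers(0, 25, size=(50, 5)).astype(float)
# Dichotomous survey outcome: 1 = plans to develop an AA-MAS, 0 = does not.
# Toy rule for illustration: fewer presentation accommodations -> plans AA-MAS.
y = (X[:, 0] < 12).astype(int)

lda = LinearDiscriminantAnalysis()
lda.fit(X, y)

print("classification accuracy:", lda.score(X, y))
# Coefficient signs/magnitudes suggest each category's role in the separation.
print("discriminant coefficients:", lda.coef_)
```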
  • Lazarus, Cormier, & Thurlow (2011). Conclusion:
    • States that plan to offer an AA-MAS allowed significantly fewer accommodations in four categories: presentation, equipment and materials, scheduling or timing, and setting.
    • Presentation accommodations showed an especially strong direct correlation with states’ decisions about AA-MAS development.
  • Lazarus, Cormier, & Thurlow (2011). Impact:
    • Demonstrated that states allowing more accommodations on regular exams are less likely to develop an AA-MAS.
    • Conversely, states with restrictive accommodation policies are more likely to offer an AA-MAS.
  • Lazarus, Cormier, & Thurlow (2011). Impact:
    • Raises questions: do (1) accommodation training for IEP teams and (2) implementation of accommodations with fidelity (targeted use, rather than just permitting a variety of accommodations) reduce the likelihood of students being categorized as eligible for an AA-MAS?
    • Further study of the qualitative aspects of accommodation application is needed, perhaps with development of an index.
  • Elliott et al. (2010): EFFECTS OF USING MODIFIED ITEMS TO TEST STUDENTS WITH PERSISTENT ACADEMIC DIFFICULTIES
  • Elliott et al. (2010). The Questions:
    • Do AA-MAS-eligible students perform better on tests comprising highly accessible, modified items than on the original tests?
    • If the performances of eligible students improve on tests comprising modified items, what percentage of the students are likely to perform at a level deemed proficient in reading or math?
  • Elliott et al. (2010). Design:
    • Experimental research design.
    • Participants were selected from a homogeneous group of 8th graders with disabilities.
    • Students were sorted into those eligible and those ineligible to take an AA-MAS. This sorting was not used for treatment, merely as information for later data analysis.
    • From this original group, students were randomly assigned to 3 possible test sets.
  • Elliott et al. (2010). Design:
    • Each of these three test sets was given in three parts, administered in varying order (to control for order effects), for a total of 36 possible unique tests.
    • All groups were exposed to both modified and unmodified forms of the exam, so they served as their own controls.
    • The teachers administering the tests were trained with a common slide set.
  • Elliott et al. (2010). Conclusions:
    • AA-MAS-eligible students did significantly better on modified reading and math test items.
    Limitations:
    • Only 8th grade students were tested. This design should be repeated with elementary and high school students.
  • Elliott et al. (2010). Impact:
    • Do AA-MAS-eligible students perform better on tests comprising highly accessible, modified items than on the original tests? The observed positive effect of AA-MAS-type modifications suggests that this is a viable approach to testing students with disabilities who have poor test-performance histories.
    • If the performances of eligible students improve on tests comprising modified items, what percentage of the students are likely to perform at a level deemed proficient in reading or math? More students eligible for an AA-MAS could meet proficiency with modifications.
  • Kettler, Rodriguez, Bolt, Elliott, Beddow, & Kurz (2011): MODIFIED MULTIPLE-CHOICE ITEMS FOR ALTERNATE ASSESSMENTS: RELIABILITY, DIFFICULTY, AND DIFFERENTIAL BOOST
  • Kettler, Rodriguez, Bolt, Elliott, Beddow, & Kurz (2011). Question: Do tests composed of modified items have the same reliability as tests made of original items?
  • Kettler, Rodriguez, Bolt, Elliott, Beddow, & Kurz (2011). Design:
    • Experimental design used to test the reliability of the AA-MAS.
    • Three groups of 8th graders, defined by AA-MAS eligibility status, were chosen from several states as the participant pool (CAAVES project collection).
    • Participants randomly chosen from the pool took both original and modified reading and math tests.
  • Kettler, Rodriguez, Bolt, Elliott, Beddow, & Kurz (2011). Conclusion:
    • Reliability appears to be consistent on the AA-MAS compared to the original tests.
    • Analysis revealed that shortening the question stem was an especially effective modification, while adding graphics might be a poor modification.
  • Palmer (2009): STATE PERSPECTIVES ON IMPLEMENTING, OR CHOOSING NOT TO IMPLEMENT, AN ALTERNATE ASSESSMENT BASED ON MODIFIED ACADEMIC ACHIEVEMENT STANDARDS
  • Palmer (2009): Federal regulations give states discretion in choosing whether to implement an AA-MAS as part of their accountability systems.
  • Palmer (2009)
    • Non-regulatory guidance from the Department of Education advises that neither grade-level assessment (with or without accommodations) nor alternate academic achievement standards (AA-AAS) are appropriate for the group of students that an AA-MAS should target, because:
    • grade-level assessment is likely too difficult, and
    • the AA-AAS does not reflect the wide range of grade-level content and therefore does not demonstrate what these students know or the progress they have made.
  • Palmer (2009). Questions:
    • What are states’ perspectives on assessments based on modified academic achievement standards?
    • What are the reasons states choose, or do not choose, to implement an AA-MAS?
  • Palmer (2009). Design:
    • Two surveys were created: one for states that have chosen to implement an AA-MAS and one for those that have chosen not to.
    • Surveys were given to state-level directors of assessment in 24 states, and 22 responses were obtained.
  • Palmer (2009). Conclusion. Reasons given for not implementing an AA-MAS:
    • Lack of resources
    • Lack of guidance
    • Further complicates data comparability
    • Violates common expectations for all
    • The desire to “call” students “proficient” is not enough of a reason to develop an AA-MAS
  • Palmer (2009). Conclusion. Reasons given for choosing to implement an AA-MAS:
    • Improving accessibility
    • Improving appropriateness
  • Palmer (2009). Conclusion: Concerns about effectiveness were expressed by both sides (“it may not work”).
  • Palmer (2009). Impact:
    • Resources and guidance are needed to implement an AA-MAS.
    • Time will bear out the effectiveness or ineffectiveness of the AA-MAS and its acceptability among critics.
  • Christenson, Decker, Triezenberg, Ysseldyke, & Reschly (2007): CONSEQUENCES OF HIGH-STAKES ASSESSMENT FOR STUDENTS WITH AND WITHOUT DISABILITIES
  • Christenson, Decker, Triezenberg, Ysseldyke, & Reschly (2007)
    • Some states require high-stakes testing of both general education students and students with disabilities.
    • The original intent of high-stakes testing was to positively impact educational outcomes.
  • Christenson, Decker, Triezenberg, Ysseldyke, & Reschly (2007): There is some debate over unintended effects of high-stakes testing, such as:
    • the tests’ tendency to become the objects of instruction,
    • scores that don’t generalize to other assessments measuring similar academic skills, and
    • negative impact on the motivation of struggling students.
  • Christenson, Decker, Triezenberg, Ysseldyke, & Reschly (2007): Unintended and intended effects have not been well studied in a cross-state project for students with disabilities.
  • Christenson, Decker, Triezenberg, Ysseldyke, & Reschly (2007). Question: What are the intended and unintended effects of high-stakes testing for general and special education students?
  • Christenson, Decker, Triezenberg, Ysseldyke, & Reschly (2007). Design:
    • Descriptive survey of 249 general education teachers, special education teachers, and school psychologists from 99 schools in 19 states with mandatory high school exit exams.
    • Named the “Perspectives of Testing and Grade Promotion Survey.”
    • A pool of schools administering high-stakes testing was used to randomly select participants from elementary, middle, and high schools.
  • Christenson, Decker, Triezenberg, Ysseldyke, & Reschly (2007). Design:
    • Special education teachers were instructed to answer based on their students; general education teachers and psychologists were asked to answer based on students who did not pass high-stakes testing or who struggled with it.
    • Researchers oversampled by sending 3 surveys to each school to increase the return rate and also offered a $500 school-supply lottery for returned surveys, yet still achieved only an 11.6% return rate.
    • Respondents were asked to report on 64 observable events.
  • Christenson, Decker, Triezenberg, Ysseldyke, & Reschly (2007). Analysis:
    • Data were analyzed for disparities of at least 10% between special education teachers’ and general education teachers’ answers.
    • This disparity analysis may contain a great deal of bias that is not accounted for in the research design, because special education and general education teachers varied in their perspectives on students with disabilities.
  • Christenson, Decker, Triezenberg, Ysseldyke, &Reschley (2007)Analysis: Responses of general and special education teachers were ranked then put through Spearmans Correlation for rank order. Researchers reported that the “high degree of similarity in the rank order of observable events that were reported as having increased by the two respondent groups corroborates the high degree of concordance of the descriptive statistics reported in Table 2.” (Table 2 was simply the ranked order of observable events organized in order from highest to lowest.This is an invalid conclusion.
  • Christenson, Decker, Triezenberg, Ysseldyke, & Reschly (2007). Analysis: “How do I interpret a statistically significant Spearman correlation? It is important to realize that statistical significance does not indicate the strength of the Spearman rank-order correlation. In fact, the statistical significance testing of the Spearman correlation does not provide you with any information about the strength of the relationship. Thus, achieving a value of P = 0.001, for example, does not mean that the relationship is stronger than if you achieved a value of P = 0.04. This is because the significance test is investigating whether you can accept or reject the null hypothesis. If you set α = 0.05 then achieving a statistically significant Spearman rank-order correlation means that you can be sure that there is less than a 5% chance that the strength of the relationship you found (your rho coefficient) happened by chance if the null hypothesis were true.” http://statistics.laerd.com/statistical-guides/spearmans-rank-order-correlation-statistical-guide-2.php
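The quoted caveat is easy to demonstrate. In the synthetic example below (our own illustration, not the study’s data), a deliberately weak rank correlation still comes out highly “significant” simply because the sample is large.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(42)

n = 2_000
x = rng.normal(size=n)
y = 0.1 * x + rng.normal(size=n)  # weak underlying relationship

rho, p = spearmanr(x, y)
# rho stays small (~0.1), yet p is tiny because n is large:
# significance says nothing about the strength of the relationship.
print(f"rho = {rho:.3f}, p = {p:.2e}")
```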
  • Christenson, Decker, Triezenberg, Ysseldyke, & Reschly (2007). Analysis (continued):
    • Teachers were asked, “Are there differences in how grade advancement decisions are made for students with and without disabilities? If so, what differences exist?”
    • The researchers reported: “The majority of both general education teachers (45%) and special education teachers (51%) indicated that grade advancement decisions were either ‘occasionally’ or ‘almost never’ made in the same way for students with and without disabilities.”
    • What’s wrong with that observation? 45% is not a majority, and 51% is barely one; these figures much more accurately represent a split, and are certainly not grounds for drawing conclusions. Also…
  • Christenson, Decker, Triezenberg, Ysseldyke, & Reschly (2007). Analysis (continued): more bias.
    • “Are there differences in how grade advancement decisions are made for students with and without disabilities? If so, what differences exist?” was asked of both general and special education teachers.
    • There is obvious sampling bias in this question: general education teachers have experience with special education promotions due to inclusion rules, but the reciprocal experience of special education teachers with general education students’ promotion decisions is probably nonexistent.
  • Christenson, Decker, Triezenberg, Ysseldyke, & Reschly (2007). Impact:
    • Due to the numerous design problems, we do not recommend basing impact decisions on this research.
    • However, it might be useful in a more generalized way or as a starting point for further study.
  • Two Percent Flexibility: IMPLICATIONS OF THE RESEARCH FOR PRACTICE
  • Implications of the Research for Practice
    • Modified assessments can be developed that improve our ability to assess the progress of a select group of students.
    • In deciding whether or not to implement a new alternate assessment of any type, states should first look at their accommodation policies: could the population of students they are considering be just as easily tested with the regular state assessment if accommodation policies were changed?
  • Implications of the Research for Practice: The existing AA-MAS assessments, when appropriately designed, should be most beneficial for those students who meet qualifying criteria (this is called a “differential boost”) (Elliott et al., 2010).
    • Qualifying criteria need to be determined; they are partially set by NCLB.
    • Groups like the Consortium of Alternate Assessment Validity and Experimental Studies (CAAVES, part of Elliott et al.’s and Kettler et al.’s studies) have clarified the criteria.
  • Implications for Practice. Leading to our logic model…
    • AA-MAS were designed for a specific population and are not intended to be used with all students (thus the Two Percent / Three Percent cap).
    • Just as researchers carefully defined populations for their studies (Elliott et al., 2010; Kettler et al., 2011), schools must be careful in how they determine which students should take modified exams.
    • There are no federal mandates; therefore there is a risk of intentional or unintentional mis-assignment to the AA-MAS.
  • MARCH 2011 UPDATE (image from thinkchange.org; photo from namm.org)
  • March 2011 Update:
    • U.S. Secretary of Education Arne Duncan announced that the Department of Education will not continue to support the implementation of the AA-MAS (Kaloi, 2011).
    • The CEC has not responded to this change on its website.
  • March 2011 Update:
    • The NCLD (National Center for Learning Disabilities) was pleased that the “Two Percent” rule was discontinued; it has always been opposed to the AA-MAS.
    • States with an AA-MAS in place can continue using these tests until new assessments are developed (it is unclear what is coming next). “Every student with a learning disability should have every opportunity to achieve graduating from high school with a regular diploma with their peers” (Kaloi, 2011).
  • March 2011 Update: The “Proxy” issue. “The provision of an interim policy — sometimes called a ‘proxy’ — allowed …states that were working to develop the AA-MAS to count some students with disabilities who failed the general state assessment as ‘proficient’ for school/district accountability purpose. While limited to certain states…this interim policy was in effect for 5 years and expired in 2009. During that time, several states used this proxy but did NOT develop the AA-MAS. Some states pressured the Department of Education to extend the interim policy” (Kaloi, 2011).
  • March 2011 Update: NCLD’s additional opposition to the AA-MAS:
    • “The research basis [not cited] to support the new policy did not include a statistically reliable number of students with disabilities; however, the policy targets only these students.
    • …Too little was known about how to identify and determine which students should be taking the modified test. … students who are poor and from minority groups account for a large portion of failing students. Yet the 2% rule allows only students with disabilities to take the AA-MAS.
    • Many of the AA-MAS developed by states provide accommodations that are not allowed on the state general assessment. Instead of shifting students out of the general assessment, very restrictive accommodations policies should be reviewed and revised to provide the widest range of test accommodations” (Kaloi, 2011).
  • Two Percent Flexibility: LOGIC MODEL
  • Logic Model. The situation:
    • In Texas, one of the first states to develop an AA-MAS (even before the Two Percent Flexibility rule went into place), the assignment of students to the AA-MAS (TAKS-M or STAAR-M) and the AA-AAS has not been done with due consideration and fidelity. This assignment is not consistent from district to district and frequently not consistent from school to school within districts (based on personal observations).
    • As long as “M” tests are in place, consistent and appropriate assessment determination is needed.
  • Logic Model. The situation:
    • In a hypothetical district, Star of Texas Independent School District (STISD), IEP teams do not have the proper training to apply consistent, appropriate methods in determining students’ assignment to the various state assessment types.
    • Students with IEPs are inconsistently assigned to STAAR-Accommodated, STAAR-Modified, and STAAR-Alternate.
    • Factors such as teacher availability, ease of implementation, and inadequate instruction are used in determining students’ STAAR exam versions.
    • Students are inaccurately assessed and sometimes even insulted by their assignment to various test administrations.
  • Logic ModelThe situation: Inaccurate assignment to exams prohibits effective monitoring of student performance, leading to reduced effectiveness in students’ educational planning and implementation. Students are potentially prevented from achieving their full potentials. Schools and districts are not collecting the most accurate data and may be making formative decisions based on unjustified measures.
  • Logic Model framework: Inputs → Activities → Outputs (Participation) → Outcomes/Impact (Short, Medium, Long), with Assumptions and External Factors.
  • Logic Model. Inputs: standards and rules for who can take the AA-MAS:
    • NCLB Sec. 200.6(a)(3) regulations (the Two Percent Flexibility Rule) (Kettler et al., 2011)
    • State of Texas STAAR-M Participation Requirements (Texas Education Agency)
    • CAAVES (Consortium of Alternate Assessment Validity and Experimental Studies) AA-MAS Participation Decision Criteria (Elliott et al., 2012)
  • Logic Model. NCLB Sec. 200.6(a)(3): “a state may develop a new alternate assessment based on modified academic achievement standards or adapt its general assessment… [The AA-MAS] must cover the same grade-level content as the regular assessment… a State may employ a variety of strategies to design an [AA-MAS].”
    The AA-MAS is intended for students for whom:
    • the standards of the regular assessment are too difficult,
    • the standards of the AA-AAS are too easy,
    • disability/ies have prevented them from reaching proficiency, and
    • disability/ies make it unlikely that they will reach proficiency by the same standards within the same timeframe as students who are not eligible. (Kettler et al., 2011)
  • Logic Model. Texas STAAR-M Participation Requirements. Key indicators:
    • The PLAAFP leads the ARD committee to conclude the student is “multiple years behind,”
    • “will not progress at same rate as peers,” and
    • “disability significantly affects academic progress.”
  • Logic Model. Texas STAAR-M Participation Requirements. Key indicators:
    • TEKS-based goals in the IEP indicating modified content
    • Modified content specific to the area of need
    • IEP goals should address, at least generally, how content will be modified.
  • Logic Model. Texas STAAR-M Participation Requirements. Key indicators:
    • The student requires direct and intensive instruction for skill acquisition, maintenance, and transfer.
    • Direct = small-group/individualized instruction
    • Intensive = continuous and focused instruction
  • Logic Model. Texas STAAR-M Participation Requirements: “Modified coursework results in the student graduating on the Minimum High School Program (MHSP). Students who graduate on the MHSP are not eligible for automatic admission into a Texas four-year university.”
  • Logic Model. CAAVES AA-MAS Participation Decision Criteria:
    1. Student has a current IEP with goals based on academic content standards for the grade of enrollment.
       a. Does the IEP state that the instructional material/curriculum contains grade-level content?
       b. Are there statements from IEP members that goals and instruction align with grade-level content standards? (Kettler et al., 2011)
  • Logic Model. CAAVES AA-MAS Participation Decision Criteria:
    2. Student’s disability precludes him/her from achieving grade-level proficiency, as demonstrated by performance on assessments that can validly document academic achievement.
       • Previous year’s tests documenting performance at the lowest proficiency level(s), or equivalent testing documentation. (Kettler et al., 2011)
  • Logic Model. CAAVES AA-MAS Participation Decision Criteria:
    3. Student’s progress to date in response to appropriate instruction is such that, even with significant growth, he/she will not achieve grade-level proficiency within the year.
       • Written description of research-based instruction, and either two years of class performance records, three years of state achievement test scores, or multiple curriculum-based measurement scores. (Kettler et al., 2011)
    (A sketch of these three criteria as a simple checklist follows below.)
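Here is the promised sketch: a hypothetical rendering of the three CAAVES criteria as a simple checklist. The field names and structure are our own illustration, not an official eligibility tool, and the real decision remains with the student’s IEP (ARD) team.

```python
from dataclasses import dataclass

@dataclass
class StudentRecord:
    iep_goals_on_grade_level_content: bool   # criterion 1
    lowest_proficiency_on_prior_tests: bool  # criterion 2
    unlikely_to_reach_proficiency: bool      # criterion 3 (documented)

def aa_mas_candidate(s: StudentRecord) -> bool:
    """All three CAAVES criteria must hold for a student to be considered;
    the IEP (ARD) team still makes the actual assignment, case by case."""
    return (s.iep_goals_on_grade_level_content
            and s.lowest_proficiency_on_prior_tests
            and s.unlikely_to_reach_proficiency)

# A student meeting only two of the three criteria is not a candidate.
print(aa_mas_candidate(StudentRecord(True, True, False)))  # -> False
```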
  • Logic Model. Inputs: people knowledgeable about Two Percent Flexibility, students with disabilities, and the ARD (Admission, Review, and Dismissal) process:
    • Rebecca & Kalene
    • Professors and colleagues at Texas Tech
    • Contacts at the Texas Education Agency (TEA)
    • Cooperation from STISD administration
    • Time and technology (PowerPoint, email)
  • Logic Model. Activities:
    • Develop training and checklists for IEP teams using NCLB, CAAVES, and TEA documentation.
    • Develop train-the-trainer materials to enable schools to conduct their own trainings with additional/new staff members.
    • Present the training to STISD administration for their feedback and approval.
    • If permission is granted, survey parents and teachers to determine the district’s current test assignment systems, attempting to understand the extent of inappropriate test assignments.
  • Logic Model. Activities:
    • Modify the training as necessary.
    • Schedule and implement the training on individual campuses with key IEP team members.
    • Collect feedback from training participants to determine impact: Is the process clearer? Will rules/policies be adhered to? Will administrators/supervisors monitor for fidelity?
  • Logic Model. Participation:
    • Presentation/training team: Rebecca, Kalene
    • STISD administration
    • STISD school IEP team key members: special education teachers, diagnosticians, administrators, others
    • Parents/other teachers (if permission is given for the initial survey)
  • Logic Model. Short-term Outcomes/Impact:
    • IEP teams will have an increased understanding of: the intended population for STAAR-M; the moral and legal obligation to correctly select the test that is best for each student; and how to use a checklist to improve assessment assignment.
    • IEP teams will have checklists to use when assigning assessments.
    • Administrators will understand and be able to evaluate whether students are appropriately assessed.
  • Logic Model. Medium-term Outcomes/Impact:
    • IEP teams will use the checklists.
    • IEP teams will base all assessment assignment decisions on the intended criteria.
    • IEP teams will continue the train-the-trainer model.
    • Administrators will monitor the fidelity of assessment assignment and will follow up when inappropriate assignments have been made.
    • Students will be assigned to STAAR-M only when it is the appropriate assessment for evaluating their progress.
  • Logic Model. Long-term Outcomes/Impact:
    • Accurate assignment to exams enables effective monitoring of student performance, leading to enhanced effectiveness in students’ educational planning and implementation.
    • Students are better enabled to achieve their full potential.
    • Schools and districts collect the most accurate data and make formative decisions based on justified measures.
  • Logic Model. Assumptions:
    • Administration will be on board with the goals of the training.
    • The presentation will have a positive influence on IEP team decision making.
    • The presentation will not conflict with existing policies, and/or the school will be willing to update current practices and policies to match the suggested criteria and indicators.
  • Logic Model. External factors:
    • A limited amount of time will be available for training. (Consideration: perhaps ultimately an online training/webinar could be developed.)
    • Current IEPs that mis-assign students may not be changed before testing this school year.
    • Parents’ influence on testing decisions.
    • Some IEP team members may never receive training.
    • Changing assessment systems in the state of Texas.
  • Logic Model (summary diagram): Inputs (standards, people, time, technology) → Activities (develop training, present training, survey, evaluate impact) → Outputs/Participation (checklists; trainers, administration, IEP teams) → Outcomes/Impact (short: knowledge; medium: actions; long: impact). Assumptions: reception from the district, existing policies. External factors: time, existing influences.
  • Two Percent Flexibility: QUESTIONS?
  • References
    • American Association for Public Opinion Research. (n.d.). Best practices. Retrieved from AAPOR website: http://www.aapor.org
    • Christenson, S. L., Decker, D. M., Triezenberg, H. L., Ysseldyke, J. E., & Reschly, A. (2007). Consequences of high-stakes assessment for students with and without disabilities. Educational Policy, 21, 662-690. doi:10.1177/0895904806289209
    • Council for Exceptional Children. (2011). Two percent flexibility. Retrieved from CEC website: http://www.cec.sped.org
    • Elliott, S. N., Kettler, R. J., Beddow, P. A., Kurz, A., Compton, E., McGrath, D., . . . Roach, A. T. (2010). Effects of using modified items to test students with persistent academic difficulties. Exceptional Children, 76(4), 475-495.
    • Gersten, R., Fuchs, L. S., Compton, D., Coyne, M., Greenwood, C. R., & Innocenti, M. S. (2005). Quality indicators for group experimental and quasi-experimental research in special education. Exceptional Children, 71(2), 149-164.
    • Kettler, R. J., Rodriguez, M. C., Bolt, D. M., Elliott, S. N., Beddow, P. A., & Kurz, A. (2011). Modified multiple-choice items for alternate assessments: Reliability, difficulty, and differential boost. Applied Measurement in Education, 24, 210-234. doi:10.1080/08957347.2011.580620
  • References (continued)
    • Kaloi, L. (2011). U.S. Department of Education finally backs away from a policy that masks student performance. Retrieved from NCLD website: http://www.ncld.org/archive/entry/1/149
    • Lazarus, S. S., Cormier, D. C., & Thurlow, M. L. (2011). States’ accommodations policies and development of alternate assessments based on modified achievement standards: A discriminant analysis. Remedial and Special Education, 32, 301-308. doi:10.1177/0741932510362214
    • Palmer, P. W. (2009). State perspectives on implementing, or choosing not to implement, an alternate assessment based on modified academic achievement standards. Peabody Journal of Education, 84, 578-584. doi:10.1080/01619560903241051
    • Texas Education Agency. (2011). STAAR modified participation requirements. Retrieved from TEA website: http://www.tea.state.tx.us/student.assessment/special-ed/staarm/partreqs/
    • Thompson, B., Diamond, K. E., McWilliam, R., Snyder, P., & Snyder, S. (2005). Evaluating the quality of evidence from correlational research for evidence-based practice. Exceptional Children, 71(2), 181-194.
    Unless otherwise indicated, all clipart and images are from Microsoft clipart.