"Assessment, Like Revision, Is Recursive"


Presentation at NISOD 2010



  1. Session description
     Our curricula, ever evolving, are described as nimble. Our institution prides itself on its adaptability and resiliency. But are our assessment practices keeping pace with our methods and content? In the wake of the implementation of proficiency-based admissions thresholds and placement policies, the state initiated system-wide alignment in order to achieve common course numbering and course objectives. As a result, the English faculty has reconstructed its notions about writing tasks, processes, and assessment models within both developmental and transfer-level courses. This presentation will share some of our ways of thinking about student writing, including tasks and assessment, and encourage you to share your experiences as well.
  2. Assessment, like revision, is recursive: Re-designing and re-thinking metrics and methods for the assessment of student writing
     Jana Carter
     Montana State University Great Falls
     English Department
     NISOD, May 2010
  3. Why this is important
  4. Why this is important
     Too often, especially within two-year programs, Freshman Composition is a student's last opportunity for explicit instruction in writing skills.
     While we know there is value in the explicit instruction and opportunities for guided practice we offer in Freshman Composition, we also know that transitioning into writing in the disciplines is often difficult for students.
  5. Why this is important
     Ellen Lavelle, in "The Quality of University Writing: a preliminary analysis of undergraduate portfolios," writes:
     Surprisingly, in examining the differences between early and late samples [of 325 student writing portfolios collected over eight years at Southern Illinois University Edwardsville], early writing was supported as significantly better than late writing, as measured by both the holistic scoring method and by the deep-and-surface rubric. (91)
  6. Enrollment
  7. Developmental courses
  8. Transfer courses
  9. Enrollment challenges
     The number of sections taught by part-time faculty makes it necessary for us to communicate our expectations to them clearly.
     Challenges for maintaining coherence:
     - Finding qualified and experienced part-time faculty to teach face-to-face
     - Navigating the adjunct hiring process
     - Securing the department chair's active participation in hiring, evaluating, and mentoring part-time faculty
     - Time
  10. What to do as enrollment rises
     Cheer! Additional sections and an infusion of instructional staff new to the institution provide exciting opportunities.
     Groan. Anecdotal perception of a wide range of student skills entering the next course in the sequence.
     Get to work. Develop curriculum and coherence, develop staff, achieve buy-in, identify a pool of available contingent faculty, develop assessment practices, and retool metrics.
  11. Getting to work: Achieving buy-in and building coherence across sections
     We started with staff development and the development of shared practices:
     - Annual kick-off meetings to discuss objectives, departmental philosophies, and practices, and to share ideas and activities
     - Regular departmental meetings, to which adjuncts are invited, including time for mentoring, brainstorming, and problem solving
     - A course template, maintained by the department, for each course in the composition sequence
     - A common final exam for WRIT101
     - Level-specific textbook and materials recommendations, maintained by the department
     - A centralized book-ordering process through the Division secretary to avert problems with editions, unendorsed texts, and communication
  12. Focused discussion: Adjuncts and high-enrollment courses
     How does your department build collaboration between adjunct faculty and full-time faculty?
     How does your institution support efforts to achieve coherence across multiple sections of courses?
  13. Statewide articulation project: The Montana University System Transfer Initiative
  14. MUS Transfer Initiative
     "Faculty Learning Outcomes Councils [are] drawn from specific disciplines at institutions throughout the state [and] examine individual transferable courses and reach consensus about appropriate learning outcomes for each one" (http://mus.edu/transfer/index2.asp).
     - A representative attended the relevant councils
     - We worked to articulate developmental courses as well as transfer-level courses to increase the portability of credits for highly mobile students
  15. MUS Transfer Initiative: Articulation matrix
     The Faculty Learning Outcome Councils wrote objectives.
     We revised our curriculum so that we were able to articulate each course in our Composition sequence.
  16. MUS Transfer Initiative: Participating institutions
  17. Outcomes
  18. How our outcomes measure up: 2004
     WRIT 101 course outcomes in 2004:
     Topics, activities, and philosophy, little of which described observable student behaviors, provided the conditions under which performance should occur, or defined acceptable performance.
     In effect, what we had was a mess.
     Best practices from instructional design:
     "A learning objective is a statement of what students will be able to do when they have completed instruction. A learning objective has three major components:
     - A description of what the student will be able to do
     - The conditions under which the student will perform the task
     - The criteria for evaluating student performance" (Arreola)
  19. How our outcomes measure up: 2006
     WRIT 101 course outcomes in 2006:
     A specific list of observable student skills, conditions under which performance should occur, and oblique references to performance thresholds.
     In effect, what we created was a list of enabling objectives.
     Best practices from instructional design:
     "Well-written learning objectives are often written at two levels:
     - Terminal objectives: The primary skills (desired end results of instruction)
     - Enabling objectives: Incremental skills that allow the desired end results to occur" (Shank 5)
  20. How our outcomes measure up: 2010
     WRIT 101 course outcomes today:
     A specific list of observable student skills, conditions under which performance should occur, and references to rubric levels.
     The Montana University System Transfer Initiative provided terminal objectives, but we still have far too many.
     Best practices from instructional design:
     "Each individual learning objective should support the overarching goal of the course, that is, the thread that unites all the topics that will be covered and all the skills students should have mastered by the end of the semester. Best practice dictates that learning objectives be kept to no more than half a dozen" ("Learning Objectives")
  21. Focused discussion: Outcomes
     Who writes your outcomes?
     Does that level of control/autonomy allow you to meet the needs of your campus or community?
     How do you participate in revision of those outcomes?
  22. Performance levels and indicators
  23. Where we got stuck
     Primary challenges of assessment at the department level: "fear of compromised autonomy," "methodological skepticism," and "lack of time" (Beld 6-7)
     We didn't get stuck when we wrote outcomes. We did get stuck figuring out how to talk about and negotiate performance levels and indicators!
  24. Why we got stuck
     A recipe for disaster: equal parts preparation and professional experience, but no institutional history or common expectations to work from
     We had to recognize that what each of us valued as good academic writing was very different
     We had to realize that we weren't going to develop a perfect solution or model on the first attempt
     Northern Arizona University initiated a rubrics program in response to a very similar problem (Garcia 13); while we're no NAU, our rubrics project was productive. It built community by providing a springboard for discussion about values, beliefs, training, preparation, and professional experience
  25. How we got un-stuck
     Recommendations:
     - "focusing on intended users and uses," by asserting that "methodological choices should be governed by users and uses—not simply to foster ownership of the results, but to enhance their quality" (Beld 8);
     - "limiting the agenda," by focusing on an annual departmental assessment agenda that is "intentionally modest" (Beld 8); and
     - "treating findings as a means to an end, not as the ends in themselves" (Beld 8)
     How we've applied those principles:
     Collaborate on all assessment-related work, think small, and stay flexible
  26. Evolution of assessment: Where we've been
     2004
     No official departmental standards of performance: assessment was primarily individualized and private
     2006
     Beginnings of rubrics to align with the Montana University System Writing Assessment (MUSWA), COMPASS e-Write, and other placement instruments
  27. Evolution of assessment: Where we are today
     WRIT095, WRIT101, and WRIT201 are paperless classes
     Tools provided by our LMS improve the quality of course administration and reduce the time it requires, which is consistent with the literature: "A practical benefit of using e-learning tools mentioned most often by the lecturers was reducing effort and time they spent on administration" (Heinrich, Milne, and Moore 180)
     We use rubrics to gauge performance on enabling objectives (defined locally) and terminal objectives (defined by the FLOC) for WRIT101
  28. Arriving at the composite score paradigm
     Benefits:
     - The content and conventions rubrics together provide a fair and representative composite score
     - Grammar is not an emphasis of WRIT101; the conventions rubric places value on careful editing
     - Rubrics edited for "mean-spiritedness" (Sommers 149) at the C, D, and F levels
     - Models of student writing
     - Buy-in from stakeholders: collaboration, use, and wide/early distribution
     Drawbacks:
     - Limitations of the current release of our LMS mean our composite score is taken from a chart and explained by an external document
     - We don't yet have a large pool of essay models
     - We do a good job training students to analyze writing using the language of the rubrics, but we don't yet do as good a job explaining holistic scoring to colleagues
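The chart-based composite score described on this slide can be sketched in a few lines. Everything specific here is a hypothetical illustration, not the department's actual chart: the 1-4 rubric scale, the 60/40 weighting of content over conventions, and the letter-grade cutoffs are all assumptions made only to show the two-rubrics-to-one-score shape of the idea.

```python
# Hypothetical sketch of a two-rubric composite score.
# The 1-4 scale, the 60/40 weighting, and the letter cutoffs below are
# illustrative assumptions, not the department's published chart.

CONTENT_WEIGHT = 0.6      # assumed emphasis on content over conventions
CONVENTIONS_WEIGHT = 0.4


def composite_score(content: float, conventions: float) -> float:
    """Weighted composite of content and conventions rubric scores (1-4 scale)."""
    for score in (content, conventions):
        if not 1.0 <= score <= 4.0:
            raise ValueError("rubric scores must fall between 1 and 4")
    return CONTENT_WEIGHT * content + CONVENTIONS_WEIGHT * conventions


def letter_grade(composite: float) -> str:
    """Map a composite score to a letter grade using hypothetical cutoffs."""
    cutoffs = [(3.5, "A"), (3.0, "B"), (2.5, "C"), (2.0, "D")]
    for cutoff, letter in cutoffs:
        if composite >= cutoff:
            return letter
    return "F"


# Example: strong content (4) with weaker conventions (2)
# still earns a passing composite under these assumed weights.
print(letter_grade(composite_score(4.0, 2.0)))
```

One design point worth noting: weighting content more heavily than conventions matches the slide's claim that grammar is not an emphasis of WRIT101, while still letting careless editing pull a composite down.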
  29. Evolution of assessment: Looking ahead
     - Formalize the final exam processes
     - Train adjunct faculty to increase inter-rater reliability on the final exam; it's unlikely we'll achieve California's 90% (U.S. Congress, Office of Technology Assessment) because our process is not that formal, but we'll be much closer
     - Refine rubrics and assignments to better meet enabling objectives, e.g., information literacy
     - Collect more models of student essays, especially essays that require research and handle sources properly
     - Investigate rubrics that are bigger than we are, e.g., the AAC&U's VALUE project, which provides rubrics for skill areas ("VALUE: Valid Assessment of Learning in Undergraduate Education")
  30. Works Cited
     Beld, Jo Michelle. "Engaging Departments in Assessing Student Learning: Overcoming Common Obstacles." Peer Review 12.1 (2010): 6-9. Academic Search Premier. EBSCO. Web. 23 May 2010.
     Garcia, Paula. "Involving Graduate Teaching Assistants in Freshman Composition in Rubric Implementation." Assessment Update 18.6 (2006): 1-13. Academic Search Premier. EBSCO. Web. 23 May 2010.
     Heinrich, Eva, John Milne, and Maurice Moore. "An Investigation into E-Tool Use for Formative Assignment Assessment - Status and Recommendations." Journal of Educational Technology & Society 12.4 (2009): 176-192. Academic Search Premier. EBSCO. Web. 23 May 2010.
     Lavelle, Ellen. "The Quality of University Writing: a preliminary analysis of undergraduate portfolios." Quality in Higher Education 9.1 (2003): 87. Academic Search Premier. EBSCO. Web. 23 May 2010.
     "Learning Objectives (Teaching and Learning Laboratory @ MIT)." MIT. Web. 28 May 2010. <http://web.mit.edu/tll/teaching-materials/learning-objectives/index-learning-objectives.html>.
     Shank, Patti. "Writing Learning Objectives That Help You Teach and Students Learn (Part 1)." Online Classroom (2005): 4-7. Academic Search Premier. EBSCO. Web. 23 May 2010.
     U.S. Congress, Office of Technology Assessment. American Schools: Asking the Right Questions. [Full Report.] Washington DC: U.S. Government Printing Office, 1992. ERIC. Web. 23 May 2010.
     "VALUE: Valid Assessment of Learning in Undergraduate Education." Association of American Colleges and Universities. Web. 23 May 2010. <http://www.aacu.org/value/index.cfm>.
     "Writing Assessment: A Position Statement." National Council of Teachers of English. CCCC. Web. 23 May 2010. <http://www.ncte.org/cccc/resources/positions/writingassessment>.
  31. Contact and presentation information
     Jana Carter
     MSU Great Falls
     jcarter@msugf.edu
     406.771.4363 (office)
     406.530.9748 (mobile)
     Documents available online: http://www.slideshare.net/JanaCarter/nisod-jcarter
     Session evaluation: http://www.surveymonkey.com/s/6MC2YBN