1. Presentation to the Legal Studies Faculty By: Michele Lamontagne
2. Kaplan faculty are all specialists in their fields, but most of us are not trained teachers. We are all working to better the program. As part of our professional development program, and to meet accreditation standards, we must work to uniformly assess the program and align our learning objectives.
3. The reputation of our program depends on our students entering the workplace with the requisite knowledge and superior performance standards. Setting learning objectives will ensure that each graduate meets these standards. Learning objectives must be set in the following categories: declarative knowledge, procedural knowledge, and problem solving (Oosterhof, Conrad, & Ely, 2008).
4. “Any knowledge that can be expressed verbally, such as factual information and explanations of principles, procedures and trends” (Oosterhof et al., 2008, p. 16). Learning objectives for declarative knowledge will include: ◦ Legal terms and proper use of vocabulary ◦ Research sources and their hierarchy ◦ Civil and criminal procedural rules ◦ Understanding of the judicial system’s place in government
5. “Knowledge that involves doing something, such as making discriminations, understanding concepts, and applying rules that govern relationships” (Oosterhof et al., 2008, p. 16). Learning objectives for procedural knowledge will include: ◦ Determining jurisdiction elements for a case ◦ Distinguishing between a subpoena and a summons ◦ Organizing documents for depositions ◦ Completing electronic filing of court documents
6. “Involved when one has a goal and has not yet identified a means for reaching that goal; requires use of existing declarative and procedural knowledge” (Oosterhof et al., 2008, p. 16). Learning objectives for problem solving will include: ◦ Procedure for maintenance of privileged documents in multidistrict litigation ◦ Organization of a system to check potential client conflicts
7. Learning objectives should flow logically from the course curriculum. In setting objectives, follow the tasks to be learned in the order of the course. Learning objectives should actively express what a student will learn or be able to do by the end of the course, for example: “help the student to …”; “the student will identify …”; “the student will discover the …”; “introduce the student to …” (Mihram, 2007).
8. Performance assessment offers an opportunity to assess student outcomes through observation; an example would be having a student participate in a mock trial. The four components of performance objectives are: ◦ Type of capability – information, discrimination, concept, or rule ◦ Behavior – the learning outcome must be observable in a specific behavior ◦ Situation – the context in which the behavior is exhibited ◦ Special conditions – conditions that must be present to demonstrate the objective through the learner’s behavior (Oosterhof et al., 2008).
9. Use multiple measures of student performance. Structure authentic assessments that are based in real life. Assessments must be designed with this question in mind: What do we want our students to know and be able to do at the end of the course? Assignments should have explicit, clear directions and grading criteria (Palloff & Pratt, 2009).
10. Assessments must be constructed to align with the outcomes of the course. Assessments should meet the higher learning levels of Bloom’s Taxonomy: application, analysis, synthesis, and evaluation. By reviewing the different levels of learning, instructors can create assessments that “measure outcomes appropriate to the course level” (Palloff & Pratt, 2009, p. 23).
11. Formative assessments should be designed to give a student a better understanding of the standards required of them in a particular discipline (Yorke, 2003). In the legal studies curriculum, the formative assessment must evaluate student performance as it relates to an understanding of the applicable rules and procedures. For example, a multiple-choice test that asks the student to pick the best answer in light of particular facts would assess understanding of the rules.
12. Summative assessments examine what a student has learned over a longer period of time and review overall application of the course. Students examine a wide spectrum of rules and concepts across several disciplines in light of their own real-life expectations. Performance would be measured on how well they apply the rules to the plan and on their ability to create a cohesive strategy. A clear rubric for this assessment is essential for students to gauge their performance against that expected by the instructor (Palloff & Pratt, 2009).
13. Performance assessments allow the instructor to observe student skills in action. They are good at evaluating procedural knowledge, particularly the application of rules and problem solving. These assessments evaluate the learning process, not just the end product. Scoring them can be challenging since a “holistic judgment” is made; rubrics are critical for guiding students on the overall task and expectations (Oosterhof et al., 2008).
14. Students need to be empowered learners for both learning and assessments. Courses need to include activities with discussions, collaborative work, and self-reflection. Peer review and evaluation will help students develop good feedback skills, and reflective journals can assist students in evaluating their own performance over the entire course (Palloff & Pratt, 2009).
15. Online assessments go beyond standard tests and quizzes. Instructors are encouraged to use additional methods that involve students and measure learning through reflection. Instructors should provide opportunities to demonstrate understanding through activities that allow a student to apply a concept to new situations; this assesses a student’s abilities beyond rote understanding. Blogs, wikis, and posts allow students to participate actively with classmates and obtain instructor feedback, while reflective journals and portfolios permit students to self-assess their progress through the program (Palloff & Pratt, 2009).
16. At the course level, assessments should strive to measure all six levels of Bloom’s Taxonomy. Knowledge and comprehension can be measured through traditional, objective criteria (formative assessments). The remaining levels, however (application, analysis, synthesis, and evaluation), will require assessment through performance evaluation criteria (Palloff & Pratt, 2009).
17. Learner feedback can take the form of peer review, self-reflection, and instructor feedback on assessments. No matter how the student input is incorporated, it needs to be closely moderated by the instructor. Peer review should follow a clear rubric to ensure that responses are focused and useful. Reflective journals are helpful for self-assessment and are usually shared with the instructor. Course and instructor evaluations are essential for obtaining information about particular assessments and their usefulness to the student (Palloff & Pratt, 2009).
18. Student assessment should contain a component for learner feedback. These responses should be posted on a discussion board so that the instructor can monitor them and intervene if necessary. The instructor should set expectations for the feedback: ◦ Respond to a question ◦ Reflect on what is being discussed ◦ Move the discussion in a new direction ◦ Ask a question or offer a reflection for further thinking (Palloff & Pratt, 2009).
19. Performance assessments can be created to measure a single task or complex-task performance. There must be a specific capability to be assessed. Guidelines include: ◦ Describe the particular task ◦ Decide whether the focus is on the process or the product ◦ Identify the skills that will be verified ◦ Set clear instructions (Oosterhof et al., 2008).
20. Challenges arise when the instructor cannot observe student behavior. Ambiguity in the assessment can be a major obstacle to student success; clear directions and rubrics help students meet instructor expectations. Increasing the number of observations of student work increases the opportunity to measure student competencies. Scoring plans will assist the instructor in measuring more complex summative and performance assessments. Feedback also plays an important role in grading student performance: grading papers and tests requires more time from the instructor, while a comment during a discussion or experiment can be immediate (Yorke, 2003).
21. Comparison with a model – student work is compared to a sample or model completed by the instructor. Checklist – provides a listing of criteria that the student must meet. Rating scales – similar to a checklist, but supplies a rating of how well the student completed the task. Scoring rubrics – supplies a range of how well the student completed an activity, but allows several skills to be assessed at one time (Oosterhof et al., 2008).
22. When assessments are complete, the next step is to determine whether they met the learning objectives developed at the beginning of the course. Use multiple measures, including all feedback processes, to review the assessments. Check whether the assessment practice is reliable, valid, and useful (Palloff & Pratt, 2009).
23. We must continue to evaluate the tools for assessment and whether they support the curriculum. As an ongoing task we must review: ◦ Student surveys ◦ Alumni surveys ◦ Employer feedback and student job placement ◦ Professional standards ◦ Student performance on standardized tests (Palloff & Pratt, 2009). Then the process starts again!
24. Mihram, D. (2007). Assessment tools. http://cet.usc.edu/resources/teaching_learning/docs/Assessment_Sept_07_Final_.ppt
Oosterhof, A., Conrad, R., & Ely, D. (2008). Assessing learners online. Upper Saddle River, NJ: Pearson.
Palloff, R., & Pratt, K. (2009). Assessing the online learner. San Francisco, CA: John Wiley and Sons.
U.S. Department of Education, Office of Planning, Evaluation, and Policy Development. (2009). Evaluation of evidence-based practices in online learning: A meta-analysis and review of online learning studies. Washington, DC. http://www.ed.gov/rschstat/eval/tech/evidence-based-practices/finalreport.pdf
Yorke, M. (2003). Formative assessment in higher education: Moves towards theory and the enhancement of pedagogic practice. Higher Education, 45(4), 477.