This document provides an overview of using rubrics for assessment. It discusses the definition and types of rubrics, including analytic and holistic rubrics, and covers best practices for developing them, such as choosing criteria, writing descriptors, and implementing the finished rubric. Examples of analytic and holistic rubrics are included. The document is intended to introduce rubrics and provide guidance on creating and using them for assessment.
Validity refers to the degree to which a test accurately measures what it intends to measure. Content validity means a test samples relevant skills, while criterion-related validity compares test scores to external criteria. Reliability means a test gives consistent results. Key factors for reliability include multiple test items, clear instructions, uniform administration conditions, and scorer reliability through objective scoring and scorer training. While reliability ensures consistent results, a test may be reliable without being valid if it does not accurately measure the target construct. Both validity and reliability are important for effective test design and interpretation.
This short SlideShare presentation explores a basic overview of test reliability and test validity. Validity is the degree to which a test measures what it is supposed to measure. Reliability is the degree to which a test consistently measures whatever it measures. Examples are given as well as a slide on considerations for writing test questions that demand higher-order thinking.
The document provides an overview of test specifications and how to write test items and tasks. It discusses:
1. Test specifications (specs) guide the creation of test content and help ensure equivalence, reliability, and validity. Specs describe how to structure tests and make difficult authoring choices.
2. Effective test development is iterative and spec-driven. Specs evolve as tests are refined through discussion. Items and tasks should be written to fit evolving specs rather than independently.
3. Evidence-centered design (ECD) takes a scientific, evidence-based approach to assessment, providing a systematic framework for relating test performance to the constructs a test is meant to measure. ECD models guide test design from defining constructs to assembling and delivering the full test.
This document discusses different types of tests and assessments. It defines formative and summative assessment, and describes various types of tests including proficiency tests, achievement tests, diagnostic tests, and placement tests. It also discusses the differences between direct and indirect testing, discrete point and integrative tests, norm-referenced and criterion-referenced tests, and objective and subjective tests. The document provides examples and details on how each type of test is designed and scored.
The document discusses the validity of assessment tools. It defines validity as the degree to which a test measures what it is intended to measure. There are several types of validity discussed, including content validity (ensuring appropriate content coverage), construct validity (measuring relevant constructs), criterion validity (correlation with external criteria), concurrent validity (correlation with other measures at the same time), and predictive validity (ability to predict future outcomes). Establishing validity requires considering many factors, and reliability is a prerequisite for validity. Validity is crucial for tests to accurately measure achievement.
Standardized tests are designed to have consistent objectives and criteria across different forms of the test. They measure students' mastery of prescribed grade-level competencies. Developing a standardized test involves determining its purpose, designing test specifications, creating and selecting test items, evaluating items, specifying scoring procedures, and ongoing validation studies. The document outlines these steps and provides examples of standardized language proficiency tests like TOEFL and IELTS.
The document discusses key concepts related to testing, assessment, and teaching. It covers:
- The differences between assessment and tests: assessment is broader and more ongoing, while tests are more formal, administered instruments.
- The importance of both formative and summative assessment in the learning process: formative assessment helps students improve, while summative assessment evaluates what they have learned.
- Approaches to language testing including discrete point tests, integrative tests, and communicative language testing which focuses on authentic performance.
- Current issues, such as the newer view that intelligence is multidimensional, and the benefits and challenges of traditional versus alternative and computer-based assessments.
A presentation about different types of assessment tools that can be used in assessing language. It also offers some meaningful insights about language tests and language assessment.
Validity and reliability – Language testing (Phuong Tran)
The document discusses test reliability and validity. It defines reliability as the degree to which a test is free from random measurement error, and validity as the degree to which a test measures the intended construct. There are several factors that can affect test reliability and validity, including test method, personal attributes of test takers, and random factors. Reliability is necessary for validity but not sufficient, as validity also requires examining the relationship between test scores and other relevant criteria. The document outlines various approaches for estimating reliability and gathering evidence to support validity.
Language testing and evaluation: validity and reliability (Vadher Ankita)
This document discusses validity and reliability in language testing. It defines different types of validity including content validity, construct validity, criterion validity (concurrent and predictive validity), and face validity. It also explains how to judge the validity of a test and how to ensure it measures what it intends to measure. The document also defines different types of reliability such as equivalency, stability, internal, inter-rater, and intra-rater reliability. It provides examples of how each type is measured to ensure consistency in testing.
This document discusses two types of language testing: discrete point testing and integrative testing. Discrete point testing evaluates specific grammar points, words, or structures in isolation through individual questions. Integrative testing evaluates multiple language abilities simultaneously through tasks that require comprehending and producing real connected text, such as cloze tests, dictations, translations, essays, interviews, and reading passages. The document provides examples of both discrete point and integrative testing questions and formats.
Reliability (assessment of student learning I) (Rey-ra Mora)
Reliability refers to the consistency of test results over time and across raters. There are several potential sources of error in test scores, including issues with the test-taker, test administration, test scoring, and test construction. Several methods can be used to estimate a test's reliability, including test-retest reliability, inter-rater reliability, parallel forms reliability, internal consistency reliability, split-half reliability, and the Kuder Richardson method. Ensuring high reliability is important so that tests produce consistent results.
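To make one of these estimation methods concrete, the sketch below shows split-half reliability with the Spearman-Brown correction. This is an illustration only, not an example from the document: the score matrix is invented, and in practice you would use far more test takers and items.

```python
# Split-half reliability with the Spearman-Brown correction.
# Illustrative sketch: the score matrix is invented, not from the document.
import numpy as np

# Rows = test takers, columns = items (1 = correct, 0 = incorrect).
scores = np.array([
    [1, 1, 1, 0, 1, 1, 0, 1],
    [1, 0, 1, 1, 1, 0, 1, 0],
    [0, 1, 0, 0, 1, 1, 0, 1],
    [1, 1, 1, 1, 1, 1, 1, 1],
    [0, 0, 1, 0, 0, 1, 0, 0],
    [1, 1, 0, 1, 1, 1, 1, 0],
])

# Split the test into odd- and even-numbered items and total each half.
odd_half = scores[:, 0::2].sum(axis=1)
even_half = scores[:, 1::2].sum(axis=1)

# Correlate the two half-test scores.
r_half = np.corrcoef(odd_half, even_half)[0, 1]

# Spearman-Brown: estimate the reliability of the full-length test
# from the correlation between its two halves.
r_full = (2 * r_half) / (1 + r_half)
print(f"half-test correlation: {r_half:.2f}, full-test reliability: {r_full:.2f}")
```

The same score matrix could feed the other estimates the document lists (test-retest would instead correlate totals from two administrations; KR-20 works from per-item variances).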
PowerPoint based on the article "Testing for language teachers" (Arthur Hughes), pages 83 to 112 (Chapter 9: Testing writing). This work was done by Idoia Argudo and Marta Ribas for a course at the Universidad de Cantabria.
This document outlines various topics related to language testing, including types of tests, approaches to testing, validity and reliability, and achieving beneficial backwash effects. It discusses proficiency tests, achievement tests, and diagnostic tests. It also covers direct and indirect testing, norm-referenced and criterion-referenced testing, and objective and subjective testing. Validity is defined as accurately measuring the intended abilities, while reliability is consistency of results. Achieving beneficial backwash means testing abilities you want to foster and ensuring students and teachers understand the test.
Test Assembling (writing and constructing) (Tasneem Ahmad)
The document provides guidelines for assembling and constructing different types of test items, including multiple choice, true/false, matching, fill-in-the-blank, and essay questions. It discusses arranging items in order of difficulty and by similar format. The guidelines recommend writing clear stems and response options that avoid tricks and irrelevant clues. The document also includes a checklist for assembling the final test to ensure a consistent and fair evaluation of students.
1. Discrete point testing refers to testing language skills and components individually, one at a time, such as testing a single grammatical structure.
2. Examples of discrete point tests include multiple choice questions, fill-in-the-blank questions, true/false questions, spelling tests, and tests of phoneme recognition.
3. Discrete point tests are easy to score objectively but take more time and energy to create and do not capture real-world language use.
The document discusses different aspects of writing, such as defining writing, key writing skills, stages of writing instruction, different types of writing, reasons for various writing tests, ideas for writing tasks, and factors and characteristics to consider when developing and evaluating writing tasks and tests. It emphasizes the importance of using authentic writing tasks that reflect real-world writing situations and transmit information for a specific audience and purpose. Evaluation of writing should consider both the writing process and the final product, based on defined criteria.
The document discusses ethical guidelines for researchers based on the American Sociological Association's code of ethics. It outlines the ASA's five general principles of professional competence, integrity, professional and scientific responsibility, respect for people's rights and diversity, and social responsibility. It also discusses general ethical issues researchers must consider, such as avoiding harm, obtaining informed consent, respecting privacy, avoiding conflicts of interest, and ethical reporting. The document provides examples of techniques to avoid harm like debriefing and case studies of plagiarism issues in Pakistani universities.
The document discusses reliability and validity in research studies. It defines key terms like validity, reliability, and objectivity. There are different types of validity including internal, external, logical, statistical, and construct validity. Threats to validity are also outlined such as maturation, history, pre-testing, selection bias, and instrumentation. Reliability refers to consistency of measurements and is a prerequisite for validity. Absolute and relative reliability are discussed. Threats to reliability include fatigue, habituation, and lack of standardization. Measurement error also impacts reliability.
The document discusses various methods for testing writing skills, including composition writing, grading compositions, and objective tests of mechanics and punctuation. It covers testing at basic, intermediate, and advanced levels. It also addresses considerations in designing writing tests, such as providing realistic topics, setting the composition, and treating written errors in scoring. Different types of controlled writing are proposed, including using notes, completing sentences, rewriting paragraphs, and forming paragraphs from sentences.
This document discusses different types of tests used in education. It defines tests as procedures that present standardized questions or tasks to evaluate student performance. The main purposes of testing are to measure student achievement, identify strengths and weaknesses, and improve teaching methods. Tests are commonly classified based on the attributes they measure, such as intelligence, educational achievement, aptitude, or personality traits. Different classifications systems organize tests based on factors like the type of items, scoring method, administration conditions, and language emphasis.
This document discusses different techniques for testing, including:
1) Direct testing measures specific skills directly, while indirect testing measures underlying abilities. Semi-direct testing simulates direct testing through recorded responses.
2) Discrete point testing examines elements individually, while integrative testing requires combining multiple elements for a task.
3) Norm-referenced testing interprets scores relative to others, while criterion-referenced testing measures against a standard.
4) Objective tests have a single right answer, while subjective tests consider multiple factors in scoring open-ended responses.
It discusses the different types of validity in assessment:
* Face Validity
* Content Validity
* Predictive Validity
* Concurrent Validity
* Construct Validity
The document outlines the steps for developing a valid and reliable test: 1) determining test specifications, 2) planning by preparing a table of specifications, 3) writing test items, 4) preparing appropriate test formats, 5) reviewing test items, 6) pre-testing the test, and 7) validating test items through analyzing item difficulty, discrimination, and facility. The goal is to design a test that accurately measures the intended objectives and skills at an appropriate level of difficulty without cultural bias.
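The item statistics mentioned in step 7 are straightforward to compute. The sketch below is a minimal illustration (the response data are invented, not from the document) of item difficulty (the facility index, i.e., the proportion of test takers answering correctly) and an upper-lower discrimination index.

```python
# Item difficulty (facility) and upper-lower discrimination indices.
# Illustrative sketch with invented data; not from the document.
import numpy as np

# Rows = test takers, columns = items (1 = correct, 0 = incorrect).
responses = np.array([
    [1, 1, 0, 1],
    [1, 1, 1, 1],
    [0, 1, 0, 0],
    [1, 0, 0, 1],
    [0, 0, 0, 0],
    [1, 1, 1, 1],
])

totals = responses.sum(axis=1)
order = np.argsort(totals)
n_group = max(1, len(totals) // 3)   # take roughly the top/bottom third
lower = responses[order[:n_group]]
upper = responses[order[-n_group:]]

difficulty = responses.mean(axis=0)                       # proportion correct per item
discrimination = upper.mean(axis=0) - lower.mean(axis=0)  # high vs. low scorers

for i, (p, d) in enumerate(zip(difficulty, discrimination), start=1):
    print(f"item {i}: difficulty (facility) = {p:.2f}, discrimination = {d:.2f}")
```

Items that nearly everyone or no one answers correctly, or that high scorers miss more often than low scorers, are candidates for revision during the validation step.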
The document discusses steps for assembling and reviewing test items:
1) Test items should be written on index cards to facilitate editing and arrangement. Each card should include information about the learning outcome and content being measured.
2) Items should be checked against test specifications to ensure a representative sample of content is covered. Items should be organized by type and learning outcome, then difficulty level.
3) Test directions should clearly convey the purpose, time allowed, answer format, and what to do about guessing.
Experimental research is the most conclusive scientific method because the researcher directly manipulates the independent variable and studies its effects on the dependent variable. This allows the researcher to determine causation, unlike other research methods. The purpose is to establish cause-and-effect relationships between variables. Basic steps include having an experimental group that receives a treatment and a control group that does not, then comparing outcomes. Key characteristics include random assignment to control threats to internal validity. Poor designs do not include control groups or random assignment, making it impossible to determine if results are due to the treatment.
The document discusses various techniques for evaluating educational curriculum and programs. It describes evaluation as collecting data to determine the value of a program and whether it should be adopted, rejected, or revised. Several data collection techniques are examined, including observation, interviews, questionnaires, tests, and assessments. Tests are categorized based on their purpose, format, and standards. The document emphasizes that using the right technique for a given evaluation is important to obtain accurate information and make better decisions.
This document discusses rubrics, including what they are, why they are used, different types of rubrics, and steps for developing and implementing rubrics. A rubric is defined as a set of criteria that specifies the characteristics and levels of achievement for an outcome. Rubrics provide consistency in evaluation, gather rich assessment data, and allow for direct measurement of learning. There are two main types of rubrics: analytic rubrics that evaluate each criterion separately, and holistic rubrics that provide a single overall score. Developing an effective rubric involves identifying learning outcomes, determining assessment methods, choosing dimensions and performance levels, writing clear descriptors, testing the rubric, and training raters.
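To make the analytic/holistic distinction concrete, here is a minimal sketch of an analytic rubric as a data structure with per-criterion scoring. The criteria, levels, and descriptors are hypothetical examples invented for illustration, not taken from any rubric in the document.

```python
# An analytic rubric scores each criterion separately; a holistic rubric
# would instead assign one overall level. All criteria and descriptors
# below are hypothetical examples.
from dataclasses import dataclass

@dataclass
class Criterion:
    name: str
    levels: dict[int, str]  # performance level -> descriptor

rubric = [
    Criterion("Organization", {1: "Unclear structure", 2: "Mostly logical", 3: "Clear and coherent"}),
    Criterion("Evidence",     {1: "Little support",    2: "Some support",   3: "Well supported"}),
    Criterion("Delivery",     {1: "Hard to follow",    2: "Adequate",       3: "Engaging and clear"}),
]

def score_analytic(ratings: dict[str, int]) -> int:
    """Sum per-criterion ratings into a total (analytic scoring)."""
    return sum(ratings[c.name] for c in rubric)

# One rater's judgment of one student's work:
ratings = {"Organization": 3, "Evidence": 2, "Delivery": 3}
print(score_analytic(ratings))  # -> 8 out of a possible 9
```

A holistic rubric would collapse this structure into a single set of overall descriptors and one score per student.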
Proving to improve - UA Summit of Deans Councils (Mark Freeman)
We report a positive benefit-cost ratio for a model of external assurance of learning developed through a project called Achievement Matters. The model critically relies on reviewers first developing shared understandings of standards through calibration forums that include practitioners.
This document discusses how analytics can be used to improve student success. It describes a session showing how analytics identify opportunities for improvement, in which participants learn to connect predictions of risk to the interventions most likely to work under different conditions. The document then discusses how data is changing education and how analytics can be applied in areas like enrollment management, student services, and program design. It provides examples of how predictive analytics have been used at various institutions to improve retention, successful course completion, and graduation rates, and emphasizes linking predictions of risk to specific interventions and measuring their impact and ROI.
This document proposes an ontological model for representing rubrics digitally using Semantic Web standards like RDF and OWL. Currently, most rubrics shared online are in static, non-machine readable formats like Word documents or proprietary learning management systems. The proposed model aims to make rubrics sharable and reusable across different systems on the web by representing them semantically. It discusses how rubrics benefit both students and teachers by providing clear evaluation criteria and allowing for consistent grading. However, existing rubrics online often lack specificity and are not in open, transferable formats between different tools and systems.
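The document's actual ontology is not reproduced here, but a minimal sketch using the rdflib library conveys the idea of a machine-readable rubric. The ex: vocabulary (ex:Rubric, ex:hasCriterion, and so on) is invented for illustration and is not the model the paper proposes.

```python
# Representing a rubric as RDF triples with rdflib.
# The ex: vocabulary is invented for illustration; the paper's actual
# OWL model is not reproduced here.
from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/rubric#")
g = Graph()
g.bind("ex", EX)

# A rubric with one criterion and one performance level.
g.add((EX.PresentationRubric, RDF.type, EX.Rubric))
g.add((EX.PresentationRubric, EX.hasCriterion, EX.Organization))
g.add((EX.Organization, RDF.type, EX.Criterion))
g.add((EX.Organization, EX.hasLevel, EX.Organization_L3))
g.add((EX.Organization_L3, EX.levelValue, Literal(3)))
g.add((EX.Organization_L3, EX.descriptor, Literal("Clear and coherent structure")))

# Serialize to Turtle so the rubric can be shared and reused across systems.
print(g.serialize(format="turtle"))
```

Because the rubric is now plain RDF, any tool that speaks Semantic Web standards can consume it, which is the portability the paper argues static Word documents and LMS-locked formats lack.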
Presented by: Aimee Badeaux, Program Director, Franciscan Missionaries of Our Lady University Nurse Anesthesia Program.
According to national nurse anesthesia program accreditation requirements, all nurse anesthesia students must participate in simulated clinical experiences, designed for competency attainment, competency assessment, or competency maintenance. This presentation will focus on the assessment and grading of a final patient care simulation scenario via the use of ExamSoft rubrics, while showcasing software capabilities (ease of rubric creation, grading with multiple faculty members, release of rubric to student learner).
Introduction to Designing Assessment Plans Workshop 1 (Lisa M. Snyder)
At the completion of this workshop, participants will be able to:
Identify the components of an assessment plan and explain to colleagues the purpose and process of assessment
Write observable, measurable learning outcomes for their program
Draft a curriculum map that identifies specific courses where program learning outcomes are addressed
Develop a plan, including a timeline, to gather, analyze, and interpret assessment data
FACT2 Learning Analytics Task Group (LATG) SCOA briefing (gketcham)
This document discusses the SUNY Council on Assessment Learning Analytics Task Group and their work exploring the use of learning analytics across SUNY. The task group is identifying best practices for using learning analytics for assessment, student feedback, course placement, and early intervention. They plan to share exemplary uses and conduct professional development. The task group also aims to provide opportunities for SUNY faculty to pilot learning analytics projects and tools. The goal is to advance the appropriate use of learning analytics to improve student success and learning outcomes across SUNY.
Beyond Accreditation and Standards: The Distance Educator’s Opportunity for L... (Gary Matkin)
This presentation will provide practical suggestions for distance educators to take a leadership position amidst the call from accrediting bodies for institutions of higher education to become more accountable and transparent. Presentation will address content management, learner feedback, “openness”, and the establishment of infrastructure to meet these new requirements.
Graded Assessment – Myth Or Fact Ppt Jan 2k10 (KeithH66)
Graded assessment in vocational and higher education aims to provide pathways for further education and a measure of academic excellence. It should be criterion-referenced based on competencies and use scoring rubrics and exemplars to transparently assess performance. Assessment tasks should authentically measure important learning goals through scenario-based problems requiring demonstration of skills rather than just testing. Feedback should focus on improvement.
E:\T&La Special Projects\Computer Engineering & Applied Science\Grade... (guest3f9d24)
Graded assessment in vocational and higher education aims to provide pathways for further education and a measure of academic excellence. It should be criterion-referenced based on competencies and use scoring rubrics, exemplars, and feedback to clearly communicate assessment standards and help students improve. Developing high-quality graded assessment requires consideration of learning outcomes, teaching activities, assessment tasks, grading schemas, and validation processes.
Designing useful evaluations - An online workshop for the Jisc AF programme_I... (Rachel Harris)
This document summarizes a workshop on designing useful evaluations for projects funded by the JISC and Becta Curriculum Delivery Programme. The workshop covered the evaluation cycle, identifying intended outcomes and impact, determining what to evaluate, developing evaluation questions, and methods for undertaking evaluations. Examples of evaluation approaches used by different projects were discussed, including action research, external evaluators, and using both qualitative and quantitative data sources. Participants were guided in developing evaluation plans for their own projects by considering stakeholders, measures, sources of evidence, and refining evaluation questions.
This document discusses approaches to developing comprehensive teacher evaluation systems using student achievement data. It outlines five propositions for validating such systems and the design claims and evidence needed to support each proposition. It then describes different approaches used across the US, including student learning objectives, subject- and grade-alike measures, universal pre-/post-tests, and value-added composites. Both advantages and disadvantages are provided for each approach. Finally, it discusses examples from Minneapolis and Edina that use value-added measures and standardized tests to evaluate teacher effectiveness.
This document discusses curriculum mapping, which involves clarifying and assessing the relationships between curricular and co-curricular activities, courses, and programs. It provides an overview of curriculum mapping, including its benefits and key aspects. The presentation covers the mapping process, analyzing maps, and includes examples of course-level and program-level maps. It emphasizes that mapping can align instruction with learning outcomes, reveal gaps, and improve program coherence.
The document summarizes research conducted by Rajeeb Das and Timothy Brophy at the University of Florida to better understand faculty engagement in assessment processes and identify opportunities for improvement. Through surveys of assessment coordinators, stakeholder interviews, and faculty focus groups, they identified that faculty value assessment when it is used for student and program improvement. However, influential factors like class size and disciplinary accreditation requirements, as well as misconceptions about reporting requirements, can impact engagement. Based on these findings, the researchers made recommendations like facilitating peer sharing of assessment practices and clarifying reporting guidelines to cultivate greater faculty involvement.
Assessing OER impact across varied organisations and learners: experiences fr... (Beck Pitt)
This presentation was co-authored by Tim Coughlan (Nottingham), Beck Pitt (OU), Patrick McAndrew (OU) and Nassim Ebrahimi (Anne Arundel).
It was presented at OER13, Nottingham, UK which took place 26-27 March 2013.
Moving Beyond Student Ratings to Evaluate Teaching (Vicki L. Wise)
Evidence of teaching quality needs to take into account multiple sources, as teaching is multidimensional. Moreover, the likelihood of obtaining reliable and valid data and making appropriate judgments is increased with more evidence.
1. "Interrater Reliability"
Made Easy!
An Introduction and Practice with Using
Rubrics for Assessment
Raymond Barclay, PhD
Blake Stack
Bonner Foundation
Summer Leadership
Institute at Lindsey Wilson
College
May 25, 2017
2. Agenda
• Rubric Overview – Why and Types
• Implementation
  o Measurement Dimensions
  o Scaling
  o Scoring
  o Student Engagement
  o Calibration
  o Reliability
• Review of University of Richmond’s civic engagement rubrics focused on Presentations of Learning
• Calibration & Reliability Session(s)
• Debrief
3. What is a rubric?
“A set of criteria specifying the characteristics of an outcome and the levels of achievement in each characteristic.”
SOURCE: Levy, J.D. Campus Labs: Data Driven Innovation. Using Rubrics in student affairs: A direct assessment of learning.
“A rubric is a scoring guide composed of criteria used to evaluate performance, a product, or a project. For instructors and students alike, a rubric defines what will be assessed. They enable students to identify what the instructor expects from their assignment submission. It allows evaluation according to specified criteria, making grading and ranking simpler, fairer and more transparent.”
SOURCE: University of Texas-Austin Faculty Innovation Center (https://facultyinnovate.utexas.edu/teaching/check-learning/rubrics)
4. Blake Stack – Bio
• University of Richmond, Bonner Center for Civic Engagement
• Coordinator, Bonner Scholars Program
• B.S. Business Administration / B.S. Religious Studies (Cairn ’05)
5. Raymond Barclay – Bio
Education (First-Generation College Student):
• PhD, Psychology (Measurement/Statistics and Cognition), Temple University, School of Education
• Specialization in Design Thinking – University of Virginia, Graduate School of Business
• Design Thinking & Charrettes – Harvard Univ., Graduate School of Design
• MS, Sustainable Design – Thomas Jefferson Univ./Univ. of Philadelphia, College of Architecture and Built Design (in progress)
• MDiv, Princeton Theological Seminary
• Philosophy & Religious Studies – Indiana University of Pennsylvania
Published in the following areas:
• Online learning, assessment and learning, strategic planning, resiliency, survey development, clinical psychology, statistical methods (hierarchical linear analysis, multivariate analysis, cluster and factor analysis)
Current Role: President – Enrollment x Design, LLC (NJ) – present
Prior Roles:
• Associate Vice President / Associate Provost / Associate Vice Chancellor roles
  o The New School (NY)
  o Stetson University (FL)
  o University of North Carolina (NC)
  o College of Charleston (SC)
• Director roles
  o The College of New Jersey (NJ)
  o Burlington County College (NJ)
  o The Bonner Foundation (NJ)
• Senior Analyst roles
  o Arroyo Research Services (NC) – K-16 consulting/evaluation firm
  o Rowan Univ./Burlington County Community College (NJ)
6. Strengths & Limits of Rubrics
Strengths
• Creates objectivity and consistency across all students
• Clarifies grading criteria in specific terms for a performance or product
• Shows expectations and how work will be evaluated
• Promotes students' awareness and provides benchmarks to improve their performance or product
Limitations
• Creating effective rubrics is time consuming
• Cannot measure all aspects of student learning
• May require additional feedback after students receive their score
SOURCE: University of Texas-Austin Faculty Innovation Center (https://facultyinnovate.utexas.edu/teaching/check-learning/rubrics)
7. Why use a rubric?
• Provides both qualitative descriptions of student learning and quantitative results
• Clearly communicates expectations to students
• Provides consistency in evaluation
• Simultaneously provides student feedback and programmatic feedback
• Allows for timely and detailed feedback
• Promotes colleague collaboration
• Helps us refine practice
8. Types of Rubrics: Analytic*
Analytic rubrics articulate levels of performance for each criterion used to assess student learning.
• Advantages
  • Provides a vehicle for more detailed feedback on areas of strength and weakness.
  • Scoring is more consistent across students and graders when compared to other approaches.
  • Criteria can be weighted to reflect the relative importance of each dimension.
• Disadvantages
  • Takes more time to create and use than a holistic rubric.
  • Unless each point for each criterion is well defined, raters may not arrive at the same score.
*Levy, J.D. Campus Labs: Data Driven Innovation. Using Rubrics in student affairs: A direct assessment of learning.
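Because analytic criteria can carry weights, the overall score is simply a weighted average of the per-criterion ratings. The following is a minimal sketch of that arithmetic in Python; the criterion names, weights, and ratings are hypothetical examples, not taken from any rubric in this deck.

    # Weighted analytic rubric scoring - a minimal sketch.
    # Criteria, weights, and ratings are hypothetical examples.
    def weighted_score(ratings, weights):
        """Return the weighted average of per-criterion ratings."""
        if set(ratings) != set(weights):
            raise ValueError("ratings and weights must cover the same criteria")
        total_weight = sum(weights.values())
        return sum(ratings[c] * weights[c] for c in ratings) / total_weight

    # Three criteria on a 1-4 scale, with "analysis" weighted double.
    weights = {"analysis": 2.0, "organization": 1.0, "mechanics": 1.0}
    ratings = {"analysis": 3, "organization": 4, "mechanics": 2}
    print(weighted_score(ratings, weights))  # (3*2 + 4*1 + 2*1) / 4 = 3.0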
9. Analytic Example 1: Undergraduate Research Project with Weightings
Source: http://ias.virginia.edu/assessment/outcomes/tools/rubrics
10. Analytic Example 2: Undergraduate Student Employee SLO
Source: http://studentaffairs.stonybrook.edu/assessment/selo/index.html
11. Holistic Example 1: Essay Writing
Source: http://fdc.umbc.edu/files/2013/01/SAMPLE-HOLISTIC-RUBRIC-FOR-ESSAYS.pdf
12. Holistic Example 2: Critical Thinking
Source: http://teaching.temple.edu/sites/tlc/files/resource/pdf/Holistic%20Critical%20Thinking%20Scoring%20Rubric.v2%20[Accessible].pdf
13. Types of Rubrics: Holistic*
A holistic rubric consists of a single scale, with all criteria to be included in the evaluation considered together.
Advantages
• Emphasis on what the learner is able to demonstrate, rather than what s/he cannot do.
• Saves time by minimizing the number of decisions raters make.
• Can be applied consistently by trained raters, increasing reliability.
Disadvantages
• Does not provide specific feedback for improvement.
• When student work is at varying levels spanning the criteria points, it can be difficult to select the single best description.
• Criteria cannot be weighted.
*Levy, J.D. Campus Labs: Data Driven Innovation. Using Rubrics in student affairs: A direct assessment of learning.
14. Things to Consider in Developing a Rubric
(see the resources at the end of this presentation for more information)
• Have you consulted the plethora of professional literature and online resources?
  o A variety of subject areas have rubrics refined using professional standards and empirical research.
  o There are many classroom-tested rubrics assessed by instructors and their students.
• Can you adapt the criteria, rating scale, and indicators to your needs?
  o Whether adapting a rubric or designing one from scratch, the developmental process is the same, and you must identify the basic components of your rubric:
    ➢ (a) the performance criteria, (b) the rating scale, and (c) the indicators of performance.
SOURCE: University of Texas-Austin Faculty Innovation Center (https://facultyinnovate.utexas.edu/teaching/check-learning/rubrics)
15. Criteria: Evidence of Student Learning
• Whether the product is an essay, a research or applied project, or a presentation, the evidence of learning or thinking must be specified.
• The evidence will drive the selection of the components that are most important to evaluate relative to a given task within a specified instructional context.
Components = Criteria
• Key questions to help prioritize the criteria:
  ➢ Which of the proposed criteria are non-negotiable?
  ➢ What are the learning outcomes, broadly or relative to a specific program?
  ➢ Which learning outcomes will be specified within the rubric?
  ➢ Are there skills that are essential to declare that the student is competent or has a certain proficiency level for the task or assignment to be complete?
  ➢ How important is it for the student to complete the task or project (interest, logic, organization, creativity) in order to demonstrate this proficiency level?
  ➢ Are there process and product expectations?
SOURCE: University of Texas-Austin Faculty Innovation Center (https://facultyinnovate.utexas.edu/teaching/check-learning/rubrics)
16. Implementation Steps*
1. Identify the outcome
2. Determine how you will collect the evidence
3. Develop the rubric based on observation criteria (anchors)
4. Train the evaluators on how to use the rubric
5. Test the rubric against examples
6. Revise as needed
7. Collect the results of scoring and report out
*http://manoa.hawaii.edu/assessment/workshops/pdf/Rubrics_in_program_assessment_ppt_2013-10.pdf
17. Choosing Measurement Dimensions*
Measurement Goals: List the measurement dimensions you want a student to be able to demonstrate that are relevant to the curriculum/activities.
Face Validity: Discuss the proposed measurement dimensions with others to subjectively assess whether the rubric measures what it purports to measure.
Parsimony: Edit content to make sure each measurement dimension is concise and clear while ensuring the required breadth of coverage (at least 3 and no more than 8 dimensions).
*Levy, J.D. Campus Labs: Data Driven Innovation. Using Rubrics in student affairs: A direct assessment of learning.
18. Writing Descriptors*
1. Describe each level of mastery for each characteristic
2. Describe the best work you could expect
3. Describe an unacceptable product
4. Develop descriptions of intermediate-level products for intermediate categories
5. Each description and each category should be mutually exclusive
6. Be specific and clear; reduce subjectivity
*University of Florida Institutional Assessment: Writing Effective Rubrics
19. Rubric Development & Use is a Practiced Art Form!*
[Cycle diagram: Conduct Training → Rater Practice → Rater Discussions & Negotiation → Rubric Iteration]
*Levy, J.D. Campus Labs: Data Driven Innovation. Using Rubrics in student affairs: A direct assessment of learning.
20. Pick Your Scaling Approach (“Indicators”) – I*
Competency:
a. Beginner, Developing, Accomplished
b. Marginal, Proficient, Exemplary
c. Novice, Intermediate, Proficient, Distinguished
d. Not Yet Competent, Partly Competent, Competent, Sophisticated
Frequency of Behavior:
a. Never, Rarely, Occasionally, Often, Always
b. Never, Once, Twice, Three Times, Four Times
c. Never, 1-3x, 4-6x, 5-7x, …
Extent to Which Performed:
a. Not at All, Slightly, Moderately, Considerably, A Great Deal
b. Yes/No
c. Met, Partially Met, Not Met
*Levy, J.D. Campus Labs: Data Driven Innovation. Using Rubrics in student affairs: A direct assessment of learning.
21. Pick Your Scaling Approach (“Indicators”) – II*
Task Requirements: 4 = All; 3 = Most; 2 = Some; 1 = Very few or none
Frequency: 4 = Always; 3 = Usually; 2 = Some of the time; 1 = Rarely or not at all
Accuracy: 4 = No errors; 3 = Few errors; 2 = Some errors; 1 = Frequent errors
Comprehensibility: 4 = Always comprehensible; 3 = Almost always comprehensible; 2 = Gist and main ideas are comprehensible; 1 = Isolated bits are comprehensible
Content Coverage: 4 = Fully developed, fully supported; 3 = Adequately developed, adequately supported; 2 = Partially developed, partially supported; 1 = Minimally developed, minimally supported
Vocabulary Range: 4 = Broad; 3 = Adequate; 2 = Limited; 1 = Very limited
Variety: 4 = Highly varied, non-repetitive; 3 = Varied, occasionally repetitive; 2 = Lacks variety, repetitive; 1 = Basic, memorized, highly repetitive
*SOURCE: University of Texas-Austin Faculty Innovation Center (https://facultyinnovate.utexas.edu/teaching/check-learning/rubrics)
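One way to work with a scale like this during scoring is to encode it as a simple lookup table. The sketch below is an illustration, not part of the original deck; only a few of the rows above are included.

    # Encoding a 4-point indicator scale as a lookup table - a sketch.
    # Keys are criteria; values map scale points to descriptors.
    INDICATORS = {
        "Task Requirements": {4: "All", 3: "Most", 2: "Some", 1: "Very few or none"},
        "Accuracy": {4: "No errors", 3: "Few errors", 2: "Some errors", 1: "Frequent errors"},
        "Vocabulary Range": {4: "Broad", 3: "Adequate", 2: "Limited", 1: "Very limited"},
    }

    def describe(criterion, level):
        """Return the descriptor for a criterion at a given scale point."""
        return INDICATORS[criterion][level]

    print(describe("Accuracy", 3))  # "Few errors"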
22. Things to Remember about Scaling (“Indicators”)
• What is the ideal assessment for each criterion?
  o Begin with the highest level of the scale to define top-quality performance.
  o Work backward to lower performance levels.
• Ensure continuity in the differences between the levels (e.g., exceeds vs. meets, and meets vs. does not meet expectations).
  o The difference between a 2 and a 3 performance should not be more than the difference between a 3 and a 4 performance.
• Edit the indicators to ensure that the levels reflect variance in quality and not a shift in the importance of the criteria.
• Make certain that the indicators reflect equal steps along the scale.
  o The difference between 4 and 3 should be equivalent to the difference between 3 and 2 and between 2 and 1.
  o “Yes, and more,” “Yes,” “Yes, but,” and “No” are ways for the rubric developer to think about how to describe performance at each scale point.
SOURCE: University of Texas-Austin Faculty Innovation Center (https://facultyinnovate.utexas.edu/teaching/check-learning/rubrics)
23. MetaRubrics – Campus Labs Example*
*Levy, J.D. Campus Labs: Data Driven Innovation. Using Rubrics in student affairs: A direct assessment of learning.
24. Scoring Guidelines
1. The grader(s) should be trained in the proper use of the rubric.
2. Use multiple graders, if possible, to score student work in order to gain greater reliability.
3. If different graders are used, make every effort to ensure that they are as consistent as possible in their scoring by providing adequate training and examples.
4. If working alone, or without examples, you can achieve a greater level of internal consistency by giving preliminary ratings to students’ work (see the sketch below).
   • Through this approach, clusters of similar quality will soon develop.
   • After establishing a firm scoring scheme, re-grade all students’ work to assure greater internal consistency and fairness.
SOURCE: University of Texas-Austin Faculty Innovation Center (https://facultyinnovate.utexas.edu/teaching/check-learning/rubrics)
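Guideline 4 describes a concrete workflow: assign preliminary ratings, let clusters of similar quality emerge, then re-grade cluster by cluster. A minimal sketch of the grouping step follows; the work IDs and preliminary scores are hypothetical.

    # Sketch of scoring guideline 4: group works by preliminary rating,
    # then re-grade each cluster together for internal consistency.
    # Work IDs and preliminary scores are hypothetical examples.
    from collections import defaultdict

    preliminary = {"essay01": 3, "essay02": 2, "essay03": 3, "essay04": 4, "essay05": 2}

    clusters = defaultdict(list)
    for work_id, score in preliminary.items():
        clusters[score].append(work_id)

    # Re-grade within each cluster, comparing works of similar quality side by side.
    for score in sorted(clusters, reverse=True):
        print(f"Cluster {score}: re-grade {clusters[score]} together")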
25. Students & Your Rubric
Development
• Include students in the revision and/or development process.
  o When students are involved, the assignment itself becomes more meaningful.
Use
• Share the rubric with students before they complete the assignment.
  o This establishes the level of performance expected, which increases the likelihood that students meet those standards.
SOURCE: University of Texas-Austin Faculty Innovation Center (https://facultyinnovate.utexas.edu/teaching/check-learning/rubrics)
26. Calibration of Rubrics – 12 Steps*
1. Make copies of the rubric for each rater.
2. Identify representative student works for each level of performance:
   • Case a: 1 not met; 2 met; 1 exceeded
   • Case b: 2 not met; 2 met; 2 exceeded
3. Provide copies of student work with identifiers removed.
4. Provide a scoring sheet.
5. Facilitator explains the SLO and the rubric.
6. Each rater independently scores student work.
7. Group discussion of each student work.
8. Reach consensus on a score for each work.
9. Recalibrate after 3 hours or at the beginning of each rating session.
10. Check inter-rater consistency.
11. Present results in a meaningful and clean manner.
12. Use the results.
*http://manoa.hawaii.edu/assessment/workshops/pdf/Rubrics_in_program_assessment_ppt_2013-10.pdf
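Step 10's consistency check can be as simple as counting how often pairs of raters gave exactly the same score (the "inter-rater agreement" question on the next slide). A minimal sketch, with hypothetical rater scores:

    # Checking inter-rater consistency (step 10) via percent exact agreement.
    # Rater scores are hypothetical examples.
    from itertools import combinations

    def exact_agreement(scores_by_rater):
        """Fraction of (rater pair, student) comparisons with identical scores."""
        matches = total = 0
        for a, b in combinations(scores_by_rater, 2):
            for s1, s2 in zip(scores_by_rater[a], scores_by_rater[b]):
                matches += (s1 == s2)
                total += 1
        return matches / total

    scores = {
        "rater1": [3, 2, 4, 3, 1],
        "rater2": [3, 2, 3, 3, 1],
        "rater3": [3, 1, 4, 3, 2],
    }
    print(exact_agreement(scores))  # 0.6 — 9 of 15 pairwise comparisons match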
27. Reliability!*
A. Inter-rater reliability: between-rater consistency
   • Inter-rater agreement: how many pairs of raters gave exactly the same score?
   • Inter-rater reliability: what is the correlation between rater 1 and rater 2? (Excel formula: =CORREL(array1,array2))
   Affected by:
   • Initial starting point or approach to the scale (assessment tool)
   • Interpretation of descriptions
   • Domain/content knowledge
B. Intra-rater consistency: within-rater consistency
   Affected by:
   • Internal factors: mood, fatigue, attention
   • External factors: order of evidence, time of day, other situations
   Applies to both multiple-rater and single-rater situations.
*Levy, J.D. Campus Labs: Data Driven Innovation. Using Rubrics in student affairs: A direct assessment of learning.
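For anyone not scoring in Excel, the =CORREL(array1,array2) computation is a Pearson correlation, which can be reproduced in Python. A minimal sketch, with hypothetical rater scores:

    # Pearson correlation between two raters' scores - equivalent to
    # Excel's =CORREL(array1, array2). Scores are hypothetical examples.
    from statistics import correlation  # available in Python 3.10+

    rater1 = [3, 2, 4, 3, 1, 4]
    rater2 = [3, 2, 3, 4, 1, 4]

    print(round(correlation(rater1, rater2), 3))  # 0.854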
28. Assessment Scenario #1: Written Reflections
29. Assessment Scenario #1 – Written Reflection Instructions
1. After forming groups, review the handout entitled “Goals, Prompts, and Rubrics: Written Reflection.” (5 min)
2. Take time to score the first reflection individually. (Write this down.) Then discuss with your group and agree upon a group score. (5 min)
3. Repeat step 2 with two subsequent written reflections. (10 min)
31. Assessment Scenario #2 – Senior Presentation Instructions
1. Break up into pairs.
2. Review the handout entitled “Goals, Prompts, and Rubrics: Senior Presentation.” (5 min)
3. While viewing the presentation:
   • One person writes down quotes.
   • One person looks at the rubric and attempts to determine a rating.
4. Afterwards, pairs get together to compare and discuss, and to agree on a final rubric rating.
33. AAC&U VALUE Rubrics (16)
Intellectual and Practical Skills
• Inquiry and analysis
• Critical thinking
• Creative thinking
• Written communication
• Oral communication
• Reading
• Quantitative literacy
• Information literacy
• Teamwork
• Problem solving
Personal and Social Responsibility
• Civic engagement—local and global
• Intercultural knowledge and competence
• Ethical reasoning
• Foundations and skills for lifelong learning
• Global learning
Integrative and Applied Learning
• Integrative learning
SOURCE: https://www.aacu.org/value-rubrics
35. Rubrics for Evaluating Dissertations
• Focus groups with 272 faculty across 74 departments, 10 disciplines, and 9 research universities
• Combined experience: 3,470 dissertations and 9,890 committees
Lovitts, B. E. (2007). Making the Implicit Explicit: Creating Performance Expectations for the Dissertation. Stylus Publishing, LLC.
36. University of Hawaii – Manoa: Rubric Bank
• Includes all VALUE rubrics, plus rubrics for:
  o Collaboration, teamwork, participation
  o Critical thinking, creative thinking
  o Ethical deliberation
  o Information literacy
  o Reflection/metacognition
  o Oral communication
  o Writing
  o Project design
  o Assessing assessment
• https://manoa.hawaii.edu/assessment/resources/rubricbank.htm