Thank you for the overview of how to associate course outcomes with learning activities and assessments in the Daytona State College learning management system. This will help instructors implement a course outcomes structure in their courses.
The document discusses implementing student learning outcomes assessments in online courses using the new Canvas learning management system. It provides an overview of outcomes assessments, how they will appear and be rated in Canvas, and addresses faculty and student concerns. Key points are that outcomes assessments are used for program evaluation, not individual grading; they appear as rubrics in Canvas for faculty to rate student work; and their purpose is to measure student competency achievement, not assign points.
This document provides an overview of custom reporting in Schoolnet. It discusses defining student sets ("the who") and report parameters and data selections ("the what") to build custom reports. Key points include:
- The "who" is the student set, defined by applying filters to select specific students.
- The "what" includes report parameters like formatting, rows and columns, and data constraints to select assessment data.
- Custom reports and analysis spreadsheets allow defining the "who" and "what" separately. Pre-formatted reports have limited options.
- Reports can be saved, published to share, and tagged as "key reports" for easy access by others.
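The "who"/"what" separation described above can be sketched in plain Python. This is only an illustration of the concept, not Schoolnet's actual API; the record fields and function names here are invented for the example.

```python
# Illustration of the "who"/"what" separation in custom reporting.
# NOT Schoolnet's API -- a conceptual sketch with invented field names.

students = [
    {"name": "Ana",  "grade": 5, "school": "North", "math_score": 82},
    {"name": "Ben",  "grade": 5, "school": "South", "math_score": 67},
    {"name": "Cara", "grade": 6, "school": "North", "math_score": 91},
]

# "The who": a student set built by applying filters.
def student_set(records, **filters):
    return [r for r in records if all(r.get(k) == v for k, v in filters.items())]

# "The what": the report parameters -- which columns of data to show.
def build_report(records, columns):
    return [{c: r[c] for c in columns} for r in records]

who = student_set(students, grade=5)                 # only grade-5 students
report = build_report(who, ["name", "math_score"])   # only the chosen columns
print(report)  # [{'name': 'Ana', 'math_score': 82}, {'name': 'Ben', 'math_score': 67}]
```

Defining the two halves separately is what lets the same student set be reused with different data selections, as the custom-report tools described above allow.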
IEMS is intelligent software that transparently and interactively integrates the routine duties performed by the teacher. The system incorporates various modes of communication between parents and teachers, providing a platform where they can discuss anything happening around the school, whether official or unofficial.
This presentation is based on the Tuning process in education. The module was presented during training of university faculty. It explains how to apply Tuning at the course, degree, and programme levels.
Kirkpatrick's Four Levels of Evaluation Model (sikojp)
Kirkpatrick's Four Levels of Evaluation Model is a framework for evaluating training programs and other professional development activities. It originated from Kirkpatrick's 1952 dissertation and was published in 1959. The four levels are: 1) Reactions, 2) Learning, 3) Behavior, and 4) Results. Level 1 assesses satisfaction with the training. Level 2 evaluates the increase in knowledge from pre- to post-training. Level 3 looks at applying skills on the job. Level 4 examines the impact on business results such as productivity or profits. The model provides an easily understood approach to evaluation but has limitations, such as oversimplifying the relationship between the levels.
This document outlines the application of Kirkpatrick's four-level evaluation model to assess the effectiveness of a leadership academy program for afterschool activity coordinators. The levels include: reaction (participant satisfaction), learning (knowledge gained), behavior (job performance), and results (organizational outcomes). Data is collected at each level, including surveys, observations, portfolios. Evaluation found the program improved staff retention, promotions, and student outcomes, leading the organization to continue and expand the leadership academy.
Training Feedback and Evaluation, Training Audit, Training as Continuous Process (Ashish Hande)
1) Training evaluation refers to collecting outcomes to determine if training is effective and involves measuring outcomes like reactions, learning, behavior changes, and results.
2) Companies evaluate training to demonstrate that large training investments provide benefits and a return on investment.
3) Kirkpatrick's four-level model is often used to evaluate training at the reaction, learning, behavior, and results levels.
The document summarizes the key aspects of implementing the International Baccalaureate Middle Years Programme (MYP). It outlines that the MYP can be implemented as a 5-year or shorter program. Schools must conform to IB standards and practices, and students must complete assessment requirements across 8 subject groups. The MYP can be taught in any language, with some requiring IB approval. Student work submitted for assessment is used anonymously for moderation and monitoring purposes. Assessment involves internal and external evaluation of student work according to subject criteria, with grades validated by the IB. Students who meet participation and grade requirements will receive an MYP certificate and record of achievement. Special cases, adverse circumstances, and malpractice are also addressed.
This document provides materials for a training on student growth measures and student learning objectives (SLOs) for district and educational service center leaders. The training is divided into four modules that cover: an introduction to student growth measures and SLOs; selecting appropriate assessments; developing growth targets; and scoring SLOs. Each module includes presentation slides, handouts, facilitator guides, and feedback forms. The training aims to help participants understand SLOs, the SLO development process, and how to evaluate the quality of SLOs.
This document provides an overview of assessment and evaluation in course design. It defines assessment as gathering information to make judgments about learner performance compared to standards to determine grades or success. Evaluation gathers information to improve teaching and learning, and can be formative (ongoing) or summative (final). Common assessment methods include tests, assignments, projects and surveys. Kirkpatrick's model outlines four levels of evaluation: reaction, learning, behavior, and results. When developing assessments, considerations include the purpose, objectives, class size, feedback, and using rubrics.
This document discusses training evaluation models. It begins by introducing training and evaluation concepts. It then describes Kirkpatrick's four-level model of training evaluation, which measures reaction, learning, behavior, and results. The document also discusses Jack Phillips' five-level ROI model, which adds return on investment. It notes some defects in these models and proposes improvements, such as measuring performance instead of behavior and motivation instead of reaction. The document concludes by discussing Intel's application of evaluation after training and findings from studies on training costs and popular evaluation techniques.
This document evaluates the effectiveness of an online HTML course. It asks questions to determine if the course provides clear grading criteria and feedback, sufficient time for assignments, and an engaging learning environment. The evaluation covers topics like instructor response time, test quality, accommodations for late work, and support services to assess the overall student experience.
Course Outcome and Program Outcome Calculation (new method) (Ravikumar Tiwari)
This presentation explains the new, attainment-level-based method of Course Outcome and Program Outcome calculation, with reference to the National Board of Accreditation's new SAR.
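As a rough illustration of how an attainment-level calculation of this kind works: the target score, level thresholds, and CO-PO weights below are invented placeholders, since institutions set their own values under NBA guidance, and this sketch is not the presentation's exact method.

```python
# Hedged sketch of attainment-level CO/PO calculation. The target, thresholds,
# and CO-PO mapping weights are ILLUSTRATIVE assumptions, not NBA-mandated values.

def co_attainment(scores, target=60, thresholds=(40, 50, 60)):
    """Return attainment level 0-3 from the % of students meeting the target."""
    pct_meeting = 100 * sum(s >= target for s in scores) / len(scores)
    level = 0
    for i, t in enumerate(thresholds, start=1):
        if pct_meeting >= t:
            level = i
    return level

def po_attainment(co_levels, co_po_weights):
    """Weighted average of CO attainment levels for one PO (weights 1-3)."""
    total = sum(w * co_levels[co] for co, w in co_po_weights.items())
    return total / sum(co_po_weights.values())

co_levels = {
    "CO1": co_attainment([72, 65, 48, 81, 59]),  # 3 of 5 meet target -> 60% -> level 3
    "CO2": co_attainment([40, 55, 62, 70, 45]),  # 2 of 5 meet target -> 40% -> level 1
}
print(po_attainment(co_levels, {"CO1": 3, "CO2": 2}))  # (3*3 + 2*1) / 5 = 2.2
```

The shape of the computation is the point here: per-CO attainment comes from the share of students clearing a target, and PO attainment aggregates CO levels through the CO-PO mapping weights.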
The Benefits of Utilizing Kirkpatrick's Four Levels of Evaluation (wendystein)
The document outlines Kirkpatrick's four levels of evaluation as they can be used to assess the effectiveness of safety training programs for supervisors. The four levels are: Level I) evaluate participant reaction; Level II) evaluate learning; Level III) evaluate behavior change; and Level IV) evaluate results, including organizational goals and safety impacts. It details the tools that can be used at each level and recommends starting with Level I and working through all four levels sequentially. The document applies this framework to evaluate the current monthly supervisor safety training program at SOCHD using reaction questionnaires, pre- and post-tests, on-the-job observations, and analysis of results.
The document discusses several models for evaluating training programs: Kirkpatrick's model, Phillips' ROI model, the CIPP model, and the COMA model. Kirkpatrick's model defines four levels of evaluation - reaction, learning, behavior, and results. Phillips' model adds a fifth level to Kirkpatrick's - return on investment (ROI). The CIPP model evaluates context, inputs, process, and products. The COMA model measures cognitive learning, organizational environment, motivation, and attitudes.
Measuring the Impact of eLearning: Turning Kirkpatrick’s Four Levels of Evalu... (Lambda Solutions)
Access to webinar recording here: http://go.lambdasolutions.net/webinar-growing-trend-of-open-source-learning
Whether it’s to inform, to improve, to change—or a combination of these factors—training must have measurable outcomes that contribute to larger organizational goals. Good training evaluation techniques identify and measure the impact of learning on job performance and, ultimately, organization-wide business results. When it comes to measuring eLearning, Donald Kirkpatrick’s Four Levels of Evaluation model is one of the most widely used and respected worldwide.
Co-hosted by Paula Yunker, who has 30+ years of instructional design experience and certification in Kirkpatrick's Four Levels of Evaluation, this webinar will explore why learning evaluation is an important component of any training program and how you can measure the application of learning beyond the learning event itself. We’ll discuss how to implement learning evaluation that’s practical and provides value but isn’t complicated, time-consuming, or expensive. Paula will also share her favorite learning evaluation resources after the webinar!
Check out the slides to learn more about:
- Why learning evaluation is critical for business results
- Kirkpatrick’s four levels of evaluation explained
- Aligning learning to organizational goals
- Typical challenges implementing evaluation in an organization
- Practical strategies for implementing learning evaluation
- Our favorite learning evaluation resources
The document discusses Dr. Donald Kirkpatrick's model for evaluating training programs. The model contains 4 levels - (1) measuring participant reactions, (2) assessing learning, (3) determining behavioral changes on the job, and (4) evaluating results such as increased productivity or profits. The model provides a simple framework for conceptualizing training evaluation but has also been criticized as being too simplistic and not fully representing the evaluation process.
This document discusses using e-portfolios to assess student competence in Computing Grade 1. It outlines the objectives of using portfolios to promote self-evaluation, planning, writing and other skills. Students completed portfolio activities individually and received feedback from teachers. Analysis found that 75% of students who participated in portfolios passed the first exam, and 100% passed the second exam, suggesting portfolios improved learning outcomes. However, portfolios require more time from both students and teachers than conventional methods.
The document outlines the 8 step training evaluation process which includes defining the purpose and audience, determining participant needs, setting goals and objectives, developing the content, instructional activities, the written design, evaluation forms, and follow up activities. It also discusses reasons for evaluating training such as improving programs and demonstrating value, and factors to consider like expertise, timeframes, and organizational culture when designing evaluations.
Most support functions in an organisation fail to justify their return on investment.
Here is the solution you have been looking for.
Please note: this method is not limited to the training function; other support functions can apply it as well.
This presentation gives a fundamental understanding of Kirkpatrick's four levels of evaluation model. It also briefly covers the fifth level of evaluation added by Phillips, which forms the Kirkpatrick-Phillips model.
The Kirkpatrick model provides a simple 4-level approach to evaluating training programs: (1) measure participant reaction, (2) assess learning, (3) determine behavior changes, and (4) evaluate results. It is among the most widely used training evaluation models. While simple and easy to understand, it has also been criticized as being too simplistic and lacking evidence that the levels are causally related.
The document provides guidance on designing learning programs for vocational education qualifications. It discusses key considerations like mapping the program to national training package standards, analyzing units of competency, identifying learning resources and assessment methods, and structuring the program. The learning program should outline content, delivery approach, assessment requirements, and be tailored to learner needs and characteristics. An effective program is reviewed and evaluated for continuous improvement.
The document discusses the key elements and functions of Desire2Learn's ePortfolio tool, including its ability to store, share, and receive feedback on artifacts, reflections, collections and presentations. It allows users to connect with others through social learning features like groups, comments and assessments. Artifacts form the foundation and can include a variety of file types as well as newly added audio recordings. Advanced features provide tagging, collections, forms and import/export options to extend its functionality.
Turnitin.com is a revision tool that can help students work on paraphrasing skills, balancing quoted information with their own writing, and ensuring proper citations. It allows students to submit drafts and receive originality reports to identify areas needing revision before a final submission. Sample student comments, revision plans, and originality reports are provided to demonstrate how Turnitin.com can be used as a tool to improve writing through the revision process.
Rubrics are assessment tools used to evaluate student work based on predefined criteria. They provide consistent grading by removing subjectivity and clearly communicating expectations. There are two main types - analytic rubrics break performance into multiple criteria while holistic rubrics assess performance as a whole. Scoring can be text-based or numeric. Instructors can create rubrics in Desire2Learn and tie them to gradebook items and activities to standardize grading.
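The analytic-versus-holistic distinction can be sketched in a few lines of Python. This is a conceptual illustration, not Desire2Learn's rubric API; the criteria names, level labels, and point values are made up.

```python
# Minimal sketch of analytic vs. holistic rubric scoring (invented example,
# not D2L's API). Analytic: each criterion is scored separately against its
# own point scale; the total is the sum. Holistic: one overall level is chosen.

analytic_rubric = {
    "thesis":       {"max": 4},
    "evidence":     {"max": 4},
    "organization": {"max": 4},
}

def score_analytic(rubric, ratings):
    """Sum per-criterion ratings; each rating must not exceed the criterion max."""
    for criterion, rating in ratings.items():
        assert 0 <= rating <= rubric[criterion]["max"], criterion
    return sum(ratings.values())

# Holistic: the work as a whole is matched to a single performance level.
holistic_levels = {4: "Exemplary", 3: "Proficient", 2: "Developing", 1: "Beginning"}

total = score_analytic(analytic_rubric, {"thesis": 3, "evidence": 4, "organization": 2})
print(total, "/", sum(c["max"] for c in analytic_rubric.values()))  # 9 / 12
print(holistic_levels[3])  # Proficient
```

The predefined criteria and levels are what make grading consistent: two raters using the same rubric are constrained to the same scales, which is the standardization benefit described above.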
This document outlines 3 cases of plagiarism and warns plagiarists. Case 1 is reusing one's own work from another class. Case 2 is passing off another student's work as your own. Case 3 is assembling a paper from various internet sources without proper citation. The document advises plagiarists to beware of consequences for academic dishonesty.
This document presents information on portfolios as an authentic assessment tool. It explains that a portfolio is a deliberate collection of student work guided by learning objectives. It also describes different types of portfolios; the process of creating them, which includes collection, selection, reflection, and projection; and their usefulness for evaluating mastery of competencies.
This document provides an overview of TurnItIn2 and Grademark, which allow instructors to check student papers for plagiarism, leave annotations and feedback, and enter grades directly in Dropbox. Key features include viewing the Originality Report and student paper together, using pre-defined or custom QuickMarks to leave feedback, and entering grades and comments to post to the gradebook. Support resources are available for additional help.
This document provides an overview and training on the Home Base IIS system. It covers how to log in, navigate student data and reports, create and schedule lessons and assessments, and administer online tests. The training discusses tracking student progress, analyzing assessment data, locating instructional materials, and using reports to identify struggling students and standards. It demonstrates how to create items, tests, and rubrics as well as print, schedule and assign assessments.
The aim of this presentation was to provide college staff and faculty with a framework for developing a competency-based curriculum. The workshop was presented during the national conference of the Vietnam Association of Community Colleges on September 19, 2013.
Outcomes Town Hall for Students Spring II 2014 (Noelle Newhouse)
This document discusses a new assessment tool called Outcomes that will be used in Canvas courses at the university. Outcomes will allow program administrators and faculty to track student progress on Program Learning Outcomes. It will not affect student grades. Ratings from Outcomes are used to evaluate program effectiveness and make improvements, not assess individual student performance. The document provides examples of how Outcomes will appear in Canvas and answers frequently asked questions about the purpose and use of data from Outcomes.
The document discusses a workshop for program evaluators (PEVs) organized by the Board of Accreditation for Engineering and Technical Education (BAETE) in Dhaka. The workshop covers several interactive sessions to help PEVs design assessment forms and schedule on-site visits. It emphasizes evaluating programs based on BAETE's 11 accreditation criteria, with a focus on assessing attainment of program outcomes and continuous quality improvement. The document provides guidance on what PEVs should look for during visits and how outcomes should be assessed to determine compliance with accreditation standards.
This document provides information for an upcoming webinar on assessments and rubrics for curriculum development. It includes the goals and agenda for the webinar, which are to define assessment terminology, propose assessments for different outcomes and levels, and access and use a materials design rubric for goals and assessments. It also discusses linking goals and assessments, examples of learning outcomes, formative and summative assessments, and designing rubrics. Participants are given follow-up tasks to complete before an in-person workshop on February 9.
Making Faculty Training More Agile with Blackboard BadgesKaitlin Walsh
As colleges increase their online course offerings, so does the need to train faculty to transition to the online format. Also, with a number of these online courses being taught by adjuncts who work full time or teach elsewhere, perhaps the biggest challenge is finding the time to complete the training. How do we cover the points they need to know in the little time they have? And how do we make the training relevant or meaningful to those who have had previous training elsewhere?
At American International College, we sought to address these challenges by using a competency-based badge system to support our training program for faculty teaching online. Faculty complete tasks to demonstrate their mastery of Blackboard skills, online pedagogy and AIC policies and procedures, with each set of tasks leading to a badge. Faculty who already have experience in these areas can demonstrate their competencies in alternate methods to earn their badges. This presentation will provide an overview of AIC’s redeveloped training program for faculty teaching online. We will address considerations of structure and implementation, as well as the benefits for full- and part-time faculty.
This document discusses a module on portfolio methods for recognition of prior learning (RPL). It outlines the learning outcomes for the module, which include describing key terms related to course structure and assessment, reflecting on past learning and how it relates to module outcomes, describing the RPL process, preparing a sample RPL portfolio, and discussing the relevance of portfolios for community capacity building. The document provides information on RPL, the national qualifications framework, presenting a case through a portfolio to gain credits or exemptions based on prior learning, and the different types of learning that can be assessed through RPL, including formal, non-formal and informal learning.
2016-12-07 Development of a Project/Problem Based Learning Body of Knowledge ...Yoshiki Sato
Our main goal in this study was to address the difficulty of facilitating problem-solving learning in schools, so that all facilitators can deliver effective facilitation of Problem/Project Based Learning (PBL) at a consistent level of quality.
This paper discusses a `Project/Problem Based Learning Body of Knowledge (PBLBOK)' that was developed to enable facilitation suitable for learning scenarios.
We refer to the project management method, classify causes from learners having fallen into difficult situations, and define the viewpoints, processes, and intermediate artifacts (deliverables) of PBL in the development of PBLBOK.
We then describe how we organized the knowledge to facilitate PBL.
PBLBOK provides viewpoints, processes, and deliverables of facilitation, and also provides viewpoints for the evaluation of PBL by referring to the project management framework.
We found that teachers could efficiently and effectively facilitate and evaluate PBL by using PBLBOK.
Yoshiki Sato, Atsuo Hazeyama, Youzou Miyadera:
"Development of a Project/Problem Based Learning Body of Knowledge (PBLBOK)", Proc. 2016 IEEE 8th International Conference on Engineering Education (ICEED 2016), pp. 189–194, Kuala Lumpur, Malaysia, 2016.
Making Online Training More Agile with Blackboard BadgesJeremy Anderson
At American International College we implemented Blackboard accomplishments in a variety of training scenarios in order to improve tracking, accountability, and engagement. Three separate use cases are presented: online faculty certification, online student orientation, help desk cross training certification.
Presented at NERCOMP PDO Blackboard Users Group 2015.
This document outlines the steps for conducting a Continuous Improvement (CI) project. It begins with establishing a CI team that will work on the project. This includes selecting a school head, team leader, and various member roles. It then discusses gathering the Voice of the Customers through interviews and affinity diagrams to understand issues. This is followed by mapping the current process to identify any "storm clouds" or problems. The document provides examples and guidance for properly documenting each step of the CI methodology in a clear and organized manner.
This document provides guidance to school heads on building an LDM Implementation Portfolio. It contains two lessons. Lesson 1 defines what a portfolio is and identifies the contents and evidence from previous modules that can be included. Lesson 2 presents the evaluation rubric for the portfolio, including criteria and indicators. Activities guide compiling a list of evidence and discussing it in an LAC session to finalize. The portfolio will track progress of implementing learning delivery modalities and be submitted to coaches for evaluation.
This document outlines the Internet Course Development Workshop (ICDW) process at a community college. It details the three prerequisites faculty must complete before the workshop: 1) complete the Blackboard training series, 2) read the course development packet, and 3) submit an intent form. The workshop introduces concepts, models applications, and helps faculty adapt strategies from their face-to-face courses for online delivery. It covers instructional design principles across six sessions with activities to design an entire online course. Upon completion, faculty receive a stipend and certificate.
Triangulation in Teaching Assessment & learning Outcomes (2) (1)Sheema Haider
This document discusses the concept of triangulation in teaching, assessment, and learning outcomes. It defines triangulation as collecting evidence of student learning from conversations (intended learning outcomes), observations (teaching methods), and products (assessments). The document emphasizes the need to establish clear links between these three elements. It provides examples of mapping intended learning outcomes to specific teaching strategies and assessment methods. The conclusion is that faculty need education on developing intended learning outcomes, incorporating innovative teaching methods focused on the outcomes, and using assessment based on the outcomes from multiple sources.
The document provides an overview of the DegreeWorks system and its features. It discusses the timeline for launching DegreeWorks, how students and faculty will access it, and how audits will differ depending on a student's catalog year. It then describes the main components of a DegreeWorks audit and how they are organized. Finally, it outlines some of DegreeWorks' capabilities like what-if analysis, look ahead planning, and GPA calculators, as well as some limitations and plans for training sessions.
The document outlines strategies for effective course and classroom management. It discusses Fink's 12 steps for course design, which include identifying learning goals and outcomes, selecting teaching activities, and integrating feedback and assessment. It provides guidance on syllabus design, including recommended components. For classroom management, it recommends planning for the first day, making a strong impression, setting clear expectations, and dealing with difficult students or fears. The overall document provides guidance to educators on best practices for course and syllabus preparation as well as classroom management techniques.
This document discusses the differences between course and program learning outcomes. It defines learning outcomes as measurable statements about what students should know or be able to do after completing a program. Outcomes can be identified at various levels, including institutional, department, program, and course levels. Program outcomes focus on students' cumulative learning across courses and characterize the integrated knowledge and skills students develop. Course outcomes identify more specific incremental knowledge and skills students learn in individual courses. An analogy compares program outcomes to a salad where individual course outcomes are ingredients that contribute to the overall program outcomes once prepared and integrated. An example shows how course outcomes align with and support broader program outcomes for a foreign language teacher training program.
This document provides guidance on developing learning outcomes. It begins by outlining the intended learning outcomes of the workshop, which are to develop outcomes adhering to the SMART principles, critique existing outcomes, and demonstrate constructive alignment. It then defines curriculum and outlines the topics to be covered, including learning outcomes, constructive alignment, and consolidation. The document provides details on writing outcomes focusing on what students can do, guidelines for effective outcomes using Bloom's taxonomy and level descriptors, and the importance of alignment between outcomes, teaching strategies, and assessment. It includes examples and activities for writing and evaluating outcomes to ensure they are specific, measurable, attainable, relevant, and targeted.
EUR-ACE Accreditation and information gathering: PORTFOLIO DESIGN.pptxMezhoudiNesrine
The document discusses accreditation portfolios and their contents. An accreditation portfolio contains evidence that demonstrates an academic program meets accreditation standards. It includes course syllabi, materials, assessments, evaluations and other documentation for individual courses. The portfolio checklist outlines all required components for each course, such as exams, assignments, solutions, student samples and evaluation forms. Learning outcomes must be written to clearly articulate the skills and knowledge students will gain. Faculty are expected to submit high quality, complete portfolios and continuously improve courses based on assessment results and recommendations.
This document discusses using the Desire2Learn learning management system and other tools to design competency-based courses. It outlines goals of creating course templates, instructor training, and customized design. It also discusses using tools like release conditions, assessments, and an intelligent agent to accommodate competency-based education. The document provides background on competency-based education and compares it to traditional models. It evaluates using Desire2Learn to structure courses and discusses potential enhancements using additional tools.
Planning and Creating Rubrics in D2L BrightspaceD2L Barry
Presentation at the D2L Connection: South Carolina Edition on May 10, 2019 at Piedmont Technical College, Newberry Campus.
Presenters Susan Moore and Neil Griffin of Spartanburg Community College.
Spring planning 2012 course outcomes faculty (mike)
1.
2. The extent each “program” or unit is contributing to the overall achievement of students and the institution as a whole.
3. The standards that Daytona State College students must meet prior to graduating:
• critical/creative thinking
• cultural literacy
• communication
• information and technical literacy
4. • Each must tie back to an Institutional Outcome
• Minimum of 3 measures
• Can be evaluated in any course in the program
• Critical Course Activities are associated using Rubrics
10. You will need to have identified:
– Critical Learning Activities
– Which Activities support which Course Outcomes
– What Rubrics will be used to assess the Activities
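As a planning aid, the mapping the slide asks for (activities → outcomes → rubrics) can be sketched as a small data structure. This is a hypothetical worksheet in Python, not a D2L feature; the activity and rubric names are made up for illustration, while the outcome names come from the institutional outcomes listed earlier in the deck.

```python
# Hypothetical planning worksheet (not a D2L feature): map each Critical
# Learning Activity to the Course Outcomes it supports and the Rubric
# that will assess it. Activity and rubric names are illustrative.
activity_map = {
    "Research Paper": {
        "outcomes": ["Communication", "Information and Technical Literacy"],
        "rubric": "Written Work Rubric",
    },
    "Case Discussion": {
        "outcomes": ["Critical/Creative Thinking"],
        "rubric": "Discussion Rubric",
    },
}

def activities_for(outcome):
    """List every activity that supports the given outcome."""
    return [name for name, info in activity_map.items()
            if outcome in info["outcomes"]]

print(activities_for("Communication"))  # ['Research Paper']
```

Filling in a table like this before the workshop answers all three bullets above at once: which activities are critical, which outcomes each supports, and which rubric assesses it.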
11. The following should already be completed:
– Learning Activities in your course should be created
– Rubrics should be created
• See the Rubrics for Grading training material for more information
• Available in the Florida Online Academy & Resources
– Course Outcome structure should be created
• See your Department Chair/Program Coordinator for more information or questions
12. We will look at five examples of implementing course outcomes:
1. Course Builder
2. Dropbox
3. Discussion
4. Gradebook Item
5. Quiz
30. • Rubrics are only viewable to students when attached to the gradebook item
• Instructors can grade using the rubric
• Must manually enter the grade from the rubric into the grade item
• See Grade Item setup for details
4/9/2012 31
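Because the rubric score is not transferred automatically, the number to type into the grade item is simply the total of the rubric's criterion scores. A minimal sketch of that arithmetic, assuming a rubric scored in points per criterion (the criterion names and point values here are hypothetical):

```python
# Hypothetical helper (not a D2L feature): since the rubric score is not
# posted to the grade item automatically, total the criterion scores to
# get the number the instructor enters into the grade item by hand.

def rubric_total(criterion_scores):
    """Sum the points earned across all rubric criteria."""
    return sum(criterion_scores.values())

# Illustrative criterion names and points:
scores = {"Thesis": 18, "Evidence": 25, "Mechanics": 9}
print(rubric_total(scores))  # 52 - the value to enter in the grade item
```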
32. • Edit the Grade Item
• Select the Objectives Tab
33. • Select the “Associate Course Outcomes” button
• Locate the desired course outcome in the list
• Identify with a checkmark
• Select the “Add Selected” button
34. • Select the “Add Assessment Method” link
• Choose the Assessment Type – Numeric or Rubric
• If Numeric – enter the Achievement Threshold score
• If Rubric – select the desired Rubric
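The two assessment methods on this slide can be illustrated with a short sketch. This is not D2L's API; the class names, level names, and threshold logic are hypothetical, modeling only the rule the slide describes: a Numeric method compares a score against the Achievement Threshold, while a Rubric method decides by the rubric level earned.

```python
from dataclasses import dataclass

# Hypothetical model of an outcome assessment method (not the D2L API).
# Numeric: the outcome is achieved when the score meets the Achievement
# Threshold. Rubric: achievement depends on the rubric level earned.

@dataclass
class NumericMethod:
    threshold: float  # the Achievement Threshold score

    def achieved(self, score: float) -> bool:
        return score >= self.threshold

@dataclass
class RubricMethod:
    achieving_levels: frozenset  # rubric levels that count as achievement

    def achieved(self, level: str) -> bool:
        return level in self.achieving_levels

numeric = NumericMethod(threshold=80.0)
rubric = RubricMethod(achieving_levels=frozenset({"Meets", "Exceeds"}))

print(numeric.achieved(85.0))    # True: 85 meets the 80-point threshold
print(rubric.achieved("Meets"))  # True: "Meets" counts as achievement
```

Either way, the method only records whether a competency was demonstrated; as the deck notes elsewhere, it does not itself assign points in the gradebook.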
39. • Select the “Associate Course Outcomes” button
• Locate the desired course outcome in the list
• Identify with a checkmark
• Select the “Add Selected” button
40. • Associate the entire quiz or select a section of questions
• Select the “Edit Associated Questions” button
• Select the desired questions and/or sections
• Select the “Associate Questions” button
41. • Select the “Add Assessment Method” link
• Choose the Assessment Type – Numeric or Rubric
• If Numeric – enter the Achievement Threshold score
• If Rubric – select the desired Rubric
42. • Florida Online Academy Resources
• Faculty Innovation Center, Bldg 210/Room 206F
• HELP = (386) 506-3849
• online@daytonastate.edu