A FISA for the qualification focuses on the extent to which a learner can demonstrate applied competence.
Applied competence, in terms of the NQF, is evidenced through a learner's ability to integrate concepts, ideas and actions in authentic, real-life contexts, and is expressed as practical, foundational and reflexive competence.
This document discusses the key characteristics of a good assessment instrument: practicality, reliability, validity, authenticity, and washback. It provides details on each characteristic, including definitions, types, and factors that can affect them. For example, it explains that a reliable instrument should provide consistent scores over time and discusses sources of unreliability like student factors or test administration issues. The document also lists different types of validity like face validity and content validity. Overall, it serves as a comprehensive overview of the essential features an effective assessment should possess.
Writing exam questions is one of the most important parts of teaching nursing. Nurse educators need a clear roadmap of what to include when developing those exams. This presentation provides direction on how to develop a test blueprint and how to revise questions.
This document provides an overview of item response theory (IRT), including key concepts like item response functions, item parameters, and assumptions of IRT models. IRT aims to measure latent traits through analysis of item-level data, allowing item and person parameters to be estimated independent of specific test administrations. The document outlines the 1, 2, and 3 parameter IRT models and how they relate item characteristics like difficulty and discrimination to the probability of endorsing an item based on trait level. Key assumptions like unidimensionality and local independence are also discussed.
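The item response functions described above can be sketched directly. Below is a minimal illustration of the 3-parameter logistic model, where the 2PL and 1PL models fall out as special cases; all parameter values and examinee trait levels here are hypothetical, chosen only to show the shape of the relationship.

```python
import math

def irt_probability(theta, a=1.0, b=0.0, c=0.0):
    """Probability of a correct response under the 3PL model.

    theta: examinee trait level
    a: discrimination, b: difficulty, c: pseudo-guessing (lower asymptote)
    Setting c=0 gives the 2PL model; c=0 and a=1 gives the 1PL (Rasch) model.
    """
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))

# An examinee whose trait level equals the item difficulty answers a
# 2PL item correctly with probability 0.5.
print(irt_probability(0.0, a=1.2, b=0.0))
# With a guessing parameter (c=0.2), the same examinee's probability rises.
print(irt_probability(0.0, a=1.2, b=0.0, c=0.2))
```

The probability is increasing in theta, which is what makes the item discriminate between lower- and higher-trait examinees.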
1. The document discusses different types of test scores including raw scores, percentiles, stanines, standard scores, and grade level scores.
2. It also explains the key differences between criterion-referenced tests and norm-referenced tests. Criterion-referenced tests measure specific skills defined by objectives, while norm-referenced tests measure broad skills to rank students against others.
3. The document provides details on how each type of test score is calculated and interpreted, and what each type aims to convey about a student's performance.
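As an illustration of how such scores are computed, the sketch below derives percentile ranks, standard (z) scores, and stanines from raw scores. The norm-group data are hypothetical, and this particular percentile-rank convention (percent strictly below) is one of several in use.

```python
import statistics

def percentile_rank(score, scores):
    """Percent of scores in the norm group falling below the given score."""
    below = sum(1 for s in scores if s < score)
    return 100.0 * below / len(scores)

def z_score(score, scores):
    """Standard score: distance from the mean in standard-deviation units."""
    mean = statistics.mean(scores)
    sd = statistics.pstdev(scores)
    return (score - mean) / sd

def stanine(score, scores):
    """Map a z-score onto the 1-9 stanine scale (mean 5, SD 2)."""
    return max(1, min(9, round(z_score(score, scores) * 2 + 5)))

# Hypothetical norm group of raw scores.
norm_group = [45, 50, 55, 60, 62, 65, 70, 75, 80, 88]
print(percentile_rank(70, norm_group))   # percent of the group scoring below 70
print(round(z_score(70, norm_group), 2)) # standard score for a raw score of 70
print(stanine(70, norm_group))           # corresponding stanine
```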
This document provides guidance on writing purpose statements for qualitative, quantitative, and mixed methods research studies. It defines what a purpose statement is and its key components. Examples are given for purpose statements in qualitative studies like phenomenology, case studies, and ethnographies. Examples are also given for quantitative studies like surveys and experiments. Finally, examples are provided for different mixed methods designs, including convergent, explanatory sequential, and exploratory sequential designs. The document offers scripts and guidelines for writing effective purpose statements for different research approaches.
Item analysis is a statistical technique used to select and reject test items based on their difficulty and ability to discriminate between more and less capable examinees. It involves arranging scores, separating examinees into groups, calculating the difficulty value and discrimination index for each item, and using guidelines to identify items for inclusion or exclusion from the final test based on having appropriate difficulty levels and positively discriminating between high- and low-scoring examinees. The goal is to analyze each item and modify or remove items that perform poorly on difficulty or discrimination measures in order to create a more valid and reliable final test form.
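The item-analysis procedure described above can be sketched in code. This illustration uses the common upper/lower 27% extreme-group method for the discrimination index; the response data are hypothetical.

```python
def item_analysis(responses):
    """Difficulty and discrimination index for each item.

    responses: one list of 0/1 item scores per examinee.
    Uses the upper/lower 27% extreme-group method: examinees are ranked
    by total score, and the top and bottom groups are compared per item.
    """
    n = len(responses)
    k = max(1, round(0.27 * n))
    ranked = sorted(responses, key=sum, reverse=True)
    upper, lower = ranked[:k], ranked[-k:]
    results = []
    for item in range(len(responses[0])):
        ru = sum(r[item] for r in upper)   # correct in upper group
        rl = sum(r[item] for r in lower)   # correct in lower group
        difficulty = (ru + rl) / (2 * k)   # proportion correct (p-value)
        discrimination = (ru - rl) / k     # ranges from -1.0 to +1.0
        results.append((difficulty, discrimination))
    return results

# Hypothetical data: 10 examinees, 3 items (1 = correct, 0 = incorrect).
data = [
    [1, 1, 0], [1, 1, 1], [1, 0, 1], [1, 1, 0], [1, 0, 0],
    [0, 1, 1], [1, 0, 0], [0, 0, 1], [0, 1, 0], [0, 0, 0],
]
for i, (p, d) in enumerate(item_analysis(data), start=1):
    print(f"Item {i}: difficulty={p:.2f}, discrimination={d:.2f}")
```

Items with a negative or near-zero discrimination index would be flagged for revision or removal, consistent with the guidelines the summary describes.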
The document discusses Krathwohl's Taxonomy of Affective Domain, which categorizes learning objectives into five main levels - Receiving, Responding, Valuing, Organization, and Characterization. It provides definitions and examples for each level. Instructional objectives are defined as specific, measurable, short-term and observable student behaviors that ensure learning is focused on reaching overall goals. The document also discusses key concepts in the affective domain, defining attitudes as mental predispositions to act in favor or disfavor of something based on cognitions, affect, behavioral intentions, and evaluations.
Dr. G. Sawarkar: Qualitative & Quantitative Evaluation Method (Gaurav Sawarkar)
This document discusses quantitative and qualitative methods for evaluation. Both methods provide important information but are rarely used alone and generally provide the best overview when combined. Quantitative data answers questions like "how many" through surveys and statistics, while qualitative data answers questions like "why" or "how" through interviews, observations, and focus groups. Both have strengths like precision for quantitative and context for qualitative, but also limitations such as generalizability for qualitative and complexity for quantitative. Using both methods together can provide the fullest picture of a project for evaluation purposes.
The document discusses key qualities of measurement devices: validity, reliability, practicality, and backwash effect. It defines each quality and provides examples. Validity refers to what a test measures, and includes content, construct, criterion-related, concurrent, and predictive validity. Reliability is how consistent measurements are, including equivalency, stability, internal, and inter-rater reliability. Practicality means a test is easy to construct, administer, score and interpret. Backwash effect is a test's influence on teaching and learning.
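As an illustration of internal reliability, the sketch below computes Cronbach's alpha, one common internal-consistency estimate (chosen here for illustration; the document may discuss other coefficients). The item scores are hypothetical.

```python
import statistics

def cronbach_alpha(item_scores):
    """Cronbach's alpha: an internal-consistency reliability estimate.

    item_scores: one list of scores per item, all over the same examinees.
    alpha = k/(k-1) * (1 - sum(item variances) / variance of total scores)
    """
    k = len(item_scores)
    item_vars = [statistics.pvariance(item) for item in item_scores]
    totals = [sum(per_person) for per_person in zip(*item_scores)]
    return (k / (k - 1)) * (1 - sum(item_vars) / statistics.pvariance(totals))

# Hypothetical data: five examinees answering a three-item scale.
items = [
    [4, 3, 5, 2, 4],
    [4, 2, 5, 3, 4],
    [3, 3, 4, 2, 5],
]
print(round(cronbach_alpha(items), 3))
```

Higher values indicate that the items vary together, i.e. that the instrument measures consistently across its items.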
201510060347 Topic 1: What Is Curriculum (Sharon Kaur)
The document discusses key concepts related to curriculum including definitions of curriculum, hidden curriculum, and three approaches to curriculum - content, product, and process. It also covers foundations of curriculum in areas like philosophy, psychology, sociology and history. The stages of curriculum development including planning, design, implementation are outlined. Finally, the relationship between curriculum and instruction is explained noting that curriculum is the 'what' of education while instruction is the 'how'.
Theory-Based Models of Curriculum Development (ahmedabbas1121)
This document summarizes models of curriculum development by Brown (1995) and Richards (2001). Both models include needs analysis, setting learning outcomes, selecting and preparing teaching materials, instruction, and evaluation. Brown's model also includes testing as a key element, while Richards' model separately includes situation analysis and course organization. The document concludes by combining the two models into a summary of core curriculum development processes that should generally be present.
Grading & Reporting Systems Complete Presentation (G Dodson)
This document compares and contrasts norm-referenced and criterion-referenced grading systems. Norm-referenced systems compare students to each other, which can make learning competitive. Criterion-referenced systems compare students to learning standards regardless of peers' performance. The document discusses advantages and disadvantages of each system and argues that criterion-referenced, or standards-based, grading more accurately measures individual student learning.
Validity refers to how well a test measures the construct it intends to measure. There are several types of validity: face validity considers test relevance; content validity examines adequate sampling of behaviors; criterion-related validity correlates test scores with outcome criteria; and construct validity judges inferences about test-takers' standing on an underlying construct. Validity is crucial for establishing a test's appropriateness and usefulness for different populations.
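Criterion-related validity is typically quantified as a correlation between test scores and an outcome criterion. Below is a minimal sketch using the Pearson correlation; the admission-test and GPA figures are hypothetical.

```python
import statistics

def pearson_r(x, y):
    """Pearson correlation between test scores and a criterion measure."""
    mx, my = statistics.mean(x), statistics.mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / (statistics.pstdev(x) * statistics.pstdev(y) * len(x))

# Hypothetical data: admission-test scores vs. later first-year GPA.
test_scores = [50, 55, 60, 65, 70, 75, 80]
gpa = [2.1, 2.4, 2.3, 2.9, 3.0, 3.2, 3.6]
print(round(pearson_r(test_scores, gpa), 3))  # the validity coefficient
```

A correlation computed against a criterion measured at the same time would be concurrent validity; against a future criterion, as here, it would be predictive validity.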
Standardized tests are designed to have consistent objectives and criteria across different forms of the test. They measure students' mastery of prescribed grade-level competencies. Developing a standardized test involves determining its purpose, designing test specifications, creating and selecting test items, evaluating items, specifying scoring procedures, and ongoing validation studies. The document outlines these steps and provides examples of standardized language proficiency tests like TOEFL and IELTS.
Norm-Referenced and Criterion-Referenced Interpretation (Denmark Aleluya)
This is my report for Assessment of Student Learning 1 on referencing frameworks, test interpretation, and related topics. I hope it can help anyone who needs a little information.
The document discusses eliminating irrelevant barriers and unintended clues in objective test items that can undermine the validity of an assessment. Factors like complex sentences, difficult vocabulary, and unclear instructions are construct-irrelevant barriers that limit students' responses. Test items should measure the intended learning outcomes and not other irrelevant abilities. Care should be taken to avoid ambiguity, wordiness, biases and other barriers that prevent students from demonstrating their actual achievement levels. Clues within items could allow students without sufficient learning to still answer correctly, preventing the items from functioning as intended.
The Nature and Scope of Curriculum Development (Monica P)
MST Course Design and Dev't
(class report(s)/discussion(s))
DISCLAIMER: I do not claim ownership of the photos, videos, templates, etc. used in this slideshow.
This document discusses curriculum evaluation based on several models and frameworks. It describes Stufflebeam's CIPP model which evaluates the context, inputs, processes, and products of a curriculum. The document also provides a suggested six-step plan for conducting curriculum evaluation, including focusing on objectives, collecting information, organizing data, analyzing information, reporting findings, and providing continuous feedback for improvements.
The document discusses differential item functioning (DIF), which occurs when test takers from different subgroups who are matched on ability have unequal chances of correctly answering an item. The document outlines various statistical methods like comparing item characteristic curves and logistic regression for detecting DIF using item response theory. It also discusses judgmental reviews where panels examine tests for bias in areas like stereotyping, vocabulary level and format. Finally, it covers types of item bias like uniform and nonuniform bias and methods for detecting bias like using contingency tables.
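One standard contingency-table method for detecting uniform DIF is the Mantel-Haenszel common odds ratio, sketched below as an illustration (the document may employ other techniques). The counts at each score stratum are hypothetical.

```python
def mantel_haenszel_odds_ratio(strata):
    """Common odds ratio across score strata (Mantel-Haenszel).

    Each stratum holds counts for one total-score level:
    (ref_correct, ref_wrong, focal_correct, focal_wrong).
    A value near 1.0 suggests no uniform DIF; values far from 1.0
    suggest the item favors one matched group over the other.
    """
    num = den = 0.0
    for a, b, c, d in strata:
        n = a + b + c + d
        num += a * d / n
        den += b * c / n
    return num / den

# Hypothetical counts at three ability strata (matched on total score).
strata = [
    (30, 10, 20, 20),   # low scorers
    (40, 5, 30, 15),    # middle scorers
    (45, 2, 40, 7),     # high scorers
]
print(round(mantel_haenszel_odds_ratio(strata), 2))
```

Because both groups in each stratum are matched on total score, a large common odds ratio points to the item itself functioning differently, not to an overall ability difference.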
This presentation covers the intricacies of Item Response Theory (IRT). I made it to explain IRT concepts to my lab research group at the University of Minnesota. The contents are drawn from various sources, so apologies for the poor design of the presentation.
This document provides an overview and demonstration of Oracle iStore, an e-commerce application. It discusses key features such as product catalog management, pricing, checkout, order tracking, and integration with other Oracle applications. The demonstration shows the order creation process from adding products to a cart, entering shipping/payment details, order submission, and order tracking on the customer frontend. It also shows how the order is created in the backend Oracle applications.
This presentation aims to provide knowledge and understanding of Bloom's Taxonomy: the cognitive domain, the original taxonomy, the evaluation of the taxonomy, the levels of Bloom's Taxonomy, types of knowledge, and the benefits and uses of Bloom's Taxonomy.
This document discusses test construction and interpretation. It defines a test as a way to examine someone's knowledge or skills to determine what they have learned. Testing measures the level of ability, skill or knowledge achieved. The document lists learning objectives about defining tests, understanding test characteristics, and applying concepts like planning, administering and analyzing tests. It also discusses why we need tests, such as for evaluation and assessment purposes to document and improve knowledge. Students are given activities to identify different types of tests and prepare a sample test for their subject.
Variables: Types and their Operational Definitions
Unit III: Problem identification, formulation of research objectives and hypotheses (as part of the M.Optom curriculum of Pokhara University, Nepal)
This document discusses educational testing and assessment, including definitions of tests and assessments, factors that make them appealing to policymakers, the history of test-based educational reform over the past four decades, national and international assessments, technological advances, public concerns, effects on students, and issues of fairness. It covers a wide range of topics related to educational testing at a high level.
General Framework for Setting Examination Papers and Test Papers (William Kapambwe)
The document provides guidance on developing test specifications and examination papers, including defining test content and mapping domains, using taxonomies to classify learning objectives, and selecting assessment methods that align with domains of learning. It discusses Bloom's taxonomy and provides examples of verbs for different cognitive levels. Assessment options are described for various learning domains, including cognitive, affective, and psychomotor. Frameworks like Romiszowski's are presented for relating knowledge and skills to test construction. The importance of congruence between learning outcomes and assessment methods is emphasized.
CURRICULUM DEVELOPMENT PROCESSES AND MODELS.pptx (NestleJaneLeones)
This document discusses curriculum development processes and models. It defines curriculum development as a planned, purposeful, and systematic process for creating improvements in education. There are typically four phases: curriculum planning, designing, implementing, and evaluating. Curriculum models provide formats for curriculum design to meet unique needs and purposes. Some well-known models discussed are Tyler's model, Taba's model, and the Galen Saylor and William Alexander model. Tyler's model focuses on educational purposes, experiences, organization, and evaluation. Taba's model takes a grassroots approach involving teacher input. The Galen Saylor model focuses on goals, objectives, curriculum design, implementation, and evaluation.
The document discusses definitions and perspectives of learner autonomy in language education. It defines autonomy as the capacity for learners to take control of their own learning, including managing learning, making cognitive choices, and choosing content. The document outlines different versions or perspectives of autonomy, such as technical, psychological, and political. It argues that learner autonomy is important now due to factors like globalization and the need for self-directed, lifelong learning.
This document discusses different types of instruments that can be used to assess an ICT integrated lesson, including written tests to evaluate skill achievement, checklists and rubrics for complex tasks and products, Likert scales to measure attitude outcomes, and observation instruments for recording behavior frequencies. It provides examples of each type and links to resources about checklists, rubrics, and observation schedules.
This document provides guidance on designing effective rubrics for assessing student performance. It defines what a rubric is and compares rubrics to checklists. Rubrics can be holistic, assessing the overall quality of work, or analytic, assessing various criteria separately. The document recommends determining clear criteria and descriptors, involving students, limiting criteria to key aspects, using concrete language and examples, and pilot testing rubrics. Rubrics should be task-specific and altered based on experience to improve clarity and usefulness for students.
This chapter discusses process-oriented, performance-based assessment. It emphasizes that assessment should reflect an understanding of learning as multidimensional, integrated, and revealed over time through performance. Process-oriented assessment focuses on directly observable student behaviors and competencies stated as objectives. Tasks should be carefully designed to highlight targeted competencies and be interesting for students. Scoring rubrics are used to assess student performance on tasks according to specific criteria. Rubrics define levels of performance and can be analytic, assessing each criterion separately, or holistic, providing an overall assessment. Process-oriented assessment provides a mechanism for consistent, objective evaluation and useful feedback to improve learning.
Process and product performane-based assessment Dianopesidas
This document discusses process-oriented and product-oriented performance-based assessment. Process-oriented assessment evaluates the actual task performance and does not emphasize the output. It aims to understand the processes a person uses to complete a task. Product-oriented assessment focuses on the final product and output, and evaluates it based on levels of performance like novice, skilled, and expert. Both types of assessment require carefully designing learning tasks and creating rubrics with criteria, levels of performance, and descriptors to consistently score students.
Process oriented performance-based assessmentrenarch
Performance assessment involves observing and judging a student's demonstration of skills or competencies through tasks like creating a product, responding to a prompt, or giving a presentation. It emphasizes a student's ability to apply their knowledge and skills to produce their own work. Performance assessments typically require sustained effort over multiple days and involve explaining, justifying, and defending ideas. They rely on trained evaluators to score student work using pre-specified criteria and standards. While performance assessments integrate assessment with learning and provide formative feedback, they can be difficult to score reliably and require significant time from teachers and students.
The document discusses Bloom's Revised Taxonomy, which organizes thinking skills into six levels from basic to more complex. It outlines the original and revised terms, with changes made to better reflect active thinking processes. Examples of classroom activities are provided for each of the six levels - Remembering, Understanding, Applying, Analyzing, Evaluating, and Creating.
CBA_2_Assessment_Introduction to assessmentlinetnafuna
This document defines assessment and different types of assessment. It discusses assessment as the process of collecting evidence of student learning and making judgements about whether learning standards have been met. Formative assessment is used for learning and getting feedback, while summative assessment evaluates learning at the end of a period. Assessment can be criterion-referenced against standards or norm-referenced by comparing students. Competency-based assessment is evidence-based and criterion-referenced. Training outcomes clearly describe the skills and knowledge students will demonstrate after training. Different training approaches like off-the-job, on-the-job, and distance learning each have advantages and limitations.
Linda Meyer - Moderation of FSAs - best practice (1)Linda Meyer
The document provides guidelines for setting and moderating final integrated summative assessments (FSAs). It discusses the roles and responsibilities of certification partners, assessors, moderators, and the Services SETA ETQA in setting FSAs aligned with qualifications and ensuring quality. Moderators must ensure FSAs adequately assess applied competence at the appropriate NQF level, cover relevant standards, and use a variety of assessment methods fairly. Blooms taxonomy is referenced to guide cognitive complexity. Thorough moderation processes involving internal and external moderators are described.
This document provides an overview of the qualifications pack for an assessor role in the IT-ITES sector. It includes sections on key contacts, qualifications details, a glossary of relevant terms, and the various national occupational standards units. The role of an assessor is to deliver accredited training and assessment services in the training and vocational education sector. The document outlines the minimum education, training, and experience requirements for the role and lists the applicable national occupational standards.
This document discusses summative evaluation, which is defined as evaluating the effectiveness of instructional materials with target learners. It describes the expert judgement phase of summative evaluation, which involves evaluating aspects of candidate instruction like congruence with organizational needs. The field trial phase then determines instructional effectiveness with the target group through outcomes analysis and management analysis. Key aspects of both phases are outlined.
The document discusses curriculum evaluation models and processes. It defines curriculum evaluation as assessing the strengths and weaknesses of a curriculum to improve its effectiveness. Several models are described, including Tyler's objectives-centered model which evaluates curriculum elements like objectives and student outcomes. Stufflebeam's CIPP model assesses curriculum context, inputs, processes, and products. The stakeholder-responsive model focuses on curriculum implementation from stakeholders' perspectives. Scriven's consumer-oriented model uses criteria and checklists to conduct formative or summative evaluations. Overall, the document outlines different approaches to curriculum evaluation to enhance learning outcomes.
It refers to the collection of information on which judgment might be made about the worth and the effectiveness of a particular programme. It includes making those judgments so that decision might be made about the future of programme, whether to retain the program as it stand, modify it or throw it out altogether.
This document discusses assessment of students in clinical practice. It addresses who can supervise and assess students, methods of assessment, ensuring reliability and validity, and giving feedback. It emphasizes the importance of assessing practical skills to evaluate competence. The document also discusses challenges like inconsistent assessors and outlines standards and frameworks to support learning and assessment in practice according to the NMC. It provides guidance on assessment processes, preparing students, and managing difficult situations like failing a student.
This document provides guidance for developing quality student performance measures. It introduces a rubric to help teachers self-assess measures they create. The rubric examines measures across three strands: design, build, and review. It outlines steps to rate measures in each strand and provides examples of quality indicators like clearly stating the measure's purpose, aligning items to standards, and establishing cut scores for performance levels. The goal is to help teachers build rigorous, valid and reliable measures of student achievement.
This document outlines the 9 step process for setting performance standards on educational assessments. The steps include: 1) choosing a representative panel, 2) choosing a standard setting method, 3) preparing performance category descriptions, 4) training panelists, 5) compiling ratings, 6) obtaining performance standards, 7) presenting consequences data, 8) revising standards if needed, and 9) compiling validity evidence. The purpose of setting performance standards is to communicate expected performance levels on assessments and they can serve purposes such as certification, prediction, motivation, or merely describing scale categories. Effective training and obtaining varied stakeholder input is important for developing defensible standards.
This document discusses subject benchmarking in higher education. It defines subject benchmarking as a process that creates standards for measuring academic performance in a subject area. Subject benchmark statements describe the expected knowledge, skills, and abilities of graduates in a particular field. They provide guidance for curriculum development and program review to help ensure quality and standards. The document outlines the key components of subject benchmarking, including educational aims, learning outcomes, content specifications, teaching strategies, assessment methods, and performance criteria that can be used to benchmark programs.
The document provides information about self-assessment for academic programs. It includes definitions of key terms like quality, quality assurance, and assessment. It outlines the objectives and benefits of self-assessment. The process of generating a Self-Assessment Report is described in 9 steps. Criteria for self-assessment are outlined, including 8 criteria related to areas like program mission/objectives, curriculum, facilities, faculty, and support. Methods for scoring criteria using rubrics are also explained.
This document discusses the evaluation of nursing education programs. It defines key terms like evaluation, nursing education program, and program evaluation theory. It describes the purposes of program evaluation as determining how program elements interact and influence effectiveness, and realizing program missions, goals and outcomes. The document outlines various evaluation models, tools, and theories that can be used in nursing education program evaluation. It also discusses evaluating different aspects of the program like curriculum, teaching effectiveness, outcomes, and the environment.
This document provides guidance for developing quality student performance measures created by teachers. It outlines a process using a rubric to self-assess performance measures. The rubric examines measures across three strands: design, build, and review. It is intended to help teachers create rigorous measures aligned to content standards that reliably assess student achievement. Scores are reported as raw points and performance levels to communicate student mastery of content. The goal is to establish a foundational process for developing high-quality performance tasks and assessments.
This document provides information about an upcoming training workshop on designing assessment tools to measure learning outcomes in maritime education. The workshop will be held on July 27-28, 2022 in Iloilo City, Philippines and aims to help participants gain knowledge on how to develop assessment tools. Several key principles of assessment are discussed, including that assessment should drive learning, people learn by doing and reflecting, and repetition leads to mastery. The document outlines the goal and agenda for the workshop, which will include discussions, constructing new tools, peer review, revising existing tools, and validating tools for content and alignment. Examples of assessment tools such as rubrics, checklists, concept maps, and performance tests are also provided. The workshop seeks to
A Novel Expert Evaluation Methodology Based On Fuzzy LogicLeslie Schulte
This document describes a novel expert evaluation methodology based on fuzzy logic. It aims to reproduce the cognitive mechanisms of human expert evaluators. The methodology involves fuzzifying inputs like question difficulty and student answers. Basic rules are then used to connect these fuzzy inputs to fuzzy outputs, mimicking how experts might evaluate. This allows capturing the ambiguous and uncertain nature of evaluation. The methodology provides flexible and adaptive evaluation that better reflects human reasoning compared to traditional binary approaches. It could form the basis for intelligent evaluation expert systems.
The document discusses the concepts of constructive alignment and standards-based assessment in education. It is summarized as follows:
1. Constructive alignment is an approach where learning outcomes are defined before teaching, and teaching/assessment methods are designed to achieve those outcomes and assess student achievement of standards. Assessment criteria are referenced to the defined outcomes.
2. The focus is on what and how students learn rather than just the topics taught. Learning outcomes describe what students should be able to do, like apply procedures or compare theories.
3. The goal is to support student meaning and learning through a well-designed, coherent course where intentions and assessments are aligned based on standards of what students should learn and be able to demonstrate.
Summative evaluation is used to evaluate the effectiveness of instructional programs and student learning. It has two phases - an expert judgement phase and a field trial phase. The expert judgement phase involves subject matter experts analyzing the content, design, feasibility, and current use of instructional materials to determine if they have the potential to meet learning objectives. The field trial phase tests the materials on target learners in real-world settings to assess the impact on learning, job performance, and the organization. The purpose of summative evaluation is to determine if students or organizations achieved intended learning outcomes from the instruction.
This professional development workshop teaches participants how to create meaningful rubrics that assess student learning. The workshop will provide tools for identifying the goals and objectives of student work and assessing various aspects of student products. Participants will investigate theories of rubric design, review online rubric creators, and practice developing rubrics that enhance student work while focusing on assignment goals and objectives. The workshop agenda includes discussions on starting with learning outcomes, formative and summative assessment, and designing valid and reliable rubrics using both holistic and analytic approaches.
This document discusses the relationship between quality management systems (QMS) and project management plans in the context of skills training programs. It argues that QMS and project management should be integrated rather than viewed in isolation. It then outlines a 12-step process for implementing a skills program from start to finish. Finally, it examines how to better integrate QMS and project management by dividing the process into 5 crucial steps: pre-training, training, assessment, moderation, and close out/reporting. Policies and procedures are developed for each step to quality assure the implementation of the project plan.
Similar to Final Integrated Summative Assessment (20)
The document is a newsletter from the Chartered Institute for the Management of Assessment Practice (CIMAP) providing an update on their activities.
The key points are:
1. CIMAP is actively involved in shaping the skills development landscape in South Africa through participation in various quality councils and SETA task teams.
2. The skills development landscape is undergoing changes to advance public FET provision and introduce more coherence to private provision under the Quality Council for Trades and Occupations (QCTO).
3. CIMAP is in the process of registering as a professional body with SAQA and an assessment quality partner with the QCTO.
The Chartered Institute for the Management of Assessment Practice (CIMAP) aims to professionalize the field of assessment practice in South Africa. As a non-profit organization and professional body, CIMAP represents practitioners in education, training, and development. It offers various membership levels and professional designations to practitioners including trainers, assessors, moderators, and more. CIMAP works to establish standards and conduct for the profession through activities like managing a code of conduct and continuing professional development programs.
This document provides information about the Chartered Institute for the Management of Assessment Practice (CIMAP). It includes the board members and regional conveners. It contains a message from the board welcoming members and wishing them a happy festive season. It also provides notices about upcoming events and deadlines.
The document summarizes proposed amendments to South African labour laws contained in the Labour Relations Amendment Bill of 2012. Key points include:
- Requiring unions and employers to hold ballots before strikes or lockouts and obtain compliance certificates.
- Granting some organizing rights to non-majority unions.
- Limiting fixed-term contracts to 6 months unless employers can justify longer periods.
- Regulating contract work and temporary employment agencies to prevent abuse of short-term contracts.
- Broadening employer definitions to prevent avoidance of legal obligations.
The document outlines the history and development of South Africa's qualifications framework from 1922 to 2008. It established apprenticeships initially and then expanded to include more workers and mandatory skills levies. Post-2008, it established three quality councils for different qualification levels and the Quality Council for Trades and Occupations to set standards for qualifications from levels 1-10. The process for developing occupational qualifications is also described.
MACRO & INTERNATIONAL PERSPECTIVE OF OCCUPATIONAL BASED LEARNING CIMAP
The power of institutionalized education despite national acceptance of the NQF
Inherent resistance to change in societies.
The impact of how we have all experienced education
How much has fundamentally changed in education ?
The perceived social value of formal education.
The document is Karen Deller's doctoral thesis submitted in partial fulfilment of the requirements for a Doctor of Philosophy degree from the University of Johannesburg. The thesis examines the design and implementation of a Recognition of Prior Learning (RPL) programme for the South African insurance sector. It presents the background and context for RPL and the research. The methodology used a qualitative programme evaluation approach involving interviews and case studies to understand participants' experiences of the RPL process. Key findings from the data analysis are also summarized.
The document discusses the challenges of implementing recognition of prior learning (RPL) in workplace assessments. It notes that while RPL aims to promote fairness and transformation, assessing experiential learning from different contexts poses difficulties. Specifically, workplace knowledge may differ in language and presentation from academic standards. Traditional assessments also tend to evaluate learning individually rather than considering collaborative workplace learning. Finally, it can be hard for candidates to validate extensive experience through documentation for RPL claims. Overall, the document examines how RPL aspirations can be difficult to achieve in practice due to contextual and methodological challenges.
The Chartered Institute for the Management of Assessment Practice (CIMAP) is a professional body supporting the needs of all stakeholders involved in Assessment, Moderation and ETD Practice in South Africa.
CIMAP is the ideal platform for the professionalisation of assessment practice.
The Chartered Institute for the Management of Assessment Practice (CIMAP) is a professional body supporting the needs of all stakeholders involved in Assessment, Moderation and ETD Practice in South Africa.
CIMAP is the ideal platform for the professionalisation of assessment practice.
The document is a thesis submitted by L. Meyer in partial fulfillment of the requirements for a Doctor of Philosophy degree in Management of Technology and Innovation from The Da Vinci Institute. The thesis explores accreditation and external moderation frameworks for occupationally directed education and training providers in South Africa. It examines challenges faced by providers in the accreditation and moderation processes and proposes alternative frameworks. The research methodology used a qualitative grounded theory approach, collecting data through focus groups, questionnaires, interviews and reviewing accreditation and moderation reports. The analysis suggests that South Africa's current frameworks inhibit innovation and require reforms to be more supportive of social and educational transformation.
The Chartered Institute for the Management of Assessment Practice (CIMAP) is a professional body supporting the needs of all stakeholders involved in Assessment, Moderation and ETD Practice in South Africa.
How to Setup Default Value for a Field in Odoo 17Celine George
In Odoo, we can set a default value for a field during the creation of a record for a model. We have many methods in odoo for setting a default value to the field.
Andreas Schleicher presents PISA 2022 Volume III - Creative Thinking - 18 Jun...EduSkills OECD
Andreas Schleicher, Director of Education and Skills at the OECD presents at the launch of PISA 2022 Volume III - Creative Minds, Creative Schools on 18 June 2024.
Level 3 NCEA - NZ: A Nation In the Making 1872 - 1900 SML.pptHenry Hollis
The History of NZ 1870-1900.
Making of a Nation.
From the NZ Wars to Liberals,
Richard Seddon, George Grey,
Social Laboratory, New Zealand,
Confiscations, Kotahitanga, Kingitanga, Parliament, Suffrage, Repudiation, Economic Change, Agriculture, Gold Mining, Timber, Flax, Sheep, Dairying,
A Visual Guide to 1 Samuel | A Tale of Two HeartsSteve Thomason
These slides walk through the story of 1 Samuel. Samuel is the last judge of Israel. The people reject God and want a king. Saul is anointed as the first king, but he is not a good king. David, the shepherd boy is anointed and Saul is envious of him. David shows honor while Saul continues to self destruct.
Gender and Mental Health - Counselling and Family Therapy Applications and In...PsychoTech Services
A proprietary approach developed by bringing together the best of learning theories from Psychology, design principles from the world of visualization, and pedagogical methods from over a decade of training experience, that enables you to: Learn better, faster!
CapTechTalks Webinar Slides June 2024 Donovan Wright.pptxCapitolTechU
Slides from a Capitol Technology University webinar held June 20, 2024. The webinar featured Dr. Donovan Wright, presenting on the Department of Defense Digital Transformation.
A Free 200-Page eBook ~ Brain and Mind Exercise.pptxOH TEIK BIN
(A Free eBook comprising 3 Sets of Presentation of a selection of Puzzles, Brain Teasers and Thinking Problems to exercise both the mind and the Right and Left Brain. To help keep the mind and brain fit and healthy. Good for both the young and old alike.
Answers are given for all the puzzles and problems.)
With Metta,
Bro. Oh Teik Bin 🙏🤓🤔🥰
2. Background to the Final Integrated Summative Assessment
Current vs. QCTO requirements;
Historic experience and context;
SETAs, professional bodies / statutory bodies (EAAB – PDE);
The future: provider vs. Assessment Quality Partner (AQP) requirements.
3. SABPP – FISA
The FISA requirement is contained in the SABPP ETQA Assessment and Moderation Policy:
“In the case of full HR qualifications (including HR learnerships), the assessment cycle also includes a Final Integrated Summative Assessment (FISA) against the exit level outcomes of the HR qualification.”
4. Occupational Qualifications
Occupational qualifications are a feature of the revised NQF and are designed to address skills needs in the labour market.
They will replace legacy occupational qualifications, such as those for the trades, and work-focused unit standards-based qualifications.
5. Assessment is defined as…
“A structured process for gathering evidence and making judgments about an individual’s performance in relation to registered national standards and qualifications” (SAQA, 2001: 16).
6. Assessments
Formative assessment is “assessment that takes place during the process of learning and teaching”, with the purpose of supporting learning (SAQA, 2001: 26).
Summative assessment is used to make a “judgement about [learner] achievement” at a particular point (usually at the end) of a learning programme, to measure progress in terms of the requirements of national standards and qualifications so that credits can be awarded (SAQA, 2001: 26).
Integrated assessment is a form of assessment which permits the learner to demonstrate applied competence and which uses a range of formative and summative assessment methods (NSB Regulations, SA, 1998: 4).
7. Integrated Assessment
Integrated assessment is meaningful if there are clear relationships between the purpose statement, exit level outcomes and integrated assessment of the qualification.
In addition to the competence assessed to achieve the unit standards, learners must demonstrate that they can achieve the outcomes in an integrated manner.
8. Integrated Assessment
“… should [assess] the ability to combine key foundational, practical and reflexive competence with some critical cross-field outcomes and apply these in a practical context for a defined purpose. The context should be relevant to real life application” (SAQA/CIDA, 2003: 62).
9. Guidelines for Integrated Assessment
Assessment approaches must confirm a learner’s ability to integrate knowledge.
Assessments must focus on a learner’s ability to demonstrate applied knowledge (applied competence).
According to the NQF, evidence of applied competence is the learner’s ability to integrate concepts, ideas and actions in authentic, real-life contexts.
10. Guidelines for Integrated Assessment
Assessing a number of outcomes together;
Assessing a number of assessment criteria together;
Assessing a number of unit standards together;
Using a combination of assessment methods and instruments for an outcome or outcomes;
11. Guidelines for Integrated Assessment
Collecting naturally occurring evidence (such as in a workplace setting);
Acquiring evidence from other sources such as supervisors’ reports, testimonials, portfolios of work previously done, logbooks, journals, etc. (SAQA, 2001: 55).
12. Guidelines for Integrated Assessment
Integrated assessment should offer an opportunity to demonstrate the depth and breadth of learning at all stages, and in a variety of ways, throughout the learning programme.
Assessments are important moments in the course of learning programmes.
13. Guidelines for Integrated Assessment
“However, educators should guard against over-assessment, where each outcome (or worse, each assessment criterion) [is assessed separately, resulting in] hundreds of little fragmented, meaningless assessments of the checklist type, taking up valuable learner and educator time without anything of value being learnt” (LGWSETA, 2004: 13).
14. Under the QCTO system
Key quality assurance processes are articulated as:
Monitoring and support of learner progress during programme implementation; and
Final integrated summative assessment of learners for occupational competence.
15. Final Integrated Summative Assessment
The FISA refers to the process of making judgments about achievement on a qualification or learnership.
It is carried out when a learner is ready to be assessed, at the end of a learnership or full qualification.
The FISA must be set by a constituent assessor and moderated by a constituent moderator.
16. Final Integrated Summative Assessment
The FISA should not carry more weight than the providers’ continuous assessment.
The objective of the FISA is to confirm the standard across providers and across qualifications – STANDARDISATION.
17. Final Integrated Summative Assessment
A database of sample assessment instruments and tasks is included;
Examples of good integrated assessments are combined;
Formats are designed for recording evidence, e.g. templates and logbooks.
18. Final Integrated Summative Assessment
A FISA for the qualification focuses on the extent to which a learner can demonstrate applied competence.
Applied competence, in terms of the NQF, is evidenced through the learner’s ability to integrate concepts, ideas and actions in authentic, real-life contexts, and is expressed as practical, foundational and reflexive competence.
19. Final Integrated Summative Assessment
Practical competence: the demonstrated ability to perform a set of tasks and actions in authentic contexts.
Foundational competence: a demonstrated understanding of what we are doing and why we are doing it.
Reflexive competence: the demonstrated ability to integrate our performance with our understanding, so that we are able to adapt to changed circumstances and explain the reasons behind these adaptations.
20. Final Integrated Summative Assessment
More than one FISA instrument should be developed.
The moderator must ensure that the assessment instrument is aligned with the purpose and rationale of the qualification, and that it complies with the applicable SAQA level descriptor.
Draft assessment instruments must fall within the ambit of, and be in accordance with, the guideline document.
Bloom’s taxonomy and the SAQA level descriptors must be considered.
21. Bloom’s Taxonomy
1. Knowledge: remembering previously learned material; recall (facts or whole theories); bringing to mind.
Terms: defines, describes, identifies, lists, matches, names.
2. Comprehension: grasping the meaning of material; interpreting (explaining or summarising); predicting outcomes and effects (estimating future trends).
Terms: convert, defend, distinguish, estimate, explain, generalise, rewrite.
3. Application: the ability to use learned material in a new situation; applying rules, laws, methods and theories.
Terms: changes, computes, demonstrates, operates, shows, uses, solves.
22. Bloom’s Taxonomy
4. Analysis (taking apart the known)
Use: graph, survey, diagram, chart, questionnaire, report…
Observed behaviour: classify, categorise, dissect, advertise, survey.
5. Synthesis (putting things together in another way)
Use: article, radio show, video, puppet show, inventions, poetry, short story…
Observed behaviour: combine, invent, compose, hypothesise, create, produce, write.
6. Evaluation (judging outcomes)
Use: letters, group with discussion panel, court trial, survey, self-evaluation, value, allusions…
Observed behaviour: judge, debate, evaluate, editorialise, recommend.
23. Design FISA
The FISA instrument is aligned;
Multiple versions;
Security;
Fit for purpose (Bloom’s taxonomy, level descriptors, etc.);
Technical specifications;
Valid, authentic, current, reliable and sufficient.
24. Final Integrated Summative Assessment
The moderator must ensure that the FISA instrument is aligned with the purpose and rationale of the qualification, and that it complies with the applicable SAQA level descriptor.
The FISA must be of such a length that a well-prepared learner will be able to answer it comfortably within the time allocated, with reasonable time remaining for revision.
25. Final Integrated Summative Assessment
Standard of the paper (validity, reliability and fairness);
Questions of various types, e.g. multiple choice, case studies, paragraphs, data response, essays, etc.;
Where candidates choose between questions: are the options of equal difficulty?
Correct distribution in terms of cognitive levels (Bloom’s taxonomy);
Overall: how does the standard of the assessment instrument compare with those of other qualifications and with previous assessment instruments?
26. Final Integrated Summative Assessment
Intellectually challenging, allowing for creative responses from candidates;
Suitability of examples and illustrations;
Relationship between weighting allocation, degree of difficulty and time allocation.
27. Technical Criteria
Cover page with all relevant details, such as time, unit standards and instructions to candidates;
Clarity of instructions to candidates;
Layout: learner friendly;
Correct numbering.
28. Technical Criteria
Mark/weighting allocation clearly indicated;
Illustrations, graphs, tables, etc. must be of print-ready quality;
Complete memorandum with model answers, including mark/weight allocation and provision for alternatives.
29. Registered Unit Standards and Qualifications
Relevance to the registered qualification, i.e. unit standards, specific outcomes and assessment criteria;
Levels of questions;
Coverage of unit standards;
Weighting and spread of content.
30. Conceptual construct example
The assessment instrument must deal with conceptual constructs such as:
reasoning ability;
the ability to communicate;
the ability to translate from the verbal to the symbolic;
the ability to compare and contrast;
the ability to see causal relationships;
the ability to express an argument clearly.
31. Cognitive skills
Are these constructs representative of the best and latest developments in the training of this knowledge field?
Are the questions challenging, allowing for creative responses from candidates?
The suggested application of cognitive levels for an NQF Level 4 qualification is:
10% knowledge;
20% comprehension;
40% application;
30% analysis, synthesis and evaluation.
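The suggested cognitive-level weightings above can be turned into a concrete mark allocation for a paper. A minimal sketch, assuming an illustrative helper (the function name, the rounding rule, and the drift adjustment are my own, not part of any SAQA or QCTO toolset):

```python
# Suggested cognitive-level weightings for an NQF Level 4 qualification,
# as per the guideline percentages above.
SUGGESTED_WEIGHTS = {
    "knowledge": 0.10,
    "comprehension": 0.20,
    "application": 0.40,
    "analysis/synthesis/evaluation": 0.30,
}

def allocate_marks(total_marks, weights=SUGGESTED_WEIGHTS):
    """Return whole-mark allocations per cognitive level for a paper."""
    allocation = {level: round(total_marks * w) for level, w in weights.items()}
    # Rounding can drift from the intended total; absorb the difference
    # into the largest category so the paper still adds up.
    drift = total_marks - sum(allocation.values())
    if drift:
        largest = max(allocation, key=allocation.get)
        allocation[largest] += drift
    return allocation

print(allocate_marks(100))
# For a 100-mark paper: 10 knowledge, 20 comprehension, 40 application,
# 30 analysis/synthesis/evaluation.
```

A setter or moderator could use such a tally to check the distribution of an existing paper against the guideline, rather than to generate it.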
32. Language and bias
Correct terminology;
Appropriate language register for the level of the learner;
Avoidance of gender, race, cultural and provincial bias;
Clear and unambiguous instructions within questions, e.g. “list”, “describe”.
33. Moderation
Is there evidence that the paper has been moderated?
Quality, standard and relevance of the moderator’s input.
34. Alignment Grid
The availability of an alignment grid showing the alignment of the instrument with the purpose, rationale, assessment criteria and NQF level descriptor of the qualification.
Focus on exit level outcomes (ELOs).
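An alignment grid of this kind can be sketched as a simple table mapping each question to an exit level outcome and a Bloom's level. The data and helper below are hypothetical, for illustration only; real grids follow the template prescribed by the relevant ETQA:

```python
# Hypothetical alignment grid: one row per question in the FISA instrument,
# recording which exit level outcome (ELO) and Bloom's level it targets.
questions = [
    {"q": 1, "elo": "ELO1", "bloom": "knowledge", "marks": 10},
    {"q": 2, "elo": "ELO2", "bloom": "application", "marks": 25},
    {"q": 3, "elo": "ELO1", "bloom": "analysis", "marks": 15},
]

def uncovered_elos(grid, required_elos):
    """Return the exit level outcomes not assessed by any question."""
    covered = {row["elo"] for row in grid}
    return sorted(set(required_elos) - covered)

print(uncovered_elos(questions, ["ELO1", "ELO2", "ELO3"]))  # ['ELO3']
```

A moderator reviewing the grid could run such a coverage check before signing off, confirming that every exit level outcome is assessed somewhere in the instrument.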
35. Overall impression
Fairness of the assessment instrument as a whole;
Will the assessment instrument as a whole assess the achievement of the purpose of the qualification and the exit level outcomes?
Recommendations for improvement;
Monitoring and review.
36. FISA construct example
Knowledge assessment;
PoE (completed by the learner);
Workplace-based project;
Expert practitioner panel interview and presentation;
Focus on work-integrated learning practice.
37. Thank you
www.cimap.co.za
The Chartered Institute for the Management of Assessment Practice (CIMAP)
A registered NPO