The document discusses test construction and effective test questions. It covers developing a table of specifications that details the content, cognitive level, and number of items to include based on the number of class sessions. The document also discusses different types of test items, such as multiple-choice, true/false, matching, essay, and performance-based items. It provides guidance on when to use objective versus subjective items and how to match learning objectives to the appropriate item type to improve validity. Technical qualities of good tests, such as cognitive complexity, content quality, meaningfulness, and language appropriateness, are also covered.
Multiple choice questions can assess different levels of knowledge from simple recall to interpretation and problem solving. They provide flexibility through variations like correct answer, best answer, and interpretive exercises using stimulus materials. Analysis of multiple choice questions focuses on scoring models to determine student achievement and item analysis to evaluate how well questions functioned.
The document discusses techniques for conducting item analysis to improve test items and instruction. It defines item analysis as a process that examines student responses to individual test items to assess item quality and identify areas for improvement. Item analysis provides valuable information for revising weak items, emphasizing areas of content students struggle with, and gaining insight into student understanding. The document outlines key item analysis statistics, such as difficulty, discrimination, and reliability, which measure item and test quality. It also describes the typical steps in performing item analysis, including calculating statistics, evaluating items, and revising tests.
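The difficulty and discrimination statistics mentioned above can be illustrated with a short sketch. The response data and the 27% upper/lower grouping below are hypothetical, and the discrimination measure shown is the simple upper-minus-lower index, one of several in common use:

```python
# Item analysis sketch: difficulty (p-value) and discrimination (upper-lower index).
# Response data below is hypothetical; 1 = correct, 0 = incorrect.

def item_difficulty(responses):
    """Proportion of students answering the item correctly (0.0 to 1.0)."""
    return sum(responses) / len(responses)

def item_discrimination(responses, total_scores, fraction=0.27):
    """Upper-minus-lower discrimination index.

    Students are ranked by total test score; the index is the item's
    difficulty in the top group minus its difficulty in the bottom group.
    """
    n = max(1, round(len(responses) * fraction))
    ranked = sorted(range(len(responses)), key=lambda i: total_scores[i], reverse=True)
    upper = [responses[i] for i in ranked[:n]]   # strongest students
    lower = [responses[i] for i in ranked[-n:]]  # weakest students
    return item_difficulty(upper) - item_difficulty(lower)

# Hypothetical class of 10 students on one item:
item = [1, 1, 1, 0, 1, 0, 1, 0, 0, 0]
totals = [48, 45, 44, 40, 38, 35, 30, 28, 25, 20]
print(item_difficulty(item))              # 0.5 -> moderate difficulty
print(item_discrimination(item, totals))  # 1.0 -> strong students got it right
```

A difficulty near 0.5 with a clearly positive discrimination index usually marks a well-functioning item; a negative index flags an item for revision.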
Development of Classroom Assessment Tools (Ako Cheri)
This document outlines the steps for developing classroom assessment tools, including constructing a table of specification (TOS). It defines a TOS as a two-way chart that describes test topics and the number of items per topic. The document explains how to prepare a TOS by listing topics, determining objectives, specifying time spent on topics, determining percentage allocation per topic, and distributing items to objectives. An example shows how to calculate the percentage and number of items for a specific topic.
This document outlines guidelines for effective test construction presented by Arnel O. Rivera. It discusses the importance of evaluation and preparing valid, reliable, and usable tests. The presentation covers preparing a table of specifications, writing multiple-choice and situational judgement questions, and general test construction tips like avoiding negative stems. Overall, the key message is that preparing good tests takes time and effort but plays an important role in evaluating both students and teachers.
This work experience sheet summarizes the applicant's two most recent teaching positions. Their current position is as an elementary teacher at the Lord Jesus Learning Institute since January 2013. Previously, they worked as a preschool teacher at the Harvestshare Tutorial Learning Center from July 2008 to June 2013. In both roles, their duties included teaching students, monitoring progress, maintaining records, supervising projects, and maintaining positive relationships.
The document provides examples of different types of test questions that can be used to assess student learning, including multiple choice, true/false, matching, and essay questions. It includes sample questions for each type as well as instructions for how to structure and score the questions. The final section provides a sample lesson plan and rubric for an essay activity asking students to classify foods into food groups and provide examples.
The document discusses key aspects of an effective learning environment for students. It emphasizes that the learning environment should include a well-arranged classroom with comfortable furniture, adequate space, and displays for student work. The classroom should also be clean, well-lit, ventilated, and free from distractions. Positive interactions between the teacher and students are important to create a conducive atmosphere for learning. An ideal learning environment encourages active learning, discovery of personal meaning, differences in students, tolerance of mistakes, and cooperative self-evaluation.
The document discusses guidelines for constructing and scoring completion and essay type tests. It provides examples of completion tests involving filling in blanks with words, letters, or phrases. Essay tests are described as allowing for assessment of higher-order thinking by requiring students to organize their thoughts in writing. The document outlines objectives, types, and rules for scoring essays, including specifying criteria, maintaining anonymity, and having multiple graders to reduce bias.
This document outlines the assessment and rating system for learning outcomes under the K to 12 Basic Education Curriculum. It discusses the philosophy, nature, levels, tools, and frequency of assessment. Assessment will be standards-based and focus on knowledge, skills, understanding, and performance. Student proficiency will be rated on a scale and determine promotion. Rubrics will provide clear guidelines for evaluating student work. Formative and summative assessments will track progress and measure proficiency. The system aims to support quality learning through self-reflection and accountability.
This document is a table of specification for a math exam that will assess students on various algebraic concepts. It outlines the major content areas, time allotted for each section, number of test items, and how items will assess different cognitive levels including remembering, understanding, applying, analyzing, and evaluating. The exam will have 50 multiple choice items testing topics such as algebraic expressions, polynomials, linear equations, and problem solving over a total time of 60 minutes.
The document provides guidance for writing test items and creating a table of specification. It explains that a table of specification is a two-way chart that describes the topics to be covered on a test and the number of items or points associated with each topic, to ensure all elements of a course of study are properly assessed. It also defines different levels of thinking skills - knowledge, comprehension, application, analysis, synthesis, and evaluation.
This document discusses assessment and test construction. It explains that assessment determines if educational goals are being met and helps teachers evaluate what is being taught and learned. It also discusses summative assessment, the grading system, and common student observations about tests. Key principles of test construction are outlined, including validity, reliability, discrimination, and comprehensiveness. The document emphasizes the importance of the Table of Specification in guiding test construction and providing a test map that describes topic coverage and cognitive levels.
This document provides an outline for a workshop on test construction and preparation. It discusses establishing a table of specifications to guide test development and ensure a balanced assessment. The principles of constructing high-quality test questions aligned to Bloom's taxonomy are explained. Participants will have an opportunity to construct and develop test questions during a workshop session. Common student complaints about test questions are also addressed, such as questions being unrelated to lessons or unclear. Factors to consider when preparing good tests are highlighted.
Person A scored an 87 on a physics test with a class average of 80 and standard deviation of 5. Person B scored an 82 on a test with a class average of 73 and standard deviation of 6. The document discusses different types of test scores such as raw scores, percentile ranks, and standard scores including z-scores, t-scores, stanines, and normal curve equivalents. It also discusses interpreting test scores using norm-referenced and criterion-referenced approaches.
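The opening comparison becomes concrete once the raw scores are converted to z-scores; the t-score conversion shown below uses the conventional mean of 50 and standard deviation of 10:

```python
def z_score(raw, mean, sd):
    """Standard score: how many standard deviations a raw score sits from the mean."""
    return (raw - mean) / sd

def t_score(z):
    """t-score on the conventional scale: mean 50, standard deviation 10."""
    return 50 + 10 * z

# Person A: 87 on a test with class mean 80 and SD 5
# Person B: 82 on a test with class mean 73 and SD 6
z_a = z_score(87, 80, 5)
z_b = z_score(82, 73, 6)
print(z_a, z_b)                                         # 1.4 1.5
print(round(t_score(z_a), 1), round(t_score(z_b), 1))   # 64.0 65.0
```

Although Person A's raw score is higher, Person B stands farther above the class mean in standard deviation units, which is exactly the comparison a norm-referenced interpretation asks for.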
The document outlines the six steps to prepare a table of specification for a test: 1) list the topics, 2) determine the objectives, 3) specify the time spent on each topic, 4) calculate the percentage allocation for each topic, 5) determine the number of test items for each topic, and 6) distribute the items among the objectives. It provides an example calculating that 20% of a 50-item test should cover the topic "Early Filipinos and their Society" because it was taught for 2 of the 10 hours spent on the unit as a whole.
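Steps 4 and 5 reduce to a single proportion: a topic's share of items equals its share of instructional time. A sketch using the document's figures follows; apart from "Early Filipinos and their Society", the topic names and hours are hypothetical filler to make the 10 hours add up:

```python
def allocate_items(hours_per_topic, total_items):
    """Distribute test items to topics in proportion to class time spent on each."""
    total_hours = sum(hours_per_topic.values())
    allocation = {}
    for topic, hours in hours_per_topic.items():
        share = hours / total_hours
        # (percentage of the test, number of items) for this topic
        allocation[topic] = (round(share * 100), round(share * total_items))
    return allocation

hours = {
    "Early Filipinos and their Society": 2,  # from the document's example
    "Spanish Colonization": 5,               # hypothetical topics filling the
    "The Propaganda Movement": 3,            # remaining 8 hours of the unit
}
result = allocate_items(hours, total_items=50)
print(result["Early Filipinos and their Society"])  # (20, 10): 20% of the test, 10 items
```

Because of rounding, the item counts may not sum exactly to the intended total; a common fix is to adjust the largest topic by an item or two.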
This document provides guidelines for room examiners and proctors administering pen and paper tests. It outlines important reminders, definitions of terms, do's and don'ts, and procedures for before, during and after the test. Examiners are instructed to arrive on time, follow strict protocols, monitor examinees closely for cheating, and report any irregularities. Examinees must present valid ID and health documents for admission and comply with dress code and technology restrictions. Seating is arranged with distance between examinees and identity is verified before admission to ensure test integrity.
This performance monitoring and coaching form tracks an educator's progress and development over time. It documents critical incidents, their impact on teaching and student learning, and action plans for improvement. Dates are included alongside descriptions of lessons, student performance on summative tests, time management challenges, interventions for struggling students, effective teaching strategies, integrating technology, and incorporating higher-order thinking skills. Signatures from the rater and ratee are included to acknowledge progress.
This document discusses classroom assessment and grading procedures. It defines formative and summative assessment, with formative used to guide instruction and summative to evaluate learning. Summative assessment has three components: written work, performance tasks, and quarterly exams. A standards-based grading system is used, with 60 as the minimum passing grade. Scores from each component are calculated as percentages based on highest possible scores, then weighted according to the subject to determine final grades.
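The computation described in the last sentence can be sketched as follows. The component names mirror the document; the particular weights and scores are hypothetical, since the actual weights vary by subject:

```python
def component_percentage(raw_score, highest_possible):
    """Percentage score for one component, per the document's description."""
    return raw_score / highest_possible * 100

def final_grade(components, weights):
    """Weighted sum of component percentage scores; weights should total 1.0."""
    return sum(component_percentage(raw, hps) * weights[name]
               for name, (raw, hps) in components.items())

# Hypothetical learner: (raw score, highest possible score) per component
components = {
    "written_work":      (36, 40),
    "performance_tasks": (45, 50),
    "quarterly_exam":    (42, 50),
}
# Illustrative weights only; the real weighting differs by subject
weights = {"written_work": 0.3, "performance_tasks": 0.5, "quarterly_exam": 0.2}

grade = final_grade(components, weights)
print(round(grade, 1))  # 88.8 -> passing, since the minimum passing grade is 60
```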
This document provides instructions and examples for constructing a completion test, which requires test-takers to fill in blanks with words or phrases. It outlines several rules to follow, such as giving reasonable context for the desired response, avoiding clues in wording or blank length, and arranging items to facilitate scoring. Sample test items are included to demonstrate proper formatting. The goal is to design a test that accurately measures knowledge without unintentionally cueing respondents.
This document is a mid-year review form for teacher Jefferson B. Torres. It evaluates his performance over the rating period based on key result areas (KRAs) like content knowledge and pedagogy, learning environment and diversity of learners, and curriculum and planning. Each KRA has objectives with timelines, weights, means of verification, and ratings. The form contains the ratings and remarks from both the ratee (Mr. Torres) and rater (Principal Luisito de Guzman). It will be used to assess Mr. Torres' performance at the mid-year point and identify any areas for improvement in the second half of the rating period.
The document discusses the importance and process of creating a Table of Specification (TOS) for constructing tests. A TOS is a two-way chart that describes the topics and objectives to be assessed on a test and the number of items or points associated with each. It helps teachers ensure their tests have content validity by covering the appropriate material. The key steps in preparing a TOS include identifying topics and objectives, determining time spent on each topic, calculating the percentage allocation for topics, and distributing test items to objectives based on importance.
The document provides guidelines for the implementation of the K to 12 Program's Senior High School strand. It outlines the essential learning competencies, number of hours of instruction, and assessment and certification requirements for each of the core, major, and vocational-technical-livelihood tracks and subjects. The document aims to help teachers and schools properly implement the new senior high school curriculum.
This document discusses guidelines for constructing classroom tests. It outlines the basic principles that should guide teachers, such as measuring all instructional objectives and using appropriate test items. It also describes the attributes of a good test, including validity, reliability, objectivity, and fairness. The document provides steps for constructing classroom tests, such as identifying objectives, preparing a table of specifications, writing test items, and sequencing items. It includes an example of how to prepare a table of specifications and allocate test items to topics. Finally, it lists general guidelines for writing test items, such as avoiding ambiguity and providing one correct answer.
The document discusses the process of developing a table of specifications (TOS) for assessment instruments. It defines a TOS as a table that aligns objectives, instruction, and assessment. The purpose of a TOS is to guide what topics should be included and how many items should assess each level. To prepare a TOS, teachers select learning outcomes, outline subject matter, decide on items per subtopic, and create a two-way chart listing objectives, class time spent, percentages, number of items, and item specifications based on Bloom's taxonomy. Tips for the TOS include avoiding excessive detail, focusing on major ideas, choosing an appropriate cognitive taxonomy, and weighing the distribution against student level and test constraints.
The document provides guidelines for classroom layout and facilities in schools. It discusses the standard facilities required for regular classrooms which include furniture like tables, chairs, and cabinets as well as equipment like blackboards, bulletin boards, and first aid cabinets. Specific requirements are provided for different grade levels. Instructions are also given for arranging the classroom space and allocating areas for reading corners, health corners, and the teacher's table. Broader school facilities like administrative offices, home economics rooms, and sanitation facilities are also outlined. Color schemes, usage guidelines, and safety measures for educational structures are defined.
Guidelines in Preparing Different Types of Tests (Jervis Panis)
This document discusses guidelines for preparing different types of tests to assess learning outcomes. It describes four levels of learning outcomes: knowledge, process, understanding, and product/performance. Each level can be assessed using different tools. Objective tests like multiple choice, true/false, and matching are described. Essay tests that allow subjective responses are also covered. The key aspects of a good test discussed are validity, reliability, and usability. Principles for constructing clear test items are provided.
The document summarizes a monitoring and evaluation report from Naga Central Elementary School. It includes sections on curriculum implementation, education resources, physical facilities, and concerns. Some key findings are that classrooms are adequately equipped but some need repairs, most competencies are being taught but some students still lack mastery, and the biggest issues are a lack of teachers and high dropout rates due to financial problems.
This document provides guidance on creating alternative-response tests, also known as true-false tests, including their definition, uses, and suggestions for constructing effective true-false items. An alternative-response test consists of declarative statements that students mark as true or false. There should be an underlined word or phrase that needs correcting for the statement to be considered true. True-false items can measure a student's ability to identify factual statements, distinguish facts from opinions, and recognize cause-and-effect relationships. When constructing items, statements should be specific and avoid negatives, long sentences, multiple ideas in one statement, and trivial content. True and false statements should be about equal in length.
This document provides guidance on writing effective multiple choice exam questions. It discusses the strengths and weaknesses of multiple choice questions, describes the components of a multiple choice question, and provides tips and guidelines for writing high quality multiple choice questions that assess different levels of learning. Sample exam questions are also included to illustrate how to write questions targeting various levels of Bloom's taxonomy, from knowledge to evaluation.
The document discusses factors involved in constructing objective evaluation instruments. It describes different types of objective instruments including achievement tests, intelligence tests, diagnostic tests, formative tests, and summative tests. It also outlines the major steps for measurement including identifying what to measure, determining the appropriate design, searching for existing instruments, defining the protocol, collecting and analyzing data, and comparing results to goals. The document discusses procedures for scoring assessments, methods for recording and reporting results, and provides an assessment schedule.
The document describes a one-day training workshop on AICTE's examination reform policy. The workshop will cover key topics related to outcome-based education including designing outcome-based assessments at the program and course level. Participants will learn how to use Bloom's taxonomy to assess higher-order thinking skills and choose appropriate assessment methods. The workshop will provide guidance on developing model question papers, scoring rubrics, and connecting assessments to program outcomes in line with AICTE's examination reform policy.
Performance-Based Assessment (Assessment of Learning 2, Chapter 2), by paj261997
This document discusses performance-based assessment. It defines performance-based assessment as a direct and systematic observation of student performance based on predetermined criteria. This is presented as an alternative form of assessment to traditional paper-and-pencil tests. The document outlines key features of performance-based assessment, including greater realism and complexity of tasks, as well as greater time needed for assessment and use of judgment in scoring. It also discusses different types of performance-based assessment, developing rubrics to evaluate student performance, and the advantages and limitations of this assessment approach.
The document discusses developing assessment instruments for instructional design. It covers:
- Types of criterion-referenced tests including entry skills tests, pre-tests, practice tests, and post-tests.
- Designing criterion-referenced tests with considerations for test format, mastery levels, test item criteria, and assessing different domains.
- Alternative assessment instruments like rubrics for evaluating performances, products, and attitudes. Portfolio assessments are also discussed.
This document discusses developing effective assessment instruments. It defines assessment and criteria, and outlines objectives like describing how tests are used by instructional designers. It also covers types of criterion-referenced tests, designing tests for different domains, determining mastery levels, writing test items, formats, and evaluation. Key aspects include relating all assessment back to the objectives, allowing multiple opportunities to demonstrate skills, and using tools like portfolios to assess growth over time. The goal is to create valid assessments that accurately measure learners' abilities and provide useful feedback for improving instruction.
The document discusses developing criterion-referenced assessments. It explains that criterion-referenced assessments directly measure skills described in behavioral objectives and focus on gauging learner performance and instructional quality. The document provides guidance on writing test items, developing different types of assessments, setting mastery criteria, and ensuring assessments are congruent with objectives and instructional analyses. It emphasizes the importance of criterion-referenced assessments for evaluating both learners and instruction.
The document discusses developing assessment instruments for measuring learner progress and instructional quality. It covers criterion-referenced assessments that measure performance against specific standards or levels. The objectives are to describe criterion-referenced tests and different types of pre- and post-instruction assessments. It also discusses developing quality criterion-referenced test items and assessments of products, performances, and attitudes.
The document discusses developing assessment instruments for measuring learner progress and instructional quality. It describes criterion-referenced assessments that measure performance against specific standards or levels of mastery. The objectives are to describe criterion-referenced tests and how various assessment types (entry tests, pretests, practice tests, posttests) are used. It also discusses developing quality criterion-referenced test items in four categories: goal-centered, learner-centered, context-centered, and assessment-centered.
The document discusses principles of teaching methods and lesson planning. It covers traditional, time-tested, and progressive teaching methods, as well as characteristics of good methods. Variables that affect teaching methods are outlined, including objectives, students, subject matter, teachers, technology, and environment. Learning objectives and goals are defined, with objectives guiding content selection, instructional strategies, materials, and assessment. Steps for writing learning objectives are provided, focusing on observable student behaviors, conditions, and criteria. Bloom's taxonomy of cognitive, affective, and psychomotor domains is summarized, with definitions and examples of assessing the different levels.
This document discusses revising instructional materials based on formative evaluation data. It covers analyzing different types of data from formative evaluations, including learner comments, performance on tests, and time spent on instruction. Data is analyzed to identify weaknesses in the materials and instruction. Revisions are then made based on the analyzed data, with the goal of improving learner achievement and making the materials more effective. The process of revision involves reexamining objectives, instructional strategies, and other components of the materials in light of the formative evaluation findings.
The document discusses curriculum evaluation models and processes. It defines curriculum evaluation as assessing the strengths and weaknesses of a curriculum to improve its effectiveness. Several models are described, including Tyler's objectives-centered model which evaluates curriculum elements like objectives and student outcomes. Stufflebeam's CIPP model assesses curriculum context, inputs, processes, and products. The stakeholder-responsive model focuses on curriculum implementation from stakeholders' perspectives. Scriven's consumer-oriented model uses criteria and checklists to conduct formative or summative evaluations. Overall, the document outlines different approaches to curriculum evaluation to enhance learning outcomes.
Evaluation: Determining the Effect of the Intervention, by Ijaz Ahmad
This document discusses evaluation in the instructional design process. It defines assessment, measurement, and evaluation, and explains the purpose and goals of learner evaluation. The development of learner evaluations involves examining instructional goals and objectives to determine the intended change and criteria for success. Validity and reliability are also important concepts. Evaluations can be criterion-referenced or norm-referenced. The document provides guidelines for developing various assessment techniques, including objective test items, observations, portfolios, and rubrics. Formative and summative evaluation are described as important types for gathering feedback and determining effectiveness. The role of the instructional designer is to plan and implement efficient and effective evaluations.
This document discusses developing effective assessment instruments. It covers criterion-referenced versus norm-referenced tests, using portfolios for assessment, evaluating congruence between objectives and assessments, and Dick and Carey's five-step model for creating assessments. Key aspects include ensuring assessments measure the behaviors and criteria in course objectives, considering learner characteristics, and making assessments as realistic to the performance context as possible.
This document discusses performance-based assessment and rubrics. It defines performance-based assessment as a process that assesses students' skills and knowledge through demonstrations and real-world tasks, rather than through traditional tests. Some key benefits of performance-based assessment are that it encourages deeper learning and allows for creativity. The document also discusses how to develop good performance tasks, criteria, rubrics, and how to effectively evaluate student performance using rubrics.
The document discusses Bloom's Taxonomy, which is a framework for categorizing levels of cognition. It was originally developed in the 1950s to provide a common language for teachers. The taxonomy categorizes cognitive, affective, and psychomotor domains of learning. The cognitive domain moves from lower order thinking skills like remembering to higher order skills like evaluation. The affective domain involves attitudes, emotions, and values. The psychomotor domain encompasses physical skills and movement. The document also notes an updated version from 2001 that reorganized the taxonomy and highlighted interactions between cognitive processes and knowledge content.
This document discusses evaluating different aspects of a visual art education curriculum. It describes evaluating the curriculum at various stages, from initial development through classroom implementation and student learning. Key aspects to evaluate include the supported curriculum materials, the written scope and sequence, how the curriculum is taught in the classroom, and whether students achieved the intended learning goals. The document provides frameworks and processes for evaluating each part of the curriculum through methods like alignment analyses, observations, student assessments, and end-of-unit test analyses. The overall goal is to engage in ongoing evaluation to monitor curriculum quality and ensure students are mastering the objectives.
Lecture on the different types of inferential statistics and when to use them. Demonstration of encoding data in SPSS and computing statistics. Hands-on practice of encoding a small data set and computing statistics in small groups.
The document provides information on assessment of student learning. It defines key terms like tests, measurement, assessment and evaluation. It describes different modes of assessment like traditional, performance and portfolio assessments and their advantages and disadvantages. It also outlines principles of high-quality assessment like clarity of learning targets and appropriateness of methods. Additionally, it discusses diagnostic, formative, summative and placement evaluation, as well as instructional objectives and learning taxonomies. Finally, it covers different types of tests based on purpose, scope, construction and interpretation.
Assessment and Planning in Educational Technology, by Kavitha Krishnan
In an education system, it is often assumed that assessment is only for students, but the assessment of teachers is also an important part of the education system, ensuring that teachers provide high-quality instruction. The assessment process can be used to provide feedback and support for professional development, to inform decisions about teacher retention or promotion, or to evaluate teacher effectiveness for accountability purposes.
4. Activity: Arrange the following steps in preparing the table of specification used by the test constructor.
• Make a two-way chart of a table of specification
• Make an outline of the subject matter to be covered in the test
• Construct the test items
• Select the learning outcomes to be measured
• Decide on the number of items per subtopic
5. Philippine Professional Standards for Teachers (PPST)
Domain 5: Assessment and Reporting
Domain 5 relates to processes associated with a variety of assessment tools and strategies used by teachers in monitoring, evaluating, documenting and reporting learners’ needs, progress and achievement. This Domain concerns the use of assessment data in a variety of ways to inform and enhance the teaching and learning process and programs. It concerns teachers providing learners with the necessary feedback about learning outcomes. This feedback informs the reporting cycle and enables teachers to select, organize and use sound assessment processes.
6. Domain 5, Assessment and Reporting, is composed of five strands:
1. Design, selection, organization and utilization of assessment strategies
2. Monitoring and evaluation of learner progress and achievement
3. Feedback to improve learning
4. Communication of learner needs, progress and achievement to key stakeholders
5. Use of assessment data to enhance teaching and learning practices and programs
7. Domain 5, Assessment and Reporting, is composed of five strands:
1. Design, selection, organization and utilization of assessment strategies
8. Table of Specification
• A chart or table that details the content and cognitive level assessed on a test, as well as the types and emphases of test items
• Very important in addressing the validity and reliability of the test items
• Provides the test constructor a way to ensure that the assessment is based on the intended learning outcomes
• A way of ensuring that the number of questions on the test is adequate for dependable results that are not likely caused by chance
• A useful guide in constructing a test and in determining the type of test items that you need to construct
9. DO 79, S. 2003 – ASSESSMENT AND EVALUATION OF LEARNING AND REPORTING OF STUDENTS’ PROGRESS IN PUBLIC ELEMENTARY AND SECONDARY SCHOOLS, AMENDED BY DO 82, S. 2003 – AMENDMENT OF DEPED ORDER NO. 79 S. 2003
1. This Department, responding to the need for an assessment and evaluation system that truly reflects student performance, issues the following guidelines in the assessment and reporting of students’ progress:
1.1 Grades shall not be computed on the basis of any transmutation table that equates zero to a pre-selected base (such as 50 or 70) and adjusts other scores accordingly.
1.2 Grades shall be based on assessment that covers the range of learning competencies specified in the Philippine Elementary Learning Competencies (PELC) and Philippine Secondary Schools Learning Competencies (PSSLC). The test shall be designed as follows:
• 60% easy items focused on basic content and skills expected of a student in each grade or year level;
• 30% medium-level items focused on higher-level skills; and
• 10% difficult items focused on desirable content or skills that aim to distinguish the fast learners.
10. DO 33, s. 2004 – Implementing Guidelines on the Performance-Based Grading System for SY 2004-2005
3. In assessing learning outcomes, the construction of the test design should consist of 60% basic items, 30% more advanced items, and 10% items for distinguishing honor students. Questions in each category should have different weights. Test and non-test items should cover only materials actually taken up in class.
• Factual information (easy) – 60%
• Moderately difficult (average) – 30%
• Higher order thinking skills (difficult) – 10%
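The 60-30-10 distribution above is simple enough to sketch in code. A minimal illustration follows; the function name `difficulty_split` is hypothetical, not from the DepEd orders:

```python
# Sketch (hypothetical helper) of the 60-30-10 difficulty distribution
# prescribed by DO 79, s. 2003 and DO 33, s. 2004.

def difficulty_split(total_items: int) -> dict:
    """Split a test's items into easy/average/difficult per the 60-30-10 rule."""
    weights = {"easy": 0.60, "average": 0.30, "difficult": 0.10}
    return {level: round(total_items * w) for level, w in weights.items()}

print(difficulty_split(50))  # {'easy': 30, 'average': 15, 'difficult': 5}
```

A 50-item test therefore gets 30 easy, 15 average, and 5 difficult items, which matches the totals row of the sample Table of Specification shown later.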
13. Bloom’s Taxonomy in 1956 vs. Anderson/Krathwohl’s Revision in 2001
1. Knowledge (1956): Remembering or retrieving previously learned material. Examples of verbs that relate to this function: identify, relate, list, define, recall, memorize, repeat, record, name, recognize, acquire.
1. Remembering (2001): Objectives written on the remembering level involve retrieving, recalling, or recognizing knowledge from memory. Remembering is when memory is used to produce definitions, facts, or lists; to recite or retrieve material. Sample verbs: state, tell, underline, locate, match, spell, fill in the blank, identify, relate, list, define, recall, memorize, repeat, record, name, recognize, acquire.
14. Bloom’s Taxonomy in 1956 vs. Anderson/Krathwohl’s Revision in 2001
2. Comprehension (1956): The ability to grasp or construct meaning from material. Examples of verbs: restate, locate, report, recognize, explain, express, identify, discuss, describe, review, infer, conclude, illustrate, interpret, draw, represent, differentiate.
2. Understanding (2001): Constructing meaning from different types of functions, whether written or graphic messages, through activities like interpreting, exemplifying, classifying, summarizing, inferring, comparing, and explaining. Sample verbs: restate, locate, report, recognize, explain, express, identify, discuss, describe, review, infer, conclude, illustrate, interpret, draw, represent, differentiate.
15. Bloom’s Taxonomy in 1956 vs. Anderson/Krathwohl’s Revision in 2001
3. Application (1956): The ability to use learned material, or to implement material in new and concrete situations. Examples of verbs: apply, relate, develop, translate, use, operate, organize, employ, restructure, interpret, demonstrate, illustrate, practice, calculate, show, exhibit, dramatize.
3. Applying (2001): Carrying out or using a procedure through executing or implementing. Applying refers to situations where learned material is used through products like models, presentations, interviews, or simulations. Sample verbs: apply, relate, develop, translate, use, operate, organize, employ, restructure, interpret, demonstrate, illustrate, practice, calculate, show, exhibit, dramatize.
16. Bloom’s Taxonomy in 1956 vs. Anderson/Krathwohl’s Revision in 2001
4. Analysis (1956): The ability to break down or distinguish the parts of material into their components so that their organizational structure may be better understood. Examples of verbs: analyze, compare, probe, inquire, examine, contrast, categorize, differentiate, investigate, detect, survey, classify, deduce, experiment, scrutinize, discover, inspect, dissect, discriminate, separate.
4. Analyzing (2001): Breaking material or concepts into parts, determining how the parts relate or interrelate to one another or to an overall structure or purpose. Mental actions included in this function are differentiating, organizing, and attributing, as well as being able to distinguish between the components or parts. When one is analyzing, he/she can illustrate this mental function by creating spreadsheets, surveys, charts, diagrams, or graphic representations.
17. Bloom’s Taxonomy in 1956 vs. Anderson/Krathwohl’s Revision in 2001
5. Synthesis (1956): The ability to put parts together to form a coherent or unique new whole. Examples of verbs: produce, design, assemble, create, prepare, predict, modify, plan, invent, formulate, collect, set up, generalize, document, combine, propose, develop, arrange, construct, organize, originate, derive, write.
5. Evaluating (2001): Making judgments based on criteria and standards through checking and critiquing. Critiques, recommendations, and reports are some of the products that can be created to demonstrate the processes of evaluation. Sample verbs: appraise, choose, compare, conclude, decide, defend, evaluate, give your opinion, judge, justify, prioritize, rank, rate, select, support, value.
18. Bloom’s Taxonomy in 1956 vs. Anderson/Krathwohl’s Revision in 2001
6. Evaluation (1956): The ability to judge, check, and even critique the value of material for a given purpose. Examples of verbs: judge, assess, compare, evaluate, conclude, measure, deduce, argue, decide, choose, rate, select, estimate, validate, consider, appraise, value, criticize, infer.
6. Creating (2001): Putting elements together to form a coherent or functional whole; reorganizing elements into a new pattern or structure through generating, planning, or producing. Creating requires users to put parts together in a new way or synthesize parts into a new and different form or product. This process is the most difficult mental function in the new taxonomy. Sample verbs: change, combine, compose, construct, create, invent, design, formulate, generate, produce, revise, reconstruct, rearrange, visualize, write, plan.
20. Table of Specification
Item placement is grouped by cognitive level: Easy (60%) covers Remembering and Understanding; Average (30%) covers Applying; Difficult (10%) covers Analyzing, Evaluating, and Creating.

| Learning Competency | Number of Days | Number of Items | Easy (60%) | Average (30%) | Difficult (10%) |
| Basic Concepts of Fractions | 1 | 5 | 1-5 | | |
| Addition of Fractions | 1 | 5 | 6-10 | | |
| Subtraction of Fractions | 1 | 5 | 11-15 | | |
| Multiplication and Division of Fractions | 3 | 15 | 16-30 | 31-40 | |
| Application/Problem Solving | 4 | 20 | | 41-45 | 46-50 |
| Total | 10 | 50 | 30 | 15 | 5 |
21. How to Determine the No. of Items?
Formula:
No. of items = (number of days × desired total number of items) ÷ total number of days

Example:
Learning Competency: Multiplication and Division of Fractions
Number of days: 3
Desired no. of items: 50
Total no. of class sessions: 10
No. of items = (3 × 50) ÷ 10 = 15
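The allocation formula can be checked with a short script. This is a minimal sketch using the figures from the sample Table of Specification; the function and variable names (`items_for`, `days_per_competency`) are hypothetical:

```python
# Sketch of the item-allocation formula from the slide:
#   no. of items = (days spent on topic * desired total items) / total days

def items_for(days: int, desired_total: int, total_days: int) -> int:
    """Items a competency gets, proportional to class time spent on it."""
    return round(days * desired_total / total_days)

# Days per competency, taken from the sample Table of Specification
# (10 class sessions in all, 50-item test).
days_per_competency = {
    "Basic Concepts of Fractions": 1,
    "Addition of Fractions": 1,
    "Subtraction of Fractions": 1,
    "Multiplication and Division of Fractions": 3,
    "Application/Problem Solving": 4,
}
total_days = sum(days_per_competency.values())  # 10

allocation = {
    name: items_for(days, 50, total_days)
    for name, days in days_per_competency.items()
}
for name, n in allocation.items():
    print(f"{name}: {n} items")
```

The worked example checks out: Multiplication and Division of Fractions, taught for 3 of the 10 days, gets (3 × 50) ÷ 10 = 15 items, and the per-competency allocations sum back to 50.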
22. Check your understanding:
Directions: Complete the table by supplying the no. of items for each learning competency.

| Learning Competency | No. of Class Sessions | No. of Items |
| Musculo-Skeletal System | 2 | |
| Integumentary System | 2 | |
| Digestive System | 3 | |
| Respiratory System | 3 | |
| Circulatory System | 4 | |
| Total | 14 | |
23. Workshop – Making Table of Specification
Using your Curriculum Guide, make a table of specification for the Periodic Test per subject area in each quarter.
26. Objectives:
1. Identify the different rules in constructing a multiple choice test
2. Construct a multiple choice test
28. Research indicates . . .
• Teachers tend to use tests that they have prepared themselves much more often than any other type of test. (How Teaching Matters, National Council for Accreditation of Teacher Education, Oct. 2000)
• While assessment options are diverse, most classroom educators rely on text and curriculum-embedded questions, and tests that are overwhelmingly classified as paper-and-pencil. (National Commission on Teaching and America’s Future, 1996)
29. Research indicates . . .
• Formal training in paper-and-pencil test construction may occur at the preservice level (52% of the time) or as in-service preparation (21%). A significant number of professional educators (48%) report no formal training in developing, administering, scoring, and interpreting tests. (Education Week, “National Survey of Public School Teachers,” 2000)
• Students report a higher level of test anxiety over teacher-made tests (64%) than over standardized tests (30%). The top three reasons why: poor test construction, irrelevant or obscure material coverage, and unclear directions. (NCATE, “Summary Data on Teacher Effectiveness, Teacher Quality, and Teacher Qualifications,” 2001)
30. Two General Categories of Test Items
1. Objective items, which require students to select the correct response from several alternatives or to supply a word or short phrase to answer a question or complete a statement. Objective items include: multiple choice, true-false, matching, completion.
2. Subjective or essay items, which permit the student to organize and present an original answer. Subjective items include: short-answer essay, extended-response essay, problem solving, performance test items.
31. Creating a test is one of the most
challenging tasks confronting a
teacher.
Unfortunately, many of
us have had little, if any,
preparation in writing
tests.
32. What makes a test good or bad?
The most basic and
obvious answer to that
question is that good tests
measure what you want to
measure, and bad tests do
not.
33. When to use objective tests?
Objective tests are appropriate when:
The group to be tested is large and the test may
be reused.
Highly reliable scores must be obtained as
efficiently as possible.
Impartiality of evaluation, fairness, and
freedom from possible test scoring influences
are essential.
34. When to use objective tests?
Objective tests can be used to:
Measure almost any important educational
achievement a written test can measure.
Test understanding and ability to apply
principles.
Test ability to think critically.
Test ability to solve problems
35. The matching of
learning objective
expectations with
certain item types
provides a high
degree of test
validity: testing what
is supposed to be
tested.
36. Matching Learning Objectives with Test Items
Directions: Below are four test item categories labeled A, B, C, and D. Match the
learning objectives with the most appropriate test item category.
A – Objective test Item (MC, true-false, matching)
B – Performance Test Item
C – Essay Test Item (extended response)
D - Essay Test Item (short answer)
1. Name the parts of the human skeleton.
Answer: A
2. Appraise a composition on the basis of its organization.
Answer: C
3. Demonstrate safe laboratory skills.
Answer: B
37. Matching Learning Objectives with Test Items
Directions: Below are four test item categories labeled A, B, C, and D. Match the
learning objectives with the most appropriate test item category.
A – Objective test Item (MC, true-false, matching)
B – Performance Test Item
C – Essay Test Item (extended response)
D - Essay Test Item (short answer)
4. Cite four examples of satire that Twain uses in Huckleberry Finn.
Answer: D
5. Design a logo for a web page.
Answer: B
6. Describe the impact of a bull market.
Answer: C
7. Diagnose a physical ailment.
Answer: B
38. Matching Learning Objectives with Test Items
Directions: Below are four test item categories labeled A, B, C, and D. Match the
learning objectives with the most appropriate test item category.
A – Objective test Item (MC, true-false, matching)
B – Performance Test Item
C – Essay Test Item (extended response)
D - Essay Test Item (short answer)
8. List important mental attributes necessary for an athlete.
Answer: D
9. Categorize great American fiction writers.
Answer: A
10. Analyze the major causes of learning disabilities.
Answer: C
39. In general, test items should . . .
•Assess achievement of instructional objectives
•Measure important aspects of the subject
(concepts and conceptual relations)
•Accurately reflect the emphasis placed on
important aspects of instruction
•Measure an appropriate level of student
knowledge
•Vary in levels of difficulty
40. Technical Quality of a Test
1. Cognitive Complexity
The test questions will focus on appropriate intellectual
activity ranging from simple recall of facts to problem
solving, critical thinking, and reasoning.
2. Content Quality
The test questions will permit students to demonstrate their
knowledge of challenging and important subject matter.
3. Meaningfulness
The test questions will be worth students’ time and students
will recognize and understand their value.
41. Technical Quality of a Test
4. Language Appropriateness
The language demands will be clear and
appropriate to the assessment tasks and to
students.
5. Transfer and Generalizability
Successful performance on the test will allow
valid generalizations about achievement to be
made.
42. Technical Quality of a Test
6. Fairness
Student performance will be measured in a way
that does not give advantage to factors
irrelevant to school learning; scoring schemes
will be similarly equitable.
7. Reliability
Answers to test questions will be consistently
trusted to represent what students know.
45. Question 1:
Multiple choice items provide highly reliable test scores because:
A. They do not place a high degree of dependence on the student's
reading ability
B. They place a high degree of dependence on a teacher's writing ability
C. They are a subjective measurement of student achievement
D. They allow a wide sampling of content and a reduced guessing factor
Answer: D
46. Question 2:
You should:
A. Always decide on an answer before reading the alternatives
B. Always review your marked exams
C. Never change an answer
D. Always do the multiple choice items on an exam first
Answer: B
47. Question 3:
The multiple choice item on the right is
structurally undesirable because:
A. A direct question is more desirable than
an incomplete statement
B. There is no explicit problem of
information in the stem
C. The alternatives are not all plausible
D. All of the above
E. A & B only
F. B & C only
G. A & C only
H. None of the above
Answer: D
You should:
A. Always decide on an answer
before reading the
alternatives
B. Always review your marked
exams
C. Never change an answer
D. Always do the multiple
choice items on an exam first
48. Question 4:
The Question 3 multiple choice item on the
right is undesirable because:
A. It relies on an answer required in a
previous item
B. The stem does not supply enough
information
C. Eight alternatives are too many and
too confusing to the students
D. More alternatives just encourage
guessing
Answer: C
Question 3:
The multiple choice item on the
right is structurally undesirable
because:
A. A direct question is more
desirable than an incomplete
statement
B. There is no explicit problem of
information in the stem
C. The alternatives are not all
plausible
D. All of the above
E. A & B only
F. B & C only
G. A & C only
H. None of the above
49. Question 5
The right answers in multiple choice questions tend to be:
A. Longer and more descriptive
B. The same length as the wrong answers
C. At least a paragraph long
D. Short
Answer: A
50. Question 6
When guessing on a multiple choice question with numbers in the
answer
A. Always pick the most extreme
B. Pick the lowest range
C. Pick answers in the middle range
D. Always pick C
Answer: C
51. Question 7:
What is the process of elimination in a multiple choice question?
A. Skipping the entire question
B. Eliminating all answers with extreme modifiers
C. Just guessing
D. Eliminating the wrong answers
Answer: D
52. Question 8
It is unlikely that a student who is unskilled in untangling negative
statements will:
A. Quickly understand multiple choice items not written in this way
B. Not quickly understand multiple choice items not written in this
way
C. Quickly understand multiple choice items written in this way
D. Not quickly understand multiple choice items written in this way
Answer: C
53. Multiple Choice Test Items
An MC item consists of the stem, which identifies the question or problem,
and the response alternatives or choices. Usually, students are asked to
select the one alternative that best completes a statement or answers a
question.
Item Stem: Which of the following is a chemical change?
Response Alternatives: A. Evaporation of alcohol
B. Freezing of water
C. Burning of oil
D. Melting of wax
54. General Guidelines in Constructing MC Test
1. Make test items practical, with real-world
applications for the students.
2. Use diagrams or drawings when asking questions about
application, analysis, or evaluation.
3. When asking students to interpret or evaluate quotations,
present actual quotations from secondary sources like
published books or newspapers.
4. Use tables, figures, or charts when asking questions that
require interpretation.
5. Use pictures, if possible, when students are required to
apply concepts and principles.
55. General Guidelines in Constructing MC Test
6. List the choices/options vertically not horizontally.
7. Avoid trivial questions.
8. Use only one correct answer or best answer format.
9. Use three to five options to discourage guessing.
10. Be sure that distracters are plausible and effective.
11. Increase the similarity of the options to increase the difficulty of
the item.
12. Do not use “none of the above” options when asking for a best
answer.
13. Avoid using “all of the above” options. It is usually the correct
answer and makes the item too easy for the examinee with partial
knowledge.
56. Guidelines in Constructing the Stem
1. The stem should be written in question form or completion form.
Research suggests that the question form is preferable.
2. Do not leave the blank at the beginning or at the middle of the stem when
using completion form of multiple-choice type of test.
3. The stem should pose the problem completely.
4. The stem should be clear and concise.
5. Avoid excessive and meaningless use of words in the stem.
6. State the stem in positive form. Avoid using negative words like "not"
or "except." Underline or capitalize the negative word if it cannot be avoided.
Example: Which of the following does not belong to the group? or Which
of the following does NOT belong to the group?
7. Avoid grammatical clues in the correct answer.
57. Guidelines in Constructing Options
1. There should be one correct or best answer in each item.
2. List options in vertical order not horizontal order beneath the stem.
3. Arrange the options in logical order and use capital letters to
indicate each option such as A, B, C, D, E.
4. No overlapping options; keep them independent.
5. All options must be homogenous in content to increase the
difficulty of an item.
6. As much as possible the length of the options must be the same or
equal.
7. Avoid using the phrase “all of the above.”
8. Avoid using the phrase “none of the above” or “I don’t know.”
58. Guidelines in Constructing the Distracters
1. The distracters should be plausible.
2. The distracters should be about equally attractive to
examinees who do not know the answer.
3. Avoid using ineffective distracters. Replace distracter(s)
that examinees rarely or never choose.
4. Each distracter should be chosen by at least 5% of
examinees, but by fewer than choose the keyed answer.
5. Revise distracter(s) that are overly attractive to the examinees.
They might be ambiguous.
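The 5% rule and the "not more popular than the key" rule above can be checked mechanically from an item's response counts. A minimal sketch; the option labels, response counts, and keyed answer are all hypothetical:

```python
# Flag distracters chosen by fewer than 5% of examinees, or chosen
# more often than the keyed answer (all counts here are hypothetical).
responses = {"A": 1, "B": 22, "C": 12, "D": 5}  # option -> number choosing it
key = "B"
total = sum(responses.values())

for option, count in responses.items():
    if option == key:
        continue
    share = count / total
    if share < 0.05:
        print(f"Distracter {option}: only {share:.1%} chose it -> replace or revise")
    elif count > responses[key]:
        print(f"Distracter {option}: more popular than the key -> possibly ambiguous")
    else:
        print(f"Distracter {option}: {share:.1%} -> functioning")
```

With these counts, option A (2.5%) would be flagged for revision while C and D pass both checks.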
59. Advantages of MC Test
1. Measures learning outcomes from the knowledge to
evaluation level.
2. Scoring is highly objective, easy and reliable.
3. Scores are more reliable than subjective type of test.
4. Measures broad samples of content within a short
span of time.
5. Distracters can provide diagnostic information.
6. Item analysis can reveal the difficulty of an item and
can discriminate between high- and low-performing students.
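The difficulty and discrimination indices mentioned in item 6 have standard formulas: difficulty p is the proportion of examinees who answered the item correctly, and discrimination D is the difference in proportion correct between the upper and lower scoring groups (commonly the top and bottom 27%). A minimal sketch with hypothetical score data:

```python
# Item difficulty (p) and discrimination (D) for one item, using
# upper/lower 27% groups. Each tuple is (total test score, 1 if this
# item was answered correctly else 0); the data are hypothetical.
students = [(95, 1), (90, 1), (88, 1), (80, 1), (75, 0),
            (70, 1), (65, 0), (60, 1), (50, 0), (40, 0)]

students.sort(reverse=True)               # highest total scores first
k = max(1, round(len(students) * 0.27))   # size of each extreme group

upper = students[:k]
lower = students[-k:]

p = sum(c for _, c in students) / len(students)                 # difficulty
D = (sum(c for _, c in upper) - sum(c for _, c in lower)) / k   # discrimination

print(f"difficulty p = {p:.2f}")       # higher p = easier item
print(f"discrimination D = {D:.2f}")   # larger D = better separation
```

Here p = 0.60 (a moderately difficult item) and D = 0.67, well above the 0.30 level often treated as acceptable discrimination.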
60. Disadvantages of MC Test
1. Time consuming to construct a good item.
2. Difficult to find effective and plausible distracters.
3. Scores can be influenced by the reading ability of the
examinees.
4. In some cases, there is more than one justifiable correct
answer.
5. Ineffective in assessing the problem-solving skills of
the students.
6. Not applicable when assessing students' ability to
organize and express ideas.
61. Activity: Improve Mo Ako! ("Improve Me!")
Directions: The following multiple choice
questions are poorly constructed. Write a
better version of each question.
72. “Understand that there is always one clearly best
answer. Your goal is not to trick students or require
them to make difficult judgments about two
options that are nearly equally correct. Your goal is
to design questions that students who understand
will answer correctly and students who do not
understand will answer incorrectly.”
John A. Johnson
Dept. of Psychology
Penn State University
73. POINTS TO PONDER. . .
A good lesson makes a good question
A good question makes a good content
A good content makes a good test
A good test makes a good grade
A good grade makes a good student
A good student makes a good COMMUNITY
Jesus Ochave Ph.D.
VP Research Planning and Development
Philippine Normal University