The document discusses different types of tests used to measure student achievement, including true-false, multiple choice, and matching questions. It provides definitions of tests from various authors and explains the purpose and proper construction of each question type. The advantages and disadvantages of each test type are outlined. Guidance is given on writing clear instructions, ensuring questions align with learning objectives, and developing distractors or response options to effectively measure student skills and knowledge.
Ppt on cognitive items, item types and computer based test items, by Irshad Narejo
This presentation covers cognitive items, types of test items, and computer-based test items. It discusses the cognitive domain of learning and different cognitive levels like knowledge, comprehension, application, analysis, synthesis, and evaluation. It describes direct and indirect test items, including multiple choice questions, cloze procedures, sentence transformation, reordering, and fill-ins. Computer-based testing allows for tests on computers, including items with audio, tables/graphs, math/science notation, response options, text, images, active objects, video/animation, and constructed responses involving text or math.
1. The document discusses analyzing test items through quantitative and qualitative methods. It defines key terms like difficulty index, discrimination index, and describes how to calculate them.
2. The document provides examples of analyzing test items based on student response data. It shows how to determine the difficulty level, identify effective or ineffective distractors, and ways to improve test items based on the analysis.
3. The goal of the analysis is to evaluate how well items discriminate between higher and lower performing students, identify issues, and determine if items should be retained or modified to make them more effective.
Administering, analyzing, and improving the test or assessment, by Nema Grace Medillo
The document provides guidance on test development and administration. It discusses assembling the test, administering it, scoring it, and analyzing results both quantitatively and qualitatively. Quantitative analysis includes calculating difficulty levels and discrimination indices to evaluate items. Qualitative analysis examines items' match to objectives and technical quality. The document also describes modifications for criterion-referenced tests, such as using pre- and post-tests as upper and lower groups for analysis. Overall, the guidance aims to help avoid common pitfalls and improve tests and assessments.
How to analyze questionnaire items to determine Difficulty Value (DV) and Discriminative Power (DP), and to carry out distractor analysis.
Presented to Dr. Huma Lodhi at the University of Education.
The document discusses item analysis, which evaluates test items and the test as a whole. It describes the U-L Index Method for conducting item analysis, which involves separating students into upper and lower scoring groups, tallying responses from each group, and calculating difficulty and discrimination indices. Difficulty index indicates how easy or difficult an item is, while discrimination index shows how well an item distinguishes high-scoring from low-scoring students. Together these can be used to interpret items and determine whether to accept, revise, or discard them. An example analysis is provided to illustrate the process.
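The U-L Index Method described above can be sketched in a few lines of Python. The formulas (difficulty as the proportion correct across both groups, discrimination as the difference between the groups) follow the method as summarized; the group size and response counts below are invented for illustration.

```python
# Sketch of the U-L Index Method: difficulty and discrimination
# indices for a single test item. All numbers are hypothetical.

def item_indices(upper_correct, lower_correct, group_size):
    """Return (difficulty index p, discrimination index D).

    upper_correct / lower_correct: number of correct answers in the
    upper and lower scoring groups, each of size group_size.
    """
    p = (upper_correct + lower_correct) / (2 * group_size)  # proportion correct
    d = (upper_correct - lower_correct) / group_size        # U-L difference
    return p, d

# Example: groups of 27 students; 24 upper and 12 lower answered correctly.
p, d = item_indices(24, 12, 27)
print(round(p, 2), round(d, 2))  # 0.67 0.44 -> moderate difficulty, good discrimination
```

A positive D means more upper-group than lower-group students got the item right, which is the pattern an effective item should show.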
This document defines and describes matching type tests. It begins by defining matching type tests as objective tests consisting of two sets of items that must be matched based on a specified attribute. It notes they measure the ability to identify relationships between similar items.
The document then provides more details on the structure and components of matching type tests, including that they have two columns - a premises column with items to be matched and a response column with potential matches. It discusses advantages like being more representative and removing subjective scoring. It also outlines disadvantages such as encouraging memorization without understanding.
Finally, the document provides guidelines for constructing effective matching type tests, including using clear directions, ensuring homogeneity, and using an imperfect match structure when possible.
Choosing Appropriate Evaluation Methods tool, by Debbie West
The document describes a tool to help evaluators select appropriate evaluation methods. It considers 11 common evaluation methods and how well they are able to answer 5 key evaluation questions and achieve 15 other evaluation goals. The tool assesses methods based on their requirements and how well a user can meet those requirements. It provides output on groups of methods best suited to answer the user's priority questions and achieve their other goals, considering both method fit and feasibility for the user. The overall output synthesizes this information to recommend mixes of methods that would be most appropriate and applicable for the user's specific evaluation needs and context.
The document provides an overview of how to understand and interpret student course evaluation data, including explaining basic statistics like percentages, means, medians, modes, and standard deviations that are used to analyze evaluation results. It also discusses how to interpret qualitative feedback from students and offers recommendations for faculty on gathering additional feedback and using resources to help analyze evaluations.
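The basic statistics mentioned above (mean, median, mode, standard deviation) can all be computed with Python's standard library; the ratings below are a made-up sample on a 1-5 evaluation scale.

```python
import statistics

# Hypothetical course-evaluation ratings on a 1-5 scale.
ratings = [5, 4, 4, 3, 5, 2, 4, 5, 3, 4]

print("mean:", statistics.mean(ratings))              # 3.9
print("median:", statistics.median(ratings))          # 4
print("mode:", statistics.mode(ratings))              # 4
print("stdev:", round(statistics.stdev(ratings), 2))  # 0.99 (sample SD)
```

The gap between the mean and the median, together with the standard deviation, gives a quick sense of how skewed and how spread out the ratings are before reading the qualitative comments.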
Item analysis is a statistical technique used to evaluate test items and select or reject them based on their difficulty and ability to discriminate between more and less capable examinees. It provides information about each item's difficulty value and discrimination index. The difficulty value indicates the percentage of examinees who answered the item correctly, and can be used to identify items that are too easy or too hard. The discrimination index reflects an item's ability to differentiate high-scoring from low-scoring examinees, with positive values indicating items that high performers tend to get right and low performers tend to get wrong. Item analysis allows modifying or removing items that have low discrimination or difficulty levels outside the desired range.
The document provides guidance on developing valid and reliable assessment items. It discusses three steps: 1) identifying learning outcomes, 2) selecting the appropriate assessment type (e.g. multiple choice, essay), and 3) following rules for writing good items. Specific rules are provided for improving multiple choice items, such as avoiding obviously incorrect answers, not cueing the correct response through wording, and avoiding negatively worded stems. The document emphasizes aligning assessments clearly with learning objectives and standards.
This chapter discusses objective test items, which are items with a single correct response. It covers the general characteristics and guidelines for writing different types of objective test items, including multiple choice, matching, and true/false items. It also discusses item analysis, which is the process of analyzing statistical characteristics of each item on a test to determine if items should be retained or discarded. Key aspects covered include item difficulty, item discrimination, distractor analysis, and test reliability. The document provides detailed guidelines for writing different types of objective test items and how to conduct item analysis following test administration.
Multiple choice questions can assess different levels of knowledge from simple recall to interpretation and problem solving. They provide flexibility through variations like correct answer, best answer, and interpretive exercises using stimulus materials. Analysis of multiple choice questions focuses on scoring models to determine student achievement and item analysis to evaluate how well questions functioned.
Topic: Quantitative Item Analysis
Student Name: Hussain Shah
Class: M.Ed
Project Name: "Young Teachers' Professional Development (TPD)"
Project Founder: Prof. Dr. Amjad Ali Arain
Faculty of Education, University of Sindh, Pakistan
The document discusses item analysis, the analysis of multiple-choice questions on a test. It explains the need for item analysis and its advantages. Key tools in item analysis are the difficulty index, discrimination index, and distractor effectiveness. The document outlines the procedure for conducting item analysis: ranking test papers, calculating difficulty and discrimination indices using formulas, and evaluating questions based on the indices.
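Distractor effectiveness, the third tool named above, can be checked per option. The sketch below applies a common rule of thumb (a distractor is "functioning" when at least about 5% of examinees choose it); the counts and the threshold are illustrative assumptions, not a universal standard.

```python
# Illustrative distractor analysis for one multiple-choice item.
# The 5% "functioning" threshold is a common rule of thumb only.

def distractor_report(counts, key, total):
    """counts: option -> number of examinees choosing it; key: correct option.

    Returns {distractor: (share of examinees, functioning?)}.
    """
    report = {}
    for option, n in counts.items():
        if option == key:
            continue  # the keyed answer is not a distractor
        share = n / total
        report[option] = (share, share >= 0.05)
    return report

counts = {"A": 6, "B": 30, "C": 12, "D": 2}   # B is the keyed answer
for opt, (share, ok) in distractor_report(counts, "B", 50).items():
    print(opt, f"{share:.0%}", "functioning" if ok else "review")
```

Here option D attracts only 4% of examinees, flagging it for revision or replacement, while A and C are doing their job.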
Item analysis is a method used in test construction and revision to evaluate individual test items. It examines item difficulty, discrimination, and characteristic curves to identify issues and ensure items are properly assessing different ability levels. Item analysis helps improve test quality by identifying unfair or biased items and informing refinements to item wordings and test construction.
The document defines key terms related to test item design such as stem, key, distractor, and normal item format. It discusses ideal characteristics of test items such as using a clear context and avoiding negative stems. Common problems in item design are outlined, like non-homogeneous response options or difficulty stemming from instructions rather than the task. Guidance is provided on ensuring items have a single correct answer and assess the intended language aspects. The importance of validity, reliability, practicality and backwash is covered.
The document outlines the development of an aptitude subtest, describing the purpose of aptitude tests to assess a student's ability to learn and identify areas of inclination. It provides details on the taxonomy of different aptitude items that could be included in the test, such as verbal analogy, syllogism, letter and number series, topology, and visual discrimination puzzles. The goal of the aptitude test is to inform students, schools and other stakeholders about examinees' existing skills and competencies to help guide their future learning and course selection.
Item analysis is used to evaluate test items and identify areas for improvement. It examines the difficulty level, discriminating power, and effectiveness of distractors for multiple choice items. Item difficulty indexes the percentage answering correctly, while discrimination compares performance between high- and low-scoring groups. Items are evaluated based on these metrics and may be retained, modified, or rejected from the test. Accumulating item analysis data over time allows improving test quality and item banks.
Person A scored an 87 on a physics test with a class average of 80 and standard deviation of 5. Person B scored an 82 on a test with a class average of 73 and standard deviation of 6. The document discusses different types of test scores such as raw scores, percentile ranks, and standard scores including z-scores, t-scores, stanines, and normal curve equivalents. It also discusses interpreting test scores using norm-referenced and criterion-referenced approaches.
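The Person A / Person B comparison above is exactly what standard scores are for. This short sketch computes the z-scores from the figures given in the text, and also shows the conventional t-score rescaling (mean 50, SD 10):

```python
def z_score(raw, mean, sd):
    """Standard score: distance from the class mean in SD units."""
    return (raw - mean) / sd

za = z_score(87, 80, 5)   # Person A: (87 - 80) / 5
zb = z_score(82, 73, 6)   # Person B: (82 - 73) / 6
print(za, zb)             # 1.4 vs 1.5: B did relatively better despite the lower raw score

# t-score rescales z to mean 50, SD 10: T = 50 + 10z
print(50 + 10 * za, 50 + 10 * zb)  # 64.0 65.0
```

This is the norm-referenced reading of the scores: each raw score is interpreted relative to its own class distribution rather than against a fixed criterion.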
This document discusses two types of assessment items: matching items and supply items. Matching items require students to match items on the left with choices on the right. A variant is data sufficiency items which require students to determine relationships between items using symbols like >, <, or =. Supply items require students to write answers in blanks. While supply items typically test only recall, they can be constructed to test higher-order thinking by requiring students to provide synonyms rather than single correct answers. Both matching and supply items can test different levels of thinking depending on how they are designed.
Topic: Constructing Objective and Essay Type Test
Student Name: Pardeep Kumar
Class: B.Ed. Hons Elementary Part (II)
Project Name: "Young Teachers' Professional Development (TPD)"
Project Founder: Prof. Dr. Amjad Ali Arain
Faculty of Education, University of Sindh, Pakistan
Item analysis involves statistical analysis of test items to evaluate their effectiveness. It examines student responses to individual test questions to assess question and overall test quality. Key indicators include the item difficulty index, item discrimination index, and distractor power analysis. Conducting regular item analysis helps improve instruction, identifies areas needing remediation, and builds a bank of high-quality test questions.
This document discusses item analysis, which is a statistical technique used to evaluate test questions and select high-quality items. Item analysis determines the difficulty and discrimination of each question. Question difficulty is measured by the percentage of test-takers who answered correctly. Discrimination refers to a question's ability to differentiate between high-scoring and low-scoring students. Item analysis is used to improve test quality and understand why certain tests predict learning outcomes better than others. The results can identify questions that are too difficult or easy and lack discriminating power for revision or removal from the test.
The document discusses developing a predictive model to forecast students' academic performance using fuzzy applications. It considers factors like previous test results, attendance, scores, percentiles, accuracy, exam aptitude and attempt ratios. The model is aimed at helping educational institutions enhance quality and aid students' career development. Key aspects of the model include using item response theory to analyze test questions, excluding predictors with low discrimination indexes, and basing predictions on students with over 50% attendance across multiple tests.
This document discusses using a multi-representational approach combining graphical and algebraic methods to teach absolute value equations and inequalities. It outlines common student misconceptions when solving these problems algebraically. The author implemented a lesson plan asking students to first graph absolute value examples before solving them algebraically. Prior research found graphical approaches helped students' conceptual understanding over solely algebraic manipulation.
Different types of Test
Why do we give tests?
Kinds of tests
Other categories of tests
Two Types of Test (Questions)
Subjective Test Samples
Essay
Types of Essay Items
Matching type
Completion Type
This document discusses various concepts related to educational measurement and testing. It defines key terms like measurement, evaluation, tests, validity, reliability, and different types of tests. It provides details on constructing objective tests, including writing test items, establishing test validity and reliability, and interpreting test scores. It also discusses advantages and disadvantages of objective and essay tests. Additionally, it covers topics like measures of central tendency, frequency distributions, and calculating the mean, median and mode.
The document discusses various methods of assessment in education. It defines assessment as collecting information on student learning through measurement and evaluation. Measurement involves assigning numbers to student attributes, while evaluation makes judgements on student performance. The document also discusses the purposes of assessment as providing feedback on learning, progress, strengths and weaknesses. It outlines different types of assessment tools like norm-referenced tests which compare students and criterion-referenced tests which measure learning against objectives. The key aspects of reliability and validity in effective assessment are also summarized.
This document discusses the steps in planning an achievement test:
1. Identify the purpose and specifications of the test, including content areas and weightings.
2. Select test contents by considering item format, clarity, and coverage of objectives.
3. Determine the form of the test, such as written, oral, individual or group.
4. Write test items using formats like multiple choice, true/false, matching, and completion. Ensure items accurately measure objectives.
This document provides guidance on writing effective multiple choice questions. It discusses the advantages and disadvantages of multiple choice questions, guidelines for constructing item stems and alternatives, and examples of questions at different cognitive levels. The intended learning outcomes are to explain the strengths and weaknesses of multiple choice exams, evaluate existing multiple choice items, and create effective multiple choice items that measure various learning levels. Participants are engaged in revision activities to practice applying the guidelines.
Using mcq for effective it education woodfordRipudaman Singh
This document provides guidance on using multiple choice questions effectively in information technology education. It discusses challenges such as larger class sizes and trimester study periods that have led to increased use of multiple choice assessments. The document outlines factors to consider when writing high-quality multiple choice questions, such as grammar, number of options, order of questions and answers. It also addresses criticisms that multiple choice questions only test recall and provides examples of questions that can assess higher-order thinking skills based on Bloom's taxonomy, such as comprehension, application and analysis. The overall aim is to help IT educators construct multiple choice tests that maintain the integrity and quality of assessment.
This document provides guidance on constructing objective test items for different formats, including short answer, true/false, matching, and multiple choice. It discusses the characteristics, uses, advantages, limitations, and suggestions for writing each type of item to effectively measure student learning outcomes. Multiple choice items can measure both simple and complex outcomes like knowledge, understanding, and application. While objective tests are limited in scope, the multiple choice format in particular allows for flexibility in content and reliable, structured assessment when written carefully according to the guidelines.
This document provides information on alternate-choice items, their nature and variations, advantages and limitations, principles for constructing tests, and tips for test taking. Alternate-choice items include multiple choice, true/false, yes/no, and checklist items. They are easy to score objectively but difficult to write beyond the knowledge level and more influenced by guessing. When constructing tests, each item should refer to one concept and avoid opinions, negatives, unfamiliar words, and patterns. Test takers should watch for long sentences, thoroughly read statements, and look for qualifiers.
This document provides information on alternate-choice items, their nature and variations, advantages and limitations, principles for constructing tests, and tips for test taking. Alternate-choice items include multiple choice, true-false, yes-no, checklist, and other formats that require selecting from two or more options. They can test recall and comprehension efficiently but are more susceptible to guessing than other item types. When constructing tests, items should be clear, concise, free of bias, varied in difficulty, and avoid negatives or complex sentences. Students should read carefully and watch for qualifiers when answering alternate-choice items.
Comparison Between Objective Type Tests and Subjective Type tests.Bint-e- Hawa
Objective and subjective tests are two main types of tests. Objective tests typically have single correct answers and include multiple choice, true/false, matching, and short answer questions. Subjective tests are open-ended and require subjective scoring, including restricted response and extended response essays. Both test types have advantages and limitations. Guidelines for writing high-quality test items include ensuring questions measure intended learning outcomes, providing unambiguous questions and response options, and developing clear scoring rubrics.
The document provides guidance on how to effectively take a multiple choice exam. It explains that multiple choice questions make up most exams and can measure various skills. It describes the different types of multiple choice questions and provides strategies for answering each type. The document recommends reading the questions carefully, eliminating unlikely answers, and double checking work before submitting answers.
The document provides guidance on constructing teacher-made tests, including several suggested steps:
1. Identify learning objectives and outcomes to guide test item construction.
2. Outline subject matter topics and prepare a table of specifications relating outcomes to content.
3. Select appropriate test types depending on what is being assessed, such as essays for higher-order skills or objective tests for recall.
4. Construct test items in the proper format, order items from easiest to hardest, write clear directions, and prepare the answer key.
Assessment of Learning - Multiple Choice TestXiTian Miran
A powerpoint presentation about the Multiple Choice Test as one of the assessment strategies that can be used by teachers in assessing learners. Also, this includes the introduction, definition, advantages, and limitations of Multiple Choice Test.
The document discusses best practices for constructing tests and writing test questions. It provides guidelines for developing multiple choice, true/false, matching, and essay questions. Key aspects addressed include writing clear questions, avoiding negatives, ensuring answer options are similar in length and structure, and using distractors that could plausibly be chosen. The document emphasizes the importance of validity, reliability, and usability in test design.
The document provides guidance on writing effective multiple choice questions for assessing students in information technology education. It discusses using multiple choice questions to test higher levels of cognition beyond just knowledge, such as comprehension, application, and analysis. It also covers best practices for writing high quality multiple choice questions, including using clear grammar, an appropriate number of options, plausible distractors, and testing the desired level of cognition. The goal is to provide IT teachers with information to help them construct multiple choice questions that maintain the integrity of their assessments.
This document provides guidance on writing objective examination questions in the alternative response format, specifically true-false questions. It discusses best practices such as keeping statements short and focused on one idea, avoiding ambiguous terms, and balancing the number of true and false statements. Examples are provided for single true-false, multiple true-false, and multiple correct response question types across various subjects. Tips are given such as avoiding opinions in true-false items and keeping the question stem clear and concise. Advantages of this format are its efficiency while limitations include only measuring basic knowledge and susceptibility to guessing.
(1) The teacher used selected response and constructed response assessment tools to measure learning in the cognitive domain.
(2) A sample multiple choice test item assessed students' understanding of a specific learning outcome.
(3) The test item appeared to be constructed according to established guidelines for content, format, and construction.
This document provides information about objective type tests, including definitions, classifications, and guidelines for creating different types of objective test items. It defines objective tests as those with predetermined correct answers that can be scored objectively. Objective tests are classified as supply/recall type, selection/recognition type, and matching type. The document provides examples and creation guidelines for true/false, multiple choice, completion, and matching items. It discusses measuring various levels of learning and outlines principles for preparing objective tests, such as ensuring all content is covered and maintaining confidentiality.
1. This module discusses guidelines for assembling, administering, analyzing, and improving tests. It covers properly packaging test items, reproducing the test, administering the test to reduce anxiety, and scoring procedures.
2. Item analysis is explained as a quantitative method to evaluate the difficulty and discrimination of each test item. It involves separating student papers by score, tallying correct responses from high- and low-scoring groups, and calculating difficulty and discrimination indexes.
3. Qualitative analysis examines the effectiveness of each distractor in multiple choice items. Together, quantitative and qualitative analysis identify well-performing and flawed items needing revision.
The document provides guidelines for constructing different types of test questions including matching, sentence completion, essay, and other question types. It discusses principles such as ensuring questions are clear, focused, and at an appropriate level for students. The document emphasizes that creating good tests takes time but plays an important role in evaluation. It also notes that breaking rules is acceptable when one has a good reason.
Ppt on cognitive items, items types and computer based test itemsirshad narejo
This presentation covers cognitive items, types of test items, and computer-based test items. It discusses the cognitive domain of learning and different cognitive levels like knowledge, comprehension, application, analysis, synthesis, and evaluation. It describes direct and indirect test items, including multiple choice questions, cloze procedures, sentence transformation, reordering, and fill-ins. Computer-based testing allows for tests on computers, including items with audio, tables/graphs, math/science notation, response options, text, images, active objects, video/animation, and constructed responses involving text or math.
Curriculum evaluation through learning assessmentSharon Ballasiw
The document discusses learning outcomes and assessment methods. It defines learning outcomes as the intended results of the learning process. It describes four levels of learning outcomes - knowledge, process, understanding, and performance. Assessment methods discussed include objective tests like multiple choice and matching, as well as subjective tests like restricted response, extended response essays, and authentic or performance-based assessments. Examples are provided for each type of assessment method. The document aims to help teachers choose appropriate assessment methods aligned with the intended learning outcomes.
Dokumen tersebut membahas tentang penilaian portofolio dan jenis-jenis portofolio. Portofolio dapat berisi karya peserta didik secara perorangan atau kelompok, dan dapat berupa proses atau produk. Penilaian portofolio harus berbasis kriteria yang jelas dan melibatkan berbagai sumber informasi.
Dokumen tersebut membahas tentang teknik penyusunan kisi-kisi dan penulisan soal tes, termasuk pengertian kisi-kisi, syarat kisi-kisi yang baik, jenis perilaku yang dapat diukur, langkah penyusunan butir soal, keunggulan dan kelemahan berbagai bentuk soal, serta kaidah penulisan soal pilihan ganda dan uraian.
PRINSIP DAN TEKNIK EVALUASI (LARAS&NUR ASIAH)vina serevina
Dokumen tersebut membahas tentang konsep validitas dan reliabilitas tes. Validitas merupakan ukuran sejauh mana tes dapat mengukur apa yang seharusnya diukur, yang terdiri atas validitas isi, konstruk, prediksi, dan konkuren. Reliabilitas adalah tingkat konsistensi suatu tes, yang dapat diukur melalui koefisien stabilitas, ekuivalen, dan konsistensi internal. Kedua konsep ini penting untuk mengevaluasi
TES, PENGUKURAN, PENILAIAN DAN EVALUASI (DINI&ORNELA)vina serevina
Dokumen tersebut membahas tentang tes, pengukuran, penilaian, dan evaluasi dalam pendidikan. Secara ringkas, dokumen tersebut menjelaskan bahwa pengukuran adalah proses pemberian skor terhadap hasil belajar berdasarkan kriteria tertentu, penilaian adalah proses menginterpretasikan hasil pengukuran, sedangkan evaluasi adalah proses pengambilan keputusan berdasarkan hasil penilaian.
KIRKPATRICK MODEL OF EVALUATION (LEO CHANDRA)vina serevina
The document discusses the Kirkpatrick (1994) model of evaluation, which consists of 4 levels - reaction, learning, behavior, and results. Level 1 measures participants' reactions to a training program. Level 2 assesses what participants learned. Level 3 looks at whether participants apply the new knowledge and skills on the job. Level 4 examines the overall impact on the organization in terms of outcomes like pass rates, GPA, retention rates, and satisfaction. The model provides a framework for conducting comprehensive evaluations of educational programs and their effects.
This document summarizes a journal article on portfolio assessment in education. It discusses the following key points in 3 sentences or less:
Portfolio assessment involves collecting student work over time to demonstrate learning and skills. This can include tests, homework, and other work samples. The portfolio should be owned by the student and come from multiple sources to provide an authentic and dynamic view of the student's development.
The document provides guidance on developing test items and constructing grids (blueprints) for exams. It discusses the definition and purpose of grids, as well as requirements for good grids such as representing curriculum content appropriately. Guidelines are provided for writing multiple choice, description, and practice test questions according to best practices. The document also outlines cognitive domains and thinking skills that can be measured, and describes the steps for preparing high-quality question tests.
1) Validity refers to the extent to which a test measures what it claims to measure. There are different types of validity including content validity, construct validity, predictive validity, and concurrent validity.
2) Reliability is the consistency of a test and whether it would provide the same results over multiple administrations. Factors that influence reliability include the length of the test, score distribution, difficulty level, and objectivity.
3) There are different ways to measure validity and reliability including calculating correlation coefficients and using formulas like Pearson product-moment correlation, Kuder-Richardson, Cronbach's alpha, and point biserial correlation.
This document discusses different methods for assessing students, including journal assessments, psychomotor assessments, and affective assessments. Journal assessments involve teachers recording observations about student attitudes, behaviors, strengths and weaknesses. Psychomotor assessments evaluate students' skills and abilities through aspects like movement, physical abilities, and communication. Affective assessments examine students' attitudes, values, willingness to learn, attention, and respect toward teachers. Examples are provided for documenting observations in journals and rubrics for assessing students' psychomotor and affective skills.
The document discusses learning assessment by teachers. It defines learning assessment as the planned and systematic process of collecting evidence about students' competence in knowledge, skills, and attitudes during and after the learning process. There are two main types of assessment: formative assessment to improve student learning, and summative assessment to determine student success at the end of a semester or year. The goals of assessment are to understand student mastery levels, establish remediation programs, and map school quality. Assessment should be valid, objective, neutral, integrated, systematic, accountable, educative and holistic. Types of assessments include daily tests, mid-term tests, semester tests and final tests. Classroom-based assessment is used to collect student achievement
The document discusses assessment principles and mechanisms in education. It explains that assessment involves planning, implementation, analysis of results, follow-up, and reporting. Assessment can be carried out by educators, educational units, and the government. The goals of assessment are to understand student competencies, improve the learning process, and make decisions about student promotion or graduation. Assessment results should be analyzed, followed up on, and reported to students, parents, and education authorities.
TEST, MEASUREMENT, ASSESSMENT, AND EVALUATION (DINI & ORNELA)vina serevina
The document discusses test measurement, assessment, and evaluation. It defines these terms and explains their relationships. Measurement is the process of collecting quantitative or qualitative data about learning using tools like tests or observations. Assessment is interpreting the data to make judgments about student progress. Evaluation is using assessment results to make decisions about students or improve teaching. The document also outlines the functions and uses of assessment, such as improving instruction, identifying student weaknesses, and determining if learning objectives were achieved. Different types of assessment include formative, summative, diagnostic, and skills-based. Overall, the key purpose of these processes is to support student learning and improve educational outcomes.
বাংলাদেশের অর্থনৈতিক সমীক্ষা ২০২৪ [Bangladesh Economic Review 2024 Bangla.pdf] কম্পিউটার , ট্যাব ও স্মার্ট ফোন ভার্সন সহ সম্পূর্ণ বাংলা ই-বুক বা pdf বই " সুচিপত্র ...বুকমার্ক মেনু 🔖 ও হাইপার লিংক মেনু 📝👆 যুক্ত ..
আমাদের সবার জন্য খুব খুব গুরুত্বপূর্ণ একটি বই ..বিসিএস, ব্যাংক, ইউনিভার্সিটি ভর্তি ও যে কোন প্রতিযোগিতা মূলক পরীক্ষার জন্য এর খুব ইম্পরট্যান্ট একটি বিষয় ...তাছাড়া বাংলাদেশের সাম্প্রতিক যে কোন ডাটা বা তথ্য এই বইতে পাবেন ...
তাই একজন নাগরিক হিসাবে এই তথ্য গুলো আপনার জানা প্রয়োজন ...।
বিসিএস ও ব্যাংক এর লিখিত পরীক্ষা ...+এছাড়া মাধ্যমিক ও উচ্চমাধ্যমিকের স্টুডেন্টদের জন্য অনেক কাজে আসবে ...
Leveraging Generative AI to Drive Nonprofit InnovationTechSoup
In this webinar, participants learned how to utilize Generative AI to streamline operations and elevate member engagement. Amazon Web Service experts provided a customer specific use cases and dived into low/no-code tools that are quick and easy to deploy through Amazon Web Service (AWS.)
हिंदी वर्णमाला पीपीटी, hindi alphabet PPT presentation, hindi varnamala PPT, Hindi Varnamala pdf, हिंदी स्वर, हिंदी व्यंजन, sikhiye hindi varnmala, dr. mulla adam ali, hindi language and literature, hindi alphabet with drawing, hindi alphabet pdf, hindi varnamala for childrens, hindi language, hindi varnamala practice for kids, https://www.drmullaadamali.com
Strategies for Effective Upskilling is a presentation by Chinwendu Peace in a Your Skill Boost Masterclass organisation by the Excellence Foundation for South Sudan on 08th and 09th June 2024 from 1 PM to 3 PM on each day.
Gender and Mental Health - Counselling and Family Therapy Applications and In...PsychoTech Services
A proprietary approach developed by bringing together the best of learning theories from Psychology, design principles from the world of visualization, and pedagogical methods from over a decade of training experience, that enables you to: Learn better, faster!
This document provides an overview of wound healing, its functions, stages, mechanisms, factors affecting it, and complications.
A wound is a break in the integrity of the skin or tissues, which may be associated with disruption of the structure and function.
Healing is the body’s response to injury in an attempt to restore normal structure and functions.
Healing can occur in two ways: Regeneration and Repair
There are 4 phases of wound healing: hemostasis, inflammation, proliferation, and remodeling. This document also describes the mechanism of wound healing. Factors that affect healing include infection, uncontrolled diabetes, poor nutrition, age, anemia, the presence of foreign bodies, etc.
Complications of wound healing like infection, hyperpigmentation of scar, contractures, and keloid formation.
ISO/IEC 27001, ISO/IEC 42001, and GDPR: Best Practices for Implementation and...PECB
Denis is a dynamic and results-driven Chief Information Officer (CIO) with a distinguished career spanning information systems analysis and technical project management. With a proven track record of spearheading the design and delivery of cutting-edge Information Management solutions, he has consistently elevated business operations, streamlined reporting functions, and maximized process efficiency.
Certified as an ISO/IEC 27001: Information Security Management Systems (ISMS) Lead Implementer, Data Protection Officer, and Cyber Risks Analyst, Denis brings a heightened focus on data security, privacy, and cyber resilience to every endeavor.
His expertise extends across a diverse spectrum of reporting, database, and web development applications, underpinned by an exceptional grasp of data storage and virtualization technologies. His proficiency in application testing, database administration, and data cleansing ensures seamless execution of complex projects.
What sets Denis apart is his comprehensive understanding of Business and Systems Analysis technologies, honed through involvement in all phases of the Software Development Lifecycle (SDLC). From meticulous requirements gathering to precise analysis, innovative design, rigorous development, thorough testing, and successful implementation, he has consistently delivered exceptional results.
Throughout his career, he has taken on multifaceted roles, from leading technical project management teams to owning solutions that drive operational excellence. His conscientious and proactive approach is unwavering, whether he is working independently or collaboratively within a team. His ability to connect with colleagues on a personal level underscores his commitment to fostering a harmonious and productive workplace environment.
Date: May 29, 2024
Tags: Information Security, ISO/IEC 27001, ISO/IEC 42001, Artificial Intelligence, GDPR
-------------------------------------------------------------------------------
Find out more about ISO training and certification services
Training: ISO/IEC 27001 Information Security Management System - EN | PECB
ISO/IEC 42001 Artificial Intelligence Management System - EN | PECB
General Data Protection Regulation (GDPR) - Training Courses - EN | PECB
Webinars: https://pecb.com/webinars
Article: https://pecb.com/article
-------------------------------------------------------------------------------
For more information about PECB:
Website: https://pecb.com/
LinkedIn: https://www.linkedin.com/company/pecb/
Facebook: https://www.facebook.com/PECBInternational/
Slideshare: http://www.slideshare.net/PECBCERTIFICATION
ISO/IEC 27001, ISO/IEC 42001, and GDPR: Best Practices for Implementation and...
TECHNIQUE OF TEST (ARI & AFDAL)
1. By:
• Ari Wijaya
• Muhamad Afdal
Lecturer: Dr. Ir. Vina Serevina
Master of Physics Education
State University of Jakarta
2016
2. Arikunto (2012: 67)
A test is a tool or procedure used to find out or measure something, according to rules that have been determined.
Sudijono (2011: 67)
A test is a way or procedure of measurement and assessment in education, designed to give a task or series of tasks in the form of questions (that must be answered) or instructions (that must be carried out), so that, based on the results of the measurement, a value can be produced that represents the learner's behavior or achievement.
3. F.L. Goodenough, cited in Sudijono (2008: 67)
A test is a set of tasks given to an individual or a group in order to compare their skills with those of other people.
Norman, cited in Djaali and Muljono (2008: 7)
A test is a comprehensive, systematic, and objective evaluation procedure whose results can be used as a basis for decisions in the teaching process carried out by the teacher.
Zainal Arifin (2011: 118)
A test is a technique or method used to carry out measurement, consisting of questions or tasks that must be done or answered by learners in order to measure their specific competencies.
4. Synthesis
A test is a tool or procedure used to find out or measure something, given to an individual or a group, consisting of questions or tasks that must be done or answered by the students, with the purpose of comparing their skills with those of other people.
5. Classifications of the Test
ARIKUNTO (2012: 117): Test → Subjective; Objective (true-false, multiple choice, matching, completion)
BSNP: Test → Oral; Written (true-false, multiple choice, matching, completion, essay); Practice
ZAINAL ARIFIN (2011: 119): Test → Oral; Written (true-false, multiple choice, matching, completion, essay); Behaviour
6. Definition
ZAINAL ARIFIN (2011: 119)
• Written test: a test that students must answer in writing.
• Oral test: a test or series of questions given to students and conducted as an interview.
• Performance test: a task, generally a practical activity, that measures skills.
7. True-False
A statement that contains two possible answers (T-F).
Its function is to measure the skill to:
• differentiate fact and opinion
• identify information
• correlate two homogeneous cases
• recognize cause and effect
Presented as: statement, picture, table, diagram.
ZAINAL ARIFIN (2011: 119)
8. Question Type with True-False
Look at the statements below about Newton's laws of motion, then put a mark (X) in the space provided (T or F).
T : F : Newton's first law is about inertia.
T : F : An object at rest, or moving with constant velocity, will remain at rest or keep moving with constant velocity if the resultant force on it is zero.
T : F : Action and reaction forces appear when the object is in motion.
T : F : In Newton's third law, the resultant of the action and reaction forces is not zero.
The True-False answers can be replaced with Yes-No or Agree-Disagree.
9. True-False: Advantages and Disadvantages
Advantages:
1. Suitable for questions that have only two alternative answers.
2. The test makes few demands on reading ability.
3. Relatively many questions can be answered in a given period of time.
4. The scoring procedure is easy, objective, and reliable.
Disadvantages:
1. It is very difficult to write unambiguous items above the knowledge level.
2. A correct answer does not show that the student really knows the material.
3. Wrong answers give no diagnostic information.
4. The format makes it possible for students, and even encourages them, to guess the answer.
ARIKUNTO (2012: 181); ZAINAL ARIFIN (2011: 137)
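The guessing disadvantage noted for true-false items can be quantified. As a minimal sketch (the formula is the standard correction for guessing, not taken from these slides), the corrected score is S = R − W/(n − 1), where R is the number right, W the number wrong, and n the number of options per item; on true-false items (n = 2) a blind guesser's expected corrected score is zero.

```python
# Correction-for-guessing sketch (standard formula, not from the slides):
# S = R - W/(n - 1), with R = right answers, W = wrong answers,
# n = number of options per item.
def corrected_score(right: int, wrong: int, n_options: int) -> float:
    return right - wrong / (n_options - 1)

# True-false (n = 2): a blind guesser on 40 items expects ~20 right, ~20 wrong.
print(corrected_score(20, 20, 2))  # 0.0

# Five-option multiple choice: blind guessing expects 8 right out of 40.
print(corrected_score(8, 32, 5))   # 0.0

# A student who truly knows 30 true-false items and guesses the other 10:
print(corrected_score(30 + 5, 5, 2))  # 30.0
```

The point of the sketch: under this scoring rule, guessing gains a student nothing on average, which directly addresses disadvantage 4 above.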
10. Preparation Instructions
• The number of questions should be large, so that the score is dependable.
• The number of true items should equal the number of false items.
• Give simple and clear instructions for answering the questions.
• Avoid general, complex, and negative statements.
• Avoid words that give clues about the desired answer.
ZAINAL ARIFIN (2011: 137)
11. How to Resolve the Weakness of True-False Questions
To resolve the weakness of true-false questions, the question writer can add a correction column to each item. The student then does not just choose true or false, but also writes a correction when choosing false.
Example:
Look at the statements below about Newton's laws of motion, then put a mark (X) at the answer (T-F) and write a correction on the dotted line for each false statement.
T - F : Newton's first law is about inertia ………………………………
T - F : An object at rest, or moving with constant velocity, will remain at rest or keep moving with constant velocity if the resultant force is zero ………………………………
T - F : Action and reaction forces appear when the object is in motion ………………………………
T - F : In Newton's third law, the resultant of the action and reaction forces is not zero ………………………………
ZAINAL ARIFIN (2011: 137)
12. Multiple Choice
A statement with several alternative answers (A, B, C, D, E). According to Gronlund (1981), the more alternative answers there are, the more difficult it is for a student to guess.
Its function is to measure the ability of:
• memory
• understanding
• application
• synthesis
• evaluation
Presented as: distractor, cause and effect, multiple answer, multiple variations, incomplete variations.
ZAINAL ARIFIN (2011: 119)
13. Example of a Multiple Choice Question
CAUSE AND EFFECT: a question type used to measure the ability to analyze the correlation between a statement and a reason.
When the goalkeeper kicks the ball at a certain angle, the ball's velocity at the highest point is zero
BECAUSE
at the highest point the potential energy is maximum.
Answer choices:
A. The statement is right, the reason is right, and the reason explains the statement.
B. The statement is right, the reason is right, but the reason does not explain the statement.
C. The statement is right, the reason is wrong.
D. The statement is wrong, the reason is right.
E. Both the statement and the reason are wrong.
14. Example of a Multiple Choice Question
MULTIPLE ANSWER: there are several alternative answers, of which more than one may be right.
A Carnot engine has an efficiency of 40%. The right statements are...
(1) the ratio of the temperature difference between the hot and cold reservoirs to the hot-reservoir temperature is 2/5
(2) the ratio of heat out to heat in is 3/5
(3) the ratio of work to heat in is 2/5
(4) the ratio of work to heat out is 2/3
Alternative answers:
A. if (1), (2), and (3) are right
B. if (1) and (3) are right
C. if (2) and (4) are right
D. if only (4) is right
E. all of the statements are right
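The ratios in the Carnot item above follow from the definition of efficiency, η = W/Q_in = 1 − Q_out/Q_in = (T_h − T_c)/T_h. A quick arithmetic check with exact fractions:

```python
# Arithmetic check of the Carnot-engine ratios for eta = 40%.
# For a Carnot engine: eta = W/Q_in = 1 - Q_out/Q_in = (T_h - T_c)/T_h.
from fractions import Fraction

eta = Fraction(40, 100)           # 40% efficiency, reduces to 2/5
q_out_over_q_in = 1 - eta         # heat out / heat in
w_over_q_in = eta                 # work / heat in
w_over_q_out = eta / (1 - eta)    # work / heat out
dT_over_Th = eta                  # (T_h - T_c) / T_h

print(q_out_over_q_in)  # 3/5 -> statement (2)
print(w_over_q_in)      # 2/5 -> statement (3)
print(w_over_q_out)     # 2/3 -> statement (4)
print(dT_over_Th)       # 2/5 -> statement (1)
```

On this reading of statement (1), all four ratios check out; treat this as an arithmetic illustration rather than an official answer key for the item.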
15. Example of a Multiple Choice Question
MULTIPLE VARIATIONS: choose among several possible answers, all of which are right, but only one is the most correct. The student's task is to choose the most correct answer.
Students should be respectful to...
A. Friends
B. Teachers
C. Parents
D. Teachers and parents
E. Friends, teachers, and parents
16. Example of a Multiple Choice Question
INCOMPLETE VARIATION: a question or statement with several possible but incomplete answers. The student's task is to find the right possibility and complete it.
The number of planets in our solar system is...
A. 5 planets, consisting of .....
B. 6 planets, consisting of .....
C. 7 planets, consisting of .....
D. 8 planets, consisting of .....
E. 9 planets, consisting of .....
17. Multiple Choice: Advantages and Disadvantages
Advantages:
1. Can measure learning results from the simple to the complex.
2. Structured and clear instructions.
3. Wrong alternative answers can give diagnostic information.
4. Guessing the answer is less likely to succeed.
5. The assessment is easy, objective, and reliable.
Disadvantages:
1. Needs a longer time to set the questions.
2. It is very difficult to write good distractors.
3. Less effective for measuring some types of problem solving and the ability to organize and express ideas.
4. The result can be influenced by reading skill.
18. Preparation Instructions
• Base each item on the basic competency and the question indicator.
• Give clear instructions.
• Do not include questions that are not relevant to the material that has been learned.
• The statement in the question should formulate the problem clearly and meaningfully.
• The stem and the choices should read as one unbroken sentence.
• The alternative answers must be functional, homogeneous, and logical.
• The choices should be shorter than the stem.
• Make sure the statement and the choices cannot easily be associated with each other.
• The position of the right answer should not follow a systematic pattern.
• Make sure that there is only one right answer.
ZAINAL ARIFIN (2011: 143)
19. Matching
Consists of a set of questions and a set of answers, gathered in separate columns. The number of answers is greater than the number of questions.
Its function is to measure the ability to:
* identify simple information based on a simple correlation
* correlate two objects
Presented by:
* Premises
* Responses
ZAINAL ARIFIN (2011: 144)
20. Example of Matching
Instructions: match each statement in Column A with the correct answer in Column B. Fill in your answers in the designated places.
Column A
a. A pull or push acting on an object
b. The straight line drawn from the starting position to the end position
c. The measure of an object's inertia
d. The length of the path taken by an object
e. The rate of change of an object's velocity
Column B
a. Position
b. Movement
c. Distance
d. Velocity
e. Acceleration
f. Force
g. Mass
……………………………
……………………………
……………………………
……………………………
……………………………
21. Advantages and Disadvantages of Matching
Advantages:
1. Efficient
2. Takes a shorter time to read and respond
3. Easy to construct
4. Scoring is easy, objective, and reliable
Disadvantages:
1. Only suitable for simple material; cannot be used to measure deeper understanding
2. It is difficult to write questions with a homogeneous set of responses
3. Easily influenced by irrelevant cues in the instructions
ARIKUNTO (2012: 181); ZAINAL ARIFIN (2011: 137)
22. Preparation Instructions for Matching
Make the test instructions clear, simple, and easy to understand
Match the items with the basic competencies and indicators
Put the questions in the left column and the answers in the right column
Make the number of answer options greater than the number of questions
Arrange the items and answer options systematically
Keep the group of questions and answers on the same page
Use simple sentences focused on the subject
ZAINAL ARIFIN (2011: 145)
23. Fill-in (Completion)
A test requiring an answer in the form of numbers and/or sentences that can only be rated right or wrong.
Its function is to test:
* identification of definitions
* memory
* understanding
Presented as questions or incomplete sentences to be completed with short words, phrases, place names, character names, etc.
ZAINAL ARIFIN (2011: 144)
24. Example of Fill-in
Complete the statements below with short, correct answers
1. The amount of heat needed to raise the temperature by one degree Celsius is the definition of....
2. The phase transition from liquid to solid is called....
3. 100 degrees Celsius = …… degrees Fahrenheit
4. An ideal-gas process that takes place at constant pressure is called....
5. The name of the initiator of the laws of thermodynamics is....
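Item 3 above asks for a Celsius-to-Fahrenheit conversion. As a minimal sketch of checking that answer key in Python (the helper name `celsius_to_fahrenheit` is my own, not from the slides):

```python
def celsius_to_fahrenheit(c):
    """Convert a temperature from degrees Celsius to degrees Fahrenheit."""
    return c * 9 / 5 + 32

# The boiling point of water, as in fill-in item 3
print(celsius_to_fahrenheit(100))  # 212.0
```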
25. Advantages and Disadvantages of Fill-in
Advantages:
1. Easy to construct the questions
2. The answer is difficult to guess
3. Suitable for arithmetic questions
4. Knowledge can be measured broadly
Disadvantages:
1. It is difficult to phrase questions so that the answer is a single word
2. Not suitable for measuring complex learning outcomes
3. Scoring is tedious and takes time
26. Preparation Instructions for Fill-in Questions
Make the test instructions clear, short, and easy to understand
Do not use questions with open-ended answers
Do not take statements directly from the textbook
The blank for the answer should be placed at the end of the sentence
Do not use too many blanks
Each statement should have only one correct answer
Use pictures where possible to make the question simple and clear
ZAINAL ARIFIN (2011: 146)
27. Essay
Statements whose answers are open-ended
Its function is to measure:
* memory
* understanding
* application
* synthesis
* evaluation
Presented by:
* Distractor
* Cause and effect
* Multiple choice
* Multiple variation
* Incomplete variation
ZAINAL ARIFIN (2011: 119)
28. EXAMPLE
Indicator:
Students can calculate the volume of a solid (a rectangular block) and convert the units.
Question:
A bathtub has the shape of a rectangular block with length 150 cm, width 80 cm, and height 75 cm. What is the volume of the bathtub in liters? (Write out the steps to answer the question.)
29. Scoring Guide Table for an Objective Essay
Step | Answer Key | Score
1 | Block volume = length x width x height | 1
2 | = 150 cm x 80 cm x 75 cm | 1
3 | = 900,000 cm³ | 1
4 | Volume of the bathtub in liters: 900,000 / 1,000 | 1
5 | = 900 liters | 1
Maximum score: 5
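The answer key above can be checked mechanically. A minimal Python sketch of the same steps (the variable names are mine):

```python
# Dimensions of the bathtub from the question, in centimeters
length_cm, width_cm, height_cm = 150, 80, 75

# Block volume = length x width x height (steps 1-3 of the answer key)
volume_cm3 = length_cm * width_cm * height_cm

# Convert to liters: 1 liter = 1,000 cm³ (steps 4-5)
volume_liters = volume_cm3 / 1000

print(volume_cm3)     # 900000
print(volume_liters)  # 900.0
```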
30. EXAMPLE
Indicator:
Students can describe the reasons why Indonesian citizens are proud to be Indonesian.
Question:
Write the reasons that make you proud to be an Indonesian citizen!
31. Scoring Guide Table for a Non-Objective Essay
Criteria for the answer | Score range
Pride relating to the natural wealth of Indonesia | 0 - 2
Pride associated with the beauty of the Indonesian homeland | 0 - 2
Pride associated with cultural diversity, ethnicity, customs, and traditions, yet united | 0 - 2
Pride associated with the hospitality of the people of Indonesia | 0 - 2
Maximum score: 8
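Under this rubric, a student's total is the sum of four criterion scores, each in the 0-2 range, with a maximum of 8. A small sketch of that scoring rule (the function name and the sample scores are illustrative, not from the slides):

```python
def total_score(criterion_scores, per_criterion_max=2):
    """Sum per-criterion scores, clamping each to the allowed 0..max range."""
    return sum(min(max(score, 0), per_criterion_max)
               for score in criterion_scores)

# e.g. full marks on two criteria, partial credit on one, none on one
print(total_score([2, 1, 2, 0]))  # 5
print(total_score([2, 2, 2, 2]))  # 8, the maximum score
```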
32. Weighting a question means assigning a value (a number or letter) to it by comparing it with the other questions in the same test instrument. Weighting essay questions can therefore be done while assembling the test instrument; if a question stands alone, its weight cannot be determined. (Asep Jihad and Abdul Haris, 2010)
33. Example: Same-Weight Questions, Scale 0 - 100
No. | Initial Score (a) | Initial Max. Score (b) | Weight (c) | Item Score (SBS)
01 | 30 | 60 | 20 | 10.00
02 | 20 | 40 | 30 | 15.00
03 | 10 | 20 | 30 | 15.00
04 | 20 | 20 | 20 | 20.00
Sum | 80 | 140 | 100 | 60.00
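The slide does not state how the SBS column is computed, but the relationship SBS = (initial score / initial max. score) × weight reproduces every row of the table, so the following sketch works under that assumption:

```python
# (initial score a, initial max. score b, weight c) for items 01-04
items = [
    (30, 60, 20),
    (20, 40, 30),
    (10, 20, 30),
    (20, 20, 20),
]

# Assumed relationship: SBS = a / b * c
sbs = [a / b * c for a, b, c in items]

print(sbs)       # [10.0, 15.0, 15.0, 20.0], matching the SBS column
print(sum(sbs))  # 60.0, the student's total on the 0-100 scale
```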
34. Advantages and Disadvantages of the Essay Test
Advantages:
• Measures higher-order thinking more freely
• Allows the student's reasoning process to be checked
• Leaves no chance for cheating
Disadvantages:
• Allows unfairness to occur
• Allows teachers to deviate from the scope of the teaching material tested
• Takes longer
• Requires many instrument formats
• More open to subjectivity in scoring
35. Advantages and Disadvantages of the Performance (Practice) Test
Advantages:
• Suitable for measuring psychomotor aspects of behavior
• Can be used to check the consistency between knowledge, theory, and skill performance
• Leaves no chance for cheating
Disadvantages:
• The measurement is difficult to carry out
• Relatively higher cost
• Relatively longer time