This document discusses test construction and different types of test items. It begins by explaining the importance of preparing a table of specifications to guide test construction and ensure validity and reliability. Various types of test items are described, including multiple choice, matching, true/false, short answer, and essay. The advantages and disadvantages of each item type are provided. Steps in test construction include specifying objectives, developing a table of specifications, writing test items, pre-testing items, and revising flawed items. Guidelines for writing high-quality test items emphasize clarity, focusing items on single concepts, and providing appropriate response options.
This document discusses different types of test questions used in education measurement and evaluation. It describes supply type tests where students must supply missing information, including short answer and extended answer varieties. Short answer questions assess basic knowledge through one word to short responses, while extended/essay questions allow lengthier, paragraph responses to measure higher-order thinking. Selection type tests involve choosing from options, including true/false, matching, and multiple choice questions. The advantages and disadvantages of each question type are outlined.
A 45-year-old male presents with progressive difficulty in walking for the past 6 months. On examination, he has weakness of both lower limbs, increased tone and brisk reflexes. The most likely diagnosis is:
a) Guillain-Barré syndrome
b) Amyotrophic lateral sclerosis
c) Multiple sclerosis
d) Spinal muscular atrophy
Key: b
Multiple completion type: the stem is an incomplete statement with more than one blank, and the examinee has to select the appropriate choice to complete the statement.
Directions: Each question has four alternatives. Select the most appropriate answer to complete the statement.
The document discusses the characteristics of a good test. It defines key concepts like validity, reliability, and practicality. It explains that a good test must be valid, reliable, practical to administer, comprehensive, objective, simple, and easy to score. It also outlines various methods to establish a test's validity and reliability, such as using a table of specifications, conducting item analysis, and calculating internal consistency or test-retest reliability. Good tests also consider factors like administrability, scorability, and test length.
The document discusses alternative assessment and how it differs from traditional assessment. Alternative assessment refers to procedures that can be incorporated into daily classroom activities and measures students' direct application of skills in authentic tasks, rather than just knowledge acquisition. It emphasizes cooperation, process, and real-world applicability over competition, products, and simplistic skills. Research discussed in the document suggests that alternative assessment is better aligned with constructivist learning theories and supports student-centered education by authentically assessing performance in tasks students may encounter in life. However, barriers to its adoption include faculty resistance and lack of research on some alternative assessment methods.
Roles of Assessment in Making Classroom Instructional Decisions (Chebarona Apolinario)
There are four main roles of assessment used in the instructional process:
1) Placement assessment determines students' prerequisite skills and best mode of learning.
2) Formative assessment monitors student learning progress through continuous feedback to improve learning and instruction.
3) Diagnostic assessment identifies student learning difficulties during instruction.
4) Summative assessment evaluates whether instructional objectives have been achieved and measures student mastery at the end of a course or unit.
This document discusses different ways to categorize tests, including by mode of response (oral, written, performance), ease of quantification of responses (objective vs. subjective), mode of administration (individual vs. group), test constructor (standardized vs. unstandardized), and mode of interpreting results (norm-referenced vs. criterion-referenced). Tests can be categorized based on whether responses are oral, written, or performance-based. Objective tests with quantifiable responses can be compared to yield scores, while subjective tests allow divergent answers like essays. Tests are also categorized by whether they are administered to individuals or groups, and whether they are standardized with established procedures or unstandardized for classroom use.
(MST) Test Construction and Material
(class report(s)/discussion(s))
DISCLAIMER: I do not claim ownership of the photos, videos, templates, etc. used in this slideshow
CREDIT/s: education-portal
This document provides an overview of a workshop to train participants on evaluating assessment quality using four standards: 1) Does the assessment method reflect the desired outcome, 2) Does the assessment use high-quality items, 3) Does the assessment provide enough evidence of student achievement, and 4) Does the assessment avoid bias. The workshop objectives are to apply these standards to create or revise an assessment. Guidelines are provided for different item types to help create high-quality assessments.
Assessment is used to determine if educational objectives have been achieved. It can be formative or summative and is related to course learning objectives. Assessment measures how a student's knowledge, skills, and attitudes have changed due to academic experiences. Methods of assessment have strengths and flaws according to reliability, validity, impact on learning, acceptability, and costs. Assessment can have intended and unintended consequences like encouraging cramming over reflective learning. Characteristics of good assessment include relevance, validity, reliability, and objectivity. This document provides guidelines for creating effective essay questions, including using action verbs, structuring questions, and developing rubrics for grading.
The document discusses essay questions as an assessment tool, comparing restricted response and extended response essay questions. It outlines the advantages and disadvantages of using essay questions, and provides tips for constructing, scoring, and evaluating essay questions and responses. Restricted response questions limit the scope and response, while extended response allows more freedom in topic selection and organization. Scoring can be done using analytic or holistic rubrics, with clear scoring criteria and examples of expected responses. The document aims to provide guidance on effectively utilizing essay questions to assess higher-order thinking skills.
The document discusses the characteristics of a good test. A good test is both valid and reliable. Validity means a test measures what it is intended to measure, such as a math test measuring math ability not reading ability. Reliability means test scores are consistent and not due to random chance. Tests can be made more reliable by including more test items and using objective scoring methods. Characteristics like a large number of test items, objective scoring, and piloting a test widely increase reliability.
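The claim that adding more items increases reliability can be quantified with the Spearman-Brown prophecy formula from classical test theory. This formula is a standard psychometric result rather than something stated in the document, and the reliability figures below are hypothetical:

```python
def spearman_brown(reliability: float, length_factor: float) -> float:
    """Predict the reliability of a test whose length is changed by
    length_factor (e.g. 2.0 = doubled), given its current reliability."""
    return (length_factor * reliability) / (1 + (length_factor - 1) * reliability)

# A hypothetical 20-item test with reliability 0.70, doubled to 40 items:
print(round(spearman_brown(0.70, 2.0), 2))  # predicted reliability rises to 0.82
```

The formula shows diminishing returns: each additional block of items raises reliability less than the previous one, which is why item quality matters as much as item count.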
The document outlines 9 stages of test construction: 1) Planning, 2) Preparing items, 3) Establishing validity, 4) Reliability, 5) Arranging items, 6) Writing directions, 7) Analyzing and revising, 8) Reproducing, and 9) Administering and scoring. It discusses key considerations at each stage such as writing items according to specifications, establishing content and criterion validity, determining reliability through various methods, and ensuring the test is objective, comprehensive, simple, and practical. The final stages cover arranging items by difficulty, providing clear directions, analyzing item performance, and properly administering the test.
1. The document outlines the process of test construction which involves preliminary considerations, reviewing the content domain, item/task writing, assessing content validity, revising items/tasks, field testing, revising based on field testing results, test assembly, selecting performance standards, pilot testing, and preparing manuals.
2. Key steps include specifying test purposes and intended examinees, reviewing content standards/objectives, drafting and editing items/tasks, evaluating items for validity and potential biases, conducting item analysis after field testing, revising or deleting weak items, assembling the final test, and collecting ongoing reliability and validity data.
3. Item analysis involves both a qualitative review of item content and format and a quantitative analysis of item statistics such as difficulty and discrimination.
This document discusses key properties of assessment methods: validity, reliability, fairness, practicality and efficiency, and ethics. It defines validity as the degree to which a test measures what it is intended to measure. There are several types of validity including content, predictive, criterion, and construct validity. Reliability refers to an assessment producing stable and consistent results over time. Fairness means students understand what is being assessed and the method, and that assessment is used for learning not weeding out students. Practicality considers if teachers understand the assessment, it is not too complex, and can be implemented. Ethics refers to conducting assessments in a manner that conforms to professional standards of right and wrong.
Item analysis is a statistical technique used to evaluate test items and select or reject them based on their difficulty and ability to discriminate between more and less capable examinees. It provides information about each item's difficulty value and discrimination index. The difficulty value indicates the percentage of examinees who answered the item correctly, and can be used to identify items that are too easy or too hard. The discrimination index reflects an item's ability to differentiate high-scoring from low-scoring examinees, with positive values indicating items that high performers tend to get right and low performers tend to get wrong. Item analysis allows modifying or removing items that have low discrimination or difficulty levels outside the desired range.
The document discusses various grading and reporting systems used in education. It describes the objectives of grading systems as providing results to students, parents, and administrators in a brief and understandable way. Various types of systems are examined, including traditional letter grades, pass-fail, checklists of objectives, letters to parents, portfolios, and parent-teacher conferences. Guidelines are provided for developing effective grading systems and conducting productive parent-teacher meetings.
This document discusses the different purposes of student assessment: formative assessment provides feedback to help students improve, summative assessment evaluates student achievement and determines if they have met learning objectives to progress to the next level, assessment protects academic standards and institutional reputation, and analyzing assessment results provides feedback to teachers to evaluate and improve their instruction. Assessment serves to both evaluate students and inform teaching.
Lesson 3: Developing a Teacher-Made Test (Carlo Magno)
This document provides guidance on developing teacher-made tests. It begins with an advance organizer and outlines the test development process. It then provides details on designing different item types, including selected-response, constructed-response, and interpretive exercise items. It gives guidelines for writing different item types and examples. The objectives are to explain assessment concepts and design aligned tests. It also discusses test specifications, characteristics, layout, instructions and scoring.
(1) The document discusses assessment competencies for teachers, including choosing appropriate assessment methods, administering and interpreting various assessments, using results for instructional decisions, developing valid grading procedures, and communicating results.
(2) It also outlines several standards for teachers related to choosing, developing, interpreting and using assessment results for decision making, grading, and communicating.
(3) The document discusses the concepts of assessment literacy and alternative forms of assessment like performance and portfolio assessments. It provides definitions and characteristics of these approaches.
Assessment refers to monitoring learners' progress and includes formative and summative evaluations. Formative assessment provides feedback during learning, while summative assessment measures achievement at the end. Alternative assessments evaluate students through methods like portfolios, journals, and self-assessment rather than traditional tests. Effective assessment involves learners, communicates goals, and provides feedback to improve learning. Tests are one form of assessment but must be carefully designed, administered, and interpreted to avoid harmful impacts on teaching.
The document discusses best practices for constructing tests and writing test questions. It provides guidelines for developing multiple choice, true/false, matching, and essay questions. Key aspects addressed include writing clear questions, avoiding negatives, ensuring answer options are similar in length and structure, and using distractors that could plausibly be chosen. The document emphasizes the importance of validity, reliability, and usability in test design.
This document discusses item analysis, which is a procedure used to evaluate test questions and assess whether they are effectively measuring the intended construct. It defines key terms like item difficulty, facility value, discrimination index, and discusses the purposes and steps of performing an item analysis. The purposes include selecting the best questions, identifying weaknesses, and improving the quality and effectiveness of assessments. The steps involve scoring tests, dividing students into high and low groups, calculating difficulty and discrimination indices for each item, and using the results to revise tests.
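The steps just described (score the tests, split students into high and low groups, then compute a difficulty and discrimination index per item) can be sketched in a few lines. This is an illustrative implementation using the common upper/lower-27% convention, not code from the document, and the sample scores are hypothetical:

```python
def item_analysis(scores, top_fraction=0.27):
    """scores: one list of 0/1 item marks per student.
    Returns a (difficulty, discrimination) pair for each item."""
    n_students, n_items = len(scores), len(scores[0])
    ranked = sorted(scores, key=sum, reverse=True)      # best totals first
    k = max(1, int(n_students * top_fraction))          # size of each group
    upper, lower = ranked[:k], ranked[-k:]
    results = []
    for i in range(n_items):
        p = sum(s[i] for s in scores) / n_students      # difficulty (facility) value
        d = (sum(s[i] for s in upper)
             - sum(s[i] for s in lower)) / k            # discrimination index
        results.append((p, d))
    return results

# Four hypothetical students answering two items:
print(item_analysis([[1, 1], [1, 0], [0, 1], [0, 0]]))
```

An item answered correctly by everyone (p near 1.0) or no one (p near 0.0) carries little information, and a negative discrimination index flags an item that low scorers get right more often than high scorers, both cases the text recommends revising or removing.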
A good test should have the following key characteristics:
1. It should be a valid instrument that accurately measures what it is intended to measure as evidenced by various types of validity like content validity.
2. It should be a reliable instrument that consistently measures constructs and yields similar results over time as determined through methods like test-retest reliability.
3. It should be objective by eliminating personal bias and opinions of scorers so that different scorers arrive at the same score.
Educational assessment is an important part of the educational lives of teachers and students, who are continuously engaged in it; understanding it brings satisfaction to their work. The concept needs to be understood together with evaluation.
Summative assessments are cumulative evaluations used to measure student growth and learning after instruction, typically given at the end of a course or unit. They focus on what students have learned over a longer period of time, such as at the end of a grade level. The purpose is to determine if long-term goals have been met and provide information on a student's level of accomplishment, with final exams being a classic example. Summative assessments are compared to formative assessments in tables focusing on their differences.
This document discusses the steps involved in creating a table of specification, which is a framework or blueprint for an assessment activity. It involves identifying the constructs being measured, listing relevant content, matching the content to constructs, determining the number of items per construct, and balancing the level of difficulty. The key steps are: 1) identifying constructs, 2) listing content from the syllabus, 3) matching content to constructs, 4) determining the number and difficulty of items per construct. The table of specification helps ensure an assessment measures the intended domains and maintains psychometric quality.
This document discusses the importance and components of a Table of Specifications (TOS). It notes that a TOS is a two-way chart that describes the topics to be covered on a test and the number of items or points associated with each topic. It emphasizes that a TOS identifies the objectives and content to be measured, helps ensure a fair test, and should be developed before instruction begins. Using a TOS can improve the validity of teacher-made tests and thereby provide a more valid assessment of student achievement.
This document provides an overview of a workshop to train participants on evaluating assessment quality using four standards: 1) Does the assessment method reflect the desired outcome, 2) Does the assessment use high-quality items, 3) Does the assessment provide enough evidence of student achievement, and 4) Does the assessment avoid bias. The workshop objectives are to apply these standards to create or revise an assessment. Guidelines are provided for different item types to help create high-quality assessments.
Assessment is used to determine if educational objectives have been achieved. It can be formative or summative and is related to course learning objectives. Assessment measures how a student's knowledge, skills, and attitudes have changed due to academic experiences. Methods of assessment have strengths and flaws according to reliability, validity, impact on learning, acceptability, and costs. Assessment can have intended and unintended consequences like encouraging cramming over reflective learning. Characteristics of good assessment include relevance, validity, reliability, and objectivity. This document provides guidelines for creating effective essay questions, including using action verbs, structuring questions, and developing rubrics for grading.
The document discusses essay questions as an assessment tool, comparing restricted response and extended response essay questions. It outlines the advantages and disadvantages of using essay questions, and provides tips for constructing, scoring, and evaluating essay questions and responses. Restricted response questions limit the scope and response, while extended response allows more freedom in topic selection and organization. Scoring can be done using analytic or holistic rubrics, with clear scoring criteria and examples of expected responses. The document aims to provide guidance on effectively utilizing essay questions to assess higher-order thinking skills.
The document discusses the characteristics of a good test. A good test is both valid and reliable. Validity means a test measures what it is intended to measure, such as a math test measuring math ability not reading ability. Reliability means test scores are consistent and not due to random chance. Tests can be made more reliable by including more test items and using objective scoring methods. Characteristics like a large number of test items, objective scoring, and piloting a test widely increase reliability.
The document outlines 9 stages of test construction: 1) Planning, 2) Preparing items, 3) Establishing validity, 4) Reliability, 5) Arranging items, 6) Writing directions, 7) Analyzing and revising, 8) Reproducing, and 9) Administering and scoring. It discusses key considerations at each stage such as writing items according to specifications, establishing content and criterion validity, determining reliability through various methods, and ensuring the test is objective, comprehensive, simple, and practical. The final stages cover arranging items by difficulty, providing clear directions, analyzing item performance, and properly administering the test.
1. The document outlines the process of test construction which involves preliminary considerations, reviewing the content domain, item/task writing, assessing content validity, revising items/tasks, field testing, revising based on field testing results, test assembly, selecting performance standards, pilot testing, and preparing manuals.
2. Key steps include specifying test purposes and intended examinees, reviewing content standards/objectives, drafting and editing items/tasks, evaluating items for validity and potential biases, conducting item analysis after field testing, revising or deleting weak items, assembling the final test, and collecting ongoing reliability and validity data.
3. Item analysis involves both qualitative review of item content and format as well as quantitative analysis
This document discusses key properties of assessment methods: validity, reliability, fairness, practicality and efficiency, and ethics. It defines validity as the degree to which a test measures what it is intended to measure. There are several types of validity including content, predictive, criterion, and construct validity. Reliability refers to an assessment producing stable and consistent results over time. Fairness means students understand what is being assessed and the method, and that assessment is used for learning not weeding out students. Practicality considers if teachers understand the assessment, it is not too complex, and can be implemented. Ethics refers to conducting assessments in a manner that conforms to professional standards of right and wrong.
Item analysis is a statistical technique used to evaluate test items and select or reject them based on their difficulty and ability to discriminate between more and less capable examinees. It provides information about each item's difficulty value and discrimination index. The difficulty value indicates the percentage of examinees who answered the item correctly, and can be used to identify items that are too easy or too hard. The discrimination index reflects an item's ability to differentiate high-scoring from low-scoring examinees, with positive values indicating items that high performers tend to get right and low performers tend to get wrong. Item analysis allows modifying or removing items that have low discrimination or difficulty levels outside the desired range.
The document discusses various grading and reporting systems used in education. It describes the objectives of grading systems as providing results to students, parents, and administrators in a brief and understandable way. Various types of systems are examined, including traditional letter grades, pass-fail, checklists of objectives, letters to parents, portfolios, and parent-teacher conferences. Guidelines are provided for developing effective grading systems and conducting productive parent-teacher meetings.
This document discusses the different purposes of student assessment: formative assessment provides feedback to help students improve, summative assessment evaluates student achievement and determines if they have met learning objectives to progress to the next level, assessment protects academic standards and institutional reputation, and analyzing assessment results provides feedback to teachers to evaluate and improve their instruction. Assessment serves to both evaluate students and inform teaching.
Lesson 3 developing a teacher made testCarlo Magno
This document provides guidance on developing teacher-made tests. It begins with an advance organizer and outlines the test development process. It then provides details on designing different item types, including selected-response, constructed-response, and interpretive exercise items. It gives guidelines for writing different item types and examples. The objectives are to explain assessment concepts and design aligned tests. It also discusses test specifications, characteristics, layout, instructions and scoring.
(1) The document discusses assessment competencies for teachers, including choosing appropriate assessment methods, administering and interpreting various assessments, using results for instructional decisions, developing valid grading procedures, and communicating results.
(2) It also outlines several standards for teachers related to choosing, developing, interpreting and using assessment results for decision making, grading, and communicating.
(3) The document discusses the concepts of assessment literacy and alternative forms of assessment like performance and portfolio assessments. It provides definitions and characteristics of these approaches.
Assessment refers to monitoring learners' progress and includes formative and summative evaluations. Formative assessment provides feedback during learning, while summative assessment measures achievement at the end. Alternative assessments evaluate students through methods like portfolios, journals, and self-assessment rather than traditional tests. Effective assessment involves learners, communicates goals, and provides feedback to improve learning. Tests are one form of assessment but must be carefully designed, administered, and interpreted to avoid harmful impacts on teaching.
The document discusses best practices for constructing tests and writing test questions. It provides guidelines for developing multiple choice, true/false, matching, and essay questions. Key aspects addressed include writing clear questions, avoiding negatives, ensuring answer options are similar in length and structure, and using distractors that could plausibly be chosen. The document emphasizes the importance of validity, reliability, and usability in test design.
This document discusses item analysis, which is a procedure used to evaluate test questions and assess whether they are effectively measuring the intended construct. It defines key terms like item difficulty, facility value, discrimination index, and discusses the purposes and steps of performing an item analysis. The purposes include selecting the best questions, identifying weaknesses, and improving the quality and effectiveness of assessments. The steps involve scoring tests, dividing students into high and low groups, calculating difficulty and discrimination indices for each item, and using the results to revise tests.
A good test should have the following key characteristics:
1. It should be a valid instrument that accurately measures what it is intended to measure as evidenced by various types of validity like content validity.
2. It should be a reliable instrument that consistently measures constructs and yields similar results over time as determined through methods like test-retest reliability.
3. It should be objective by eliminating personal bias and opinions of scorers so that different scorers arrive at the same score.
Educational assessment is important part of educational life of teachers and students. they are continuously engaged inthta . understanding about this indulge them with joy.. There is need to understand this concept with evaluation.
Summative assessments are cumulative evaluations used to measure student growth and learning after instruction, typically given at the end of a course or unit. They focus on what students have learned over a longer period of time, such as at the end of a grade level. The purpose is to determine if long-term goals have been met and provide information on a student's level of accomplishment, with final exams being a classic example. Summative assessments are compared to formative assessments in tables focusing on their differences.
This document discusses the steps involved in creating a table of specification, which is a framework or blueprint for an assessment activity. It involves identifying the constructs being measured, listing relevant content, matching the content to constructs, determining the number of items per construct, and balancing the level of difficulty. The key steps are: 1) identifying constructs, 2) listing content from the syllabus, 3) matching content to constructs, 4) determining the number and difficulty of items per construct. The table of specification helps ensure an assessment measures the intended domains and maintains psychometric quality.
This document discusses the importance and components of a Table of Specifications (TOS). It notes that a TOS is a two-way chart that describes the topics to be covered on a test and the number of items or points associated with each topic. It emphasizes that a TOS identifies the objectives and content to be measured, helps ensure a fair test, and should be developed before instruction begins. Using a TOS can improve the validity of teacher-made tests and thereby provide a more valid assessment of student achievement.
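A two-way TOS chart of the kind described here is easy to represent directly: topics as rows, cognitive levels as columns, item counts in the cells. The topics and numbers below are purely illustrative, not taken from any of the documents summarized.

```python
# A table of specification as a two-way chart: topics (rows) versus
# cognitive levels (columns), with the planned item count in each cell.
tos = {
    "Physical and motor development": {"knowledge": 3, "comprehension": 2, "application": 1},
    "Language development":           {"knowledge": 2, "comprehension": 2, "application": 2},
}

def total_items(chart):
    """Total number of test items the chart commits the test writer to."""
    return sum(sum(levels.values()) for levels in chart.values())
```

Summing the cells before writing any items is a quick check that the planned test length matches the time available for administration.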
Table of Specifications (TOS) and Test Construction Review, by Rivera Arnel
The presentation provided an overview of test construction and highlighted the importance of creating a table of specifications to ensure tests adequately sample the intended learning outcomes. It also discussed guidelines for writing different types of test items like multiple choice and situational judgment questions, emphasizing the need for clarity, relevance, and avoiding flaws that could introduce errors. Effective test development requires understanding cognitive taxonomies and applying principles of validity, reliability, and usability.
This document outlines a table of specification for an exam covering topics in child development. It allocates 54 total hours across 5 topics: physical and motor development (8 hours), early stimulation (8 hours), exceptional development and common mental disorders (10 hours), natural history of language development (10 hours), and theories of language development (8 hours). It further breaks down the hours by knowledge, comprehension, application, analysis, synthesis and evaluation levels of learning.
This document discusses steps in test administration and scoring. It begins by outlining the objectives of understanding test administration procedures, scoring tests, and analyzing test scores. It then describes important considerations for administering tests properly such as ensuring a comfortable testing environment and clearly communicating instructions. The document explains methods for scoring answer sheets, coordinating scores between examiners, and using scores to evaluate student performance through methods like ranking, grades, percentiles and stanines.
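Two of the score-interpretation methods mentioned here, percentile ranks and stanines, can be sketched briefly. This is one common set of conventions, not the document's own procedure: percentile rank as the percentage of scores strictly below a given score, and stanines assigned by the standard 4-7-12-17-20-17-12-7-4 percent bands.

```python
def percentile_rank(score, all_scores):
    """Percentage of scores strictly below `score` (one common definition)."""
    below = sum(1 for s in all_scores if s < score)
    return 100 * below / len(all_scores)

def stanine(pr):
    """Map a percentile rank (0-100) to the standard-nine scale using the
    conventional 4-7-12-17-20-17-12-7-4 percent bands."""
    cutoffs = [4, 11, 23, 40, 60, 77, 89, 96]  # cumulative upper bounds for stanines 1-8
    for band, cutoff in enumerate(cutoffs, start=1):
        if pr < cutoff:
            return band
    return 9
```

So a student at the median (percentile rank 50) falls in stanine 5, while only the top and bottom 4% of students receive stanines 9 and 1.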
This document discusses assessment and test construction. It explains that assessment determines if educational goals are being met and helps teachers evaluate what is being taught and learned. It also discusses summative assessment, the grading system, and common student observations about tests. Key principles of test construction are outlined, including validity, reliability, discrimination, and comprehensiveness. The document emphasizes the importance of the Table of Specification in guiding test construction and providing a test map that describes topic coverage and cognitive levels.
The document provides guidelines for constructing different types of test questions including matching, sentence completion, essay, and other question types. It discusses principles such as ensuring questions are clear, focused, and at an appropriate level for students. The document emphasizes that creating good tests takes time but plays an important role in evaluation. It also notes that breaking rules is acceptable when one has a good reason.
The document outlines the 6 steps to prepare a table of specification for a test: 1) List topics, 2) Determine objectives, 3) Specify time spent on each topic, 4) Calculate percentage allocation for each topic, 5) Determine number of test items for each topic, 6) Distribute items to objectives. It provides an example of calculating that 20% of a 50 item test should cover the topic "Early Filipinos and their Society" since it was taught for 2 of the 10 hours on the overall topic.
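Steps 3 to 5 above amount to a proportional allocation: each topic's share of items equals its share of teaching time. A minimal sketch, reusing the document's own example (a topic taught for 2 of 10 hours on a 50-item test receives 20%, i.e. 10 items); the second topic name is a placeholder standing in for the rest of the syllabus.

```python
def allocate_items(hours_per_topic, total_items):
    """Distribute test items across topics in proportion to teaching time,
    as in steps 3-5 of preparing a table of specification."""
    total_hours = sum(hours_per_topic.values())
    allocation = {}
    for topic, hours in hours_per_topic.items():
        share = hours / total_hours          # step 4: percentage allocation
        allocation[topic] = round(share * total_items)  # step 5: item count
    return allocation
```

Note that rounding each topic independently can make the counts sum to slightly more or less than the intended test length, so the final tally should be checked and adjusted by hand.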
The document provides guidance for writing test items and creating a table of specification. It explains that a table of specification is a two-way chart that describes the topics to be covered on a test and the number of items or points associated with each topic, to ensure all elements of a course of study are properly assessed. It also defines different levels of thinking skills - knowledge, comprehension, application, analysis, synthesis, and evaluation.
The document provides guidelines for developing effective multiple choice questions (MCQs) for examinations. It recommends that MCQs should:
1. Have a clear and concise stem that presents the problem without clues to the answer. The language should be simple and suitable for students.
2. Include options that are logically arranged, mutually exclusive, and homogeneous in relationship to the stem. They should not provide clues to the answer or repeat information.
3. Provide sufficient context through a brief stimulus if needed for students to choose the correct option. The stimulus should contain all necessary information.
The document outlines the key steps in the test construction process:
1. Defining the test purpose and what construct it aims to measure.
2. Selecting an appropriate scaling method such as nominal, ordinal, interval or ratio scales.
3. Constructing initial test items that sample different cognitive domains and difficulty levels.
4. Testing items through analysis to evaluate item difficulty, reliability, validity, and discrimination.
5. Revising the test based on item analysis and feedback.
6. Publishing the finalized test along with manuals for administration and interpretation.
This document provides guidance on constructing effective test items. It outlines a 4-step process:
1. Planning - Determine content, objectives, item types, and create a blueprint.
2. Preparing - Write items according to the blueprint. Prepare directions, administration instructions, scoring keys, and an analysis chart.
3. Try-out - Administer a preliminary and final tryout on samples to identify flaws and determine item statistics.
4. Evaluation - Analyze items based on difficulty, discrimination, consistency. Determine validity, reliability, and usability of the final test.
The document discusses the development of objective assessment tools. It begins by outlining the intended learning outcomes, which are to define concepts related to objective tests, develop valid and reliable objective tests, and evaluate objective tests. It then discusses the rationale for assessment, including improving student learning and teaching. The types of objective tests are defined, including selection and supply types. The steps in planning an objective test are outlined, including identifying test objectives, deciding on the test type, and preparing a table of specifications. Characteristics of good tests like validity and reliability are also discussed.
The document discusses the preparation phase of test construction and describes the purpose and benefits of creating a table of specifications. A table of specifications serves as a blueprint for ensuring a valid, reliable, and objective test. It provides a systematic way to determine an adequate representative sample of learner behaviors and objectives to measure within a given time frame. Creating a table of specifications also allows the test constructor to determine which objectives need more emphasis and coverage. The document provides an example of a simplified table of specifications for a 4th year math test, explaining the different columns for objectives, skills, test item types, number of items, item numbers, and scoring.
This document provides guidelines for constructing effective tests. It discusses what tests are used for, including motivating learners and guiding instruction. It also outlines specific guidelines for designing tests, such as using an appropriate sample of content, ensuring clarity of tasks, and allowing adequate timing. Tests should be designed to determine if learning objectives have been achieved and encourage improvement.
The document is a letter from 4th year Bachelor of Science in Clinical Psychology students requesting assistance in validating questionnaires for their thesis. Their thesis aims to discover if technological developments in cellular phones have impacted behavioral adjustment among college students. Specifically, it will examine students at San Sebastian College - Recoletos during the 2010-2011 school year. The students are looking to contribute new knowledge to their field of specialization and are requesting a response to help with their research.
Types of test items and principles for constructing test items, by rkbioraj24
Types of test items and principles for constructing test items discusses various types of test items including oral tests, essay tests, short answer questions, and objective tests. It also outlines principles for constructing good test items such as ensuring validity, reliability, objectivity, comprehensiveness, and clarity. A good test should measure what it intends to measure, function consistently, yield objective scores, cover the entire syllabus, and have clear directions.
This document provides specifications for a reading test designed to assess Grade 4 ESL students in Baghdad, Iraq. It outlines the purpose of the test as measuring students' reading comprehension performance based on the curriculum from the previous semester. It describes the test takers as Grade 4 ESL students and notes the test is designed to be accessible for their level. It discusses the test design, including using multiple choice, true/false, and matching questions to assess reading comprehension. It provides details on how to construct different question types and how the test will be scored.
Three Fundamental Principles For Crafting Assessment Tasks
Six Important Guidelines For Developing Multiple Choice Items
Five Guidelines For Developing Essay Items
This document discusses guidelines for setting effective question papers and evaluating answer scripts. It outlines the important factors to consider when framing questions, such as the purpose, objectives, and type of assessment. The types of questions that can be used are described, including objective, short answer, and essay questions. Guidelines are provided for framing questions effectively and evaluating different question types, including preparing scoring keys and marking rubrics. The conclusion emphasizes the importance of teachers playing a role in the evaluation process to create a healthy learning environment.
Reading test specifications assignment-01-ppt, by Bilal Yaseen
This document outlines the test specifications for a reading comprehension assessment for 4th grade ESL students in Iraq. It will include multiple choice, true/false, and matching questions to measure students' reading achievement based on the semester curriculum. The test aims to place students in appropriate classes for the next semester. It provides accommodations for adolescent ESL learners and uses clear, plain language in passages and items. Scoring will be dichotomous with 1 point for a correct answer and 0 for incorrect.
This document discusses test construction, administration, and scoring. It covers determining what to measure, creating instruments to measure objectives, planning a test, preparing test items, and assembling the final test. When constructing a test, the document recommends determining objectives using a taxonomy, creating a table of specifications, and writing different item types like essay, true-false, matching, and multiple choice. It provides guidelines for writing high-quality items and measuring complex objectives. The document also discusses determining an appropriate test length and assembling the final test booklet.
This document provides guidance on developing effective classroom assessment tools. It discusses general principles of testing and assessment including measuring all learning objectives. It also outlines the steps to develop assessment tools, which include examining learning objectives, creating a table of specifications, constructing test items, assembling the test, and analyzing/improving the test items. The document describes different types of assessment tools like multiple choice tests, true/false, essays and their guidelines for effective creation. Overall, the document aims to help teachers create valid and reliable classroom assessments that accurately measure student learning.
Preparation of Classroom Assessment (SLP-B @ BISCAST), by Ireno Alcala
The document discusses the preparation of classroom assessments. It outlines the importance of planning stages, learning objectives, relationships between objectives and testing, and using a table of specifications to ensure valid and reliable tests. It provides details on factors to consider when planning teacher-made tests, such as objectives, teaching strategies, and evaluative procedures. Guidelines are given for constructing objective-type tests, including writing clear questions and avoiding irrelevant clues. The document also discusses Ralph Tyler's evaluation framework and the role of various scholars in the field of educational assessment.
This document provides guidelines for constructing non-standardized tests, including essay questions, short answer questions, and multiple choice questions. It discusses general guidelines like arranging questions from easy to difficult and avoiding ambiguous questions. It also outlines specific steps in test construction, such as selecting objectives, assigning weightage, and preparing a blueprint. The document provides rules and merits of different question types, with a focus on writing multiple choice questions with clear stems and plausible distractors. It emphasizes objectivity and efficiency in scoring tests.
This document provides guidelines for constructing non-standardized tests, including essay questions, short answer questions, and multiple choice questions. It discusses general guidelines like arranging test items from easy to difficult and avoiding ambiguous questions. It also outlines specific steps in test construction, such as selecting objectives, assigning weightage, and preparing a blueprint. The document provides rules and merits of different question types, with a focus on writing multiple choice questions with effective stems, distractors, and alternatives. Overall, the document aims to help instructors construct valid and reliable non-standardized tests.
The document discusses different types of test items used in assessments, including objective items (e.g. multiple choice, matching), short answer items, and essay items. It provides guidance on constructing each type of item, such as using clear unambiguous language, logical response options, and ensuring items measure intended learning outcomes. The key principles are to plan assessments systematically, write high-quality items according to best practices, and analyze item performance to refine future tests.
This document provides guidelines for constructing different types of written tests to assess student learning. It begins by outlining the desired learning outcomes, which are to identify appropriate test formats for different outcomes and apply guidelines for constructing test items. It then describes various test formats, including selected response (e.g. multiple choice) and constructed response (e.g. essays, short answer). The document provides detailed guidelines for writing high-quality test items for multiple choice, matching, and true/false question formats. Teachers are advised to choose formats based on learning outcomes and cognitive level, and to write clear stems and options to develop valid and reliable assessments of student knowledge.
This document provides guidelines for writing effective essay questions to assess student learning. It defines essay questions and outlines the main types: restricted response and extended response. Guidelines are given for constructing clear questions that assess higher-order thinking and provide criteria for grading. Both advantages and disadvantages of essay questions are discussed. Overall, the document advocates that essay questions can effectively evaluate students' reasoning and analytical abilities when guidelines are followed to create valid, reliable and fair assessment.
The document discusses guidelines for constructing traditional tests, including choosing a test format, categories of tests, and how to construct items for multiple choice, true/false, matching, short answer, and essay tests. It provides examples of assessment plans that identify learning outcomes, topics, and appropriate test types. The document instructs to develop a sample three-part test by identifying learning outcomes, cognitive skills, suitable format, and test specifications.
The document provides guidance on writing effective multiple choice test questions. It discusses characteristics of good test questions such as being clear, concise, independent of each other, and measuring learning objectives. The document outlines best practices for constructing question stems and response options, including making sure there is only one right answer, responses are parallel in structure, and don't provide clues to the right answer. It also discusses using multiple choice questions to test higher-order thinking by focusing on application, analysis, and evaluation in the question and responses.
The document provides guidelines for writing test items or questions. It defines key terms related to test development such as item, item writing, item pool, test, and task. It also describes different item formats such as dichotomous, polytomous, checklists, and Likert scales. For multiple choice items, it explains the components of the stem, lead-in statement, answer options, correct answer, and distractors. The document outlines prerequisites for item writing and provides guidelines for writing clear, unambiguous items that avoid trick questions and guessing. It suggests using Bloom's Taxonomy to develop items testing different cognitive levels and provides examples of terms that can be used to frame item questions.
The document provides guidelines for writing effective test items. It defines key terms related to item writing such as item, item writing, item pool, and test. It also describes different item formats including dichotomous, polytomous, checklists, and Likert scales. The document outlines best practices for writing multiple choice, true-false, matching, short answer, and oral examination items. It emphasizes the importance of clarity, avoiding trick questions, using a variety of question types and cognitive levels, and carefully constructing item stems, options, and distractors. Adhering to these guidelines helps ensure items are valid and reliable measures of student learning.
This document discusses the needs assessment process, which is the first step in the Dick and Carey instructional design model. A needs assessment is used to identify instructional goals by determining the gap between desired goals and current status. There are six types of educational needs that can be assessed. The needs assessment aims to identify the problem, its causes, and potential solutions. Determining goals is also discussed: goals should be stated in terms of new skills, knowledge, or attitudes for learners and include what learners will be able to do after instruction. A clear problem description, evidence of causes, and suggested solutions should result from a needs assessment. Determining goals matters because it directs all subsequent design decisions by specifying what learners will be able to do.
The document discusses a workbook for educators on preparing effective essay questions. It provides an overview of the workbook's objectives, which are to help educators understand essay questions, when they should be used, and how to construct them well. The workbook is self-directed and contains sections, exercises, and feedback to help educators improve their ability to write and use effective essay questions.
2. kheru2006@yahoo.com ppfd5105/07 2
What is In The Pro-Forma
• Basic Concept of Learning Assessment
• Basic Measurement Theory and Traditional Assessment
• Quality Assurance and Reliability and Validity Issue
• Discussion with examples of Assessment used in
classroom
• Alternative Approach to Learning Assessment
• Implementation to Alternative Assessment
• Discussion on the Strength of Alternative Assessment
• Affective Domain Assessment
• Psychomotor Domain Assessment
• School-Based Assessment : Issues and Problems
• Grading and Students Report
3.
At the end of the section, students will be able to
1. define what a table of test specifications is,
2. state the purpose of preparing the table of test specifications,
3. construct a sample table of test specifications.
4.
Table of Specification
• Often tests are constructed spontaneously, without planning. Tests that are
not well planned usually have very low validity and reliability. One of the
best safeguards is to prepare a Table of Specification, which is a
question/item preparation plan.
• In a table of specification the following are determined:
5.
Table of Specification
• Form and Time Frame
• Contents of Topics
• Level of Skills
• Allocation of percentage of questions/items
• Number of questions
• Level of Difficulty
6.
Importance of Table of Specification
• Tests can be constructed systematically rather than haphazardly
• Can enhance the validity and reliability of the test
• Serves as a guide and deciding factor for question planners
• Can ensure a good spread of topics and lesson objectives, as well as
the level of difficulty of questions, by giving appropriate emphasis
7.
Steps in Preparing Table of Specification
Decide The Objective of Test
Decide the Content of Test
Fill in The Content Column
Fill in The Skills Column
Decide The Types and No of Questions
Check The Distribution and % of Questions
8.
Example of Table of Specification (the orthodox)

                            Levels of Cognitive Domain
Title/Topic                 Evaluation  Understanding  Analysis  Application  Synthesis  Knowledge  Total
1 Matter                        2            3            2          1            1          1        10
2 Chemical                      1            3            2          2            2          1        10
3 Acid and Base                 1            2            1          2            2          1        10
4 Material in Industry          1            2            2          2            1          1         9
5 Chemical in Agricultural      2            3            2          1            1          2        11
Total                                                                                                 50
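An allocation like the one in the table above can be sanity-checked programmatically. Below is a minimal Python sketch using the topic names and item counts from the example table; the data structure and helper functions are illustrative choices, not part of any standard assessment tool:

```python
# Sketch: a table of specification as a dict of per-level item counts.
# Topic names and counts come from the example table; everything else
# (function names, layout) is an illustrative assumption.

LEVELS = ["Evaluation", "Understanding", "Analysis",
          "Application", "Synthesis", "Knowledge"]

table = {
    "Matter":                   [2, 3, 2, 1, 1, 1],
    "Chemical":                 [1, 3, 2, 2, 2, 1],
    "Acid and Base":            [1, 2, 1, 2, 2, 1],
    "Material in Industry":     [1, 2, 2, 2, 1, 1],
    "Chemical in Agricultural": [2, 3, 2, 1, 1, 2],
}

def topic_totals(table):
    """Number of items allocated to each topic."""
    return {topic: sum(counts) for topic, counts in table.items()}

def level_percentages(table):
    """Percentage of all items devoted to each cognitive level."""
    grand = sum(sum(counts) for counts in table.values())
    per_level = [sum(col) for col in zip(*table.values())]
    return {lvl: round(100 * n / grand, 1)
            for lvl, n in zip(LEVELS, per_level)}

print(topic_totals(table))
print(level_percentages(table))
```

Checks like these make it easy to confirm the grand total (50 items here) and the emphasis given to each cognitive level before items are written.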
9.
Steps In Test Construction

Type of Test
  Objective (MCQ)
    Short Answer Type: question, sentence completion, connections
    Objective Type: true & false, multiple choice, classifying,
                    combination, cloze objective, sequencing
  Subjective (Essay)
    Limited Response Type
    Unlimited Response Type
10.
Types of Tests ~ Objectives
• An objective test is a type of test which requires the candidate to
give correct, short answers so that it can be marked easily and
accurately. The answers are short, usually a few words or a phrase
only.
• It can be divided into 2 main groups, i.e. short answer items and
multiple choice items.
11.
Types of Tests
• Short answer item
• Multiple choice item
• a) The objective short answer type test is a type of test that
requires short answers from the candidate.
• Short answer items can be divided into:
• i) Question type
• E.g. How many credits does a matriculation student need to take in a
semester?
12.
ii) Sentence completion type
E.g. The father of vision for Malaysia is______________
iii) Connectors Type
E.g. Name the places of worship for the following religions :
Islam _______________________________________
Hinduism ____________________________________
Christianity ___________________________________
Sikhism ______________________________________
b) For the multiple choice type, candidates need to make choices
from the number of answers given. The following are some
examples of objective type of questions
13.
i) True/False Item
E.g. Petronas Twin Tower is the highest tower in the world
(True / False)
ii) Multiple Choice Item
E.g. What is the main crop in Malaysia?
A Coconut B Pineapple
C Cocoa D Rubber
iii) Matching Item
E.g. Match the leaders with their titles
Tun Abdul Razak Father of Vision
Tunku Abdul Rahman Father of Independence
Tun Dr Mahathir Father of Development
14.
iv) Classifying Type Item
E.g. You are required to choose a country which is usually
related to the names given. Write your answers in the space
given
A Melaka B Singapore C Penang
1) ____ St Francis Xavier 3 ____ Sir Francis Light
2) ____ Sir Stamford Raffles 4 ____ St Francis Assisi
v) Sequencing type
E.g. You are required to read the sentences carefully and
rearrange them in an orderly manner
1 Ali fell in the river
2 Ali went for fishing
3 Felt a jerk in the fishing hook
4 A crocodile attacked Ali
15.
vi) Combination type
e.g. Among the following plants, which one does not bear flowers?
I Fern
II Padi
III Grass
A I only C I and III
B I and II D II and III
vii) Cloze Item
e.g. _____________ is the chief producer of padi in Peninsular
Malaysia
16.
Types of Tests ~ Subjective
• A subjective test is a type of test which requires the candidate to
answer questions in their own words and may have more than one
acceptable answer.
• It can contain items that require essay-type answers of many
paragraphs, or short, precise answers of one or two sentences or a
single paragraph.
17.
Subjective items can be divided into two
basic types :
a) Limited-response type
b) Unlimited-response type
a) The limited-response type limits the content and style of the
answer. It requires candidates to answer precisely, in one phrase,
sentence or short statement. Usually items will start with words
like What…, Give…, List…, Explain…, Define…, etc.
e.g. “In one sentence, define subjective test.”
18.
b) Unlimited-response type
The unlimited-response type lets the candidate choose the direction
and scope of the answer. Candidates are free to extend their answers,
using their own thoughts, opinions and style of writing, while keeping
to the relevant topic. Usually items will start with words like
“Discuss the probable causes of ……”, “Compare and contrast ……”,
“In your opinion, ………..”
e.g. “Discuss the importance of learning about Assessment in
Education”
19.
Steps Involved In Item Building
1 Specify the Test Objective
2 Construct Table of Specification
3 Write the Test Items
4 Pre-Test
5 Modify Test Items
6 Specify Allocation of Marks
7 Keep test materials and items in a secure place
20.
Type: Multiple-choice
  Advantages:
  • Can measure all levels of student ability.
  • Enables wide sampling of subject content.
  • Quick and easy to score.
  • Enables objective scoring.
  • Can be analyzed for effectiveness.
  Disadvantages:
  • Difficult to construct good items.
  • Tendency to measure simple recall.

Type: Matching
  Advantages:
  • Relatively easy to construct.
  • Conserves examinees’ reading time.
  • Enables efficient and objective scoring.
  Disadvantages:
  • Generally unsuitable for testing higher-order abilities.
  • Tendency to measure simple recall.

Type: True/False
  Advantages:
  • Efficient for testing a large sample of information.
  • Enables efficient and objective scoring.
  Disadvantages:
  • Permits a high guess factor.
  • Difficult to construct effective items.

Type: Completion/Short Answer
  Advantages:
  • Minimizes guessing.
  • Enables coverage of fairly wide content.
  • Relatively easy to construct.
  Disadvantages:
  • Measures a limited range of abilities.
  • Cannot be machine-scored.
  • Scoring is highly dependent on judgment.

Type: Essay
  Advantages:
  • Can be quickly and easily constructed.
  • Eliminates guessing.
  • Can test higher-order thinking.
  Disadvantages:
  • Limits amount of content sampled.
  • Time-consuming to score.
  • Results in low scoring reliability.
21.
Steps Involved In Item Building
• A proper and systematic procedure is important in order to design a
quality test.
• The first step in item building is to determine the objective of the
test. If the objective is to detect students’ weaknesses in specific
skills, the most suitable instrument is a diagnostic test.
• Therefore the Table of Specification must be able to help the test
designer build a test for diagnostic purposes. Clear objectives help
raise the validity of the test.
22.
• A test designer must adhere to the Table of Specification; changing
any detail in the table will cause a mismatch between the test and
the teacher’s objectives.
• e.g. if the test calls for 40 MCQs, the designer must design the
test following the principles involved.
• Every item must observe the following guidelines:
23.
• The stem of the item must have only one focus
• The specific task is clearly stated
• Answer choices must relate to the stem of the question
• Effective distractors
• Have one correct answer
• Clear sentence structure without any ambiguity
• Must be ethical and not touch on sensitive issues
24.
• When the test is complete, it should be pre-tested before being
administered to students
• Any mistakes should be corrected
• Unsuitable and flawed items should be removed and replaced with new
ones
• A marking scheme is necessary so that marking is standardized and
hence meets the objectives of the test
• The test must be kept in a secure and safe place until administered,
to ensure confidentiality and keep the test reliable
25.
Guideline on Writing Test Items
Before writing test items, refer to the Table of Specification
1) Stem of question
should have only one focus; avoid negative questions
e.g. “Describe a teaching method which is not suitable to
be used to promote thinking among students”
2) The Task of the question
the task required of students should be clearly and explicitly
stated, so that students can focus on a specific task
26.
3) Level of Difficulty
items should present the intended level of difficulty as stated
in the Table of Specification,
e.g. low-level cognitive domain (knowledge), average level
(understanding, application, analysis), high level (synthesis,
evaluation)
4) Proof Reading
make sure that the structure, content and language of the items are
free from mistakes
5) Analysis of Difficulties
before using an item, carry out item analysis and pre-testing to
ensure a high level of reliability and validity
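One common way to carry out this analysis on pre-test results is classical item analysis: a difficulty index (the proportion of examinees answering an item correctly) and a discrimination index (the difference in that proportion between high and low scorers). The sketch below uses illustrative response data and function names of our own choosing; it is not a prescribed procedure from the slides:

```python
# Sketch of classical item analysis on pre-test results.
# Rows are examinees (sorted from highest to lowest total score),
# columns are items; 1 = correct, 0 = wrong. Data is illustrative.

responses = [
    [1, 1, 1, 0],   # highest scorer
    [1, 1, 0, 1],
    [1, 0, 1, 0],
    [1, 0, 0, 0],   # lowest scorer
]

def difficulty_index(item_col):
    """p-value of an item: proportion answering correctly (0 = hard, 1 = easy)."""
    return sum(item_col) / len(item_col)

def discrimination_index(item_col):
    """Difference in p between the top half and bottom half of examinees."""
    half = len(item_col) // 2
    upper, lower = item_col[:half], item_col[half:]
    return sum(upper) / len(upper) - sum(lower) / len(lower)

# Transpose the response matrix so we can look at one item at a time.
for i, col in enumerate(zip(*responses), start=1):
    print(f"Item {i}: p = {difficulty_index(col):.2f}, "
          f"D = {discrimination_index(col):.2f}")
```

Items with extreme p-values (everyone right or everyone wrong) or low discrimination are candidates for the modification or removal described in the previous slide.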
27.
Norm-Referenced and Criterion-Reference Test
• The two types of referenced test are the norm-
referenced and criterion-referenced test
• A norm-referenced test is a type of test, assessment,
or evaluation in which the tested individual is compared
to a sample of his or her peers (referred to as a
"normative sample").[1] The term "normative
assessment" refers to the process of comparing one
test-taker to his or her peers.[1]
• A criterion-referenced test is one that provides for
translating the test score into a statement about the
behavior to be expected of a person with that score or
their relationship to a specified subject matter.[1] Robert
Glaser originally coined both terms.[2] Therefore, the
term "criterion-referenced test" is somewhat of a
misnomer, as it more accurately refers to the
interpretation that is made of the score and not the test
itself.[3]
28.
Norm-referenced Test
• The people taking the test provide the norm of achievement
• It compares an individual’s score to the group’s scores
• Test items cover all abilities or topics
• Intended for measuring general ability, assessing the range of
ability in a large group, and selecting top candidates
• NRT does not tell whether students are ready to move on
• NRT is not suitable for the affective and psychomotor domains
• NRT tends to encourage competition and score comparisons
• E.g. end of term, UPSR, PMR, SPM, centralized exams
* NRT is similar to summative evaluation
29.
Criterion-referenced Test
• Test scores are compared not to those of others but to a given
criterion or performance standard
• Students are expected to achieve the predetermined standard
• Criteria are clearly defined
• Works best when measuring mastery of skills
• Determines if students can proceed
• Suitable for assessing the affective and psychomotor domains
• Suitable for measuring achievement in special education
• Useful for grouping students for instruction
• e.g. diagnostic test, driving test, typing test …..
30.
Summary NRT & CRT

Dimension: Purpose
  Criterion-Referenced Tests: To determine whether each student has
  achieved specific skills or concepts. To find out how much students
  know before instruction begins and after it has finished.
  Norm-Referenced Tests: To rank each student with respect to the
  achievement of others in broad areas of knowledge. To discriminate
  between high and low achievers.

Dimension: Content
  Criterion-Referenced Tests: Measures specific skills which make up a
  designated curriculum. These skills are identified by teachers and
  curriculum experts. Each skill is expressed as an instructional
  objective.
  Norm-Referenced Tests: Measures broad skill areas sampled from a
  variety of textbooks, syllabi, and the judgments of curriculum
  experts.

Dimension: Item Characteristics
  Criterion-Referenced Tests: Each skill is tested by at least four
  items in order to obtain an adequate sample of student performance
  and to minimize the effect of guessing. The items which test any
  given skill are parallel in difficulty.
  Norm-Referenced Tests: Each skill is usually tested by fewer than
  four items. Items vary in difficulty. Items are selected that
  discriminate between high and low achievers.

Dimension: Score Interpretation
  Criterion-Referenced Tests: Each individual is compared with a
  preset standard for acceptable achievement. The performance of other
  examinees is irrelevant. A student’s score is usually expressed as a
  percentage. Student achievement is reported for individual skills.
  Norm-Referenced Tests: Each individual is compared with other
  examinees and assigned a score, usually expressed as a percentile, a
  grade-equivalent score, or a stanine. Student achievement is
  reported for broad skill areas, although some norm-referenced tests
  do report student achievement for individual skills.

Adapted from: Popham, J. W. (1975). Educational evaluation. Englewood
Cliffs, NJ: Prentice-Hall.
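The contrast in score interpretation can be made concrete with a small sketch. The scores, the 60% mastery cutoff, and the function names below are illustrative assumptions, not taken from the slides:

```python
# Sketch: interpreting the same raw score norm-referenced vs
# criterion-referenced. All data and names here are illustrative.

group_scores = [35, 42, 48, 50, 55, 61, 64, 70, 77, 88]  # out of 100
MASTERY_CUTOFF = 60  # preset criterion for "mastery" (assumed value)

def percentile_rank(score, group):
    """Norm-referenced view: one common definition is the percentage
    of the group scoring below this score."""
    below = sum(1 for s in group if s < score)
    return 100 * below / len(group)

def mastery(score, cutoff=MASTERY_CUTOFF):
    """Criterion-referenced view: compare the score to a preset standard;
    other examinees' scores are irrelevant."""
    return "mastered" if score >= cutoff else "not yet mastered"

score = 61
print(f"NRT view: percentile rank {percentile_rank(score, group_scores):.0f}")
print(f"CRT view: {mastery(score)}")
```

The same raw score of 61 yields a middling percentile rank under the norm-referenced view, yet counts as mastery under the criterion-referenced view, which is exactly the distinction the table above draws.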
31.
Summary of NRT and CRT

Dimension: Concept
  Norm-Referenced Tests: Relative-achievement evaluation showing
  comparative performance of groups in the form of a normal graph
  Criterion-Referenced Tests: Objective-referenced evaluation
  determining individual achievement based on minimum criteria
  pre-determined earlier

Dimension: Aim
  Norm-Referenced Tests: Compare and distinguish performance among
  candidates or among groups
  Criterion-Referenced Tests: Determine ability to master learning
  based on certain pre-determined criteria

Dimension: Uses
  Norm-Referenced Tests: Summative test to distinguish candidates in
  performance levels of distinction, credit, pass or fail
  Criterion-Referenced Tests: Formative test to improve
  teaching-learning based on the test result

Dimension: Target
  Norm-Referenced Tests: Comparison of performance among candidates
  Criterion-Referenced Tests: Determination of performance
  individually or in small groups

Dimension: Question characteristics
  Norm-Referenced Tests: Questions arranged from easy to difficult,
  with discrimination among candidates
  Criterion-Referenced Tests: Questions have almost the same level of
  difficulty, based on learning objectives

Dimension: Grading
  Norm-Referenced Tests: Passing mark or grading is determined after
  the test result
  Criterion-Referenced Tests: Passing mark or grading is determined
  before the test is carried out

Dimension: Coverage
  Norm-Referenced Tests: Questions cover a wide range of learning
  skills
  Criterion-Referenced Tests: Question content concentrates on limited
  learning skills

Dimension: Examples
  Norm-Referenced Tests: Public examinations such as UPSR, PMR, SPM
  and STPM
  Criterion-Referenced Tests: Short test, course work, written
  exercises and portfolio project
33.
PPFD5105 Short-Test
02 Jun 07
Q1. In our first meeting, you were introduced to learning assessment.
i) List the types of learning assessment you were informed of.
ii) Which of the learning assessments is meant for you as a student?
iii) Briefly describe the learning assessment you have chosen. [ 3,1,2 ]
34.
• Q2. In our second meeting, we talked
about the importance of assessment for
teachers. As such, why do you think
teachers need to know about
assessment? [ 5 ]
35.
Q3. We also discussed the importance of the assessment format to be
used. Should you be assigned as Ketua Panatia for your school,
design a simple assessment for use accordingly. [10]
36.
Q4. The letters A, E, M, T are synonymous with assessment terms.
i) What does each letter stand for?
ii) Describe briefly what you have mentioned. [4,4]
37.
Q5. Create one test item from each topic we have discussed and state
the difficulty level of each. [ 10 ]
38.
Example of Assessment Format for Malaysian Exam (English Version)
ASSESSMENT FOR (NAME OF SUBJECT e.g. Maths 7200/1)
NAME OF EXAMINATION (e.g. UPSR, PMR, SPM)

SUBJECT: School-Based Assessment vs Centralized-Based Assessment

Types of Instrument
  School-based: E.g. Project, Portfolio
  Centralized:  Type of test, e.g. Objective, Subjective ...

Type of Item
  School-based: Based on Project/Activity
  Centralized:  Type of questions, e.g. closed-ended, open-ended

Number of Items
  Centralized:  Section A: 10   Section B: 3   Section C: 2

Score
  School-based: Competent/Not-Competent
  Centralized:  Section A: 50   Section B: 30   Section C: 20

Total Score
  School-based: Based on number of competencies assessed in the subject
  Centralized:  100

Length of Test
  School-based: Throughout the P&P process in forms 4 & 5
  Centralized:  Section A: 50 min   Section B: 40 min   Section C: 60 min
                Total: 2 hours 30 min

Construct Assessed: COMPETENCY
  School-based: Competency based on evidence of process and product
  Centralized:  Competency based on knowledge evidence
                Section A: Knowledge 1 (experiential)
                Section B: Application 1 (one situation to another situation)
                Section C: Creativity (Innovation)

Context Coverage
  Context covered to all modules in ...... form 4 and form 5

Difficulty Level and Weightage
  School-based: Based on standard in criterion
  Centralized:  Section A: Level of Difficulty 5
                Section B: Level of Difficulty 3
                Section C: Level of Difficulty 2

Accessories Allowed
  e.g. scientific calculator, handphone