The document discusses assessment of learning and educational technology. It provides information on different types of assessment tools used to evaluate the teaching and learning process, including tests, measurements, and evaluations. Various audiovisual aids that can be used to support instruction are also outlined, such as still photography, motion pictures, and multimedia equipment. Common classroom devices and their purposes are defined.
EDUCATIONAL TECHNOLOGY AND ASSESSMENT OF LEARNING (ZiloVinRoseAndus)
This document discusses educational technology and the assessment of learning. It defines audiovisual aids as any device used to aid communication, such as photographs, films, and multimedia equipment. The purpose of visual aids is to engage students, stimulate imagination, facilitate understanding, provide motivation, and develop listening skills. Traditional forms include demonstrations, field trips, experiments, pictures, and real objects. Devices are classified as extrinsic, intrinsic, material, or mental. Nonprojected aids do not require equipment and include charts, graphs, maps, illustrations, and handouts. The document also discusses assessing learning through formative and summative tests, standardized versus teacher-made tests, and the criteria for a good examination, including validity, reliability, and objectivity.
This document discusses various types of assessment tools and testing methods. It describes assessment of learning as focusing on developing and using assessment tools to improve the teaching and learning process. Some key types of tests and assessments discussed include formative and summative tests, standardized tests, criterion-referenced and norm-referenced tests, and objective, essay and recognition tests. The document also provides guidance on constructing, administering and scoring different types of tests to effectively measure student learning outcomes.
Assessment of learning focuses on developing and using assessment tools to improve the teaching and learning process. It emphasizes using tests to measure knowledge and thinking skills. Students will learn to create rubrics for performance and portfolio assessments. There are various types of tests classified by response format, preparation method, answer nature, and purpose. Proper test construction considers objectives, sampling, item format, scoring, and other validation criteria. Data from tests is interpreted using measures of central tendency, dispersion, and other statistical tools.
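As a minimal illustration of the statistical tools mentioned above (the scores here are hypothetical, not from the slides), the measures of central tendency and dispersion can be computed with Python's standard library:

```python
import statistics

# Hypothetical raw scores from a classroom test.
scores = [12, 15, 15, 9, 18, 15, 11, 14, 16, 15]

mean = statistics.mean(scores)           # central tendency: arithmetic average
median = statistics.median(scores)       # central tendency: middle score
mode = statistics.mode(scores)           # central tendency: most frequent score
stdev = statistics.stdev(scores)         # dispersion: sample standard deviation
score_range = max(scores) - min(scores)  # dispersion: highest minus lowest

print(mean, median, mode, round(stdev, 2), score_range)
```

Together these figures summarize where the class scores cluster and how widely they spread, which is the basis for interpreting test data.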
This document discusses validity and reliability in assessment. Validity refers to whether inferences made from test scores are appropriate, which is ensured by addressing 12 validation criteria like relevance, content, and consequences. Reliability refers to consistency of scores and is necessary for validity. Reliability can be improved through clear scoring rubrics, multiple scorers, and item analysis to evaluate assessment items. An assessment cannot be valid without first being reliable.
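The item analysis and reliability ideas above can be sketched in code. This is a hedged example with a hypothetical 0/1 response matrix: it computes each item's difficulty index (proportion correct) and a KR-20 reliability estimate for dichotomously scored items.

```python
# Hypothetical 0/1 item-response matrix: rows = students, columns = items.
responses = [
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [1, 1, 1, 1],
    [0, 0, 0, 1],
    [1, 1, 0, 0],
]

n_items = len(responses[0])
n_students = len(responses)

# Item difficulty: proportion of students answering each item correctly.
difficulty = [sum(row[i] for row in responses) / n_students
              for i in range(n_items)]

# KR-20 reliability estimate (for dichotomous items).
totals = [sum(row) for row in responses]
mean_total = sum(totals) / n_students
var_total = sum((t - mean_total) ** 2 for t in totals) / n_students
pq = sum(p * (1 - p) for p in difficulty)
kr20 = (n_items / (n_items - 1)) * (1 - pq / var_total)

print(difficulty, round(kr20, 3))
```

Items with very high or very low difficulty, or a low overall reliability coefficient, are candidates for revision, which is how item analysis feeds back into improving the assessment.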
Here are the slides for my report on Educational Measurement and Evaluation, for my Advanced Measurements and Evaluation subject. #Polytechnic University of the Philippines #GraduateSchool
Types of Evaluation prior to Instructional Act (itspetacular)
Evaluations prior to the instructional act include pre-assessment, formative, and summative tests. These three types of evaluation are done to determine the needs and strengths of the students.
Assessment of learning and educational technology ed 09 ocampos (CharlesIvanOcampos)
Assessment of learning supports the learner-teacher relationship in the academe: it guides both in identifying their strengths and weaknesses in class, and it evaluates the learners' learning process.
Achievement tests measure what students have learned after a period of instruction. There are two main types - standardized tests which have uniform procedures and scoring, and teacher-made tests which assess learning in a particular classroom. Standardized tests provide norms and impartial information, while teacher-made tests help evaluate teaching effectiveness but have less accuracy and refinement. Both types of achievement tests are important for measuring student learning outcomes.
This document discusses assessment of learning and test construction. It focuses on using assessment tools to improve the teaching and learning process. Assessment of learning emphasizes using tests to measure knowledge and thinking skills. It also allows students to experience developing rubrics for performance and portfolio assessments. The document then discusses measurement, evaluation, types of tests, test construction steps, and considerations for developing valid and reliable assessments.
The document discusses various topics related to assessment and testing in education. It provides information on:
1) The different types of audio-visual aids that can be used, such as still photography, video, and slides.
2) The purposes of using visual aids in teaching, which include challenging students' attention, stimulating imagination, and facilitating understanding.
3) The various types of measurement scales used in educational assessment, from nominal to ratio, and the properties of each.
4) The differences between norm-referenced and criterion-referenced tests and interpretations. Norm-referenced compares students to peers while criterion-referenced compares to an absolute standard.
5) Important considerations when constructing tests
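The norm-referenced versus criterion-referenced distinction in point 4 can be made concrete with a small sketch (all scores and the cutoff here are hypothetical): the same raw score is interpreted once against a fixed mastery standard and once relative to the peer group.

```python
import statistics

scores = [45, 52, 60, 61, 67, 70, 73, 78, 85, 92]  # hypothetical class scores
student = 73

# Criterion-referenced interpretation: compare against a fixed standard.
cutoff = 75  # assumed mastery cutoff
mastered = student >= cutoff

# Norm-referenced interpretation: compare against the peer group.
mean = statistics.mean(scores)
sd = statistics.stdev(scores)
z = (student - mean) / sd                                  # standard score
percentile = sum(s <= student for s in scores) / len(scores) * 100

print(mastered, round(z, 2), percentile)
```

Note how the two interpretations can disagree: a student can fall short of the absolute standard while still scoring above most peers.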
This document discusses measurement, evaluation, and tests in physical education. It defines these terms and explains their interrelationship. Measurement involves using tests to collect quantitative data about traits, while evaluation judges the worth of those measurements. Tests are tools used for measurement. The document also discusses the need for and modern trends in measurement and evaluation, such as increased accountability, emphasis on health-related fitness, and more sophisticated instruments. It explains how measurement and evaluation are important for setting objectives, assessing achievement, research, and responding to current issues in physical education.
The document discusses test measurement, assessment, and evaluation in education. It defines key terms like test, measurement, objective and subjective tests, formative and summative assessment. Formative assessment is used for feedback, while summative assessment evaluates learning at the end of a unit. Evaluation examines overall achievement and can be process-based or examine outcomes. Assessment informs teaching, while evaluation makes judgments about performance and effectiveness.
The document discusses assessment of learning and the process of test construction. It defines key terms related to assessment such as measurement, evaluation, formative and summative tests. It also outlines the different types of tests according to response method, preparation, and nature. Additionally, it covers standards for developing good tests including validity, reliability, and objectivity. It describes the stages of test construction including planning, development, administration, analysis and revision. Finally, it discusses considerations for test construction including type of test, length, item formats and development of clear instructional objectives.
Negative marking on multiple choice tests deducts points for incorrect answers to discourage guessing, though students dislike it, and it is not used for descriptive exams. Measurement and evaluation are processes for quantifying individual achievement through assigning values or symbols to observable phenomena and determining learning outcomes through qualitative assessment. Various types of tests, measures of central tendency and variability, grading philosophies, and key terminology are discussed for educational measurement and evaluation.
The document discusses various techniques for evaluating educational curriculum and programs. It describes evaluation as collecting data to determine the value of a program and whether it should be adopted, rejected, or revised. Several data collection techniques are examined, including observation, interviews, questionnaires, tests, and assessments. Tests are categorized based on their purpose, format, and standards. The document emphasizes that using the right technique for a given evaluation is important to obtain accurate information and make better decisions.
Assessments for learning - B.Ed Second year notes (Abu Bashar)
1. Understand the nature of assessment and evaluation and their role in the teaching-learning process.
2. Understand the perspectives of different schools of learning on learning assessment
3. Realise the need for school based and authentic assessment
4. Examine the contextual roles of different forms of assessment in schools
5. Understand the different dimensions of learning and the related assessment procedures, tools and techniques
6. Develop assessment tasks and tools to assess learners' performance
7. Analyse, manage, and interpret assessment data
8. Analyse the reporting procedures of learners' performance in schools
9. Develop indicators to assess learners' performance on different types of tasks
10. Examine the issues and concerns of assessment and evaluation practices in schools
11. Understand the policy perspectives on examinations and evaluation and their implementation practices
12. Trace the technology-based assessment practices and other trends at the international level
Norm referenced and Criterion Referenced Test (DrSindhuAlmas)
The document discusses criterion-referenced tests (CRT) and norm-referenced tests (NRT). CRTs measure student performance against a predetermined standard or criteria, such as achieving a certain score. NRTs compare student performance to other students in a norming group. CRTs are used to assess student mastery of specific standards and guide instruction, while NRTs rank students and are used for grouping, admissions, and identifying learning disabilities. The key difference is that CRTs measure performance against a fixed standard, while NRTs measure performance relative to other students.
This document discusses different ways to categorize tests, including by mode of response (oral, written, performance), ease of quantification of responses (objective vs. subjective), mode of administration (individual vs. group), test constructor (standardized vs. unstandardized), and mode of interpreting results (norm-referenced vs. criterion-referenced). Tests can be categorized based on whether responses are oral, written, or performance-based. Objective tests with quantifiable responses can be compared to yield scores, while subjective tests allow divergent answers like essays. Tests are also categorized by whether they are administered to individuals or groups, and whether they are standardized with established procedures or unstandardized for classroom use.
The document discusses various types of language assessment used in summer school including placement tests, proficiency tests, diagnostic tests, progress tests, achievement tests, and standardized tests. It explains the purposes and characteristics of these different assessments. Key points include:
- Placement tests assess students' language abilities to determine the appropriate course.
- Proficiency tests measure general language learning ability.
- Diagnostic tests identify areas where students need further help.
- Progress and achievement tests measure mastery of course goals.
- Standardized tests compare students along a continuum of language skills.
Measurement and evaluation in education (Cheryl Asia)
The document discusses measurement and evaluation in education. It defines measurement as quantifying individuals' attributes and skills, and evaluation as making judgements about something using criteria and standards. Evaluation involves systematically determining how well educational objectives are achieved by learners. Key points include:
- Evaluation assesses educational objectives, programs, teachers, and learners.
- Evaluation implies a systematic, continuous, and comprehensive process using various techniques.
- Evaluation assumes instructional objectives have been previously identified.
- Tests, quizzes, and other instruments are used to obtain information for evaluation purposes.
The document discusses the purpose, principles, and scope of testing and evaluation. The purpose of testing is to assess student performance and assign grades. Testing also helps predict future performance. There are four key principles of testing: practicality, reliability, validity, and authenticity. Evaluation aims to determine competence, predict educational practices, and clarify proficiency. Evaluation techniques should be selected based on their purposes and limitations. The scope of evaluation includes making value judgments, determining how well objectives were attained, and identifying student strengths, weaknesses, and needs.
This document discusses key concepts related to educational testing and measurement. It defines terms like test, assessment, measurement, and evaluation. It also describes different types of scales used in measurement, including nominal, ordinal, interval, and ratio scales. Finally, it outlines different categories of tests like norm-referenced vs. criterion-referenced tests and different types of test items and formats.
THESIS - Making the Results and Discussions portion (elio dominglos)
- The attitudes of farmers toward organic farming varied across different aspects of organic farming. Farmers had a very positive attitude toward the use of standards, proper harvesting and marketing, and advantages/disadvantages of organic farming. However, their attitudes were not as positive toward organic farming commodities, effectiveness, and importance.
- Statistical analysis confirmed that the farmers' attitudes differed significantly. This difference can be attributed to varying levels of patience, talent, honesty, and goals among individual farmers.
- It is concluded that the farmers have a mixed view of organic farming, being more convinced by some aspects than others. They are recommended to attend educational sessions to improve their knowledge and attitudes regarding all aspects of organic farming.
This document defines key concepts in educational measurement and evaluation including measurement, evaluation, testing, and the functions and principles of evaluation. It discusses different types of tests and measurements including criterion-referenced and norm-referenced tests. Various measures of central tendency (mean, median, mode) and variability (range, standard deviation) are explained. Different point measures like quartiles, deciles, and percentiles are also defined. Formulas for calculating measures like the mean, median, and mode from frequency distribution tables are provided.
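One of the computations mentioned above, the mean from a frequency distribution table, can be sketched as follows (the distribution is hypothetical): each class interval is represented by its midpoint, weighted by its frequency.

```python
# Hypothetical grouped frequency distribution of test scores.
# Each entry: (class midpoint, frequency).
distribution = [(12, 2), (17, 5), (22, 8), (27, 4), (32, 1)]

n = sum(f for _, f in distribution)                  # total frequency
mean = sum(m * f for m, f in distribution) / n       # weighted mean of midpoints

print(n, mean)
```

The same weighted-sum pattern underlies grouped-data formulas for other measures, with the median and percentile-type points (quartiles, deciles) additionally requiring cumulative frequencies.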
Criterion and norm referenced evaluation (Jinto Philip)
The document summarizes a departmental seminar on criterion and norm-referenced evaluation. The seminar was presented by Mr. Jinto Philip on August 31, 2010 from 2-3 PM in the Arts Theatre. It discussed the key differences between criterion-referenced tests, which evaluate performance against an absolute standard, and norm-referenced tests, which compare performance to other examinees. Some key differences highlighted were that criterion-referenced tests measure specific skills from a curriculum, while norm-referenced tests measure broad skill areas. Criterion-referenced tests aim to determine if students have achieved skills, while norm-referenced tests rank students against others.
Assessment of Learning focuses on developing and using assessment tools to improve the teaching and learning process. It emphasizes using tests to measure knowledge and thinking skills. Students learn how to develop rubrics to assess performance and portfolios. There are various types of tests classified by response method, preparation method, and nature. Tests are used formatively to monitor progress and summatively to measure learning outcomes. Proper test construction considers objectives, item formats, length, and scoring to create valid and reliable assessments.
Concept of classroom assessment by Dr. Shazia Zamir (shaziazamir1)
The document discusses the concept of classroom assessment, describing it as an ongoing process through which teachers and students interact to promote greater learning. It notes that classroom assessment emphasizes collecting student performance data to diagnose learning problems, monitor progress, and provide feedback for improvement. The document also outlines different types of assessments including diagnostic, formative, and summative assessments as well as norm-referenced and criterion-referenced assessments.
Evaluation of educational programs in nursingNavjyot Singh
Evaluation is a systematic process to judge the value or worth of teaching and learning in nursing education. It involves collecting, analyzing, and interpreting information on student performance and growth to determine if educational objectives are being achieved. There are two main types of evaluation - formative evaluation which provides feedback during instruction, and summative evaluation which determines achievement at the end through tests and projects. Both qualitative and quantitative techniques are used for evaluation.
Examination and Evaluation-ppt presentation.pptxAbdulakilMuanje
The document defines key concepts related to assessment including tests, measurements, evaluation, and assessment. It outlines the purposes of assessment as improving student learning and program/institutional improvement. It also describes different frames of reference for interpreting test scores such as ability-referenced, growth-referenced, norm-referenced, and criterion-referenced. The document further discusses types of assessments including formative vs summative and screening vs diagnostic. It also covers Bloom's taxonomy of educational objectives for the cognitive, affective and psychomotor domains.
To add knowledge about teaching that can help the students and teachers in their learning process in which they can be both assess their way of interaction to achieve their goals in class. Assessment of learning focuses on the development and utilization of assessment tools to improve the teaching-learning process. It emphasizes on the use of testing for measuring knowledge, comprehension and other thinking skills. It allows the students to go through the standard steps in test constitution for quality assessment. Students will experience how to develop rubrics for performance-based and portfolio assessment. The presentation includes educational technology and statistical tools that helps to determine the learning of the students.
The document discusses key concepts related to educational assessment including tests, measurement, evaluation, and different types of assessment. It defines tests as instruments used to measure student performance or traits, and measurement as collecting test score data. Evaluation is interpreting and analyzing measurement data to make judgments. Assessment can be formative (assessment for learning) or summative (assessment of learning) and teachers have different roles in each. Standardized tests differ from teacher-made tests, and assessment serves various instructional purposes like identifying student needs and progress.
This document outlines the course content for a class on assessment for learning. The 6-unit course covers topics such as the basic concepts of assessment, formative and summative evaluation, assessment tools and techniques, issues in classroom assessment, assessment for inclusive practices, and reporting quantitative assessment data. Key concepts that will be addressed include the roles and purposes of assessment, principles of effective assessment practices, and using different types of assessment to support student learning.
Classroom Based Assessment Tools and Techniques 27-09-2022.pptNasirMahmood976516
This document discusses various methods and purposes of classroom-based assessment. It defines assessment as the systematic process of documenting and using data on student knowledge, skills, attitudes, and beliefs to improve learning. The document outlines different types of assessments including achievement tests, psychological tests, and performance tests. It also discusses formative assessment, which provides feedback to help students improve, versus summative assessment, which evaluates performance against standards. Finally, the document details specific formative assessment techniques teachers can use like interviews, checklists, observations, and case studies.
The document discusses various types of language assessment used in summer school programs. It describes formative assessments used to evaluate student progress on a daily basis and summative assessments like tests administered at the end of a course. The document also discusses the differences between tests, evaluations, and assessments. It provides examples of different types of language tests, their purposes, and considerations for ensuring reliability and validity.
The document discusses various topics related to evaluation in computer science including the concept of evaluation, its objectives, types of evaluation, tools and techniques used for evaluation. Some key points:
1. Evaluation is the process of determining the extent to which objectives are being attained and determines the effectiveness of teaching and learning.
2. There are two major types of evaluation - measurement which is objective and exact, and appraisal which evaluates intangible qualities through observation and opinions.
3. Evaluation tools include written, oral and practical exams as well as observation, interviews, questionnaires, and student work. Item analysis and testing difficulty and discrimination are also discussed.
This document discusses key concepts related to educational assessment including definitions of terms like tests, measurement, evaluation and assessment. It outlines the relationship between tests, measurement and assessment and describes the four phases of development of Malaysia's examination system. Norm-referenced and criterion-referenced tests are explained and the differences between them are provided. Formative and summative assessment are also defined. The roles and purposes of assessment for learning and assessment of learning are briefly described. Finally, the document touches on school-based assessment in the Malaysian context under the KSSR system.
This document discusses assessment in education. It states that assessment drives what is valued and taught in curriculums. It also discusses moving from product-oriented assessments, like tests, to process-oriented assessments like formative assessment that provide feedback to improve learning. The document outlines different assessment paradigms like assessment of learning versus assessment for learning, and discusses the principles, purposes, types and methods of educational assessment.
Classroom assessment involves collecting data on student performance through various strategies to diagnose learning problems, monitor progress, and provide feedback for improvement. It is a formative, ongoing process that is learner-centered and teacher-directed. Formative assessments are used during instruction while summative assessments are given at the end to evaluate student achievement and assign grades. Proper assessment requires clear thinking, effective communication, and matching the appropriate assessment method to the desired learning target.
This document discusses measurement and evaluation in education. It defines evaluation as the systematic collection and interpretation of evidence to make a judgement about the value and effectiveness of a program, with the aim of informing action. Evaluation is needed to determine if teaching goals and curricula are achieving their intended outcomes, to assess student progress, and to ensure quality and investment returns. Good evaluation is valid, reliable, practical, objective, and useful. While measurement provides precise quantitative data, evaluation involves more subjective assessment of broader factors like attitudes, interests, and personality.
This document provides information about an assessment unit on didactic assessment. It includes an introduction to assessment, objectives of the unit which are to develop understanding of assessment methods and apply assessment principles for effective lesson planning. It also describes different types of assessment including formative, summative, and continuous assessment. Various assessment techniques are explained such as open-ended questions, short answer questions, and examples of each. The roles and importance of assessment in the teaching and learning process are highlighted.
This document discusses various techniques for assessing students' learning progress, including assessment, measurement, evaluation, and different types of assessments. It defines assessment as gathering information to determine how instructional objectives are being achieved. Measurement refers to determining how much of a knowledge, skill, or characteristic a student possesses. Evaluation makes a judgment about performance based on standards. The document also outlines trends in classroom assessment, objectives that can be assessed, and specific assessment methods like tests, observations, performances, and self-assessments.
Assessing Students performance by Angela Uma Biswas, student of Institute of ...Angela Biswas
This document discusses assessment of student performance. It defines assessment as a systematic process of gathering data about student learning to make inferences and provide feedback. Assessment for learning promotes achievement by informing students of their progress. Effective assessment involves developing learning objectives, aligning the curriculum, collecting and using data to improve programs. The purpose of assessment is to help students track progress, receive feedback, and achieve learning goals. Teachers can assess through assignments, exams, classroom techniques, and self-assessment. Formative assessment occurs during instruction while summative assessment occurs at the end. Good assessment is valid, reliable, practical, fair, and useful for students. Feedback is also important to help students improve.
This document discusses specific techniques for evaluating curriculum, including:
1. Observation - Gathering information by directly observing programs and student/teacher behaviors. This can be unstructured or structured.
2. Interviews - Collecting verbal information from interviewees. Interviews can be unstructured or structured.
3. Questionnaires - Collecting quantitative data through surveys to get information from many people easily.
4. Unobtrusive measures - Obtaining non-reactive observations by examining physical traces or records without participants' awareness.
5. Tests - Assessing learning outcomes through various types of tests like diagnostic, proficiency, aptitude, and achievement tests as well as formative vs summative assessments.
Similar to Assessment of learning ( Anna Marie Pajara (20)
Baha Majid WCA4Z IBM Z Customer Council Boston June 2024.pdfBaha Majid
IBM watsonx Code Assistant for Z, our latest Generative AI-assisted mainframe application modernization solution. Mainframe (IBM Z) application modernization is a topic that every mainframe client is addressing to various degrees today, driven largely from digital transformation. With generative AI comes the opportunity to reimagine the mainframe application modernization experience. Infusing generative AI will enable speed and trust, help de-risk, and lower total costs associated with heavy-lifting application modernization initiatives. This document provides an overview of the IBM watsonx Code Assistant for Z which uses the power of generative AI to make it easier for developers to selectively modernize COBOL business services while maintaining mainframe qualities of service.
Odoo releases a new update every year. The latest version, Odoo 17, came out in October 2023. It brought many improvements to the user interface and user experience, along with new features in modules like accounting, marketing, manufacturing, websites, and more.
The Odoo 17 update has been a hot topic among startups, mid-sized businesses, large enterprises, and Odoo developers aiming to grow their businesses. Since it is now already the first quarter of 2024, you must have a clear idea of what Odoo 17 entails and what it can offer your business if you are still not aware of it.
This blog covers the features and functionalities. Explore the entire blog and get in touch with expert Odoo ERP consultants to leverage Odoo 17 and its features for your business too.
An Overview of Odoo ERP
Odoo ERP was first released as OpenERP software in February 2005. It is a suite of business applications used for ERP, CRM, eCommerce, websites, and project management. Ten years ago, the Odoo Enterprise edition was launched to help fund the Odoo Community version.
When you compare Odoo Community and Enterprise, the Enterprise edition offers exclusive features like mobile app access, Odoo Studio customisation, Odoo hosting, and unlimited functional support.
Today, Odoo is a well-known name used by companies of all sizes across various industries, including manufacturing, retail, accounting, marketing, healthcare, IT consulting, and R&D.
The latest version, Odoo 17, has been available since October 2023. Key highlights of this update include:
Enhanced user experience with improvements to the command bar, faster backend page loading, and multiple dashboard views.
Instant report generation, credit limit alerts for sales and invoices, separate OCR settings for invoice creation, and an auto-complete feature for forms in the accounting module.
Improved image handling and global attribute changes for mailing lists in email marketing.
A default auto-signature option and a refuse-to-sign option in HR modules.
Options to divide and merge manufacturing orders, track the status of manufacturing orders, and more in the MRP module.
Dark mode in Odoo 17.
Now that the Odoo 17 announcement is official, let’s look at what’s new in Odoo 17!
What is Odoo ERP 17?
Odoo 17 is the latest version of one of the world’s leading open-source enterprise ERPs. This version has come up with significant improvements explained here in this blog. Also, this new version aims to introduce features that enhance time-saving, efficiency, and productivity for users across various organisations.
Odoo 17, released at the Odoo Experience 2023, brought notable improvements to the user interface and added new functionalities with enhancements in performance, accessibility, data analysis, and management, further expanding its reach in the market.
Project Management: The Role of Project Dashboards.pdfKarya Keeper
Project management is a crucial aspect of any organization, ensuring that projects are completed efficiently and effectively. One of the key tools used in project management is the project dashboard, which provides a comprehensive view of project progress and performance. In this article, we will explore the role of project dashboards in project management, highlighting their key features and benefits.
UI5con 2024 - Keynote: Latest News about UI5 and it’s EcosystemPeter Muessig
Learn about the latest innovations in and around OpenUI5/SAPUI5: UI5 Tooling, UI5 linter, UI5 Web Components, Web Components Integration, UI5 2.x, UI5 GenAI.
Recording:
https://www.youtube.com/live/MSdGLG2zLy8?si=INxBHTqkwHhxV5Ta&t=0
14 th Edition of International conference on computer visionShulagnaSarkar2
About the event
14th Edition of International conference on computer vision
Computer conferences organized by ScienceFather group. ScienceFather takes the privilege to invite speakers participants students delegates and exhibitors from across the globe to its International Conference on computer conferences to be held in the Various Beautiful cites of the world. computer conferences are a discussion of common Inventions-related issues and additionally trade information share proof thoughts and insight into advanced developments in the science inventions service system. New technology may create many materials and devices with a vast range of applications such as in Science medicine electronics biomaterials energy production and consumer products.
Nomination are Open!! Don't Miss it
Visit: computer.scifat.com
Award Nomination: https://x-i.me/ishnom
Conference Submission: https://x-i.me/anicon
For Enquiry: Computer@scifat.com
E-Invoicing Implementation: A Step-by-Step Guide for Saudi Arabian CompaniesQuickdice ERP
Explore the seamless transition to e-invoicing with this comprehensive guide tailored for Saudi Arabian businesses. Navigate the process effortlessly with step-by-step instructions designed to streamline implementation and enhance efficiency.
Malibou Pitch Deck For Its €3M Seed Roundsjcobrien
French start-up Malibou raised a €3 million Seed Round to develop its payroll and human resources
management platform for VSEs and SMEs. The financing round was led by investors Breega, Y Combinator, and FCVC.
UI5con 2024 - Bring Your Own Design SystemPeter Muessig
How do you combine the OpenUI5/SAPUI5 programming model with a design system that makes its controls available as Web Components? Since OpenUI5/SAPUI5 1.120, the framework supports the integration of any Web Components. This makes it possible, for example, to natively embed own Web Components of your design system which are created with Stencil. The integration embeds the Web Components in a way that they can be used naturally in XMLViews, like with standard UI5 controls, and can be bound with data binding. Learn how you can also make use of the Web Components base class in OpenUI5/SAPUI5 to also integrate your Web Components and get inspired by the solution to generate a custom UI5 library providing the Web Components control wrappers for the native ones.
A neural network is a machine learning program, or model, that makes decisions in a manner similar to the human brain, by using processes that mimic the way biological neurons work together to identify phenomena, weigh options and arrive at conclusions.
WWDC 2024 Keynote Review: For CocoaCoders AustinPatrick Weigel
Overview of WWDC 2024 Keynote Address.
Covers: Apple Intelligence, iOS18, macOS Sequoia, iPadOS, watchOS, visionOS, and Apple TV+.
Understandable dialogue on Apple TV+
On-device app controlling AI.
Access to ChatGPT with a guest appearance by Chief Data Thief Sam Altman!
App Locking! iPhone Mirroring! And a Calculator!!
Using Query Store in Azure PostgreSQL to Understand Query PerformanceGrant Fritchey
Microsoft has added an excellent new extension in PostgreSQL on their Azure Platform. This session, presented at Posette 2024, covers what Query Store is and the types of information you can get out of it.
The Key to Digital Success_ A Comprehensive Guide to Continuous Testing Integ...kalichargn70th171
In today's business landscape, digital integration is ubiquitous, demanding swift innovation as a necessity rather than a luxury. In a fiercely competitive market with heightened customer expectations, the timely launch of flawless digital products is crucial for both acquisition and retention—any delay risks ceding market share to competitors.
What to do when you have a perfect model for your software but you are constrained by an imperfect business model?
This talk explores the challenges of bringing modelling rigour to the business and strategy levels, and talking to your non-technical counterparts in the process.
2. Assessment of Learning
• It focuses on the development and utilization of assessment
tools to improve the teaching-learning process.
• Measurement refers to the quantitative aspect of evaluation.
It involves outcomes that can be quantified statistically.
• Evaluation is the qualitative aspect of determining the
outcomes of learning. It involves value judgment.
• A test consists of questions, exercises, or other devices for
measuring the outcomes of learning.
3. Educational Technology
• Audio visual aids are defined as any device used to aid in the
communication of an idea.
• As such, virtually anything can be used as an audio visual aid
provided it successfully communicates the idea or information
for which it is designed.
• Audio visual aids include still photographs, motion pictures,
audio or video tapes, slides, and filmstrips, prepared
individually or in combination to communicate information or
to elicit a desired audience response.
• Even though early aids, such as maps and drawings, are still in
use, advances in the audiovisual field have opened up new
ways of presenting them, such as video tapes and
multimedia equipment, which allow a more professional and
entertaining presentation not only in the classroom but
anywhere ideas are to be conveyed to an audience.
4. Device
• A device is any means, other than the subject matter itself,
that is employed by the teacher in presenting the
subject matter to the learner.
• Purpose of visual devices
• 1. To challenge the student’s attention
• 2. To stimulate the imagination and develop the
mental imagery of the pupils.
• 3. To facilitate the understanding of the pupils.
• 4. To provide motivation to the learners.
• 5. To develop the ability to listen.
5. Traditional Forms of Visual Aids
• 1. Demonstration
• 2. Field trips
• 3. Laboratory experiments
• 4. Pictures, films, simulations, models
• 5. Real objects
• Classification of Devices
• 1. Extrinsic – used to supplement a method used
• Ex. Picture, graph, film strips, slides etc.
• 2. Intrinsic – used as a part of the method or teaching procedure
• Ex. Pictures, accompanying an article
• 3. Material devices – devices that have no bearing on the subject
matter.
• Ex. Blackboard, chalk, books, pencil etc.
• 4. Mental devices – a kind of device that is related in form and
meaning to the subject matter being presented.
• ex. Questions, projects, drills, lesson plan, etc.
6. NONPROJECTED AUDIOVISUAL AIDS
• Nonprojected aids are those that do not require the
use of audiovisual equipment such as a projector
and screen.
• These include charts, graphs, maps, illustrations,
photographs, brochures, and handouts.
• Charts are commonly used almost anywhere.
• A chart is a diagram which shows relationships.
• An organizational chart is one of the most widely
and commonly used kinds of chart.
7. Classification of Tests
• 1. According to manner of response
• a. oral
• b. written
2. According to method of preparation
a. subject/essay
b. objective
3. According to the nature of answer
a. personality test
b. intelligence test
c. aptitude test
d. achievement or summative test
e. sociometric test
f. diagnostic or formative test
g. trade or vocational test
8. Classification of Tests
Objective tests are tests which have definite answers and therefore are least
subject to personal bias.
Teacher-made tests or educational tests are constructed by teachers based
on the contents of the different subjects taught.
Diagnostic tests are used to measure a student’s strengths and weaknesses.
Formative and summative are terms often used with evaluation, but they may
also be used with testing. Formative testing is done to monitor students’
attainment of the instructional objectives. Summative testing is done at
the conclusion of instruction and measures the extent to which students have
attained the desired outcomes.
Standardized tests are already valid, reliable, and objective. Standardized tests
are tests whose contents have been selected and for which norms or standards
have been established.
Standards or norms are the goals to be achieved expressed in terms of the
average performance of the population tested.
9. Criteria of a Good Examination
• A good examination must pass the following criteria
• Validity – validity refers to the degree to which a test
measures what it is intended to measure. It is the usefulness of
the test for a given purpose.
• Reliability – reliability pertains to the consistency with which a
test measures whatever it measures; a reliable test yields
similar scores when it is administered again.
• Objectivity – it is the degree to which personal bias is
eliminated in the scoring of the answers. When we refer
to the quality of measurement, essentially we mean the
amount of information contained in a score generated by
the measurement.
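Where score data are available, these criteria can be checked numerically. Reliability, for instance, is commonly estimated as the correlation between two administrations of the same test. The sketch below is illustrative only, with hypothetical scores and a plain Pearson correlation; it is not a procedure taken from the source.

```python
def pearson_r(x, y):
    """Pearson correlation, here used as a test-retest reliability estimate."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical scores of the same five students on two administrations.
first = [78, 85, 62, 90, 70]
second = [80, 83, 65, 92, 68]

r = pearson_r(first, second)
print(f"test-retest reliability estimate: {r:.2f}")
```

A coefficient near 1.0 indicates that students keep roughly the same relative standing across administrations, which is what reliability demands.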
10. Levels of Measurement
• The scales of measurement are nominal, ordinal, interval, and ratio.
• The terms nominal, ordinal, interval, and ratio actually
form a hierarchy.
• Nominal scales of measurement are the least sophisticated
and contain the least information.
• Ordinal, interval, and ratio scales increase respectively in
sophistication.
• The arrangement is a hierarchy in that each higher level retains
the properties of the lower levels, along with additional information.
11. Nominal Measurement
• Nominal scales – are the least sophisticated; they merely
classify objects or events by assigning numbers to them.
These numbers are arbitrary and imply no quantification,
but the categories must be mutually exclusive and
exhaustive.
•Ordinal Measurement
• Ordinal scales – classify, but they also assign rank order.
An example of ordinal measurement is ranking individuals
in a class according to their test scores. Student scores
could be ordered from first, second, third, and so forth, down to
the lowest score.
12. Interval Measurement
• In order to be able to add and subtract scores, we use interval scales,
sometimes called equal interval or equal unit measurement. This
measurement scale contains the nominal and ordinal properties and is
also characterized by equal units between score points.
• Ratio Measurement
• The most sophisticated type of measurement includes all the preceding
properties. But in a ratio scale, the zero point is not arbitrary: a score of zero
indicates the absence of what is being measured. (On an interval scale, by
contrast, a score of zero on a social studies test may not indicate the complete
absence of social studies knowledge.)
• The desirability of ratio measurement scales is that they allow ratio comparisons.
Ratio measurement is rarely achieved in educational assessment, in either
the cognitive or affective areas.
• We can seldom say that one person’s intelligence or achievement is 1½ times as great
as that of another person.
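The hierarchy of the four scales can be made concrete: each level permits every operation of the levels below it plus one of its own. The mapping below is a hypothetical sketch of that idea, not code from the source.

```python
# Each scale's meaningful operations; higher levels add to the lower ones.
SCALES = ["nominal", "ordinal", "interval", "ratio"]

OPERATIONS = {
    "nominal": {"classify"},                                    # categories only
    "ordinal": {"classify", "rank"},                            # adds rank order
    "interval": {"classify", "rank", "add/subtract"},           # adds equal units
    "ratio": {"classify", "rank", "add/subtract", "form ratios"},  # adds a true zero
}

def allowed(scale):
    """Return the operations meaningful at a given level of measurement."""
    return OPERATIONS[scale]

# The hierarchy: every higher level strictly contains the lower level's operations.
for lower, higher in zip(SCALES, SCALES[1:]):
    assert allowed(lower) < allowed(higher)  # strict subset

print(allowed("interval"))
```

The strict-subset check is exactly the "hierarchy" claim in the slide: interval data can be ranked and classified, but only ratio data support statements like "twice as much."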
13. Norm-Referenced and Criterion-Referenced
Measurement
• When we contrast norm-referenced measurement (or testing) with
criterion-referenced measurement, we are basically referring to
two different ways of interpreting information.
• Norm-referenced interpretation historically has been used in education
and remains common in today’s schools. The terminology of
criterion-referenced measurement has existed for close to three decades.
•Norm-Referenced Interpretation
• Norm-referenced interpretation stems from the desire to
differentiate among individuals or to discriminate among the
individuals of some defined group on whatever is being
measured.
• It is a relative interpretation based on an individual’s position with
respect to some group, often called the normative group.
• Norms consist of the scores, usually in some form of descriptive
statistics, of the normative group.
14. Achievement Test as an Example
• Most standardized achievement tests, especially those
covering several skills and academic areas, are primarily
designed for norm-referenced interpretations.
• The form of results and the interpretations of these tests are
somewhat complex and require concepts not yet introduced
in this text.
• Scores on teacher-constructed tests are often given norm-
referenced interpretations.
• Specified percentages of scores are assigned the different
grades; for example, final examination performance might be
graded 10 percent As, 20 percent Bs, 40 percent Cs, 20 percent
Ds, and 10 percent Fs.
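The fixed-percentage scheme above is easy to express in code. The sketch below uses hypothetical scores and assumes all scores are distinct (ties are ignored for simplicity); the function name is our own.

```python
def norm_referenced_grades(scores):
    """Assign letter grades by position in the group, not by an absolute cutoff."""
    ranked = sorted(scores, reverse=True)          # highest score first
    n = len(ranked)
    # Cumulative quotas: top 10% A, next 20% B, middle 40% C, next 20% D, last 10% F.
    cuts = [(0.10, "A"), (0.30, "B"), (0.70, "C"), (0.90, "D"), (1.00, "F")]
    grades = {}                                    # keyed by score; assumes no ties
    for i, score in enumerate(ranked):
        fraction = (i + 1) / n                     # proportion of group at or above
        for cut, letter in cuts:
            if fraction <= cut:
                grades[score] = letter
                break
    return grades

scores = [95, 88, 84, 79, 76, 73, 70, 66, 60, 52]   # ten hypothetical students
print(norm_referenced_grades(scores))
```

Note what makes this norm-referenced: a score of 66 earns a D here only because of where it falls in this group; in a stronger or weaker class the same score would earn a different grade.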
15. Criterion-Referenced Interpretation
• The concepts of criterion-referenced testing have developed
with a dual meaning for criterion-referenced.
• It means referencing an individual’s performance to some
criterion that is a defined performance level.
• The second meaning for criterion-referenced involves the idea
of a defined behavioral domain - that is, a defined body of
learner behaviors. The learner’s performance on a test is
referenced to a specifically defined group of behaviors.
• Criterion-referenced interpretation is an absolute rather than a
relative interpretation, referenced to a defined body of
learner behaviors or, as is commonly done, to some specified level of
performance.
• A student who does not attain the criterion has not mastered
the skill sufficiently to move ahead in the instructional
sequence. To a large extent, the criterion is based on teacher
judgment.
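In contrast to the norm-referenced case, a criterion-referenced decision compares each learner to an absolute standard. A minimal sketch, assuming a hypothetical mastery cutoff of 75% set by teacher judgment:

```python
CRITERION = 0.75  # mastery level: proportion of items answered correctly (assumed)

def has_mastered(correct, total, criterion=CRITERION):
    """True if the learner meets the defined performance level.

    The decision depends only on this learner's performance, never on
    how other students in the group scored.
    """
    return correct / total >= criterion

# A student who does not attain the criterion is not moved ahead.
print(has_mastered(18, 20))  # 18/20 = 0.90 -> mastered
print(has_mastered(13, 20))  # 13/20 = 0.65 -> needs more instruction
```

Every student in a class could pass (or fail) such a check; that is impossible under the fixed grade quotas of a norm-referenced interpretation.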
16. Distinctions between Norm-Referenced and
Criterion-Referenced Tests
• Although interpretations, not characteristics, provide the distinctions
between norm-referenced and criterion-referenced tests, the two types
tend to differ in some ways.
• Norm-referenced tests are usually more general and comprehensive and
cover a large domain of content and learning tasks. They are used for
survey testing, although this is not their exclusive use.
• Scores are transformed to positions within the normative group.
• Criterion-referenced tests focus on a specific group of learner behaviors. To
show the contrast, consider an example. Arithmetic skills represent a general
and broad category of student outcomes and would likely be measured by
a norm-referenced test.
• Criterion-referenced tests focus more on subskills than on broad skills.
• When mastery learning is involved, criterion-referenced measurement would be
used.
17. STAGES IN TEST CONSTRUCTION
• I. Planning the test
• A. Determining the objectives
• B. Preparing the Table of Specifications
• C. Selecting the Appropriate Item Format
• D. Writing the Test Items
• E. Editing the Test Items
II. Trying Out the Test
A. Administering the First Tryout – then Item Analysis
B. Administering the Second Tryout – then Item Analysis
C. Preparing the Final Form of the Test
III. Establishing Test Validity
IV. Establishing the Test Reliability
V. Interpreting the Test Score
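The item analysis done after each tryout (stage II) can be sketched numerically: an item's difficulty index is the proportion of examinees answering it correctly, and its discrimination index compares the upper- and lower-scoring groups. The data and function name below are hypothetical, and halves are used instead of the 27% upper/lower groups common in practice, to keep the example tiny.

```python
def item_analysis(responses, totals):
    """Difficulty index p and discrimination index D for each item.

    responses[s][i] is 1 if student s answered item i correctly, else 0;
    totals[s] is student s's total score.
    """
    n = len(responses)
    # Rank students by total score; split into upper and lower halves.
    order = sorted(range(n), key=lambda s: totals[s], reverse=True)
    upper, lower = order[: n // 2], order[n - n // 2:]
    results = []
    for i in range(len(responses[0])):
        p = sum(responses[s][i] for s in range(n)) / n           # proportion correct
        p_up = sum(responses[s][i] for s in upper) / len(upper)
        p_lo = sum(responses[s][i] for s in lower) / len(lower)
        results.append((p, p_up - p_lo))                         # (difficulty, discrimination)
    return results

# Six hypothetical students, three items (1 = correct).
responses = [
    [1, 1, 1],
    [1, 1, 0],
    [1, 0, 1],
    [1, 0, 0],
    [0, 1, 0],
    [0, 0, 0],
]
totals = [sum(r) for r in responses]
for i, (p, d) in enumerate(item_analysis(responses, totals), start=1):
    print(f"item {i}: difficulty={p:.2f} discrimination={d:+.2f}")
```

A positive discrimination index means stronger students get the item right more often than weaker ones; items with zero or negative values are the ones flagged for revision before the final form of the test.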
18. MAJOR CONSIDERATIONS IN TEST
CONSTRUCTION
• The following are the major considerations in test
construction:
• Type of Test
• Our usual idea of testing is an in-class test that is administered
by the teacher. However, there are many variations on this
theme: group tests, individual tests, written tests, oral tests,
speed tests, power tests, pretests, and posttests.
• Each of these has different characteristics that must be
considered when the tests are planned. These can be
communicated to students, administrators, parents, and
others who may be affected by the testing program.
19. Test Length
• A major decision in test planning is how many items
should be included on the test.
• Most teachers want test scores to be determined by how much
the student understands rather than by how quickly he or she
answers the questions.
• Item Formats
• Determining what kind of items to include on the test is a
major decision. Should they be objectively scored formats
such as multiple choice or matching type?
• These are some important questions that can be answered
only by the teacher in terms of the local context, his or her
students, his or her classroom, and the specific purpose of the
test.
20. POINTS TO BE CONSIDERED IN PREPARING
A TEST
• 1. Are the instructional objectives clearly defined?
• 2. What knowledge, skills, and attitudes do you want to measure?
• 3. Did you prepare a table of specifications?
• 4. Did you formulate well-defined and clear test items?
• 5. Did you employ correct English in writing the items?
• 6. Did you avoid giving clues to the correct answer?
• 7. Did you test the important ideas rather than the trivial?
• 8. Did you adapt the test’s difficulty to your student’s ability?
• 9. Did you avoid using textbook jargons?
• 10. Did you cast the items in positive form?
• 11. Did you prepare a scoring key?
• 12. Does each item have a single correct answer?
• 13. Did you review your items?
21. GENERAL PRINCIPLES IN CONSTRUCTING
DIFFERENT TYPES OF TESTS
• 1. The test items should be selected very carefully. Only important facts should be
included.
• 2. The test should have extensive sampling of items.
• 3. The test items should be carefully expressed in simple, clear, definite, and
meaningful sentences.
• 4. There should be only one possible correct response for each test item.
• 5. Each item should be independent. Leading clues to other items should be avoided.
• 6. Sentences should not be lifted directly from books, so as to encourage thinking and
understanding rather than rote memorization.
• 7. The first-person pronouns I and we should not be used.
• 8. Various types of test items should be used to avoid monotony.
• 9. The majority of the test items should be of moderate difficulty. A few difficult and a few easy
items should be included.
• 10. The test items should be arranged in ascending order of difficulty. Easy items
should be at the beginning to encourage the examinee to pursue the test, and the
most difficult items should be at the end.
22. -
11. Clear, concise, and complete directions should precede all types of tests.
Sample test items may be provided for expected responses.
12. Items which can be answered from previous experience alone, without
knowledge of the subject matter, should not be included.
13. Catchy words should not be used in the test items.
14. Test items must be based upon the objectives of the course and upon the
course content.
15. The test should measure the degree of achievement or determine the difficulties
of the learners.
16. The test should emphasize the ability to apply and use facts as well as
knowledge of facts.
17. The test should be of such length that it can be completed within the time
allotted by all or nearly all of the pupils. The teacher should take the test
herself to determine its approximate time allotment.
18. Rules governing good language expression, grammar, spelling,
punctuation, and capitalization should be observed in all items.
19. Information on how scoring will be done should be provided.
20. Scoring keys for correcting and scoring the tests should be provided.
23. POINTERS TO BE OBSERVED IN CONSTRUCTING
AND SCORING THE DIFFERENT TYPES OF TESTS
• A. RECALL TYPES
• 1. Simple recall type
a. This type consists of questions calling for a single word or
expression as an answer.
b. Items usually begin with who, where, when, and what.
c. Score is the number of correct answers.
2. Completion type
a. Only important words or phrases should be omitted to avoid
confusion.
b. Blanks should be of equal length.
c. The blank, as much as possible, is placed near or at the end of the
sentence.
d. The articles a, an, and the should not be provided before the omitted
word or phrase, to avoid giving clues to the answers.
e. Score is the number of correct answers.
24. • 3. Enumeration
a. The exact number of expected answers should be stated.
b. Blanks should be of equal length.
c. Score is the number of correct answers.
4. Identification type
a. The items should make an examinee think of a word, number, or
group of words that would complete the statement or answer the problem.
b. Score is the number of correct answers.
B. RECOGNITION TYPES
1. True-false or alternative-response type
a. Declarative sentences should be used.
b. The number of “true” and “false” items should be more or less
equal.
c. The truth or falsity of the sentence should not be too evident.
d. Negative statements should be avoided.
e. The “modified true-false” is preferable to the “plain true-
false”.
25. -
f. In arranging the items, avoid the regular recurrence of “true” and “false”
statements.
g. Avoid using specific determiners like all, always, never, none, nothing, as a
rule, in general, etc.
h. Minimize the use of qualitative terms like few, great, many, more, etc.
i. Avoid leading clues to the answer in all items.
j. Score is the number of correct answers in “modified true-false”, and right
answers minus wrong answers in “plain true-false”.
2. Yes-no type
a. The items should be in interrogative sentences.
b. The same rules as in “true-false” apply.
3. Multiple-response type
a. There should be three to five choices. The number of choices used
in the first item should be used in all the items of
this type of test.
26. -
b. The choices should be numbered or lettered so that only the number
or letter can be written on the blank provided.
c. If the choices are figures, they should be arranged in ascending order.
d. Avoid the use of “a” or “an” as the last word prior to the listing of
the responses.
e. Random occurrence of responses should be employed.
f. The choices, as much as possible, should be at the end of the
statements.
g. The choices should be related in some way or should belong to the
same class.
h. Avoid the use of “none of these” as one of the choices.
i. Score is the number of correct answers.
4. Best answer type
a. There should be three to five choices, all of which are right but vary in
their degree of merit, importance, or desirability.
b. The other rules for multiple-response items apply here.
c. Score is the number of correct answers.
27. -
5. Matching type
a. There should be two columns. Under column “A” are the stimuli, which should be
longer and more descriptive than the responses under column “B”. The
responses may be a word, a phrase, a number, or a formula.
b. The stimuli under column “A” should be numbered and the responses
under column “B” should be lettered. Answers will be indicated by letters only, on the
lines provided in column “A”.
c. The number of pairs usually should not exceed twenty items. Fewer than ten
introduces chance elements. Twenty pairs may be used, but more than twenty
is decidedly wasteful of time.
d. The number of responses in column “B” should be two or more than the
number of items in column “A” to avoid guessing.
e. Only one correct matching for each item should be possible.
f. Matching sets should be neither too long nor too short.
g. All items should be on the same page to avoid turning pages in the
process of matching pairs.
h. Score is the number of correct answers.
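The scoring rules above differ by format: the number of correct answers for most types, but right answers minus wrong answers for the plain true-false test (to correct for guessing). A minimal sketch of the two rules (function names are illustrative, not from the source):

```python
def score_modified_tf(key, answers):
    # Modified true-false: score is simply the number of correct answers.
    return sum(1 for k, a in zip(key, answers) if k == a)

def score_plain_tf(key, answers):
    # Plain true-false: right answers minus wrong answers,
    # as a correction for guessing.
    right = sum(1 for k, a in zip(key, answers) if k == a)
    wrong = len(answers) - right
    return right - wrong

key     = [True, False, True, True, False]
answers = [True, False, False, True, True]
print(score_modified_tf(key, answers))  # 3 correct
print(score_plain_tf(key, answers))     # 3 right - 2 wrong = 1
```

The same penalty idea generalizes to multiple-choice tests with a different correction factor, but the source only states it for plain true-false.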
28. C. ESSAY TYPE EXAMINATIONS
Common types of essay questions (the types are related to the purpose for which
the essay examination is to be used):
1. Comparison of two things
2. Explanation of the use or meaning of a statement or passage
3. Analysis
4. Decisions for or against
5. Discussion
How to construct essay examinations:
1. Determine the objectives or essentials for each question to be
evaluated.
2. Phrase questions in simple, clear, and concise language.
3. Suit the length of the questions to the time available for answering
the essay examination. The teacher should try to answer the test herself.
4. Scoring
a. Have a model answer in advance.
b. Indicate the number of points for each question.
c. Score points for each essential.
29. Advantages and Disadvantages of the Objective Type
of Tests
• Advantages
• a. The objective test is free from personal bias in scoring.
• b. It is easy to score. With a scoring key, the test can be corrected
by different individuals without affecting the accuracy of the
grades given.
• c. It has high validity because it is comprehensive, with wide
sampling of essentials.
• d. It is less time-consuming, since many items can be answered in a
given time.
• e. It is fair to students, since slow writers can accomplish the
test as fast as fast writers.
30. -
• Disadvantages
• a. It is difficult to construct and requires more time to prepare.
• b. It does not afford students the opportunity to train in
self-expression and thought organization.
• c. It cannot be used to test ability in theme writing or journalistic
writing.
ADVANTAGES AND DISADVANTAGES OF THE
ESSAY TYPE OF TESTS
Advantages
a. The essay examination can be used in practically all subjects of
the school curriculum.
b. It trains students in thought organization and self-expression.
c. It affords students opportunities to express their originality and
independence of thinking.
31. -
• d. Only the essay test can be used in some subjects, like composition
writing.
• e. The essay examination measures higher mental abilities like
comparison, interpretation, criticism, defense of opinion, and decision.
• f. The essay test is easily prepared.
• g. It is inexpensive.
• Disadvantages
• a. The limited sampling of items makes the test an unreliable measure of
achievements or abilities.
• b. Questions usually are not well prepared.
• c. Scoring is highly subjective due to the influence of the corrector’s
personal judgment.
• d. Grading of the essay test is an inaccurate measure of pupils’
achievement due to the subjectivity of scoring.
32. STATISTICAL MEASURES OR TOOLS USED IN
INTERPRETING NUMERICAL DATA
• Frequency Distributions
• A simple, common-sense technique for describing a set of test
scores is the frequency distribution.
• A frequency distribution is merely a listing of the possible score
values and the number of persons who achieved each score.
• Such an arrangement presents the scores in a simpler and more
understandable manner than merely listing all of the separate
scores. Consider a specific set of scores to clarify these ideas.
• First, list the possible score values in rank order, from highest to
lowest.
• A second column indicates the frequency, or number of persons who
received each score.
• The tables show examples.
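The two steps above (rank-order the score values, then count how many persons received each) can be sketched with the standard library; the scores below are hypothetical, not the data from the tables:

```python
from collections import Counter

scores = [48, 50, 46, 41, 37, 48, 38, 47, 49, 44, 48, 47, 40, 48]

# First column: possible score values in rank order, highest to lowest.
# Second column: frequency, i.e. the number of persons with each score.
freq = Counter(scores)
for value in sorted(freq, reverse=True):
    print(value, freq[value])
```

Each printed row corresponds to one line of a frequency-distribution table.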
36. Measures of Central Tendency
• Frequency distributions are helpful for indicating the shape of a
distribution of scores, but we need more information than the
shape to describe a distribution adequately.
• To describe typical performance we compute measures of central
tendency, and to describe the spread we compute measures of dispersion.
• There are three commonly used measures of central tendency:
the mean, the median, and the mode, but the mean is by far
the most widely used.
• The Mean
• The mean of a set of scores is the arithmetic average. It is found
by summing the scores and dividing the sum by the number of
scores.
37. Example:

X̄ = ΣX / N

where:
X̄ is the mean
Σ is the summation operator (it tells us to add the Xs)
X is the symbol for a score
N is the number of scores

For the set of scores in Table 1,
ΣX = 1,100 and N = 25,
so
X̄ = 1,100 / 25 = 44

The mean of the set of scores in Table 1 is 44. The mean does not have to equal an
observed score; it is usually not even a whole number.
When the scores are arranged in a frequency distribution, the formula is

X̄ = Σ(f · X_mdpt) / N
38. • Where f · X_mdpt means that the midpoint of each interval is
multiplied by the frequency for that interval. In computing the
mean for the scores in Table 3, using this formula we obtain:
• X̄ = [9(49) + 4(46) + 4(43) + 3(40) + 3(37) + 2(34)] / 25 = 43.84
Note that this mean is slightly different from the
mean computed from the ungrouped data. The difference is due to
the midpoints representing the scores in the intervals rather
than the actual scores being used.
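Both computations can be checked directly. The sketch below uses the interval midpoints and frequencies given for Table 3, and the totals ΣX = 1,100 and N = 25 given for the ungrouped data:

```python
# Grouped mean: interval midpoints and frequencies from the slide (Table 3).
midpoints   = [49, 46, 43, 40, 37, 34]
frequencies = [9, 4, 4, 3, 3, 2]

N = sum(frequencies)  # 25 scores in all
grouped_mean = sum(f * m for f, m in zip(frequencies, midpoints)) / N
print(round(grouped_mean, 2))  # 43.84, slightly off the ungrouped mean

# Ungrouped mean: sum of all scores over the number of scores.
ungrouped_mean = 1100 / 25
print(ungrouped_mean)          # 44.0
```

The small discrepancy (43.84 vs 44) is exactly the grouping error the slide describes: midpoints stand in for the actual scores.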
39. The Median
• Another measure of central tendency is the median, which is the
point that divides the distribution in half; that is, half of the
scores fall above the median and half of the scores fall below
it.
• Consider again the frequency distribution in Table 2.
• There were 25 scores in the distribution, so the middle (13th) score
should be the median.
• Cumulative frequencies indicate the number of scores at or
below each score. Table 4 indicates the cumulative frequencies
for the data in Table 2.
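The cumulative-frequency approach to the median can be sketched as follows; with 25 scores, the median is the 13th score from the bottom. The frequency table below is hypothetical, not the Table 2 data:

```python
def median_from_frequencies(freq):
    """freq: dict mapping score value -> number of persons with that score."""
    n = sum(freq.values())
    middle = (n + 1) // 2  # position of the middle score (n odd)
    cumulative = 0
    # Walk up from the lowest score, accumulating frequencies, until the
    # cumulative count reaches the middle position.
    for value in sorted(freq):
        cumulative += freq[value]
        if cumulative >= middle:
            return value

freq = {37: 2, 40: 3, 43: 4, 46: 7, 49: 9}  # 25 scores in all
print(median_from_frequencies(freq))
```

This mirrors reading a cumulative-frequency column by hand: the median is the first score value whose cumulative frequency reaches the middle position.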
41. The Mode
• The measure of central tendency that is the easiest to find is the
mode.
• The mode is the most frequently occurring score in the distribution.
• The mode of the scores in the table is 48: five persons had scores of 48,
and no other score occurred as often.
• Each of these three measures of central tendency (the mean, the
median, and the mode) gives a legitimate definition of “average”
performance on this test.
• There are some distributions in which all three measures of central
tendency are equal, but more often than not they will be
different.
• When the distribution has a small number of very extreme scores,
the median may be a better indicator of central tendency.
• - The mean is the arithmetic average.
• - The median divides the distribution in half.
• - The mode is the most frequent score.
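All three averages can be computed side by side; Python's statistics module provides each directly. The scores below are hypothetical:

```python
import statistics

scores = [48, 50, 46, 41, 37, 48, 38, 47, 49, 44, 48, 48, 40, 48]

print(statistics.mean(scores))    # arithmetic average
print(statistics.median(scores))  # point dividing the distribution in half
print(statistics.mode(scores))    # most frequently occurring score
```

Note that the three values differ for this set, as the slide says they usually do, and that only the mode is guaranteed to be an observed score.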
42. MEASURES OF DISPERSION
• Measures of central tendency are useful for summarizing
average performance, but they tell us nothing about how the
scores are distributed, or “spread out,” around the averages.
• Two sets of test scores may have equal measures of central
tendency, yet differ in other ways.
• The Range
• The range indicates the difference between the highest and lowest scores in
the distribution.
• A problem with using the range is that only the two most extreme
scores are used in the computation.
• Measures of dispersion that take into consideration every score in
the distribution are the variance and the standard deviation.
The standard deviation is used a great deal in interpreting scores from
standardized tests.
43. The Variance
• The variance measures how widely the scores in the distribution
are spread about the mean. In other words, the variance is the
average squared difference between the scores and the mean.
As a formula, it looks like this:

S² = Σ(X - X̄)² / N

An equivalent formula, easier to compute, is:

S² = ΣX² / N - X̄²
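The two formulas are algebraically equivalent, which can be checked numerically; a sketch with hypothetical scores:

```python
def variance_definition(scores):
    # Definitional form: average squared difference from the mean.
    n = len(scores)
    mean = sum(scores) / n
    return sum((x - mean) ** 2 for x in scores) / n

def variance_computational(scores):
    # Equivalent, easier-to-compute form: mean of squares minus square of mean.
    n = len(scores)
    mean = sum(scores) / n
    return sum(x * x for x in scores) / n - mean ** 2

scores = [48, 50, 46, 41, 37, 48, 38, 47, 49, 44]
print(variance_definition(scores))
print(variance_computational(scores))  # same value, up to rounding
```

The computational form avoids subtracting the mean from every score, which is why it was preferred for hand calculation.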
44. The Standard Deviation
• The standard deviation also indicates how spread out the
scores are, but it is expressed in the same units as the original
scores.
• The standard deviation is computed by finding the square root
of the variance: S = √S²
• For Table 1, the variance is 22.8. The standard deviation is
√22.8, or 4.77.
• The scores of most norm groups have the shape of a “normal”
distribution: a symmetrical, bell-shaped distribution with
which most people are familiar.
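Since the standard deviation is just the square root of the variance, the 4.77 quoted above follows directly from the variance of 22.8:

```python
import math

variance = 22.8           # variance of the Table 1 scores, from the slide
std_dev = math.sqrt(variance)
print(round(std_dev, 2))  # 4.77
```

Unlike the variance, this value is in score units, so it can be read as "a typical score falls about 4.77 points from the mean."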
45. Table 5. Computation of the Variance for the Scores of Table 1

Student | Score (X) | Score - Mean (X - X̄) | (Score - Mean)² (X - X̄)²
A       | 48        | 4                     | 16
B       | 50        | 6                     | 36
C       | 46        | 2                     | 4
D       | 41        | -3                    | 9
E       | 37        | -7                    | 49
F       | 48        | 4                     | 16
G       | 38        | -6                    | 36
H       | 47        | 3                     | 9
I       | 49        | 5                     | 25
J       | 44        | 0                     | 0
…       | …         | …                     | …
W       | 47        | 3                     | 9
X       | 40        | -4                    | 16
Y       | 48        | 4                     | 16
Totals  | 1,100     | 0                     | 570
46. -
• To determine the mean:

X̄ = 1,100 / 25 = 44

Then, to determine the variance:

S² = Σ(X - X̄)² / N = 570 / 25 = 22.8

The usefulness of the standard deviation becomes apparent when scores
from different tests are compared.
In fine, the descriptive statistics that indicate dispersion are the range, the
variance, and the standard deviation.
The standard deviation is a unit of measurement that shows by how much the
separate scores tend to differ from the mean.
The variance is the square of the standard deviation. Most scores are within
two standard deviations of the mean.
47. Graphing Distributions
• A graph of a distribution of test scores is often better
understood than the frequency distribution or a mere table
of numbers.
• The general shape of the distribution is clear from the graph.
• A normal distribution has most of the test scores in the middle
of the distribution and progressively fewer scores toward the
extremes.
• The scores of norm groups are seldom graphed, but they could
be if we were concerned about seeing the specific shape of
the distribution of scores.
• Usually, we know or assume that the scores are normally
distributed.
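Even without plotting software, the general shape of a distribution can be seen from a simple text histogram built on the frequency distribution; a minimal sketch with hypothetical scores:

```python
from collections import Counter

scores = [44, 46, 43, 44, 45, 44, 42, 45, 44, 43, 46, 44, 41, 47, 45]

# One row per score value, highest first; each '*' represents one person.
freq = Counter(scores)
for value in sorted(freq, reverse=True):
    print(f"{value}: {'*' * freq[value]}")
```

For these scores the rows bulge in the middle and taper toward both extremes, the roughly bell-shaped pattern the slide describes for a normal distribution.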