Professional Education reviewer for teachers who are going to take the PRC LET or BLEPT examination. This reviewer covers topics from different chapters.
KINDS OF TESTS
1. Intelligence test
This test measures the intelligence quotient (IQ) of an individual and classifies him as genius, very superior, high
average, average, low average, borderline, or mentally defective.
2. Personality test
This test measures the ways in which an individual interacts with other individuals, or the
roles an individual has assigned to himself and how he adapts in society.
3. Aptitude test
This test is a predictive measure of a person’s likelihood of benefit from instruction or experience in
a given field.
4. Prognostic test
This test forecasts how well a person may do in a certain school subject or work.
5. Performance test
This test measures learning through the actual accomplishment of a task, often involving minimal
verbal responses or none at all.
6. Diagnostic test
This test identifies the weaknesses of an individual’s achievement in any field which serves as basis
for remedial instruction.
7. Achievement test
This test measures how much the students attain the learning tasks. For example, NAT (National
Achievement Test)
8. Preference test
This test measures the vocational or academic interests or aesthetic preferences of an individual by
forcing the examinee to make choices between members of paired or grouped items.
9. Scale test
This test is a series of items arranged in the order of difficulty. An example of this kind of test is the
Binet-Simon Scale.
10. Speed test
This test measures the speed and accuracy of the examinee within the time imposed. It is also called
the alertness test.
11. Power test
This test is made up of a series of items arranged from the easiest to the most difficult.
12. Standardized test
This test provides exact procedures in controlling the method of administration and scoring with norms
and data concerning the reliability and validity of the test.
13. Teacher-made test
This test is prepared by classroom teachers based on the contents stated in the syllabi and the
lessons taken by the students.
14. Placement test
This test is used to determine the job an applicant should fill in the school setting, or the grade or year
level at which a student should be enrolled after being out of school.
Assessment and evaluation- A new perspective
Unit 2 - Tests and their Applications
Syllabus of Unit 2
Testing- Concept and Nature
Developing and Administering Teacher Developed Tests
Characteristics of a good Test
Standardization of Test
Types of Tests- Psychological Test, Reference Test, Diagnostic Tests
2.2.1. Introduction-
Teachers construct various tools for the assessment of various traits of their students.
The most commonly used tools constructed by a teacher are the achievement tests. The achievement tests are constructed as per the requirement of a particular class and subject area they teach.
Besides achievement tests, for the assessment of the traits, a teacher observes his students in a classroom, playground and during other co-curricular activities in the school. The social and emotional behavior is also observed by the teacher. All these traits are assessed. For this purpose too, tools like rating scales are constructed.
Evaluation tools used by the teacher may be either standardized or non-standardized.
A standardized tool is one which has systematically developed norms for a population. It is one in which the procedure, apparatus, and scoring have been fixed so that precisely the same test can be given at different times and places, as long as it pertains to a similar type of population. Standardized tools are used in order to:
Compare achievements of different skills in different areas
Make comparisons between different classes and schools.
They have norms for the particular population; they are norm-referenced.
On the other hand, teachers make tests as per the requirements of a particular class and the subject area they teach. Hence, these tests are purposive and criterion-referenced. Teachers want:
to assess how well students have mastered a unit of instruction;
to determine the extent to which objectives have been achieved;
to determine the basis for assigning course marks and find out how effective their teaching has been.
So our syllabus here revolves around the Tests.
2.2.2- Developing and Administering Teacher Developed Tests-
2.2.3-CHARACTERISTICS OF GOOD MEASURING INSTRUMENT -
1. VALIDITY-
Any measuring instrument must fulfill certain conditions. This is true in all spheres, including educational evaluation.
Test validity refers to the degree to which a test accurately measures what it claims to measure. It is a critical concept in the field of psychometrics and is essential for ensuring that a test is meaningful and useful for its intended purpose. If a test is meant to examine the understanding of a scientific concept, it should do only that and should not be affected by other abilities, such as the examinee's style of presentation, sentence patterns, or grammatical construction. Validity is a specific rather than a general criterion of a good test. Validity is a matter of degree: it may be high, moderate, or low.
There are several types of validity, each addressing different aspects of the testing process:
1. Face validity, 2. Content validity
Professional Education Reviewer for LET or BLEPT Examinees
1. http://www.elio.p4o.net http://www.jokwaist.blogspot.com
PROFESSIONAL EDUCATION
Focus: Assessment and Evaluation of Learning
Competencies:
Apply principles in constructing and interpreting alternative/authentic forms of high quality assessment.
BASIC CONCEPTS
Test - An instrument designed to measure any characteristic, quality, ability, knowledge, or skill. It is comprised of
items in the area it is designed to measure.
Measurement - A process of quantifying the degree to which someone/something possesses a given trait, i.e.,
a quality, characteristic, or feature.
Assessment - A process of gathering and organizing quantitative or qualitative data into an interpretable form
to have a basis for judgment or decision- making.
- It is a prerequisite to evaluation. It provides the information which enables evaluation
to take place.
Evaluation - A process of systematic interpretation, analysis, appraisal or judgment of the worth of
organized data as basis for decision-making. It involves judgment about the desirability of changes in
students.
Traditional Assessment - It refers to the use of pen-and-paper objective tests.
Alternative Assessment - It refers to the use of methods other than pen-and-paper objective tests, which
include performance tests, projects, portfolios, journals, and the like.
Authentic Assessment - It refers to the use of assessment methods that simulate true-to-life situations.
This could be objective tests that reflect real life situations or alternative methods that are
parallel to what we experience in real life.
PURPOSES OF CLASSROOM ASSESSMENT
Assessment FOR Learning - this includes three types of assessment done before and during instruction. These are
placement, formative and diagnostic.
Placement - done prior to instruction
Its purpose is to assess the needs of the learners to have a basis for planning relevant instruction.
Teachers use this assessment to know what their students are bringing into the learning situations and use this as a
starting point for instruction.
The results of this assessment place students in specific learning groups to facilitate teaching and learning.
Formative – done during instruction
In this assessment, teachers continuously monitor the students' level of attainment of the learning
objectives (Stiggins, 2005).
The results of this assessment are communicated clearly and promptly to the students for them to know their
strengths and weaknesses and the progress of their learning.
Diagnostic – done during instruction
This is used to determine students’ recurring or persistent difficulties.
It searches for the underlying causes of student’s learning problems that do not respond to first aid treatment.
It helps formulate a plan for detailed remedial instruction.
Assessment OF Learning - this is done after instruction. This is usually referred to as the summative assessment.
It is used to certify what students know and can do and the level of their proficiency or competency.
Its results reveal whether or not instruction has successfully achieved the curriculum outcomes.
The information from assessment of learning is usually expressed as marks or letter grades.
The results of which are communicated to the students, parents and other stakeholders for decision making.
It is also a powerful factor that could pave the way for educational reforms.
Assessment AS Learning – this is done for teachers to understand and perform well their role of assessing FOR
and OF learning. It requires teachers to undergo training on how to assess learning and be equipped with the
following competencies needed in performing their work as assessors.
PRINCIPLES OF HIGH QUALITY CLASSROOM ASSESSMENT
Principle 1: Clear and Appropriate Learning Targets
Learning targets should be clearly stated, specific, and centered on what is truly important.
Learning Targets
(Mc Millan, 2007; Stiggins, 2007)
Knowledge-student mastery of substantive subject matter
Reasoning-student ability to use knowledge to reason and solve problems
Skills-student ability to demonstrate achievement-related skills
Products-student ability to create achievement-related products
Affective/Disposition-student attainment of affective states such as attitudes, values, interests and self-efficacy.
Principle 2: Appropriate Methods
Assessment Methods
Objective Supply - Short Answer, Completion Test
Objective Selection - Multiple Choice, Matching, True/False
Essay - Restricted Response, Extended Response
Performance-Based - Presentations, Papers, Projects, Athletics, Demonstrations, Exhibitions, Portfolios
Oral Question - Oral Examinations, Conferences, Interviews
Observation - Informal, Formal
Self-Report - Attitude Survey, Sociometric Devices, Questionnaires, Inventories
Modes of Assessment
Traditional - The paper-and-pen test used in assessing knowledge and thinking skills. Examples: standardized and
teacher-made tests. Advantages: scoring is objective; administration is easy because students can take the test at
the same time. Disadvantages: preparation of the instrument is time consuming; prone to guessing and cheating.
Performance - A mode of assessment that requires actual demonstration of skills or creation of products of
learning. Examples: practical tests, oral and aural tests, projects, etc. Advantages: preparation of the instrument is
relatively easy; measures behavior that cannot be faked. Disadvantages: scoring tends to be subjective without
rubrics; administration is time consuming.
Portfolio - A process of gathering multiple indicators of students' progress to support course goals in a dynamic,
ongoing, and collaborative process. Examples: working portfolios, show portfolios, documentary portfolios.
Advantages: measures students' growth and development; intelligence-fair. Disadvantages: development is time
consuming; rating tends to be subjective without rubrics.
Principle 3: Balanced
A balanced assessment sets targets in all domains of learning (cognitive, affective, and psychomotor) or domains
of intelligence (verbal-linguistic, logical-mathematical, bodily-kinesthetic, visual-spatial, musical-rhythmic,
interpersonal-social, intrapersonal-introspection, naturalist-physical world, existential-spiritual).
A balanced assessment makes use of both traditional and alternative assessment.
Principle 4: Validity
A. Validity - the degree to which the assessment instrument measures what it intends to measure. It also
refers to the usefulness of the instrument for a given purpose. It is the most important criterion of an assessment
instrument.
Ways in Establishing Validity
1. Face Validity - is done by examining the physical appearance of the instrument.
2. Content Validity - is done through a careful and critical examination of the objectives of assessment so that it
reflects the curricular objectives.
3. Criterion-related Validity - is established statistically such that a set of scores revealed by the measuring
instrument is correlated with the scores obtained from another external predictor or measure. It has two purposes:
a. Concurrent validity - describes the present status of the individual by correlating the sets of scores obtained
from two measures given concurrently.
b. Predictive validity - describes the future performance of an individual by correlating the sets of scores obtained
from two measures given at a longer time interval.
4. Construct Validity - is established statistically by comparing psychological traits or factors that theoretically
influence scores in a test.
Convergent Validity - is established if the instrument correlates with a measure of a similar trait,
e.g., a Critical Thinking Test may be correlated with a Creative Thinking Test.
Divergent Validity - is established if the instrument describes only the intended trait and not other traits,
e.g., a Critical Thinking Test may not be correlated with a Reading Comprehension Test.
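Criterion-related validity, as described above, comes down to correlating two sets of scores. The sketch below (plain Python, with invented test names and score values, purely for illustration) computes the Pearson r between a hypothetical new test and an external criterion measure given concurrently:

```python
# Hypothetical sketch of criterion-related validity: correlate scores from
# the instrument being validated with scores from an external criterion.
# All score values below are invented sample data.

def pearson_r(x, y):
    """Pearson product-moment correlation between two paired score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx ** 0.5 * vy ** 0.5)

# Concurrent validity: the new test and an established measure are given at
# roughly the same time; a high r supports the new test's validity.
new_test  = [78, 85, 62, 90, 71, 66, 88, 74]
criterion = [75, 88, 60, 92, 70, 68, 85, 72]

print(round(pearson_r(new_test, criterion), 3))  # a high positive correlation
```

A coefficient near 1.0 supports concurrent validity; predictive validity would use the same computation with criterion scores collected after a longer time interval.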
Principle 5: Reliability
Reliability - refers to the consistency of scores obtained by the same person when retested using the same
instrument or its parallel form, or when compared with other students who took the same test.
Test-Retest - Measure of stability. Give a test twice to the same group, with any time interval between tests from
several minutes to several years. Statistic: Pearson r.
Equivalent Forms - Measure of equivalence. Give parallel forms of the test with a close time interval between
forms. Statistic: Pearson r.
Test-Retest with Equivalent Forms - Measure of stability and equivalence. Give parallel forms of the test with an
increased time interval between forms. Statistic: Pearson r.
Split-Half - Measure of internal consistency. Give a test once; score equivalent halves of the test (e.g., odd- and
even-numbered items). Statistics: Pearson r and the Spearman-Brown formula.
Kuder-Richardson - Measure of internal consistency. Give the test once, then work with the proportion/percentage
of students passing and not passing each item. Statistics: Kuder-Richardson Formulas 20 and 21.
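Since the split-half and Kuder-Richardson procedures above reduce to explicit formulas, a minimal sketch may help. The half-test correlation of .60 and the response matrix below are invented for illustration:

```python
# A minimal sketch (with invented data) of two internal-consistency
# estimates named above: the Spearman-Brown step-up used with split-half
# reliability, and Kuder-Richardson Formula 20 (KR-20) for right/wrong items.

def spearman_brown(r_half):
    """Step up the correlation between two half-tests to an estimate of
    the reliability of the full-length test."""
    return 2 * r_half / (1 + r_half)

def kr20(items):
    """KR-20 for a matrix of 1/0 item scores: rows = examinees, cols = items."""
    n = len(items)      # number of examinees
    k = len(items[0])   # number of items
    totals = [sum(row) for row in items]
    mean_t = sum(totals) / n
    var_t = sum((t - mean_t) ** 2 for t in totals) / n  # variance of totals
    # p = proportion passing each item; sum p*q over all items
    sum_pq = 0.0
    for j in range(k):
        p = sum(row[j] for row in items) / n
        sum_pq += p * (1 - p)
    return (k / (k - 1)) * (1 - sum_pq / var_t)

# Invented 5-examinee, 4-item response matrix (1 = correct, 0 = incorrect).
responses = [
    [1, 1, 1, 1],
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [1, 0, 0, 0],
    [0, 0, 0, 0],
]
print(spearman_brown(0.60))  # a half-test r of .60 steps up to .75
print(kr20(responses))       # about 0.8 for this sample
```

The Spearman-Brown step-up is needed because the split-half correlation reflects a test only half as long as the one actually administered.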
Principle 6: Fairness
A fair assessment provides all students with an equal opportunity to demonstrate achievement. The keys to
fairness are as follows:
Students have knowledge of learning targets and assessment
Students are given equal opportunity to learn
Students possess the pre-requisite knowledge and skills
Students are free from teacher stereotypes
Students are free from biased assessment tasks and procedures
Principle 7: Assessment should be a continuous process.
Assessment takes place in all phases of instruction. It could be done before, during and after instruction.
Activities Occurring Prior to Instruction
Understanding students’ cultural backgrounds, interests, skills and abilities as they apply across a range of
learning domains and/or subject areas;
Understanding students’ motivations and their interests in specific class content;
Clarifying and articulating the performance outcomes expected of pupils; and
Planning instruction for individuals or groups of students.
Activities occurring During instruction:
Monitoring pupil progress toward instructional goals;
Identifying gains and difficulties pupils are experiencing in learning and performing;
Adjusting instruction;
Giving contingent, specific, and credible praise and feedback;
Motivating students to learn; and
Judging the extent of pupil attainment of instructional outcomes.
Activities occurring After the appropriate instructional segment (e.g. lesson, class, semester, grade)
Describing the extent to which each pupil has attained both short- and long-term instructional goals;
Communicating strengths and weaknesses based on assessment results to students, and parents or guardians;
Recording and reporting assessment results for school-level analysis, evaluation, and decision-making;
Analyzing assessment information gathered before and during instruction to understand each student's progress
to date and to inform future instructional planning;
Evaluating the effectiveness of instruction; and
Evaluating the effectiveness of the curriculum and materials in use
Principle 9: Communication
• Assessment targets and standards should be communicated
• Assessment results should be communicated to its important users.
• Assessment results should be communicated to students through direct interaction or regular ongoing
feedback on their progress.
Principle 10: Positive Consequences
• Assessment should have a positive consequence on students; that is, it should motivate them to learn.
• Assessment should have a positive consequence on teachers; that is, it should help them improve the
effectiveness of their instruction.
Principle 11: Ethics
• Teachers should free students from harmful consequences of the misuse or overuse of assessment
procedures.
• Teacher should be guided by laws and policies that affect their classroom assessment.
• Administrators and teachers should understand that it is inappropriate to use standardized student
achievement tests to measure teaching effectiveness.
PERFORMANCE-BASED ASSESSMENT
Performance-based assessment is a process of gathering information about students' learning
through actual demonstration of essential and observable skills and creation of products that are grounded in
real-world contexts and constraints. It is an assessment that is open to many possible answers and judged
using multiple criteria or standards of excellence that are pre-specified and public.
Reasons for using Performance-Based Assessment
• Dissatisfaction with the limited information obtained from selected-response tests.
• Influence of cognitive psychology, which demands the learning not only of declarative but also of
procedural knowledge.
• Negative impact of conventional tests, e.g., high-stakes assessment, teaching to the test.
• It is appropriate in experimental, discovery-based, integrated, and problem-based learning approaches.
Types of Performance-based Tasks
1. Demonstration-type - a task that requires no product.
Examples: constructing a building, cooking demonstrations, entertaining tourists, teamwork,
presentations
2. Creation-type - a task that requires tangible products.
Examples: project plans, research papers, project flyers, discoveries
Methods of Performance-based Assessment
1. Written-open ended - a written prompt is provided
Formats: Essays, open-ended test
2. Behavior-based - utilizes direct observations of behaviors in situation or simulated contexts.
Formats: structured and unstructured
3. Interview-based - examinees respond in one-to-one conference setting with the examiner to demonstrate
mastery of the skills.
Formats: structured and unstructured
4. Product-based - examinees create a work sample or a product utilizing the skills/abilities.
Formats: restricted and extended
5. Portfolio-based - collections of work that are systematically gathered to serve many purposes.
How to Assess a Performance
1. Identify the competency that has to be demonstrated by the students with or without a product.
2. Describe the task to be performed by the students either individually or as a group, the resources needed,
time allotment and other requirements to be able to assess the focused competency.
3. Develop a scoring rubric reflecting the criteria, the levels of performance, and the scores.
7 Criteria in Selecting a Good Performance Assessment Task
• Generalizability - the likelihood that the students’ performance on the task will generalize to comparable
tasks.
• Authenticity - the task is similar to what the students might encounter in the real world, as opposed to
encountering it only in school.
• Multiple Foci - the task measures multiple instructional outcomes.
• Teachability - the task allows one to master the skill that one should be proficient in.
• Feasibility - the task is realistically implementable in relation to its cost, space, time, and equipment
requirements.
• Scorability - the task can be reliably and accurately evaluated.
• Fairness - the task is fair to all students regardless of their social status or gender.
PORTFOLIO ASSESSMENT
Portfolio Assessment is also an alternative to the pen-and-paper objective test. It is a purposeful, ongoing,
dynamic, and collaborative process of gathering multiple indicators of the learner's growth and development.
Portfolio assessment is also performance-based but more authentic than any other performance-based task.
Reasons for Using Portfolio Assessment
Burke (1999) recognizes the portfolio as another type of assessment and considers it authentic for the
following reasons:
It tests what is really happening in the classroom.
It offers multiple indicators of students' progress.
It gives the students responsibility for their own learning.
It offers opportunities for students to document reflections on their learning.
It demonstrates what the students know in ways that encompass their personal learning styles and
multiple intelligences.
It offers teachers a new role in the assessment process.
It allows teachers to reflect on the effectiveness of their instruction.
It gives teachers freedom in gaining insights into the student's development or achievement over
a period of time.
Types of Portfolios
Portfolios could come in three types: working, show, or documentary.
1. The working portfolio is a collection of a student’s day-to-day works which reflect his/her learning.
2. The show portfolio is a collection of a student's best works.
3. The documentary portfolio is a combination of a working and a show portfolio.
DEVELOPING RUBRICS
A rubric is a measuring instrument used in rating performance-based tasks. It is the “key to corrections”
for assessment tasks designed to measure the attainment of learning competencies that require
demonstration of skills or creation of products of learning. It offers a set of guidelines or descriptions
for scaling different levels of performance or qualities of products of learning. It can be used in scoring
both the process and the products of learning.
Similarity of Rubric with Other Scoring Instruments
Rubric is a modified checklist and rating scale.
1. Checklist
presents the observed characteristics of a desirable performance or product
the rater checks the trait/s that has/have been observed in one's performance or product.
2. Rating Scale
measures the extent or degree to which a trait has been satisfied by one's work or performance
offers an overall description of the different levels of quality of a work or a performance
uses 3 or more levels to describe the work or performance, although the most common rating scales have 4 or
5 performance levels.
Holistic Rubric - describes the overall quality of a performance or product. In this rubric, only one
rating is given to the entire work or performance.
Advantages
It allows fast assessment.
It provides one score to describe the overall performance or quality of work.
It can indicate the general strengths and weaknesses of the work or performance
Disadvantages
It does not clearly describe the degree to which each criterion is satisfied or not by the performance or product.
It does not permit differential weighting of the qualities of a product or a performance.
Analytic Rubric - describes the quality of a performance or product in terms of identified dimensions
and/or criteria, which are rated independently to give a better picture of the quality of the work or
performance.
Advantages
It clearly describes the degree to which each criterion is satisfied or not by the performance or product.
It permits differential weighting of the qualities of a product or a performance.
It helps raters pinpoint specific areas of strengths and weaknesses
Disadvantages
It is more time-consuming to use.
It is more difficult to construct.
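The differential weighting that an analytic rubric permits can be illustrated with a short sketch. The criteria, weights, and ratings below are invented for illustration, not taken from any particular rubric:

```python
# Hypothetical analytic rubric: each criterion gets its own rating, and the
# criteria carry different weights, unlike a holistic rubric's single rating.
rubric = {
    # criterion: (weight, rating on a 1-4 scale)
    "Content":      (0.40, 4),
    "Organization": (0.30, 3),
    "Delivery":     (0.20, 4),
    "Mechanics":    (0.10, 2),
}

# Weighted score: the sum of weight x rating across all criteria.
weighted_score = sum(weight * rating for weight, rating in rubric.values())
print(weighted_score)  # out of a maximum possible 4.0
```

Because each criterion is rated independently, the teacher can see at a glance that Mechanics, not Content, pulled this score down.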
PART II - ANALYZING TEST ITEMS
PROFESSIONAL EDUCATION
Focus: Assessment and Evaluation of Learning 2
Competencies:
1. Apply principles in constructing and interpreting traditional forms of assessment.
2. Utilize processed data and results in reporting and interpreting learners’ performance to improve
teaching and learning.
3. Demonstrate skills in the use of techniques and tools in assessing affective learning.
PART I: CONTENT UPDATE
TEST
It is an instrument or systematic procedure which typically consists of a set of questions for measuring a
sample of behavior.
It is a special form of assessment made under contrived circumstances especially so that it may be
administered.
It is a systematic form of assessment that answers the question, “How well does the individual perform,
either in comparison with others or in comparison with a domain of performance tasks?”
An instrument designed to measure any quality, ability, skill or knowledge.
Instructional Uses of Tests
grouping learners for instruction within a class
identifying learners who need corrective and enrichment experiences
measuring class progress for any given period
assigning grades/marks
guiding activities for specific learners (the slow, average, fast)
Guidance Uses of Tests
assisting learners to set educational and vocational goals
improving teachers’, counselors’, and parents’ understanding of children with problems.
preparing information/data to guide conferences with parents about their children.
determining interests in types of occupations not previously considered or known by the students
predicting success in future educational or vocational endeavor.
Administrative Uses of Tests
determining emphasis to be given to the different learning areas in the curriculum
measuring the school progress from year to year
determining how well students are attaining worthwhile educational goals.
determining appropriateness of the school curriculum for students of different levels of ability.
developing adequate basis for pupil promotion or retention.
I. Standardized Tests – tests that have been carefully constructed by experts in the light of accepted objectives
Ability Tests - combine verbal and numerical ability, reasoning, and computation. Ex.: OLSAT – Otis-Lennon
School Ability Test
Aptitude Tests - tests which measure potential in a specific field or area; they predict the degree to which an
individual will succeed in a given area such as art, music, mechanical tasks, or academic studies. Ex.: DAT –
Differential Aptitude Test
II. Teacher-Made Tests - tests constructed by classroom teacher which measure and appraise student
progress in terms of specific classroom/instructional objectives.
1. Objective Type – answers are in the form of a single word or phrase or symbol.
a. Limited Response Type – requires the student to select the answer from a given number of
alternatives or choices.
i. Multiple Choice Test - consists of a stem which presents a problem, followed by three to five
alternatives or options, only one of which is correct or definitely better than the others. The correct choice or
alternative in each item is called the answer, and the rest of the alternatives are called distracters, decoys,
or foils.
ii. True – False or Alternative Response - consists of declarative statements that one has to respond
or mark true or false, right or wrong, correct or incorrect, yes or no, fact or opinion, agree or disagree and the
like. It is a test made up of items which allow dichotomous responses.
iii. Matching Type – consists of two parallel columns, with each word, number, or symbol in one
column being matched to a word, sentence, or phrase in the other column. The items in Column I or A for
which a match is sought are called premises, and the items in Column II or B from which the selection is made
are called responses.
b. Free Response Type or Supply Test – requires the student to supply or give the correct answer.
i. Short Answer – uses a direct question that can be answered by a word, phrase, number, or
symbol.
ii. Completion Test - consists of an incomplete statement that can also be answered by a word,
phrase, number, or symbol.
2. Essay Type - Essay questions provide freedom of response that is needed to adequately assess students’
ability to formulate, organize, integrate and evaluate ideas and information or apply knowledge and skills.
Restricted Essay - limits both the content and the response. Content is usually restricted by the scope of the
topic to be discussed.
Extended Essay - allows the students to select any factual information that they think is pertinent to organize
their answers in accordance with their best judgment and to integrate and evaluate ideas which they think
appropriate.
Use assessment specifications as a guide to item/task writing.
Construct more items/tasks than needed.
Write the items/tasks ahead of the testing date.
Write each test item/task at an appropriate reading level and difficulty.
Write each test item/task in a way that it does not provide help in answering other test items or tasks.
Write each test item/task so that the task to be performed is clearly defined and it calls forth the performance
described in the intended learning outcome.
Write a test item/task whose answer is one that would be agreed upon by the experts.
Whenever a test is revised, recheck its relevance.
A. Supply Type of Test
Word the item/s so that the required answer is both brief and specific.
Do not take statements directly from textbooks.
A direct question is generally more desirable than an incomplete statement.
If the item is to be expressed in numerical units, indicate the type of answer wanted.
Blanks for answers should be equal in length and, as much as possible, placed in a column to the right of the question.
When completion items are used, do not include too many blanks.
B. Selective Type of Tests
1. Alternative-Response
Avoid broad, trivial statements and use of negative words especially double negatives.
Avoid long and complex sentences.
Avoid multiple facts or including two ideas in one statement, unless cause-effect relationship is being
measured.
If an opinion is used, attribute it to some source, unless the ability to identify opinion is being
specifically measured.
Use proportional number of true statements and false statements.
True statements and false statements should be approximately equal in length.
2. Matching Type
Use only homogeneous material in a single matching exercise.
Include an unequal number of responses and premises, and instruct the pupil that responses may be used
once, more than once, or not at all.
Keep the list of items to be matched brief, and place the shorter responses at the right.
Arrange the list of responses in logical order.
Indicate in the directions the basis for matching the responses and premises.
Place all the items for one matching exercise on the same page.
Limit a matching exercise to not more than 10 to 15 items.
3. Multiple Choice
a. The stem of the item should be meaningful by itself and should present a definite problem.
b. The item stem should include as much of the item as possible and should be free of irrelevant material.
c. Use a negatively stated stem only when significant learning outcomes require it and stress/ highlight
the negative words for emphasis.
d. All the alternatives should be grammatically consistent with the stem of the item.
e. An item should only contain one correct or clearly best answer.
f. Items used to measure understanding should contain some novelty, but beware of too much.
g. All distracters should be plausible/attractive.
h. Verbal associations between the stem and the correct answer should be avoided.
i. The relative length of the alternatives/options should not provide a clue to the answer.
j. The alternatives should be arranged logically.
k. The correct answer should appear in each of the alternative positions an approximately equal
number of times, but in random order.
l. Use of special alternatives such as “none of the above” or “all of the above” should be done sparingly.
m. Always have the stem and alternatives on the same page.
n. Do not use multiple choice items when other types are more appropriate.
4. Essay Type of Test
a. Restrict the use of essay questions to those learning outcomes that cannot be satisfactorily measured
by objective items.
b. Construct questions that will call forth the skills specified in the learning standards.
c. Phrase each question so that the student’s task is clearly defined or indicated.
d. Avoid the use of optional questions.
e. Indicate the approximate time limit or the number of points for each question
f. Prepare an outline of the expected answer or a scoring rubric in advance.
Characteristics of a Good Assessment Instrument
A. Major Characteristics
A. Validity – the degree to which a test measures what it is supposed or intended to measure. It is the
usefulness of the test for a given purpose. It is the most important characteristic desired in
an assessment instrument.
B. Reliability - refers to the consistency of measurement, that is, how consistent test scores or other
assessment results are from one measurement to another. It is the most important characteristic of an
assessment instrument next to validity.
B. Minor Characteristics
Administrability - the test should be easy to administer, such that the directions clearly indicate
how a student should respond to the test/task items and how much time he/she should spend on each
test item or on the whole test.
Scoreability - the test should be easy to score, such that directions for scoring are clear and the
point(s) for each correct answer are specified.
Interpretability - test scores can easily be interpreted and described in terms of the specific tasks that a
student can perform or his/her relative position in a clearly defined group.
Economy - the test should be given in the cheapest way in terms of the time and effort spent on
administration, and answer sheets should be provided so the test can be given from time to
time.
Factors Influencing the Validity of an Assessment Instrument
1. Unclear directions. Directions that do not clearly indicate to the students how to respond to the tasks and
how to record the responses tend to reduce validity.
2. Reading vocabulary and sentence structure too difficult. Vocabulary and sentence structure that are too
complicated for the students turn the assessment into a measure of reading comprehension, thus altering the
meaning of the assessment results.
3. Ambiguity. Ambiguous statements in assessment tasks contribute to misinterpretations and confusion.
Ambiguity sometimes confuses the better students more than it does the poor students.
4. Inadequate time limits. Time limits that do not provide students with enough time to consider the tasks and
provide thoughtful responses can reduce the validity of interpretations of results. Rather than measuring what a
student knows about a topic or is able to do given adequate time, the assessment may become a measure of the
speed with which the student can respond. For some content (e.g. a typing test), speed may be important.
However, most assessments of achievement should minimize the effects of speed on student performance.
5. Overemphasis of easy-to-assess aspects of the domain at the expense of important but hard-to-assess aspects
(construct underrepresentation). It is easy to develop test questions that assess factual recall and generally
harder to develop ones that tap conceptual understanding or higher-order thinking processes such as the
evaluation of competing positions or arguments. Hence, it is important to guard against underrepresentation of
tasks getting at the important but more difficult-to-assess aspects of achievement.
6. Test items inappropriate for the outcomes being measured. Attempting to measure understanding, thinking
skills, and other complex types of achievement with test forms that are appropriate only for measuring factual
knowledge will invalidate the results.
7. Poorly constructed test items. Test items that unintentionally provide clues to the answer tend to measure
the students' alertness in detecting clues as well as mastery of the skills or knowledge the test is intended to
measure.
8. Test too short. If a test is too short to provide a representative sample of the performance we are interested
in, its validity will suffer accordingly.
9. Improper arrangement of items. Test items are typically arranged in order of difficulty, with the easiest items
first. Placing difficult items first in the test may cause students to spend too much time on these and prevent
them from reaching items they could easily answer. Improper arrangement may also influence validity by having a
detrimental effect on student motivation.
10. Identifiable pattern of answers. Placing correct answers in some systematic pattern (e.g., T, T, F, F or B, B, B,
C, C, C, D, D, D) enables students to guess the answers to some items more easily, and this lowers validity.
Improving Test Reliability
Several test characteristics affect reliability. They include the following:
1. Test length. In general, a longer test is more reliable than a shorter one because longer tests sample the
instructional objectives more adequately.
2. Spread of scores. The type of students taking the test can influence reliability. A group of students with
heterogeneous ability will produce a larger spread of test scores than a group with homogeneous ability.
3. Item difficulty. In general, tests composed of items of moderate or average difficulty (.30 to .70) will have
more influence on reliability than those composed primarily of very easy or very difficult items.
4. Item discrimination. In general, tests composed of more discriminating items will have greater reliability than
those composed of less discriminating items.
5. Time limits. Adding a time factor may improve reliability for lower-level cognitive test items. Since all
students do not function at the same pace, a time factor adds another criterion to the test that causes
discrimination, thus improving reliability. Teachers should not, however, arbitrarily impose a time limit. For
higher-level cognitive test items, the imposition of a time limit may defeat the intended purpose of the items.
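The difficulty and discrimination characteristics above can be estimated from item-response data. Below is a minimal sketch, assuming the common formulation in which difficulty p is the proportion of examinees answering correctly and discrimination D compares upper- and lower-scoring groups; the data are made up, and halves are used instead of the usual upper/lower 27% groups for brevity:

```python
# Illustrative item analysis for one test item.
# Each response is coded 1 (correct) or 0 (incorrect).

def item_difficulty(responses):
    """Proportion of examinees answering the item correctly (0.0 to 1.0)."""
    return sum(responses) / len(responses)

def item_discrimination(responses_by_total):
    """D = p(upper group) - p(lower group).

    responses_by_total: list of (total test score, item response) pairs.
    """
    ranked = sorted(responses_by_total, key=lambda pair: pair[0], reverse=True)
    half = len(ranked) // 2
    upper = [resp for _, resp in ranked[:half]]   # top scorers on the whole test
    lower = [resp for _, resp in ranked[-half:]]  # bottom scorers
    return item_difficulty(upper) - item_difficulty(lower)

# Ten hypothetical examinees: (total test score, response to this item)
data = [(48, 1), (45, 1), (44, 1), (40, 1), (38, 0),
        (35, 1), (30, 0), (28, 0), (25, 0), (20, 0)]

p = item_difficulty([resp for _, resp in data])  # 5/10 = 0.50 -> moderate difficulty
D = item_discrimination(data)                    # 0.8 - 0.2 = 0.60 -> discriminates well
```

A positive D means the item was answered correctly more often by high scorers than by low scorers, which is what a discriminating item should do.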
A. Symmetrically Shaped Test Score Distributions
1. Normal Distribution or Bell Shaped Curve
2. Rectangular Distribution
3. U-Shaped Curve
B. Skewed Distribution of Test Score
1. Positively Skewed Distribution (mean > median > mode)
2. Negatively Skewed Distribution (mode > median > mean)
2. Skewness is the degree of asymmetry, or departure from symmetry, of a distribution.
3. Skewed to the right (positive skewness): the frequency curve of the distribution has a longer “tail” to the right
of the central maximum than to the left. Most scores are below the mean, and there are extremely high scores.
4. Skewed to the left (negative skewness): the frequency curve of the distribution has a longer “tail” to the left
of the central maximum than to the right. Most scores are above the mean, and there are extremely low scores.
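The mean-median-mode orderings described above can be checked on small made-up score sets with Python's standard library:

```python
from statistics import mean, median, mode

positively_skewed = [1, 2, 2, 3, 3, 3, 4, 5, 9, 12]       # long tail to the right
negatively_skewed = [1, 4, 8, 9, 10, 10, 10, 11, 11, 12]  # long tail to the left

# Positively skewed: the few extreme HIGH scores pull the mean up.
assert mean(positively_skewed) > median(positively_skewed) >= mode(positively_skewed)

# Negatively skewed: the few extreme LOW scores pull the mean down.
assert mode(negatively_skewed) >= median(negatively_skewed) > mean(negatively_skewed)
```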
Kurtosis - the degree of peakedness of a distribution, usually taken relative to a normal distribution.
Leptokurtic: a distribution having a relatively high peak.
Platykurtic: a flat-topped distribution.
Mesokurtic: a distribution which is moderately peaked.
Descriptive Statistics – the first step in data analysis is to describe, or summarize, the data using descriptive
statistics.
Measures of Central Tendency
- numerical values which describe the average or typical performance of a given group in terms of certain
attributes
- basis for determining whether a group is performing better or worse than other groups
Mean - the arithmetic average; used when the distribution is normal/symmetrical or bell-shaped.
Most reliable, but affected by extreme scores.
Median - the point in a distribution above and below which 50% of the scores/cases lie;
the midpoint of a distribution;
used when the distribution is skewed; most stable.
Mode - the most frequent/common score in a distribution;
the opposite of the mean: unreliable/unstable;
used as a quick description of the average/typical performance of the group.
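The three averages above can be computed with Python's standard library; the scores below are made up for illustration:

```python
from statistics import mean, median, mode

scores = [70, 72, 75, 75, 78, 80, 85, 98]  # note one extreme score, 98

print(mean(scores))    # arithmetic average; pulled up by the extreme score
print(median(scores))  # midpoint; stable even when the distribution is skewed
print(mode(scores))    # most frequent score; quick but unstable
```

Here the extreme score of 98 raises the mean above the median, which is exactly why the median is preferred for skewed distributions.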
Measures of Variability
- indicate or describe how spread out the scores are. The larger the measure of variability, the more spread out
the scores are, and the group is said to be heterogeneous; the smaller, the less spread out the scores are, and
the group is said to be homogeneous.
- Range - the difference between the highest and lowest scores plus one;
- the counterpart of the mode: it is also unreliable/unstable;
- used as a quick, rough estimate of variability.
- Standard Deviation - the counterpart of the mean, also used when the distribution is normal or symmetrical;
- reliable/stable, and so widely used.
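A sketch of both spread measures, using invented score sets. The range follows the "highest minus lowest plus one" convention stated above; `pstdev` computes the population standard deviation, a common choice when the class itself is the whole group of interest:

```python
from statistics import pstdev

group_A = [78, 79, 80, 81, 82]   # homogeneous: scores cluster together
group_B = [50, 65, 80, 95, 110]  # heterogeneous: scores spread widely

def score_range(scores):
    """Range as defined above: highest minus lowest, plus one."""
    return max(scores) - min(scores) + 1

print(score_range(group_A), pstdev(group_A))  # small range, small SD
print(score_range(group_B), pstdev(group_B))  # large range, large SD
```

Both groups have the same mean (80), yet their spreads differ sharply, which is why a measure of variability must accompany any measure of central tendency.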
Z-score as Applied to Test Results
Study this group of tests, which was administered with the following results, then answer the question that
follows.
z-score = (score − mean) / SD
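The z-score formula above can be applied with Python's standard library. The scores here are invented for illustration; a positive z means the student scored above the class mean, and its magnitude tells how many standard deviations away:

```python
from statistics import mean, pstdev

scores = [60, 70, 75, 80, 90, 85, 65, 95, 55, 75]
m = mean(scores)     # the class mean
sd = pstdev(scores)  # population standard deviation of the class

def z_score(score):
    """z = (score - mean) / SD"""
    return (score - m) / sd

print(z_score(90))  # above the mean -> positive z
print(z_score(60))  # below the mean -> negative z
```

Because z-scores put results from different tests on a common scale, they let a teacher compare a student's standing across tests with different means and spreads.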