The document outlines the key principles of high-quality assessment:
1. Clarity of learning targets - assessments should clearly define what knowledge, skills, and abilities are being measured.
2. Appropriateness of assessment methods - the right methods like written tests, projects, and observations should be used to match the learning targets.
3. Validity, reliability, fairness, positive consequences, practicality/efficiency, and ethics - assessments should have these key properties to be effective and accurate measures of learning.
Is it possible to explain why student outputs are as they are through an assessment of the processes the students carried out to arrive at the final product?
YES – through process-oriented, performance-based assessment.
This presentation is about improving the quality of assessment instruments and tools by following the principles of high-quality assessment. It is part of the Education course Assessment and Evaluation of Learning 1.
Assessment and evaluation- A new perspective
Unit 2 - Tests and their Applications
Syllabus of Unit 2
Testing- Concept and Nature
Developing and Administering Teacher Developed Tests
Characteristics of a good Test
Standardization of Test
Types of Tests - Psychological Tests, Reference Tests, Diagnostic Tests
2.2.1. Introduction
Teachers construct various tools for the assessment of various traits of their students.
The most commonly used tools constructed by a teacher are the achievement tests. The achievement tests are constructed as per the requirement of a particular class and subject area they teach.
Besides achievement tests, a teacher also observes students in the classroom, on the playground and during other co-curricular activities in the school in order to assess their traits. Social and emotional behavior is observed as well. For this purpose too, tools such as rating scales are constructed.
Evaluation tools used by the teacher may be either standardized or non-standardized.
A standardized tool is one that has systematically developed norms for a population. Its procedure, apparatus and scoring have been fixed so that precisely the same test can be given at different times and places, as long as it is administered to a similar population. Standardized tools are used in order to:
compare achievement of different skills in different areas;
make comparisons between different classes and schools.
They have norms for the particular population; they are norm-referenced.
On the other hand, teachers make tests to suit the requirements of a particular class and the subject area they teach. Hence, these tests are purposive and criterion-referenced. Teachers use them:
to assess how well students have mastered a unit of instruction;
to determine the extent to which objectives have been achieved;
to determine the basis for assigning course marks; and
to find out how effective their teaching has been.
The syllabus of this unit therefore revolves around tests.
2.2.2. Developing and Administering Teacher-Developed Tests
2.2.3. CHARACTERISTICS OF A GOOD MEASURING INSTRUMENT
1. VALIDITY
Any measuring instruments must fulfill certain conditions. This is true in all spheres, including educational evaluation.
Test validity refers to the degree to which a test accurately measures what it claims to measure. It is a critical concept in the field of psychometrics and is essential for ensuring that a test is meaningful and useful for its intended purpose. If a test is meant to examine the understanding of a scientific concept, it should do only that; it should not attend to other abilities such as the student's style of presentation, sentence patterns or grammatical construction. Validity is a specific rather than a general criterion of a good test, and it is a matter of degree: it may be high, moderate or low.
There are several types of validity, each addressing different aspects of the testing process:
1. Face validity, 2. Content validity, 3. Construct validity, 4. Criterion-related validity
Assessment of learning supports the learner and teacher relationship in the academe: it guides both to recognize strengths and weaknesses in class, and it evaluates the learners' learning process.
It adds knowledge about teaching that can help students and teachers in the learning process, allowing both to assess how they interact to achieve their goals in class. Assessment of learning focuses on the development and utilization of assessment tools to improve the teaching-learning process. It emphasizes the use of testing for measuring knowledge, comprehension and other thinking skills. It lets students go through the standard steps of test construction for quality assessment, and students experience how to develop rubrics for performance-based and portfolio assessment. The presentation includes educational technology and statistical tools that help to determine the learning of the students.
PRINCIPLES OF HIGH QUALITY ASSESSMENT
1. Clarity of learning targets (knowledge, reasoning, skills, products, affects)
2. Appropriateness of Assessment Methods
3. Validity
4. Reliability
5. Fairness
6. Positive Consequences
7. Practicality and Efficiency
8. Ethics
1. CLARITY OF LEARNING TARGETS (knowledge, reasoning, skills, products, affects)
Assessment can be made precise, accurate and dependable only if what is to be achieved is clearly stated and feasible. The learning targets, involving knowledge, reasoning, skills, products and affects, need to be stated in behavioral terms which denote something that can be observed through the behavior of the students.
CLARITY OF LEARNING TARGETS (CONT.)
Cognitive Targets
Benjamin Bloom (1956) proposed a hierarchy of educational objectives at the cognitive level. These are:
• Knowledge – acquisition of facts, concepts and theories
• Comprehension – understanding; involves cognition or awareness of the interrelationships
• Application – transfer of knowledge from one field of study to another, or from one concept to another concept in the same discipline
• Analysis – breaking down of a concept or idea into its components and explaining the concept as a composition of these components
• Synthesis – opposite of analysis; entails putting together the components in order to summarize the concept
• Evaluation and Reasoning – valuing and judgment, or putting the “worth” of a concept or principle.
CLARITY OF LEARNING TARGETS (CONT.)
Skills, Competencies and Abilities Targets
Skills – specific activities or tasks that a student can proficiently do
Competencies – clusters of skills
Abilities – made up of related competencies, categorized as:
i. Cognitive
ii. Affective
iii. Psychomotor
Products, Outputs and Project Targets
- tangible and concrete evidence of a student’s ability
- need to clearly specify the level of workmanship of projects:
i. expert
ii. skilled
iii. novice
2. APPROPRIATENESS OF ASSESSMENT METHODS
a. Written-Response Instruments
Objective tests – appropriate for assessing the various levels of the hierarchy of educational objectives
Essays – can test the students’ grasp of the higher-level cognitive skills
Checklists – lists of several characteristics or activities presented to the subjects of a study, who analyze them and place a mark opposite each characteristic that applies
2. APPROPRIATENESS OF ASSESSMENT METHODS (CONT.)
b. Product Rating Scales
Used to rate products like book reports, maps, charts, diagrams, notebooks and creative endeavors
Need to be developed to assess various products over the years
c. Performance Tests – Performance Checklist
Consists of a list of behaviors that make up a certain type of performance
Used to determine whether or not an individual behaves in a certain way when asked to complete a particular task
2. APPROPRIATENESS OF ASSESSMENT METHODS (CONT.)
d. Oral Questioning – an appropriate assessment method when the objectives are to:
assess the students’ stock knowledge, and/or
determine the students’ ability to communicate ideas in coherent verbal sentences
e. Observation and Self-Reports
Useful supplementary methods when used in conjunction with oral questioning and performance tests
3. PROPERTIES OF ASSESSMENT METHODS
Validity
Reliability
Fairness
Positive Consequences
Practicality and Efficiency
Ethics
3. VALIDITY
Something valid is something fair.
A valid test is one that measures what it is supposed to measure.
Types of Validity
Face: What do students think of the test?
Construct: Am I testing in the way I taught?
Content: Am I testing what I taught?
Criterion-related: How does this compare with an existing valid test?
Tests can be made more valid by making them more subjective (open items).
MORE ON VALIDITY
Validity – the appropriateness, correctness, meaningfulness and usefulness of the specific conclusions that a teacher reaches regarding the teaching-learning situation.
Content validity – the content and format of the instrument:
i. students’ adequate experience
ii. coverage of sufficient material
iii. reflecting the degree of emphasis
Face validity – the outward appearance of the test; the lowest form of test validity
Criterion-related validity – the test is judged against a specific criterion
Construct validity – the test is loaded on a “construct” or factor
RELIABILITY
Something reliable is something that works well and that you can trust.
A reliable test is a consistent measure of what it is supposed to measure.
Questions:
Can we trust the results of the test?
Would we get the same results if the tests were taken again and scored by a different person?
Tests can be made more reliable by making them more objective (controlled items).
Reliability is the extent to which an experiment, test, or any measuring procedure yields the same result on repeated trials.
Equivalency reliability is the extent to which two items measure identical concepts at an identical level of difficulty. Equivalency reliability is determined by relating two sets of test scores to one another to highlight the degree of relationship or association.
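Relating two sets of test scores is usually done with a correlation coefficient. The sketch below, with hypothetical scores from two parallel test forms taken by the same students, computes a Pearson correlation as the equivalency-reliability estimate:

```python
# Equivalency reliability sketch: correlate scores from two parallel test
# forms taken by the same students. All scores below are hypothetical.
from statistics import mean, pstdev

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length score lists."""
    mx, my = mean(xs), mean(ys)
    cov = mean((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / (pstdev(xs) * pstdev(ys))

form_a = [78, 85, 62, 90, 71, 88, 66, 74]   # hypothetical Form A scores
form_b = [80, 83, 65, 92, 70, 85, 68, 72]   # hypothetical Form B scores

r = pearson(form_a, form_b)
print(f"equivalency reliability estimate: r = {r:.2f}")
```

A coefficient near 1 suggests the two forms rank students in nearly the same way; a low coefficient suggests the forms are not measuring the same thing at the same level of difficulty.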
Stability reliability (sometimes called test-retest reliability) is the agreement of measuring instruments over time. To determine stability, a measure or test is repeated on the same subjects at a future date.
Internal consistency is the extent to which tests or procedures assess the same characteristic, skill or quality. It is a measure of the precision between the observers or of the measuring instruments used in a study.
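One common internal-consistency statistic (not named on the slide) is Cronbach's alpha, which compares the variance of individual items with the variance of students' total scores. A minimal sketch with hypothetical item-level scores:

```python
# Internal consistency sketch: Cronbach's alpha over item-level scores.
# The item scores below are hypothetical; rows are items, columns students.
from statistics import pvariance

def cronbach_alpha(items):
    """items: list of per-item score lists, one inner list per item."""
    k = len(items)
    item_vars = sum(pvariance(item) for item in items)
    totals = [sum(scores) for scores in zip(*items)]   # each student's total
    return (k / (k - 1)) * (1 - item_vars / pvariance(totals))

items = [
    [4, 5, 2, 3, 4],   # item 1, scored for 5 students
    [3, 5, 1, 4, 4],   # item 2
    [4, 4, 2, 3, 5],   # item 3
]
alpha = cronbach_alpha(items)
print(f"Cronbach's alpha = {alpha:.2f}")
```

Values closer to 1 indicate that the items behave as measures of the same characteristic.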
Interrater reliability is the extent to which two or more individuals (coders or raters) agree. Interrater reliability addresses the consistency of the implementation of a rating system.
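Rater agreement can be quantified as simple percent agreement, or as Cohen's kappa, which discounts the agreement expected by chance. A sketch with hypothetical pass/fail ratings from two raters scoring the same ten essays:

```python
# Interrater reliability sketch: percent agreement and Cohen's kappa
# between two raters. The ratings below are hypothetical.
from collections import Counter

rater1 = ["pass", "pass", "fail", "pass", "fail", "pass", "pass", "fail", "pass", "pass"]
rater2 = ["pass", "fail", "fail", "pass", "fail", "pass", "pass", "pass", "pass", "pass"]

n = len(rater1)
observed = sum(a == b for a, b in zip(rater1, rater2)) / n

# Chance agreement: probability both raters pick the same category at random,
# based on each rater's own category frequencies.
c1, c2 = Counter(rater1), Counter(rater2)
expected = sum(c1[cat] * c2[cat] for cat in c1) / n**2

kappa = (observed - expected) / (1 - expected)
print(f"percent agreement = {observed:.0%}, Cohen's kappa = {kappa:.2f}")
```

Kappa is lower than raw agreement here because two raters who mostly assign "pass" will agree fairly often even by chance.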
RELIABILITY – CONSISTENCY, DEPENDABILITY, STABILITY, WHICH CAN BE ESTIMATED BY:
Split-half method, calculated using:
i. the Spearman-Brown prophecy formula
ii. Kuder-Richardson KR-20 and KR-21
Consistency of test results when the same test is administered at two different time periods:
i. test-retest method
ii. correlating the two test results
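The split-half method can be sketched as follows: split the items into two halves (here odd vs. even items), correlate the half-scores, then step the half-test correlation up to full-test length with the Spearman-Brown prophecy formula. The 0/1 item responses below are hypothetical.

```python
# Split-half reliability sketch with the Spearman-Brown correction.
# Rows are students, columns are 8 test items scored 0/1 (hypothetical data).
from statistics import mean, pstdev

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length score lists."""
    mx, my = mean(xs), mean(ys)
    cov = mean((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / (pstdev(xs) * pstdev(ys))

responses = [
    [1, 1, 1, 0, 1, 1, 0, 1],
    [1, 0, 1, 1, 0, 1, 1, 0],
    [0, 0, 1, 0, 0, 1, 0, 0],
    [1, 1, 1, 1, 1, 1, 1, 1],
    [0, 1, 0, 0, 1, 0, 0, 1],
    [1, 1, 0, 1, 1, 1, 1, 0],
]
odd  = [sum(row[0::2]) for row in responses]   # score on items 1, 3, 5, 7
even = [sum(row[1::2]) for row in responses]   # score on items 2, 4, 6, 8

r_half = pearson(odd, even)
r_full = 2 * r_half / (1 + r_half)   # Spearman-Brown prophecy formula
print(f"half-test r = {r_half:.2f}, full-test estimate = {r_full:.2f}")
```

The correction is needed because each half is only half as long as the real test, and shorter tests are less reliable; Spearman-Brown estimates what the correlation would be at full length.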
5. FAIRNESS
The concept that assessment should be 'fair' covers a number of aspects:
Student knowledge of the learning targets of the assessment
Opportunity to learn
Prerequisite knowledge and skills
Avoiding teacher stereotypes
Avoiding bias in assessment tasks and procedures
6. POSITIVE CONSEQUENCES
Learning assessments provide students with effective feedback and can improve their motivation and/or self-esteem. Moreover, assessments of learning give students the tools to assess themselves and understand how to improve.
- Positive consequences on students, teachers, parents, and other stakeholders
7. PRACTICALITY AND EFFICIENCY
Something practical is something effective in real situations.
A practical test is one which can be practically administered.
Questions:
Will the test take longer to design than to apply?
Will the test be easy to mark?
Tests can be made more practical by making them more objective (more controlled items).
Factors to consider:
Teacher familiarity with the method – teachers should be familiar with the test
Time required – does not require too much time
Complexity of administration – implementable
Ease of scoring
Ease of interpretation
Cost
RELIABILITY, VALIDITY & PRACTICALITY
The problem:
The more reliable a test is, the less valid.
The more valid a test is, the less reliable.
The more practical a test is, (generally) the less valid.
The solution:
As in everything, we need a balance (in both exams and exam items).
ETHICS IN ASSESSMENT – “RIGHT AND WRONG”
Conforming to the standards of conduct of a given profession or group.
Ethical issues that may be raised:
i. Possible harm to the participants.
ii. Confidentiality.
iii. Presence of concealment or deception.
iv. Temptation to assist students.