Refresher Course:
Assessment in Learning
Dazel Jean S. Alariao
What is Assessment and why is it
important in learning?
Assessment measures a student's mastery
of a skill or knowledge of a given
subject. It is an integral part of instruction,
as it determines whether or not the goals
of education are being met.
Purposes of Assessment
1. Assessment FOR Learning:
When the intent is to enhance student
learning, teachers use assessment for learning to
uncover students’ prior knowledge,
preconceptions, gaps, and learning styles.
(ex: Formative, short quiz before lesson, pretests)
Purposes of Assessment
2. Assessment AS Learning:
Emphasizes assessment as a process of
metacognition (knowledge of one’s own thought
processes). It is characterized by students reflecting on
their own learning and making adjustments so that
they achieve deeper understanding.
(ex: reflections, self-assessment, peer-assessment,
diary)
Purposes of Assessment
3. Assessment OF Learning:
Refers to strategies designed to confirm
whether or not the students have met curriculum
outcomes. Designed to measure, certify, and report
the level of students’ learning, so that reasonable
decisions can be made about students. It is graded.
(Ex: Summative test, periodical test, post tests)
Traditional vs. Authentic Assessment
Traditional vs. Authentic:
• Selecting a response vs. Performing a task
• Contrived vs. Real-life
• Recall/recognition vs. Construction/application
• Teacher-structured vs. Student-structured
• Indirect evidence vs. Direct evidence
Traditional vs. Authentic Assessment
• Traditional: A school’s mission is to develop useful citizens who must possess a certain body of knowledge and skills.
• Authentic: A school’s mission is to develop useful citizens who are capable of performing useful tasks in the real world.
Performance Tasks
Seven Criteria for Selecting a Good Performance Assessment Task
1. Authenticity- the task is similar to what the students might encounter in the real world, as opposed to encountering it only in school
2. Feasibility- the task is realistically implementable in relation to its cost, space, time, and equipment requirements
3. Generalizability- the likelihood that the student’s performance on the task will generalize to comparable tasks
Performance Tasks
4. Fairness- the task is fair to all students regardless of their social status or gender
5. Teachability- the task allows one to master the skill that one should be proficient in
6. Multiple foci- the task measures multiple instructional outcomes
7. Scorability- the task can be reliably and accurately evaluated
Types of Rubrics
1. HOLISTIC RUBRICS
It describes the overall quality of a performance or a product. In
this rubric, there is only one rating given to the entire work or
performance.
Advantages:
• It allows fast assessment
• It provides one score to describe the overall performance
• It can indicate the general strengths and weaknesses of the
performance.
Types of Rubrics
Disadvantages:
• It does not clearly describe the degree to which each criterion is satisfied by the performance or product
• It does not permit differential weighting of the qualities of a product/performance
Types of Rubrics
2. ANALYTIC RUBRICS
Describes the quality of a performance or product in terms of identified dimensions/criteria, each of which is rated independently to give a better picture of the quality of the work or performance.
Advantages:
• Clearly describes the degree to which each criterion is satisfied by the performance
• Permits differential weighting of the qualities of a performance (illustrated in the sketch below)
• Helps pinpoint specific areas of strengths and weaknesses
Types of Rubrics
Disadvantages:
• It is more time-consuming to use
• It is more difficult to construct
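To make the idea of differential weighting concrete, here is a minimal Python sketch of analytic-rubric scoring. The criteria, 1-5 ratings, and weights are hypothetical examples, not taken from these slides.

```python
# Minimal sketch: weighted analytic-rubric scoring.
# Criteria, 1-5 ratings, and weights are hypothetical, not from the slides.
ratings = {"content": 4, "organization": 3, "mechanics": 5}   # each criterion rated on a 1-5 scale
weights = {"content": 0.5, "organization": 0.3, "mechanics": 0.2}

weighted_score = sum(ratings[c] * weights[c] for c in ratings)
print(round(weighted_score, 2))  # 3.9 on the 1-5 scale; a holistic rubric would instead give one overall rating
```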
Checklists
Advantages:
• Provides a structured way to collect data in a quick
and efficient manner.
• Allows learners to monitor their progress.
Disadvantages:
• Gathered information is limited only to the items
included in the checklist.
• It only indicates presence or absence, not the context.
Rating Scale
Advantages:
• Used for behaviors not easily measured by other
means.
• Quick and easy to accomplish.
• It has consistent descriptors (e.g., always, sometimes, rarely, never)
Disadvantages:
• Highly subjective
• Ambiguity
Norm-referenced vs Criterion-
referenced Tests
1. Norm-referenced Test:
• Compares how well a specific student performed relative to the performance of the norm group, but does not tell whether the student met, exceeded, or fell short of proficiency.
Norm-referenced vs Criterion-
referenced Tests
2. Criterion-referenced Test:
• Compares how well a specific student performed relative to a predetermined standard, goal, performance level, or other criterion. It tells a student’s proficiency.
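As a rough illustration of the two interpretations, the sketch below scores the same hypothetical student both ways: against a hypothetical norm group (a simple percentile rank, counting only scores strictly below) and against a hypothetical cut score.

```python
# Minimal sketch contrasting norm-referenced and criterion-referenced interpretations
# of the same score. Norm-group scores and the cut score are hypothetical, not from the slides.
norm_group = [55, 60, 62, 68, 70, 73, 75, 80, 85, 90]
student_score = 75

# Norm-referenced: where does the student stand relative to the norm group?
# (Here: percentage of norm-group scores strictly below the student's score.)
percentile_rank = 100 * sum(s < student_score for s in norm_group) / len(norm_group)
print(f"Scored higher than {percentile_rank:.0f}% of the norm group")  # says nothing about proficiency

# Criterion-referenced: did the student meet a predetermined standard?
cut_score = 75
print("Proficient" if student_score >= cut_score else "Not yet proficient")
```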
Selective Type of Test Construction
Selective type- provides choices for the answer
• Multiple choice- consists of a stem, which describes the problem, and three or more alternatives, which give the suggested solutions. The incorrect alternatives are the distractors.
Most important rule: The distractors should not be obviously wrong answers.
Selective Type of Test Construction
Selective type- provides choices for the answer
• True-False or Alternative Response- consists of a declarative statement that one has to mark true or false, right or wrong, correct or incorrect, yes or no, fact or opinion, and the like.
• Three Forms:
• Simple- The acquisition of morality is a developmental process. (True / False)
• Complex- The acquisition of morality is a developmental process. (True / False / Opinion)
• Compound- The acquisition of morality is a developmental process. (True / False) If the statement is false, what makes it false?
Selective Type of Test
• Matching type- consists of two parallel columns: Column A, the column of premises from which a match is sought; Column B, the column of responses from which the selection is made.
Supply Test
• Short answer- uses a direct question that can be
answered by a word, phrase, number or symbol.
• Completion test- consists of an incomplete
statement.
Fill-in-the-blanks rule: Put the blank at or near the end of the sentence.
Essay Test
• Restricted response- limits the content of the response by restricting the scope of the topic
• Extended response- allows students to select any factual information they think is pertinent and to organize their answers according to their best judgment
VALIDITY
• is the degree to which the assessment instrument
measures what it intends to measure
• It also refers to the usefulness of the instrument for a
given purpose
• It is the most important criterion of a good assessment
instrument
Ways in Establishing VALIDITY
1.Face Validity- is done by examining the
physical appearance of the instrument
2. Content Validity- is done through a careful and critical examination of the objectives of assessment, so that the assessment reflects the curricular objectives
Ways in Establishing VALIDITY
3. Criterion-related validity- is established statistically: a set of scores obtained from the measuring instrument is correlated with scores obtained from an external predictor or criterion measure
Two Purposes
a)Concurrent Validity- describes the present status of the
individual by correlating the sets of scores obtained from
two measures given concurrently
b)Predictive Validity- describes the future performance of
an individual by correlating the sets of scores obtained
from two measures given at a longer time interval.
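To illustrate how criterion-related validity is established statistically, the sketch below computes a Pearson correlation between two hypothetical sets of scores, one from the instrument being validated and one from an external criterion measure. The same calculation applies whether the criterion scores are collected at the same time (concurrent) or later (predictive).

```python
# Minimal sketch: criterion-related validity estimated as a Pearson correlation
# between scores on the instrument and scores on an external criterion measure.
# The two score lists are hypothetical examples, not data from the slides.
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation coefficient of two equal-length score lists."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    ss_x = sum((a - mean_x) ** 2 for a in x)
    ss_y = sum((b - mean_y) ** 2 for b in y)
    return cov / sqrt(ss_x * ss_y)

instrument_scores = [85, 78, 92, 60, 74, 88, 69, 95]  # scores from the instrument being validated
criterion_scores  = [82, 75, 95, 58, 70, 90, 65, 93]  # scores from the external criterion measure

print(round(pearson_r(instrument_scores, criterion_scores), 2))  # close to 1.0 -> strong criterion-related validity
```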
Improving Test RELIABILITY
1. Test length. In general, a longer test is more reliable than a shorter one because a longer test samples the instructional objectives more adequately.
2. Spread of scores. The type of students taking the test can influence reliability. A group of students with heterogeneous ability will produce a larger spread of test scores than a group with homogeneous ability.
Improving Test RELIABILITY
3. Item difficulty. In general, a test composed of items of moderate or average difficulty (.30 to .70) will have a more positive influence on reliability than one composed primarily of easy or very difficult items.
4. Time limits. Adding a time factor may improve the reliability of lower-level cognitive test items. Since all students do not function at the same pace, a time factor adds another criterion to the test that causes discrimination, thus improving reliability. Teachers should not, however, arbitrarily impose a time limit. For higher-level cognitive test items, the imposition of a time limit may defeat the intended purpose of the items.
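For context, the .30 to .70 figures refer to the item difficulty index, conventionally computed in classical test theory as the proportion of examinees who answer the item correctly. The response pattern below is a hypothetical example.

```python
# Minimal sketch: item difficulty index = proportion of examinees answering the item correctly.
# 1 = correct, 0 = incorrect; the response pattern is a hypothetical example.
item_responses = [1, 0, 1, 1, 0, 1, 0, 1, 1, 0]

difficulty = sum(item_responses) / len(item_responses)
print(difficulty)  # 0.6 -> falls within the moderate .30 to .70 range mentioned above
```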
Measures of Central Tendency
• It is a single value that is used to identify the center of the data
• It is thought of as the typical value in the set of scores
• It tends to lie at the center when the scores are arranged from lowest to highest, or vice versa
Measures of Central Tendency
A. Mean
• Refers to the arithmetic average
• Easily affected by extreme scores
B. Median
• refers to the centermost score when the scores in the distribution are arranged according to magnitude (from highest to lowest or from lowest to highest)
• used when there are extreme scores
Measures of Central Tendency
C. Mode
• Refers to the score/s that occur most frequently in the score distribution
• Not affected by extreme values
Types of mode
• Unimodal- a score distribution that consists of one mode
• Bimodal- a score distribution that consists of two modes
• Trimodal- a score distribution that consists of three modes. A distribution with more than two modes is also considered multimodal.
Measures of Central Tendency
A. Find the Mean:
Scores of 10 Students who took their Mid-Term Exam
100 95 80 70 74
60 78 86 66 95
Measures of Central Tendency
A. Find the Mean:
1. Add all the scores: 804
2. Divide the sum by the total number of
examinees: 804/10
3. The quotient is the mean: 80.4
Measures of Central Tendency
B. Find the Median:
Scores of 10 Students who took their Mid-Term Exam
100 95 80 70 74
60 78 86 66 95
Measures of Central Tendency
B. Find the Median:
1. Arrange the scores: 60, 66, 70, 74, 78, 80, 86, 95, 95, 100
2. Locate the middlemost scores: 78 and 80
3. Since the number of scores is even, add the two middlemost scores and divide by two: (78 + 80)/2 = 158/2 = 79
Measures of Central Tendency
C. Find the Mode:
Scores of 10 Students who took their Mid-Term Exam
100 95 80 70 74
60 78 86 66 95
The score 95 appears twice while every other score appears only once, so the mode is 95.
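As a quick check of the three worked examples above, the sketch below recomputes the mean, median, and mode of the same ten mid-term scores with Python's built-in statistics module.

```python
# Quick check of the worked examples using Python's statistics module.
import statistics

scores = [100, 95, 80, 70, 74, 60, 78, 86, 66, 95]

print(statistics.mean(scores))    # 80.4 (sum of 804 divided by 10)
print(statistics.median(scores))  # 79.0 (average of the two middle scores, 78 and 80)
print(statistics.mode(scores))    # 95   (the only score that appears twice)
```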
Outcomes-Based Teaching and
Learning
A process of curriculum design that starts with the development of outcome statements that define what our learners should know and/or be able to do at the end of the program. The intended learning outcomes determine the subject matter, teaching-learning activities, learning resources, and assessment tasks.
Assessment in Learning
Questions
1. Which is referred to as assessment for learning?
A.Summative Assessment
B. Formative Assessment
C.Periodical Assessment
D.Self-Assessment
2. Which is referred to as assessment as
learning?
A.Summative assessment
B. Diagnostic assessment
C.Formative assessment
D. Self-assessment
3. Which is referred to as assessment
of learning?
A.Summative assessment
B. Formative assessment
C.Diagnostic assessment
D.Self-assessment
4. If the scores in formative assessment
are recorded, for which purpose?
A. For record keeping to trace student’s progress
B. For grading purposes
C. For comparison of the progress of one student
against other students
D. For the school record needed for regular reporting
5. Should formative assessment be
graded?
A. No, formative assessment just intends to find out where
students are so teacher can adjust instruction.
B. No, teaching has not begun; there is no sense in assessing for grading purposes.
C. Yes, like other forms of assessments.
D. Yes, so students will get inspired to study more
6. In Outcome-based Teaching and
Learning (OBTL), which is the basis of what
to assess?
A. Intended learning outcomes
B. Teaching-learning activities
C. Subject matter
D. The development stage of the learner
7. Which statement on Outcome-based
Teaching and Learning (OBTL) is CORRECT?
A. The subject matter determines the intended learning outcome.
B. The teaching-learning activities likewise determine the learning
outcome.
C. The assessment task determines the learning outcome.
D. The intended learning outcomes determine the subject matter, teaching-learning activities, learning resources and assessment tasks.
8. Study this multiple-choice test item then answer the question. “Which is
the capital of the Philippines?”
A. Manila B. Baguio City
C. Quezon City D. Manuel L. Quezon
Is the test item a good one?
I. No, option D is a poor distractor because it is obviously wrong.
II. No, not all the distractors are good.
III. Yes, it has four good distractors and a clear question.
A. I only
B. III only
C. II only
D. I and II
9. Here is a test item. Study it and answer the question
below.
_______ is said to be the “rice granary of the Philippines.” How can the question be improved?
A. Put the blank at the end or near the end of the sentence.
B. Specify if you are asking for a region, province etc.
C. Make it a True-False test.
D. Make it a multiple-choice test.
10. Should the scores in a pretest be recorded
and included in the compilation of grades?
A. Yes, recorded for posttest comparison purposes but not
for grading.
B. No, this creates in students’ negative attitude toward
tests.
C. Yes, recorded and also included in the compilation of
grades.
D. No. What matters is students improve in their scores in the
re-test.
11. Read and analyze this multiple-choice test then answer
the question: Who has been proclaimed as the national hero of
the Philippines?
A. Andres Bonifacio B. Bagumbayan
C. Jose Rizal D. Emilio Aguinaldo
Are all the options plausible?
A. Yes, they are all possible answers.
B. Yes, except Emilio Aguinaldo, who did not found the Katipunan.
C. No, because Bagumbayan is implausible, being a name of a place
not of a person.
D. No, because there are sectors who still insist that Bonifacio should
have been proclaimed as national hero.
12. In a matching type of test, the options are in the first column and the questions (stems) are in the second column. Is this the CORRECT way of constructing a matching type test?
A. Yes.
B. It depends on the choice of the test constructor.
C. No, the question must be in the first column and the
options in the second column.
D. If the options are longer, then they must be in the first
column.
13. Which is an example of an extended
essay?
A. On which point did Rizal not agree with Andres Bonifacio?
B. If Rizal would be alive up to today, which
comments would he give on our present
Philippine politics?
C. What did Rizal accomplish while in exile in
Dapitan?
D. What was Rizal’s last poem?
14. Which is an example of a restricted essay?
A. Why did Rizal disagree with Bonifacio with regard to armed revolution against Spain?
B. What does the professionalization of teaching imply?
C. Explain the nature-nurture issue on human
development.
D. What is the best teaching method? Justify your
answer.
15. Is the practical test in Physical
Education a form of an authentic
assessment?
A. No.
B. Yes.
C. Yes, if it is compared with a written test.
D. No, if given individually.
16. A student submits a set of pajamas personally worked on by her as a requirement for her TLE class. Under which type of assessment does this fall?
A. Diagnostic assessment
B. Formative assessment
C. Traditional assessment
D. Authentic assessment
17. A student is required to work on a PowerPoint
presentation on a given intended learning
outcome. Under which type of assessment does
the PowerPoint presentation fall?
A. Diagnostic assessment
B. Formative assessment
C. Traditional assessment
D. Authentic assessment
18. In the context of Outcome-Based Teaching-
Learning, which statement/s on assessment
is/are CORRECT?
A. If what is assessed is what was taught and is in accordance with the intended learning outcome, then the assessment task is aligned with the learning outcome.
B. If what is assessed is not what was taught and is not in accordance with the intended learning outcome, then the assessment task is aligned with the learning outcome.
C. The assessment task determines the learning outcomes and the
subject matter.
D. The subject matter determines what is to be assessed.
19. Which are the philosophical bases of traditional assessment?
I. A school’s mission is to develop useful citizens who must possess a certain body of knowledge and skills.
II. The school is entrusted to teach this body of knowledge and skills and determine
if the students have acquired those knowledge and skills by testing the students
on these knowledge and skills.
III. A school’s mission is to develop useful citizens who are capable of performing
useful tasks in the real world and so the school must assess students on tasks that
duplicate or imitate real world situations.
A. I and III
B. I only
C. II and III
D. I and II
20. Which is/are the philosophical basis/es of authentic
assessment?
I. A school’s mission is to develop useful citizens who must possess a certain body of knowledge and skills.
II. The school is entrusted to teach this body of knowledge and skills and determine
if the students have acquired those knowledge and skills by testing the students
on these knowledge and skills.
III. A school’s mission is to develop useful citizens who are capable of performing
useful tasks in the real world and so the school must assess students on tasks that
duplicate or imitate real world situations
A. III only
B. I only
C. II only
D. I and II
21. With authentic assessment as basis, which
does NOT belong?
I. Recite a poem with feeling using appropriate voice quality, facial
expression and hand gestures. Perform a skit on the importance of
a national language-Mother Tongue, Grade 3.
II. Demonstrate the generation of electricity by movement of a
magnet through a coil-Science, Grade 10.
III. Sing themes or melodic fragments of given Classical period pieces.
A. I only
B. III only
C. II only
D. none
22. Which is the basis of grading if we adopt
norm-referenced grading?
A.Performance of others
B. Performance of the upper group of the class
C.Performance of the lower group of the class
D.Intended learning outcomes
23. Which assessment tool consists of a list of specific
characteristics with a place for marking the degree to
which each characteristic is displayed? For example,
in public speaking, the characteristic “makes eye
contact” – is it done frequently, occasionally, seldom
or never.
A. Checklist
B. Scoring rubric
C. Rating scale
D. Checklist and rating scale
24. If I rate how often students exhibit behaviors
or learning skills, I will use the words Always,
Frequently, Sometimes, _________.
A. Frequently
B. Rarely
C. Never
D. None
25. The following are examples of non-test
assessments EXCEPT _____________.
A. Hand signals
B. Cartooning
C. Completion (Fill-in-the-blanks written test)
D. Games
26. When do you choose to use a holistic
rubric?
A. When a quick or gross judgment needs to be
made
B. When you want to assess each criterion
separately
C. When you find scoring several criteria
cumbersome
D. When gross judgment can equally satisfy the
analytic judgment of a product/process
27. Here is a set of scores: 20, 20, 20, 20, 21, 22,
23, 24, 24, 24, 24, 25, 26, 27, 27, 28, 28, 29, 29, 30.
Which is TRUE of the score distribution?
A. It has extremely high and low scores.
B. It is bimodal.
C. The mean is equal to the mode.
D. The median is below 15
28. Which is the basis of grading if we adopt
criterion-referenced grading?
A.Performance of others
B. Performance of the upper group of the class
C.Performance of the lower group of the class
D.Set of standards
29. Non-test assessments allow the students to
manifest their acquired knowledge and skills from the
lesson. The following are examples of non-test
assessments EXCEPT
A.Portfolio
B. Journal
C.Teacher observation
D.Multiple choice quiz
30. Which statement/s on rubrics is/are
CORRECT?
I. Rubrics help teachers teach.
II. Rubrics help students learn.
III. Rubrics help coordinate instruction and assessment.
A.I only
B. II only
C.II and III
D.I, II and III
