Assessment of Learning
By: OCAMPOS, CHARLES IVAN C.
2ND year
BEED – SNED
CONTENT
I. Educational Technology
II. Assessment of Learning
III. Statistical Measures or Tools Used in Interpreting Numerical Data
I. Educational Technology
▪ Audio-visual aids are defined as any devices used to aid
in the communication of an idea. As such, virtually
anything can be used as an audio-visual aid provided it
successfully communicates the idea or information for
which it is designed.
▪ An audio-visual aid includes still photography, motion
pictures, audio or video tape, slides, or filmstrips, prepared
individually or in combination to communicate information
or to elicit a desired audience response.
DEVICE
▪ A device is any means, other than the subject matter itself,
that is employed by the teacher in presenting the subject
matter to the learner.
Purpose of Visual Devices
1. To challenge students’
attention
2. To stimulate the imagination
and develop the mental
imagery of the pupils
3. To facilitate the
understanding of the pupils
4. To provide motivation to the
learners
5. To develop the ability to
listen
Traditional Visual Aids
1. Demonstration
2. Field trips
3. Laboratory experiments
4. Pictures, film simulations, models
5. Real objects
Classification of Devices
1. Extrinsic – used to supplement a method being used.
Example: pictures, graphs, film strips, slides, etc.
2. Intrinsic – used as part of the method or teaching procedure.
Example: pictures accompanying an article.
3. Material Devices – devices that have no bearing on the subject matter.
Example: blackboard, chalk, books, pencils, etc.
4. Mental Devices – a kind of device that is related in form and meaning to the
subject matter being presented.
Example: questions, projects, drills, lesson plans, etc.
NON-PROJECTED AUDIOVISUAL AIDS
Non-projected aids are those that do not require the use of audio-visual
equipment such as a projector and screen. These include charts, graphs,
maps, illustrations, photographs, brochures, and handouts. Charts are
commonly used almost everywhere.
A chart is a diagram which shows relationships. An organizational chart is
one of the most widely and commonly used kinds of chart.
II. ASSESSMENT OF LEARNING
It focuses on the development and utilization of assessment
tools to improve the teaching-learning process. It emphasizes
the use of testing for measuring knowledge, comprehension,
and other thinking skills. It allows the students to go through
the standard steps in test construction for quality assessment.
Students will experience how to develop rubrics for
performance-based and portfolio assessment.
Measurement
 Refers to the quantitative aspect of evaluation. It
involves outcomes that can be quantified
statistically. It can also be defined as a process of
determining and differentiating information
about the attributes or characteristics of things.
Evaluation
 Is the qualitative aspect of determining the
outcomes of learning. It involves value
judgement. Evaluation is more
comprehensive than measurement.
Test
 Consists of questions, exercises, or other devices
for measuring the outcomes of learning.
CLASSIFICATION OF TESTS
According to manner of
response
a. Oral
b. Written
According to method
of preparation
a. Subjective / essay
b. Objective
According to the nature
of answer
a. Personality tests
b. Intelligence test
c. Aptitude test
d. Achievement or summative test
e. Sociometric test
f. Diagnostic or formative test
g. Trade or vocational test
CLASSIFICATION OF TESTS
▪ Objective tests are tests which have definite answers and therefore are not
subject to personal bias.
▪ Teacher-made tests or educational tests are constructed by the teachers
based on the contents of the different subjects taught.
▪ Diagnostic tests are used to measure a student’s strengths and weaknesses,
usually to identify deficiencies in skills or performance.
▪ Formative and Summative are terms often used with evaluation, but they
may also be used with testing. Formative testing is done to monitor students’
attainment of the instructional objectives. Formative testing occurs over a period
of time and monitors student progress. Summative testing is done at the
conclusion of instruction and measures the extent to which students have
attained the desired outcomes.
▪ Standardized tests are already valid, reliable, and objective. Standardized tests are
tests for which the contents have been selected and for which norms or standards have been
established. Psychological tests and government national examinations are examples of
standardized tests.
▪ Standards or norms are the goals to be achieved, expressed in terms of the average
performance of the population tested.
▪ Criterion-referenced measure is a measuring device with a predetermined level of
success or standard on the part of the test-takers. For example, a score of 75 percent
on all the test items could be considered a satisfactory performance.
▪ Norm-referenced measure is a test that is scored on the basis of the norm or
standard level of accomplishment of the whole group taking the test. The grades of the
students are based on the normal curve of distribution. (A brief sketch of the two
interpretations follows.)
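To make the contrast concrete, here is a minimal Python sketch of the two interpretations. The 75 percent cutoff follows the example above; the class scores and the 50-item test length are assumptions made up for illustration.

```python
# Criterion-referenced vs. norm-referenced interpretation of the same raw score.
# The score list, test length, and cutoff below are illustrative assumptions.

scores = [35, 38, 40, 42, 44, 46, 47, 48, 49, 50]  # hypothetical class scores
items = 50                                          # hypothetical test length
cutoff = 0.75                                       # criterion: at least 75% correct

def criterion_referenced(score: int) -> str:
    """Judge a score against a fixed standard, ignoring how others performed."""
    return "satisfactory" if score / items >= cutoff else "not yet satisfactory"

def norm_referenced(score: int) -> float:
    """Judge a score relative to the group: percent of examinees at or below it."""
    return 100 * sum(s <= score for s in scores) / len(scores)

print(criterion_referenced(40))  # 40/50 = 80% of the items -> satisfactory
print(norm_referenced(40))       # 30.0 -> only 30% of this group scored at or below 40
```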
CRITERIA OF A GOOD EXAMINATION
A good examination must pass the following criteria:
1.Validity
- Refers to the degree to which a test measures what it is intended to
measure. It is the usefulness of the test for a given purpose.
- A valid test is always reliable. To establish the validity of a test, it must
be tried out in order to determine whether it really measures what it
intends to measure or what it purports to measure.
2. Reliability
- Pertains to the consistency with which a test measures what it is
supposed to measure.
- The test of reliability is the consistency of the results when
it is administered to different groups of individuals with
similar characteristics in different places at different times.
- Also, the results are almost similar when the test is given
to the same group of individuals on different days, and the
coefficient of correlation is not less than 0.85. (A short
correlation sketch follows.)
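As a rough sketch of how that consistency check might look in practice, the snippet below computes a Pearson correlation between two administrations of the same test and compares it with the 0.85 threshold mentioned above. The two score lists are hypothetical.

```python
from math import sqrt

# Hypothetical scores of the same group on two administrations of the same test.
first  = [48, 50, 46, 41, 37, 48, 38, 47, 49, 44]
second = [47, 49, 45, 42, 38, 47, 39, 46, 50, 43]

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = sqrt(sum((a - mean_x) ** 2 for a in x))
    sd_y = sqrt(sum((b - mean_y) ** 2 for b in y))
    return cov / (sd_x * sd_y)

r = pearson_r(first, second)
print(round(r, 3), "acceptably reliable" if r >= 0.85 else "reliability in doubt")
```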
3. Objectivity
- Is the degree to which personal bias is eliminated in the scoring of the answers. When we
refer to the quality of measurement, essentially, we mean the amount of information
contained in a score generated by the measurement.
- Measures of students’ instructional outcomes are rarely as precise as those of physical
characteristics such as height and weight.
4. Nominal Measurement
- Nominal scales are the least sophisticated; they merely classify objects or events by assigning
numbers to them.
- These numbers are arbitrary and imply no quantification, but the categories must be
mutually exclusive and exhaustive.
- For example, one could designate baseball positions by assigning the pitcher
the numeral 1; the catcher, 2; the first baseman, 3; the second baseman, 4; and so on.
These assignments are arbitrary, and no arithmetic on the numbers is meaningful. For example, 1 plus 2
does not equal 3, because a pitcher plus a catcher does not equal a first baseman.
5. Ordinal Measurement
-Ordinal scales classify, but they also assign rank order. An example of ordinal
measurement is ranking individuals in a class according to their test scores.
-Students’ scores could be ordered from first, second, third, and so forth to the lowest
score. Such a scale gives more information than nominal measurement, but it still
has limitations.
- The units of ordinal measurement are most likely unequal. The number of points separating the first
and second students probably does not equal the number separating the fifth and sixth
students.
6. Interval Measurement
-In order to be able to add and subtract scores, we use interval scales, sometimes called
equal interval or equal unit measurement.
- This measurement scale contains the nominal and ordinal properties and is also
characterized by equal units between score points.
-Examples include thermometers and calendar years.
7. Ratio Measurement
- The most sophisticated type of measurement includes all the preceding properties,
but in a ratio scale the zero point is not arbitrary; a score of zero indicates the
absence of what is being measured.
- For example, if a person’s wealth equaled zero, he or she would have no wealth at all.
This is unlike a social studies test, where missing every item (i.e., receiving a score of
zero) does not necessarily mean the student has no knowledge of the subject at all.
- Ratio measurement is rarely achieved in educational assessment, in either cognitive or
affective areas.
8. Norm-Referenced and Criterion Referenced Measurement
- When we contrast norm-referenced measurement (or testing) with criterion-
referenced measurement, we are basically referring to two different ways
of interpreting information. However, Popham (1988, page 135) points out that certain
characteristics tend to go with each type of measurement, and it is unlikely that results
of norm-referenced tests are interpreted in criterion-referenced ways, and vice versa.
Norm-Referenced Interpretation
 Norm-referenced interpretation historically has been used in
education, and norm-referenced tests continue to comprise a
substantial portion of the measurement in today’s schools.
 It stems from the desire to differentiate among individuals, or
to discriminate among the individuals of some defined group,
on whatever is being measured. In norm-referenced
measurement, an individual’s score is interpreted by comparing
it to the scores of a defined group, often called the normative
group. Norms represent the scores earned by one or more
groups of students who have taken the test.
Criterion-Referenced Interpretation
 The term criterion-referenced has developed with a dual
meaning. On one hand, it means referencing an individual’s
performance to some criterion that is a defined performance
level.
 The individual’s score is interpreted in absolute rather than
relative terms. The criterion, in this situation, means some level
of specified performance that has been determined
independently of how others might perform.
Distinctions between Norm-Referenced and Criterion-Referenced Tests
Although interpretations, not characteristics, provide
the distinction between norm-referenced and
criterion-referenced tests, the two types do tend to
differ in some ways. Norm-referenced tests are usually
more general and comprehensive and cover a large
domain of content and learning tasks. They are used
for survey testing, although this is not their exclusive
use.
Criterion-Referenced Tests
 Criterion-referenced tests focus on a specific group of
learner behaviors. To show the contrast, consider an example.
Arithmetic skills represent a general and broad category of
student outcomes and would likely be measured by a
norm-referenced test.
 On the other hand, behaviors such as solving addition problems
with two five-digit numbers or determining the multiplication
products of three- and four-digit numbers are much more specific
and may be measured by criterion-referenced tests.
Norm-Referenced Tests
 A norm-referenced interpretation is a relative interpretation
based on an individual’s position with respect to some group,
often called the normative group. Norms consist of the scores,
usually in some form of descriptive statistics, of the
normative group.
 Achievement Test as Example
- Most standardized achievement tests, especially those covering
several skills and academic areas, are primarily designed for
norm-referenced interpretations.
POINTS TO BE CONSIDERED IN PREPARING A
TEST
1. Are the instructional objectives clearly
defined?
2. What knowledge, skills, and attitudes do you
want to measure?
3. Did you prepare a table of specifications?
4. Did you formulate well-defined and clear test
items?
5. Did you employ correct English in writing the
items?
6. Did you avoid giving clues to the correct
answer?
7. Did you test the important ideas rather than
the trivial?
8. Did you adapt the test’s difficulty to your
students’ ability?
9. Did you avoid using textbook jargon?
10. Did you cast the items in positive form?
11. Did you prepare a scoring key?
12. Does each item have a single correct answer?
13. Did you review your items?
STAGES IN TEST CONSTRUCTION
I. Planning the Test
a. Determining the Objectives
b. Preparing the Table of Specifications
c. Selecting the Appropriate Item Format
d. Writing the Test Items
e. Editing the Test Items
II. Trying Out the Test
a. Administering the First Try-out, then Item Analysis
b. Administering the Second Try-out, then Item Analysis
c. Preparing the Final Form of the Test
III. Establishing Test Validity
IV. Establishing Test Reliability
V. Interpreting the Test Score
MAJOR CONSIDERATIONS IN TEST
CONSTRUCTION
The following are the major considerations in test construction:
Type of Test
▪ Our usual idea of testing is an in-class test that is administered by the
teacher. However, there are many variations on this theme: group test,
individual test, written test, oral test, speed test, power test, and pretest
and posttest. Each of these has different characteristics that must be
considered when the test is planned.
▪ If it is a take-home test rather than an in-class test, how do you make sure
that students work independently, have equal access to sources and
resources, or spend a sufficient but not enormous amount of time on the
task? If it is a pretest, should it exactly match the posttest so that a gain
score can be computed, or should the pretest contain items that are
diagnostic of prerequisite skills and knowledge? If it is an achievement test,
should partial credit be awarded, should there be penalties for guessing, or
should points be deducted for grammar and spelling errors?
Test Length
▪ A major decision in test planning is how many items
should be included on the test. There should be enough to
cover the content adequately, but the length of the class
period or the attention span and fatigue limits of the
students usually restrict the test length. Decisions about
test length are usually based on practical constraints more
than on theoretical considerations.
Item Formats
▪ Determining what kinds of items are included on the test is a
major decision. Should they be objectively scored formats
such as multiple choice or matching type? Should they
require the students to organize their own thoughts through
short-answer or essay formats? These are important questions
that can be answered only by the teacher in terms of the
local context, his or her students, his or her classroom, and
the specific purpose of the test. Once the planning decision
is made, the item writing begins. This task is often the one
most feared by beginning test constructors. However, the
procedures are more common sense than formal rules.
GENERAL PRINCIPLES IN CONSTRUCTING
DIFFERENT TYPES OF TEST
1. The test items should be selected very carefully. Only important facts should be
included.
2. The test should have extensive sampling of items.
3. The test items should be carefully expressed in simple, clear, definite, and
meaningful sentences.
4. There should be only one possible correct response for each test item.
5. Each item should be independent. Leading clues to other items should be
avoided.
6. Sentences should not be lifted from books; this is to encourage thinking and
understanding.
7. The first-person personal pronouns I and we should not be used.
8. Various types of test items should be made to avoid monotony.
9. Majority of the test items should be of moderate difficulty. Few difficult and few
easy items should be included.
10. The test items should be arranged in an ascending order of difficulty. Easy items
should be at the beginning to encourage the examinee to pursue the test and the
most difficult items should be at the end.
11. Clear, concise, and complete directions should precede all types of tests.
Sample test items may be provided to show expected responses.
12. Items which can be answered by previous experience alone without
knowledge of the subject matter should not be included.
13. Catchy words should not be used in the test items.
14.Test items must be based upon the objectives of the course and upon
the course content.
15. The test should measure the degree of achievement or determine the
difficulties of the learners.
16.The test should emphasize ability to apply and use facts as well as
knowledge of facts.
17. The test should be of such length that it can be completed within the
time allotted by all or nearly all of the pupils. The teacher should take
the test herself to determine its approximate time allotment.
18. Rules governing good language expression, grammar, spelling,
punctuation, and capitalization should be observed at all times.
19. Information on how scoring will be done should be provided.
POINTERS TO BE OBSERVED IN CONSTRUCTING
AND SCORING THE DIFFERENT TYPES OF TESTS
A. RECALL TYPES
1. Simple recall type
a) This type consists of questions calling for a single word or expression as an answer.
b) Items usually begin with who, where, when, and what.
c) Score is the number of correct answers.
2. Completion type
a) Only important words or phrases should be omitted to avoid confusion.
b) Blanks should be of equal lengths.
c) The blank, as much as possible, is placed near or at the end of the sentence.
d) The articles a, an, and the should not be provided before the omitted word or phrase, to avoid giving clues to the answers.
e) Score is the number of correct answers.
3. EnumerationType
a) The exact number of expected answers should be stated.
b) Blanks should be of equal length.
c) Score is the number of correct answers.
4. Identification type
a) The items should make an examinee think of a word, number, or group of words
that would complete the statement or answer the problem.
b) Score is the number of correct answers.
B. RECOGNITION TYPES
1. True-false or alternate-response type
a) Declarative sentences should be used.
b) The number of “true” and “false” items should be more or less equal.
c) The truth or falsity of the sentence should not be too evident.
d) Negative statements should be avoided.
e) The “modified true-false” type is preferable to the plain “true-false”.
f) In arranging the items, avoid the regular recurrence of “true” and “false” statements.
g) Avoid using specific determiners like all, always, never, none, nothing, most, often, some, etc., and avoid weak statements such as
may, sometimes, as a rule, in general, etc.
h) Minimize the use of qualitative terms like few, great, many, more, etc.
i) Avoid leading clues to answers at all times.
j) Score is the number of correct answers in “modified true-false” and rights minus wrongs in “plain true-false” (see the scoring sketch below).
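A minimal sketch of the two scoring rules in item (j); the answer key and the examinee’s responses below are made up for illustration.

```python
# Scoring a true-false test two ways, per item (j) above.
# The key and the responses are hypothetical.
key       = ["T", "F", "T", "T", "F", "F", "T", "F"]
responses = ["T", "F", "F", "T", "F", "T", "T", "F"]  # one examinee's answers

rights = sum(r == k for r, k in zip(responses, key))
wrongs = sum(r != k for r, k in zip(responses, key))

modified_tf_score = rights           # modified true-false: count correct answers only
plain_tf_score    = rights - wrongs  # plain true-false: rights minus wrongs
print(modified_tf_score, plain_tf_score)  # 6 and 4
```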
2. Yes-No type
a) The items should be in interrogative sentences.
b) The same rules as in true-false are applied.
3. Multiple-response type
a) There should be three to five choices. The number of choices used in the first item
should be the same as the number of choices in all the items of this type of test.
b) The choices should be numbered or lettered so that only the number or letter can be
written on blank provided.
c) If the choices are figures, they should be arranged in ascending order.
d) Avoid the use of “a” or “an” as the last word prior to the listing of the responses.
e. Random occurrence of responses should be employed.
f. The choices, as much as possible, should be at the end of the
statements.
g. The choices should be related in some way or should belong to the
same class.
h. Avoid the use of “none of these” as one of the choices.
i. Score is the number of correct answers.
4. Best answer type
a. There should be three to five choices, all of which are right
but vary in their degree of merit, importance, or
desirability.
b. The other rules for multiple-response items are applied
here.
c. Score is the number of correct answers.
5. Matching Type
a. There should be two columns. Under “A” are the stimuli, which should be longer and more
descriptive than the responses under column “B”. The response may be a word, a phrase, a
number, or a formula.
b. The stimuli under column “A” should be numbered and the responses under column “B” should be
lettered. Answers will be indicated by letters only on lines provided in column “A”.
c. The number of pairs usually should not exceed twenty items. Fewer than ten introduces chance
elements. Twenty pairs may be used, but more than twenty is decidedly wasteful of time.
d. The number of responses in column “B” should be two or more than the number of items in
column “A” to avoid guessing.
e. Only one correct matching for each item should be possible.
f. Matching sets should be neither too long nor too short.
g. All items should be on the same page to avoid turning pages in the process of matching pairs.
h. Score is the number of correct answers.
C. Essay Type of Examinations
1. Common types of essay questions (the types are related to the purposes for
which the essay examinations are to be used):
a. Comparison of two things
b. Explanations of the use or meaning of a statement or passage.
c. Analysis
d. Decisions for or against
e. Discussion
2. How to construct essay examinations.
a. Determine the objectives or essentials for each question to be evaluated.
b. Phrase questions in simple, clear, and concise language.
c. Suit the length of the questions to the time available for answering the essay
examination. The teacher should try to answer the test herself.
d. Scoring:
e. Have a model answer in advance.
f. Indicate the number of points for each question.
g. Score a point for each essential.
Advantages and Disadvantages of the
Objective Type of Tests
Advantages
a. The objective test is free from personal bias in scoring.
b. It is easy to score. With a scoring key, the test can be corrected by different individuals without
affecting the accuracy of the grades given.
c. It has high validity because it is comprehensive, with a wide sampling of essentials.
d. It is less time-consuming, since many items can be answered in a given time.
e. It is fair to students, since the slow writers can accomplish the test as fast as the fast writers.
Disadvantages
a. It is difficult to construct and requires more time to prepare.
b. It does not afford the students the opportunity to train in self-expression and thought organization.
c. It cannot be used to test ability in theme writing or journalistic writing.
ADVANTAGES AND DISADVANTAGES
OF THE ESSAY TYPE OF TESTS
Advantages
a. The essay examination can be used in practically all subjects of the school curriculum.
b. It trains students for thought organization and self-expression.
c. It affords students opportunities to express their originality and independence of thinking.
d. Only the essay test can be used in some subjects, like composition writing and journalistic
writing, which cannot be tested by the objective type of test.
e. The essay examination measures higher mental abilities like comparison, interpretation,
criticism, defence of opinion, and decision.
f. The essay test is easily prepared.
g. It is inexpensive.
Disadvantages
a. The limited sampling of items makes the test an unreliable measure of
achievement or ability.
b. Questions usually are not well prepared.
c. Scoring is highly subjective due to the influence of the corrector’s
personal judgment.
d. Grading of the essay test is an inaccurate measure of pupils’
achievement due to the subjectivity of scoring.
III. STATISTICAL MEASURES OR TOOLS
USED IN INTERPRETING NUMERICAL DATA
Frequency Distributions
▪ A simple, common-sense technique for describing a set of test scores is through the use of a
frequency distribution. A frequency distribution is merely a listing of the possible score
values and the number of persons who achieved each score. Such an arrangement presents
the scores in a simpler and more understandable manner than merely listing all of the
separate scores. Consider a specific set of scores to clarify these ideas.
▪ A set of scores for a group of 25 students who took a 50-item test is listed in Table 1. It is
easier to analyse the scores if they are arranged in a simple frequency distribution. (The
frequency distribution for the same set of scores is given in Table 2; a short tallying sketch
follows it.) The steps that are involved in creating the frequency distribution are:
▪ First, list the possible score values in rank order, from highest to lowest. Then a second
column indicates the frequency or number of persons who received each score. For
example, three students received a score of 47, two received 40, and so forth. There is no
need to list the score values below the lowest score that anyone received.
Table 1. Scores of 25 Students on a 50-Item Test
Student Score Student Score
A 48 N 43
B 50 O 47
C 46 P 48
D 41 Q 42
E 37 R 44
F 48 S 38
G 38 T 49
H 47 U 34
I 49 V 35
J 44 W 47
K 48 X 40
L 49 Y 48
M 40
Table 2. Frequency Distribution of the 25 Scores of Table 1
Score Frequency Score Frequency
50 1 41 1
49 3 40 2
48 5 39 0
47 3 38 2
46 1 37 1
45 0 36 0
44 2 35 1
43 1 34 1
42 1
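As referenced above, here is a minimal Python sketch that tallies the Table 1 scores into the frequency distribution of Table 2 using collections.Counter.

```python
from collections import Counter

# The 25 scores from Table 1.
scores = [48, 50, 46, 41, 37, 48, 38, 47, 49, 44, 48, 49, 40,
          43, 47, 48, 42, 44, 38, 49, 34, 35, 47, 40, 48]

counts = Counter(scores)
# List score values from highest to lowest observed score, printing each
# frequency (zeros included), which reproduces Table 2.
for value in range(max(scores), min(scores) - 1, -1):
    print(value, counts.get(value, 0))
```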
• When there is a wide range of scores in a frequency distribution, the
distribution can be quite long, with a lot of zeros in the column of
frequencies. Such a frequency distribution can make interpretation of the
scores difficult and confusing. A grouped frequency distribution would be
more appropriate in this kind of situation. Groups of score values are listed
rather than each separate possible score value.
• If we were to change the frequency distribution in Table 2 into a grouped
frequency distribution, we might choose intervals such as 48-50, 45-47, and
so forth. The frequency corresponding to interval 48-50 would be 9 (1+3+5).
The choice of the width of the interval is arbitrary, but it must be the same for
all intervals. In addition, it is a good idea to have an odd-numbered interval
width (we used 3 above) so that the midpoint of the interval is a whole
number. This strategy will simplify subsequent graphs and descriptions of the
data. The grouped frequency distribution is presented in Table 3, and a short
grouping sketch follows it.
Table 3. Grouped Frequency Distribution
Score Interval Frequency
48-50 9
45-47 4
42-44 4
39-41 3
36-38 3
33-35 2
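The grouping step can be sketched in Python as well; the interval width of 3 and the 48-50 top interval match the text.

```python
from collections import Counter

scores = [48, 50, 46, 41, 37, 48, 38, 47, 49, 44, 48, 49, 40,
          43, 47, 48, 42, 44, 38, 49, 34, 35, 47, 40, 48]
width = 3        # odd width, so each interval has a whole-number midpoint
top = 50         # upper limit of the highest interval (48-50)

# Map each score to the index of its interval, counting downward from the top.
grouped = Counter((top - s) // width for s in scores)
for i in sorted(grouped):
    upper = top - i * width
    lower = upper - width + 1
    print(f"{lower}-{upper}: {grouped[i]}")   # reproduces Table 3
```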
Frequency distributions summarize sets of test scores by listing the number of
people who received each test score. All of the test scores can be listed separately, or
the scores can be grouped in a grouped frequency distribution.
MEASURES OF CENTRAL TENDENCY
• Frequency distributions are helpful for indicating the shape of a
distribution of scores, but we need more information than the shape to
describe a distribution adequately. We need to know where on the scale of
measurement a distribution is located and how the scores are dispersed in the
distribution. For the former, we compute measures of central tendency, and
for the latter, we compute measures of dispersion. Measures of central
tendency are points on the scale of measurement, and they are representative
of how the scores tend to average. There are three commonly used measures of
central tendency: the mean, the median, and the mode, but the mean is by far the
most widely used.
The Mean
• The mean of a set of scores is the arithmetic average. It is found by summing the
scores and dividing the sum by the number of scores. The mean is the most
commonly used measure of central tendency because it is easily understood and
is based on all of the scores in the set; hence, it summarizes a lot of information.
The formula for the mean is as follows: Mean = ΣX / N, where ΣX is the sum of
the scores and N is the number of scores.
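A one-line Python check of the formula on the Table 1 scores (the value 44 is used again later in the text):

```python
# Mean of the 25 scores from Table 1: sum of scores divided by number of scores.
scores = [48, 50, 46, 41, 37, 48, 38, 47, 49, 44, 48, 49, 40,
          43, 47, 48, 42, 44, 38, 49, 34, 35, 47, 40, 48]

mean = sum(scores) / len(scores)
print(mean)   # 44.0
```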
The Median
• Another measure of central tendency is the median, which is the point that
divides the distribution in half; that is, half of the scores fall above the
median and half of the scores fall below it.
• When there are only a few scores, the median can often be found by
inspection. If there is an odd number of scores, the middle score is the
median. When there is an even number of scores, the median is halfway
between the two middle scores. However, when there are tied scores in
the middle of the distribution, or when the scores are in a frequency distribution,
the median may not be so obvious.
• Consider again the frequency distribution in Table 2. There were 25 scores in
the distribution, so the middle score should be the median. A
straightforward way to find this median is to augment the frequency
distribution with a column of cumulative frequencies.
• Cumulative frequencies indicate the number of scores at or below each
score. Table 4 indicates the cumulative frequencies for the data in Table 2.
Table 4. Frequency Distribution, Cumulative Frequencies for
the Scores ofTable 2
Score Frequency Cumulative Frequency
50 1 25
49 3 24
48 5 21
47 3 16
46 1 13
45 0 12
44 2 12
43 1 10
42 1 9
41 1 8
40 2 7
39 0 5
38 2 5
37 1 3
36 0 2
35 1 2
34 1 1
For example, 7 people scored at or below a score of 40, and 21 persons scored
at or below a score of 48.
 To find the median, we need to locate the middle score in the cumulative
frequency column, because this score is the median. Since there are 25
scores in the distribution, the middle one is the 13th, a score of 46. Thus,
46 is the median of this distribution; half of the people scored above 46
and half scored below it.
 When there are ties in the middle of the distribution, there may be a need
to interpolate between scores to get the exact median. However, such
precision is not needed for most classroom tests. The whole number
closest to the median is usually sufficient. (A short sketch of this
cumulative-frequency method follows.)
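A minimal sketch of locating the median by accumulating frequencies from the bottom of the scale, mirroring Table 4:

```python
from collections import Counter

scores = [48, 50, 46, 41, 37, 48, 38, 47, 49, 44, 48, 49, 40,
          43, 47, 48, 42, 44, 38, 49, 34, 35, 47, 40, 48]
counts = Counter(scores)

# Walk up the score scale, accumulating frequencies until we reach the
# middle position (the 13th score out of 25), as in Table 4.
middle = (len(scores) + 1) // 2
cumulative = 0
for value in sorted(counts):
    cumulative += counts[value]
    if cumulative >= middle:
        print("median =", value)   # median = 46
        break
```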
The Mode
• The measure of central tendency that is the easiest to find is the mode. The mode is the most frequently
occurring score in the distribution. The mode of the scores in Table 2 is 48. Five people had scores of 48, and
no other score occurred as often.
• Each of these three measures of central tendency – the mean, the median, and the mode – represents a legitimate
definition of “average” performance on this test. However, each does provide different information. The
arithmetic average was 44; half of the people scored at or below 46; and more people received 48 than any
other score.
• There are some distributions in which all three measures of central tendency are equal, but more
often than not they will be different. The choice of which measure of central tendency is best will differ
from situation to situation. The mean is used most often, perhaps because it includes information from
all of the scores.
• When a distribution has a small number of very extreme scores, though, the median may be a better
definition of central tendency. The mode provides the least information and is used infrequently as an
“average”. The mode can be used with nominal scale data, just as an indicator of the most frequently
appearing category. The mean, the median, and the mode all describe central tendency:
• The mean is the arithmetic average.
• The median divides the distribution in half.
• The mode is the most frequent score.
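For routine work, Python’s standard statistics module computes all three directly; applied to the Table 1 scores it reproduces the values quoted above.

```python
import statistics

scores = [48, 50, 46, 41, 37, 48, 38, 47, 49, 44, 48, 49, 40,
          43, 47, 48, 42, 44, 38, 49, 34, 35, 47, 40, 48]

print(statistics.mean(scores))    # 44
print(statistics.median(scores))  # 46
print(statistics.mode(scores))    # 48
```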
MEASURES OF DISPERSION
Measures of central tendency are useful for summarizing average performance, but
they tell us nothing about how the scores are distributed or “spread out”. Two
distributions may have the same average yet differ in other ways. One of the
distributions may have the scores tightly clustered around the average, and the other
distribution may have scores that are widely separated. As you may have anticipated,
there are descriptive statistics that measure dispersion, which are also called measures
of variability. These measures indicate how spread out the scores tend to be.
The Range
The range indicates the difference between the highest and lowest scores in the
distribution. It is simple to calculate, but it provides limited information. We subtract
the lowest from the highest score and add 1 so that we include both scores in the
spread between them. For the scores of Table 2, the range is 50 – 34 + 1 = 17.
A problem with using the range is that only the two most extreme scores are used in
the computation. There is no indication of the spread of scores between the highest
and lowest. Measures of dispersion that take into consideration every score in the
distribution are the variance and the standard deviation. The standard deviation is
used a great deal in interpreting scores from standardized tests.
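The inclusive range of the Table 1 scores, in one line of Python:

```python
scores = [48, 50, 46, 41, 37, 48, 38, 47, 49, 44, 48, 49, 40,
          43, 47, 48, 42, 44, 38, 49, 34, 35, 47, 40, 48]

inclusive_range = max(scores) - min(scores) + 1   # 50 - 34 + 1
print(inclusive_range)                            # 17
```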
The Variance
The variance measures how widely the scores in the distribution are spread about the
mean. In other words, the variance is the average squared difference between the
scores and the mean. As a formula, it looks like this: S² = Σ(X – Mean)² / N, where X
stands for each score and N for the number of scores.
The computation of the variance for the scores of Table 1 is illustrated in Table 5. The
data for students K through V are omitted to save space, but these values are
included in the column totals and in the computation.
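Since Table 5 itself is not reproduced here, the same computation can be sketched directly in Python; it yields the 22.8 reported below.

```python
# Population variance of the Table 1 scores: average squared deviation from the mean.
scores = [48, 50, 46, 41, 37, 48, 38, 47, 49, 44, 48, 49, 40,
          43, 47, 48, 42, 44, 38, 49, 34, 35, 47, 40, 48]

mean = sum(scores) / len(scores)                                # 44.0
variance = sum((x - mean) ** 2 for x in scores) / len(scores)   # S^2
print(round(variance, 1))                                       # 22.8
```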
The Standard Deviation
The standard deviation also indicates how spread
out the scores are, but it is expressed in the same
units as the original scores. The standard
deviation is computed by finding the square root
of the variance:
S = √S²
For the data in Table 1, the variance is 22.8. The standard
deviation is √22.8, or 4.77.
The scores of most norm groups have the shape of a “normal
distribution” – a symmetrical, bell-shaped distribution with which
most people are familiar. With a normal distribution, about 95
percent of the scores are within two standard deviations of the
mean.
Even when scores are not normally distributed, most of the
scores will be within two standard deviations of the mean. In
the example, the mean minus two standard deviations is 34.46,
and the mean plus two standard deviations is 53.54. Therefore,
only one score is outside of this interval; the lowest score, 34, is
slightly more than two standard deviations below the mean.
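A short sketch that computes the standard deviation and checks the two-standard-deviation interval quoted above (the interval uses the rounded value 4.77, as in the text):

```python
from math import sqrt

scores = [48, 50, 46, 41, 37, 48, 38, 47, 49, 44, 48, 49, 40,
          43, 47, 48, 42, 44, 38, 49, 34, 35, 47, 40, 48]

mean = sum(scores) / len(scores)
sd = round(sqrt(sum((x - mean) ** 2 for x in scores) / len(scores)), 2)  # 4.77

low, high = mean - 2 * sd, mean + 2 * sd
outside = [x for x in scores if x < low or x > high]
print(sd, round(low, 2), round(high, 2))  # 4.77 34.46 53.54
print(outside)                            # [34] -- only the lowest score falls outside
```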
The usefulness of the standard deviation becomes apparent when scores from
different tests are compared. Suppose that two tests are given to the same class, one
on fractions and the other on reading comprehension. The fractions test has a mean of 30
and a standard deviation of 8; the reading comprehension test has a mean of 60 and a
standard deviation of 10.
If Ann scored 38 on the fractions test and 55 on the reading comprehension test, it
appears from the raw scores that she did better in reading than in fractions, because 55
is greater than 38. Relative to the two class distributions, however, her fractions score is
one standard deviation above the mean, while her reading score is half a standard
deviation below the mean, so she actually performed better in fractions. (A short
sketch of this comparison follows.)
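The comparison can be made explicit by converting each raw score into a standard (z) score, i.e., the number of standard deviations above or below the class mean; this minimal sketch uses the means and standard deviations given above.

```python
def z_score(raw: float, mean: float, sd: float) -> float:
    """Standard deviations above (+) or below (-) the class mean."""
    return (raw - mean) / sd

fractions_z = z_score(38, mean=30, sd=8)    # +1.0: one SD above the fractions mean
reading_z   = z_score(55, mean=60, sd=10)   # -0.5: half an SD below the reading mean
print(fractions_z, reading_z)               # Ann did relatively better in fractions
```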
Descriptive statistics that indicate dispersion are the range, the variance, and the
standard deviation.
The range is the difference between the highest and lowest scores in the distribution
plus one.
The standard deviation is a unit of measurement that shows by how much the
separate scores tend to differ from the mean.
The variance is the square of the standard deviation. Most scores are within two
standard deviations from the mean.
Graphing Distributions
 A graph of a distribution of test scores is often better understood
than is the frequency distribution or a mere table of numbers.
 The general pattern of scores, as well as any unique characteristics
of the distribution, can be seen easily in simple graphs. There are
several kinds of graphs that can be used, but a simple bar graph, or
histogram, is as useful as any.
 The general shape of the distribution is clear from the graph. Most
of the scores in this distribution are high, at the upper end of the
graph.
 Such a shape is quite common for the scores of classroom tests.
 A normal distribution has most of the test scores in the middle of the
distribution and progressively fewer scores toward the extremes. The
scores of norm groups are seldom graphed, but they could be if we
were concerned about seeing the specific shape of the distribution
of scores.
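As a minimal illustration of such a graph, the sketch below prints a simple text histogram of the Table 2 frequency distribution, one asterisk per student at each score value; the high-scoring shape described above is easy to see.

```python
from collections import Counter

scores = [48, 50, 46, 41, 37, 48, 38, 47, 49, 44, 48, 49, 40,
          43, 47, 48, 42, 44, 38, 49, 34, 35, 47, 40, 48]
counts = Counter(scores)

# One row per possible score value; bar length equals the frequency.
for value in range(min(scores), max(scores) + 1):
    print(f"{value:2d} | {'*' * counts.get(value, 0)}")
```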
Assessment of learning and educational technology ed   09 ocampos

More Related Content

What's hot

EDUCATIONAL TECHNOLOGY AND ASSESSMENT OF LEARNING
EDUCATIONAL TECHNOLOGY AND ASSESSMENT OF LEARNINGEDUCATIONAL TECHNOLOGY AND ASSESSMENT OF LEARNING
EDUCATIONAL TECHNOLOGY AND ASSESSMENT OF LEARNING
ZiloVinRoseAndus
 
Measurement, evaluation and examination
Measurement, evaluation and  examinationMeasurement, evaluation and  examination
Measurement, evaluation and examination
Sohel Ahmed
 
Testing, Measurement and Evaluation
Testing, Measurement and EvaluationTesting, Measurement and Evaluation
Testing, Measurement and Evaluation
Anita Anwer Ali
 
MEASUREMENT AND EVALUATION
MEASUREMENT AND EVALUATIONMEASUREMENT AND EVALUATION
MEASUREMENT AND EVALUATIONTawanda Shonhiwa
 
Achievement tests
Achievement testsAchievement tests
Achievement testsManu Sethi
 
Assessment of learning ( Anna Marie Pajara
Assessment of learning ( Anna Marie PajaraAssessment of learning ( Anna Marie Pajara
Assessment of learning ( Anna Marie Pajara
CherryDangoy
 
Educational Measurement
Educational MeasurementEducational Measurement
Educational Measurement
AJ Briones
 
Educational measurement, assessment and evaluation
Educational measurement, assessment and evaluationEducational measurement, assessment and evaluation
Educational measurement, assessment and evaluationBoyet Aluan
 
measurement assessment and evaluation
measurement assessment and evaluationmeasurement assessment and evaluation
measurement assessment and evaluation
alizia54
 
Students Evaluation and Examination Methods
Students  Evaluation and Examination MethodsStudents  Evaluation and Examination Methods
Students Evaluation and Examination Methods
Ahmed-Refat Refat
 
Assessment
AssessmentAssessment
Assessment
AnshuDembla
 
Educational Technology and Assessment of Learning
Educational Technology and Assessment of LearningEducational Technology and Assessment of Learning
Educational Technology and Assessment of Learning
RacelLove
 
Educational Assessment and Evoluation (8602)
Educational Assessment and Evoluation (8602)Educational Assessment and Evoluation (8602)
Educational Assessment and Evoluation (8602)
Saad ID ReEs
 
Concept and nature of measurment and evaluation (1)
Concept and nature of measurment and evaluation (1)Concept and nature of measurment and evaluation (1)
Concept and nature of measurment and evaluation (1)
dheerajvyas5
 
Concept of Test, Measurement, Assessment and Evaluation
Concept of Test, Measurement, Assessment and Evaluation Concept of Test, Measurement, Assessment and Evaluation
Concept of Test, Measurement, Assessment and Evaluation
HadeeqaTanveer
 
Test and mesurment
Test and mesurmentTest and mesurment
Test and mesurment
Tahir Ramzan Bhat
 
Assessment of Learning
Assessment of LearningAssessment of Learning
Assessment of Learning
jesselmaeugmad
 
Evaluation in Teaching learning process
Evaluation in Teaching learning processEvaluation in Teaching learning process
Evaluation in Teaching learning process
Enu Sambyal
 
Evaluation and measurement
Evaluation and measurementEvaluation and measurement
Evaluation and measurement
Pratibha Srivastava
 

What's hot (19)

EDUCATIONAL TECHNOLOGY AND ASSESSMENT OF LEARNING
EDUCATIONAL TECHNOLOGY AND ASSESSMENT OF LEARNINGEDUCATIONAL TECHNOLOGY AND ASSESSMENT OF LEARNING
EDUCATIONAL TECHNOLOGY AND ASSESSMENT OF LEARNING
 
Measurement, evaluation and examination
Measurement, evaluation and  examinationMeasurement, evaluation and  examination
Measurement, evaluation and examination
 
Testing, Measurement and Evaluation
Testing, Measurement and EvaluationTesting, Measurement and Evaluation
Testing, Measurement and Evaluation
 
MEASUREMENT AND EVALUATION
MEASUREMENT AND EVALUATIONMEASUREMENT AND EVALUATION
MEASUREMENT AND EVALUATION
 
Achievement tests
Achievement testsAchievement tests
Achievement tests
 
Assessment of learning ( Anna Marie Pajara
Assessment of learning ( Anna Marie PajaraAssessment of learning ( Anna Marie Pajara
Assessment of learning ( Anna Marie Pajara
 
Educational Measurement
Educational MeasurementEducational Measurement
Educational Measurement
 
Educational measurement, assessment and evaluation
Educational measurement, assessment and evaluationEducational measurement, assessment and evaluation
Educational measurement, assessment and evaluation
 
measurement assessment and evaluation
measurement assessment and evaluationmeasurement assessment and evaluation
measurement assessment and evaluation
 
Students Evaluation and Examination Methods
Students  Evaluation and Examination MethodsStudents  Evaluation and Examination Methods
Students Evaluation and Examination Methods
 
Assessment
AssessmentAssessment
Assessment
 
Educational Technology and Assessment of Learning
Educational Technology and Assessment of LearningEducational Technology and Assessment of Learning
Educational Technology and Assessment of Learning
 
Educational Assessment and Evoluation (8602)
Educational Assessment and Evoluation (8602)Educational Assessment and Evoluation (8602)
Educational Assessment and Evoluation (8602)
 
Concept and nature of measurment and evaluation (1)
Concept and nature of measurment and evaluation (1)Concept and nature of measurment and evaluation (1)
Concept and nature of measurment and evaluation (1)
 
Concept of Test, Measurement, Assessment and Evaluation
Concept of Test, Measurement, Assessment and Evaluation Concept of Test, Measurement, Assessment and Evaluation
Concept of Test, Measurement, Assessment and Evaluation
 
Test and mesurment
Test and mesurmentTest and mesurment
Test and mesurment
 
Assessment of Learning
Assessment of LearningAssessment of Learning
Assessment of Learning
 
Evaluation in Teaching learning process
Evaluation in Teaching learning processEvaluation in Teaching learning process
Evaluation in Teaching learning process
 
Evaluation and measurement
Evaluation and measurementEvaluation and measurement
Evaluation and measurement
 

Similar to Assessment of learning and educational technology ed 09 ocampos

EDUCATIONAL TECHNOLOGY AND ASSESSMENT OF LEARNING
EDUCATIONAL TECHNOLOGY AND ASSESSMENT OF LEARNINGEDUCATIONAL TECHNOLOGY AND ASSESSMENT OF LEARNING
EDUCATIONAL TECHNOLOGY AND ASSESSMENT OF LEARNING
ZiloVinRoseAndus
 
ASSESSMENT OF LEARNING
ASSESSMENT OF LEARNINGASSESSMENT OF LEARNING
ASSESSMENT OF LEARNING
WendyAngeliAcero
 
Assessment, Mearurements, Evaluation.pptx
Assessment, Mearurements, Evaluation.pptxAssessment, Mearurements, Evaluation.pptx
Assessment, Mearurements, Evaluation.pptx
TanveerIqbalIqbal
 
Measurement & Evaluation pptx
Measurement & Evaluation pptxMeasurement & Evaluation pptx
Measurement & Evaluation pptx
Aliimtiaz35
 
Types of test
Types of testTypes of test
Types of test
Shams ud din Pandrani
 
Unit 2.pptx
Unit 2.pptxUnit 2.pptx
Unit 2.pptx
Samruddhi Chepe
 
Educational Evaluation for Special Education
Educational Evaluation for Special EducationEducational Evaluation for Special Education
Educational Evaluation for Special Education
International advisers
 
Evaluation and measurement nursing education
Evaluation and measurement nursing educationEvaluation and measurement nursing education
Evaluation and measurement nursing educationparvathysree
 
Educational Assessment and Evaluation
Educational Assessment and Evaluation Educational Assessment and Evaluation
Educational Assessment and Evaluation
HennaAnsari
 
Purpose of measurement and evaluation
Purpose of measurement and evaluationPurpose of measurement and evaluation
Purpose of measurement and evaluation
Rochelle Nato
 
Construction of Tests
Construction of TestsConstruction of Tests
Construction of Tests
Dakshta1
 
constructionoftests-211015110341 (1).pptx
constructionoftests-211015110341 (1).pptxconstructionoftests-211015110341 (1).pptx
constructionoftests-211015110341 (1).pptx
GajeSingh9
 
Assessment of Learning
Assessment of LearningAssessment of Learning
Assessment of Learning
ReycelMaeVelasquez
 
STANDARDIZED & NON-STANDARDIZED TESTS.pptx
STANDARDIZED & NON-STANDARDIZED TESTS.pptxSTANDARDIZED & NON-STANDARDIZED TESTS.pptx
STANDARDIZED & NON-STANDARDIZED TESTS.pptx
Deepti Kukreti
 
LET REVIEW MEASUREMENT content back-up.ppt
LET REVIEW MEASUREMENT content back-up.pptLET REVIEW MEASUREMENT content back-up.ppt
LET REVIEW MEASUREMENT content back-up.ppt
PinkyLim7
 
Evalution
Evalution Evalution
Evalution
suresh kumar
 
What-is-Educational-Assessment.pptx
What-is-Educational-Assessment.pptxWhat-is-Educational-Assessment.pptx
What-is-Educational-Assessment.pptx
ANIOAYRochelleDaoaya
 
Techniques and tools of evaluation prabhuswamy m
Techniques and tools of evaluation prabhuswamy mTechniques and tools of evaluation prabhuswamy m
Techniques and tools of evaluation prabhuswamy m
Dr. Prabhuswamy Mallappa
 
Module 1
Module 1Module 1
Tools and Techniques for Assessment
Tools and Techniques for AssessmentTools and Techniques for Assessment
Tools and Techniques for Assessment
USMAN GANI AL HAQUE
 

Similar to Assessment of learning and educational technology ed 09 ocampos (20)

EDUCATIONAL TECHNOLOGY AND ASSESSMENT OF LEARNING
EDUCATIONAL TECHNOLOGY AND ASSESSMENT OF LEARNINGEDUCATIONAL TECHNOLOGY AND ASSESSMENT OF LEARNING
EDUCATIONAL TECHNOLOGY AND ASSESSMENT OF LEARNING
 
ASSESSMENT OF LEARNING
ASSESSMENT OF LEARNINGASSESSMENT OF LEARNING
ASSESSMENT OF LEARNING
 
Assessment, Mearurements, Evaluation.pptx
Assessment, Mearurements, Evaluation.pptxAssessment, Mearurements, Evaluation.pptx
Assessment, Mearurements, Evaluation.pptx
 
Measurement & Evaluation pptx
Measurement & Evaluation pptxMeasurement & Evaluation pptx
Measurement & Evaluation pptx
 
Types of test
Types of testTypes of test
Types of test
 
Unit 2.pptx
Unit 2.pptxUnit 2.pptx
Unit 2.pptx
 
Educational Evaluation for Special Education
Educational Evaluation for Special EducationEducational Evaluation for Special Education
Educational Evaluation for Special Education
 
Evaluation and measurement nursing education
Evaluation and measurement nursing educationEvaluation and measurement nursing education
Evaluation and measurement nursing education
 
Educational Assessment and Evaluation
Educational Assessment and Evaluation Educational Assessment and Evaluation
Educational Assessment and Evaluation
 
Purpose of measurement and evaluation
Purpose of measurement and evaluationPurpose of measurement and evaluation
Purpose of measurement and evaluation
 
Construction of Tests
Construction of TestsConstruction of Tests
Construction of Tests
 
constructionoftests-211015110341 (1).pptx
constructionoftests-211015110341 (1).pptxconstructionoftests-211015110341 (1).pptx
constructionoftests-211015110341 (1).pptx
 
Assessment of Learning
Assessment of LearningAssessment of Learning
Assessment of Learning
 
STANDARDIZED & NON-STANDARDIZED TESTS.pptx
STANDARDIZED & NON-STANDARDIZED TESTS.pptxSTANDARDIZED & NON-STANDARDIZED TESTS.pptx
STANDARDIZED & NON-STANDARDIZED TESTS.pptx
 
LET REVIEW MEASUREMENT content back-up.ppt
LET REVIEW MEASUREMENT content back-up.pptLET REVIEW MEASUREMENT content back-up.ppt
LET REVIEW MEASUREMENT content back-up.ppt
 
Evalution
Evalution Evalution
Evalution
 
What-is-Educational-Assessment.pptx
What-is-Educational-Assessment.pptxWhat-is-Educational-Assessment.pptx
What-is-Educational-Assessment.pptx
 
Techniques and tools of evaluation prabhuswamy m
Techniques and tools of evaluation prabhuswamy mTechniques and tools of evaluation prabhuswamy m
Techniques and tools of evaluation prabhuswamy m
 
Module 1
Module 1Module 1
Module 1
 
Tools and Techniques for Assessment
Tools and Techniques for AssessmentTools and Techniques for Assessment
Tools and Techniques for Assessment
 

Recently uploaded

The Roman Empire A Historical Colossus.pdf
The Roman Empire A Historical Colossus.pdfThe Roman Empire A Historical Colossus.pdf
The Roman Empire A Historical Colossus.pdf
kaushalkr1407
 
The Challenger.pdf DNHS Official Publication
The Challenger.pdf DNHS Official PublicationThe Challenger.pdf DNHS Official Publication
The Challenger.pdf DNHS Official Publication
Delapenabediema
 
Introduction to Quality Improvement Essentials
Introduction to Quality Improvement EssentialsIntroduction to Quality Improvement Essentials
Introduction to Quality Improvement Essentials
Excellence Foundation for South Sudan
 
Operation Blue Star - Saka Neela Tara
Operation Blue Star   -  Saka Neela TaraOperation Blue Star   -  Saka Neela Tara
Operation Blue Star - Saka Neela Tara
Balvir Singh
 
Supporting (UKRI) OA monographs at Salford.pptx
Supporting (UKRI) OA monographs at Salford.pptxSupporting (UKRI) OA monographs at Salford.pptx
Supporting (UKRI) OA monographs at Salford.pptx
Jisc
 
Students, digital devices and success - Andreas Schleicher - 27 May 2024..pptx
Students, digital devices and success - Andreas Schleicher - 27 May 2024..pptxStudents, digital devices and success - Andreas Schleicher - 27 May 2024..pptx
Students, digital devices and success - Andreas Schleicher - 27 May 2024..pptx
EduSkills OECD
 
Thesis Statement for students diagnonsed withADHD.ppt
Thesis Statement for students diagnonsed withADHD.pptThesis Statement for students diagnonsed withADHD.ppt
Thesis Statement for students diagnonsed withADHD.ppt
EverAndrsGuerraGuerr
 
Mule 4.6 & Java 17 Upgrade | MuleSoft Mysore Meetup #46
Mule 4.6 & Java 17 Upgrade | MuleSoft Mysore Meetup #46Mule 4.6 & Java 17 Upgrade | MuleSoft Mysore Meetup #46
Mule 4.6 & Java 17 Upgrade | MuleSoft Mysore Meetup #46
MysoreMuleSoftMeetup
 
Cambridge International AS A Level Biology Coursebook - EBook (MaryFosbery J...
Cambridge International AS  A Level Biology Coursebook - EBook (MaryFosbery J...Cambridge International AS  A Level Biology Coursebook - EBook (MaryFosbery J...
Cambridge International AS A Level Biology Coursebook - EBook (MaryFosbery J...
AzmatAli747758
 
Fish and Chips - have they had their chips
Fish and Chips - have they had their chipsFish and Chips - have they had their chips
Fish and Chips - have they had their chips
GeoBlogs
 
Basic phrases for greeting and assisting costumers
Basic phrases for greeting and assisting costumersBasic phrases for greeting and assisting costumers
Basic phrases for greeting and assisting costumers
PedroFerreira53928
 
PART A. Introduction to Costumer Service
PART A. Introduction to Costumer ServicePART A. Introduction to Costumer Service
PART A. Introduction to Costumer Service
PedroFerreira53928
 
Model Attribute Check Company Auto Property
Model Attribute  Check Company Auto PropertyModel Attribute  Check Company Auto Property
Model Attribute Check Company Auto Property
Celine George
 
The approach at University of Liverpool.pptx
The approach at University of Liverpool.pptxThe approach at University of Liverpool.pptx
The approach at University of Liverpool.pptx
Jisc
 
Unit 8 - Information and Communication Technology (Paper I).pdf
Unit 8 - Information and Communication Technology (Paper I).pdfUnit 8 - Information and Communication Technology (Paper I).pdf
Unit 8 - Information and Communication Technology (Paper I).pdf
Thiyagu K
 
How to Create Map Views in the Odoo 17 ERP
How to Create Map Views in the Odoo 17 ERPHow to Create Map Views in the Odoo 17 ERP
How to Create Map Views in the Odoo 17 ERP
Celine George
 
How libraries can support authors with open access requirements for UKRI fund...
How libraries can support authors with open access requirements for UKRI fund...How libraries can support authors with open access requirements for UKRI fund...
How libraries can support authors with open access requirements for UKRI fund...
Jisc
 
Overview on Edible Vaccine: Pros & Cons with Mechanism
Overview on Edible Vaccine: Pros & Cons with MechanismOverview on Edible Vaccine: Pros & Cons with Mechanism
Overview on Edible Vaccine: Pros & Cons with Mechanism
DeeptiGupta154
 
The French Revolution Class 9 Study Material pdf free download
The French Revolution Class 9 Study Material pdf free downloadThe French Revolution Class 9 Study Material pdf free download
The French Revolution Class 9 Study Material pdf free download
Vivekanand Anglo Vedic Academy
 
2024.06.01 Introducing a competency framework for languag learning materials ...
2024.06.01 Introducing a competency framework for languag learning materials ...2024.06.01 Introducing a competency framework for languag learning materials ...
2024.06.01 Introducing a competency framework for languag learning materials ...
Sandy Millin
 

Recently uploaded (20)

The Roman Empire A Historical Colossus.pdf
The Roman Empire A Historical Colossus.pdfThe Roman Empire A Historical Colossus.pdf
The Roman Empire A Historical Colossus.pdf
 
The Challenger.pdf DNHS Official Publication
The Challenger.pdf DNHS Official PublicationThe Challenger.pdf DNHS Official Publication
The Challenger.pdf DNHS Official Publication
 
Introduction to Quality Improvement Essentials
Introduction to Quality Improvement EssentialsIntroduction to Quality Improvement Essentials
Introduction to Quality Improvement Essentials
 
Operation Blue Star - Saka Neela Tara
Operation Blue Star   -  Saka Neela TaraOperation Blue Star   -  Saka Neela Tara
Operation Blue Star - Saka Neela Tara
 
Supporting (UKRI) OA monographs at Salford.pptx
Supporting (UKRI) OA monographs at Salford.pptxSupporting (UKRI) OA monographs at Salford.pptx
Supporting (UKRI) OA monographs at Salford.pptx
 
Students, digital devices and success - Andreas Schleicher - 27 May 2024..pptx
Students, digital devices and success - Andreas Schleicher - 27 May 2024..pptxStudents, digital devices and success - Andreas Schleicher - 27 May 2024..pptx
Students, digital devices and success - Andreas Schleicher - 27 May 2024..pptx
 
Thesis Statement for students diagnonsed withADHD.ppt
Thesis Statement for students diagnonsed withADHD.pptThesis Statement for students diagnonsed withADHD.ppt
Thesis Statement for students diagnonsed withADHD.ppt
 
Mule 4.6 & Java 17 Upgrade | MuleSoft Mysore Meetup #46
Mule 4.6 & Java 17 Upgrade | MuleSoft Mysore Meetup #46Mule 4.6 & Java 17 Upgrade | MuleSoft Mysore Meetup #46
Mule 4.6 & Java 17 Upgrade | MuleSoft Mysore Meetup #46
 
Cambridge International AS A Level Biology Coursebook - EBook (MaryFosbery J...
Cambridge International AS  A Level Biology Coursebook - EBook (MaryFosbery J...Cambridge International AS  A Level Biology Coursebook - EBook (MaryFosbery J...
Cambridge International AS A Level Biology Coursebook - EBook (MaryFosbery J...
 
Fish and Chips - have they had their chips
Fish and Chips - have they had their chipsFish and Chips - have they had their chips
Fish and Chips - have they had their chips
 
Basic phrases for greeting and assisting costumers
Basic phrases for greeting and assisting costumersBasic phrases for greeting and assisting costumers
Basic phrases for greeting and assisting costumers
 
PART A. Introduction to Costumer Service
PART A. Introduction to Costumer ServicePART A. Introduction to Costumer Service
PART A. Introduction to Costumer Service
 
Model Attribute Check Company Auto Property
Model Attribute  Check Company Auto PropertyModel Attribute  Check Company Auto Property
Model Attribute Check Company Auto Property
 
The approach at University of Liverpool.pptx
The approach at University of Liverpool.pptxThe approach at University of Liverpool.pptx
The approach at University of Liverpool.pptx
 
Unit 8 - Information and Communication Technology (Paper I).pdf
Unit 8 - Information and Communication Technology (Paper I).pdfUnit 8 - Information and Communication Technology (Paper I).pdf
Unit 8 - Information and Communication Technology (Paper I).pdf
 
How to Create Map Views in the Odoo 17 ERP
How to Create Map Views in the Odoo 17 ERPHow to Create Map Views in the Odoo 17 ERP
How to Create Map Views in the Odoo 17 ERP
 
How libraries can support authors with open access requirements for UKRI fund...
How libraries can support authors with open access requirements for UKRI fund...How libraries can support authors with open access requirements for UKRI fund...
How libraries can support authors with open access requirements for UKRI fund...
 
Overview on Edible Vaccine: Pros & Cons with Mechanism
Overview on Edible Vaccine: Pros & Cons with MechanismOverview on Edible Vaccine: Pros & Cons with Mechanism
Overview on Edible Vaccine: Pros & Cons with Mechanism
 
The French Revolution Class 9 Study Material pdf free download
The French Revolution Class 9 Study Material pdf free downloadThe French Revolution Class 9 Study Material pdf free download
The French Revolution Class 9 Study Material pdf free download
 
2024.06.01 Introducing a competency framework for languag learning materials ...
2024.06.01 Introducing a competency framework for languag learning materials ...2024.06.01 Introducing a competency framework for languag learning materials ...
2024.06.01 Introducing a competency framework for languag learning materials ...
 

Assessment of learning and educational technology ed 09 ocampos

  • 15. CLASSIFICATION OF TESTS
▪ Objective tests are tests which have definite answers and are therefore not subject to personal bias.
▪ Teacher-made or educational tests are constructed by teachers based on the contents of the different subjects taught.
▪ Diagnostic tests are used to measure a student's strengths and weaknesses, usually to identify deficiencies in skills or performance.
▪ Formative and summative are terms often used with evaluation, but they may also be used with testing. Formative testing is done over a period of time to monitor students' attainment of the instructional objectives and their progress. Summative testing is done at the conclusion of instruction and measures the extent to which students have attained the desired outcomes.
  • 16. CLASSIFICATION OF TESTS
▪ Standardized tests are already valid, reliable, and objective. They are tests for which the contents have been selected and for which norms or standards have been established. Psychological tests and government national examinations are examples of standardized tests.
▪ Standards or norms are the goals to be achieved, expressed in terms of the average performance of the population tested.
▪ A criterion-referenced measure is a measuring device with a predetermined level of success or standard on the part of the test-takers. For example, a score of 75 percent of all the test items could be considered a satisfactory performance.
▪ A norm-referenced measure is a test that is scored on the basis of the norm or standard level of accomplishment of the whole group taking the test. The grades of the students are based on the normal curve of distribution.
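The contrast between the two interpretations can be made concrete with a short sketch in Python (the slides themselves contain no code). The 75 percent cutoff is the example given above; the percentile-rank calculation is one common way of expressing a norm-referenced standing. The function names and the at-or-below convention are my own choices, and the comparison group used here is the set of 25 scores that appears later in Table 1.

```python
# Illustrative sketch: criterion-referenced vs. norm-referenced interpretation.

def criterion_referenced(score, max_score, cutoff=0.75):
    """Judge a score against a fixed standard, e.g. 75% of the items correct."""
    return "satisfactory" if score / max_score >= cutoff else "not yet satisfactory"

def percentile_rank(score, group_scores):
    """Percent of the comparison (normative) group scoring at or below this score."""
    at_or_below = sum(1 for s in group_scores if s <= score)
    return 100 * at_or_below / len(group_scores)

# The 25 scores listed later in Table 1, used here as the normative group.
group = [34, 35, 37, 38, 38, 40, 40, 41, 42, 43, 44, 44, 46,
         47, 47, 47, 48, 48, 48, 48, 48, 49, 49, 49, 50]

print(criterion_referenced(46, 50))       # satisfactory (46/50 = 92% of the items)
print(round(percentile_rank(46, group)))  # 52 -> percent of the group at or below 46
```

The same raw score of 46 is "satisfactory" in absolute terms but only a little above the middle of this particular group, which is exactly the distinction the slide is drawing.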
  • 17. CRITERIA OF A GOOD EXAMINATION
A good examination must pass the following criteria:
1. Validity
- Refers to the degree to which a test measures what it intends to measure. It is the usefulness of the test for a given purpose.
- A valid test is always reliable. To establish validity, the test must be tried out to determine whether it really measures what it intends, or purports, to measure.
  • 18. 2. Reliability
- Pertains to the consistency with which a test measures whatever it measures.
- The test of reliability is the consistency of the results when the test is administered to different groups of individuals with similar characteristics, in different places and at different times.
- The results should also be almost identical when the test is given to the same group of individuals on different days, with a coefficient of correlation of not less than 0.85.
  • 19. 3. Objectivity
- Is the degree to which personal bias is eliminated in the scoring of the answers. When we refer to the quality of measurement, essentially, we mean the amount of information contained in a score generated by the measurement.
- Measures of students' instructional outcomes are rarely as precise as those of physical characteristics such as height and weight.
4. Nominal Measurement
- Nominal scales are the least sophisticated; they merely classify objects or events by assigning numbers to them.
- These numbers are arbitrary and imply no quantification, but the categories must be mutually exclusive and exhaustive.
- For example, one could designate baseball positions by assigning the pitcher the numeral 1; the catcher, 2; the first baseman, 3; the second baseman, 4; and so on. These assignments are arbitrary, and no arithmetic on the numbers is meaningful. For example, 1 plus 2 does not equal 3, because a pitcher plus a catcher does not equal a first baseman.
  • 20. 5. Ordinal Measurement
- Ordinal scales classify, but they also assign rank order. An example of ordinal measurement is ranking individuals in a class according to their test scores.
- Students' scores could be ordered from first, second, third, and so forth down to the lowest score. Such a scale gives more information than nominal measurement, but it still has limitations.
- The units of an ordinal scale are most likely unequal. The number of points separating the first and second students probably does not equal the number separating the fifth and sixth students.
6. Interval Measurement
- In order to be able to add and subtract scores, we use interval scales, sometimes called equal-interval or equal-unit measurement.
- This measurement scale contains the nominal and ordinal properties and is also characterized by equal units between score points.
- Examples include thermometers and calendar years.
  • 21. 7. Ratio Measurement
- The most sophisticated type of measurement includes all the preceding properties, but in a ratio scale the zero point is not arbitrary; a score of zero indicates the absence of what is being measured.
- For example, if a person's wealth equaled zero, he or she would have no wealth at all. This is unlike a social studies test, where missing every item (i.e., receiving a score of zero) does not mean the student has no knowledge of social studies at all.
- Ratio measurement is rarely achieved in educational assessment, in either the cognitive or the affective areas.
8. Norm-Referenced and Criterion-Referenced Measurement
- When we contrast norm-referenced measurement (or testing) with criterion-referenced measurement, we are basically referring to two different ways of interpreting information. However, Popham (1988, p. 135) points out that certain characteristics tend to go with each type of measurement, and it is unlikely that results of a norm-referenced test are interpreted in criterion-referenced ways, and vice versa.
  • 22. Norm-Referenced Interpretation
▪ Norm-referenced testing historically has been used in education, and norm-referenced tests continue to comprise a substantial portion of the measurement in today's schools.
▪ It stems from the desire to differentiate among individuals, or to discriminate among the individuals of some defined group, on whatever is being measured. In norm-referenced measurement, an individual's score is interpreted by comparing it to the scores of a defined group, often called the normative group. Norms represent the scores earned by one or more groups of students who have taken the test.
Criterion-Referenced Interpretation
▪ The term criterion-referenced has developed a dual meaning. On one hand, it means referencing an individual's performance to some criterion that is a defined performance level.
▪ The individual's score is interpreted in absolute rather than relative terms. The criterion, in this situation, means some level of specified performance that has been determined independently of how others might perform.
  • 23. Distinctions between Norm-Referenced and Criterion-Referenced Tests
Although interpretations, not characteristics, provide the distinction between norm-referenced and criterion-referenced tests, the two types do tend to differ in some ways. Norm-referenced tests are usually more general and comprehensive and cover a large domain of content and learning tasks. They are used for survey testing, although this is not their exclusive use.
  • 24. Criterion-referenced tests
▪ focus on a specific group of learner behaviors. To show the contrast, consider an example: arithmetic skills represent a general and broad category of student outcomes and would likely be measured by a norm-referenced test.
▪ On the other hand, behaviors such as solving addition problems with two five-digit numbers, or finding the products of three- and four-digit numbers, are much more specific and may be measured by criterion-referenced tests.
Norm-referenced interpretation
▪ is a relative interpretation based on an individual's position with respect to some group, often called the normative group. Norms consist of the scores, usually in some form of descriptive statistics, of the normative group.
▪ Achievement tests as an example: most standardized achievement tests, especially those covering several skills and academic areas, are primarily designed for norm-referenced interpretations.
  • 25. POINTS TO BE CONSIDERED IN PREPARING A TEST
1. Are the instructional objectives clearly defined?
2. What knowledge, skills, and attitudes do you want to measure?
3. Did you prepare a table of specifications?
4. Did you formulate well-defined and clear test items?
5. Did you employ correct English in writing the items?
6. Did you avoid giving clues to the correct answer?
7. Did you test the important ideas rather than the trivial?
8. Did you adapt the test's difficulty to your students' ability?
9. Did you avoid using textbook jargon?
10. Did you cast the items in positive form?
11. Did you prepare a scoring key?
12. Does each item have a single correct answer?
13. Did you review your items?
  • 26. STAGES IN TEST CONSTRUCTION
I. Planning the Test
a. Determining the Objectives
b. Preparing the Table of Specifications
c. Selecting the Appropriate Item Format
d. Writing the Test Items
e. Editing the Test Items
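To make step (b) concrete, a table of specifications is usually laid out as a grid of content areas against cognitive levels, with the number of items planned for each cell. The sketch below is a minimal, hypothetical example: the topics, levels, and item counts are my own and are not taken from the slides.

```python
# Hypothetical table of specifications: content areas x cognitive levels,
# with the number of test items planned for each cell.
table_of_specifications = {
    "Fractions":        {"Knowledge": 4, "Comprehension": 3, "Application": 3},
    "Decimals":         {"Knowledge": 3, "Comprehension": 3, "Application": 2},
    "Ratio/Proportion": {"Knowledge": 3, "Comprehension": 2, "Application": 2},
}

# Row and overall totals help check that the planned test length is adequate.
for topic, cells in table_of_specifications.items():
    print(f"{topic:<18} {cells}  total = {sum(cells.values())}")

total_items = sum(sum(cells.values()) for cells in table_of_specifications.values())
print("Planned test length:", total_items)   # 25 items in this sketch
```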
  • 27. STAGES IN TEST CONSTRUCTION
II. Trying Out the Test
a. Administering the First Try-out, then Item Analysis
b. Administering the Second Try-out, then Item Analysis
c. Preparing the Final Form of the Test
  • 28. STAGES IN TEST CONSTRUCTION
III. Establishing Test Validity
IV. Establishing Test Reliability
V. Interpreting the Test Score
  • 29. MAJOR CONSIDERATIONS IN TEST CONSTRUCTION
The following are the major considerations in test construction:
Type of Test
▪ Our usual idea of testing is an in-class test administered by the teacher. However, there are many variations on this theme: group test, individual test, written test, oral test, speed test, power test, and pretest and posttest. Each of these has different characteristics that must be considered when the test is planned.
▪ If it is a take-home test rather than an in-class test, how do you make sure that students work independently, have equal access to sources and resources, and spend a sufficient but not enormous amount of time on the task? If it is a pretest, should it exactly match the posttest so that a gain score can be computed, or should the pretest contain items that are diagnostic of prerequisite skills and knowledge? If it is an achievement test, should partial credit be awarded, should there be penalties for guessing, or should points be deducted for grammar and spelling errors?
  • 30. MAJOR CONSIDERATIONS IN TEST CONSTRUCTION
Test Length
▪ A major decision in test planning is how many items should be included on the test. There should be enough to cover the content adequately, but the length of the class period and the attention span or fatigue limits of the students usually restrict the test length. Decisions about test length are usually based on practical constraints more than on theoretical considerations.
  • 31. MAJOR CONSIDERATIONS IN TEST CONSTRUCTION
Item Formats
▪ Determining what kinds of items to include on the test is a major decision. Should they be objectively scored formats such as multiple choice or matching type? Should they require the students to organize their own thoughts through short-answer or essay formats? These are important questions that can be answered only by the teacher in terms of the local context, his or her students, his or her classroom, and the specific purpose of the test. Once the planning decisions are made, the item writing begins. This task is often the most feared by beginning test constructors, but the procedures are more common sense than formal rules.
  • 32. GENERAL PRINCIPLES IN CONSTRUCTING DIFFERENT TYPES OF TESTS
1. The test items should be selected very carefully. Only important facts should be included.
2. The test should have an extensive sampling of items.
3. The test items should be carefully expressed in simple, clear, definite, and meaningful sentences.
4. There should be only one possible correct response for each test item.
5. Each item should be independent. Leading clues to other items should be avoided.
6. Sentences should not be lifted from books, so as to encourage thinking and understanding.
7. The first-person pronouns I and we should not be used.
8. Various types of test items should be used to avoid monotony.
9. The majority of the test items should be of moderate difficulty; only a few difficult and a few easy items should be included.
10. The test items should be arranged in ascending order of difficulty. Easy items should come at the beginning to encourage the examinee to pursue the test, and the most difficult items should be at the end.
11. Clear, concise, and complete directions should precede all types of test. Sample test items may be provided for expected responses.
12. Items which can be answered by previous experience alone, without knowledge of the subject matter, should not be included.
13. Catchy words should not be used in the test items.
14. Test items must be based upon the objectives of the course and upon the course content.
15. The test should measure the degree of achievement or determine the difficulties of the learners.
16. The test should emphasize the ability to apply and use facts as well as knowledge of facts.
17. The test should be of such length that it can be completed within the allotted time by all or nearly all of the pupils. The teacher should take the test herself to determine its approximate time allotment.
18. The rules governing good language expression, grammar, spelling, punctuation, and capitalization should be observed at all times.
19. Information on how scoring will be done should be provided.
  • 33. POINTERS TO BE OBSERVED IN CONSTRUCTING AND SCORING THE DIFFERENT TYPES OF TESTS
A. RECALL TYPES
1. Simple recall type
a) This type consists of questions calling for a single word or expression as an answer.
b) Items usually begin with who, where, when, and what.
c) The score is the number of correct answers.
2. Completion type
a) Only important words or phrases should be omitted to avoid confusion.
b) Blanks should be of equal length.
c) The blank, as much as possible, should be placed near or at the end of the sentence.
d) The articles a, an, and the should not be placed immediately before the omitted word or phrase, to avoid giving clues to the answer.
e) The score is the number of correct answers.
  • 34. POINTERS TO BE OBSERVED IN CONSTRUCTING AND SCORING THE DIFFERENT TYPES OF TESTS
3. Enumeration type
a) The exact number of expected answers should be stated.
b) Blanks should be of equal length.
c) The score is the number of correct answers.
4. Identification type
a) The items should make the examinee think of a word, number, or group of words that completes the statement or answers the problem.
b) The score is the number of correct answers.
  • 35. POINTERS TO BE OBSERVED IN CONSTRUCTING AND SCORING THE DIFFERENT TYPES OF TESTS
B. RECOGNITION TYPES
1. True-false or alternate-response type
a) Declarative sentences should be used.
b) The number of "true" and "false" items should be more or less equal.
c) The truth or falsity of the sentence should not be too evident.
d) Negative statements should be avoided.
e) The "modified true-false" format is preferable to the plain "true-false".
f) In arranging the items, avoid a regular recurrence of "true" and "false" statements.
g) Avoid specific determiners such as all, always, never, none, nothing, most, often, and some, and avoid weak qualifiers such as may, sometimes, as a rule, and in general.
h) Minimize the use of qualitative terms such as few, great, many, and more.
i) Avoid leading clues to the answers at all times.
j) The score is the number of correct answers in "modified true-false", and the number of right answers minus wrong answers in "plain true-false".
  • 36. POINTERS TO BE OBSERVED IN CONSTRUCTING AND SCORING THE DIFFERENT TYPES OF TESTS
2. Yes-No type
a) The items should be in interrogative sentences.
b) The same rules as in true-false apply.
3. Multiple-response type
a) There should be three to five choices. The number of choices used in the first item should be the same number of choices used in all the items of this type of test.
b) The choices should be numbered or lettered so that only the number or letter needs to be written on the blank provided.
c) If the choices are figures, they should be arranged in ascending order.
d) Avoid the use of "a" or "an" as the last word prior to the listing of the responses.
  • 37. POINTERS TO BE OBSERVED IN CONSTRUCTING AND SCORING THE DIFFERENT TYPES OF TESTS
e) The correct responses should occur in random order across the items.
f) The choices, as much as possible, should be at the end of the statements.
g) The choices should be related in some way or should belong to the same class.
h) Avoid the use of "none of these" as one of the choices.
i) The score is the number of correct answers.
  • 38. POINTERS TO BE OBSERVED IN CONSTRUCTING AND SCORING THE DIFFERENT TYPES OF TESTS
4. Best answer type
a) There should be three to five choices, all of which are right but which vary in their degree of merit, importance, or desirability.
b) The other rules for multiple-response items apply here.
c) The score is the number of correct answers.
  • 39. POINTERS TO BE OBSERVED IN CONSTRUCTING AND SCORING THE DIFFERENT TYPES OF TESTS
5. Matching type
a) There should be two columns. Under column "A" are the stimuli, which should be longer and more descriptive than the responses under column "B". The response may be a word, a phrase, a number, or a formula.
b) The stimuli under column "A" should be numbered and the responses under column "B" should be lettered. Answers are indicated by letters only, on lines provided in column "A".
c) The number of pairs usually should not exceed twenty items. Fewer than ten introduces chance elements; twenty pairs may be used, but more than twenty is decidedly wasteful of time.
d) The number of responses in column "B" should exceed the number of items in column "A" by two or more to avoid guessing.
e) Only one correct matching for each item should be possible.
f) Matching sets should be neither too long nor too short.
g) All items should be on the same page to avoid turning pages in the process of matching pairs.
h) The score is the number of correct answers.
  • 40. POINTERS TO BE OBSERVED IN CONSTRUCTING AND SCORING THE DIFFERENT TYPES OF TESTS
C. Essay Type of Examinations
1. Common types of essay questions (the types are related to the purposes for which the essay examinations are to be used):
a. Comparison of two things
b. Explanation of the use or meaning of a statement or passage
c. Analysis
d. Decisions for or against
e. Discussion
  • 41. POINTERS TO BE OBSERVED IN CONSTRUCTING AND SCORING THE DIFFERENT TYPES OF TESTS
2. How to construct essay examinations:
a. Determine the objectives or essentials for each question to be evaluated.
b. Phrase questions in simple, clear, and concise language.
c. Suit the length of the questions to the time available for answering the essay examination. The teacher should try to answer the test herself.
3. Scoring:
a. Have a model answer in advance.
b. Indicate the number of points for each question.
c. Score a point for each essential.
  • 42. Advantages and Disadvantages of the Objective Type of Tests
Advantages
a. The objective test is free from personal bias in scoring.
b. It is easy to score. With a scoring key, the test can be corrected by different individuals without affecting the accuracy of the grades given.
c. It has high validity because it is comprehensive, with a wide sampling of essentials.
d. It is less time-consuming, since many items can be answered in a given time.
e. It is fair to students, since slow writers can accomplish the test as fast as fast writers.
Disadvantages
a. It is difficult to construct and requires more time to prepare.
b. It does not afford the students the opportunity to train in self-expression and thought organization.
c. It cannot be used to test ability in theme writing or journalistic writing.
  • 43. ADVANTAGES AND DISADVANTAGES OF THE ESSAY TYPE OF TESTS
Advantages
a. The essay examination can be used in practically all subjects of the school curriculum.
b. It trains students in thought organization and self-expression.
c. It affords students opportunities to express their originality and independence of thinking.
d. Only the essay test can be used in some subjects, like composition writing and journalistic writing, which cannot be tested by the objective type of test.
e. Essay examinations measure higher mental abilities like comparison, interpretation, criticism, defence of opinion, and decision.
f. The essay test is easily prepared.
g. It is inexpensive.
  • 44. ADVANTAGES AND DISADVANTAGES OF THE ESSAY TYPE OF TESTS
Disadvantages
a. The limited sampling of items makes the test an unreliable measure of achievements or abilities.
b. Questions usually are not well prepared.
c. Scoring is highly subjective due to the influence of the corrector's personal judgment.
d. Grading of the essay test is an inaccurate measure of pupils' achievement due to the subjectivity of scoring.
  • 45. III. STATISTICAL MEASURES OR TOOLS USED IN INTERPRETING NUMERICAL DATA
Frequency Distributions
▪ A simple, common-sense technique for describing a set of test scores is the frequency distribution. A frequency distribution is merely a listing of the possible score values and the number of persons who achieved each score. Such an arrangement presents the scores in a simpler and more understandable manner than merely listing all of the separate scores. Consider a specific set of scores to clarify these ideas.
▪ A set of scores for a group of 25 students who took a 50-item test is listed in Table 1. It is easier to analyse the scores if they are arranged in a simple frequency distribution. (The frequency distribution for the same set of scores is given in Table 2.) The steps involved in creating the frequency distribution are:
▪ First, list the possible score values in rank order, from highest to lowest. Then, in a second column, indicate the frequency, or number of persons who received each score. For example, three students received a score of 47, two received 40, and so forth. There is no need to list score values below the lowest score that anyone received.
  • 46. Table 1. Scores of 25 Students on a 50-Item Test
Student  Score     Student  Score
A        48        N        43
B        50        O        47
C        46        P        48
D        41        Q        42
E        37        R        44
F        48        S        38
G        38        T        49
H        47        U        34
I        49        V        35
J        44        W        47
K        48        X        40
L        49        Y        48
M        40
  • 47. Table 2. Frequency Distribution of the 25 Scores of Table 1
Score  Frequency     Score  Frequency
50     1             41     1
49     3             40     2
48     5             39     0
47     3             38     2
46     1             37     1
45     0             36     0
44     2             35     1
43     1             34     1
42     1
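To make the procedure concrete, here is a minimal sketch (in Python, which the slides themselves do not use) that rebuilds the frequency distribution of Table 2 from the raw scores in Table 1.

```python
from collections import Counter

# Raw scores of the 25 students from Table 1 (students A through Y, in order).
scores = [48, 50, 46, 41, 37, 48, 38, 47, 49, 44, 48, 49, 40,
          43, 47, 48, 42, 44, 38, 49, 34, 35, 47, 40, 48]

freq = Counter(scores)

# List every score value from the highest down to the lowest obtained score,
# including values nobody earned (frequency 0), just as in Table 2.
for value in range(max(scores), min(scores) - 1, -1):
    print(f"{value:>5} {freq[value]:>10}")
```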
  • 48.
• When there is a wide range of scores in a frequency distribution, the distribution can be quite long, with a lot of zeros in the column of frequencies. Such a frequency distribution can make interpretation of the scores difficult and confusing. A grouped frequency distribution is more appropriate in this kind of situation: groups of score values are listed rather than each separate possible score value.
• If we were to change the frequency distribution in Table 2 into a grouped frequency distribution, we might choose intervals such as 48-50, 45-47, and so forth. The frequency corresponding to the interval 48-50 would be 9 (1 + 3 + 5). The choice of the width of the interval is arbitrary, but it must be the same for all intervals. In addition, it is a good idea to use an odd-numbered interval width (we used 3 above) so that the midpoint of each interval is a whole number. This strategy simplifies subsequent graphs and descriptions of the data. The grouped frequency distribution is presented in Table 3.
  • 49. Table 3. Grouped Frequency Distribution
Score Interval   Frequency
48-50            9
45-47            4
42-44            4
39-41            3
36-38            3
33-35            2
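A short continuation of the earlier sketch groups the same scores into the width-3 intervals of Table 3. The interval boundaries (48-50 at the top) follow the slide's choice; the code itself is my own illustration.

```python
# Group the Table 1 scores into width-3 intervals, as in Table 3.
scores = [48, 50, 46, 41, 37, 48, 38, 47, 49, 44, 48, 49, 40,
          43, 47, 48, 42, 44, 38, 49, 34, 35, 47, 40, 48]

width = 3
# Lower limits of each interval, from 48 (for 48-50) down to 33 (for 33-35).
interval_lows = range(48, 32, -width)

for low in interval_lows:
    high = low + width - 1
    count = sum(1 for s in scores if low <= s <= high)
    print(f"{low}-{high}: {count}")
```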
  • 50. Frequency distributions summarize sets of test scores by listing the number of people who received each test score. All of the test scores can be listed separately, or the scores can be grouped in a frequency distribution.
MEASURES OF CENTRAL TENDENCY
• Frequency distributions are helpful for indicating the shape of a distribution of scores, but we need more information than the shape to describe a distribution adequately. We need to know where on the scale of measurement a distribution is located and how the scores are dispersed in the distribution. For the former, we compute measures of central tendency, and for the latter, we compute measures of dispersion. Measures of central tendency are points on the scale of measurement, and they are representative of how the scores tend to average. There are three commonly used measures of central tendency: the mean, the median, and the mode, but the mean is by far the most widely used.
  • 51. The Mean
• The mean of a set of scores is the arithmetic average. It is found by summing the scores and dividing the sum by the number of scores. The mean is the most commonly used measure of central tendency because it is easily understood and is based on all of the scores in the set; hence, it summarizes a lot of information. The formula for the mean is as follows:
  • 52. Mean (X̄) = ΣX / N, where ΣX is the sum of all the scores and N is the number of scores.
  • 53. For the 25 scores in Table 1: X̄ = 1,100 / 25 = 44.
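As a quick check of that arithmetic, a two-line sketch (variable names are my own):

```python
# Mean of the Table 1 scores: sum of the scores divided by the number of scores.
scores = [48, 50, 46, 41, 37, 48, 38, 47, 49, 44, 48, 49, 40,
          43, 47, 48, 42, 44, 38, 49, 34, 35, 47, 40, 48]

mean = sum(scores) / len(scores)
print(sum(scores), len(scores), mean)   # 1100 25 44.0
```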
  • 54. The Median
• Another measure of central tendency is the median, which is the point that divides the distribution in half; that is, half of the scores fall above the median and half of the scores fall below it.
• When there are only a few scores, the median can often be found by inspection. If there is an odd number of scores, the middle score is the median. When there is an even number of scores, the median is halfway between the two middle scores. However, when there are tied scores in the middle of the distribution, or when the scores are in a frequency distribution, the median may not be so obvious.
• Consider again the frequency distribution in Table 2. There are 25 scores in the distribution, so the middle score is the median. A straightforward way to find this median is to augment the frequency distribution with a column of cumulative frequencies.
• Cumulative frequencies indicate the number of scores at or below each score. Table 4 indicates the cumulative frequencies for the data in Table 2.
  • 55. Table 4. Frequency Distribution and Cumulative Frequencies for the Scores of Table 2
Score  Frequency  Cumulative Frequency
50     1          25
49     3          24
48     5          21
47     3          16
46     1          13
45     0          12
44     2          12
43     1          10
42     1          9
41     1          8
40     2          7
39     0          5
38     2          5
37     1          3
36     0          2
35     1          2
34     1          1
  • 56. For example, 7 people scored at or below a score of 40, and 21 persons scored at or below a score of 48.
▪ To find the median, we need to locate the middle score in the cumulative frequency column, because this score is the median. Since there are 25 scores in the distribution, the middle one is the 13th, a score of 46. Thus, 46 is the median of this distribution; half of the people scored above 46 and half scored below it.
▪ When there are ties in the middle of the distribution, there may be a need to interpolate between scores to get the exact median. However, such precision is not needed for most classroom tests. The whole number closest to the median is usually sufficient.
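The cumulative-frequency approach of Table 4 can be written out as a short sketch (the names are mine); it reproduces the values 7, 21, and the median of 46 quoted above.

```python
from collections import Counter

scores = [48, 50, 46, 41, 37, 48, 38, 47, 49, 44, 48, 49, 40,
          43, 47, 48, 42, 44, 38, 49, 34, 35, 47, 40, 48]
freq = Counter(scores)

# Build cumulative frequencies from the lowest score upward:
# the number of people scoring at or below each score value.
cumulative = {}
running = 0
for value in range(min(scores), max(scores) + 1):
    running += freq[value]
    cumulative[value] = running

# The median is the score value whose cumulative frequency first
# reaches the middle position (the 13th of the 25 scores).
middle = (len(scores) + 1) // 2
median = next(v for v in sorted(cumulative) if cumulative[v] >= middle)

print(cumulative[40], cumulative[48], median)   # 7 21 46
```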
  • 57. The Mode
• The measure of central tendency that is the easiest to find is the mode. The mode is the most frequently occurring score in the distribution. The mode of the scores in Table 2 is 48: five people had scores of 48, and no other score occurred as often.
• Each of these three measures of central tendency (the mean, the median, and the mode) represents a legitimate definition of "average" performance on this test. However, each provides different information: the arithmetic average was 44; half of the people scored at or below 46; and more people received 48 than any other score.
• There are some distributions in which all three measures of central tendency are equal, but more often than not they will be different. The choice of which measure of central tendency is best differs from situation to situation. The mean is used most often, perhaps because it includes information from all of the scores.
• When a distribution has a small number of very extreme scores, though, the median may be a better indicator of central tendency. The mode provides the least information and is used infrequently as an "average"; it can, however, be used with nominal-scale data as an indicator of the most frequently appearing category. The mean, the median, and the mode all describe central tendency:
• The mean is the arithmetic average.
• The median divides the distribution in half.
• The mode is the most frequent score.
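Python's standard statistics module can confirm all three values at once; this is a sketch, and it assumes nothing beyond the standard library.

```python
import statistics

scores = [48, 50, 46, 41, 37, 48, 38, 47, 49, 44, 48, 49, 40,
          43, 47, 48, 42, 44, 38, 49, 34, 35, 47, 40, 48]

print(statistics.mean(scores))    # 44 - the arithmetic average
print(statistics.median(scores))  # 46 - the middle score of the 25
print(statistics.mode(scores))    # 48 - the most frequent score
```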
  • 58. MEASURES OF DISPERSION
Measures of central tendency are useful for summarizing average performance, but they tell us nothing about how the scores are distributed or "spread out". Two distributions may have the same average yet differ in other ways: one may have the scores tightly clustered around the average, and the other may have scores that are widely separated. As you may have anticipated, there are descriptive statistics that measure dispersion; these are also called measures of variability, and they indicate how spread out the scores tend to be.
The Range
The range indicates the difference between the highest and lowest scores in the distribution. It is simple to calculate, but it provides limited information. We subtract the lowest score from the highest score and add 1 so that both scores are included in the spread between them. For the scores of Table 2, the range is 50 - 34 + 1 = 17.
A problem with using the range is that only the two most extreme scores enter the computation; there is no indication of the spread of scores between the highest and lowest. Measures of dispersion that take every score in the distribution into consideration are the variance and the standard deviation. The standard deviation is used a great deal in interpreting scores from standardized tests.
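Using the slide's inclusive definition (highest minus lowest, plus 1; some texts omit the "+ 1"), the range is a one-liner:

```python
scores = [48, 50, 46, 41, 37, 48, 38, 47, 49, 44, 48, 49, 40,
          43, 47, 48, 42, 44, 38, 49, 34, 35, 47, 40, 48]

inclusive_range = max(scores) - min(scores) + 1
print(inclusive_range)   # 50 - 34 + 1 = 17
```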
  • 59. The Variance
The variance measures how widely the scores in the distribution are spread about the mean. In other words, the variance is the average squared difference between the scores and the mean. As a formula, it looks like this:
S² = Σ(X - X̄)² / N
where X stands for each score, X̄ is the mean, and N is the number of scores.
  • 60. The computation of the variance for the scores of Table 1 is illustrated in Table 5. The data for students K through V are omitted to save space, but these values are included in the column totals and in the computation.
The Standard Deviation
The standard deviation also indicates how spread out the scores are, but it is expressed in the same units as the original scores. The standard deviation is computed by finding the square root of the variance:
S = √S²
  • 61. For the data in Table 1, the variance is 22.8, so the standard deviation is √22.8, or 4.77.
The scores of most norm groups have the shape of a "normal distribution," the symmetrical, bell-shaped distribution with which most people are familiar. With a normal distribution, about 95 percent of the scores are within two standard deviations of the mean. Even when scores are not normally distributed, most of the scores will be within two standard deviations of the mean. In the example, the mean minus two standard deviations is 34.46, and the mean plus two standard deviations is 53.54. Therefore, only one score is outside of this interval: the lowest score, 34, is slightly more than two standard deviations below the mean.
  • 62.-63. Table 5. Computation of the Variance for the Scores of Table 1 (contents not reproduced in this copy).
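Since Table 5's working does not survive here, the sketch below reproduces the computation it described: the squared deviations from the mean, their average (the variance), and its square root (the standard deviation). Note that, like the slides, it divides by N (the population form) rather than by N - 1.

```python
import math

scores = [48, 50, 46, 41, 37, 48, 38, 47, 49, 44, 48, 49, 40,
          43, 47, 48, 42, 44, 38, 49, 34, 35, 47, 40, 48]

mean = sum(scores) / len(scores)                      # 44.0
squared_deviations = [(x - mean) ** 2 for x in scores]

variance = sum(squared_deviations) / len(scores)      # 570 / 25 = 22.8
std_dev = math.sqrt(variance)                         # about 4.77

print(variance, round(std_dev, 2))                    # 22.8 4.77
```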
  • 64. The usefulness of the standard deviation becomes apparent when scores from different tests are compared. Suppose that two tests are given to the same class, one on fractions and the other on reading comprehension. The fractions test has a mean of 30 and a standard deviation of 8; the reading comprehension test has a mean of 60 and a standard deviation of 10. If Ann scored 38 on the fractions test and 55 on the reading comprehension test, it appears from the raw scores that she did better in reading than in fractions, because 55 is greater than 38. In standard deviation units, however, Ann is one standard deviation above the mean in fractions (38 is 8 points above 30) and half a standard deviation below the mean in reading (55 is 5 points below 60), so relative to the class she actually did better in fractions.
Descriptive statistics that indicate dispersion are the range, the variance, and the standard deviation:
• The range is the difference between the highest and lowest scores in the distribution, plus one.
• The standard deviation is a unit of measurement that shows by how much the separate scores tend to differ from the mean.
• The variance is the square of the standard deviation.
• Most scores are within two standard deviations of the mean.
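Ann's standing on the two tests can be expressed as standard scores (z-scores): the deviation of her raw score from the mean, divided by the standard deviation. The figures are the ones from the example above; the helper function name is my own.

```python
def z_score(raw, mean, std_dev):
    """Number of standard deviations a raw score lies above (+) or below (-) the mean."""
    return (raw - mean) / std_dev

fractions_z = z_score(38, mean=30, std_dev=8)    # +1.0 -> one SD above the class mean
reading_z   = z_score(55, mean=60, std_dev=10)   # -0.5 -> half an SD below the class mean

print(fractions_z, reading_z)
```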
  • 65. Graphing Distributions
▪ A graph of a distribution of test scores is often more easily understood than a frequency distribution or a mere table of numbers.
▪ The general pattern of scores, as well as any unique characteristics of the distribution, can be seen easily in simple graphs. There are several kinds of graph that can be used, but a simple bar graph, or histogram, is as useful as any.
▪ The general shape of the distribution is clear from the graph. Most of the scores in this distribution are high, at the upper end of the graph. Such a shape is quite common for the scores of classroom tests.
▪ A normal distribution has most of the test scores in the middle of the distribution and progressively fewer scores toward the extremes. The scores of norm groups are seldom graphed, but they could be if we were concerned about seeing the specific shape of the distribution of scores.
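To close the worked example, a crude text histogram of the Table 1 scores, grouped into the Table 3 intervals, shows the pile-up at the upper end that the slide describes. The text-based plotting style is my own choice, not something taken from the deck.

```python
# Text histogram of the Table 1 scores over the width-3 intervals of Table 3.
scores = [48, 50, 46, 41, 37, 48, 38, 47, 49, 44, 48, 49, 40,
          43, 47, 48, 42, 44, 38, 49, 34, 35, 47, 40, 48]

intervals = [(33, 35), (36, 38), (39, 41), (42, 44), (45, 47), (48, 50)]
for low, high in intervals:
    count = sum(1 for s in scores if low <= s <= high)
    print(f"{low}-{high} | {'*' * count} ({count})")
```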