SUBJECTS: SNED 5, GE 7, IFP 1, STATISTICS, ENGLISH
Teacher: Sir Karl Cardona
Assessment of Learning
By: OLAZO, YVONNE EREKA D.
2nd Year, BEED – SNED
CONTENT
I. Educational Technology
A. Device
B. Classification of Devices
C. Non-projected Audio-Visual Aids
CONTENT
II. Assessment of Learning
A. Measurement, Evaluation, and Test
B. Classification of Tests
C. Criteria of a Good Examination
D. Points to Consider in Preparing a Test
E. Stages in Test Construction
CONTENT
F. Major Considerations in Test Construction
G. General Principles in Constructing Different Types of Test
H. Pointers to Be Observed in Constructing and Scoring the Different Types of Test
CONTENT
III. Statistical Measures or Tools Used in Interpreting Numerical Data
A. Frequency Distributions
B. Measures of Central Tendency
C. Measures of Dispersion
D. Standard Deviation
E. Graphing Distributions
I. Educational Technology
▪ Audio-visual aids are defined as any devices used to aid
in the communication of an idea. As such, virtually
anything can be used as an audio-visual aid provided it
successfully communicates the idea or information for
which it is designed.
▪ Audio-visual aids include still photography, motion pictures, audio or video tape, slides, and filmstrips, prepared individually or in combination to communicate information or to elicit a desired response.
DEVICE
 A device is any means, other than the subject matter itself, that is employed by the teacher in presenting the subject matter to the learner.
Purpose of Visual Devices
1. To challenge students’
attention
2. To stimulate the imagination
and develop the mental
imagery of the pupils
3. To facilitate the understanding
of the pupils
4. To provide motivation to the
learners
5. To develop the ability to listen
Traditional Visual Aids
1. Demonstration
2. Field trips
3. Laboratory experiments
4. Pictures, film simulations, models
5. Real objects
Classification of Devices
1. Extrinsic – used to supplement a method being used.
Example: pictures, graphs, film strips, slides, etc.
2. Intrinsic – used as a part of the method or teaching procedure.
Example: pictures accompanying an article.
3. Material Devices – devices that have no direct bearing on the subject matter.
Example: blackboard, chalk, books, pencils, etc.
4. Mental Devices – devices that are related in form and meaning to the subject matter being presented.
Example: questions, projects, drills, lesson plans, etc.
NON-PROJECTED
AUDIOVISUAL AIDS
Non-projected aids are those that do not require the use of
audio-visual equipment such as a projector and screen. These
include charts, graphs, maps, illustrations, photographs,
brochures, and handouts. Charts are commonly used almost
everywhere.
A chart is a diagram which shows relationships. An organizational chart is one of the most widely and commonly used kinds of chart.
II. ASSESSMENT OF LEARNING
It focuses on the development and utilization of assessment tools to improve the teaching-learning process. It emphasizes the use of testing for measuring knowledge, comprehension, and other thinking skills. It allows the students to go through the standard steps in test construction for quality assessment. Students will experience how to develop rubrics for performance-based and portfolio assessment.
Measurement
Refers to the quantitative aspect of evaluation. It involves outcomes that can be quantified statistically. It can also be defined as a process of determining and differentiating information about the attributes or characteristics of things.
Evaluation
Is the qualitative aspect of determining the outcomes of learning. It involves value judgement. Evaluation is more comprehensive than measurement.
Test
Consists of questions, exercises, or other devices for measuring the outcomes of learning.
CLASSIFICATION OF TESTS
According to manner of
response
a. Oral
b. Written
According to method
of preparation
a. Subjective / essay
b. Objective
CLASSIFICATION OF TESTS
According to the nature
of the answer
a. Personality tests
b. Intelligence tests
c. Aptitude tests
d. Achievement or summative tests
e. Sociometric tests
f. Diagnostic or formative tests
g. Trade or vocational tests
CLASSIFICATION OF TESTS
▪ Objective tests are tests which have definite answers and therefore are not subject to personal bias.
▪ Teacher-made tests or educational tests are constructed by the teachers based on the contents of the different subjects taught.
▪ Diagnostic tests are used to measure a student’s strengths and weaknesses, usually to identify deficiencies in skills or performance.
▪ Formative and Summative are terms often used with evaluation, but they may also be used with testing. Formative testing is done to monitor students’ attainment of the instructional objectives; it occurs over a period of time and monitors student progress. Summative testing is done at the conclusion of instruction and measures the extent to which students have attained the desired outcomes.
CLASSIFICATION OF TESTS
▪ Standardized tests are already valid, reliable, and objective. Standardized tests are tests for which the contents have been selected and for which norms or standards have been established. Psychological tests and government national examinations are examples of standardized tests.
▪ Standards or norms are the goals to be achieved, expressed in terms of the average performance of the population tested.
▪ A criterion-referenced measure is a measuring device with a predetermined level of success or standard on the part of the test-takers. For example, a score of 75 percent on all the test items could be considered a satisfactory performance.
▪ A norm-referenced measure is a test that is scored on the basis of the norm or standard level of accomplishment of the whole group taking the test. The grades of the students are based on the normal curve of distribution.
CRITERIA OF A GOOD
EXAMINATION
A good examination must pass the
following criteria:
1. Validity
- Refers to the degree to which a test measures what it intends to measure. It is the usefulness of the test for a given measure.
- A valid test is always reliable. To test the validity of a test, it is pretested in order to determine if it really measures what it intends to measure or what it purports to measure.
2. Reliability
- Pertains to the consistency with which a test measures what it is supposed to measure.
- The test of reliability is the consistency of the results when it is administered to different groups of individuals with similar characteristics, in different places, at different times.
- Also, the results are almost similar when the test is given to the same group of individuals on different days, and the coefficient of correlation is not less than 0.85.
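As an illustration of this consistency check, here is a minimal Python sketch that computes the coefficient of correlation between two administrations of the same test. The two score lists are hypothetical and are used only to show the computation.

```python
# Reliability check: correlate scores from two administrations of the same test.
# The score lists below are hypothetical, for illustration only.

def pearson_r(x, y):
    """Pearson coefficient of correlation between two score lists of equal length."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    var_x = sum((a - mean_x) ** 2 for a in x)
    var_y = sum((b - mean_y) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

first_testing  = [48, 50, 46, 41, 37, 48, 38, 47, 49, 44]   # hypothetical scores
second_testing = [47, 49, 45, 43, 36, 48, 40, 46, 50, 44]   # hypothetical scores

r = pearson_r(first_testing, second_testing)
print(f"coefficient of correlation = {r:.2f}")   # treated as reliable if not less than 0.85
```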
3. Objectivity
- Is the degree to which personal bias is eliminated in the scoring of the answers. When we refer to the quality of measurement, essentially, we mean the amount of information contained in a score generated by the measurement.
- Measures of students’ instructional outcomes are rarely as precise as those of physical characteristics such as height and weight.
4. Nominal Measurement
- Nominal measures are the least sophisticated; they merely classify objects or events by assigning numbers to them.
- These numbers are arbitrary and imply no quantification, but the categories must be mutually exclusive and exhaustive.
- For example, one could designate baseball positions by assigning the pitcher the numeral 1; the catcher, 2; the first baseman, 3; the second baseman, 4; and so on. These assignments are arbitrary, and no arithmetic on these numbers is meaningful. For example, 1 plus 2 does not equal 3, because a pitcher plus a catcher does not equal a first baseman.
5. Ordinal Measurement
- Ordinal scales classify, but they also assign rank order. An example of ordinal measurement is ranking individuals in a class according to their test scores.
- Students’ scores could be ordered from first, second, third, and so forth down to the lowest score. Such a scale gives more information than nominal measurement, but it still has limitations.
- The units of an ordinal scale are most likely unequal. The number of points separating the first and second students probably does not equal the number separating the fifth and sixth students.
6. Interval Measurement
- In order to be able to add and subtract scores, we use interval scales, sometimes called equal-interval or equal-unit measurement.
- This measurement scale contains the nominal and ordinal properties and is also characterized by equal units between score points.
- Examples include thermometers and calendar years.
7. Ratio Measurement
- The most sophisticated type of measurement includes all the preceding properties, but in a ratio scale the zero point is not arbitrary; a score of zero indicates the absence of what is being measured.
- For example, if a person’s wealth equaled zero, he or she would have no wealth at all. This is unlike a social studies test, where missing every item (i.e., receiving a score of zero) does not mean a complete absence of knowledge.
- Ratio measurement is rarely achieved in educational assessment, in either the cognitive or the affective areas.
8. Norm-Referenced and Criterion-Referenced Measurement
- When we contrast norm-referenced measurement (or testing) with criterion-referenced measurement, we are basically referring to two different ways of interpreting information. However, Popham (1988, page 135) points out that certain characteristics tend to go with each type of measurement, and it is unlikely that results of norm-referenced tests are interpreted in criterion-referenced ways, and vice versa.
Norm-referenced Interpretation
 Norm-referenced interpretation has historically been used in education, and norm-referenced tests continue to comprise a substantial portion of the measurement in today’s schools.
 It stems from the desire to differentiate among individuals, or to discriminate among the individuals of some defined group, on whatever is being measured. In norm-referenced measurement, an individual’s score is interpreted by comparing it to the scores of a defined group, often called the normative group. Norms represent the scores earned by one or more groups of students who have taken the test.
Criterion-Referenced Interpretation
 A dual meaning has developed for criterion-referenced measurement. On one hand, it means referencing an individual’s performance to some criterion that is a defined performance level.
 The individual’s score is interpreted in absolute rather than relative terms. The criterion, in this situation, means some level of specified performance that has been determined independently of how others might perform.
Distinctions between Norm-Referenced and Criterion-Referenced Tests
Although interpretations, not characteristics, provide the distinction between norm-referenced and criterion-referenced tests, the two types do tend to differ in some ways. Norm-referenced tests are usually more general and comprehensive and cover a large domain of content and learning tasks. They are used for survey testing, although this is not their exclusive use.
Criterion-referenced tests
 focus on a specific group of learner behaviours. To show the contrast, consider an example. Arithmetic skills represent a general and broad category of student outcomes and would likely be measured by a norm-referenced test.
 On the other hand, behaviors such as solving addition problems with two five-digit numbers, or determining the multiplication products of three- and four-digit numbers, are much more specific and may be measured by criterion-referenced tests.
Norm-referenced tests
 A norm-referenced interpretation is a relative interpretation based on an individual’s position with respect to some group, often called the normative group. Norms consist of the scores, usually in some form of descriptive statistics, of the normative group.
 Achievement Tests as an Example: most standardized achievement tests, especially those covering several skills and academic areas, are primarily designed for norm-referenced interpretations.
POINTS TO BE CONSIDERED IN PREPARING A TEST
1. Are the instructional objectives clearly defined?
2. What knowledge, skills, and attitudes do you want to measure?
3. Did you prepare a table of specifications?
4. Did you formulate well-defined and clear test items?
5. Did you employ correct English in writing the items?
6. Did you avoid giving clues to the correct answer?
7. Did you test the important ideas rather than the trivial?
8. Did you adapt the test’s difficulty to your students’ ability?
9. Did you avoid using textbook jargon?
10. Did you cast the items in positive form?
11. Did you prepare a scoring key?
12. Does each item have a single correct answer?
13. Did you review your items?
STAGES IN TEST CONSTRUCTION
I. Planning the Test
a. Determining the Objectives
b. Preparing the Table of Specifications
c. Selecting the Appropriate Item Format
d. Writing the Test Items
e. Editing the Test Items
STAGES IN TEST CONSTRUCTION
II. Trying Out the Test
a. Administering the First Try-out, then Item Analysis
b. Administering the Second Try-out, then Item Analysis
c. Preparing the Final Form of the Test
STAGES IN TEST CONSTRUCTION
III. Establishing Test Validity
IV. Establishing Test Reliability
V. Interpreting the Test Score
MAJOR CONSIDERATIONS IN TEST CONSTRUCTION
The following are the major considerations in test construction:
Type of Test
▪ Our usual idea of testing is an in-class test that is administered by the teacher. However, there are many variations on this theme: group tests, individual tests, written tests, oral tests, speed tests, power tests, and pre-tests and posttests. Each of these has different characteristics that must be considered when the test is planned.
▪ If it is a take-home test rather than an in-class test, how do you make sure that students work independently, have equal access to sources and resources, or spend a sufficient but not enormous amount of time on the task? If it is a pretest, should it exactly match the posttest so that a gain score can be computed, or should the pretest contain items that are diagnostic of prerequisite skills and knowledge? If it is an achievement test, should partial credit be awarded, should there be penalties for guessing, or should points be deducted for grammar and spelling errors?
Test Length
▪ A major decision in test planning is how many items should be included on the test. There should be enough to cover the content adequately, but the length of the class period or the attention span and fatigue limits of the students usually restrict the test length. Decisions about test length are usually based on practical constraints more than on theoretical considerations.
Item Formats
▪ Determining what kinds of items to include on the test is a major decision. Should they be objectively scored formats such as multiple choice or matching type? Should they cause the students to organize their own thoughts through short-answer or essay formats? These are important questions that can be answered only by the teacher in terms of the local context, his or her students, his or her classroom, and the specific purpose of the test. Once the planning decision is made, the item writing begins. This task is often the most feared by beginning test constructors. However, the procedures are more common sense than formal rules.
GENERAL PRINCIPLES IN CONSTRUCTING DIFFERENT TYPES OF TEST
1. The test items should be selected very carefully. Only important facts should be included.
2. The test should have an extensive sampling of items.
3. The test items should be carefully expressed in simple, clear, definite, and meaningful sentences.
4. There should be only one possible correct response for each test item.
5. Each item should be independent. Leading clues to other items should be avoided.
6. Lifting sentences from books should not be done, to encourage thinking and understanding.
7. The first-person pronouns I and we should not be used.
8. Various types of test items should be used to avoid monotony.
9. The majority of the test items should be of moderate difficulty. Few difficult and few easy items should be included.
10. The test items should be arranged in ascending order of difficulty. Easy items should be at the beginning to encourage the examinee to pursue the test, and the most difficult items should be at the end.
11. Clear, concise, and complete directions should precede all types of test. Sample test items may be provided for expected responses.
12. Items which can be answered by previous experience alone, without knowledge of the subject matter, should not be included.
13. Catchy words should not be used in the test items.
14. Test items must be based upon the objectives of the course and upon the course content.
15. The test should measure the degree of achievement or determine the difficulties of the learners.
16. The test should emphasize the ability to apply and use facts as well as knowledge of facts.
17. The test should be of such length that it can be completed within the time allotted by all or nearly all of the pupils. The teacher should perform the test herself to determine its approximate time allotment.
18. Rules governing good language expression, grammar, spelling, punctuation, and capitalization should be observed at all times.
19. Information on how scoring will be done should be provided.
POINTERS TO BE OBSERVED IN CONSTRUCTING AND SCORING THE DIFFERENT TYPES OF TESTS
A. RECALL TYPES
1. Simple recall type
a) This type consists of questions calling for a single word or expression as an answer.
b) Items usually begin with who, where, when, and what.
c) Score is the number of correct answers.
2. Completion type
a) Only important words or phrases should be omitted to avoid confusion.
b) Blanks should be of equal length.
c) The blank, as much as possible, is placed near or at the end of the sentence.
d) The articles a, an, and the should not be provided immediately before the omitted word or phrase, to avoid giving clues to the answers.
e) Score is the number of correct answers.
3. Enumeration type
a) The exact number of expected answers should be stated.
b) Blanks should be of equal length.
c) Score is the number of correct answers.
4. Identification type
a) The items should make an examinee think of a word, number, or group of words that would complete the statement or answer the problem.
b) Score is the number of correct answers.
B. RECOGNITION TYPES
1. True-false or alternate-response type
a) Declarative sentences should be used.
b) The number of “true” and “false” items should be more or less equal.
c) The truth or falsity of the sentence should not be too evident.
d) Negative statements should be avoided.
e) The “modified true-false” is preferable to the plain “true-false”.
f) In arranging the items, avoid the regular recurrence of “true” and “false” statements.
g) Avoid using specific determiners like all, always, never, none, nothing, most, often, some, etc., and avoid weak statements such as may, sometimes, as a rule, in general, etc.
h) Minimize the use of qualitative terms like few, great, many, more, etc.
i) Avoid leading clues to answers at all times.
j) Score is the number of correct answers in “modified true-false”, and right answers minus wrong answers in “plain true-false”.
2. Yes-No type
a) The items should be in interrogative sentences.
b) The same rules as in true-false are applied.
3. Multiple-response type
a) There should be three to five choices. The number of choices used in the first item should be the same number of choices in all the items of this type of test.
b) The choices should be numbered or lettered so that only the number or letter can be written on the blank provided.
c) If the choices are figures, they should be arranged in ascending order.
d) Avoid the use of “a” or “an” as the last word prior to the listing of the responses.
e) Random occurrence of responses should be employed.
f) The choices, as much as possible, should be at the end of the statements.
g) The choices should be related in some way or should belong to the same class.
h) Avoid the use of “none of these” as one of the choices.
i) Score is the number of correct answers.
4. Best answer type
a) There should be three to five choices, all of which are right but vary in their degree of merit, importance, or desirability.
b) The other rules for multiple-response items are applied here.
c) Score is the number of correct answers.
5. Matching type
a) There should be two columns. Under column “A” are the stimuli, which should be longer and more descriptive than the responses under column “B”. The response may be a word, a phrase, a number, or a formula.
b) The stimuli under column “A” should be numbered and the responses under column “B” should be lettered. Answers will be indicated by letters only, on lines provided in column “A”.
c) The number of pairs usually should not exceed twenty items. Fewer than ten introduces chance elements. Twenty pairs may be used, but more than twenty is decidedly wasteful of time.
d) The number of responses in column “B” should be two or more than the number of items in column “A”, to avoid guessing.
e) Only one correct matching for each item should be possible.
f) Matching sets should be neither too long nor too short.
g) All items should be on the same page to avoid turning of pages in the process of matching pairs.
h) Score is the number of correct answers.
C. Essay Type of Examinations
1. Common types of essay questions (the types are related to the purposes for which the essay examinations are to be used):
a. Comparison of two things
b. Explanation of the use or meaning of a statement or passage
c. Analysis
d. Decisions for or against
e. Discussion
2. How to construct essay examinations:
a. Determine the objectives or essentials for each question to be evaluated.
b. Phrase each question in simple, clear, and concise language.
c. Suit the length of the questions to the time available for answering the essay examination. The teacher should try to answer the test herself.
d. Scoring: have a model answer in advance; indicate the number of points for each question; and score a point for each essential.
Advantages and Disadvantages of the Objective Type of Tests
Advantages
a. The objective test is free from personal bias in scoring.
b. It is easy to score. With a scoring key, the test can be corrected by different individuals without affecting the accuracy of the grades given.
c. It has high validity because it is comprehensive, with a wide sampling of essentials.
d. It is less time-consuming, since many items can be answered in a given time.
e. It is fair to students, since the slow writers can accomplish the test as fast as the fast writers.
Disadvantages
a. It is difficult to construct and requires more time to prepare.
b. It does not afford the students the opportunity to train in self-expression and thought organization.
c. It cannot be used to test ability in theme writing or journalistic writing.
Advantages and Disadvantages of the Essay Type of Tests
Advantages
a. The essay examination can be used in practically all subjects of the school curriculum.
b. It trains students for thought organization and self-expression.
c. It affords students opportunities to express their originality and independence of thinking.
d. Only the essay test can be used in some subjects, like composition writing and journalistic writing, which cannot be tested by the objective type of test.
e. The essay examination measures higher mental abilities like comparison, interpretation, criticism, defence of opinion, and decision.
f. The essay test is easily prepared.
g. It is inexpensive.
Disadvantages
a. The limited sampling of items makes the test an unreliable measure of achievements or abilities.
b. Questions usually are not well prepared.
c. Scoring is highly subjective due to the influence of the corrector’s personal judgment.
d. Grading of the essay test is an inaccurate measure of pupils’ achievements due to the subjectivity of scoring.
III. STATISTICAL MEASURES OR TOOLS
USED IN INTERPRETING NUMERICAL DATA
Frequency Distributions
▪ A simple, common-sense technique for describing a set of test scores is the frequency distribution. A frequency distribution is merely a listing of the possible score values and the number of persons who achieved each score. Such an arrangement presents the scores in a simpler and more understandable manner than merely listing all of the separate scores. Consider a specific set of scores to clarify these ideas.
▪ A set of scores for a group of 25 students who took a 50-item test is listed in Table 1. It is easier to analyse the scores if they are arranged in a simple frequency distribution. (The frequency distribution for the same set of scores is given in Table 2.) The steps involved in creating the frequency distribution are:
▪ First, list the possible score values in rank order, from highest to lowest. Then a second column indicates the frequency, or number of persons who received each score. For example, three students received a score of 47, two received 40, and so forth. There is no need to list score values below the lowest score that anyone received.
Table 1. Scores of 25 Students on a 50-Item Test
Student Score Student Score
A 48 N 43
B 50 O 47
C 46 P 48
D 41 Q 42
E 37 R 44
F 48 S 38
G 38 T 49
H 47 U 34
I 49 V 35
J 44 W 47
K 48 X 40
L 49 Y 48
M 40
Table 2. Frequency Distribution of the 25 Scores of Table 1
Score Frequency Score Frequency
50 1 41 1
49 3 40 2
48 5 39 0
47 3 38 2
46 1 37 1
45 0 36 0
44 2 35 1
43 1 34 1
42 1
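As a sketch of these steps, the frequency counts in Table 2 can be reproduced in Python from the 25 scores in Table 1:

```python
from collections import Counter

# The 25 scores from Table 1 (students A through Y).
scores = [48, 50, 46, 41, 37, 48, 38, 47, 49, 44, 48, 49, 40,
          43, 47, 48, 42, 44, 38, 49, 34, 35, 47, 40, 48]

counts = Counter(scores)

# List every score value from the highest to the lowest score obtained,
# with the number of students who received it (zeros included).
for value in range(max(scores), min(scores) - 1, -1):
    print(value, counts[value])
```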
• When there is a wide range of scores in a frequency distribution, the distribution can be quite long, with a lot of zeros in the column of frequencies. Such a frequency distribution can make interpretation of the scores difficult and confusing. A grouped frequency distribution would be more appropriate in this kind of situation. Groups of score values are listed rather than each separate possible score value.
• If we were to change the frequency distribution in Table 2 into a grouped frequency distribution, we might choose intervals such as 48-50, 45-47, and so forth. The frequency corresponding to interval 48-50 would be 9 (1 + 3 + 5). The choice of the width of the interval is arbitrary, but it must be the same for all intervals. In addition, it is a good idea to have an odd-numbered interval width (we used 3 above) so that the midpoint of the interval is a whole number. This strategy will simplify subsequent graphs and descriptions of the data. The grouped frequency distribution is presented in Table 3.
Table 3. Grouped Frequency Distribution
Score Interval Frequency
48-50 9
45-47 4
42-44 4
39-41 3
36-38 3
33-35 2
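The grouping itself can also be sketched in Python; the snippet below assumes the same Table 1 scores and an interval width of 3, reproducing the counts in Table 3:

```python
from collections import Counter

scores = [48, 50, 46, 41, 37, 48, 38, 47, 49, 44, 48, 49, 40,
          43, 47, 48, 42, 44, 38, 49, 34, 35, 47, 40, 48]

width = 3    # an odd-numbered interval width keeps the interval midpoint a whole number
top = 50     # upper limit of the highest interval (48-50)

# Assign each score to an interval counted downward from the top interval.
grouped = Counter((top - s) // width for s in scores)

for k in sorted(grouped):
    upper = top - k * width
    lower = upper - width + 1
    print(f"{lower}-{upper}: {grouped[k]}")
```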
Frequency distributions summarize sets of test scores by listing the number of people who received each test score. All of the test scores can be listed separately, or the scores can be grouped in a grouped frequency distribution.
• Frequency distributions are helpful for indicating the shape of a distribution of scores, but we need more information than the shape to describe a distribution adequately. We need to know where on the scale of measurement a distribution is located and how the scores are dispersed in the distribution. For the former, we compute measures of central tendency, and for the latter, we compute measures of dispersion. Measures of central tendency are points on the scale of measurement, and they are representative of how the scores tend to average. There are three commonly used measures of central tendency: the mean, the median, and the mode, but the mean is by far the most widely used.
MEASURES OF CENTRAL TENDENCY
The Mean
• The mean of a set of scores is the arithmetic average. It is found by summing the scores and dividing the sum by the number of scores. The mean is the most commonly used measure of central tendency because it is easily understood and is based on all of the scores in the set; hence, it summarizes a lot of information. The formula for the mean is as follows:
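Mean (X̄) = ΣX / N, where ΣX is the sum of all the scores and N is the number of scores. For the 25 scores in Table 1, the mean is 1100 / 25 = 44.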
The Median
• Another measure of central tendency is the median, which is the point that divides the distribution in half; that is, half of the scores fall above the median and half of the scores fall below it.
• When there are only a few scores, the median can often be found by inspection. If there is an odd number of scores, the middle score is the median. When there is an even number of scores, the median is halfway between the two middle scores. However, when there are tied scores in the middle of the distribution, or when the scores are in a frequency distribution, the median may not be so obvious.
• Consider again the frequency distribution in Table 2. There were 25 scores in the distribution, so the middle score should be the median. A straightforward way to find this median is to augment the frequency distribution with a column of cumulative frequencies.
• Cumulative frequencies indicate the number of scores at or below each score. Table 4 indicates the cumulative frequencies for the data in Table 2.
Table 4. Frequency Distribution and Cumulative Frequencies for the Scores of Table 2
Score Frequency Cumulative Frequency
50 1 25
49 3 24
48 5 21
47 3 16
46 1 13
45 0 12
44 2 12
43 1 10
42 1 9
41 1 8
40 2 7
39 0 5
38 2 5
37 1 3
36 0 2
35 1 2
34 1 1
For example, 7 people scored at or below a score of 40, and 21 persons scored at or below a score of 48.
 To find the median, we need to locate the middle score in the cumulative frequency column, because this score is the median. Since there are 25 scores in the distribution, the middle one is the 13th, a score of 46. Thus, 46 is the median of this distribution; half of the people scored above 46 and half scored below.
 When there are ties in the middle of the distribution, there may be a need to interpolate between scores to get the exact median. However, such precision is not needed for most classroom tests. The whole number closest to the median is usually sufficient.
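A small Python sketch of this cumulative-frequency procedure, using the Table 1 scores, locates the 13th score:

```python
from collections import Counter

scores = [48, 50, 46, 41, 37, 48, 38, 47, 49, 44, 48, 49, 40,
          43, 47, 48, 42, 44, 38, 49, 34, 35, 47, 40, 48]

counts = Counter(scores)
n = len(scores)
middle = (n + 1) // 2          # the 13th score when n = 25

cumulative = 0
for value in range(min(scores), max(scores) + 1):   # work upward from the lowest score
    cumulative += counts[value]                      # number of people scoring at or below this value
    if cumulative >= middle:
        print("median =", value)                     # prints 46 for the Table 1 data
        break
```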
The Mode
• The measure of central tendency that is the easiest to find is the mode. The mode is the most frequently occurring score in the distribution. The mode of the scores in Table 2 is 48. Five people had scores of 48, and no other score occurred as often.
• Each of these three measures of central tendency (the mean, the median, and the mode) provides a legitimate definition of “average” performance on this test. However, each does provide different information. The arithmetic average was 44; half of the people scored at or below 46; and more people received 48 than any other score.
• There are some distributions in which all three measures of central tendency are equal, but more often than not they will be different. The choice of which measure of central tendency is best will differ from situation to situation. The mean is used most often, perhaps because it includes information from all of the scores.
• When a distribution has a small number of very extreme scores, though, the median may be a better definition of central tendency. The mode provides the least information and is used infrequently as an “average”. The mode can be used with nominal scale data, just as an indicator of the most frequently appearing category. The mean, the median, and the mode all describe central tendency:
• The mean is the arithmetic average.
• The median divides the distribution in half.
• The mode is the most frequent score.
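As a quick check of these three values for the Table 1 scores, Python's statistics module can be used:

```python
import statistics

scores = [48, 50, 46, 41, 37, 48, 38, 47, 49, 44, 48, 49, 40,
          43, 47, 48, 42, 44, 38, 49, 34, 35, 47, 40, 48]

print(statistics.mean(scores))     # arithmetic average: 44
print(statistics.median(scores))   # middle score: 46
print(statistics.mode(scores))     # most frequent score: 48
```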
MEASURES OF DISPERSION
Measures of central tendency are useful for summarizing average performance, but they tell us nothing about how the scores are distributed or “spread out”. Two distributions may have the same average yet differ in other ways. One of the distributions may have the scores tightly clustered around the average, and the other distribution may have scores that are widely separated. As you may have anticipated, there are descriptive statistics that measure dispersion, which are also called measures of variability. These measures indicate how spread out the scores tend to be.
The Range
The range indicates the difference between the highest and lowest scores in the distribution. It is simple to calculate, but it provides limited information. We subtract the lowest score from the highest score and add 1, so that we include both scores in the spread between them. For the scores of Table 2, the range is 50 – 34 + 1 = 17.
A problem with using the range is that only the two most extreme scores are used in the computation. There is no indication of the spread of scores between the highest and lowest. Measures of dispersion that take into consideration every score in the distribution are the variance and the standard deviation. The standard deviation is used a great deal in interpreting scores from standardized tests.
The Variance
The variance measures how widely the scores in the distribution are spread about the mean. In other words, the variance is the average squared difference between the scores and the mean. As a formula, it looks like this:
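S² = Σ(X − X̄)² / N, where X is each score, X̄ is the mean, and N is the number of scores. For the Table 1 scores, S² = 570 / 25 = 22.8.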
The computation of the variance for the scores of Table 1 is illustrated in Table 5. The data for students K through V are omitted to save space, but these values are included in the column totals and in the computation.
The Standard Deviation
The standard deviation also indicates how spread out the scores are, but it is expressed in the same units as the original scores. The standard deviation is computed by finding the square root of the variance:
S = √S²
For the data in Table 1, the variance is 22.8. The standard deviation is √22.8, or 4.77.
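A short Python sketch verifies these values from the Table 1 scores (dividing by N, as in the text, rather than by N - 1):

```python
scores = [48, 50, 46, 41, 37, 48, 38, 47, 49, 44, 48, 49, 40,
          43, 47, 48, 42, 44, 38, 49, 34, 35, 47, 40, 48]

n = len(scores)
mean = sum(scores) / n                                   # 44.0
variance = sum((x - mean) ** 2 for x in scores) / n      # 22.8
std_dev = variance ** 0.5                                # about 4.77

print(mean, variance, round(std_dev, 2))
```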
The scores of most norm groups have the shape of a “normal distribution”: a symmetrical, bell-shaped distribution with which most people are familiar. With a normal distribution, about 95 percent of the scores are within two standard deviations of the mean.
Even when scores are not normally distributed, most of the scores will be within two standard deviations of the mean. In the example, the mean minus two standard deviations is 34.46, and the mean plus two standard deviations is 53.54. Therefore, only one score is outside this interval; the lowest score, 34, is slightly more than two standard deviations from the mean.
The usefulness of the standard deviation becomes apparent when scores from different tests are compared. Suppose that two tests are given to the same class, one on fractions and the other on reading comprehension. The fractions test has a mean of 30 and a standard deviation of 8; the reading comprehension test has a mean of 60 and a standard deviation of 10.
If Ann scored 38 on the fractions test and 55 on the reading comprehension test, it appears from the raw scores that she did better in reading than in fractions, because 55 is greater than 38. Relative to the class, however, her 38 in fractions is one standard deviation above that test’s mean, while her 55 in reading is half a standard deviation below that test’s mean, so she actually performed better in fractions.
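Expressed as standard-deviation units (often called z-scores), the comparison can be sketched in Python; the means and standard deviations are those given in the example above:

```python
def z_score(raw, mean, sd):
    """Number of standard deviations a raw score lies above (+) or below (-) its test's mean."""
    return (raw - mean) / sd

# Ann's two scores, with each test's mean and standard deviation from the example.
fractions_z = z_score(38, mean=30, sd=8)    #  1.0 -> one SD above the fractions mean
reading_z   = z_score(55, mean=60, sd=10)   # -0.5 -> half an SD below the reading mean

print(fractions_z, reading_z)   # relative to the group, Ann did better in fractions
```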
Descriptive statistics that indicate dispersion are the range, the variance, and the standard deviation.
The range is the difference between the highest and lowest scores in the distribution, plus one.
The standard deviation is a unit of measurement that shows by how much the separate scores tend to differ from the mean.
The variance is the square of the standard deviation. Most scores are within two standard deviations of the mean.
Graphing Distributions
 A graph of a distribution of test scores is often better understood than the frequency distribution or a mere table of numbers.
 The general pattern of scores, as well as any unique characteristics of the distribution, can be seen easily in simple graphs. There are several kinds of graphs that can be used, but a simple bar graph, or histogram, is as useful as any.
 The general shape of the distribution is clear from the graph. Most of the scores in this distribution are high, at the upper end of the graph.
 Such a shape is quite common for the scores of classroom tests.
 A normal distribution has most of the test scores in the middle of the distribution and progressively fewer scores toward the extremes. The scores of norm groups are seldom graphed, but they could be if we were concerned about seeing the specific shape of the distribution of scores.
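As a sketch of such a graph, the Table 1 scores can be plotted as a simple histogram with matplotlib (assuming the library is available); the bin edges follow the 3-point intervals of Table 3:

```python
import matplotlib.pyplot as plt

scores = [48, 50, 46, 41, 37, 48, 38, 47, 49, 44, 48, 49, 40,
          43, 47, 48, 42, 44, 38, 49, 34, 35, 47, 40, 48]

# One bar per 3-point interval (33-35, 36-38, ..., 48-50), matching Table 3.
bins = [32.5, 35.5, 38.5, 41.5, 44.5, 47.5, 50.5]

plt.hist(scores, bins=bins, edgecolor="black")
plt.xlabel("Test score")
plt.ylabel("Number of students")
plt.title("Distribution of the 25 scores from Table 1")
plt.show()
```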
Educational technology-assessment-of-learning-and-statistical-measures-ed-09-2020

Educational technology-assessment-of-learning-and-statistical-measures-ed-09-2020

  • 1.
    SUBJECT SNED 5 GE– 7 IFP - 1 STATISTICS ENGLISH
  • 2.
  • 3.
    Assessment of Learning By:OLAZO,YVONNE EREKA D. 2ND year BEED – SNED
  • 4.
    CONTENT I. EducationalTechnology A. Device B.Classification of Devices C. Non-projected Audio-Visual Aids
  • 5.
    CONTENT II. Assessment ofLearning A. Measurement Evaluation Test B. Classification ofTests C. Criteria of a Good Examination D. Points to Consider in Preparing aTest E. Stages inTest Constructing
  • 6.
    CONTENT F. Major ConsiderationinTest Construction G. General Principles in Constructing Different Types of Test H. Pointers to be Observed in Constructing And Scoring the Different Types ofTest
  • 7.
    CONTENT III. Statistical MeasuresorTools Use In Interpreting Numerical Data A. Frequency of Distribution B. Measures of CentralTendency C. Measures of Dispersion D. Standard of Deviation E. Graphing Distibution
  • 8.
    I. Educational Technology ▪Audio-visual aids are defined as any devices used to aid in the communication of an idea. As such, virtually anything can be used as an audio-visual aid provided it successfully communicates the idea or information for which it is designed. ▪ An audio-visual aid includes still photography, motion picture, audio or video tape, slide or filmstrip, that is prepared individually or in combination to communicate information or to elicit a desired audio response.
  • 9.
    DEVICE  Device isany means other than the subject-matter itself that is employed by the teacher in presenting the subject matter to the learner
  • 10.
    Purpose of VisualDevices 1. To challenge students’ attention 2. To stimulate the imagination and develop the mental imagery of the pupils 3. To facilitate the understanding of the pupils 4. To provide motivation to the learners 5. To develop the ability to listen
  • 11.
    TraditionalVisual Aids 1. Demonstration 2.Field trips 3. Laboratory experiments 4. Pictures, film simulations, models 5. Real objects
  • 12.
    Classification of Devices 1.Extrinsic– used of supplement a method used. Example: pictures, graph, film strips, slides, etc. 2.Intrinsic – used as a part of the method or teaching procedures. Example: pictures accompanying an article. 3.Material Devices – device that have no bearing on the subject matter. Example: blackboard, chalk, books, pencils, etc. 4.Mental Devices – a kind of device that is related in form and meaning to the subject matter being presented. Example: questions, projects, drills, lesson plans, etc.
  • 13.
    NON-PROJECTED AUDIOVISUAL AIDS Non-projected aidsare those that do not require the use of audio-visual equipment such as a projector and screen. These include charts, graphs, maps, illustrations, photographs, brochures, and handouts. Charts are commonly used almost everywhere. A chart is a diagram which shows relationships. An organizational chart is one of the most widely and commonly used kind of chart.
  • 14.
    II. ASSESSMENT OFLEARNING It focuses on the development and utilization of assessment tools to improve the teaching-learning process. It emphasizes on the use of testing for measuring knowledge, comprehension and other thinking skills. It allows the students to go through the standard steps in test constitution for quality assessment. Students will experience how to develop rubrics for performance-based and portfolio assessment.
  • 15.
    Measurement Refers to thequantitative aspect of evaluation. It involves outcomes that can be quantified statistically. It also be defined as a process in determining and differentiating the information about the attributes or characteristics of things.
  • 16.
    Evaluation Is the quantitativeaspect of determining the outcomes pf learning. It involves value judgement. Evaluation is more comprehensive than measurement.
  • 17.
    Test Consist of questionsor exercises or other devices for measuring the outcomes pf learning.
  • 18.
    CLASSIFICATION OF TESTS Accordingto manner of response a. Oral b. Written According to method of preparation a. Subjective / essay b. Objective
  • 19.
    CLASSIFICATION OF TESTS Accordingto the nature of answer a. Personality tests b. Intelligence test c. Aptitude test d. Achievement or summative test e. Sociometric test f. Diagnostic or formative test g. Trade or vocational test
  • 20.
    CLASSIFICATION OF TESTS ▪Objective tests are tests which have definite answers and therefore are not subject to personal bias. ▪ Teacher-made tests or educational test are constructed by the teachers based on the contents pf different subjects taught. ▪ Diagnostic tests are used to measure a student’s strengths and weaknesses, usually to identify deficiencies in skills or performance. ▪ Formative and Summative are terms often used with evaluation, but they may also be used with testing. Formative testing is done to monitor students’ attainment of the instructional objectives. Formative testing occurs over a period of time and monitors student progress. Summative testing is done at the conclusion of instruction and measures the extent to which students have attained the desired outcomes
  • 21.
    CLASSIFICATION OF TEST ▪Standardized tests are already valid, reliable and objective. Standardized tests are test for which contents have been selected and for which norms or standards have been established. Psychological test and government national examinations ate examples of standardized tests. ▪ Standards or norms are the goals to be achieved expressed in terms of the average performance of the population tested. ▪ Criterion-referenced measure is a measuring device with a predetermined level of success or standard on the part pf the test-takers. For example, a level of 75 percent score in all the test items could be considered a satisfactory performance. ▪ Norm-referenced measure is a test that is scored on the basis of the norm or standard level of accomplishment by the whole group taking the test.The grades of the students are based on the normal curve of distribution.
  • 22.
    CRITERIA OF AGOOD EXAMINATION A good examination must pass the following criteria:
  • 23.
    1.Validity - Refers tothe degree to which a test measures what is intended to measure. It is the usefulness of the test for a given measure. - A valid test is always reliable. To test the validity of a test it is to be presented in order to determine if it really measures what it intends to measure or what it purports to measure.
  • 24.
    2. Reliability - Pertainsto the degree to which a test measures what it supposed to measure. - The test of reliability is the consistency of the results when it is administered to different groups of individuals with similar characteristics in different places at different times. - Also, the results are almost similar when the test is given to the se group of individuals at different days and the coefficient of correlation is not less than 0.85.
  • 25.
    3. Objectivity - Isthe degree to which personal bias is eliminated in the scoring of the answers?When refer to the quality of measurement, essentially, we mean the amount of information contained in a score generated by the measurement. -Measures of student’s instructional outcomes are rarely as precise as those pf physical characteristics such as height and weight. 4. Nominal Measurement -Are the least sophisticated; they merely classify objects or even by assigning number to them. -These numbers are arbitrary and imply no quantification, but the categories must be mutually exclusive and exhaustive. -For example, one could nominate designate baseball positions by assigning the pitcher the numeral 1; the catcher, 2; the first baseman, 3; the second baseman, 4; and so on. These assignments are arbitrary of these numbers is meaningful. For example, 1 plus 2 does not equal 3, because a pitcher plus a catcher does not equal a first baseman.
  • 26.
    5. Ordinal Measurement -Ordinalscales classify, but they also assign rank order. An example of ordinal measurement is ranking individuals in a class according to their test scores. -Students’ scores could be ordered from first, second, third, and so forth to the lowest score. Such a scale gives more information than nominal measurement, but it still has limitations. -The units of ordinal are most likely unequal.The number of points separating the first and second students probably does not equal the number separating the fifth and sixth students. 6. Interval Measurement -In order to be able to add and subtract scores, we use interval scales, sometimes called equal interval or equal unit measurement. -This measurement scale contains the nominal and ordinal properties and also characterized by equal units between score points. -Examples include thermometers and calendar years.
  • 27.
    7. Ratio Measurement -The most sophisticated type of measurement includes all the preceding properties, but in a ratio scale, the zero point is not arbitrary; a score of zero includes the absence of what is being measured. - For example, if a person’s wealth equaled zero, he or she would have no wealth at all. This is unlike a social studies test, where missing every item (i.e., receiving a score of zero) - Ratio measurement is rarely achieved in educational assessment, either cognitive or affective areas. 8. Norm-Referenced and Criterion Referenced Measurement -When we contrast norm-referenced measurement (or testing) with criterion- referenced measurement, we are basically refereeing to two different ways of interpreting information. However, Popham (1988, page 135) points out that certain characteristics tend to go with each type of measurement, and it is unlikely that results of norm-referenced test are interpreted in criterion-referenced ways and vice versa.
  • 28.
    Norm-referenced Interpretation  historically hasbeen used in education norm-referenced test continue to comprise a substantial portion of the measurement is today’s schools.  It stems from the desire to differentiate among individuals or to discriminate among the individuals for some defined group on whatever is being measured. In norm-referenced measurement, an individual’s score in interpreted by comparing it to the scores of a defined group, often called the normative group. Norms represents the scores earned by one or more groups of students who have taken the test. Criterion- Referenced Interpretation  have developed with a dual meaning for criterion-referenced. On one hand, it means referencing an individual’s performance to some criterion that is a defined performance level.  The individual’s score is interpreted in absolute rather than relative terms.The criterion, in this situation, means some level of specified performance that has been determined independently of how other might perform.
  • 29.
    Distinctions between Norm- Referencedand Criterion- Referenced Tests Although interpretations, not characteristics, provide the distinction between norm-referenced and criterion-referenced test, the two types do tend to differ in some ways. Norm- referenced test are usually more general and comprehensive and cover a large domain of content and learning tasks. They are used for survey testing, although this is not their exclusive use.
  • 30.
    Criterion-referenced tests  focus ona specific group of learner behaviours.To show the contrast, consider an example. Arithmetic skills represent a general and broad category of student outcomes and would likely be measured by a norm- referenced test.  On the other hand, behaviors such as solving addition problems with two five-digit numbers or determining the multiplication products of three- and four digits numbers are much more specific and may be measured by criterion-referenced tests. Norm-referenced tests  is a relative interpretation based on an individual’s position with respect to some group, often called the normative group. Norms consist of the scores usually in some form of descriptive statistics, of the normative group.  AchievementTest as Example Most standardized achievement tests, especially those covering several skills and academic areas, are primarily designed for norm- referenced interpretations.
  • 31.
    POINTSTO BE CONSIDEREDIN PREPARING ATEST
  • 32.
    1. Are theinstructional objectives clearly defined? 2. What knowledge, skills and attitudes do want to measure? 3. Did you prepare a table of specifications? 4. Did you formulate well defined and clear test items? 5. Did you employ correct English in writing the items? 6. Did you avoid giving clues to the correct answer? 7. Did you test the important ideas rather than the trivial? 8. Did you adapt the test’s difficulty to your student’s ability? 9. Did you avoid using textbooks jargons? 10. Did you cast the items in positive forms? 11. Did you prepare a scoring key? 12. Does each item have single correct answer? 13. Did you review your items?
  • 33.
    STAGES IN TESTCONSTRUCTION I. Planning theTest a. Determining the Objectives b. Preparing theTable of Specifications c. Selecting the Appropriate Item Format d. Writing theTest Items e. Editing theTest Items
  • 34.
    STAGES IN TESTCONSTRUCTION II. Trying Out theTest a. Administering the FirstTry-out- then Item Analysis b. Administering the SecondTry-out- then Item Analysis c. Preparing the Final Form of theTest
  • 35.
    STAGES IN TESTCONSTRUCTION III. EstablishingTestValidity IV. Establishing theTest Reliability V. Interpreting theTest Score
  • 36.
    MAJOR CONSIDERATIONS INTEST CONSTRUCTION Thefollowing are the major considerations in test construction:
  • 37.
    Type ofTest ▪ Ourusual idea of testing is an in-class test that is administered by the teacher. However, there are many vibrations on this theme: group test, individual test, written test, oral test, speed test, power test, and pre-test and posttest. Each of these has different characteristics that must be considered when the test is planned. ▪ If it is a take-home test rather than an in-class test, how do you make sure that students work independently, have equal access to sources and resources, or speed a sufficient but not enormous amount of time on the task? If it is a pretest, should it exactly match the past test so that a gain score can be computed, or should the pretest contain items that are diagnostic of prerequisite skills and knowledge? If it is an achievement test should partial credit be awarded, should there be penalties for guessing, or should points be deducted for grammar and spelling errors?
  • 38.
    Test Length ▪ Amajor decision in the test planning is how many items should be included on the test. There should be enough to cover the content adequately, but the length of the class period or the attention span of fatigue limits of the students usually restricts the test length. Decisions about test length are usually based on practical constraints more than on theoretical considerations.
  • 39.
    Item Formats ▪ Determiningwhat kind of items is included on the test is a major decision. Should they be objectively scored formats such as multiple choice or matching type? Should they cause the students to organize their own thoughts through short answer essay formats? These are important questions that can be answered only by the teacher in terms of the local context, his or her students, his or her classroom, and the specific purpose of the test. Once the planning decision is made, the item writing begins. This tank is often the most feared by beginning test constructors. However, the procedures are more common sense than formal rules.
  • 40.
  • 41.
    1. The testitems should be selected very carefully. Only important facts should be included. 2. The test should have extensive sampling of items. 3. The test items should be carefully expressed in simple, clear, definite, and meaningful sentences. 4. There should be only one possible correct response for each test item. 5. Each item should be independent. Leading clues to other items should be avoided. 6. Lifting sentences from books should not be done to encourage thinking and understanding. 7. The first-person personal pronouns / and we should not be used. 8. Various types of test items should be made to avoid monotony. 9. Majority of the test items should be of moderate difficulty. Few difficult and few easy items should be included. 10. The test items should be arranged in an ascending order of difficulty. Easy items should be at the beginning to encourage the examinee to pursue the test and the most difficult items should be at the end. 11. Clear concise and complete directions should precede all types of test. Sample test. Sample test items may be provided for expected responses. 12. Items which can be answered by previous experience alone without knowledge of the subject matter should not be included. 13. Catchy words should not be used in the test items. 14.Test items must be based upon the objectives of the course and upon the course content. 15. The test should measure the degree of achievement or determine the difficulties of the learners. 16.The test should emphasize ability to apply and use facts as well as knowledge of facts. 17.The test should be of such length that it can be completed within the time allotted by all or nearly all of the pupils.The teacher should perform the test herself to determine its approximate time allotment. 18. Rules to governing good language expression, grammar, spelling, punctuation, and capitalization should be observed in all times. 19. Information on how scoring will be done should be provided.
  • 42.
    POINTERSTO BE OBSERVEDIN CONSTRUCTING AND SCORING THE DIFFERENTTYPES OFTESTS
  • 43.
    A. RECALLTYPES 1. Simplerecall type a) This type of consists of questions calling for a single word or expressions as an answer. b) Items usually begin with who, where, when, and what. c) Score is the number of correct answers. 2. Completion type a) Only important words or phrases should be omitted to avoid confusion. b) Blanks should be of equal lengths. c) The blank, as much as possible, is placed near or at the end of the sentence. d) Articles a, an, and they should not be provide before the end of omitted word or phrase to avoid clues for answers. e) Score is the number of correct answers.
  • 44.
    3. EnumerationType a) Theexact number of expected answers should be started. b) Blanks should be equal lengths. c) Score is the number of correct answers. 4. Identification type a) The items should make an examinee think of a word, number, or group of words that would complete the statement or answer the problem. b) Score is the number of correct answers.
  • 45.
    B. RECOGNITIONTYPES 1.True-false oralternate-response type a) Declarative sentences should be used. b) The number of “true” and “false” items should be more or less equal. c) The truth or falsity of the sentence should not be too evident. d) Negative statements should be avoided. e) The “modified true – false” is more preferable than the plain true-false”. f) In arranging the items, avoid the regular recurrence of “true” and “false” statements. g) Avoid using specific determiners like: all, always, never, none, nothing, most, often, some, etc, and avoid weak statements as may, sometimes, as a rule, in general etc. h) Minimize the use of qualitative terms like; few, great, many, more, etc. i) Avoid leading clues to answers in all times. j) Score is the number of correct answers in “modified true-false and right answers minus wrong answers in “plain true-false”.
  • 46.
    2.Yes-No type a) Theitems should be in interrogative sentences. b) The same rules as in true-false are applied. 3. Multiple-response type a) There should be three to five choices.The number of choices used in the first item should be the same number of choices in all the items of this type of test. b) The choices should be numbered or lettered so that only the number or letter can be written on blank provided. c) If the choices are figures, they should be arranged in ascending order. d) Avoid the use of “a” or “an” as the last word prior to the listing of the responses.
  • 47.
    e. Random occurrenceof responses should be employed f. The choices, as much as possible, should be at the end of the statements. g. The choices should be related in some way or should belong to the same class. h. Avoid the use of “none of these” as one of the choices. I. Score is the number of correct answers.
  • 48.
    4. Best answertype a. There should be three to five choices all of which are right but vary in their degree of merit, importance or desirability b. The other rules for multiple-response items are applied here. c. Score is the number of correct answers.
  • 49.
    5. MatchingType a. Thereshould be two columns. Under “A” are the stimuli which should be longer and more descriptive than the responses under column “B”.The response may be a word, a phrase, a number, or a formula. b. The stimuli under column “A” should be numbered and the response under column “B” should be lettered.Answers will be indicated by letters only on lines provided in column “A”. c. The number of pairs usually should not exceed twenty items. Less than ten introduces chance elements.Twenty pairs may be used but more than twenty is decidedly wasteful of time. d. The number of responses in column “B” should be two or more than the number of items in Column “A” to avoid guessing. e. Only one correct matching for each item should be possible. f. Matching sets should neither be to long nor too short. g. All items should be on the same page to avoid turning of pages in the process of matching pairs. h. Score is the number of correct answers.
C. ESSAY TYPE OF EXAMINATIONS

1. Common types of essay questions (the types are related to the purposes for which the essay examinations are to be used):
   a) Comparison of two things
   b) Explanation of the use or meaning of a statement or passage
   c) Analysis
   d) Decision for or against
   e) Discussion
2. How to construct essay examinations
   a) Determine the objectives or essentials to be evaluated by each question.
   b) Phrase questions in simple, clear, and concise language.
   c) Suit the length of the questions to the time available for answering the essay examination. The teacher should try to answer the test herself.
   d) Scoring:
      (1) Have a model answer prepared in advance.
      (2) Indicate the number of points for each question.
      (3) Score a point for each essential (see the sketch below).
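As a rough illustration of the scoring steps above (hypothetical, assuming each essential from the model answer can be checked with a simple keyword match; in practice the presence of an essential is judged by the teacher, not by string matching):

    def score_essay(answer, essentials):
        # Award one point for each essential idea from the model
        # answer that appears in the pupil's answer.
        answer_lower = answer.lower()
        return sum(1 for idea in essentials if idea.lower() in answer_lower)

    model_essentials = ["sunlight", "chlorophyll", "carbon dioxide"]
    pupil_answer = "Plants need sunlight and chlorophyll to make their food."
    print(score_essay(pupil_answer, model_essentials))  # 2 of 3 essentials present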
Advantages and Disadvantages of the Objective Type of Tests

Advantages
a) The objective test is free from personal bias in scoring.
b) It is easy to score. With a scoring key, the test can be corrected by different individuals without affecting the accuracy of the grades given.
c) It has high validity because it is comprehensive, with a wide sampling of essentials.
d) It is less time-consuming, since many items can be answered in a given time.
e) It is fair to students, since slow writers can accomplish the test as fast as fast writers.

Disadvantages
a) It is difficult to construct and requires more time to prepare.
b) It does not afford students the opportunity to practice self-expression and thought organization.
c) It cannot be used to test ability in theme writing or journalistic writing.
Advantages and Disadvantages of the Essay Type of Tests

Advantages
a) The essay examination can be used in practically all subjects of the school curriculum.
b) It trains students in thought organization and self-expression.
c) It affords students opportunities to express their originality and independence of thinking.
d) Only the essay test can be used in some subjects, such as composition writing and journalistic writing, which cannot be tested by the objective type of test.
e) The essay examination measures higher mental abilities such as comparison, interpretation, criticism, defence of opinion, and decision.
f) The essay test is easily prepared.
g) It is inexpensive.
Disadvantages
a) The limited sampling of items makes the test an unreliable measure of achievement or ability.
b) Questions usually are not well prepared.
c) Scoring is highly subjective because of the influence of the corrector's personal judgment.
d) Grading of the essay test is an inaccurate measure of pupils' achievement because of the subjectivity of scoring.
III. STATISTICAL MEASURES OR TOOLS USED IN INTERPRETING NUMERICAL DATA
Frequency Distributions

▪ A simple, common-sense technique for describing a set of test scores is the frequency distribution. A frequency distribution is merely a listing of the possible score values and the number of persons who achieved each score. Such an arrangement presents the scores in a simpler and more understandable manner than merely listing all of the separate scores. Consider a specific set of scores to clarify these ideas.

▪ A set of scores for a group of 25 students who took a 50-item test is listed in Table 1. It is easier to analyse the scores if they are arranged in a simple frequency distribution. (The frequency distribution for the same set of scores is given in Table 2.) The steps involved in creating the frequency distribution are:

▪ First, list the possible score values in rank order, from highest to lowest. Then, in a second column, indicate the frequency, or number of persons who received each score. For example, three students received a score of 47, two received 40, and so forth. There is no need to list score values below the lowest score that anyone received. (A short sketch that reproduces the distribution appears after Table 2.)
Table 1. Scores of 25 Students on a 50-Item Test

Student  Score    Student  Score
A        48       N        43
B        50       O        47
C        46       P        48
D        41       Q        42
E        37       R        44
F        48       S        38
G        38       T        49
H        47       U        34
I        49       V        35
J        44       W        47
K        48       X        40
L        49       Y        48
M        40
Table 2. Frequency Distribution of the 25 Scores of Table 1

Score  Frequency    Score  Frequency
50     1            41     1
49     3            40     2
48     5            39     0
47     3            38     2
46     1            37     1
45     0            36     0
44     2            35     1
43     1            34     1
42     1
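The following short Python sketch (not part of the original slides) reproduces the frequency distribution in Table 2 from the raw scores in Table 1 by counting how many students received each score value:

    from collections import Counter

    scores = [48, 50, 46, 41, 37, 48, 38, 47, 49, 44, 48, 49, 40,
              43, 47, 48, 42, 44, 38, 49, 34, 35, 47, 40, 48]

    counts = Counter(scores)
    # List every score value from the highest to the lowest observed score
    for value in range(max(scores), min(scores) - 1, -1):
        print(value, counts.get(value, 0))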
• When there is a wide range of scores in a frequency distribution, the distribution can be quite long, with many zeros in the frequency column. Such a frequency distribution can make interpretation of the scores difficult and confusing. A grouped frequency distribution is more appropriate in this situation: groups of score values are listed rather than each separate possible score value.

• If we were to change the frequency distribution in Table 2 into a grouped frequency distribution, we might choose intervals such as 48-50, 45-47, and so forth. The frequency corresponding to the interval 48-50 would be 9 (1 + 3 + 5). The choice of the width of the interval is arbitrary, but it must be the same for all intervals. In addition, it is a good idea to use an odd-numbered interval width (we used 3 above) so that the midpoint of each interval is a whole number. This strategy simplifies subsequent graphs and descriptions of the data. The grouped frequency distribution is presented in Table 3 (and reproduced in the sketch after the table).
Table 3. Grouped Frequency Distribution

Score Interval  Frequency
48-50           9
45-47           4
42-44           4
39-41           3
36-38           3
33-35           2
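Under the same assumptions (an interval width of 3, with 48-50 as the top interval), the grouped frequency distribution can be produced with a sketch like this:

    scores = [48, 50, 46, 41, 37, 48, 38, 47, 49, 44, 48, 49, 40,
              43, 47, 48, 42, 44, 38, 49, 34, 35, 47, 40, 48]

    width = 3   # odd width so that the midpoint of each interval is a whole number
    lower = 48  # lower limit of the highest interval (48-50)

    # Work downward until the interval containing the lowest score has been listed
    while lower + width - 1 >= min(scores):
        upper = lower + width - 1
        frequency = sum(1 for s in scores if lower <= s <= upper)
        print(f"{lower}-{upper}: {frequency}")
        lower -= width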
Frequency distributions summarize sets of test scores by listing the number of people who received each test score. All of the test scores can be listed separately, or the scores can be grouped in a grouped frequency distribution.
MEASURES OF CENTRAL TENDENCY

• Frequency distributions are helpful for indicating the shape of a distribution of scores, but we need more information than the shape to describe a distribution adequately. We need to know where on the scale of measurement a distribution is located and how the scores are dispersed within it. For the former, we compute measures of central tendency; for the latter, we compute measures of dispersion. Measures of central tendency are points on the scale of measurement that represent how the scores tend to average. There are three commonly used measures of central tendency: the mean, the median, and the mode. The mean is by far the most widely used.
The Mean

• The mean of a set of scores is the arithmetic average. It is found by summing the scores and dividing the sum by the number of scores. The mean is the most commonly used measure of central tendency because it is easily understood and is based on all of the scores in the set; hence, it summarizes a lot of information. The formula for the mean is as follows:

  Mean (X̄) = ΣX / N,  where ΣX is the sum of the scores and N is the number of scores.
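A minimal check of this computation with the 25 scores from Table 1:

    scores = [48, 50, 46, 41, 37, 48, 38, 47, 49, 44, 48, 49, 40,
              43, 47, 48, 42, 44, 38, 49, 34, 35, 47, 40, 48]

    # Sum of the scores divided by the number of scores
    mean = sum(scores) / len(scores)
    print(mean)  # 44.0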
The Median

• Another measure of central tendency is the median, which is the point that divides the distribution in half; that is, half of the scores fall above the median and half of the scores fall below it.

• When there are only a few scores, the median can often be found by inspection. If there is an odd number of scores, the middle score is the median. When there is an even number of scores, the median is halfway between the two middle scores. However, when there are tied scores in the middle of the distribution, or when the scores are in a frequency distribution, the median may not be so obvious.

• Consider again the frequency distribution in Table 2. There are 25 scores in the distribution, so the middle score is the median. A straightforward way to find this median is to augment the frequency distribution with a column of cumulative frequencies.

• Cumulative frequencies indicate the number of scores at or below each score. Table 4 shows the cumulative frequencies for the data in Table 2.
Table 4. Frequency Distribution with Cumulative Frequencies for the Scores of Table 2

Score  Frequency  Cumulative Frequency
50     1          25
49     3          24
48     5          21
47     3          16
46     1          13
45     0          12
44     2          12
43     1          10
42     1          9
41     1          8
40     2          7
39     0          5
38     2          5
37     1          3
36     0          2
35     1          2
34     1          1
For example, 7 people scored at or below a score of 40, and 21 persons scored at or below a score of 48.

 To find the median, we need to locate the middle score in the cumulative frequency column, because this score is the median. Since there are 25 scores in the distribution, the middle one is the 13th, a score of 46. Thus, 46 is the median of this distribution; half of the people scored above 46 and half scored below it.

 When there are ties in the middle of the distribution, there may be a need to interpolate between scores to get the exact median. However, such precision is not needed for most classroom tests; the whole number closest to the median is usually sufficient.
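The same result can be checked with a short sketch that accumulates the frequencies from the lowest score upward and stops at the middle (13th) score, as described above:

    from collections import Counter

    scores = [48, 50, 46, 41, 37, 48, 38, 47, 49, 44, 48, 49, 40,
              43, 47, 48, 42, 44, 38, 49, 34, 35, 47, 40, 48]

    counts = Counter(scores)
    middle = (len(scores) + 1) // 2  # the 13th score when there are 25 scores

    cumulative = 0
    for value in range(min(scores), max(scores) + 1):  # lowest to highest
        cumulative += counts.get(value, 0)
        if cumulative >= middle:
            print("Median:", value)  # 46
            break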
The Mode

• The measure of central tendency that is easiest to find is the mode. The mode is the most frequently occurring score in the distribution. The mode of the scores in Table 2 is 48: five people had scores of 48, and no other score occurred as often.

• Each of these three measures of central tendency, the mean, the median, and the mode, provides a legitimate definition of "average" performance on this test. However, each provides different information. The arithmetic average was 44; half of the people scored at or below 46; and more people received 48 than any other score.

• There are some distributions in which all three measures of central tendency are equal, but more often than not they will differ. The choice of which measure of central tendency is best will differ from situation to situation. The mean is used most often, perhaps because it includes information from all of the scores.

• When a distribution has a small number of very extreme scores, though, the median may be a better indicator of central tendency. The mode provides the least information and is used infrequently as an "average." The mode can be used with nominal-scale data, as an indicator of the most frequently appearing category. The mean, the median, and the mode all describe central tendency:
  • The mean is the arithmetic average.
  • The median divides the distribution in half.
  • The mode is the most frequent score.
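For a quick check, Python's statistics module gives all three measures directly (assuming the Table 1 scores are stored in a list named scores):

    import statistics

    scores = [48, 50, 46, 41, 37, 48, 38, 47, 49, 44, 48, 49, 40,
              43, 47, 48, 42, 44, 38, 49, 34, 35, 47, 40, 48]

    print(statistics.mean(scores))    # 44, the arithmetic average
    print(statistics.median(scores))  # 46, the middle score
    print(statistics.mode(scores))    # 48, the most frequent score (occurs five times)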
MEASURES OF DISPERSION

Measures of central tendency are useful for summarizing average performance, but they tell us nothing about how the scores are distributed, or "spread out." Two distributions may have the same average yet differ in other ways: one may have the scores tightly clustered around the average, while the other may have scores that are widely separated. As you may have anticipated, there are descriptive statistics that measure dispersion; they are also called measures of variability. These measures indicate how spread out the scores tend to be.

The Range

The range indicates the difference between the highest and lowest scores in the distribution. It is simple to calculate, but it provides limited information. We subtract the lowest score from the highest score and add 1, so that both scores are included in the spread between them. For the scores of Table 2, the range is 50 - 34 + 1 = 17 (checked in the short sketch below). A problem with the range is that only the two most extreme scores enter the computation; there is no indication of the spread of scores between the highest and the lowest. Measures of dispersion that take every score in the distribution into consideration are the variance and the standard deviation. The standard deviation is used a great deal in interpreting scores from standardized tests.
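A quick check of the inclusive range defined above:

    scores = [48, 50, 46, 41, 37, 48, 38, 47, 49, 44, 48, 49, 40,
              43, 47, 48, 42, 44, 38, 49, 34, 35, 47, 40, 48]

    score_range = max(scores) - min(scores) + 1  # 50 - 34 + 1
    print(score_range)  # 17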
The Variance

The variance measures how widely the scores in the distribution are spread about the mean. In other words, the variance is the average squared difference between the scores and the mean. As a formula, it looks like this:

  S² = Σ(X - X̄)² / N,  where X is each score, X̄ is the mean, and N is the number of scores.
The computation of the variance for the scores of Table 1 is illustrated in Table 5. (The data for students K through V are omitted to save space, but these values are included in the column totals and in the computation.)

The Standard Deviation

The standard deviation also indicates how spread out the scores are, but it is expressed in the same units as the original scores. The standard deviation is computed by finding the square root of the variance:

  S = √S²
For the data in Table 1, the variance is 22.8. The standard deviation is the square root of 22.8, or 4.77. The scores of most norm groups have the shape of a "normal distribution", the symmetrical, bell-shaped distribution with which most people are familiar. With a normal distribution, about 95 percent of the scores are within two standard deviations of the mean. Even when scores are not normally distributed, most of the scores will be within two standard deviations of the mean. In the example, the mean minus two standard deviations is 34.46, and the mean plus two standard deviations is 53.54. Therefore, only one score is outside this interval: the lowest score, 34, is slightly more than two standard deviations from the mean.
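The figures quoted above can be reproduced with a short sketch (using the population form of the variance, dividing by N, since the text averages the squared deviations over all 25 scores):

    scores = [48, 50, 46, 41, 37, 48, 38, 47, 49, 44, 48, 49, 40,
              43, 47, 48, 42, 44, 38, 49, 34, 35, 47, 40, 48]

    n = len(scores)
    mean = sum(scores) / n
    variance = sum((x - mean) ** 2 for x in scores) / n  # average squared deviation
    std_dev = variance ** 0.5                            # square root of the variance

    print(variance)            # 22.8
    print(round(std_dev, 2))   # 4.77
    print(mean - 2 * std_dev)  # about 34.45 (34.46 in the text, which rounds the SD to 4.77 first)
    print(mean + 2 * std_dev)  # about 53.55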
The usefulness of the standard deviation becomes apparent when scores from different tests are compared. Suppose that two tests are given to the same class, one on fractions and the other on reading comprehension. The fractions test has a mean of 30 and a standard deviation of 8; the reading comprehension test has a mean of 60 and a standard deviation of 10. If Ann scored 38 on the fractions test and 55 on the reading comprehension test, it appears from the raw scores that she did better in reading than in fractions, because 55 is greater than 38. Relative to the two distributions, however, her fractions score is one standard deviation above the mean while her reading score is half a standard deviation below the mean, so she actually performed better in fractions (see the sketch after the summary below).

Descriptive statistics that indicate dispersion are the range, the variance, and the standard deviation.
 The range is the difference between the highest and lowest scores in the distribution, plus one.
 The standard deviation shows by how much the separate scores tend to differ from the mean, in the same units as the scores.
 The variance is the square of the standard deviation.
 Most scores are within two standard deviations of the mean.
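The comparison can be made explicit with standard (z) scores, which express each raw score as a number of standard deviations above or below its own test's mean (the means and standard deviations below are the ones given in the example; the z-score formulation itself is an addition, not part of the original slides):

    def z_score(raw, mean, std_dev):
        # Distance of a raw score from the mean, in standard deviation units
        return (raw - mean) / std_dev

    print(z_score(38, 30, 8))   #  1.0  -> one SD above the mean on the fractions test
    print(z_score(55, 60, 10))  # -0.5  -> half an SD below the mean on the reading test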
Graphing Distributions

 A graph of a distribution of test scores is often more easily understood than the frequency distribution or a mere table of numbers.

 The general pattern of scores, as well as any unique characteristics of the distribution, can be seen easily in simple graphs. There are several kinds of graphs that can be used, but a simple bar graph, or histogram, is as useful as any (a plotting sketch appears below).

 The general shape of the distribution is clear from the graph. Most of the scores in this distribution are high, at the upper end of the graph.

 Such a shape is quite common for the scores of classroom tests.

 A normal distribution has most of the test scores in the middle of the distribution and progressively fewer scores toward the extremes. The scores of norm groups are seldom graphed, but they could be if we were concerned about seeing the specific shape of the distribution of scores.
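A minimal plotting sketch (assuming the matplotlib library is available) that draws a histogram of the Table 1 scores using the same three-point intervals as Table 3:

    import matplotlib.pyplot as plt

    scores = [48, 50, 46, 41, 37, 48, 38, 47, 49, 44, 48, 49, 40,
              43, 47, 48, 42, 44, 38, 49, 34, 35, 47, 40, 48]

    # Bin edges at 33, 36, ..., 51 give the intervals 33-35, 36-38, ..., 48-50
    plt.hist(scores, bins=range(33, 54, 3), edgecolor="black")
    plt.xlabel("Score")
    plt.ylabel("Frequency")
    plt.title("Distribution of 25 scores on a 50-item test")
    plt.show()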