3. EVALUATION
The term evaluation is derived from the
word ‘valoir’, which means ‘to be worth’.
Evaluation is the process of judging the
value or worth of an individual’s
achievements or characteristics.
Evaluation is a decision-making process that
leads to suggestions for actions to improve
participants’ effectiveness and program
efficiency.
5. DEFINITION
• Evaluation is the process of determining to
what extent the educational objectives are
being realized. (Ralph W. Tyler)
• Evaluation is the systematic examination of
educational and social programs.
(Cronbach et al.)
• Evaluation involves assessing the strengths
and weaknesses of programs, policies,
personnel, products, and organizations to
improve their effectiveness. (American
Evaluation Association)
6. DEFINITIONS
• It is the systematic collection and
interpretation, through formal means, of
relevant information, which serves as a
basis for rational judgments in decision
situations.
Dressel
7. ASSESSMENT
• Assessment: Assessment is used in situations
where the procedures involve more objective
instruments and when these instruments are
measuring personal attributes. The term
assessment is used when a numerical value is not
involved.
• The word ‘assess’ comes from the Latin verb
‘assidere’ meaning ‘to sit with’.
• Assessment can focus on the individual learner,
the learning community (class, workshop, or
other organized group of learners), the
institution, or the educational system.
8. DEFINITIONS
• ASSESSMENT:
• “Assessment is the process of documenting,
usually in measurable terms, knowledge, skill,
attitudes, and beliefs.”
• “Assessment in education is the process of
gathering, interpreting, recording, and using
information about pupils’ responses to an
educational task.”
___ Harlen, Gipps, Broadfoot & Nuttall, 1992
9. NATURE OF EVALUATION
• Evaluation in the educational context implies a broad
programme that examines achievements, attitudes,
interests, personality traits and skills.
Thus cognitive, affective and psychomotor
learning outcomes are measured in the evaluation
process.
• Evaluation is a two-part process.
✔The first part of evaluation is the determination
of what is to be evaluated (the goal), and
✔The second part is the judgment of whether the
goals are being achieved.
10. PURPOSES OF EVALUATION
❖The overall purpose is to provide information to enable each
student to develop according to his potential within the framework of
educational objectives. Bloom stated the following purposes:
1) To discover the extent of competence the student has developed in
initiating, organizing and improving his/her day-to-day work, and to
diagnose his/her strengths and weaknesses with a view to further
guidance.
2) To appraise the status of and changes in student’s behaviour.
3) To assess the student’s progress from time to time and disclose the
student’s needs and possibilities.
4) To predict the student’s future academic success.
5) To provide basis for modification of the curriculum and course.
6) To motivate students for better attainment and growth.
7) To improve instructional and measuring devices.
8) To locate areas where remedial measures are needed.
11. PURPOSES OF EVALUATION
• To provide short-term goals to the students to work towards the
achievement of educational objectives.
• To clarify the intended learning outcome.
• To diagnose the strengths and weaknesses of students.
• To encourage student learning by measuring their achievement and
informing them about their success.
• To assess the student's progress throughout the year.
• To determine whether a particular student is competent enough to be
advanced to the next class.
• To report the student's progress to parents.
13. PURPOSES OF EVALUATION IN
NURSING
• To determine the level of knowledge and understanding of the students at
various times.
• To determine the level of students’ clinical performance at various stages.
• To diagnose each student’s strengths and weaknesses and to suggest remedial
measures.
• To encourage students’ learning by measuring their achievement and informing them
of their success.
• To help students acquire the attitudes and skills of self-evaluation and
self-direction in their study.
• To provide motivation in practicing critical thinking, the application of principles,
the making of judgments, etc.
• To estimate the effectiveness of teaching and learning techniques, of subject
content and of instructional media in reaching the goals.
• To gather information for administrative purposes such as selecting students for
higher courses, placement of students for advanced training, etc.
14. SCOPE OF EVALUATION
❖ a) Certification
❖ b) Feedback
❖ c) Monitoring the program
❖ d) Safeguarding the public
❖ e) Baseline for guidance and counseling
❖ f) Placement and promotion in job
❖ g) Development of tools and techniques, and appraisal of the methods of
instruction.
15. SCOPE OF EVALUATION
o Value judgment
o Ascertaining the extent to which the educational objectives have been attained
o Effectiveness of appraisal or methods of instruction
o Identifies pupils’ strengths and weaknesses, difficulties and problems, needs and
demands
o Provides baseline for guidance and counseling
o Placement and promotions in job
o Development of attitudes, interests, capabilities, creativity, originality,
knowledge, skills etc.
o Development of tools and techniques
o Development of curriculum and for its revision
o Interpretation of results
o Helpful for curriculum planners and administrators to improve the curriculum
pattern
16. SCOPE OF EVALUATION
Evaluation and the teacher
• Evaluation helps the teacher to adopt a student-centred approach in her teaching
and to individualize instruction. It helps the teacher to organize
appropriate learning activities for the students to realize the objectives, and also
to find out the extent to which the objectives are realized.
Evaluation and students
• Evaluation makes the students aware of the objectives of the
program, increases their motivation and encourages good study habits. It
also increases the abilities and skills of the students.
Evaluation and administrators
• Evaluation helps the administrators to take appropriate decisions in planning
curricular and co-curricular programs.
Evaluation and parents
• A systematic and continuous program of student evaluation keeps the parents
well informed about the performance of their children and helps them to take
appropriate action for their further improvement.
17. CHARACTERISTICS OF
EVALUATION
1. Continuous process.
2. Includes academic and non-academic subjects.
3. Procedure for improving the product.
4. Discovers the needs of individual students and designs learning
experiences that will meet their needs.
5. Correlation between the educational system and the system of
evaluation.
6. Complex process which needs scientific techniques and tools.
7. Evaluation is purpose oriented.
18. PRINCIPLES OF EVALUATION
• Evaluation is a process of determining the extent
to which educational objectives are achieved by
pupils. It should not be viewed as merely a
collection of techniques for obtaining information
about pupil’s behaviour. (Gronlund & Guilbert)
• Evaluation is most effective when based on sound
operational principles. The following principles
stated by Gronlund provide a framework within
which the process of evaluation may be viewed.
19. PRINCIPLES OF EVALUATION
1.Determining and clarifying what is to be evaluated always has
priority in the evaluation process: The objectives must be clearly stated
before evaluation is made.
2.Techniques should be selected in terms of the purposes to be served.
Every evaluation technique is appropriate for some uses and inappropriate for
others. Therefore, while selecting an evaluation technique, one must be well
aware of the strengths and limitations of the technique.
3.Comprehensive evaluation requires a variety of evaluation
techniques. It is not possible to evaluate all the aspects of achievement with the
help of a single technique. For better evaluation, techniques like
objective tests, essay tests, observational techniques etc. should be used, so
that a complete picture of the pupil’s achievement and development can be
assessed.
4.Proper use of evaluation techniques requires an awareness of their
limitations as well as their strengths. Evaluation can be done with the help
of simple observation or highly developed standardized tests. But whatever the
instrument or technique may be, it has its own limitations. There may be
measurement errors. Sampling error is a common factor in educational and
psychological measurement.
20. PRINCIPLES OF EVALUATION
5.Evaluation is a means to an end and not an
end in itself. The evaluation technique is used to
take decisions about the learner. It is not merely
gathering data about the learner, because blind
collection of data is a waste of both time and
effort; evaluation is meant for some useful
purpose.
21. GENERAL PRINCIPLES OF EVALUATION
• The value of evidence is gained through careful appraisal of the
teaching-learning process
• Evaluation is a continuous process; the teacher should make a plan of
evaluation to cover the entire course
• The objectives should be stated in terms of behaviour
• It determines to what extent the objectives of the course are being
met
• Identifying and defining the educational objectives for maximum
benefit
• Methods of evaluation should be selected on the basis of the purpose to
be served and the type of behaviour to be measured
22. • Comprehensive evaluation requires a variety of evaluation
techniques
• Proper use of evaluation techniques requires awareness
of their limitations as well as their strengths
• The worth or value of a teaching/learning method
or the materials of instruction is not known until their
effects are measured
• Adequacy of experience should be judged in terms of
excellence of performance and quality of experience
• Records of practice should reflect the objectives of
practice and give evidence of the extent of achievement
of these objectives
27. DEFINITION
• Bradfield defines measurement as a process of assigning
symbols to the dimensions of a phenomenon in order to
characterize the status of the phenomenon as precisely as
possible.
• The process of obtaining a numerical description of the degree
to which an individual possesses a particular
characteristic. (It answers the question, “How much?”)
28. NATURE OF MEASUREMENT
It should be quantitative in nature
It must be precise & accurate (instrument)
It must be reliable
It must be valid
It must be objective in nature.
29. CHARACTERISTICS OF MEASUREMENT
• Quantitative
• Easy to understand
• Encourage appropriate behaviour.
• Visible.
• Defined and mutually understood.
• Encompasses both inputs & output.
• Measures only what is important.
• Multidimensional.
• Facilitates trust.
35. Qualities and
Characteristics of
Evaluation
1. E Ethically Conducted
2. V Values Diverse Opinions
3. A Accurate and Technically Adequate Information
4. L Leads to Continuous Learning and Improvement
5. U Uses Participatory Methods
6. A Affordable/Appropriate in Terms of Budget
7. T Technical Persons Carry it Out/Timely Carried Out
8. I Indicators properly selected and Studied
9. O Opens opportunity for better understanding
developmental change
10. N Never Used for Fixing Blame and Finding Faults
36. QUALITIES OF AN EVALUATION TOOL/CRITERIA
FOR SELECTION OF ASSESSMENT,
TECHNIQUES & METHODS
1. Validity
• Validity is the most important quality needed for an evaluation tool. If the tool is able to measure
what it is intended to measure, it can be said that the tool is valid. It should fulfill the objectives for
which it is developed.
2. Reliability
• Reliability of a tool refers to the degree of consistency and accuracy with which it measures what it is
intended to measure. If the evaluation gives more or less the same result every time it is used, such
evaluation is said to be reliable.
3. Objectivity: This is the extent to which several independent and competent examiners agree on what
constitutes an acceptable level of performance.
• A tool is said to be objective if it is free from personal bias in interpreting its scope as well as in
scoring the responses. Objectivity is one of the primary prerequisites for maintaining
all other qualities of a good tool.
4. Practicability: Depends upon the time required to construct an examination, to administer and score
it, and to interpret the results, and on its overall simplicity of use.
• The overall simplicity of use of a test for both the constructor and for the learner
5. Relevance
• The degree to which the items selected conform to the aims of the
measuring instrument
37. 6. Equilibrium/ equity
• Achievement of the correct proportion among questions allotted to each
of the objectives and teaching content
• Equity: extent to which the questions set in the examination correspond
to the teaching content.
7. Specificity
• The items in a test should be specific to the objectives
8. Discrimination: The quality of each element of a measuring instrument which
makes it possible to distinguish between good and poor students in
relation to a given variable.
• The basic function of all educational measurement is to place individuals in
a defined scale in accordance with differences in their achievements
9. Efficiency
• It ensures the greatest possible number of independent answers per unit of
time
10. Time
• The required time to answer items should be provided to avoid hurry,
guessing, taking risks or chances etc.
38. 11. Length
• The number of items in the test should depend upon the objectives
and content of the topic
12. Test usefulness
• Grading or ranking can be possible with items in the test
13. Precise and clear
• Items should be precise and clear so that students can answer well
and score marks
14. Comprehensiveness
• The total content and objectives have to be kept in mind while
preparing items for the test
15. Adequacy
• A measuring instrument should be adequate, i.e., balanced and fair
39. 16. Administrability: It means that a test can be administered
with clarity, ease, and uniformity.
• Provision should be made for the preparation, distribution and
collection of test materials
17. Ease of scoring
• Simple scoring is good.
18. Economy
• It should be computed in terms of the validity of the tests per unit of
cost
19. Comparability
• A test possesses comparability when scores resulting from its use can
be interpreted in terms of a common base that has a natural or
accepted meaning
20. Utility
• It serves a definite need in the situation in which it is used.
41. ASSESSMENT METHODS &
TECHNIQUES
• Assessment methods and techniques depend on the domain to be
examined. However, it is not practical to assess each domain
independently. The examination tool can be selected based on the
major domain to be tested.
• There are mainly three domains under which the student’s
performance is assessed.
• They are mainly cognitive, psychomotor and affective domains.
42. CLASSIFICATION OF EVALUATION TECHNIQUES:
• All techniques of evaluation can be broadly classified into two categories:
1.Quantitative Technique: They are mainly used in educational
evaluations. These are highly reliable and valid.
The quantitative tests can be classified into –
a. Oral Techniques
b. Written Techniques
c. Practical techniques
2. Qualitative Technique:
These are used in schools and colleges for internal assessment. These
techniques are subjective and are less reliable.
The techniques used are –
a. Cumulative record
b. Anecdotal record
c. Observation Technique
d. Checklist
e. Rating scale
44. CLASSIFICATION OF ASSESSMENT TOOLS AND
TECHNIQUES /METHODS OF DIFFERENT
DOMAIN/METHODS OF EVALUATION
• Assessment of knowledge(cognitive
domain)
• Assessment of skills(psychomotor
domain)
• Assessment of attitude(affective
domain)
45. ASSESSMENT OF KNOWLEDGE
Essay type questions
•Extended response essay
•Restricted response essay
Short answer questions
•Fill in the blank type
•Statement completion
•Labeling a diagram
•Short answer in 5-10 words
Objective type questions
•Multiple-choice questions
•Multiple response questions
•True and false questions
Viva
ASSESSMENT OF SKILL
• Observation checklist
• Rating scale
• Anecdotal records
• Cumulative records
• Written clinical assignments
• Critical incident record
• Practical examination
• Viva voce (oral examination)
• Objective Structured Clinical Examination (OSCE)
• Objective Structured Practical Examination (OSPE)
ASSESSMENT OF ATTITUDE
• Likert attitude scale
• Semantic differential scale
46. A. ASSESSMENT OF
KNOWLEDGE
• The methods used to assess the knowledge of learners are
educational or achievement tests that assess the intellectual level
of the students.
• The main tools used for assessment are oral and written
examinations (standardized or teacher-based tests; subjective or
objective).
47. ASSESSMENT OF KNOWLEDGE
1. Essay type questions- Essay questions have been used since
ancient times and are the most commonly employed method of
assessment of cognitive skills in nursing education. Some
educators use them because essays have the potential to reveal
students’ abilities to reason, create, analyse, synthesize and
evaluate. Educators choose essay questions over other forms of
assessment because essay items challenge students to create a
response rather than simply select a response.
An essay type test presents one or more questions or other tasks
that require extended written responses from the persons being
tested. (Robert L.E. and David A.F.)
An essay item is an item that requires the student to structure a long
written response running to several paragraphs. (William
Wiersma & Stephen G Jurs)
48. DEFINITIONS:
• "A test item which requires a response composed by the examinee,
usually in the form of one or more sentences, of a nature that no
single response or pattern of responses can be listed as correct, and
the accuracy and quality of which can be judged subjectively only by
one skilled or informed in the subject."
_____John M. Stalnaker (1951)
• “A test containing questions requiring the students to respond in
writing, it emphasizes recall rather than recognition of the correct
alternative.“
_____Gilbert Sax (1989)
49. • Based on Stalnaker's definition, an
essay question should meet the
following criteria:
1. Requires examinees to compose rather than select
their response.
2. Elicits student responses that must consist of more
than one sentence.
3. Allows different or original responses or pattern of
responses.
4. Requires subjective judgment by a competent specialist
to judge the accuracy and quality of responses.
50. PURPOSE OF ESSAY TYPE QUESTION
• There are two major purposes for using essay questions.
One purpose is to assess students' understanding of and ability
to think with subject matter content.
The other purpose is to assess students’ non-content-related
attributes like creativity, neatness and writing abilities.
• Students get a chance to express their own views
• To assess factual recall of knowledge
• Analysis and explanation of relationships
• Assessment of non-content-related attributes of
students
51. Principles for construction of essay
type questions
• The learning objective to be evaluated by an essay type
question should be clearly defined in simple words.
• If a learning objective can be evaluated by any other type of
question, the use of essay type questions should be avoided.
• It is always better to use several short essay type questions instead
of a long one.
52. TYPES:
• Depending upon the amount of freedom given to the student to organize
his/her ideas and write answers:
1. Traditional essay questions or Extended response type
question
2. Structured essay questions or Restricted response type
question
53. Two types
a. Extended response essay-
• This type of question allows pupils to select any factual information
that they think is pertinent, to organize the answer in accordance
with their best judgement, and to integrate and evaluate ideas as they
deem appropriate.
• This freedom enables them to demonstrate their ability to select,
organize, integrate and evaluate ideas.
• e.g. Explain the role of the nurse in the health care team.
b. Restricted response essay (restricted response question)
• This type of question usually limits both the content and the
response.
• The content is usually restricted by the topic to be discussed.
• e.g. State the main differences between kwashiorkor and marasmus.
54. • The question and the task in a problem situation should be
clearly defined by ensuring the following
o Clearly define the question so that students understand what to write.
o Delimit the scope of the question so that students do not feel that
they need to write an infinite number of pages.
o Clearly develop the problem or problem situation so that students can
be focused.
o The approximate time and word limit for each essay type question
should be specified. The distribution of marks for different segments
of a particular question, and for organization, neatness and
expressive language, should be explained.
o The use of complex and ambiguous words should be avoided.
o Words like differentiate and compare should be used at the beginning
of the question to restrict the scope of the question.
o Simple unambiguous language well understood by all students should
be used.
55. Merits of Essay Type Examinations
• They help in organizing ideas and concepts.
• They promote application of knowledge in different
spheres
• They are useful to assess the knowledge of language and
writing skills of the student
• They provide the students sufficient independence in
answering, without hesitance or constraint.
• They give the students a good opportunity to express their
individual ideas and talents, creativity and skills of
presentation of a concept.
• They eliminate the possibility of students guessing the
correct answer.
56. Demerits of Essay Type Examinations
• They leave some scope for guesswork by the
students and for subjectivity in the scoring pattern.
• Sometimes, the real evaluation of the students
may not be possible with such type of
examinations.
• There is a risk that the grading of essay responses
can be subjective and unreliable.
• Time consuming both for examiner and examinee.
• Some examiners are too liberal in marking and
some are too strict.
57. SUGGESTIONS FOR CONSTRUCTING ESSAY QUESTIONS
• Restrict the use of essay questions to those learning outcomes that
cannot be satisfactorily measured by objective items.
• State the question clearly and precisely and make clear what
information the answer should contain.
• While preparing questions, it should be kept in mind that the
maximum subject matter content is covered.
• Indicate the approximate time limit for each question.
• Avoid the use of optional questions.
• Construct question that will call forth the skills specified in the
learning standards.
Example: Write a two-page statement defending the importance of
conserving our natural resources. (Your answer will be evaluated in
terms of its organization, comprehensiveness, and relevance of the
arguments presented.)
58. Guidelines for Construction
• The questions should be framed in such a way that
the task is clearly defined and can be completed
within the stipulated time.
• It is preferable to set a larger number of questions
requiring short answers of about a page or two
than a few questions requiring long answers of
more than five pages.
• It is preferable not to give too many or too lengthy
questions.
• Prepare a marking system acceptable to other
examiners by prior discussion with a checklist of
specific points against which marks are allotted.
• Phrases like "Discuss briefly", "State everything
that you know", etc. should be avoided.
59. Sequential Steps in the Construction of
Essay Type Test
• Define the objectives of evaluation clearly.
• The objectives should be representative of entire
areas of knowledge expected of students
• Sampling of objectives must be done as it
provides the basis for developing test items.
• It is not possible to cover the entire content of
the subject in the test. So sampling of the course
content is done.
• Prepare a blueprint for the test.
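The blueprint step above can be sketched as a small two-way table of content units against cognitive levels, with the marks allotted to each cell. The units, levels and weightages below are hypothetical examples, not values from the source.

```python
# A test blueprint maps content units against objectives (cognitive
# levels), with marks allotted to each cell. All figures here are
# hypothetical, for illustration only.
blueprint = {
    # content unit: {cognitive level: marks allotted}
    "Anatomy":    {"knowledge": 4, "understanding": 4, "application": 2},
    "Physiology": {"knowledge": 4, "understanding": 6, "application": 5},
}

# Row totals show the weightage given to each content unit,
# and the grand total is checked against the full marks of the paper.
unit_totals = {unit: sum(levels.values()) for unit, levels in blueprint.items()}
total_marks = sum(unit_totals.values())

print(unit_totals)  # {'Anatomy': 10, 'Physiology': 15}
print(total_marks)  # 25
```

Preparing such a table before writing items keeps the distribution of questions proportionate to the objectives and content, as slide 37 (“Equilibrium”) requires.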
60. SUGGESTION FOR SCORING ESSAY QUESTIONS
• Choose either the analytical or holistic (global-quality) method.
• Analytical Scoring: This scoring method requires that the instructor develop
an ideal response and create a scoring key or guide.
• The scoring key provides an absolute standard for determining the total points
awarded for a response.
• Student responses are compared to the scoring standard and not to the
responses of their classmates.
• Holistic (Global-Quality) Scoring: The reader forms an impression of the overall
quality of a response and then transforms that impression into a score or grade.
• The score represents the quality of a response in relation to a relative standard,
such as the responses of other students in the class.
• Score the responses question-by-question rather than student-by-student.
• Disassociate the identity of students from their responses during the grading
process.
• Determine in advance what aspects of the response will or will not be judged in
scoring.
• Evaluate all answers to one question before going to the next one.
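The analytical method above can be illustrated in code. This is a minimal sketch only: the rubric is a hypothetical scoring key in which each expected point carries a fixed number of marks, and a response earns the marks for every key point it mentions.

```python
def analytical_score(response: str, rubric: dict) -> int:
    """Compare a response against the scoring key and total the
    marks for every expected point that appears in it."""
    text = response.lower()
    return sum(marks for point, marks in rubric.items() if point in text)

# Hypothetical scoring key for "Explain the role of the nurse in the
# health care team" (expected point -> marks allotted).
rubric = {"caregiver": 2, "educator": 2, "advocate": 2, "coordinator": 2}

response = "The nurse acts as a caregiver and as the patient's advocate."
print(analytical_score(response, rubric))  # 4 (caregiver + advocate)
```

Because every response is compared to the same key, the score is anchored to an absolute standard rather than to classmates’ answers, which is the defining feature of analytical scoring.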
61. BLUFFING-A SPECIAL SCORING PROBLEM
• It is possible for students to obtain higher scores on essay questions
than they deserve by means of clever bluffing. This is usually a
combination of writing skill, general knowledge and the use of common
tricks, such as:
❖Responding to every question.
❖Stressing the importance of the topic.
❖Agreeing with the teacher’s opinion.
❖Name dropping.
❖Writing on a related topic and “making it fit”.
❖Writing in general terms that fit many situations.
62. 2. Short Answer Questions
• The short answer test is an objective test in which
each item is in the form of a direct question, a
specific problem, or an incomplete statement or
question. The response must be supplied by the
examinee rather than merely identified from a list
of suggested answers supplied by the teacher.
• It does not call for an extensive written response.
• The expected answer is short and can be
expressed in different forms.
• Ideally, only one answer is acceptable.
63. •Definition:
• In this test, a student is given direct
questions and is expected to respond by filling
in words or phrases or by giving numerical
answers to the questions.
64. Purposes of SAQs
• Useful to assess the recall ability of students (lower cognitive domain).
• Used to assess students in a classroom while a lecture is in progress.
• Useful in formative assessment.
• May be used in summative evaluation to supplement other forms of
questions.
65. Principles of Constructing SAQs
• The item should be expressed in a clear, simple language, making it as
concise as possible.
• When completion items are used, do not include too many blanks.
Key words are to be omitted and the blank should come at the end of
the sentence.
• The weightage for each question should be written with the question.
• Precise, simple and unambiguous language should be used.
• Each question should deal with important content area of a unit.
• Long complex sentences should be avoided and the questions should
be kept as simple as possible.
• Phrases like 'write briefly on, short notes on' should be avoided.
• Space should be provided for answers below each question per the
requirement of the question being asked.
66. GUIDELINES FOR MAKING SAQS:
• Identify the learning objective, the overall purpose and content of the item.
• Make questions precise, avoid incomplete statements.
• Item should be expressed in such a way that only a single, brief answer is
possible and that answer should be kept ready along with the framing of
questions.
• Prepare a structured checklist and marking sheet.
• The item should be expressed in a positive form rather than as a
negatively phrased item.
• Try to avoid providing clues to the required answer.
• When a numerical answer is expected, the degree of precision and the
units to express should be indicated.
• Long complex sentences should be avoided.
• Adequate space for answering should be provided after each
question.
• The weightage and criteria for marking each question must be
mentioned clearly.
67. Types of SAQs
1.Fill in the blank type
• Example: Q.1: A patient is diagnosed with brain
tumor. The nurse's assessment reveals that the
patient has difficulty in interpreting visual stimuli.
Based on these findings, the nurse suspects injury
in the ……….. lobe of the brain.
• Answer: Occipital
68. 2. Statement completion
• Example: Q.1: A 45-year-old patient is admitted with
excruciating paroxysmal facial pain. He reports that the
episodes occur most often after feeling cold draft and
drinking cold beverages. Based on these findings the nurse
determines that the patient is most likely suffering from
• Answer: Trigeminal neuralgia
69. 3. Labeling a diagram
• Example: Q.1: An elderly patient fell and fractured the neck of his
femur. Identify the area where the fracture occurred.
4. Short answer in 5-10 words
• Example: Q.1: Mention the five commonly occurring signs and
symptoms of hypothyroidism.
1. 2.
3. 4.
5.
70. 5.Numerical problem type
Example – Doctor’s Order: Infuse 50 mg of Amphotericin B in 250 mL
NS over 4 hr. What flow rate (mL/hr) will you set on the IV infusion
pump?
6. Open SAQs
Example – Write the clinical manifestations of HTN.
7. Problem solving SAQs
Example – How do osmotic diuretics reduce ICP?
- A patient is admitted with hypertension. List the initial steps of
action required.
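The numerical problem in item 5 reduces to a single calculation: flow rate (mL/hr) = total volume (mL) ÷ infusion time (hr). A minimal sketch of the working:

```python
def flow_rate_ml_per_hr(volume_ml: float, time_hr: float) -> float:
    """Flow rate to set on an IV infusion pump: volume / time."""
    return volume_ml / time_hr

# 250 mL NS over 4 hr (the drug mass does not affect the pump rate).
print(flow_rate_ml_per_hr(250, 4))  # 62.5
```

So the pump would be set to 62.5 mL/hr; the 50 mg of Amphotericin B is a distracter in this item.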
71. Advantages of SAQs
• Provide the opportunity to cover a much wider content of the syllabus
to evaluate the students.
• Can be administered to a large group of students for a short formative
assessment.
• Provide less scope for guesswork.
• Easy to administer and mark the tests; ensure more objective scoring.
• Quick to answer and to score.
72. Disadvantages of SAQs
• Difficulty in construction of reliable items.
• They can lead to cheating within a group of students if the
examination hall is not spacious enough.
• They provide no scope to assess the writing ability, expression,
organization of answer, etc.
73. 3. Objective type question
Objective tests are different from essay type tests. The examinee is presented
with a large number of clearly defined questions and asked to select the
correct answer from a given list of possible answers.
Types of Objective Type Test Item
• Objective type tests include the following items:
– Multiple Choice Items/Select The Best Answer
– Multiple Response Question
– True Or False Items
– Matching Type Items
75. OBJECTIVE TYPE TEST
Definition: Objective test items are items
that can be objectively scored; items on which
persons select a response from a list of options.
W Wiersma and S G Jurs,
1990
79. 1.Multiple choice question
• It is an objective type of test, where the student is provided
with several alternatives or choices to a given question
and asked to select the most appropriate (correct) one.
• It contains two parts (components):
– The base or stem, which presents the problem in the form of an
incomplete statement or question.
– The options: the list of possible answers (alternatives), comprising
the correct answer and the distracters.
• The stem may be a statement, question, situation, graph
or picture.
• The suggested answers other than one correct response
or choice are called distracters.
• The correct answer is given in the key.
80. Suggestion for Constructing Multiple
Choice Items
✔ The stem of the item should be meaningful by itself and should present a definite
problem.
✔ The item stem should include as much of the item as possible and should be free
of irrelevant material.
✔ Use a negatively stated stem only when significant learning outcomes require it.
✔ All the alternatives should be grammatically consistent with the stem of the item.
✔ An item should contain only one correct or clearly best answer.
✔ Items used to measure understanding should contain some novelty, but beware of
too much.
✔ All distracters should be plausible.
✔ The purpose is to distract the uninformed from the correct answer.
✔ Verbal associations between the stem and the correct answer should be avoided.
✔ The relative length of the alternatives should not provide a clue to the answer.
✔ The correct answer should appear in each of the alternative positions an
approximately equal number of times but in random order.
✔ Use "none of the above" and "all of the above" sparingly.
✔ Do not use multiple-choice items when other items are more appropriate.
82. Q.1: Which artery is most commonly used to assess the pulse rate in a two-year-old child? (question stem)
a) Radial artery (distracter)
b) Femoral artery (distracter)
c) Pedal artery (distracter)
d) Apical artery (answer)
84. Uses of MCQ
• It is one of the commonly used objective type tests
• It can measure a variety of learning outcomes from simple to complex
• They are used for formative and summative assessment as well as for
entrance examinations
• It is easy to score
• It makes comparison of the students more objective.
85. Types of MCQs
• One best response
• Multiple true and false
• Multiple completion type
• Relationship analysis type
• Matching type
86. One best response
This is one of the most frequently used MCQ formats. A series of 4 to 5
choices is preferred to reduce the chances of random guessing.
Example Q 1: The World Health Day is celebrated every year on
• 1st April
• 7th April
• 1st May
• 7th May
87. Multiple completion type
• This is another common format; it is an improvement on the first type
mentioned earlier and requires a higher level of cognition.
Example Q 1: Live virus is used in immunization against
– Influenza
– The common cold
– Cholera
– Smallpox
• Responses 1,2 and 3 are correct.
• Responses 1 and 3 are correct.
• Responses 2 and 4 are correct.
• Response 4 is correct.
• All four are correct.
88. Relationship analysis type
• This type of item is useful to test higher levels of cognition as the
candidate has to decide individually whether each statement is correct
and then determine their cause-effect relationship
Example Q 1: Statement 1. Cow's milk is preferable to breast milk for
infant feeding. Because
Statement 2. Cow's milk has a higher content of calcium.
• Both statements are true and causally related.
• Both statements are true but not causally related.
• First statement is true and the second is false.
• First statement is false and the second is true.
• Both statements are false.
89. Multiple True-False Completion Type
Each of these choices can be individually true or false; they are not
interdependent. Hence the item can have anywhere from none to all five
responses true.
Example Q 1: The consequences of Total Parenteral Nutrition in
children include:
• Oral aversion T/F
• Electrolyte imbalance T/F
• Vitamin deficiency T/F
• Weight loss T/F
• Water retention T/F
90. Matching type question
• The matching type items are prepared in two columns: one called the
stimulus column and the other the response column. The student has to
go through the stimulus column and match each item with the correct
response on the other side.
91. Principles for Preparation of Matching Items
• The statements in the whole set of matching items should belong to
the same kind or nature.
• The number of choices should be more than the required answers.
• Too many items may be confusing and distracting. Hence, the
number of items should be limited to about 10.
• The stimuli and response columns should be on the same page.
• A single response should not be used for more than one stimulus.
• The terminology in one column should not give clues to the
expected responses in the other column.
• Arranging the responses (or both columns) in alphabetical order
prevents clues to the answers. If the responses are numerical
quantities, arrange them in order from low to high.
• Use the longer phrases as stimuli and the shorter ones as responses.
92. Example Q 1
Match the following vitamin deficiency diseases with their vitamins
• Scurvy Vitamin B
• Rickets Vitamin A
• Night blindness Vitamin D
• Beriberi Vitamin C
Vitamin K
93. Guidelines to prepare MCQ
STEM
• Have a single problem in the stem.
• Write the stem in a precise manner.
• Avoid a lengthy stem.
• The stem should consist of a complete statement, not just a single word.
• Avoid clues in the stem which may suggest the correct answer.
• The stem should be meaningful.
94. OPTIONS
• Keep all options grammatically consistent with the stem.
• Use logical options
• Avoid giving clues to the correct options.
• Arrange options either in alphabetical or numerical order.
• Use only one best answer
• Avoid using "all of the above" or "none of the above".
95. • General Steps of Formulating a MCQ Paper
• Decide on the number of MCQs to be included.
• Select the appropriate number of various formats depending on the
learning outcome to be tested.
• Group all similar formats together.
• Check that different formats with varying difficulty levels are
included.
• Place some easy items at the beginning of the paper for the
psychological support of the student.
• Make sure that all parts of an item are on the same page.
• Time should be adequate depending on the total number of different
formats of MCQs.
• Instructions to the candidates on how to respond to individual items
must be clear.
96. Advantages
• Can test large sample of knowledge in a short period of time
• Easy to score
• Objectivity and reliability in scoring are maintained.
• Easy to use and administer.
• Can cover large content area of syllabus
• Easy to check answer.
• Provide detailed feedback for both students and teachers.
97. Disadvantage of MCQ
• It does not test the students' ability to write logically and the capability
of expression.
• It cannot test motor skills like communication and interpersonal skills.
• Difficult and time consuming to construct good MCQs.
• Provide an opportunity to guess the answer if the question is not
properly constructed.
• A format more susceptible to cheating if the invigilator is not
observing the students keenly.
98. 4. MULTIPLE RESPONSE QUESTIONS
Here the correct answer may consist of more than one choice. The
examinee is asked to identify all those which are correct.
100. Eg Q 1: The nurse is assessing a 2-year-old patient diagnosed with
bacterial meningitis. Which of the following signs and symptoms of
meningeal irritation is the nurse likely to observe? Select all that
apply:
a) Generalized seizures
b) Nuchal rigidity
c) Positive Brudzinski's sign
d) Positive Kernig's sign
e) Babinski reflex
f) Photophobia
• Options
– b, c, d, f
– a, b, c, d
– a, c, d, f
• Advantages and disadvantages are the same as for MCQs.
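When a multiple-response item is scored dichotomously, credit is given only if the selected set exactly matches the key. The all-or-nothing rule sketched below is an assumption for illustration; partial-credit schemes also exist:

```python
def score_sata(selected: set[str], key: set[str]) -> int:
    """'Select all that apply' scoring: 1 mark only for an exact match with the key."""
    return 1 if selected == key else 0

# Key for the meningitis example: nuchal rigidity, Brudzinski's sign,
# Kernig's sign, photophobia (options b, c, d, f).
key = {"b", "c", "d", "f"}
print(score_sata({"b", "c", "d", "f"}, key))  # exact match scores 1
print(score_sata({"b", "c", "d"}, key))       # an omission scores 0
```

Representing the key as a set makes the order of selection irrelevant, which matches how "select all that apply" responses are marked.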
101. 5. True/ False questions
• These are questions or statements followed by Yes/No or True/False
responses.
• The student is asked to tick or mark the correct response.
102. Principles of Preparation of True or False Items
• Only a single concept or idea should be represented in a statement.
• Write clear and direct statements. Avoid ambiguous statements.
• Avoid using clues like 'usually', 'sometimes', 'none', and 'nothing', 'no',
'should', 'always', 'may', etc.
• Determine the order of true and false by chance.
• Detectable pattern of answers should be avoided (T, F, T,F).
• The statement should not be taken directly from the text-book.
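The principle of determining the order of true and false answers by chance can be sketched by shuffling the item list; the statements below are invented examples for illustration, not exam content:

```python
import random

# Illustrative true/false items (statement, answer) pairs.
items = [
    ("Scurvy is caused by vitamin C deficiency.", True),
    ("Cow's milk is preferable to breast milk for infant feeding.", False),
    ("World Health Day is celebrated on 7th April.", True),
    ("The radial artery is located in the leg.", False),
]

random.shuffle(items)  # the T/F sequence now follows no detectable pattern
for statement, answer in items:
    print(statement, "->", "T" if answer else "F")
```

Shuffling once at paper-setting time removes the detectable T, F, T, F patterns the guideline warns against.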
105. OBSERVATION CHECKLISTS
• It is an approach to monitoring the performance of specific skills,
behaviours, or dispositions of an individual student.
• A checklist is basically a method of recording whether a characteristic
is present or an action is performed.
106. ASSESSMENT OF SKILL
OBSERVATION CHECKLIST
• Checklists are lists of items or performance
indicators requiring dichotomous responses such
as satisfactory or unsatisfactory, pass or fail, yes
or no, present or absent, etc.
• Gronlund (2005) describes a checklist as an
inventory of measurable performance dimensions
of products, with a place to record a simple yes
or no judgement.
107. DEFINITION
• A checklist is a simple instrument consisting of a prepared list of
expected items of performance or attributes, which are checked by an
evaluator for their presence or absence. A checklist enables the
observer to note whether or not a trait or characteristic is present.
108. CHARACTERISTICS
• Can be used for formative assessments by focusing on
performance of specific skills such as writing skills, speaking
skills or action-based skills.
• Should be carefully prepared and must include all important
elements of a specific task that student must know or perform.
• Each element should be sequentially and carefully included and
adequate weightage should be given.
• Observe one student at a time, and use a checklist only when assessing
a particular characteristic.
109. GUIDELINES TO CONSTRUCT
• Express each item in clear, specific, observable, and simple
language.
• Items can be continuous or divided into groups of related items.
• The items created have to be evaluated by experts.
• Avoid negative statements.
• Ensure that each item has a clear response.
• Review the items independently.
• It must be complete and comprehensive in nature.
• Leave space to record anecdotal notes or comments.
• Each element should be marked as present/absent or appropriately/
inappropriately done, and an option "not applicable" can be included.
110. Construction of checklist
•Express each item in clear, simple
language
•Avoid lifting statements verbatim
from the text
•Avoid negative statements wherever
possible
•Review the items independently
111. Utilization of checklists
• Use a checklist only when you are interested in ascertaining whether
a particular trait or characteristic is present or absent
• Use only carefully prepared checklists for more complex kinds of traits
• Observe only one student at a time and confine your observation to
the points specified in the checklist
• Have a separate checklist for each student
• The observer must be trained in how to observe, what to observe, and
how to record the observed behaviour
• Checklists require the observer to judge whether certain behaviours
of the student in clinical practice have taken place.
112. Merits of checklist
• Short and easy to assess and record
• Useful for evaluation of specific well defined behaviours and are
commonly used in the clinical simulated laboratory setting
• They can be used for both process and procedure evaluation
• They are adaptable for most subject matter areas
• They allow inter individual comparisons to be made on a common
set of traits or characteristics
113. Limitations
• Does not indicate the quality of
performance.
• Only a limited component can be assessed
115. ADVANTAGES
• Useful in evaluating performance
skills that can be divided into a
series of specific actions.
• Allow inter-individual
comparisons.
• It is simple to use and record.
• Decreases the chance of error.
• Useful for evaluating activities
which has to be performed.
DISADVANTAGES:
• Usefulness is limited, as it does not
indicate the quality of performance and
cannot capture overall clinical
performance.
• The degree of accuracy of
performance is limited if presence
or absence is used only as an
attribute.
• It has limited use in qualitative
observations.
• Not easy to prepare.
117. RATING SCALE
• It is a term used to express opinion or judgment
regarding some performance of a person, object,
situation and character.
• Rating scale is an important technique of
evaluation.
• Rating is the assessment of one person by another
person.
118. DEFINITION
• It refers to a scale with a set of opinions, which describe varying
degrees of the dimensions of an attitude or a phenomenon being observed.
120. (A)GRAPHIC RATING SCALE
• A straight line marked by descriptive phrases at various points.
• To rate the subject on a particular trait, a check mark is made at
the appropriate point.
121. (B) DESCRIPTIVE RATING
SCALE
• This type does not use numbers but divides the assessment into
a series of verbal phrases to indicate the level of performance.
122. (C) NUMERICAL RATING SCALE
• In which numbers are assigned to each trait.
• If it is a seven point scale, the number 7 represents
the maximum amount of that trait in the individual,
and 4 represents the average.
• The rater merely enters the appropriate number
after each name to indicate judgment of the person.
123. (D)COMPARATIVE RATING
SCALE
• In this, a person makes a judgment about an attribute/attitude/object
by comparing it with others or ranking it.
124. CHARACTERISTICS OF RATING
SCALE
• These are value judgments about attributes of one person by another
person.
• These are most commonly used tools to carry out structured observations.
• These are generally developed to make quantitative judgments about
qualitative attributes.
• Provide more flexibility to judge the level of performance.
125. PRINCIPLES OF RATING
SCALE
✔It relates to learning objective.
✔Needs to be confined to performance areas that can be observed.
✔Clearly defines mode of behaviour.
✔The behaviour should be readily observed in a number of situations.
✔Allow some space in the rating scale for the rater to give supplementary
remarks.
✔3 to 7 rating positions may be provided.
✔All raters should be oriented to the specific scale as well as the process of
rating in general .
✔The rater should be unbiased and trained.
✔Consider evaluation setting, feedback and student participation.
✔Have experts and well informed raters.
✔Assure that rater autonomy will be maintained.
126. ADVANTAGES:
• Easy to administer and score.
• It is easy to make and less time
consuming.
• Easily used for large group.
• Also used for quantitative
methods.
• May also be used for assessment of
interest, attitude, personal
characteristics.
• Used to evaluate performance and
skills.
DISADVANTAGES:
• Difficult to fix up rating.
• Chances for subjective evaluation,
thus the scales may become
unscientific.
127. PRACTICAL EXAM
• Practical exams are meant to assess the professional competence
gained by the students over a period of time and whether it meets
the requirements and expectations specified by the statutory bodies.
128. PRACTICAL EXAMINATION
• It is concerned with the assessment of practical performance skill and
practice competency acquired by a student during the course of a particular
programme.
• They are conducted in the practice laboratories or real-life practice
areas such as IPDs.
• They are used to make judgments not only on skills but on all three
domains: knowledge, attitude (including IPR), and skills.
129. PRACTICAL EXAMINATION
• Practical examinations are integral part of nursing examinations.
• The aim of practical examination is to evaluate the nursing
competence or practical skills.
• The students proceed through a series of steps and undertake a
variety of practical tasks, such as assessing the patient, formulating
nursing diagnoses, recording and reporting, etc.
130. PRACTICAL EXAMINATION
• Checklists and rating scales are prepared in advance to improve the
reliability of scoring.
• All students are evaluated on the same criteria by same examiners.
131. PURPOSES
• To assess the practical skills and practice competencies of nursing
students.
• To assess the practice domain by observing students' reactions to
real-life situations.
• To assess the student's problem solving skills.
• To assess the documentation skills.
• To assess the student's skill in translating theoretical knowledge
into practice.
132. Purposes
• Assess the clinical competence of the students
• Expertise and skill of the student in performing procedures and
techniques
• Skills in proper recording and reporting
• The ability to employ the learnt skill in real, practical situations
rather than on paper
• Critical thinking, problem solving and decision making skills
• Attitude of the students towards patients
• Ability to work in a group
• Ability to correlate theory with practice
134. A. PLANNING PHASE
• Consider the learning
experience and learning
objectives.
• Decide the appropriate
place.
• Plan to conduct in a familiar
place for students.
• Arrange for adequate supply
to carry out procedure.
• Take prior permission from, and intimate, the concerned areas.
• The evaluation criteria must
be planned and intimated to
the students.
B. CONDUCTING PHASE
• Reach the area as per
planned schedule.
• Convey rules and
regulations of the
examinations and assign
patients randomly.
• Provide sufficient time to
perform
• Follow confidentiality.
135. Procedure
• Plan the exam in advance and execute according to the plan
• Examiners should be clearly intimated regarding the date and venue
of the examination
• The exam should be conducted in a real set up as far as possible
• Students should be intimated clearly
• Examiners should give clear and proper instructions to the students
• Both the examiners should evaluate all the students to increase the
objectivity
• The examiners must try to create a stress-free
atmosphere
• They should assess the student’s knowledge and skills in a
comprehensive manner
• The students should be informed about the outcome of the
examination and the corrective measures they have to take
• Safety of the students should be considered
136. Advantages
• They provide the opportunity to test all the senses in a realistic
situation
• They are helpful to grade or promote the students to the next level
• They provide opportunity to observe and test attitudes and
responsiveness to a complex situation
• They provide opportunity to test the ability to communicate under
pressure
• They are helpful in assessing the critical thinking, problem solving
and decision making skills of the students
137. Characteristics of practical examination
• They are held towards the end of the course or module
• They are held on a date fixed in advance
• They are frequently of long duration, requiring 5 to 6 hours
• They comprise one or a small number of compulsory tasks
• They can test several areas of the syllabus together and assess the
student's ability to bring together a number of different skills
learned in various units
• They require the examiner to observe carefully to evaluate the
student's performance
138. Drawbacks
• It is subjective, as the student's score depends on the whims, fancies,
and mood of the examiner
• It is time consuming, and the lack of standardized conditions at the
bedside affects the student's score
139. ADVANTAGES:
• Provides an opportunity to assess
the skills and competency of the
students.
• It is an opportunity to the
examiners for assessing the use of
compartmentalized knowledge in
an integrated manner by a student.
• For assessing the communication
and IPR skills.
DISADVANTAGES:
• Time consuming process.
• Not feasible for large group
assessment.
• Sometimes the disturbance factors
in ward affect the smooth
conduction of examinations.
• Not considered a standardized
assessment when conducted while
working with real patients, and
personal bias is expected.
143. VIVA VOCE / ORAL EXAMINATION: DEFINITION
A viva voce may be defined as an examination consisting of a
dialogue between the examiner and the examinee, in which the
examiner asks questions to which the candidate must reply.
The viva voce, or oral examination, is a face-to-face
question-and-answer activity between the examiner and the
student.
148. Characteristics
• It takes place on a fixed occasion
• Examiners should prepare questions of varying degrees of difficulty
prior to the examination
• Adequate and equal duration of time should be given to each
student
• Examiners should score the students independently according to the
predetermined scheme
• The questions asked should be according to the educational
objectives and relevant to the subject being examined
• The examiner should try as far as possible to reduce the level of
subjectivity while examining the students
150. IMPROVING ORAL EXAMINATION
• Prepare a list of tasks and abilities to be tested
• List the usual questions asked from memory
• Revise whether they test the abilities intended to be tested
• Check for clarity of questions
• Give adequate time for the students to think and answer and
always remember the individual difference
• Be courteous and show patience
• Do not distract the candidate
• Make an attempt to know what the student knows
• Do not hurry up
151. Advantages
• It allows a direct contact between the examiner
and the examinee
• It provides opportunity for studying personal
characteristics
• It permits flexibility in questioning
• There is less scope for cheating or unfair practice
by the examinee
• It can be a good learning experience as there is
scope for an immediate feedback
152. Disadvantages
• There is a chance of subjectivity
• The personal attributes may affect the judgment
• Time consuming and costly
• It lacks validity, reliability and objectivity
• Lack of clarity in questioning and questions of
variable levels of difficulty is common
• Uniformity cannot be maintained
153. OBJECTIVE STRUCTURED PRACTICAL EXAMINATION
• Objective Structured Practical Examination is a new pattern of
practical examination in which each component of clinical
competence is tested uniformly and objectively for all the students
who are taking up a practical examination at a given place.
154. Steps of OSPE
• Demonstrate practical skills
• Make accurate observations
• Analyze and interpret data
• Identify the patients problem
• Plan alternative nursing interventions
155. Types of station
• Procedure station: at which the student performs the task
• The question station/ the response station: at which the student
answers the questions being asked on the answer sheet
156. Procedure of conducting OSPE
• Examiners A, B, and C stand in a place from where they can have a
good view of what a candidate is doing at a particular station. They
have a checklist on which they tick as they observe
• The questions and answers are prepared well in advance
• The students are given clear instruction regarding how they will
rotate around the stations and the time limit in each station and
what they are supposed to do in each station (demonstrate a skill,
make observation, make calculation from the data provided or
answer the question asked)
• At the end of OSPE the checklist of examiners A, B, C pertaining to
a given candidate and her answer sheets are put together to give her
a final score
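The final-scoring step described above (pooling the three examiners' checklists with the answer sheets) can be sketched as a simple aggregation; the numbers and the equal weighting of checklist ticks and response-station marks are assumptions for illustration:

```python
# Ticks recorded by examiners A, B, and C at the procedure stations
# (1 = step performed, 0 = not performed). Values are illustrative.
checklists = {
    "A": [1, 1, 0, 1],
    "B": [1, 1, 1, 1],
    "C": [0, 1, 1, 1],
}
# Marks the candidate earned at the question/response stations.
answer_sheet_marks = [2, 3, 1]

checklist_total = sum(sum(ticks) for ticks in checklists.values())
final_score = checklist_total + sum(answer_sheet_marks)
print(checklist_total, final_score)  # 10 checklist ticks, final score 16
```

Because every candidate's score is the same mechanical sum over the same stations, the procedure stays objective regardless of which examiner observed which station.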
157. Advantages of OSPE
• It is more objective, reliable and valid than the traditional system of
examination
• All students are subjected to the same standardized test
• Emphasis is shifted from testing factual knowledge to testing of skills,
that too in a short time
• It helps to ensure a wide coverage of all practical skills
• It ensures interaction of teaching and learning
• There is increased faculty-student interaction
• A large number of students can be tested within a relatively short
time
158. Limitations of OSPE
• The simulated situation may not reflect the real life situation
• Students cannot be assessed for different skills, such as IPR,
communication skills etc.
• Empathy towards the patients cannot be evaluated
• The skill of the student in providing holistic nursing care cannot be
assessed
• It may be time consuming to construct an OSPE
• It cannot be used by a single person, it needs more resources in
terms of manpower, time and money
• There is no interaction between the examiner and the student
• There is a risk of fatigue
• Breaking clinical skills into individual competencies may be artificial
and not meaningful
• Careful organization is required since all stations require equal time
159. OBJECTIVE STRUCTURED CLINICAL EXAMINATION
• According to Harden (1988), OSCE is an approach to assess the
clinical competence, in which the components of competence are
assessed in a planned or structured way with attention being paid to
the objectivity of the examination.
160. Process of OSCE
• The student is assessed at a series of stations with one or two
aspects of competence being tested at each station.
• The examination can be described as a focused examination with
each station focusing on one or two aspects of competence
• An OSCE includes a series of 12 to 20 stations, each testing one or
two components of clinical competence for 3 to 5 minutes.
• Students rotate through all the stations at predetermined time
intervals; a series of 12 to 20 stations can thus accommodate 12 to
20 students being examined simultaneously
161. Competencies assessed in OSCE
• Taking clinical history
• Physical examination
• Critical thinking in patient management
• Problem solving
• Communication and interpersonal relationship
162. ASSESSMENT OF ATTITUDES
ATTITUDE SCALE
• An attitude scale measures how the participant feels about a subject
at the moment when he or she answers the question
Types of attitude scales
• Point scale
• Differential scale
• Summated (Likert) Scale
• Semantic differential attitude scales
163. Likert Scale
The scale is named after its inventor, psychologist Rensis Likert.
Likert scale is designed to determine the opinion or attitude of a
subject and contains a number of declarative statements with a scale
after each statement.
The original version of the scale included five response categories.
Each response category was assigned a value with a value of 1 given to
the most negative response and a value of 5 to the most positive
response
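Scoring in the original five-category version (1 for the most negative response, 5 for the most positive) can be sketched as below; the reverse scoring of negatively worded statements is an assumption added for illustration, so that the summated score always runs in the same direction:

```python
# Five response categories of the original Likert scale.
SCALE = {"strongly disagree": 1, "disagree": 2, "undecided": 3,
         "agree": 4, "strongly agree": 5}

def item_value(response: str, negatively_worded: bool = False) -> int:
    """Value of one declarative statement; reverse-score negative wording."""
    value = SCALE[response]
    return 6 - value if negatively_worded else value

# The summated ('Likert') score is the sum over all statements.
responses = [("agree", False), ("strongly agree", False), ("disagree", True)]
total = sum(item_value(r, neg) for r, neg in responses)
print(total)  # 4 + 5 + 4 = 13
```

Summing the per-statement values is what makes this a "summated" scale: the attitude score is the total across all declarative statements.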
164. Advantages
• Questions used are usually easy to understand and so lead to
consistent answers
• Easy to use
Disadvantages
• Only a few options are offered, with which respondents may not
fully agree
• People may become influenced by the way they have answered
previous questions
• They may also deliberately break the pattern by disagreeing with a
statement with which they might otherwise have agreed
165. Semantic differentials
The semantic differential was developed by Osgood and
colleagues (1957) and measures attitudes or beliefs.
It consists of two adjectives with a 7 point scale between them.
The subject is to select one point on the scale that best describes his
or her view of the concept being examined.
In a semantic differential, values from 1 to 7 are assigned to the
spaces, with 1 being the most negative response and 7 the most
positive.
166. Thurstone scale
In Psychology, the Thurstone scale was the first formal technique for
measuring an attitude
It was developed by Louis Leon Thurstone in 1928, as a means of
measuring attitudes towards religion
It is made of statements about a particular issue and each statement
has a numerical value indicating how favourable or unfavourable it is
judged to be
168. Types of evaluation
Evaluation is classified into various types
• Based on the time / frequency during which the evaluation is done
❖Formative evaluation
❖Summative evaluation
• Based on the purpose
❖Criterion referenced evaluation
❖Norm referenced evaluation
• Based on the nature
❖Maximum performance evaluation
❖Typical performance evaluation
• Based on the person who does evaluation
❖Internal evaluation
❖External evaluation
169. Based on the time/frequency during which the evaluation is done:
❖ Formative evaluation
❖ Summative evaluation
▪ The term FORMATIVE denotes the ongoing or systematic assessment of
students' achievement while the term, course, or instructional
program is in progress.
▪ The term SUMMATIVE refers to assigning a grade for students'
achievement at the end of the term, course, or instructional
program.
170. FORMATIVE EVALUATION
❖ Formative evaluation is concerned with judgements made during the
design or development of a program, directed towards modifying,
forming, or otherwise improving the program before it is completed
(A. J. Nitko, 1983).
❖ Formative evaluation is done during an instructional program to
monitor learning and to modify the program (if needed) before its
completion. It is for the current students.
171. CHARACTERISTICS OF FORMATIVE EVALUATION
❖ It focuses relatively on molecular analysis.
❖ It is cause-seeking.
❖ It is interested in the broader experiences of the program users.
❖ Its design is exploratory and flexible.
❖ It tends to ignore the local effects of particular programs.
❖ It seeks to identify influential variables.
❖ It requires analysis of the instructional material to map the
hierarchical structure of the learning tasks, and actual teaching of
the course for a certain period.
172. • Formative evaluation is individualized by comparing the
student's achievement during various stages of the curriculum
or course.
• Formative evaluation sets the student against himself, not against
other peers or colleagues.
• Formative evaluation continually shows, in a prospective way, slow
or rapid progress towards attainment.
• This enables the student to control his own learning behaviour at
every point of the course and seek appropriate teacher guidance.
• The teacher, in turn, is provided with qualitative and quantitative
data for modifying teaching suitably and on time.
173. SUMMATIVE EVALUATION
• Summative evaluation describes judgements about the merits of an
already completed program, procedure, or product (A. J. Nitko, 1983).
❖ Summative evaluation is done at the conclusion of instruction and
measures the extent to which students have attained the desired
outcomes (W. Wiersma, 1990).
174. CHARACTERISTICS OF SUMMATIVE EVALUATION
❖ It leads to the use of a well-defined evaluation design.
❖ It focuses on analysis.
❖ It provides descriptive analysis.
❖ Its instruments are reliable and valid.
❖ Summative evaluation is useful for convincing the consumer, society,
or the students of the competence achieved.
❖ It is used for promotion to the next grade.
177. FORMATIVE EVALUATION:
1. Formative evaluation is used during the
teaching learning process to monitor the
learning process.
2. Formative evaluation is developmental in
nature. The aim of this evaluation is to
improve student’s learning and teacher’s
teaching.
3. Generally teacher made tests are used for
this purpose.
4. The test items are prepared for limited
content area.
5. It helps to know to what extent the
instructional objectives have been achieved.
6. It provides feed-back to the teacher to
modify the methods and to prescribe remedial
works
7. Only few skills can be tested in this
evaluation.
8. It is a continuous and regular process.
9. It considers evaluation as a process.
10. It answers the question of whether the
pupils' progress in a unit is successful.
SUMMATIVE EVALUATION:
1. Summative evaluation is used after the
course completion to assign the grades.
2. Summative evaluation is terminal in
nature. Its purpose is to evaluate student’s
achievement.
3. Generally standardized tests are used
for the purpose.
4. The tests items are prepared from the
whole content area.
5. It helps to judge the appropriateness of
the instructional objectives.
6. It helps the teacher to know the
effectiveness of the instructional
procedure.
7. Large number of skills can be tested in
this evaluation.
8. It is not a regular and continuous process.
9. It considers evaluation as a product.
10. It answers the question of the degree
to which the students have mastered the
course content.
178. DIFFERENCES BETWEEN FORMATIVE AND SUMMATIVE EVALUATION
1. Formative: evaluation is performed to determine how well students
have mastered various elements so that decisions can be made on how
instruction should best proceed. Summative: evaluation is performed
simply to grade the student at the end of one unit before proceeding
to the next.
2. Formative: deals with only a segment. Summative: deals with the
whole in a detailed manner.
3. Formative: it is possible to break a course or subject into smaller
units of learning. Summative: a test is given for the whole course.
4. Formative: the test can be administered after completion of a unit.
Summative: the test can be given after completion of the
course/program.
5. Formative: immediate feedback. Summative: immediate feedback is not
possible.
6. Formative: diagnostic and progress tests are possible. Summative:
achievement examinations are possible.
7. Formative: the weaknesses and strengths of the student can be
understood. Summative: the success or failure of the student can be
determined.
179. Based on the purpose
❖ Criterion-referenced evaluation
❖ Norm-referenced evaluation
180. CRITERION REFERENCED EVALUATION
Criterion-referenced tests and assessments are designed to
measure student performance against a fixed set of
predetermined criteria or learning standards—i.e., concise,
written descriptions of what students are expected to know
and be able to do at a specific stage of their education.
In diploma and degree nursing education, criterion-referenced
tests are used to evaluate whether students have learned a
specific body of knowledge or acquired a specific skill set,
for example, the curriculum taught in a course, academic
program, or content area.
181. CRITERION REFERENCED EVALUATION
• A criterion-referenced test is one that translates test scores into a statement about the behaviour to be expected of a person with that score, or about that person's relationship to a specified body of subject matter.
• It describes the specific performance that was demonstrated.
• This type of evaluation is designed to provide a measure of performance in directly interpretable terms.
• Results are interpreted in terms of the performance itself, not in comparison with others.
Tools
❖ Teacher-made tests
❖ Observation techniques
182. CRITERION- REFERENCED EVALUATION
▪ This describes a student's performance according to a specified domain of clearly defined learning tasks.
▪ E.g., formulate the nursing diagnoses of a patient with typhoid fever.
▪ Thus criterion-referenced evaluation directly describes the specific performance that was demonstrated.
▪ Criterion-referenced evaluation therefore describes what an individual can do, without reference to the performance of others.
▪ Evaluation instruments include teacher-made tests, published tests and observational techniques.
183. NORM-REFERENCED EVALUATION:
• Norm-referenced tests are standardized tests designed to compare and rank test takers in relation to one another.
• A norm-referenced test yields an estimate of the position of the tested individual, with respect to the trait being measured, within a predefined population.
• It describes pupil performance according to relative position in some known group, i.e., by comparing individual performances with others; e.g., a rank of 5 in a class of 20.
Tools
❖ Standardized aptitude tests
❖ Achievement tests
❖ Teacher-made tests
PURPOSES:
❖ Classification of students.
❖ Classifying students so that they can be placed in remedial or gifted programs.
❖ To help teachers select students for different ability levels.
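The two interpretations can be contrasted in a short sketch. The cut-off mark, the class scores, and the function names below are hypothetical illustrations, not part of any standard scheme.

```python
# Sketch: interpreting the same score two ways (hypothetical data).

def criterion_referenced(score, cutoff=50):
    """Criterion-referenced: judge the score against a fixed standard."""
    return "mastered" if score >= cutoff else "not yet mastered"

def norm_referenced(score, group_scores):
    """Norm-referenced: judge the score by its position in the group."""
    below = sum(1 for s in group_scores if s < score)
    percentile = 100 * below / len(group_scores)
    rank = 1 + sum(1 for s in group_scores if s > score)
    return rank, percentile

class_scores = [72, 65, 58, 49, 44, 80, 61, 55, 68, 38]  # hypothetical class of 10

print(criterion_referenced(61))           # compared with the fixed cut-off only
print(norm_referenced(61, class_scores))  # compared with the rest of the class
```

The criterion-referenced reading never mentions other students; the norm-referenced reading says nothing about what was actually mastered.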
184. Based on the nature
❖ Maximum performance evaluation
❖ Typical performance evaluation
185. MAXIMUM PERFORMANCE EVALUATION
▪ Maximum performance evaluation determines what individuals can do when performing at their best.
▪ It determines a person's abilities and how well he/she can perform when motivated to obtain as high a score as possible.
Tools
❖Aptitude test
❖Achievement test
186. TYPICAL PERFORMANCE EVALUATION
Typical performance evaluation determines what individuals will do under natural conditions, i.e., their typical behaviour.
• It indicates what individuals will do rather than what they can do.
• The importance of this distinction between ability and typical behaviour is easily illustrated: a student with considerable aptitude for community health nursing may nevertheless show little interest in it.
Tools
❖ Attitude tests
❖ Observation techniques
187. Based on the person who does the evaluation
❖ Internal evaluation
❖ External evaluation
188. INTERNAL ASSESSMENT
Internal assessment is continuous and periodic: certain abilities and skills of the students are assessed regularly throughout the course.
Internal assessment has to be planned at the time of curriculum development and syllabus interpretation.
189. • Internal assessment is carried out by the teacher/instructor of the college or school; no external teacher or examiner is involved.
• Internal assessment looks at the outcomes achieved by students as well as their abilities and skills.
190. DEFINITION:
• Internal assessment refers to the process of evaluating students or staff by the people who govern the institution.
• No external authority or office interferes with the tests given to the members of the institution; the only governing body is the head of the institution.
191. PURPOSES OF INTERNAL ASSESSMENT
To integrate teaching and evaluation, and to test the skills and abilities which cannot be tested through written examinations.
192. NEED FOR INTERNAL ASSESSMENT
• To give credit in final assessment.
• To reduce tension associated with final
examination.
• To provide link for feedback in teaching.
• To provide opportunity to the teacher to evaluate
his/her students.
• To induce students to continue learning.
• Diagnostic and remedial teaching become possible and more scientific.
• Internal assessment marks carry weightage towards the annual examination, which motivates the student.
193. BASIC PRINCIPLES
Basic principles of internal assessment
• It should be continuous and carried out by the subject teacher; it does not replace examinations.
• It uses suitable evaluation tools and techniques.
• Fix the proportion of marks according to the hours of instruction and the importance of the subject to nursing.
• Use it as feedback to improve teaching.
• Students should know their internal assessment marks before their final exams.
• Give students the opportunity to improve their internal assessment grade through additional tests, assignments, etc.
• Results must be studied statistically.
• It should cover a number of components.
194. ADVANTAGES / MERITS OF INTERNAL ASSESSMENT
1. It is logical and psychological.
2. Proper study habits are likely to be developed.
3. Students will be more regular, alert and sincere in their study.
4. Eleventh-hour preparation is reduced.
5. Students pay equal attention to all activities.
6. It helps students to minimize anxiety and nervous breakdown.
7. It gives a comprehensive picture to the teacher.
8. It is a good device for motivating students.
9. It helps to diagnose the weaknesses and strengths of the student.
10. It brings changes in students' attitudes, interests and appreciation.
11. It gives the teacher ample opportunity to assess the student: "The teacher who teaches should assess."
12. Parents feel more comfortable knowing about their children's progress.
196. EXTERNAL EXAMINATIONS/
EXTERNAL ASSESSMENT
INTRODUCTION:
❑ External examinations are university or board examinations conducted for awarding certificates or degrees related to nursing.
❑ In our country, state boards and universities usually conduct the examinations for awarding diplomas and degrees.
❑ The controller or registrar is in overall charge, on behalf of the university, of matters relating to the conduct of examinations, announcement of results, conferment of degrees and maintenance of confidentiality.
197. DEFINITION
• An evaluation carried out by evaluators external to the entity being evaluated, i.e., performed by persons outside the organization responsible for the intervention itself.
198. IMPORTANCE/OBJECTIVES OF EXTERNAL
ASSESSMENT
• To maintain the standard of degrees/certificates.
• To distinguish between students.
• To compare abilities.
• To evaluate progress.
• To select students for higher education.
• To help students get employment.
• To establish the popularity/standard of the educational institution.
• To select intelligent students.
• To foster competition.
• To evaluate teachers' performance.
• To evaluate objectives and curriculum.
• To create good habits in students.
• To satisfy parents.
199. ADVANTAGES
• A person will be able to know his/her performance and knowledge.
• For most people, examinations encourage them to work and learn.
• Examinations can create competition which pushes competitors to do their best, and they help in developing one's own personality and confidence.
• Passing the examination with a good result can help a person obtain a scholarship, which in turn can lead to a good job.
202. ACHIEVEMENT TEST
• Achievement test is an important tool in school /college evaluation
and has great significance in measuring instructional progress and
progress of the students in the subject area.
• Accurate achievement data are very important for planning
curriculum and instruction and for program evaluation.
• An achievement test is a test of developed skill or knowledge. The
most common type of achievement test is a standardized
test developed to measure skills and knowledge learned in a given
grade level, usually through planned instruction, such as training or
classroom instruction.
203. • A standardized test is administered and scored in a consistent, or "standard", manner: the questions, conditions for administering, scoring procedures and interpretations are predetermined and applied uniformly to all test takers.
• E.g., time-limited tests, multiple-choice tests, written tests, oral tests, or practical skills performance tests.
204. DEFINITION
• “Any test that measures the attainments and accomplishments of an
individual after a period of training or learning”-
N M Downie
• ‘The type of ability test that describes what a person has learned to
do’
Thorndike and Hagen
• “A systematic procedure for determining the amount a student has
learned through instruction”
Gronlund
205. TEST CONSTRUCTION
Purposes of test
• Before instruction
– To determine readiness
– To place the candidate or categorize
– To assess existing knowledge
• During instruction
– To assess learning
– To use as diagnostic tool
• After instruction
– To assess the learning outcome
– To assess the level of mastery
– To grade
• General purpose
– To direct, stimulate, motivate
– To assess teaching effectiveness
206. FUNCTIONS OF TEST
• It provides basis for promotion to the next grade.
• To find out where each student stands in various
academic areas.
• It helps in determination about the placement of
the students in a particular section.
• To motivate the students before a new assignment is taken up.
• To expose pupil’s difficulties which the teacher
can help them to solve.
207. STEPS INVOLVED IN THE CONSTRUCTION OF
ACHIEVEMENT TEST
1. Planning of test
2. Preparation of a design for the test / developing test
design
3. Preparation of the blue print
4. Construction of items/ Writing of items
5. Organization of the test.
6. Preparation of the scoring key and marking scheme
7. Preparation of question-wise analysis
8. Test administration.
208. 1. Planning of the test
• Define the objectives of the test
• Determine the maximum time, maximum marks and nature of the test
2. Preparation of a design for the test
Important factors to be considered in design for
the test are:
•Weightage to objectives
•Weightage to content
•Weightage to form of questions
•Weightage to difficulty level.
209. a) Weightage to objectives: this indicates which objectives are to be tested and what weightage is to be given to each objective.
210. b) Weightage to content: this indicates the various aspects of the content to be tested and the weightage to be given to these different aspects.
211. c) Weightage to form of questions: this indicates the forms of questions to be included in the test and the weightage to be given to each form of question.
212. d) Weightage to difficulty level: this indicates the total marks and the weightage to be given to questions of different difficulty levels.
213. 3. Preparation of the blue print:
the blue print is a three-dimensional chart indicating the distribution of questions across objectives, content and forms of questions.
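As a rough sketch, the blue print's dimensions can be modelled as a grid of content units against objectives, with each cell holding the marks allotted. The units, objectives and weightages below are hypothetical, chosen only to illustrate how the dimensions fit together.

```python
# Sketch: a test blue print as a nested dict (hypothetical weightages).
# Rows = content units, columns = objectives; each cell holds marks allotted.

blueprint = {
    "Unit 1: Anatomy":    {"Knowledge": 6, "Understanding": 4, "Application": 2},
    "Unit 2: Physiology": {"Knowledge": 4, "Understanding": 6, "Application": 4},
    "Unit 3: First aid":  {"Knowledge": 2, "Understanding": 4, "Application": 8},
}

def marks_per_unit(bp):
    """Weightage to content: total marks per content unit."""
    return {unit: sum(cells.values()) for unit, cells in bp.items()}

def marks_per_objective(bp):
    """Weightage to objectives: total marks per objective."""
    totals = {}
    for cells in bp.values():
        for objective, marks in cells.items():
            totals[objective] = totals.get(objective, 0) + marks
    return totals

print(marks_per_unit(blueprint))                # weightage to content
print(marks_per_objective(blueprint))           # weightage to objectives
print(sum(marks_per_unit(blueprint).values()))  # maximum marks for the paper
```

Summing rows, columns and the whole grid recovers the weightage to content, the weightage to objectives and the paper's maximum marks respectively.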
214. 4. Writing of items / construction of items
• The paper setter writes items according to the blue print.
• The blue print gives a very definite idea of the number of questions to be set from each sub-unit, and of their forms and scope.
• The difficulty level has to be considered while writing the items.
• It should also be checked whether all the questions included can be answered within the time allotted.
• It is advisable to arrange the questions in the order of their difficulty level.
215. 5. Organization of the test
• After finalizing the items, they have to be arranged according to the scheme of sections suggested in the design.
• Before that, preliminary details such as the name of the examination, maximum marks and time, and instructions for answering each part have to be written at the appropriate places.
• Arrangement of questions: psychologically, it is advisable to arrange the items in order of difficulty level, i.e., an "information item" will normally be easier than an "understanding item", which in turn may be easier than an "application item".
216. 6. Preparation of the scoring key and marking scheme
In the case of objective-type items, where the answers are in the form of letters or other symbols, a scoring key is prepared.
217. MARKING SCHEME
In the case of short-answer and essay-type questions, a marking scheme is prepared.
In preparing the marking scheme, the examiner has to list the value points to be credited and fix the marks to be given to each value point.
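Both devices can be sketched as data plus a small scoring routine. The answer key, value points and marks below are hypothetical illustrations only.

```python
# Sketch: a scoring key for objective items and a marking scheme for an essay.

scoring_key = {1: "B", 2: "D", 3: "A", 4: "C"}  # hypothetical MCQ answer key

def score_objective(responses, key, mark_per_item=1):
    """Credit one mark for each response matching the scoring key."""
    return sum(mark_per_item for q, ans in responses.items() if key.get(q) == ans)

# Marking scheme for one essay question: value point -> marks credited.
marking_scheme = {
    "defines the term": 2,
    "lists the steps": 4,
    "gives an example": 2,
}

def score_essay(value_points_credited, scheme):
    """Credit the fixed marks for each value point the examiner found."""
    return sum(scheme[p] for p in value_points_credited if p in scheme)

print(score_objective({1: "B", 2: "D", 3: "C", 4: "C"}, scoring_key))
print(score_essay(["defines the term", "gives an example"], marking_scheme))
```

The key makes objective scoring mechanical, while the marking scheme keeps essay scoring consistent across examiners by fixing marks per value point in advance.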
218. 7. PREPARATION OF QUESTION-WISE ANALYSIS
• It helps to know the strengths and weaknesses of the test, to tally the question paper with the blueprint, and to determine the content validity of the test.
220. 8.TEST ADMINISTRATION.
The steps to be followed in the administration of group tests are
a) Motivate the students to do their best.
b) Follow the directions closely.
c) Keep time accurately.
d) Record any significant events that might influence test scores.
e) Collect the test materials promptly.
e) Collect the test materials promptly.
The things to avoid while administering a test are…
a) Do not talk unnecessarily before the test
b) Keep interruptions to a minimum during the test
c) Avoid giving hints to pupils who ask about individual items
d) Discourage cheating.
221. Table of specifications
A table of specifications is also known as a test map, test grid or test blue print. The purpose of developing a table of specifications is to ensure that the test serves its intended purpose.
Step I: Define the specific learning outcome to be measured
Step II: Determining the instructional content to be evaluated and the
weightage to be assigned to each area
Step III: Developing a two-way grid. A two-way grid is developed with the content areas listed down the left side and the learning outcomes (e.g. knowledge, comprehension, etc.) listed across the top.
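The three steps above can be sketched in code: given weightages for content areas and learning outcomes, each cell of the two-way grid receives its share of the total items. The content areas, outcomes, weightages and item count below are hypothetical.

```python
# Sketch: allocating items in a two-way grid from weightages (hypothetical figures).

content_weight = {"Nutrition": 0.4, "Hygiene": 0.6}        # rows (left side)
outcome_weight = {"Knowledge": 0.5, "Comprehension": 0.5}  # columns (top)
total_items = 20

# Each cell = total items x content weightage x outcome weightage.
grid = {
    content: {
        outcome: round(total_items * cw * ow)
        for outcome, ow in outcome_weight.items()
    }
    for content, cw in content_weight.items()
}

print(grid)  # number of items for each content/outcome pair
```

Because rounding can distort small allocations, in practice the cell counts are checked so that they still sum to the intended total.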
222. Selecting item types
Items may be of the selection type, which provides a set of responses from which to choose, or of the supply type, which requires the student to supply an answer.
Common selection-type items include true-false, matching and multiple-choice items.
Editing and validating test items
It is essential to edit the items and make any needed corrections.
At this stage, peer review of questions is helpful for refining the questions, ensuring accuracy, testing for reliability and eliminating grammatical errors.
223. Assembling and administering a test
• Once the items are written and edited, they must be assembled into a test. This step includes arranging the items, writing the test directions, reproducing the test and administering it.
Arranging items
– Group similar item types together
– Place items within each group in ascending order of difficulty
– Begin the test with an easy question.
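These arranging rules can be sketched as a single sort: group items by type, then order each group from easy to hard. The items and difficulty ratings below are hypothetical.

```python
# Sketch: arranging test items (hypothetical items and difficulty ratings,
# where a higher rating means a harder item).

items = [
    {"q": "T/F: ...", "type": "true-false",      "difficulty": 0.3},
    {"q": "MCQ: ...", "type": "multiple-choice", "difficulty": 0.7},
    {"q": "MCQ: ...", "type": "multiple-choice", "difficulty": 0.2},
    {"q": "T/F: ...", "type": "true-false",      "difficulty": 0.6},
]

# Sorting on the (type, difficulty) tuple groups similar item types together
# and, within each group, runs from easy to hard.
arranged = sorted(items, key=lambda it: (it["type"], it["difficulty"]))

for it in arranged:
    print(it["type"], it["difficulty"])
```

Note that sorting on type here orders the groups alphabetically; in practice the paper setter would fix the group order explicitly so that the test still begins with an easy question.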
224. Writing test directions
The written test directions should be self-explanatory.
They should include the time allotted to complete the test, instructions for responding, instructions for recording the answers on the answer sheet, and the marks assigned to each question.
Reproducing the test
Test should be easy to read and follow
225. Administering the test/ Role of faculty
• Provide conducive physical environment
• Avoid giving unintentional clues
• Maintain confidentiality
• Maintain the test security
• Do not give the same question paper each year
• Inform students in advance of the consequences of cheating
• Ensure close supervision throughout the class
• Ensure careful seating arrangement and
spacing of students
227. EVALUATION OF THE
COURSE
• It is the process of making judgements about the extent to which a particular educational program has achieved its objectives, and of measuring the extent to which the program delivered is effective and efficient in fulfilling the intended purpose of its development or creation.
228. Aims of program evaluation
• To measure the progress of a program
• To identify any problem and to resolve any conflicts
• To enhance utilization of available resources
• To provide baseline information for future evaluation
and planning
• To modify and to make any remedial measures
• To increase the efficiency and effectiveness of the
program
• To promote cost effectiveness
• To improve the quality
• To increase the image of an institution
229. Components of the program to be
evaluated
• Philosophy and objectives of the college
• Admission criteria
• Staff welfare activities
• Student welfare activities
• Faculty position
• Curriculum
• Student’s performance
• Infrastructural facilities
• Records and its maintenance
230. Who will evaluate
• The consumers
• The stakeholders
• The general public
• The administrators
• The faculty members
• The alumni
• Any specifically appointed committee
✔INC
✔KNMC
✔KUHS
231. The program evaluation plan
This should consist of:
• The major areas of evaluation and its
components
• Time frame
• Data to be collected
• Use of existing data
• Decision regarding who has to be evaluated
• The purpose of evaluation
232. Criteria for evaluation of a program
• Consistency with the objectives
• Comprehensiveness
• Sufficient diagnostic value.
• Validity
• Unity of evaluation judgement
• Continuity
234. INTRODUCTION
• Accreditation is very important for an educational institution.
• It is an ongoing evaluation process.
• Accredited institutions are responsible for the development and maintenance of academic standards and the quality of education.
235. ACCREDITATION
Meaning
• The fact of being officially recognized, accepted or approved of; or the act of officially recognizing, accepting or approving of something.
• Official approval, especially in order to maintain satisfactory standards.
• Accreditation is the act of granting credit or recognition, especially to an educational institution that maintains suitable standards.
• Accreditation is a process by which a voluntary, non-governmental agency or organisation approves and grants status to institutions or programs that meet predetermined standards or outcomes.
236. Purpose
• To certify to the public that an institution has met established
standards
• To encourage peer review by the faculty and staff of the institution
• To facilitate the transfer of students from one institution to another
• To assist prospective students in deciding which institution to attend
and join
• To assist prospective employers in identifying qualified personnel
• To raise the standards of education for the practice of profession
• Maintenance of adequate admission requirement.
• Maintenance of minimum academic standard.
• Stimulation of institutional self improvement.
• Protection of institutions against educationally and socially harmful
pressures.
237. Types
• Institutional accreditation: accreditation of the institution as a whole, without differentiation among the various curricula; it looks at the institution as a total operating unit.
• Specialized accreditation: otherwise known as program accreditation, where a program run by the institution is accredited; the accrediting bodies are often associations or councils of professions such as medicine and nursing.
238. Beneficiaries of accreditation
• Students
• Faculty
• Graduates
• Practising nurses
• Consumers of nursing services
• Administrators
239. Aspects/ areas reviewed under accreditation process
• Administration and governance
• Finance and budget
• Faculty, students and resources
• Program outcomes
240. Stages of accrediting programs
• Initiation of the process
• Conduction of a self evaluation study
• Accreditation visit
• Evaluation by the board of review or peer group
• Continuing self evaluation and ongoing program improvement