EVALUATION
AND
MEASUREMENT
BHAGYASREE N P
1st MSc. Nursing
Govt CON
ALAPPUZHA
DEFINITION
 Evaluation is the process of ascertaining or judging the value of something by careful appraisal. – Carter V. Good
 Evaluation is the process of determining to what
extent the educational objectives are being
realised. – Ralph Tyler
Philosophy of evaluation
 Each individual should receive the education that most fully allows him/her to develop his/her potential.
 Each individual should be so placed that he/she can contribute to society and receive personal satisfaction in doing so.
 Fullest development of the individual requires recognition of his/her essential individuality, along with some rational appraisal by him/herself and others.
 The judgements required in assessing an
individual’s potential are complex in their
composition, difficult to make and filled with
error.
 Such error can be reduced but never eliminated.
Hence, any evaluation can never be considered
final.
 Every form of appraisal will have critics, which is a spur to change and improvement.
Purposes of evaluation
 Diagnosis: Identifies weaknesses in learning among students and monitors learning progress so as to provide suggestions.
 Prediction: Predicts probable future success on certain types of tasks on the basis of present achievement in related tasks.
 Grading: Rank-orders students; usually used in terminal examinations.
 Selection: Selects suitable candidates for various courses in a university. The entrance tests for various courses serve this purpose.
 Guidance: Assists students in making decisions for the future in the choice of higher studies or career.
 Exploration: Brings out the inherent capabilities of pupils, such as attitudes, habits, appreciation, understanding and manipulative skills, in addition to the conventional acquisition of knowledge.
 Evaluation of the programme: Monitors the effectiveness of teaching in a particular course; finds out the relevance of the objectives and the effectiveness of the methods used; and provides a basis for the modification of the curriculum and courses.
 Evaluation of teachers: Tests the efficiency of teachers in providing learning experiences and the effectiveness of instruction.
 Motivation: Helps students to become increasingly self-directing in their study and activities. It also helps in selection, awarding of honours, placement of students in advanced education and the writing of recommendations.
PURPOSES OF EVALUATION IN NURSING
EDUCATION
 To determine the level of knowledge and
understanding of students
 To determine the level of students’ clinical performance.
 To become aware of the specific difficulties of individual students or of an entire class, as a basis for further teaching.
 To diagnose each student’s strengths and weaknesses and to suggest remedial measures which may be needed.
 To encourage students’ learning by measuring their achievements and informing them of their success.
 To help students to become increasingly self-directing in their study.
 To help students to acquire the attitude of, and skills in, self-evaluation.
 To provide the additional motivation of examinations, which give opportunities to practise critical thinking, the application of principles, the making of judgements, etc.
 To estimate the effectiveness of teaching and learning techniques, of subject content and of instructional media in attaining the goals of the programme.
 To gather information needed for administrative purposes.
Principles of evaluation
1. Determining and clarifying what is to be evaluated
always has priority in the evaluation process.
2. Evaluation techniques should be selected according to the purpose to be served.
3. Comprehensive evaluation requires a variety of evaluation techniques.
4. Proper use of evaluation techniques requires an awareness of both their limitations and strengths.
5. Evaluation is a means to an end, not an end in itself.
Characteristics of evaluation
1. Evaluation is a continuous process.
2. Evaluation includes academic and non-academic subjects.
3. Evaluation is a procedure for improving the product.
4. Evaluation involves discovering the needs of an individual and designing learning experiences accordingly.
5. Evaluation is purpose-oriented.
Functions of evaluation
 To make provisions for guiding the growth of individuals
and pupils.
 To diagnose the weaknesses and strengths of the pupils.
 To locate areas where remedial measures are needed.
 To provide a basis for a modification of the curriculum and
the course.
 To motivate pupils toward better attainment and growth.
 To test the efficiency of teachers in providing learning
experiences.
Types of evaluation
1. Feasibility evaluation
2. Formative evaluation
3. Certifying evaluation or summative evaluation
4. Maintenance evaluation
5. Maximum performance evaluation
6. Criterion-referenced evaluation
7. Norm-referenced evaluation (contrasted with criterion-referenced evaluation in the sketch below)
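The difference between criterion-referenced and norm-referenced evaluation lies in how the same raw score is interpreted. The Python sketch below is purely illustrative; the student names, scores and cut-off mark are hypothetical assumptions, not part of any prescribed scheme.

# Illustrative sketch: criterion- vs norm-referenced interpretation of the same scores.
# Names, scores and the cut-off are hypothetical.
scores = {"Anu": 72, "Bindu": 55, "Chitra": 88, "Devi": 61, "Elsa": 47}
CUT_OFF = 60  # hypothetical mastery criterion

def criterion_referenced(score, cut_off=CUT_OFF):
    # Judge the score against a fixed standard, ignoring how peers performed.
    return "meets criterion" if score >= cut_off else "does not meet criterion"

def norm_referenced(name, all_scores):
    # Judge the score against the peer group: proportion of classmates scoring lower.
    peers = [s for n, s in all_scores.items() if n != name]
    below = sum(1 for s in peers if s < all_scores[name])
    return f"scores higher than {100 * below / len(peers):.0f}% of classmates"

for student, score in scores.items():
    print(f"{student}: {score} -> {criterion_referenced(score)}; {norm_referenced(student, scores)}")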
Evaluation process
 Identifying the purpose of evaluation.
 Identifying the time frame.
 Determining when to evaluate.
 Selecting the evaluators
 Choosing an evaluation design/framework or model.
 Selecting the evaluation instruments.
 Collecting data.
 Interpreting data
 Reporting the findings
 Using the findings
 Considering the costs of evaluation.
MEASUREMENT
 Measurement is any device or procedure which allows information about students to be obtained in quantitative form.
 “It is an act or process that involves the assignment of a numerical index to whatever is being assessed.”
 “The process of obtaining a numerical description of the degree to which an individual possesses a particular characteristic.”
Essentials of measurement
 Identification and definition of the quality, attribute or variable that is to be measured.
 Determination of the set of operations through which the attribute or variable may manifest itself and become perceivable.
 Establishment of a set of procedures for translating observations into quantitative statements of degree, extent or amount.
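As a minimal sketch of these three essentials, assume a hypothetical attribute (aseptic hand-washing skill) observed through a made-up five-step checklist; the items and the 0/1 scoring rule are illustrative, not a standard instrument.

# Minimal sketch of the three essentials of measurement.
# The attribute, checklist items and scoring rule are hypothetical.

# 1. Identify and define the attribute to be measured.
ATTRIBUTE = "aseptic hand-washing skill"

# 2. Determine the set of operations through which the attribute becomes perceivable.
CHECKLIST = [
    "removes jewellery and rolls up sleeves",
    "wets hands and applies soap",
    "lathers all surfaces for at least 20 seconds",
    "rinses with fingertips pointing downward",
    "dries hands with a sterile towel",
]

# 3. Translate the observations into a quantitative statement (0 = not done, 1 = done).
def total_score(observations):
    assert len(observations) == len(CHECKLIST)
    return sum(observations)

observed = [1, 1, 0, 1, 1]  # one observer's record for one student
print(f"{ATTRIBUTE}: {total_score(observed)} out of {len(CHECKLIST)} steps performed")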
Functions of measurement
 1. Prognosis: An administrative function such as the classification, selection, promotion and gradation of students on the basis of student achievement; the effectiveness of methods, instruction and treatment is also evaluated.
 2. Diagnosis: Identifies the weaknesses of students and is helpful in preparing remedial work for them. It is also helpful in establishing cause-and-effect relationships and can improve the instructional procedure.
 3. Research: Measurement provides a more objective and dependable basis for research purposes. Valid generalisations can be made on the basis of accurate measurement.
Criteria for the selection of evaluative devices
 Sampling of the objectives
 Sampling of the content
 Validity
 Reliability
 Practicability
 Usefulness
 Validity
The validity of a test is the degree to which it measures what it is intended to measure.
Types of Validity
1. Face validity
The extent to which, on looking at the test, it seems logically related to what is being tested.
2. Construct validity
Ensures that the test actually measures what it is intended to measure (i.e. the construct) and not other variables.
3. Content validity
Reflects the extent to which the test measures students’ learning of the specific content material in a subject. This type of validity is related to the content area that is being tested.
4. Performance (predictive) validity
A tool, which may be a written test, is said to possess predictive validity to the extent that the information obtained through it serves the purpose of predicting the future performance of students in a particular area of learning.
5. Concurrent validity
The test scores are correlated with another set of criterion scores which are presently, that is concurrently, available.
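Concurrent (and predictive) validity is usually summarised as a correlation coefficient between the test scores and the criterion scores. A minimal sketch, assuming hypothetical score lists and using numpy’s Pearson correlation:

import numpy as np

# Hypothetical data: scores on a new test and criterion scores that are
# concurrently available (e.g. marks already awarded by clinical instructors).
test_scores      = np.array([62, 75, 58, 90, 70, 81, 66, 73])
criterion_scores = np.array([60, 78, 55, 88, 72, 80, 63, 70])

# The Pearson correlation between the two sets of scores is the validity coefficient.
validity_coefficient = np.corrcoef(test_scores, criterion_scores)[0, 1]
print(f"Concurrent validity coefficient: {validity_coefficient:.2f}")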
Reliability
1. Test-retest reliability is a measure of reliability
obtained by administering the same test twice over a
period of time to a group of individuals. The scores
from Time 1 and Time 2 can then be correlated in
order to evaluate the test for stability over time.
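A minimal sketch of this computation, assuming hypothetical Time 1 and Time 2 scores for the same eight students; the coefficient of stability is simply the correlation between the two administrations.

import numpy as np

# Hypothetical scores of the same students on the same test, given two weeks apart.
time1 = np.array([55, 68, 72, 60, 81, 47, 90, 66])
time2 = np.array([58, 65, 75, 62, 79, 50, 88, 70])

# Correlating Time 1 with Time 2 gives the test-retest (stability) coefficient.
test_retest_r = np.corrcoef(time1, time2)[0, 1]
print(f"Test-retest (stability) coefficient: {test_retest_r:.2f}")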
2. Parallel forms reliability is a measure of reliability obtained by administering different versions of an assessment tool (both versions must contain items that probe the same construct, skill, knowledge base, etc.) to the same group of individuals. The scores from the two versions can then be correlated in order to evaluate the consistency of results across alternate versions.
3. Inter-rater reliability is a measure of reliability used to assess the degree to which different judges or raters agree in their assessment decisions.
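A minimal sketch of inter-rater agreement, assuming two hypothetical examiners giving pass/fail decisions on the same ten clinical performances; simple percent agreement and Cohen’s kappa are common ways of summarising it.

# Hypothetical pass (1) / fail (0) decisions by two examiners on the same 10 performances.
rater_a = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
rater_b = [1, 0, 0, 1, 0, 1, 1, 1, 1, 1]
n = len(rater_a)

# Observed agreement: proportion of performances given the same decision by both raters.
p_observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n

# Agreement expected by chance, from how often each rater used each category.
p_pass_a, p_pass_b = sum(rater_a) / n, sum(rater_b) / n
p_chance = p_pass_a * p_pass_b + (1 - p_pass_a) * (1 - p_pass_b)

# Cohen's kappa corrects the observed agreement for chance agreement.
kappa = (p_observed - p_chance) / (1 - p_chance)
print(f"Percent agreement: {p_observed:.0%}, Cohen's kappa: {kappa:.2f}")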
4. Internal consistency reliability is a measure of reliability used to evaluate the degree to which different test items that probe the same construct produce similar results.
A. Average inter-item correlation
B. Split-half reliability
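A minimal sketch of both approaches, assuming a hypothetical item-response matrix (rows are students, columns are items scored 0/1): the average correlation is taken over all item pairs, and the odd/even split-half correlation is stepped up to full test length with the Spearman-Brown formula.

import numpy as np
from itertools import combinations

# Hypothetical item scores: 6 students (rows) x 6 items (columns), each scored 0/1.
items = np.array([
    [1, 1, 1, 0, 1, 1],
    [1, 0, 1, 1, 0, 1],
    [0, 1, 0, 0, 1, 0],
    [1, 1, 1, 1, 1, 1],
    [0, 0, 1, 0, 0, 1],
    [1, 1, 0, 1, 1, 0],
])

# A. Average inter-item correlation over all pairs of items.
pairs = combinations(range(items.shape[1]), 2)
avg_inter_item_r = np.mean([np.corrcoef(items[:, i], items[:, j])[0, 1] for i, j in pairs])

# B. Split-half reliability: correlate odd-item and even-item half scores,
#    then step up to full test length with the Spearman-Brown formula.
odd_half = items[:, 0::2].sum(axis=1)
even_half = items[:, 1::2].sum(axis=1)
r_half = np.corrcoef(odd_half, even_half)[0, 1]
split_half_reliability = 2 * r_half / (1 + r_half)

print(f"Average inter-item correlation: {avg_inter_item_r:.2f}")
print(f"Split-half (Spearman-Brown) reliability: {split_half_reliability:.2f}")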
Measurement errors
 1. Random error
 2. Systematic error
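A small simulated sketch of the distinction: random error scatters observed scores around the true score and tends to average out over repeated measurements, whereas systematic error (here a hypothetical, consistently lenient rater adding a fixed number of marks) shifts every score in the same direction and does not average out.

import numpy as np

rng = np.random.default_rng(seed=1)

true_score = 70.0          # hypothetical "true" ability of one student
n_measurements = 1000

# Random error: unbiased scatter around the true score; its mean tends toward zero.
random_error = rng.normal(loc=0.0, scale=5.0, size=n_measurements)

# Systematic error: a constant bias, e.g. a consistently lenient rater adds 4 marks.
systematic_bias = 4.0

observed = true_score + random_error + systematic_bias
print(f"True score: {true_score}")
print(f"Mean observed score: {observed.mean():.1f} (bias of about {observed.mean() - true_score:+.1f})")
print(f"Mean random error alone: {random_error.mean():+.2f} (close to zero)")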
Distinction between measurement and evaluation
 Measurement
1. Measurement refers to quantity, describing pupils’ attainment in a subject, e.g. how much of an individual’s performance has taken place, i.e. the score in one subject.
2. Not a continuous process.
3. It describes a situation, e.g. 50 out of 100 marks in nursing education.
4. It is only a tool to be used in evaluation; it does not by itself constitute evaluation.
5. Limited scope.
 Evaluation
1. Evaluation is expressed in terms of quality and value judgement, e.g. good, bad, normal.
2. Continuous process.
3. Evaluation judges the work and values it, e.g. as average.
4. Evaluation includes measurement and signifies a wider process of judging students’ progress.
5. Wide scope.
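To make the distinction concrete, a small sketch assuming hypothetical grade boundaries: measurement yields the number (50 out of 100), while evaluation attaches a value judgement to that number.

# Measurement: the quantitative description of performance.
marks_obtained, maximum_marks = 50, 100  # e.g. 50 out of 100 marks in nursing education

# Evaluation: a value judgement attached to the measurement.
# The grade boundaries below are hypothetical assumptions for illustration only.
def evaluate(score, maximum):
    percentage = 100 * score / maximum
    if percentage >= 75:
        return "good"
    if percentage >= 50:
        return "average"
    return "poor"

print(f"Measurement: {marks_obtained}/{maximum_marks}")
print(f"Evaluation: {evaluate(marks_obtained, maximum_marks)}")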