STANDARDIZED AND NON-STANDARDIZED TESTS
INTRODUCTION
A test is the major and most commonly used instrument for the assessment of cognitive behaviors. Usually, the test is based on the learned content of a subject-specific area (or areas) and is directed at measuring the learner’s level of attainment of pre-specified objectives. To measure an attribute, a standard instrument is needed.

Unlike physical attributes, however, cognitive attributes cannot be measured directly; measurement is done by describing the characteristics associated with such constructs in behavioral terms. The expected behaviors (aptitudes), such as the ability to state, define, manipulate, or perform an experiment, for instance in integrated science and similar activities, are put down in the form of a test.

The test score gives quantitative information about the existence of the construct (attribute) possessed by the testee. For this reason, the test items, as a measuring instrument, must be valid, reliable, and usable in order to give dependable results.
OBJECTIVITY OF TEST
 A test is objective when the scorer’s personal judgment does not affect the scoring.
 It eliminates the fixed opinions or judgments of the person who scores it.
USABILITY OF TEST
 The overall ease of use for both the test constructor and the learner.
 It is an important criterion used for
assessing the value of a test.
LIMITATIONS
 Tests are either too short or too lengthy.
 Tests do not cover the entire content.
 Tests are usually hurriedly conducted.
 Supervision is not proper.
Standardized test
 Standardization means uniformity of
procedure in scoring, administering and
interpreting the results.
 Standardized tests are instruments that measure and predict ability/aptitude and achievement. Such tests are:
◦ Normed on an appropriate reference group (e.g., a group of people similar to those that the test will be used with);
◦ Always administered, scored, and interpreted in the same way.
 A standardized test is a test that is
administered and scored in a
consistent, or “standard”, manner.
 Standardized tests are designed in
such a way that the questions,
conditions for administering, scoring
procedures, and interpretations are
consistent and are administered and
scored in a predetermined, standard
manner.
 Assessment devices are instruments used to determine both how well a student has learned the covered material and how well he or she will do in future endeavors.
 Assessment can be accomplished through tests, homework, seatwork, etc. Most formal assessments that are used to assign grades and/or for selection purposes or predictions involve tests.
 A test is a systematic method for measuring students’ behaviors and evaluating these behaviors against standards and norms. Tests can be standardized or teacher-made.
Characteristics
1. Constructed by test experts or
specialists.
2. Covers broad or wide areas of
objectives and content
3. Selection of items is done very carefully, and the validity, reliability, and usefulness of the test are ascertained in a systematic way.
4. Procedure of administration is
standardized.
5. The test has clear directions.
6. A scoring key is provided.
7. The test manual provides norms for the test.
8. The test content is fixed.
9. Specific directions are given for administering and scoring the test.
10. It consists of standard content and procedures.
11. It provides a standardized frame of reference for determining individual performance.
Non-Standardized Tests
Non-standardized tests are those which do not follow the rules of standardized tests; there is no uniformity in how students are evaluated. Different students may be given different questions, or the test items are not standardized.
 For example, the practical examination conducted in nursing is a non-standardized test: each student is evaluated on different questions and has to demonstrate different skills, so it becomes more or less a matter of chance or luck which skill a student is asked to demonstrate in the examination.
 A non-standardized test is one that
allows for an assessment of an
individual's abilities or performances, but
doesn't allow for a fair comparison of one
student to another.
 The test results can be used for
students, teachers, and for other
administrative purposes.
 These tests are very simple to use.
 Easy for the students.
 Teachers can assess the strengths and
weaknesses of students.
 Teachers can understand the need for re-teaching concepts and can decide on remedial instruction.
 Teacher-made tests are prepared by teachers to meet their various needs and directives.
 Tests are not so carefully and scientifically prepared.
 The items of teacher-made tests are seldom analyzed and edited.
USES OF NON-STANDARDIZED TESTS
 To know the ability and achievements of
students
 Helps the teacher to assess the strengths and weaknesses of students
 Provides continuous evaluation and
feedback to the teacher
 Motivates the students
 Helps to achieve particular objectives
 Helps the teacher to adopt better
instructional methods.
RELIABILITY
 Reliability is a characteristic of any test that refers to the accuracy and consistency of the information obtained.
 A well-developed scientific tool should give accurate results both at present and over time.
 Good test reliability means that the test taker will obtain the same test score over repeated testing, as long as no extraneous factors have affected the score.
 A good instrument will produce consistent scores. An instrument’s reliability is estimated using a correlation coefficient.
Types of Reliability
1. Test-retest reliability:
It is a measure of reliability
obtained by administering the same test
twice over a period of time to a group of
individuals. The scores from Time 1 and
Time 2 can then be correlated in order
to evaluate the test for stability over
time.
2. Parallel forms reliability
It is a measure of reliability obtained
by administering different versions of an
assessment tool (both versions must
contain items that probe the same
construct, skill, knowledge base, etc.) to
the same group of individuals. The
scores from the two versions can then be
correlated in order to evaluate the
consistency of results across alternate
versions.
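In practice, both test-retest and parallel-forms reliability come down to correlating two sets of scores from the same group of individuals. A minimal sketch in Python; the score values and array names are purely illustrative and not taken from any real test:

    import numpy as np

    # Hypothetical scores for the same ten students on two occasions
    # (test-retest) or on two parallel forms of the same test.
    scores_set_1 = np.array([62, 75, 80, 55, 90, 68, 72, 85, 58, 77])
    scores_set_2 = np.array([65, 73, 82, 50, 88, 70, 69, 83, 60, 80])

    # Pearson correlation between the two sets of scores; values close to +1
    # indicate high stability (test-retest) or equivalence (parallel forms).
    reliability = np.corrcoef(scores_set_1, scores_set_2)[0, 1]
    print(f"Estimated reliability coefficient: {reliability:.2f}")

The same coefficient is read as a stability coefficient when the two score sets come from repeated administrations and as an equivalence coefficient when they come from parallel forms.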
3. Inter-rater reliability
It is a measure of reliability used to
assess the degree to which different
judges or raters agree in their
assessment decisions. Inter-rater
reliability is useful because human
observers will not necessarily interpret
answers the same way; raters may
disagree as to how well certain
responses or material demonstrate
knowledge of the construct or skill being assessed.
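One common way to quantify inter-rater reliability is to compute the raters’ percent agreement and correct it for chance agreement (Cohen’s kappa). A minimal sketch, assuming two hypothetical raters scoring the same ten answers as acceptable (1) or unacceptable (0); the ratings are illustrative only:

    import numpy as np

    # Hypothetical acceptable/unacceptable decisions by two raters on ten answers.
    rater_1 = np.array([1, 1, 0, 1, 0, 1, 1, 0, 1, 1])
    rater_2 = np.array([1, 0, 0, 1, 0, 1, 1, 1, 1, 1])

    # Observed agreement: proportion of answers on which the raters agree.
    p_observed = np.mean(rater_1 == rater_2)

    # Chance agreement: probability both say 1, plus probability both say 0.
    p1, p2 = rater_1.mean(), rater_2.mean()
    p_chance = p1 * p2 + (1 - p1) * (1 - p2)

    # Cohen's kappa corrects the observed agreement for chance agreement.
    kappa = (p_observed - p_chance) / (1 - p_chance)
    print(f"Percent agreement: {p_observed:.2f}, Cohen's kappa: {kappa:.2f}")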
4. Split-half reliability
Also called internal consistency reliability, it indicates the homogeneity of the test. In this method the test is divided into two equal or nearly equal halves, and the scores on the two halves are correlated. The most common way of splitting is the odd-even method.
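A minimal sketch of the odd-even split, with the half-test correlation adjusted by the standard Spearman-Brown formula to estimate the reliability of the full-length test; the half-test scores below are hypothetical:

    import numpy as np

    # Hypothetical half-test scores for eight students: the sum of the
    # odd-numbered items and the sum of the even-numbered items.
    odd_half = np.array([5, 4, 4, 3, 2, 1, 2, 0])
    even_half = np.array([5, 5, 2, 3, 2, 1, 0, 0])

    # Correlation between the two halves (reliability of half a test).
    r_half = np.corrcoef(odd_half, even_half)[0, 1]

    # Spearman-Brown correction: estimated reliability of the full-length test.
    r_full = 2 * r_half / (1 + r_half)
    print(f"Half-test r = {r_half:.2f}, corrected full-test reliability = {r_full:.2f}")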
VALIDITY
 The accuracy with which a test
measures whatever it is supposed to
measure.
 An evaluation procedure is valid to the extent that it provides an assessment of the degree to which pupils have achieved the specific objectives, content matter, and learning experiences.
 Validity is an important characteristic
of any test. This refers to what the test
really measures. A test is valid, if it
measures what we really wish to
measure.
 It is a more complex concept that broadly concerns the soundness of the study’s evidence, that is, whether the findings are unbiased and well grounded.
Factors affecting validity
 If the reading vocabulary is too difficult, students fail to respond to the test items.
 Difficult sentence structures make items hard to understand.
 Use of inappropriate items.
 Medium of expression: instructions in English are difficult for non-English-medium students.
 Too easy and too difficult test items do not discriminate among pupils.
 Influence of extraneous factors such as grammar, handwriting, legibility, etc.
 Time limitations.
 Inadequate weightage given to sub-topics.
 Unclear directions result in low validity.
TYPES OF VALIDITY
 Content validity: all major aspects of the content area must be adequately covered by the test items, and in correct proportions.
 Predictive validity: the extent to
which a test can predict the future
performance of the students.
 Construct validity: it refers to the
extent to which a test reflects and
seems to measure a hypothesized
trait
 e.g., a reading test does not typically assess a student’s mathematical knowledge.
 Concurrent validity: the relationship between scores on the measuring tool and a criterion measure available at the same time, in the present situation.
 e.g., if you create a new test for depression, you can compare its results with those of an established depression test.
 Face validity: the extent to which, when one looks at the test, it seems logically related to what is being tested.
 e.g., a personality test.
ESSAY TYPE EXAMINATIONS
OR ESSAY TEST
 In an essay test, students construct responses to
items based on their understanding of the content.
 With this type of test item, varied answers may be
possible depending on the concepts selected by
the student for discussion and the way in which
they are presented.
 Essay items provide an opportunity for students to
select content to discuss, present ideas in their
own words, and develop an original and creative
response to a question.
 This freedom of response makes essay items
particularly useful for complex learning outcomes.
 ‘An essay test presents one or more questions or other tasks that require extended written responses from the persons being tested.’
 - Robert L. Ebel and David A. Frisbie
FEATURES
 No single answer can be considered completely correct.
 The examinee is permitted freedom of response.
 The answers vary in their degree of quality or correctness.
Essay items may be written to evaluate a wide
range of learning outcomes.
These include:
 Comparing, such as comparing the side effects of two different
medications
 Outlining steps to take and protocols to follow
 Explaining and summarizing in one’s own words a situation or
statement
 Discussing topics
 Applying concepts and principles to a clinical scenario and
explaining their relevancy to it
 Analyzing patient data and clinical situations through use
of relevant concepts and theories
 Critiquing different interventions and nursing management
 Developing plans and proposals drawing on multiple
sources of information
 Analyzing nursing and health care trends
 Arriving at decisions about issues and actions to take
accompanied by a rationale
 Analyzing ethical issues, possible decisions, and their
consequences, and
 Developing arguments for and against a particular position
or decision.
TYPES
Based on the amount of freedom given to a student to organize his or her ideas and write the answer, essay questions are divided into two types:
 Extended response
 Restricted response
Extended-response
 Extended-response essay items are less restrictive and
as such provide an opportunity for students
◦ to choose concepts for responding,
◦ organize ideas in their own ways,
◦ arrive at judgments about the content, and
◦ demonstrate ability to communicate ideas effectively in
writing.
 With these items, the teacher may evaluate students’
ability
◦ to develop their own ideas and express them creatively,
◦ integrate learning from multiple sources in responding, and
◦ evaluate the ideas of others based on predetermined
criteria.
 Since responses are not restricted by the teacher, extended-response items are more time-consuming and more difficult to score reliably.
Restricted-response
 In a restricted-response item, the teacher limits the
student’s answer by indicating the content to be
presented and frequently the amount of discussion
allowed, for instance, limiting the response to one
paragraph or page.
 With this type of essay item, the way in which the
student responds is structured by the teacher
 e.g., Describe five physiological changes associated with the aging process.
PRINCIPLES OF PREPARING
ESSAY TYPE TEST
 Do not give too many lengthy questions.
 Avoid ambiguous phrases, e.g. ‘Discuss briefly’.
 Questions should be well structured, with a specific purpose and one topic at a time.
 Words should be simple, clear, and carefully selected.
 Do not allow too many choices.
 Items should be selected according to the students’ level, with appropriate difficulty and complexity.
Guidelines for writing essay items
 Develop essay items that require
synthesis of the content.
 Phrase items clearly.
◦ Example :
 Evaluate an article describing a nursing
research study.
 Revised Version: Select an article describing a
nursing research study. Critique the study,
specifying the criteria you used to evaluate it.
 Prepare students for essay tests.
 Tell students about apportioning their time
to allow sufficient time for answering each
essay item.
 Score essay items that deal with the
analysis of issues according to the rationale
that students develop rather than the
position they take on the issue
 Avoid the use of optional items and student
choice of items to answer.
 In the process of developing the item,
write an ideal answer to it.
 If possible, have a colleague review
the item and explain how he or she
would respond to it.
CRITERIA FOR EVALUATING ESSAY
ITEMS
 The criteria for evaluating essay items,
regardless of the method, often
address three areas:
(a) content,
(b) organization, and
(c) process.
 Questions that guide evaluation of
each of these areas are:
Content:
 Is relevant content included?
 Is it accurate?
 Are significant concepts and theories
presented?
 Are hypotheses, conclusions, and
decisions supported?
 Is the answer comprehensive?
Organization:
 Is the answer well organized?
 Are the ideas presented clearly?
 Is there a logical sequence of ideas?
Process:
 Was the process used to arrive at
conclusions, actions, approaches, and
decisions logical?
 Were different possibilities and implications
considered?
 Was a sound rationale developed using
relevant literature and theories?
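These questions can be turned into a simple analytic scoring rubric before any papers are read. A minimal sketch in Python, in which the criterion names, point values, and ratings are all hypothetical:

    # Hypothetical analytic rubric built from the three criteria areas above.
    rubric = {
        "content":      {"max_points": 10, "description": "Relevant, accurate, comprehensive"},
        "organization": {"max_points": 5,  "description": "Clear, logically sequenced ideas"},
        "process":      {"max_points": 5,  "description": "Sound rationale and reasoning"},
    }

    def score_essay(ratings):
        """Sum the points awarded per criterion, capped at each criterion's maximum."""
        return sum(min(points, rubric[criterion]["max_points"])
                   for criterion, points in ratings.items())

    # Example: one student's answer rated against the rubric (15 out of 20).
    print(score_essay({"content": 8, "organization": 4, "process": 3}))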
Suggestions for Scoring
 Identify the method of scoring to be
used prior to the testing situation
 Specify in advance an ideal answer
 If using a scoring rubric, discuss it with
the students ahead of time
 Read a random sample of papers
 Score the answers to one item at a
time.
 Read each answer twice before
scoring
 Read papers in random order.
 Use the same scoring system for all
papers.
 Essay answers and other written
assignments should be read
anonymously.
 Cover the scores of the previous
answers
 For important decisions or if unsure
about the evaluation, have a colleague
read and score the answers to
improve reliability.
 Adopt a policy on writing (sentence structure, spelling, punctuation, grammar, neatness, and writing style in general) and decide whether it will be scored.
ADVANTAGES OF ESSAY
TYPE TEST
 Tests the ability to communicate in writing, depth of
knowledge and understanding.
 The student is free to communicate her/ his ability for
independent thinking.
 The student can demonstrate her/his ability to organize
ideas and express them in a logical and coherent fashion.
 It requires relatively little time for the teacher to prepare and administer the test.
 It can be successfully employed for all the school subjects.
DEVELOPS ABILITIES SUCH AS
 Organizing ideas and expressing them effectively
 Criticizing or justifying a statement
 Interpreting ideas
 Mental processes like logical thinking, critical reasoning, and systematic presentation can be best developed.
 Induces good study habits, such as making outlines and summaries and organizing arguments for and against a position.
DISADVANTAGES
 Lack objectivity
 Provide little useful feedback
 Takes a long time to score
 Scoring can be affected by the mood of the examiner
 Improper comparison of the answers of different students (bright and dull)
 Laborious process for both the corrector and the student.
 Only competent teachers can assess it.
 Limited Ability to Sample Content
 Unreliability in Scoring
 Carryover Effects
 Halo Effect
 Effect of Writing Ability
 Order-of-Scoring Effect
 Time
 Student Choice of Questions
Thank you
