ADVANCED EDUCATIONAL
RESEARCH METHODOLOGY
Unit-IV
Dr. N. Sasikumar
Assistant Professor
Department of Education
Alagappa University
Karaikudi-630003
TOOLS OF RESEARCH
A researcher requires many data-gathering tools or techniques. Tests are tools of measurement that guide the researcher in data collection and in evaluation.
Tools may vary in complexity, interpretation, design and administration. Each tool is suitable for the collection of certain types of information.
The selection of suitable instruments or tools
is of vital importance for successful research.
Different tools are suitable for collecting
various kinds of information for various
purposes.
The research worker may use one or more of
the tools in combination for his purpose.
The systematic way and procedure by which a complex or scientific task is accomplished is known as a technique.
MAJOR TOOLS OF RESEARCH IN EDUCATION
• Inquiry forms
– Questionnaire
– Checklist
– Score-card
– Schedule
– Rating Scale
– Opinionnaire
– Attitude Scale
• Observation
• Interview
• Sociometry
• Psychological Tests
– Achievement Test
– Aptitude Test
– Intelligence Test
– Interest inventory
– Personality measures etc.
QUESTIONNAIRE
“A questionnaire is a systematic compilation of
questions that are submitted to a sampling of
population from which information is desired.”
The questionnaire is a form prepared and distributed to secure responses to certain questions. It is a device for securing answers to questions by means of a form which the respondent fills in himself.
It is a systematic compilation of questions, and an important instrument used to gather information from widely scattered sources.
Characteristics of Good Questionnaire
• It deals with an important or significant topic.
• Its significance is carefully stated on the questionnaire itself or in its covering letter.
• It seeks only the data which cannot be obtained from sources like books, reports and records.
• It is as short as possible, only long enough to get the essential data.
• It is attractive in appearance, neatly arranged and clearly duplicated or printed.
• Directions are clear and complete, and important terms are clarified.
• The questions are objective, with no clues, hints or suggestions.
• Questions are presented in order from simple to complex.
• Double negatives, adverbs and descriptive adjectives are avoided.
• Double-barrelled questions (putting two questions in one) are avoided.
TYPES OF QUESTIONNAIRE
Questionnaires can be of various types on the basis of their preparation:
Structured vs. Non-structured
Closed vs. Open
Fact vs. Opinion
Positive vs. Negative
DESIGNS OF QUESTIONNAIRE
After the questions have been constructed in line with these characteristics, the questionnaire should be designed with some essential features:
Background information about the questionnaire.
Instructions to the respondent.
The allocation of serial numbers and coding boxes.
MERITS & DEMERITS OF QUESTIONNAIRE
• Merits of the Questionnaire Method
• It is very economical.
• It is a time-saving process.
• It allows the research to cover a wide area.
• It is very suitable for special types of responses.
• It is most reliable in special cases.
• Demerits of the Questionnaire Method
• Only limited responses are obtained.
• There is a lack of personal contact.
• There is a greater possibility of wrong answers.
• The chances of receiving incomplete responses are high.
• Sometimes answers may be illegible.
• It may be unsuitable for many problems.
CHECKLIST
• A checklist is a type of informational job aid used to reduce failure by compensating for the potential limits of human memory and attention.
• It helps to ensure consistency and completeness in carrying out a task.
• A basic example is the 'to-do list'.
• The main purpose of a checklist is to call attention to various aspects of an object or situation, to see that nothing of importance is overlooked.
USES OF CHECKLISTS
• To collect facts for educational surveys.
• To record behaviour in observational studies.
• To use in educational appraisal and in studies of school buildings, property, plans, textbooks, instructional procedures, outcomes, etc.
• To rate personality.
• To know the interests of subjects; Kuder's interest inventory and Strong's Interest Blank are essentially checklists.
OPINIONNAIRE
• “Opinion polling or opinion gauging represents a single-question approach. The answers are usually in the form of ‘yes’ or ‘no’. An undecided category is often included. Sometimes a larger number of response alternatives is provided.”
• Opinionnaires are usually used in descriptive research, which demands a survey of the opinions of the individuals concerned.
• Public opinion research is an example of an opinion survey. Opinion polling enables the researcher to forecast coming events with reasonable success.
CHARACTERISTICS OF OPINIONNAIRE
• The opinionnaire makes use of statements or questions on different aspects of the problem under investigation.
• Responses are expected on either three-point or five-point scales.
• It uses favourable and unfavourable statements.
• It may be sub-divided into sections.
• Gallup poll ballots generally make use of questions instead of statements.
• Public opinion polls generally rely on personal contacts rather than mail ballots.
RATING SCALE
• The rating scale is one of the enquiry forms. 'Rating' is a term applied to an expression of opinion or judgment regarding some situation, object or character. Opinions are usually expressed on a scale of values.
• Rating techniques are devices by which such judgments may be quantified. The rating scale is a very useful device for assessing quality, especially when quality is difficult to measure objectively.
• Rating scales record judgments or opinions and indicate their degree or amount; the different degrees of quality are arranged along a line, which is the scale.
• For example: How good was the performance?
Excellent Very good Good Average Below average Poor Very poor
___|________|________|_______|__________|__________|__________|____
• Ratings can be obtained through one of three major
approaches:
– Paired comparison
– Ranking and
– Rating scales
PURPOSE OF RATING SCALE
• Rating scales have been successfully utilized
for measuring the following:
• Teacher Performance/Effectiveness
• Personality, anxiety, stress, emotional
intelligence etc.
• School appraisal including appraisal of
courses, practices and programmes.
• A rating scale involves three factors:
– The subjects or the phenomena to be rated.
– The continuum along which they will be rated and
– The judges who will do the rating.
USE OF RATING SCALE
• Rating scales are used for testing the validity of many objective instruments, such as paper-and-pencil inventories of personality.
• Helpful in writing reports to parents.
• Helpful in filling out admission blanks for colleges.
• Helpful in finding out student needs.
• Helpful in making recommendations to employers.
• They supplement other sources of understanding about the child.
• They have a stimulating effect upon the individuals who are rated.
ATTITUDE SCALE
• The attitude scale is a form of appraisal procedure and is also one of the enquiry forms. Attitude scales are designed to measure the attitude of a subject or group of subjects towards issues, institutions and groups of people.
• The term attitude is defined in various ways: “The behaviour which we define as attitudinal or attitude is a certain observable set of the organism, or relative tendency, preparatory to and indicative of more complete adjustment.” - L. L. Bernard
• “An attitude may be defined as a learned emotional response set for or against something.” - Barr, Davis and Johnson
ATTITUDE SCALE
• An attitude is spoken of as a tendency of an individual to react in a certain way towards a phenomenon.
• It is what a person feels or believes in.
• It is the inner feeling of an individual.
• It may be positive, negative or neutral.
Purpose of Attitude Scale
• In educational research, these scales are used
especially for finding the attitudes of persons
on different issues like:
• Co-education
• Religious education
• Corporal punishment
• Democracy in schools
• Linguistic prejudices
• International co-operation etc.
Characteristics of Attitude Scale
• It provides a quantitative measure on a unidimensional scale or continuum.
• It uses statements ranging from the extreme positive to the extreme negative position.
• It generally uses a five-point scale, as discussed under rating scales.
• It can be standardized and norms can be worked out.
• It disguises the attitude object rather than asking directly about the attitude towards the subject.
Thurstone Technique of Scaled Values
• A Thurstone scale has a number of “agree” or “disagree” statements.
• It is a unidimensional scale for measuring attitudes towards people.
• Developing the scale is time consuming and relatively complex compared to other scales.
• The Thurstone technique is also known as the technique of equal-appearing intervals.
• When people speak of “the Thurstone scale”, they are usually talking about the method of equal-appearing intervals.
• It is called “equal-appearing intervals” because, when you choose the items for your scale, you pick items that are equally spaced apart along the attitude continuum (a computational sketch of scale values follows this list).
The other two variations are:
• The method of successive intervals: this method is more challenging to implement than equal-appearing intervals.
• The method of paired comparisons: requires twice as many judgments as the equal-appearing intervals method and can quickly become very time consuming.
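As a rough sketch of how scale values are obtained under equal-appearing intervals, the Python fragment below takes each statement's scale value as the median of the judges' interval placements and its ambiguity as the interquartile range. The statements, the panel of six judges and the 11-interval convention are illustrative assumptions, not data from these slides.

# Illustrative sketch (Python): Thurstone scale values by equal-appearing intervals.
# Statements and judge placements are made-up; judges are assumed to have sorted
# each statement into one of 11 intervals (1 = least favourable, 11 = most favourable).
import statistics

judge_placements = {
    "Co-education improves social skills":  [9, 10, 9, 8, 10, 9],
    "Co-education makes no difference":     [6, 5, 6, 7, 6, 5],
    "Co-education should be discouraged":   [1, 2, 1, 2, 1, 3],
}

for statement, ratings in judge_placements.items():
    scale_value = statistics.median(ratings)        # the item's position on the continuum
    q1, _, q3 = statistics.quantiles(ratings, n=4)  # quartiles of the judges' placements
    ambiguity = q3 - q1                             # interquartile range; smaller = clearer item
    print(f"{statement}: scale value = {scale_value}, ambiguity = {ambiguity:.1f}")

Items with low ambiguity whose scale values fall at roughly equal intervals along the continuum would then be retained; a respondent's attitude score is usually taken as the median (or mean) scale value of the statements he endorses.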
LIKERT SCALE
• The Likert scale uses items worded for or against the proposition, with a five-point rating response indicating the strength of the respondent’s approval or disapproval of the statement.
• A Likert scale is a type of rating scale used to measure attitudes or opinions. With this scale, respondents are asked to rate items on a level of agreement, for example:
• Strongly agree
• Agree
• Neutral
• Disagree
• Strongly disagree
• Five to seven items are usually used in the scale. The scale does not have to state “agree” or “disagree”; dozens of variations are possible on themes like agreement, frequency, quality and importance (a scoring sketch follows these examples). For example:
• Agreement: Strongly agree to strongly disagree.
• Frequency: Often to never.
• Quality: Very good to very bad.
• Likelihood: Definitely to never.
• Importance: Very important to unimportant.
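A minimal scoring sketch for a five-point agreement scale follows. The item names, the single respondent's answers and the choice of which item is reverse-scored are hypothetical; the slides describe only the five response categories.

# Minimal sketch (Python): scoring one respondent on a five-point Likert scale.
# Item names, answers and the reverse-scored item are hypothetical.

AGREEMENT = {"Strongly agree": 5, "Agree": 4, "Neutral": 3,
             "Disagree": 2, "Strongly disagree": 1}

REVERSE_SCORED = {"item_3"}          # items worded against the proposition

responses = {                        # one respondent's answers (made-up data)
    "item_1": "Agree",
    "item_2": "Strongly agree",
    "item_3": "Disagree",            # negatively worded, so reverse-scored
}

def item_score(item, answer):
    raw = AGREEMENT[answer]
    return 6 - raw if item in REVERSE_SCORED else raw   # 5 -> 1, 4 -> 2, ...

total = sum(item_score(item, answer) for item, answer in responses.items())
print(f"Attitude score: {total} (possible range {len(responses)} to {len(responses) * 5})")

Summing the item scores, with negatively worded items reversed, yields one attitude score whose direction is consistent across items.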
SCHEDULE
When a researcher uses a set of questions for interview purposes, it is known as a schedule.
“Schedule is the name usually applied to a set of questions which are asked and filled in by an interviewer in a face-to-face situation with another.” - W. J. Goode & P. K. Hatt
A schedule is a list of questions formulated and presented with the specific purpose of testing an assumption or hypothesis. In the schedule method, the interview occupies a central place and plays a vital role.
Important Features of Schedule
• The schedule is presented by the interviewer, who asks the questions and notes down the answers.
• The list of questions is a mere formal document; it need not be attractive.
• The schedule can be used in a very narrow sphere of social research.
• It helps to delimit the scope of the study and to concentrate on the circumscribed elements essential to the analysis.
• It aims at delimiting the subject.
• In the schedule the list of questions is pre-planned and noted down formally, and the interviewer is always armed with the formal document detailing the questions.
OBSERVATION TECHNIQUE
• This is the most commonly used technique in evaluation research.
• It is used for evaluating both cognitive and non-cognitive aspects of a person.
• It is used in evaluating performance, interests, attitudes and values towards life problems and situations.
• It is the most useful technique for evaluating the behaviour of children.
• “It is a thorough study based on visual observation. Under this technique, group behaviours and the problems of social institutions are evaluated.” - C. Y. Younge
• “Observation employs relatively more the visual senses than the audio and vocal organs.” - C. A. Mourse
• The study of cause-effect relationships and of events in their original form is known as observation.
Purpose of observation
• To collect data directly.
• To collect a substantial amount of data in a short time span.
• To get eyewitness, first-hand data in real-life situations.
• To collect data in a natural setting.
Characteristics of observation
• Observation is systematic.
• It is specific.
• It is objective.
• It is quantitative.
• The record of observation should be made
immediately.
• Expert observer should observe the situation.
• Its results can be checked and verified.
Types of Observation
• Structured and Unstructured
• Participant and Non-participant
Steps of Effective Observation:
As a research tool, effective observation needs effective:
• Planning
• Execution
• Recording and
• Interpretation
Interview
• Interview is a two way method which permits
an exchange of ideas and information.
• “Interviewing is fundamentally a process of
social interaction.” -W. J. Goode & P.K. Hatt
• “The interview constitutes a social situation between two persons; the psychological process involved requires both individuals to respond to each other, though the social research purpose of the interview calls for a varied response from the two parties concerned.” - Vivien Palmer
Importance of Interview
• Emotions, experiences and feelings.
• Sensitive issues.
• Privileged information.
• It is appropriate when dealing with young children, illiterates, and persons with language difficulties or limited intelligence.
• It supplies the detail and depth needed to ensure that the questionnaire asks valid questions, at the questionnaire-preparation stage.
• It can be a follow-up to a questionnaire and complement it.
• It can be combined with other tools in order to corroborate facts using a different approach.
• It is one of the normative survey methods, but it is also applied in historical, experimental and case studies.
Types of Interview
• Structured Interview
• Semi-Structured Interview
• Unstructured Interview
• Single Interview
• Group Interview
• Focus Group Interview
INVENTORY
• An inventory is a list, record or catalogue containing traits, preferences, attitudes, interests or abilities, used to evaluate personal characteristics or skills.
• The purpose of an inventory is to make a list relating to a specific trait, activity or programme and to check to what extent that trait or ability is present. Common types of inventories are:
– Interest Inventory and
– Personality Inventory
• Persons differ in their interests, likes and dislikes. Interests are a significant element in the personality pattern of individuals and play an important role in their educational and professional careers.
• The tools used for describing and measuring the interests of individuals are interest inventories or interest blanks. They are self-report instruments in which individuals note their own likes and dislikes.
• They are of the nature of standardized interviews in
which the subject gives an introspective report of his
feelings about certain situations and phenomena
which is then interpreted in terms of interest.
• The use of interest inventories is most frequent in the
areas of educational and vocational guidance and case
studies.
TOOLS AND TECHNIQUES OF RESEARCH
• Steps of Preparing a Research Tool
• Types of Validity
• Factors affecting validity
• Reliability
• The methods of Estimating Reliability
• Item Analysis
• Steps involved in Item Analysis
• The first step in preparing a research tool is to develop a pool of items.
• Item analysis then involves:
– computing the difficulty index and the discrimination index of each item, and
– establishing the validity of the tool (a sketch of the two indices follows this list).
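The sketch below illustrates one conventional way of computing both indices for dichotomously scored items: the difficulty index as the proportion of examinees answering an item correctly, and the discrimination index as the difference between the proportions correct in the upper and lower groups (a 27 per cent split is a common convention). The response matrix is made-up data, not from these slides.

# Illustrative item analysis (Python) for items scored 1 = correct, 0 = wrong.
# The response matrix is made-up; the 27% upper/lower split is a common convention.

responses = [       # one row per examinee, one column per item
    [1, 1, 1, 0],
    [1, 1, 0, 1],
    [1, 0, 1, 0],
    [0, 1, 0, 0],
    [1, 0, 0, 0],
    [0, 0, 0, 0],
]

n_items = len(responses[0])

# Difficulty index p: proportion of all examinees answering the item correctly.
difficulty = [sum(row[i] for row in responses) / len(responses) for i in range(n_items)]

# Discrimination index D: p(upper group) - p(lower group), where the groups are
# roughly the top and bottom 27% of examinees ranked by total score.
ranked = sorted(responses, key=sum, reverse=True)
k = max(1, round(0.27 * len(responses)))
upper, lower = ranked[:k], ranked[-k:]
discrimination = [sum(row[i] for row in upper) / k - sum(row[i] for row in lower) / k
                  for i in range(n_items)]

for i in range(n_items):
    print(f"Item {i + 1}: difficulty = {difficulty[i]:.2f}, discrimination = {discrimination[i]:.2f}")

Items that nearly everyone answers correctly or incorrectly, or that barely separate high scorers from low scorers, would normally be revised or dropped before the tool is finalised.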
Validity
• Validity is the most important consideration in the selection and
use of any testing procedures.
• The validity of a test, or of any measuring instrument, depends
upon the degree of exactness with which something is
reproduced/copied or with which it measures what it purports to
measure.
• The validity of a test may be defined as “the accuracy with which a test measures what it attempts to measure.”
• It is also defined as “the efficiency with which a test measures what it attempts to measure”.
• Lindquist defined validity as “the accuracy with which it measures that which it is intended to measure, or the degree to which it approaches infallibility in measuring what it purports to measure”.
• On the basis of the preceding definitions, it can be seen that:
• Validity is a matter of degree. It may be high, moderate or low.
• Validity is specific rather than general. A test may be valid for one specific purpose but not for another, or valid for one specific group of students but not for another.
TYPES OF VALIDITY
• Content Validity
“Content validity involves essentially the systematic examination of the test content to determine whether it covers a representative sample of the behaviour domain to be measured.”
Criterion-related Validity
• This is also known as empirical validity.
• There are two forms of criterion-related validity.
• Predictive Validity
– It refers to how well the scores obtained on the tool
predict future criterion behavior.
• Concurrent Validity
– It refers to how well the scores obtained on the tool are correlated with present criterion behaviour (a correlation sketch follows).
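Both forms are usually reported as a correlation coefficient between tool scores and criterion scores; only the timing of the criterion measure differs. The sketch below uses made-up scores and assumes Python 3.10 or later for statistics.correlation.

# Sketch (Python 3.10+): criterion-related validity as the correlation between
# tool scores and a criterion measure. All scores are made-up.
from statistics import correlation

tool_scores      = [42, 55, 61, 48, 70, 66, 50, 58]   # e.g. scores on the new tool
criterion_scores = [50, 60, 68, 52, 75, 70, 55, 62]   # e.g. an accepted criterion measure

# Predictive validity: the criterion is gathered later (e.g. future achievement).
# Concurrent validity: the criterion is gathered at about the same time.
validity_coefficient = correlation(tool_scores, criterion_scores)
print(f"Validity coefficient r = {validity_coefficient:.2f}")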
Construct Validity
• It is the extent to which the tool measures a
theoretical construct or trait or psychological
variable.
• It refers to how well our tool seems to
measure a hypothesized trait.
FACTORS AFFECTING VALIDITY
• Unclear Directions
• Vocabulary
• Difficult Sentence Construction
• Poorly Constructed Test Items
• Use of Inappropriate Items
• Difficulty Level of Items
• Influence of Extraneous Factors
• Inappropriate Time Limit
• Inappropriate Coverage
• Inadequate Weightage
• Halo Effect (an impression of one aspect of the concept colouring judgment of the rest)
RELIABILITY
A test score is called reliable when we have reasons for believing the score to be stable and trustworthy.
“The degree of consistency with which the test measures what it does measure.”
“Reliability means the consistency of scores obtained by the same individual when re-examined with the test on different sets of equivalent items or under other variable examining conditions.”
A psychological or educational measurement is indirect; it is made with less precise instruments and concerns traits that are not always stable. There are many reasons why a pupil’s test score may vary:
• Trait Instability: the characteristics we measure may change over a period of time.
• Administrative Error: any change in directions, timing, or the amount of rapport with the test administrator may cause score variability.
• Scoring Error: inaccuracies in scoring a test paper will affect the scores.
• Sampling Error: the particular questions we ask in order to infer a person’s knowledge may affect his score.
• Other Factors: health, motivation, degree of fatigue of the pupil, and good or bad luck in guessing may cause score variability.
THE METHODS OF ESTIMATING RELIABILITY
Four procedures are in common use for computing the reliability coefficient of a test:
– The Test-Retest Method.
– The Alternate or Parallel Forms Method.
– Internal Consistency Reliability (one common estimate is sketched after this list).
– Inter-rater Reliability.
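The slides do not name a particular internal-consistency estimate; one widely used choice is Cronbach's alpha, alpha = (k / (k - 1)) * (1 - sum of item variances / variance of total scores). The sketch below applies this formula to a made-up score matrix.

# Sketch (Python): Cronbach's alpha as one internal-consistency estimate.
# The score matrix (rows = examinees, columns = items) is made-up data.
from statistics import pvariance

scores = [
    [4, 5, 4, 4],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 3, 2, 3],
    [4, 4, 3, 4],
]

k = len(scores[0])                                     # number of items
item_variances = [pvariance([row[i] for row in scores]) for i in range(k)]
total_variance = pvariance([sum(row) for row in scores])

alpha = (k / (k - 1)) * (1 - sum(item_variances) / total_variance)
print(f"Cronbach's alpha = {alpha:.2f}")

Values closer to 1 indicate that the items hang together and measure the same underlying trait consistently.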
Test-Retest (Repetition) Method
(Coefficient of Stability)
• In the test-retest method a single form of a test is administered twice to the same sample, with a reasonable time gap between administrations. Thus two sets of scores are obtained, and the correlation between them gives the coefficient of stability (see the sketch below).
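A minimal sketch of that correlation, computed by hand as a Pearson coefficient over made-up score lists, is given below.

# Sketch (Python): test-retest reliability (coefficient of stability) as the
# Pearson correlation between the two administrations. Scores are made-up.
from math import sqrt

test_scores   = [65, 72, 58, 80, 69, 75, 61, 70]   # first administration
retest_scores = [68, 70, 60, 82, 66, 77, 63, 72]   # second administration, after a gap

n = len(test_scores)
mean_x = sum(test_scores) / n
mean_y = sum(retest_scores) / n

covariance = sum((x - mean_x) * (y - mean_y) for x, y in zip(test_scores, retest_scores))
var_x = sum((x - mean_x) ** 2 for x in test_scores)
var_y = sum((y - mean_y) ** 2 for y in retest_scores)

stability = covariance / sqrt(var_x * var_y)
print(f"Coefficient of stability r = {stability:.2f}")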
