FOREST RESEARCH AND METHODOLOGY
BSH 365
Prabin Pandit
Assistant lecturer
(PUCEF)
foresterpandit@gmail.com
Unit 6: Questionnaire development
• Drafting measurement questions/checklists/schedules
• Assembling, pre-testing and revising instruments
• Test of validity and reliability of instruments
• Finalizing and formatting the instruments
Research instruments
• A Research Instrument is a tool used to collect, measure, and analyze data
related to your research interests.
• A research instrument can include interviews, tests, surveys, checklists, etc.
• The research instrument is usually determined by the researcher and is tied to the
study methodology.
Characteristics of a Good Research Instrument
• Valid and reliable
• Based on a conceptual framework
• Must gather data suitable for and relevant to the research topic
• Able to test hypotheses and/or answer the proposed research questions under
investigation
• Free of bias and appropriate for the context, culture, and diversity of the study
site
• Contains clear and definite instructions for using the instrument
Questionnaire design process
Drafting measurement questions/checklists/schedules
Open-ended questions: questions that allow respondents to answer in an open text format
Example:
• Do you think community forestry helps to improve the livelihoods of forest-dependent
people?
• What do you think about the existing revenue sharing of the CFUG?
• What do you feel about the restructuring of the forestry organization in federal Nepal?
• What are your suggestions for building more collaboration between the CFUG and local
government?
Drafting measurement questions/checklists/schedules
Closed-ended questions: questions that can only be answered by selecting from a
limited number of options, usually multiple-choice questions with a single-word answer,
'yes' or 'no', or a rating scale (e.g. from strongly agree to strongly disagree).
Example:
• What is the area of your community forest?
• Are you a present or previous member of the CFUG?
• From your perspective, which level of government should monitor the CFUG?
a. Federal level b. Provincial level c. Local level
Drafting measurement questions/checklists/schedules
Dichotomous question: Belonging to the closed-ended family of questions, dichotomous
questions offer only two possible answers, which are typically presented to survey takers
in formats such as Yes or No, True or False, Agree or Disagree, and Fair or Unfair.
Example:
• Are local people (CFUG members) involved in the collection of Satuwa?
a. Yes b. No
• Do you think there should be any change in the benefit-sharing mechanism within the CFUG?
a. Yes b. No
Drafting measurement questions/checklists/schedules
Rating scale/Likert scale question:
Example: What do you think of the statement that the CFUG is supportive of strengthening the
local governance system?
a. Strongly agree b. Agree c. Neutral d. Disagree e. Strongly disagree
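Before analysis, Likert responses are usually converted to numeric codes and summarised. A minimal Python sketch, using hypothetical responses to the statement above (the data are illustrative assumptions, not from the source), is:

# Code Likert options numerically and summarise one item
from collections import Counter

scale = {"Strongly agree": 5, "Agree": 4, "Neutral": 3,
         "Disagree": 2, "Strongly disagree": 1}

responses = ["Agree", "Strongly agree", "Neutral", "Agree", "Disagree"]  # hypothetical

scores = [scale[r] for r in responses]      # numeric code for each respondent
print(Counter(responses))                   # frequency of each option
print(sum(scores) / len(scores))            # mean score for the item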
Assembling, pre-testing and revising instruments
• The practice of pretesting is widely regarded as an effective technique for improving the
validity of qualitative data.
• Pretesting involves simulating the formal data collection process on a small scale to identify
practical problems with regard to data collection instruments, sessions, and methodology.
• Pretesting helps detect errors and word ambiguity, as well as discover possible flaws in
survey measurement variables.
• Pretesting can also provide advance warning about how or why a main research project can
fail.
Assembling, pre-testing and revising instruments
• A typical pretest in qualitative research involves administering the interview to a group of
individuals who have characteristics similar to the target study population, in a manner that
replicates how the data collection session will be administered.
• Pretesting provides an opportunity to revise study materials and data collection
procedures to ensure that appropriate questions are being asked.
• It is vital that pretests be conducted systematically and include practice for all personnel
who will be engaged in data collection procedures for the eventual main study.
Reliability of instruments
• In research, the term reliability means "repeatability" or "consistency".
• It is the degree to which an assessment tool produces stable and consistent results.
• Reliability is the degree of consistency with which the attributes or variables are measured
by an instrument.
• A test is considered reliable if the researcher repeatedly gets the same reading at different
time intervals, e.g. the diameter measurement of a tree.
• Reliability is the degree of consistency with which an instrument measures the attribute it
is designed to measure.
• Reliability is defined as the ability of an instrument to produce reproducible results.
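One common numeric check of a questionnaire's internal consistency is Cronbach's alpha. A minimal Python sketch with hypothetical Likert scores (the item values and respondent counts are illustrative assumptions, not data from the source):

# Cronbach's alpha: internal-consistency reliability of a set of items
import numpy as np

# rows = respondents, columns = questionnaire items scored 1-5 (hypothetical)
scores = np.array([
    [4, 5, 4, 4],
    [3, 3, 2, 3],
    [5, 5, 4, 5],
    [2, 3, 3, 2],
    [4, 4, 5, 4],
])

k = scores.shape[1]                          # number of items
item_var = scores.var(axis=0, ddof=1)        # variance of each item
total_var = scores.sum(axis=1).var(ddof=1)   # variance of respondents' total scores

alpha = (k / (k - 1)) * (1 - item_var.sum() / total_var)
print(f"Cronbach's alpha = {alpha:.2f}")     # values close to 1 indicate high consistency

Values above roughly 0.7 are usually taken as acceptable reliability for a multi-item scale.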
Validity of instruments
• Validity refers to whether an instrument or test actually tests what it is supposed to be testing.
- Treece and Treece
• Validity refers to the degree to which an instrument measures what it is supposed to measure.
- Polit and Hungler
• Validity is the appropriateness, meaningfulness and usefulness of the inferences made
from the scores of the instrument.
- American Psychological Association
Types of Validity
FACE VALIDITY: Face validity involves an overall look at an
instrument to judge its appropriateness for measuring a particular
attribute or phenomenon.
CONTENT VALIDITY: Content validity is concerned with the
scope of coverage of the content area to be measured.
CRITERION VALIDITY: Criterion validity is the relationship
between the measurements of the instrument and some other
external criterion (a numeric check is sketched below).
CONSTRUCT VALIDITY: Construct validity refers to the extent to
which one instrument correlates with other instruments measuring a
similar construct.
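Criterion validity is often examined by correlating instrument scores with an independent, external measure. A minimal Python sketch with hypothetical values (the variables and numbers are illustrative assumptions, not from the source):

# Criterion validity: correlate instrument scores with an external criterion
import numpy as np

instrument_score = np.array([12, 18, 9, 22, 15, 20])      # hypothetical questionnaire totals
external_criterion = np.array([30, 45, 20, 55, 38, 50])   # hypothetical independently recorded values

r = np.corrcoef(instrument_score, external_criterion)[0, 1]
print(f"Criterion validity (Pearson r) = {r:.2f}")         # stronger correlation = stronger evidence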
Reliability & Validity
Thank You
