INSTRUMENT OF THE STUDY
FREDAMOR C. BERMIL
M.A. GEN. SCIENCE
Discussant
Instrument of the Study
What is a research instrument?
Research instruments are measurement tools (e.g., a questionnaire or a scale) designed to obtain
data on a topic of interest from research subjects.
Data Collection Techniques
Observation
Making direct observations is a simple way of collecting data. Gathering firsthand information in the
field gives the observer a holistic perspective that helps them understand the context in which the item
being studied operates or exists. Observations are recorded in field notes, or on a mobile device if the
observer is collecting data electronically, e.g., a building inspection, safety checklist, or agricultural survey.
It is an effective method because it is straightforward and efficient. It doesn't typically require
extensive training on the part of the data collector.
Survey / Questionnaire
Surveys are a popular means of data collection because they are inexpensive and can provide a broad
perspective. They can be conducted face-to-face, by mail, telephone, or internet. Surveys are often used
when information is sought from a large number of people or on a wide range of topics. They can contain
yes-or-no, true-or-false, multiple-choice, scaled, or open-ended questions.
Interviews
These can be conducted in person or by phone, and can be structured (using survey forms) or
unstructured. Questions should be focused, clear, and encourage open-ended responses. Example: a
one-on-one conversation with the parent of an at-risk youth about dropping a class.
Focus Group
A focus group is simply a group interview of people who all have something in common. It provides the
same type of data as in-person interviews. Focus groups are useful when examining cultural values.
What is documentary analysis?
Documentary analysis is a form of qualitative research in which documents are interpreted by the
researcher to give voice and meaning around an assessment topic (Bowen, 2009). A rubric can be used to
grade or score a document.
3 Primary Types of Documents
Public Records – official, ongoing records of an organization and its activities. Examples: student
transcripts, mission statements, annual reports, policy manuals, strategic plans, and syllabi.
Personal Documents – first-person accounts of an individual's actions, experiences, and beliefs.
Examples include calendars, e-mails, scrapbooks, blogs, Facebook posts, duty logs, incident reports,
reflections, journals, and newspapers.
Physical Evidence – objects found within the study setting (often called artifacts). Examples are
flyers, posters, agendas, handbooks, and training materials.
What is an interview schedule?
This is a set of prepared questions designed to be asked exactly as worded. Interview schedules
have a standardised format which means the same questions are asked to each interviewee in the same
order.
Types of Interview
According to the research literature (Burnard, Gill, Stewart, Treasure and Chadwick, 2008; Morse and
Corbin, 2003), there are three fundamental types of research interviews: structured, semi-structured,
and unstructured.
1. Structured Interview – typically formal and organized; it may include several interviewers,
commonly referred to as a panel interview.
2. Semi-Structured Interview – a meeting in which the interviewer does not strictly follow a
formalized list of questions. The interviewer asks more open-ended questions, allowing for a
discussion with the interviewee rather than a straightforward question-and-answer format.
3. Unstructured Interview – an interview in which there is no specific set of predetermined
questions, although the interviewer usually has a certain topic in mind to cover during the
interview. Unstructured interviews flow like an everyday conversation and tend to be more informal
and open-ended.
What is validity?
Research validity in surveys relates to the extent to which the survey measures the elements
that need to be measured. In simple terms, validity refers to how well an instrument measures what it
is intended to measure.
Types of Validity
1. Face Validity is the most basic type of validity and is associated with the highest level of
subjectivity, because it is not based on any scientific approach. In other words, a test may be
declared valid by a researcher simply because it seems valid, without an in-depth scientific
justification.
Example: a questionnaire designed for a study that analyses issues of employee performance can be
assessed as valid because each individual question may seem to address specific and relevant
aspects of employee performance.
2. Construct Validity relates to the assessment of the suitability of a measurement tool for measuring the
phenomenon being studied. Applying construct validity can be effectively facilitated by involving a
panel of experts closely familiar with both the measure and the phenomenon.
Example: with the application of construct validity, the level of leadership competency in a given
organisation can be assessed by devising a questionnaire to be answered by operational-level
employees, asking about the level of their motivation to do their duties on a daily basis.
3. Criterion-Related Validity involves comparing test results with an outcome; this specific type
of validity correlates the results of one assessment with another criterion of assessment. Example: the
nature of customer perceptions of a specific company's brand image can be assessed by organising a focus
group. The same issue can also be assessed through a questionnaire answered by current and potential
customers of the brand. The higher the correlation between the focus-group and questionnaire findings,
the higher the level of criterion-related validity.
4. Formative Validity refers to the assessment of the effectiveness of the measure in terms of providing
information that can be used to improve specific aspects of the phenomenon.
Example: when developing initiatives to increase the effectiveness of organisational culture, if the
measure is able to identify specific weaknesses of that culture, such as employee-manager
communication barriers, then the level of formative validity of the measure can be assessed as adequate.
5. Sampling Validity (similar to content validity) ensures that the measure covers a broad area
within the research topic. No measure is able to cover all items and elements within the
phenomenon; therefore, important items and elements are selected using a specific sampling
method, depending on the aims and objectives of the study.
Example: when assessing the leadership style exercised in a specific organisation, an assessment of
decision-making style alone would not suffice; other issues related to leadership style, such as
organisational culture, the personality of leaders, and the nature of the industry, need to be taken
into account as well.
6. Content Validity refers to how accurately an assessment or measurement tool taps into the various
aspects of the specific construct in question. In other words, do the questions really assess the construct in
question, or are the responses by the person answering the questions influenced by other factors?
Related types: face validity describes the degree to which an assessment measures what it appears to
measure; concurrent validity measures how well the results of one assessment correlate with other
assessments designed to measure the same thing; and predictive validity measures how well the assessment
results can predict a relationship between the construct being measured and future behavior.
What is reliability?
Reliability refers to whether or not you get the same answer when using an instrument to measure
something more than once. In simple terms, research reliability is the degree to which a research method
produces stable and consistent results.
Types of Reliability
1. Test-retest reliability relates to a measure of reliability obtained by conducting the
same test more than once over a period of time with the participation of the same sample
group.
Example: employees of ABC Company may be asked to complete the same questionnaire
about employee job satisfaction twice, with an interval of one week, so that the test results can be
compared to assess the stability of the scores.
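The comparison between the two administrations is usually summarised as a correlation coefficient: scores close to each other across both weeks give a coefficient near 1, indicating a stable instrument. A minimal sketch in Python, using made-up satisfaction scores for five hypothetical ABC Company employees (the data and cut-offs are illustrative, not from the source):

```python
# Test-retest reliability as a Pearson correlation between two
# administrations of the same questionnaire (illustrative data).

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    var_x = sum((a - mean_x) ** 2 for a in x)
    var_y = sum((b - mean_y) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

# Same five employees, same job-satisfaction questionnaire, one week apart.
week_1 = [4.0, 3.5, 5.0, 2.5, 4.5]
week_2 = [4.2, 3.4, 4.8, 2.7, 4.4]

r = pearson_r(week_1, week_2)
print(round(r, 3))  # close to 1.0, suggesting stable scores
```

A commonly cited rule of thumb is that test-retest correlations above roughly 0.7 indicate acceptable stability, though the threshold depends on the field.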
2. Parallel forms reliability relates to a measure obtained by assessing the same
phenomenon with the participation of the same sample group via more than one
assessment method.
Example: the level of employee satisfaction at ABC Company may be assessed with
questionnaires, in-depth interviews, and focus groups, and the results can be compared.
3. Inter-rater reliability, as the name indicates, relates to the measure of agreement between
sets of results obtained by different assessors using the same method. The benefit and
importance of assessing inter-rater reliability lie in the subjectivity of assessments.
Example: the level of employee motivation at ABC Company can be assessed using the
observation method by two different assessors, and inter-rater reliability relates to the extent of
the difference between the two assessments.
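The extent of agreement between two assessors can be quantified. A minimal sketch, assuming two hypothetical observers each rated the same ten employees' motivation as "low", "mid", or "high" (the ratings are invented for illustration): it computes simple percent agreement, and Cohen's kappa, which corrects that agreement for the level expected by chance.

```python
# Inter-rater reliability: percent agreement and Cohen's kappa
# between two observers rating the same subjects (illustrative data).
from collections import Counter

def percent_agreement(r1, r2):
    """Fraction of subjects on which the two raters gave the same rating."""
    return sum(a == b for a, b in zip(r1, r2)) / len(r1)

def cohens_kappa(r1, r2):
    """Agreement corrected for chance: (observed - expected) / (1 - expected)."""
    n = len(r1)
    observed = percent_agreement(r1, r2)
    c1, c2 = Counter(r1), Counter(r2)
    # Chance agreement: product of each rater's marginal rates per category.
    expected = sum(c1[cat] * c2[cat] for cat in set(r1) | set(r2)) / n ** 2
    return (observed - expected) / (1 - expected)

rater_a = ["high", "mid", "mid", "low", "high", "high", "mid", "low", "mid", "high"]
rater_b = ["high", "mid", "low", "low", "high", "mid", "mid", "low", "mid", "high"]

print(percent_agreement(rater_a, rater_b))       # 0.8
print(round(cohens_kappa(rater_a, rater_b), 3))  # lower than 0.8, since some
                                                 # agreement is expected by chance
```

Kappa is the more defensible index here, precisely because raw agreement overstates reliability when raters favour the same categories by chance.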
4. Internal consistency reliability is applied to assess the extent to which test items
that explore the same construct produce similar results.
It can be represented in two main formats:
a) average inter-item correlation, a specific form of internal consistency obtained by
correlating every pair of items that measure the same construct and averaging the
correlations;
b) split-half reliability, another type of internal consistency reliability, in which all the
items of a test are 'split in half' and the scores of the two halves are correlated.
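The split-half format can be sketched numerically. Assuming a hypothetical six-item test with made-up 1-to-5 scores from five respondents (none of this data is from the source), the items are split into odd and even halves, each respondent's half-scores are summed and correlated, and the Spearman-Brown formula steps the half-test correlation up to an estimate for the full-length test.

```python
# Split-half reliability with Spearman-Brown correction (illustrative data).

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

def split_half_reliability(scores):
    """scores: one row of item scores per respondent; odd/even item split."""
    odd = [sum(row[0::2]) for row in scores]   # items 1, 3, 5
    even = [sum(row[1::2]) for row in scores]  # items 2, 4, 6
    r = pearson_r(odd, even)
    return 2 * r / (1 + r)  # Spearman-Brown: reliability of the full test

# Five respondents x six items, scored 1-5 (made-up example).
scores = [
    [4, 5, 4, 4, 5, 4],
    [2, 2, 3, 2, 2, 3],
    [5, 4, 5, 5, 4, 5],
    [3, 3, 2, 3, 3, 3],
    [4, 4, 4, 5, 4, 4],
]
print(round(split_half_reliability(scores), 3))  # near 1.0: the halves agree
```

The correction is needed because halving a test shortens it, and shorter tests are less reliable; Spearman-Brown estimates what the full-length instrument would achieve.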
Thank you
Pro-Environmental Behavior and Literacy Among Elementary
Grade Pupils, Specifically Grade V, in the District of President Roxas
The Effect of the Child Protection Policy on the Teaching-Learning Process
Among Elementary Grades in the District of President Roxas
Common Errors in Reading English Among Grades 3 to 6 Pupils of
Bayuyan Elementary School