DATA COLLECTION
CHAPTER 5: FINDING RESULTS THROUGH DATA COLLECTION
LEARNING OUTCOMES
•COLLECT DATA USING APPROPRIATE INSTRUMENTS.
•PRESENT AND INTERPRET DATA IN TABULAR AND GRAPHICAL FORMS.
•USE STATISTICAL TECHNIQUES TO ANALYZE DATA: STUDY OF DIFFERENCES AND
RELATIONSHIPS, LIMITED TO BIVARIATE ANALYSIS.
DATA COLLECTION PROCEDURE
Data collection or data gathering is defined as the process of
gathering and measuring information on variables of interest, in an
established systematic method that enables one to answer stated
research questions, test hypotheses, and evaluate outcomes. There
are several techniques or strategies for data collection with
corresponding statistical instruments. The kind of analysis that can
be performed on a set of data will be influenced by the goals
identified at the outset, and the data actually gathered.
The Quantitative Data Collection Method relies on random
sampling and structured data collection instruments that fit
diverse experiences into predetermined response categories. It
produces results that are easy to summarize, compare, and
generalize.
If random sampling is not feasible, the researcher may collect
data on participant and situational characteristics in order to
statistically control their influence on the dependent, or
outcome, variable. If the intent is to generalize from the
research participants to a larger population, the researcher
will employ probability sampling to select participants.
QUANTITATIVE DATA GATHERING STRATEGIES
• QUESTIONNAIRES
Questionnaires often make use of checklists and rating scales. They are
usually sent by mail or email, or personally given to the respondents.
Paper-and-pencil questionnaires - These can be sent to a large number of
people and save the researcher time and money. They may use an
open-ended format, which allows the respondents to answer in any way
they wish, or a multiple-choice format.
Web-based questionnaires – A newer and steadily growing methodology is
the use of Internet-based research, in which respondents answer
questions online, for example through SurveyMonkey.
• INTERVIEWS
Interviews can be used at any stage of the evaluation process.
Two types of interviews are used in evaluation research:
structured interviews, in which a carefully worded
questionnaire is administered, and in-depth interviews, in
which the interviewer does not follow a rigid form. Interview
will supplement the data gathered through questionnaires.
Personal interviews - people usually respond when asked by a person,
but their answers may be influenced by the interviewer.
Telephone interviews - less time-consuming and less expensive, and the
researcher has ready access to anyone who has a telephone.
Clinical interviews - concerned with broad underlying feelings or
motivations over the course of an individual's life experiences, rather
than with the effects of a specific experience.
Disguised interviews - vary the degree to which the respondent is made
aware of the real research purpose.
Focused interviews - the interviewer focuses attention on a given
experience and its effects.
• EXPERIMENTS
Experiments attempt to determine a cause-and-effect relationship
between two or more variables.
Blind Experiment – the test subjects do not know if they are getting
the experimental treatment or the placebo.
Double Blind Experiment - neither the test subject nor the
experimenter measuring the response knows to which group the test
subjects have been assigned (treatment or placebo).
• OBSERVATIONS
Observational techniques are methods by
which an individual or individuals gather first
hand data on programs, processes, or
behaviors being studied.
TO OBTAIN RELIABLE INFORMATION THAT WILL HELP ANSWER THE RESEARCH QUESTIONS,
FOLLOW THESE STEPS:
1. DETERMINE THE OBJECTIVES OF THE STUDY YOU ARE UNDERTAKING.
2. DEFINE THE POPULATION OF INTEREST.
3. CHOOSE THE VARIABLES THAT YOU WILL MEASURE IN THE STUDY.
4. DECIDE ON AN APPROPRIATE DESIGN FOR PRODUCING DATA.
5. COLLECT THE DATA.
6. DETERMINE THE APPROPRIATE DESCRIPTIVE AND/OR INFERENTIAL DATA ANALYSIS
TECHNIQUES.
STRATEGIES FOR COLLECTING DATA
1. Non-probability Methods
• Convenience sampling (haphazard): for instance,
surveying students as they pass by in the university’s
student union building.
• Gathering volunteers: for instance, using an
advertisement in a magazine or on a website inviting
people to complete a form or participate in a study.
2. Probability methods
• Simple random sample - making selections from a
population where each subject in the population has an equal
chance of being selected.
• Stratified random sample - where you first identify the
population of interest, then divide this population into strata
or groups based on some characteristic (e.g., sex, geographic
region), and then perform a simple random sample from each
stratum.
• Cluster sample - where a random cluster of subjects is taken
from the population of interest.
• Multi-stage sampling - the procedure is carried out in phases
and usually involves more than one sampling method. In very
large and diverse populations, sampling may be done in two or
more stages.
• Systematic sampling - individuals are chosen at regular
intervals from the sampling frame. For this method, you
randomly select a number to tell you where to start selecting
individuals from the list.
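The probability methods above can be sketched in a few lines of Python. The population, strata, and sample sizes below are all invented for illustration; the stdlib `random` module stands in for whatever sampling tool a study actually uses.

```python
import random

random.seed(42)  # fixed seed so the illustration is reproducible

population = list(range(1, 101))  # hypothetical sampling frame of 100 subjects

# Simple random sample: every subject has an equal chance of selection.
simple = random.sample(population, k=10)

# Systematic sample: pick a random starting point, then take every k-th subject.
interval = len(population) // 10
start = random.randrange(interval)
systematic = population[start::interval]

# Stratified random sample: divide the frame into strata,
# then draw a simple random sample from each stratum.
strata = {"A": population[:50], "B": population[50:]}
stratified = {name: random.sample(group, k=5) for name, group in strata.items()}

print(len(simple), len(systematic), sum(len(s) for s in stratified.values()))
```

Each method yields 10 subjects here, but they differ in how evenly those subjects are spread across the frame.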
METHODS OF DATA PROCESSING
Data processing deals with editing, coding,
classifying, tabulating, and presenting data through charts,
diagrams, or graphs. It is a series of actions or steps
performed on data to verify, organize, transform,
integrate, and extract data in an appropriate output form
for subsequent use. Methods of processing must be
rigorously documented to ensure the utility and integrity
of the data. According to Calmorin and Calmorin (2007),
data processing means translating information either
manually or electronically into qualitative form for use in
research analysis.
STEPS IN DATA PROCESSING
1. CLASSIFICATION OR CATEGORIZATION
Classification or categorization is the process of grouping the
statistical data under various understandable homogeneous groups
for the purpose of convenient interpretation. A uniformity of
attributes is the basic criterion for classification; and the grouping of
data is made according to similarity. Classification becomes necessary
when there is diversity in the data collected for meaningful
presentation and analysis. However, it is meaningless with respect to
homogeneous data. A good classification should have the
characteristics of clarity, homogeneity, equality of scale,
purposefulness, and accuracy.
2. CODING OF DATA
Coding of data is more useful with research instruments of
open-ended questions (Calmorin & Calmorin, 2007). Coding
is necessary for efficient analysis and through it several
replies may be reduced to a small number of classes which
contain the critical information required for analysis. Coding
decisions should be taken at the designing stage of the
questionnaire.
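As a rough illustration of coding, open-ended replies can be reduced to a small number of classes by a keyword scheme decided at the questionnaire-design stage. The scheme, keywords, and replies below are all invented for the example.

```python
from collections import Counter

# Hypothetical coding scheme: keywords map free-text replies
# to a small number of analysis classes.
coding_scheme = {
    "bus": "public transport",
    "jeepney": "public transport",
    "car": "private vehicle",
    "walk": "on foot",
}

replies = ["I take the bus", "by car", "jeepney every day", "I walk", "bus again"]

def code_reply(reply):
    """Assign the first matching code, or 'other' if no keyword applies."""
    for keyword, code in coding_scheme.items():
        if keyword in reply.lower():
            return code
    return "other"

codes = Counter(code_reply(r) for r in replies)
print(codes)  # class frequencies ready for analysis
```

Five free-text replies collapse into three analysis classes, which is exactly the reduction coding is meant to achieve.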
3. TABULATION OF DATA
Tabulation is the process of summarizing raw data and displaying it in
compact form for further analysis. Therefore, preparing tables is a
very important step. Tabulation may be manual, mechanical, or
electronic. The choice is made largely on the basis of the size and type
of study, alternative costs, time pressures, and the availability of
computers and computer programs. Tabulation of data is classified into
two types: simple tabulation and complex tabulation.
Simple tabulation gives information regarding one or more
independent questions. Complex tabulation gives information
regarding two mutually dependent questions.
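The difference between simple and complex tabulation can be sketched with Python's `collections.Counter`; the survey records below are hypothetical.

```python
from collections import Counter

# Hypothetical survey records: (sex, response) pairs for one question.
records = [
    ("F", "Yes"), ("F", "No"), ("M", "Yes"),
    ("M", "Yes"), ("F", "Yes"), ("M", "No"),
]

# Simple tabulation: frequency count for a single question.
simple_table = Counter(response for _, response in records)

# Complex (cross) tabulation: two mutually dependent questions at once.
cross_table = Counter(records)

print(simple_table)  # counts per response
print(cross_table)   # counts per (sex, response) cell
```

The cross table is just the simple table broken down by the second variable; its cell counts sum back to the simple counts.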
4. DATA DIAGRAMS
Diagrams are charts and graphs used to present data.
They help catch the attention of the reader and
present the data more effectively, making creative
presentation of data possible.
SCOPE AND PURPOSE OF DATA ANALYSIS
• Data analysis is the process of developing answers to questions
through the examination and interpretation of data.
• Data analysis and interpretation is the process of assigning
meaning to the collected information and determining the
conclusions, significance, and implications of the findings (Tania,
2014).
• The purpose of Interpreting the data is to reduce it to an intelligible
and interpretable form so that the relations of research problems
can be studied and tested, and conclusions drawn.
• Quantitative analysis approaches are meaningful only when there
is a need for data summary across many repetitions of a
participatory process, e.g., focus group discussions leading to
seasonal calendars, Venn diagrams, etc. (Abeyasekera, n.d.).
• Readily available computer programs, such as Excel and Access,
may be useful. Excel tends to be easily accessible for most people
who have access to a computer with Microsoft products (Wilder
Research, 2009).
• Spreadsheet software and other statistical tools can be helpful in
organizing the data.
KEY COMPONENTS OF A DATA ANALYSIS PLAN
1.PURPOSE OF THE EVALUATION
2.QUESTIONS
3.WHAT YOU HOPE TO LEARN FROM THE QUESTION
4.ANALYSIS TECHNIQUE
5.HOW DATA WILL BE PRESENTED
DATA INTERPRETATION
• Research interpretation is defined as an adequate exposition of the
true meaning of the material presented in terms of the purpose of
the study (Reyes, 2004).
• The findings of the study should be written objectively and in a
concise and precise format.
• Reyes (2004) added that the interpretation of data is inextricably
woven with the analysis so much so that it is a special aspect of
analysis rather than a distinct operation.
• In quantitative research, it is common to use graphs, tables, charts
and other non-textual elements to help the reader understand the
data.
STEPS FOR DATA INTERPRETATION
1. REVISIT THE MAIN AND SUB-PROBLEMS.
2. DESCRIBE THE DATA.
3. PLAN FOR AN APPROPRIATE WAY TO PRESENT THE DATA COLLECTED
THROUGH TABULAR FORM, GRAPHICAL, OR ANY OTHER WAY.
4. PLUG IN ADDITIONAL INFORMATION.
5. HAVE CLOSURE OR CONCLUDING STATEMENT IN EVERY DATA
INTERPRETATION.
BASIC ANALYSIS OF “QUANTITATIVE” INFORMATION
1. Make copies of your data and store the master copy away. Use
the copy for making edits, cutting and pasting, etc.
2. Tabulate the information (add up the number of ratings,
rankings, yes’s, no’s for each question).
3. For ratings and rankings, consider computing a mean, or
average, for each question.
4. Consider conveying the range of answers, e.g., 20 people
ranked “1”, 30 ranked “2”, and 20 people ranked “3”.
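The tabulation, mean, and range steps above can be sketched in Python; the 70 ratings below reproduce the distribution in the example (20 ranked "1", 30 ranked "2", 20 ranked "3").

```python
from collections import Counter
from statistics import mean

# Hypothetical ratings from 70 respondents on a 1-3 scale,
# matching the distribution in the example above.
ratings = [1] * 20 + [2] * 30 + [3] * 20

counts = Counter(ratings)              # tabulate the ratings
avg = mean(ratings)                    # compute a mean per question
spread = (min(ratings), max(ratings))  # convey the range of answers

print(counts, avg, spread)
```

Reporting the counts and range alongside the mean matters here: a mean of 2.0 alone would hide how the answers were distributed.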
STAGES OF ANALYSIS AND INTERPRETATION OF FINDINGS
There are four main stages in the analysis and
interpretation of qualitative information. These are
discussed in more detail in several textbooks
including Patton (1986, 1990), Miles and Huberman
(1994), and Silverman (1994). Here the researcher
shall concentrate more on the practical tasks,rather
than on theoretical issues.
STATISTICAL ANALYSIS TECHNIQUES
5. STANDARD DEVIATION
The standard deviation, often represented with the Greek letter sigma
(σ), is the measure of the spread of data around the mean. A high
standard deviation signifies that the data are spread more widely from
the mean, whereas a low standard deviation signals that more data align
with the mean. In a portfolio of data analysis methods, the standard
deviation is useful for quickly determining the dispersion of data points.
The standard deviation represents the distribution of the responses
around the mean. It indicates the degree of consistency among the
responses.
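A minimal sketch with Python's `statistics` module, using two invented response sets with the same mean: the consistent set yields a low standard deviation, the dispersed one a high standard deviation. (`pstdev` is the population standard deviation; `stdev` would give the sample version.)

```python
from statistics import mean, pstdev

# Two hypothetical response sets with the same mean (4.5)
# but very different spread around it.
consistent = [4, 5, 5, 4, 5, 5, 4, 4]  # responses cluster near the mean
dispersed = [1, 8, 2, 7, 1, 8, 2, 7]   # responses spread widely

print(pstdev(consistent))  # low standard deviation: consistent answers
print(pstdev(dispersed))   # high standard deviation: inconsistent answers
```

Identical means, very different standard deviations: this is why the mean alone says nothing about the consistency of the responses.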
6. T-TESTS
T-tests are used to test whether a difference between means is
statistically significant, and whether the sample is representative of
the population. For example, if the mean for variable 1 is 40 and the
mean for variable 2 is 56, you may say the means are different. A t-test
may show that they are not significantly different, however, in which
case the researcher cannot base a conclusion on the difference in means,
since the difference in the sample is not representative of the population.
The t-test is a form of hypothesis testing, which assesses whether a
certain premise is actually true for your data set or population.
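A sketch of the example above using Welch's t statistic, implemented with only the standard library. The scores are invented so the group means come out to 40 and 56; a real analysis would normally use a statistics package (e.g., `scipy.stats.ttest_ind`) and compare the statistic against a t distribution to obtain a p-value, which this sketch omits.

```python
from math import sqrt
from statistics import mean, variance

def welch_t(sample_a, sample_b):
    """Welch's t statistic for the difference between two sample means."""
    na, nb = len(sample_a), len(sample_b)
    se = sqrt(variance(sample_a) / na + variance(sample_b) / nb)
    return (mean(sample_a) - mean(sample_b)) / se

group1 = [38, 41, 39, 42, 40]  # hypothetical scores, mean 40
group2 = [54, 57, 55, 58, 56]  # hypothetical scores, mean 56

t = welch_t(group1, group2)
print(round(t, 2))
```

The farther |t| is from zero relative to the relevant t distribution, the less plausible it is that the two population means are equal.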
7. PEARSON (R) CORRELATION
The Pearson (r) correlation is used to find a correlation
between at least two continuous variables. The value
of such a correlation lies between -1.00 and +1.00,
where 0.00 indicates no correlation and ±1.00
indicates a perfect correlation.
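Pearson's r can be computed directly from its definition with the standard library; the study hours and exam scores below are invented. Note the sign: a perfectly decreasing relationship gives r = -1.

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson product-moment correlation between two variables."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

hours = [1, 2, 3, 4, 5]        # hypothetical study hours
scores = [52, 58, 63, 70, 77]  # hypothetical exam scores

print(round(pearson_r(hours, scores), 3))   # strong positive correlation
print(pearson_r(hours, [10, 8, 6, 4, 2]))   # perfectly decreasing: r = -1
```

Correlation only measures linear association; it says nothing about cause and effect, which is the province of experiments.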
8. CHI-SQUARE TEST
There are two types of chi-square test, but both
involve categorical data. One type of chi-square test
compares the frequency count of what is expected
in theory against what is actually observed. The
second type of chi-square test is known as the
chi-square test with two variables, or the chi-square
test for independence.
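The first type (goodness of fit) reduces to one formula, the sum of (O - E)^2 / E over all categories. The die-roll counts below are invented, and judging significance would require comparing the statistic against a chi-square critical value (e.g., via `scipy.stats`), which this sketch omits.

```python
def chi_square(observed, expected):
    """Chi-square statistic: sum of (O - E)^2 / E over all categories."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Hypothetical experiment: a die rolled 60 times, so each of the
# six faces is expected 10 times if the die is fair.
observed = [8, 12, 9, 11, 6, 14]
expected = [10] * 6

stat = chi_square(observed, expected)
print(stat)
```

A statistic near zero means the observed counts match the theoretical expectation closely; larger values signal a worse fit.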
THANK YOU!!!
