This document discusses principles and methods of research data interpretation. It describes how data is organized, analyzed, and interpreted to draw meaningful inferences. Specifically, it outlines various methods of data interpretation including direct observation, tables, graphs, numerical/statistical methods, and mathematical modeling. It emphasizes that interpretation establishes relationships within data and relates results to existing knowledge to further research. Proper interpretation requires avoiding biases and false generalizations.
2. 'All meanings, we know, depend on the key of interpretation.' - George Eliot
3. Principles of Analysis and Interpretation
Data, as used in behavioral research, means
research results from which inferences are drawn:
usually numerical results, like scores of tests and
statistics such as means, percentages, and
correlation coefficients.
Analysis means the categorizing, ordering,
manipulating, and summarizing of data to obtain
answers to research questions.
Interpretation takes the results of analysis, makes
inferences pertinent to the research relations
studied, and draws conclusions about these
relations.
4. Methods of data interpretation
Direct visual observations of raw data
After organizing the data in tables
After making Graphical representations
After calculations using numerical /
statistical methods
After mathematical modelling
5. DATA
Data is known to be crude information
and not knowledge by itself.
The sequence from data to knowledge
is:
from Data to Information,
from Information to Facts, and finally,
from Facts to Knowledge.
6. DATA
Data becomes information, when it becomes
relevant to your decision problem.
Information becomes fact, when the data can
support it.
Facts are what the data reveals.
However, the decisive instrumental (i.e., applied)
knowledge is expressed together with
some statistical degree of confidence.
7. Fact becomes knowledge, when it is
used in the successful completion
of a decision process.
Massive amounts of facts are
integrated as knowledge.
9. Usefulness and utility of research findings lie in proper
interpretation.
Interpretation is a basic component of research.
After collecting and analyzing the data, the researcher
has to accomplish the task of drawing inferences
followed by report writing.
This has to be done very carefully; otherwise, wrong
conclusions may be drawn and the whole purpose of
doing research may get vitiated.
It is only through interpretation that the researcher
can expose relations and processes that underlie his
findings.
10. Meaning of Interpretation
Interpretation refers to the task of drawing inferences
from the collected facts after an analytical and/or
experimental study.
In fact, it is a search for the broader meaning of research
findings.
The task of interpretation has two major aspects, viz.,
the effort to establish continuity in research through
linking the results of a given study with those of
another, and the establishment of some explanatory
concepts.
11. “In one sense, interpretation is concerned with
relationships within the collected data, partially
overlapping analysis.
Interpretation also extends beyond the data of
the study to include the results of other research,
theory and hypotheses."
12. Interpretation is the device
Thus, interpretation is the device through
which the factors that seem to explain what
has been observed by the researcher in the
course of the study can be better understood,
and it also provides a theoretical conception
which can serve as a guide for further
research.
13. Why Interpretation?
Interpretation is essential for the simple
reason that the usefulness and utility of
research findings lie in proper
interpretation.
It is being considered a basic component of
research process because of the following
reasons:
14. Through interpretation
It is through interpretation that the researcher
can well understand the abstract principle that
works beneath his findings.
Through this he can link up his findings with
those of other studies, having the same abstract
principle, and thereby can predict about the
concrete world of events. Fresh inquiries can test
these predictions later on.
This way the continuity in research can be
maintained.
15. Interpretation leads to establishment
Interpretation leads to the establishment of
explanatory concepts that can serve as a guide for
future research studies;
it opens new avenues of intellectual adventure
and stimulates the quest for more knowledge.
Only through interpretation can the researcher
appreciate why his findings are what they are
and make others understand the real
significance of his research findings.
16. Interpretation of the findings
The interpretation of the findings of an exploratory
research study often results in hypotheses for
experimental research, and as such, interpretation
is involved in the transition from exploratory to
experimental research.
Since an exploratory study does not have a
hypothesis to start with, the findings of such a
study have to be interpreted on a post factum
basis in which case the interpretation is
technically described as ‘post factum’
interpretation.
17. Technique of Interpretation
The task of interpretation is not an easy job;
rather, it requires great skill and dexterity on the
part of the researcher.
Interpretation is an art that one learns through
practice and experience.
The researcher may, at times, seek guidance
from experts for accomplishing the task of
interpretation.
18. The technique of interpretation
The technique of interpretation often involves the
following steps:
1. Researcher must give reasonable explanations
of the relations which he/she has found and
he/she must interpret the lines of relationship
in terms of the underlying processes and must
try to find out the thread of uniformity that lies
under the surface layer of his diversified
research findings.
In fact, this is the technique of how generalization
should be done and concepts be formulated.
19. 2. Extraneous information, if collected
during the study, must be considered
while interpreting the final results of
research study, for it may prove to be a
key factor in understanding the problem
under consideration.
20. 3. It is advisable, before embarking upon final interpretation,
to consult someone who has insight into the study and who
is frank and honest and will not hesitate to point out
omissions and errors in logical argumentation. Such a
consultation will result in correct interpretation and, thus,
will enhance the utility of research results.
4. Researcher must accomplish the task of interpretation only
after considering all relevant factors affecting the problem
to avoid false generalization. He/she must be in no hurry
while interpreting results, for quite often the conclusions,
which appear to be all right at the beginning, may not at all
be accurate.
21. Precautions in Interpretation
One should always remember that even if
the data are properly collected and analyzed,
wrong interpretation would lead to
inaccurate conclusions.
It is, therefore, absolutely essential that the
task of interpretation be accomplished
with patience, in an impartial manner, and
in correct perspective.
22. For correct interpretation
Researcher must pay attention to the following points
for correct interpretation:
(i) At the outset, researcher must invariably satisfy
himself that
(a) the data are appropriate, trustworthy and adequate
for drawing inferences;
(b) the data reflect good homogeneity; and that
(c) proper analysis has been done through statistical
methods.
(ii) The researcher must remain cautious about the
errors that can possibly arise in the process of
interpreting results.
23. Errors can arise due to false
generalization
Errors can arise due to false generalization and/or due
to wrong interpretation of statistical measures, such as
the application of findings beyond the range of
observations, identification of correlation with
causation, and the like.
Another major pitfall is the tendency to affirm that
definite relationships exist on the basis of
confirmation of particular hypotheses.
24. Researcher must remain vigilant
In fact, the positive test results accepting the
hypothesis must be interpreted as “being in
accord” with the hypothesis, rather than as
“confirming the validity of the hypothesis”.
The researcher must remain vigilant about all
such things so that false generalization may not
take place.
He/she should be well equipped with and must
know the correct use of statistical measures for
drawing inferences concerning his study.
25. Researcher must always keep in view that the
task of interpretation is very much intertwined
with analysis and cannot be distinctly separated.
As such he must take the task of interpretation as
a special aspect of analysis and accordingly must
take all those precautions that one usually
observes while going through the process of
analysis viz., precautions concerning the
reliability of data, computational checks,
validation and comparison of results.
26. Researcher must never lose sight of the fact that his
task is not only to make sensitive observations of
relevant occurrences, but also to identify and
disengage the factors that are initially hidden to the
eye.
This will enable him to do his job of interpretation on
proper lines.
Broad generalization should be avoided, as most
research is not amenable to it because the coverage
may be restricted to a particular time, a particular area
and particular conditions.
Such restrictions, if any, must invariably be specified
and the results must be framed within their limits.
27. The researcher must remember that “ideally in the
course of a research study, there should be constant
interaction between initial hypothesis, empirical
observation and theoretical conceptions.
It is exactly in this area of interaction between
theoretical orientation and empirical observation that
opportunities for originality and creativity lie.”
Researcher must pay special attention to this aspect
while engaged in the task of interpretation.
28. Data Interpretation Methods
Data interpretation may be the most important key in
proving or disproving your hypothesis.
It is important to select the proper statistical tool to
make useful interpretation of your data.
If you pick an improper data analysis method, your
results may be suspect and lack credibility.
29. Visually scanning the data
Before doing any statistical analyses of
the data you have collected, look closely
at the data to determine the best
method of organizing it.
By visually scanning the data and
reorganizing it, you may be able to spot
trends or other anomalies that may help
you in your analysis of the data.
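To make this concrete, here is a minimal Python sketch of such a first visual pass (the file name and the "score" column are placeholders, not from the slides):

```python
import pandas as pd

# Load the raw data (placeholder file name) and look at it before
# running any statistics.
df = pd.read_csv("survey_results.csv")

print(df.head(10))                     # eyeball the first rows for obvious anomalies
print(df.describe())                   # ranges, means, counts: quick outlier/gap check
print(df.isna().sum())                 # missing values per column
print(df.sort_values("score").tail())  # reorganize to surface extreme values
```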
30. STATISTICS
Statistics is a science assisting you to
make decisions under
uncertainties (based on some numerical
and measurable scales).
The decision-making process must be based on
data, not on personal opinion or belief.
31. What is Statistical Data Analysis?
Data are not information! To
determine what statistical data
analysis is, one must first define
statistics.
Statistics is a set of methods that
are used to collect, analyze, present,
and interpret data.
32. Statistical methods
Statistical methods are used in a wide
variety of occupations and help people
identify, study, and solve many complex
problems.
In the business and economic world,
these methods enable decision makers
and managers to make informed and
better decisions about uncertain
situations.
33. Review-Statistics:
“We can think of statistics as a group of
computational procedures that allow us to find
meaning in numerical data”.
Descriptive statistics provide a description of
what the data look like.
They provide a means to describe the points of
central tendency (mean, mode, median, etc.) and
dispersion (standard deviation, variance, interquartile
range, etc.).
34. Inferential statistics allow the
researcher to make
inferences about populations from smaller samples of the
population.
Statistics of the sample are used to estimate parameters of
the population.
A parameter is a constant value representative of the
population (such as population mean and standard
deviation) while a statistic is any calculation performed on
the sample being tested.
Inferential statistics also allow the researcher to test their
research hypotheses.
Some measures used in inferential statistics include the
standard error of the mean, estimators, and the p-value.
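As a minimal illustration of these measures (the sample values are invented for the sketch), SciPy can compute a statistic, its standard error, and a p-value:

```python
import numpy as np
from scipy import stats

sample = np.array([12.1, 11.4, 13.2, 12.8, 11.9, 12.5, 13.0, 12.2])

mean = sample.mean()        # statistic: an estimator of the population mean
sem = stats.sem(sample)     # standard error of the mean

# One-sample t-test against a hypothesized population mean of 12.0
t_stat, p_value = stats.ttest_1samp(sample, popmean=12.0)
print(f"mean={mean:.2f}, SEM={sem:.3f}, p={p_value:.3f}")
```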
35. The way that the data is interpreted can have
varying effects on your conclusions.
Absolute honesty in recording and interpreting
data is required to maintain the credibility of
research.
All of the conditions of a situation should be
considered, and inferences should be made in strict
accordance with the data obtained.
36. Using statistics to determine relationships is
paramount to the success of good research.
Using tools such as ANOVA, correlations, Fisher's
exact test, regression, etc. can indicate whether or
not your research hypothesis is supported.
But, REMEMBER to select your significance level
(p-value threshold) before you begin your research project.
Doing this will add credibility to your research.
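A sketch of how these tools map onto SciPy calls (the arrays are toy data, invented for illustration):

```python
import numpy as np
from scipy import stats

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

r, p_corr = stats.pearsonr(x, y)                              # correlation
f, p_anova = stats.f_oneway([1, 2, 3], [2, 3, 4], [5, 6, 7])  # one-way ANOVA
odds, p_fisher = stats.fisher_exact([[8, 2], [1, 5]])         # Fisher's exact test (2x2 table)
slope, intercept, r_val, p_reg, se = stats.linregress(x, y)   # simple linear regression
```

Each test returns a p-value to compare against the significance level chosen in advance.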
37. One other important point to remember when
doing data analysis is to use parametric statistics
instead of nonparametric statistics whenever
possible.
Remember that parametric statistics rely on
assumptions such as normality, which give greater
power than nonparametric testing.
Both parametric and nonparametric statistical
tests are used for interpretation.
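For example (toy data; a sketch rather than a prescription), the parametric t-test and its nonparametric counterpart can be run side by side, with a normality check to justify the parametric choice:

```python
from scipy import stats

group_a = [14.2, 15.1, 13.8, 14.9, 15.3, 14.5]
group_b = [13.1, 12.8, 13.9, 12.5, 13.3, 13.0]

# Parametric: assumes approximate normality, and gains power from it
t_stat, p_param = stats.ttest_ind(group_a, group_b)

# Nonparametric counterpart: no normality assumption, somewhat less power
u_stat, p_nonparam = stats.mannwhitneyu(group_a, group_b)

# Shapiro-Wilk test of the normality assumption
w_stat, p_norm = stats.shapiro(group_a)
```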
38. Interpreting Qualitative Data:
Qualitative data interpretation tends to be more
subjective in nature and many times can be influenced
by the researcher’s biases.
Effort must be put into the data collection process to
eliminate bias, including collecting more than one kind
of data, getting many different kinds of perspectives on
the events being studied, purposely looking for
contradicting information, and acknowledging your own
biases as they relate to your research report.
39. Qualitative data analysis
Qualitative data analysis is time consuming and
complex because a lot of data can be created that is
both useful and not useful.
There is no “correct way” to analyze qualitative data.
Efforts can be made to make your data presentation
and interpretation more credible and less biased by
using the above methods.
40. Statistics consists of
Statistics consists of the principles and methods for
Designing studies
Collecting data
Presenting and analysing data
Interpreting the results
Statistics has been described as
Turning data into information
Data-based decision making.
41. Data interpretation:
Uncovering and explaining trends in the data.
The analyzed data can then be interpreted and
explained.
In general, when scientists interpret data, they
attempt to explain the patterns and trends
uncovered through analysis, bringing all of their
background knowledge, experience, and skills to
bear on the question and relating their data to
existing scientific ideas.
42. Given the personal nature of the
knowledge they draw upon, this step can
be subjective, but that subjectivity is
scrutinized through the peer
review process.
43. Data collection is the systematic
recording
Data collection is the systematic recording of
information;
data analysis involves working to uncover
patterns and trends in datasets;
data interpretation involves explaining those
patterns and trends.
44. Scientists interpret data based on
their background
Scientists interpret data based on their
background knowledge and experience; thus,
different scientists can interpret the same data in
different ways.
By publishing their data and the techniques they
used to analyze and interpret those data, scientists
give the community the opportunity to both
review the data and use them in future research.
45. Data Analysis
Data Analysis is the process of systematically
applying statistical and/or logical techniques to
describe and illustrate, condense and recap, and
evaluate data.
Various analytic procedures “provide a way of drawing
inductive inferences from data and distinguishing the
signal (the phenomenon of interest) from the noise
(statistical fluctuations) present in the data".
46. While data analysis in qualitative research
can include statistical procedures, many
times analysis becomes an ongoing iterative
process where data is continuously collected
and analyzed almost simultaneously.
Indeed, researchers generally analyze for
patterns in observations through the entire
data collection phase.
47. The form of the analysis is determined by
the specific qualitative approach taken (field
study, ethnography, content analysis, oral
history, biography, etc.) and
the form of the data (field notes,
documents, audiotape, videotape).
48. An essential component of ensuring data integrity
is the accurate and appropriate analysis of research
findings.
Improper statistical analyses distort scientific
findings, mislead casual readers, and may
negatively influence the public perception of
research.
Integrity issues are just as relevant to the analysis of
non-statistical data.
49. Considerations/issues in data
analysis
There are a number of issues that researchers should be cognizant of with respect to data
analysis. These include:
Having the necessary skills to analyze
Concurrently selecting data collection methods and appropriate analysis
Drawing unbiased inference
Inappropriate subgroup analysis
Following acceptable norms for disciplines
Determining statistical significance
Lack of clearly defined and objective outcome measurements
Providing honest and accurate analysis
Manner of presenting data
Environmental/contextual issues
Data recording method
Partitioning ‘text’ when analyzing qualitative data
Training of staff conducting analyses
Reliability and Validity
Extent of analysis
50. Summarizing data
Tables
Simplest way to summarize data
Data are presented as absolute numbers or percentages
Charts and graphs
Visual representation of data
Data are presented as absolute numbers or percentages
51. Basic guidance when summarizing
data
Ensure graphic has a title
Label the components of your graphic
Indicate source of data with date
Provide number of observations (n=xx) as a reference
point
Add footnote if more information is needed
53. Tables: Relative frequency
Relative frequency (%) = (number of values within an interval /
total number of values in the table) x 100

Year      | # births (n) | Relative frequency (%)
1900–1909 | 35           | 27
1910–1919 | 46           | 34
1920–1929 | 51           | 39
Total     | 132          | 100.0
54. Tables
Year      | Number of births (n) | Relative frequency (%)
1900–1909 | 35                   | 27
1910–1919 | 46                   | 34
1920–1929 | 51                   | 39
Total     | 132                  | 100.0
Percentage of births by decade between 1900 and 1929
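The same computation in pandas (the counts are taken from the table above; note that a different rounding convention can shift a percentage by a point):

```python
import pandas as pd

births = pd.Series({"1900-1909": 35, "1910-1919": 46, "1920-1929": 51})

# relative frequency = (count in interval / total count) x 100
rel_freq = (births / births.sum() * 100).round(1)

table = pd.DataFrame({"# births (n)": births, "Relative frequency (%)": rel_freq})
print(table)
print("Total births:", births.sum())  # 132
```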
55. Graphical representation
The graphical representation of data is categorized
into five basic types (a sketch producing each follows the list).
Graphical representation 1: Bar graph.
Graphical representation 2: Pie graph.
Graphical representation 3: Line graph.
Graphical representation 4: Scatter plot.
Graphical representation 5: Histogram.
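A matplotlib sketch that produces all five basic types from invented data:

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
x = np.arange(5)
y = rng.integers(1, 10, size=5)

fig, axes = plt.subplots(1, 5, figsize=(15, 3))
axes[0].bar(x, y)                                # 1. bar graph
axes[1].pie(y, labels=list("ABCDE"))             # 2. pie graph
axes[2].plot(x, y, marker="o")                   # 3. line graph
axes[3].scatter(rng.random(30), rng.random(30))  # 4. scatter plot
axes[4].hist(rng.normal(size=200), bins=15)      # 5. histogram
for ax, title in zip(axes, ["Bar", "Pie", "Line", "Scatter", "Histogram"]):
    ax.set_title(title)
plt.tight_layout()
plt.show()
```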
56. Charts and graphs
Charts and graphs are used to portray:
Trends, relationships, and comparisons
The most informative are simple and self-explanatory
57. Use the right type of graphic
Charts and graphs
Bar chart: comparisons, categories of data
Line graph: display trends over time
Pie chart: show percentages or proportional share
58. Charts Are Analog!
Basic Rule: Charts show overviews - for details use tables!
Quantitative data can be represented in charts by using the
following analog properties:
Position of graphical elements along a common scale; position of
graphical elements along identical scales at different locations (e.g.
graphs arranged in a row)
Distances (lengths)
Slopes and angles
Areas (e.g. of circles, squares or other shapes)
Lightness (grayscale) or texture gradient
People differ in their ability to estimate physical properties: They are
best at estimating positions and distances, but not so good at
estimating slopes, angles, and areas (in this order).
Charts are images: Good charts enable users to easily and quickly find
relevant/critical data or recognize important relations between data.
62. Use it to…
convey approximate proportional relationships
(relative amounts) at a point in time
compare part of a whole at a given point in time
emphasize a small proportion of parts
63. Do NOT use it…
For exact comparisons of values, because estimating
angles is difficult for people
For rank data: Use column/bar charts in this case; use
multiple column/bar charts for grouped data
If proportions vary greatly; do not use multiple pies to
compare corresponding parts
64. Caution!
Pie charts cannot represent values beyond 100%
Each pie chart is valid for one point in time only
Pie charts are suited to presenting only a few
percentage values
Angles are harder to estimate for people than
distances; perspective pie charts are even harder to
interpret
66. Use it to…
Display over time (or any other dimension)
How a set of data adds up to a whole (cumulated
totals)
Which part of the whole each element represents
72. Use it to…
Present a part-whole relation over time (with accurate
impression)
Show proportional relationships over time
NOTE: Segmented column/bar charts are more
accurate than pie charts, because distances can be more
accurately estimated than areas.
74. Variants
Polygon: Connects data points through straight lines or higher order
graphs
Histogram: Columns/bars touch; useful for larger sets of data points,
typically used for frequency distributions
Staircase Chart: Displays only the silhouette of the histogram; useful
for even larger sets of data points, typically used for frequency
distributions
Step chart: Use it to illustrate trends among more than two members
of nominal or ordinal scales; do not use it for two or more variables or
levels of a single variable (hard to read)
Pyramid histogram: Two mirror histograms; use it for comparisons
76. Use it…
To display long data rows
To extrapolate beyond known data values (forecast)
To compare different graphs
To find and compare trends (changes over time)
To recognize correlations and covariations between
variables
78. Use it to…
Show measurements over time (one-dimensional
scatterplot)
Convey an overall impression of the relation between
two variables (Two-dimensional scatterplot)
79. Don’t Use it for…
Determining and comparing trends, recognition and
comparison of change rates
More than one independent variable: Avoid
illustrating more than one independent variable in a
scatter plot
81. [Line graph: Percentage of new enrollees tested for HIV at each site, by quarter. X-axis: Quarter 1 to Quarter 4 (Q1 Jan–Mar, Q2 Apr–June, Q3 July–Sept, Q4 Oct–Dec); y-axis: % of new enrollees tested for HIV; series: Site 1, Site 2, Site 3.]
82. Has the program met its goal? [The same line graph (percentage of new enrollees tested for HIV at each site, by quarter), with the y-axis scaled 0–60% and a target line added for comparison.]
83. Stacked bar chart: represents components of a whole and compares wholes. [Chart: Number of months female and male patients have been enrolled in HIV care, by age group (0–14 years vs. 15+ years).]
84. Line graph: displays trends over time. [Chart: Number of clinicians working in each clinic during Years 1–4; series: Clinic 1, Clinic 2, Clinic 3.]
85. [The same line graph, with the years labeled: Y1 1995, Y2 1996, Y3 1997, Y4 1998.]
86. Interpreting data
Adding meaning to information by making connections and comparisons and exploring causes and consequences.
[Diagram: four steps - relevance of finding; reasons for finding; consider other data; conduct further research.]
87. Interpretation – relevance of finding
[The same four-step diagram, highlighting "relevance of finding".]
88. Interpretation – relevance of finding
Does the indicator meet the target?
How far from the target is it?
How does it compare (to other time periods, other facilities)?
Are there any extreme highs and lows in the data?
90. Interpretation – consider other data
[The same four-step diagram, highlighting "consider other data".]
Use routine service data to clarify questions
• Calculate nurse-to-client ratio, review commodities data against client load, etc.
Use other data sources
91. Interpretation – other data sources
Situation analyses
Demographic and health surveys
Performance improvement data
[The same four-step diagram.]
92. Interpretation – conduct further research
Data gap → conduct further research
Methodology depends on the questions being asked and the resources available
[The same four-step diagram, highlighting "conduct further research".]
93. Data Interpretation
Answer these four questions
What is important in the data?
Why is it important?
What can be learned from it?
So what?
Remember
Interpretation depends on the perspective of the
researcher.
Why?
94. Interpretation
One technique for data interpretation (Wolcott)
Extend the analysis by raising questions
Connect findings to personal experiences
Seek the advice of “critical” friends.
Contextualize findings in the research
Converging evidence?
Turn to theory
95. Frequencies and Continuous Measures
Quantitative data come in two general forms:
frequencies and continuous measures.
Frequencies: f = {(x, y); where x is a member of the set X, and y is
either 1 or 0, depending on x's possessing or not possessing
property M}
Continuous measures: f = {(x, y); x is an object, and y = any numeral}
96. Rules of Categorization
The first step in any analysis is categorization.
The five rules of categorization are as follows
(a binning sketch follows the list):
1. Categories are set up according to the research
problem and purpose.
2. The categories are exhaustive.
3. The categories are mutually exclusive and
independent.
4. Each category (variable) is derived from one
classification principle.
5. Any categorization scheme must be on one level
of discourse.
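Rules 2 and 3 (exhaustive, mutually exclusive categories) are exactly what interval binning enforces automatically; a pandas sketch with invented test scores:

```python
import pandas as pd

scores = pd.Series([23, 45, 67, 12, 88, 55, 71, 39])

# pd.cut produces mutually exclusive, exhaustive intervals over 0-100,
# each derived from a single classification principle (the score value).
categories = pd.cut(scores, bins=[0, 25, 50, 75, 100],
                    labels=["low", "medium", "high", "very high"])
print(categories.value_counts())
```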
97. Kinds of Statistical Analysis
Frequency Distributions
Graphs and Graphing
Measures of Central Tendency and Variability
Measures of Relations
Analysis of Differences
Analysis of Variance and Related Methods
Profile Analysis
Multivariate Analysis
98. Graphs and Graphing
A graph is a two-dimensional representation of a
relation or relations.
Interaction means that the relation of an independent
variable to a dependent variable differs in different
groups or at different levels of another independent
variable.
99. Frequency Distributions
Although frequency distributions are used primarily
for descriptive purposes, they can also be used for
other research purposes.
Observed distributions can also be compared to
theoretical distributions (normal distributions).
100. Measures of Central Tendency and
Variability
Mean, median, mode
Standard deviation, range
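These measures can be computed with Python's standard library alone (sample values invented):

```python
import statistics

data = [4, 8, 6, 5, 3, 8, 9, 5, 8]

print("mean:", statistics.mean(data))      # central tendency
print("median:", statistics.median(data))
print("mode:", statistics.mode(data))
print("stdev:", statistics.stdev(data))    # variability (sample standard deviation)
print("range:", max(data) - min(data))
```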
101. Measures of Relations
Ideally, any analysis of research data should include
both kinds of indices: measures of the significance of a
relation and measures of the magnitude of the relation.
102. Analysis of Differences
1. It is by no means confined to differences between
measures of central tendency.
2. All analyses of differences are intended for the
purpose of studying relations. Conversely, the greater
the differences, the higher the correlation, all other
things being equal.
103. Analysis of Variance and Related Methods
A method of identifying, breaking down, and testing
for statistical significance the variances that come from
different sources of variation.
That is, a dependent variable has a total amount of
variance, some of which is due to the experimental
treatment, some to error, and some to other causes.
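That partition can be made concrete: the total sum of squares splits exactly into a between-group (treatment) part and a within-group (error) part. A numpy sketch with invented groups:

```python
import numpy as np

groups = [np.array([3.1, 2.9, 3.4]),   # treatment A
          np.array([4.0, 4.2, 3.8]),   # treatment B
          np.array([5.1, 4.8, 5.3])]   # treatment C

all_vals = np.concatenate(groups)
grand_mean = all_vals.mean()

ss_total = ((all_vals - grand_mean) ** 2).sum()
ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)

# The identity underlying ANOVA: SS_total = SS_between + SS_within
assert np.isclose(ss_total, ss_between + ss_within)
```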
104. Profile Analysis
Profile analysis is basically the assessment of the
similarities of the profiles of individuals or groups.
A profile is a set of different measures of an individual
or group, each of which is expressed in the same unit
of measure.
106. Indices
Index can be defined in two related ways:
1.An index is an observable phenomenon that is
substituted for a less-observable phenomenon. For
example, test scores indicate achievement levels,
verbal aptitudes, degrees of anxiety, and so on.
2.An index is a number that is a composite of two or
more numbers. For example, all sums and averages,
coefficients of correlation.
107. Indices
Indices are most important in research because they
simplify comparisons.
The percentage is a good example.
Percentages transform raw numbers into comparable
form.
Indices generally take the form of quotients: ratios and
proportions.
108. Social Indicators
Indicators, although closely related to indices—
indeed, they are frequently indices as defined
above—form a special class of variables.
Variables like income, life expectancy, fertility,
quality of life, educational level (of people), and
environment can be called social indicators. Social
indicators are both variables and statistics.
109. Unfortunately, it is difficult to define “social indicators.”
In this book we are interested in social indicators as a class
of sociological and psychological variables that in the
future may be useful in developing and testing scientific
theories of the relations among social and psychological
phenomena.
110. The Interpretation of Research Data
Adequacy of Research Design, Methodology,
Measurement, and Analysis
Negative and Inconclusive Results
Unhypothesized Relations and Unanticipated
Findings
Proof, Probability, and Interpretation
111. Adequacy of Research Design, Methodology,
Measurement, and Analysis
Most important, the design, methods of observation,
measurement, and statistical analysis must all be
appropriate to the research problem.
112. Negative and Inconclusive Results
When results are positive, when the data support
the hypotheses, one interprets the data along the
lines of the theory and the reasoning behind the
hypotheses.
If we can repeat the feat, then the evidence of
adequacy is even more convincing.
If we can be fairly sure that the methodology, the
measurement, and the analysis are adequate, then
negative results can be definite contributions to
scientific advancement.
113. Unhypothesized Relations and
Unanticipated Findings
The unpredicted relation may be an important key
to a deeper understanding of the theory.
For example, positive reinforcement strengthens
response tendencies.
Unpredicted and unexpected findings must be
treated with more suspicion than predicted and
expected findings.
Before being accepted, they should be
substantiated in independent research in which
they are specially predicted and tested.
114. Proof, Probability, and Interpretation
Let us flatly assert that nothing can be “proved”
scientifically.
All one can do is bring evidence to bear that
such-and-such a proposition is true.
Proof is a deductive matter.
Experimental methods of inquiry are not methods
of proof; they are controlled methods of bringing
evidence to bear on the probable truth or falsity of
relational propositions.
In short, no single scientific investigation ever
proves anything.
Thus the interpretation of the analysis of research
data should never use the word proof.
115. Effective Data Analysis
Effective data analysis involves
keeping your eye on the main game
managing your data
engaging in the actual process of quantitative and/or
qualitative analysis
presenting your data
drawing meaningful and logical conclusions
116. The Big Picture
Analysis should be approached as a critical, reflective,
and iterative process that cycles between data and an
overarching research framework that keeps the big
picture in mind
117. Managing Data
Regardless of data type, managing your data
involves
familiarizing yourself with appropriate software
developing a data management system
systematically organizing and screening your data
entering the data into a program
and finally ‘cleaning’ your data
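As a hedged sketch of what screening and cleaning can look like in practice, here is a short pandas example; the file name and column names are hypothetical:

```python
# Screening and 'cleaning' a hypothetical survey data set with pandas.
import pandas as pd

df = pd.read_csv("survey_responses.csv")  # hypothetical file

# Screen the data: structure, dtypes, and missing values.
df.info()
print(df.isna().sum())

# Clean: drop exact duplicate records, standardize a text field,
# and remove out-of-range values in an 'age' column.
df = df.drop_duplicates()
df["gender"] = df["gender"].str.strip().str.lower()
df = df[df["age"].between(0, 120)]
```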
118. Statistics
Being able to do statistics no longer means being able
to work with formulas
It’s much more important for researchers to be familiar
with the language and logic of statistics, and be
competent in the use of statistical software
119. Data Types
Different data types demand distinct treatment, so it's
important to be able to distinguish variables by
cause and effect (dependent or independent)
measurement scales (nominal, ordinal, interval, and
ratio)
120. Descriptive Statistics
Descriptive statistics are used to summarize the basic
features of a data set through
measures of central tendency (mean, mode, and
median)
dispersion (range, quartiles, variance, and standard
deviation)
distribution (skewness and kurtosis)
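A minimal pandas sketch covering all three families of measures just listed, on invented scores:

```python
# Descriptive statistics: central tendency, dispersion, and
# distribution shape for a hypothetical set of scores.
import pandas as pd

scores = pd.Series([45, 52, 58, 60, 60, 63, 67, 70, 74, 88])

print("mean:", scores.mean(), " median:", scores.median(),
      " mode:", scores.mode().tolist())
print("range:", scores.max() - scores.min())
print("quartiles:", scores.quantile([0.25, 0.5, 0.75]).tolist())
print("variance:", scores.var(), " std dev:", scores.std())
print("skewness:", scores.skew(), " kurtosis:", scores.kurt())
```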
121. Inferential Statistics
Inferential statistics allow researchers to assess
their ability to draw conclusions that extend
beyond the immediate data, e.g.
if a sample represents the population
if there are differences between two or more groups
if there are changes over time
if there is a relationship between two or more variables
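For example, an independent-samples t-test asks whether two groups differ beyond what chance alone would produce. A minimal sketch on hypothetical control and treatment scores:

```python
# Inferring a group difference with an independent-samples t-test.
from scipy import stats

control = [72, 75, 68, 71, 74, 69, 73]
treatment = [78, 82, 77, 80, 75, 79, 81]

t_stat, p = stats.ttest_ind(control, treatment)
print(f"t = {t_stat:.2f}, p = {p:.4f}")
# A small p suggests the difference is unlikely to be a chance
# finding confined to this sample.
```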
122. Selecting Statistical Tests
Selecting the right statistical test relies on
knowing the nature of your variables
their scale of measurement
their distribution shape
the types of questions you want to ask
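A deliberately simplified sketch of how such criteria can drive the choice; the mapping below covers only a few common cases and is an illustration, not a complete decision procedure:

```python
# A toy lookup from (measurement scale, question type) to a commonly
# used statistical test. Real selection also weighs distribution
# shape, sample size, and study design.
def suggest_test(scale: str, question: str) -> str:
    table = {
        ("nominal", "difference"): "chi-square test",
        ("interval", "difference"): "t-test / ANOVA",
        ("ordinal", "relationship"): "Spearman rank correlation",
        ("interval", "relationship"): "Pearson correlation",
    }
    return table.get((scale, question), "consult a fuller decision tree")

print(suggest_test("interval", "difference"))  # t-test / ANOVA
print(suggest_test("nominal", "difference"))   # chi-square test
```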
123. Presenting Quantitative Data
Presenting quantitative data often involves the
production of graphs and tables
These need to be
1. selectively generated so that they make relevant
arguments
2. informative yet simple, so that they aid the reader's
understanding
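A minimal matplotlib sketch of a simple, informative graph; the enrollment figures, title, and source line are hypothetical:

```python
# A bar chart with a title, labeled axes, sample size, and data source.
import matplotlib.pyplot as plt

quarters = ["Q1", "Q2", "Q3", "Q4"]
enrolled = [120, 95, 60, 45]  # hypothetical counts (N = 320)

fig, ax = plt.subplots()
ax.bar(quarters, enrolled)
ax.set_title("New patients enrolled, by quarter (N = 320)")
ax.set_xlabel("Quarter")
ax.set_ylabel("Number of patients enrolled")
fig.text(0.1, 0.01, "Source: hypothetical clinic register")
plt.show()
```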
124. Qualitative Data Analysis (QDA)
In qualitative data analysis there is a common reliance
on words and images to draw out rich meaning
But there is an amazing array of perspectives and
techniques for conducting an investigation
125. The QDA Process
Qualitative data analysis creates new
understandings by exploring and interpreting
complex data from sources without the aid of
quantification
Data sources include
interviews
group discussions
observation
journals
archival documents, etc
126. Uncovering and Discovering Themes
The methods and logic of qualitative data analysis
involve uncovering and discovering themes that run
through raw data, and interpreting the implications of
those themes for the research questions
127. More on the QDA Process
Qualitative data analysis generally involves
moving through cycles of inductive and deductive
reasoning
thematic exploration (based on words, concepts, literary
devices, and nonverbal cues)
exploration of the interconnections among themes.
128. Specialist QDA Strategies
There are a number of paradigm- and discipline-based
strategies for qualitative data analysis, including
content analysis
discourse analysis
narrative analysis
conversation analysis
semiotics
hermeneutics
grounded theory
129. Presenting Qualitative Data
Effective presentation of qualitative data can be a real
challenge
You’ll need to have a clear storyline, and selectively use
your words and/or images to give weight to your story
130. Drawing Conclusions
Your findings and conclusions need to flow from
analysis and show clear relevance to your overall
project
Findings should be considered in light of
significance
current research literature
limitations of the study
your questions, aims, objectives, and theory
131. Add Interpretation of Analysis
of Data
Include your interpretation:
What do the data MEAN with regard to that theme?
The “So what?” of the theme and/or data.
132. WHY DO WE ANALYZE DATA?
The purpose of analysing data is to obtain usable and
useful information.
The analysis, irrespective of whether the data are
qualitative or quantitative, may:
• describe and summarise the data
• identify relationships between variables
• compare variables
• identify differences between variables
• forecast outcomes
133. SCALES OF MEASUREMENT
Many people are confused about what type of analysis
to use on a set of data and the relevant forms of
pictorial presentation or data display.
The decision is based on the scale of measurement of
the data.
These scales are:
nominal,
ordinal, and
numerical.
134. Nominal scale
A nominal scale is where:
The data can be classified into non-numerical or
named categories, and
The order in which these categories can be written or
asked is arbitrary.
135. Ordinal scale
An ordinal scale is where:
the data can be classified into non-numerical or
named categories, and an inherent order exists among
the response categories.
Ordinal scales are seen in questions that call for
ratings of quality (for example, very good, good, fair,
poor, very poor) and agreement (for example, strongly
agree, agree, disagree, strongly disagree).
136. Numerical scale
A numerical scale is one where:
numbers represent the possible response categories;
there is a natural ranking of the categories;
zero on the scale has meaning; and
there is a quantifiable difference within categories
and between consecutive categories.
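A brief pandas sketch of how the three scales can be represented in code, on invented data:

```python
# Nominal, ordinal, and numerical data represented with pandas dtypes.
import pandas as pd

# Nominal: named categories whose order is arbitrary.
blood_group = pd.Categorical(["A", "O", "B", "O", "AB"])

# Ordinal: named categories with an inherent order.
rating = pd.Categorical(
    ["good", "poor", "very good", "fair"],
    categories=["very poor", "poor", "fair", "good", "very good"],
    ordered=True,
)

# Numerical: numbers where differences (and, on a ratio scale,
# a meaningful zero) can be computed.
weight_kg = pd.Series([61.5, 70.2, 58.9, 82.0])

print(rating.max())      # ordering is defined: 'very good'
print(weight_kg.mean())  # arithmetic is meaningful
```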
137. QUALITATIVE ANALYSIS
"Data analysis is the process of bringing order,
structure and meaning to the mass of collected data. It
is a messy, ambiguous, time-consuming, creative, and
fascinating process.
It does not proceed in a linear fashion; it is not neat.
Qualitative data analysis is a search for general
statements about relationships among categories of
data."
138. Simple qualitative analysis
• Unstructured – not directed by a script. Rich but
not replicable.
• Structured – tightly scripted, often like a
questionnaire. Replicable but may lack richness.
• Semi-structured – guided by a script, but interesting
issues can be explored in more depth. Can provide a
good balance between richness and replicability.
139. Simple qualitative analysis
• Recurring patterns or themes
– emergent from the data; dependent on the observation
framework, if one is used
• Categorizing data
– the categorization scheme may be emergent or
pre-specified
• Looking for critical incidents
– helps to focus in on key events
140. TOOLS TO SUPPORT DATA ANALYSIS
• Spreadsheets – simple to use, basic graphs
• Statistical packages, e.g. SPSS
• Qualitative data analysis tools
– categorization and theme-based analysis, e.g. N6
– quantitative analysis of text-based data
141. Interpreting research results
Researchers should describe their results clearly, and
in a way that other researchers can compare them with
their own results. They should also analyse the results,
using appropriate statistical methods to try to
determine the probability that they may have been
chance findings, and may not be replicable in larger
studies. But this is not enough.
142. Results need to be interpreted in an objective and
critical way, before assessing their implications and
before drawing conclusions.
Interpretation of research results is not just a concern
for researchers.
Policymakers should also be aware of the possible
pitfalls in interpreting research results and should be
cautious in drawing conclusions for policy decisions.
143. Interpreting descriptive statistics
The mean or average is only meaningful if the data follow
a normal distribution curve, that is, if they are symmetrically
distributed around the mean.
The mean or average, by itself, has a limited value.
There is an anecdote about a man having one foot on ice
and the other in boiling water; statistically speaking, on
average, he is pretty comfortable.
The range of the data, and their distribution (expressed in
the standard deviation) must be known.
It is sometimes more important to know the number or
percentage of subjects or values that are abnormal than to
know the mean.
145. Significance of Report Writing
Research report is considered a major component of
the research study for the research task remains
incomplete till the report has been presented and/or
written.
As a matter of fact, even the most brilliant hypothesis,
the most carefully designed and conducted research study,
and the most striking generalizations and findings are
of little value unless they are effectively communicated
to others.
146. The purpose of research is not well served unless the
findings are made known to others.
Research results must invariably enter the general
store of knowledge.
All this explains the significance of writing the research
report.
There are people who do not consider writing of report
as an integral part of the research process.
But the general opinion is in favour of treating the
presentation of research results or the writing of
report as part and parcel of the research project.
147. Writing of report is the last step in a research study
and requires a set of skills somewhat different from
those called for in respect of the earlier stages of
research.
This task should be accomplished by the researcher
with utmost care; he/she may seek the assistance and
guidance of experts for the purpose.
148. Different Steps in Writing Report
Research reports are the product of slow, painstaking,
accurate inductive work.
The usual steps involved in writing report are:
(a) logical analysis of the subject-matter;
(b) preparation of the final outline;
(c) preparation of the rough draft;
(d) rewriting and polishing;
(e) preparation of the final bibliography; and
(f) writing the final draft.
149. Though all these steps are self-explanatory, a brief
mention of each one of them will be appropriate for
better understanding.
Logical analysis of the subject matter.
It is the first step which is primarily concerned with
the development of a subject.
150. There are two ways in which to develop a subject—
(a) logically and
(b) chronologically.
The logical development is made on the basis of
mental connections and associations between one
thing and another by means of analysis.
Logical treatment often consists in developing the
material from the simplest possible to the most complex
structures.
151. Chronological development is based on a connection
or sequence in time or occurrence.
The directions for doing or making something usually
follow the chronological order.
Preparation of the final outline. It is the next step in
writing the research report.
“Outlines are the framework upon which long written
works are constructed.
They are an aid to the logical organisation of the
material and a reminder of the points to be stressed in
the report.”
152. Preparation of the rough draft
This follows the logical analysis of the subject and the
preparation of the final outline.
Such a step is of utmost importance, for the researcher now
sits down to write what he/she has done in the context of
his/her research study.
He/she will write down the procedure adopted in collecting
the material for the study, along with the various
limitations faced, the technique of analysis adopted, the
broad findings and generalizations, and the various
suggestions he/she wants to offer regarding the problem
concerned.
Rewriting and polishing of the rough draft. This step
happens to be the most difficult part of all formal writing.
153. Usually this step requires more time than the writing
of the rough draft. The careful revision makes the
difference between a mediocre and a good piece of
writing.
While rewriting and polishing, one should check the
report for weaknesses in logical development or
presentation.
154. The researcher should also “see whether or not the
material, as it is presented, has unity and cohesion;
does the report stand upright and firm and exhibit a
definite pattern, like a marble arch? Or does it
resemble an old wall of moldering cement and loose
bricks.”
In addition, the researcher should give due attention to
whether or not the rough draft is consistent. He/she
should also check the mechanics of writing: spelling and
usage.
155. Preparation of the final bibliography. Next in order
comes the task of the preparation of the final
bibliography.
The bibliography, which is generally appended to the
research report, is a list of books in some way
pertinent to the research which has been done. It
should contain all those works which the researcher
has consulted.
156. The bibliography should be arranged alphabetically
and may be divided into two parts; the first part may
contain the names of books and pamphlets, and the
second part may contain the names of magazine and
newspaper articles.
Generally, this pattern of bibliography is considered
convenient and satisfactory from the reader's point of
view, though it is not the only way of presenting a
bibliography.
The two main ways of summarizing data are by using tables and charts or graphs.
A table is the simplest way of summarizing a set of observations. A table has rows and columns containing data, which can be in the form of absolute numbers or percentages, or both.
Charts and graphs are visual representations of numerical data and, if well designed, will convey the general patterns of the data.
To make your graphics as self-explanatory as possible, there are several things to always include:
Every table or graph should have a title or heading
The x- and y-axes of a graph should be labeled – include value labels, such as a percentage sign; include a legend
Always cite the source of your data and include the date the data were collected or published
Provide the sample size or the number of people to which the graph is referring (N)
Include a footnote if the graphic isn’t self-explanatory
These points will pre-empt questions and explain the data. In the next several slides, we’ll see examples of these points.
Let’s start with tables. Most tables show a frequency distribution, which is a set of categories with numerical counts. Here, you see the year as the category and the number of births as the numerical count.
What should be added to this table to provide the reader with more information?
Note to facilitator: Wait for a participant response before answering.
Answer – Title
Answer – Data source
Another common way to summarize data is with relative frequency – which is the percentage of the total number of observations that appear in that interval.
It is computed by dividing the number of values within an interval by the total number of values in the table, then multiplying by 100 to get the percentage.
In this table, you see the proportion of the total number of births between 1900 and 1929 (132) by 10-year intervals.
The calculation for the first relative frequency is: 35/132 = 0.265, and 0.265 x 100 = 26.5% (approximately 27%).
To interpret this table, we should look at the relative frequencies. What do they tell us?
We can see data across the three decades and what percentage of births occurred in each one. The largest percentage of children were born between 1920 and 1929, compared to the other two decades.
We can interpret the data further by calculating the average or the mean number of births across 30 years. This will give us a summary of the data.
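A small sketch of the relative-frequency calculation: the first count (35) and the total (132) come from the table discussed above, while the split of the remaining births across the other two decades is assumed for illustration.

```python
# Relative frequency = count in interval / total count * 100.
births = {"1900-1909": 35,   # from the table above
          "1910-1919": 42,   # assumed for illustration
          "1920-1929": 55}   # assumed for illustration
total = sum(births.values())  # 132, as in the table

for decade, count in births.items():
    print(f"{decade}: {count / total * 100:.1f}%")
```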
Although they are easier to read than tables, charts provide less detail. The loss of detail, however, may be compensated for by a better understanding of the data.
We’re going to review the most commonly used charts and graphs in Excel/PowerPoint. Later, we’ll have you use data to create your own graphics, which may go beyond those presented here.
Bar charts are used to compare data across categories.
Line graphs are used to display trends over time.
Pie charts show percentages or the contribution of each value to a total.
A pie chart displays the contribution of each value to the total. In pie charts, the values always add up to 100.
In this case, we used the chart to show the contribution of patients enrolled each quarter to the total enrollment for the year. For example, the first quarter contributed the largest percentage (59%) of enrolled patients.
In this bar chart, we’re comparing the categories of data, which are the different sites. You see a comparison between sites by quarters and between quarters over time.
What should be added to this chart to provide the reader with more information?
NOTE to facilitator: Wait for a participant response before answering (and then show next slide).
On the next slide, we see how the graph has been improved and is now self-explanatory.
You see we’ve added a title. By adding a title, you know the population to which the graph is referring.
We’ve added labels for each axis. Labeling the y-axis (vertical) was critical because now we know that the values are percentages rather than absolute numbers.
We’ve added the source of the data – this let’s us know from where the data are derived and where to find additional information about this topic.
And we’ve clarified the quarters with months.
Now let’s interpret this chart.
You will note that we have added the target for the number of new enrollees tested for HIV.
The target is to test 50% of new enrollees at each site in each quarter.
We see that sites 1 & 3 have met their targets, but that site 2 has not; it is at 30% new enrollees tested. What percentage of the target has this site met?
NOTE to facilitator: Wait for a participant response before answering.
30/50 = 0.6 or 60%
A stacked bar chart is often used to represent components of a whole and compare the wholes (or multiple values).
Here, you see the number of months female and male patients have been enrolled in HIV care, by age group. By looking within each bar, you see the age breakdown by gender, and by looking at both bars together, you can compare the number of months enrolled for both males and females.
A line graph should be used to display trends over time. While bar charts are also useful for showing time trends, line graphs are particularly useful when there are many data points. In this case, we have four data points for each clinic.
Here, you see the number of clinicians working in each clinic during years 1–4. You will note the asterisk in the title. This asterisk clarifies the definition of clinician to include both doctors and nurses.
What can be added to this graph to make it more clear?
NOTE to facilitator: Wait for a participant response before answering. After someone participates, go to next slide.
Data source is added and the actual years are defined.
Data interpretation is the process of making sense of the information. It allows us to ask: What does this information tell me about the program?
Here, you see a flow chart of the steps involved in interpreting data …
NOTE to facilitator: Read the steps outlined in the diagram.
We start by wanting to know the relevance of our findings. Seeking the relevance of a finding is to:
NOTE to facilitator: Read slide.
When interpreting data and seeking the relevance of our findings, we may ask these questions:
NOTE to facilitator: Read slide.
Asking these questions will help you to put the data in the context of your program.
When seeking potential reasons for the finding, we often will need additional information that will put our findings into the context of the program.
Supplementing the findings with expert opinion is a good way to do this. For example, talk to others with knowledge of the program or target population, who have in-depth knowledge about the subject matter, and get their opinions about possible causes.
For example, if your data show that you have not met your targets, you may want to know:
Is the community aware of the service? To answer this, you could talk to community leaders or other providers to get their opinions.
Sometimes ad hoc conversations with experts are insufficient. To get a more accurate explanation of your findings, you often will have to consider other data resources.
Let’s go back to the finding of ‘the program has not met its annual target’. Can we understand why this is happening by looking at other program indicators?
You may want to calculate the nurse-to-client ratio to determine if the facility is sufficiently staffed to meet the client load.
You also may want to review commodity data with client load to determine if there are shortages of commodities.
While it is important to consider other indicators in your analysis, remember – descriptive statistics do not show causality. In these cases, look at other data sources.
Other data sources include:
NOTE to facilitator: Read slide.
Once you review additional data, it may become apparent that these data are not sufficient to explain the reasons for your findings – that a data gap exists. In these instances, it may be necessary to conduct further research.
The types of research designs that are applied will depend on the questions that need to be answered, and of course will be tempered by the feasibility and expense involved with obtaining the new data.