This document outlines a research study exploring the lived experiences of male Health Visitors. It discusses the rationale for the study: previous research identified a shortage of male professionals in child and family health services. The study aims to understand what it is like for men working as Health Visitors. It uses interpretative phenomenological analysis to interview male Health Visitors and analyse the themes from their perspectives. The methodology is described, as is the multi-step data analysis process following an interpretative phenomenological approach. Issues of bias, rigour and trustworthiness in the research are also addressed.
Sensitivity: Internal
Outline of the Session
• Discuss data analysis: its importance, types and process, briefly
• Discuss common pitfalls to avoid in data analysis
• Discuss data analysis considerations, with an example from current research
Data Analysis
Data analysis is defined as the process of cleaning, transforming and modelling data to discover useful information for decision making.
There are various data analysis tools which can be used to process and manipulate data and to analyse the relationships and correlations between data sets, which helps to identify patterns and trends for interpretation.
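The clean-transform-model definition above can be sketched in a few lines of Python. This is a minimal illustration, not a real pipeline, and the records and field name are invented:

```python
from statistics import mean

# Hypothetical raw records: some values are missing or stored as strings.
raw = [
    {"visits": "3"},
    {"visits": None},  # missing value, to be dropped during cleaning
    {"visits": "5"},
    {"visits": "4"},
]

# Cleaning: drop records with missing values.
cleaned = [r for r in raw if r["visits"] is not None]

# Transforming: convert text values to numbers so they can be modelled.
visits = [int(r["visits"]) for r in cleaned]

# Modelling (trivially, here): summarise to support a decision.
print(mean(visits))  # average number of visits across complete records
```

Real tools (spreadsheets, statistical packages) automate these same three stages at scale.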
Importance of Analysing Data
• Describe and summarise the data
• Identify relationships between variables
• Compare and differentiate between variables
• Forecast outcomes
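As a minimal illustration of forecasting outcomes from past data, the sketch below fits a straight line by least squares and extrapolates one period ahead. The data points are invented for illustration:

```python
# Least-squares fit of a straight line y = a + b*x, then a one-step forecast.

def fit_line(xs, ys):
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return a, b

xs = [1, 2, 3, 4]   # e.g. time periods
ys = [3, 5, 7, 9]   # observed outcomes (exactly linear here: y = 1 + 2x)
a, b = fit_line(xs, ys)
forecast = a + b * 5   # forecast the next period
print(forecast)        # 11.0
```

With real, noisy data the fitted line would only approximate the points, and the forecast would carry uncertainty.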
Types of Data Analysis
• Text Analysis
• Statistical Analysis
• Diagnostic Analysis
• Predictive Analysis
• Prescriptive Analysis
Data Analysis Activity (True or Myth)
For each statement, select the most relevant answer: True (T) or Myth (M).
1. Common analysis and big words impress people.
2. Analysis comes at the end, after all the data is collected and collated.
3. Quantitative analysis is the most accurate type of data analysis.
4. Data have their own meaning.
5. Stating limitations to the analysis weakens the evaluation.
6. Computer analysis is always easier and better.
Errors in Methodology
• Designing experiments with insufficient statistical power
• Ignoring measurement error
• Performing multiple comparisons
Problems with Interpretation
• Determining the significance of certain findings
• Avoiding confusion between precision and accuracy
• Unravelling the causal relationship among variables
The Research Study
Title of investigation:
An exploration of the lived experience of male Health Visitors: an Interpretative Phenomenological Analysis.
Aim of the investigation:
The aim of the study is to explore the lived experience of being a registered male Health Visitor.
Objectives of the investigation:
1. To explore the lived experience of the male Health Visitor.
2. To investigate why these men chose Health Visiting as a profession.
3. To consider the impact on service delivery from a male Health Visitor perspective.
Rationale
• A previous study (2010) revealed a shortage of men working in Child and Family Health Care Services whom fathers could relate to.
• As a Health Visitor myself, I looked at this field of practice.
• A gap in the literature on men working as Health Visitors was noted.
Why this methodology?
• Exploration was needed to discover what it must be like for men working in the field of Health Visiting (inductive).
• I wanted to know how it feels for them (relativist).
• To gather their perspective and the meanings they give to their experience (subjectivist and constructivist, through an insider lens).
• To utilise the double hermeneutic: interpreting the participants' interpretations of the phenomenon.
• Finally, to use a robust method of data analysis: Interpretative Phenomenological Analysis (IPA).
Ref: NHS Institute for Innovation and Improvement. Available at: http://www.institute.nhs.uk/quality_and_service_improvement_tools/quality_and_service_improvement_tools/plan_do_study_act.html (accessed 29 September 2015).

The Plan-Do-Study-Act (PDSA) cycle mirrors the research process:

Plan
• Thematic analysis of the literature search to formulate questions
• Decide the research approach
• Timescales for the ethics committee

Do/Implement
• Apply for ethics approval
• Formulate questions
• Invite participants
• Complete semi-structured interviews (n=6)
• Tape and transcribe verbatim
• Pilot study

Study
• Read and reread
• Analyse data
• Write up

Act
• Evaluate process and outcome
• Submit paper for publication
• Make recommendations
• Disseminate findings
Main Study
Study inclusion criteria:
• Be a Nursing and Midwifery Council (NMC) Registered Health Visitor
• Be male
Securing the right sample
• Initial difficulties
Sample size
• n=11, from across England
Data analysis
Smith, Flowers and Larkin (2009, pp. 82-107) present a clear six-stage methodical approach to data analysis in their text, allowing the researcher to follow the Interpretative Phenomenological Analysis (IPA) process step by step.
• Step 1. 'Thorough reading' of the transcripts, over and over, to immerse oneself in the data.
• Step 2. 'Initial noting': considering concepts, description and linguistics, while noting comments and themes in the left and right margins respectively.
• Step 3. 'Developing emergent themes': adding the transcripts and notes to create a substantive set of data, then moving toward data reduction by merging, mapping and linking patterns through a synergistic process to create a new totality.
• Step 4. 'Searching for connections across emergent themes': both similarities and diversities, significant events, frequently used references and purposes.
• Step 5. 'Moving to the next case': repeating the process to create individual themes and new themes.
• Step 6. 'Looking for patterns across the cases': potentially charting or mapping graphically.
Bias! Where is it?
• My gender bias?
• My cultural bias?
• The distribution of the participants?
• Personal contact and personality?
• Their gender bias?
• Their cultural bias?
• The themes I choose to draw out?
• The conclusions I decide to make?
• Where I choose to publish?
Rigour, Trustworthiness and Authenticity
• Rigour relates to the quality of the research: thoroughness and accuracy.
• Keeping a detailed research diary (evolving information technology, e.g. emails, folders).
• Sticking to the ethical parameters agreed.
• Following structured analysis.
• Triangulation with insider knowledge, the pilot study and academic theories.
Text analysis, also known as data mining, is a method of discovering patterns in large data sets using databases. It is used to transform raw data into useful information. Overall, it offers a way to extract and examine data, derive patterns from it, and finally interpret the data.
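A toy first pass at text analysis is counting word frequencies: turning raw text into counts from which patterns can start to emerge. The sample sentence below is invented:

```python
from collections import Counter
import re

# Normalise and tokenise the text, then count word frequencies.
text = "Fathers said the male health visitor listened. The visitor listened well."
words = re.findall(r"[a-z]+", text.lower())  # lowercase words only
freq = Counter(words)
print(freq.most_common(2))  # the two most frequent words with their counts
```

Real text mining goes far beyond frequency counts, but the extract-examine-interpret shape is the same.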
Statistical analysis shows WHAT HAPPENED, using past data, often presented in dashboards.
There are two categories of this type: descriptive analysis (analyses complete data, or a sample of summarised numerical data; it usually reports the mean and standard deviation for continuous data, and percentages and frequencies for categorical data) and inferential analysis (analyses a sample drawn from the complete data; note that different conclusions can be reached from the same data by selecting different samples).
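The descriptive summaries just mentioned, mean and standard deviation for continuous data, frequencies and percentages for categorical data, can be sketched as follows. The sample values are invented:

```python
from statistics import mean, stdev
from collections import Counter

# Hypothetical sample: one continuous and one categorical variable.
ages = [34, 29, 41, 38, 33]                             # continuous
regions = ["North", "South", "North", "East", "North"]  # categorical

# Continuous data: report the mean and (sample) standard deviation.
print(round(mean(ages), 1), round(stdev(ages), 1))

# Categorical data: report frequencies and percentages.
counts = Counter(regions)
for category, n in counts.items():
    print(category, n, f"{100 * n / len(regions):.0f}%")
```

Note that `statistics.stdev` is the sample standard deviation (n−1 denominator); `statistics.pstdev` would give the population version.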
Diagnostic analysis shows WHY it happened, by finding the cause behind the insights found in statistical analysis. It is useful for identifying behavioural patterns in data. Predictive analysis shows what is LIKELY to happen: it makes predictions about future outcomes based on current or previous data.
Prescriptive analysis combines insights from all the previous analyses to determine which action to take on a current problem or decision.
To analyse well, you need:
• The right data for answering your question
• To be able to draw accurate conclusions from that data
• To interpret that data to inform your decision-making process
The process:
• Define your question: start with a clearly defined problem.
• Set clear measurement priorities: decide what to measure and how to measure it.
• Collect data: consider what to collect, how to store it, and consistency in recording and organising the data.
There are a number of ways that statistical techniques can be misapplied to problems in the real world, and these errors can lead to invalid or inaccurate results. Three of the most common hazards are designing experiments with insufficient statistical power, ignoring measurement error, and performing multiple comparisons.
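The multiple-comparisons hazard is easy to quantify: with m independent tests each run at significance level alpha, the chance of at least one false positive (the family-wise error rate) is 1 − (1 − alpha)^m. A short calculation, with the standard Bonferroni correction as one remedy:

```python
# Family-wise error rate for m independent tests at level alpha.
alpha = 0.05
m = 20
fwer = 1 - (1 - alpha) ** m
print(round(fwer, 2))  # ~0.64: a false positive is more likely than not

# Bonferroni correction: test each comparison at alpha/m instead,
# keeping the family-wise error rate at or below alpha.
corrected = alpha / m
fwer_corrected = 1 - (1 - corrected) ** m
print(round(fwer_corrected, 3))  # back under alpha
```

So running twenty uncorrected tests at the 5% level makes a spurious "significant" finding the most likely outcome, which is why corrections exist.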
Two types of error can occur when making inferences based on a statistical hypothesis test: a Type I error happens if the null hypothesis is rejected when it should not be (the probability of this is called "alpha"), and a Type II error results from the failure to reject a null hypothesis when you should (the probability of this is called "beta").
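Alpha can be seen directly by simulation: generate many data sets for which the null hypothesis is true and count how often a test rejects it anyway. A minimal sketch using a two-sided z-test with known variance:

```python
import math
import random
from statistics import mean

# Simulate experiments where the null is true (samples from N(0, 1), true
# mean 0) and count how often a two-sided z-test at the 5% level rejects.
random.seed(42)  # fixed seed for reproducibility
n, trials, rejections = 30, 2000, 0
for _ in range(trials):
    sample = [random.gauss(0, 1) for _ in range(n)]
    z = mean(sample) / (1 / math.sqrt(n))  # known sigma = 1
    if abs(z) > 1.96:                      # critical value for alpha = 0.05
        rejections += 1                    # a Type I error

print(rejections / trials)  # rejection rate, roughly 0.05
```

The observed rejection rate hovers around alpha = 0.05, exactly as the definition of a Type I error predicts.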
The difference between "significance" in the statistical sense and "significance" in the practical sense continues to elude many consumers of statistical results. Statistical significance is really as much a function of sample size and experimental design as it is of the strength of a relationship. With low power, a researcher may overlook a useful relationship; with excessive power, one may find microscopic effects that have no real practical value. A reasonable way to handle this is to cast results in terms of effect sizes (see Cohen, 1994); that way the size of the effect is presented in terms that make quantitative sense.

Precision and accuracy are two concepts that frequently get confused. It is a subtle but important distinction: precision refers to how finely an estimate is specified, whereas accuracy refers to how close an estimate is to the true value. Estimates can be precise without being accurate, a fact often glossed over when interpreting computer output containing results specified to the fourth, sixth or eighth decimal place. Therefore, one should not report any more decimal places than one is fairly confident reflect something meaningful.

Assessing causality is the goal of much statistical analysis, yet its subtleties escape many statistical consumers. To support a causal inference, one must have random assignment: the experimenter must be the one assigning values of the predictor variables to cases. If the values are not assigned or manipulated, the most one can hope for is to show evidence of a relationship of some kind. Observational studies are very limited in their ability to illuminate causal relationships. Of course, many of the things that are of interest to study are not subject to experimental manipulation (e.g. health problems and risk factors).
In order to understand them in a causal framework, a multifaceted approach to the research (you might think of it as "conceptual triangulation"), the use of chronologically structured designs (placing variables in the roles of antecedents and consequents), and plenty of replication are required before coming to any strong conclusions regarding causation.
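One widely used effect size of the kind Cohen (1994) advocates is Cohen's d: the difference between two group means divided by the pooled standard deviation. A small sketch, with invented sample data:

```python
import math

def cohens_d(group1, group2):
    """Cohen's d: standardised difference between two group means."""
    n1, n2 = len(group1), len(group2)
    m1 = sum(group1) / n1
    m2 = sum(group2) / n2
    # Sample variances (n - 1 denominator), then the pooled SD.
    v1 = sum((x - m1) ** 2 for x in group1) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in group2) / (n2 - 1)
    pooled_sd = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

treated = [5.1, 4.9, 5.6, 5.2, 5.4]  # hypothetical scores
control = [4.5, 4.7, 4.4, 4.8, 4.6]
print(round(cohens_d(treated, control), 2))
```

Unlike a p-value, d does not grow with sample size; it expresses the difference in standard-deviation units, so readers can judge practical importance directly.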