This document discusses qualitative analysis approaches. It begins by outlining inductive analysis, where evaluators interpret raw data to discover concepts and themes from the bottom up. Deductive analysis is also covered, where data is analyzed according to prior assumptions from the top down. The document then provides steps for basic inductive and deductive text analysis. Examples of applying each approach are given. Issues in qualitative analysis like subjectivity and generalizability are also mentioned. In the end, references are provided to support the information discussed.
13. [Photo slide] Photo by Matt. Create. (Creative Commons Attribution-NonCommercial-ShareAlike License), https://www.flickr.com/photos/76583692@N00. Created with Haiku Deck.
17. Doing Qualitative
Basic Text Analysis: Inductive
Use data to discover concepts, themes, or models
Evaluator as interpreter; highly involved
Emergent, “bottom up”
Qualitative outcome: key themes or categories
relevant to evaluation/research questions
17
28. Doing Qualitative
Step 1. Collect and organize your raw data
Considerations:
• Number of collection points
• Transcription
• Audit trail
• Research journal
• Participant key/aliases/anonymity
28
29. Doing Qualitative
Step 1. Collect and organize your raw data
End results:
• Clean, anonymized data files
• Transcription files
• Audit trail
• Participant key
• Research journal (including protocols for all of
the above)
29
39. Doing Qualitative
Basic Text Analysis: Deductive
Data is analyzed according to prior assumptions
Evaluator is “independent” from data
A priori; “top down”
Quantitative outcome: metrics relevant to
evaluation/research objectives
39
40. Doing Qualitative
Application: Deductive Analysis
• Category comparison, comparison over time
• Answers to survey questions across participants
• Answers to interview questions across participants
• Analyzing webinar chat pods
• Social media: hashtag use in Twitter,
Facebook/LinkedIn audience engagement
40
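If you want to move beyond eyeballing hashtag use, the same deductive logic can be scripted. The sketch below counts a-priori hashtags across a batch of posts; the posts and hashtags are invented for illustration, and a real workflow would first export posts from your platform of choice.

```python
import re
from collections import Counter

def count_hashtags(posts, tracked):
    """Count occurrences of pre-defined (a priori) hashtags across posts."""
    counts = Counter({tag: 0 for tag in tracked})
    for post in posts:
        for tag in re.findall(r"#\w+", post.lower()):
            if tag in counts:
                counts[tag] += 1
    return counts

# Invented example posts and hashtags:
posts = [
    "Great webinar today! #MFLN #FamilyScience",
    "Resources from the session are posted. #MFLN",
    "Looking forward to next month. #familyscience",
]
print(count_hashtags(posts, ["#mfln", "#familyscience"]))
```

Because the categories (the hashtags) are fixed before you look at the data and the output is a count, this is deductive analysis in miniature; the write-up of what the counts mean is still yours to do.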
41. Doing Qualitative
Basic Deductive Analysis: 5 Steps
1. Develop data categories.
2. Clearly define those categories.
3. Read through all raw data and apply categories.
4. Count.
5. Narrative and visual analysis.
41
42. Doing Qualitative
Chat Pod Engagement Metrics
[Horizontal bar chart, scale 0–25, showing counts for: unique chat pod participants; resources shared by participants; resources shared by MFLN; participant questions; unique participant-to-participant exchanges. Bar values: 21, 17, 10, 5, and 0.]
42
43. The fine print….
Only DCO viewers can participate in the chat pod; the percentage of chat pod participants is based on the total
number of DCO viewers and the total number of unique participants.
Resources shared by participants include shared links, authors, studies, books, etc.; demonstrates high-level
engagement because participants are contributing to the co-construction of knowledge during webinar.
Resources shared by MFLN include links, peer-reviewed studies and books, etc., from both MFLN and non-
MFLN authors; demonstrates direct CA engagement with participants by further supporting and
contextualizing knowledge construction by situating webinar presentation within the larger disciplinary area.
Participant questions are those listed in the chat pod; demonstrates intent to pursue two-way engagement in
webinar and therefore high-level engagement.
Unique participant to participant exchanges are those in which chat pod participants respond directly to one
another’s comments; demonstrates high-level engagement through realized reactive (two-way) and interactive
(dependent) discourse patterns.
Chat pod text related to webinar content is not captured as an engagement measure due to its discursive
category as declarative (one-way) communication. (It is noted, however, that declarative text is still
understood to indicate webinar engagement, and MFLN encourages and values such participant
engagement.)
Chat pod text related to technical issues and/or CEUs is not included in MFLN evaluation.
43
48. References
Davies, C. A. (2008). Reflexive ethnography: A guide to researching selves
and others (2nd ed.). New York and London: Routledge.
Denzin, N. K., & Lincoln, Y. S. (2011). The Sage handbook of
qualitative research (4th ed.). Thousand Oaks, CA: Sage.
Patton, M. Q. (2014). Qualitative research & evaluation methods (4th
ed.). Thousand Oaks, CA: Sage.
Richardson, L., & St. Pierre, E. A. (2005). Writing: A method of
inquiry. In N. K. Denzin & Y. S. Lincoln (Eds.), The Sage
handbook of qualitative research (3rd ed., pp. 959–97).
Thousand Oaks, CA: Sage.
48
49. Photographs by Haiku Deck:
http://www.haikudeck.com. Haiku
Deck is licensed by Creative
Commons 3.0.
Icons made by Freepik:
http://www.flaticon.com. Flaticon is
licensed by Creative Commons 3.0.
49
Situated: this is work that locates the researcher/evaluator in the world.
Interpretative practices: the goal is to make sense of people and phenomena and experiences, and the meaning people bring to them. It’s a meaning-making practice.
Michael Quinn Patton (MQP), in Qualitative Research & Evaluation Methods, lists seven contributions of qualitative inquiry. This is certainly not an exhaustive list, but here you go:
Illuminating meanings
Understanding how things work
Capturing stories to understand individual’s perspectives and experiences
Understanding how systems function and their consequences for people’s lives
Understanding context
Identifying unanticipated consequences
In qualitative program evaluation, you are telling the story of the program by telling the stories of the program participants.
Ok, so where are we headed today?
First I’m going to talk about a few considerations to make as you approach qualitative work, including theoretical frameworks and what it means to be situated within a qualitative field of inquiry.
Then I’m going to go into some very basic strategies for actually doing qualitative analysis work. The idea here is that they are practical, ready-to-use strategies for you and your agents.
And we’ll end by addressing issues in qualitative work. I’ll address things like credibility in qualitative work; ethics; and how to use your qualitative data.
Skills:
I think this is where many people get tripped up when thinking about doing qualitative work: they feel they don’t have the training, they’re not “certified,” they don’t have the right degree, etc. This is not true.
There are two soft skills you need to do qualitative analysis: these are skills that are not easy to teach, and they are somewhat subjective. But in my opinion, you need:
Pattern-recognition skills
Organizational skills
If you’re a human being, you are already pretty good at pattern recognition. Just by being alive and interacting in a social world, we are connecting patterns, and therefore meaning, to things, to places, to others, to emotions, etc. We can differentiate and pull apart a complex and changing world. This is pattern recognition and meaning making at its most basic.
Organization is big. In fact, this might be the more daunting of the two skill sets.
Your theoretical framework is going to inform what you see and how you see it.
In the context of program evaluation, we’re talking about your logic model, your program map, your theory of change, or however you articulate what you believe your program will accomplish given any number of variables. This is your set of beliefs about your program, and how you believe your program is going to impact those who participate in it. It might be based on existing scholarly knowledge, it might be based on programmatic experience, it might be based on principles, or on hopeful, intended outcomes, as when doing an intervention. Regardless, your framework is likely going to be in place before you begin qualitative evaluation and analysis.
This is necessary, because it grounds, it guides your work when you get to the analysis stage.
However, be sure that your framework doesn’t blind you to other things that might emerge from analysis. So your framework is necessary to get started, it guides and informs your analysis, but it should also not limit your analysis. So it’s important to keep that in mind.
The next thing to keep in mind is the situated quality of doing this type of work.
Aside from your theoretical or conceptual framework, your own beliefs, worldviews, and knowledge are going to inform your data analysis. This is part and parcel of doing qualitative work. This is where you might get some criticism of qualitative work being “soft” and “subjective.” I’ll address those issues later. But the point I want to make now is that the qualitative evaluator is not objectively separate from the analysis, and it’s important to acknowledge this up front, during, and after analysis.
You often hear social scientists talking about the “lens” through which they approach their work. This is part of that conversation. And the way you address those critiques of subjectivity is that you acknowledge your subjectivity, openly. It becomes a part of how your program evaluation is conceptualized, what methods you use, how you analyze your data, and even how you write it up and present it. So this is subjective work that is guided both by frames of knowledge, or logic (or principles), as well as by your own lived human experience, your beliefs, and the like.
Acknowledging this personal piece is called reflexivity. Charlotte Aull Davies, an ethnographer, describes reflexivity as a process of self-reference, of turning back on oneself through all stages of qualitative work. I’ll talk more about what reflexivity looks like during the analysis process shortly.
Inductive analysis has a few key features……
With inductive analysis, you are letting the data lead the way. Analysis is one of discovery.
Your role as the evaluator is to interpret. As such, you’re going to be highly involved. And as I’ve mentioned previously, your interpretation is going to be subjective. And again, I’ll talk later about how to add credibility to subjectivity in the context of qualitative analysis.
Inductive analysis means that concepts, themes, and models are emerging. You are not proceeding with data analysis in order to test a pre-existing theory or framework, but rather, you’re allowing for the formation of a theory, for an evaluative look at the success of your program.
And the end result of this process is typically thoroughly qualitative. It’s a narrative analysis that explains what you as the interpreter have discovered through the inductive process.
Don’t wait to organize your data until it’s all collected if you have multiple collection points.
If you’re doing focus groups or interviews, organizing while collecting means transcribing right away. As far as transcribing goes, if you are working in a group, you’ll have to decide if you want one person to transcribe, or if you’d rather distribute that work. Transcription can be very time consuming, so this is another great reason to get working on it right away if you have multiple collection points in your evaluation.
There are obvious advantages to spreading some of the transcription work around, but there are also sound reasons to leave this as the task of one person.
First, you want to be sure all your raw data has the same transcription approach: things like non-verbal cues if you’ve videotaped, verbal pauses, and non-word noises should all be addressed in the same way. Having one person do the transcribing is advantageous in this way, although you could decide ahead of time on a protocol as well.
Another advantage with one person doing the transcription is that you have at least one person in the group who is intimately familiar with all of the data. If you’ve ever done transcription work, you know that you’ll have people’s voices and memorable phrases emblazoned on your brain for a while. But this is an advantage in analysis, as you have one person who has spent that much more time with the whole body of primary data, rather than with just parts, or as just readers of the clean, secondary data file (which is what a transcription is).
If you’re working in a group, it’s important to discuss these things before data collection begins. Obviously as a group you can and should share often and widely, particularly as data is getting organized. Share everything: audio files, ongoing transcriptions, and first impressions, etc.
When you do your transcribing, I highly recommend you use the line numbering function in Word. This can be found under the “Format” tab, then “Layout.” You want to check “add line numbering,” and then make sure to also check “Continuous.”
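If your transcripts live outside Word, the same continuous numbering can be added with a short script. This is just an illustrative alternative; the sample transcript text is invented.

```python
def number_lines(text):
    """Prefix each transcript line with a continuous line number."""
    lines = text.splitlines()
    width = len(str(len(lines)))  # pad so numbers align in long transcripts
    return "\n".join(
        f"{i:>{width}}  {line}" for i, line in enumerate(lines, 1)
    )

# Invented two-line sample transcript:
raw = "Moderator: Welcome, everyone.\nP1: Thanks for having us."
print(number_lines(raw))
```

Stable line numbers give you something concrete to cite in your codes, memos, and audit trail, no matter which tool produced the transcript.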
Immediately begin your audit trail, which is important for organization but also a very important aspect of establishing credibility with qualitative work.
Your audit trail is simply a documentary record of all the choices/changes you make as you do your analysis. It really begins as you are designing your study: why you are doing a focus group instead of interviews, etc. It continues during data collection as you make transcription choices, allocate tasks, etc.
The audit trail is a product of your research processes. I like to think of it as an administrative record, as good housekeeping. It’s a documentary record. But it’s different than a research journal, which also needs to get established during the collection and organization process.
Unlike the audit trail, the research journal is not a product of your data collection; rather, it’s a PART of your data. The research journal is a place to begin recording initial reactions. This starts, for example, DURING the focus group or interviews, and it continues from there throughout the analysis and even writing process. These are your thoughts and reactions to the data. These thoughts may end up becoming codes, they may change your meaning-making process, and I guarantee this information will be extremely informative as your analysis progresses.
If you are working in a group, you’ll want to store this journal in a communal, editable platform like Google Docs so that everyone can access it.
Set an organization format that makes sense, perhaps by events or phases rather than by dates—you want to establish a macro-level structure that makes it easy for multiple people to comment on the same topics easily, so a “calendar” of sorts doesn’t make sense. All entries grouped under events or analysis phases should be dated, however, so that you have a nice, linear progression of thought. It’s also good to be sure, if you’re working with a group, that each entry is identified by its author. So whether everyone enters their initials, or uses the same color, or whatever, that’s important to establish right away.
And of course, if you’re working alone, a dated, calendar entry format can work just fine.
Before formal analysis gets under way, you also need to establish a participant key.
Whether you’ve given participants a chance to choose their own aliases, or if you’re using first names only, or assigning participant numbers, you need a master document to track this. I use Excel for this, creating a simple spreadsheet of participant names and aliases/numbers assigned. As you’re transcribing and taking research notes, you’ll want to be sure that you are referring to participant aliases from the very beginning. So any transcriptions need to have participant names replaced by aliases, numbers, etc., to make sure anonymity is established at the very beginning of analysis.
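The Excel participant key can also be mirrored in a few lines of code: build the name-to-alias mapping once, apply it to transcripts, and save the master key somewhere access-restricted. This is a simplified sketch (it does naive substring replacement, and the names are invented examples).

```python
import csv

def build_key(names, prefix="P"):
    """Assign sequential aliases (P1, P2, ...) to participant names."""
    return {name: f"{prefix}{i}" for i, name in enumerate(names, 1)}

def anonymize(text, key):
    """Replace every real name in a transcript with its alias.

    Naive substring replacement; a real workflow should also catch
    nicknames, misspellings, and names embedded in other words.
    """
    for name, alias in key.items():
        text = text.replace(name, alias)
    return text

def save_key(key, path):
    """Write the master participant key (store this file access-restricted)."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["name", "alias"])
        writer.writerows(key.items())

# Invented example names:
key = build_key(["Alice", "Bob"])
print(anonymize("Alice agreed with Bob.", key))  # → P1 agreed with P2.
```

Running the anonymization at transcription time, rather than later, is what ensures anonymity is established from the very beginning of analysis.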
In your organization of your raw data, you’re going to end up with several files:
Your cleaned, anonymized data files
Your audit trail
Your research journal
And your participant key
If you’re working with open-ended surveys you still want to clean them up: remove any identifying information, be sure aliases are assigned, and get them into a common format in Word.
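If you keep these end products in a digital project folder, a small script can scaffold a consistent structure up front. The folder names below are only an illustration, not a prescribed layout.

```python
from pathlib import Path

# Illustrative folder names for the end products of Step 1;
# adjust to your own project's conventions.
FOLDERS = [
    "01_raw_data",          # original recordings and exports (never edited)
    "02_transcripts",       # clean, anonymized transcription files
    "03_audit_trail",       # record of methodological choices and changes
    "04_participant_key",   # master alias spreadsheet (restrict access)
    "05_research_journal",  # dated analytical notes and reactions
]

def scaffold(root):
    """Create the evaluation project folders and return their paths."""
    paths = [Path(root) / name for name in FOLDERS]
    for p in paths:
        p.mkdir(parents=True, exist_ok=True)
    return paths
```

Setting this up before collection begins makes it much easier for a group to keep the audit trail, journal, and data files in predictable places.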
Reading through your data is the very first official level of analysis.
Get familiar with your data, gain an overall sense of what’s happening, content being discussed, things like tone, resonance, disagreement, etc.
Be sure to take notes in your research journal to establish a record of your thought processes as you read. Ultimately this will be helpful as you discuss your findings, and because it provides evidence that you have not simply read the data, written your reaction, and called it “analysis.” This is a part of inductive analysis, which is step-wise and systematic.
You are also going to want to start thinking about how you are going to track and organize your codes. I’ll show you one of my typical Excel strategies in the next slide, but you can also use the comment feature in Word, color coding in Word, and if you’re more of a visual/tactile thinker, use sticky notes, crayons, etc. Whatever works for you.
You’re going to take several runs at coding during inductive analysis. The idea is that you’re not coding just to get it over with. You’re coding to get maximum understanding. And because you’re doing emergent, bottom-up work here, you want to keep returning to your codes and the data to see if your perspectives change, if you get new insights, etc.
If you are working with larger data sets, the idea is that you begin coding as the data is collected. So if you’re doing one interview per month for three months, for example, you’re going to begin coding the first interview ASAP, do the second interview, start coding, and then go back and review your codes from the first interview to see if you have new insights given the new data, and you’ll continue in this fashion until you’re done. Even if you have a static data set that is collected once, you want to frequently go back and review what you coded at the outset in light of what you’re currently coding. This yields a very rich analysis.
I’m a big fan of constant-comparative analytical methods (Glaser and Strauss) because I think they do ensure a certain level of rigor and credibility: you’re not coding once, but several times, adjusting and taking notes in your research journal as you go. And you’ll want to track in your audit trail when you’ve gone back and recoded, as a record of this type of analysis. So in your spreadsheet, you might start with one code (or several) for one chunk of text, but then change it two days later. I like to use the comment function in Excel for a changed code. And you’ll also note this in your research journal, because this change will be data in and of itself. This is the spirit of constant-comparative analysis.
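One way to mirror this spreadsheet-plus-comments workflow is to give each coded chunk a built-in revision log, so every recoding pass feeds the audit trail automatically. This is a sketch of the idea, not a prescribed tool; the example text and codes are invented.

```python
from datetime import date

class CodedChunk:
    """A chunk of transcript text with its current code and a revision log."""

    def __init__(self, text, code):
        self.text = text
        self.code = code
        self.history = []  # (date, old_code, new_code, note) tuples

    def recode(self, new_code, note=""):
        """Change the code, keeping the old one for the audit trail."""
        self.history.append((date.today(), self.code, new_code, note))
        self.code = new_code

# Invented example chunk and codes:
chunk = CodedChunk("I only joined because the link was in the chat.", "technology")
chunk.recode("access-barriers", note="Second pass: this is about access, not tech.")
print(chunk.code, len(chunk.history))  # → access-barriers 1
```

The history list plays the role of the Excel comments: each recoding records when it happened, what changed, and why, which is exactly the kind of record constant-comparative work calls for.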
Higher-level, initial codes: often initially related to evaluation aims for your programming; this makes sense. As the evaluator, you have your logic model or your theory of change on your mind, you know your program goals and benchmarks, so these will inform your codes. This is great. But at some point, things are going to get a little richer and a little deeper, and start reflecting the actual content and dynamics of the discussion.
Lower-level codes: after multiple readings of data, may end up being related to actual words or phrases in the text.
Remember that not all text will be coded.
So you’re done coding, you’re sick of reading, you have lots of notes. The next step is to go back through your codes and check for overlap. This happens all the time when you’re doing open coding like this. So if you have several “technology” codes, take a look at them. The goal here is not necessarily to simplify, although a bit of simplification is a natural outcome of this step. This is another step in rigor: e.g., are there differences in your technology codes? If so, what are they? How does this change your coding?
Capture key aspects of data that have emerged (in your view) as most important/informative. Go back and read your notes! Make connections across codes. What themes are emerging? How do you know? This is where it all comes together. Take your time.
Writing is a level of analysis!
So you’ve gone through and read, coded, recoded, taken notes, created categories. Don’t go and put those categories into an Excel graph! WRITE about them! I am willing to bet that it is in this writing process that you gain your deepest insights. This is where your themes are going to emerge. Take your time here. And don’t forget to put a little bit of yourself in your analysis: that is, use “I,” briefly explain your process, tell your story as the interpreter of the data.
You are checking to see if the raw data you collected fits into a pre-established framework
Your role is simply to compare and contrast: Does the data here fit within this prior framework? How?
In deductive analysis you’re going to start with your categories. Notice the huge difference here vs. starting with your data and coding it. You can develop your categories without even collecting data when you do deductive analysis. Your categories should be related to your programming goals.
Then you need to clearly define those categories. Very clearly, such that nothing is left up for interpretation.
Then you apply categories to your data.
Then you count, and then you write
Again: Stay organized, and keep notes
Not all text may fall into categories.
You may choose to use inductive analysis for uncoded text.
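Steps 1 through 4 of the deductive process can be sketched directly in code: develop and define categories (the keyword definitions below are invented for illustration, not a real codebook), apply them to each chunk of raw text, and count. Step 5, the narrative and visual analysis, remains yours to write.

```python
from collections import Counter

# Steps 1-2: develop and clearly define categories.
# These keyword definitions are invented for illustration only.
CATEGORIES = {
    "resource-sharing": ["link", "article", "book", "study"],
    "questions": ["?"],
    "peer-exchange": ["agree", "thanks", "@"],
}

def categorize(chunk):
    """Step 3: return every category whose definition the chunk matches."""
    text = chunk.lower()
    return [cat for cat, markers in CATEGORIES.items()
            if any(m in text for m in markers)]

def count_categories(chunks):
    """Step 4: tally category occurrences across all chunks."""
    counts = Counter()
    for chunk in chunks:
        counts.update(categorize(chunk))
    return counts  # chunks matching no category simply go uncounted

# Invented chat pod excerpts:
chat = [
    "Here's a link to that study on family readiness.",
    "Does this apply to reserve families?",
    "@Jamie agree completely, thanks!",
]
print(count_categories(chat))
```

Note that, as in the slides, not all text falls into a category; anything `categorize` returns empty for is left out of the counts, and you may choose to analyze it inductively instead.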
Here’s an example of a deductive analysis of an MFLN webinar chat pod. In your evaluation report you will also provide a narrative analysis, along with definitions of your codes. It’s also really nice to provide examples of data chunks with your definitions.
Ethics is a foundational part of all human-subject research. So I touched on some ethical issues earlier but I wanted to revisit again briefly.
Informed consent is obviously necessary in all of your evaluation work. As you conceptualize your qualitative evaluation plan, which might include methods like action research or things like storytelling, it’s important to be very up front with all your participants on the type of work being done. From a practical standpoint, it may also help ensure you get the type of data you need for the chosen methodology. For example, if you’re doing a focus group and you’re hoping to utilize storytelling as a part of your analysis, it’s important your participants know that up front in part because you will be sharing larger chunks of their text, but it also lets them know that you are hoping to host a storytelling-friendly focus group.
We talked also about anonymity and confidentiality, but if you’re working in a group to complete analysis, another level of protection you can offer your participants is that only one evaluator has access to the participant key.
When you’re working with qualitative data, your participants might reveal quite a bit of information, some of which you may not have asked for. You need to deal with this delicately. Use only data that is pertinent to your evaluation aims. And of course, if participants reveal information that could pose danger for themselves or for others, you need to handle this appropriately as well.
I also want to address credibility in qualitative analysis, which is a nested process.
As I mentioned before, you’ll be keeping an audit trail throughout. This provides overall transparency: everything you’ve done, when, how, methodological choices you’ve made, tasks assigned, etc., all establish a sense of credibility, a sense that you have done systematic, step-wise work in your analysis.
Your research journal also supports credibility. Even though it becomes a part of your data, it is still a documentary record of coding iterations, analytical memos, etc. Any time you forget why you made a choice, you should be able to go back to your research journal and get an answer.
Another aspect of credibility is triangulation. Often triangulation is addressed when there is a group working on analysis together, because consensus has to be reached. However, if you’re working alone, you have to triangulate in other ways. You can triangulate by utilizing multiple methods, like a follow-up survey or participant observation along with interviews.
If you’re working in a group, it’s essential to have open, free-flowing discussion as often as possible during analysis. Open codes need to come from and be agreed upon by different analysts; code definitions need to be agreed upon prior to coding if you are doing deductive work; the same processes should be in place when identifying categories. And finally, writing should be a group effort.
Another step in credibility in qualitative work is to include yourself in your work, that is, to be reflexive. It’s very common in social sciences to include a short autobiographical section that is a reflexive statement: who you are, what your experience is, and how that experience informs your work.
And now I want to talk briefly about representing your qualitative work.
We know the value of qualitative data and the richness it can bring to reports. So make sure you’re using it effectively. Your final report is going to be the big one that is really going to include all aspects of your analysis. But you will likely have plenty of reports along the way to highlight your qualitative program evaluations. So use it well. Tell stories about the data, mix inductive data chunks with deductive data visualizations, or any other facts and figures.
Highlight analyzed data chunks on your Web site or in newsletters, provided doing so is allowable via the signed consent form (and/or get permission again, especially if leveraging it as storytelling data).
And finally, use your analyzed data in monthly and annual reports. You’ve done all this work. And while hopefully it will result in an excellent qualitative report of its own, you can still use that data in other reports you do.
We’ve been talking about some qualitative strategies to use in your social media evaluation. We’re applying basic concepts from qualitative methodologies and leveraging them in the context of social media. I do want to be very clear: if you are doing a qualitative research study, or are conducting interviews, focus groups, participant observations, etc., as part of your evaluations, there are definitely layers of complexity and depth that will need to be added in to your methods. I am NOT suggesting with this webinar that qualitative work across the board is simple, quick, and easy, or that it can or should be done lightly. However, I am of the position that context matters. And basic qualitative evaluation methods for social media can be simpler and quicker than many might assume. So my goal today has been to get you thinking about adopting some of these strategies. But before I wrap up I want to return to a few larger issues I mentioned at the start. These larger issues are a part of all qualitative work, and still need to be considered with the strategies I’ve discussed today.
A few things to keep in mind:
--Context: I’ll be talking about a few qualitative methodologies, but I’m presenting them in a very basic way specifically for the social media context. If you are looking to do qualitative evaluations utilizing some of the more traditional methods, such as interviews, focus groups, open-ended surveys, participant observation, these methods need to become more involved. And I’ll address that toward the end of my presentation.
--Rapid feedback: I’ll be talking about strategies that are very basic not only because I’m operating under the assumption that you’ll be using them in small chunks of social media (i.e., text comments from a Facebook post), but also in such a way that you can get quick insights on your social media strategy and social media impact. We all know that qualitative work can often be very time consuming. And while the strategies I’ll talk about will take a little longer than running Facebook insights or exporting a report from Sprout Social, they will not represent a huge time investment. So I hope you give them a try.
--Caveat: “Quick and dirty” is a loaded phrase. So while I’m giving you some quick and dirty tips here, I don’t mean for the tips to undermine qualitative research as a complex, transformative field of inquiry.
--Strategy: Don’t forget about the importance of a social media strategy. Using qualitative analysis in social media presumes that you have an established presence in a particular place. If you don’t have one, or need to develop one, you might not quite be ready for qualitative social media evaluation. Since we’re trying to capture experience, impact, reaction, and the like, you need to first create an environment on social media where your target audiences are coming to discuss, respond, engage. MFLN has been up and running for 4 years now, and we’ve used that time to develop a presence on social media. But we are still working on making our sites interactive. So we’re changing our social media strategy to begin really focusing on that interaction piece and to embark on some more qualitative evaluation work within our social media sites.