Glesne, C. (2016). Becoming qualitative researchers: An
introduction (5th ed.). Pearson.
Chapter 7
Finding Your Story: Data Analysis
I can no longer put off the inevitable. I’ve been home three
weeks, and I’ve found as many distractions as I could to avoid
coding. I’ve organized my files, I’ve set up the study and done a
major reorganization so I can spread out the stacks that will
soon pile up. I’m reading, I’m thinking, and as a way of really
beginning, I took out the prospectus I wrote in November.
During the last months at my site, I put a few Post-it notes into
the prospectus file with other BIG looming ideas, ones that
showed me I would have to tinker with the planned structure.
Today I thought I’d just print out a sheet of the tentative
chapter structure to put up on the wall (and delay coding once
again?). I began typing it, and what did I find? It’s all wrong, it
doesn’t capture the way I’ve been thinking at all. The power of
the shift hit me head on. I tried to reorganize the chapters, but I
found that wouldn’t work either. So instead I wrote out the big
themes I have been thinking about in my sleep, while I drive,
when I cook Passover food . . . and that’s where I’ll have to
start.
(Pugach, personal correspondence, March 31, 1994)
Data analysis involves organizing what you have seen, heard,
and read so you can figure out what you have learned and make
sense of what you have experienced. Working with the data, you
describe, compare, create explanations, link your story to other
stories, and possibly pose hypotheses or develop theories. How
you go about doing so, however, can vary widely. Linguistic
traditions, for example, focus upon words and conversations,
treating “text as an object of analysis itself” (Ryan & Bernard,
2000, p. 769) and may use procedures such as formal narrative
analysis, discourse analysis, or linguistic analysis as tools for
making sense of data. Researchers from sociological traditions
tend to treat “text as a window into human experience” (Ryan &
Bernard, 2000, p. 769) and use thematic analysis procedures,
coding and segregating data for further analysis, description,
and interpretation. Thematic analysis, the
approach most widely used in ethnographic work, receives
primary attention in this chapter, but for comparison, several
other forms of data analysis are introduced as well.
Varying Forms of Analysis
The form of analysis you use is linked to your methodology,
research goals, data collection methods, and so on. This chapter
does not attempt to explain the multiple approaches to data
analysis that are available, but four different approaches are
presented to introduce how and why analysis procedures may
vary. Read more widely on modes that resonate with you, and
on data analysis in general. This section begins with an
introduction to thematic analysis, the kind of data analysis
focused upon throughout the rest of the chapter, before briefly
describing conversation analysis from linguistic traditions;
narrative analysis, which combines linguistic and sociological
traditions; and semiotics from sociological traditions.
Thematic Analysis
Thematic analysis—searching for themes and patterns—is used
frequently in anthropological, educational, and other qualitative
work. An important aspect of thematic analysis is segregating
data into categories by codes or labels. The coded clumps of
data are then analyzed in a variety of ways. You might, for
example, look at all the data coded the same way for one case
and see how it changes over time or varies in relationship to
other factors, for example, across events. You can also “explore
how categorizations or thematic ideas represented by the codes
vary from case to case” (Gibbs, 2007, p. 48). Cases might refer
to different events, settings, participants, or policies. Making
comparisons is an analytical step in identifying patterns within
a particular theme. The goal of thematic analysis is to arrive at
a more nuanced understanding of some social phenomenon by
examining the processes involved in that phenomenon as well as
people’s perceptions, values, and beliefs about it. Some
researchers, such as those working with
grounded theory methodology, use the search for themes and
patterns to build theory.
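The coding procedures described here are usually done by hand or in qualitative analysis software; for readers who organize coded data electronically, the underlying logic of segregating excerpts by code and then comparing a theme across cases can be sketched minimally. All cases, codes, and excerpts below are invented for illustration:

```python
from collections import defaultdict

# Hypothetical coded data segments: (case, code, excerpt).
# These cases, codes, and excerpts are invented for illustration only.
segments = [
    ("School A", "leadership", "The principal pushed the change through."),
    ("School B", "leadership", "No one would take ownership of the plan."),
    ("School A", "funding", "Grants covered the first two years."),
]

# Segregate data into categories by code (the coding step).
by_code = defaultdict(list)
for case, code, excerpt in segments:
    by_code[code].append((case, excerpt))

# Compare how one thematic idea varies from case to case.
for case, excerpt in by_code["leadership"]:
    print(f"{case}: {excerpt}")
```

The grouping step corresponds to coding; iterating over one code's excerpts across cases corresponds to the cross-case comparison Gibbs describes.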
Looking for patterns tends to focus attention on unifying
aspects of the culture or setting, on what people usually do,
with whom they usually interact, and so on. Although thematic
analysis searches for patterns, it is not about stipulating the
norm. A strength of thematic analysis is its ability to help
reveal underlying complexities as you seek to identify tensions
and distinctions, and to explain where and why people differ
from a general pattern. Thematic analysis receives more
discussion later on.
Conversation Analysis
Conversation analysis is a powerful form of analysis if your
research goals are to explore how meaning gets communicated
and negotiated through naturally occurring conversations:
Conversation analysis studies the various practices adopted by
conversational participants during ordinary everyday talk. This
may include how participants negotiate overlaps and
interruptions, how various failures (such as hearing and
understanding problems) are dealt with during the interaction
and how conversations are opened and terminated. (Bloor &
Wood, 2006, p. 39)
Conversation analysis might be used, for example, in a study of
a hospital implementing an interprofessional teamwork program
to improve patient safety. The researcher might use
conversation analysis to inform the program’s development
through a focus on how doctors, nurses, technicians, and aides
talk with each other in specific patient-related situations, and on
what kinds of meanings are communicated and negotiated
through that talk.
Data for conversation analysis studies tend to come from
recordings of everyday occurrences, not from interviews. The
researcher focuses on details within the conversations—from
time intervals between utterances to stress on certain words—
and employs a system of transcription that uses various symbols
to indicate nonverbal aspects of a conversation. Conversation
analysis developed out of a form of interpretative research
called ethnomethodology, a methodology that focuses on how
people make sense of everyday life and the procedures they use
to accomplish taken-for-granted interactions such as “trusting,
agreeing, negotiating” (Schwandt, 2007, p. 98). Frequently,
video recording is used as a data-gathering tool to document
some aspect of everyday life, and the videos are studied and
analyzed frame by frame.
Narrative Analysis
If your research goal is to understand how participants construct
meaning from their experiences and/or how they structure the
narrating or telling of those experiences, then you will want to
know about narrative analysis strategies. Research questions
tend to be those that “explore either the structure of narratives
or the specific experiences of particular events, such as
marriage breakdown or finding out information that is life-
changing; undergoing procedures (social/medical); or
participating in particular programs” (Grbich, 2013, p. 216).
Narratives may be collected in situ by voice or video recordings
or through interviews. If obtained through interviews, the
interviewer generally asks broad, open-ended questions such as
“Tell me about . . .” and then allows respondents to tell their
stories with as little interruption as possible. Rather than dissect
these stories into themes and patterns, the analysis process is
concerned with both the story and the telling of the story.
An example would be a research project that seeks to
understand how mothers who have had a child die have made
sense of that loss. The researcher could take a sociolinguistic or
a sociocultural narrative analysis approach to the data. Even
stronger would be using both. The sociocultural approach
focuses on the close reading of the narratives as told. For
example, if you have conducted interviews with women who
have suffered the loss of a child, you would read and reread
transcripts of each narrative and make note of the events
included in each story; the feelings and reactions expressed; the
meanings each woman made of her story; and any explanations
(Gibbs, 2007). You would then compare participants’ narratives,
noting similar and different events and sense making. You
would also work to embed the narratives in or link the stories to
the cultural and political context of participants (Grbich, 2013).
The sociolinguistic approach focuses on the linguistic and
rhetorical forms of telling the stories. You might analyze the
narratives by how the women began their stories, how they
ended them, and what made up the middle. You might consider
the dramatic style of tales. Narratives tend to fit one or more
particular dramatic styles: tragedy, satire, romance, comedy
(Gibbs, 2007). If all the stories of your narrators were told in
more or less the same dramatic style, then you would reflect
upon why that might be so for the particular group of women
interviewed. If the stories had very different structures, you
would reflect upon that and try to figure out why. Gubrium and
Holstein (2009) make the point that people’s narratives often
bear “diverse plot structures and themes” that go unnoticed
“unless the researcher is aware of compositional options at the
start” (p. 69). The narrative analyst looks at how the
interviewee links experiences and circumstances together to
make meaning, realizing also that circumstances do not
determine how the story will be told or the meaning that is
made of it.
Drawing from sociological traditions, Gubrium and Holstein
(2009) emphasize the need in narrative work to go beyond the
transcript. The analyst must also consider how the context in
which the narrator tells the story influences what is told and
how it is told. Who asks the questions that invite a story? How
are some stories discouraged or silenced? For example, stories
my father told me about his participation in World War II
through interviews I conducted for the Library of Congress
Veterans History Project are likely to be different tellings than when he
gathered with other World War II vets in Washington, D.C., on
Veterans Day in 2008. Observations of the context are
important for situating and interpreting the narratives. Gubrium
and Holstein (2009) describe narrative ethnography as “a
method of procedure and analysis involving the close scrutiny
of circumstances, their actors, and actions in the process of
formulating and communicating accounts. This requires direct
observation, with decided attention to story formation” (p. 22).
Researchers across the social science disciplines use narrative
analysis, but often for different purposes. As Bloor and Wood
(2006) state, “Linguists might examine the internal structure of
narratives, psychologists might focus on the process of recalling
and summarizing stories, and anthropologists might look at the
function of stories cross-culturally” (p. 119).
Semiotics
Semiotics draws from linguistics and communications sciences
and seeks to understand how people communicate through signs
and symbols. Semiotics looks less at what participants perceive
or what they believe and more at how specific beliefs or
attitudes get into their heads. For example, why might long-
distance bus travel in the United States be perceived as a
possibly dangerous mode of travel? Why are foods labeled
“organic” perceived as good? Why is economic development
often seen as a sign of progress? Semiotic analysis is
appropriate for research that asks questions of cultural belief
systems or of how certain kinds of information (such as
identity) get conveyed.
Semiotics can focus on virtually anything that carries
information. Written and oral texts obviously make use of signs
that convey information, but a sign could also be a red hat, a
pierced tongue, or a bag of tamales in contexts where each
conveys some meaning. For something to be a sign, there has to
be a signifier (red hat), something that carries the message, and
the signified, the concept that is conveyed (member of a Red
Hat Society). In semiotic analysis, the focus is on how signs
create or evoke meaning in certain contexts.
An integrated system of signs produces a social code.
“Semiotics aims to uncover the dynamics beyond surface
meanings or shallow descriptions and to articulate underlying
implications” (Madison, 2012, p. 76). It is concerned not only
with what a sign denotes or represents, but also with what the
sign connotes or means in particular cultural contexts. For
example, an undergraduate student undertook a semiotic
analysis of student groups on campus. She conducted interviews
to obtain perspectives on how students group themselves and
each other, but much of her work consisted of observations of
students—their clothing, ornamentations, and interactional
behavior. She became particularly intrigued with distinct ways
in which some groups of students used particular signs and
symbols to communicate belonging to or differing from other
students.
Semiotic analysts may consider visual signs (e.g., use of certain
colors), linguistic signs (use of certain words), and aural signs
(use of sound, such as tone of voice). They look at who is doing
the communicating and who the intended recipients are. They
look at how the communication is structured and at what that
structure conveys. And they might look at binary oppositions;
that is, labeling one kind of cookie “organic” implies
that all the others without that label are not. Finally, they look
at the codes or unspoken rules and conventions that structure
and link the signs to the meanings people make of them and at
how these codes may change over time.
In looking at how signs interrelate to construct meaning, Roland
Barthes and others have inquired into ideologies and systems of
power to suggest ways in which certain signs get taken as
“natural”—as the way things are or should be—and are then
manipulated in the interest of those in power. Various
motivations (from maintaining the status quo to enticing
purchase of a product) may be behind getting a sign to connote
a desired image.
To conclude this section on varying forms of analysis, I present
a visual metaphor. Consider how fiber artist Caroline
Manheimer goes about piecing together scraps of fabric—her
data. Making an analogy to thematic analysis, she may
segregate (code) her fabric pieces based on certain criteria (such
as size, color, shape) into groups and then join the bits together,
creating a design in which one color or shape informs the
selection of the adjoining fragment. In the process, she might
cut some scraps into smaller pieces (splitting codes), or she
might sew several pieces together (lumping codes) and then
reorganize, creating patterns as exemplified in her art quilt
Wanderings (Image 7.1). In Uniform Series #15 (Image 7.2),
Manheimer’s process is more analogous to narrative analysis in
that she uses fabric to evoke a story about a life in which the
Catholic school uniform becomes the symbolic narrative thread.
The pieces of fabric (data) are more holistic, and the telling (the
narrative) is highlighted.
Your research purposes and questions influence not only what
data you produce, but also how you make sense of the data you
have. Because much of this book is about ethnographic research
techniques that help in understanding sociocultural aspects of
some issue, group, or organization, the remainder of this
chapter describes more fully procedures for thematic analysis.
Thematic Analysis: The Early Days
If you consistently reflect on your data, work to organize them,
and try to discover what they have to tell you, your study will
be more relevant and possibly more profound than if you view
data analysis as a discrete step to be done after data collection.
Working with your data while collecting them enables you to
focus and shape the study as it proceeds and is part of the
analytic process. O’Reilly (2005) gives an example of how she
interwove ongoing data analysis with data collection in her
research on British migration to Spain:
I noticed that when two British people meet there they tend to
kiss each other on both cheeks, as the Spanish traditionally do.
This had never been written in my field notes because I hadn’t
thought it important until I realised I had seen it happen a lot. I
started to watch more closely. . . . I became aware that it is just
the British migrants who do this and not the tourists, and that
the migrants are more likely to do it when they are in the
company of tourists. I then began to notice that in the company
of tourists migrants would use the occasional Spanish word
when talking to each other. This led me to thinking about the
relationship between migrants and tourists, whereas until then I
had focused more on the relationship between British and
Spanish people. I thus began, during fieldwork, a closer
analysis of migrants and tourists and their behaviour and
attitudes towards each other that I would not have been able to
do once I had left the field. I started to sort through the notes
and data I had collected, assigning things to a new heading of
“tourist/migrant relations,” and discovered many new
occurrences I had not noticed before. (p. 187)
As O’Reilly notes, analytical connections need to be made while
you are still collecting data to make full use of the possibilities
of fieldwork. Writing memos and monthly reports, managing
your data, and applying rudimentary coding schemes will help
you to create new hunches and new questions, and to begin to
learn from and keep track of the information you are receiving.
Memo Writing
The term memo originally referred to a specific noting process
in grounded theory research (Glaser & Strauss, 1967). The term
is now used widely in qualitative research to refer to jotting
down reflective thoughts. By writing memos to yourself or
keeping a reflective field log, you develop your thoughts; by
getting your thoughts down as they occur, no matter how
preliminary or in what form, you begin the analysis process.
Memo writing also frees your mind for new thoughts and
perspectives. “When I think of something,” said graduate
student Jackie, “I write it down. I might forget about the
thought, but I won’t lose it. It’s there later on to help me think.”
Throughout the research process, you work to remain open to
new perspectives and thoughts. Gordon, another graduate
student, stated, “Insights and new ways to look at the data arise
while I am at work at other things. Probably the most
productive places for these insights are on the long drive to
class and during long, boring meetings when my mind is not
actively engaged.” Capture analytic thoughts when they occur.
Keeping a recorder in the car can help, as can jotting down your
thoughts wherever you happen to be, day or night (if safe to do
so).
Don’t just wait for thoughts to occur. Periodically, sit down to
compose analytical memos. You might want to consider your
research questions and write about ways in which your work is
addressing the questions or posing new or different questions.
Write about patterns you see occurring. If these patterns seem
particularly neat and comprehensive, think about who might
have differing perspectives, and make interview appointments
with them. Think about exceptions to any pattern. Remember
that you are looking for a range of perspectives, not for the
generalization that can sum up behavior, beliefs, or values
among a group of people. What are the negative cases to the
patterns you observe? Consider when and why those cases might
occur. If you continuously consider what you are learning, these
early analytical thoughts can also guide you to the next set of
observations or interviewees and interview questions.
See Figure 7.1 for an example of an analytic memo I wrote
before coding data from fieldwork in seven academic art
museums. I knew that I needed to address the broad theme of
university/college culture, politics, and economic challenges,
and I sat down to specifically note aspects of that theme—in no
particular order—that were striking me as important. Writing
the memo allowed me to perceive ways to further categorize or
organize my data, and it sent me back to my data to further
examine, for example, ways in which a school’s history and
culture linked to the ways in which art and art museums were
perceived at specific institutions.
In addition to memos to yourself, writing monthly field reports
for committee members, family and friends, or the funding
agency is a way to examine systematically where you are and
where you should consider going. Keep the field reports short
and to the point, so that they don’t become a burden for you to
write or for your readers to read. Headings such as those I call
“The Three P’s: Progress, Problems, and Plans” help you to
review your work succinctly and plan realistically. In reflecting
on both the research process and the data collected, you develop
new questions, new hunches, and, sometimes, new ways of
approaching the research. The reports also provide a way to
communicate research progress to interested others, keeping
them informed of the whats and hows and giving them a chance
for input along the way.
Writing helps you think about your work, new questions, and
connections. All this writing adds up: You will have many
thoughts already on paper when you begin working on the first
draft of your manuscript. These comments and thoughts
recorded as field journal entries or as memos are links across
your data that find their way into a variety of files later on.
Maintaining Some Semblance of Control
When anthropologists, sociologists and others talk about the
“richness” of field data, this can be another way of expressing
the sheer volume and complexity of information they collect
and store.
(Dicks, Mason, Coffey, & Atkinson, 2005, p. 2)
I am seeing that I will need to write about university politics
and economic challenges. I don’t fully understand either, but
they are so important for these campus art museums “at the
side” of things, even when “at the heart.” The politics and
economics section could be complemented by ways museums
make a difference in the lives of the people who experience
them and that ranges from pathways of creativity to a
meditative escape. . . .
So what are the things standing out for me?
That reaching out to college/university audience and reaching
out to community are not as distinct as first appear.
A school’s history and culture that support the arts is of utmost
importance.
That leadership to focus the museum’s mission and to get others
onboard plus ability to fund-raise is crucial.
That art and art museums can be successfully used in creative
and engaging ways across disciplines.
That art museums can address cross-
disciplinary/interdisciplinary/transdisciplinary in different
ways—primarily through focus on curriculum or on exhibit.
That the museum is a vibrant place of apprenticeship-type
learning for students. They get to do things that often are done
only by curators or registrars. They are a place of learning
research skills, using archives, exploring cultural contexts and
history. They also learn how to present and communicate their
research through exhibitions, labels, text, websites, worksheets.
The museum is a resource for jobs, assistantships, and credit-
generation for students. Not all are in art history or studio art.
Some come from another discipline and “fall in love” with
museum work.
The art museum can have a strong link to education department.
It can be a place where students see and practice interacting
with K–12 on museum “tours.” It can be a forum for students to
teach/lead hands-on art activities and thereby link with children
and their families. It is a learning lab.
If a museum is “known” across the campus, it seems more likely
to benefit from alumni donations. This goes back to the culture
of the institution.
Administrative support and belief is crucial. Economic cuts are
part of the reality. If the admin. does not see the power and
potential of the art museum, its budget will be cut. This may
mean some restructuring—With whom is the museum allied? To
whom does it report? How are FTEs generated? Can they be
generated by the museum? What is the college/univ. mission for
service beyond the campus? How does the museum get credit
for this role?
Figure 7.1 Example of an Analytic Memo*
*Memo was written during fieldwork in a study of campus art
museums sponsored by the Samuel H. Kress Foundation.
Expect to be overwhelmed with the sheer volume—notebooks,
photocopies, computer files, manila files, and documents—of
data that accumulates during research. You truly acquire fat
data; their sheer bulk is intimidating. Invariably, you will
collect more data than you need. If they are not kept organized,
the physical presence of so many data can lead you to
procrastinate rather than face the task of focused analysis.
Keeping up with data organization during the collection process
also helps to ensure that you continually learn from the data and
that you spread out the onerous tasks often associated with
transforming data into computer files. Based upon his own
experience, Gordon advised:
Transcribe notes onto the computer after each interview and
observation. This admonition has been prompted by my
discovery that a fairly substantial part of my data is not in
readily usable form. I have had to go back after three months
and type my notes because I find it hard to use data that I
cannot read easily. Drudgery.
Keeping up with data involves transcribing interviews,
observation notes, and field logs and memos to computer files,
filing, creating new files, and reorganizing your files.
Throughout, you continuously reflect upon what you are
learning. Develop appropriate forms for recording data
collection dates, sites, times, and people interviewed or
observed, interviews transcribed, and so on (see Figure 7.2). In
this way, an account is kept not only of your progress, but also
of gaps, since you can easily see where and with whom you
spent time and what else you need to do.
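A record-keeping form such as the one in Figure 7.2 can also be kept electronically; a minimal sketch of such a log, with hypothetical fields and entries, might look like this:

```python
from dataclasses import dataclass
from datetime import date

# A minimal electronic version of an interview-record form.
# Fields, names, and dates are hypothetical, for illustration only.
@dataclass
class InterviewRecord:
    interviewee: str
    site: str
    interview_date: date
    transcribed: bool = False

records = [
    InterviewRecord("J. Smith", "Museum A", date(2024, 3, 5), transcribed=True),
    InterviewRecord("R. Jones", "Museum B", date(2024, 3, 12)),
]

# Spot gaps in your progress: which interviews still need transcription?
backlog = [r.interviewee for r in records if not r.transcribed]
print(backlog)
```

Whatever form the log takes, the point is the same as with the paper version: it shows at a glance where and with whom you have spent time and what remains to be done.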
Your filing system builds and becomes increasingly complex as
you collect data. You may begin with files organized by generic
categories such as interview questions, people, and places.
These files provide a way to keep track of information you need
early on. As your data and experience grow, you will create
relevant specific files on the social processes under
investigation where you can keep notes from readings and your
own analytic thoughts and observations. Early on, you may also
begin files on topics such as titles, introductory and concluding
chapters, and quotations.
Each of these specific files serves a distinct purpose. The title
file, for example, contains your efforts to capture what your
narrative may be about (Peshkin, 1985). Although your research
project has a stated central focus (from your research proposal),
you do not really know what particular story, of the several
possibilities, you will tell. Conjuring up titles as the data are
being collected is a way of trying out different emphases, all of
which are candidates for ultimately giving form to your data.
The titles become a way of getting your mind clear about what
you are doing, in an overall sense, although the immediate
application may be to concentrate your data collecting as you
pursue the implications of a particular focus. In short, your
search for a title is an act of interpretation. Titles capture
what you see as germane to your study; but as your awareness of
the promise of your study changes, so do your titles.

Figure 7.2 Sample Form for Keeping Interview Records
Files related to introductions and conclusions direct you to two
obvious aspects of every study: its beginning and its ending.
Regardless of the particular name that you give to your
introductory and concluding chapters, you frame your study in
the former—providing necessary context, background, and
conceptualization. You effect closure in the concluding chapter
by summarizing, at the very least, and by explicating the
meaning that you draw from your data as befits the points of
your study, even if this means raising more questions or
illuminating multiple perspectives rather than providing
answers. It is never too early to reflect on the beginning and
ending of your work, much as the preparation of these chapters
may seem a distant dream when you are caught up in collecting
data. Ideally, the existence of these files alerts you to what you
might otherwise miss in the course of your study; they stimulate
you to notions that, like your titles, are candidates for inclusion
in your forthcoming text. Until the writing is actually done,
however, you will not know which will be the surviving notions.
The quotation file contains snippets from readings that appear
useful for one of the several roles that the relevant literature
can play. Eventually, they will be sorted out among chapters,
some as epigraphs, those quotations placed at the heads of
chapters because they provide the reader with a useful key to
what the chapter contains. Other quotations will be the
authoritative sprinklings that your elders provide as you find
your way through the novel ground of your own data. Through
resourceful use of quotations, you acknowledge that the world
has not been born anew on your terrain. The quotation file, like
other files, is meant to be a reminder that reading should always
inspire the question: What, if anything, do these words say
about my study?
Files help you to store and organize your thoughts and those of
others. Data analysis is the process of organizing data in light
of your increasingly sophisticated judgments, that is, of the
meaning-finding interpretations that you are learning to make
about the shape of your study. Understanding that you are in a
learning mode is most important; it reminds you that by each
effort of data analysis, you enhance your capacity to further
analyze.
Rudimentary Categorizations
This experience lends entirely new meaning to the term fat data.
I can’t even imagine reading everything I have, but I know I
need to. And coding it? All the while you’re writing, events are
still evolving in the community and you can’t ignore that either.
. . . So you really don’t stop collecting data, do you? You just
start coding and writing.
(Pugach, personal correspondence)
Marleen Pugach was still at her research site when she wrote
this note, realizing her need to begin sorting her data.
Classifying data into different groupings is a place to start.
Through doing so, you develop a rudimentary coding scheme,
the specifics of which are discussed in the next section. You
might, for example, think about how you would categorize cases
(people, schools, museums, etc.) in your fieldwork (Pelto,
2013). Doing so helps identify patterns in how cases are similar
and different and frequently compels consideration of
additional interview questions or of other individuals with
whom you need to talk. For example, in my work in Saint
Vincent, I began categorizing the young people whom I was
interviewing as traditionalists, change agents, and those who
were opting out of society in some way. I became particularly
interested in the change agents and in trying to figure out what
was different in their lives that made them optimistic, or at least
determined to make a difference. This realization led to both
new questions and interviews with others who fit my change
agent category.
In another example, Cindy began a pilot study by observing
meetings of a rural school board and interviewing its members.
After fifteen hours of data collection, she decided to see what
she might learn by coding the data she had. As a result, she
created a new research statement:
My initial problem statement was so broad it was difficult to
work with. The process of coding and organizing my codes has
helped me to determine an approach to solidify a new problem
statement that will lead me in a focused exploration of two
major areas of school board control: financial and quality
education.
Establishing the boundaries for your research is difficult. Social
interaction does not occur in neat, isolated units. Gordon
reflected on his work: “I constantly find myself heading off in
new directions and it is an act of will to stick to my original
(but revised) problem statement.” In order to complete any
project, you must establish boundaries, but these boundary
decisions are also an interpretive judgment based on your
awareness of your data and their possibilities. Posting your
problem statement or most recent working title above your
workspace may help to remind you about the task ahead. Cindy
used a computer banner program to print out her (revised)
research statement, which she taped to the wall over her desk.
The banner guided her work whenever she lifted her head to
ponder and reflect.
It may help also to think of the amount of film that goes into a
good half-hour documentary. Similar to documentary
filmmaking, the methods of qualitative data collecting naturally
lend themselves to excess. You collect more than you can use
because you cannot define your study so precisely as to pursue a
trim, narrowly defined line of inquiry. The open nature of
qualitative inquiry means that you acquire even more data than
you originally envisioned. You are left with the large task of
selecting and sorting—a partly mechanical but mostly
interpretive undertaking, because every time you decide to omit
a data bit as irrelevant to your study or to place it somewhere,
you are making a judgment.
At some point, you stop collecting data, or at least you stop
focusing on the collecting. Knowing when to end this phase is
difficult. It may be that you have exhausted all sources on the
topic—that there are no new situations to observe, no new
people to interview, no new documents to read. Such situations
are rare. Perhaps you stop collecting data because you have
reached theoretical saturation (Glaser & Strauss, 1967). This
means that successive examination of sources yields redundancy
and that the data you have seem complete and integrated.
Recognizing theoretical saturation can be tricky, however. It
may be that you hear the same thing from all of your informants
because your selection of interviewees is too limited or too
small for you to get discrepant views. Often, data collection
ends through less than ideal conditions: The money runs out or
deadlines loom large. Try to make research plans that do not
completely exhaust your money, time, or energy, so that you
can obtain a sense of complete and integrated data.
Entering the Code Mines
In the early days of data collection, stories abound. Struck by
the stories, you tell them and repeat them. You may even allow
them to assume an importance beyond their worth to the
purposes of the project. Making sense of the narratives,
observations, and documents as a whole comes harder. You do
not have to stop telling stories, but in thematic analysis, you
must make connections among them: What is being illuminated?
What themes and patterns give shape to observations and
interviews? Coding helps answer these questions.
When most of the data are collected, the time has come to
devote attention to coding and analysis. Although you already
may have a classificatory scheme of sorts, you now focus on
categorization. You are ready to enter “the code mines.” The
work is part tedium and part exhilaration as it renders form and
possible meaning to the piles and files of data before you.
Marleen’s words portray the somewhat ambivalent
psychological ambience that accompanies the analytical process
of coding:
I’m about to finish the first set of teacher transcripts and begin
with the students. This will probably mean several new codes . .
. since it is a new group. I hope the codebook can stand the
pressure. One of the hardest things is accepting that doing the
coding is a months-long proposition. When my mother asks me
if I’m done yet, I know she doesn’t have a clue. (Personal
correspondence, May 3, 1994)
What Is a Code?
The word coding as used in qualitative work is confusing to
those familiar with the term and its use in quantitative survey
research, where short open-ended responses are categorized
with the purpose of counting. Instead of coding to count,
qualitative researchers code to discern themes, patterns, and
processes; to make comparisons; and to build theoretical
explanations. Some qualitative researchers prefer the term
indexing to the word coding, but as Saldaña (2009) states,
“Coding is not just labeling, it is linking” (p. 8). Codes link
thoughts and actions across bits of data. Indexing does not
convey that sense of linking. It may not matter which word to
use, as long as you realize that coding in qualitative research is
for different purposes than in quantitative work.
Coding is a progressive process of sorting and defining and
defining and sorting those scraps of collected data (e.g.,
observation notes, interview transcripts, memos, documents, and
notes from relevant literature) that are applicable to your
research purpose. By putting pieces that exemplify the same
descriptive or theoretical idea together into data clumps labeled
with a code, you begin to create a thematic organizational
framework.
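The clump-and-label idea can be sketched in miniature. In the snippet below, the code words and data bits are invented for illustration (they are not drawn from any study in this chapter); the point is only that a code labels a clump of excerpts that exemplify the same idea:

```python
# A minimal sketch of thematic clumping: each code word labels a
# "clump" of data bits that exemplify the same descriptive or
# theoretical idea. All excerpts and codes here are hypothetical.
from collections import defaultdict

# (code word, data bit) pairs, as a researcher might jot them
coded_bits = [
    ("keeping order", "Teacher waited silently until the room quieted."),
    ("seeking attention", "A student called out answers before being asked."),
    ("keeping order", "Students were asked to return to their seats."),
]

# Sort the bits into clumps labeled by their codes
clumps = defaultdict(list)
for code, bit in coded_bits:
    clumps[code].append(bit)

for code, bits in sorted(clumps.items()):
    print(f"{code}: {len(bits)} excerpt(s)")
```

A real project would hold far more structure (source, date, line reference), but the underlying operation — sorting and defining, defining and sorting — is the same.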
A qualitative research code, as described by Saldaña (2009), “is
most often a word or short phrase that symbolically assigns a
summative, salient, essence-capturing, and/or evocative
attribute for a portion of language-based or visual data” (p. 3).
Saldaña draws a parallel between a book’s title and a code:
“Just as a title represents and captures a book or film or poem’s
primary content and essence, so does a code represent and
capture a datum’s primary content and essence” (p. 3). Note that
a code is a word or short phrase, not a number/letter
combination or a set of letters meant to represent some phrase,
such as T-AHW for teacher use of art homework. Saldaña
(2009) finds that such abbreviations “just make the decoding
process of your brain work much harder than they need to
during analysis” (p. 18). I agree. Write out your code words.
A useful suggestion for creating code words comes from
grounded theory research: Think in terms of gerunds (words
ending in ing). The gerund form moves you to consider
processes and actions such as resisting authority, seeking
attention, or striving to be do-gooders. Thinking in terms of
gerunds (or processes) tends to lead to a more useful and
interesting analysis of your data than categorizing by
descriptive nouns such as students, teachers, and administrators.
Approaches to Coding
How do you figure out what codes to use and what to mark as
coded? It is a creative act that takes concentrated thought as you
read and think deeply about the words before you. Begin by
reading quickly through all your data with your notebook at the
ready for memos and possible code words. You will note that
some of the same topics come up over and over. This is not
surprising since your research questions were at least somewhat
directing your observations, and your interview questions were
somewhat guiding the interview script. You will begin to
observe, however, that people talk about a topic in both similar
and different ways, presenting different perspectives. These
similarities and differences become areas for coding. Make note
of actions, perspectives, processes, values, and so on that stand
out for you as you refamiliarize yourself with the data.
Then, take several interview transcripts or field observations
and try coding them line by line. As much as you may try to set
aside your assumptions and theoretical frameworks, those
perspectives tend to find their way into the codes you choose.
That is to be expected. What you want to avoid is imposing an a
priori set of codes on your data. Line-by-line coding helps to
immerse you in the data and discover what concepts they have
to offer. As you read line by line, jotting possible codes in the
margin, try to abstract your code words, removing them slightly
from the data. For example, a line in your fieldnotes that reads,
“Ms. Wilson asked the students to sit in their seats and to stop
talking. She then took her seat and sat there quietly for at least
three minutes before the room quieted,” could be coded
specifically as “Wilson-requesting quiet.” The code will
probably serve you better, however, if abstracted to “controlling
students” or “keeping order” or a number of other codes,
depending upon your research purposes. The point is that your
code is a category of activity of which the piece coded is an
example.
Saldaña (2009) suggests “The Touch Test” as “a strategy for
progressing from topic to concept, from the real to the abstract,
and from the particular to the general” (p. 187). If you can
touch the aspect that is coded—for example, tattoos—then ask
yourself, “What is the larger concept or phenomenon or process
that tattoo is part of that cannot be touched?” It might be
adornment or body art or, perhaps, making a statement,
depending upon the context of the research. The intent of
Saldaña’s touch test is to help you figure out the concepts of
which your coded data are a part.
Line-by-line coding is a way to get started, but you do not
necessarily have to code every piece of data this way. Saldaña’s
text The Coding Manual for Qualitative Researchers (2009), the
text that I draw upon heavily in this section, is full of ways to
approach coding. As Saldaña states, the approaches or coding
methods “are not discrete and . . . can be ‘mixed and matched’”
(p. 47). Although touching upon several here, I recommend
Saldaña’s book for more suggestions.
One useful coding approach is domain or taxonomic coding.
Derived from the work of Spradley (1979) in cognitive
anthropology, this method attempts to get at how participants
categorize and talk about some aspect of their culture. Specific
kinds of interview questions may accompany this approach in
that the researcher may have asked interviewees to elaborate on
ways to (means), kinds of (inclusion/exclusion), steps of
(sequence), and so on regarding aspects of the research topic.
You do not have to have asked these specific questions to use
this approach in coding data. Rather, ask questions of the data
you have that would lead to categorizing types of, causes of,
consequences of, attitudes toward, strategies for, and so forth
that interviewees discussed or that you saw in your
observations. In the example above, Ms. Wilson’s request and
subsequent waiting may have been construed as a strategy for
controlling students. Coding this line would then lead you to
look for other types of controlling strategies Ms. Wilson used,
as well as types of controlling strategies used by other teachers.
Taxonomic coding helps you find patterns in human speech and
behavior. “Controlling” becomes a coding category for varied
examples of actions and speech. As Saldaña (2009) states,
“When you search for patterns in coded data to categorize them,
understand that sometimes you may group things together not
just because they are exactly alike or very much alike, but
because they might also have something in common—even if,
paradoxically, that commonality consists of differences” (p. 6).
Teachers’ attitudes toward and actions in controlling students,
for example, may be quite different, but all could be coded as
“types of control.”
Another coding approach is to become attuned to the words
participants use to talk about their lives, communities,
organizations, and so on. Referred to as in vivo or indigenous
codes, these terms may be particularly colorful or metaphoric or
words used differently than as they are generally used. For
example, in the museum study, I began noting and then coding
the metaphors participants used to describe their campus art
museum: the museum as a “gem,” a “treasure,” a “library,” a
“bubbling cauldron of ideas,” and so forth. In doing so, I started
to perceive patterns in where these metaphors occurred. For
example, gem and treasure were frequently used at one site but
not at another. I could then begin thinking (and memoing) about
how different metaphors might imply different expectations and
kinds of interactions at the various art museums.
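Tallying where in vivo codes occur can be done by hand or with a few lines of code. The sketch below uses hypothetical site names and metaphor counts (the museum study's actual data are not reproduced here) to show how a simple tally surfaces the kind of pattern described above:

```python
# Sketch: tallying in vivo codes (participants' own metaphors) by
# site, to see where each metaphor clusters. Sites and mentions are
# hypothetical stand-ins, not data from the museum study.
from collections import Counter

mentions = [
    ("Site A", "gem"), ("Site A", "treasure"), ("Site A", "gem"),
    ("Site B", "library"), ("Site B", "bubbling cauldron of ideas"),
]

by_site = Counter(mentions)
print(by_site[("Site A", "gem")])  # frequency of "gem" at Site A
```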
Another type of coding to consider is emotions coding.
“Emotion Codes label the emotions recalled and/or experienced
by the participant, or inferred by the researcher about the
participant” (Saldaña, 2009, p. 86). Such codes become linked
with particular actions or behavior in the study. Saldaña (2009)
uses the example of a study of divorce and the emotions linked
to different stages and procedures within the divorce process.
Remember that you can mix these and other coding methods as you
work your way through your data. These approaches are
heuristics to help you delve into the coding process and to find
what works best for you and your research purposes.
Creating a Codebook
After coding several interview transcripts and observational
notes, make a list of the codes you have generated. Can you
arrange them into major categories and subcategories? Do some
codes appear to be nearly the same and could be combined? Do
some codes cover large categories that perhaps should be split
into two or more codes? You may find that the same subcode
appears under several major codes. This may indicate a theme
that runs throughout the work. Look for its presence or absence
under other major headings. If absent, should it be there?
After reworking your coding scheme, try it out on the same
documents coded previously to see how it fits. Revise as
needed, and then try it on another transcript and some more
observation notes. What new codes are added? Be overgenerous
in judging what is important to code; you do not want to
foreclose any opportunity to learn from the field by prematurely
settling on what is or is not relevant to you. Go back and forth
like this until you are no longer adding substantially more
codes, realizing that as you continue to code, you will likely
add more—sending you back to look for other expressions of
that code in previous parts of your text.
When comfortable with your codes, make a codebook. Give
each major code its own page. Below the major code, list each
subcode (and sub-subcodes) with an explanation of each.
Writing the explanation will help to keep you from what Gibbs
(2007) refers to as “definitional drift,” in which the material
you coded earlier is slightly different in meaning from the
material you code at a different time. For example, in my work
with young people in Oaxaca, resisting was one of my early
codes. I defined it as forms of speech or actions that
demonstrate disagreement with governmental rules or policies.
As my work progressed, my application of resisting as a coding
category became more complex and began overlapping with a
category I called maintaining indigenous autonomy. I had to
rethink my resisting code and its definition.
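A codebook entry can be thought of as a small structured record: the code, its written definition, and its subcodes. The sketch below, loosely echoing the Oaxaca example (the subcode names are invented), shows one way such a record might be kept so that later coding can be checked against the definition rather than drifting from it:

```python
# Sketch of a codebook as structured records: each major code keeps
# its subcodes and a written definition, guarding against
# "definitional drift". Codes and subcodes here are illustrative.
from dataclasses import dataclass, field

@dataclass
class Code:
    name: str
    definition: str
    subcodes: list = field(default_factory=list)

codebook = {
    "resisting": Code(
        name="resisting",
        definition=("Forms of speech or actions that demonstrate "
                    "disagreement with governmental rules or policies."),
        subcodes=["resisting-speech", "resisting-action"],
    ),
}

print(codebook["resisting"].definition)
```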
The codebook is highly personal, meant to fit you; it need not
be useful or clear to anyone else. Although there may be
common features and a common intent to everyone’s data
analysis process, it remains, in the end, an idiosyncratic
enterprise. No one right coding scheme exists. The proof of
your coding scheme is in the pudding of your manuscript. The
sense your manuscript makes, how useful it is and how well it
reads depend, in large part, on your data analysis. If your
process is not producing any “ah ha’s” or moments of
excitement as you realize some new understandings, then it
probably is not yet a good coding scheme.
Reference
O’Leary, Z. (2005). Researching real-world problems. Thousand Oaks, CA: SAGE.
Ch. 11
Analysing and Interpreting Data
FROM RAW DATA TO MEANINGFUL UNDERSTANDING
It’s easy to fall into the trap of thinking the major hurdle in
conducting real-world research is data collection. And yes,
gathering credible data is certainly a challenge – but so is
making sense of it. As George Eliot states, the key to meaning
is ‘interpretation’.
Now attempting to interpret a mound of data can be
intimidating. Just looking at it can bring on a nasty headache or
a mild anxiety attack. So the question is, what is the best way to
make a start? How can you begin to work through your data?
Well, if I were only allowed to give one piece of advice, it
would be to engage in creative and inspired analysis using a
methodical and organized approach. As described in Box 11.1,
the best way to move from messy, complex and chaotic raw data
… towards rich, meaningful and eloquent understandings is by
working through your data in ways that are creative, yet
managed within a logical and systematic framework.
Box 11.1 Balancing Creativity and Focus
Think outside the square … yet stay squarely on target
Be original, innovative, and imaginative … yet know where you
want to go
Use your intuition … but be able to share the logic of that
intuition
Be fluid and flexible … yet deliberate and methodical
Be inspired, imaginative and ingenious … yet realistic and
practical
Easier said than done, I know. But if you break the process of
analysis down into a number of defined tasks, it’s a challenge
that can be conquered. For me, there are five tasks that need to
be managed when conducting analysis:
1. Keeping your eye on the main game. This means not getting lost in a swarm of numbers and words in a way that causes you to lose a sense of what you’re trying to accomplish.
2. Managing, organizing, preparing and coding your data so that it’s ready for your intended mode(s) of analysis.
3. Engaging in the actual process of analysis. For quantified data, this will involve some level of statistical analysis, while working with words and images will require you to call on qualitative data analysis strategies.
4. Presenting data in ways that capture understandings, and being able to offer those understandings to others in the clearest possible fashion.
5. Drawing meaningful and logical conclusions that flow from your data and address key issues.
This chapter tackles each of these challenges in turn.
Keeping your eye on the main game
While the thought of getting into your data can be daunting,
once you take the plunge it’s actually quite easy to get lost in
the process. Now this is great if ‘getting lost’ means you are
engaged and immersed and really getting a handle on what’s
going on. But getting lost can also mean getting lost in the
tasks, that is, handing control to analysis programs, and losing
touch with the main game. You need to remember that while
computer programs might be able to do the ‘tasks’, it is the
researcher who needs to work strategically, creatively and
intuitively to get a ‘feel’ for the data; to cycle between data and
existing theory; and to follow the hunches that can lead to
sometimes unexpected, yet significant findings.
FIGURE 11.1 THE PROCESS OF ANALYSIS
Have a look at Figure 11.1. It’s based on a model I developed a
while ago that attempts to capture the full ‘process’ of analysis;
a process that is certainly more complex and comprehensive
than simply plugging numbers or words into a computer. In fact,
real-world analysis involves staying as close to your data as
possible – from initial collection right through to drawing final
conclusions. And as you move towards these conclusions, it’s
essential that you keep your eye on the game in a way that sees
you consistently moving between your data and … your
research questions, aims and objectives, theoretical
underpinnings and methodological constraints. Remember, even
the most sophisticated analysis is worthless if you’re struggling
to grasp the implications of your findings to your overall
project.
Rather than relinquish control of your data to ‘methods’ and
‘tools’, thoughtful analysis should see you persistently
interrogating your data, as well as the findings that emerge from
that data. In fact, as highlighted in Box 11.2, keeping your eye
on the game means asking a number of questions throughout the
process of analysis.
Box 11.2 Questions for Keeping the Bigger Picture in Mind
Questions related to your own expectations
What do I expect to find, i.e. will my hypothesis bear out?
What don’t I expect to find, and how can I look for it?
Can my findings be interpreted in alternative ways? What are
the implications?
Questions related to research question, aims and objectives
How should I treat my data in order to best address my research
questions?
How do my findings relate to my research questions, aims and
objectives?
Questions related to theory
Are my findings confirming my theories? How? Why? Why not?
Does my theory inform/help to explain my findings? In what
ways?
Can my unexpected findings link with alternative theories?
Questions related to methods
Have my methods of data collection and/or analysis coloured my results? If so, in what ways?
How might my methodological shortcomings be affecting my
findings?
Managing the data
Data can build pretty quickly, and you might be surprised by the
amount of data you have managed to collect. For some, this will
mean coded notebooks, labelled folders, sorted questionnaires,
transcribed interviews, etc. But for the less pedantic, it might
mean scraps of paper, jotted notes, an assortment of cuttings
and bulging files. No matter what the case, the task is to build
or create a ‘data set’ that can be managed and utilized
throughout the process of analysis.
Now this is true whether you are working with: (a) data you’ve
decided to quantify; (b) data you’ve captured and preserved in a
qualitative form; (c) a combination of the above (there can be
real appeal in combining the power of words with the authority
of numbers). Regardless of approach, the goal is the same – a
rigorous and systematic approach to data management that can
lead to credible findings. Box 11.3 runs through six steps I
believe are essential for effectively managing your data.
Box 11.3 Data Management
Step 1 Familiarize yourself with appropriate software
This involves accessing programs and arranging necessary
training. Most universities (and some workplaces) have licences
that allow students certain software access, and many
universities provide relevant short courses. Programs
themselves generally contain comprehensive tutorials complete
with mock data sets.
Quantitative analysis will demand the use of a data
management/statistics program, but there is some debate as to
the necessity of specialist programs for qualitative data
analysis. This debate is taken up later in the chapter, but the
advice here is that it’s certainly worth becoming familiar with
the tools available.
Quantitative programs
SPSS – sophisticated and user-friendly (www.spss.com)
SAS – often an institutional standard, but many feel it is not as user-friendly as SPSS (www.sas.com)
Minitab – more introductory, good for learners/small data sets (www.minitab.com)
Excel – while not a dedicated stats program, it can handle the basics and is readily available on most PCs (Microsoft Office product)
Qualitative programs
Absolutely essential here is an up-to-date word processing package. Specialist packages include:
NUD*IST, NVIVO, MAXqda, The Ethnograph – used for indexing, searching and theorizing
ATLAS.ti – can be used for images as well as words
CONCORDANCE, HAMLET, DICTION – popular for content analysis (all of the above available at www.textanalysis.info)
CLAN-CA – popular for conversation analysis (http://childes.psy.cmu.edu)
Step 2 Log in your data
Data can come from a number of sources at various stages
throughout the research process, so it’s well worth keeping a
record of your data as it’s collected. Keep in mind that original
data should be kept for a reasonable period of time; researchers
need to be able to trace results back to original sources.
Step 3 Organize your data
This involves grouping like sources, making any necessary
copies and conducting an initial cull of any notes, observations,
etc. not relevant to the analysis.
Step 4 Screen your data for any potential problems
This includes a preliminary check to see if your data is legible
and complete. If done early, you can uncover potential problems
not picked up in your pilot/trial, and make improvements to
your data collection protocols.
Step 5 Enter the data
This involves systematically entering your data into a database
or analysis program, as well as creating codebooks, which can
be electronically based, that describe your data and keep track
of how it can be accessed.
Quantitative data
Codebooks often include: the respondent or group; the variable name and description; unit of measurement; date collected; any relevant notes.
Data entry: data can be entered as it is collected or after it has all come in. Analysis does not take place until after data entry is complete. Figure 11.2 depicts an SPSS data entry screen.
Qualitative data
Codebooks often include: respondents; themes; data collection procedures; collection dates; commonly used shorthand; and any other notes relevant to the study.
Data entry: whether using a general word processing program or specialist software, data is generally transcribed in an electronic form and is worked through as it is received. Analysis tends to be ongoing and often begins before all the data has been collected/entered.
FIGURE 11.2 DATA ENTRY SCREEN FOR SPSS
Step 6 Clean the data
This involves combing through the data to make sure any entry
errors are found, and that the data set looks in order.
Quantitative data
When entering quantified data it’s easy to make mistakes – particularly if you’re moving fast, i.e. typos. It’s essential that you go through your data to make sure it’s as accurate as possible.
Qualitative data
Because qualitative data is generally handled as it’s collected, there is often a chance to refine processes as you go. In this way your data can be as ‘ready’ as possible for analysis.
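A cleaning pass on quantified data can be as simple as flagging values that fall outside a variable's legal range. The sketch below assumes a hypothetical Likert-scale item scored 1 to 5 (the variable and values are invented); flagged rows would then be traced back to the original source:

```python
# Sketch of a simple cleaning pass: flag entries outside the legal
# range for a variable, as happens with typos at data entry.
# The variable (a 1-5 Likert item) and the values are hypothetical.
responses = [3, 4, 44, 2, 5, -1]  # 44 and -1 are likely entry errors

# Collect (row index, value) pairs that fail the range check
suspect = [(i, v) for i, v in enumerate(responses) if not 1 <= v <= 5]
print(suspect)  # rows to verify against the original questionnaires
```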
STATISTICS – THE KISS (KEEP IT SIMPLE AND SENSIBLE)
APPROACH
‘Doctors say that Nordberg has a 50/50 chance of living, though
there’s only a 10 percent chance of that.’
– Naked Gun
It wasn’t long ago that ‘doing’ statistics meant working with
formulae, but personally, I don’t believe in the need for all real-world researchers to master formulae. Doing statistics in the
twenty-first century is more about your ability to use statistical
software than your ability to calculate means, modes, medians
and standard deviations – and look up p-values in the back of a
book. To say otherwise is to suggest that you can’t ride a bike
unless you know how to build one. What you really need to do
is to learn how to ride, or in this case learn how to run a stats
program.
Okay, I admit these programs do demand a basic understanding
of the language and logic of statistics. And this means you will
need to get your head around (1) the nature of variables; (2) the
role and function of both descriptive and inferential statistics;
(3) appropriate use of statistical tests; and (4) effective data
presentation. But if you can do this, effective statistical analysis
is well within your grasp.
Now before I jump in and talk about the above a bit more, I
think it’s important to stress that …
Very few students can get their heads around statistics without
getting into some data.
While this chapter will familiarize you with the basic language
and logic of statistics, it really is best if your reading is done in
conjunction with some hands-on practice (even if this is simply
playing with the mock data sets provided in stats programs). For
this type of knowledge ‘to stick’, it needs to be applied.
Variables
Understanding the nature of variables is essential to statistical
analysis. Different data types demand discrete treatment. Using
the appropriate statistical measures to both describe your data
and to infer meaning from your data requires that you clearly
understand your variables in relation to both cause and effect
and measurement scales.
Cause and effect
The first thing you need to understand about variables relates to
cause and effect. In research-methods-speak, this means being
able to clearly identify and distinguish your dependent and
independent variables. Now while understanding the theoretical
difference is not too tough, being able to readily identify each
type comes with practice.
DEPENDENT VARIABLES These are the things you are trying
to study or what you are trying to measure. For example, you
might be interested in knowing what factors are related to high
levels of stress, a strong income stream, or levels of
achievement in secondary school – stress, income and
achievement would all be dependent variables.
INDEPENDENT VARIABLES These are the things that might
be causing an effect on the things you are trying to understand.
For example, conditions of employment might be affecting
stress levels; gender may have a role in determining income;
while parental influence may impact on levels of achievement.
The independent variables here are employment conditions,
gender and parental influence.
One way of identifying dependent and independent variables is
simply to ask what depends on what. Stress depends on work
conditions or income depends on gender. As I like to tell my
students, it doesn’t make sense to say gender depends on
income unless you happen to be saving for a sex-change
operation!
Measurement scales
Measurement scales refer to the nature of the differences you
are trying to capture in relation to a particular variable
(examples below). As summed up in Table 11.1, there are four
basic measurement scales that become respectively more
precise: nominal, ordinal, interval and ratio. The precision of
each type is directly related to the statistical tests that can be
performed on them. The more precise the measurement scale,
the more sophisticated the statistical analysis you can do.
NOMINAL Numbers are arbitrarily assigned to represent
categories. These numbers are simply a coding scheme and have
no numerical significance (and therefore cannot be used to
perform mathematical calculations). For example, in the case of
gender you would use one number for female, say 1, and
another for male, 2. In an example used later in this chapter, the
variable ‘plans after graduation’ is also nominal with numerical
values arbitrarily assigned as 1 = vocational/technical training,
2 = university, 3 = workforce, 4 = travel abroad, 5 = undecided
and 6 = other. In nominal measurement, codes should not
overlap (they should be mutually exclusive) and together should
cover all possibilities (be collectively exhaustive). The main
function of nominal data is to allow researchers to tally
respondents in order to understand population distributions.
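To make this concrete, here is a minimal sketch in Python (rather than a stats package) of tallying nominal codes; the respondent data is invented for illustration:

```python
from collections import Counter

# Coding scheme for 'plans after graduation' (the codes from the text).
CODES = {1: "vocational/technical training", 2: "university", 3: "workforce",
         4: "travel abroad", 5: "undecided", 6: "other"}

# Made-up responses for ten hypothetical respondents.
responses = [2, 2, 3, 1, 2, 5, 3, 2, 4, 6]

# Tallying is the main legitimate operation on nominal data --
# averaging these arbitrary codes would be meaningless.
tally = Counter(CODES[r] for r in responses)
print(tally.most_common(1))  # the most frequent category
```

The tally gives the population distribution; any arithmetic beyond counting would mistake the codes for real numbers.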
ORDINAL This scale rank orders categories in some meaningful
way – there is an order to the coding. Magnitudes of difference,
however, are not indicated. Take for example, socio-economic
status (lower, middle, or upper class). Lower class may denote
less status than the other two classes but the amount of the
difference is not defined. Other examples include air travel
(economy, business, first class), or items where respondents are
asked to rank order selected choices (biggest environmental
challenges facing developed countries). Likert-type scales, in
which respondents are asked to select a response on a point
scale (for example, ‘I enjoy going to work’: 1 = strongly
disagree, 2 = disagree, 3 = neutral, 4 = agree, 5 = strongly
agree), are ordinal since a precise difference in magnitude
cannot be determined. Many researchers, however, treat Likert
scales as interval because doing so allows them to perform more
precise statistical tests. In most small-scale studies this is not
generally viewed as problematic.
INTERVAL In addition to ordering the data, this scale uses
equidistant units to measure difference. This scale does not,
however, have an absolute zero. An example here is date – the
year 2006 occurs 41 years after the year 1965, but time did not
begin in AD1. IQ is also considered an interval scale even
though there is some debate over the equidistant nature between
points.
RATIO Not only is each point on a ratio scale equidistant, there
is also an absolute zero. Examples of ratio data include age,
height, distance and income. Because ratio data are ‘real’
numbers all basic mathematical operations can be performed.
Descriptive statistics
Descriptive statistics are used to describe the basic features of a
data set and are key to summarizing variables. The goal is to
present quantitative descriptions in a manageable and
intelligible form. Descriptive statistics provide measures of
central tendency, dispersion and distribution shape. Such
measures vary by data type (nominal, ordinal, interval, ratio)
and are standard calculations in statistical programs. In fact,
when generating the example tables for this section, I used the
statistics program SPSS. After entering my data, I generated my
figures by going to ‘Analyze’ on the menu bar, clicking on
‘Descriptive Statistics’, clicking on ‘Frequencies’, and then
defining the statistics and charts I required.
Measuring central tendency
One of the most basic questions you can ask of your data
centres on central tendency. For example, what was the average
score on a test? Do most people lean left or right on the issue of
abortion? Or what do most people think is the main problem
with our health care system? In statistics, there are three ways
to measure central tendency (see Table 11.2): mean, median and
mode – and the example questions above respectively relate to
these three measures. Now while measures of central tendency
can be calculated manually, all stats programs can automatically
calculate these figures.
MEAN The mathematical average. To calculate the mean, you
add the values for each case and then divide by the number of
cases. Because the mean is a mathematical calculation, it is
used to measure central tendency for interval and ratio data, and
cannot be used for nominal or ordinal data where numbers are
used as ‘codes’. For example, it makes no sense to average the
1s, 2s and 3s that might be assigned to Christians, Buddhists
and Muslims.
MEDIAN The mid-point of a range. To find the median you
simply arrange values in ascending (or descending) order and
find the middle value. This measure is generally used in ordinal
data, and has the advantage of negating the impact of extreme
values. Of course, this can also be a limitation given that
extreme values can be significant to a study.
MODE The most common value or values noted for a variable.
Since nominal data is categorical and cannot be manipulated
mathematically, it relies on mode as its measure of central
tendency.
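All three measures are one-liners in any stats tool; as an illustrative sketch in Python (the ages are made up):

```python
import statistics

ages = [8, 9, 10, 11, 12, 12, 13, 14, 17]  # invented ratio data

mean = statistics.mean(ages)      # valid for interval and ratio data
median = statistics.median(ages)  # valid for ordinal data and up
mode = statistics.mode(ages)      # the only option for nominal data

print(mean, median, mode)
```

Note how the mean uses every value, the median only the middle of the ordered list, and the mode only the frequency of each value.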
Measuring dispersion
While measures of central tendency are a standard and highly
useful form of data description and simplification, they need to
be complemented with information on response variability. For
example, say you had a group of students with IQs of 100, 100,
95 and 105, and another group with IQs of 60, 140, 65 and 135.
The central tendency, in this case the mean, of both groups
would be 100, yet the dispersion around that mean would demand
that you design curriculum and engage learning with each group
quite differently. There are several ways to
understand dispersion, which are appropriate for different
variable types (see Table 11.3). As with central tendency,
statistics programs will automatically generate these figures on
request.
RANGE This is the simplest way to calculate dispersion, and is
simply the highest minus the lowest value. For example, if your
respondents ranged in age from 8 to 17, the range would be 9
years. While this measure is easy to calculate, it is dependent on
extreme values alone, and ignores intermediate values.
QUARTILES This involves subdividing your range into four
equal parts or ‘quartiles’ and is a commonly used measure of
dispersion for ordinal data, or data whose central tendency is
measured by a median. It allows researchers to compare the
various quarters or present the inner 50% as a dispersion
measure. This is known as the inter-quartile range.
VARIANCE This measure uses all values to calculate the spread
around the mean, and is actually the ‘average squared deviation
from the mean’. It needs to be calculated from interval and ratio
data and gives a good indication of dispersion. It’s much more
common, however, for researchers to use and present the square
root of the variance which is known as the standard deviation.
STANDARD DEVIATION This is the square root of the
variance, and is the basis of many commonly used statistical
tests for interval and ratio data. As explained below, its power
comes to the fore with data that sits under a normal curve.
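Using the IQ example above, a quick sketch in Python (the statistics module’s ‘population’ functions match the ‘average squared deviation from the mean’ definition used here):

```python
import statistics

group_a = [100, 100, 95, 105]   # the first IQ group from the text
group_b = [60, 140, 65, 135]    # the second -- same mean, very different spread

range_b = max(group_b) - min(group_b)   # simplest measure: highest minus lowest
var_a = statistics.pvariance(group_a)   # average squared deviation from the mean
var_b = statistics.pvariance(group_b)
sd_a = statistics.pstdev(group_a)       # standard deviation: square root of variance
sd_b = statistics.pstdev(group_b)

q1, q2, q3 = statistics.quantiles(group_b, n=4)  # quartile cut points
iqr = q3 - q1                                    # inter-quartile range
```

Both groups have a mean of 100, but the second group’s variance (and hence standard deviation) is over a hundred times larger, which is exactly the information the mean alone hides.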
Measuring the shape of the data
To fully understand a data set, central tendency and dispersion
need to be considered in light of the shape of the data, or how
the data is distributed. As shown in Figure 11.3, a normal curve
is ‘bell-shaped’; the distribution of the data is symmetrical,
with the mean, median and mode all converged at the highest
point in the curve. If the distribution of the data is not
symmetrical, it is considered skewed. In skewed data the mean,
median and mode fall at different points.
Kurtosis characterizes how peaked or flat a distribution is
compared to ‘normal’. Positive kurtosis indicates a relatively
peaked distribution, while negative kurtosis indicates a flatter
distribution.
The significance in understanding the shape of a distribution is
in the statistical inferences that can be drawn. As shown in
Figure 11.4, a normal distribution is subject to a particular set
of rules regarding the significance of a standard deviation.
Namely that:
68.2% of cases will fall within one standard deviation of the
mean
95.4% of cases will fall within two standard deviations of the
mean
99.7% of cases will fall within three standard deviations of the
mean
So if we had a normal curve for the sample data relating to ‘age
of participants’ (mean = 12.11, s.d. = 2.22 – see Boxes 11.2,
11.3), 68.2% of participants would fall between the ages of 9.89
and 14.33 (12.11–2.22 and 12.11+2.22).
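You can check these proportions yourself; a sketch in Python using the sample mean and standard deviation quoted above:

```python
from statistics import NormalDist

# Sample figures from the text: mean age 12.11, standard deviation 2.22.
ages = NormalDist(mu=12.11, sigma=2.22)

def within(k):
    """Proportion of a normal distribution within k standard deviations."""
    return ages.cdf(12.11 + k * 2.22) - ages.cdf(12.11 - k * 2.22)

print(round(within(1), 3))  # ≈ 0.683
print(round(within(2), 3))  # ≈ 0.954
print(round(within(3), 3))  # ≈ 0.997
```

The proportions depend only on k, not on the particular mean and standard deviation, which is why the rules hold for any normal curve.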
These rules of the normal curve allow for the use of quite
powerful statistical tests and are generally used with interval
and ratio data (sometimes called parametric tests). For data that
does not follow the assumptions of a normal curve (nominal and
ordinal data), the researcher needs to call on non-parametric
statistical tests in making inferences.
Table 11.4 shows the curve, skewness and kurtosis of our
sample data set.
Inferential statistics
While the goal of descriptive statistics is to describe and
summarize, the goal of inferential statistics is to draw
conclusions that extend beyond immediate data. For example,
inferential statistics can be used to estimate characteristics of a
population from sample data, or to test various hypotheses
about the relationship between different variables. Inferential
statistics allow you to assess the probability that an observed
difference is not just a fluke or chance finding. In other words,
inferential statistics is about drawing conclusions that are
statistically significant.
Statistical significance
Statistical significance refers to a measure, or ‘p-value’, which
assesses the actual ‘probability’ that your findings are more
than coincidental. Conventional p-values are .05, .01, and .001,
which tell you that the probability your findings have occurred
by chance is 5/100, 1/100, or 1/1,000 respectively. Basically,
the lower the p-value, the more confident researchers can be
that findings are genuine. Keep in mind that researchers do not
usually accept findings that have a p-value greater than .05
because the probability that findings are coincidental or caused
by sampling error is too great.
Questions suitable to inferential statistics
It’s easy enough to tell students and new researchers that they
need to interrogate their data, but it doesn’t tell them what they
should be asking. Box 11.4 offers some common questions
which, while not exhaustive, should give you some ideas for
interrogating real-world data using inferential statistics.
Box 11.4 Questions for Interrogating Quantitative Data using
Inferential Statistics
How do participants in my study compare to a larger
population? These types of question compare a sample with a
population. For example, say you are conducting a study of
patients in a particular coronary care ward. You might ask if the
percentage of males or females in your sample, or their average
age, or their ailments are statistically similar to coronary care
patients across the country. To answer such questions you will
need access to population data for this larger range of patients.
Are there differences between two or more groups of
respondents? Questions that compare two or more groups are
very common and are often referred to as ‘between subject’. I’ll
stick with a medical theme here … For example, you might ask
if male and female patients are likely to have similar ailments;
or whether patients of different ethnic backgrounds have
distinct care needs; or whether patients who have undergone
different procedures have different recovery times.
Have my respondents changed over time?
These types of question involve before and after data with
either the same group of respondents or respondents who are
matched by similar characteristics. They are often referred to as
‘within subject’. An example of this type of question might be,
‘have patients’ dietary habits changed since undergoing bypass
surgery?’
Is there a relationship between two or more variables?
These types of question can look for either correlations (simply
an association) or cause and effect. Examples of correlation
questions might be, ‘Is there an association between time spent
in hospital and satisfaction with nursing staff?’ or, ‘Is there a
correlation between patient’s age and the medical procedure
they have undergone?’ Questions looking for cause and effect
differentiate dependent and independent variables. For example,
‘Does satisfaction depend on length of stay?’ or, ‘Does stress
depend on adequacy of medical insurance?’ Cause and effect
relationships can also look to more than one independent
variable to explain variation in the dependent variable. For
example, ‘Does satisfaction with nursing staff depend on a
combination of length of stay, age and severity of medical
condition?’
(I realize that all of these examples are drawn from the medical
or nursing fields, but application to other respondent groups is
pretty straightforward. In fact, a good exercise here is to try to
come up with similar types of question for alternative
respondent groups.)
Selecting the right statistical test
There is a baffling array of statistical tests out there that can
help you answer the types of question highlighted in Box 11.4.
And programs such as SPSS and SAS are capable of running
such tests without you needing to know the technicalities of
their mathematical operations. The problem, however, is
knowing which test is right for your particular application.
Luckily, you can turn to a number of test selectors now
available on the Internet (see Bill Trochim’s test selector at
www.socialresearchmethods.net/kb/index.htm) and through
programs such as MODSTAT and SPSS.
But even with the aid of such selectors (including the tabular
one I offer below), you still need to know the nature of your
variables (independent/dependent); scales of measurement
(nominal, ordinal, interval, ratio); distribution shape (normal or
skewed); the types of questions you want to ask; and the types
of conclusions you are trying to draw.
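As a toy illustration only (it is no substitute for Table 11.5 or a proper test selector, and the mapping shown covers just a few common cases), the selection logic boils down to a lookup on question type and measurement scale:

```python
# Illustrative mapping only -- real selection also depends on distribution
# shape, sample size and study design.
TEST_GUIDE = {
    ("compare two groups", "nominal"): "chi-square test",
    ("compare two groups", "interval/ratio"): "independent-samples t-test",
    ("before and after", "interval/ratio"): "paired-samples t-test",
    ("relationship", "interval/ratio"): "Pearson correlation",
    ("relationship", "ordinal"): "Spearman rank correlation",
}

def suggest_test(question_type, scale):
    """Return a commonly used test for the given question type and scale."""
    return TEST_GUIDE.get((question_type, scale), "consult a full test selector")

print(suggest_test("relationship", "ordinal"))  # Spearman rank correlation
```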
Table 11.5 covers the most common tests for univariate (one
variable), bivariate (two variable) and multivariate (three or
more variable) data. The table can be read down the first
column for univariate data (the column provides an example of
the data type, its measure of central tendency, dispersion and
appropriate tests for comparing this type of variable to a
population). It can also be read as a grid for exploring the
relationship between two or more variables. Once you know
what tests to conduct, your statistical software will be able to
run the analysis and assess statistical significance.
Presenting quantitative data
When it comes to presenting quantitative data, there can be a
real temptation to offer graphs, charts and tables for every
single variable in your study. So the first key to effective data
presentation is to resist this temptation, and actively determine
what is most important in your work. Your findings need to tell
a story related to your aims, objectives and research questions.
Now when it comes to how your data should be presented, I
think there is one golden rule: it should not be hard work for the
reader. Most people’s eyes glaze over when it comes to
statistics, so your data should not be hard to decipher. You
should not need to be a statistician to understand it. Your
challenge is to graphically and verbally present your data so
that meanings are clear. Any graphs and tables you present
should ease the task for the reader. So while you need to include
adequate information, you don’t want to go into information
overload. Box 11.5 covers the basics of graphic presentation,
while Box 11.6 looks at the presentation of quantitative data in
tabular form.
QUALITATIVE DATA ANALYSIS (QDA)
‘Not everything that can be counted counts, and not everything
that counts can be counted.’
– Albert Einstein
I’d always thought of Einstein as an archetypal ‘scientist’. But
I’ve come to find that he is archetypal only if this means
scientists are extraordinarily witty, insightful, political, creative
and open-minded. Which, contrary to the stereotype, is exactly
what I think is needed for groundbreaking advances in science.
So when Einstein himself recognizes the limitations of
quantification, it is indeed a powerful endorsement for working
with qualitative data.
Yes, using statistics is a clearly defined and effective way of
reducing and summarizing data. But statistics rely on the
reduction of meaning to numbers, and there are two concerns
here. First, meanings can be both intricate and complex, making
it difficult to reduce them to numbers. Second, even with such a
reduction, there can be a loss of ‘richness’ associated with the
process.
These two concerns have led to the development of a plethora of
qualitative data analysis (QDA) approaches that aim to create
new understandings by exploring and interpreting complex data
from sources such as interviews, group discussions,
observation, journals, archival documents etc., without the aid
of quantification. But the literature related to these approaches
is quite thick, and wading through it in order to find appropriate
and effective strategies can be a real challenge. Many students
end up: (1) spending a huge amount of time attempting to work
through the vast array of approaches and associated literature;
(2) haphazardly selecting one method that may or may not be
appropriate to their project; (3) conducting their analysis
without any well-defined methodological protocols; or (4) doing
a combination of the above.
So while we know that there is inherent power in words and
images, the challenge is working through options for managing
and analysing qualitative data that best preserve richness yet
crystallize meaning. And I think the best way to go about this is
to become familiar with both the logic and methods that
underpin most QDA strategies. Once this foundation is set,
working through more specific, specialist QDA strategies
becomes much easier.
Logic and methods
Given that we have to make sense of complex, messy and
chaotic qualitative data in the real-world everyday, you
wouldn’t think it would be too hard to articulate a rigorous
QDA process. But the analysis we do on a day-to-day basis
tends to be at the subconscious level, and is a process so full of
rich subtleties (and subjectivities) that it is actually quite
difficult to articulate and formalize.
There is some consensus, however, that the best way to move
from raw qualitative data to meaningful understanding is
through data immersion that allows you to uncover and discover
themes that run through the raw data, and by interpreting the
implication of those themes for your research project.
Discovering and uncovering
As highlighted in Figure 11.5, moving from raw data, such as
transcripts, pictures, notes, journals, videos, documents, etc., to
meaningful understanding is a process reliant on the
generation/exploration of relevant themes; and these themes can
either be discovered or uncovered. So what do I mean by this?
Well, you may decide to explore your data inductively from the
ground up. In other words, you may want to explore your data
without a predetermined theme or theory in mind. Your aim
might be to discover themes and eventuating theory by allowing
them to emerge from the data. This is often referred to as the
production of grounded theory or ‘theory that was derived from
data systematically gathered and analyzed through the research
process’ (Strauss and Corbin 1998, p. 12).
In order to generate grounded theory, researchers engage in a
rigorous and iterative process of data collection and ‘constant
comparative’ analysis that finds raw data brought to
increasingly higher levels of abstraction until theory is
generated. This method of theory generation (which shares the
same name as its product – grounded theory) has embedded
within it very well-defined and clearly articulated techniques
for data analysis (see readings at the end of the chapter). And it
is precisely this clear articulation of grounded theory techniques
that has seen them become central to many QDA strategies.
It is important to realize, however, that discovering themes is
not the only QDA option. You may have predetermined (a
priori) themes or theory in mind – they might have come from
engagement with the literature; your prior experiences; the
nature of your research question; or from insights you had while
collecting your data. In this case, you are trying to deductively
uncover data that supports predetermined theory. In a sense, you
are mining your data for predetermined categories of
exploration in order to support ‘theory’. Rather than theory
emerging from raw data, theory generation depends on
progressive verification.
While grounded theory approaches are certainly a mainstay in
QDA, researchers who only engage in grounded theory
literature can fall prey to the false assumption that all theory
must come inductively from data. This need not be the case. The
need to generate theory directly from data will not be
appropriate for all researchers, particularly those wishing to test
‘a priori’ theories or mine their data for predetermined themes.
Mapping themes
Whether themes are to be discovered or uncovered, the key to
QDA is rich engagement with the documents, transcripts,
images, texts, etc. that make up a researcher’s raw data. So how
do you begin to engage with data in order to discover and
uncover themes in what is likely to be an unwieldy raw data
set?
Well one way to look at it might be as a rich mapping process.
Technically, when deductively uncovering data related to ‘a
priori’ themes the map would be predetermined. However, when
inductively discovering themes using a grounded theory
approach the map would be built as you work through your data.
In practice, however, the distinction is unlikely to be that clear,
and you will probably rely on both strategies to build the richest
map possible.
Figure 11.6 offers a map exploring poor self-image in young
girls built through both inductive and deductive processes. That
is, some initial ideas were noted, but other concepts were added
and linked as data immersion occurred.
It’s also worth noting that this type of mind map can be easily
converted to a ‘tree structure’ that forms the basis of analysis in
many QDA software programs, including NUD*IST (see Figure
11.7).
Delving into data
When it comes to QDA, delving into your data generally occurs
as it is collected and involves: (1) reading and re-reading; (2)
annotating growing understanding in notes and memos; (3)
organizing and coding data; and (4) searching for patterns in a
bid to build and verify theories.
The process of organizing and coding can occur at a number of
levels and can range from highly structured, quasi-statistical
counts to rich, metaphoric interpretations. Qualitative data can
be explored for the words that are used; the concepts that are
discussed; the linguistic devices that are called upon; and the
nonverbal cues noted by the researcher.
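The organizing-and-coding step can be pictured as building an index from codes to data segments; a minimal sketch in Python with invented transcript fragments and codes:

```python
# Each segment of raw data is tagged with one or more codes.
# Transcript text and code names here are invented for illustration.
segments = [
    ("I just never feel good enough at school.", ["self-image", "school"]),
    ("My mum says I spend too long on my phone.", ["family", "media"]),
    ("The girls in magazines don't look like me.", ["self-image", "media"]),
]

index = {}
for text, codes in segments:
    for code in codes:
        index.setdefault(code, []).append(text)

# All segments coded 'self-image' can now be retrieved and compared --
# the manual equivalent of what QDA software automates.
print(len(index["self-image"]))
```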
EXPLORING WORDS Words can lead to themes through
exploration of their repetition, or through exploration of their
context and usage (sometimes called key words in context).
Specific cultural connotations of particular words can also lead
to relevant themes. Patton (2001) refers to this as ‘indigenous
categories’, while Strauss and Corbin (1998) refer to it as ‘in
vivo’ coding.
To explore word-related themes researchers systematically
search a text to find all instances of a particular word (or
phrase) making note of its context and meaning. Several
software packages, such as DICTION or CONCORDANCE, can
quickly and efficiently identify and tally the use of particular
words and even present such findings in a quantitative manner.
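A key-words-in-context search is straightforward to sketch; here is a minimal Python version with an invented snippet of transcript (packages such as CONCORDANCE do this at scale):

```python
def kwic(text, keyword, window=3):
    """List each occurrence of keyword with `window` words of context."""
    words = text.lower().split()
    hits = []
    for i, w in enumerate(words):
        if w.strip(".,!?") == keyword:
            left = " ".join(words[max(0, i - window):i])
            right = " ".join(words[i + 1:i + 1 + window])
            hits.append((left, w, right))
    return hits

text = "I worry about my body. My body never looks right in photos."
for left, word, right in kwic(text, "body"):
    print(f"{left} [{word}] {right}")
```

Seeing every use of a word alongside its neighbours is what lets context and connotation, not just frequency, drive the themes.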
EXPLORING CONCEPTS Concepts can be deductively
uncovered by searching for themes generated from: the
literature; the hypothesis/research question; intuitions; or prior
experiences. Concepts and themes may also be derived from
‘standard’ social science categories of exploration, for example
power, race, class, gender etc. On the other hand, many
researchers will look for concepts to emerge inductively from
their data without any preconceived notions. With
predetermined categories, researchers need to be wary of
‘fitting’ their data to their expectations, and not being able to
see alternate explanations. However, purely inductive methods
are also subject to bias since unacknowledged subjectivities can
impact on the themes that emerge from the data.
To explore concepts, researchers generally engage in line-by-
line or paragraph-by-paragraph reading of transcripts, engaging
in what grounded theory proponents refer to as ‘constant
comparison’. In other words, concepts and meaning are explored
in each text and then compared with previously analysed texts
to draw out both similarities and disparities (Glaser and Strauss
1967).
EXPLORING LITERARY DEVICES Metaphors, analogies and
even proverbs are often explored because of their ability to
bring richness, imagery and empathetic understanding to words.
These devices often organize thoughts and facilitate
understanding by building connections between speakers and an
audience. Once you start searching for such literary devices,
you’ll find they abound in both the spoken and written word.
Qualitative data analysts often use these rich metaphorical
descriptions to categorize divergent meanings of particular
concepts.
EXPLORING NONVERBAL CUES One of the difficulties in
moving from raw data to rich meaning is what is lost in the
process. And certainly the tendency in qualitative data
collection and analysis is to concentrate on words, rather than
the tone and emotive feeling behind the words, the body
language that accompanies the words, or even words not
spoken. Yet this world of the nonverbal can be central to
thematic exploration. If your raw data, notes or transcripts
capture nonverbal cues, these can lend significant meaning to
content and themes. Exploration of tone, volume, pitch and pace
of speech; the tendency for hearty or nervous laughter; the
range of facial expressions and body language used; and shifts
in any or all of these, can be central in a bid for meaningful
understanding.
Looking for patterns and interconnections
Once texts have been explored for relevant themes, the quest for
meaningful understanding generally moves to the relationships
that might exist between and amongst various themes. For
example, you may look to see if the use of certain words and/or
concepts is correlated with the use of other words and/or
concepts. Or you may explore whether certain words or
concepts are associated with a particular range of nonverbal
cues or emotive states. You might also look to see if there is a
connection between the use of particular metaphors and
nonverbal cues. And of course, you may want to explore how
individuals with particular characteristics vary on any of these
dimensions.
Interconnectivities are assumed to be both diverse and complex
and can point to the relationship between conditions and
consequences, or how the experiences of the individual relate to
more global themes. Conceptualization and abstraction can
become quite sophisticated and can be linked to both model and
theory building.
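Counting which themes co-occur in the same coded segment is one simple way to surface such interconnections; a sketch in Python with invented codes:

```python
from itertools import combinations
from collections import Counter

# Each set holds the codes applied to one segment of data.
# The codes and segments are invented for illustration.
coded_segments = [
    {"self-image", "media"},
    {"self-image", "school"},
    {"self-image", "media"},
    {"family", "media"},
]

co_occurrence = Counter()
for codes in coded_segments:
    for pair in combinations(sorted(codes), 2):
        co_occurrence[pair] += 1

# The most frequent pairing hints at a relationship worth exploring further.
print(co_occurrence.most_common(1))
```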
QDA software
It wasn’t long ago that QDA was done ‘by hand’ with elaborate
filing, cutting, sticky notes, markers, etc. But quality software
(as highlighted in Box 11.3) now abounds and ‘manual
handling’ is no longer necessary. QDA programs can store,
code, index, map, classify, notate, find, tally, enumerate,
explore, graph, etc., etc. Basically, they can: (1) do all the
things you can do manually, but much more efficiently; and (2)
do things that manual handling of a large data set simply won’t
allow. And while becoming proficient at the use of such
software can mean an investment in time (and possibly money),
if you’re working with a large data set you’re likely to get that
time back.
Okay … if QDA programs are so efficient and effective, why
are they so inconsistently called on by researchers working with
qualitative data? Well, I think there are three answers here.
First, is a lack of familiarity – researchers may not be aware of
the programs, let alone what they can do. Second is that the
learning investment is seen as too large and/or difficult. Third,
researchers may realize, or decide, that they really don’t want to
do that much with their qualitative data; they may just want to
use it sparingly to back up a more quantitative study.
My advice? Well, you really need to think through the pros and
cons here. If you’re working with a small data set and you can’t
see any more QDA in your future, you may not think it will pay
to go down this path – manual handling might do the trick. But
if you are (a) after truly rigorous qualitative analysis; (b) have
to manage a large data set; or (c) see yourself needing to work
with qualitative data in the future, it’s probably worth battling
the learning curve. Not only is your research process likely to
be more rigorous, you will probably save a fair bit of time in
the long run.
To get started with QDA software, I would recommend talking
to other researchers or lecturers to find out what programs
might be most appropriate for your goals and data. I would also
have a look at relevant software sites on the Internet (see Box
11.3); there is a lot of information here and some sites even
offer trial programs. Finally, I’d recommend that you take
appropriate training courses. NUD*IST and NVivo are both
very popular, and short courses are often easy to find.
Specialist strategies
Up to this point, I’ve been treating QDA as a homogeneous
approach with a shared underlying logic and methods, and I haven’t
really discussed the distinct disciplinary and paradigmatic
approaches that do exist. But as mentioned at the start of this
section, the literature here is dense, and a number of distinct
approaches have developed over the past decades. Each has its
own particular goals, theory and methods … and each will have
varying levels of applicability to your own research. Now while
I would certainly recommend delving into the approaches that
resonate with you, it’s worth keeping in mind that you don’t
have to adopt just one approach. It is possible to draw insights
from various strategies in a bid to evolve an approach that best
cycles between your data and your own research agenda.
Table 11.6 may not be comprehensive enough to get you started
in any particular branch of qualitative data analysis, but it does
provide a comparative summary of some of the more commonly
used strategies. You can explore these strategies further by
delving into the readings offered at the end of the chapter.
Presenting qualitative data
I don’t think many books adequately cover the presentation of
qualitative data, but I think they should. New researchers often
struggle with the task and end up falling back on what they are
most familiar with, or what they can find in their methods books
(which are often quantitatively biased). So while these
researchers may only have three cases, five documents, or eight
interviews, they can end up with some pseudo-quantitative
analysis and presentation that includes pie charts, bar graphs
and percentages. For example, they may say 50% feel … and
20% think, when they’re talking about a total of only five
people.
Well, this isn’t really where the power of qualitative data lies.
The power of qualitative data is in the actual words and images
themselves – so my advice is to use them. If the goal is the rich
use of words – avoid inappropriate quantification, and preserve
and capitalize on language.
So how do you preserve, capitalize on and present words and
images? Well, I think it’s about storytelling. You really have to
have a clear message, argument or storyline, and you need to
selectively use your words and/or images in a way that gives
weight to that story. The qualitative data you present should be
pointed, powerful and able to draw your readers in.
Qualitative Data Analysis Grading Rubric | CJUS750_B02_202240

Identifies Main Issues/Problems (24 pts)
- Advanced (24 to >21.0 pts): Identifies and demonstrates a sophisticated understanding of the main issues/problems in the study.
- Proficient (21 to >19.0 pts): Identifies and demonstrates an accomplished understanding of most of the issues/problems.
- Developing (19 to >0.0 pts): Identifies and demonstrates acceptable understanding of some of the issues/problems in the study.
- Not Present (0 pts)

Analysis and Evaluation of Issues/Problems (23 pts)
- Advanced (23 to >21.0 pts): Presents an insightful and thorough analysis of all identified issues/problems; includes all necessary calculations.
- Proficient (21 to >19.0 pts): Presents a thorough analysis of most of the issues identified; missing some necessary calculations.
- Developing (19 to >0.0 pts): Presents a superficial or incomplete analysis of some of the identified issues; omits necessary calculations.
- Not Present (0 pts)

Recommendations (23 pts)
- Advanced (23 to >21.0 pts): Supports diagnosis and opinions with strong arguments and well-documented evidence; presents a balanced and critical view; interpretation is both reasonable and objective.
- Proficient (21 to >19.0 pts): Supports diagnosis and opinions with limited reasoning and evidence; presents a somewhat one-sided argument; demonstrates little engagement with ideas presented.
- Developing (19 to >0.0 pts): Little or no action suggested and/or inappropriate solutions proposed to the issues in the study.
- Not Present (0 pts)

APA, Spelling & Grammar (10 pts)
- Advanced (10 to >9.0 pts): Limited to no APA, spelling or grammar mistakes.
- Proficient (9 to >7.0 pts): Minimal APA, spelling and/or grammar mistakes.
- Developing (7 to >0.0 pts): Noticeable APA, spelling and grammar mistakes.
- Not Present (0 pts)

Page Length (10 pts)
- Advanced (10 to >9.0 pts): 5-7 double-spaced pages of content (not counting the title page or references).
- Proficient (9 to >7.0 pts): 1 page more or less than required length.
- Developing (7 to >0.0 pts): More than 1 page more or less than required length.
- Not Present (0 pts)

Sources (10 pts)
- Advanced (10 to >9.0 pts): Citation of a journal article that reports a qualitative study. All web sites utilized are authoritative.
- Proficient (9 to >7.0 pts): Citation of a journal article that reports a qualitative study. Most web sites utilized are authoritative.
- Developing (7 to >0.0 pts): Citation of a journal article that reports a qualitative study. Not all web sites utilized are credible, and/or sources are not current.
- Not Present (0 pts)

Total Points: 100
tend to treat “text as a window into human experience” (Ryan & Bernard, 2000, p. 769) and use thematic analysis procedures to deal with data through coding and segregating data for further analysis, description, and interpretation. Thematic analysis, the approach most widely used in ethnographic work, receives primary attention in this chapter, but for comparison, several other forms of data analysis are introduced as well.

Varying Forms of Analysis

The form of analysis you use is linked to your methodology, research goals, data collection methods, and so on. This chapter does not attempt to explain the multiple approaches to data analysis that are available, but four different approaches are presented to introduce how and why analysis procedures may vary. Read more widely on modes that resonate with you, and on data analysis in general. This section begins with an introduction to thematic analysis, the kind of data analysis focused upon throughout the rest of the chapter, before briefly describing conversation analysis from linguistic traditions; narrative analysis, which combines linguistic and sociological traditions; and semiotics from sociological traditions.

Thematic Analysis

Thematic analysis—searching for themes and patterns—is used frequently in anthropological, educational, and other qualitative work. An important aspect of thematic analysis is segregating data into categories by codes or labels. The coded clumps of data are then analyzed in a variety of ways. You might, for example, look at all the data coded the same way for one case and see how it changes over time or varies in relationship to other factors, for example, across events. You can also “explore how categorizations or thematic ideas represented by the codes
vary from case to case” (Gibbs, 2007, p. 48). Cases might refer to different events, settings, participants, or policies. Making comparisons is an analytical step in identifying patterns within a particular theme. The goal of thematic analysis is to arrive at a more nuanced understanding of some social phenomenon through understanding the processes that tend to involve that phenomenon as well as the perceptions, values, and beliefs of people toward it. Some researchers, such as those working with grounded theory methodology, use the search for themes and patterns to build theory.

Looking for patterns tends to focus attention on unifying aspects of the culture or setting, on what people usually do, with whom they usually interact, and so on. Although thematic analysis searches for patterns, it is not about stipulating the norm. A strength of thematic analysis is its ability to help reveal underlying complexities as you seek to identify tensions and distinctions, and to explain where and why people differ from a general pattern. Thematic analysis receives more discussion later on.

Conversation Analysis

Conversation analysis is a powerful form of analysis if your research goals are to explore how meaning gets communicated and negotiated through naturally occurring conversations:

Conversation analysis studies the various practices adopted by conversational participants during ordinary everyday talk. This may include how participants negotiate overlaps and interruptions, how various failures (such as hearing and understanding problems) are dealt with during the interaction and how conversations are opened and terminated. (Bloor & Wood, 2006, p. 39)

Conversation analysis might be used, for example, in a study of a hospital implementing an interprofessional teamwork program
to improve patient safety. The researcher might use conversation analysis to inform the program’s development through a focus on how doctors, nurses, technicians, and aides talk with each other in specific patient-related situations, and on what kinds of meanings are communicated and negotiated through that talk.

Data for conversation analysis studies tend to come from recordings of everyday occurrences, not from interviews. The researcher focuses on details within the conversations—from time intervals between utterances to stress on certain words—and employs a system of transcription that uses various symbols to indicate nonverbal aspects of a conversation. Conversation analysis developed out of a form of interpretative research called ethnomethodology, a methodology that focuses on how people make sense of everyday life and the procedures they use to accomplish taken-for-granted interactions such as “trusting, agreeing, negotiating” (Schwandt, 2007, p. 98). Frequently, video recording is used as a data-gathering tool to document some aspect of everyday life, and the videos are studied and analyzed frame by frame.

Narrative Analysis

If your research goal is to understand how participants construct meaning from their experiences and/or how they structure the narrating or telling of those experiences, then you will want to know about narrative analysis strategies. Research questions tend to be those that “explore either the structure of narratives or the specific experiences of particular events, such as marriage breakdown or finding out information that is life-changing; undergoing procedures (social/medical); or participating in particular programs” (Grbich, 2013, p. 216).

Narratives may be collected in situ by voice or video recordings or through interviews. If obtained through interviews, the interviewer generally asks broad, open-ended questions such as “Tell me about . . .” and then allows respondents to tell their
stories with as little interruption as possible. Rather than dissect these stories into themes and patterns, the analysis process is concerned with both the story and the telling of the story. An example would be a research project that seeks to understand how mothers who have had a child die have made sense of that loss. The researcher could take a sociolinguistic or a sociocultural narrative analysis approach to the data. Even stronger would be using both.

The sociocultural approach focuses on the close reading of the narratives as told. For example, if you have conducted interviews with women who have suffered the loss of a child, you would read and reread transcripts of each narrative and make note of the events included in each story; the feelings and reactions expressed; the meanings each woman made of her story; and any explanations (Gibbs, 2007). You would then compare participants’ narratives, noting similar and different events and sense making. You would also work to embed the narratives in or link the stories to the cultural and political context of participants (Grbich, 2013).

The sociolinguistic approach focuses on the linguistic and rhetorical forms of telling the stories. You might analyze the narratives by how the women began their stories, how they ended them, and what made up the middle. You might consider the dramatic style of tales. Narratives tend to fit one or more particular dramatic styles: tragedy, satire, romance, comedy (Gibbs, 2007). If all the stories of your narrators were told in more or less the same dramatic style, then you would reflect upon why that might be so for the particular group of women interviewed. If the stories had very different structures, you would reflect upon that and try to figure out why. Gubrium and Holstein (2009) make the point that people’s narratives often bear “diverse plot structures and themes” that go unnoticed “unless the researcher is aware of compositional options at the start” (p. 69). The narrative analyst looks at how the interviewee links experiences and circumstances together to make meaning, realizing also that circumstances do not
determine how the story will be told or the meaning that is made of it.

Drawing from sociological traditions, Gubrium and Holstein (2009) emphasize the need in narrative work to go beyond the transcript. The analyst must also consider how the context in which the narrator tells the story influences what is told and how it is told. Who asks the questions that invite a story? How are some stories discouraged or silenced? For example, stories my father told me about his participation in World War II through interviews I conducted for the Library of Congress Veteran Project are likely to be different tellings than when he gathered with other World War II vets in Washington, D.C., on Veterans Day in 2008. Observations of the context are important for situating and interpreting the narratives. Gubrium and Holstein (2009) describe narrative ethnography as “a method of procedure and analysis involving the close scrutiny of circumstances, their actors, and actions in the process of formulating and communicating accounts. This requires direct observation, with decided attention to story formation” (p. 22).

Researchers across the social science disciplines use narrative analysis, but often for different purposes. As Bloor and Wood (2006) state, “Linguists might examine the internal structure of narratives, psychologists might focus on the process of recalling and summarizing stories, and anthropologists might look at the function of stories cross-culturally” (p. 119).

Semiotics

Semiotics draws from linguistics and communications sciences and seeks to understand how people communicate through signs and symbols. Semiotics looks less at what participants perceive or what they believe and more at how specific beliefs or attitudes get into their heads. For example, why might long-distance bus travel in the United States be perceived as a possibly dangerous mode of travel? Why are foods labeled
“organic” perceived as good? Why is economic development often seen as a sign of progress? Semiotic analysis is appropriate for research that asks questions of cultural belief systems or of how certain kinds of information (such as identity) get conveyed.

Semiotics focuses on basically anything that possesses information. Written and oral texts obviously make use of signs that convey information, but a sign could also be a red hat, a pierced tongue, or a bag of tamales in contexts where each conveys some meaning. For something to be a sign, there has to be a signifier (red hat), something that carries the message, and the signified, the concept that is conveyed (member of a Red Hat Society). In semiotic analysis, the focus is on how signs create or evoke meaning in certain contexts. An integrated system of signs produces a social code. “Semiotics aims to uncover the dynamics beyond surface meanings or shallow descriptions and to articulate underlying implications” (Madison, 2012, p. 76). It is concerned not only with what a sign denotes or represents, but also with what the sign connotes or means in particular cultural contexts.

For example, an undergraduate student undertook a semiotic analysis of student groups on campus. She conducted interviews to obtain perspectives on how students group themselves and each other, but much of her work consisted of observations of students—their clothing, ornamentations, and interactional behavior. She became particularly intrigued with distinct ways in which some groups of students used particular signs and symbols to communicate belonging to or differing from other students.

Semiotic analysts may consider visual signs (e.g., use of certain colors), linguistic signs (use of certain words), and aural signs (use of sound, such as tone of voice). They look at who is doing the communication and who are the intended recipients. They
look at how the communication is structured and at what that structure conveys. And they might look at binary oppositions; that is, saying that one kind of cookie is “organic” implies that all the others without that label are not. Finally, they look at the codes or unspoken rules and conventions that structure and link the signs to the meanings people make of them and at how these codes may change over time. In looking at how signs interrelate to construct meaning, Roland Barthes and others have inquired into ideologies and systems of power to suggest ways in which certain signs get taken as “natural”—as the way things are or should be—and are then manipulated in the interest of those in power. Various motivations (from maintaining the status quo to enticing purchase of a product) may be behind getting a sign to connote a desired image.

To conclude this section on varying forms of analysis, I present a visual metaphor. Consider how fiber artist Caroline Manheimer goes about piecing together scraps of fabric—her data. Making an analogy to thematic analysis, she may segregate (code) her fabric pieces based on certain criteria (such as size, color, shape) into groups and then join the bits together, creating a design in which one color or shape informs the selection of the adjoining fragment. In the process, she might cut some scraps into smaller pieces (splitting codes), or she might sew several pieces together (lumping codes) and then reorganize, creating patterns as exemplified in her art quilt Wanderings (Image 7.1). In Uniform Series #15 (Image 7.2), Manheimer’s process is more analogous to narrative analysis in that she uses fabric to evoke a story about a life in which the Catholic school uniform becomes the symbolic narrative thread. The pieces of fabric (data) are more holistic, and the telling (the narrative) is highlighted.

Your research purposes and questions influence not only what data you produce, but also how you make sense of the data you
have. Because much of this book is about ethnographic research techniques that help in understanding sociocultural aspects of some issue, group, or organization, the remainder of this chapter describes more fully procedures for thematic analysis.

Thematic Analysis: The Early Days

If you consistently reflect on your data, work to organize them, and try to discover what they have to tell you, your study will be more relevant and possibly more profound than if you view data analysis as a discrete step to be done after data collection. Working with your data while collecting them enables you to focus and shape the study as it proceeds and is part of the analytic process. O’Reilly (2005) gives an example of how she combined ongoing data analysis with data collection in her research on British migration to Spain:

I noticed that when two British people meet there they tend to kiss each other on both cheeks, as the Spanish traditionally do. This had never been written in my field notes because I hadn’t thought it important until I realised I had seen it happen a lot. I started to watch more closely. . . . I became aware that it is just the British migrants who do this and not the tourists, and that the migrants are more likely to do it when they are in the company of tourists. I then began to notice that in the company of tourists migrants would use the occasional Spanish word when talking to each other. This led me to thinking about the relationship between migrants and tourists, whereas until then I had focused more on the relationship between British and Spanish people. I thus began, during fieldwork, a closer analysis of migrants and tourists and their behaviour and attitudes towards each other that I would not have been able to do once I had left the field. I started to sort through the notes and data I had collected, assigning things to a new heading of “tourist/migrant relations,” and discovered many new occurrences I had not noticed before. (p. 187)
As O’Reilly notes, analytical connections need to be made while you are still collecting data to make full use of the possibilities of fieldwork. Writing memos and monthly reports, managing your data, and applying rudimentary coding schemes will help you to create new hunches and new questions, and to begin to learn from and keep track of the information you are receiving.

Memo Writing

The term memo originally referred to a specific noting process in grounded theory research (Glaser & Strauss, 1967). The term is now used widely in qualitative research to refer to jotting down reflective thoughts. By writing memos to yourself or keeping a reflective field log, you develop your thoughts; by getting your thoughts down as they occur, no matter how preliminary or in what form, you begin the analysis process. Memo writing also frees your mind for new thoughts and perspectives. “When I think of something,” said graduate student Jackie, “I write it down. I might forget about the thought, but I won’t lose it. It’s there later on to help me think.”

Throughout the research process, you work to remain open to new perspectives and thoughts. Gordon, another graduate student, stated, “Insights and new ways to look at the data arise while I am at work at other things. Probably the most productive places for these insights are on the long drive to class and during long, boring meetings when my mind is not actively engaged.” Capture analytic thoughts when they occur. Keeping a recorder in the car can help, as can jotting down your thoughts wherever you happen to be, day or night (if safe to do so).

Don’t just wait for thoughts to occur. Periodically, sit down to compose analytical memos. You might want to consider your research questions and write about ways in which your work is addressing the questions or posing new or different questions. Write about patterns you see occurring. If these patterns seemed
particularly neat and comprehensive, think about who might have differing perspectives, and make interview appointments with them. Think about exceptions to any pattern. Remember that you are looking for a range of perspectives, not for the generalization that can sum up behavior, beliefs, or values among a group of people. What are the negative cases to the patterns you observe? Consider when and why those cases might occur. If you continuously consider what you are learning, these early analytical thoughts can also guide you to the next set of observations or interviewees and interview questions.

See Figure 7.1 for an example of an analytic memo I wrote before coding data from fieldwork in seven academic art museums. I knew that I needed to address the broad theme of university/college culture, politics, and economic challenges, and I sat down to specifically note aspects of that theme—in no particular order—that were striking me as important. Writing the memo allowed me to perceive ways to further categorize or organize my data, and it sent me back to my data to further examine, for example, ways in which a school’s history and culture linked to the ways in which art and art museums were perceived at specific institutions.

In addition to memos to yourself, writing monthly field reports for committee members, family and friends, or the funding agency is a way to examine systematically where you are and where you should consider going. Keep the field reports short and to the point, so that they don’t become a burden for you to write or for your readers to read. Headings such as those I call “The Three P’s: Progress, Problems, and Plans” help you to review your work succinctly and plan realistically. In reflecting on both the research process and the data collected, you develop new questions, new hunches, and, sometimes, new ways of approaching the research. The reports also provide a way to communicate research progress to interested others, keeping them informed of the whats and hows and giving them a chance
for input along the way. Writing helps you think about your work, new questions, and connections. All this writing adds up: You will have many thoughts already on paper when you begin working on the first draft of your manuscript. These comments and thoughts recorded as field journal entries or as memos are links across your data that find their way into a variety of files later on.

Maintaining Some Semblance of Control

When anthropologists, sociologists and others talk about the “richness” of field data, this can be another way of expressing the sheer volume and complexity of information they collect and store. (Dicks, Mason, Coffey, & Atkinson, 2005, p. 2)

I am seeing that I will need to write about university politics and economic challenges. I don’t fully understand either, but they are so important for these campus art museums “at the side” of things, even when “at the heart.” The politics and economics section could be complemented by ways museums make a difference in the lives of the people who experience them, and that ranges from pathways of creativity to a meditative escape. . . .

So what are the things standing out for me? That reaching out to college/university audience and reaching out to community are not as distinct as they first appear. A school’s history and culture that support the arts is of utmost importance. That leadership to focus the museum’s mission and to get others onboard plus ability to fund-raise is crucial.
That art and art museums can be successfully used in creative and engaging ways across disciplines. That art museums can address cross-disciplinary/interdisciplinary/transdisciplinary in different ways—primarily through focus on curriculum or on exhibit. That the museum is a vibrant place of apprenticeship-type learning for students. They get to do things that often are done only by curators or registrars. They are a place of learning research skills, using archives, exploring cultural contexts and history. They also learn how to present and communicate their research through exhibitions, labels, text, websites, worksheets.

The museum is a resource for jobs, assistantships, and credit-generation for students. Not all are in art history or studio art. Some come from another discipline and “fall in love” with museum work. The art museum can have a strong link to education department. It can be a place where students see and practice interacting with K–12 on museum “tours.” It can be a forum for students to teach/lead hands-on art activities and thereby link with children and their families. It is a learning lab.

If a museum is “known” across the campus, it seems more likely to benefit from alumni donations. This goes back to the culture of the institution. Administrative support and belief is crucial. Economic cuts are part of the reality. If the admin. does not see the power and potential of the art museum, its budget will be cut. This may mean some restructuring—With whom is the museum allied? To whom does it report? How are FTEs generated? Can they be generated by the museum? What is the college/univ. mission for service beyond the campus? How does the museum get credit
for this role?

Figure 7.1 Example of an Analytic Memo*

*Memo was written during fieldwork in a study of campus art museums sponsored by the Samuel H. Kress Foundation.

Expect to be overwhelmed with the sheer volume—notebooks, photocopies, computer files, manila files, and documents—of data that accumulates during research. You truly acquire fat data; their sheer bulk is intimidating. Invariably, you will collect more data than you need. If they are not kept organized, the physical presence of so many data can lead you to procrastinate rather than face the task of focused analysis. Keeping up with data organization during the collection process also helps to ensure that you continually learn from the data and that you spread out the onerous tasks often associated with transforming data into computer files. Based upon his own experience, Gordon advised:

Transcribe notes onto the computer after each interview and observation. This admonition has been prompted by my discovery that a fairly substantial part of my data is not in readily usable form. I have had to go back after three months and type my notes because I find it hard to use data that I cannot read easily. Drudgery.

Keeping up with data involves transcribing interviews, observation notes, and field logs and memos to computer files, filing, creating new files, and reorganizing your files. Throughout, you continuously reflect upon what you are learning. Develop appropriate forms for recording data collection dates, sites, times, and people interviewed or observed, interviews transcribed, and so on (see Figure 7.2). In this way, an account is kept not only of your progress, but also of gaps, since you can easily see where and with whom you spent time and what else you need to do.
Your filing system builds and becomes increasingly complex as you collect data. You may begin with files organized by generic categories such as interview questions, people, and places. These files provide a way to keep track of information you need early on. As your data and experience grow, you will create relevant specific files on the social processes under investigation where you can keep notes from readings and your own analytic thoughts and observations. Early on, you may also begin files on topics such as titles, introductory and concluding chapters, and quotations. Each of these specific files serves a distinct purpose.

The title file, for example, contains your efforts to capture what your narrative may be about (Peshkin, 1985). Although your research project has a stated central focus (from your research proposal), you do not really know what particular story, of the several possibilities, you will tell. Conjuring up titles as the data are being collected is a way of trying out different emphases, all of which are candidates for ultimately giving form to your data. The titles become a way of getting your mind clear about what you are doing, in an overall sense, although the immediate application may be to concentrate your data collecting as you pursue the implications of a particular focus. In short, your search for a title is an act of interpretation. Titles capture what you see as germane to your study; but as your awareness of the promise of your study changes, so do your titles.

Figure 7.2 Sample Form for Keeping Interview Records

Files related to introductions and conclusions direct you to two obvious aspects of every study: its beginning and its ending. Regardless of the particular name that you give to your introductory and concluding chapters, you frame your study in the former—providing necessary context, background, and conceptualization. You effect closure in the concluding chapter
by summarizing, at the very least, and by explicating the meaning that you draw from your data as befits the points of your study, even if this means raising more questions or illuminating multiple perspectives rather than providing answers. It is never too early to reflect on the beginning and ending of your work, much as the preparation of these chapters may seem a distant dream when you are caught up in collecting data. Ideally, the existence of these files alerts you to what you might otherwise miss in the course of your study; they stimulate you to notions that, like your titles, are candidates for inclusion in your forthcoming text. Until the writing is actually done, however, you will not know which will be the surviving notions.

The quotation file contains snippets from readings that appear useful for one of the several roles that the relevant literature can play. Eventually, they will be sorted out among chapters, some as epigraphs, those quotations placed at the heads of chapters because they provide the reader with a useful key to what the chapter contains. Other quotations will be the authoritative sprinklings that your elders provide as you find your way through the novel ground of your own data. Through resourceful use of quotations, you acknowledge that the world has not been born anew on your terrain. The quotation file, like other files, is meant to be a reminder that reading should always inspire the question: What, if anything, do these words say about my study?

Files help you to store and organize your thoughts and those of others. Data analysis is the process of organizing data in light of your increasingly sophisticated judgments, that is, of the meaning-finding interpretations that you are learning to make about the shape of your study. Understanding that you are in a learning mode is most important; it reminds you that by each effort of data analysis, you enhance your capacity to further analyze.

Rudimentary Categorizations
This experience lends entirely new meaning to the term fat data. I can’t even imagine reading everything I have, but I know I need to. And coding it? All the while you’re writing, events are still evolving in the community and you can’t ignore that either. . . . So you really don’t stop collecting data, do you? You just start coding and writing. (Pugach, personal correspondence)

Marleen Pugach was still at her research site when she wrote this note, realizing her need to begin sorting her data. Classifying data into different groupings is a place to start. Through doing so, you develop a rudimentary coding scheme, the specifics of which are discussed in the next section. You might, for example, think about how you would categorize cases (people, schools, museums, etc.) in your fieldwork (Pelto, 2013). Doing so helps identify patterns in how cases are similar and different and frequently compels considerations of additional interview questions or of other individuals with whom you need to talk. For example, in my work in Saint Vincent, I began categorizing the young people whom I was interviewing as traditionalists, change agents, and those who were opting out of society in some way. I became particularly interested in the change agents and in trying to figure out what was different in their lives that made them optimistic, or at least determined to make a difference. This realization led to both new questions and interviews with others who fit my change agent category.

In another example, Cindy began a pilot study by observing meetings of a rural school board and interviewing its members. After fifteen hours of data collection, she decided to see what she might learn by coding the data she had. As a result, she created a new research statement:

My initial problem statement was so broad it was difficult to
work with. The process of coding and organizing my codes has helped me to determine an approach to solidify a new problem statement that will lead me in a focused exploration of two major areas of school board control: financial and quality education.

Establishing the boundaries for your research is difficult. Social interaction does not occur in neat, isolated units. Gordon reflected on his work: “I constantly find myself heading off in new directions and it is an act of will to stick to my original (but revised) problem statement.” In order to complete any project, you must establish boundaries, but these boundary decisions are also an interpretive judgment based on your awareness of your data and their possibilities. Posting your problem statement or most recent working title above your workspace may help to remind you about the task ahead. Cindy used a computer banner program to print out her (revised) research statement, which she taped to the wall over her desk. The banner guided her work whenever she lifted her head to ponder and reflect.

It may help also to think of the amount of film that goes into a good half-hour documentary. Similar to documentary filmmaking, the methods of qualitative data collecting naturally lend themselves to excess. You collect more than you can use because you cannot define your study so precisely as to pursue a trim, narrowly defined line of inquiry. The open nature of qualitative inquiry means that you acquire even more data than you originally envisioned. You are left with the large task of selecting and sorting—a partly mechanical but mostly interpretive undertaking, because every time you decide to omit a data bit as irrelevant to your study or to place it somewhere, you are making a judgment.

At some point, you stop collecting data, or at least you stop focusing on the collecting. Knowing when to end this phase is
difficult. It may be that you have exhausted all sources on the topic—that there are no new situations to observe, no new people to interview, no new documents to read. Such situations are rare. Perhaps you stop collecting data because you have reached theoretical saturation (Glaser & Strauss, 1967). This means that successive examination of sources yields redundancy and that the data you have seem complete and integrated. Recognizing theoretical saturation can be tricky, however. It may be that you hear the same thing from all of your informants because your selection of interviewees is too limited or too small for you to get discrepant views. Often, data collection ends through less than ideal conditions: The money runs out or deadlines loom large. Try to make research plans that do not completely exhaust your money, time, or energy, so that you can obtain a sense of complete and integrated data.

Entering the Code Mines

In the early days of data collection, stories abound. Struck by the stories, you tell them and repeat them. You may even allow them to assume an importance beyond their worth to the
purposes of the project. Making sense of the narratives, observations, and documents as a whole comes harder. You do not have to stop telling stories, but in thematic analysis, you must make connections among them: What is being illuminated? What themes and patterns give shape to observations and interviews? Coding helps answer these questions.

When most of the data are collected, the time has come to devote attention to coding and analysis. Although you already may have a classificatory scheme of sorts, you now focus on categorization. You are ready to enter “the code mines.” The work is part tedium and part exhilaration as it renders form and possible meaning to the piles and files of data before you. Marleen’s words portray the somewhat ambivalent psychological ambience that accompanies the analytical process of coding:

I’m about to finish the first set of teacher transcripts and begin with the students. This will probably mean several new codes . . . since it is a new group. I hope the codebook can stand the pressure. One of the hardest things is accepting that doing the coding is a months-long proposition. When my mother asks me if I’m done yet, I know she doesn’t have a clue. (Personal correspondence, May 3, 1994)

What Is a Code?

The word coding as used in qualitative work is confusing to those familiar with the term and its use in quantitative survey research, where short open-ended responses are categorized with the purpose of counting. Instead of coding to count, qualitative researchers code to discern themes, patterns, and processes; to make comparisons; and to build theoretical explanations. Some qualitative researchers prefer the term indexing to the word coding, but as Saldaña (2009) states, “Coding is not just labeling, it is linking” (p. 8). Codes link thoughts and actions across bits of data. Indexing does not
convey that sense of linking. It may not matter which word to use, as long as you realize that coding in qualitative research is for different purposes than in quantitative work.

Coding is a progressive process of sorting and defining and defining and sorting those scraps of collected data (e.g., observation notes, interview transcripts, memos, documents, and notes from relevant literature) that are applicable to your research purpose. By putting pieces that exemplify the same descriptive or theoretical idea together into data clumps labeled with a code, you begin to create a thematic organizational framework.

A qualitative research code, as described by Saldaña (2009), “is most often a word or short phrase that symbolically assigns a summative, salient, essence-capturing, and/or evocative attribute for a portion of language-based or visual data” (p. 3). Saldaña draws a parallel between a book’s title and a code: “Just as a title represents and captures a book or film or poem’s primary content and essence, so does a code represent and capture a datum’s primary content and essence” (p. 3). Note that a code is a word or short phrase, not a number/letter combination or a set of letters meant to represent some phrase, such as T-AHW for teacher use of art homework. Saldaña (2009) finds that such abbreviations “just make the decoding process of your brain work much harder than they need to during analysis” (p. 18). I agree. Write out your code words.

A useful suggestion for creating code words comes from grounded theory research: Think in terms of gerunds (words ending in -ing). The gerund form moves you to consider processes and actions such as resisting authority, seeking attention, or striving to be do-gooders. Thinking in terms of gerunds (or processes) tends to lead to a more useful and interesting analysis of your data than categorizing by descriptive nouns such as students, teachers, and administrators.
Approaches to Coding

How do you figure out what codes to use and what to mark as coded? It is a creative act that takes concentrated thought as you read and think deeply about the words before you. Begin by reading quickly through all your data with your notebook at the ready for memos and possible code words. You will note that some of the same topics come up over and over. This is not surprising since your research questions were at least somewhat directing your observations, and your interview questions were somewhat guiding the interview script. You will begin to observe, however, that people talk about a topic in both similar and different ways, presenting different perspectives. These similarities and differences become areas for coding. Make note of actions, perspectives, processes, values, and so on that stand out for you as you refamiliarize yourself with the data.

Then, take several interview transcripts or field observations and try coding them line by line. As much as you may try to set aside your assumptions and theoretical frameworks, those perspectives tend to find their way into the codes you choose. That is to be expected. What you want to avoid is imposing an a priori set of codes on your data. Line-by-line coding helps to immerse you in the data and discover what concepts they have to offer. As you read line by line, jotting possible codes in the margin, try to abstract your code words, removing them slightly from the data. For example, a line in your fieldnotes that reads, “Ms. Wilson asked the students to sit in their seats and to stop talking. She then took her seat and sat there quietly for at least three minutes before the room quieted,” could be coded specifically as “Wilson-requesting quiet.” The code will probably serve you better, however, if abstracted to “controlling students” or “keeping order” or a number of other codes, depending upon your research purposes.
The point is that your code is a category of activity of which the piece coded is an example.
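The move from a participant-specific label to an abstracted, gerund-style code can be sketched as a small mapping from data excerpts to code words. The excerpts and codes below are illustrative, loosely echoing the Ms. Wilson example; nothing here is a required tool or format, just one way to see codes as categories of activity.

```python
from collections import Counter

# Each fieldnote excerpt is paired with abstracted, gerund-style codes
# rather than a specific label like "Wilson-requesting quiet".
# All excerpts and codes are invented for illustration.
coded_data = [
    ("Ms. Wilson asked the students to sit and stop talking.",
     ["keeping order"]),
    ("She sat quietly for three minutes before the room quieted.",
     ["keeping order", "waiting out students"]),
    ("A student asked to move seats to work with a friend.",
     ["seeking attention"]),
]

# Tallying codes across excerpts shows which categories of activity recur,
# i.e., which abstracted codes are gathering multiple examples.
tally = Counter(code for _, codes in coded_data for code in codes)
print(tally.most_common())
```

The tally is not the analysis; it simply makes visible that “keeping order” is a category with more than one example attached, which is what an abstracted code is meant to do.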
Saldaña (2009) suggests “The Touch Test” as “a strategy for progressing from topic to concept, from the real to the abstract, and from the particular to the general” (p. 187). If you can touch the aspect that is coded—for example, tattoos—then ask yourself, “What is the larger concept or phenomenon or process that tattoo is part of that cannot be touched?” It might be adornment or body art or, perhaps, making a statement, depending upon the context of the research. The intent of Saldaña’s touch test is to help you figure out the concepts of which your coded data are a part.

Line-by-line coding is a way to get started, but you do not necessarily have to code every piece of data this way. Saldaña’s text The Coding Manual for Qualitative Researchers (2009), the text that I draw upon heavily in this section, is full of ways to approach coding. As Saldaña states, the approaches or coding methods “are not discrete and . . . can be ‘mixed and matched’” (p. 47). Although touching upon several here, I recommend Saldaña’s book for more suggestions.

One useful coding approach is domain or taxonomic coding. Derived from the work of Spradley (1979) in cognitive anthropology, this method attempts to get at how participants categorize and talk about some aspect of their culture. Specific kinds of interview questions may accompany this approach in that the researcher may have asked interviewees to elaborate on ways to (means), kinds of (inclusion/exclusion), steps of (sequence), and so on regarding aspects of the research topic. You do not have to have asked these specific questions to use this approach in coding data. Rather, ask questions of the data you have that would lead to categorizing types of, causes of, consequences of, attitudes toward, strategies for, and so forth that interviewees discussed or that you saw in your observations. In the example above, Ms. Wilson’s request and subsequent waiting may have been construed as a strategy for controlling students.
Coding this line would then lead you to look for other types of controlling strategies Ms. Wilson used,
as well as types of controlling strategies used by other teachers. Taxonomic coding helps you find patterns in human speech and behavior. “Controlling” becomes a coding category for varied examples of actions and speech. As Saldaña (2009) states, “When you search for patterns in coded data to categorize them, understand that sometimes you may group things together not just because they are exactly alike or very much alike, but because they might also have something in common—even if, paradoxically, that commonality consists of differences” (p. 6). Teachers’ attitudes toward and actions in controlling students, for example, may be quite different, but all could be coded as “types of control.”

Another coding approach is to become attuned to the words participants use to talk about their lives, communities, organizations, and so on. Referred to as in vivo or indigenous codes, these terms may be particularly colorful or metaphoric, or words used differently than they are generally used. For example, in the museum study, I began noting and then coding the metaphors participants used to describe their campus art museum: the museum as a “gem,” a “treasure,” a “library,” a “bubbling cauldron of ideas,” and so forth. In doing so, I started to perceive patterns in where these metaphors occurred. For example, gem and treasure were frequently used at one site but not at another. I could then begin thinking (and memoing) about how different metaphors might imply different expectations and kinds of interactions at the various art museums.

Another type of coding to consider is emotions coding. “Emotion Codes label the emotions recalled and/or experienced by the participant, or inferred by the researcher about the participant” (Saldaña, 2009, p. 86). Such codes become linked with particular actions or behavior in the study. Saldaña (2009) uses the example of a study of divorce and the emotions linked to different stages and procedures within the divorce process.
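The pattern-spotting described for in vivo codes above, noticing that “gem” and “treasure” clustered at one site but not another, can be sketched as a simple per-site tally. The sites and (site, metaphor) pairs below are invented for illustration; in practice they would come from your own transcripts.

```python
from collections import Counter, defaultdict

# Invented (site, metaphor) pairs standing in for in vivo codes
# noted while reading interview transcripts from two campuses.
observations = [
    ("Site A", "gem"),
    ("Site A", "treasure"),
    ("Site A", "gem"),
    ("Site B", "library"),
    ("Site B", "bubbling cauldron of ideas"),
]

# Group the metaphor counts by site to make clustering visible.
by_site = defaultdict(Counter)
for site, metaphor in observations:
    by_site[site][metaphor] += 1

for site, counts in sorted(by_site.items()):
    print(site, dict(counts))
```

Seeing that “gem” recurs only at one site is exactly the kind of observation the text suggests memoing about: what expectations and interactions might that metaphor imply at that museum and not the other?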
Remember that you can mix these and other coding methods as you work your way through your data. These approaches are heuristics to help you delve into the coding process and to find what works best for you and your research purposes.

Creating a Codebook

After coding several interview transcripts and observational notes, make a list of the codes you have generated. Can you arrange them into major categories and subcategories? Do some codes appear to be nearly the same and could be combined? Do some codes cover large categories that perhaps should be split into two or more codes? You may find that the same subcode appears under several major codes. This may indicate a theme that runs throughout the work. Look for its presence or absence under other major headings. If absent, should it be there?

After reworking your coding scheme, try it out on the same documents coded previously to see how it fits. Revise as needed, and then try it on another transcript and some more observation notes. What new codes are added? Be overgenerous in judging what is important to code; you do not want to foreclose any opportunity to learn from the field by prematurely settling on what is or is not relevant to you. Go back and forth like this until you are no longer adding substantially more codes, realizing that as you continue to code, you will likely add more—sending you back to look for other expressions of that code in previous parts of your text.

When comfortable with your codes, make a codebook. Give each major code its own page. Below the major code, list each subcode (and sub-subcodes) with an explanation of each. Writing the explanation will help to keep you from what Gibbs (2007) refers to as “definitional drift,” in which the material you coded earlier is slightly different in meaning from the material you code at a different time. For example, in my work
with young people in Oaxaca, resisting was one of my early codes. I defined it as forms of speech or actions that demonstrate disagreement with governmental rules or policies. As my work progressed, my application of resisting as a coding category became more complex and began overlapping with a category I called maintaining indigenous autonomy. I had to rethink my resisting code and its definition.

The codebook is highly personal, meant to fit you; it need not be useful or clear to anyone else. Although there may be common features and a common intent to everyone’s data analysis process, it remains, in the end, an idiosyncratic enterprise. No one right coding scheme exists. The proof of your coding scheme is in the pudding of your manuscript. The sense your manuscript makes, how useful it is, and how well it reads depend, in large part, on your data analysis. If your process is not producing any “ah ha’s” or moments of excitement as you realize some new understandings, then it probably is not yet a good coding scheme.

Reference

O’Leary, Z. (2005). Researching real-world problems. Thousand Oaks, CA: SAGE.

Ch. 11 Analysing and Interpreting Data

FROM RAW DATA TO MEANINGFUL UNDERSTANDING

It’s easy to fall into the trap of thinking the major hurdle in conducting real-world research is data collection. And yes, gathering credible data is certainly a challenge – but so is making sense of it. As George Eliot states, the key to meaning
is ‘interpretation’. Now attempting to interpret a mound of data can be intimidating. Just looking at it can bring on a nasty headache or a mild anxiety attack. So the question is, what is the best way to make a start? How can you begin to work through your data?

Well, if I were only allowed to give one piece of advice, it would be to engage in creative and inspired analysis using a methodical and organized approach. As described in Box 11.1, the best way to move from messy, complex and chaotic raw data … towards rich, meaningful and eloquent understandings is by working through your data in ways that are creative, yet managed within a logical and systematic framework.

Box 11.1 Balancing Creativity and Focus

Think outside the square … yet stay squarely on target
Be original, innovative, and imaginative … yet know where you want to go
Use your intuition … but be able to share the logic of that intuition
Be fluid and flexible … yet deliberate and methodical
Be inspired, imaginative and ingenious … yet realistic and practical

Easier said than done, I know. But if you break the process of analysis down into a number of defined tasks, it’s a challenge that can be conquered. For me, there are five tasks that need to be managed when conducting analysis:
1. Keeping your eye on the main game. This means not getting lost in a swarm of numbers and words in a way that causes you to lose a sense of what you’re trying to accomplish.
2. Managing, organizing, preparing and coding your data so that it’s ready for your intended mode(s) of analysis.
3. Engaging in the actual process of analysis. For quantified data, this will involve some level of statistical analysis, while working with words and images will require you to call on qualitative data analysis strategies.
4. Presenting data in ways that capture understandings, and being able to offer those understandings to others in the clearest possible fashion.
5. Drawing meaningful and logical conclusions that flow from your data and address key issues.

This chapter tackles each of these challenges in turn.

Keeping your eye on the main game

While the thought of getting into your data can be daunting, once you take the plunge it’s actually quite easy to get lost in the process. Now this is great if ‘getting lost’ means you are engaged and immersed and really getting a handle on what’s going on. But getting lost can also mean getting lost in the tasks, that is, handing control to analysis programs, and losing touch with the main game. You need to remember that while computer programs might be able to do the ‘tasks’, it is the researcher who needs to work strategically, creatively and intuitively to get a ‘feel’ for the data; to cycle between data and existing theory; and to follow the hunches that can lead to sometimes unexpected, yet significant findings.

FIGURE 11.1 THE PROCESS OF ANALYSIS

Have a look at Figure 11.1. It’s based on a model I developed a while ago that attempts to capture the full ‘process’ of analysis; a process that is certainly more complex and comprehensive than simply plugging numbers or words into a computer. In fact,
real-world analysis involves staying as close to your data as possible – from initial collection right through to drawing final conclusions. And as you move towards these conclusions, it’s essential that you keep your eye on the game in a way that sees you consistently moving between your data and … your research questions, aims and objectives, theoretical underpinnings and methodological constraints.

Remember, even the most sophisticated analysis is worthless if you’re struggling to grasp the implications of your findings to your overall project. Rather than relinquish control of your data to ‘methods’ and ‘tools’, thoughtful analysis should see you persistently interrogating your data, as well as the findings that emerge from that data. In fact, as highlighted in Box 11.2, keeping your eye on the game means asking a number of questions throughout the process of analysis.

Box 11.2 Questions for Keeping the Bigger Picture in Mind

Questions related to your own expectations
What do I expect to find, i.e. will my hypothesis bear out?
What don’t I expect to find, and how can I look for it?
Can my findings be interpreted in alternative ways? What are the implications?

Questions related to research question, aims and objectives
How should I treat my data in order to best address my research questions?
How do my findings relate to my research questions, aims and
objectives?

Questions related to theory
Are my findings confirming my theories? How? Why? Why not?
Does my theory inform/help to explain my findings? In what ways?
Can my unexpected findings link with alternative theories?

Questions related to methods
Have my methods of data collection and/or analysis coloured my results? If so, in what ways?
How might my methodological shortcomings be affecting my findings?

Managing the data

Data can build pretty quickly, and you might be surprised by the amount of data you have managed to collect. For some, this will mean coded notebooks, labelled folders, sorted questionnaires, transcribed interviews, etc. But for the less pedantic, it might mean scraps of paper, jotted notes, an assortment of cuttings and bulging files. No matter what the case, the task is to build or create a ‘data set’ that can be managed and utilized throughout the process of analysis. Now this is true whether you are working with: (a) data you’ve decided to quantify; (b) data you’ve captured and preserved in a qualitative form; (c) a combination of the above (there can be real appeal in combining the power of words with the authority of numbers). Regardless of approach, the goal is the same – a rigorous and systematic approach to data management that can lead to credible findings. Box 11.3 runs through six steps I believe are essential for effectively managing your data.
Box 11.3 Data Management

Step 1 Familiarize yourself with appropriate software

This involves accessing programs and arranging necessary training. Most universities (and some workplaces) have licences that allow students certain software access, and many universities provide relevant short courses. Programs themselves generally contain comprehensive tutorials complete with mock data sets. Quantitative analysis will demand the use of a data management/statistics program, but there is some debate as to the necessity of specialist programs for qualitative data analysis. This debate is taken up later in the chapter, but the advice here is that it’s certainly worth becoming familiar with the tools available.

Quantitative programs:
SPSS – sophisticated and user-friendly (www.spss.com)
SAS – often an institutional standard, but many feel it is not as user-friendly as SPSS (www.sas.com)
Minitab – more introductory, good for learners/small data sets (www.minitab.com)
Excel – while not a dedicated stats program, it can handle the basics and is readily available on most PCs (Microsoft Office product)

Qualitative programs:
Absolutely essential here is an up-to-date word processing package. Specialist packages include:
NU*DIST, NVIVO, MAXqda, The Ethnograph – used for indexing, searching and theorizing
ATLAS.ti – can be used for images as well as words
CONCORDANCE, HAMLET, DICTION – popular for content analysis (all of the above available at www.textanalysis.info); CLAN-CA – popular for conversation analysis (http://childes.psy.cmu.edu).

Step 2 Log in your data

Data can come from a number of sources at various stages throughout the research process, so it's well worth keeping a record of your data as it's collected. Keep in mind that original data should be kept for a reasonable period of time; researchers need to be able to trace results back to original sources.

Step 3 Organize your data

This involves grouping like sources, making any necessary copies and conducting an initial cull of any notes, observations, etc. not relevant to the analysis.

Step 4 Screen your data for any potential problems

This includes a preliminary check to see if your data is legible and complete. If done early, you can uncover potential problems not picked up in your pilot/trial, and make improvements to your data collection protocols.

Step 5 Enter the data

This involves systematically entering your data into a database or analysis program, as well as creating codebooks (which can be electronically based) that describe your data and keep track of how it can be accessed. For quantitative data, codebooks often include: the respondent or group; the variable name and description; unit of measurement; date collected; and any relevant notes. For qualitative data, codebooks often include: respondents; themes; data collection procedures; collection dates; commonly used
shorthand; and any other notes relevant to the study.

Data entry. For quantitative data, data can be entered as it is collected or after it has all come in; analysis does not take place until after data entry is complete (Figure 11.2 depicts an SPSS data entry screen). For qualitative data, whether using a general word processing program or specialist software, data is generally transcribed in an electronic form and is worked through as it is received; analysis tends to be ongoing and often begins before all the data has been collected/entered.

FIGURE 11.2 DATA ENTRY SCREEN FOR SPSS

Step 6 Clean the data

This involves combing through the data to make sure any entry errors are found, and that the data set looks in order. For quantitative data: when entering quantified data it's easy to make mistakes – particularly if you're moving fast – e.g. typos. It's essential that you go through your data to make sure it's as accurate as possible. For qualitative data: because qualitative data is generally handled as it's collected, there is often a chance to refine processes as you go. In this way your data can be as 'ready' as possible for analysis.

STATISTICS – THE KISS (KEEP IT SIMPLE AND SENSIBLE) APPROACH

'Doctors say that Nordberg has a 50/50 chance of living, though there's only a 10 percent chance of that.'
– Naked Gun

It wasn't long ago that 'doing' statistics meant working with formulae, but personally, I don't believe in the need for all real-world researchers to master formulae. Doing statistics in the twenty-first century is more about your ability to use statistical software than your ability to calculate means, modes, medians and standard deviations – and look up p-values in the back of a book. To say otherwise is to suggest that you can't ride a bike unless you know how to build one. What you really need to do is learn how to ride, or in this case learn how to run a stats program. Okay, I admit these programs do demand a basic understanding of the language and logic of statistics. And this means you will need to get your head around: (1) the nature of variables; (2) the role and function of both descriptive and inferential statistics; (3) appropriate use of statistical tests; and (4) effective data presentation. But if you can do this, effective statistical analysis is well within your grasp. Now before I jump in and talk about the above a bit more, I think it's important to stress that:

Very few students can get their heads around statistics without getting into some data. While this chapter will familiarize you with the basic language and logic of statistics, it really is best if your reading is done in conjunction with some hands-on practice (even if this is simply playing with the mock data sets provided in stats programs). For this type of knowledge 'to stick', it needs to be applied.

Variables
Understanding the nature of variables is essential to statistical analysis. Different data types demand distinct treatment. Using the appropriate statistical measures both to describe your data and to infer meaning from your data requires that you clearly understand your variables in relation to both cause and effect and measurement scales.

Cause and effect

The first thing you need to understand about variables relates to cause and effect. In research-methods-speak, this means being able to clearly identify and distinguish your dependent and independent variables. Now while understanding the theoretical difference is not too tough, being able to readily identify each type comes with practice.

DEPENDENT VARIABLES These are the things you are trying to study or measure. For example, you might be interested in knowing what factors are related to high levels of stress, a strong income stream, or levels of achievement in secondary school – stress, income and achievement would all be dependent variables.

INDEPENDENT VARIABLES These are the things that might be causing an effect on the things you are trying to understand. For example, conditions of employment might be affecting stress levels; gender may have a role in determining income; while parental influence may impact on levels of achievement. The independent variables here are employment conditions, gender and parental influence.

One way of identifying dependent and independent variables is simply to ask what depends on what. Stress depends on work conditions, or income depends on gender. As I like to tell my students, it doesn't make sense to say gender depends on income unless you happen to be saving for a sex-change operation!
Measurement scales

Measurement scales refer to the nature of the differences you are trying to capture in relation to a particular variable (examples below). As summed up in Table 11.1, there are four basic measurement scales that become respectively more precise: nominal, ordinal, interval and ratio. The precision of each type is directly related to the statistical tests that can be performed on it. The more precise the measurement scale, the more sophisticated the statistical analysis you can do.

NOMINAL Numbers are arbitrarily assigned to represent categories. These numbers are simply a coding scheme and have no numerical significance (and therefore cannot be used to perform mathematical calculations). For example, in the case of gender you would use one number for female, say 1, and another for male, 2. In an example used later in this chapter, the variable 'plans after graduation' is also nominal, with numerical values arbitrarily assigned as 1 = vocational/technical training, 2 = university, 3 = workforce, 4 = travel abroad, 5 = undecided and 6 = other. In nominal measurement, codes should not overlap (they should be mutually exclusive) and together should cover all possibilities (be collectively exhaustive). The main function of nominal data is to allow researchers to tally respondents in order to understand population distributions.

ORDINAL This scale rank orders categories in some meaningful way – there is an order to the coding. Magnitudes of difference, however, are not indicated. Take, for example, socio-economic status (lower, middle, or upper class). Lower class may denote less status than the other two classes, but the amount of the difference is not defined. Other examples include air travel (economy, business, first class), or items where respondents are asked to rank order selected choices (biggest environmental challenges facing developed countries). Likert-type scales, in which respondents are asked to select a response on a point
scale (for example, 'I enjoy going to work': 1 = strongly disagree, 2 = disagree, 3 = neutral, 4 = agree, 5 = strongly agree), are ordinal since a precise difference in magnitude cannot be determined. Many researchers, however, treat Likert scales as interval because this allows them to perform more precise statistical tests. In most small-scale studies this is not generally viewed as problematic.

INTERVAL In addition to ordering the data, this scale uses equidistant units to measure difference. This scale does not, however, have an absolute zero. An example here is date – the year 2006 occurs 41 years after the year 1965, but time did not begin in AD 1. IQ is also considered an interval scale even though there is some debate over the equidistant nature between points.

RATIO Not only is each point on a ratio scale equidistant, there is also an absolute zero. Examples of ratio data include age, height, distance and income. Because ratio data are 'real' numbers, all basic mathematical operations can be performed.

Descriptive statistics

Descriptive statistics are used to describe the basic features of a data set and are key to summarizing variables. The goal is to present quantitative descriptions in a manageable and intelligible form. Descriptive statistics provide measures of central tendency, dispersion and distribution shape. Such measures vary by data type (nominal, ordinal, interval, ratio) and are standard calculations in statistical programs. In fact, when generating the example tables for this section, I used the statistics program SPSS. After entering my data, I generated my figures by going to 'Analyze' on the menu bar, clicking on 'Descriptive Statistics', clicking on 'Frequencies', and then defining the statistics and charts I required.
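For readers who would rather see the logic than a menu path, a frequency count for a nominal variable can be sketched in a few lines of plain Python. The 'plans after graduation' coding scheme is taken from this chapter; the list of coded responses below is invented for illustration:

```python
from collections import Counter

# Nominal coding scheme from the 'plans after graduation' example.
LABELS = {1: "vocational/technical training", 2: "university", 3: "workforce",
          4: "travel abroad", 5: "undecided", 6: "other"}

# Hypothetical coded responses from ten respondents.
responses = [2, 2, 3, 1, 2, 5, 3, 2, 4, 2]

# Tally respondents per category - the main function of nominal data.
freq = Counter(responses)
for code, count in sorted(freq.items()):
    pct = 100 * count / len(responses)
    print(f"{LABELS[code]:<32} {count:>3} ({pct:.0f}%)")

# The mode is the only legitimate central-tendency measure for nominal data.
mode_code = freq.most_common(1)[0][0]
print("mode:", LABELS[mode_code])  # mode: university
```

Note that the codes themselves are never averaged – the numbers are only labels, exactly as the NOMINAL discussion above warns.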
Measuring central tendency

One of the most basic questions you can ask of your data centres on central tendency. For example, what was the average score on a test? Do most people lean left or right on the issue of abortion? Or what do most people think is the main problem with our health care system? In statistics, there are three ways to measure central tendency (see Table 11.2): mean, median and mode – and the example questions above relate to these three measures respectively. Now while measures of central tendency can be calculated manually, all stats programs can automatically calculate these figures.

MEAN The mathematical average. To calculate the mean, you add the values for each case and then divide by the number of cases. Because the mean is a mathematical calculation, it is used to measure central tendency for interval and ratio data, and cannot be used for nominal or ordinal data where numbers are used as 'codes'. For example, it makes no sense to average the 1s, 2s and 3s that might be assigned to Christians, Buddhists and Muslims.

MEDIAN The mid-point of a range. To find the median you simply arrange values in ascending (or descending) order and find the middle value. This measure is generally used with ordinal data, and has the advantage of negating the impact of extreme values. Of course, this can also be a limitation, given that extreme values can be significant to a study.

MODE The most common value or values noted for a variable. Since nominal data is categorical and cannot be manipulated mathematically, it relies on mode as its measure of central tendency.

Measuring dispersion

While measures of central tendency are a standard and highly useful form of data description and simplification, they need to
be complemented with information on response variability. For example, say you had a group of students with IQs of 100, 100, 95 and 105, and another group of students with IQs of 60, 140, 65 and 135: the central tendency, in this case the mean, of both groups would be 100. The dispersion around the mean, however, would require you to design curriculum and engage learning with each group quite differently. There are several ways to understand dispersion, which are appropriate for different variable types (see Table 11.3). As with central tendency, statistics programs will automatically generate these figures on request.

RANGE This is the simplest way to calculate dispersion: simply the highest minus the lowest value. For example, if your respondents ranged in age from 8 to 17, the range would be 9 years. While this measure is easy to calculate, it is dependent on extreme values alone, and ignores intermediate values.

QUARTILES This involves subdividing your range into four equal parts or 'quartiles', and is a commonly used measure of dispersion for ordinal data, or data whose central tendency is measured by a median. It allows researchers to compare the various quarters or present the inner 50% as a dispersion measure. This is known as the inter-quartile range.

VARIANCE This measure uses all values to calculate the spread around the mean, and is actually the 'average squared deviation from the mean'. It needs to be calculated from interval or ratio data, and gives a good indication of dispersion. It's much more common, however, for researchers to use and present the square root of the variance, which is known as the standard deviation.

STANDARD DEVIATION This is the square root of the variance, and is the basis of many commonly used statistical tests for interval and ratio data. As explained below, its power comes to the fore with data that sits under a normal curve.
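The measures above can be checked with Python's standard `statistics` module. This sketch uses the two IQ groups from the example in this section (it is simply an equivalent calculation, not how a package like SPSS produces its output):

```python
import statistics

# IQ figures from the example above: same mean, very different spread.
group_a = [100, 100, 95, 105]
group_b = [60, 140, 65, 135]

# Central tendency: both groups come out at 100.
print(round(statistics.mean(group_a)), round(statistics.mean(group_b)))  # 100 100
print(statistics.median(group_a), statistics.median(group_b))            # 100.0 100.0

# Dispersion is what tells the groups apart.
print(max(group_a) - min(group_a), max(group_b) - min(group_b))  # ranges: 10 80
print(round(statistics.pvariance(group_b), 1))  # 1412.5 (average squared deviation)
print(round(statistics.pstdev(group_b), 1))     # 37.6 (square root of the variance)

# Normal-curve rule: share of cases within one s.d. of the mean, using the
# 'age of participants' figures (mean 12.11, s.d. 2.22). The chapter quotes
# this as 68.2%; to three decimals it is 0.683.
nd = statistics.NormalDist(12.11, 2.22)
print(round(nd.cdf(14.33) - nd.cdf(9.89), 3))  # 0.683
```

Note the use of the population variants (`pvariance`, `pstdev`); the sample versions (`variance`, `stdev`) divide by n − 1 and would give larger figures for these tiny groups.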
Measuring the shape of the data

To fully understand a data set, central tendency and dispersion need to be considered in light of the shape of the data, or how the data is distributed. As shown in Figure 11.3, a normal curve is 'bell-shaped': the distribution of the data is symmetrical, with the mean, median and mode all converged at the highest point in the curve. If the distribution of the data is not symmetrical, it is considered skewed. In skewed data the mean, median and mode fall at different points. Kurtosis characterizes how peaked or flat a distribution is compared to 'normal'. Positive kurtosis indicates a relatively peaked distribution, while negative kurtosis indicates a flatter distribution.

The significance of understanding the shape of a distribution lies in the statistical inferences that can be drawn. As shown in Figure 11.4, a normal distribution is subject to a particular set of rules regarding the significance of a standard deviation, namely that:

68.2% of cases will fall within one standard deviation of the mean
95.4% of cases will fall within two standard deviations of the mean
99.7% of cases will fall within three standard deviations of the mean

So if we had a normal curve for the sample data relating to 'age of participants' (mean = 12.11, s.d. = 2.22 – see Boxes 11.2, 11.3), 68.2% of participants would fall between the ages of 9.89 and 14.33 (12.11 − 2.22 and 12.11 + 2.22).

These rules of the normal curve allow for the use of quite powerful statistical tests, generally used with interval and ratio data (sometimes called parametric tests). For data that
does not follow the assumptions of a normal curve (nominal and ordinal data), the researcher needs to call on non-parametric statistical tests in making inferences. Table 11.4 shows the curve, skewness and kurtosis of our sample data set.

Inferential statistics

While the goal of descriptive statistics is to describe and summarize, the goal of inferential statistics is to draw conclusions that extend beyond the immediate data. For example, inferential statistics can be used to estimate characteristics of a population from sample data, or to test various hypotheses about the relationship between different variables. Inferential statistics allow you to assess the probability that an observed difference is not just a fluke or chance finding. In other words, inferential statistics is about drawing conclusions that are statistically significant.

Statistical significance

Statistical significance refers to a measure, or 'p-value', which assesses the actual probability that your findings are more than coincidental. Conventional p-values are .05, .01 and .001, which tell you that the probability your findings have occurred by chance is 5/100, 1/100 or 1/1,000 respectively. Basically, the lower the p-value, the more confident researchers can be that findings are genuine. Keep in mind that researchers do not usually accept findings that have a p-value greater than .05, because the probability that the findings are coincidental or caused by sampling error is too great.

Questions suitable to inferential statistics

It's easy enough to tell students and new researchers that they need to interrogate their data, but that doesn't tell them what they should be asking. Box 11.4 offers some common questions which, while not exhaustive, should give you some ideas for
interrogating real-world data using inferential statistics.

Box 11.4 Questions for Interrogating Quantitative Data using Inferential Statistics

How do participants in my study compare to a larger population? These types of question compare a sample with a population. For example, say you are conducting a study of patients in a particular coronary care ward. You might ask if the percentage of males or females in your sample, or their average age, or their ailments, are statistically similar to coronary care patients across the country. To answer such questions you will need access to population data for this larger range of patients.

Are there differences between two or more groups of respondents? Questions that compare two or more groups are very common and are often referred to as 'between subject'. I'll stick with a medical theme here. For example, you might ask if male and female patients are likely to have similar ailments; or whether patients of different ethnic backgrounds have distinct care needs; or whether patients who have undergone different procedures have different recovery times.

Have my respondents changed over time? These types of question involve before and after data with either the same group of respondents or respondents who are matched by similar characteristics. They are often referred to as 'within subject'. An example of this type of question might be, 'Have patients' dietary habits changed since undergoing bypass surgery?'

Is there a relationship between two or more variables? These types of question can look for either correlations (simply an association) or cause and effect. Examples of correlation questions might be, 'Is there an association between time spent in hospital and satisfaction with nursing staff?' or, 'Is there a
correlation between patients' age and the medical procedure they have undergone?' Questions looking for cause and effect differentiate dependent and independent variables. For example, 'Does satisfaction depend on length of stay?' or, 'Does stress depend on adequacy of medical insurance?' Cause and effect relationships can also look to more than one independent variable to explain variation in the dependent variable. For example, 'Does satisfaction with nursing staff depend on a combination of length of stay, age and severity of medical condition?'

(I realize that all of these examples are drawn from the medical or nursing fields, but application to other respondent groups is pretty straightforward. In fact, a good exercise here is to try to come up with similar types of question for alternative respondent groups.)

Selecting the right statistical test

There is a baffling array of statistical tests out there that can help you answer the types of question highlighted in Box 11.4. And programs such as SPSS and SAS are capable of running such tests without you needing to know the technicalities of their mathematical operations. The problem, however, is knowing which test is right for your particular application. Luckily, you can turn to a number of test selectors now available on the Internet (see Bill Trochim's test selector at www.socialresearchmethods.net/kb/index.htm) and through programs such as MODSTAT and SPSS. But even with the aid of such selectors (including the tabular one I offer below), you still need to know: the nature of your variables (independent/dependent); scales of measurement (nominal, ordinal, interval, ratio); distribution shape (normal or skewed); the types of questions you want to ask; and the types of conclusions you are trying to draw. Table 11.5 covers the most common tests for univariate (one
variable), bivariate (two variable) and multivariate (three or more variable) data. The table can be read down the first column for univariate data (the column provides an example of the data type, its measure of central tendency, dispersion and appropriate tests for comparing this type of variable to a population). It can also be read as a grid for exploring the relationship between two or more variables. Once you know what tests to conduct, your statistical software will be able to run the analysis and assess statistical significance.

Presenting quantitative data

When it comes to presenting quantitative data, there can be a real temptation to offer graphs, charts and tables for every single variable in your study. So the first key to effective data presentation is to resist this temptation, and actively determine what is most important in your work. Your findings need to tell a story related to your aims, objectives and research questions. Now when it comes to how your data should be presented, I think there is one golden rule: it should not be hard work for the reader. Most people's eyes glaze over when it comes to statistics, so your data should not be hard to decipher. You should not need to be a statistician to understand it. Your challenge is to graphically and verbally present your data so that meanings are clear. Any graphs and tables you present should ease the task for the reader. So while you need to include adequate information, you don't want to go into information overload. Box 11.5 covers the basics of graphic presentation, while Box 11.6 looks at the presentation of quantitative data in tabular form.

QUALITATIVE DATA ANALYSIS (QDA)

'Not everything that can be counted counts, and not everything that counts can be counted.'
– Albert Einstein

I'd always thought of Einstein as an archetypal 'scientist'. But I've come to find that he is archetypal only if this means scientists are extraordinarily witty, insightful, political, creative and open-minded. Which, contrary to the stereotype, is exactly what I think is needed for groundbreaking advances in science. So when Einstein himself recognizes the limitations of quantification, it is indeed a powerful endorsement for working with qualitative data.

Yes, using statistics is a clearly defined and effective way of reducing and summarizing data. But statistics rely on the reduction of meaning to numbers, and there are two concerns here. First, meanings can be both intricate and complex, making it difficult to reduce them to numbers. Second, even with such a reduction, there can be a loss of 'richness' associated with the process. These two concerns have led to the development of a plethora of qualitative data analysis (QDA) approaches that aim to create new understandings by exploring and interpreting complex data from sources such as interviews, group discussions, observation, journals, archival documents, etc., without the aid of quantification.

But the literature related to these approaches is quite thick, and wading through it in order to find appropriate and effective strategies can be a real challenge. Many students end up: (1) spending a huge amount of time attempting to work through the vast array of approaches and associated literature; (2) haphazardly selecting one method that may or may not be appropriate to their project; (3) conducting their analysis without any well-defined methodological protocols; or (4) doing a combination of the above.
So while we know that there is inherent power in words and images, the challenge is working through options for managing and analysing qualitative data that best preserve richness yet crystallize meaning. And I think the best way to go about this is to become familiar with both the logic and the methods that underpin most QDA strategies. Once this foundation is set, working through more specific, specialist QDA strategies becomes much easier.

Logic and methods

Given that we make sense of complex, messy and chaotic qualitative data in the real world every day, you wouldn't think it would be too hard to articulate a rigorous QDA process. But the analysis we do on a day-to-day basis tends to be at the subconscious level, and is a process so full of rich subtleties (and subjectivities) that it is actually quite difficult to articulate and formalize. There is some consensus, however, that the best way to move from raw qualitative data to meaningful understanding is through data immersion that allows you to uncover and discover the themes that run through the raw data, and by interpreting the implications of those themes for your research project.

Discovering and uncovering

As highlighted in Figure 11.5, moving from raw data, such as transcripts, pictures, notes, journals, videos, documents, etc., to meaningful understanding is a process reliant on the generation/exploration of relevant themes; and these themes can either be discovered or uncovered. So what do I mean by this? Well, you may decide to explore your data inductively from the ground up. In other words, you may want to explore your data without a predetermined theme or theory in mind. Your aim might be to discover themes and eventuating theory by allowing
them to emerge from the data. This is often referred to as the production of grounded theory, or 'theory that was derived from data systematically gathered and analyzed through the research process' (Strauss and Corbin 1998, p. 12). In order to generate grounded theory, researchers engage in a rigorous and iterative process of data collection and 'constant comparative' analysis that brings raw data to increasingly higher levels of abstraction until theory is generated. This method of theory generation (which shares the same name as its product – grounded theory) has embedded within it very well-defined and clearly articulated techniques for data analysis (see the readings at the end of the chapter). And it is precisely this clear articulation of grounded theory techniques that has seen them become central to many QDA strategies.

It is important to realize, however, that discovering themes is not the only QDA option. You may have predetermined (a priori) themes or theory in mind – they might have come from engagement with the literature; your prior experiences; the nature of your research question; or insights you had while collecting your data. In this case, you are trying to deductively uncover data that supports predetermined theory. In a sense, you are mining your data for predetermined categories of exploration in order to support 'theory'. Rather than theory emerging from raw data, theory generation depends on progressive verification.

While grounded theory approaches are certainly a mainstay of QDA, researchers who engage only with the grounded theory literature can fall prey to the false assumption that all theory must come inductively from data. This need not be the case. The need to generate theory directly from data will not be appropriate for all researchers, particularly those wishing to test 'a priori' theories or mine their data for predetermined themes.
Mapping themes

Whether themes are to be discovered or uncovered, the key to QDA is rich engagement with the documents, transcripts, images, texts, etc. that make up a researcher's raw data. So how do you begin to engage with data in order to discover and uncover themes in what is likely to be an unwieldy raw data set? Well, one way to look at it is as a rich mapping process. Technically, when deductively uncovering data related to 'a priori' themes, the map would be predetermined. However, when inductively discovering themes using a grounded theory approach, the map would be built as you work through your data. In practice, however, the distinction is unlikely to be that clear, and you will probably rely on both strategies to build the richest map possible.

Figure 11.6 offers a map exploring poor self-image in young girls built through both inductive and deductive processes. That is, some initial ideas were noted, but other concepts were added and linked as data immersion occurred. It's also worth noting that this type of mind map can be easily converted to a 'tree structure' that forms the basis of analysis in many QDA software programs, including NU*DIST (see Figure 11.7).

Delving into data

When it comes to QDA, delving into your data generally occurs as the data is collected and involves: (1) reading and re-reading; (2) annotating growing understanding in notes and memos; (3) organizing and coding data; and (4) searching for patterns in a bid to build and verify theories.

The process of organizing and coding can occur at a number of levels and can range from highly structured, quasi-statistical
counts to rich, metaphoric interpretations. Qualitative data can be explored for the words that are used; the concepts that are discussed; the linguistic devices that are called upon; and the nonverbal cues noted by the researcher.

EXPLORING WORDS Words can lead to themes through exploration of their repetition, or through exploration of their context and usage (sometimes called key words in context). Specific cultural connotations of particular words can also lead to relevant themes. Patton (2001) refers to this as 'indigenous categories', while Strauss and Corbin (1998) refer to it as 'in vivo' coding. To explore word-related themes, researchers systematically search a text to find all instances of a particular word (or phrase), making note of its context and meaning. Several software packages, such as DICTION or CONCORDANCE, can quickly and efficiently identify and tally the use of particular words, and even present such findings in a quantitative manner.

EXPLORING CONCEPTS Concepts can be deductively uncovered by searching for themes generated from: the literature; the hypothesis/research question; intuitions; or prior experiences. Concepts and themes may also be derived from 'standard' social science categories of exploration, for example power, race, class, gender, etc. On the other hand, many researchers will look for concepts to emerge inductively from their data without any preconceived notions. With predetermined categories, researchers need to be wary of 'fitting' their data to their expectations, and of not being able to see alternative explanations. However, purely inductive methods are also subject to bias, since unacknowledged subjectivities can impact on the themes that emerge from the data.

To explore concepts, researchers generally engage in line-by-line or paragraph-by-paragraph reading of transcripts, engaging
in what grounded theory proponents refer to as 'constant comparison'. In other words, concepts and meaning are explored in each text and then compared with previously analysed texts to draw out both similarities and disparities (Glaser and Strauss 1967).

EXPLORING LITERARY DEVICES Metaphors, analogies and even proverbs are often explored because of their ability to bring richness, imagery and empathetic understanding to words. These devices often organize thoughts and facilitate understanding by building connections between speakers and an audience. Once you start searching for such literary devices, you'll find they abound in both the spoken and written word. Qualitative data analysts often use these rich metaphorical descriptions to categorize divergent meanings of particular concepts.

EXPLORING NONVERBAL CUES One of the difficulties in moving from raw data to rich meaning is what is lost in the process. And certainly the tendency in qualitative data collection and analysis is to concentrate on words, rather than the tone and emotive feeling behind the words, the body language that accompanies the words, or even the words not spoken. Yet this world of the nonverbal can be central to thematic exploration. If your raw data, notes or transcripts contain nonverbal cues, they can lend significant meaning to content and themes. Exploration of tone, volume, pitch and pace of speech; the tendency towards hearty or nervous laughter; the range of facial expressions and body language used; and shifts in any or all of these, can be central in a bid for meaningful understanding.

Looking for patterns and interconnections

Once texts have been explored for relevant themes, the quest for meaningful understanding generally moves to the relationships that might exist between and amongst the various themes. For
example, you may look to see if the use of certain words and/or concepts is correlated with the use of other words and/or concepts. Or you may explore whether certain words or concepts are associated with a particular range of nonverbal cues or emotive states. You might also look to see if there is a connection between the use of particular metaphors and nonverbal cues. And of course, you may want to explore how individuals with particular characteristics vary on any of these dimensions. Interconnectivities are assumed to be both diverse and complex, and can point to the relationship between conditions and consequences, or to how the experiences of the individual relate to more global themes. Conceptualization and abstraction can become quite sophisticated and can be linked to both model and theory building.

QDA software

It wasn't long ago that QDA was done 'by hand' with elaborate filing, cutting, sticky notes, markers, etc. But quality software (as highlighted in Box 11.3) now abounds and 'manual handling' is no longer necessary. QDA programs can store, code, index, map, classify, notate, find, tally, enumerate, explore, graph, and more. Basically, they can: (1) do all the things you can do manually, but much more efficiently; and (2) do things that manual handling of a large data set simply won't allow. And while becoming proficient in the use of such software can mean an investment in time (and possibly money), if you're working with a large data set you're likely to get that time back.

Okay ... if QDA programs are so efficient and effective, why are they so inconsistently called on by researchers working with qualitative data? Well, I think there are three answers here. First is a lack of familiarity: researchers may not be aware of the programs, let alone what they can do. Second is that the
learning investment is seen as too large and/or difficult. Third, researchers may realize, or decide, that they really don't want to do that much with their qualitative data; they may just want to use it sparingly to back up a more quantitative study.

My advice? Well, you really need to think through the pros and cons here. If you're working with a small data set and you can't see any more QDA in your future, you may not think it will pay to go down this path; manual handling might do the trick. But if you (a) are after truly rigorous qualitative analysis; (b) have to manage a large data set; or (c) see yourself needing to work with qualitative data in the future, it's probably worth battling the learning curve. Not only is your research process likely to be more rigorous, you will probably save a fair bit of time in the long run.

To get started with QDA software, I would recommend talking to other researchers or lecturers to find out what programs might be most appropriate for your goals and data. I would also have a look at relevant software sites on the Internet (see Box 11.3); there is a lot of information here, and some sites even offer trial programs. Finally, I'd recommend that you take appropriate training courses. NUD*IST and NVivo are both very popular, and short courses are often easy to find.

Specialist strategies

Up to this point, I've been treating QDA as a homogeneous approach with an underlying logic and methods, and I haven't really discussed the distinct disciplinary and paradigmatic approaches that do exist. But as mentioned at the start of this section, the literature here is dense, and a number of distinct approaches have developed over the past decades. Each has its own particular goals, theory and methods ... and each will have varying levels of applicability to your own research. Now while I would certainly recommend delving into the approaches that resonate with you, it's worth keeping in mind that you don't
have to adopt just one approach. It is possible to draw insights from various strategies in a bid to evolve an approach that best cycles between your data and your own research agenda. Table 11.6 may not be comprehensive enough to get you started in any particular branch of qualitative data analysis, but it does provide a comparative summary of some of the more commonly used strategies. You can explore these strategies further by delving into the readings offered at the end of the chapter.

Presenting qualitative data

I don't think many books adequately cover the presentation of qualitative data, but I think they should. New researchers often struggle with the task and end up falling back on what they are most familiar with, or what they can find in their methods books (which are often quantitatively biased). So while these researchers may only have three cases, five documents, or eight interviews, they can end up with some pseudo-quantitative analysis and presentation that includes pie charts, bar graphs and percentages. For example, they may say '50% feel ...' and '20% think ...' when they're talking about a total of only five people. Well, this isn't really where the power of qualitative data lies. The power of qualitative data is in the actual words and images themselves, so my advice is to use them. If the goal is the rich use of words, avoid inappropriate quantification, and preserve and capitalize on language.

So how do you preserve, capitalize on and present words and images? Well, I think it's about storytelling. You really have to have a clear message, argument or storyline, and you need to selectively use your words and/or images in a way that gives weight to that story. The qualitative data you present should be pointed, powerful and able to draw your readers in.
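The word-level explorations described earlier in this section (tallying repetition and listing key words in context) can be sketched in a few lines of code. This is a minimal illustration only, not a substitute for dedicated QDA software such as the packages discussed above; the transcript snippet, function names, and stopword list are all invented for the example.

```python
import re
from collections import Counter

def kwic(text, keyword, window=4):
    """List each occurrence of `keyword` with `window` words of
    surrounding context -- a bare-bones 'key words in context' search."""
    words = re.findall(r"[a-z']+", text.lower())
    hits = []
    for i, w in enumerate(words):
        if w == keyword:
            left = " ".join(words[max(0, i - window):i])
            right = " ".join(words[i + 1:i + 1 + window])
            hits.append(f"{left} [{w}] {right}")
    return hits

def word_counts(text, stopwords=frozenset({"the", "a", "an", "and", "of", "to", "in"})):
    """Tally word repetition, ignoring a few common function words."""
    words = re.findall(r"[a-z']+", text.lower())
    return Counter(w for w in words if w not in stopwords)

# Invented transcript fragment for demonstration.
transcript = ("I felt safe in the group. Feeling safe mattered more than "
              "anything; without feeling safe I would not have spoken at all.")

for line in kwic(transcript, "safe"):
    print(line)  # first hit: i felt [safe] in the group feeling
print(word_counts(transcript).most_common(3))
```

Even a toy listing like this makes the repetition of 'safe' (and the emotional weight around it) visible at a glance, which is exactly the kind of cue that can seed a theme for closer, line-by-line reading.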
Qualitative Data Analysis Grading Rubric | CJUS750_B02_202240

Identifies Main Issues/Problems (24 pts)
Advanced (24 to >21.0 pts): Identifies and demonstrates a sophisticated understanding of the main issues/problems in the study.
Proficient (21 to >19.0 pts): Identifies and demonstrates an accomplished understanding of most of the issues/problems.
Developing (19 to >0.0 pts): Identifies and demonstrates acceptable understanding of some of the issues/problems in the study.
Not Present (0 pts)

Analysis and Evaluation of Issues/Problems (23 pts)
Advanced (23 to >21.0 pts): Presents an insightful and thorough analysis of all identified issues/problems; includes all necessary calculations.
Proficient (21 to >19.0 pts): Presents a thorough analysis of most of the issues identified; missing some necessary calculations.
Developing (19 to >0.0 pts): Presents a superficial or incomplete analysis of some of the identified issues; omits necessary calculations.
Not Present (0 pts)

Recommendations (23 pts)
Advanced (23 to >21.0 pts): Supports diagnosis and opinions with strong arguments and well-documented evidence; presents a balanced and critical view; interpretation is both reasonable and objective.
Proficient (21 to >19.0 pts): Supports diagnosis and opinions with limited reasoning and evidence; presents a somewhat one-sided argument; demonstrates little engagement with ideas presented.
Developing (19 to >0.0 pts): Little or no action suggested and/or inappropriate solutions proposed to the issues in the study.
Not Present (0 pts)

APA, Spelling & Grammar (10 pts)
Advanced (10 to >9.0 pts): Limited to no APA, spelling or grammar mistakes.
Proficient (9 to >7.0 pts): Minimal APA, spelling and/or grammar mistakes.
Developing (7 to >0.0 pts): Noticeable APA, spelling and grammar mistakes.
Not Present (0 pts)

Page Length (10 pts)
Advanced (10 to >9.0 pts): 5-7 double-spaced pages of content (not counting the title page or references).
Proficient (9 to >7.0 pts): 1 page more or less than the required length.
Developing (7 to >0.0 pts): More than 1 page more or less than the required length.
Not Present (0 pts)

Sources (10 pts)
Advanced (10 to >9.0 pts): Citation of a journal article that reports a qualitative study. All web sites utilized are authoritative.
Proficient (9 to >7.0 pts): Citation of a journal article that reports a qualitative study. Most web sites utilized are authoritative.
Developing (7 to >0.0 pts): Citation of a journal article that reports a qualitative study. Not all web sites utilized are credible, and/or sources are not current.
Not Present (0 pts)

Total Points: 100