• Narrative Analysis
– Narrative analysis is analysis of a chronologically told story,
with a focus on
• how elements are sequenced,
• why some elements are evaluated differently from others,
• how the past shapes perceptions of the present,
• how the present shapes perceptions of the past, and
• how both shape perceptions of the future.
– Narrative analysis is seen as a more in-depth alternative to
survey research using psychological scales.
– Some advocates see it as an "empowering" psychological
research methodology insofar as it gives respondents the venue
to articulate their own viewpoints and evaluative standards.
• Note that there is a branch of narrative analysis that is quantitative.
• Key Concepts and Terms
• Scripts are the referential core of personal narratives (Labov & Waletzky, 1967) or the
"canonical events" (Bruner, 1990) used as a basis for understanding new, unexpected
• That is, scripts are predictive frames by which a culture interprets particular instances of
behavior associated with that script.
• Scripts do not require an evaluative component.
• Stories expand on generalized scripts by incorporating particularistic events, adding
evaluative elements which reveal the narrator's viewpoint regarding these particulars.
• Thus stories will evaluate a script as good, bad, successful, tragic, surprising, and so on.
• Patterns are recurring forms of talk which are discerned in narrative transcripts.
• Themes are sets of patterns.
• There is no agreed-upon methodology in narrative analysis for deriving themes from patterns.
• One practice, however, is to use a research team, with "themes" being whatever the team
reaches consensus on, based on discussion of transcripts and analysis of patterns.
• As in content analysis, after transcription, narratives may be coded according to categories deemed
theoretically important by the researcher. This labeling of the narrative structure might, for instance, use a
set of structural/functional categories to label each segment as an AB= Abstract statement segment, OR=
Orientation segment, CA= complicating action, EV= evaluation, RE= resolution, etc.
• Many, many coding schemas are possible.
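A coding pass of this kind can be represented very simply in software. The sketch below tallies hypothetical structural codes over an invented six-clause narrative (the clause texts and code assignments are illustrative, not drawn from real data):

```python
from collections import Counter

# Hypothetical structural/functional coding of a short narrative transcript,
# using the Labovian category labels named above (AB, OR, CA, EV, RE).
coded_segments = [
    ("AB", "This is a story about the day I almost missed my flight."),
    ("OR", "It was a rainy Monday morning in March."),
    ("CA", "The taxi broke down halfway to the airport."),
    ("EV", "I was sure my whole trip was ruined."),
    ("CA", "A passing driver offered me a ride."),
    ("RE", "I reached the gate two minutes before boarding closed."),
]

# Tally how often each structural category appears in the coded narrative.
code_counts = Counter(code for code, _ in coded_segments)
```

Counts of this kind feed naturally into the pattern- and theme-finding steps described above.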
– Temporal Organization of the Narrative.
• Frequently the researcher finds it helpful to organize the narrative according to temporal sequence.
• Some researchers add subscripts to clauses in the narrative, with a left subscript indicating how many
anteceding narrative clauses the given clause is simultaneous with, and a right subscript indicating how
many following clauses the given clause is simultaneous with. Inter-rater reliability in temporally organizing
the narrative is important, as changes in temporal organization can radically shift the meaning of the narrative.
– Contextual Analysis
• Narratives, and particularly the evaluative elements of narratives, are a social phenomenon.
• As a social phenomenon, narratives vary by social context (home, school, work, etc.), and evaluative data
extracted from narratives will vary by the social context within which they are collected.
• Consequently, it may be fruitful to gather narratives on the same reference objects from otherwise similar
respondents in varying social contexts.
• Likewise, gathering narratives on the same objects from the same respondents at different points in some
development process (ex., different career points) will yield differences in evaluative components and
consequent insight into the process.
– Focus Groups
• Though not integral to narrative analysis, researchers such as Labov (1997) have found that "the most
important data ... gathered on narrative is not drawn from the observation of speech production or
controlled experiments, but from the reactions of audiences to the narratives."
• Thus the exposure of focus groups to narratives and the comparison of reactions among groups of
different composition can be a method of further extending the anecdotal richness of the narrative method.
– Retelling Narratives
• A particular technique further extending group reactions to narratives is to ask various
types of respondents to memorize a short narrative (e.g., 12 - 20 lines) and then retell it.
• The researcher notes omissions and improvisations, which further illuminate how various
types of respondents react to given types of narratives.
• Retelling, when there is a progressively increased time lapse between exposure and
retelling, is also used to rank the perceived centrality of narrative elements: most central
elements are retained longest.
• By giving totally free rein to subjective story-telling the narrative analyst taps a rich vein of
anecdotal information at the expense of all the usual formal psychological research
considerations (representative sampling, operationalization of terms, use of controls,…).
• As Labov (1997) notes, "The discussion of narrative and other speech events at the
discourse level rarely allows us to prove anything.
• It is essentially a hermeneutic study, where continual engagement with the discourse as it
was delivered gains entrance to the perspective of the speaker and the audience, tracing
the transfer of information and experience in a way that deepens our own understandings
of what language and social life are all about. "
– Narrative analysis is best used for exploratory purposes, sensitizing the researcher,
illustrating but not by itself validating theory.
– A common focus is the exploration of:
• moral ambiguities, and
• cultural ambiguities.
– Content analysis is the manual or automated coding
of documents, transcripts, newspapers, or even of
audio or video media to obtain counts of words,
phrases, or word-phrase clusters for purposes of statistical analysis.
– Typically the researcher creates a dictionary which
clusters words and phrases into conceptual
categories for purposes of counting.
– Various constraints may filter the count, such as the
constraint that one concept be or not be within so
many words of another concept.
• Coding and statistical analysis are covered by Hodson (1999).
• A standard introduction to content analysis is Weber (1990).
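The dictionary-and-constraint approach described above can be sketched in a few lines. The concept dictionary, sample text, and window size below are all invented for illustration:

```python
import re
from collections import Counter

# Hypothetical concept dictionary mapping conceptual categories to word lists.
concept_dict = {
    "conflict": {"fight", "argue", "dispute"},
    "resolution": {"agree", "settle", "resolve"},
}

text = "They would argue for hours, but in the end they would settle and agree."
tokens = re.findall(r"[a-z]+", text.lower())

# Count tokens per conceptual category.
counts = Counter()
for tok in tokens:
    for concept, words in concept_dict.items():
        if tok in words:
            counts[concept] += 1

def within_window(tokens, dictionary, concept_a, concept_b, window):
    """Proximity constraint: does any concept_a token occur within
    `window` words of a concept_b token?"""
    pos_a = [i for i, t in enumerate(tokens) if t in dictionary[concept_a]]
    pos_b = [i for i, t in enumerate(tokens) if t in dictionary[concept_b]]
    return any(abs(a - b) <= window for a in pos_a for b in pos_b)
```

Here `within_window` implements the kind of filter mentioned above, i.e. requiring (or excluding) one concept within so many words of another.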
• Software Resources
– ATLAS.ti is software for text analysis and model building. It handles graphical, audio, and
video data files as well as text. With this package one can code and/or annotate text or
media segments in a variety of ways, search/select segments by code (using proximity,
Boolean, or semantic thesaurus methods), create hotlinks connecting segments, and
display relationships among segments in diagrammatic format. An automatic coding mode
codes all similar segments according to defined patterns. Video segments can be as small
as frames and likewise audio segments can be detailed. Network diagrams, created with the
built-in semantic network editor, can be exported to graphics and word processing packages
and a built-in HTML generator creates web pages for sharing work with collaborators.
Visually, annotations and links are made in a margin area of the computer display. Data can
be generated in SPSS format for further analysis. However, ATLAS.ti is not a content
analysis package per se, but rather a text management package lacking fundamental
content analysis statistical functions. ATLAS.ti is available from Scolari Software, of Sage Publications, Inc.
– The General Inquirer is the classic package for content analysis, now web-enabled by
psychologist Phil Stone (Harvard University). It contains large content dictionaries (Lasswell
Value Dictionary; Harvard Psycho-Sociological Dictionary) which are used in conjunction
with text scanning software to establish patterns in the meaning of words. The General
Inquirer is now being distributed by the Zentrum fuer Umfragen, Methoden, und Analysen
(ZUMA, Mannheim); for more information, contact Dr. Peter Ph. Mohler, O05@DHDURZ2.
– Intext and TextQuest. TextQuest is the Windows version of the Intext content analysis
software developed by Harald Klein, with a website at http://www.intext.de. The software
produces word lists, word sequence lists, word permutations, cross-references, and basic
content analysis functions.
– NUD*IST is a leading content analysis package, discussed by Richards and Richards (1991). It allows authors
to establish lexical and conceptual relations among words, to index text files, and to conduct pattern matching
and searching operations using Boolean co-occurrences of nodes in the text. NUD*IST is available from Scolari
Software, of Sage Publications, Inc. Scolari also publishes a variety of other text analysis software packages.
– QUALRUS is a general-purpose qualitative analysis program which supports text and multimedia sources. It
offers intelligent suggestions throughout the coding process and comes with a number of tools to help with
analysis of data once it has already been coded. Users can customize and automate many tasks by taking
advantage of Qualrus's scripting language. A free, functional demo version is available. More information on
Qualrus is available at its homepage, http://www.qualrus.com.
– TextSmart is SPSS's module for coding and analyzing open-ended survey questions. It supports text
management, searching, and some forms of text analysis. Its "Import Wizard" brings text data into a tab-
delimited ASCII file format, on the fly filtering responses by automated stemming (a linguistic engine which
identifies word stems to combine terms), aliasing (grouping synonyms), and excluding trivial words. The
automatic categorization option automatically clusters terms that tend to occur together in responses, to create
meaningful categories automatically. Some categorization parameters are user-controllable and the researcher
can create his or her own categories by combining categories using Boolean logic. Output can be to an SPSS
or a tab-delimited ASCII file, and categorization parameters can be saved for future TextSmart runs. Because
TextSmart is "dictionary-free," the researcher is freed of the burden of creating a coding scheme or concept
dictionary prior to beginning analysis. By the same token, if the control which comes with a user-defined
dictionary is wanted, TextSmart is not the appropriate tool. Online information is available from SPSS, Inc.
– Hodson, R. (1999). Analyzing documentary accounts. Thousand Oaks, CA: Sage Publications. Quantitative
Applications in the Social Sciences Series No. 128. Describes random sampling of ethnographic field studies
as a basis for applying a meta-analytic schedule. Hodson covers both coding issues and subsequent statistical analysis.
– Weber, R. P. (1990). Basic content analysis (2nd
ed.). Newbury Park, CA: Sage Publications. A standard introduction to the method.
• Case study research is a time-honored, traditional approach to the
study of topics in different areas of psychology.
– Because only a few instances are normally studied, the case researcher
will typically uncover more variables than he or she has data points,
making statistical control (e.g., through multiple regression) an impossibility.
– This, however, may be considered a strength of case study research:
• it has the capability of uncovering causal paths and mechanisms, and
• through richness of detail, identifying causal influences and interaction effects
which might not be treated as operationalized variables in a statistical study.
– In recent years there has been increased attention to implementation of
case studies in a systematic, stand-alone manner which increases the
validity of associated findings.
• However, although case study research may be used in its own right, it is more
often recommended as part of a multimethod approach ("triangulation") in
which the same dependent variable is investigated using multiple additional
procedures (e.g., also survey research, sociometry and network analysis, focus
groups, content analysis, ethnography, participant observation, narrative
analysis, archival data, or others).
• Key Concepts and Terminology
– Types of Case Studies.
• Here’s a typology (not the only one) of case studies:
– Snapshot case studies: Detailed, objective study of one research entity at
one point in time.
» Hypothesis-testing by comparing patterns across sub-entities (e.g.,
comparing departments within the case study agency).
– Longitudinal case studies. Quantitative and/or qualitative study of one
research entity at multiple time points.
– Pre-post case studies. Study of one research entity at two time points
separated by a critical event.
» A critical event is one which on the basis of a theory under study
would be expected to impact case observations significantly.
– Patchwork case studies. A set of multiple case studies of the same
research entity, using snapshot, longitudinal, and/or pre-post designs.
» This multi-design approach is intended to provide a more holistic
view of the dynamics of the research subject.
– Comparative case studies. A set of multiple case studies of multiple
research entities for the purpose of cross-unit comparison.
» Both qualitative and quantitative comparisons are generally made.
• Unlike random sample surveys, case studies are not representative of entire populations,
nor do they claim to be.
• The case study researcher should take care not to generalize beyond cases similar to those studied.
• Provided the researcher refrains from over-generalization, case study research is not
methodologically invalid simply because selected cases cannot be presumed to be
representative of entire populations.
• Put another way, in statistical analysis one is generalizing to a population based on a
sample which is representative of that population. In case studies, in comparison, one is
generalizing to a theory based on cases selected to represent dimensions of that theory.
– Case selection should be theory-driven.
• When theories are associated with causal typologies, the researcher should select
at least one case which falls in each category.
• That cases are not quantitative does not relieve the case researcher from
identifying what dependent variable(s) are to be explained and what independent
variables may be relevant.
• Not only should observation of these variables be part of the case study, but ideally
the researcher would study at least one case for every causal path in the model
suggested by theory.
• Where this is not possible, often the case, the researcher should be explicit about
which causal types of cases are omitted from analysis.
• Cases cited in the literature as counter-cases to the selected theory should not be ignored.
• Cross-Theoretic Case Selection. As multiple theories can conform to a given set of
data, particularly sparse data as in case study research, the case research design is
strengthened if the focus of the study concerns two or more clearly contrasting theories.
– This enables the researcher to derive and then test contrasting expectations about what
would happen under each theory in the case setting(s) at hand.
– Pattern Matching
• Pattern matching is the attempt of the case
researcher to establish that a preponderance of
cases are not inconsistent with each of the links in
the theoretical model which drives the case study.
– For instance, in a study of employee theft at Walmart,
bearing on the theory that low levels of supervision lead
to instances of employee theft, cases should not display a
low level of supervision and simultaneously a low
level of employee theft.
– That is, the researcher attempts to find qualitative or
quantitative evidence in the case that the effect
association for each causal path in the theoretical model
under consideration was of non-zero value and was of the predicted direction.
– Process tracing is a more systematic approach to pattern matching in which
the researcher attempts, for each case studied, to find evidence not only that
patterns in the cases match theoretical expectations but also that (1) that there is
some qualitative or quantitative evidence that the effect association which was
upheld by pattern matching was, in fact, the result of a causal process and does not
merely reflect spurious association; and (2) that each link in the theory-based
causal model also was of the effect magnitude predicted by theory. While process
tracing cannot resolve indeterminacy (selecting among alternative models, all
consistent with case information), it can establish in which types of cases the model
does not apply.
• Controlled observation is the most common form of process tracing. Its name derives
from the fact that the researcher attempts to control for effects by looking for model units
of analysis (e.g., people, in the case of hypotheses about people) which shift substantially
in magnitude or even valence, on key variables in the model being investigated.
– In a study of prison culture, for instance, in the course of a case study an individual may shift from
being free to being incarcerated; or in a study of organizational culture, an individual may shift from
being a rank-and-file employee to being a supervisor.
– Such shifts can be examined to see if associated shifts in other variables (e.g., opinions) also
change as predicted by the model.
– Controlled observation as a technique dictates that the case study (1) be long enough in time to
chronicle such shifts, and (2) favor case selection of cases where shifts are known to have
occurred or are likely to occur.
• Time series analysis is a special and more rigorous case of process tracing, in which the
researcher attempts to establish not only that the existence, sign, and magnitude of each
model link are as expected, but also the temporal sequence of events relating the variables
in the model.
• This requires observations at multiple points in time, not just before-after observations, in
order to establish that the magnitude of a given effect is outside the range of normal
fluctuation of the time series.
• Explanation-building is an alternative or supplement to pattern matching.
Under explanation-building, the researcher does not start out with a theory to be tested.
• Rather, the researcher attempts to induce theory from case examples
chosen to represent diversity on some dependent variable (e.g., branches
with different outcomes on increasing accounts at Washington Mutual).
– A list of possible causes of the dependent variable is constructed through
literature review and brainstorming, and information is gathered on each cause
for each selected case.
– The researcher then inventories causal attributes which are common to all cases,
common only to cases high on the dependent variable, and common only to
cases low on the dependent variable.
– The researcher comes to a provisional conclusion that the differentiating
attributes are the significant causes, while those common to all cases are not.
– Explanation-building is particularly compelling when there are plausible rival
explanations which can be rebutted by this method.
• Explanation-building can also be a supplement to pattern matching, as when
it is used to generate a new, more plausible model after pattern matching
disconfirms an initial model.
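The attribute inventory at the heart of explanation-building is essentially a set-intersection exercise. The sketch below uses invented branch cases and attribute names to show the three inventories described above:

```python
# Hypothetical cases with candidate causal attributes and an outcome level.
# Branch names, attributes, and outcomes are invented for illustration.
cases = {
    "branch_A": {"attrs": {"training", "incentives", "local_ads"}, "outcome": "high"},
    "branch_B": {"attrs": {"training", "incentives"},              "outcome": "high"},
    "branch_C": {"attrs": {"training", "local_ads"},               "outcome": "low"},
    "branch_D": {"attrs": {"training"},                            "outcome": "low"},
}

high = [c["attrs"] for c in cases.values() if c["outcome"] == "high"]
low = [c["attrs"] for c in cases.values() if c["outcome"] == "low"]

# Attributes common to all cases: provisionally ruled out as causes.
common_all = set.intersection(*(high + low))
# Attributes common only to high-outcome cases: provisional causes.
common_high = set.intersection(*high) - set.union(*low)
# Attributes common only to low-outcome cases.
common_low = set.intersection(*low) - set.union(*high)
```

In this invented example, "training" appears in every case and so is set aside, while "incentives" differentiates the high-outcome cases and becomes the provisional cause.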
• Meta-Analysis is a particular methodology for extending grounded theory to a number of case studies.
• In meta-analysis the researcher creates a meta-analytic schedule, which is a cross-case
summary table in which the rows are case studies and the columns are variable-related
findings or other study attributes (e.g., time frame, research entity, case study design
type, number and selection method for interviewees, threats to validity like researcher
involvement in the research entity).
• The cell entries may be simple checkmarks indicating a given study supported a given
variable relationship, or the cell entries may be brief summaries of findings on a given
relationship or brief description of study attributes.
• The purpose of meta-analysis is to allow the researcher to use the summary of case
studies reflected in the meta-analytic table to make theoretical generalizations.
• In doing so, sometimes the researcher will weight the cases according to the number of
research entities studied, since some case studies may examine multiple entities.
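A meta-analytic schedule and its entity-count weighting might be represented as follows; study names, time frames, and findings are invented for illustration:

```python
# Rows are case studies, columns are findings or study attributes,
# mirroring the cross-case summary table described above.
schedule = {
    "Case study 1": {"time_frame": "1995-97", "supports_H1": True,  "n_entities": 1},
    "Case study 2": {"time_frame": "1999",    "supports_H1": False, "n_entities": 3},
    "Case study 3": {"time_frame": "2001-02", "supports_H1": True,  "n_entities": 2},
}

# Unweighted support rate for a hypothesis H1 across case studies.
support = sum(row["supports_H1"] for row in schedule.values()) / len(schedule)

# Support rate weighted by the number of research entities each study examined,
# since some case studies cover multiple entities.
total_entities = sum(row["n_entities"] for row in schedule.values())
weighted = sum(
    row["n_entities"] for row in schedule.values() if row["supports_H1"]
) / total_entities
```

Note how the weighting choice changes the conclusion: two of three studies support H1, but only three of six studied entities do.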
– Hodson (1999) reproduces an example of a meta-analytic schedule for the topic of workplace ethnography.
» Problems of meta-analysis include what even case study advocates admit is the
"formidable challenge" involved in developing a standardized meta-analytic schedule which
fits the myriad dimensions of any sizeable number of case studies.
» No widely accepted "standardized" schedules exist.
– In addition, for any given proposed schedule, many or most specific case studies will simply not
report findings in one or more of the column categories, forcing meta-analysts either to accept a
great deal of missing data or to have to do additional research by contacting case authors.
– Considerations in implementing meta-analytic schedules:
• Variable selection
– In addition to substantive variables particular to the researcher's subject,
methodological variables should be collected, such as date of data collection,
subject pool, and methodological techniques employed.
• Coder training
– It is customary to provide formal training for coders, who ideally should not be the
researchers so that data collection is separated from data interpretation.
– The researcher must establish inter-rater reliability, which in turn implies there must
be multiple raters. Reliability is generally increased through rater debriefing sessions
in which raters are encouraged to discuss coding challenges.
– Duplicate coding (allowing 10% or so of records to be coded by two coders rather
than one) is also used to track reliability.
– In larger projects, rating may be cross-validated across two or more groups of raters.
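For duplicate-coded records, inter-rater reliability can be quantified with Cohen's kappa, which corrects raw agreement for chance. The rater codes below are invented for illustration:

```python
from collections import Counter

def cohen_kappa(r1, r2):
    """Cohen's kappa for two raters' categorical codes on the same records."""
    assert len(r1) == len(r2)
    n = len(r1)
    # Observed proportion of agreement.
    observed = sum(a == b for a, b in zip(r1, r2)) / n
    # Expected chance agreement from each rater's marginal code frequencies.
    c1, c2 = Counter(r1), Counter(r2)
    expected = sum(c1[k] * c2[k] for k in set(c1) | set(c2)) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical codes assigned by two coders to the same ten duplicate-coded records.
rater1 = ["EV", "CA", "CA", "OR", "EV", "RE", "CA", "OR", "EV", "CA"]
rater2 = ["EV", "CA", "OR", "OR", "EV", "RE", "CA", "OR", "CA", "CA"]
kappa = cohen_kappa(rater1, rater2)
```

Here the raters agree on 8 of 10 records, but kappa is lower than 0.8 because some of that agreement would be expected by chance.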
• Data weighting.
– Meta-analysis often involves statistical analysis of results, in which the cases are the studies themselves.
– The researcher must decide whether cases based on a larger sample size should
be weighted more in any statistical analysis.
– In general, weighting is appropriate when cases are drawn from the same
population to which the researcher wishes to generalize.
• Handling missing data.
– Dropping cases where some variables have missing data is generally
unacceptable unless there are only a very small number of such cases
as (1) it is more likely that missing-data cases are related to the
variables of the study than that they are randomly distributed, and (2)
dropping cases when the number of cases is not large (as is typical of
meta-analytic studies) diminishes the power of any statistical analysis.
– There is no good solution for missing data but maximum likelihood
estimation (MLE) of missing values carries fewer assumptions about
data distribution than using regression estimates or substituting means.
SPSS supports MLE.
• Outliers
– Meta-analysis often involves results coded from a relatively small
number of cases (e.g., < 100).
– Consequently, any statistical analysis may be affected strongly by the
presence of outlier cases.
– Sensitivity analysis should be conducted to understand the difference in
statistical conclusions with and without the outlier cases included.
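Such a sensitivity analysis can be as simple as recomputing the statistic of interest with and without flagged outliers. The sketch below applies an illustrative 1.5 x IQR outlier rule to invented effect sizes:

```python
import statistics

# Hypothetical effect sizes coded from a small set of case studies,
# including one extreme outlier (2.0).
effects = [0.2, 0.3, 0.25, 0.35, 0.3, 2.0]

full_mean = statistics.mean(effects)

# Simple outlier rule for illustration: flag values beyond 1.5 IQR
# of the first and third quartiles.
q = statistics.quantiles(effects, n=4)
q1, q3 = q[0], q[2]
iqr = q3 - q1
kept = [x for x in effects if q1 - 1.5 * iqr <= x <= q3 + 1.5 * iqr]
trimmed_mean = statistics.mean(kept)
```

Comparing `full_mean` and `trimmed_mean` shows how strongly a single outlier case can pull conclusions drawn from a small meta-analytic sample.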