Individual Differences
Self-Awareness and Working with Others
Dr. Nathanson
Individual Differences at Work
We seek to understand people in order to develop insight into our own behavior and the behavior of others, and to respond in effective ways in work settings.
Insight
Effective Interactions
Personality
Who are you, and why do you behave the way that you do?
- the combination of stable physical and mental characteristics that give an individual his or her identity
- stable over time, stable across situations
- a unique set of complex, interacting characteristics
- "Habits of Response"
Personality (cont.)
Origins of personality?
- genetics (nature)
- early life experience (nurture): modeling, reinforcement, stability of context, family dynamics
Impact of personality at work?
- Person x Situation interaction
- organizations are "strong situations," dependent on culture, job, and group factors
Personality (cont.)
"Big Five" Personality Dimensions
- decades of research and theoretical discussion of personality --> dozens of personality dimensions
- 1970s and 1980s: statistical methods (e.g., factor analysis) provided a "clearer picture"
- conscientiousness, extroversion, openness to experience, neuroticism, agreeableness
- only moderate predictors
Group Exercise
Each group member should discuss their profile, i.e., whether they are high, low, or in the middle on each of the Big Five elements.
Then the group should discuss what the group personality is like on average.
Select a group leader and report to the class.
Personality (cont.)
Locus of Control
- an individual's sense of control over his/her life, the environment, and external events
- High Internal LOC: task-oriented, innovative, proactive, self-confident
- High External LOC: sensitive to social cues, anxious; changes with strong situational cues
Personality (cont.)
Tolerance for Ambiguity
- the extent to which individuals are threatened by or have difficulty coping with ambiguity, uncertainty, unpredictability, complexity...
High Tolerance for Ambiguity:
- can handle more information
- better at transmitting information
- more adaptive
- sensitive to others' characteristics
Personality (cont.)
Do organizations have personalities?
SAS Theory (B. Schneider)
- through the combined processes of selection, attrition, and socialization, organizations create a culture with a "stable personality"
- implications?
Emotions
- complex, patterned, organismic reactions to how we think we are doing in our efforts to survive and flourish
- biological, psychological, social
- goal oriented: related to our ability to achieve what we want
- negative emotions: triggered by frustration (anger, jealousy)
- positive emotions: triggered by attainment (pride, happiness)
Emotional Intelligence
- predictive of "star performance": who does well, who gets ahead
- Daniel Goleman, Working with Emotional Intelligence: based on research in 500+ organizations
- more important in predicting success than technical skills or IQ
- high "EQ": works well with others
- can be learned
Emotional Intelligence (cont.)
Five dimensions of Emotional Intelligence
1. Personal Competence
- self-awareness
- self-regulation
- motivation
2. Social Competence
- empathy
- social skills
Implications
Personality is stable...
- don't expect to change another's personality; get to know their personality, and work with it
Recruiting, Selection & Placement
- be very cautious about using personality inventories; validity & reliability are essential
- people tend to choose occupations that "fit" their personality
Implications (cont.)
Communication & Coaching
- personality differences impact understanding between people
- coaching skills should be part of the developmental process at work
Diversity
- diversity in people increases creativity and innovation
- it also increases conflict... conflict resolution skills are essential for all
- value and support diversity!
Implications (cont.)
Develop self-awareness
- Who are you? How does your personality affect your work, and other people at work?
- What are your personality strengths?
- Accept who you are, and then look for opportunities to develop.
Journal of Traumatic Stress, Vol. 13, No. 2, 2000
Comparison of the PTSD Symptom Scale-Interview Version and the Clinician-Administered PTSD Scale
Edna B. Foa and David F. Tolin
The Clinician-Administered PTSD Scale (CAPS) is one of the
most frequently
used measures of posttraumatic stress disorder (PTSD). It has
been shown to be
a reliable and valid measure, although its psychometric
properties in nonveteran
populations are not well known. One problem with the CAPS is
its long assess-
ment time. The PTSD Symptom Scale-Interview Version (PSS-
I) is an alternative
measure of PTSD severity, requiring less assessment time than
the CAPS. Pre-
liminary studies indicate that the PSS-I is reliable and valid in
civilian trauma
survivors. In the present study we compared the psychometric
properties of the
CAPS and the PSS-I in a sample of 64 civilian trauma survivors
with and without
PTSD. Participants were administered the CAPS, the PSS-I, and
the Structured
Clinical Interview for DSM-IV (SCID) by separate
interviewers, and their re-
sponses were videotaped and rated by independent clinicians.
Results indicated
that the CAPS and the PSS-I showed high internal consistency,
with no differ-
ences between the two measures. Interrater reliability was also
high for both measures, with the PSS-I yielding a slightly higher coefficient.
The CAPS and
the PSS-I correlated strongly with each other and with the
SCID. Although the
CAPS had slightly higher specificity and the PSS-I had slightly higher sensitivity to PTSD, overall the CAPS and the PSS-I performed about
equally well. These
results suggest that the PSS-I can be used instead of the CAPS
in the assess-
ment of PTSD, thus decreasing assessment time without
sacrificing reliability or
validity.
KEY WORDS: posttraumatic stress disorder; CAPS; PSS-I;
SCID.
1 Center for Treatment and Study of Anxiety, Department of Psychiatry, University of Pennsylvania, 3535 Market Street, 6th Floor, Philadelphia, Pennsylvania 19104.
*To whom correspondence should be addressed.
One of the most widely used measures of posttraumatic stress
disorder (PTSD)
is the Clinician-Administered PTSD Scale (CAPS; Blake et al.,
1990), often re-
ferred to as the “gold-standard” measure for PTSD. The CAPS
is a semistructured
interview that measures the 17 symptoms of PTSD. Each
symptom is assessed
using two questions (for a total of 34 items): one measuring
frequency of the
symptom’s occurrence, and the other, its intensity (e.g., distress
or functional im-
pairment). To ascertain validity of response, each question is
followed by a number
of probe questions that aim at clarifying the frequency and
intensity of the symp-
tom. CAPS responses are used not only for making a
dichotomous PTSD diagnosis,
but also for quantifying the severity of PTSD. The CAPS was originally devel-
originally devel-
oped for use with combat veterans and most studies of its
psychometric properties
have used this population (e.g., Blake et al., 1990). More
recently, to our knowl-
edge only one study (Blanchard et al., 1995) has examined the
reliability of the
CAPS in civilian populations, yielding high to very high
reliability coefficients.
Hovens et al. (1994) found high reliability and moderate
validity coefficients us-
ing a Dutch-language version of the CAPS. However, that
sample contained both
civilians and combat veterans; therefore, it is difficult to
determine whether the
same results would apply to a civilian sample.
Although the CAPS has excellent psychometric properties, as
noted by
Newman, Kaloupek, and Keane (1996), its major drawback is
the substantial
amount of time required for its administration due to its large
number of items.
Depending on the interviewee’s symptom picture,
administration of the CAPS can
take 40 to 60 min.
One potential alternative to the CAPS is the PTSD Symptom
Scale-Interview
Version (PSS-I; Foa, Riggs, Dancu, & Rothbaum, 1993). The
PSS-I is a semistruc-
tured interview that consists of 17 items, corresponding to the
17 symptoms of
PTSD. Unlike the CAPS, frequency and intensity of symptoms
are combined on
the PSS-I into a single rater estimate of severity. The reason for
combining these
two dimensions is that some symptoms lend themselves more
readily to frequency
estimates (e.g., nightmares) whereas others are more readily
described in terms of
intensity (e.g., hypervigilance). Excellent reliability and
validity have been found
for the PSS-I using female victims of rape and nonsexual
assault (Foa et al., 1993).
Because the PSS-I consists of only 17 items (compared to the
CAPS’s 34), its
administration time is relatively short, approximately 20 to 30
min.
The purpose of the present study was to compare the
psychometric proper-
ties of the CAPS and the PSS-I using a sample of individuals
with and without
PTSD who had experienced a variety of traumatic events. We
administered the two
interviews and compared the resulting diagnostic status and
symptom severity to
one another and to that yielded by the Structured Clinical
Interview for DSM-IV
(SCID; First, Spitzer, Gibbon, & Williams, 1995). If the CAPS
and the PSS-I
show similar reliability and validity to each other, then the PSS-
I may be a useful
alternative to the CAPS when resources are limited.
Method
Participants
Participants were a convenience sample of 12 clinic patients and 52 nonclinical adult volunteers (total N = 64), recruited from a relatively heterogeneous
community sample in the greater Philadelphia area. The clinic
patients were re-
ceiving outpatient treatment for PTSD; the remainder responded
to advertisements
and requests for volunteers at community presentations. All
participants were re-
imbursed $30 for their participation.
Fifty-three percent of the participants were female, and 47%
were male.
Mean age was 37 years (SD = 10). Fifty-two percent were
Caucasian, 39% were
African American, 3% were Hispanic, 5% were Asian American,
and 1% were
other ethnicity.
All participants reported experiencing a traumatic incident that
met Crite-
rion A of the DSM-IV (American Psychiatric Association,
1994) PTSD diagnosis.
The sample included a heterogeneous range of traumatic
experiences, with per-
centages as follows: rape 18%, other sexual assault 8%,
nonsexual assault 32%,
fire/explosion 11%, accident 14%, and other trauma 17%. None
of the participants
were combat veterans.
Measures
PSS-I (Foa et al., 1993). The PSS-I is a semistructured
interview designed
to assess current symptoms of PTSD as defined by DSM-IV
(American Psychi-
atric Association, 1994) criteria. The PSS-I consists of 17 items
corresponding
to the 17 symptoms of PTSD, and yields a total PTSD severity
score as well as
reexperiencing, avoidance, and arousal subscores. Each item
consists of one brief
question. The participant’s answer is rated by the interviewer
from 0 (Not at all)
to 3 (5 or more times per week/Very much). Total severity
scores on the PSS-I are
based on sums of the raw items. Symptoms measured by the
PSS-I are considered
present if they are rated as 1 (Once per week or less/A little)
or greater.
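As a concrete illustration of the scoring convention just described, the sketch below (our own, purely illustrative; the function name and example ratings are hypothetical) sums the 17 item ratings into a total severity score and counts a symptom as present when its rating is 1 or greater.

# Illustrative sketch of the PSS-I scoring convention described above
# (17 items rated 0-3; total severity = sum of ratings; a symptom counts
# as present if its rating is 1 or greater). Names are ours.

def score_pss_i(item_ratings):
    """item_ratings: list of 17 integers in the range 0-3."""
    assert len(item_ratings) == 17
    total_severity = sum(item_ratings)
    symptoms_present = [rating >= 1 for rating in item_ratings]
    return total_severity, symptoms_present

# Example with a hypothetical respondent's ratings
severity, present = score_pss_i([2, 1, 0, 3, 1, 0, 0, 1, 2, 2, 1, 0, 1, 2, 1, 0, 3])
print(severity, sum(present))  # total score and number of symptoms endorsed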
Factor analysis of the PSS-I yielded three factors:
avoidance/arousal, numb-
ing, and intrusion (Foa, Riggs, & Gershuny, 1995). Internal
consistency coefficients
for the PSS-I subscales range from .65 to .71 in a sample of
female sexual and
nonsexual assault victims. Test-retest reliabilities range from
.66 to .77 over a
1-month period. Interrater reliabilities range from .93 to .95.
The PSS-I shows
good concurrent validity, as indicated by significant
correlations with measures of
PTSD symptoms, depression, and general anxiety (Foa et al.,
1993).
CAPS (Blake et al., 1990). The CAPS is a semistructured
interview designed
to measure symptoms of PTSD according to DSM-III-R (American Psychiatric
(American Psychiatric
Association, 1987) criteria. The CAPS has 34 symptom-oriented
items, each rated
on a 5-point scale, which correspond to the 17 symptoms of
PTSD. The CAPS
yields two total scores, one for frequency and one for intensity,
as well as two sub-
scores for each of the reexperiencing, avoidance, and arousal
subscales. The anchor
points of the scales vary according to symptom, but higher
numbers consistently
indicate either higher frequency or intensity of the symptom.
In addition to having separate ratings of frequency and
intensity, the CAPS
differs from the PSS-I in that it includes questions to be used as
prompts if the
assessor needs further clarification. The CAPS also can be used
to assess both
lifetime and current PTSD symptomatology; however, for the
purposes of the
present study only current symptoms were assessed.
Previous research indicates that the CAPS shows excellent
interrater reliabil-
ity (r = .92 to .99) for all three subscales in combat veterans. Internal consistency coefficients range from .73 to .85. The CAPS shows good
concurrent validity, as
indicated by significant correlations with self-report measures
of PTSD symptoms
(Blake et al., 1990). Thus, the CAPS appears to be a reliable
and valid mea-
sure. Partly because of the complexity inherent in obtaining
separate scores for
frequency and intensity, several scoring rules have been
proposed for the CAPS
(Blanchard et al., 1995; Weathers, Ruscio, & Keane, 1999).
With motor vehicle
accident victims, Blanchard et al. (1995) used three scoring
rules: a liberal rule
requiring a score of at least 2 as the sum of the frequency and
intensity ratings
for a given item; a moderate rule requiring a score of 3, and a
conservative rule
requiring a score of 4. As expected, rates of PTSD were highest
using the liberal
rule, and lowest using the conservative rule.
With combat veterans, Weathers et al. (1999) examined nine
different ra-
tionally and empirically derived scoring rules for the CAPS.
Three scoring rules
were particularly recommended: the “F1/I2” rule (liberal rule) required a frequency
required a frequency
score of at least 1 and an intensity score of at least 2 for each
item. This rule was
recommended for screening purposes to avoid false negatives.
When false positives
and false negatives are equally undesirable (e.g., differential
diagnoses), the “SCID
Symptom-Calibrated (SXCAL)” rule was recommended. The
SXCAL rule uses
the optimally efficient severity-score cutoff for each item for
predicting the pres-
ence or absence of the corresponding PTSD symptom on the
SCID (Weathers et al.,
1999). When false positives need to be minimized (e.g.,
confirming a diagnosis),
the conservative “Clinician-Rated 60” scoring was
recommended. Accordingly, a
symptom is considered present if the combination of frequency
and intensity for
that item was rated as present by at least 60% of a sample of 25
expert clinicians
(Weathers et al., 1999). This resulted in different cutoff scores
for each CAPS item.
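The item-level decision rules described above can be summarized in a short sketch. The code below is our own illustration (not material from the CAPS manual); it covers only the Blanchard et al. (1995) sum rules and the Weathers et al. (1999) F1/I2 rule, since the SXCAL and Clinician-Rated 60 rules depend on item-specific cutoffs not reproduced here.

# Hedged sketch of two families of CAPS item-level scoring rules discussed above.
# Each CAPS symptom has a frequency rating and an intensity rating (0-4 each).
# Blanchard et al. (1995): symptom present if frequency + intensity >= cutoff
# (2 = liberal, 3 = moderate, 4 = conservative).
# Weathers et al. (1999) "F1/I2": symptom present if frequency >= 1 and intensity >= 2.

def blanchard_present(frequency, intensity, cutoff=3):
    return (frequency + intensity) >= cutoff

def f1_i2_present(frequency, intensity):
    return frequency >= 1 and intensity >= 2

# Example for one symptom rated frequency = 1, intensity = 2:
print(blanchard_present(1, 2, cutoff=2))  # True under the liberal rule
print(blanchard_present(1, 2, cutoff=4))  # False under the conservative rule
print(f1_i2_present(1, 2))                # True under F1/I2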
SCID. (First et al., 1995). The SCID is a structured interview
measuring
DSM-IV (American Psychiatric Association, 1994) symptoms
of PTSD. The SCID
diagnosis of PTSD showed acceptable agreement with indexes
obtained from
previously validated assessment instruments included in the
National Vietnam
Veterans Readjustment Study (Kulka, Schlenger, Jordan, &
Hough, 1988), and
was identified previously as an instrument of choice in the
assessment of rape-
related PTSD (Resnick, Kilpatrick, & Lipovsky, 1991).
On the SCID, each symptom is assessed using one question, and
the inter-
viewer rates each symptom on a 3-point scale: absent or false,
subthreshold, and
threshold or true. Symptoms are considered present if they are
assigned the latter
rating.
Procedure
Thirty-nine participants were interviewed by two clinicians. The
first inter-
viewer queried the participant about trauma history and assisted
the participant
in identifying a single traumatic event that would be the focus of
the interview.
Participants reporting more than one traumatic event were
instructed to select the
most bothersome incident for this interview. Participants were
also instructed to
refer to the same traumatic event for all interviews, and reviews
of videotapes indi-
cated that all participants complied with this instruction. One
interviewer used the
CAPS and the other, the PSS-I. The order of administering the
two instruments as
well as which instrument would be used by which clinician were
each determined
randomly. Over the course of the study, 22 clinicians conducted
the interviews.
Participants were instructed to refer to the same traumatic event
in both interviews.
Clinicians were instructed not to discuss a participant’s
interview with one another
until all interview data had been collected for that individual.
All interviews were videotaped. The videotapes were reviewed
by at least two
raters who did not have access to the interviewers’ ratings.
These raters scored the
CAPS and the PSS-I on the basis of the participant’s responses
in the videotapes;
later, these ratings were compared to those of the interviewer.
To assess convergent validity with the SCID, an additional 25
participants
were administered the CAPS and the PSS-I as described above
as well as the
PTSD module of the SCID; the latter was administered by a
third clinician. The
order of the three interviews and the assignment of the
clinician-interviewer were
determined randomly.
All interviewers and raters were doctoral or master’s level
clinicians who were
trained in the use of both instruments by the instruments’
developers (Dr. Edna Foa
for the PSS-I and Dr. Frank Weathers for the CAPS). To ensure
standard admin-
istration and scoring, interviewers and raters met weekly to
review the interviews,
ascertain adherence to interview procedures, and resolve
scoring discrepancies.
Results
Kolmogorov-Smirnov tests of the distribution of scores on the
PSS-I and
CAPS indicated that scores were not normally distributed.
Therefore, nonpara-
metric statistics were used wherever possible.
Table 1. Cronbach's Alpha Coefficients for the PSS-I and the CAPSa

                              PSS-I                 CAPS
                        No. of Items   α     No. of Items   α
Total score                  17       .86         34       .88
Reexperiencing subscale       5       .70         10       .70
Avoidance subscale            7       .74         14       .76
Arousal subscale              5       .65         10       .71

aPSS-I = PTSD Symptom Scale-Interview Version; CAPS = Clinician-Administered PTSD Scale.
Reliability of the PSS-I and the CAPS
Internal consistency. Cronbach’s alpha was calculated on PSS-I
and CAPS
total scores and subscale scores. Because the CAPS includes
two items per symp-
tom (frequency and intensity) and the PSS-I includes only one
item, we used a
dichotomous coding of each item to indicate its presence or
absence. By doing so,
we controlled for the different number of items.
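For readers who want to reproduce this kind of calculation, a minimal sketch of Cronbach's alpha on dichotomously coded (present/absent) items is given below. It is our own illustration, not the authors' analysis code.

# Minimal sketch (not the authors' code) of Cronbach's alpha computed on
# dichotomously coded items (1 = symptom present, 0 = absent), the coding the
# authors use to put the 34-item CAPS and 17-item PSS-I on equal footing.
import numpy as np

def cronbach_alpha(items):
    """items: 2-D array, rows = respondents, columns = dichotomized items."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)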
Alpha coefficients for the PSS-I and the CAPS are shown in
Table 1. Internal
consistency was good to very good for all scales and subscales
of both the PSS-I
and the CAPS, with the alpha coefficient ranging from .70 to
.88 for the CAPS
and from .65 to .86 for the PSS-I. Thus, the internal consistency
of the PSS-I and
the CAPS were comparable.
To further examine internal consistency, we correlated each
item’s raw score
with the total score. The average item-total correlation for the
PSS-I was .59,
with correlations ranging from .11 to .74. For the CAPS, the
average item-total
correlation was .52, with a range of .21 to .68. On both
interviews, the item reflecting
the symptom of “inability to recall an important aspect of the
trauma” showed low
correlations with the total score (on the PSS-I, ρ(63) = .11, p = .39; on the CAPS, ρ(63) = .21, p = .09). Thus, on this index of internal
consistency, the CAPS and
the PSS-I were again quite similar.
The correlations among the three symptom clusters and the total
severity scores
for the CAPS and the PSS-I are presented in Table 2. The
intercorrelations among
subscales for each instrument were moderate to high and the
overall picture was
again quite similar.
Interviewer-rater reliability. Interviewer-rater reliability was
calculated by
comparing the interviewer’s ratings to those of the videotape
raters. Because there
were several raters and one interviewer for each instrument,
reliability coefficients
were calculated as follows: First, each videotape rater was
assigned a number (1-4).
Next, Spearman correlation coefficients were calculated
between the interviewer
and rater 1, the interviewer and rater 2, and so on. The resulting
coefficients were
translated into Fisher’s z scores (Rosenthal & Rosnow, 1984)
and averaged. Then,
the average z score was translated back to ρ to yield a single interrater reliability coefficient.
Table 2. Spearman Correlations Among the Subscales of the PSS-I and the CAPS

Subscale            Total Score   Reexperiencing   Avoidance
PSS-I
  Reexperiencing        .82*
  Avoidance             .92*          .63*
  Arousal               .88*          .63*            .71*
CAPS
  Reexperiencing        .87*
  Avoidance             .90*          .68*
  Arousal               .88*          .67*            .70*

*p < .001.
Table 3. Interviewer-Rater Reliability Coefficients and Percentage Agreement for the PSS-I and the CAPS

                                    PSS-I                    CAPS
                               ρ     % Agreement        ρ     % Agreement
Reexperiencing subscale       .93*      99.2            .89*      92.5
Avoidance subscale            .91*      97.5            .86*      88.5
Arousal subscale              .92*      94.2            .81*      93.4
Total score/PTSD diagnosis    .93*      98.3            .95*      86.6
Percentage of rater agreement for the presence or
absence of each
symptom was calculated by averaging the agreement of each
videotape rater with
that of the interviewer. Rater agreement for the CAPS was
calculated using the
F1/I2 rule (Weathers et al., 1999), since this was the original
scoring rule reported
by Blake et al. (1990). Using other scoring rules for the CAPS
did not change
interrater reliability significantly.
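The averaging procedure described above can be expressed compactly. The sketch below is our own illustration of the general r-to-z-to-r approach (function and variable names are ours), not the authors' code.

# Sketch of the averaging procedure described above: interviewer-versus-rater
# Spearman correlations are converted to Fisher z scores, averaged, and the
# mean z is converted back to a correlation.
import numpy as np
from scipy.stats import spearmanr

def average_correlation(interviewer_scores, rater_score_lists):
    z_values = []
    for rater_scores in rater_score_lists:        # one list per videotape rater
        rho, _ = spearmanr(interviewer_scores, rater_scores)
        z_values.append(np.arctanh(rho))           # Fisher r-to-z transform
    return np.tanh(np.mean(z_values))              # back-transform the mean z to r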
Table 3 presents the reliability coefficients of the total scores
and for each
subscale, as well as the percentage of rater agreement on the
presence or absence
of each symptom cluster and PTSD diagnosis. As can be seen in
Table 3, both the
CAPS and the PSS-I showed excellent interviewer-rater
reliability. There were
no substantial differences between the two measures, although
the PSS-I showed
consistently higher rates of agreement between raters for both
the correlations and
percentage agreements.
Validity of the PSS-I and the CAPS
Frequency of PTSD diagnosis. Thirty participants (46%) met
diagnostic
criteria for PTSD according to the PSS-I. Rates of PTSD with
the CAPS varied
Table 4. Diagnostic Agreement Between the CAPS and the PSS-I

CAPS Scoring Rule            % Agreement   Kappa
Liberal (Weathers)                83         .65
Moderate (Weathers)               78         .55
Conservative (Weathers)           70         .38
Liberal (Blanchard)               86         .72
Moderate (Blanchard)              84         .68
Conservative (Blanchard)          80         .58

Note. Blanchard = Blanchard et al. (1995); Weathers = Weathers et al. (1999).
Table 5. Correlations Between the Subscales of the CAPS and the PSS-I

                                               CAPS
PSS-I                      Total Score   Reexperiencing   Avoidance   Arousal
                                            Subscale       Subscale   Subscale
Total score                    .87*           .76*           .74*       .76*
Avoidance subscale             .75*           .55*           .75*       .64*
Arousal subscale               .77*           .64*           .63*       .78*
Reexperiencing subscale        .76*           .79*           .57*       .64*

Note. Correlation coefficients between scales measuring the same symptoms on both interviews are italicized.
*p < .001.
according to the scoring rule used. Using the Blanchard et al.
(1995) diagnostic
rules, 33 (51%) were diagnosed with PTSD with the liberal rule, 28 (43%) with the moderate rule, and 21 (32%) with the conservative rule. Rates of PTSD diagnosis on the CAPS also varied across the different scoring rules
described by Weathers
et al. (1999). Using the liberal rule, 23 (35%) were diagnosed
with PTSD; 20
(31%) with the moderate rule, and 11 (17%) with the
conservative rule. Thus,
PTSD rates yielded by the PSS-I were similar to those obtained
with the Blanchard
et al. moderate scoring rule. Both the Blanchard et al. and the
PSS-I rates were
somewhat higher than those emerging from the Weathers et al.
rules.
Concurrent validity. A high correlation of ρ = .87 (p < .001)
was found be-
tween the CAPS and the PSS-I for the total score. Agreement
across the two
measures on PTSD diagnosis varied according to the CAPS
scoring rule used (see
Table 4). Table 5 displays the Spearman correlations between
the interview scales.
Convergent validity. To assess convergent validity, CAPS and
PSS-I scores
were compared to the PTSD section of the SCID. Spearman
correlation coefficients
indicated that the SCID total score correlated strongly with the
CAPS total score
ρ(23) = .83, p < .001, and PSS-I total score, ρ(23) = .73, p < .001. To examine
whether the correlation between SCID and CAPS total scores
was greater than
the correlation between SCID and PSS-I total scores, a
Hotelling’s t test was
performed. Results were not significant: t(24) = 1.68, p > .05.
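For reference, one textbook form of Hotelling's test for comparing two dependent correlations that share a variable is sketched below. Conventions for the exact formula and degrees of freedom vary, so this should be read as an illustration of the general approach rather than a reconstruction of the authors' computation.

# Hedged sketch of one textbook form of Hotelling's test for two dependent
# correlations sharing a variable (here, SCID with CAPS vs. SCID with PSS-I).
# The formula and df convention below are one common choice, not necessarily
# the exact variant the authors used.
import math

def hotelling_t(r12, r13, r23, n):
    """r12, r13: the two correlations being compared; r23: correlation
    between the two non-shared measures; n: sample size."""
    det = 1 - r12**2 - r13**2 - r23**2 + 2 * r12 * r13 * r23
    t = (r12 - r13) * math.sqrt((n - 3) * (1 + r23) / (2 * det))
    return t, n - 3  # t statistic and one common choice of degrees of freedom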
Table 6. Agreement Between the SCID and the CAPS and the PSS-I

                                              CAPS
                      Liberal Rule         Moderate Rule        Conservative Rule      PSS-I
SCID Subscale      Blanchard  Weathers   Blanchard  Weathers   Blanchard  Weathers   Standard Rule
Total Score
  % Agreement          80        84          80        88          88        84           80
  Sensitivity         0.86      0.71        0.71      0.71        0.71      0.43         0.86
  Specificity         0.78      0.89        0.83      0.94        0.94      1.00         0.78
  Kappa               .56       .60         .52       .69         .69       .51          .56
Reexperiencing
  % Agreement          84        80          84        84          80        56           92
  Sensitivity         0.85      0.80        0.85      0.85        0.80      0.45         0.90
  Specificity         0.80      0.80        0.80      0.80        0.80      1.00         1.00
  Kappa               .57       .49         .57       .57         .49       .25          .78
Avoidance
  % Agreement          80        84          80        84          88        88           80
  Sensitivity         0.88      0.75        0.75      0.62        0.75      0.62         0.88
  Specificity         0.76      0.88        0.82      0.94        0.94      1.00         0.76
  Kappa               .58       .63         .56       .61         .71       .69          .58
Arousal
  % Agreement          64        84          68        72          80        72           76
  Sensitivity         1.00      1.00        1.00      1.00        1.00      0.50         1.00
  Specificity         0.31      0.69        0.39      0.46        0.62      0.92         0.54
  Kappa               .30       .68         .38       .45         .61       .43          .53

Notes. Blanchard = scoring rule from Blanchard et al. (1995); Weathers = scoring rule from Weathers et al. (1999). Percent agreements are calculated to reflect whether participants met or exceeded the symptom count for the DSM-IV diagnosis.
When data were analyzed according to the presence or absence
of symptoms
rather than a continuous score, the results varied according to
the scoring rule
used. As shown in Table 6, both the PSS-I and the CAPS
showed moderate to
strong agreement with the SCID. The PSS-I showed somewhat
higher sensitivity,
whereas the CAPS showed somewhat higher specificity,
especially using more
conservative scoring rules. On both the CAPS and the PSS-I, the
arousal subscales
showed high sensitivity but relatively low specificity with the
SCID. Given the
strong agreement between the PSS-I and CAPS on the arousal
subscale (r = .78),
the low specificity may reflect a psychometric weakness of the
SCID rather than
of the two instruments in question. Overall, however, the CAPS
and the PSS-I
performed quite similarly in relation to the SCID.
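The agreement indices reported in Table 6 can be computed from a simple 2 x 2 cross-classification of each instrument's diagnosis against the SCID. The sketch below is our own illustration of those standard formulas, not the authors' code.

# Illustrative computation of percent agreement, sensitivity, specificity, and
# Cohen's kappa for an instrument's PTSD diagnosis against the SCID diagnosis.

def agreement_indices(instrument, scid):
    """instrument, scid: equal-length lists of 0/1 diagnoses (1 = PTSD)."""
    pairs = list(zip(instrument, scid))
    tp = sum(1 for i, s in pairs if i == 1 and s == 1)
    tn = sum(1 for i, s in pairs if i == 0 and s == 0)
    fp = sum(1 for i, s in pairs if i == 1 and s == 0)
    fn = sum(1 for i, s in pairs if i == 0 and s == 1)
    n = len(pairs)
    agreement = (tp + tn) / n
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    # Cohen's kappa: observed agreement corrected for chance agreement
    expected = ((tp + fp) / n) * ((tp + fn) / n) + ((tn + fn) / n) * ((tn + fp) / n)
    kappa = (agreement - expected) / (1 - expected)
    return agreement, sensitivity, specificity, kappa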
Interview duration. Precise interview times were available for
42 sets of in-
terviews. Mean time to complete the PSS-I was 21.96 min (SD = 11.51), and mean
time to complete the CAPS was 32.75 min (SD = 15.94). The
CAPS was found
to take significantly longer than the PSS-I to administer, t(41) = 5.93, p < .001,
Cohen's d = 0.78. When we sampled only those patients with
PTSD (as indi-
cated by the PSS-I; n = 16), the CAPS still took significantly
longer (M = 42.76,
SD = 10.74) than did the PSS-I (M = 28.69, SD = 9.92), t(15) = 4.64, p < .001,
and the effect was greater than before (Cohen’s d = 1.36). Thus,
the PSS-I ap-
pears to be a briefer instrument than the CAPS, and this is
particularly true for
interviewees reporting significant PTSD symptoms.
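The duration comparison amounts to a paired t test plus an effect size. The sketch below is our own illustration; note that Cohen's d for paired data can be defined in more than one way (here, the mean difference divided by the standard deviation of the differences), so the value returned need not match the published d exactly.

# Sketch of a paired comparison of administration times with an effect size;
# illustrative only, not the authors' analysis code.
import numpy as np
from scipy import stats

def compare_durations(pss_i_minutes, caps_minutes):
    pss_i = np.asarray(pss_i_minutes, dtype=float)
    caps = np.asarray(caps_minutes, dtype=float)
    t_stat, p_value = stats.ttest_rel(caps, pss_i)   # paired t test
    diffs = caps - pss_i
    cohens_d = diffs.mean() / diffs.std(ddof=1)      # one convention for paired-data d
    return t_stat, p_value, cohens_d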
Discussion
Results of the present study suggest that the PSS-I compares
favorably to the
CAPS, as evidenced by internal consistency, item-total
correlations, intersubscale
correlations, and interviewer-rater agreement. In terms of
validity, the total score
and subscale scores of the PSS-I correlate strongly with the
corresponding scores
on the CAPS. When the PSS-I and the CAPS are used to predict
PTSD diagnosis
according to the SCID, both the PSS-I and the CAPS show
moderately strong
agreement with the SCID. Results for the CAPS vary according
to the scoring
rule used; however, in general, it appears that the PSS-I may
have slightly higher
sensitivity, whereas the CAPS may have slightly higher
specificity. Thus, the PSS-
I may have a small advantage in detecting actual PTSD, whereas
the CAPS’s
advantage may be in ruling out false positives. However, it
should be emphasized
that differences between the CAPS and the PSS-I were
relatively small compared
to their similarities.
Limitations of the present study include a relatively small
sample size, com-
pared to the large numbers of participants to whom the CAPS
has been administered
(e.g., Weathers et al., 1999). The present study examined only
civilian trauma vic-
tims, and thus the obtained results may not generalize to combat
veterans. We did
not collect data on the test-retest stability of either the CAPS or
the PSS-I; such
data would shed more light on the comparability of the two
interviews. Finally,
although interviewers were trained in both the CAPS and the
PSS-I, because of the
institution where the study was conducted (MCP Hahnemann
University), most
of the interviewers were more familiar with the PSS-I.
Additional studies using
interviewers who are equally familiar with the CAPS and the
PSS-I would help to
clarify this issue.
Because the two instruments show such similar internal
consistency, inter-
viewer-rater reliability, and validity, the PSS-I may be a useful
alternative to the
CAPS. In this study, the PSS-I took significantly less time to
administer, with
no appreciable loss of psychometric strength. Thus, when time
and/or financial
resources are limited, the PSS-I may be the interview method of
choice for the
assessment of PTSD.
References
American Psychiatric Association (1987). Diagnostic and statistical manual of mental disorders (3rd ed., rev.). Washington, DC: Author.
American Psychiatric Association (1994). Diagnostic and
statistical manual of mental disorders (4th
ed.). Washington, DC: Author.
Blake, D. D., Weathers, F. W., Nagy, L. M., Kaloupek, D. G.,
Klauminzer, G., Charney, D. S., & Keane,
T. M. (1990). A clinician rating scale for assessing current and
lifetime PTSD: The CAPS-I.
Behavior Therapist, 13, 187-188.
Blanchard, E. B., Hickling, E. J., Taylor, A. E., Forneris, C. A.,
Loos, W., & Jaccard, J. (1995). Effects
of varying scoring rules of the Clinician-Administered PTSD
Scale (CAPS) for the diagnosis of
posttraumatic stress disorder in motor vehicle accident victims.
Behaviour Research and Therapy, 33, 471-475.
First, M. B., Spitzer, R. L., Gibbon, M., & Williams, J. B. W. (1995). Structured clinical interview for DSM-IV Axis I disorders-Patient edition (SCID-I/P, Version 2.0). New York: Biometrics Research Department.
Foa, E. B., Riggs, D. S., Dancu, C. V., & Rothbaum, B. O. (1993). Reliability and validity of a brief instrument for assessing posttraumatic stress disorder. Journal of Traumatic Stress, 6, 459-473.
Foa, E. B., Riggs, D. S., & Gershuny, B. S. (1995). Arousal,
numbing, and intrusion: Symptom structure
of PTSD following assault. American Journal of Psychiatry, 152, 116-120.
Hovens, J. E., van der Ploeg, H. M., Klaarenbeek, M. T. A.,
Bramsen, I., Schreuder, J. N., & Rivero,
V. V. (1994). The assessment of posttraumatic stress disorder
with the Clinician Administered
PTSD Scale: Dutch results. Journal of Clinical Psychology, 50, 325-340.
Kulka, R. A., Schlenger, W. E., Jordan, B. K., & Hough, R. L.
(1988, October). Preliminary survey findings of the National Vietnam Veterans’ Readjustment Study.
Symposium presented at the 4th
annual meeting of the International Society for Traumatic Stress
Studies, Dallas.
Newman, E., Kaloupek, D. G., & Keane, T. M. (1996).
Assessment of posttraumatic stress disorder
in clinical and research settings. In B. A. van der Kolk, A. C.
McFarlane, & L. Weisaeth (Eds.),
Traumatic stress: The effects of overwhelming experience on
mind, body, and society (pp. 242-
273). New York: Guilford Press.
Resnick, H. S., Kilpatrick, D. G., & Lipovsky, J. A. (1991).
Assessment of rape-related posttraumatic
stress disorder: Stressor and symptom dimensions.
Psychological Assessment, 3, 561-572.
Rosenthal, R., & Rosnow, R. L. (1984). Essentials of behavioral
research: Methods and data analysis
(2nd ed.). New York: McGraw-Hill.
Weathers, F. W., Ruscio, A. M., & Keane, T. M. (1999).
Psychometric properties of nine scoring rules
for the Clinician-Administered PTSD Scale (CAPS). Psychological Assessment, 11, 124-133.
Journal of Traumatic Stress
October 2015, 28, 480–483
Brief Report
Comparison of the PTSD Checklist (PCL) Administered via
a Mobile Device Relative to a Paper Form
Matthew Price,1 Eric Kuhn,2 Julia E. Hoffman,2,3 Josef
Ruzek,2 and Ron Acierno4,5
1Department of Psychological Science, University of Vermont,
Burlington, Vermont, USA
2National Center for PTSD, Dissemination and Training
Division, Department of Veterans Affairs Palo Alto Health Care
System,
Palo Alto, California, USA
3Center for Healthcare Evaluation, Department of Veterans
Affairs Palo Alto Healthcare System, Palo Alto, California,
USA
4Ralph H. Johnson Veterans Affairs Medical Center,
Charleston, South Carolina
5Medical University of South Carolina, Charleston, South
Carolina, USA
Mobile devices are increasingly used to administer self-report
measures of mental health symptoms. There are significant
differences,
however, in the way that information is presented on mobile
devices compared to the traditional paper forms that were used
to administer
such measures. Such differences may systematically alter
responses. The present study evaluated if and how responses
differed for a
self-report measure, the PTSD Checklist (PCL), administered
via mobile device relative to paper and pencil. Participants were
153 trauma-
exposed individuals who completed counterbalanced
administrations of the PCL on a mobile device and on paper.
PCL total scores (d =
0.07) and item responses did not meaningfully or significantly
differ across administrations. Power was sufficient to detect a
difference in
total score between administrations determined by prior work of
3.46 with a d = 0.23. The magnitude of differences between
administration
formats was unrelated to prior use of mobile devices or
participant age. These findings suggest that responses to self-
report measures
administered via mobile device are equivalent to those obtained
via paper, and they can be used with experienced as well as naïve users of
mobile devices.
Mobile devices can advance traumatic stress research and
treatment (Luxton et al., 2011; Price et al., 2014) through the
collection of ecologically valid data (Shiffman, Stone, & Huf-
ford, 2008). Use of mobile devices requires that responses to
mobile-administered measures are equivalent to responses from
paper measures. This assumption is open to empirical investiga-
tion and should be evaluated to ensure mobile devices provide
valid and reliable measurements.
Mobile devices systematically change the administration of self-report measures. When delivered via paper, items are displayed in an array that allows all responses to be viewed simultaneously, such that initial responses may influence subsequent answers (Richman, Kiesler, Weisband, & Drasgow, 1999). Alternatively, mobile devices typically display a single item per screen. Administration of individual items may focus attention towards item content, resulting in systematically different responses.
The present study examined if responses to a self-report mea-
sure, the PTSD Checklist (PCL; Weathers et al., 2013), admin-
istered via mobile device differed from paper administration.
The PCL has been extensively validated as a measure of PTSD
symptoms across diverse samples (Ruggiero, Ben, Scotti, & Ra-
balais, 2003). A standardized paper version of the PCL is avail-
able via request from the National Center for PTSD (NCPTSD).
The PCL is available in a standardized format for mobile de-
vices as part of the PE Coach mobile application (Reger et al.,
2013). It was hypothesized that responses between PCL total
score and item responses across mobile and paper administra-
tions would be comparable due to prior evidence that suggested
minimal differences between standardized tests administered
via paper and computer (Bush et al., 2013; Campbell et al.,
1999; Finger & Ones, 1999).
Method
Participants
Participants, aged M = 32.34 years (SD = 14.42), were 153
individuals recruited from a Level 1 trauma center (n = 22,
14.3%), a Veteran’s Affairs medical center outpatient mental
health service (VAMC; n = 38, 24.7%), an outpatient clinic
Table 1
Descriptive Information Unadjusted for Time Between PCL
Administrations
Variable n %
Location
Veteran Affairs Medical Center 38 24.7
Female 8 21.1
Community 87 57.1
Female 65 73.9
Outpatient clinic 6 3.9
Female 6 100.0
Trauma center 22 14.3
Female 6 27.3
PTSD diagnosis 62 40.3
Own smartphone 118 76.6
Use e-mail on phone 117 76.0
Use apps on phone 113 73.4
Use games on phone 97 63.0
Use Internet on phone 122 79.2
Note. N = 153. PCL = Posttraumatic Stress Checklist.
for trauma victims (n = 6, 3.9%), and the community (n = 87,
57.1%). Descriptive information is presented in Table 1.
Measures and Procedure
The Posttraumatic Checklist-Civilian Version (PCL-C; Weath-
ers, Litz, Huska, & Keane, 1994) is a 17-item self-report mea-
sure that assesses PTSD symptom severity. Symptoms are rated
on a 5-point Likert-type scale, ranging from 1 = not at all to
5 = extremely, for the past month. Internal consistency for
the current study was excellent with α = .95 for both ad-
ministrations. The measure was administered twice, once via
the paper form available from the NCPTSD (Weathers, Litz,
Huska, & Keane, 2003) and once via PE Coach. The Life
Events Checklist (LEC; Weathers et al., 2013) is a 17-item
self-report measure assessing trauma exposure. Use of Inter-
net and mobile devices was assessed with questions adapted
from a survey from the Pew Internet and American Life Project
(2012). Questions assessed whether various tasks were regularly completed on smartphones and mobile devices using a
yes/no format (e.g., “Do you regularly check e-mail on your
smartphone?”).
Medical records were used to confirm trauma exposure for
Level-1 trauma, VAMC, and outpatient clinic participants. A
diagnosis of PTSD was the indicator of trauma exposure for
VAMC and outpatient clinic participants whereas the presenting
trauma was used for Level-1 trauma center participants. Com-
munity participants were screened with the LEC to determine
if they experienced or witnessed a traumatic event. Follow -
up questions confirmed the validity of the Criterion A event.
The community sample was administered the PTSD module of
the Structured Clinical Interview for the DSM-IV by trained
research staff for the most stressful event identified by the LEC
(SCID; First, Spitzer, Gibbon, & Williams, 2002). No other
modules of the SCID were administered.
Participants completed the PCL on an iPod Touch (4th gen-
eration, 3.5′′ screen) and on paper with a 35-minute (Med =
35, interquartile range: 25) interval between administrations.
After the second administration, participants completed the
use of Internet and mobile devices survey, and demographics
questionnaire. Participants from the community were also given
the PTSD module from the SCID and 27% met criteria for
PTSD. Interviews were administered by trained research assis-
tants and audio recorded. Interviews were double coded from
the recording by a clinical psychologist with 100% diagnos-
tic agreement. The order in which mobile and paper versions
were administered was counterbalanced using a randomization
sequence. Randomization occurred in blocks of 10 and each
data collection site was allocated 10 blocks. Institutional re-
view boards of the agencies where this research was conducted
approved all procedures and all participants consented to the
study.
Data Analysis
Using the guidelines of Bland and Altman (1986), a clini-
cally meaningful margin of error between the two methods
of measurement of 3.46 was established (see Supplemental
Table 1) from nine prior studies where the PCL was adminis-
tered repeatedly. A difference score between the total scores
for both administrations was obtained by subtracting mobile
device scores from paper scores. Comparisons were made with
repeated-measures analysis of covariance in which length be-
tween administrations was used as a covariate. The mean of
the distribution of difference scores was calculated with the
95% confidence interval (CI). If the 95% CI of the difference
scores was within the clinically meaningful margin of error
then the two methods were considered interchangeable. A mar-
gin of error of 1.00 was used for differences between indi -
vidual items. Bivariate correlations between both measure ad-
ministrations and intraclass correlation coefficients (ICC) were
also computed. One participant declined to answer questions
about use of mobile devices after reporting they did not
own a smartphone. There were no missing data on the PCL
administrations.
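As a rough illustration of the equivalence logic described above (omitting the covariate adjustment for time between administrations), the sketch below computes paper-minus-mobile difference scores, their mean and 95% CI, and checks the CI against the prespecified 3.46-point margin. It is our own sketch, not the study's analysis script.

# Sketch of a Bland-Altman style equivalence check for two administration
# formats of the PCL; the repeated-measures covariate adjustment used in the
# study is omitted here for brevity.
import numpy as np
from scipy import stats

def equivalence_check(paper_totals, mobile_totals, margin=3.46):
    diffs = np.asarray(paper_totals, dtype=float) - np.asarray(mobile_totals, dtype=float)
    mean_diff = diffs.mean()
    sem = stats.sem(diffs)
    ci_low, ci_high = stats.t.interval(0.95, df=len(diffs) - 1, loc=mean_diff, scale=sem)
    interchangeable = (-margin < ci_low) and (ci_high < margin)
    return mean_diff, (ci_low, ci_high), interchangeable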
Results
Adjusted for time between administrations, the mean difference
between paper (M = 40.24, SD = 16.69) and mobile device (M
= 39.08, SD = 15.97) administration was 1.17 points with 95%
CI [1.13, 1.21] (Table 2). The upper limit of the 95% CI for the
mean difference was within the margin of error. The effect size
for the difference was d = 0.07. Test-retest reliability was r =
.93. The ICC was .96, 95% CI [.95, .97]. Mean differences at
the
item level ranged from 0.001 to 0.22. The highest upper limit
for the 95% CI at the item level was 0.37 for Item 8. Therefore,
Table 2
Mean Difference and 95% CI for PCL Items and Total Score
PCL M Diff 95% CI
Item
1. Intrusive thoughts 0.02 [−0.11, 0.15]
2. Nightmares 0.05 [−0.07, 0.18]
3. Reliving 0.13 [−0.01, 0.27]
4. Emotional cue reactivity 0.12 [−0.04, 0.28]
5. Physiological cue reactivity 0.14 [0.00, 0.27]
6. Avoidance of thoughts 0.04 [−0.14, 0.22]
7. Avoidance of reminders 0.13 [−0.04, 0.29]
8. Trauma-related amnesia 0.22 [0.08, 0.37]
9. Loss of interest 0.05 [−0.08, 0.17]
10. Feeling detached −0.09 [−0.22, 0.05]
11. Lack of positive emotion 0.09 [−0.03, 0.22]
12. Foreshortened future 0.03 [−0.11, 0.17]
13. Sleep problems −0.01 [−0.13, 0.12]
14. Irritability or anger 0.07 [−0.05, 0.20]
15. Difficulty of concentrating −0.04 [−0.18, 0.10]
16. Overly alert 0.04 [−0.08, 0.16]
17. Easily startled 0.14 [0.02, 0.26]
Total 1.17 [1.13, 1.21]
Note. Sample size = 153. Margin of error for Total scale = 3.46.
Margin of error
for items = 1.00. Difference score calculated as paper minus
mobile. PCL =
Posttraumatic Stress Checklist.
all of the items were within the margin of error (1.00). Test-
retest reliability at the item level ranged from r = .66 to .88 and
ICC = .75 to .93.
There were no differences in administrations across the dif-
ferent locations, F(3, 149) = 1.05, p = .373. Results were
consistent across the combined sample in that the upper limit
of the 95% CI for the sample obtained from the trauma cen-
ter, M = 0.45, 95% CI [0.45, 0.45]; VAMC, M = 2.72, 95%
CI [2.60, 2.85]; and community sample, M = 0.65, 95% CI
[0.58, 0.72] were within the margin of error for the total scale.
Test-retest reliability within each group was consistent with the
total sample: trauma center, r = .89, ICC = .94, 95% CI [.86,
.98]; VAMC, r = .89, ICC = .94, 95% CI [.89, .97]; com-
munity sample, r = .91, ICC = .95, 95% CI [.93, .97]. Mean
differences at the item level ranged from 0.00 to 0.36 for the
trauma center, from 0.00 to 0.37 for the VAMC, and from 0.01
to 0.20 for the community sample. The highest upper limit for
the 95% CI for each item was within the margin of error for the
trauma center (0.65), VAMC (0.65), and the community sample
(0.40).
The relation between use of smartphone functions and dif-
ference in total PCL scores across the administrations was as -
sessed with one-way analyses of variance. Differences in total
scores were not related to smartphone ownership, F(1, 149)
= 1.51, p = .221; use of e-mail via smartphone, F(1, 148) =
0.60, p = .439; use of apps, F(1, 147) = 0.78, p = .378;
use of games, F(1, 148) = 0.78, p = .379; and use of the In-
ternet on a smartphone, F(1, 148) = 0.78, p = .379. Finally,
differences in total PCL scores were unrelated to age (r = .04,
p = .598).
Discussion
The present study suggested that there were minimal dif-
ferences between a self-report measure of PTSD symptoms
administered via mobile device or paper in a heterogeneous
sample of trauma-exposed adults. The lack of a relation be-
tween prior experiences using a mobile device, age, and differ -
ences in total score indicates that mobile devices are a viable
strategy for those who have minimal training or experience with
this technology. Prior work demonstrated that among patients,
demographic characteristics and prior experience is largely un-
related to willingness to use technology for healthcare (Price
et al., 2013). There is evidence, however, to suggest that prior
use is relevant for clinicians (Kuhn et al., 2014). Clinicians
with experience using mobile devices or who own a personal
mobile device were more receptive to use such technologies
in treatment. Ensuring that clinicians are capable and comfort-
able with such devices will be necessary for proper measure
administration as patients are likely to turn to their therapist for
technical assistance or tutorials with these technologies (Price
&
Gros, 2014).
The present study had several limitations. The mobile ad-
ministration was not conducted in a naturalistic environment
where such measures administered via mobile device are most
likely to be completed insofar as this was a research study
with informed consent processes. The effect of environmental
influences on responses is unknown. Although it is unlikely
that the environment would systematically influence mobile re-
sponses relative to paper response, measures completed on a
mobile device are more likely to be completed in a variety of
contexts in which other factors could influence responses. Re-
searchers are advised to collect data on the context in which
measures are completed to assess potential sources of bias. The
study evaluated a single self-report measure of PTSD without a
lengthy assessment battery. Thus, the current study was unable
to examine effects related to fatigue across the administration
of multiple measures via a mobile device. The current study
supported the null hypothesis that there were no differences
between scores across paper and mobile versions of the PCL,
which is conceptually and pragmatically challenging (Piaggo,
Elbourne, Pocock, & Evans, 2006). Although the current study
had sufficient power to detect an effect as small as 0.23, con-
siderably more power would be needed to detect an effect at
the obtained effect size of 0.07 (n = 1,604). Continued studies
that demonstrate the clinical equivalence of measurements ob-
tained via mobile device relative to paper should be conducted
to further validate these findings. Finally, PTSD diagnoses were
obtained with different methods across the subsamples, and the
accuracy of diagnoses in medical records has been questioned
(Holowka et al., 2014).
The current study provides empirical support regarding the
lack of differences for measures administered via mobile de-
vice. Given the high rates of smartphone ownership, the results
from the present study suggest that mobile devices are an appro-
priate method for population screens of PTSD. Such a method
would assist in the efficient allocation of resources in events of
mass trauma such as a natural disaster.
References
Bland, M. J., & Altman, D. G. (1986). Statistical methods for
assessing agree-
ment between two methods of clinical assessment. The Lancet,
327, 307–
310. doi:10.1016/S0140-6736(86)90837-8
Bush, N. E., Skopp, N., Smolenski, D., Crumpton, R., & Fairall,
J. (2013).
Behavioral screening measures delivered with a smartphone
app: psycho-
metric properties and user preference. The Journal of Nervous
and Mental
Disease, 201, 991–995. doi:10.1097/NMD.0000000000000039
Campbell, K. A., Rohlman, D. S., Storzbach, D., Binder, L. M.,
Anger, W.
K., Kovera, C. A., . . . Grossmann, S. J. (1999). Test-retest
reliability of
psychological and neurobehavioral tests self-administered by
computer. As-
sessment, 6, 21–32. doi:10.1177/107319119900600103
Finger, M. S., & Ones, D. S. (1999). Psychometric equivalence
of the computer
and booklet forms of the MMPI: A meta-analysis. Psychological
Assessment,
11, 58–66. doi:10.1037/1040-3590.11.1.58
First, M. B., Spitzer, R. L., Gibbon, M., & Williams, J. B. W.
(2002). Struc-
tured Clinical Interview for DSM-IV-TR Axis I Disorders,
Research Version,
Patient Edition. New York, NY: Biometrics Research, New
York State
Psychiatric Institute.
Holowka, D. W., Marx, B. P., Gates, M. A., Litman, H. J.,
Ranganathan, G.,
Rosen, R. C., & Keane, T. M. (2014). PTSD diagnostic validity
in Vet-
erans Affairs electronic records of Iraq and Afghanistan
veterans. Jour-
nal of Consulting and Clinical Psychology, 82, 569–579.
doi:10.1037/
a0036347
Kuhn, E., Eftekhari, A., Hoffman, J. E., Crowley, J. J., Ramsey,
K. M., Reger,
G. M., & Ruzek, J. I. (2014). Clinician perceptions of using a
smartphone
app with prolonged exposure therapy. Administration and Policy
in Mental
Health and Mental Health Services Research, 1–8.
doi:10.1007/s10488-013-
0532-2
Luxton, D. D., McCann, R. A., Bush, N. E., Mishkind, M. C., &
Reger, G.
M. (2011). mHealth for mental health: Integrating smartphone
technology
in behavioral healthcare. Professional Psychology: Research and
Practice,
42, 505–512. doi:10.1037/a0024485
Pew Internet and American Life Project. (2012, September).
Explore Survey
Questions. Retrieved from http://www.pewinternet.org/Static-
Pages/Data-
Tools/Explore-Survey-Questions/Roper-
Center.aspx?item={0368CEFB-
1706-4995-B395-925639C0B22F}
Piaggo, G., Elbourne, D. R., Pocock, S. J., & Evans, S. J. W.
(2006). Reporting
of noninferiority and equivalence randomized trials: An
extension of the
CONSORT statement. Journal of the American Medical
Association, 295,
1152–1161. doi:10.1001/jama.295.10.1152
Price, M., & Gros, D. F. (2014). Examination of prior
experience with telehealth
and comfort with telehealth technology as a moderator of
treatment response
for PTSD and depression with veterans. International Journal of
Psychiatry
in Medicine, 48, 57–67. doi:10.2190/PM.48.1.e
Price, M., Williamson, D., McCandless, R., Mueller, M.,
Gregoski, M.,
Brunner-Jackson, B., . . . Treiber, F. (2013). Hispanic migrant
farm work-
ers’ attitudes toward mobile phone-based telehealth for
management of
chronic health conditions. Journal of Medical Internet Research,
15, e76.
doi:10.2196/jmir.2500
Price, M., Yuen, E. K., Goetter, E. M., Herbert, J. D., Forman,
E. M., Acierno,
R., & Ruggiero, K. J. (2014). mHealth: A mechanism to deliver
more acces-
sible, more effective mental health care. Clinical Psychology &
Psychother-
apy, 21, 427–436. doi:10.1002/cpp.1855
Reger, G. M., Hoffman, J., Riggs, D., Rothbaum, B. O., Ruzek,
J., Holloway,
K. M., & Kuhn, E. (2013). The “PE coach” smartphone
application: An
innovative approach to improving implementation, fidelity, and
homework
adherence during prolonged exposure. Psychological Services,
10, 342–349.
doi:10.1037/a0032774
Richman, W. L., Kiesler, S., Weisband, S., & Drasgow, F.
(1999). A meta-
analytic study of social desirability distortion in computer -
administered
questionnaires, traditional questionnaires, and interviews.
Journal of Ap-
plied Psychology, 84, 754–775. doi:10.1037/0021-
9010.84.5.754
Ruggiero, K. J., Ben, K. D., Scotti, J. R., & Rabalais, A. E.
(2003). Psy-
chometric properties of the PTSD Checklist—Civilian version.
Journal of
Traumatic Stress, 16, 495–502. doi:10.1023/A:1025714729117
Shiffman, S., Stone, A. A., & Hufford, M. R. (2008). Ecological
mo-
mentary assessment. Annual Review of Clinical Psychology, 4,
1–32.
doi:10.1146/annurev.clinpsy.3.022806.091415
Weathers, F., Litz, B., Huska, J., & Keane, T. (1994). Post-
Traumatic Stress
Disorder Checklist (PCL-C) for DSM-IV. Boston, MA: National
Center for
PTSD.
Weathers, F., Litz, B., Huska, J., & Keane, T. (2003). PTSD
Check-
list - Civilian Version. Retrieved from
http://www.mirecc.va.gov/docs/
visn6/3_PTSD_CheckList_and_Scoring.pdf
Weathers, F. W., Blake, D. D., Schnurr, P. P., Kaloupek, D. G.,
Marx, B. P.,
& Keane, T. M. (2013). The Life Events Checklist for DSM-5
(LEC-5).
Unpublished instrument. Retrieved from http://www.ptsd.va.gov
Journal of Traumatic Stress
October 2019, 32, 799–805
Brief Report
An Empirical Crosswalk for the PTSD Checklist: Translating
DSM-IV to DSM-5 Using a Veteran Sample
Samantha J. Moshier,1,2 Daniel J. Lee,2,3 Michelle J. Bovin,2,3
Gabrielle Gauthier,1 Alexandra Zax,1
Raymond C. Rosen,4 Terence M. Keane,2,3 and Brian P.
Marx2,3
1Veterans Affairs Boston Healthcare System, Boston,
Massachusetts, USA
2The National Center for PTSD at Veterans Affairs Boston
Healthcare System, Boston, Massachusetts, USA
3Department of Psychiatry, Boston University School of
Medicine, Boston, Massachusetts, USA
4Healthcore/New England Research Institutes, Watertown,
Massachusetts, USA
The fifth edition of the Diagnostic and Statistical Manual of
Mental Disorders (DSM-5) introduced numerous revisions to the
fourth
edition’s (DSM-IV) criteria for posttraumatic stress disorder
(PTSD), posing a challenge to clinicians and researchers who
wish to assess
PTSD symptoms continuously over time. The aim of this study
was to develop a crosswalk between the DSM-IV and DSM-5
versions of the
PTSD Checklist (PCL), a widely used self-rated measure of
PTSD symptom severity. Participants were 1,003 U.S. veterans
(58.7% with
PTSD) who completed the PCL for DSM-IV (the PCL-C) and
DSM-5 (the PCL-5) during their participation in an ongoing
longitudinal
registry study. In a randomly selected training sample (n = 800),
we used equipercentile equating with loglinear smoothing to
compute
a “crosswalk” between PCL-C and PCL-5 scores. We evaluated
the correspondence between the crosswalk-determined predicted
scores
and observed PCL-5 scores in the remaining validation sample
(n = 203). The results showed strong correspondence between
crosswalk-
predicted PCL-5 scores and observed PCL-5 scores in the
validation sample, ICC = .96. Predicted PCL-5 scores performed
comparably
to observed PCL-5 scores when examining their agreement with
PTSD diagnosis ascertained by clinical interview: predicted
PCL-5,
κ = 0.57; observed PCL-5, κ = 0.59. Subsample comparisons
indicated that the crosswalk’s accuracy did not differ across
characteristics
including gender, age, racial minority status, and PTSD status.
The results support the validity of this newly developed PCL-C
to PCL-5
crosswalk in a veteran sample, providing a tool with which to
interpret and translate scores across the two measures.
The publication of the fifth edition of the Diagnostic and
Statistical Manual of Mental Disorders (DSM-5; American Psy-
chiatric Association [APA], 2013) introduced numerous revi -
sions to the diagnostic criteria for posttraumatic stress disorder
(PTSD), including the addition of new symptoms; the modi-
fication of several existing symptoms; and the introduction of
four, rather than three, symptom clusters. These changes to the
diagnostic criteria pose a challenge to clinicians and researchers
Samantha Moshier is now at Emmanuel College (Boston, MA,
USA).
This research was funded by the U.S. Department of Defense,
Congres-
sionally Directed Medical Research Programs (designations
W81XWH08-
2-0100/W81XWH-08-2-0102 and W81XWH-12-2-
0117/W81XWH12-2-
0121). Dr. Lee is supported by the National Institute of Mental
Health
(5T32MH019836-16). Any opinions, findings, and conclusions
or recommen-
dations expressed in this material are those of the authors and
do not necessarily
reflect the view of the U.S. government.
Correspondence concerning this article should be addressed to
Brian Marx,
Ph.D., 150 South Huntington Ave (116B-4), Boston, MA 02130,
E-mail:
[email protected]
C© 2019 International Society for Traumatic Stress Studies.
View this article
online at wileyonlinelibrary.com
DOI: 10.1002/jts.22438
who previously collected symptom data using measures reflect-
ing the PTSD diagnostic criteria in the prior version of the DSM
(i.e., the fourth edition, text revision; DSM-IV-TR; APA, 2000)
but who wish to follow the course of PTSD symptoms over
time,
including after the revisions to the criteria were published. This
shift may be especially challenging to longitudinal investiga-
tions of PTSD, in which continuity of symptom measurement
over time is critical for many statistical analyses.
Clinicians and researchers with these continuity concerns
must choose among using symptom severity measures that cor-
respond with outdated PTSD diagnostic criteria; using mea-
sures that correspond with the updated DSM-5 PTSD diagnos-
tic criteria; or creating idiosyncratic, unvalidated measures that
simultaneously collect information about both sets of diagnos-
tic criteria. None of these choices is ideal. Instead, researchers
and clinicians would benefit from a guide that translates re-
sults of DSM-IV congruent measures to estimated results on
DSM-5 congruent measures, and vice versa. Recent research
has suggested that DSM-IV congruent symptom ratings can be
used to approximate a diagnosis of DSM-5 PTSD (Rosellini
et al., 2015). However, there is currently no tool available
to enable linking of continuous total or cluster-specific PTSD
However, there is currently no tool available to enable linking of continuous total or cluster-specific PTSD symptom severity scores derived from DSM-IV and DSM-5 congruent measures. Therefore, the aim of the present study was to establish a translational crosswalk between symptom severity scores on the PTSD Checklist–Civilian Version for DSM-IV-TR (PCL-C) and the PCL for DSM-5 (PCL-5; Weathers, Litz, Herman, Huska, & Keane, 1993; Weathers et al., 2013), as the PCL is the most commonly used self-rated measure of PTSD symptom severity. To do so, we conducted test-equating procedures using data from both versions of the measure collected concurrently in a sample of U.S. military veterans.
Method
Participants
Participants were 1,003 United States Army or Marine veterans enrolled in the Veterans After-Discharge Longitudinal Registry (Project VALOR). Project VALOR is a registry of Veterans Affairs (VA) mental health care users with and without PTSD who were deployed in support of recent military operations in Afghanistan and Iraq. To be included in the cohort, veterans must have undergone a mental health evaluation at a VA facility. The cohort oversampled veterans with probable PTSD according to VA medical records (i.e., at least two instances of a PTSD diagnosis by a mental health professional associated with two separate visits) at a 3:1 ratio. Female veterans were oversampled at a rate of 1:1 (female to male). A sample of 1,649 (60.8%) veterans completed the baseline assessment for Project VALOR. For the current analysis, we focused on a subsample of this group that consisted of 1,003 participants who reported experiencing a DSM-5 Criterion A traumatic event during a clinical interview and had complete data (required for the test-equating analyses) on both the PCL-C and PCL-5 during the fourth wave of study assessments (Time 4 [T4]). There were no significant differences in sex, racial minority status, or PTSD diagnostic status or symptom severity at the first wave of data collection (Time 1 [T1]) between the 1,003 participants included in this analysis and the remaining cohort members, ps = .262–.891. However, participants included in this analysis were older (M age = 38 years) compared with the remaining cohort members (M age = 36 years), t(1,647) = −3.56, p < .001, and had a higher level of educational attainment (i.e., 38% of the analytic sample had a bachelor's degree vs. 30% of remaining cohort members), χ2(6, N = 1,642) = 15.74, p = .015.
Procedure
At T4 of Project VALOR, participants provided informed consent verbally over the telephone in accordance with the research protocol approved by the VA Boston Healthcare System institutional review boards and the Human Research Protection Office of the U.S. Army Medical Research and Materiel Command. Participants then completed a self-administered questionnaire (SAQ) online and, following this, completed a telephone-based diagnostic clinical interview. The SAQ consisted of a large battery of questionnaires that, in total, included over 740 questions pertaining to physical health, functional impairment, psychiatric symptoms, deployment experiences, and lifetime trauma exposure.
Measures
Demographic information. Participant age and sex were extracted from a U.S. Department of Defense database. Race, ethnicity, and education were collected via self-report in the T4 SAQ.

PTSD symptom severity. The PCL-C is a self-rated measure of PTSD symptom severity designed to correspond to the 17 core DSM-IV PTSD symptoms (Weathers et al., 1993). Respondents use a scale ranging from 1 (not at all) to 5 (extremely) to rate how much each symptom has bothered them in the past month. Although a military version of the PCL (the PCL-M) is available, we used the civilian version because it corresponded with the study's clinical interview procedures, which did not restrict potential index traumatic events solely to military-related events. The PCL-C is one of the most commonly used self-rated measures of DSM-IV PTSD symptom severity, and it has demonstrated excellent psychometric properties across a range of samples and settings (for review, see Norris & Hamblen, 2004). In the current sample, internal reliability of PCL-C scores was excellent, Cronbach's α = .96.

The PCL-5 (Weathers et al., 2013) is a self-rated measure of PTSD symptom severity designed to correspond to the 20 core DSM-5 PTSD symptoms. Respondents use a scale ranging from 0 (not at all) to 4 (extremely) to rate how much each symptom has bothered them in the past month. Like its predecessor, the PCL-5 is frequently used across a range of settings for a variety of purposes, including monitoring symptom change as well as screening for and providing a provisional diagnosis of PTSD. Data from the PCL-5 have demonstrated good test–retest reliability, r = .84, and convergent and discriminant validity (Blevins, Weathers, Davis, Witte, & Domino, 2015; Bovin et al., 2015; Keane et al., 2014; Wortmann et al., 2016). Internal reliability of PCL-5 scores was excellent in the current sample, Cronbach's α = .96.

Major depression and PTSD diagnosis. The PTSD and Major Depressive Episode (MDE) modules of the Structured Clinical Interview for DSM-5 (SCID-5; First, Williams, Karg, & Spitzer, 2015) were used to assess exposure to a Criterion A event and to assess current PTSD diagnostic status and presence or absence of a current MDE. Interrater agreement was evaluated for a random sample of 100 cases and was excellent for both current PTSD, κ = .85, and current MDE, κ = .98.
Data Analysis
To link PCL-C and PCL-5 scores, we used equipercentile equating, a test-equating procedure that is commonly used in educational measurement fields to determine comparable scores on different versions of the same exam (for a review, see Dorans, Moses, & Eignor, 2010).
Table 1
Demographic Characteristics of the Total, Test, and Validation Samples

Variable                              Total Sample (n = 1,003)   Test Sample (n = 800)   Validation Sample (n = 203)
Sex (%)
  Female                              51.1                       50.8                    52.7
  Male                                48.9                       49.2                    47.3
Age (years), M (SD)                   43.2 (9.8)                 43.2 (9.9)              43.2 (9.5)
Racial minority status (%)
  Non-White                           23.2                       22.6                    25.2
  White                               76.8                       77.4                    74.8
Highest education level (%)
  High school or GED                  6.6                        6.9                     5.4
  Some college                        38.9                       39.7                    37.1
  Bachelor's degree or higher         50.8                       50.1                    53.7
Current PTSD (%)                      58.7                       58.9                    58.1
Lifetime PTSD (%)                     87.5                       87.4                    88.2
Current MDE (%)                       34.5                       34.4                    35.0
PCL-C score, M (SD)                   49.7 (17.3)                50.0 (17.3)             50.1 (17.6)
PCL-5 score, M (SD)                   36.2 (20.6)                36.1 (20.7)             36.7 (20.6)

Note. PCL-C = Posttraumatic Stress Disorder Checklist–Civilian Version (for DSM-IV); PCL-5 = Posttraumatic Stress Disorder Checklist for DSM-5; PTSD = posttraumatic stress disorder; MDE = major depressive episode; GED = general education development.
Equipercentile equating considers scores on two measures to be equivalent to one another if their percentile ranks in a given group are equal. This approach has a number of benefits relative to mean or linear equating methods; for example, it results in all imputed scores falling within the actual range of the scale and does not rely on the assumption of a normal distribution of test scores. Equipercentile equating methods have been used to develop crosswalks for a number of neurocognitive and psychiatric rating scales (e.g., Choi, Schalet, Cook, & Cella, 2014; Monsell et al., 2016).
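To make the linking step concrete, the sketch below illustrates unsmoothed equipercentile equating in Python on simulated paired scores. It is not the authors' implementation (the study used the R package equate with loglinear smoothing), and the variable names and data in it are hypothetical.

```python
import numpy as np

def percentile_rank(scores, x):
    """Percentile rank of x in the empirical distribution `scores`
    (percentage strictly below x plus half the percentage exactly at x)."""
    scores = np.asarray(scores)
    return 100.0 * (np.mean(scores < x) + 0.5 * np.mean(scores == x))

def equipercentile_crosswalk(pcl_c, pcl_5,
                             c_range=range(17, 86),
                             five_grid=np.arange(0, 80.5, 0.5)):
    """Map each possible PCL-C total (17-85) to the PCL-5 value (0-80) whose
    percentile rank in the same sample is closest to the PCL-C score's rank.
    Unsmoothed; the published crosswalk applies loglinear smoothing first."""
    five_ranks = np.array([percentile_rank(pcl_5, v) for v in five_grid])
    return {c: float(five_grid[np.argmin(np.abs(five_ranks - percentile_rank(pcl_c, c)))])
            for c in c_range}

# Hypothetical paired training data (the study's training sample had n = 800).
rng = np.random.default_rng(0)
pcl_c_train = np.clip(rng.normal(50, 17, 800).round(), 17, 85)
pcl_5_train = np.clip((pcl_c_train - 14 + rng.normal(0, 6, 800)).round(), 0, 80)

xwalk = equipercentile_crosswalk(pcl_c_train, pcl_5_train)
print(xwalk[50])  # simulated PCL-5 equivalent of a PCL-C total of 50
```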
Figure 1. Histograms of total Posttraumatic Stress Disorder Checklist–Civilian Version (for DSM-IV; PCL-C) and PCL for DSM-5 (PCL-5) scores in the training sample (N = 800). DSM = Diagnostic and Statistical Manual of Mental Disorders (DSM-IV = fourth edition; DSM-5 = fifth edition).
Prior to performing the equating procedure, we randomly split the sample into a training sample (n = 800) and a validation sample (n = 203; a split that allows a large sample size to be retained for the equating procedure, consistent with recommendations by Dorans et al., 2010). In the training dataset, equipercentile equating with loglinear smoothing was performed using the R package Equate (Albano, 2016). Standard errors and 95% confidence intervals of the crosswalk estimates were calculated using 10,000 bootstrapped samples. After completing the equating procedure in the training dataset, we used the resulting crosswalk to impute predicted PCL-5 scores from PCL-C scores for all participants in the validation dataset.
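The bootstrap step can be sketched as follows. This fragment assumes the equipercentile_crosswalk helper and the simulated pcl_c_train and pcl_5_train arrays from the previous sketch are in scope, and it uses far fewer replicates than the 10,000 reported here purely to keep the illustration fast; it is not the equate package's procedure.

```python
import numpy as np

# Assumes `equipercentile_crosswalk`, `pcl_c_train`, and `pcl_5_train`
# from the preceding sketch have already been defined.

def bootstrap_crosswalk_ci(pcl_c, pcl_5, n_boot=200, seed=0):
    """Resample respondent pairs with replacement, re-equate each time, and
    summarize the spread of the equated PCL-5 value at every PCL-C total
    (the article used 10,000 bootstrap samples; 200 keeps this sketch quick)."""
    rng = np.random.default_rng(seed)
    pcl_c, pcl_5 = np.asarray(pcl_c), np.asarray(pcl_5)
    scores = list(range(17, 86))
    draws = np.empty((n_boot, len(scores)))
    for b in range(n_boot):
        idx = rng.integers(0, len(pcl_c), len(pcl_c))   # resample pairs
        xw = equipercentile_crosswalk(pcl_c[idx], pcl_5[idx])
        draws[b] = [xw[s] for s in scores]
    return {s: {"se": draws[:, j].std(ddof=1),
                "ci95": tuple(np.percentile(draws[:, j], [2.5, 97.5]))}
            for j, s in enumerate(scores)}

cis = bootstrap_crosswalk_ci(pcl_c_train, pcl_5_train)
print(cis[50])  # bootstrap SE and 95% CI for the equated value at PCL-C = 50
```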
To evaluate the accuracy of the crosswalk in the validation sample, we examined the intraclass correlation coefficient (ICC) between predicted and observed PCL-5 scores and calculated the average difference between predicted and observed PCL-5 scores. We calculated sensitivity, specificity, efficiency (correct classification rate), quality of efficiency (i.e., Cohen's kappa), and area under the curve (AUC) for use of crosswalk-predicted PCL-5 cut scores, using the cutoff of a PCL-5 score of 33 or greater (Bovin et al., 2015), in identifying PTSD diagnosis as determined by the SCID interview. Finally, in order to evaluate whether the crosswalk demonstrated accuracy across relevant subgroups of individuals, we compared these same markers of accuracy when the sample was divided into subgroups based on education level, age, gender, racial minority status, and presence or absence of PTSD and MDE.
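For the diagnostic-utility portion of this evaluation, a minimal sketch is given below using hypothetical validation-sample arrays; scikit-learn is used only for convenience, and the ICC and subgroup comparisons reported in the article are not reproduced here.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score, confusion_matrix, roc_auc_score

def screening_accuracy(predicted_pcl5, observed_pcl5, scid_ptsd, cutoff=33):
    """Compare predicted and observed PCL-5 scores, dichotomized at `cutoff`,
    against a SCID-based PTSD diagnosis: sensitivity, specificity, efficiency
    (correct classification rate), Cohen's kappa, and AUC of the dichotomized
    screen. The mean observed-minus-predicted difference is also returned."""
    results = {"mean_diff": float(np.mean(np.asarray(observed_pcl5) - np.asarray(predicted_pcl5)))}
    for label, scores in (("predicted", predicted_pcl5), ("observed", observed_pcl5)):
        screen = (np.asarray(scores) >= cutoff).astype(int)
        tn, fp, fn, tp = confusion_matrix(scid_ptsd, screen).ravel()
        results[label] = {
            "sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp),
            "efficiency": (tp + tn) / (tp + tn + fp + fn),
            "kappa": cohen_kappa_score(scid_ptsd, screen),
            "auc": roc_auc_score(scid_ptsd, screen),
        }
    return results

# Hypothetical validation-sample data (the study's validation sample had n = 203).
rng = np.random.default_rng(1)
scid = rng.integers(0, 2, 203)                                    # SCID PTSD diagnosis (0/1)
observed = np.clip(rng.normal(20 + 30 * scid, 15, 203), 0, 80)    # observed PCL-5 totals
predicted = np.clip(observed + rng.normal(0, 6, 203), 0, 80)      # crosswalk-predicted totals
print(screening_accuracy(predicted, observed, scid))
```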
Figure 2. Crosswalk of corresponding Posttraumatic Stress Disorder Checklist–Civilian Version (for DSM-IV; PCL-C) and PCL for DSM-5 (PCL-5) total scores with 95% confidence intervals from 10,000 bootstrapped samples. DSM = Diagnostic and Statistical Manual of Mental Disorders (DSM-IV = fourth edition; DSM-5 = fifth edition).
We used the same test-equating procedures to create crosswalks from PCL-C subscale scores to PCL-5 subscale scores, representing each of the DSM-5 PTSD symptom clusters (Cluster B = intrusion symptoms, Cluster C = avoidance symptoms, Cluster D = negative alterations in cognitions and mood, Cluster E = alterations in arousal and reactivity). These symptom clusters were approximated in the PCL-C data by summing Items 1–5 (Cluster B), Items 6 and 7 (Cluster C), Items 8–12 (Cluster D), and Items 13–17 (Cluster E). Missing data were minimal (one missing case each for the variables of age, race, and education status, and three cases missing the MDE module of the SCID) and were therefore handled using pairwise deletion.
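A small helper along the lines of the item groupings above might look like this; the item-to-column mapping follows the text, and the demo matrix is illustrative only.

```python
import numpy as np

def pclc_cluster_subscores(pclc_items):
    """Approximate DSM-5 cluster subscores from the 17 PCL-C items (rated 1-5).
    `pclc_items` is a respondents-by-17 array; items are 1-indexed in the text,
    so column 0 corresponds to Item 1."""
    items = np.asarray(pclc_items)
    return {
        "B (intrusion)": items[:, 0:5].sum(axis=1),                # Items 1-5
        "C (avoidance)": items[:, 5:7].sum(axis=1),                # Items 6-7
        "D (cognitions and mood)": items[:, 7:12].sum(axis=1),     # Items 8-12
        "E (arousal and reactivity)": items[:, 12:17].sum(axis=1)  # Items 13-17
    }

# Hypothetical ratings for three respondents.
demo = np.random.default_rng(2).integers(1, 6, size=(3, 17))
print(pclc_cluster_subscores(demo))
```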
Results
The characteristics of the sample and subsamples are presented in Table 1. In all, 58.7% of participants met criteria for current (i.e., past month) PTSD and 34.5% met criteria for current MDE. Group comparison tests revealed no significant differences between the training and validation samples on sex, race, ethnicity, education level, PCL-C or PCL-5 score, or proportion of the sample with current PTSD or MDE, ps = .363–.878. The PCL-C and PCL-5 were highly correlated in both the training and validation samples, rs = .95 and .96, respectively. These correlations were well above thresholds recommended for equating procedures (i.e., .75–.86; Choi et al., 2014). A histogram of total score frequencies in the training sample is presented in Figure 1.

The crosswalk for converting PCL-C to PCL-5 scores based on the equipercentile equating results is presented in Figure 2. The PCL-C scores were equated to lower PCL-5 scores, which is not surprising given the difference in scaling ranges between the two measures (PCL-C scores range from 17 to 85 and PCL-5 scores range from 0 to 80). For example, a score of 50 on the PCL-C was equated with a score of 36 on the PCL-5.
In the validation sample, the ICC between the observed and predicted PCL-5 scores was .96. The mean difference between observed and predicted PCL-5 scores was 0.20 (SD = 6.30).
Using the cutoff score of 33 or higher, the predicted PCL-5
score had similar diagnostic utility to the observed PCL-5 score
in predicting PTSD diagnosis determined by clinical interview:
Cohen’s κ = .55, sensitivity = .81, specificity = .74, AUC
= .77, correct classification of 78% of cases for the predicted
PCL-5; Cohen’s κ = .58, sensitivity = .84, specificity = .74,
AUC = .79, correct classification of 80% of cases for the
observed PCL-5.
The accuracy of the crosswalk was highly consistent across subgroups based on sex, age, racial minority status, education level, PTSD diagnostic status as determined by clinical interview, and presence or absence of current MDE (see Table 2). The ICCs between predicted and observed PCL-5 scores were very high for all subgroups, ICCs = .92–.96. There were no significant differences in the mean difference between observed and predicted PCL-5 scores between any of these demographic subgroups. The kappa values between observed and predicted probable DSM-5 PTSD diagnosis were good for all subgroups examined, and the proportion of correctly classified cases did not differ significantly by subgroup.
The items comprising Clusters B and C are highly similar between the PCL-C and PCL-5, with only minor wording changes (e.g., the addition of "unwanted" to Item 1 or the addition of "strong" to Item 5 on the PCL-5). Not surprisingly, then, the equipercentile-equated crosswalk for the Cluster C subscale was identical to a linear transformation of subtracting 2 points from PCL-C scores to reflect the change in scaling between the two measures. Similarly, the equipercentile-equated crosswalk for Cluster B subscale scores was nearly identical to a linear transformation involving subtracting 5 points from PCL-C scores. The ICC between equated and observed scores using these two methods was equal to .997. Additionally, the equipercentile-equated crosswalk for Cluster B did not outperform the linear transformation method in the accuracy analyses conducted in the validation sample, which suggests that the linear transformation can be used for simplicity when converting Cluster B subscale scores between the PCL-C and PCL-5. However, such a linear transformation would not be appropriate for Clusters D and E given that both clusters include new symptoms in DSM-5 relative to DSM-IV-TR.
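The linear conversions described above for Clusters B and C amount to removing the one-point-per-item offset between the 1-5 (PCL-C) and 0-4 (PCL-5) response scales. A trivial illustration follows; as the text notes, this shortcut is not appropriate for Clusters D and E.

```python
def linear_cluster_conversion(pclc_subscore, n_items):
    """Approximate a PCL-5 Cluster B or C subscore from the corresponding PCL-C
    subscore by subtracting one point per item (2 items in Cluster C, 5 in B)."""
    return pclc_subscore - n_items

# A PCL-C Cluster B subscore of 18 maps to roughly 13 on the PCL-5;
# a Cluster C subscore of 6 maps to roughly 4.
print(linear_cluster_conversion(18, 5), linear_cluster_conversion(6, 2))
```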
Table 2
Posttraumatic Stress Disorder Checklist (PCL) Crosswalk Accuracy in Clinical and Demographic Subgroups Within the Validation Sample

ICC = intraclass correlation between crosswalk-predicted and observed PCL-5 scores; M and SD = mean and standard deviation of the difference between crosswalk-predicted and observed PCL-5 scores(a); κ = agreement between crosswalk-predicted and observed probable PTSD(b).

Variable                            n     ICC    M       SD     κ
Sex
  Male                              96    .95    −0.29   6.72   0.91
  Female                            107   .96    −0.08   5.84   0.85
Age (years)
  < 40                              116   .95    0.30    6.55   0.83
  ≥ 40                              86    .96    −0.84   5.85   0.95
Racial minority status
  Non-White                         51    .96    0.75    6.23   0.88
  White                             151   .95    −0.49   6.27   0.88
Education level
  High school or some college       93    .94    −0.65   6.71   0.90
  Bachelor's degree or higher       109   .96    0.22    5.83   0.85
PTSD diagnosis
  Present                           118   .92    −0.28   6.33   0.85
  Absent                            85    .93    −0.16   6.31   0.78
Current MDE
  Present                           70    .93    −0.90   6.29   0.83
  Absent                            130   .95    0.25    6.21   0.86

Note. n = 203. ICC = intraclass correlation coefficient; PCL-C = Posttraumatic Stress Disorder Checklist–Civilian Version (for DSM-IV); PCL-5 = Posttraumatic Stress Disorder Checklist for DSM-5; PTSD = posttraumatic stress disorder; MDE = major depressive episode.
(a) In t tests between all subgroups, ps = .190–.812. (b) Probable PTSD defined as a PCL-5 score ≥ 33.
Figure 3. Crosswalk of corresponding Posttraumatic Stress Disorder Checklist–Civilian Version (for DSM-IV; PCL-C) and PCL for DSM-5 (PCL-5) Clusters D and E scores with 95% confidence intervals from 10,000 bootstrapped samples. Approximated PCL-C scores for Clusters D and E were computed by summing Items 8–12 (Cluster D) and Items 13–17 (Cluster E) of the PCL-C. DSM = Diagnostic and Statistical Manual of Mental Disorders (DSM-IV = fourth edition; DSM-5 = fifth edition).
The crosswalks for Cluster D and E subscores based on equipercentile equating with loglinear presmoothing are presented in Figure 3. Predicted cluster subscores were very strongly correlated with observed cluster subscores in the validation sample for all four clusters; the ICC values between observed and predicted subscale scores were .94 for Cluster B, .88 for Cluster C, .89 for Cluster D, and .91 for Cluster E.
Discussion
This is the first known study that attempted to equate scores between two versions of a frequently used PTSD symptom severity measure: the DSM-IV-based PCL-C and the DSM-5-based PCL-5. The resulting crosswalk enables researchers and clinicians to interpret and translate scores across the two measures, an important consideration in longitudinal observational and clinical treatment studies that cross iterations of the DSM. A particular strength of this study was the use of both training and validation samples, which allowed us to evaluate the accuracy of the crosswalk.
Supporting the validity of the crosswalk, results demonstrated a strong degree of concordance between observed and predicted PCL-5 scores (both total and cluster subscale scores) in the validation sample. Additionally, predicted PCL-5 scores performed comparably to observed PCL-5 scores when examining their agreement with PTSD diagnosis ascertained by clinical interview. Finally, the results suggest a similar degree of concordance between crosswalk-predicted and observed subscale scores and indicate that the metrics of crosswalk accuracy did not differ across subgroups.
We anticipate that the PCL crosswalk may be particularly useful for longitudinal research or for the interpretation of clinical data that have been collected over a time period spanning the use of both the DSM-IV and DSM-5. It may also allow for the combining of data sets from studies using different versions of the PCL, facilitating research that requires large sample sizes, such as gene association studies. Moreover, the availability of crosswalks for computing DSM-5 symptom cluster subscale scores will allow for further study of the association between specific domains of symptoms (e.g., avoidance, arousal) and risk factors or outcomes of interest. However, it should be noted that the evolution of the diagnostic criteria from DSM-IV to DSM-5 has led to some substantive differences in how the PTSD construct is defined in each version. The strong correlation between PCL-C and PCL-5 scores (r = .95) suggests that it was statistically appropriate to use test-equating procedures to link the scales. This strong association has been demonstrated in prior studies of the PCL-5 (e.g., Wortmann et al., 2016) and is consistent with other research suggesting a strong degree of overlap between the two DSM criteria sets (e.g., Kilpatrick et al., 2013). However, it should also be acknowledged that the resulting crosswalk cannot provide specific information about the elements of the PTSD construct that are new to DSM-5 and were not assessed in DSM-IV (i.e., distorted blame, reckless behavior), and it also does not address differences in the definition of a Criterion A traumatic event.
This study has a number of strengths for a test-equating design. We used a single-group design in which all participants completed both versions of the PCL, thus producing more reliable linking across measures. The sample was large and gender-balanced, and participants showed a wide degree of variation in PTSD symptom severity. However, the sample consisted solely of veterans serving in recent-era (i.e., after the September 11, 2001, terrorist attacks) combat operations in Afghanistan and Iraq. Although the crosswalk showed invariance to several demographic characteristics within the sample, it is not clear to what extent the results would generalize to civilian samples. We suggest caution in applying the crosswalk to these samples and encourage continued study of these results in other trauma-exposed samples. Additionally, it should be noted that the PCL-C and PCL-5 were administered in the same order for every participant, with the PCL-C administered first. Therefore, order effects may have influenced our results, and future research should examine this possibility using a counterbalanced design.
In this study, we present a crosswalk that will allow for conversion between PCL-C and PCL-5 symptom severity scores. The results provide support for the validity of the crosswalk within a veteran sample. This tool will allow researchers and clinicians to make use of archival PCL-C data in longitudinal research, clinical settings, and beyond.
References
Albano, A. D. (2016). Equate: An R package for observed-score linking and equating. Journal of Statistical Software, 74, 1–36. https://doi.org/10.18637/jss.v074.i08

American Psychiatric Association. (2000). Diagnostic and statistical manual of mental disorders (4th ed., text revision). Washington, DC: Author.

American Psychiatric Association. (2013). Diagnostic and statistical manual of mental disorders (5th ed.). Arlington, VA: American Psychiatric Publishing.

Blevins, C. A., Weathers, F. W., Davis, M. T., Witte, T. K., & Domino, J. L. (2015). The Posttraumatic Stress Disorder Checklist for DSM-5 (PCL-5): Development and initial psychometric evaluation. Journal of Traumatic Stress, 28, 489–498. https://doi.org/10.1002/jts.22059

Bovin, M. J., Marx, B. P., Weathers, F. W., Gallagher, M. W., Rodriguez, P., Schnurr, P. P., & Keane, T. M. (2015). Psychometric properties of the PTSD Checklist for Diagnostic and Statistical Manual of Mental Disorders–Fifth Edition (PCL-5) in veterans. Psychological Assessment, 28, 1379–1391. https://doi.org/10.1037/pas0000254

Choi, S. W., Schalet, B., Cook, K. F., & Cella, D. (2014). Establishing a common metric for depressive symptoms: Linking the BDI-II, CES-D, and PHQ-9 to PROMIS Depression. Psychological Assessment, 26, 513–527. https://doi.org/10.1037/a0035768

Dorans, N. J., Moses, T., & Eignor, D. E. (2010). Principles and practices of test score equating (ETS Research Report No. RR-10-29). Princeton, NJ: Educational Testing Service.

First, M. B., Williams, J. W., Karg, R. S., & Spitzer, R. L. (2015). Structured Clinical Interview for DSM-5–Research Version. Arlington, VA: American Psychiatric Association.

Keane, T. M., Rubin, A., Lachowicz, M., Brief, D. J., Enggasser, J., Roy, M., . . . Rosenbloom, D. (2014). Temporal stability of DSM-5 posttraumatic stress disorder criteria in a problem drinking sample. Psychological Assessment, 26, 1138–1145. https://doi.org/10.1037/a0037133

Kilpatrick, D. G., Resnick, H. S., Milanak, M. E., Miller, M. W., Keyes, K. M., & Friedman, M. J. (2013). National estimates of exposure to traumatic events and PTSD prevalence using DSM-IV and DSM-5 criteria. Journal of Traumatic Stress, 26, 537–547. https://doi.org/10.1002/jts.21848

Monsell, S. E., Dodge, H. H., Zhou, X. H., Bu, Y., Besser, L. M., Mock, C., . . . Weintraub, S. (2016). Results from the NACC uniform data set neuropsychological battery crosswalk. Alzheimer Disease and Associated Disorders, 30, 134–139. https://doi.org/10.1097/WAD.0000000000000111

Norris, F. H., & Hamblen, J. L. (2004). Standardized self-report measures of civilian trauma and PTSD. In J. P. Wilson, T. M. Keane, & T. Martin (Eds.), Assessing psychological trauma and PTSD (pp. 63–102). New York, NY: Guilford Press.

Rosellini, A. J., Stein, M. B., Colpe, L. J., Heeringa, S. G., Petukhova, M. V., Sampson, N. A., . . . Army STARRS Collaborators. (2015). Approximating a DSM-5 diagnosis of PTSD using DSM-IV criteria. Depression and Anxiety, 32, 493–501. https://doi.org/10.1002/da.22364

Weathers, F., Litz, B., Herman, D., Huska, J., & Keane, T. (1993, October). The PTSD Checklist (PCL): Reliability, validity, and diagnostic utility.
Paper presented at the Annual Convention of the International Society for Traumatic Stress Studies, San Antonio, TX.
Weathers, F. W., Litz, B. T., Keane, T. M., Palmieri, P. A., Marx, B. P., & Schnurr, P. P. (2013). The PTSD Checklist for DSM-5 (PCL-5). Scale available from the National Center for PTSD at www.ptsd.va.gov

Wortmann, J. H., Jordan, A. H., Weathers, F. W., Resick, P. A., Dondanville, K. A., Hall-Clark, B., . . . Litz, B. T. (2016). Psychometric analysis of the PTSD Checklist-5 (PCL-5) among treatment-seeking military service members. Psychological Assessment, 28, 1392–1403. https://doi.org/10.1037/pas0000260

  • 6. Implications (cont.)Communication & Coachingpersonality differences impact understanding between peopleCoaching skills should be part of developmental process at workDiversitydiversity in people increases creativity, innovationit also increases conflict… conflict resolution skills are essential for allValue and support diversity!! * Implications (cont.)Develop self-awarenessWho are you? How does your personality affect your work, and other people at work?What are your personality strengths?accept who you are, and then look for opportunities to develop * Journal of Traumatic Stress, Vol. 13, No. 2, 2000 Comparison of the PTSD Symptom Scale-Interview Version and the Clinician- Administered PTSD Scale Edna B. and David F. Tolin’ The Clinician-Administered PTSD Scale (CAPS) is one of the most frequently
  • 7. used measures of posttraumatic stress disorder (PTSD). It has been shown to be a reliable and valid measure, although its psychometric properties in nonveteran populations are not well known. One problem with the CAPS is its long assessment time. The PTSD Symptom Scale-Interview Version (PSS-I) is an alternative measure of PTSD severity, requiring less assessment time than the CAPS. Preliminary studies indicate that the PSS-I is reliable and valid in civilian trauma survivors. In the present study we compared the psychometric properties of the CAPS and the PSS-I in a sample of 64 civilian trauma survivors with and without PTSD. Participants were administered the CAPS, the PSS-I, and the Structured Clinical Interview for DSM-IV (SCID) by separate interviewers, and their responses were videotaped and rated by independent clinicians. Results indicated that the CAPS and the PSS-I showed high internal consistency, with no differences between the two measures. Interrater reliability was also high for both measures, with the PSS-I yielding a slightly higher coefficient. The CAPS and the PSS-I correlated strongly with each other and with the SCID. Although the CAPS had slightly higher specificity and the PSS-I had slightly higher sensitivity to PTSD, overall the CAPS and the PSS-I performed about equally well. These results suggest that the PSS-I can be used instead of the CAPS in the
  • 8. assessment of PTSD, thus decreasing assessment time without sacrificing reliability or validity. KEY WORDS: posttraumatic stress disorder; CAPS; PSS-I; SCID. Center for Treatment and Study of Anxiety, Department of Psychiatry, University of Pennsylvania, 3535 Market Street, 6th Floor, Philadelphia, Pennsylvania 19104. One of the most widely used measures of posttraumatic stress disorder (PTSD) is the Clinician-Administered PTSD Scale (CAPS; Blake et al., 1990), often referred to as the "gold-standard" measure for PTSD. The CAPS is a semistructured interview that measures the 17 symptoms of PTSD. Each symptom is assessed using two questions (for a total of 34 items): one measuring frequency of the symptom's occurrence, and the other, its intensity (e.g., distress or functional impairment). To ascertain validity of response, each question is followed by a number
  • 9. of probe questions that aim at clarifying the frequency and intensity of the symptom. CAPS responses are used not only for making a dichotomous PTSD diagnosis, but also for quantifying the severity of PTSD. The CAPS was originally developed for use with combat veterans and most studies of its psychometric properties have used this population (e.g., Blake et al., 1990). More recently, to our knowledge only one study (Blanchard et al., 1995) has examined the reliability of the CAPS in civilian populations, yielding high to very high reliability coefficients. Hovens et al. (1994) found high reliability and moderate validity coefficients using a Dutch-language version of the CAPS. However, that sample contained both civilians and combat veterans; therefore, it is difficult to determine whether the same results would apply to a civilian sample. Although the CAPS has excellent psychometric properties, as noted by Newman, Kaloupek, and Keane (1996), its major drawback is the substantial amount of time required for its administration due to its large number of items. Depending on the interviewee's symptom picture, administration of the CAPS can take 40 to 60 min. One potential alternative to the CAPS is the PTSD Symptom Scale-Interview Version (PSS-I; Foa, Riggs, Dancu, & Rothbaum, 1993). The PSS-I is a
  • 10. semistructured interview that consists of 17 items, corresponding to the 17 symptoms of PTSD. Unlike the CAPS, frequency and intensity of symptoms are combined on the PSS-I into a single rater estimate of severity. The reason for combining these two dimensions is that some symptoms lend themselves more readily to frequency estimates (e.g., nightmares) whereas others are more readily described in terms of intensity (e.g., hypervigilance). Excellent reliability and validity have been found for the PSS-I using female victims of rape and nonsexual assault (Foa et al., 1993). Because the PSS-I consists of only 17 items (compared to the CAPS's 34), its administration time is relatively short, approximately 20 to 30 min. The purpose of the present study was to compare the psychometric properties of the CAPS and the PSS-I using a sample of individuals with and without PTSD who had experienced a variety of traumatic events. We administered the two interviews and compared the resulting diagnostic status and symptom severity to one another and to that yielded by the Structured Clinical Interview for DSM-IV (SCID; First, Spitzer, Gibbon, & Williams, 1995). If the CAPS and the PSS-I show similar reliability and validity to each other, then the PSS-I may be a useful alternative to the CAPS when resources are limited.
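Neither paper includes code, but the structural difference described above (17 symptoms rated twice on the CAPS versus once on the PSS-I) can be illustrated with a short Python sketch. This is a hypothetical illustration, not either instrument's published scoring routine; it assumes CAPS frequency and intensity ratings of 0-4 per symptom and a single 0-3 rating per PSS-I item.

```python
# Minimal sketch (assumed structure, not the authors' code): total severity scoring
# for the two interviews. CAPS rates each symptom on separate frequency and
# intensity scales; the PSS-I uses a single rating per symptom.
from typing import Dict, Tuple

def caps_total_severity(ratings: Dict[str, Tuple[int, int]]) -> int:
    """ratings maps symptom name -> (frequency, intensity), each assumed 0-4."""
    return sum(freq + inten for freq, inten in ratings.values())

def pssi_total_severity(ratings: Dict[str, int]) -> int:
    """ratings maps symptom name -> single 0-3 severity rating."""
    return sum(ratings.values())

# Hypothetical example with three of the 17 symptoms:
caps = {"intrusions": (3, 2), "nightmares": (1, 1), "hypervigilance": (2, 3)}
pssi = {"intrusions": 2, "nightmares": 1, "hypervigilance": 3}
print(caps_total_severity(caps), pssi_total_severity(pssi))
```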
  • 11. Method. Participants. Participants were a convenience sample of 12 clinic patients and 52 nonclinical adult volunteers (total = 64), recruited from a relatively heterogeneous community sample in the greater Philadelphia area. The clinic patients were receiving outpatient treatment for PTSD; the remainder responded to advertisements and requests for volunteers at community presentations. All participants were reimbursed $30 for their participation. Fifty-three percent of the participants were female, and 47% were male. Mean age was 37 years (SD = 10). Fifty-two percent were Caucasian, 39% were African American, 3% were Hispanic, 5% were Asian American, and 1% were other ethnicity. All participants reported experiencing a traumatic incident that met Criterion A of the DSM-IV (American Psychiatric Association, 1994) PTSD diagnosis. The sample included a heterogeneous range of traumatic experiences, with percentages as follows: rape 18%, other sexual assault 8%, nonsexual assault 32%, fire/explosion 11%, accident 14%, and other trauma 17%. None
  • 12. of the participants were combat veterans. Measures. PSS-I (Foa et al., 1993). The PSS-I is a semistructured interview designed to assess current symptoms of PTSD as defined by DSM-IV (American Psychiatric Association, 1994) criteria. The PSS-I consists of 17 items corresponding to the 17 symptoms of PTSD, and yields a total PTSD severity score as well as reexperiencing, avoidance, and arousal subscores. Each item consists of one brief question. The participant's answer is rated by the interviewer from 0 (Not at all) to 3 (5 or more times per week/Very much). Total severity scores on the PSS-I are based on sums of the raw items. Symptoms measured by the PSS-I are considered present if they are rated as 1 (Once per week or less/A little) or greater. Factor analysis of the PSS-I yielded three factors: avoidance/arousal, numbing, and intrusion (Foa, Riggs, & Gershuny, 1995). Internal consistency coefficients for the PSS-I subscales range from .65 to .71 in a sample of female sexual and nonsexual assault victims. Test-retest reliabilities range from .66 to .77 over a 1-month period. Interrater reliabilities range from .93 to .95. The PSS-I shows good concurrent validity, as indicated by significant correlations with measures of
  • 13. PTSD symptoms, depression, and general anxiety (Foa et al., 1993). CAPS (Blake et al., 1990). The CAPS is a semistructured interview designed to measure symptoms of PTSD according to DSM-III-R (American Psychiatric Association, 1987) criteria. The CAPS has 34 symptom-oriented items, each rated on a 5-point scale, which correspond to the 17 symptoms of PTSD. The CAPS yields two total scores, one for frequency and one for intensity, as well as two subscores for each of the reexperiencing, avoidance, and arousal subscales. The anchor points of the scales vary according to symptom, but higher numbers consistently indicate either higher frequency or intensity of the symptom. In addition to having separate ratings of frequency and intensity, the CAPS differs from the PSS-I in that it includes questions to be used as prompts if the assessor needs further clarification. The CAPS also can be used to assess both lifetime and current PTSD symptomatology; however, for the purposes of the present study only current symptoms were assessed. Previous research indicates that the CAPS shows excellent interrater
  • 14. reliability (r = .92 to .99) for all three subscales in combat veterans. Internal consistency coefficients range from .73 to .85. The CAPS shows good concurrent validity, as indicated by significant correlations with self-report measures of PTSD symptoms (Blake et al., 1990). Thus, the CAPS appears to be a reliable and valid measure. Partly because of the complexity inherent in obtaining separate scores for frequency and intensity, several scoring rules have been proposed for the CAPS (Blanchard et al., 1995; Weathers, Ruscio, & Keane, 1999). With motor vehicle accident victims, Blanchard et al. (1995) used three scoring rules: a liberal rule requiring a score of at least 2 as the sum of the frequency and intensity ratings for a given item; a moderate rule requiring a score of 3; and a conservative rule requiring a score of 4. As expected, rates of PTSD were highest using the liberal rule, and lowest using the conservative rule. With combat veterans, Weathers et al. (1999) examined nine different rationally and empirically derived scoring rules for the CAPS. Three scoring rules were particularly recommended: the "F1/I2" rule (liberal rule) required a frequency score of at least 1 and an intensity score of at least 2 for each item. This rule was recommended for screening purposes to avoid false negatives. When false positives and false negatives are equally undesirable (e.g., differential diagnoses), the "SCID
  • 15. Symptom-Calibrated (SXCAL)" rule was recommended. The SXCAL rule uses the optimally efficient severity-score cutoff for each item for predicting the presence or absence of the corresponding PTSD symptom on the SCID (Weathers et al., 1999). When false positives need to be minimized (e.g., confirming a diagnosis), the conservative "Clinician-Rated 60" scoring was recommended. Accordingly, a symptom is considered present if the combination of frequency and intensity for that item was rated as present by at least 60% of a sample of 25 expert clinicians (Weathers et al., 1999). This resulted in different cutoff scores for each CAPS item. SCID (First et al., 1995). The SCID is a structured interview measuring DSM-IV (American Psychiatric Association, 1994) symptoms of PTSD. The SCID diagnosis of PTSD showed acceptable agreement with indexes obtained from previously validated assessment instruments included in the National Vietnam Veterans Readjustment Study (Kulka, Schlenger, Jordan, & Hough, 1988), and was identified previously as an instrument of choice in the assessment of rape-related PTSD (Resnick, Kilpatrick, & Lipovsky, 1991).
  • 16. On the SCID, each symptom is assessed using one question, and the interviewer rates each symptom on a 3-point scale: absent or false, subthreshold, and threshold or true. Symptoms are considered present if they are assigned the latter rating. Procedure. Thirty-nine participants were interviewed by two clinicians. The first interviewer queried the participant about trauma history and assisted the participant in identifying a single traumatic event that would be the focus of the interview. Participants reporting more than one traumatic event were instructed to select the most bothersome incident for this interview. Participants were also instructed to refer to the same traumatic event for all interviews, and reviews of videotapes indicated that all participants complied with this instruction. One interviewer used the CAPS and the other, the PSS-I. The order of administering the two instruments as well as which instrument would be used by which clinician were each determined randomly. Over the course of the study, 22 clinicians conducted the interviews. Clinicians were instructed not to discuss a participant's interview with one another until all interview data had been collected for that individual.
  • 17. All interviews were videotaped. The videotapes were reviewed by at least two raters who did not have access to the interviewers' ratings. These raters scored the CAPS and the PSS-I on the basis of the participant's responses in the videotapes; later, these ratings were compared to those of the interviewer. To assess convergent validity with the SCID, an additional 25 participants were administered the CAPS and the PSS-I as described above as well as the PTSD module of the SCID; the latter was administered by a third clinician. The order of the three interviews and the assignment of the clinician-interviewer were determined randomly. All interviewers and raters were doctoral or master's level clinicians who were trained in the use of both instruments by the instruments' developers (Dr. Edna Foa for the PSS-I and Dr. Frank Weathers for the CAPS). To ensure standard administration and scoring, interviewers and raters met weekly to review the interviews, ascertain adherence to interview procedures, and resolve scoring discrepancies. Results. Kolmogorov-Smirnov tests of the distribution of scores on the PSS-I and CAPS indicated that scores were not normally distributed. Therefore, nonparametric statistics were used wherever possible.
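The choice of nonparametric statistics follows from the distribution check just mentioned. The sketch below, which uses simulated scores rather than the study data and is not the authors' code, shows one assumed way such a check and fallback might be coded.

```python
# Minimal sketch (assumed workflow): test whether total scores look normally
# distributed and fall back to rank-based statistics if not.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
pssi_totals = rng.integers(0, 52, size=64)   # hypothetical PSS-I totals (0-51)
caps_totals = rng.integers(0, 137, size=64)  # hypothetical CAPS totals (0-136)

def looks_normal(x, alpha=0.05):
    # One-sample K-S test against a normal fitted to the data (approximate;
    # a Lilliefors or Shapiro-Wilk test would be a stricter check).
    stat, p = stats.kstest(x, 'norm', args=(x.mean(), x.std(ddof=1)))
    return p > alpha

if looks_normal(pssi_totals) and looks_normal(caps_totals):
    r, p = stats.pearsonr(pssi_totals, caps_totals)
else:
    r, p = stats.spearmanr(pssi_totals, caps_totals)
print(round(float(r), 2), round(float(p), 3))
```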
  • 18. Reliability of the PSS-I and the CAPS. Internal consistency. Cronbach's alpha was calculated on PSS-I and CAPS total scores and subscale scores. Because the CAPS includes two items per symptom (frequency and intensity) and the PSS-I includes only one item, we used a dichotomous coding of each item to indicate its presence or absence. By doing so, we controlled for the different number of items. Alpha coefficients for the PSS-I and the CAPS are shown in
  • 19. Table 1.

Table 1. Cronbach's Alpha Coefficients for the PSS-I and the CAPS
                              PSS-I                         CAPS
                              No. of Items    Alpha         No. of Items    Alpha
Total score                   17              .86           34              .88
Reexperiencing subscale        5              .70           10              .70
Avoidance subscale             7              .74           14              .76
Arousal subscale               5              .65           10              .71
Note. PSS-I = PTSD Symptom Scale-Interview Version; CAPS = Clinician-Administered PTSD Scale.

Internal consistency was good to very good for all scales and subscales of both the PSS-I and the CAPS, with the alpha coefficient ranging from .70 to .88 for the CAPS and from .65 to .86 for the PSS-I. Thus, the internal consistency of the PSS-I and the CAPS were comparable. To further examine internal consistency, we correlated each item's raw score with the total score. The average item-total correlation for the PSS-I was .59, with correlations ranging from .11 to .74. For the CAPS, the average item-total correlation was .52, with a range of .21 to .68. On both interviews, the item reflecting the symptom of "inability to recall an important aspect of the trauma" showed low correlations with the total score (on the PSS-I, ρ(63) = .11, p = .39; on the CAPS, ρ(63) = .21, p = .09). Thus, on this index of internal consistency, the CAPS and the PSS-I were again quite similar. The correlations among the three symptom clusters and the total severity scores for the CAPS and the PSS-I are presented in Table 2. The intercorrelations among subscales for each instrument were moderate to high and the overall picture was again quite similar. Interviewer-rater reliability. Interviewer-rater reliability was calculated by comparing the interviewer's ratings to those of the videotape
  • 20. raters. Because there were several raters and one interviewer for each instrument, reliability coefficients were calculated as follows: First, each videotape rater was assigned a number (1-4). Next, Spearman correlation coefficients were calculated between the interviewer and rater 1, the interviewer and rater 2, and so on. The resulting coefficients were translated into Fisher's z scores (Rosenthal & Rosnow, 1984) and averaged. Then, the average z score was translated back to ρ to yield a single interrater reliability coefficient.

Table 2. Spearman Correlations Among the Subscales of the PSS-I and the CAPS
Subscale             Total Score    Reexperiencing    Avoidance
PSS-I
  Reexperiencing     .82*
  Avoidance          .92*           .63*
  Arousal            .88*           .63*              .71*
CAPS
  Reexperiencing     .87*
  Avoidance          .90*           .68*
  Arousal            .88*           .67*              .70*
* p < .001.
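The Fisher's z averaging procedure described above can be reproduced in a few lines. The following Python sketch uses made-up data and is not the authors' code; it assumes the interviewer's totals and each videotape rater's totals are aligned arrays over the same participants.

```python
# Minimal sketch (assumed from the description above): pool several
# interviewer-vs-rater Spearman coefficients via Fisher's r-to-z transform.
import numpy as np
from scipy.stats import spearmanr

def pooled_interrater_rho(interviewer_scores, rater_score_sets):
    """interviewer_scores: 1-D array of totals; rater_score_sets: list of 1-D
    arrays, one per videotape rater, aligned to the same participants."""
    zs = []
    for rater_scores in rater_score_sets:
        rho, _ = spearmanr(interviewer_scores, rater_scores)
        zs.append(np.arctanh(rho))       # Fisher's r-to-z transform
    return np.tanh(np.mean(zs))          # back-transform the mean z to rho

# Hypothetical data: one interviewer, two videotape raters, five participants.
interviewer = np.array([10, 22, 35, 18, 40])
raters = [np.array([11, 20, 34, 19, 41]), np.array([9, 24, 33, 17, 39])]
print(round(pooled_interrater_rho(interviewer, raters), 2))
```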
  • 21. Percentage of rater agreement for the presence or absence of each symptom was calculated by averaging the agreement of each videotape rater with that of the interviewer. Rater agreement for the CAPS was calculated using the F1/I2 rule (Weathers et al., 1999), since this was the original scoring rule reported by Blake et al. (1990). Using other scoring rules for the CAPS did not change interrater reliability significantly. Table 3 presents the reliability coefficients of the total scores and for each subscale, as well as the percentage of rater agreement on the presence or absence of each symptom cluster and PTSD diagnosis.

Table 3. Interviewer-Rater Reliability Coefficients and Percentage Agreement for the PSS-I and the CAPS
                               PSS-I                        CAPS
                               ρ       % Agreement          ρ       % Agreement
Reexperiencing subscale        .93*    99.2                 .89*    92.5
Avoidance subscale             .91*    97.5                 .86*    88.5
Arousal subscale               .92*    94.2                 .81*    93.4
Total score/PTSD diagnosis     .93*    98.3                 .95*    86.6

As can be seen in Table 3, both the CAPS and the PSS-I showed excellent interviewer-rater reliability. There were no substantial differences between the two measures, although
  • 22. the PSS-I showed consistently higher rates of agreement between raters for both the correlations and percentage agreements. Validity of the PSS-I and the CAPS. Frequency of PTSD diagnosis. Thirty (46%) of the participants met diagnostic criteria for PTSD according to the PSS-I. Rates of PTSD with the CAPS varied

Table 4. Diagnostic Agreement Between the CAPS and the PSS-I
CAPS Scoring Rule            % Agreement    Kappa
Liberal (Weathers)           83             .65
Moderate (Weathers)          78             .55
Conservative (Weathers)      70             .38
Liberal (Blanchard)          86             .72
Moderate (Blanchard)         84             .68
Conservative (Blanchard)     80             .58
Note. Blanchard = Blanchard et al. (1995); Weathers = Weathers et al. (1999).

Table 5. Correlations Between the Subscales of the CAPS and the PSS-I
  • 23.
                               CAPS
PSS-I                          Total Score    Reexperiencing Subscale    Avoidance Subscale    Arousal Subscale
Total score                    .87*           .76*                       .74*                  .76*
Avoidance subscale             .75*           .55*                       .75*                  .64*
Arousal subscale               .77*           .64*                       .63*                  .78*
Reexperiencing subscale        .76*           .79*                       .57*                  .64*
Note. Correlation coefficients between scales measuring the same symptoms on both interviews are italicized. * p < .001.

according to the scoring rule used. Using the Blanchard et al. (1995) diagnostic rules, 33 (51%) were diagnosed with PTSD with the liberal rule, 28 (43%) with the moderate rule, and 21 (32%) with the conservative rule. Rates of PTSD diagnosis on the CAPS also varied across the different scoring rules described by Weathers et al. (1999). Using the liberal rule, 23 (35%) were diagnosed with PTSD; 20 (31%) with the moderate rule; and 11 (17%) with the conservative rule. Thus, PTSD rates yielded by the PSS-I were similar to those obtained with the Blanchard et al. moderate scoring rule. Both the Blanchard et al. and the PSS-I rates were somewhat higher than those emerging from the Weathers et al. rules.
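The Blanchard et al. sum rules and the Weathers et al. F1/I2 rule lend themselves to a compact implementation. The following Python sketch is illustrative only and is not the authors' code: it scores item-level presence under those rules and then applies the DSM-IV symptom-count pattern (at least one reexperiencing, three avoidance/numbing, and two arousal symptoms); the SXCAL and Clinician-Rated 60 rules, and Criteria A, E, and F, are omitted.

```python
# Minimal sketch (assumptions noted in comments): item-level presence under the
# scoring rules discussed above, then a DSM-IV-style symptom count.
def symptom_present(freq, intensity, rule="F1/I2"):
    if rule == "F1/I2":                  # Weathers et al. liberal rule
        return freq >= 1 and intensity >= 2
    if rule in (2, 3, 4):                # Blanchard et al. sum rules
        return freq + intensity >= rule  # liberal = 2, moderate = 3, conservative = 4
    raise ValueError("unknown rule")

def meets_dsm_iv_pattern(item_scores, rule="F1/I2"):
    """item_scores maps cluster ('B', 'C', 'D') to a list of (frequency,
    intensity) tuples. Applies the DSM-IV requirement of at least 1 B, 3 C,
    and 2 D symptoms; Criteria A, E, and F must be checked separately."""
    counts = {c: sum(symptom_present(f, i, rule) for f, i in items)
              for c, items in item_scores.items()}
    return counts["B"] >= 1 and counts["C"] >= 3 and counts["D"] >= 2

# Hypothetical ratings for one participant (not all 17 items shown):
scores = {"B": [(2, 3), (0, 1), (1, 2)],
          "C": [(1, 2), (2, 2), (1, 3), (0, 0), (2, 2), (1, 2), (0, 1)],
          "D": [(2, 3), (1, 2), (0, 1), (1, 1), (2, 2)]}
print(meets_dsm_iv_pattern(scores), meets_dsm_iv_pattern(scores, rule=4))
```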
  • 24. Concurrent validity. A high correlation of ρ = .87 (p < .001) was found between the CAPS and the PSS-I for the total score. Agreement across the two measures on PTSD diagnosis varied according to the CAPS scoring rule used (see Table 4). Table 5 displays the Spearman correlations between the interview scales. Convergent validity. To assess convergent validity, CAPS and PSS-I scores were compared to the PTSD section of the SCID. Spearman correlation coefficients indicated that the SCID total score correlated strongly with the CAPS total score, ρ(23) = .83, p < .001, and with the PSS-I total score, ρ(23) = .73, p < .001. To examine whether the correlation between SCID and CAPS total scores was greater than the correlation between SCID and PSS-I total scores, a Hotelling's t test was performed. Results were not significant: t(24) = 1.68, p > .05.

Table 6. Agreement Between the SCID and the CAPS and the PSS-I
                        CAPS                                                                                    PSS-I
                        Liberal Scoring Rule       Moderate Scoring Rule       Conservative Scoring Rule        Standard
SCID Subscale           Blanchard    Weathers      Blanchard    Weathers       Blanchard    Weathers            Scoring Rule
Total Score
  % Agreement           80           84            80           88             88           84                  80
  Sensitivity           0.86         0.71          0.71         0.71           0.71         0.43                0.86
  Specificity           0.78         0.89          0.83         0.94           0.94         1.00                0.78
  Kappa                 .56          .60           .52          .69            .69          .51                 .56
Reexperiencing
  % Agreement           84           80            84           84             80           56                  92
  Sensitivity           0.85         0.80          0.85         0.85           0.80         0.45                0.90
  Specificity           0.80         0.80          0.80         0.80           0.80         1.00                1.00
  Kappa                 .57          .49           .57          .57            .49          .25                 .78
Avoidance
  % Agreement           80           84            80           84             88           88                  80
  Sensitivity           0.88         0.75          0.75         0.62           0.75         0.62                0.88
  Specificity           0.76         0.88          0.82         0.94           0.94         1.00                0.76
  Kappa                 .58          .63           .56          .61            .71          .69                 .58
Arousal
  % Agreement           64           84            68           72             80           72                  76
  Sensitivity           1.00         1.00          1.00         1.00           1.00         0.50                1.00
  Specificity           0.31         0.69          0.39         0.46           0.62         0.92                0.54
  Kappa                 .30          .68           .38          .45            .61          .43                 .53
Notes. Blanchard = scoring rule from Blanchard et al. (1995); Weathers = scoring rule from Weathers et al. (1999). Percent agreements are calculated to reflect whether participants met or exceeded the symptom count for the DSM-IV diagnosis.

When data were analyzed according to the presence or absence of symptoms rather than a continuous score, the results varied according to the scoring rule used. As shown in Table 6, both the PSS-I and the CAPS showed moderate to strong agreement with the SCID. The PSS-I showed somewhat higher sensitivity, whereas the CAPS showed somewhat higher specificity, especially using more conservative scoring rules. On both the CAPS and the PSS-I, the arousal subscales showed high sensitivity but relatively low specificity with the SCID. Given the strong agreement between the PSS-I and CAPS on the arousal subscale (r = .78), the low specificity may reflect a psychometric weakness of the SCID rather than of the two instruments in question. Overall, however, the CAPS and the PSS-I performed quite similarly in relation to the SCID. Interview duration. Precise interview times were available for 42 sets of interviews. Mean time to complete the PSS-I was 21.96 min (SD = 11.51), and mean time to complete the CAPS was 32.75 min (SD = 15.94). The CAPS was found to take significantly longer than the PSS-I to administer, t(41) = 5.93, p < .001, Cohen's d = 0.78. When we sampled only those patients with PTSD (as
  • 27. indicated by the PSS-I; n = 16), the CAPS still took significantly longer (M = 42.76, SD = 10.74) than did the PSS-I (M = 28.69, SD = 9.92), t(15) = 4.64, p < .001, and the effect was greater than before (Cohen's d = 1.36). Thus, the PSS-I appears to be a briefer instrument than the CAPS, and this is particularly true for interviewees reporting significant PTSD symptoms. Discussion. Results of the present study suggest that the PSS-I compares favorably to the CAPS, as evidenced by internal consistency, item-total correlations, intersubscale correlations, and interviewer-rater agreement. In terms of validity, the total score and subscale scores of the PSS-I correlate strongly with the corresponding scores on the CAPS. When the PSS-I and the CAPS are used to predict PTSD diagnosis according to the SCID, both the PSS-I and the CAPS show moderately strong agreement with the SCID. Results for the CAPS vary according to the scoring rule used; however, in general, it appears that the PSS-I may have slightly higher sensitivity, whereas the CAPS may have slightly higher specificity. Thus, the PSS-I may have a small advantage in detecting actual PTSD, whereas
  • 28. the CAPS’S advantage may be in ruling out false positives. However, it should be emphasized that differences between the CAPS and the PSS-I were relatively small compared to their similarities. Limitations of the present study include a relatively small sample size, com- pared to the large numbers of participants to whom the CAPS has been administered (e.g., Weathers et al., 1999). The present study examined only civilian trauma vic- tims, and thus the obtained results may not generalize to combat veterans. We did not collect data on the test-retest stability of either the CAPS or the PSS-I; such data would shed more light on the comparability of the two interviews. Finally, although interviewers were trained in both the CAPS and the PSS-I, because of the institution where the study was conducted (MCP Hahnemann University), most of the interviewers were more familiar with the PSS-I. Additional studies using interviewers who are equally familiar with the CAPS and the PSS-I would help to clarify this issue. Because the two instruments show such similar internal consistency, inter- viewer-rater reliability, and validity, the PSS-I may be a useful alternative to the CAPS. In this study, the PSS-I took significantly less time to administer, with no appreciable loss of psychometric strength. Thus, when time
  • 29. and/or financial resources are limited, the PSS-I may be the interview method of choice for the assessment of PTSD. References. American Psychiatric Association (1987). Diagnostic and statistical manual of mental disorders (3rd ed., rev.). Washington, DC: Author. American Psychiatric Association (1994). Diagnostic and statistical manual of mental disorders (4th ed.). Washington, DC: Author. Blake, D. D., Weathers, F. W., Nagy, L. M., Kaloupek, D. G., Klauminzer, G., Charney, D. S., & Keane, T. M. (1990). A clinician rating scale for assessing current and lifetime PTSD: The CAPS-I. Behavior Therapist, 13, 187-188. Blanchard, E. B., Hickling, E. J., Taylor, A. E., Forneris, C. A., Loos, W., & Jaccard, J. (1995). Effects of varying scoring rules of the Clinician-Administered PTSD Scale (CAPS) for the diagnosis of posttraumatic stress disorder in motor vehicle accident victims. Behaviour Research and Therapy, 33, 471-475. First, M. B., Spitzer, R. L., Gibbon, M., & Williams, J. B. W. (1995). Structured clinical interview for DSM-IV Axis I disorders-Patient edition (SCID-I/P), version
  • 30. 2.0). New York: Biometrics Research Department. Foa, E. B., Riggs, D. S., Dancu, C. V., & Rothbaum, B. O. (1993). Reliability and validity of a brief instrument for assessing posttraumatic stress disorder. Journal of Traumatic Stress, 6, 459-473. Foa, E. B., Riggs, D. S., & Gershuny, B. S. (1995). Arousal, numbing, and intrusion: Symptom structure of PTSD following assault. American Journal of Psychiatry, 152, 116-120. Hovens, J. E., van der Ploeg, H. M., Klaarenbeek, M. T. A., Bramsen, I., Schreuder, J. N., & Rivero, V. V. (1994). The assessment of posttraumatic stress disorder with the Clinician Administered PTSD Scale: Dutch results. Journal of Clinical Psychology, 50, 325-340. Kulka, R. A., Schlenger, W. E., Jordan, B. K., & Hough, R. L. (1988, October). Preliminary survey findings of the National Vietnam Veterans' Readjustment Study. Symposium presented at the 4th annual meeting of the International Society for Traumatic Stress Studies, Dallas. Newman, E., Kaloupek, D. G., & Keane, T. M. (1996). Assessment of posttraumatic stress disorder in clinical and research settings. In B. A. van der Kolk, A. C. McFarlane, & L. Weisaeth (Eds.), Traumatic stress: The effects of overwhelming experience on mind, body, and society (pp. 242-273). New York: Guilford Press. Resnick, H. S., Kilpatrick, D. G., & Lipovsky, J. A. (1991).
  • 31. Assessment of rape-related posttraumatic stress disorder: Stressor and symptom dimensions. Psychological Assessment, 3, 561-572. Rosenthal, R., & Rosnow, R. L. (1984). Essentials of behavioral research: Methods and data analysis (2nd ed.). New York: McGraw-Hill. Weathers, F. W., Ruscio, A. M., & Keane, T. M. (1999). Psychometric properties of nine scoring rules for the Clinician-Administered PTSD Scale (CAPS). Psychological Assessment, 11, 124-133. Journal of Traumatic Stress, October 2015, 28, 480–483. BRIEF REPORT. Comparison of the PTSD Checklist (PCL) Administered via a Mobile Device Relative to a Paper Form. Matthew Price,1 Eric Kuhn,2 Julia E. Hoffman,2,3 Josef Ruzek,2 and Ron Acierno4,5 1Department of Psychological Science, University of Vermont, Burlington, Vermont, USA 2National Center for PTSD, Dissemination and Training Division, Department of Veterans Affairs Palo Alto Health Care System, Palo Alto, California, USA 3Center for Healthcare Evaluation, Department of Veterans
  • 32. Affairs Palo Alto Healthcare System, Palo Alto, California, USA 4Ralph H. Johnson Veterans Affairs Medical Center, Charleston, South Carolina 5Medical University of South Carolina, Charleston, South Carolina, USA Mobile devices are increasingly used to administer self-report measures of mental health symptoms. There are significant differences, however, in the way that information is presented on mobile devices compared to the traditional paper forms that were used to administer such measures. Such differences may systematically alter responses. The present study evaluated if and how responses differed for a self-report measure, the PTSD Checklist (PCL), administered via mobile device relative to paper and pencil. Participants were 153 trauma-exposed individuals who completed counterbalanced administrations of the PCL on a mobile device and on paper. PCL total scores (d = 0.07) and item responses did not meaningfully or significantly differ across administrations. Power was sufficient to detect a difference in total score between administrations of 3.46 points (determined from prior work), corresponding to d = 0.23. The magnitude of differences between administration formats was unrelated to prior use of mobile devices or participant age. These findings suggest that responses to self-report measures administered via mobile device are equivalent to those obtained via paper and they can be used with experienced as well as naïve users of mobile devices.
  • 33. Mobile devices can advance traumatic stress research and treatment (Luxton et al., 2011; Price et al., 2014) through the collection of ecologically valid data (Shiffman, Stone, & Hufford, 2008). Use of mobile devices requires that responses to mobile-administered measures are equivalent to responses from paper measures. This assumption is open to empirical investigation and should be evaluated to ensure mobile devices provide valid and reliable measurements. Mobile devices systematically change the administration of self-report measures. When delivered via paper, items are displayed in an array that allows all responses to be viewed simultaneously such that initial responses may influence subsequent answers (Richman, Kiesler, Weisband, & Drasgow, 1999). Alternatively, mobile devices typically display a single item per screen. Administration of individual items may focus attention towards item content resulting in systematically different responses. The present study examined if responses to a self-report measure, the PTSD Checklist (PCL; Weathers et al., 2013), administered via mobile device differed from paper administration. The PCL has been extensively validated as a measure of PTSD symptoms across diverse samples (Ruggiero, Ben, Scotti, & Rabalais, 2003). A standardized paper version of the PCL is available via request from the National Center for PTSD (NCPTSD). The PCL is available in a standardized format for mobile devices as part of the PE Coach mobile application (Reger et al., 2013). It was hypothesized that PCL total
  • 34. scores and item responses would be comparable across mobile and paper administrations, due to prior evidence that suggested minimal differences between standardized tests administered via paper and computer (Bush et al., 2013; Campbell et al., 1999; Finger & Ones, 1999). Method. Participants. Participants, aged M = 32.34 years (SD = 14.42), were 153 individuals recruited from a Level 1 trauma center (n = 22, 14.3%), a Veterans Affairs medical center outpatient mental health service (VAMC; n = 38, 24.7%), an outpatient clinic for trauma victims (n = 6, 3.9%), and the community (n = 87, 57.1%). Descriptive information is presented in Table 1.

Table 1. Descriptive Information Unadjusted for Time Between PCL Administrations
Variable                              n      %
Location
  Veterans Affairs Medical Center     38     24.7
    Female                             8     21.1
  Community                           87     57.1
    Female                            65     73.9
  Outpatient clinic                    6      3.9
    Female                             6    100.0
  Trauma center                       22     14.3
    Female                             6     27.3
PTSD diagnosis                        62     40.3
Own smartphone                       118     76.6
Use e-mail on phone                  117     76.0
Use apps on phone                    113     73.4
Use games on phone                    97     63.0
Use Internet on phone                122     79.2
Note. N = 153. PCL = Posttraumatic Stress Checklist.

Measures and Procedure. The Posttraumatic Checklist-Civilian Version (PCL-C; Weathers, Litz, Huska, & Keane, 1994) is a 17-item self-report measure that assesses PTSD symptom severity. Symptoms are rated on a 5-point Likert-type scale, ranging from 1 = not at all to 5 = extremely, for the past month. Internal consistency for the current study was excellent with α = .95 for both administrations. The measure was administered twice, once via the paper form available from the NCPTSD (Weathers, Litz, Huska, & Keane, 2003) and once via PE Coach. The Life Events Checklist (LEC; Weathers et al., 2013) is a 17-item self-report measure assessing trauma exposure. Use of Internet and mobile devices was assessed with questions adapted from a survey from the Pew Internet and American Life Project (2012). Questions assessed whether various tasks were regularly completed on smartphones and mobile devices using a yes/no format (e.g., "Do you regularly check e-mail on your smartphone?"). Medical records were used to confirm trauma exposure for Level-1 trauma, VAMC, and outpatient clinic participants. A
  • 36. diagnosis of PTSD was the indicator of trauma exposure for VAMC and outpatient clinic participants whereas the presenting trauma was used for Level-1 trauma center participants. Com- munity participants were screened with the LEC to determine if they experienced or witnessed a traumatic event. Follow - up questions confirmed the validity of the Criterion A event. The community sample was administered the PTSD module of the Structured Clinical Interview for the DSM-IV by trained research staff for the most stressful event identified by the LEC (SCID; First, Spitzer, Gibbon, & Williams, 2002). No other modules of the SCID were administered. Participants completed the PCL on an iPod Touch (4th gen- eration, 3.5′′ screen) and on paper with a 35-minute (Med = 35, interquartile range: 25) interval between administrations. After the second administration, participants completed the use of Internet and mobile devices survey, and demographics questionnaire. Participants from the community were also given the PTSD module from the SCID and 27% met criteria for PTSD. Interviews were administered by trained research assis- tants and audio recorded. Interviews were double coded from the recording by a clinical psychologist with 100% diagnos- tic agreement. The order in which mobile and paper versions were administered was counterbalanced using a randomization sequence. Randomization occurred in blocks of 10 and each data collection site was allocated 10 blocks. Institutional re - view boards of the agencies where this research was conducted approved all procedures and all participants consented to the study. Data Analysis Using the guidelines of Bland and Altman (1986), a clini- cally meaningful margin of error between the two methods of measurement of 3.46 was established (see Supplemental
  • 37. Table 1) from nine prior studies where the PCL was administered repeatedly. A difference score between the total scores for both administrations was obtained by subtracting mobile device scores from paper scores. Comparisons were made with repeated-measures analysis of covariance in which length between administrations was used as a covariate. The mean of the distribution of difference scores was calculated with the 95% confidence interval (CI). If the 95% CI of the difference scores was within the clinically meaningful margin of error then the two methods were considered interchangeable. A margin of error of 1.00 was used for differences between individual items. Bivariate correlations between both measure administrations and intraclass correlation coefficients (ICC) were also computed. One participant declined to answer questions about use of a mobile device after reporting they did not own a smartphone. There were no missing data on the PCL administrations. Results. Adjusted for time between administrations, the mean difference between paper (M = 40.24, SD = 16.69) and mobile device (M = 39.08, SD = 15.97) administration was 1.17 points with 95% CI [1.13, 1.21] (Table 2). The upper limit of the 95% CI for the mean difference was within the margin of error. The effect size for the difference was d = 0.07. Test-retest reliability was r = .93. The ICC was .96, 95% CI [.95, .97]. Mean differences at the item level ranged from 0.001 to 0.22. The highest upper limit for the 95% CI at the item level was 0.37 for Item 8. Therefore,
  • 38. all of the items were within the margin of error (1.00). Test-retest reliability at the item level ranged from r = .66 to .88 and ICC = .75 to .93.

Table 2. Mean Difference and 95% CI for PCL Items and Total Score
PCL                                 M Diff     95% CI
1. Intrusive thoughts                0.02      [−0.11, 0.15]
2. Nightmares                        0.05      [−0.07, 0.18]
3. Reliving                          0.13      [−0.01, 0.27]
4. Emotional cue reactivity          0.12      [−0.04, 0.28]
5. Physiological cue reactivity      0.14      [0.00, 0.27]
6. Avoidance of thoughts             0.04      [−0.14, 0.22]
7. Avoidance of reminders            0.13      [−0.04, 0.29]
8. Trauma-related amnesia            0.22      [0.08, 0.37]
9. Loss of interest                  0.05      [−0.08, 0.17]
10. Feeling detached                −0.09      [−0.22, 0.05]
11. Lack of positive emotion         0.09      [−0.03, 0.22]
12. Foreshortened future             0.03      [−0.11, 0.17]
13. Sleep problems                  −0.01      [−0.13, 0.12]
14. Irritability or anger            0.07      [−0.05, 0.20]
15. Difficulty concentrating        −0.04      [−0.18, 0.10]
16. Overly alert                     0.04      [−0.08, 0.16]
17. Easily startled                  0.14      [0.02, 0.26]
Total                                1.17      [1.13, 1.21]
Note. Sample size = 153. Margin of error for Total scale = 3.46. Margin of error for items = 1.00. Difference score calculated as paper minus mobile. PCL = Posttraumatic Stress Checklist.
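The margin-of-error logic described in the Data Analysis section can be illustrated with a short sketch. This is not the authors' code: it uses hypothetical scores and omits the repeated-measures analysis of covariance that adjusted for the interval between administrations, showing only the difference-score confidence interval compared against the 3.46-point margin.

```python
# Minimal sketch (assumed analysis): Bland-Altman-style check of whether paper
# and mobile PCL totals differ by less than a pre-specified margin of error.
import numpy as np
from scipy import stats

def equivalence_check(paper, mobile, margin=3.46, alpha=0.05):
    diffs = np.asarray(paper, float) - np.asarray(mobile, float)
    mean_diff = diffs.mean()
    sem = diffs.std(ddof=1) / np.sqrt(len(diffs))
    half_width = stats.t.ppf(1 - alpha / 2, len(diffs) - 1) * sem
    ci = (mean_diff - half_width, mean_diff + half_width)
    interchangeable = abs(ci[0]) < margin and abs(ci[1]) < margin
    return mean_diff, ci, interchangeable

# Hypothetical total scores for five participants:
paper = [42, 55, 30, 61, 47]
mobile = [41, 54, 32, 60, 45]
print(equivalence_check(paper, mobile))
```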
  • 39. There were no differences in administrations across the dif- ferent locations, F(3, 149) = 1.05, p = .373. Results were consistent across the combined sample in that the upper limit of the 95% CI for the sample obtained from the trauma cen- ter, M = 0.45, 95% CI [0.45, 0.45]; VAMC, M = 2.72, 95% CI [2.60, 2.85]; and community sample, M = 0.65, 95% CI [0.58, 0.72] were within the margin of error for the total scale. Test-retest reliability within each group was consistent with the total sample: trauma center, r = .89, ICC = .94, 95% CI [.86, .98]; VAMC, r = .89, ICC = .94, 95% CI [.89, .97]; com- munity sample, r = .91, ICC = .95, 95% CI [.93, .97]. Mean differences at the item level ranged from 0.00 to 0.36 for the trauma center, from 0.00 to 0.37 for the VAMC, and from 0.01 to 0.20 for the community sample. The highest upper limit for the 95% CI for each item was within the margin of error for the trauma center (0.65), VAMC (0.65), and the community sample (0.40). The relation between use of smartphone functions and dif- ference in total PCL scores across the administrations was as - sessed with one-way analyses of variance. Differences in total scores were not related to smartphone ownership, F(1, 149) = 1.51, p = .221; use of e-mail via smartphone, F(1, 148) = 0.60, p = .439); use of apps, F(1, 147) = 0.78, p = .378); use of games, F(1, 148) = 0.78, p = .379; and use of the In- ternet on a smartphone, F(1, 148) = 0.78, p = .379. Finally, differences in total PCL scores were unrelated to age (r = .04, p = .598). Discussion The present study suggested that there were minimal dif- ferences between a self-report measure of PTSD symptoms administered via mobile device or paper in a heterogeneous
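The group and age analyses reported above amount to one-way ANOVAs and a bivariate correlation on the difference scores. A minimal sketch with simulated data (not the study data) follows.

```python
# Minimal sketch (assumed analysis): relate paper-minus-mobile difference scores
# to smartphone ownership (yes/no) and to age.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
diff_scores = rng.normal(1.2, 4.0, size=153)          # hypothetical difference scores
owns_phone = rng.integers(0, 2, size=153).astype(bool)  # hypothetical yes/no grouping
age = rng.normal(32, 14, size=153)                     # hypothetical ages

f, p_anova = stats.f_oneway(diff_scores[owns_phone], diff_scores[~owns_phone])
r, p_corr = stats.pearsonr(diff_scores, age)
print(round(float(f), 2), round(float(p_anova), 3),
      round(float(r), 2), round(float(p_corr), 3))
```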
  • 40. sample of trauma-exposed adults. The lack of a relation be- tween prior experiences using a mobile device, age, and differ - ences in total score indicates that mobile devices are a viable strategy for those who have minimal training or experience with this technology. Prior work demonstrated that among patients, demographic characteristics and prior experience is largely un- related to willingness to use technology for healthcare (Price et al., 2013). There is evidence, however, to suggest that prior use is relevant for clinicians (Kuhn et al., 2014). Clinicians with experience using mobile devices or who own a personal mobile device were more receptive to use such technologies in treatment. Ensuring that clinicians are capable and comfort- able with such devices will be necessary for proper measure administration as patients are likely to turn to their therapist for technical assistance or tutorials with these technologies (Price & Gros, 2014). The present study had several limitations. The mobile ad- ministration was not conducted in a naturalistic environment where such measures administered via mobile device are most likely to be completed insofar as this was a research study with informed consent processes. The effect of environmental influences on responses is unknown. Although it is unlikely that the environment would systematically influence mobile re- sponses relative to paper response, measures completed on a mobile device are more likely to be completed in a variety of contexts in which other factors could influence responses. Re- searchers are advised to collect data on the context in which measures are completed to assess potential sources of bias. The study evaluated a single self-report measure of PTSD without a lengthy assessment battery. Thus, the current study was unable to examine effects related to fatigue across the administration of multiple measures via a mobile device. The current study supported the null hypothesis that there were no differences between scores across paper and mobile versions of the PCL,
  • 41. which is conceptually and pragmatically challenging (Piaggo, Elbourne, Pocock, & Evans, 2006). Although the current study had sufficient power to detect an effect as small as 0.23, con- siderably more power would be needed to detect an effect at the obtained effect size of 0.07 (n = 1,604). Continued studies that demonstrate the clinical equivalence of measurements ob- tained via mobile device relative to paper should be conducted to further validate these findings. Finally, PTSD diagnoses were obtained with different methods across the subsamples, and the accuracy of diagnoses in medical records has been questioned (Holowka et al., 2014). Journal of Traumatic Stress DOI 10.1002/jts. Published on behalf of the International Society for Traumatic Stress Studies. Mobile Comparison of PCL 483 The current study provides empirical support regarding the lack of differences for measures administered via mobile de- vice. Given the high rates of smartphone ownership, the results from the present study suggest that mobile devices are an appro- priate method for population screens of PTSD. Such a method would assist in the efficient allocation of resources in events of mass trauma such as a natural disaster. References Bland, M. J., & Altman, D. G. (1986). Statistical methods for assessing agree- ment between two methods of clinical assessment. The Lancet, 327, 307– 310. doi:10.1016/S0140-6736(86)90837-8 Bush, N. E., Skopp, N., Smolenski, D., Crumpton, R., & Fairall,
  • 42. J. (2013). Behavioral screening measures delivered with a smartphone app: psycho- metric properties and user preference. The Journal of Nervous and Mental Disease, 201, 991–995. doi:10.1097/NMD.0000000000000039 Campbell, K. A., Rohlman, D. S., Storzbach, D., Binder, L. M., Anger, W. K., Kovera, C. A., . . . Grossmann, S. J. (1999). Test-retest reliability of psychological and neurobehavioral tests self-administered by computer. As- sessment, 6, 21–32. doi:10.1177/107319119900600103 Finger, M. S., & Ones, D. S. (1999). Psychometric equivalence of the computer and booklet forms of the MMPI: A meta-analysis. Psychological Assessment, 11, 58–66. doi:10.1037/1040-3590.11.1.58 First, M. B., Spitzer, R. L., Gibbon, M., & Williams, J. B. W. (2002). Struc- tured Clinical Interview for DSM-IV-TR Axis I Disorders, Research Version, Patient Edition. N ew York, NY: Biometrics Research, New York State Psychiatric Institute. Holowka, D. W., Marx, B. P., Gates, M. A., Litman, H. J., Ranganathan, G., Rosen, R. C., & Keane, T. M. (2014). PTSD diagnostic validity in Vet- erans Affairs electronic records of Iraq and Afghanistan veterans. Jour- nal of Consulting and Clinical Psychology, 82, 569–579.
  • 43. doi:10.1037/ a0036347 Kuhn, E., Eftekhari, A., Hoffman, J. E., Crowley, J. J., Ramsey, K. M., Reger, G. M., & Ruzek, J. I. (2014). Clinician perceptions of using a smartphone app with prolonged exposure therapy. Administration and Policy in Mental Health and Mental Health Services Research, 1–8. doi:10.1007/s10488-013- 0532-2 Luxton, D. D., McCann, R. A., Bush, N. E., Mishkind, M. C., & Reger, G. M. (2011). mHealth for mental health: Integrating smartphone technology in behavioral healthcare. Professional Psychology: Research and Practice, 42, 505–512. doi:10.1037/a0024485 Pew Internet and American Life Project. (2012, September). Explore Survey Questions. Retrieved from http://www.pewinternet.org/Static- Pages/Data- Tools/Explore-Survey-Questions/Roper- Center.aspx?item={0368CEFB- 1706-4995-B395-925639C0B22F} Piaggo, G., Elbourne, D. R., Pocock, S. J., & Evans, S. J. W. (2006). Reporting of noninferority and equivalence randomized trials: An extension of the CONSORT statement. Journal of the American Medical Association, 295,
  • 44. 1152–1161. doi:10.1001/jama.295.10.1152 Price, M., & Gros, D. F. (2014). Examination of prior experience with telehealth and comfort with telehealth technology as a moderator of treatment response for PTSD and depression with veterans. International Journal of Psychiatry in Medicine, 48, 57–67. doi:10.2190/PM.48.1.e Price, M., Williamson, D., McCandless, R., Mueller, M., Gregoski, M., Brunner-Jackson, B., . . . Treiber, F. (2013). Hispanic migrant farm work- ers’ attitudes toward mobile phone-based telehealth for management of chronic health conditions. Journal of Medical Internet Research, 15, e76. doi:10.2196/jmir.2500 Price, M., Yuen, E. K., Goetter, E. M., Herbert, J. D., Forman, E. M., Acierno, R., & Ruggiero, K. J. (2014). mHealth: A mechanism to deliver more acces- sible, more effective mental health care. Clinical Psychology & Psychother- apy, 21, 427–436. doi:10.1002/cpp.1855 Reger, G. M., Hoffman, J., Riggs, D., Rothbaum, B. O., Ruzek, J., Holloway, K. M., & Kuhn, E. (2013). The “PE coach” smartphone application: An innovative approach to improving implementation, fidelity, and homework adherence during prolonged exposure. Psychological Services, 10, 342–349.
  • 45. doi:10.1037/a0032774 Richman, W. L., Kiesler, S., Weisband, S., & Drasgow, F. (1999). A meta- analytic study of social desirability distortion in computer - administered questionnaires, traditional questionnaires, and interviews. Journal of Ap- plied Psychology, 84, 754–775. doi:10.1037/0021- 9010.84.5.754 Ruggiero, K. J., Ben, K. D., Scotti, J. R., & Rabalais, A. E. (2003). Psy- chometric properties of the PTSD Checklist—Civilian version. Journal of Traumatic Stress, 16, 495–502. doi:10.1023/A:1025714729117 Shiffman, S., Stone, A. A., & Hufford, M. R. (2008). Ecological mo- mentary assessment. Annual Review of Clinical Psychology, 4, 1–32. doi:10.1146/annurev.clinpsy.3.022806.091415 Weathers, F., Litz, B., Huska, J., & Keane, T. (1994). Post- Traumatic Stress Disorder Checklist (PCL-C) for DSM-IV. Boston, MA: National Center for PTSD. Weathers, F., Litz, B., Huska, J., & Keane, T. (2003). PTSD Check- list - Civilian Version. Retrieved from http://www.mirecc.va.gov/docs/ visn6/3_PTSD_CheckList_and_Scoring.pdf Weathers, F. W., Blake, D. D., Schnurr, P. P., Kaloupek, D. G.,
  • 46. Marx, B. P., & Keane, T. M. (2013). The Life Events Checklist for DSM-5 (LEC-5). Unpublished instrument. Retrieved from http://www.ptsd.va.gov Journal of Traumatic Stress DOI 10.1002/jts. Published on behalf of the International Society for Traumatic Stress Studies. Journal of Traumatic Stress October 2019, 32, 799–805 B R I E F R E P O R T An Empirical Crosswalk for the PTSD Checklist: Translating DSM-IV to DSM-5 Using a Veteran Sample Samantha J. Moshier,1,2 Daniel J. Lee,2,3 Michelle J. Bovin,2,3 Gabrielle Gauthier,1 Alexandra Zax,1 Raymond C. Rosen,4 Terence M. Keane,2,3 and Brian P. Marx2,3 1Veterans Affairs Boston Healthcare System, Boston, Massachusetts, USA 2The National Center for PTSD at Veterans Affairs Boston Healthcare System, Boston, Massachusetts, USA 3Department of Psychiatry, Boston University School of Medicine, Boston, Massachusetts, USA 4Healthcore/New England Research Institutes, Watertown, Massachusetts, USA The fifth edition of the Diagnostic and Statistical Manual of
  • 47. Mental Disorders (DSM-5) introduced numerous revisions to the fourth edition’s (DSM-IV) criteria for posttraumatic stress disorder (PTSD), posing a challenge to clinicians and researchers who wish to assess PTSD symptoms continuously over time. The aim of this study was to develop a crosswalk between the DSM-IV and DSM-5 versions of the PTSD Checklist (PCL), a widely used self-rated measure of PTSD symptom severity. Participants were 1,003 U.S. veterans (58.7% with PTSD) who completed the PCL for DSM-IV (the PCL-C) and DSM-5 (the PCL-5) during their participation in an ongoing longitudinal registry study. In a randomly selected training sample (n = 800), we used equipercentile equating with loglinear smoothing to compute a “crosswalk” between PCL-C and PCL-5 scores. We evaluated the correspondence between the crosswalk-determined predicted scores and observed PCL-5 scores in the remaining validation sample (n = 203). The results showed strong correspondence between crosswalk- predicted PCL-5 scores and observed PCL-5 scores in the validation sample, ICC = .96. Predicted PCL-5 scores performed comparably to observed PCL-5 scores when examining their agreement with PTSD diagnosis ascertained by clinical interview: predicted PCL-5, κ = 0.57; observed PCL-5, κ = 0.59. Subsample comparisons indicated that the crosswalk’s accuracy did not differ across characteristics including gender, age, racial minority status, and PTSD status. The results support the validity of this newly developed PCL-C to PCL-5 crosswalk in a veteran sample, providing a tool with which to
  • 48. interpret and translate scores across the two measures. The publication of the fifth edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-5;American Psy- chiatric Association [APA], 2013) introduced numerous revi - sions to the diagnostic criteria for posttraumatic stress disorder (PTSD), including the addition of new symptoms; the modi- fication of several existing symptoms; and the introduction of four, rather than three, symptom clusters. These changes to the diagnostic criteria pose a challenge to clinicians and researchers Samantha Moshier is now at Emmanuel College (Boston, MA, USA). This research was funded by the U.S. Department of Defense, Congres- sionally Directed Medical Research Programs (designations W81XWH08- 2-0100/W81XWH-08-2-0102 and W81XWH-12-2- 0117/W81XWH12-2- 0121). Dr. Lee is supported by the National Institute of Mental Health (5T32MH019836-16). Any opinions, findings, and conclusions or recommen- dations expressed in this material are those of the authors and do not necessarily reflect the view of the U.S. government. Correspondence concerning this article should be addressed to Brian Marx, Ph.D., 150 South Huntington Ave (116B-4), Boston, MA 02130, E-mail: [email protected] C© 2019 International Society for Traumatic Stress Studies. View this article online at wileyonlinelibrary.com
  • 49. DOI: 10.1002/jts.22438 who previously collected symptom data using measures reflect- ing the PTSD diagnostic criteria in the prior version of the DSM (i.e., the fourth edition, text revision; DSM-IV-TR; APA, 2000) but who wish to follow the course of PTSD symptoms over time, including after the revisions to the criteria were published. This shift may be especially challenging to longitudinal investiga- tions of PTSD, in which continuity of symptom measurement over time is critical for many statistical analyses. Clinicians and researchers with these continuity concerns must choose among using symptom severity measures that cor- respond with outdated PTSD diagnostic criteria; using mea- sures that correspond with the updated DSM-5 PTSD diagnos- tic criteria; or creating idiosyncratic, unvalidated measures that simultaneously collect information about both sets of diagnos- tic criteria. None of these choices is ideal. Instead, researchers and clinicians would benefit from a guide that translates re- sults of DSM-IV congruent measures to estimated results on DSM-5 congruent measures, and vice versa. Recent research has suggested that DSM-IV congruent symptom ratings can be used to approximate a diagnosis of DSM-5 PTSD (Rosellini et al., 2015). However, there is currently no tool available to enable linking of continuous total or cluster-specific PTSD 799 http://crossmark.crossref.org/dialog/?doi=10.1002%2Fjts.22438 &domain=pdf&date_stamp=2019-10-18 800 Moshier et al. symptom severity scores derived from DSM-IV and DSM-5 con-
  • 50. gruent measures. Therefore, the aim of the present study was to establish a translational crosswalk between symptom severity scores on the PTSD Checklist–Civilian Version for DSM-IV- TR (PCL-C) and the PCL for DSM-5 (PCL-5; Weathers, Litz, Herman, Huska, & Keane, 1993; Weathers et al., 2013), as the PCL is the most commonly used self-rated measure of PTSD symptom severity. To do so, we conducted test-equating proce- dures using data from both versions of the measure collected concurrently in a sample of U.S. military veterans. Method Participants Participants were 1,003 United States Army or Marine vet- erans enrolled in the Veterans After-Discharge Longitudinal Registry (Project VALOR). Project VALOR is a registry of Vet- erans’ Affairs (VA) mental health care users with and without PTSD who were deployed in support of recent military oper- ations in Afghanistan and Iraq. To be included in the cohort, veterans must have undergone a mental health evaluation at a VA facility. The cohort oversampled for veterans with proba- ble PTSD according to VA medical records (i.e., at least two instances of a PTSD diagnosis by a mental health professional associated with two separate visits) at a 3:1 ratio. Female veter - ans were oversampled at a rate of 1:1 (female to male). A sample of 1,649 (60.8%) veterans completed the baseline assessment for Project VALOR. For the current analysis, we focused on a subsample of this group that consisted of 1,003 participants who reported experiencing a DSM-5 Criterion A traumatic event during a clinical interview and had complete data (required for the test-equating analyses) on both the PCL-C and PCL-5 dur- ing the fourth wave of study assessments (Time 4 [T4]). There were no significant differences in sex, racial minority status, or PTSD diagnostic status or symptom severity at the first wave of
  • 51. data collection (Time 1 [T1]) between the 1,003 participants in- cluded in this analysis and the remaining cohort members, ps = .262–.891. However, participants included in this analysis were older (M age = 38 years) compared with the remaining cohort members (M age = 36 years), t(1,647) = −3.56, p = .000, and had a higher level of educational attainment (i.e., 38% of the analytic sample had a bachelor’s degree vs. 30% of remaining cohort members), χ2(6, N = 1,642) = 15.74, p = .015. Procedure At T4 of Project VALOR, participants provided informed consent verbally over the telephone in accordance with the re- search protocol approved by the VA Boston Healthcare System institutional review boards and the Human Research Protec- tion Office of the U.S. Army Medical Research and Mate- rial Command. Participants then completed a self-administered questionnaire (SAQ) online and, following this, completed a telephone-based diagnostic clinical interview. The SAQ con- sisted of a large battery of questionnaires that, in total, included over 740 questions pertaining to physical health, functional im- pairment, psychiatric symptoms, deployment experiences, and lifetime trauma exposure. Measures Demographic information. Participant age and sex were extracted from a U.S. Department of Defense database. Race, ethnicity, and education were collected via self-report in the T4 SAQ. PTSD symptom severity. The PCL-C is a self-rated mea- sure of PTSD symptom severity designed to correspond to the 17 core DSM-IV PTSD symptoms (Weathers et al., 1993). Re- spondents use a scale ranging from 1 (not at all) to 5
(extremely) to rate how much each symptom has bothered them in the past month. Although a military version of the PCL (the PCL-M) is available, we used the civilian version because it corresponded with the study's clinical interview procedures, which did not restrict potential index traumatic events solely to military-related events. The PCL-C is one of the most commonly used self-rated measures of DSM-IV PTSD symptom severity, and it has demonstrated excellent psychometric properties across a range of samples and settings (for review, see Norris & Hamblen, 2004). In the current sample, internal reliability of PCL-C scores was excellent, Cronbach's α = .96.

The PCL-5 (Weathers et al., 2013) is a self-rated measure of PTSD symptom severity designed to correspond to the 20 core DSM-5 PTSD symptoms. Respondents use a scale ranging from 0 (not at all) to 4 (extremely) to rate how much each symptom has bothered them in the past month. Like its predecessor, the PCL-5 is frequently used across a range of settings for a variety of purposes, including monitoring symptom change as well as screening for and providing a provisional diagnosis of PTSD. Data from the PCL-5 have demonstrated good test–retest reliability, r = .84, and convergent and discriminant validity (Blevins, Weathers, Davis, Witte, & Domino, 2015; Bovin et al., 2015; Keane et al., 2014; Wortmann et al., 2016). Internal reliability of PCL-5 scores was excellent in the current sample, Cronbach's α = .96.

Major depression and PTSD diagnosis. The PTSD and Major Depressive Episode (MDE) modules of the Structured Clinical Interview for DSM-5 (SCID-5; First, Williams, Karg, & Spitzer, 2015) were used to assess exposure to a Criterion A event and to assess current PTSD diagnostic status and presence or absence of a current MDE. Interrater agreement was evaluated for a random sample of 100 cases and was excellent
for both current PTSD, κ = .85, and current MDE, κ = .98.

Data Analysis

To link PCL-C and PCL-5 scores, we used equipercentile equating, a test-equating procedure that is commonly used in educational measurement fields to determine comparable scores on different versions of the same exam (for a review, see Dorans, Moses, & Eignor, 2010). Equipercentile equating considers scores on two measures to be equivalent to one another if their percentile ranks in a given group are equal. This approach has a number of benefits relative to mean or linear equating methods; for example, it results in all imputed scores falling within the actual range of the scale and does not rely on the assumption of a normal distribution of test scores. Equipercentile equating methods have been used to develop crosswalks for a number of neurocognitive and psychiatric rating scales (e.g., Choi, Schalet, Cook, & Cella, 2014; Monsell et al., 2016).

Table 1
Demographic Characteristics of the Total, Training, and Validation Samples

Variable                           Total (n = 1,003)   Training (n = 800)   Validation (n = 203)
Female, %                          51.1                50.8                 52.7
Male, %                            48.9                49.2                 47.3
Age (years), M (SD)                43.2 (9.8)          43.2 (9.9)           43.2 (9.5)
Non-White, %                       23.2                22.6                 25.2
White, %                           76.8                77.4                 74.8
High school or GED, %              6.6                 6.9                  5.4
Some college, %                    38.9                39.7                 37.1
Bachelor's degree or higher, %     50.8                50.1                 53.7
Current PTSD, %                    58.7                58.9                 58.1
Lifetime PTSD, %                   87.5                87.4                 88.2
Current MDE, %                     34.5                34.4                 35.0
PCL-C score, M (SD)                49.7 (17.3)         50.0 (17.3)          50.1 (17.6)
PCL-5 score, M (SD)                36.2 (20.6)         36.1 (20.7)          36.7 (20.6)

Note. PCL-C = Posttraumatic Stress Disorder Checklist–Civilian Version (for DSM-IV); PCL-5 = Posttraumatic Stress Disorder Checklist for DSM-5; PTSD = posttraumatic stress disorder; MDE = major depressive episode; GED = General Educational Development.

Figure 1. Histograms of total Posttraumatic Stress Disorder Checklist–Civilian Version (for DSM-IV; PCL-C) and PCL for DSM-5 (PCL-5) scores in the training sample (N = 800).

Prior to performing the equating procedure, we randomly
split the sample into a training sample (n = 800) and a validation sample (n = 203), a split that allows a large sample size to be retained for the equating procedure, consistent with recommendations by Dorans et al. (2010). In the training dataset, equipercentile equating with loglinear smoothing was performed using the R package equate (Albano, 2016). Standard errors and 95% confidence intervals of the crosswalk estimates were calculated using 10,000 bootstrapped samples. After completing the equating procedure in the training dataset, we used the resulting crosswalk to impute predicted PCL-5 scores from PCL-C scores for all participants in the validation dataset. To evaluate the accuracy of the crosswalk in the validation sample, we examined the intraclass correlation coefficient (ICC) between predicted and observed PCL-5 scores and calculated the average difference between predicted and observed PCL-5 scores. We calculated sensitivity, specificity, efficiency (correct classification rate), quality of efficiency (i.e., Cohen's kappa), and area under the curve (AUC) for the use of crosswalk-predicted PCL-5 cut scores, using a cutoff PCL-5 score of 33 or greater (Bovin et al., 2015), in identifying PTSD diagnosis as determined by the SCID interview. Finally, in order to evaluate whether the crosswalk demonstrated accuracy across relevant subgroups of individuals, we compared these same markers of accuracy when the sample was divided into subgroups based on education level, age, gender, racial minority status, and presence or absence of PTSD and MDE.
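To make the equating step concrete, the following sketch illustrates the core idea of equipercentile linking: each PCL-C total is mapped to the PCL-5 total that occupies the same percentile rank in the training data. This is a minimal illustration written for this summary rather than the authors' analysis code (the study used the R package equate with loglinear smoothing and bootstrapped confidence intervals); the arrays pclc_train, pcl5_train, and pclc_validation and the function names are hypothetical placeholders.

import numpy as np

def percentile_rank(scores, value):
    # Mid-percentile rank: proportion of scores below `value` plus half the
    # proportion exactly equal to it, as used in equipercentile equating.
    scores = np.asarray(scores)
    return np.mean(scores < value) + 0.5 * np.mean(scores == value)

def equipercentile_crosswalk(pclc_train, pcl5_train):
    # Map each possible PCL-C total (17-85) to the PCL-5 total (0-80) whose
    # percentile rank in the training sample is closest. Unsmoothed sketch;
    # the published crosswalk added loglinear smoothing and bootstrap CIs.
    pcl5_values = np.arange(0, 81)
    pcl5_ranks = np.array([percentile_rank(pcl5_train, v) for v in pcl5_values])
    crosswalk = {}
    for c in range(17, 86):
        r = percentile_rank(pclc_train, c)
        crosswalk[c] = int(pcl5_values[np.argmin(np.abs(pcl5_ranks - r))])
    return crosswalk

# Hypothetical usage with concurrently administered scores:
# crosswalk = equipercentile_crosswalk(pclc_train, pcl5_train)
# predicted_pcl5 = [crosswalk[score] for score in pclc_validation]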
We used the same test-equating procedures to create crosswalks from PCL-C subscale scores to PCL-5 subscale scores, representing each of the DSM-5 PTSD symptom clusters (Cluster B = intrusion symptoms, Cluster C = avoidance symptoms, Cluster D = negative alterations in cognitions and mood, Cluster E = alterations in arousal and reactivity). These symptom clusters were approximated in the PCL-C data by summing Items 1–5 (Cluster B), Items 6 and 7 (Cluster C), Items 8–12 (Cluster D), and Items 13–17 (Cluster E). Missing data were minimal (one missing case each for the variables of age, race, and education status, and three cases missing the MDE module of the SCID) and were therefore handled using pairwise deletion.

Results

The characteristics of the sample and subsample are presented in Table 1. In all, 58.7% of participants met criteria for current (i.e., past-month) PTSD, and 34.5% met criteria for current MDE. Group comparison tests revealed no significant differences between the training and validation samples on sex, race, ethnicity, education level, PCL-C or PCL-5 score, or proportion of the sample with current PTSD or MDE, ps = .363–.878. The PCL-C and PCL-5 were highly correlated in both the training and validation samples, rs = .95 and .96, respectively. These correlations were well over the thresholds recommended for equating procedures (i.e., .75–.86; Choi et al., 2014). A histogram of total score frequencies in the training sample is presented in Figure 1.

The crosswalk for converting PCL-C to PCL-5 scores based on the equipercentile equating results is presented in Figure 2. PCL-C scores were equated to lower PCL-5 scores, which is not surprising given the difference in scaling ranges between the two measures (PCL-C scores range from 17 to 85, and PCL-5 scores range from 0 to 80). For example, a score of 50 on the PCL-C was equated with a score of 36 on the PCL-5.

Figure 2. Crosswalk of corresponding Posttraumatic Stress Disorder Checklist–Civilian Version (for DSM-IV; PCL-C) and PCL for DSM-5 (PCL-5) total scores, with 95% confidence intervals from 10,000 bootstrapped samples.

In the validation sample, the ICC between the observed and predicted PCL-5 scores was .96. The mean difference between observed and predicted PCL-5 scores was 0.20 (SD = 6.30). Using the cutoff score of 33 or higher, the predicted PCL-5 score had similar diagnostic utility to the observed PCL-5 score in predicting PTSD diagnosis determined by clinical interview: Cohen's κ = .55, sensitivity = .81, specificity = .74, AUC = .77, and correct classification of 78% of cases for the predicted PCL-5; Cohen's κ = .58, sensitivity = .84, specificity = .74, AUC = .79, and correct classification of 80% of cases for the observed PCL-5.

The accuracy of the crosswalk was highly consistent across subgroups based on sex, age, racial minority status, education level, PTSD diagnostic status as determined by clinical interview, and presence or absence of current MDE (see Table 2). The ICCs between predicted and observed PCL-5 scores were very high for all subgroups, ICCs = .92–.96. The mean difference between observed and predicted PCL-5 scores did not differ significantly across any of these demographic and clinical subgroups. The kappa values between observed and predicted probable DSM-5 PTSD diagnosis were good for all subgroups examined, and the proportion of correctly classified cases did not differ significantly by subgroup.
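The diagnostic-utility checks reported above can be outlined in a short script: the mean predicted-minus-observed difference, plus sensitivity, specificity, correct classification rate, and Cohen's kappa when the crosswalk-predicted score is dichotomized at the standard PCL-5 cutoff of 33 and compared against SCID-determined PTSD. The sketch below is illustrative only; the function and variable names are assumptions, and the ICC and AUC calculations reported in the article are omitted for brevity.

import numpy as np

def crosswalk_accuracy(predicted_pcl5, observed_pcl5, scid_ptsd, cutoff=33):
    # Accuracy of crosswalk-predicted PCL-5 scores: mean difference from the
    # observed PCL-5, and screening performance against clinician-diagnosed
    # PTSD (scid_ptsd coded 1 = present, 0 = absent) at the given cutoff.
    predicted = np.asarray(predicted_pcl5, dtype=float)
    observed = np.asarray(observed_pcl5, dtype=float)
    ptsd = np.asarray(scid_ptsd, dtype=int)
    n = len(ptsd)

    diff = predicted - observed
    screen = (predicted >= cutoff).astype(int)

    tp = np.sum((screen == 1) & (ptsd == 1))
    tn = np.sum((screen == 0) & (ptsd == 0))
    fp = np.sum((screen == 1) & (ptsd == 0))
    fn = np.sum((screen == 0) & (ptsd == 1))

    efficiency = (tp + tn) / n                              # correct classification rate
    chance = ((tp + fp) * (tp + fn) + (tn + fn) * (tn + fp)) / n**2
    return {
        "mean_diff": diff.mean(),
        "sd_diff": diff.std(ddof=1),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "efficiency": efficiency,
        "kappa": (efficiency - chance) / (1 - chance),      # Cohen's kappa
    }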
The items comprising Clusters B and C are highly similar between the PCL-C and PCL-5, with only minor wording changes (e.g., the addition of "unwanted" to Item 1 or the addition of "strong" to Item 5 on the PCL-5). Not surprisingly, then, the equipercentile-equated crosswalk for the Cluster C subscale was identical to a linear transformation of subtracting 2 points from PCL-C scores to reflect the change in scaling between the two measures. Similarly, the equipercentile-equated crosswalk for Cluster B subscale scores was nearly identical to a linear transformation involving subtracting 5 points from PCL-C scores. The ICC between equated and observed scores using these two methods was equal to .997. Additionally, the equipercentile-equated crosswalk for Cluster B did not outperform the linear transformation method in the accuracy analyses conducted in the validation sample, which suggests that the linear transformation can be used for simplicity when converting Cluster B subscale scores between the PCL-C and PCL-5. However, such a linear transformation would not be appropriate for Clusters D and E given that both clusters include new symptoms in DSM-5 relative to DSM-IV-TR. The crosswalks for Cluster D and E subscores based on equipercentile equating with loglinear presmoothing are presented in Figure 3.

Figure 3. Crosswalk of corresponding Posttraumatic Stress Disorder Checklist–Civilian Version (for DSM-IV; PCL-C) and PCL for DSM-5 (PCL-5) Cluster D and Cluster E scores, with 95% confidence intervals from 10,000 bootstrapped samples. Approximated PCL-C scores for Clusters D and E were computed by summing Items 8–12 (Cluster D) and Items 13–17 (Cluster E) of the PCL-C.

Predicted cluster subscores were very strongly correlated with observed cluster subscores in the validation sample for all four clusters; the ICC values between observed and predicted subscale scores were .94 for Cluster B, .88 for Cluster C, .89 for Cluster D, and .91 for Cluster E.

Table 2
Posttraumatic Stress Disorder Checklist (PCL) Crosswalk Accuracy in Clinical and Demographic Subgroups Within the Validation Sample

Variable                           n      ICC    Difference, M (SD)    κ
Sex
  Male                             96     .95    −0.29 (6.72)          .91
  Female                           107    .96    −0.08 (5.84)          .85
Age (years)
  < 40                             116    .95     0.30 (6.55)          .83
  ≥ 40                             86     .96    −0.84 (5.85)          .95
Racial minority status
  Non-White                        51     .96     0.75 (6.23)          .88
  White                            151    .95    −0.49 (6.27)          .88
Education level
  High school or some college      93     .94    −0.65 (6.71)          .90
  Bachelor's degree or higher      109    .96     0.22 (5.83)          .85
PTSD diagnosis
  Present                          118    .92    −0.28 (6.33)          .85
  Absent                           85     .93    −0.16 (6.31)          .78
Current MDE
  Present                          70     .93    −0.90 (6.29)          .83
  Absent                           130    .95     0.25 (6.21)          .86

Note. n = 203. ICC = intraclass correlation coefficient between crosswalk-predicted and observed PCL-5 scores; Difference = difference between crosswalk-predicted and observed PCL-5 scores; κ = agreement between crosswalk-predicted and observed probable PTSD; PCL-C = Posttraumatic Stress Disorder Checklist–Civilian Version (for DSM-IV); PCL-5 = Posttraumatic Stress Disorder Checklist for DSM-5; PTSD = posttraumatic stress disorder; MDE = major depressive episode. In t tests comparing the difference scores between subgroups, ps = .190–.812. Probable PTSD was defined as a PCL-5 score ≥ 33.
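Because the Cluster B and C items are essentially unchanged between versions, their crosswalks reduce to simple rescaling arithmetic: each item drops one scale point (1–5 becomes 0–4), so the five-item Cluster B loses 5 points and the two-item Cluster C loses 2. The snippet below is a hypothetical illustration consistent with that description (names and item indexing are assumptions, not from the article); it shows the PCL-C item groupings used to approximate the DSM-5 clusters and the linear conversion that applies to Clusters B and C only, whereas Clusters D and E require the equated crosswalks in Figure 3.

# PCL-C item groupings (0-based indices into the 17 item responses, each scored 1-5)
# used to approximate the DSM-5 symptom clusters.
PCLC_CLUSTER_ITEMS = {
    "B": range(0, 5),    # Items 1-5: intrusion
    "C": range(5, 7),    # Items 6-7: avoidance
    "D": range(7, 12),   # Items 8-12: negative cognitions and mood (partial coverage)
    "E": range(12, 17),  # Items 13-17: arousal and reactivity (partial coverage)
}

def pclc_cluster_scores(item_responses):
    # Sum PCL-C item responses into approximate DSM-5 cluster subscale scores.
    return {cluster: sum(item_responses[i] for i in idx)
            for cluster, idx in PCLC_CLUSTER_ITEMS.items()}

def linear_b_c_to_pcl5(cluster, pclc_subscore):
    # Linear conversion for Clusters B and C only: subtract one point per item
    # to move from the 1-5 PCL-C item metric to the 0-4 PCL-5 item metric.
    items_per_cluster = {"B": 5, "C": 2}
    return pclc_subscore - items_per_cluster[cluster]

# Example: a PCL-C Cluster C sum of 8 corresponds to approximately 6 on the PCL-5 metric.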
Discussion

This is the first known study that attempted to equate scores between two versions of a frequently used PTSD symptom severity measure: the DSM-IV-based PCL-C and the DSM-5-based PCL-5. The resulting crosswalk enables researchers and clinicians to interpret and translate scores across the two measures, an important consideration in longitudinal observational and clinical treatment studies that cross iterations of the DSM. A particular strength of this study was the use of both training and validation samples, which allowed us to evaluate the accuracy of the crosswalk. Supporting the validity of the crosswalk, results demonstrated a strong degree of
  • 62. bining of data sets from studies using different versions of the PCL, facilitating research that requires large sample sizes, such as gene association studies. Moreover, the availability of cross - walks for computing DSM-5 symptom cluster subscale scores will allow for further study of the association between specific domains of symptoms (e.g., avoidance, arousal) and risk factors or outcomes of interest. However, it should be noted that the evolution of the diagnostic criteria from DSM-IV to DSM-5 has led to some substantive differences in how the PTSD construct is defined in each version. The strong correlation among PCL-C and PCL-5 scores (r = .95) suggests that it was statistically ap- propriate to use test-equating procedures to link the scales. This strong association has been demonstrated in prior studies of the PCL-5 (e.g.,Wortmann et al., 2016) and is consistent with other research suggesting a strong degree of overlap between the two DSM criteria sets (e.g., Kilpatrick et al., 2013). However, it should also be acknowledged that the resulting crosswalk cannot provide specific information about the elements of the PTSD construct that are new to DSM-5 and were not assessed in DSM-IV (i.e., distorted blame, reckless behavior), and it also does not address differences in the definition of a Criterion A traumatic event. This study has a number of strengths for a test-equating de- sign. We used a single-group design in which all participants completed both versions of the PCL, thus producing more reli - able linking across measures. The sample was large and gender - balanced, and participants showed a wide degree of variation in PTSD symptom severity. However, the sample consisted solely of veterans serving in recent-era (i.e., after the September 11, 2001, terrorist attacks) combat operations in Afghanistan and Iraq. Although the crosswalk showed invariance to several de- mographic characteristics within the sample, it is not clear to what extent the results would generalize to civilian samples. We suggest caution in applying the crosswalk to these sam- ples and encourage continued study of these results in other
  • 63. trauma-exposed samples. Additionally, it should be noted that the PCL-C and PCL-5 were administered in the same order for every participant, with the PCL-C administered first. Therefore, order effects may have influenced our results, and future re- search should examine this possibility, using a counter-balanced design. In this study, we present a crosswalk that will allow for con- version between PCL-C and PCL-5 symptom severity scores. The results provide support for the validity of the crosswalk within a veteran sample. This tool will allow researchers and clinicians to make use of archival PCL-C data in longitudinal research, clinical settings, and beyond. References Albano, A. D. (2016). Equate: An R package for observed-score linking and equating. Journal of Statistical Software, 74, 1–36. https://doi.org/10.18637/jss.v074.i08 American Psychiatric Association. (2000). Diagnostic and statistical manual of mental disorders (4th ed., text revision). Washington, DC: Author. American Psychiatric Association. (2013). Diagnostic and statistical manual of mental disorders (5th ed.). Arlington, VA: American Psychiatric Publishing. Blevins, C. A., Weathers, F. W., Davis, M. T., Witte, T. K., & Domino, J. L. (2015). The Posttraumatic Stress Disorder Checklist for DSM-5 (PCL-5): Development and initial psychometric evaluation. Journal of Trau-
  • 64. matic Stress, 28, 489–498. https://doi.org/10.1002/jts.22059 Bovin, M. J., Marx, B. P., Weathers, F. W., Gallagher, M. W., Rodriguez, P., Schnurr, P. P., & Keane, T. M. (2015). Psychometric properties of the PTSD Checklist for Diagnostic and Statistical Manual of Mental Disorders–Fifth Edition (PCL-5) in Veterans. Psychological Assessment, 28, 1379–1391. https://doi.org/10.1037/pas0000254 Choi, S. W., Schalet, B., Cook, K. F., & Cella, D. (2014). Establishing a common metric for depressive symptoms: Linking the BDI-II, CES-D, and PHQ-9 to PROMIS depression. Psychological Assessment, 26, 513–527. https://doi.org/10.1037/a0035768 Dorans, N. J., Moses, T., & Eignor, D. E. (2010). Principles and practices of test score equating (ETS Research Report No. RR-10-29). Princeton, NJ: Educational Testing Service. First, M. B., Williams, J. W., Karg, R. S., & Spitzer, R. L. (2015). Structured Clinical Interview for DSM-5–Research Version. Arlington, VA: American Psychiatric Association. Keane, T. M., Rubin, A., Lachowicz, M., Brief, D. J., Enggasser, J., Roy, M., . . . Rosenbloom, D. (2014). Temporal Stability of DSM-5 posttraumatic stress
  • 65. disorder criteria in a problem drinking sample. Psychological Assessment, 26, 1138–1145. https://doi.org/10.1037/a0037133 Kilpatrick, D. G., Resnick, H. S., Milanak, M. E., Miller, M. W., Keyes, K. M., & Friedman, M. J. (2013). National estimates of exposure to traumatic events and PTSD prevalence using DSM-IV and DSM-5 criteria. Journal of Traumatic Stress, 26, 537–547. https://doi.org/10.1002/da.22364 Monsell, S. E., Dodge, H. H., Zhou, X. H., Bu, Y., Besser, L. M., Mock, C., . . . Weintraub, S. (2016). Results from the NACC uniform data set neuropsychological battery crosswalk. Alzheimer Disease and Associated Disorders, 30, 134–139. https://doi.org/10.1097/WAD.0000000000000111 Norris, F. H., & Hamblen, J. L. (2004). Standardized self-report measures of civilian trauma and PTSD. In J. P. Wilson, T. M. Keane, & T. Martin (Eds.), Assessing psychological trauma and PTSD (pp. 63–102). New York, NY: Guilford Press. Rosellini, A. J., Stein, M. B., Colpe, L. J., Heeringa, S. G., Petukhova, M. V., Sampson, N. A., . . . & Army STARRS Collaborators. (2015). Approxi- mating a DSM-5 diagnosis of PTSD using DSM-IV criteria. Depression and
  • 66. Anxiety, 32, 493–501. https://doi.org/10.1002/da.22364 Weathers, F., Litz, B., Herman, D., Huska, J., & Keane, T. (1993, October). The PTSD Checklist (PCL): Reliability, Validity, and Diagnostic Utility. Journal of Traumatic Stress DOI 10.1002/jts. Published on behalf of the International Society for Traumatic Stress Studies. https://doi.org/10.18637/jss.v074.i08 https://doi.org/10.1002/jts.22059 https://doi.org/10.1037/pas0000254 https://doi.org/10.1037/a0035768 https://doi.org/10.1037/a0037133 https://doi.org/10.1002/da.22364 https://doi.org/10.1097/WAD.0000000000000111 https://doi.org/10.1002/da.22364 A Crosswalk for the PTSD Checklist 805 Paper presented at the Annual Convention of the Internati onal Society for Traumatic Stress Studies, San Antonio, TX. Weathers, F. W., Litz, B. T., Keane, T. M., Palmieri, P. A., Marx, B. P., & Schnurr, P. P. (2013). The PTSD Checklist for DSM-5 (PCL-5). Scale avail- able from the National Center for PTSD at www.ptsd.va.gov Wortmann, J. H., Jordan, A. H., Weathers, F. W., Resick, P. A., Don- danville, K. A., Hall-Clark, B., . . . Litz, B. T. (2016). Psychomet-
  • 67. ric analysis of the PTSD Checklist-5 (PCL-5) among treatment- seeking military service members. Psychological Assessment, 28, 1392– 1403. https://doi.org/10.1037/pas0000260 Journal of Traumatic Stress DOI 10.1002/jts. Published on behalf of the International Society for Traumatic Stress Studies. http://www.ptsd.va.gov https://doi.org/10.1037/pas0000260