Nursing Research, March/April 2011, Vol 60, No 2, 107–114
Adjusting for Patient Acuity in Measurement
of Nurse Staffing
Two Approaches
Barbara A. Mark ▪ David W. Harless
▪ Background: Researchers who examine the relationship between nurse staffing and quality of care frequently rely on the Medicare case mix index to adjust for patient acuity, even though it was developed originally based on medical diagnoses and may not accurately reflect patients' needs for nursing care.
▪ Objectives: The aim of this study was to examine the differences among unadjusted nurse staffing (registered nurses per 1,000 adjusted patient days), case-mix-adjusted nurse staffing, and nurse staffing adjusted with nursing intensity weights, which were developed to reflect patients' needs for nursing care.
▪ Method: Secondary data were used from 579 hospitals in 13 states from 2000 to 2006. Included were three measures of nurse staffing and hospital characteristics including ownership, geographic location, teaching status, hospital size, and percentage Medicare inpatient days.
▪ Results: Measures of nurse staffing differed in important ways. The differences between the measures were related systematically to ownership, geographic location, teaching status, hospital size, and percentage Medicare inpatient days.
▪ Discussion: Without an accurate method to incorporate acuity into measurement of nurse staffing, research on the relationship between staffing and quality of care will not reach its full potential to inform practice.
▪ Key Words: case mix index; nurse staffing; nursing intensity weight; patient acuity
Researchers who examine the relationship between
nurse staffing and quality of care often are stymied by
the lack of a standard, universally accepted system for
measuring patient acuity, defined as patient requirements for
nursing care (Jennings, 2008). To make fair comparisons of
quality of care across hospitals, the measurement of nurse
staffing must account for varying levels of patient acuity. To
accomplish this, researchers frequently rely on the Medicare
case mix index (CMI; Friesner, Rosenman, & McPherson,
2007). However, the original intent of the CMI was to
provide a comparative gauge of the total resource intensity of
the hospital’s patients; administrators tend to view a higher
CMI as a reflection that patients require more resources,
resulting in higher costs. In contrast, for clinicians, a higher
CMI is associated with greater severity of illness, greater risk
of mortality, greater treatment difficulty, a greater need for
intervention, and poorer prognoses (Averill et al., 2003).
In nursing, for example, researchers have used the CMI
as a regressor to control for patient acuity in studies that use
cross-sectional designs (Kovner & Gergen, 1998), repeated
cross-sectional designs using several years of data (Kovner,
Jones, Zhan, Gergen, & Basu, 2002), and true longitudinal
designs (Mark, Harless, McCue, & Xu, 2004). Others have
used the CMI directly to calculate a CMI-adjusted staffing
measure; for example, reporting registered nurse (RN) full-
time equivalents (FTEs) per a given number of patient days,
adjusted by the CMI (Mark & Harless, 2007). These ap-
proaches provide different information. Using the first ap-
proach, in a regression equation in which unadjusted staffing
(and other relevant variables, including CMI) are regressors
and quality of care is the dependent variable, the slope coef-
ficient on the unadjusted staffing term can be interpreted as
the change in quality with a one-unit increase in unadjusted
staffing, holding other variables (including CMI) constant.
In the second approach, the beta coefficient on the adjusted
staffing measure is the change in quality given a one-unit
change in adjusted staffing. Thus, the second approach di-
rectly incorporates the CMI into the measurement of nurse
staffing. Whether acuity is controlled in an analysis or
whether staffing is adjusted for acuity depends on the re-
search question. For example, in a study where cost per
discharge was the dependent variable, it is likely that acuity
would be controlled for. However, in a
study of quality in which staffing was the variable of pri-
mary interest, it would be important to adjust the staffing
measure for patient acuity.
Supplemental digital content is available for this article. Direct
URL citations appear in the printed text and are provided in the
HTML and PDF versions of this article on the journal’s Web site
(www.nursingresearchonline.com).
DOI: 10.1097/NNR.0b013e31820bb0c6
Barbara A. Mark, PhD, RN, FAAN, is Sarah Frances Russell
Distinguished Professor, School of Nursing, The University of
North Carolina at Chapel Hill.
David W. Harless, PhD, is Professor, Department of Economics,
Virginia Commonwealth University, Richmond.
Copyright © 2011 Lippincott Williams & Wilkins. Unauthorized reproduction of this article is prohibited.
Researchers who use the CMI to adjust a raw measure of
nurse staffing assume that the CMI is an appropriate proxy
for the patient’s need for nursing care. However, Norrish and
Rundall (2001) pointed out that patients’ needs for nursing
care may not correlate with their severity of illness: ‘‘Severely
ill patients receiving palliative care may require less nursing
care time than patients who are less acutely ill but require
intensive education and discharge planning’’ (p. 61). Nurses’
involvement with the families of these patients also requires
significant amounts of time. In addition, in using the CMI,
which is based on diagnosis-related group (DRG) assignments
for Medicare patients, researchers assume that it is appro-
priate for all patients, even if Medicare patients do not com-
prise the majority of hospital patients.
Nursing intensity weights (NIWs) are used to measure
patients’ needs for nursing care, and these have been used
to adjust staffing in several studies (American Nurses Asso-
ciation, 1997, 2000; Lichtig, Knauf, & Milholland, 1999;
Needleman, Buerhaus, Mattke, Stewart, & Zelevinsky,
2002). Briefly, NIWs were developed in the mid-1980s by
an expert panel formed by the New York State Nurses As-
sociation and the New York State Department of Health
(Ballard, Gray, Knauf, & Uppal, 1993). The impetus for this
effort was the growing interest in identifying the costs of
nursing care. A Delphi panel of nursing experts from across
New York representing different types of hospitals evaluated
a typical patient case in each DRG and identified nursing
care requirements for each day of the patient’s stay in the cat-
egories of assessment; planning; teaching; and emotional,
medical, and physical needs (Ballard et al., 1993).
In 2005, the NIWs were revised for all-patient DRGs.
The 15-member panel that completed the revision included
nurses from a variety of acute care hospitals and a variety
of nursing specialties. Patient-level information from the
New York Statewide Planning and Research Cooperative
system was used to create a profile for each DRG. Each
DRG profile contained information about the number of
discharges, average length of stay, average number of days
in the intensive care unit (ICU), percentage of patients who
were short and long length of stay outliers, admission source,
discharge disposition, age, most common principal diagnoses,
most common secondary diagnoses, and most common
procedures (Knauf, Ballard, Mossman, & Lichtig, 2006).
Patients’ needs for nursing care in each DRG for each day of
their hospital stay were evaluated on assessment, planning,
teaching (patient or family), emotional support (patient or
family), medical needs, and physical needs. These needs were
scored using a 5-point scale: 1 = minimum, 2 = subacute
(average), 3 = acute (above average), 4 = intensive
(complex), and 5 = maximum (more complex). For
example, patients rated as requiring minimal care provide
their own general care with supervision and encourage-
ment, whereas patients rated as requiring more complex
care require total care, constant monitoring, and frequent
modification of the nursing care plan. Then, the scores pro-
vided by each panel member were averaged across the six
dimensions by day of stay. In addition, the 1–5 ordinal scale
was converted to a ratio scale based on approximate staffing
requirements (Knauf et al., 2006).
In 2007, NIWs were mapped to all-patient refined DRGs
(APR-DRGs; R. Knauf, personal communication, March 15,
2007), in which several changes were incorporated that led to
enhancements in the measurement of nursing intensity. First,
APR-DRGs are an all-payer system. Second, APR-DRGs
identify four levels of severity (minor, moderate, major, and
extreme), information not contained in the original DRG
groupings. Third, for NIWs for the APR-DRGs, the panel
used the highest score by day to reflect patients’ needs during
ICU days (R. Knauf, personal communication, September 8,
2010). Thus, for each APR-DRG, there are eight values of an
NIW (see Figure 1 for an example).
The purpose of this research was to examine the extent to
which the CMI can substitute for the NIW in the measurement
of acuity-adjusted nurse staffing. First, using a set of descriptive
statistics, the importance of adjusting nurse staffing based on
patients’ needs for nursing is shown by examining the differ-
ences between unadjusted measures of nurse staffing and those
that use the NIWs for adjustment. Second, the differences
between nurse staffing adjusted by NIWs and nurse staffing
adjusted by the CMI are examined. Third, using regression
analysis where the dependent variables are the differences
between unadjusted staffing and NIW-adjusted staffing and
the differences between CMI-adjusted staffing and NIW-
adjusted staffing, differences are examined to find any system-
atic links to selected hospital characteristics that have been
included as control variables in nurse staffing studies: owner-
ship (Mark & Harless, 2007), location (Baernholdt & Mark,
2009), and teaching status and hospital size (Needleman
et al., 2002).
The issue is not whether these hospital characteristics are
associated with staffing, but rather whether the differences
in the staffing measures are associated systematically with
these hospital characteristics. Furthermore, because the CMI
is calculated for Medicare patients only, it may not be useful
in hospitals with a lower proportion of Medicare patients.
Thus, in these regressions, ownership, geographic location,
teaching status, hospital size, and percentage Medicare
inpatient days are controlled for (Mark & Harless, 2010;
Mark et al., 2004; Needleman et al., 2002).
In addition, using change in nurse staffing in a longitu-
dinal study might carry different implications for acuity
adjustment relative to a cross-sectional sample. Consider, for
example, the extreme case where the NIW for a hospital is
constant across years. In that case, the change in unadjusted
staffing for a given hospital reflects a genuine staffing change
without being confounded by changes in acuity. Even in that
case, although it would still be preferable to acuity adjust
changes in nurse staffing across hospitals, the bias in using
unadjusted staffing might be considerably smaller vis-à-vis use
FIGURE 1. Example of nursing intensity weights for liver and intestinal
transplant patients (APR-DRG 001). APR-DRG = all-patient refined
diagnosis-related group; ICU = intensive care unit.
of unadjusted staffing in a cross-sectional sample. Thus, re-
sults are provided in the context of both cross-sectional and
longitudinal studies.
Methods
Sample
The implications of different nurse staffing acuity adjustments
were analyzed in both a cross-sectional and a longitudinal
sample. The cross-sectional sample initially contained 662
general acute care hospitals. The cross-sectional sample size
was reduced to 579 hospitals located in 13 states in 2004 after
exclusion of hospitals that did not provide staffing information
or hospitals with an adjusted daily census less than 20 that
produced highly unstable estimates (Needleman et al., 2002).
The included states were Arizona (n = 28), California (n =
180), Colorado (n = 23), Florida (n = 84), Iowa (n = 19),
Kentucky (n = 31), Maryland (n = 23), North Carolina (n =
40), New Jersey (n = 55), Utah (n = 10), Washington (n = 26),
Wisconsin (n = 47), and West Virginia (n = 13). Hospitals in
these states were included because they are part of a larger
study of nurse staffing that we are conducting. The longi-
tudinal sample contains 2,203 observations from the same
states where we have complete data and are able to measure
the annual change in acuity-adjusted staffing from 2000 to
2006. The study reported here was reviewed by the institu-
tional review board and found to be exempt from human
subjects regulations.
Measures
The hospital’s CMI is based on the assignment of all Medi-
care patients to individual DRGs. The DRG assignment is
conditional on the patient’s principal diagnosis, complica-
tions and comorbidities, surgical procedures, age, gender, and
discharge disposition (American Hospital Directory, n.d.;
Averill et al., 2002). The Centers for Medicare and Medicaid
Services assigns a weight to each DRG based on a comparison
of the hospitals’ costs for treating that DRG to the national
average for that same DRG. The Centers for Medicare and
Medicaid Services (2006) defines a hospital’s CMI as the
‘‘average diagnosis-related group (DRG) relative weight for
that hospital.’’ The index is calculated by totaling the DRG
weights for all Medicare discharges and dividing by the
number of discharges. The CMI for the hospital is therefore
an indication of relative resource intensity; for example, a
CMI of 1.50 indicates that Medicare patients in that hospital
consume, on average, 50% more resources than the national
average for Medicare patients.
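The CMI computation described above can be sketched in a few lines. The DRG weights are hypothetical, and the function name is illustrative rather than part of any CMS specification:

```python
# Sketch of the CMI definition: the average Medicare DRG relative weight
# across all of a hospital's Medicare discharges. Weights are hypothetical.

def case_mix_index(drg_weights):
    """Total the DRG weights for all Medicare discharges; divide by count."""
    return sum(drg_weights) / len(drg_weights)

# Four hypothetical discharges, each with its CMS-assigned DRG weight.
cmi = case_mix_index([0.9, 1.2, 2.4, 1.5])  # 1.5
```

A CMI of 1.5 here matches the article's reading: such a hospital's Medicare patients consume, on average, 50% more resources than the national average.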
The calculation of a hospital’s aggregate mean NIW re-
quires data on the APR-DRG to which the patient was as-
signed, the severity level, the number of hospital days spent
on acute care units, the number of hospital days spent in
intensive care, and the corresponding NIW for each hospital
day. For this study, a publicly available source of data, the
Healthcare Cost and Utilization Project State Inpatient
Databases (SID; described in the Sources of Data section),
was used to calculate the NIWs.
The APR-DRG and the corresponding severity level were
extracted for all patients from the SID for all of the states in
the sample. However, only data from Maryland contained a
data point for the number of days that patients spent in the
ICU; therefore, for each APR-DRG, the Maryland data were
used to construct the proportion of total length of stay that
patients spent in acute and intensive care. For example,
assume that for a given APR-DRG, the proportion of length
of stay in the ICU is 0.125. Assume also that a particular
patient with this APR-DRG had a length of stay of 6 days and
the average NIW for ICU days is 9.75 compared with 9.21 for
acute care days. The NIW per patient day for such a patient
is as follows:
[0.125 × 9.75 + (1 − 0.125) × 9.21] = 9.2775
The sum of NIWs over a 6-day length of stay would
be 9.2775 × 6 = 55.665. The weighted proportion obtained
from the Maryland data was used and applied across all pa-
tients in all states in all years to obtain NIWs for every hos-
pital patient.
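The worked example above can be reproduced directly. The ICU proportion (0.125) and the two NIW values (9.75 and 9.21) are the figures cited in the text; the function name is illustrative:

```python
# Sketch of the per-patient NIW calculation described above: a weighted
# average of ICU and acute care NIWs, weighted by the APR-DRG's ICU
# proportion of length of stay.

def niw_per_patient_day(icu_prop, niw_icu, niw_acute):
    """Weighted average NIW per hospital day, mixing ICU and acute days."""
    return icu_prop * niw_icu + (1 - icu_prop) * niw_acute

per_day = niw_per_patient_day(0.125, 9.75, 9.21)  # 9.2775
stay_total = per_day * 6                          # 55.665 over a 6-day stay
```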
A normalized NIW and a normalized CMI were con-
structed by dividing their respective values by their mean values
(Table 1; Needleman et al., 2002). By definition, the nor-
malized values have a mean of 1.0. Acuity-adjusted staffing
was defined as equal to the staffing level (RN FTEs per 1,000
adjusted patient days) divided by the normalized value; that
is, NIW-adjusted staffing = RN FTEs per 1,000 adjusted
patient days/normalized NIW. Thus, when the normalized
NIW was equal to 1.0 (the mean value), NIW acuity-adjusted
staffing level equaled the unadjusted staffing level. If the
normalized NIW was equal to 1.1 (10% higher than mean
NIW), then acuity-adjusted staffing would be 9.1% lower
than unadjusted staffing. If the normalized NIW was equal to
0.9 (10% lower than the mean NIW), then acuity-adjusted
staffing would be 11.1% higher than unadjusted staffing.
Different ways to incorporate NIW and CMI can be imag-
ined, but this method has a straightforward interpretation,
allows a symmetrical treatment of NIW and CMI, and has
been chosen by other researchers (Needleman et al., 2002).
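A minimal sketch of this adjustment rule makes the asymmetry explicit. The mean NIW (3.075) mirrors Table 1; the staffing figures are hypothetical:

```python
# Sketch of acuity adjustment: staffing divided by the mean-normalized
# acuity value. A hospital at exactly mean acuity keeps its unadjusted
# level; above-mean acuity lowers it, below-mean acuity raises it.

def acuity_adjusted_staffing(staffing, acuity, mean_acuity):
    normalized = acuity / mean_acuity
    return staffing / normalized

at_mean = acuity_adjusted_staffing(3.82, 3.075, 3.075)  # equals unadjusted: 3.82
above = acuity_adjusted_staffing(4.00, 1.1, 1.0)        # ~9.1% below 4.00
below = acuity_adjusted_staffing(4.00, 0.9, 1.0)        # ~11.1% above 4.00
```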
TABLE 1. Summary Statistics on Acuity Adjustments for Staffing in Cross-sectional Sample

Variable                                             M        SE
NIW                                                  3.075    0.015
Normalized NIW (a)                                   1.000    0.005
CMI                                                  1.394    0.010
Normalized CMI (a)                                   1.000    0.007
RN FTEs per 1,000 adjusted patient days              3.820    0.050
NIW-adjusted staffing (b)                            3.825    0.048
CMI-adjusted staffing (b)                            3.835    0.047
(NIW-adjusted staffing − unadjusted staffing)        0.005    0.020
(NIW-adjusted staffing − CMI-adjusted staffing)     −0.010    0.019

Note. NIW = nursing intensity weight; CMI = case mix index; FTE = full-time equivalent.
(a) Normalized NIW = NIW / mean NIW; normalized CMI = CMI / mean CMI.
(b) NIW-adjusted staffing = RN FTEs per 1,000 adjusted patient days / normalized NIW; CMI-adjusted staffing = RN FTEs per 1,000 adjusted patient days / normalized CMI.
Nurse staffing was measured as the number of RN
FTEs per 1,000 adjusted patient days. Because the data
source for staffing information (described in the Sources of
Data section) does not distinguish between RNs employed
in the inpatient setting and those employed in the out-
patient setting, a standard adjustment is used. The most
commonly applied method is the adjusted patient day
method (Harless & Mark, 2006; Kovner & Gergen, 1998).
The logic assumes a common staffing level across hospital
inpatient and outpatient cost centers; the standard measure
of volume for hospital inpatients is the inpatient day, and
the standard measure of volume for outpatients is the
number of visits. Adjusted patient days therefore equal the
number of inpatient days multiplied by one plus the ratio of
outpatient gross revenue to inpatient gross revenue.
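The adjustment can be sketched as follows. All figures are hypothetical, and the sketch assumes the standard formulation in which inpatient days are scaled by total gross revenue over inpatient gross revenue:

```python
# Sketch of the adjusted-patient-day method and the resulting staffing
# measure. Revenue, day, and FTE counts are hypothetical.

def adjusted_patient_days(inpatient_days, inpatient_rev, outpatient_rev):
    """Scale inpatient days by total gross revenue / inpatient gross revenue."""
    return inpatient_days * (inpatient_rev + outpatient_rev) / inpatient_rev

def rn_per_1000_adjusted_days(rn_ftes, adj_days):
    """RN FTEs per 1,000 adjusted patient days."""
    return rn_ftes / adj_days * 1000

adj = adjusted_patient_days(50_000, 80_000_000, 20_000_000)  # 62,500 days
staffing = rn_per_1000_adjusted_days(240, adj)               # 3.84
```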
Other key variables were measured. Ownership was de-
fined as whether the hospital was for-profit, not-for-profit, or
government controlled. Geographic location was defined as
whether the hospital was located in a metropolitan statistical
area (MSA; an area with at least 50,000 people). Teaching
status was defined as the ratio of the number of medical
residents to the number of hospital beds, with major teaching
hospitals further defined as having a ratio greater than or
equal to 0.25 and minor teaching hospitals as having a ratio
between 0 and 0.25. Hospital size was defined as the number
of open, staffed beds, divided into quartiles. Percentage
Medicare was the percentage of total inpatient days covered
under Medicare payment, divided into quartiles.
Sources of Data
Four sources of data were used for the study. The CMI was
obtained from http://www.cms.hhs.gov/AcuteInpatientPPS/
FFD/list.asp. The second source of data was the most recent
NIW file, which provides information on NIWs for each
APR-DRG (R. Knauf, personal communication, March 15,
2007). The third source of data was the Healthcare Cost and
Utilization Project SID, which contained the patient-level data
(APR-DRGs, length of stay) from which the patient-level
NIWs were calculated. In its entirety, the SID contains the
universe (not a sample) of the inpatient discharge abstracts in
approximately 33 states (depending on year); data were used
from the 13 states that were part of the larger study. The SID
contains clinical and nonclinical information on all patients,
regardless of payer (Agency for Healthcare Research and
Quality, 2010). The fourth source of data was the American
Hospital Association Annual Survey of Hospitals, from which
information was obtained on RN FTEs, inpatient days, in-
patient and outpatient revenues (from which adjusted patient
days and RN FTEs per 1,000 adjusted patient days were
calculated), ownership, geographic location, teaching status,
hospital size, and percentage Medicare patients.
Analyses
Descriptive statistics and simple correlation analysis were used
to evaluate the correlations between the number of RN FTEs
and CMI-adjusted and NIW-adjusted nurse staffing. Then,
multivariate regression analyses were used to evaluate the
sources of any systematic differences between CMI-adjusted
and NIW-adjusted nurse staffing in the cross-sectional case
and the longitudinal case. The dependent variables are the
differences between unadjusted staffing and NIW-adjusted
staffing and the differences between CMI-adjusted staffing
and NIW-adjusted staffing.
Results
Results in a Cross-sectional Sample
The mean NIW and CMI are shown in Table 1. The mean
NIW reflects the mean per patient day: For each hospital, the
sum of NIWs per patient stay was divided by the sum of
patients’ length of stay to obtain the average NIW per patient
day in a given year. Therefore, the sample mean NIW of
3.075 is a mean of means: the mean across hospitals of the
mean NIW per patient day within hospitals.
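This "mean of means" construction can be sketched with two hypothetical hospitals:

```python
# Sketch: within each hospital, sum the per-stay NIWs and divide by total
# patient days to get that hospital's mean NIW per patient day; then
# average those hospital means. All numbers are hypothetical.

def hospital_mean_niw(stay_niw_sums, stay_lengths):
    """Sum of per-stay NIWs divided by total patient days in the hospital."""
    return sum(stay_niw_sums) / sum(stay_lengths)

h1 = hospital_mean_niw([55.665, 30.0], [6, 10])  # hospital 1 per-day mean
h2 = hospital_mean_niw([18.42, 46.05], [2, 5])   # hospital 2 per-day mean
sample_mean_niw = (h1 + h2) / 2                  # mean of hospital means
```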
The average staffing per 1,000 adjusted patient days was
3.82 RN FTEs. That the means of the NIW-adjusted and
CMI-adjusted staffing are nearly identical to the unadjusted
staffing level is a result of their construction using the nor-
malized NIW and CMI values. In addition, the mean differ-
ence between NIW-adjusted staffing and unadjusted staffing
was 0.005, whereas the mean difference between NIW-
adjusted staffing and CMI-adjusted staffing was −0.010.
The correlation between NIW-adjusted and CMI-adjusted
staffing was .921. Based only on the overall mean differences
and the correlation coefficient, adjusting for acuity using the
CMI might be seen as a close substitute for an NIW staffing
adjustment. Further analysis, however, indicates that this
conclusion is erroneous; there were systematic differences be-
tween the two based on specific hospital characteristics.
In Table 2, the differences between NIW-adjusted staffing
and unadjusted staffing and between NIW-adjusted staffing
and CMI-adjusted staffing are illustrated by providing the
mean difference in the staffing measures for different hospital
characteristics. In the first column of mean differences, the
statistic for major teaching hospitals was −0.731; that is,
among major teaching hospitals, unadjusted staffing was,
on average, 0.731 higher than NIW-adjusted staffing. For
CMI-adjusted staffing, however, the mean difference was a
positive 0.127, so that, on average, NIW-adjusted staffing ex-
ceeded CMI-adjusted staffing among major teaching hos-
pitals. In one case, the difference was positive, and in the
other, negative. Another large mean difference between NIW-
adjusted staffing and unadjusted staffing occurred in hospi-
tals located outside MSAs where NIW-adjusted staffing was
higher by 0.277, on average. Large mean differences also were
apparent across the hospital size quartiles where, on average,
NIW-adjusted staffing exceeded unadjusted staffing in the
lowest quartile but was smaller than unadjusted staffing in the
highest quartile. Differences also existed across the hospital
size quartiles between NIW-adjusted and CMI-adjusted
staffing, but with the opposite pattern: NIW-adjusted staffing
was less than CMI-adjusted staffing in the lowest quartile but
was higher than CMI-adjusted staffing in the highest quartile.
Although differences in the staffing measures are illustrated
in Table 2, the information is limited because hospital char-
acteristics such as teaching status, location in an MSA, and
hospital size are correlated highly. In Table 3, this problem is
addressed using a regression model to estimate the portion of
the difference in the staffing measure attributable to each hos-
pital characteristic, controlling for other characteristics. In this
regression analysis, the estimates of the constant can be inter-
preted meaningfully. For the difference between NIW-adjusted
staffing and unadjusted staffing, the constant of 0.448
indicates the predicted difference between the staffing mea-
sures for a not-for-profit, nonteaching hospital, not located in
an MSA, for hospitals in the lowest quartile of hospital size
and the lowest quartile of percentage Medicare patients. Many
of the parameter estimates for the hospital characteristics
indicate meaningfully large and statistically significant differ-
ences between NIW-adjusted staffing and unadjusted staffing.
When other characteristics were held constant, unadjusted
staffing exceeded NIW-adjusted staffing by 0.565 at major
teaching hospitals. Again, when other variables were held
constant, unadjusted staffing also exceeded NIW-adjusted
staffing for hospitals in MSAs by 0.143. Furthermore, there
were differences in the two staffing measures based on the
hospital size quartiles, and these differences were larger in
larger hospitals.
Although the magnitudes were generally smaller, there
were also statistically significant differences between NIW-
adjusted staffing and CMI-adjusted staffing for a number of
hospital characteristics. For example, with other things held
constant, CMI-adjusted staffing exceeded NIW-adjusted
staffing by 0.125 at public hospitals. Differences of similar
magnitudes also held for minor teaching hospitals and for
hospitals with the largest proportion of Medicare patients. A
consistent difference occurred for the hospital size quartiles
where NIW-adjusted staffing exceeded CMI-adjusted staffing
(i.e., CMI-adjusted staffing understated NIW-adjusted staff-
ing) by larger amounts the larger the hospital.
Results in a Longitudinal Sample
In this part of the analysis, changes in nurse staffing and
changes in the differences between NIW-adjusted staffing and
CMI-adjusted staffing were examined between 2000 and 2006.
Note that using change in nurse staffing in a longitudinal study
potentially carries different implications for acuity adjustment
relative to a cross-sectional sample. Consider, for example, the
extreme case where NIW for a hospital is constant across years.
In that case, the change in unadjusted staffing for a given hos-
pital reflects a genuine staffing change without being con-
founded by changes in acuity. Even in that example, however,
TABLE 2. Mean Difference From Nursing Intensity-Weight-Adjusted Staffing in a Cross-sectional Sample, by Hospital Ownership, Teaching Status, Location, Bed Size Quartile, and Percentage Medicare Quartile

                                              (NIW-Adjusted −   (NIW-Adjusted −
Variable                                      Unadjusted)       CMI-Adjusted)
Not-for-profit                                 −0.012              0.022
Public                                         −0.049             −0.070
For-profit                                      0.095             −0.072
Located outside an MSA                          0.277             −0.083
Located in an MSA                              −0.077              0.012
Not a teaching hospital                         0.102             −0.014
Major teaching hospital                        −0.731              0.127
Minor teaching hospital                        −0.067             −0.032
Hospital size quartile 1 (24–114 beds)          0.343             −0.179
Hospital size quartile 2 (115–186 beds)         0.128             −0.068
Hospital size quartile 3 (187–307 beds)        −0.071              0.038
Hospital size quartile 4 (308–1,758 beds)      −0.386              0.172
Percentage Medicare quartile 1 (0%–41%)        −0.123              0.071
Percentage Medicare quartile 2 (41%–51%)       −0.019              0.053
Percentage Medicare quartile 3 (51%–60%)        0.021             −0.050
Percentage Medicare quartile 4 (60%–83%)        0.144             −0.115

Note. NIW = nursing intensity weight; CMI = case mix index; MSA = metropolitan statistical area.
TABLE 3. Regression of Difference From Nursing Intensity-Weight-Adjusted Staffing on Key Hospital Characteristics in a Cross-sectional Sample

                               (NIW-Adjusted −        (NIW-Adjusted −
Variable                       Unadjusted)            CMI-Adjusted)
Public                          0.036 (0.053)          −0.125* (0.058)
For-profit                     −0.014 (0.042)          −0.034 (0.047)
Located in MSA                 −0.143*** (0.036)       −0.054 (0.043)
Major teaching hospital        −0.565*** (0.098)       −0.046 (0.087)
Minor teaching hospital        −0.047 (0.036)          −0.114* (0.047)
Hospital size quartile 2       −0.151*** (0.041)        0.128* (0.052)
Hospital size quartile 3       −0.324*** (0.046)        0.230*** (0.048)
Hospital size quartile 4       −0.564*** (0.048)        0.364*** (0.051)
Percent Medicare quartile 2    −0.034 (0.053)          −0.026 (0.058)
Percent Medicare quartile 3    −0.060 (0.048)          −0.105 (0.058)
Percent Medicare quartile 4    −0.019 (0.049)          −0.160** (0.054)
Constant                        0.448*** (0.057)       −0.020 (0.066)
Number of observations          579                     579
R²                              .405                    .115
F(11, 567)                      34.30                   9.05

Note. NIW = nursing intensity weight; CMI = case mix index; MSA = metropolitan statistical area. Values in parentheses are standard errors.
*p < .05. **p < .01. ***p < .001.
although it would still be preferable to acuity adjust changes in
nurse staffing across hospitals, the bias in using unadjusted
staffing might be considerably smaller vis-à-vis use of unad-
justed staffing in a cross-sectional sample.
The summary statistics for the year-to-year changes in the
measures of nurse staffing (as well as the differences from
NIW-adjusted staffing) over the period of the study are
provided in Table 4. Because these are the mean changes
across periods, the means can be interpreted as a time trend.
For example, the mean for the change in unadjusted staffing
was 0.082, so over the period of the study, RN FTEs per
1,000 adjusted patient days increased by 0.082 per year. Note
that the growth in CMI-adjusted staffing, 0.075, was very
similar to that for unadjusted staffing, whereas the growth in
NIW-adjusted staffing was significantly lower, 0.042. The
mean differences between change in NIW-adjusted staffing
and change in unadjusted staffing, as well as the mean differ-
ence between change in NIW-adjusted staffing and change in
CMI-adjusted staffing also indicate that using the change in
unadjusted staffing or change in CMI-adjusted staffing would
overstate increases in staffing over time relative to using NIW-
adjusted staffing. Moreover, the implication is that the diver-
gence from change in NIW-adjusted staffing is larger the longer
the period: for unadjusted staffing, 0.04 between the first and
last years of a 2-year longitudinal study, 0.08 (2 × 0.04) be-
tween the first and last years of a 3-year study, and so on. Thus,
researchers who use either the unadjusted or the CMI-adjusted
staffing measure should understand that it will overstate the
genuine change in staffing, and the longer the period of the
study, the greater the overstatement.
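The divergence arithmetic above can be made explicit. The two trend values (0.082 and 0.042 per year) come from the longitudinal results in Table 4; the linear extrapolation follows the article's own reasoning:

```python
# Sketch: the annual growth in unadjusted staffing (0.082) minus the annual
# growth in NIW-adjusted staffing (0.042) gives a ~0.04/year divergence,
# which accumulates linearly with the length of the study.

def cumulative_overstatement(per_year_gap, study_years):
    """Divergence between the first and last year of a study."""
    return per_year_gap * (study_years - 1)

gap = round(0.082 - 0.042, 3)                   # 0.04 per year
two_year = cumulative_overstatement(gap, 2)     # 0.04
three_year = cumulative_overstatement(gap, 3)   # 0.08
```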
To examine whether these differences were related
systematically to selected hospital characteristics, a
regression analysis was undertaken for the longitudinal
sample parallel to that reported in Table 3. In contrast to
the findings for the cross-sectional analysis, no systematic
differences were found by hospital characteristics for (ΔNIW-
adjusted staffing − Δunadjusted staffing), F(11, 661) = 1.29,
p = .22, or for (ΔNIW-adjusted staffing − ΔCMI-adjusted
staffing), F(11, 661) = 0.98, p = .46. (See Table,
Supplemental Digital Content 1, which illustrates complete
regression results, http://links.lww.com/NRES/A46.)
Longitudinal studies are critically important because they
have the potential to overcome well-known threats to internal
validity present in cross-sectional studies, particularly the in-
ability to draw causal conclusions (Kenny, 1979; Mark,
2006). The finding that the differences between NIW-adjusted
and CMI-adjusted staffing were not significantly related to
important hospital characteristics in the longitudinal sample
suggests that, although there are cross-sectional differences by
size, location, and teaching status, these are differences in level
that remain relatively constant over time. In the longitudinal
case, the differences were not related to hospital characteristics,
but the gap among the staffing measures widens the longer the
study period.
Discussion
The purpose of this study was to determine whether the
Medicare CMI provided an appropriate mechanism by which
measurement of staffing could be adjusted for patient acuity.
Although a simple correlation between the NIW and the CMI
was found to be quite high, there were systematic and stat-
istically significant differences between NIW-adjusted staffing
and both unadjusted and CMI-adjusted staffing that were
related to hospital characteristics. In the cross-sectional sam-
ple, unadjusted staffing was significantly larger than NIW-
adjusted staffing for hospitals located in an MSA, in major
teaching hospitals, and in all but the smallest hospitals.
Differences between NIW-adjusted and CMI-adjusted staff-
ing were also statistically significant for ownership, teaching
status, hospital size, and proportion of Medicare inpatient
days; however, these differences were not always in the same
direction; some were larger and others were smaller. Also,
although there were differences in the staffing measures in
the longitudinal sample, they were not related systematically
to any of the hospital characteristics. Particularly in cross-
sectional studies, researchers should attempt to adjust their
measurement of nurse staffing to account for patient acuity
using a measurement system that is reflective of patients’
needs for nursing care.
There are limitations to this study. The most important is
the assumption that the NIWs provide a true estimate of
patients' needs for nursing care. However, given that the NIW
system was developed by an expert panel of nurses who focused
directly on patients' nursing needs, it is probably a more accurate
reflection of acuity than the CMI, a physician- and medical-
diagnosis-oriented measure (Fetter, Shin, Freeman, Averill, &
Thompson, 1980) that focuses on total hospital resource use.
Another limitation is the reliance, in constructing the NIWs,
on ICU-day data from Maryland only. Using the SID, there is
no way to determine whether the proportion of ICU days
would have been the same in other states.
Despite the now voluminous research literature examining
the relationship between nurse staffing and quality of care, and
despite widespread legislative, regulatory, and voluntary calls
for hospitals to implement nurse staffing plans based on patient
acuity, there has been no corresponding growth in the
development of standardized acuity systems. Part of the reason
for this may be lack of consensus about the exact meaning of
patient acuity and, therefore, which aspects of patients’ needs
for nursing care should be included. For example, Jennings
(2008) points out the need for greater concept clarity and, in
particular, an explication of the relationship between patient
acuity and nursing workload. The underlying assumption is
that patients with higher levels of acuity require more nursing
time and, thus, an increased workload.

TABLE 4. Summary Statistics on Acuity Adjustments for Staffing in a Panel Sample

Variable                                              M           SE^a
ΔUnadjusted staffing                                  0.082***    0.011
ΔCMI-adjusted staffing                                0.075***    0.011
ΔNIW-adjusted staffing                                0.042***    0.011
(ΔNIW-adjusted staffing − Δunadjusted staffing)      −0.040***    0.002
(ΔNIW-adjusted staffing − ΔCMI-adjusted staffing)    −0.033***    0.003

Note. CMI = case mix index; NIW = nursing intensity weight.
^a Standard errors of the means adjusted for dependence within hospitals.
***p < .001.

112 Adjusting for Patient Acuity Nursing Research March/April 2011 Vol 60, No 2
Copyright © 2011 Lippincott Williams & Wilkins. Unauthorized reproduction of this article is prohibited.
Another possible reason for the slow growth of standardized
acuity measurement systems may be the overlapping nature
of the terms severity of illness, comorbidity, and patient
acuity, all of which may act independently or synergistically
to increase nursing workload. For example, comorbidity refers
to "coexisting diagnoses...diseases unrelated in etiology or
causality to the principal diagnosis" (Iezzoni, 2003, p. 49);
patients with comorbidities are highly
likely to be more severely ill and to require more nursing care
than do those without. Concepts of severity of illness are
often organized around prognosis and expectations about
patients’ clinical outcomes relating to the extent and nature of
disease (Iezzoni, 2003). In these definitions of comorbidity
and severity, the conceptual overlap is obvious. Further over-
lap can be seen between these terms and the term patient
acuity. In-depth conceptual analyses of these terms that would
delve into their interrelationships, as well as their relationships
to patients’ needs for nursing care, would provide a much
needed advancement for nurse staffing research.
Although Mumolie, Lichtig, and Knauf (2007) have called
for the development of a national, standard set of NIWs that
could be used consistently by researchers and policy makers,
the future of NIWs is in doubt (R. Knauf, personal commu-
nication, May 4, 2010). Thus, researchers may need to rely
on alternative approaches. For example, comorbidity software
(Elixhauser, Steiner, Harris, & Coffey, 1998) developed
specifically for use with large administrative data sets uses a
list of comorbidities based on International Classification of
Diseases, Ninth Revision, Clinical Modification codes in the
discharge record. The presence of these comorbidities was
associated with longer lengths of stay, higher hospital charges,
and higher levels of mortality. A complete discussion of other
risk adjustment methods is beyond the scope of this article,
but when researchers select a risk adjustment method, that
choice should be consistent with the research question being
posed, and with recognition that these alternatives do not
provide direct measurement of patient acuity.
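The approach the Elixhauser software takes can be sketched as follows. This is an illustrative stand-in, not the actual AHRQ software: the code-to-category mapping below is a made-up fragment, and the function name is hypothetical.

```python
# Illustrative sketch of comorbidity flagging from discharge-record
# diagnosis codes. The mapping below is a made-up fragment, not the
# real Elixhauser code-to-category assignment.

COMORBIDITY_MAP = {
    "428.0": "congestive heart failure",   # illustrative codes only
    "250.00": "diabetes, uncomplicated",
    "305.1": "tobacco use disorder",
}

def flag_comorbidities(discharge_codes, principal_code):
    """Return comorbidity categories found among secondary diagnoses.

    The principal diagnosis is excluded: by definition, comorbidities
    are coexisting conditions unrelated to the principal diagnosis.
    """
    return sorted(
        COMORBIDITY_MAP[code]
        for code in discharge_codes
        if code != principal_code and code in COMORBIDITY_MAP
    )

# A discharge record: principal diagnosis plus secondary diagnoses.
record = ["410.01", "428.0", "250.00"]
print(flag_comorbidities(record, principal_code="410.01"))
# ['congestive heart failure', 'diabetes, uncomplicated']
```

The resulting comorbidity flags could then serve as covariates in a risk-adjustment model, which is how such software is typically applied to large administrative data sets.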
Further research, analogous to that presented here, using
the current NIWs but incorporating comparisons with other
risk adjustment systems may provide insight into the validity
of these risk adjustment systems to reflect patient acuity more
accurately. In addition, examining the behavior of different
risk adjustment systems in studies relating nurse staffing and
quality of care is an obvious next step. The importance of an
accurate strategy to measure patient acuity in supporting
management decisions about the acuity-based levels of nurse
staffing seems self-evident. Until a standardized acuity sys-
tem is developed, tested, implemented widely in hospitals,
and adopted by researchers, the power of the evidence base
about nurse staffing to drive practice will not reach its full
potential.
Accepted for publication December 13, 2010.
This study was funded by the Agency for Healthcare Research and
Quality, Grant 2R01HS10153.
Corresponding author: Barbara A. Mark, PhD, RN, FAAN, School of
Nursing, The University of North Carolina at Chapel Hill, Chapel
Hill, NC 27599-7460 (e-mail: bmark@email.unc.edu).
References
Agency for Healthcare Research and Quality. (2010). Overview
of the State Inpatient Databases (SID). Retrieved August 12,
2010, from http://www.hcup-us.ahrq.gov/sidoverview.jsp
American Hospital Directory. (n.d.). Medicare prospective payment
system. Retrieved from http://www.ahd.com/pps.html
American Nurses Association. (1997). Implementing nursing's report
card. Washington, DC: Author.
American Nurses Association. (2000). Nurse staffing and patient
outcomes in the hospital setting. Washington, DC: Author.
Averill, R. F., Bonazelli, J. A., Mullin, R. L., Goldfield, N.,
McCullough, E. C., Mossman, P. N., et al. (2002). All Patient
Diagnosis Related Groups (AP-DRGs) version 23.0 definitions
manual. Wallingford, CT: 3M Health Information Systems.
Averill, R. F., Goldfield, N., Hughes, J. S., Bonazelli, J.,
McCullough, E. C., Steinbeck, B. A., et al. (2003). All-patient
refined diagnosis related groups (APR-DRGs): Methodology
overview. Retrieved September 18, 2010, from http://www.hcup-us.
ahrq.gov/db/nation/NIS/APR-DRGsV20MethodologyOverviewand
Bibliography.pdf
Baernholdt, M., & Mark, B. A. (2009). The nurse work environment,
job satisfaction and turnover rates in rural and urban nursing
units. Journal of Nursing Management, 17(8), 994–1001.
Ballard, K. A., Gray, R. F., Knauf, R. A., & Uppal, P. (1993).
Measuring variations in nursing care per DRG. Nursing
Management, 24(4), 33–41.
Centers for Medicare and Medicaid Services. (2006). Case mix
index: Description. Retrieved from http://www.cms.gov/
AcuteInpatientPPS/FFD/itemdetail.asp?filterType=none&
filterByDID=-99&sortByDID=2&sortOrder=ascending&
itemID=CMS022523&intNumPerPage=10
Elixhauser, A., Steiner, C., Harris, D. R., & Coffey, R. M. (1998).
Comorbidity measures for use with administrative data. Medical
Care, 36(1), 8–27.
Fetter, R. B., Shin, Y., Freeman, J. L., Averill, R. F., & Thompson, J. D.
(1980). Case mix definition by diagnosis-related groups. Medical
Care, 18(2 Suppl.), 1–53.
Friesner, D. L., Rosenman, R., & McPherson, M. Q. (2007). Does a
single case mix index fit all hospitals? Empirical evidence from
Washington state. Research in Healthcare Financial Management,
11, 35–55.
Harless, D. W., & Mark, B. A. (2006). Addressing measurement
error bias in nurse staffing research. Health Services Research,
41(5), 2006–2024.
Iezzoni, L. (2003). Risk adjustment for measuring health care
outcomes. Chicago, IL: Health Administration Press.
Jennings, B. M. (2008). Patient acuity. In R. G. Hughes (Ed.), Patient
safety and quality: An evidence-based handbook for nurses.
Rockville, MD: Agency for Healthcare Research and Quality.
Retrieved September 18, 2010, from http://www.ahrq.gov/qual/
nurseshdbk/docs/JenningsB_PA.pdf
Kenny, D. (1979). Correlation and causation. New York: John
Wiley & Sons.
Knauf, R. A., Ballard, K. A., Mossman, P. N., & Lichtig, L. K.
(2006). Nursing cost per DRG: Nursing intensity weights. Policy,
Politics, & Nursing Practice, 7, 281–289.
Kovner, C., & Gergen, P. J. (1998). Nurse staffing levels and adverse
events following surgery in U.S. hospitals. Journal of Nursing
Scholarship, 30(4), 315–321.
Kovner, C., Jones, C., Zhan, C., Gergen, P., & Basu, J. (2002). Nurse
staffing and postsurgical adverse events: An analysis of administrative
data from a sample of U.S. hospitals, 1990–1996. Health
Services Research, 37, 611–629.
Lichtig, L. K., Knauf, R. A., & Milholland, D. K. (1999). Some
impacts of nursing on acute care hospital outcomes. Journal of
Nursing Administration, 29, 25–33.
Mark, B. A. (2006). Methodological issues in nurse staffing research.
Western Journal of Nursing Research, 28(6), 694–709.
Mark, B. A., & Harless, D. W. (2007). Nurse staffing, mortality, and
length of stay in for-profit and not-for-profit hospitals. Inquiry,
44(2), 167–186.
Mark, B. A., & Harless, D. W. (2010). Nurse staffing and postsurgical
complications using the present on admission (POA)
indicator. Research in Nursing and Health, 33(1), 35–47.
Mark, B. A., Harless, D. W., McCue, M., & Xu, Y. (2004). A
longitudinal examination of hospital registered nurse staffing and
quality of care. Health Services Research, 39(2), 279–300.
Mumolie, G. P., Lichtig, L. K., & Knauf, R. A. (2007). The
implications of nurse staffing information: The real value of
reporting nursing data. Nursing Economic$, 25(4), 212–216, 227.
Needleman, J., Buerhaus, P., Mattke, S., Stewart, M., & Zelevinsky, K.
(2002). Nurse-staffing levels and the quality of care in hospitals.
New England Journal of Medicine, 346(22), 1715–1722.
Norrish, B. R., & Rundall, T. G. (2001). Hospital restructuring
and the work of registered nurses. Milbank Quarterly, 79(1),
55–79.
DOI: 10.1097/NNR.0b013e318212294d