This lecture describes alternative spatially autoregressive model specifications and the use of specification testing. The models are reviewed, and several applications are shown using real data for US counties.
DEM 7263 Fall 2015 - Spatially Autoregressive Models 2
Corey S. Sparks, Ph.D.
September 16, 2015
Spatial Regression Models
This lecture builds off the previous lecture on the Spatially Autoregressive Model (SAR) with either a lag or error specification. The lag model is written:

Y = ρWY + X′β + e

where Y is the dependent variable, X is the matrix of independent variables, β is the vector of regression parameters to be estimated from the data, and ρ is the autoregressive coefficient, which tells us how strong the resemblance is, on average, between Yᵢ and its neighbors. The matrix W is the spatial weight matrix, describing the spatial network structure of the observations, as described in the ESDA lecture.

In the lag model, we specify the spatial component on the dependent variable. This leads to a spatial filtering of the variable, where values are averaged over the surrounding neighborhood defined in W, called the spatially lagged variable. In R we use the spdep package, and the lagsarlm() function, to fit this model.

The error model says that the autocorrelation is not in the outcome itself; instead, any autocorrelation is attributable to there being missing spatial covariates in the data. If these spatially patterned covariates could be measured, then the autocorrelation would be 0. This model is written:

Y = X′β + e
e = λWe + v

This model, in effect, controls for the nuisance of correlated errors in the data that are attributable to an inherently spatial process, or to spatial autocorrelation in the measurement errors of the measured and possibly unmeasured variables in the model. This model is estimated in R using errorsarlm() in the spdep library.

Examination of Model Specification

To some degree, both of the SAR specifications allow us to model spatial dependence in the data. The primary difference between them is where we model that dependence.

The lag model says that the dependence affects the dependent variable only; we can liken this to a diffusion scenario, where your neighbors have a diffusive effect on you.

The error model says that dependence affects the residuals only. We can liken this to the missing spatially dependent covariate situation, where, if only we could measure another really important spatially associated predictor, we could account for the spatial dependence. But alas, we cannot, and we instead model dependence in our errors.
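To make the lag term concrete, here is a toy numeric sketch (in Python, with a made-up three-unit map, not the lecture's county data) of what the spatially lagged variable WY is: with a row-standardized W, each element of WY is simply the mean of that unit's neighbors' outcomes.

```python
# Toy illustration: the spatial lag WY as a neighborhood average.
# Three areal units; unit 0 neighbors {1,2}, unit 1 neighbors {0,2},
# unit 2 neighbors {0,1}.  Row-standardizing W makes each row sum to 1.
import numpy as np

W = np.array([[0.0, 0.5, 0.5],
              [0.5, 0.0, 0.5],
              [0.5, 0.5, 0.0]])
y = np.array([1.0, 2.0, 3.0])

Wy = W @ y  # spatially lagged outcome: the mean of each unit's neighbors
print(Wy)   # unit 0 gets (2+3)/2, unit 1 gets (1+3)/2, unit 2 gets (1+2)/2
```

This is the quantity that enters the lag model with coefficient ρ.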
These are inherently two completely different ways to think about specifying a model, and we should really
make our decision based upon how we think our process of interest operates.
That being said, this way of thinking isn’t necessarily popular among practitioners. Most practitioners want
the best fitting model, ‘nuff said. So methods have been developed that test for alternate model
specifications, to see which kind of model best summarizes the observed variation in the dependent
variable and the spatial dependence.
More exotic types of spatial dependence

Spatial Durbin Model Another form of a spatial lag model is the Spatial Durbin Model (SDM). This model is an extension of the ordinary lag or error model that includes spatially lagged independent variables. If you remember, one issue that commonly occurs with the lag model is that we often have residual autocorrelation in the model. This autocorrelation could be attributable to a missing spatial covariate. We can get a kind of spatial covariate by lagging the predictor variables in the model using W. This model can be written:

Y = ρWY + X′β + WXθ + e

where the parameter vector θ now contains the regression coefficients for the lagged predictor variables. We can also include the lagged predictors in an error model, which gives us the Durbin Error Model (DEM):

Y = X′β + WXθ + e
e = λWe + v

Generally, the spatial Durbin model is preferred to the ordinary error model, because we can include the “unspecified” spatial covariates from the error model into the Durbin model via the lagged predictor variables.

Spatially Autoregressive Moving Average Model Further extensions of these models include dependence on both the outcome and the error process. Two models are described in LeSage and Pace (https://books.google.com/books?id=EKiKXcgL-D4C&hl=en): the Spatial Autocorrelation Model, or SAC model, and the Spatially Autoregressive Moving Average model (SARMA model). The SAC model is:

Y = ρW₁Y + X′β + e
e = θW₂e + v

which in reduced form is:

Y = (Iₙ − ρW₁)⁻¹X′β + (Iₙ − ρW₁)⁻¹(Iₙ − θW₂)⁻¹e

where you can potentially have two different spatial weight matrices, W₁ and W₂. Here, the lagged error term is taken over all orders of neighbors, leading to a more global error process, while the SARMA model has the form:

Y = ρW₁Y + X′β + u
u = (Iₙ − θW₂)e
e ∼ N(0, σ²Iₙ)

with reduced form:

Y = (Iₙ − ρW₁)⁻¹X′β + (Iₙ − ρW₁)⁻¹(Iₙ − θW₂)e

which gives a “locally” weighted moving average to the residuals, averaging the residuals only in the local neighborhood, instead of over all neighbor orders.
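The global-versus-local distinction can be seen numerically. Here is a small sketch (Python, with a hypothetical four-unit line map, not the lecture's data) contrasting the SAC-style error term, where the inverse (Iₙ − θW₂)⁻¹ expands as the series Iₙ + θW₂ + θ²W₂² + … and so passes a shock through every order of neighbors, with the SARMA-style moving-average term (Iₙ − θW₂), which stops at first-order neighbors.

```python
# Toy contrast between a "global" error term (I - theta*W)^(-1) e and a
# "local" moving-average term (I - theta*W) e.
import numpy as np

# Row-standardized weights for four units on a line: 0 - 1 - 2 - 3
W = np.array([[0.0, 1.0, 0.0, 0.0],
              [0.5, 0.0, 0.5, 0.0],
              [0.0, 0.5, 0.0, 0.5],
              [0.0, 0.0, 1.0, 0.0]])
theta = 0.4
e = np.array([1.0, 0.0, 0.0, 0.0])   # a shock in unit 0 only
I = np.eye(4)

local_u = (I - theta * W) @ e                  # SARMA-style: (I - theta*W)e
global_u = np.linalg.solve(I - theta * W, e)   # SAC-style: (I - theta*W)^(-1)e

# The inverse equals the series I + (theta*W) + (theta*W)^2 + ..., so the
# global process carries the shock to every order of neighbors, while the
# moving-average term leaves unit 3 (a third-order neighbor) untouched.
series = sum(np.linalg.matrix_power(theta * W, q) for q in range(60)) @ e
print(local_u, global_u)
```

Unit 3 receives exactly zero of the shock under the moving-average term, but a nonzero share under the inverse.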
Fitting these models in R can be done with the spdep library.
library(spdep)
library(maptools)      #readShapePoly()
library(RColorBrewer)  #brewer.pal()

spdat<-readShapePoly("~/Google Drive/dem7263/data/usdata_mort.shp")

#Create a k=4 nearest neighbor set
us.nb4<-knearneigh(coordinates(spdat), k=4)
us.nb4<-knn2nb(us.nb4)
us.wt4<-nb2listw(us.nb4, style="W")

hist(spdat$mortrate)
spplot(spdat, "mortrate", at=quantile(spdat$mortrate), col.regions=brewer.pal(n=5, "Reds"), main="Spatial Distribution of US Mortality Rate")
fit.1.us<-lm(scale(mortrate)~scale(ppersonspo)+scale(p65plus)+scale(pblack_1)+scale(phisp)+I(RUCC>=7), spdat)
summary(fit.1.us)
#SMA Model
fit.sma<-spautolm(scale(mortrate)~scale(ppersonspo)+scale(p65plus)+scale(pblack_1)+scale(phisp)+I(RUCC>=7), spdat, listw=us.wt4, family="SMA")
summary(fit.sma)
##
## Call:
## sacsarlm(formula = scale(mortrate) ~ scale(ppersonspo) + scale(p65plus) +
## scale(pblack_1) + scale(phisp) + I(RUCC >= 7), data = spdat,
## listw = us.wt4, type = "sac", method = "MC")
##
## Residuals:
## Min 1Q Median 3Q Max
## -3.286200 -0.323325 0.018039 0.349200 3.786466
##
## Type: sac
## Coefficients: (numerical Hessian approximate standard errors)
## Estimate Std. Error z value Pr(>|z|)
## (Intercept) 0.0722677 0.0124066 5.8249 5.714e-09
## scale(ppersonspo) 0.2715715 0.0164644 16.4945 < 2.2e-16
## scale(p65plus) -0.0089727 0.0096066 -0.9340 0.3503
## scale(pblack_1) 0.0062671 0.0094103 0.6660 0.5054
## scale(phisp) -0.1108634 0.0102596 -10.8058 < 2.2e-16
## I(RUCC >= 7)TRUE -0.1539096 0.0215581 -7.1393 9.381e-13
##
## Rho: 0.7211
## Approximate (numerical Hessian) standard error: 0.019407
## z-value: 37.156, p-value: < 2.22e-16
## Lambda: -0.43194
## Approximate (numerical Hessian) standard error: 0.045712
## z-value: -9.4493, p-value: < 2.22e-16
##
## LR test value: 952.18, p-value: < 2.22e-16
##
## Log likelihood: -3018.867 for sac model
## ML residual variance (sigma squared): 0.34811, (sigma: 0.59)
## Nagelkerke pseudo-R-squared: 0.5806
## Number of observations: 3067
## Number of parameters estimated: 9
## AIC: 6055.7, (AIC for lm: 7003.9)
##
## Call:
## spautolm(formula = scale(mortrate) ~ scale(ppersonspo) + scale(p65plus) +
## scale(pblack_1) + scale(phisp) + I(RUCC >= 7), data = spdat,
## listw = us.wt4, family = "SMA")
##
## Residuals:
## Min 1Q Median 3Q Max
## -3.293104 -0.344431 0.016537 0.380445 4.329735
##
## Coefficients:
## Estimate Std. Error z value Pr(>|z|)
## (Intercept) 0.085878 0.024072 3.5675 0.0003603
## scale(ppersonspo) 0.519189 0.018359 28.2795 < 2.2e-16
## scale(p65plus) -0.014067 0.015643 -0.8992 0.3685331
## scale(pblack_1) 0.131516 0.019449 6.7619 1.362e-11
## scale(phisp) -0.228134 0.018665 -12.2224 < 2.2e-16
## I(RUCC >= 7)TRUE -0.186258 0.030287 -6.1497 7.763e-10
##
## Lambda: 0.54914 LR test value: 645.76 p-value: < 2.22e-16
## Numerical Hessian standard error of lambda: 0.021071
##
## Log likelihood: -3172.079
## ML residual variance (sigma squared): 0.49357, (sigma: 0.70254)
## Number of observations: 3067
## Number of parameters estimated: 8
## AIC: 6360.2
Using the Lagrange Multiplier Test (LMT)

The so-called Lagrange Multiplier test is the econometrician’s jargon for a score test (https://en.wikipedia.org/wiki/Score_test). These tests compare the model fits from the OLS, spatial error, and spatial lag models using the method of the score test.

For those who don’t remember, the score test is a test based on the relative change in the first derivative of the likelihood function around the maximum likelihood. The particular thing here that affects the value of this derivative is the autoregressive parameter, ρ or λ. In the OLS model, ρ or λ = 0 (so both the lag and error models simplify to OLS), but as this parameter changes, so does the likelihood for the model, hence why the derivative of the likelihood function is used. This is all related to how the estimation routines estimate the value of ρ or λ.

In general, you fit the OLS model to your dependent variable, then submit the OLS model fit to the LMT testing procedure.

Then you look to see which model (spatial error, or spatial lag) has the highest value for the test.

Enter the uncertainty… So how much bigger, you might say?

Well, drastically bigger. If the LMT for the error model is 2500 and the LMT for the lag model is 2480, this is NOT A BIG DIFFERENCE, only about 1%. If you see a LMT for the error model of 2500 and a LMT for
the lag model of 250, THIS IS A BIG DIFFERENCE.
So what if you don’t see a BIG DIFFERENCE, HOW DO YOU DECIDE WHICH MODEL TO USE???
Well, you could think more, but who has time for that.
The econometricians have thought up a “better” LMT, the so-called robust LMT (robust to what, I’m not sure), but it is said that it can settle such problems of a “not so big difference” between the lag and error model specifications.
So what do you do? In general, think about your problem before you run your analysis, should this fail you,
proceed with using the LMT, if this is inconclusive, look at the robust LMT, and choose the model which
has the larger value for this test.
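For the error model, the score statistic has a well-known closed form, LM_err = [e′We/(e′e/n)]² / tr(W′W + WW), computed from the OLS residuals e. The following rough Python sketch, on simulated data with a hypothetical circular neighbor structure, shows the mechanics; it is not a re-implementation of lm.LMtests(), which also computes the lag and robust variants.

```python
# Sketch of the score (Lagrange multiplier) statistic for the error model,
# LM_err = [e'We / (e'e/n)]^2 / tr(W'W + WW), from OLS residuals.
import numpy as np

rng = np.random.default_rng(1)
n = 100

# a circular map: each unit's neighbors are the units on either side
W = np.zeros((n, n))
for i in range(n):
    W[i, (i - 1) % n] = 0.5
    W[i, (i + 1) % n] = 0.5

# simulate an outcome with no spatial process, then fit OLS
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([1.0, 2.0]) + rng.normal(size=n)
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
e = y - X @ beta_hat                      # OLS residuals

sigma2 = e @ e / n
T = np.trace(W.T @ W + W @ W)
lm_err = (e @ W @ e / sigma2) ** 2 / T    # compare to a chi-square(1)
print(lm_err)
```

Under the null of no spatial error dependence this statistic is approximately chi-square with 1 degree of freedom, which is how the p-values in the lm.LMtests() output are obtained.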
Here’s how we do the Lagrange Multiplier test in R:
lm.LMtests(fit.1.us, listw=us.wt4, test="all")
## data = spdat)
## weights: us.wt4
##
## SARMA = 1162.4, df = 2, p-value < 2.2e-16
There is a 2.66% difference in the regular LM test between the error and lag models, but a 36.35% difference in the robust LM tests. In this case, I would say that either the lag model looks like the best one, using the robust Lagrange multiplier test, or possibly the SARMA model, since its test is a 7.14% difference from the lag model. Unfortunately, there is no robust test for the SARMA model.
Of course, the AIC is also your friend:
AICs<-c(AIC(fit.1.us), AIC(fit.lag), AIC(fit.err), AIC(fit.durb), AIC(fit.errdurb), AIC(fit.sac), AIC(fit.sma))
plot(AICs, type="l", lwd=1.5, xaxt="n", xlab="")
axis(1, at=1:7, labels=F) #7 = number of models
labels<-c("OLS", "Lag", "Err", "Durbin", "Err Durbin", "SAC", "SMA")
text(1:7, par("usr")[3]-.25, srt=45, adj=1, labels=labels, xpd=T)
mtext(side=1, text="Model Specification", line=3)
symbols(x=which.min(AICs), y=AICs[which.min(AICs)], circles=1, fg=2, lwd=2, add=T)
knitr::kable(data.frame(Models=labels, AIC=round(AICs, 2)))
Models AIC
OLS 7003.92
Lag 6103.87
Err 6141.81
Durbin 6044.97
Err Durbin 6073.12
SAC 6055.73
SMA 6360.16
Which shows that the Spatial Durbin model best fits the data, although the degree of difference between it and the SAC model is small. A likelihood ratio test could be used:
anova(fit.sac, fit.durb)
## Model df AIC logLik Test L.Ratio p-value
## fit.sac 1 9 6055.7 -3018.9 1
## fit.durb 2 13 6045.0 -3009.5 2 18.766 0.00087355
Which indicates that the Durbin model fits significantly better than the SAC model. Durbin it is!!
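As a check, the L.Ratio in the anova() output can be reproduced by hand from the reported AICs and parameter counts, since AIC = 2k − 2 logLik:

```python
# Reproducing the likelihood ratio test from the AIC table:
# AIC = 2k - 2*logLik, so logLik = k - AIC/2 and LR = 2*(ll_durb - ll_sac).
aic_sac, k_sac = 6055.73, 9      # SAC model: AIC and parameter count
aic_durb, k_durb = 6044.97, 13   # Durbin model: AIC and parameter count

ll_sac = k_sac - aic_sac / 2
ll_durb = k_durb - aic_durb / 2
lr = 2 * (ll_durb - ll_sac)      # matches the L.Ratio of 18.766 above
df = k_durb - k_sac              # 4 extra parameters in the Durbin model
print(round(lr, 2), df)
```

Comparing 18.77 to a chi-square distribution with 4 degrees of freedom gives the small p-value reported by anova().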
Interpreting effects in spatial lag models

In spatial lag models, interpretation of the regression effects is complicated. Each observation will have a direct effect of its predictors, but each observation will also have an indirect effect of the information of its neighbors, although spatial error models do not have this issue. In OLS, the impact/effect of a predictor is straightforward: ∂yᵢ/∂xᵢₖ = βₖ and ∂yᵢ/∂xⱼₖ = 0. But when a model has a spatial lag of either the outcome or a predictor, this becomes more complicated, indeed: ∂yᵢ/∂xⱼₖ may not equal 0, and ∂yᵢ/∂xⱼₖ = Sᵣ(W)ᵢⱼ, where Sᵣ(W) = (Iₙ − ρW)⁻¹βₖ. This implies that a change in the ith region’s predictor can affect the jth region’s outcome.

We have 2 situations:

* Sᵣ(W)ᵢᵢ, or the direct impact of an observation’s predictor on its own outcome, and:
* Sᵣ(W)ᵢⱼ, or the indirect impact of an observation’s neighbor’s predictor on its outcome.

This leads to three quantities that we want to know:

* Average Direct Impact, which is similar to a traditional interpretation
* Average Total Impact, which would be the total of direct and indirect impacts of a predictor on one’s outcome
* Average Indirect Impact, which would be the average impact of one’s neighbors on one’s outcome

These quantities can be found using the impacts() function in the spdep library. We follow the example that converts the spatial weight matrix into a “sparse” matrix, and power it up using the trW() function. This follows the approximation methods described in LeSage and Pace, 2009. Here, we use Monte Carlo simulation to obtain simulated distributions of the various impacts. We are looking for two parts of the output: the impact estimates themselves and their simulated p-values.
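Before running impacts(), it may help to see the computation on a toy example (Python, hypothetical three-unit map, not the lecture's data). The sketch builds Sᵣ(W) = (Iₙ − ρW)⁻¹βₖ for a lag model and takes the average diagonal (direct), average row sum (total), and their difference (indirect); with a row-standardized W the average total impact reduces to βₖ/(1 − ρ).

```python
# Toy version of what impacts() computes for a spatial lag model:
# S_r(W) = (I - rho*W)^(-1) * beta_k; direct = mean diagonal,
# total = mean row sum, indirect = total - direct.
import numpy as np

rho, beta = 0.5, 2.0
W = np.array([[0.0, 0.5, 0.5],     # row-standardized, 3 units, all neighbors
              [0.5, 0.0, 0.5],
              [0.5, 0.5, 0.0]])
n = W.shape[0]

S = np.linalg.inv(np.eye(n) - rho * W) * beta
avg_direct = np.mean(np.diag(S))          # own-region effect
avg_total = np.mean(S.sum(axis=1))        # = beta/(1-rho) here
avg_indirect = avg_total - avg_direct     # spillover from neighbors
print(avg_direct, avg_indirect, avg_total)
```

Note the direct impact exceeds βₖ = 2: feedback loops (my neighbor's outcome depends on mine) flow back to the originating unit through the inverse.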
W <- as(us.wt4, "CsparseMatrix")
trMC <- trW(W, type="MC")
im<-impacts(fit.durb, tr=trMC, R=100)
sums<-summary(im, zstats=T)
data.frame(sums$res)
## direct indirect total
## scale(ppersonspo) 0.46662605 0.28356899 0.75019505
## scale(p65plus) 0.03052456 -0.13349913 -0.10297457
## scale(pblack_1) 0.07299591 -0.03141598 0.04157993
## scale(phisp) -0.07557088 -0.30528244 -0.38085333
## I(RUCC >= 7)TRUE -0.17116336 -0.27538528 -0.44654864
data.frame(sums$pzmat)
## Direct Indirect Total
## scale(ppersonspo) 0.000000e+00 3.025957e-11 0.000000e+00
## scale(p65plus) 5.627965e-02 1.666638e-04 8.603047e-03
## scale(pblack_1) 4.002635e-03 4.458669e-01 2.282902e-01
## scale(phisp) 3.258540e-03 1.776357e-15 0.000000e+00
## I(RUCC >= 7)TRUE 4.701231e-08 3.215075e-04 1.183164e-07
We see that nearly all variables have significant direct effects (percent 65 and older is marginal, p = 0.056); we also see that poverty, percent 65 and older, percent Hispanic, and the rural classification all have significant indirect impacts.
We can likewise see the effects by order of neighbors, similar to what Yang et al. (2015) (http://onlinelibrary.wiley.com/doi/10.1002/psp.1809/abstract) do in their Table 4.
Here, I do this up to 5th order neighbors.
im2<-impacts(fit.durb, tr=trMC, R=100, Q=5)
sums2<-summary(im2, zstats=T, reportQ=T, short=T)
sums2
## Q2 2.7101e-06
## Q3 4.1306e-06
## Q4 8.8498e-06
## Q5 2.3452e-05
So we see that, for instance, for the direct impact of poverty, .4446/.4667 = 95.26% of the effect is due to a county’s own influence on itself, while (−.013 + .0277 + .0019 + .0037)/.4667 = 4.35% of the effect of poverty comes from neighboring counties.
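The order-by-order numbers come from the series expansion (Iₙ − ρW)⁻¹βₖ = βₖ(Iₙ + ρW + ρ²W² + …), where the qth power of W carries the qth-order neighbors' contribution. A toy Python sketch (hypothetical three-unit map, not the lecture's estimates):

```python
# Order-by-order decomposition of the direct impact: each power of W
# contributes one "order of neighbors" share, and the shares sum back to
# the exact direct impact from the matrix inverse.
import numpy as np

rho, beta = 0.5, 2.0
W = np.array([[0.0, 0.5, 0.5],
              [0.5, 0.0, 0.5],
              [0.5, 0.5, 0.0]])
n = W.shape[0]

exact_direct = np.mean(np.diag(np.linalg.inv(np.eye(n) - rho * W))) * beta
orders = [np.mean(np.diag(np.linalg.matrix_power(rho * W, q))) * beta
          for q in range(20)]
print(orders[:3])   # order 0 is the unit's own effect; order 1 is zero
                    # because W has a zero diagonal
```

Summing the per-order terms recovers the exact direct impact, which is exactly the kind of accounting behind the Q2–Q5 lines in the output above.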