
MODULE 14: Spatial Statistics in Epidemiology and Public Health

Lecture 5: Spatial regression

Jon Wakefield and Lance Waller

1 / 72

References

- Waller and Gotway (2004, Chapter 9) Applied Spatial Statistics for Public Health Data. New York: Wiley.
- Elliott, P., et al. (2000) Spatial Epidemiology: Methods and Applications. Oxford: Oxford University Press.
- Haining, R. (2003) Spatial Data Analysis: Theory and Practice. Cambridge: Cambridge University Press.
- Banerjee, S., Carlin, B.P., and Gelfand, A.E. (2014) Hierarchical Modeling and Analysis for Spatial Data, 2nd Ed. Boca Raton, FL: CRC/Chapman & Hall.
- Blangiardo, M. and Cameletti, M. (2015) Spatial and Spatio-temporal Bayesian Models with R-INLA. Chichester: Wiley.

2 / 72

What do we have so far?

- Point process ideas (intensities, K-functions).
  - Data: (x, y) event locations.
  - Where are the clusters? Use intensities.
  - How are events clustered (on average)? Use K-functions.
- Inhomogeneous Poisson process → regional counts are Poisson distributed.
- Non-overlapping areas should be independent.
- Point process results provide a basis for the small area estimation methods from yesterday.
- Tension between statistical precision (want large local sample sizes → big regions) and geographic precision (want small regions for more detail in the map).

3 / 72

What’s left?

- So we know how to describe and evaluate spatial patterns in health outcome data.
- What about linking patterns in health outcomes to patterns in exposures?
- With independent observations we know how to use linear and generalized linear models: linear, Poisson, and logistic regression.
- What happens with dependent observations?

4 / 72

Caveat

“...all models are wrong. The practical question is how wrong do they have to be to not be useful.” — Box and Draper (1987, p. 74)

5 / 72

What changes with dependence?

- In statistical modeling, we are often trying to describe the mean of the outcome as a function of covariates, assuming error terms are mutually independent.
- That means we usually model any trend in the data as a trend in expectations.
- Allows estimation of covariate effects.
- With dependent error terms, observed trends may be due to covariates, correlation, or both.
- May impact the identifiability of covariate effects.
- Could have different effects equally likely under different correlation models.

6 / 72

Residual correlation

- Where do correlated errors come from?
- Perhaps the outcomes are truly correlated (infectious disease).
- Perhaps we omitted an important variable that has spatial structure itself.
- If temperature is important and we left it out of a model applied to the continental U.S., what would the residuals look like?

7 / 72

Residual maps important

- If high temperatures are associated with high outcomes, we would underfit in southern states (observations > model predictions ⇒ positive residuals) and overfit in northern states (observations < model predictions ⇒ negative residuals).
- The "missing covariate" idea suggests that maps of residuals are important spatial diagnostics.
- Also, we may want to apply tests of clustering, or tests to detect clusters, to residuals.
- Moran's I, LISAs.

8 / 72
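The residual-diagnostic idea above can be sketched numerically. Below is a minimal Moran's I implementation with a hypothetical 4-region adjacency structure (not the New York data); values trending across neighboring regions give a positive statistic.

```python
import numpy as np

def morans_i(z, w):
    """Moran's I for values z under spatial weight matrix w (zeros on the diagonal)."""
    z = np.asarray(z, dtype=float)
    zc = z - z.mean()                      # center the values
    num = (w * np.outer(zc, zc)).sum()     # weighted cross-products of neighbors
    den = (zc ** 2).sum()                  # total squared deviation
    n, s0 = len(z), w.sum()
    return (n / s0) * (num / den)

# Toy example: 4 regions on a line; residuals trending upward across neighbors
w = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
resid = np.array([-1.5, -0.5, 0.5, 1.5])
print(round(morans_i(resid, w), 3))        # positive => spatial autocorrelation
```

Values above the null expectation -1/(n-1) suggest positively autocorrelated residuals, the pattern a missing spatially structured covariate would leave behind.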

Our plan

- NY leukemia data, plus some covariates.
- We will fit linear and Poisson regression models with various spatial correlation structures and compare inferences.
- Remember, all of these models are wrong, but some may be useful.

9 / 72

Illustrating regression models

- New York leukemia data from Waller et al. (1994).
- 281 census tracts (1980 Census).
- 8 counties in central New York.
- 592 cases for 1978-1982.
- 1,057,673 people at risk.

10 / 72

Crude Rates (per 100,000)

[Figure: choropleth map — Central New York census tracts (1980), leukemia rates 1978-1982. Legend: 0.000000-0.000021, 0.000022-0.000324, 0.000325-0.000588, 0.000589-0.001096, 0.001097-0.006993.]

11 / 72

Building the model

- Let Y_i = count for region i.
- Let E_i = expected count for region i.
- x_{i,TCE} = inverse distance to TCE site.
- x_{i,65} = percent over age 65 (census).
- x_{i,home} = percent who own their own home (census).
- The model:

  Y_i = β_0 + x_{i,TCE} β_TCE + x_{i,65} β_65 + x_{i,home} β_home + ε_i.

12 / 72

Assumptions for regression

- The error terms ε_i are independent N(0, σ²);
- The data have a constant variance, σ²;
- The data are uncorrelated (OLS) or have a specified parametric covariance structure (GLS).

13 / 72

Y normally distributed?

[Figure: histogram of the incidence proportions (frequency vs. proportion, 0.000-0.006) and a normal Q-Q plot of the proportions (theoretical quantiles vs. sample quantiles).]

14 / 72

Transformation?

Z_i = log( 1000 (Y_i + 1) / n_i ).
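The transform is simple to apply directly; the counts and tract populations below are hypothetical, purely to show the arithmetic (the +1 keeps zero-count tracts finite on the log scale).

```python
import numpy as np

# Hypothetical counts Y and population sizes n for three tracts
Y = np.array([0, 2, 5])
n = np.array([1000.0, 2500.0, 800.0])

# The slide's log-rate transform: Z_i = log(1000 * (Y_i + 1) / n_i)
Z = np.log(1000 * (Y + 1) / n)
print(np.round(Z, 3))
```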

[Figure: histogram of the transformed incidence proportions and a normal Q-Q plot of the transformed values (theoretical quantiles vs. sample quantiles).]

15 / 72

Outliers, where are the top 3?

16 / 72

Scatterplots

[Figure: four scatterplots of the transformed outcome against covariates — Inverse Distance vs. Outcome, Log(100*Inverse Distance) vs. Outcome, Percent > 65 vs. Outcome, and Percent Own Home vs. Outcome.]

17 / 72

Linear Regression (OLS)

Parameter          Estimate   Std. Error   p-value
β0 (Intercept)     -0.5173    0.1586       0.0012
β1 (TCE)            0.0488    0.0351       0.1648
β2 (% Age > 65)     3.9509    0.6055       <0.0001
β3 (% Own home)    -0.5600    0.1703       0.0011

σ² = 0.4318 (277 df); R² = 0.1932; AIC = 567.5

18 / 72

Is OLS appropriate?

- Zs roughly Gaussian (symmetric).
- Do the Zs have constant variance?
- No, since population sizes vary:

  Var(Z_i) = Var( log( 1000 (Y_i + 1) / n_i ) ).

- Try weighted least squares with weights 1/n_i.

19 / 72
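A minimal weighted-least-squares sketch on simulated data (not the leukemia tracts). Weight conventions differ across software; here the diagonal weight matrix holds precision weights, taken proportional to the population sizes n_i under the assumption that Var(Z_i) shrinks like 1/n_i — that interpretation of the slide's "1/n_i" is ours.

```python
import numpy as np

# Weighted least squares via the closed form beta = (X'WX)^{-1} X'WZ.
rng = np.random.default_rng(0)
n_pop = rng.integers(500, 5000, size=30).astype(float)    # hypothetical tract populations
x = rng.normal(size=30)                                   # one covariate
X = np.column_stack([np.ones(30), x])                     # design: intercept + x
Z = 1.0 + 2.0 * x + rng.normal(size=30) / np.sqrt(n_pop)  # noise sd shrinks with n_i
W = np.diag(n_pop)                                        # precision weights (assumption)

beta_wls = np.linalg.solve(X.T @ W @ X, X.T @ W @ Z)
print(np.round(beta_wls, 2))                              # close to the true (1, 2)
```

Because the weights track the true precisions, small-population (noisy) tracts are downweighted, exactly the fix for the slide's three small-n_i outliers.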

Linear Regression (WLS)

Parameter          Estimate   Std. Error   p-value
β0 (Intercept)     -0.7784    0.1412       <0.0001
β1 (TCE)            0.0763    0.0273       0.0056
β2 (% Age > 65)     3.8566    0.5713       <0.0001
β3 (% Own home)    -0.3987    0.1531       0.0097

σ² = 1121.94 (277 df); R² = 0.1977; AIC = 513.5

20 / 72

What changed?

- The three outliers are all in regions with small n_i.
- Weighting reduced their impact on the estimates.
- The most profound effect is with respect to TCE.

21 / 72

WLS fitted values

22 / 72

Residual plot

[Figure: studentized residuals (-4 to 4) plotted against WLS fitted values (-1.50 to 1.50).]

23 / 72

Residual map

24 / 72

What are we looking for?

- Patterns in the locations of residuals.
- Model underfit (predictions too low) near cities?
- Correlations in residuals?
- Let's try semivariograms for the residuals.
- Let's try local Moran's I for the residuals.

25 / 72
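The residual semivariogram can be sketched with the classical (Matheron) estimator; the coordinates and residuals below are simulated with no spatial structure, so a roughly flat estimate near the variance is what independence looks like.

```python
import numpy as np

def empirical_semivariogram(coords, z, bins):
    """Matheron estimator: gamma(h) = average of (z_i - z_j)^2 / 2 per distance bin."""
    d = np.sqrt(((coords[:, None, :] - coords[None, :, :]) ** 2).sum(-1))
    sq = (z[:, None] - z[None, :]) ** 2 / 2.0
    iu = np.triu_indices(len(z), k=1)            # use each pair once
    d, sq = d[iu], sq[iu]
    return np.array([sq[(d >= lo) & (d < hi)].mean()
                     for lo, hi in zip(bins[:-1], bins[1:])])

rng = np.random.default_rng(1)
coords = rng.uniform(0, 10, size=(50, 2))        # hypothetical tract centroids
z = rng.normal(size=50)                          # residuals with no spatial structure
gam = empirical_semivariogram(coords, z, bins=np.array([0.0, 2.5, 5.0, 7.5]))
print(np.round(gam, 2))                          # roughly flat near 1
```

Spatially correlated residuals would instead show gamma rising from a small value at short distances toward a sill.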

Residual correlation?

[Figure: empirical semivariogram of the residuals — gamma (0.0 to 1.2) against distance (0 to 80).]

26 / 72

Comments

- The residual semivariogram is not too impressive.
- We can try a maximum likelihood fit incorporating residual correlation via the semivariogram (which defines the covariance matrix).

27 / 72

Linear Regression, Correlated Errors (ML)

Parameter          Estimate   Std. Error   p-value
β0 (Intercept)     -0.7222    0.1972       <0.0001
β1 (TCE)            0.0826    0.0434       0.0576
β2 (% Age > 65)     3.7093    0.6188       <0.0001
β3 (% Own home)    -0.3245    0.2044       0.1136

c0 = 0.3740, cs = 0.0558, a = 6.93; AIC = 565.6 (277 df)

28 / 72

Weighting?

- We also need to include weights to account for heteroskedasticity.
- Again we use weights equal to 1/n_i.
- What changes?

29 / 72

Linear regression, Correlated, Weighted

Parameter          Estimate   Std. Error   p-value
β0 (Intercept)     -0.9161    0.1648       <0.0001
β1 (TCE)            0.0956    0.0322       0.0032
β2 (% Age > 65)     3.5763    0.5920       <0.0001
β3 (% Own home)    -0.2285    0.1761       0.1956

c0 = 997.65, cs = 127.12, a = 6.86; AIC = 514.7 (277 df)

30 / 72

Fitted values (correlated, weighted)

31 / 72

Modelling counts directly

- Using linear regression required a fair amount of data transformation just to meet the modelling assumptions.
- Can we model the counts directly?
- In epidemiology, it is common to use logistic or Poisson regression.
- For a rare disease, there is little difference between logistic and Poisson.
- Both are examples of generalized linear models (McCullagh and Nelder, 1989).

32 / 72

Building the model

- Let Y_i = count for region i.
- Let E_i = expected count for region i.
- Let (x_{i,TCE}, x_{i,65}, x_{i,home}) be the associated covariate values.
- Poisson regression:

  Y_i ~ Poisson(E_i ζ_i),

  where

  log(ζ_i) = β_0 + x_{i,TCE} β_TCE + x_{i,65} β_65 + x_{i,home} β_home.

33 / 72
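A hedged sketch of Poisson regression with an offset log(E_i), fit by iteratively reweighted least squares rather than any particular package; the data are simulated, not the New York tracts.

```python
import numpy as np

def poisson_glm(X, y, E, n_iter=25):
    """Poisson regression with log link and offset log(E), via IRLS (Fisher scoring)."""
    beta = np.zeros(X.shape[1])
    offset = np.log(E)
    for _ in range(n_iter):
        eta = X @ beta + offset
        mu = np.exp(eta)                       # fitted means E_i * zeta_i
        W = mu                                 # Poisson working weights
        z = eta - offset + (y - mu) / mu       # working response, offset removed
        beta = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * z))
    return beta

# Simulated data with true relative risk zeta_i = exp(0.5 + 0.8 * x_i)
rng = np.random.default_rng(2)
x = rng.normal(size=500)
E = rng.uniform(1, 10, size=500)               # hypothetical expected counts
y = rng.poisson(E * np.exp(0.5 + 0.8 * x))
X = np.column_stack([np.ones(500), x])
print(np.round(poisson_glm(X, y, E), 2))       # close to the true (0.5, 0.8)
```

In practice one would use a GLM routine from standard software; the point is only that the offset enters the linear predictor with a fixed coefficient of one.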

What’s different?

- Poisson distribution for counts, rather than transforming proportions toward normality.
- Link function: the natural log of the mean of Y_i is a linear function of the covariates.
- So the βs represent multiplicative increases in expected counts; e^β is a measure of the relative risk associated with a one-unit increase in a covariate.
- E_i is an offset: what we expect if the covariates have no impact.
- Age, race, and sex adjustments go in either E_i (standardization) or the covariates.

34 / 72

How do we add spatial correlation?

- Trickier than in linear regression, since the mean and variance are related for Poisson observations.
- Two general approaches:
  - Marginal specification defining correlation among the means.
  - Conditional specification defining correlation through the use of random effects.

35 / 72

Marginal and conditional models

- We often think of a model representing the marginal mean E(Y) as a function of fixed, unknown parameters.
- That is, the parameters define the population average effect of the covariates ("On average, how does a given level of air pollution impact a person?").
- Another approach is to consider a model of the conditional mean for each subject.
- In this setting we think of fixed effects of parameters and random effects specific to the subjects.

36 / 72

Marginal versus conditional interpretation

- For us: fixed effects apply equally to all subjects; random effects apply to a particular subject.
- Interpret fixed effects conditional on the levels of the random effects.
- "What is the effect of aspirin on a headache averaged over all individuals in the study?" (marginal effect).
- "What is the effect of aspirin on a headache in this individual?" (conditional effect).
- Random effects allow different parameter values for individuals, following some distribution.

37 / 72

Random intercepts

- A model with fixed and random effects is a mixed model.
- A very common formulation is to have fixed parameter values and a random intercept. This says everyone has the same response to the treatment, but individuals have different starting points.
- In the Poisson regression setting, if we add random effects we generate a generalized linear mixed model (GLMM).

38 / 72

Random effects and the conditional specification

- We add a random effect (intercept).
- It represents an impact of region i not accounted for in E_i or the covariates.
- We define this random effect to have a spatial distribution.

39 / 72

Building the model

- Let Y_i denote the observed number of cases in region i.
- Let E_i denote the expected number of cases, ignoring covariate effects.
- Assume E_i known, perhaps age-standardized, or based on global (external or internal) rates.
- First stage:

  Y_i | ζ_i ~ Poisson(E_i ζ_i), independently.

- ζ_i represents a relative risk associated with region i not accounted for by the E_i.

40 / 72

Building the model

- Note Y_i / E_i = SMR_i, the MLE of ζ_i.
- Also note E[Y_i | ζ_i] ≠ E_i, since E_i does not include the impact of the random effect.
- Create a GLMM with log link by

  log(E[Y_i | ζ_i]) = log(E_i) + log(ζ_i).

- If we add covariates and a random intercept ψ_i, then

  log(ζ_i) = x'_i β + ψ_i.

41 / 72

New York data

- So our model is

  Y_i | β, ψ_i ~ Poisson(E_i exp(x'_i β + ψ_i)), independently,

  log(ζ_i) = β_0 + x_{i,TCE} β_TCE + x_{i,65} β_65 + x_{i,home} β_home + ψ_i.

- The ψ_i represent the random intercepts.
- Add overdispersion via ψ_i ~ N(0, v_ψ), independently.
- Add spatial correlation via

  ψ ~ MVN(0, Σ).

42 / 72

Priors and “shrinkage”

- The overdispersion model (i.i.d. ψ_i) results in each estimate being a compromise between the local SMR and the global average SMR.
- "Borrows information (strength)" from other observations to improve the precision of the local estimate.
- "Shrinks" the estimate toward the global mean. (Note: "shrink" does not mean "reduce"; rather it means "moves toward".)

43 / 72
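The global-shrinkage compromise can be made concrete with a conjugate Poisson-Gamma model — an illustration only, since the slides use Normal random effects: with ζ_i ~ Gamma(a, b), the posterior mean (Y_i + a)/(E_i + b) is an exact weighted average of the local SMR and the prior mean a/b, with more weight on the data as E_i grows.

```python
import numpy as np

# Hypothetical Gamma(a, b) prior with mean relative risk a/b = 1
a, b = 2.0, 2.0
Y = np.array([0.0, 3.0, 12.0])       # hypothetical observed counts
E = np.array([1.0, 3.0, 6.0])        # hypothetical expected counts

smr = Y / E
post_mean = (Y + a) / (E + b)        # conjugate posterior mean of zeta_i
w = E / (E + b)                      # weight on the local SMR grows with E_i

# The posterior mean is exactly the shrinkage compromise
assert np.allclose(post_mean, w * smr + (1 - w) * (a / b))
print(np.round(post_mean, 3))
```

Small-E_i regions are pulled hardest toward the global mean, which is precisely the "borrowing strength" described above.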

Local shrinkage

- The spatial model (correlated ψ_i) results in each estimate being a compromise between the local SMR and the local average SMR.
- Shrinks each ψ_i toward the average of its neighbors.
- Can also include both global and local shrinkage (Besag, York, and Mollié 1991).
- How do we fit these models?

44 / 72

Bayesian inference

Bayesian inference regarding the model parameters is based on the posterior distribution

  Pr[β, ψ | Y],

proportional to the product of the likelihood and the prior:

  Pr[Y | β, ψ] Pr[ψ] Pr[β].

This defers spatial correlation to the prior rather than the likelihood.

45 / 72

Spatial priors

- Could model the joint distribution

  ψ ~ MVN(0, Σ).

- Could also model the conditional distributions

  ψ_i | ψ_{j≠i} ~ N( Σ_{j≠i} c_ij ψ_j / Σ_{j≠i} c_ij ,  1 / (v_CAR Σ_{j≠i} c_ij) ),   i = 1, ..., N,

  where the c_ij are weights defining the neighbors of region i.

- Adjacency weights: c_ij = 1 if j is a neighbor of i (and 0 otherwise).

46 / 72
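The CAR conditional moments can be computed directly from an adjacency matrix; the weights, ψ values, and v_CAR scale below are hypothetical. With adjacency weights, the conditional mean is just the neighbor average, and the conditional variance shrinks with the number of neighbors.

```python
import numpy as np

# Hypothetical adjacency weights c_ij for 4 regions (symmetric, zero diagonal)
C = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
psi = np.array([0.2, -0.1, 0.4, 0.0])   # hypothetical current random-effect values
v_car = 2.0                             # hypothetical CAR scale parameter

i = 2
m_i = C[i].sum()                        # sum of weights = number of neighbors of i
cond_mean = C[i] @ psi / m_i            # average of the neighbors' psi values
cond_var = 1.0 / (v_car * m_i)          # conditional variance as written on the slide
print(round(cond_mean, 4), round(cond_var, 4))
```

Regions with many neighbors get tighter conditional distributions, so they are shrunk more firmly toward their local average.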

CAR priors

- The conditional specification defines the conditional autoregressive (CAR) prior (Besag 1974; Besag et al. 1991).
- Under certain conditions on the c_ij, the CAR prior defines a valid joint multivariate Gaussian distribution.
- The variance-covariance matrix is a function of the inverse of the matrix of neighbor weights.

47 / 72

Perspective: Generalized linear mixed model

- Given the values of the random effects (the ψ_i), the observations (the Y_i) are independent.
- Taking into account correlation in the ψ_i, the Y_i are correlated.
- The conditionally independent Y_i | ψ_i give the likelihood function.
- The (spatially correlated) distribution of the ψ_i is a prior distribution.

48 / 72

Fitting Bayesian models: Markov chain Monte Carlo

- The posterior is often difficult to calculate mathematically.
- Iterative simulation approach to model fitting.
- Given the full conditional distributions, simulate a new value for each parameter, holding the other parameter values fixed.
- The set of simulated values converges to a sample from the posterior distribution.
- WinBUGS software: www.mrc-bsu.cam.ac.uk/bugs/welcome.shtml

49 / 72

Conceptual MCMC example

- Suppose we have a model with data Y and three parameters θ1, θ2, and θ3.
- The "Gibbs sampler" simulates values from the full conditional distributions

  f(θ1 | θ2, θ3, Y),
  f(θ2 | θ1, θ3, Y),
  f(θ3 | θ1, θ2, Y).

50 / 72

Conceptual MCMC

- Start with values θ1^(1), θ2^(1), and θ3^(1), then

  sample θ1^(2) from f(θ1 | θ2^(1), θ3^(1), Y),
  sample θ2^(2) from f(θ2 | θ1^(2), θ3^(1), Y),
  sample θ3^(2) from f(θ3 | θ1^(2), θ2^(2), Y).

- As we continue to update θ, the sampled values become indistinguishable from a sample from the joint posterior distribution f(θ1, θ2, θ3 | Y).

51 / 72

MCMC example

- Gelman et al. (2004). Theoretical and MCMC results:

  (Y1, Y2)′ ~ MVN( (θ1, θ2)′, [[1, ρ], [ρ, 1]] ).

- Uniform priors on θ1, θ2 yield the posterior

  (θ1, θ2)′ | Y ~ MVN( (Y1, Y2)′, [[1, ρ], [ρ, 1]] ).

52 / 72

Full conditionals

- Multivariate normal results give the full conditionals

  θ1 | θ2, Y ~ N(Y1 + ρ(θ2 − Y2), 1 − ρ²),
  θ2 | θ1, Y ~ N(Y2 + ρ(θ1 − Y1), 1 − ρ²).

- Let's try a Gibbs sampler and compare to the theoretical results.

53 / 72
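The two full conditionals above are all a Gibbs sampler needs. A minimal sketch, with ρ = 0.8 and Y = (0, 0) assumed for illustration:

```python
import numpy as np

# Gibbs sampler for the bivariate-normal posterior on the slide:
# theta1 | theta2, Y ~ N(Y1 + rho*(theta2 - Y2), 1 - rho^2), and symmetrically.
rng = np.random.default_rng(3)
Y1, Y2, rho = 0.0, 0.0, 0.8
sd = np.sqrt(1 - rho ** 2)

n_iter, burn = 20000, 1000
t1, t2 = 4.0, -4.0                                  # deliberately far starting values
draws = np.empty((n_iter, 2))
for k in range(n_iter):
    t1 = Y1 + rho * (t2 - Y2) + sd * rng.normal()   # update theta1 | theta2
    t2 = Y2 + rho * (t1 - Y1) + sd * rng.normal()   # update theta2 | theta1
    draws[k] = t1, t2
draws = draws[burn:]                                # discard burn-in

# Compare to theory: means (Y1, Y2), unit variances, correlation rho
print(np.round(draws.mean(0), 2), round(np.corrcoef(draws.T)[0, 1], 2))
```

The sample means and correlation should sit close to the theoretical posterior values, mirroring the comparison the slide proposes.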

MCMC example

[Figure: Gibbs sampler output for (θ1, θ2) — scatterplots after the first 10 iterations and after 500 iterations, plus marginal density estimates for θ1 and θ2.]

54 / 72

Back to CAR prior

- Almost custom-made for MCMC.
- Defined for ψ_i given ψ_j for j ≠ i.
- We define neighborhood weights c_ij.

55 / 72

Complete model specification

Y_i | β, ψ_i ~ Poisson(E_i exp(x'_i β + ψ_i)), independently,

log(ζ_i) = β_0 + x_{i,TCE} β_TCE + x_{i,65} β_65 + x_{i,home} β_home + ψ_i,

β_k ~ Uniform,

ψ_i | ψ_{j≠i} ~ N( Σ_{j≠i} c_ij ψ_j / Σ_{j≠i} c_ij ,  1 / (v_CAR Σ_{j≠i} c_ij) ),   i = 1, ..., N,

1 / v_CAR ~ Gamma(0.5, 0.0005).

56 / 72

MCMC trace plots

[Figure: MCMC trace plots over 8,000 iterations for the intercept, age effect, exposure effect, and home ownership effect.]

57 / 72

Posterior densities

[Figure: posterior density estimates for the intercept, age effect, exposure effect, and home ownership effect.]

58 / 72

MCMC posterior estimates

Covariate   Posterior Median   95% Credible Set
β0           0.048             (-0.355, 0.408)
β65          3.984             (2.736, 5.330)
βTCE         0.152             (0.066, 0.226)
βhome       -0.367             (-0.758, 0.049)

59 / 72

But there’s more!

- A nifty thing about MCMC estimates: we get posterior samples from any function of the model parameters by taking that function of the sampled posterior parameter values.
- This gives us posterior inference for SMR_i = Y_{i,fit} / E_i.
- We can also get Pr[SMR_i > 200 | Y] and map these exceedance probabilities.

60 / 72
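Exceedance probabilities are just tail fractions of the posterior draws. The sketch below uses simulated draws standing in for MCMC output (not the fitted New York model), thresholding at a relative risk of 2, i.e. an SMR above 200 on the slide's percentage scale.

```python
import numpy as np

# Simulated posterior draws of log relative risk for three hypothetical regions
rng = np.random.default_rng(4)
log_zeta_draws = rng.normal(loc=[0.0, 0.5, 1.0], scale=0.3, size=(5000, 3))
zeta_draws = np.exp(log_zeta_draws)          # transform each draw, per the slide

# Pr[zeta_i > 2 | Y] = fraction of draws exceeding the threshold, one per region
exceed = (zeta_draws > 2.0).mean(axis=0)
print(np.round(exceed, 2))
```

Each region's probability is then mapped directly, which is how the exceedance-probability map on the next slide is built from the sampler output.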

Posterior median SMRs

61 / 72

Posterior exceedence probabilities

62 / 72

Example 2

- Cryptozoology Example: Waller and Carlin (2010) Disease Mapping. In Handbook of Spatial Statistics, Gelfand et al. (eds.). Boca Raton: CRC/Chapman and Hall.

63 / 72

Cryptozoology example

- County-specific reports of encounters with Sasquatch (Bigfoot).
- "...which brings us to the appropriateness of the Bigfoot example."
- Data downloaded from www.bfro.net
- Sightings from counties in Oregon and Washington (Pacific Northwest).
- Probability of a report related to population density?
- (Hopefully) rare events in small areas.
- Perhaps spatial smoothing will stabilize local rate estimates.
- Fit models with no random effects, exchangeable random effects, CAR random effects, and convolution random effects.

64 / 72

Sasquatch Data

[Figure: three county maps of the Pacific Northwest — number of reports (0 to 51; Skamania and King counties highlighted, Wasco labeled), reports per 2000 population (0.00000-0.00517), and 2000 population per square mile (0.7-1419.3); scale bar 0-800 km.]

65 / 72

Reports vs. Population Density

[Figure: reports per Census 2000 population plotted against the log of population density, with a second panel ignoring the outlier.]

66 / 72

Observed vs. Expected

[Figure: observed versus expected number of reports under a constant per-person risk of reporting, with Skamania and King counties labeled.]

67 / 72

Predicted relative risks and credible sets

[Figure: log relative risks with credible sets for each county under four models — no random effects, exchangeable, CAR, and convolution; filled circle = Skamania, filled square = Wasco.]

68 / 72

Mapped relative risks

[Figure: four maps of fitted relative risks — no random effect, exchangeable, CAR, and convolution. Legend: 0.00-1.00, 1.01-2.00, 2.01-3.00, 3.01-4.00, 4.01-15.00, approximately 70.00.]

69 / 72

Skamania Sasquatch Ordinances

- http://www.skamaniacounty.org/commissioners/homepage/ordinances-2/
- Big Foot Ordinance 69-1: "THEREFORE BE IT RESOLVED that any premeditated, willful and wanton slaying of any such creature shall be deemed a felony punishable by a fine not to exceed Ten Thousand Dollars ($10,000.00) and/or imprisonment in the county jail for a period not to exceed Five (5) years. ADOPTED this 1st day of April, 1969."
- Big Foot Ordinance 1984-2:
  - Repealed the felony and jail sentence.
  - Established a Sasquatch Refuge (Skamania County).
  - Clarified the offense (gross misdemeanor vs. misdemeanor) and the penalty (fine and jail time), disallowed the insanity defense, and clarified the distinction between a coroner designation of the victim as humanoid (murder) or anthropoid (this ordinance).

70 / 72

And...

www.amazon.com/Skamania-County-Washington-Bigfoot-Vintage/dp/B076PWN7ZM

71 / 72

Conclusions

- Which method to use depends on what data you have and what question you want to answer.
- All methods try to balance trend (fixed effects) with correlation (here, via random effects).
- All models are wrong; some models are useful.
- Trying more than one approach is often sensible.
- Few of these methods (including Monte Carlo simulation) appear in current GIS packages.

72 / 72
