
Page 1: Bootstrapping Sensitivity Analysis

Qingyuan Zhao

Department of Statistics, The Wharton School, University of Pennsylvania

June 2, 2019 @ OSU Bayesian Causal Inference Workshop

(Joint work with Bhaswar B. Bhattacharya and Dylan S. Small)

Page 2: Why sensitivity analysis?

- Unless we have a perfectly executed randomized experiment, causal inference is based on some unverifiable assumptions.

- In observational studies, the most commonly used assumption is ignorability or no unmeasured confounding:

    A ⊥⊥ Y(0), Y(1) | X.

We can only say this assumption is “plausible”.

- Sensitivity analysis asks: what if this assumption does not hold? Does our qualitative conclusion still hold?

- This question appears in many settings:

  1. Confounded observational studies.
  2. Survey sampling with missing not at random (MNAR).
  3. Longitudinal studies with non-ignorable dropout.

- In general, this means that the target parameter (e.g. average treatment effect) is only partially identified.

Page 3: Overview: Bootstrapping sensitivity analysis

Point-identified parameter: Efron's bootstrap

    Point estimator ==(Bootstrap)==> Confidence interval

Partially identified parameter: An analogy

    Extrema estimator ==(Optimization + Percentile Bootstrap + Minimax inequality)==> Confidence interval

Rest of the talk

Apply this idea to IPW estimators in a marginal sensitivity model.

Page 4: Some existing sensitivity models

Generally, we need to specify how unconfoundedness is violated.

1. Y models: Consider a specific difference between the conditional distributions Y(a) | X, A and Y(a) | X.
   - Commonly called "pattern mixture models".
   - Robins (1999, 2002); Birmingham et al. (2003); Vansteelandt et al. (2006); Daniels and Hogan (2008).

2. A models: Consider a specific difference between the conditional distributions A | X, Y(a) and A | X.
   - Commonly called "selection models".
   - Scharfstein et al. (1999); Gilbert et al. (2003).

3. Simultaneous models: Consider a range of A models and/or Y models and report the "worst case" result.
   - Cornfield et al. (1959); Rosenbaum (2002); Ding and VanderWeele (2016).

Our sensitivity model is a hybrid of the 2nd and 3rd approaches, similar to Rosenbaum's.

Page 5: Rosenbaum's sensitivity model

- Imagine there is an unobserved confounder U that "summarizes" all confounding, so A ⊥⊥ Y(0), Y(1) | X, U.

- Let e0(x, u) = P0(A = 1 | X = x, U = u).

Rosenbaum's sensitivity model

    R(Γ) = { e(x, u) : 1/Γ ≤ OR(e(x, u1), e(x, u2)) ≤ Γ, ∀ x ∈ X, u1, u2 },

where OR(p1, p2) := [p1/(1 − p1)] / [p2/(1 − p2)] is the odds ratio.

- Rosenbaum's question: can we reject the sharp null hypothesis Y(0) ≡ Y(1) for every e0(x, u) ∈ R(Γ)?

- Robins (2002): we don't need to assume the existence of U. Let U = Y(1) when the goal is to estimate E[Y(1)].

Page 6: Our sensitivity model

- Let e0(x) = P0(A = 1 | X = x) be the propensity score.

Marginal sensitivity models

    M(Γ) = { e(x, y) : 1/Γ ≤ OR(e(x, y), e0(x)) ≤ Γ, ∀ x ∈ X, y }.

- Compare this to Rosenbaum's model:

    R(Γ) = { e(x, u) : 1/Γ ≤ OR(e(x, u1), e(x, u2)) ≤ Γ, ∀ x ∈ X, u1, u2 }.

- Tan (2006) first considered this model, but he did not consider statistical inference in finite samples.

- Relationship between the two models: M(√Γ) ⊆ R(Γ) ⊆ M(Γ).¹

- For observational studies, we assume both P0(A = 1 | X, Y(1)) ∈ M(Γ) and P0(A = 1 | X, Y(0)) ∈ M(Γ).

¹The second part needs "compatibility": e(x, y) marginalizes to e0(x).

Page 7: Parametric extension

- In practice, the propensity score e0(X) = P0(A = 1 | X) is often estimated by a parametric model.

Definition (Parametric marginal sensitivity models)

    Mβ0(Γ) = { e(x, y) : 1/Γ ≤ OR(e(x, y), eβ0(x)) ≤ Γ, ∀ x ∈ X, y },

where eβ0(x) is the best parametric approximation of e0(x).

This sensitivity model covers both

1. Model misspecification, that is, eβ0(x) ≠ e0(x); and
2. Missing not at random, that is, e0(x) ≠ e0(x, y).
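
To make this concrete, here is a minimal sketch (my illustration, not from the talk) of the usual choice: fit a logistic regression for eβ0(x) and keep the fitted values on the logit scale, which the later slides perturb by h.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_logit_propensity(X, A):
    """Fit a logistic propensity model e_beta(x) = P(A = 1 | X = x) and
    return its fitted values on the logit scale, g(x) = logit(e_beta(x))."""
    model = LogisticRegression().fit(X, A)
    e_beta = model.predict_proba(X)[:, 1]      # fitted propensity scores
    return np.log(e_beta / (1 - e_beta))       # logit scale
```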

Page 8: Logistic representations

1. Rosenbaum’s sensitivity model:

    logit(e(x, u)) = g(x) + γu,

where 0 ≤ U ≤ 1 and γ = log Γ.

2. Marginal sensitivity model:

    logit(e^(h)(x, y)) = logit(e0(x)) + h(x, y),

where ‖h‖∞ = sup |h(x, y)| ≤ γ. Due to this representation, we also call it a marginal L∞-sensitivity model.

3. Parametric marginal sensitivity model:

    logit(e^(h)(x, y)) = logit(eβ0(x)) + h(x, y),

where ‖h‖∞ = sup |h(x, y)| ≤ γ.
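
A quick numerical check (my own illustration, with made-up numbers) of why the two formulations coincide: adding h to the logit multiplies the odds by e^h, so ‖h‖∞ ≤ γ = log Γ is exactly the odds-ratio constraint in M(Γ).

```python
import numpy as np

def sigmoid(t):
    return 1 / (1 + np.exp(-t))

def logit(p):
    return np.log(p / (1 - p))

Gamma = 2.0
gamma = np.log(Gamma)                         # gamma = log(Gamma)
e0 = 0.3                                      # a hypothetical value of e0(x)
for h in np.linspace(-gamma, gamma, 5):       # any shift with |h| <= gamma
    e_h = sigmoid(logit(e0) + h)              # perturbed score e^(h)(x, y)
    OR = (e_h / (1 - e_h)) / (e0 / (1 - e0))  # equals exp(h) exactly
    assert 1 / Gamma - 1e-12 <= OR <= Gamma + 1e-12
```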

Page 9: Confidence interval I

- For simplicity, consider the "missing data" problem where Y = Y(1) is only observed if A = 1.

- Observe i.i.d. samples (Ai, Xi, AiYi), i = 1, ..., n.

- The estimand is µ0 = E0[Y]; however, it is only partially identified under a simultaneous sensitivity model.

Goal 1 (Coverage of true parameter)
Construct a data-dependent interval [L, U] such that

    P0( µ0 ∈ [L, U] ) ≥ 1 − α

whenever e0(X, Y) = P0(A = 1 | X, Y) ∈ M(Γ).

Page 10: Confidence interval II

- The inverse probability weighting (IPW) identity:

    E0[Y] = E[ AY / e0(X, Y) ], which under MAR equals E[ AY / e0(X) ].

- Define

    µ^(h) = E0[ AY / e^(h)(X, Y) ].

- Partially identified region: { µ^(h) : e^(h) ∈ M(Γ) }.

Goal 2 (Coverage of partially identified region)
Construct a data-dependent interval [L, U] such that

    P0( { µ^(h) : e^(h) ∈ M(Γ) } ⊆ [L, U] ) ≥ 1 − α.

- Imbens and Manski (2004) have discussed the difference between these two Goals.

Page 11: An intuitive idea: "The Union Method"

- Suppose for any h, we have a confidence interval [L^(h), U^(h)] such that

    lim inf_{n→∞} P0( µ^(h) ∈ [L^(h), U^(h)] ) ≥ 1 − α.

- Let L = inf_{‖h‖∞ ≤ γ} L^(h) and U = sup_{‖h‖∞ ≤ γ} U^(h), so [L, U] is the union interval.

Theorem
1. [L, U] satisfies Goal 1 asymptotically.
2. Furthermore, if the intervals are "congruent", i.e. there exists α′ < α such that

       lim sup_{n→∞} P0( µ^(h) < L^(h) ) ≤ α′  and  lim sup_{n→∞} P0( µ^(h) > U^(h) ) ≤ α − α′,

   then [L, U] satisfies Goal 2 asymptotically.

Page 12: Practical challenge: How to take the union?

- Suppose g(x) is an estimate of logit(e0(x)).

- For a specific difference h, we can estimate e^(h)(x, y) by

    ê^(h)(x, y) = 1 / (1 + e^{h(x,y) − g(x)}).

- This leads to a (stabilized) IPW estimate of µ^(h):

    µ̂^(h) = [ (1/n) ∑_{i=1}^n Ai / ê^(h)(Xi, Yi) ]^(−1) · [ (1/n) ∑_{i=1}^n AiYi / ê^(h)(Xi, Yi) ].

- Under regularity conditions, Z-estimation theory tells us

    √n ( µ̂^(h) − µ^(h) ) →d N(0, (σ^(h))²).

- Therefore we can use [L^(h), U^(h)] = µ̂^(h) ∓ z_{α/2} · σ^(h) / √n.

- However, computing the union interval requires solving a complicated optimization problem.
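
For one fixed h, the estimate above is only a few lines of numpy. A minimal sketch (mine, not from the talk; g_hat holds the fitted values g(Xi) and h holds the values h(Xi, Yi)):

```python
import numpy as np

def sipw_estimate(A, Y, g_hat, h):
    """Stabilized IPW estimate of mu^(h) for one fixed sensitivity function h.
    A: binary missingness/treatment indicators; Y: outcomes (only used where
    A == 1, so fill missing entries with 0); g_hat, h: arrays of g(X_i) and
    h(X_i, Y_i) with |h| <= gamma."""
    e_h = 1.0 / (1.0 + np.exp(h - g_hat))   # e^(h)(x, y), as defined above
    num = np.mean(A * Y / e_h)              # (1/n) sum_i A_i Y_i / e^(h)_i
    den = np.mean(A / e_h)                  # (1/n) sum_i A_i / e^(h)_i
    return num / den
```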

Page 13: Bootstrapping sensitivity analysis

Point-identified parameter: Efron's bootstrap

    Point estimator ==(Bootstrap)==> Confidence interval

Partially identified parameter: An analogy

    Extrema estimator ==(Optimization + Percentile Bootstrap + Minimax inequality)==> Confidence interval

A simple procedure for simultaneous sensitivity analysis

1. Generate B random resamples of the data. For each resample, compute the extrema of the IPW estimates under Mβ0(Γ).
2. Construct the confidence interval using L = Q_{α/2} of the B minima and U = Q_{1−α/2} of the B maxima.

Theorem
[L, U] achieves Goal 2 for Mβ0(Γ) asymptotically.
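
The two steps translate directly into code. A sketch (mine, not the bootsens implementation), treating the per-resample extrema solver as a black box with the contract ipw_extrema(A, X, Y, Gamma) -> (min, max); the Computation slide below sketches one such solver.

```python
import numpy as np

def bootstrap_sensitivity_ci(A, X, Y, Gamma, ipw_extrema, B=1000, alpha=0.05, seed=0):
    """Percentile-bootstrap confidence interval [L, U] for the partially
    identified region. `ipw_extrema(A, X, Y, Gamma)` must return the
    (min, max) of the IPW estimates over the sensitivity model."""
    rng = np.random.default_rng(seed)
    n = len(A)
    minima = np.empty(B)
    maxima = np.empty(B)
    for b in range(B):
        idx = rng.integers(0, n, size=n)        # resample rows with replacement
        minima[b], maxima[b] = ipw_extrema(A[idx], X[idx], Y[idx], Gamma)
    L = np.quantile(minima, alpha / 2)          # Q_{alpha/2} of the B minima
    U = np.quantile(maxima, 1 - alpha / 2)      # Q_{1-alpha/2} of the B maxima
    return L, U
```

Given any solver with that contract, bootstrap_sensitivity_ci(A, X, Y, Gamma, ipw_extrema) reproduces the two-step procedure above.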

Page 14: Proof of the Theorem

Partially identified parameter: Three ideas

    Extrema estimator ==(Optimization + 1. Percentile Bootstrap + 2. Minimax inequality)==> Confidence interval

1. The sampling variability of µ̂^(h) can be captured by the bootstrap. The percentile bootstrap CI is given by

       [ Q_{α/2}(µ̂^(h)_b), Q_{1−α/2}(µ̂^(h)_b) ],

   where the quantiles are over bootstrap resamples b.

2. Generalized minimax inequality:

       Q_{α/2}( inf_h µ̂^(h)_b ) ≤ inf_h Q_{α/2}( µ̂^(h)_b ) ≤ sup_h Q_{1−α/2}( µ̂^(h)_b ) ≤ Q_{1−α/2}( sup_h µ̂^(h)_b ).

   The two outer quantities are the endpoints of the percentile bootstrap CI (of the extrema); the two inner quantities are the endpoints of the union CI.
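
The outer inequalities are just monotonicity of quantiles: inf_h µ̂^(h)_b ≤ µ̂^(h′)_b pointwise in b for every h′, and empirical quantiles preserve pointwise ordering. A tiny numerical check (my own, with made-up numbers and h discretized into columns):

```python
import numpy as np

rng = np.random.default_rng(1)
alpha = 0.05
M = rng.normal(size=(500, 30))    # M[b, j] plays the role of mu_hat_b^(h_j)
lhs = np.quantile(M.min(axis=1), alpha / 2)     # Q_{a/2} of row-wise infima
rhs = np.quantile(M, alpha / 2, axis=0).min()   # inf_h of columnwise quantiles
assert lhs <= rhs   # Q_{a/2}(inf_h mu_b^(h)) <= inf_h Q_{a/2}(mu_b^(h))
```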

Page 15: Computation

Partially identified parameter: Three ideas

    Extrema estimator ==(3. Optimization + Percentile Bootstrap + Minimax inequality)==> Confidence interval

3. Computing the extrema of µ̂^(h) is a linear fractional program: letting zi = e^{h(Xi, Yi)}, we just need to solve

       max or min  [ ∑_{i=1}^n AiYi (1 + zi e^{−g(Xi)}) ] / [ ∑_{i=1}^n Ai (1 + zi e^{−g(Xi)}) ],
       subject to  zi ∈ [Γ^{−1}, Γ], i = 1, ..., n.

- This can be converted to a linear program.
- Moreover, the solution z must have the same/opposite order as Y, so the time complexity can be reduced to O(n) (optimal); see the sketch below.
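
A sketch of such an extrema solver (my own, under the assumptions above): refit a logistic propensity model, sort the treated outcomes, and scan the n1 + 1 threshold splits with prefix sums, so the scan is linear after the O(n log n) sort.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def ipw_extrema(A, X, Y, Gamma):
    """(min, max) of the stabilized IPW estimate over z_i in [1/Gamma, Gamma].
    The optimal z is an endpoint threshold rule in sorted Y, so scanning the
    n1 + 1 splits suffices; prefix sums make the scan linear (the talk notes
    the overall complexity can be brought down to O(n))."""
    e_hat = LogisticRegression().fit(X, A).predict_proba(X)[:, 1]
    g_hat = np.log(e_hat / (1 - e_hat))       # fitted logit(e0(x))
    y = Y[A == 1]
    w = np.exp(-g_hat[A == 1])                # e^{-g(X_i)} for treated units
    order = np.argsort(y)
    y, w = y[order], w[order]
    n1, sy = len(y), y.sum()
    cw = np.concatenate([[0.0], np.cumsum(w)])        # prefix sums of w
    cyw = np.concatenate([[0.0], np.cumsum(y * w)])   # prefix sums of y * w
    lo, hi = np.inf, -np.inf
    for k in range(n1 + 1):
        # max: z = 1/Gamma on the k smallest Y, Gamma on the rest (z ~ Y)
        hi = max(hi, (sy + cyw[k] / Gamma + (cyw[-1] - cyw[k]) * Gamma) /
                     (n1 + cw[k] / Gamma + (cw[-1] - cw[k]) * Gamma))
        # min: the opposite ordering of z
        lo = min(lo, (sy + cyw[k] * Gamma + (cyw[-1] - cyw[k]) / Gamma) /
                     (n1 + cw[k] * Gamma + (cw[-1] - cw[k]) / Gamma))
    return lo, hi
```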

The role of the Bootstrap
Compared to the union method, the workflow is greatly simplified:

1. No need to derive σ^(h) analytically (though we could).
2. No need to optimize σ^(h) (which is very challenging).

Page 16: Comparison with Rosenbaum's sensitivity analysis

                       Rosenbaum's paradigm       New bootstrap approach
  Population           Sample                     Super-population
  Design               Matching                   Weighting
  Sensitivity model    R(Γ)                       M(Γ)   [recall M(√Γ) ⊆ R(Γ) ⊆ M(Γ)]
  Inference            Bounding the p-value       CI for ATE/ATT
  Effect modification  Constant effect            Allows for heterogeneity
  Extension            Carefully developed for    Can be applied to
                       observational studies      missing data problems

Page 17: Example

Fish consumption and blood mercury

- 873 controls: ≤ 1 serving of fish per month.
- 234 treated: ≥ 12 servings of fish per month.
- Covariates: gender, age, income (very imbalanced), race, education, ever smoked, # cigarettes.

Implementation details

- Rosenbaum's method: 1-1 matching, CI constructed by the Hodges-Lehmann method (assuming the causal effect is constant).
- Our method (percentile bootstrap): stabilized IPW for the ATT, with and without augmentation by an outcome linear regression.

Page 18: Results

- Recall that M(√Γ) ⊆ R(Γ) ⊆ M(Γ).

[Figure: point estimates and confidence intervals for the causal effect (vertical axis, 0 to 4) as Γ ranges over 1, 1.6, 2.7, 7.4, for three methods: Matching, SIPW (ATT), SAIPW (ATT).]

Figure: The solid error bars are the range of point estimates and the dashed error bars (together with the solid bars) are the confidence intervals. The circles/triangles/squares are the mid-points of the solid bars.

Page 19: Discussion: The general sensitivity analysis problem

    Extrema estimator ==(Optimization + Percentile Bootstrap + Minimax inequality)==> Confidence interval

The percentile bootstrap idea can be extended to the following problem:

    max or min  E[f(X, θ, h(X))],
    subject to  ‖h(x)‖∞ ≤ γ,

where f is a functional of the observed data X, some finite-dimensional nuisance parameter θ, and a sensitivity function h(X), as long as

- θ is "estimable" given X and h;
- the Bootstrap "works" for En[f(X, θ, h(X))], given h.

The challenges...

1. How to solve the sample version of the optimization problem?

2. Can we allow infinite-dimensional θ?

3. Can we include additional constraints such as E[g(X, θ, h(X))] ≤ 0?

Page 20: References I

Reference for this talk

- "Sensitivity analysis for inverse probability weighting estimators via the percentile bootstrap." To appear in JRSSB.
- R package: https://github.com/qingyuanzhao/bootsens.

Further references

J. Birmingham, A. Rotnitzky, and G. M. Fitzmaurice. Pattern-mixture and selection models for analysing longitudinal data with monotone missing patterns. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 65(1):275–297, 2003.

J. Cornfield, W. Haenszel, E. C. Hammond, A. M. Lilienfeld, M. B. Shimkin, and E. L. Wynder. Smoking and lung cancer: recent evidence and a discussion of some questions. Journal of the National Cancer Institute, 22(1):173–203, 1959.

M. J. Daniels and J. W. Hogan. Missing data in longitudinal studies: Strategies for Bayesian modeling and sensitivity analysis. CRC Press, 2008.

P. Ding and T. J. VanderWeele. Sensitivity analysis without assumptions. Epidemiology, 27(3):368, 2016.

P. B. Gilbert, R. J. Bosch, and M. G. Hudgens. Sensitivity analysis for the assessment of causal vaccine effects on viral load in HIV vaccine trials. Biometrics, 59(3):531–541, 2003.

G. W. Imbens and C. F. Manski. Confidence intervals for partially identified parameters. Econometrica, 72(6):1845–1857, 2004.

Page 21: References II

J. M. Robins. Association, causation, and marginal structural models. Synthese, 121(1):151–179, 1999.

J. M. Robins. Comment on "Covariance adjustment in randomized experiments and observational studies". Statistical Science, 17(3):309–321, 2002.

P. R. Rosenbaum. Observational Studies. Springer New York, 2002.

D. O. Scharfstein, A. Rotnitzky, and J. M. Robins. Adjusting for nonignorable drop-out using semiparametric nonresponse models. Journal of the American Statistical Association, 94(448):1096–1120, 1999.

Z. Tan. A distributional approach for causal inference using propensity scores. Journal of the American Statistical Association, 101(476):1619–1637, 2006.

S. Vansteelandt, E. Goetghebeur, M. G. Kenward, and G. Molenberghs. Ignorance and uncertainty regions as inferential tools in a sensitivity analysis. Statistica Sinica, 16(3):953–979, 2006.