JSM 2011 Round Table


Uncertainties within some Bayesian concepts:

Examples from class notes

Christian P. Robert

Université Paris-Dauphine, IuF, and CREST-INSEE
http://www.ceremade.dauphine.fr/~xian

July 31, 2011


Outline

Anyone not shocked by the Bayesian theory of inference has not understood it. — S. Senn, Bayesian Analysis, 2008

1 Testing

2 Fully specified models?

3 Model choice


Add: Call for vignettes

Kerrie Mengersen and I are collecting proposals towards a collection of vignettes on the theme

When is Bayesian analysis really successful?

celebrating notable achievements of Bayesian analysis. [deadline: September 30]


Bayes factors

The Jeffreys-subjective synthesis betrays a much more dangerous confusion than the Neyman-Pearson-Fisher synthesis as regards hypothesis tests — S. Senn, BA, 2008

Definition (Bayes factors)

When testing H0 : θ ∈ Θ0 vs. Ha : θ ∉ Θ0, use

B01 = [π(Θ0|x)/π(Θ0^c|x)] / [π(Θ0)/π(Θ0^c)]
    = [∫_{Θ0} f(x|θ)π0(θ) dθ] / [∫_{Θ0^c} f(x|θ)π1(θ) dθ]

[Good, 1958 & Jeffreys, 1939]

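To make the definition concrete, here is a minimal numerical sketch (not from the slides; the one-sided normal testing problem and the N(0, 2²) prior are illustrative assumptions): B01 for H0 : θ ≤ 0 against Ha : θ > 0 with a single observation x ∼ N(θ, 1), the overall prior being restricted and renormalised on each hypothesis.

```python
import numpy as np
from scipy import integrate, stats

def bayes_factor_B01(x, prior_sd=2.0):
    """B01 for H0: theta <= 0 vs Ha: theta > 0 when x ~ N(theta, 1).

    A N(0, prior_sd^2) prior is split into pi_0 and pi_1 by restricting and
    renormalising it on Theta_0 = (-inf, 0] and its complement, matching the
    definition B01 = int_{Theta_0} f pi_0 / int_{Theta_0^c} f pi_1.
    """
    prior = stats.norm(0.0, prior_sd)
    lik = lambda t: stats.norm.pdf(x, loc=t, scale=1.0)
    mass0 = prior.cdf(0.0)                      # prior probability of Theta_0
    num, _ = integrate.quad(lambda t: lik(t) * prior.pdf(t) / mass0, -np.inf, 0.0)
    den, _ = integrate.quad(lambda t: lik(t) * prior.pdf(t) / (1.0 - mass0), 0.0, np.inf)
    return num / den

for x in (0.0, 1.0, 2.0):
    print(f"x = {x:3.1f}  B01 = {bayes_factor_B01(x):.3f}")
```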


Self-contained concept

Derived from the 0–1 loss and Bayes rule: acceptance if B01 > {(1 − π(Θ0))/a1}/{π(Θ0)/a0}, but used outside the decision-theoretic environment

eliminates choice of π(Θ0)

but still depends on the choice of (π0, π1)

Jeffreys’ [arbitrary] scale of evidence:
◮ if log10(B^π_10) between 0 and 0.5, evidence against H0 weak,
◮ if log10(B^π_10) between 0.5 and 1, evidence substantial,
◮ if log10(B^π_10) between 1 and 2, evidence strong, and
◮ if log10(B^π_10) above 2, evidence decisive

convergent if used with proper statistics

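As a small helper (my own illustration, not part of the slides), Jeffreys’ scale can be encoded directly from the log10 Bayes factor:

```python
import math

def jeffreys_evidence(B10):
    """Classify a Bayes factor B10 (against H0) on Jeffreys' scale of evidence."""
    b = math.log10(B10)
    if b < 0:
        return "evidence supporting H0"
    if b <= 0.5:
        return "weak evidence against H0"
    if b <= 1:
        return "substantial evidence against H0"
    if b <= 2:
        return "strong evidence against H0"
    return "decisive evidence against H0"

print(jeffreys_evidence(3.2))    # log10(3.2) ~ 0.51 -> substantial
print(jeffreys_evidence(150.0))  # log10(150) ~ 2.18 -> decisive
```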


Difficulties with ABC-Bayes factors

‘This is also why focus on model discrimination typically (...) proceeds by (...) accepting that the Bayes Factor that one obtains is only derived from the summary statistics and may in no way correspond to that of the full model.’ — S. Sisson, Jan. 31, 2011, X.’Og

In the Poisson versus geometric case, if E[yi] = θ0 > 0,

lim_{n→∞} B^η_12(y) = (θ0 + 1)² / (θ0 e^{−θ0})


Difficulties with ABC-Bayes factors

Laplace vs. Normal models:

Comparing a sample x1, . . . , xn from the Laplace (double-exponential) L(µ, 1/√2) distribution, with density

f(x|µ) = (1/√2) exp{−√2 |x − µ|},

or from the Normal N(µ, 1) distribution


Empirical mean, median and variance have the same expectation under both models: useless!

Median absolute deviation: priceless!
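A minimal simulation sketch of this comparison (my own illustration, not from the slides: the priors, sample size, number of simulations, and tolerance are all assumptions): ABC model choice between the Normal and Laplace models, using either the empirical mean or the median absolute deviation (MAD) as summary statistic.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100                              # sample size (assumption)
y = rng.normal(0.0, 1.0, n)          # pseudo-observed data, here from the Normal model

def simulate(model, mu, size):
    """Sample from N(mu, 1) (model 0) or Laplace(mu, 1/sqrt(2)) (model 1); both have variance 1."""
    if model == 0:
        return rng.normal(mu, 1.0, size)
    return rng.laplace(mu, 1.0 / np.sqrt(2.0), size)

def mad(z):
    return np.median(np.abs(z - np.median(z)))

def abc_model_choice(summary, n_sim=50_000, tol=0.01):
    """Crude ABC model choice: uniform prior on the model index, mu ~ N(0, 2^2) (assumptions)."""
    models = rng.integers(0, 2, n_sim)
    mus = rng.normal(0.0, 2.0, n_sim)
    s_obs = summary(y)
    dist = np.array([abs(summary(simulate(m, mu, n)) - s_obs)
                     for m, mu in zip(models, mus)])
    keep = dist <= np.quantile(dist, tol)        # retain the closest simulations
    return np.mean(models[keep] == 0)            # estimate of P(Normal | summary)

print("P(Normal | mean summary):", abc_model_choice(np.mean))  # close to 0.5: uninformative
print("P(Normal | MAD summary): ", abc_model_choice(mad))      # typically well above 0.5
```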


Point null hypotheses

I have no patience for statistical methods that assign positive probability to point hypotheses of the θ = 0 type that can never actually be true — A. Gelman, BA, 2008

Particular case H0 : θ = θ0. Take ρ0 = Pr_π(θ = θ0) and π1 the prior density under Ha.

Posterior probability of H0:

π(Θ0|x) = f(x|θ0)ρ0 / ∫ f(x|θ)π(θ) dθ = f(x|θ0)ρ0 / {f(x|θ0)ρ0 + (1 − ρ0)m1(x)}

and marginal under Ha

m1(x) = ∫_{Θ1} f(x|θ)π1(θ) dθ.


Point null hypotheses (cont’d)

Example (Normal mean)

Test of H0 : θ = 0 when x ∼ N(θ, σ²): we take π1 as N(0, τ²), then

π(θ = 0|x) = [1 + (1 − ρ0)/ρ0 · √(σ²/(σ² + τ²)) · exp(τ²x²/(2σ²(σ² + τ²)))]^{−1}

Influence of τ:

τ/x     0       0.68    1.28    1.96
1       0.586   0.557   0.484   0.351
10      0.768   0.729   0.612   0.366

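A quick numerical check of the closed form (a sketch assuming ρ0 = 1/2 and σ = 1, which reproduces the τ = 1 row of the table above):

```python
import numpy as np

def post_prob_null(x, tau2, rho0=0.5, sigma2=1.0):
    """Posterior probability of H0: theta = 0 when x ~ N(theta, sigma2), with
    prior mass rho0 on theta = 0 and a N(0, tau2) prior under the alternative."""
    ratio = (1.0 - rho0) / rho0
    b = np.sqrt(sigma2 / (sigma2 + tau2)) * np.exp(
        tau2 * x**2 / (2.0 * sigma2 * (sigma2 + tau2)))
    return 1.0 / (1.0 + ratio * b)

for x in (0.0, 0.68, 1.28, 1.96):
    print(f"x = {x:4.2f}   pi(theta = 0 | x) = {post_prob_null(x, tau2=1.0):.3f}")
# prints 0.586, 0.557, 0.484, 0.351
```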


A fundamental difficulty

Improper priors are not allowed in this setting

If

∫_{Θ1} π1(dθ1) = ∞ or ∫_{Θ2} π2(dθ2) = ∞,

then either π1 or π2 cannot be coherently normalised, but the normalisation matters in the Bayes factor



Jeffreys unaware of the problem??

Example of testing for a zero normal mean:

If σ is the standard error and λ the true value, λ is 0 on q. We want a suitable form for its prior on q′. (...) Then we should take

P(q dσ|H) ∝ dσ/σ
P(q′ dσ dλ|H) ∝ f(λ/σ) dσ/σ dλ/λ

where f [is a true density] (ToP, V, §5.2).

Unavoidable fallacy of the “same” σ?!


Puzzling alternatives

When taking two normal samples x11, . . . , x1n1 and x21, . . . , x2n2 with means λ1 and λ2 and the same variance σ, testing for H0 : λ1 = λ2 gets otherworldly:

...we are really considering four hypotheses, not two as in the test for agreement of a location parameter with zero; for neither may be disturbed, or either, or both may.

ToP then uses parameters (λ, σ) in all versions of the alternative hypotheses, with

π0(λ, σ) ∝ 1/σ
π1(λ, σ, λ1) ∝ 1/[π{σ² + (λ1 − λ)²}]
π2(λ, σ, λ2) ∝ 1/[π{σ² + (λ2 − λ)²}]
π12(λ, σ, λ1, λ2) ∝ σ/[π²{σ² + (λ1 − λ)²}{σ² + (λ2 − λ)²}]


Puzzling alternatives

ToP misses the points that

1 λ does not have the same meaning under q, under q1 (= λ2) and under q2 (= λ1)

2 λ has no precise meaning under q12 [hyperparameter?]

On q12, since λ does not appear explicitly in the likelihood we can integrate it (V, §5.41).

3 even σ has a varying meaning over hypotheses

4 integrating over measures

P(q12 dσ dλ1 dλ2|H) ∝ (2/π) dσ dλ1 dλ2 / {4σ² + (λ1 − λ2)²}

simply defines a new improper prior...



Addiction to models

One potential difficulty with Bayesian analysis is its ultimate dependence on model(s) specification

π(θ|x) ∝ π(θ)f(x|θ)

While Bayesian analysis allows for model variability, pruning, improvement, comparison, embedding, &tc., there always is a basic reliance [or at least conditioning] on the “truth” of an overall model. This may sound paradoxical because of the many tools offered by Bayesian analysis; however, the method is blind once “out of the model”, in the sense that it cannot assess the validity of a model without embedding this model inside another model.


ABCµ multiple errors

[Figure: © Ratmann et al., PNAS, 2009]

No proper goodness-of-fit test

“There is not the slightest use in rejecting any hypothesis unless we can do it in favor of some definite alternative that better fits the facts.” — E. T. Jaynes, Probability Theory

While the setting

H0 : M = M0 versus Ha : M ≠ M0

is rather artificial, there is no satisfactory way of answering the question


An approximate goodness-of-fit test

Testing H0 : M = Mθ versus Ha : M ≠ Mθ

rephrased as

H0 : min_θ d(Fθ, U(0, 1)) = 0 versus Ha : min_θ d(Fθ, U(0, 1)) > 0

[Verdinelli and Wasserman, 98; Rousseau and Robert, 01]


An approximate goodness-of-fit test

Testing H0 : M = Mθ versus Ha : M ≠ Mθ

rephrased as

H0 : Fθ(x) ∼ U(0, 1) versus

Ha : Fθ(x) ∼ p0 U(0, 1) + (1 − p0) Σ_{i=1}^{k} [ωi / Σ_ℓ ωℓ] Be(αi εi, αi(1 − εi))

with

(αi, εi) ∼ [1 − exp{−(αi − 2)² − (εi − .5)²}] × exp[−1/(αi² εi(1 − εi)) − 0.2 αi²/2]

[Verdinelli and Wasserman, 98; Rousseau and Robert, 01]

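A toy numerical sketch of the distance-based rephrasing (my own illustration, not the Verdinelli–Wasserman or Rousseau–Robert construction; the normal family, the t-distributed data, and the Kolmogorov–Smirnov choice of d are assumptions): estimate min_θ d(Fθ, U(0, 1)) by minimising the KS distance between the probability integral transforms Fθ(xi) and the uniform distribution.

```python
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(1)
x = rng.standard_t(df=3, size=200)       # data generated outside the normal family

def ks_to_uniform(params):
    """KS distance between F_theta(x_i) and U(0,1) for the N(mu, sigma^2) family."""
    mu, log_sigma = params
    u = stats.norm.cdf(x, loc=mu, scale=np.exp(log_sigma))
    return stats.kstest(u, "uniform").statistic

res = optimize.minimize(ks_to_uniform, x0=[0.0, 0.0], method="Nelder-Mead")
print("estimated min_theta d(F_theta, U(0,1)):", round(res.fun, 4))
# values close to 0 are compatible with H0: some member of the family fits the data
```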

Robustness

Models only partly defined through moments

Eθ[hi(x)] = Hi(θ),  i = 1, . . .

i.e., no complete construction of the underlying model

Example (White noise in AR)

The relation

xt = ρ xt−1 + σ εt

often makes no assumption on εt besides its first two moments...

How can we run Bayesian analysis in such settings? Should we?

[Lazar, 2005; Cornuet et al., 2011, in prep.]



[back to] Bayesian model choice

Having a high relative probability does not mean that a hypothesis is true or supported by the data — A. Templeton, Mol. Ecol., 2009

The formal Bayesian approach puts probabilities all over the entire model/parameter space. This means:

allocating probabilities pi to all models Mi

defining priors πi(θi) for each parameter space Θi

picking the largest p(Mi|x) to determine the “best” model (see the sketch below)

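A minimal sketch of this mechanism (illustrative numbers only; the marginal likelihoods mi(x) are taken as given): posterior model probabilities combine the prior weights pi with the marginals, and the largest one picks the model.

```python
import numpy as np

def posterior_model_probs(prior_weights, log_marginals):
    """p(M_i | x) proportional to p_i * m_i(x), computed on the log scale for stability."""
    log_post = np.log(prior_weights) + np.asarray(log_marginals, dtype=float)
    log_post -= log_post.max()            # guard against under/overflow
    post = np.exp(log_post)
    return post / post.sum()

# three candidate models with equal prior weights and made-up log-marginals
probs = posterior_model_probs([1/3, 1/3, 1/3], [-102.3, -101.1, -104.8])
print(probs, "-> pick model", int(np.argmax(probs)) + 1)
```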

Several types of problems

Concentrate on selection perspective:

how to integrate loss function/decision/consequences

representation of parsimony/sparsity (Occam’s rule)

how to fight overfitting for nested models


Several types of problems

Incoherent methods, such as ABC, Bayes factor, or any simulation approach that treats all hypotheses as mutually exclusive, should never be used with logically overlapping hypotheses. — A. Templeton, PNAS, 2010

Choice of prior structures

adequate weights pi:
if M1 = M2 ∪ M3, p(M1) ≥ p(M2) + p(M3)?

prior distributions
◮ πi(·) defined for every i ∈ I
◮ πi(·) proper (Jeffreys)
◮ πi(·) coherent (?) for nested models

prior modelling inflation



Compatibility principle

Difficulty of simultaneously finding priors on a collection of models Mi (i ∈ I)

Easier to start from a single prior on a “big” model and to derive the others from a coherence principle

[Dawid & Lauritzen, 2000]



Projection approach

For M2 a submodel of M1, π2 can be derived as the distribution of θ2⊥(θ1) when θ1 ∼ π1(θ1) and θ2⊥(θ1) is a projection of θ1 on M2, e.g.

d(f(·|θ1), f(·|θ1⊥)) = inf_{θ2∈Θ2} d(f(·|θ1), f(·|θ2)),

where d is a divergence measure
[McCulloch & Rossi, 1992]

Or we can look instead at the posterior distribution of

d(f(·|θ1), f(·|θ1⊥))

[Goutis & Robert, 1998; Dupuis & Robert, 2001]

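A tiny worked instance of this projection (my own illustration; the choice of M1 = {N(µ1, σ1²)}, the submodel M2 = {N(0, σ²)}, and the Kullback–Leibler divergence for d are assumptions): the KL projection of N(µ1, σ1²) onto M2 is N(0, σ1² + µ1²), which the sketch checks numerically.

```python
import numpy as np
from scipy import optimize

def kl_to_zero_mean_normal(sigma2, mu1, sigma1_sq):
    """KL( N(mu1, sigma1_sq) || N(0, sigma2) ), standard closed form."""
    return 0.5 * (np.log(sigma2 / sigma1_sq) + (sigma1_sq + mu1**2) / sigma2 - 1.0)

mu1, sigma1_sq = 1.5, 2.0
res = optimize.minimize_scalar(kl_to_zero_mean_normal, bounds=(1e-3, 50.0),
                               method="bounded", args=(mu1, sigma1_sq))
print("numerical projection  sigma^2 =", round(res.x, 4))   # ~ 4.25
print("closed form  sigma1^2 + mu1^2 =", sigma1_sq + mu1**2)
```

Pushing a prior π1 on θ1 = (µ1, σ1²) through this map θ2⊥(θ1) = σ1² + µ1² is then one way to obtain the projected prior π2 described above.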


Kullback proximity

Alternative projection to the above

Definition (Compatible prior)

Given a prior π1 on a model M1 and a submodel M2, a prior π2 on M2 is compatible with π1 when it achieves the minimum Kullback divergence between the corresponding marginals m1(x; π1) = ∫_{Θ1} f1(x|θ)π1(θ) dθ and m2(x; π2) = ∫_{Θ2} f2(x|θ)π2(θ) dθ,

π2 = arg min_{π2} ∫ log( m1(x; π1) / m2(x; π2) ) m1(x; π1) dx


Difficulties

Further complicating dimensionality of test statistics is the fact that the models are often not nested, and one model may contain parameters that do not have analogues in the other models and vice versa. — A. Templeton, Mol. Ecol., 2009

Does not give a working principle when M2 is not a submodel of M1

[Perez & Berger, 2000; Cano, Salmeron & Robert, 2006]

Depends on the choice of π1

Prohibits the use of improper priors

Worse: useless in unconstrained settings...


A side remark: Zellner’s g

Use of Zellner’s g-prior in linear regression, i.e. a normal prior for β conditional on σ²,

β|σ² ∼ N(β̃, gσ²(XᵀX)⁻¹)

and a Jeffreys prior for σ²,

π(σ²) ∝ σ⁻²



Variable selection

For the hierarchical parameter γ, we use

π(γ) = ∏_{i=1}^{p} τi^{γi} (1 − τi)^{1−γi},

where τi corresponds to the prior probability that variable i is present in the model (and a priori independence between the presence/absence of variables).

Typically (?), when no prior information is available, τ1 = . . . = τp = 1/2, i.e. a uniform prior

π(γ) = 2^{−p}




Influence of g

Taking β̃ = 0_{p+1} and g large does not work

Consider the 10-predictor full model

y|β, σ² ∼ N(β0 + Σ_{i=1}^{3} βi xi + Σ_{i=1}^{3} β_{i+3} xi² + β7 x1x2 + β8 x1x3 + β9 x2x3 + β10 x1x2x3, σ² In)

where the xi's are iid U(0, 10)
[Casella & Moreno, 2004]

True model: two predictors x1 and x2, i.e. γ* = 110…0, (β0, β1, β2) = (5, 1, 3), and σ² = 4.

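A sketch of how such posterior probabilities can be computed (my own illustration, with several assumptions not stated on the slide: the sample size n = 50, the β̃ = 0 g-prior with π(σ²) ∝ σ⁻² and its closed-form marginal as in Marin & Robert (2007), the uniform prior π(γ) = 2^{−p} over the 2^10 predictor subsets with the intercept always included, and a freshly simulated dataset, so the numbers will differ from the table below):

```python
import itertools
import numpy as np

rng = np.random.default_rng(2)
n, p = 50, 10                                   # sample size n is an assumption
x = rng.uniform(0, 10, (n, 3))
# predictor columns 0..9: x1, x2, x3, x1^2, x2^2, x3^2, x1*x2, x1*x3, x2*x3, x1*x2*x3
X_full = np.column_stack([x, x**2,
                          x[:, 0] * x[:, 1], x[:, 0] * x[:, 2],
                          x[:, 1] * x[:, 2], x[:, 0] * x[:, 1] * x[:, 2]])
y = 5 + 1 * x[:, 0] + 3 * x[:, 1] + rng.normal(0, 2, n)   # true model, sigma^2 = 4

def log_marginal(cols, g):
    """log m(y | gamma, g) up to a model-independent constant, for the
    beta_tilde = 0 g-prior with pi(sigma^2) ~ 1/sigma^2 (closed form as in
    Marin & Robert, 2007); the intercept is always included."""
    X = np.column_stack([np.ones(n), X_full[:, list(cols)]])
    k1 = X.shape[1]
    Py = X @ np.linalg.solve(X.T @ X, X.T @ y)      # projection of y onto span(X)
    q = y @ y - g / (g + 1) * (y @ Py)
    return -0.5 * k1 * np.log(g + 1) - 0.5 * n * np.log(q)

def posterior_probs(g):
    """Posterior probabilities over all 2^p models under pi(gamma) = 2^(-p)."""
    models = [c for r in range(p + 1) for c in itertools.combinations(range(p), r)]
    lm = np.array([log_marginal(c, g) for c in models])
    w = np.exp(lm - lm.max())
    return models, w / w.sum()

for g in (10, 100, 10**4):
    models, probs = posterior_probs(g)
    top = int(np.argmax(probs))
    print(f"g = {g:>6}: top model {models[top]} with posterior prob {probs[top]:.3f}")
```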

Influence of g (2)

t1(γ)      g = 10     g = 100    g = 10³    g = 10⁴    g = 10⁶
0,1,2      0.04062    0.35368    0.65858    0.85895    0.98222
0,1,2,7    0.01326    0.06142    0.08395    0.04434    0.00524
0,1,2,4    0.01299    0.05310    0.05805    0.02868    0.00336
0,2,4      0.02927    0.03962    0.00409    0.00246    0.00254
0,1,2,8    0.01240    0.03833    0.01100    0.00126    0.00126


Case for a noninformative hierarchical solution

Use the same compatible informative g-prior distribution with β̃ = 0_{p+1} and a hierarchical diffuse prior distribution on g, e.g.

π(g) ∝ g^{−1} I_{N*}(g)

[Liang et al., 2007; Marin & Robert, 2007; Celeux et al., ca. 2011]

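Continuing the previous sketch (again an assumption-laden illustration that reuses n, X_full, and y from the variable-selection block above; the truncation bound G is a hypothetical choice): the diffuse prior π(g) ∝ g⁻¹ on the positive integers can be handled by summing the closed-form marginal over g.

```python
import numpy as np

def log_marginal_hier(cols, G=100_000):
    """log of sum_{g=1}^{G} g^{-1} m(y | gamma, g), up to the same constant as
    log_marginal in the earlier sketch; the truncation bound G is illustrative."""
    X = np.column_stack([np.ones(n), X_full[:, list(cols)]])
    k1 = X.shape[1]
    yPy = y @ (X @ np.linalg.solve(X.T @ X, X.T @ y))   # y' P_X y
    g = np.arange(1.0, G + 1.0)
    logm = (-0.5 * k1 * np.log(g + 1.0)
            - 0.5 * n * np.log(y @ y - g / (g + 1.0) * yPy)
            - np.log(g))                                # + log pi(g), pi(g) ~ 1/g
    m_max = logm.max()
    return m_max + np.log(np.exp(logm - m_max).sum())   # log-sum-exp over g

# ratio of (unnormalised) marginals: true model {x1, x2} versus adding a spurious x3
print(np.exp(log_marginal_hier([0, 1]) - log_marginal_hier([0, 1, 2])))
```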


Occam’s razor

Pluralitas non est ponenda sine necessitate [plurality should not be posited without necessity]

Variation is random until the contrary is shown; and new parameters in laws, when they are suggested, must be tested one at a time, unless there is specific reason to the contrary.

H. Jeffreys, ToP, 1939

No well-accepted implementation behind the principle... besides the fact that the Bayes factor naturally penalises larger models
