
Bayesian Density Regression

Author: David B. Dunson and Natesh Pillai

Presenter: Ya Xue

April 28, 2006

Outline

• Key idea

• Proof

• Application to HME

Bayesian Density Regression with Standard DP

• The regression model (i = 1, ..., n):

$f(y_i \mid x_i) = \int f(y_i \mid x_i, \phi_i)\, dp(\phi_i)$

• Two cases:

1. Parametric model: $\phi_i \sim p(\phi_i)$, a known parametric prior, with $y_i \sim f(y_i \mid x_i, \phi_i)$.

2. Standard Dirichlet process mixture model: $p(\phi_i) = G$, an unknown random measure assigned a Dirichlet process prior.
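To make case 2 concrete, here is a minimal Python sketch of the standard DP prior on the subject-specific parameters: it approximates a draw $G \sim DP(\alpha G_0)$ by truncated stick-breaking and then samples $\phi_1, \dots, \phi_n \sim G$. The Gaussian base measure, the truncation level, and all numeric settings are assumptions of the sketch, not taken from the slides.

```python
import numpy as np

rng = np.random.default_rng(0)

def truncated_dp_draw(alpha, g0_sampler, K=50):
    """Approximate G ~ DP(alpha * G0) by truncated stick-breaking:
    G = sum_k pi_k * delta_{theta_k}, with theta_k ~ G0."""
    v = rng.beta(1.0, alpha, size=K)                     # stick-breaking fractions
    pi = v * np.concatenate(([1.0], np.cumprod(1.0 - v)[:-1]))
    theta = g0_sampler(K)                                # atom locations drawn from G0
    return pi / pi.sum(), theta                          # renormalize away truncation error

# Assumed base measure G0 = N(0, 1) for the subject-specific parameters phi_i.
alpha = 1.0
pi, theta = truncated_dp_draw(alpha, lambda k: rng.normal(0.0, 1.0, size=k))

# phi_1, ..., phi_n drawn i.i.d. from the single realization G:
n = 10
phi = theta[rng.choice(len(pi), size=n, p=pi)]
print(phi)   # ties among the phi_i reflect the discreteness of G
```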

Bayesian Density Regression with Standard DP

• Model:

$p(y_i = 1 \mid x_i, \beta_i) = \Phi(x_i^T \beta_i), \qquad \beta_i \sim G, \qquad G \sim DP(\alpha, G_0), \qquad i = 1, \dots, N$

• The DP prior automatically determines the degree of shrinkage among the subject-specific parameters: the $\beta_i$ are clustered and so share a small number of distinct values.
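Below is a generative sketch of the model as reconstructed above. The probit link, the $N(0, I)$ base measure, and the numeric settings are assumptions of this sketch; the point it illustrates is the shrinkage in the second bullet: because $G$ is almost surely discrete, the $\beta_i$ collapse onto a few shared values.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

# Assumed sizes and hyperparameters (not from the slides).
N, p, alpha, K = 200, 2, 1.0, 50

# G ~ DP(alpha, G0) via truncated stick-breaking, with base measure G0 = N(0, I_p).
v = rng.beta(1.0, alpha, size=K)
w = v * np.concatenate(([1.0], np.cumprod(1.0 - v)[:-1]))
w /= w.sum()
atoms = rng.normal(size=(K, p))                 # candidate regression coefficients

# beta_i ~ G: each subject picks an atom, so the beta_i share a few distinct values.
z = rng.choice(K, size=N, p=w)
beta = atoms[z]

# y_i | x_i, beta_i ~ Bernoulli(Phi(x_i' beta_i)); probit link assumed in this sketch.
X = np.column_stack([np.ones(N), rng.uniform(-1.0, 1.0, size=N)])
y = rng.binomial(1, norm.cdf(np.sum(X * beta, axis=1)))

print("distinct beta vectors actually used:", len(np.unique(z)))
```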

Polya Urn Model

• Standard Polya urn model (here $\phi^{(i)} = \{\phi_j : j \neq i\}$):

$(\phi_i \mid \phi^{(i)}, \alpha, G_0, X) \sim \left(\frac{\alpha}{\alpha + n - 1}\right) G_0 + \sum_{j \neq i} \left(\frac{1}{\alpha + n - 1}\right) \delta_{\phi_j}$

• This paper proposes a generalized Polya urn model:

$(\phi_i \mid \phi^{(i)}, \alpha, G_0, X) \sim \left(\frac{\alpha}{\alpha + \sum_{j \neq i} w_{ij}}\right) G_0 + \sum_{j \neq i} \left(\frac{w_{ij}}{\alpha + \sum_{j' \neq i} w_{ij'}}\right) \delta_{\phi_j} \qquad (1)$

where $w_{ij} = w(x_i, x_j)$ is a kernel function, $w_{ij} \to 0$ monotonically as $d(x_i, x_j)$ increases, and $\lim_{x_j \to x_i} w_{ij} = 1$.
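The two urn schemes differ only in how the existing $\phi_j$ are weighted, which the sequential sketch below makes explicit. The Gaussian kernel used for $w_{ij}$ is just one choice satisfying the stated conditions (it tends to 0 as the distance grows and to 1 as $x_j \to x_i$); it is not necessarily the kernel used in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

def polya_urn(x, alpha, g0_sampler, kernel=None):
    """Draw phi_1, ..., phi_n sequentially from a Polya urn.
    kernel=None gives the standard urn (every previous phi_j has weight 1);
    otherwise w_ij = kernel(x_i, x_j) down-weights subjects with distant predictors."""
    n = len(x)
    phi = np.empty(n)
    phi[0] = g0_sampler()
    for i in range(1, n):
        w = np.ones(i) if kernel is None else np.array(
            [kernel(x[i], x[j]) for j in range(i)])
        probs = np.concatenate(([alpha], w))
        probs = probs / probs.sum()        # alpha -> fresh draw from G0, w_ij -> copy phi_j
        pick = rng.choice(i + 1, p=probs)
        phi[i] = g0_sampler() if pick == 0 else phi[pick - 1]
    return phi

x = rng.uniform(0.0, 1.0, size=30)
g0 = lambda: rng.normal()
phi_standard = polya_urn(x, alpha=1.0, g0_sampler=g0)
phi_general = polya_urn(x, alpha=1.0, g0_sampler=g0,
                        kernel=lambda a, b: np.exp(-10.0 * (a - b) ** 2))
print(len(np.unique(phi_standard)), len(np.unique(phi_general)))
```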

Idea – Spatial DP

Equation (1) implies:

• The prior probability of setting $\phi_i = \phi_j$ decreases as $d(x_i, x_j)$ increases.

• The prior probability of $\phi_i \in \phi^{(i)}$ increases as more neighbors are added that have predictor values $x_j$ close to $x_i$.

• The expected prior probability of $\phi_i \notin \phi^{(i)}$ increases in proportion to the hyperparameter $\alpha$.
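These three monotonicity claims can be checked directly from the allocation probabilities implied by equation (1). The sketch below does this for a toy configuration; the squared-exponential kernel is an assumed example, not the paper's.

```python
import numpy as np

def allocation_probs(x, i, alpha, kernel):
    """Prior allocation probabilities for phi_i implied by equation (1):
    returns (P(phi_i is a fresh draw from G0), P(phi_i = phi_j) for each j != i)."""
    w = np.array([kernel(x[i], x[j]) for j in range(len(x)) if j != i])
    denom = alpha + w.sum()
    return alpha / denom, w / denom

kernel = lambda a, b: np.exp(-((a - b) ** 2))   # assumed kernel meeting the stated conditions

# 1. Moving x_j away from x_i lowers P(phi_i = phi_j).
print(allocation_probs(np.array([0.0, 0.1]), i=0, alpha=1.0, kernel=kernel))
print(allocation_probs(np.array([0.0, 2.0]), i=0, alpha=1.0, kernel=kernel))

# 2. Adding neighbors with x_j close to x_i raises P(phi_i shared with a neighbor).
print(allocation_probs(np.array([0.0, 0.1, 0.1, 0.1]), i=0, alpha=1.0, kernel=kernel))

# 3. Increasing alpha raises the probability that phi_i is an innovation.
print(allocation_probs(np.array([0.0, 0.1]), i=0, alpha=5.0, kernel=kernel))
```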

Outline

• Key idea

• Proof

• Application to HME

Spatially Varying Regression Model

• The model:

$f(y_i \mid x_i) = \int f(y_i \mid x_i, \phi_i)\, dG_{x_i}(\phi_i)$

• At a given location $x_i$ in the feature space, $G_{x_i}$ is a mixture of an innovation random measure $G^*_{x_i}$ and neighboring random measures $G^*_{x_j}$, where $j \sim i$ indexes neighboring samples.
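One way to read this slide in code: each location carries its own innovation measure $G^*_{x_j}$ (a DP draw), and $G_{x_i}$ mixes the local innovation measure with the neighbors' measures. In the sketch below the mixing weights are taken proportional to $\alpha$ and $w_{ij}$, by analogy with the generalized urn; that weighting, the kernel, and the stick-breaking truncation are all assumptions of the sketch rather than the paper's exact specification.

```python
import numpy as np

rng = np.random.default_rng(3)

def stick_breaking(alpha, K, g0):
    """One truncated draw G* = sum_k pi_k delta_{theta_k} from DP(alpha * G0)."""
    v = rng.beta(1.0, alpha, size=K)
    pi = v * np.concatenate(([1.0], np.cumprod(1.0 - v)[:-1]))
    return pi / pi.sum(), g0(K)

# Predictor locations and an assumed Gaussian kernel giving the neighbor weights w_ij.
x = np.array([0.0, 0.2, 1.5])
kernel = lambda a, b: np.exp(-5.0 * (a - b) ** 2)
alpha, K = 1.0, 30
g0 = lambda k: rng.normal(size=k)

# Independent innovation measures G*_{x_j}, one per sample location.
stars = [stick_breaking(alpha, K, g0) for _ in x]

def G_x(i):
    """G_{x_i}: convex mixture of the local innovation measure and the neighbors'
    measures, with weights proportional to alpha and w_ij (an assumption here)."""
    w = np.array([alpha if j == i else kernel(x[i], x[j]) for j in range(len(x))])
    w = w / w.sum()
    probs = np.concatenate([wj * pi for wj, (pi, _) in zip(w, stars)])
    atoms = np.concatenate([theta for _, theta in stars])
    return probs, atoms

probs, atoms = G_x(0)
phi_i = atoms[rng.choice(len(probs), p=probs)]   # a draw phi_i ~ G_{x_i}
print(phi_i)
```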

Theorem 1

Hierarchical Model

• The hierarchical form

• Let $I_j = \{i : Z_i = j\} \subset \{1, \dots, n\}$ denote the index set of subjects drawn from the $j$th mixture component, for $j = 1, \dots, n$. Then we have $\phi_i \sim G^*_{x_j}$ for $i \in I_j$.

• Conditioning on $Z$, we can use the Polya urn result to obtain the conditional prior (2).

• Only the subvector of elements of $\phi^{(i)}$ belonging to $I_{Z_i}$ is informative.

Conditional Distribution

Marginalize over Z

• We obtain the following generalization of the Polya urn scheme:

(a)

(b)

where $m_{ij} = 1$ if samples $i$ and $j$ belong to the same mixture component.

Example

• For example, $n = 4$: the prior $p(m_i)$ under (a) and (b).

Rewrite Equation (2)

• Let

• Then Eqn. (2) can be expressed as

(3)

Theorem 4

Hence, Eqn. (3) is equivalent to

$(\phi_i \mid \phi^{(i)}, X, B, \alpha, G_0) \sim \left(\frac{\alpha}{\alpha + \sum_{j \neq i} w_{ij}}\right) G_0 + \sum_{j \neq i} \left(\frac{w_{ij}}{\alpha + \sum_{j' \neq i} w_{ij'}}\right) \delta_{\phi_j}$

Predictive distribution

Outline

• Key idea

• Proof

• Application to HME

Mixture Model

• We simulate data from a mixture of two normal linear regression models:

$f(y_i \mid x_i) = e^{-2 x_i}\, N(y_i;\, x_i,\, 0.01) + (1 - e^{-2 x_i})\, N(y_i;\, x_i^4,\, 0.04)$

• Poor results are obtained using the standard DP mixture model.
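For reference, a short Python sketch that simulates data from the mixture density reconstructed above; the sample size and the Uniform(0, 1) predictor distribution are assumptions of the sketch.

```python
import numpy as np

rng = np.random.default_rng(4)

def simulate(n=500):
    """Simulate (x_i, y_i) from
    f(y|x) = exp(-2x) N(y; x, 0.01) + (1 - exp(-2x)) N(y; x^4, 0.04).
    The Uniform(0, 1) predictor distribution is an assumption of this sketch."""
    x = rng.uniform(0.0, 1.0, size=n)
    w1 = np.exp(-2.0 * x)                  # predictor-dependent weight of component 1
    comp1 = rng.random(n) < w1             # latent component indicator
    mean = np.where(comp1, x, x ** 4)
    sd = np.where(comp1, np.sqrt(0.01), np.sqrt(0.04))
    return x, rng.normal(mean, sd)

x, y = simulate()
print(x[:5], y[:5])
```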