Zen, and the Art of Neural Decoding using an EM Algorithm Parameterized Kalman Filter and Gaussian Spatial Smoothing

Michael Prerau, MS


TRANSCRIPT

Page 1

Zen, and the Art of Neural Decoding using an EM Algorithm

Parameterized Kalman Filter and Gaussian Spatial Smoothing

Michael Prerau, MS

Page 2

Encoding/Decoding Process

Generate a smoothed Gaussian white noise stimulus

Generate a random kernel, D, and convolve it with the stimulus to generate a spike rate

Drive a Poisson spike generator with that rate

Decode the spike output and find the kernel K

Use K to decode responses to new stimuli in “real time” (see the code sketch below)

$\tilde{K}(\omega) = \dfrac{\tilde{Q}_{rs}(\omega)}{\tilde{Q}_{rr}(\omega)}$, where the rate is the convolution of the stimulus with D (rate = stim ∗ D).
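A minimal NumPy sketch of this encode/decode pipeline. The bin size, kernel length, rectified-linear rate scaling, and the small ridge term in the spectral division are all illustrative assumptions, not values from the slides:

```python
import numpy as np

rng = np.random.default_rng(0)
n, dt = 5000, 0.001                     # assumed number of bins and bin size (s)

# 1. Smoothed Gaussian white noise stimulus
g = np.exp(-0.5 * (np.arange(-25, 26) / 5.0) ** 2)
stim = np.convolve(rng.standard_normal(n), g / g.sum(), mode="same")

# 2. Random kernel D convolved with the stimulus gives a (rectified) spike rate
D = rng.standard_normal(40)
rate = 20.0 * np.maximum(np.convolve(stim, D, mode="full")[:n], 0.0)

# 3. Drive a Poisson spike generator
spikes = rng.poisson(rate * dt)

# 4. Decode: estimate the reconstruction kernel K from the rate-stimulus
#    cross-spectrum and the response power spectrum (ridge term for stability)
r = spikes / dt - np.mean(spikes / dt)
R, S = np.fft.rfft(r), np.fft.rfft(stim)
K = np.fft.irfft(np.conj(R) * S / (np.abs(R) ** 2 + 1e-6), n=n)

# 5. "Real time" decoding: convolve a response with K to reconstruct the stimulus
s_est = np.fft.irfft(np.fft.rfft(r) * np.fft.rfft(K), n=n)
```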

Page 3

Encoding/Decoding Process

[Block diagram with two paths.

Encode: Gaussian noise stimulus, random D, cell matrix, Poisson spike generator, calculate kernel K.

Decode (“real time”): kernel K, cell matrix, Poisson spike generator, stimulus output.]

Page 4

Encoding/Decoding

Page 5

Stimulus

Page 6

Decoded Estimate

Page 7

State-Space Modeling

Hidden state $S_k = (x, y)$: where Sputnik really is, $(x, y)$

Observations $O_k = (o_x, o_y)$: what the towers see

State equation $S_k = \mathrm{PHYSICS}(S_{k-1}) + \varepsilon_s$: how Sputnik ideally moves

Observation equation $O_k = S_k + \varepsilon_o$: if we knew where Sputnik was, how would that relate to our observations?

Parameters: $\{\mathrm{PHYSICS}, \sigma_s, \sigma_o\}$
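A tiny simulation can make the two equations concrete; the constant-velocity "physics" and the noise scales below are assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(1)
sigma_s, sigma_o = 0.05, 0.5            # assumed state and observation noise scales
velocity = np.array([1.0, 0.5])         # assumed constant-velocity "physics"

S = np.zeros((100, 2))                  # hidden states S_k = (x, y): where Sputnik really is
O = np.zeros((100, 2))                  # observations O_k: what the towers see
for k in range(1, 100):
    S[k] = S[k - 1] + velocity + sigma_s * rng.standard_normal(2)   # state equation
    O[k] = S[k] + sigma_o * rng.standard_normal(2)                  # observation equation
```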

Page 8

State-Space Modeling

[Figure: observations and the corresponding state estimate over time.]

Page 9

The Kalman Filter

Gaussian state: the actual stimulus intensity

Gaussian observations: the filtered estimate

State equation: $x_k = A x_{k-1} + w_k$

Observation equation: $z_k = H x_k + v_k$

State estimate: $\hat{x}_k = \hat{x}_k^{-} + K_k (z_k - H \hat{x}_k^{-})$
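A minimal sketch of one predict/update step for these matrix equations, with Q and R standing for the covariances of the state and observation noise (this notation is assumed, not taken from the slide):

```python
import numpy as np

def kalman_step(x_prev, P_prev, z, A, H, Q, R):
    """One predict/update step for x_k = A x_{k-1} + w_k, z_k = H x_k + v_k."""
    # Predict
    x_pred = A @ x_prev
    P_pred = A @ P_prev @ A.T + Q
    # Update: Kalman gain K_k, then x_hat_k = x_pred + K_k (z_k - H x_pred)
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_hat = x_pred + K @ (z - H @ x_pred)
    P_hat = (np.eye(len(x_prev)) - K @ H) @ P_pred
    return x_hat, P_hat
```

Repeated over k, this produces the filtered state estimates shown on the following slides.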

Page 10

The Kalman Filter: Application to the Intensity Estimate

State equation (random-walk AR model): $x_k = \rho x_{k-1} + \varepsilon_k$, where $\varepsilon_k \sim N(0, \sigma_\varepsilon^2)$

Observation equation (linear model): $z_k = \alpha x_k + v_k$, where $v_k \sim N(0, \sigma_v^2)$

Parameters: $\theta = (\rho, \alpha, \sigma_\varepsilon^2, \sigma_v^2)$
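A short simulation of this scalar model; the parameter values are illustrative assumptions, not estimates from the slides:

```python
import numpy as np

rng = np.random.default_rng(2)
K_steps = 1000
rho, alpha = 0.99, 1.0              # assumed AR coefficient and observation gain
sig_eps, sig_v = 0.1, 0.5           # assumed process and observation noise SDs

x = np.zeros(K_steps)               # hidden intensity
z = np.zeros(K_steps)               # noisy observations
for k in range(1, K_steps):
    x[k] = rho * x[k - 1] + sig_eps * rng.standard_normal()   # random-walk AR state
    z[k] = alpha * x[k] + sig_v * rng.standard_normal()       # linear observation
```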

Page 11

Complete Data Likelihood

The Kalman Filter: Application to the Intensity Estimate

$p(Z, x \mid \theta) = \prod_{k=1}^{K} (2\pi\sigma_v^2)^{-1/2} \exp\{-(2\sigma_v^2)^{-1}(z_k - \alpha x_k)^2\} \prod_{k=1}^{K} (2\pi\sigma_\varepsilon^2)^{-1/2} \exp\{-(2\sigma_\varepsilon^2)^{-1}(x_k - \rho x_{k-1})^2\}$

Log-likelihood at step $k$ (the past data summarized by the one-step prediction $x_{k|k-1}$ and its variance $\sigma_{k|k-1}^2$):

$\log p(x_k \mid Z_{1:k}) = R - \dfrac{(x_k - x_{k|k-1})^2}{2\sigma_{k|k-1}^2} - \dfrac{(z_k - \alpha x_k)^2}{2\sigma_v^2}$, where $R$ collects terms that do not depend on $x_k$.
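Written as a function, the complete-data log-likelihood above is just two Gaussian sums; the indexing convention (starting both products at k = 1 with the initial state fixed) is an assumption:

```python
import numpy as np

def complete_data_loglik(x, z, rho, alpha, sig_eps2, sig_v2):
    """log p(Z, x | theta) for x_k = rho x_{k-1} + eps_k, z_k = alpha x_k + v_k."""
    dx = x[1:] - rho * x[:-1]          # state residuals
    dz = z[1:] - alpha * x[1:]         # observation residuals
    K = len(dx)
    ll_state = -0.5 * K * np.log(2 * np.pi * sig_eps2) - np.sum(dx ** 2) / (2 * sig_eps2)
    ll_obs = -0.5 * K * np.log(2 * np.pi * sig_v2) - np.sum(dz ** 2) / (2 * sig_v2)
    return ll_state + ll_obs
```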

Page 12

Forward Filter Derivation

$\log p(x_k \mid Z_{1:k}) = R - \dfrac{(x_k - x_{k|k-1})^2}{2\sigma_{k|k-1}^2} - \dfrac{(z_k - \alpha x_k)^2}{2\sigma_v^2}$

The most likely hidden state maximizes the log-likelihood. Differentiate with respect to $x_k$ and set to zero:

$0 = \dfrac{\partial \log p(x_k \mid Z_{1:k})}{\partial x_k} = -\dfrac{x_k - x_{k|k-1}}{\sigma_{k|k-1}^2} + \dfrac{\alpha (z_k - \alpha x_k)}{\sigma_v^2}$

Maximize for $x_k$ and solve:

$\hat{x}_{k|k} = \dfrac{\sigma_v^2\, x_{k|k-1} + \alpha\, \sigma_{k|k-1}^2\, z_k}{\sigma_v^2 + \alpha^2 \sigma_{k|k-1}^2}$

Arrange Kalman style:

$\hat{x}_{k|k} = x_{k|k-1} + \dfrac{\alpha\, \sigma_{k|k-1}^2}{\sigma_v^2 + \alpha^2 \sigma_{k|k-1}^2}\,(z_k - \alpha\, x_{k|k-1})$

Page 13

Forward Filter Derivation

For the hidden state variance, first take the second derivative of the log-likelihood:

$\dfrac{\partial^2 \log p(x_k \mid Z_{1:k})}{\partial x_k^2} = -\dfrac{1}{\sigma_{k|k-1}^2} - \dfrac{\alpha^2}{\sigma_v^2}$

Then take the negative of the inverse for the variance of the hidden state:

$\hat{\sigma}_{k|k}^2 = \left( \dfrac{1}{\sigma_{k|k-1}^2} + \dfrac{\alpha^2}{\sigma_v^2} \right)^{-1} = \dfrac{\sigma_v^2\, \sigma_{k|k-1}^2}{\sigma_v^2 + \alpha^2 \sigma_{k|k-1}^2}$
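The two closed-form expressions translate directly into a forward filter. A sketch under the model above; the prediction step ($x_{k|k-1} = \rho x_{k-1|k-1}$, $\sigma_{k|k-1}^2 = \rho^2 \sigma_{k-1|k-1}^2 + \sigma_\varepsilon^2$) is the standard one and is assumed here rather than taken from the slides:

```python
import numpy as np

def forward_filter(z, rho, alpha, sig_eps2, sig_v2, x0=0.0, var0=1.0):
    """Scalar forward (Kalman) filter using the posterior mode and variance derived above."""
    K = len(z)
    x_post = np.zeros(K)
    var_post = np.zeros(K)
    x_prev, var_prev = x0, var0
    for k in range(K):
        # One-step prediction
        x_pred = rho * x_prev
        var_pred = rho ** 2 * var_prev + sig_eps2
        # Posterior mode: maximizes log p(x_k | Z_{1:k})
        x_post[k] = (sig_v2 * x_pred + alpha * var_pred * z[k]) / (sig_v2 + alpha ** 2 * var_pred)
        # Posterior variance: negative inverse of the second derivative
        var_post[k] = sig_v2 * var_pred / (sig_v2 + alpha ** 2 * var_pred)
        x_prev, var_prev = x_post[k], var_post[k]
    return x_post, var_post
```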

Page 14

The EM Algorithm

Suppose we don’t know the parameter values? Use the Expectation-Maximization (EM) algorithm (Dempster, Laird, and Rubin, 1977), an iterative maximization procedure (outlined in the sketch below):

E-step: take the expected value of the state process given the current parameters

M-step: maximize for the most likely parameters given the estimated state values
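Schematically, the iteration looks like the loop below; e_step and m_step are hypothetical stand-ins for the computations on the next slides, and the stopping rule is an assumption:

```python
def em(z, theta0, n_iter=50, tol=1e-6):
    """Generic EM loop: E-step smooths the states given theta, M-step re-estimates theta."""
    theta = theta0
    for _ in range(n_iter):
        stats = e_step(z, theta)        # hypothetical: expected states and second moments
        theta_new = m_step(z, stats)    # hypothetical: closed-form parameter updates
        if max(abs(a - b) for a, b in zip(theta_new, theta)) < tol:
            theta = theta_new
            break
        theta = theta_new
    return theta
```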

Page 15

E-Step for Intensity Model

Take the expected value of the joint likelihood:

$E\!\left[\log p(Z, x \mid \theta) \mid Z, \theta^{(\ell)}\right] = E\!\left[-\tfrac{K}{2}\log(2\pi\sigma_v^2) - \tfrac{1}{2\sigma_v^2}\sum_{k=1}^{K}(z_k - \alpha x_k)^2 \,\Big|\, Z, \theta^{(\ell)}\right] + E\!\left[-\tfrac{K}{2}\log(2\pi\sigma_\varepsilon^2) - \tfrac{1}{2\sigma_\varepsilon^2}\sum_{k=1}^{K}(x_k - \rho x_{k-1})^2 \,\Big|\, Z, \theta^{(\ell)}\right]$

We will encounter terms such as:

$x_{k|K}^{(\ell)} = E\!\left[x_k \mid Z, \theta^{(\ell)}\right], \quad W_{k|K}^{(\ell)} = E\!\left[x_k^2 \mid Z, \theta^{(\ell)}\right], \quad W_{k,k-1|K}^{(\ell)} = E\!\left[x_k x_{k-1} \mid Z, \theta^{(\ell)}\right]$

These can be solved with the state-space covariance algorithm (De Jong and MacKinnon, 1988).
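One way to obtain these terms is a forward filter followed by a fixed-interval (RTS) smoother, using the identity $\mathrm{Cov}(x_k, x_{k-1} \mid Z) = J_{k-1}\,\sigma_{k|K}^2$ for the cross term; this is a standard construction and may differ in detail from the covariance algorithm cited on the slide:

```python
import numpy as np

def e_step(z, rho, alpha, sig_eps2, sig_v2, x0=0.0, var0=1.0):
    """Return x_{k|K}, W_{k|K} = E[x_k^2 | Z], and W_{k,k-1|K} = E[x_k x_{k-1} | Z]."""
    K = len(z)
    x_pred = np.zeros(K)
    var_pred = np.zeros(K)
    x_filt = np.zeros(K)
    var_filt = np.zeros(K)
    xp, vp = x0, var0
    for k in range(K):                               # forward (filter) pass
        x_pred[k] = rho * xp
        var_pred[k] = rho ** 2 * vp + sig_eps2
        gain = alpha * var_pred[k] / (alpha ** 2 * var_pred[k] + sig_v2)
        x_filt[k] = x_pred[k] + gain * (z[k] - alpha * x_pred[k])
        var_filt[k] = (1.0 - gain * alpha) * var_pred[k]
        xp, vp = x_filt[k], var_filt[k]

    x_sm = x_filt.copy()
    var_sm = var_filt.copy()
    cov_lag1 = np.zeros(K)                           # Cov(x_k, x_{k-1} | Z_{1:K})
    for k in range(K - 2, -1, -1):                   # backward (smoother) pass
        J = rho * var_filt[k] / var_pred[k + 1]
        x_sm[k] = x_filt[k] + J * (x_sm[k + 1] - x_pred[k + 1])
        var_sm[k] = var_filt[k] + J ** 2 * (var_sm[k + 1] - var_pred[k + 1])
        cov_lag1[k + 1] = J * var_sm[k + 1]

    W = var_sm + x_sm ** 2                           # E[x_k^2 | Z]
    W_lag = np.zeros(K)
    W_lag[1:] = cov_lag1[1:] + x_sm[1:] * x_sm[:-1]  # E[x_k x_{k-1} | Z]
    return x_sm, W, W_lag
```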

Page 16

M-Step for Intensity Model

For the M-step, maximize the expected log-likelihood with respect to each parameter, set the derivative equal to zero, and solve.

Example (observation noise variance $\sigma_v^2$): the observation term of the expected log-likelihood is

$E\!\left[-\tfrac{K}{2}\log(2\pi\sigma_v^2) - \tfrac{1}{2\sigma_v^2}\sum_{k=1}^{K}(z_k - \alpha x_k)^2 \,\Big|\, Z, \theta^{(\ell)}\right] = -\tfrac{K}{2}\log(2\pi\sigma_v^2) - \tfrac{1}{2\sigma_v^2}\sum_{k=1}^{K}\left(z_k^2 - 2\alpha^{(\ell+1)} z_k x_{k|K} + \alpha^{2(\ell+1)} W_{k|K}\right)$

Differentiate with respect to $\sigma_v^2$ and set equal to zero:

$0 = -\dfrac{K}{2\sigma_v^2} + \dfrac{1}{2(\sigma_v^2)^2}\sum_{k=1}^{K}\left(z_k^2 - 2\alpha^{(\ell+1)} z_k x_{k|K} + \alpha^{2(\ell+1)} W_{k|K}\right)$

Solving gives

$\sigma_v^{2\,(\ell+1)} = \dfrac{1}{K}\sum_{k=1}^{K}\left(z_k^2 - 2\alpha^{(\ell+1)} z_k x_{k|K} + \alpha^{2(\ell+1)} W_{k|K}\right)$

Page 17

M-Step for Intensity Model

M-Step Summary:

$\rho^{(\ell+1)} = \dfrac{\sum_{k=1}^{K} W_{k,k-1|K}}{\sum_{k=1}^{K} W_{k-1|K}}$

$\sigma_\varepsilon^{2\,(\ell+1)} = \dfrac{1}{K}\sum_{k=1}^{K}\left(W_{k|K} - 2\rho^{(\ell+1)} W_{k,k-1|K} + \rho^{2(\ell+1)} W_{k-1|K}\right)$

$\alpha^{(\ell+1)} = \dfrac{\sum_{k=1}^{K} x_{k|K} z_k}{\sum_{k=1}^{K} W_{k|K}}$

$\sigma_v^{2\,(\ell+1)} = \dfrac{1}{K}\sum_{k=1}^{K}\left(z_k^2 - 2\alpha^{(\ell+1)} z_k x_{k|K} + \alpha^{2(\ell+1)} W_{k|K}\right)$
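The four updates written out using the E-step outputs from the earlier sketch (x_sm for $x_{k|K}$, W for $W_{k|K}$, W_lag for $W_{k,k-1|K}$); the divisor for the process-noise variance uses the number of summed terms, a convention detail the slides do not resolve:

```python
import numpy as np

def m_step(z, x_sm, W, W_lag):
    """Closed-form M-step for (rho, sig_eps2, alpha, sig_v2) in the scalar intensity model."""
    K = len(z)
    # AR coefficient: lag-one cross moments over lagged second moments
    rho = np.sum(W_lag[1:]) / np.sum(W[:-1])
    # Process noise variance
    sig_eps2 = np.sum(W[1:] - 2.0 * rho * W_lag[1:] + rho ** 2 * W[:-1]) / (K - 1)
    # Observation gain: regression of z on the smoothed state
    alpha = np.sum(z * x_sm) / np.sum(W)
    # Observation noise variance
    sig_v2 = np.sum(z ** 2 - 2.0 * alpha * z * x_sm + alpha ** 2 * W) / K
    return rho, sig_eps2, alpha, sig_v2
```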

Page 18

The EM Algorithm

Page 19

The EM Algorithm

Page 20

Kalman Estimate

Page 21

2D Gaussian Spatial Smoothing

[Figure: the decoded estimate, a 2D Gaussian kernel, and their convolution.]
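A sketch of the 2D Gaussian spatial smoothing step, treating the decoded estimate as a 2D array of per-pixel intensities; the kernel width and radius are assumed values:

```python
import numpy as np

def gaussian_smooth_2d(img, sigma=1.5, radius=4):
    """Convolve a 2D decoded estimate with a normalized 2D Gaussian kernel (zero padding)."""
    ax = np.arange(-radius, radius + 1)
    xx, yy = np.meshgrid(ax, ax)
    kernel = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    kernel /= kernel.sum()

    padded = np.pad(img, radius)
    out = np.zeros(img.shape, dtype=float)
    n_rows, n_cols = img.shape
    for i in range(n_rows):
        for j in range(n_cols):
            window = padded[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            out[i, j] = np.sum(window * kernel)
    return out
```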

Page 22

Gaussian Spatially Smoothed Estimate

Page 23

Kalman Filtering the Gaussian Smoothed Estimate

Page 24

Kalman Filtering the Gaussian Smoothed Estimate

Page 25

Comparison

Page 26

Comparison

Page 27

Comparison

[Figure panels: Stimulus, S_est, Kalman, Gaussian Smoothed, Smoothed Kalman.]

Page 28

fin