
Statistical models of visual neurons

Anna Sotnikova Applied Mathematics and Statistics, and Scientific Computation

program

Advisor: Dr. Daniel A. Butts Department of Biology

Update Presentation

Visual system of a neuron

http://www.pc.rhul.ac.uk/staff/j.zanker/ps1061/l2/ps1061_2.htm

The goal of this field is to identify the relationship between visual stimuli and the resulting neural responses: light comes in, spikes (electrical impulses) come out.

Statistical modeling of a neuron's response

What is the simplest model which describes the computation being performed by the neuron?

Stimulus -> ? -> Firing rate r(t)

Prob(n(t)) = r(t)^n(t) / n(t)! * exp(-r(t))

n(t) - number of spikes at moment t
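The Poisson spike-count model above can be sketched in a few lines (Python here, purely for illustration; the function name is mine):

```python
import math

def poisson_prob(n, r):
    """Probability of observing n spikes in one bin given the predicted
    rate r, under the Poisson model Prob(n) = r^n / n! * exp(-r)."""
    return r ** n / math.factorial(n) * math.exp(-r)

# the probabilities over all spike counts sum to 1
total = sum(poisson_prob(n, 2.0) for n in range(50))
```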

Linear-Nonlinear-Poisson model

• Moment-based statistical models for linear filter k estimation:

1. First order moment-based model - Spike Triggered Average (STA)

2. Second order moment-based model - Spike Triggered Covariance (STC)
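A minimal STA sketch, assuming the window convention used later in these slides (the P bins up to and including the spike bin):

```python
import numpy as np

def spike_triggered_average(S, n, P):
    """First-moment filter estimate: average the P-bin stimulus window
    ending at each spike bin, weighted by the spike count in that bin."""
    sta = np.zeros(P)
    total = 0
    for t in range(P - 1, len(S)):
        if n[t] > 0:
            sta += n[t] * S[t - P + 1 : t + 1]
            total += n[t]
    return sta / total if total else sta
```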

Previous semester models

Maximum Likelihood estimators:

• Generalized Linear Model (GLM)

• Generalized Quadratic Model (GQM)

• Nonlinear Input Model (NIM)

This semester models

• Real data set - Lateral Geniculate Nucleus (LGN) data
• Synthetic data set - Retinal Ganglion Cells (RGC)

Data sets description

Both data sets contain:
1. Stimulus vector (S, which depends on time)
2. Spikes vector (n, which depends on time)
3. Time interval of the stimulus update (dt, the time step)

[Figure: stimulus (top) and number of spikes (bottom) over the first 10 seconds]

Generalized Linear Model: Idea

Picture reference: Butts DA, Weng C, Jin JZ, Alonso JM, Paninski L (2011) Temporal precision in the visual pathway through the interplay of excitation and stimulus-driven suppression.

Additional parameters

r(t) = F(k · s(t) + h · r_obs(t) + b),  F(·) = exp(·)

s(t) = (S(t - τ), ..., S(t))
r_obs(t) = (n(t - τ), ..., n(t))
τ = P - 1

P - the time window length
n - vector of spikes
S - stimulus vector
b - threshold

GOAL: find optimal k, h and b
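The rate prediction can be sketched directly from these definitions (a minimal sketch; the function name is mine, and I let k and h have independent window lengths as in the later slides):

```python
import numpy as np

def glm_rate(S, n, k, h, b):
    """Predicted rate r(t) = exp(k·s(t) + h·r_obs(t) + b), with s(t) and
    r_obs(t) the trailing windows of the stimulus and spike vectors."""
    Pk, Ph = len(k), len(h)
    r = np.zeros(len(S))
    for t in range(max(Pk, Ph) - 1, len(S)):
        r[t] = np.exp(k @ S[t - Pk + 1 : t + 1]
                      + h @ n[t - Ph + 1 : t + 1] + b)
    return r
```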

Generalized Linear Model (GLM): Plan

r(t) = F(k · s(t) + h · r_obs(t) + b)

1. GLM: find LogLikelihood LL[k, h, b] -> max
2. Validate the code using simulated data
   a. without the h and b terms
   b. with the h and b terms
3. Validate part of the model (without the h and b terms) by comparing with the STA filter for the RGC data set
4. Find optimal k and h filters for the LGN data set

Maximum Likelihood estimation

Poisson distribution

N = {n(t)},  Θ = {r(t)}

Θ = {k, h, b},  r(t) = F(k · s(t) + h · r_obs(t) + b),  F(·) = exp(·)

P(N | Θ) = Π_t (r(t))^n(t) / n(t)! * exp(-r(t))

LL(Θ) = log(P(N | Θ)) = Σ_t n(t) log(r(t)) - Σ_t r(t)

LL(Θ) = Σ_t n(t)(k · s(t) + h · r_obs(t) + b) - Σ_t exp(k · s(t) + h · r_obs(t) + b)

Maximize the log-likelihood using the gradient ascent method
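As a sanity check on the derivation, here is a minimal sketch of the log-likelihood, its gradient, and a plain gradient-ascent loop (Python; I keep only the k term for brevity, so dropping h and b is my simplification):

```python
import numpy as np

def ll_and_grad(k, X, n):
    """LL(k) = sum_t n(t)(k·s(t)) - sum_t exp(k·s(t)) and its gradient,
    where the rows of X are the stimulus windows s(t)."""
    g = X @ k
    r = np.exp(g)                  # predicted rate r(t)
    return n @ g - r.sum(), X.T @ (n - r)

def fit_k(X, n, steps=500, lr=1e-3):
    """Plain gradient ascent on the log-likelihood."""
    k = np.zeros(X.shape[1])
    for _ in range(steps):
        k += lr * ll_and_grad(k, X, n)[1]
    return k
```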

Validation:
1. Generate a stimulus
2. Choose a simple filter k
3. Use the Poisson distribution for spike generation
4. Use the generated stimulus and spikes to reconstruct the filter

k_j = exp((j - 15)/5)
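The validation pipeline can be sketched as follows; the filter is the one from the slide, while the white-noise stimulus, its length, and the seed are my assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)          # seed is an arbitrary choice
P = 15
j = np.arange(P)
k = np.exp((j - 15) / 5)                # test filter k_j = exp((j - 15)/5)

S = rng.normal(size=2000)               # assumed white-noise stimulus
r = np.exp(np.array([k @ S[t - P + 1 : t + 1]
                     for t in range(P - 1, len(S))]))
n = rng.poisson(r)                      # Poisson spike generation
# (S, n) can now be fed to the fitting code to try to reconstruct k
```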

[Figure: created filter vs. optimization result, time step j = 0..15]

GLM code validation:

k_j = exp((j - 15)/5) * cos(2πj/7)

[Figure: created filter vs. optimization result for the oscillating test filter]

GLM code validation:

Try another test k-filter

GLM code validation including history term

[Figures: k and h filters used for data generation vs. optimization results (panels K1, H1, K2, H2 = H1)]

Optimal linear filter matches STA for RGC data:

[Figure: STA filter vs. optimized filter k for the RGC data set]

Need higher time resolution to catch history term

[Figures: spikes for the first 0.5 sec at resolutions dt (low), dt/4, and dt/16 (increased)]

Regularization is needed at high resolutions

[Figure: k filter without regularization, dt/4]

Maximize the regularized log-likelihood using the gradient ascent method

RLL(Θ) = Σ_t n(t)(k · s(t) + h · r_obs(t) + b) - Σ_t exp(k · s(t) + h · r_obs(t) + b) - λ Σ_i (k_i - k_{i-1})²
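In code, the regularized objective is the log-likelihood minus the smoothness penalty (a sketch; the matrices X and H holding the stimulus and spike-history windows are my naming):

```python
import numpy as np

def regularized_ll(k, h, b, X, H, n, lam):
    """RLL = sum_t n(t) g(t) - sum_t exp(g(t)) - lam*sum_i (k_i - k_{i-1})^2,
    where g(t) = k·s(t) + h·r_obs(t) + b.  Rows of X are stimulus windows
    s(t); rows of H are spike-history windows r_obs(t)."""
    g = X @ k + H @ h + b
    return float(n @ g - np.exp(g).sum() - lam * np.sum(np.diff(k) ** 2))
```

The penalty discourages bin-to-bin jitter in k, which is exactly the high-frequency noise seen at high temporal resolution.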

GLM implementation: increase the resolution.

Resolution is dt/4, no history term. The maximum difference between the STA and the optimization result is about 1%.

[Figures: k filter optimization vs. STA with regularization (dt/4), and k filter without regularization (dt/4)]

LL ~ 10^4, regularization term ~ 10^2

GLM implementation: regularization parameter choice

[Figure: comparison of the influence of λ on filter k at dt/4, for λ ∈ {500, 800, 1000, 2000, 2500, 3000, 3500, 4000}]

GLM with k,h and b parameters

At resolution dt/4, filter k is recovered (OK), but the history filter h is not (Not OK).

[Figures: filter k for dt/4 (optimization vs. STA); filter h for dt/4; filter h for dt/16]

GLM with k,h and b parameters

Picture reference: Butts DA, Weng C, Jin JZ, Alonso JM, Paninski L (2011) Temporal precision in the visual pathway through the interplay of excitation and stimulus-driven suppression.

The neuron's history term is detected at resolution dt/16

Summary

• The GLM algorithm was implemented and validated on generated data
• The history term was extracted using regularization of the log-likelihood
• Optimization of up to 240 parameters simultaneously was implemented

Updated project schedule

October - mid November (updated: November)
✓ Implement STA and STC models
✓ Test models on the synthetic data set and validate them on the real data set

November - December (updated: December - mid February)
✓ Implement the Generalized Linear Model (GLM)
✓ Test the model on the synthetic data set and validate it on the LGN data set

January - March (updated: mid February - mid April)
• Implement the Generalized Quadratic Model (GQM) and the Nonlinear Input Model (NIM)
• Test the models on the synthetic data set and validate them on the LGN data set

April - May (updated: mid April - May)
• Collect results and prepare the final report

References

1. McFarland JM, Cui Y, Butts DA (2013) Inferring nonlinear neuronal computation based on physiologically plausible inputs. PLoS Computational Biology 9(7): e1003142.
2. Butts DA, Weng C, Jin JZ, Alonso JM, Paninski L (2011) Temporal precision in the visual pathway through the interplay of excitation and stimulus-driven suppression. J. Neurosci. 31: 11313-27.
3. Simoncelli EP, Pillow J, Paninski L, Schwartz O (2004) Characterization of neural responses with stochastic stimuli. In: The Cognitive Neurosciences (Gazzaniga M, ed), pp 327-338. Cambridge, MA: MIT Press.
4. Paninski L, Pillow J, Lewi J (2006) Statistical models for neural encoding, decoding, and optimal stimulus design.
5. Shlens J (2008) Notes on Generalized Linear Models of Neurons.

Appendix: Gradient in GLM with k,h and b (including regularization term)

for i = 2 to M-1 (excluding the first and the last elements), where M is the number of stimulus windows, M = stimulus_length - P + 1
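The gradient this appendix describes can be sketched as follows; the variable names and the matrix layout are my assumptions, and only the derivative with respect to k is shown:

```python
import numpy as np

def rll_grad_k(k, X, hterm, b, n, lam):
    """Gradient of the regularized log-likelihood with respect to k:
    sum_t (n(t) - r(t)) s(t), minus the gradient of the penalty
    lam * sum_i (k_i - k_{i-1})^2.  Rows of X are stimulus windows s(t);
    hterm[t] holds the precomputed h·r_obs(t)."""
    r = np.exp(X @ k + hterm + b)
    grad = X.T @ (n - r)
    d = np.diff(k)                  # d[j] = k_{j+1} - k_j
    grad[1:] -= 2 * lam * d         # penalty pulls k_i toward k_{i-1}
    grad[:-1] += 2 * lam * d        # and k_{i-1} toward k_i
    return grad
```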
