Hebb Rule



Page 1: Hebb Rule

Hebb Rule

• Linear neuron:

$v = \mathbf{w}^\top \mathbf{u}$

• Hebb rule:

$\tau_w \frac{d\mathbf{w}}{dt} = v\,\mathbf{u}$

• Similar to LTP (but not quite…)

Page 2: Hebb Rule

Hebb Rule

• Average Hebb rule = correlation rule

$\tau_w \frac{d\mathbf{w}}{dt} = \langle v\,\mathbf{u}\rangle = \langle \mathbf{u}\,(\mathbf{u}^\top\mathbf{w})\rangle = \langle \mathbf{u}\,\mathbf{u}^\top\rangle\,\mathbf{w} = Q\,\mathbf{w}$

• Q: correlation matrix of u

$Q = \langle \mathbf{u}\,\mathbf{u}^\top\rangle$
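As a sanity check, the identity $\langle v\,\mathbf{u}\rangle = Q\mathbf{w}$ can be verified numerically; a minimal NumPy sketch (the input ensemble and dimensions are illustrative assumptions, not from the slides):

```python
import numpy as np

rng = np.random.default_rng(0)

U = rng.normal(size=(10000, 3))          # rows are sample input vectors u
w = rng.normal(size=3)                   # current weight vector

Q = U.T @ U / len(U)                     # correlation matrix Q = <u u^T>
v = U @ w                                # linear neuron: v = w . u per sample
avg_vu = (U * v[:, None]).mean(axis=0)   # empirical <v u>

# The averaged Hebb update equals Q w (up to floating-point error).
print(avg_vu)
print(Q @ w)
```

Because $Q$ is estimated from the same samples, the two quantities agree exactly up to round-off, not just statistically.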

Page 3: Hebb Rule

Hebb Rule

• Hebb rule with threshold = covariance rule

$\tau_w \frac{d\mathbf{w}}{dt} = \langle (\mathbf{u} - \langle\mathbf{u}\rangle)\,v\rangle = \left(\langle \mathbf{u}\,\mathbf{u}^\top\rangle - \langle\mathbf{u}\rangle\langle\mathbf{u}\rangle^\top\right)\mathbf{w} = C\,\mathbf{w}$

• C: covariance matrix of u

$C = \langle(\mathbf{u} - \langle\mathbf{u}\rangle)(\mathbf{u} - \langle\mathbf{u}\rangle)^\top\rangle = \langle \mathbf{u}\,\mathbf{u}^\top\rangle - \langle\mathbf{u}\rangle\langle\mathbf{u}\rangle^\top$

• Note that $\langle(v - \langle v\rangle)(\mathbf{u} - \langle\mathbf{u}\rangle)\rangle$ would be unrealistic because it predicts LTP when both u and v are low

Page 4: Hebb Rule

Hebb Rule

• Main problem with Hebb rule: it’s unstable… Two solutions:

1. Bounded weights

2. Normalization of either the activity of the postsynaptic cell or the weights.

Page 5: Hebb Rule

BCM rule

• Hebb rule with sliding threshold

• The BCM rule implements competition: when a synaptic weight grows, v grows and the threshold $\theta_v$ rises with $v^2$, making it more difficult for the other weights to grow.

$\tau_w \frac{d\mathbf{w}}{dt} = v\,\mathbf{u}\,(v - \theta_v)$

$\tau_\theta \frac{d\theta_v}{dt} = v^2 - \theta_v$
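A minimal Euler simulation of the BCM rule with sliding threshold; the two-pattern input ensemble, time constants, and clipping are illustrative assumptions, not parameters from the slides:

```python
import numpy as np

rng = np.random.default_rng(1)
patterns = np.array([[1.0, 0.0], [0.0, 1.0]])   # two assumed input patterns

w = np.array([0.5, 0.5])
theta = 0.0
dt, tau_w, tau_theta = 0.01, 10.0, 1.0          # threshold adapts faster

for step in range(20000):
    u = patterns[rng.integers(2)]
    v = w @ u
    w += dt / tau_w * v * u * (v - theta)        # BCM weight update
    w = np.clip(w, 0.0, None)                    # keep weights non-negative
    theta += dt / tau_theta * (v**2 - theta)     # sliding threshold

print(w)
```

With the threshold adapting faster than the weights, the neuron typically becomes selective for one of the two patterns, illustrating the competition described above.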

Page 6: Hebb Rule

Weight Normalization

• Subtractive Normalization:

$\tau_w \frac{d\mathbf{w}}{dt} = v\,\mathbf{u} - \frac{v\,(\mathbf{n}\cdot\mathbf{u})}{N_u}\,\mathbf{n}, \qquad \mathbf{n} = (1, 1, \dots, 1)$

Componentwise:

$\tau_w \frac{dw_i}{dt} = v\,u_i - \frac{v}{N_u}\sum_{k=1}^{N_u} u_k$

The sum of the weights is conserved:

$\tau_w \frac{d(\mathbf{n}\cdot\mathbf{w})}{dt} = v\,\mathbf{n}\cdot\mathbf{u} - \frac{v\,(\mathbf{n}\cdot\mathbf{u})}{N_u}\,\mathbf{n}\cdot\mathbf{n} = 0 \;\Rightarrow\; \sum_i w_i = \text{const.}$
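The conservation of $\sum_i w_i$ can be checked step by step in a short simulation (the random inputs, dimensions, and time constants are assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
N_u = 4
n = np.ones(N_u)

w = rng.normal(size=N_u)
total0 = n @ w                            # initial sum of the weights
dt, tau_w = 0.01, 1.0

for _ in range(1000):
    u = rng.normal(size=N_u)
    v = w @ u
    # Hebb rule with subtractive normalization
    w += dt / tau_w * (v * u - v * (n @ u) / N_u * n)

# The subtracted term cancels the growth of n . w at every step.
print(n @ w, total0)
```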

Page 7: Hebb Rule

Weight Normalization

• Multiplicative Normalization (Oja rule):

$\tau_w \frac{d\mathbf{w}}{dt} = v\,\mathbf{u} - \alpha\,v^2\,\mathbf{w}$

• The norm of the weights converges to $1/\sqrt{\alpha}$:

$\tau_w \frac{d|\mathbf{w}|^2}{dt} = 2\,\mathbf{w}\cdot\left(v\,\mathbf{u} - \alpha\,v^2\,\mathbf{w}\right) = 2\,v^2\left(1 - \alpha\,|\mathbf{w}|^2\right) \;\Rightarrow\; |\mathbf{w}|^2 \to 1/\alpha$
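A quick numerical check that the norm approaches $1/\sqrt{\alpha}$ under the multiplicative rule (inputs, $\alpha$, and time constants are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)
alpha = 2.0
w = rng.normal(size=3) * 0.1              # start with small weights
dt, tau_w = 0.005, 1.0

for _ in range(50000):
    u = rng.normal(size=3)
    v = w @ u
    # Oja / multiplicative-normalization update
    w += dt / tau_w * (v * u - alpha * v**2 * w)

print(np.linalg.norm(w), 1 / np.sqrt(alpha))   # norms should roughly agree
```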

Page 8: Hebb Rule

Hebb Rule

• Convergence properties:

$\tau_w \frac{d\mathbf{w}}{dt} = Q\,\mathbf{w}$

• Use an eigenvector decomposition:

$\mathbf{w}(t) = \sum_\mu c_\mu(t)\,\mathbf{e}_\mu$

where the $\mathbf{e}_\mu$ are the eigenvectors of Q

Page 9: Hebb Rule

Hebb Rule

[Figure: input distribution with eigenvectors $\mathbf{e}_1$ and $\mathbf{e}_2$; $\lambda_1 > \lambda_2$]

Page 10: Hebb Rule

Hebb Rule

Substituting $\mathbf{w}(t) = \sum_\mu c_\mu(t)\,\mathbf{e}_\mu$ into $\tau_w\,d\mathbf{w}/dt = Q\,\mathbf{w}$:

$\tau_w \sum_\mu \frac{dc_\mu(t)}{dt}\,\mathbf{e}_\mu = \sum_\mu c_\mu(t)\,Q\,\mathbf{e}_\mu = \sum_\mu \lambda_\mu\,c_\mu(t)\,\mathbf{e}_\mu$

$\Rightarrow\; \tau_w \frac{dc_\mu(t)}{dt} = \lambda_\mu\,c_\mu(t)$

The equations decouple because the $\mathbf{e}_\mu$ are the (orthonormal) eigenvectors of Q.

Page 11: Hebb Rule

Hebb Rule

$\tau_w \frac{dc_\mu(t)}{dt} = \lambda_\mu\,c_\mu(t) \;\Rightarrow\; c_\mu(t) = c_\mu(0)\,\exp(\lambda_\mu t / \tau_w)$

$\mathbf{w}(t) = \sum_\mu c_\mu(0)\,\exp(\lambda_\mu t / \tau_w)\,\mathbf{e}_\mu$

For large $t$ the largest eigenvalue dominates:

$\mathbf{w}(t) \simeq c_1(0)\,\exp(\lambda_1 t / \tau_w)\,\mathbf{e}_1, \qquad v = \mathbf{w}\cdot\mathbf{u} \propto \mathbf{e}_1\cdot\mathbf{u}$

Page 12: Hebb Rule

Hebb Rule

• The weights line up with the first eigenvector, and the postsynaptic activity v converges toward the projection of u onto the first eigenvector (unstable PCA)
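Integrating the averaged rule $\tau_w\,d\mathbf{w}/dt = Q\mathbf{w}$ directly shows this alignment; the 2×2 matrix Q below is an assumed example, not from the slides:

```python
import numpy as np

Q = np.array([[2.0, 0.8],
              [0.8, 1.0]])                # assumed correlation matrix
evals, evecs = np.linalg.eigh(Q)
e1 = evecs[:, np.argmax(evals)]           # principal eigenvector

w = np.array([0.1, 1.0])                  # start far from e1
dt, tau_w = 0.01, 1.0
for _ in range(5000):
    w += dt / tau_w * (Q @ w)             # Euler step of the averaged rule

w_dir = w / np.linalg.norm(w)             # direction: converges to +/- e1
print(w_dir, e1)
```

The direction converges to $\pm\mathbf{e}_1$ while the norm grows without bound, which is exactly the instability noted earlier.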

Page 13: Hebb Rule

Hebb Rule

• Non-zero-mean distribution: correlation vs. covariance

Page 14: Hebb Rule

Hebb Rule

• Limiting weights growth affects the final state

[Figure: weight trajectories in the $(w_1/w_{max},\, w_2/w_{max})$ plane with bounded weights; first eigenvector: $[1, -1]$]

Page 15: Hebb Rule

Hebb Rule

• Normalization also affects the final state.

• Ex: multiplicative normalization. In this case,

Hebb rule extracts the first eigenvector but keeps the norm constant (stable PCA).

Page 16: Hebb Rule

• Normalization also affects the final state.

• Ex: subtractive normalization.

Hebb Rule

$\tau_w \frac{d\mathbf{w}}{dt} = Q\,\mathbf{w} - \frac{(\mathbf{n}\cdot Q\,\mathbf{w})}{N_u}\,\mathbf{n}$

If $\mathbf{e}_1 = \mathbf{n}/\sqrt{N_u}$, the component along $\mathbf{e}_1$ does not grow:

$\tau_w \frac{d\mathbf{w}}{dt}\cdot\mathbf{e}_1 = \mathbf{e}_1\cdot Q\,\mathbf{w} - \frac{(\mathbf{n}\cdot Q\,\mathbf{w})}{N_u}\,(\mathbf{n}\cdot\mathbf{e}_1) = \frac{\mathbf{n}\cdot Q\,\mathbf{w}}{\sqrt{N_u}} - \frac{\mathbf{n}\cdot Q\,\mathbf{w}}{\sqrt{N_u}} = 0$

Page 17: Hebb Rule

Hebb Rule

For the other eigenvectors, $\mathbf{e}_\mu\cdot\mathbf{n} = 0$ for $\mu \ne 1$, so the dynamics are unchanged:

$\tau_w \frac{dc_\mu(t)}{dt} = \mathbf{e}_\mu\cdot\left(Q\,\mathbf{w} - \frac{(\mathbf{n}\cdot Q\,\mathbf{w})}{N_u}\,\mathbf{n}\right) = \mathbf{e}_\mu\cdot Q\,\mathbf{w} = \lambda_\mu\,c_\mu(t)$

Page 18: Hebb Rule

• The constraint does not affect the other eigenvectors:

• The weights converge to the second eigenvector (the weights need to be bounded to guarantee stability…)

Hebb Rule

$\mathbf{w}(t) = c_1(0)\,\mathbf{e}_1 + \sum_{\mu \ge 2} c_\mu(0)\,\exp(\lambda_\mu t/\tau_w)\,\mathbf{e}_\mu \;\simeq\; c_1(0)\,\mathbf{e}_1 + c_2(0)\,\exp(\lambda_2 t/\tau_w)\,\mathbf{e}_2$

for large $t$, with $\mathbf{e}_1 = \mathbf{n}/\sqrt{N_u}$.

Page 19: Hebb Rule

Ocular Dominance Column

• One unit with one input from right and left eyes

$v = w_R\,u_R + w_L\,u_L$

$Q = \langle \mathbf{u}\,\mathbf{u}^\top\rangle = \begin{pmatrix} \langle u_R u_R \rangle & \langle u_R u_L \rangle \\ \langle u_L u_R \rangle & \langle u_L u_L \rangle \end{pmatrix} = \begin{pmatrix} q_s & q_d \\ q_d & q_s \end{pmatrix}$

s: same eye

d: different eyes

Page 20: Hebb Rule

Ocular Dominance Column

• The eigenvectors are:

$\mathbf{e}_1 = (1, 1)/\sqrt{2}, \qquad \lambda_1 = q_s + q_d$

$\mathbf{e}_2 = (1, -1)/\sqrt{2}, \qquad \lambda_2 = q_s - q_d$
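These eigenvectors and eigenvalues are easy to verify for example values of $q_s$ and $q_d$ (the numbers below are assumptions; any $q_s$, $q_d$ give the same eigenvectors):

```python
import numpy as np

qs, qd = 1.0, 0.4                             # assumed example values
Q = np.array([[qs, qd], [qd, qs]])

e1 = np.array([1.0, 1.0]) / np.sqrt(2)
e2 = np.array([1.0, -1.0]) / np.sqrt(2)

# Q e1 = (qs + qd) e1 and Q e2 = (qs - qd) e2
print(Q @ e1, (qs + qd) * e1)
print(Q @ e2, (qs - qd) * e2)
```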

Page 21: Hebb Rule

Ocular Dominance Column

• Since $q_d$ is likely to be positive, $q_s + q_d > q_s - q_d$. As a result, the weights will converge toward the first eigenvector, which mixes the right and left eyes equally. No ocular dominance...


Page 22: Hebb Rule

Ocular Dominance Column

• To get ocular dominance we need subtractive normalization.


Page 23: Hebb Rule

Ocular Dominance Column

• Note that the weights will be proportional to e2 or –e2 (i.e. the right and left eye are equally likely to dominate at the end). Which one wins depends on the initial conditions.


Page 24: Hebb Rule

Ocular Dominance Column

• Ocular dominance column: network with multiple output units and lateral connections.

Page 25: Hebb Rule

Ocular Dominance Column

• Simplified model

[Figure: simplified model with left- and right-eye inputs $u_L$ and $u_R$ projecting to an array of cortical units]

Page 26: Hebb Rule

Ocular Dominance Column

• If we use subtractive normalization and no lateral connections, we’re back to the one cell case. Ocular dominance is determined by initial weights, i.e., it is purely stochastic. This is not what’s observed in V1.

• Lateral weights could help by making sure that neighboring cells have similar ocular dominance.

Page 27: Hebb Rule

Ocular Dominance Column

• Lateral weights are equivalent to feedforward weights

$\tau_r \frac{d\mathbf{v}}{dt} = -\mathbf{v} + W\mathbf{u} + M\mathbf{v}, \qquad (W\mathbf{u})_i = w_{iR}\,u_R + w_{iL}\,u_L$

At steady state ($d\mathbf{v}/dt = 0$):

$\mathbf{v} = W\mathbf{u} + M\mathbf{v}$

Page 28: Hebb Rule

Ocular Dominance Column

• Lateral weights are equivalent to feedforward weights

$\mathbf{v} = W\mathbf{u} + M\mathbf{v}, \qquad v_i = w_{iR}\,u_R + w_{iL}\,u_L + (M\mathbf{v})_i$

$\mathbf{v} = (I - M)^{-1}\,W\mathbf{u}$

$\mathbf{v} = K\,W\mathbf{u}, \qquad K = (I - M)^{-1}$
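The steady state $\mathbf{v} = KW\mathbf{u}$ can be checked against the rate dynamics $\tau_r\,d\mathbf{v}/dt = -\mathbf{v} + W\mathbf{u} + M\mathbf{v}$; W, M, and u below are small assumed examples, with M weak enough that $I - M$ is invertible and the dynamics are stable:

```python
import numpy as np

rng = np.random.default_rng(4)
N = 5
W = rng.normal(size=(N, 2))              # feedforward weights (right, left)
M = 0.1 * rng.normal(size=(N, N))        # weak lateral weights
np.fill_diagonal(M, 0.0)                 # no self-connections
u = rng.normal(size=2)

K = np.linalg.inv(np.eye(N) - M)         # effective recurrent gain

# Integrate the rate dynamics to steady state with Euler steps.
v = np.zeros(N)
dt, tau_r = 0.01, 1.0
for _ in range(5000):
    v += dt / tau_r * (-v + W @ u + M @ v)

print(v)
print(K @ W @ u)                          # should match the integrated v
```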

Page 29: Hebb Rule

Ocular Dominance Column

$\tau_w \frac{dW}{dt} = \langle \mathbf{v}\,\mathbf{u}^\top\rangle = K\,W\,\langle \mathbf{u}\,\mathbf{u}^\top\rangle = K\,W\,Q$

Page 30: Hebb Rule

Ocular Dominance Column

• We first project the weight vectors of each cortical unit (wiR,wiL) onto the eigenvectors of Q.

With $Q = P\,\Lambda\,P^\top$ (P: matrix of eigenvectors, $\Lambda$: diagonal matrix of eigenvalues):

$\tau_w \frac{dW}{dt} = K\,W\,Q = K\,W\,P\,\Lambda\,P^\top \;\Rightarrow\; \tau_w \frac{d(WP)}{dt} = K\,(WP)\,\Lambda$

Page 31: Hebb Rule

Ocular Dominance Column

• There are two eigenvectors, w+ and w-, with eigenvalues qs+qd and qs-qd:

$\mathbf{w}^+ = \mathbf{w}^R + \mathbf{w}^L, \qquad \mathbf{w}^- = \mathbf{w}^R - \mathbf{w}^L$

$WP = \left[\,\mathbf{w}^R\;\;\mathbf{w}^L\,\right] P = \left[\,(\mathbf{w}^R + \mathbf{w}^L)/\sqrt{2}\;\;(\mathbf{w}^R - \mathbf{w}^L)/\sqrt{2}\,\right] = \left[\,\mathbf{w}^+\;\;\mathbf{w}^-\,\right]/\sqrt{2}$

Page 32: Hebb Rule

Ocular Dominance Column

$\tau_w \frac{d(WP)}{dt} = K\,(WP)\,\Lambda, \qquad \Lambda = \begin{pmatrix} q_s + q_d & 0 \\ 0 & q_s - q_d \end{pmatrix}$

$\tau_w \frac{d\mathbf{w}^+}{dt} = (q_s + q_d)\,K\,\mathbf{w}^+$

$\tau_w \frac{d\mathbf{w}^-}{dt} = (q_s - q_d)\,K\,\mathbf{w}^-$

Page 33: Hebb Rule

Ocular Dominance Column

• Ocular dominance column: network with multiple output units and lateral connections.

$\tau_w \frac{d\mathbf{w}^+}{dt} = (q_s + q_d)\,K\,\mathbf{w}^+, \qquad \tau_w \frac{d\mathbf{w}^-}{dt} = (q_s - q_d)\,K\,\mathbf{w}^-$

Page 34: Hebb Rule

Ocular Dominance Column

• Once again we use a subtractive normalization, which holds w+ constant. Consequently, the equation for w- is the only one we need to worry about.

$\tau_w \frac{d\mathbf{w}^-}{dt} = (q_s - q_d)\,K\,\mathbf{w}^-$

Page 35: Hebb Rule

Ocular Dominance Column

• If the lateral weights are translation invariant, Kw- is a convolution. This is easier to solve in the Fourier domain.

$\tau_w \frac{d\mathbf{w}^-}{dt} = (q_s - q_d)\,K * \mathbf{w}^-, \qquad (K * \mathbf{w}^-)(x) = \sum_{x'} K(x - x')\,w^-(x')$

In the Fourier domain:

$\tau_w \frac{d\tilde{w}^-_k}{dt} = (q_s - q_d)\,\tilde{K}_k\,\tilde{w}^-_k$

Page 36: Hebb Rule

Ocular Dominance Column

• The sine function with the highest Fourier coefficient (i.e. the fundamental) grows the fastest.

$\tilde{w}^-_k(t) = \tilde{w}^-_k(0)\,\exp\!\left(\frac{(q_s - q_d)\,\tilde{K}_k\,t}{\tau_w}\right)$

Page 37: Hebb Rule

Ocular Dominance Column

• In other words, the eigenvectors of K are sine functions and the eigenvalues are the Fourier coefficients for K.

$(\mathbf{e}_\mu)_a \propto \cos\!\left(\frac{2\pi\,\mu\,a}{N_v}\right)$

Page 38: Hebb Rule

Ocular Dominance Column

• The dynamics are dominated by the sine function with the highest Fourier coefficient, i.e., the fundamental of K(x) (note that w⁻ is not normalized along the x dimension).

• This results in an alternation of right and left columns with a periodicity corresponding to the frequency of the fundamental of K(x).

Page 39: Hebb Rule

Ocular Dominance Column

• If K is a Gaussian kernel, the fundamental is the DC term and w⁻ ends up being constant, i.e., no ocular dominance columns (one of the eyes dominates all the cells).

• If K is a Mexican-hat kernel, w⁻ will show ocular dominance columns with the same frequency as the fundamental of K.

• Not that intuitive anymore…
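The difference between the two kernels is visible in their Fourier spectra; a sketch with assumed Gaussian widths and amplitudes (not values from the slides):

```python
import numpy as np

x = np.arange(-32, 32)
gauss = np.exp(-x**2 / (2 * 3.0**2))                 # Gaussian kernel
mexhat = gauss - 0.5 * np.exp(-x**2 / (2 * 6.0**2))  # difference of Gaussians

# Index of the dominant Fourier mode of each kernel.
k_gauss = np.argmax(np.abs(np.fft.rfft(np.fft.ifftshift(gauss))))
k_mexhat = np.argmax(np.abs(np.fft.rfft(np.fft.ifftshift(mexhat))))

print(k_gauss, k_mexhat)
```

The Gaussian spectrum peaks at k = 0 (one eye dominates everywhere), while the Mexican hat peaks at a nonzero k, whose spatial frequency sets the ocular dominance period.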

Page 40: Hebb Rule

Ocular Dominance Column

• Simplified model

[Figure: (A) lateral kernel K and eigenvector e vs. cortical distance (mm); (B) Fourier transform K̃ vs. k (1/mm)]

Page 41: Hebb Rule

Ocular Dominance Column

• Simplified model: weights matrices for right and left eyes

[Figure: weight matrices $W_L$ and $W_R$ for the left and right eyes, and their difference $W_R - W_L$]