Hebbian Constraint on the Resolution of the Homunculus Fallacy Leads to a Network that Searches for Hidden Cause-Effect Relationships

Faculty of Informatics, Eötvös Loránd University

Andras Lorincz, andras.lorincz@elte.hu

http://nipg.inf.elte.hu


Content

Homunculus fallacy and resolution
Hebbian architecture step-by-step
Outlook to neurobiology
Cognitive Map: the hippocampal formation (in rats)
Extensions to control and reinforcement learning
Conjecture about consciousness
Conclusions


The Homunculus Fallacy

How do we know that this is a phone?


Democritus’ Answer

Small phone atoms fly away and leave a ‘print’ – a representation of the phone – on our eyes, and this is how we know


Fallacy

Infinite regression

Who makes sense of the representation?


Root of fallacy is in the wording

We transform the infinite regression into a finite architecture with convergent dynamics


Not the representation but the input makes sense, provided that the representation can reconstruct the input (given the experiences). In other words: the representation can produce an output which is similar to the input of the network.


Architecture with Hebbian learning

x: input
h: hidden representation
y: reconstructed input (should match x)
W: bottom-up matrix, or BU transformation
M: top-down matrix, or TD transformation

Hebbian (or local) learning: the components of the matrices (transformations) form the long-term memory (LTM) of the system. Locality of learning warrants graceful degradation for the architecture.
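Since the whole architecture is just two linear maps and an error signal, a minimal NumPy sketch may make the loop concrete. The dimensions, learning rate, and the exact local update rules below are illustrative assumptions, not the presentation's specification:

```python
import numpy as np

# Sketch of the x -> W -> h -> M -> y loop with local, error-driven updates.
rng = np.random.default_rng(0)
n_x, n_h = 8, 4                              # input and hidden dimensions (assumed)
W = rng.normal(scale=0.1, size=(n_h, n_x))   # bottom-up (BU) transformation
M = rng.normal(scale=0.1, size=(n_x, n_h))   # top-down (TD) transformation
eta = 0.05                                   # learning rate (assumed)

for _ in range(1000):
    x = rng.normal(size=n_x)   # stand-in input; real inputs would be structured
    h = W @ x                  # hidden representation
    y = M @ h                  # reconstructed input, should come to match x
    e = x - y                  # reconstruction error
    # Local (Hebbian-style) updates: each weight change uses only the activities
    # at its own two ends, which is what makes degradation graceful.
    M += eta * np.outer(e, h)
    W += eta * np.outer(h, e)
```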


Previous architecture, with a new step: we compare the input and the reconstruction. ε: reconstruction error, ε = x - y


Previous architecture, with a new step: we learn to predict. ε(t+1): innovation: ε(t+1) = x(t+1) - y(t+1) = x(t+1) - M h(t)
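A sketch of this predictive variant, under the same illustrative assumptions as above (all names and constants are placeholders): the top-down output computed from the previous hidden state serves as a prediction of the next input, and learning is driven by the innovation rather than by a same-step error.

```python
import numpy as np

rng = np.random.default_rng(0)
n_x, n_h, eta = 8, 4, 0.05
W = rng.normal(scale=0.1, size=(n_h, n_x))   # bottom-up transformation
M = rng.normal(scale=0.1, size=(n_x, n_h))   # top-down transformation

h_prev = np.zeros(n_h)
for t in range(1000):
    x_next = rng.normal(size=n_x)             # stand-in for the next input x(t+1)
    innovation = x_next - M @ h_prev          # e(t+1) = x(t+1) - M h(t)
    M += eta * np.outer(innovation, h_prev)   # local update driven by the innovation
    h_prev = W @ x_next                       # hidden state carried to the next step
```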


Conditions


AutoRegressive (AR) process with recurrent network F: h(t+1) = F h(t) + ε(t+1)
h: hidden state
F: hidden deterministic dynamics
n_h: hidden innovation “causing” the process
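As a toy illustration of these conditions (all constants below are assumptions), the hidden AR process and a least-squares re-estimate of F might look like:

```python
import numpy as np

rng = np.random.default_rng(0)
n_h, T = 4, 500
F = 0.9 * np.eye(n_h)                 # hidden deterministic dynamics (contractive)
h = np.zeros((T, n_h))
for t in range(T - 1):
    innovation = 0.1 * rng.normal(size=n_h)   # hidden innovation "causing" the process
    h[t + 1] = F @ h[t] + innovation          # h(t+1) = F h(t) + e(t+1)

# With the trajectory in hand, F is recoverable by least squares: h(t+1) ~ F h(t).
F_hat = np.linalg.lstsq(h[:-1], h[1:], rcond=None)[0].T
```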


Cause-effect relations


Double loop: both the state and the innovation are represented.

Generalization: AutoRegressive Independent Process Analysis (AR-IPA)
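One common way to realize the AR-IPA idea is sketched below under strong simplifying assumptions (linear dynamics, instantaneous mixing, FastICA as the separation step); this illustrates the principle, not the cited algorithm:

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
T, d = 2000, 3
s = rng.laplace(size=(T, d))                  # independent, non-Gaussian innovations
A = rng.normal(size=(d, d))                   # unknown mixing of the innovations
F = 0.5 * np.eye(d)                           # hidden AR dynamics
x = np.zeros((T, d))
for t in range(T - 1):
    x[t + 1] = x[t] @ F.T + s[t + 1] @ A.T    # AR process driven by mixed sources

F_hat = np.linalg.lstsq(x[:-1], x[1:], rcond=None)[0].T   # loop 1: fit the dynamics
resid = x[1:] - x[:-1] @ F_hat.T                          # estimated innovations
s_hat = FastICA(n_components=d, random_state=0).fit_transform(resid)  # loop 2: unmix
```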


“Cognitive Map” (in rats)

Lorincz & Szirtes: Autoregressive model of the hippocampal representation of events. IJCNN, Atlanta, June 14-19, 2009


Similar anatomical structure. Similar operational properties. Two-phase operation.


A single additional piece, CA3–DG: eliminates echoes (ARMA-IPA)


Learns places and directions; supports path integration / planning (dead reckoning)
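A toy dead-reckoning routine (purely illustrative; the function and its parameters are assumptions) shows what path integration computes: a position estimate accumulated from self-motion signals alone, with no external landmarks.

```python
import numpy as np

def dead_reckon(speeds, headings, dt=0.1):
    """Integrate speed and heading signals into a position estimate."""
    pos = np.zeros(2)
    for v, theta in zip(speeds, headings):
        pos += v * dt * np.array([np.cos(theta), np.sin(theta)])
    return pos

# Example: a quarter-turn path sampled at 10 Hz.
print(dead_reckon(speeds=[1.0] * 20, headings=np.linspace(0, np.pi / 2, 20)))
```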


Extensions of the network

AR can be embedded into reinforcement learning: Kalman filter and RL (Szita & Lorincz, Neural Computation, 2004); Echo State Networks and RL (Szita, Gyenes & Lorincz, ICANN, 2006)

AR can be extended with control (ARX) and active (Bayesian) learning (Poczos & Lorincz, Journal of Machine Learning Research, 2009)
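For orientation, here is one predict/update step of a standard linear-Gaussian Kalman filter, the kind of estimator the cited Kalman-filter-plus-RL work builds on; the matrix names are generic placeholders, not the papers' notation:

```python
import numpy as np

def kalman_step(mu, P, x_obs, F, H, Q, R):
    """One predict/update cycle of a linear-Gaussian Kalman filter."""
    # Predict the next state and its covariance.
    mu_pred = F @ mu
    P_pred = F @ P @ F.T + Q
    # Update with the observation; the correction is driven by the innovation.
    S = H @ P_pred @ H.T + R                  # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)       # Kalman gain
    mu_new = mu_pred + K @ (x_obs - H @ mu_pred)
    P_new = (np.eye(len(mu)) - K @ H) @ P_pred
    return mu_new, P_new
```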



Consciousness

Consider an overcomplete hidden representation made of a set of recurrent networks. This model can explain rivalry situations.
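A cartoon of the rivalry idea, in which every detail is an assumption for illustration: several generative models offer reconstructions of the same input, and the one with the smallest reconstruction error dominates.

```python
import numpy as np

rng = np.random.default_rng(0)
n_x, n_h, n_models = 8, 4, 3
models = [rng.normal(scale=0.3, size=(n_x, n_h)) for _ in range(n_models)]  # TD matrices
x = rng.normal(size=n_x)                      # the shared input

errors = []
for M in models:
    h = np.linalg.lstsq(M, x, rcond=None)[0]  # best hidden code under this model
    errors.append(np.linalg.norm(x - M @ h))  # its reconstruction error
winner = int(np.argmin(errors))               # the interpretation that "wins"
print(f"model {winner} dominates with error {errors[winner]:.3f}")
```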



Conclusions

Resolution of the fallacy plus Hebbian constraints leads to a structure that resembles the “Cognitive Map” of rats and searches for hidden cause-effect relationships.

Questions for future work: what kind of networks arise from the extensions (i.e., Kalman filter embedded into reinforcement learning; Bayesian actively controlled learning) if the constraint of Hebbian learning is taken rigorously?


Thank you for your attention!
