
MirrorBot

IST-2001-35282

Biomimetic multimodal learning in a mirror neuron-based robot

Cortical assemblies of language areas:Development of cell assembly model for Broca/Wernicke areas

(Deliverable D5.1)Authors: Andreas Knoblauch, Günther Palm

Covering period 1.6.2002-30.4.2003

MirrorBot Report 5

Report Version: 1

Report Preparation Date: 30. April 2003

Classification: Public

Contract Start Date: 1st June 2002 Duration: Three Years

Project Co-ordinator: Professor Stefan Wermter

Partners: University of Sunderland, Institut National de Recherche en Informatique et en Automatique at Nancy, Universität Ulm, Medical Research Council at Cambridge, Università degli Studi di Parma

Project funded by the European Community under the “Information Society Technologies Programme”


Table of Contents

0. Introduction

1. Spiking associative memory

2. A global model for language and action

3. Implementation of cortical language areas

4. Language processing in the implemented cortical areas

5. Discussion

6. References


0. Introduction

In order to model the language system for the MirrorBot project we have to take two basic motivations into consideration: on the one hand we have to create a technically efficient system that can be implemented on a robot; on the other hand we also have to create a model that is biologically plausible, so that it can be compared with known language areas in the brain (like Wernicke’s and Broca’s areas). To be flexible enough to account for both motivations we decided to use a single architectural framework: neural associative memory can be implemented in a technically efficient way (Willshaw et al. 1969; Palm 1980), and can also constitute a plausible model for cortical circuitry (Hebb 1949; Palm 1982; Knoblauch and Palm 2001, 2002a, 2002b).

0.1 Associative memory

Associative memories are systems that contain information about a finite set of associations between pattern pairs. A possibly noisy address pattern can be used to retrieve an associated pattern that ideally will equal exactly the originally stored pattern. In neural implementations the information about the associations is stored in the synaptic connectivity of one or more neuron populations (memory matrix, see Fig. 1). In technical applications neural implementations can be advantageous over hash-tables or simple look-up-tables if the number of patterns is large, if parallel implementation is possible, or if fault-tolerance is required, i.e., if the address patterns may differ from the original patterns used for storing the associations.

Figure 1. Learning and retrieving patterns in the binary Willshaw model of associative memory (auto-associative case). Patterns (like u1 or u2) are binary vectors. The memory matrix A is obtained by overlaying the outer products corresponding to the patterns (or pattern pairs in the case of hetero-association). Patterns are retrieved by applying a threshold (here 2) to the membrane potentials x obtained by vector-matrix multiplication of the address pattern vector with the memory matrix.

0.2 The binary Willshaw model of neural associative memory

In 1969 Willshaw et al. discovered that a high (normalised) storage capacity of ln2 (about 0.7) bit per synapse is possible for Steinbuch's neural implementation of associative memory using binary patterns and synapses (Steinbuch 1961).


This so-called Willshaw model of associative memory was further analysed in the eighties (Palm 1980, 1991), and methods like iterative and bidirectional retrieval were developed to improve retrieval quality under noisy conditions (Kosko 1988; Schwenker et al. 1996; Sommer and Palm 1999; Knoblauch and Palm 2001). It is known that for technical applications even a storage capacity of 1 can be achieved asymptotically if the memory matrix is optimally compressed (Knoblauch 2003).

Figure 1 illustrates how patterns are stored and retrieved in the binary Willshaw model of neural associative memory. The patterns are simply binary vectors of length n corresponding to a population of active and non-active neurons. The n×n matrix A of synaptic connectivity (also called the memory matrix) is also binary, i.e., a synapse is either active (1) or non-active (0). Matrix entry aij corresponds to the synaptic connection from neuron i to neuron j. In the case of auto-association, storing a pattern simply activates all the synapses between active pattern neurons. If a synapse is activated by two or more patterns it simply remains active.

Retrieval is similarly simple. Given an address pattern (which may be, for example, a noisy version of one of the previously stored patterns), the neuron potentials can be computed by vector-matrix multiplication of the address pattern vector with the memory matrix A. Then a threshold is simply applied to obtain the retrieval result. If the address pattern contains only correct ones but no false ones, the so-called Willshaw retrieval strategy can be applied: we can set the threshold equal to the number of active neurons in the address pattern (as in Fig. 1). A variant of this threshold choice also plays an important role in the spiking associative memory described below.
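As an illustration of these storage and retrieval rules, a minimal NumPy sketch might look as follows (the encoding and the toy patterns are our own illustration, not code from the project):

```python
import numpy as np

def store(patterns):
    """Binary Hebbian storage: OR of the outer products of the patterns."""
    n = patterns.shape[1]
    A = np.zeros((n, n), dtype=np.uint8)
    for u in patterns:
        A |= np.outer(u, u)          # activate synapses between active neurons
    return A

def retrieve(A, address):
    """Willshaw retrieval: threshold = number of active address neurons."""
    x = address @ A                  # membrane potentials
    theta = address.sum()            # valid if the address has no false ones
    return (x >= theta).astype(np.uint8)

u1 = np.array([1, 1, 1, 0, 0, 0, 0, 0, 0, 0], dtype=np.uint8)
u2 = np.array([0, 0, 0, 1, 1, 1, 0, 0, 0, 0], dtype=np.uint8)
A = store(np.stack([u1, u2]))

noisy = u1.copy(); noisy[2] = 0      # u1 with one active neuron missing
assert (retrieve(A, noisy) == u1).all()   # the full pattern is completed
```

Note that lowering the threshold to the number of active address neurons is exactly what makes the retrieval fault-tolerant against missing ones in the address.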

0.3 Neural assemblies

The theory of neural assemblies (Hebb 1949; Braitenberg 1978; Palm 1982, 1990) suggests that entities of the outside world (and also internal states) are coded in groups of neurons rather than in single (“grandmother”) cells. It is assumed that a neuronal cell assembly is generated by Hebbian coincidence or correlation learning where the synaptic connections are strengthened between co-activated neurons. Models of (auto-) associative memory can serve as models for cell assemblies. Note that the auto-associatively stored patterns in Fig.1 can be interpreted as a cell assembly in this sense. Using the Willshaw model of associative memory as a model for neural cell assemblies has a number of advantages over other models like the popular Hopfield nets (Hopfield 1982, 1984).

The binary pattern vectors (0,1) reflect the binary nature of spikes. The binary synaptic matrix (0,1) reflects the fact that a cortical neuron can make only one type of connection: either excitatory or inhibitory. In contrast, the Hopfield model, for example, requires synapses of both signs on the same axon and even possible sign changes of the same synapse. Therefore the Hopfield model can be adequate only for modelling at a lower resolution, interpreting one Hopfield node as a mixed group of excitatory and inhibitory neurons. Even then it is especially difficult to interpret models using spiking neurons (e.g., Ritz et al. 1994). Moreover, it has not yet been clearly verified by neurophysiological experiments that learning at inhibitory synapses (which would be required by Hopfield nets) follows the rule postulated by Hopfield.

The Willshaw model requires sparse activation patterns, which also seems to be the case in the real brain. In contrast, Hopfield nets need very large pattern activity (typically of size n/2 for n neurons).


The binary Willshaw model exhibits a very large storage capacity. Correspondingly, a very large number of cell assemblies (or patterns) can be represented (typically ~n²/(log n)² for n neurons). In contrast, the Hopfield model can represent only a relatively small number of patterns (typically « n for n neurons), and the storage capacity is much smaller. For non-binary synapses the storage capacity (normalised to the required physical memory) even vanishes asymptotically for the Hopfield model, while for the binary Willshaw model we can reach the full 100% if the memory matrix is optimally compressed (Knoblauch 2003).
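As a rough numerical illustration of these scaling laws, using standard textbook estimates for the constants (which are not values taken from this report):

```python
import math

def willshaw_patterns(n):
    # M ~ ln 2 * (n / log2 n)^2 sparse patterns of size k = log2 n,
    # i.e. the ~n^2/(log n)^2 scaling quoted above (textbook estimate).
    k = math.log2(n)
    return math.log(2) * (n / k) ** 2

def hopfield_patterns(n):
    # Classical Hopfield estimate M ~ 0.14 n, i.e. far fewer than n patterns.
    return 0.14 * n

for n in (1_000, 100_000):
    print(n, round(willshaw_patterns(n)), round(hopfield_patterns(n)))
```

Even for moderate n the quadratic-over-polylog growth of the Willshaw estimate quickly dominates the linear Hopfield estimate.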

0.4 Language areas in the brain

When words referring to actions or visual scenes are presented to humans, distributed networks including areas of the motor and visual systems of the cortex become active (e.g., Pulvermüller 1999). The brain correlates of words and their referent actions and objects appear to be strongly coupled neuron ensembles in defined cortical areas. One of the long-term goals of the MirrorBot project is to build a multimodal internal representation using cortical neuron maps, which will serve as a basis for the emergence of action semantics using mirror neurons (Rizzolatti et al. 1999, Rizzolatti 2001, Womble and Wermter 2002). This model will provide insight into the roles of mirror neurons in recognizing semantically significant objects and producing motor actions. In a first step we will focus in this workpackage on modelling different language areas. It is well known that damage to Wernicke’s and Broca’s areas in human patients can impair language perception and production in characteristic ways (e.g., Pulvermüller 1995). While the classical view is that Wernicke’s area is mainly involved in language understanding and Broca’s area mainly in the production of language, recent neurophysiological evidence indicates that Broca’s area is also involved in language perception and interpretation (Pulvermüller 1999, 2003; cf. Alexander 2000).

In this work we will develop a model of several language areas to enable the MirrorBot to understand and react to spoken commands in basic scenarios of the project. For this our model will incorporate, among others, cortical areas corresponding to both Wernicke’s and Broca’s areas.

1. Spiking associative memory

1.1 Spiking associative memory

It turned out that classical one-step retrieval in the Willshaw model faces some severe problems when addressing with superpositions of several patterns (Sommer and Palm 1999; Knoblauch and Palm 2001): in such situations the retrieval result will also be a superposition of the address patterns. The problem is well understood for linear associative memories (cf. Hopfield 1984), where the result is by definition a superposition of the components in the address pattern. However, the Willshaw model is non-linear, and it turned out that by using spiking neurons adequately one can often separate the individual components in one step (Knoblauch and Palm 2001). Additionally, one can iterate one-step spiking or non-spiking retrieval. Spiking neurons can help to separate individual pattern components due to the time structure implicit in a spike pattern: the most excited neuron will spike first, and this breaks the symmetry between multiple pattern components because the immediate feedback will pop out the pattern corresponding to the first spike(s) and suppress the others. We have recently developed a biologically realistic architecture for spiking associative memory compatible with neuroanatomical and neurophysiological results, and also a technically efficient version, the so-called spike counter model (Knoblauch and Palm 2001). Since the simulation of the biological model is relatively expensive in terms of memory and computing time, we preferred the technical model for this project. We slightly modified the original spike counter model for the implementation of many cortical areas, and additionally included a working memory mechanism for sustained activity. This extended model is described in the following.

1.2 Technical spike counter model for the MirrorBot project

Figure 2 illustrates the basic structure of the spike counter model of associative memory as implemented for the MirrorBot project. A local population of n neurons constitutes a cortical area modelled as spiking associative memory using the spike counter model (Knoblauch and Palm 2001). Each neuron i has four states:

- Membrane potential xi.

- Counter cHi for spikes received hetero-associatively from N other cortical areas (memory matrices H1,…,HN). In this model variant cHi is not really a counter since the received spikes are weighted by the connection strengths ck of the respective cortico-cortical connections.

- Counter cAi for spikes received by immediate auto-associative feedback (via memory matrix A).

- Counter c for the total number of spikes that have already occurred during the current retrieval within the local area. This counter is the same for all the local neurons.

Figure 2. Spike counter model of spiking associative memory. A local population of n neurons receives external input via N hetero-associative connections Hi, each weighted with ci. This input initiates the retrieval, where local spikes are immediately fed back via the auto-associative connection A. The counters cH, cA, and c represent the number of spikes a neuron receives hetero-associatively, the number it receives auto-associatively, and the number summed over the whole area, respectively (see text for details).


While in the classical Willshaw model synaptic input is summed to obtain the membrane potential, in the spike counter model synaptic input rather determines the temporal change of the membrane potential: the derivative of the membrane potential of a neuron is a function G of its spike counters,

dxi/dt = G(cHi, cAi, c).

We can implement an instantaneous variant of the Willshaw retrieval strategy (see section 0.2) if we choose the function G such that G is positive for cAi ≈ c and negative for cAi « c. A simple linear example would be

G(cH, cA, c) = A·cH + B·(cA − c),

where we can choose A « B, which is also neurophysiologically plausible (Braitenberg and Schüz 1991). An efficient implementation of the spike counter model is given by the following algorithm:

Line 1 initializes all the state variables. In line 2 the working memory effect is implemented, i.e., the last output vector yold of the local neuron population is weighted with c0. Together with the hetero-associative synaptic input from the N external areas this yields the spike counter vector cH. Line 3 initializes the membrane potentials with cH such that only the most excited neurons reach the threshold 0. Line 4 determines the first neuron that will spike (neuron j at time ts). The WHILE loop of lines 5 to 14 iterates as long as there still exists a neuron j that is about to spike at time ts. Line 6 integrates the membrane potentials, line 7 sets the output vector for neuron j, and line 8 prevents neuron j from emitting a second spike. In lines 9 and 10 the spike counters cA and c are updated (Aj is the j-th row of the auto-associative memory matrix A, and “1” is a vector of length n containing only ones). In line 11 the temporal change of the membrane potentials is computed as described above, and lines 12 and 13 determine the next spike. If dx/dt is negative for all neurons no more spikes will occur, and the algorithm ends with the result y.

A sequential implementation of this algorithm needs only O(n log n) steps (for n neurons and logarithmic pattern size), which is no more than required for the sequential implementation of the classical Willshaw model (up to a constant). The model can also be implemented in parallel, which requires O(log² n) steps. Here the classical Willshaw model would be more efficient since it needs only O(log n) steps; the additional factor of log n results from determining the next spike in line 13, which requires determining the minimum of n spike times. In the following sections we will describe the implementation of a model consisting of many inter-connected neuron populations (“areas”) where each of the areas is implemented using exclusively the described spiking associative memory algorithm.
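The event-driven loop described above might be sketched in Python as follows. This is our own reconstruction, not the project's code: the N hetero-associative inputs are collapsed into a single address vector, and the constants a « b (playing the roles of A and B in the linear G above) as well as all names are our assumptions.

```python
import numpy as np

def spike_counter_retrieve(address, A, y_old=None, c0=0.0, a=0.01, b=1.0):
    """Event-driven sketch of spike counter retrieval (our reconstruction)."""
    n = A.shape[0]
    y = np.zeros(n)                      # output spike vector (line 1)
    cA = np.zeros(n)                     # auto-associative spike counters
    c = 0.0                              # total spikes emitted in the area
    cH = address.astype(float)
    if y_old is not None:
        cH = cH + c0 * y_old             # working-memory term (line 2)
    x = cH - cH.max()                    # most excited neurons at threshold 0 (line 3)
    while True:
        dx = a * cH + b * (cA - c)       # dx/dt = G(cH, cA, c) (line 11)
        cand = (y == 0) & (dx > 0)       # neurons still rising toward threshold
        if not cand.any():               # no further spike: retrieval done
            return y
        t = np.full(n, np.inf)
        t[cand] = np.maximum(0.0, -x[cand]) / dx[cand]  # times to threshold (lines 12-13)
        j = int(np.argmin(t))            # next neuron to spike
        x = x + dx * t[j]                # integrate potentials up to the spike (line 6)
        y[j] = 1.0                       # neuron j spikes (line 7)
        x[j] = -np.inf                   # prevent a second spike (line 8)
        cA += A[j]                       # immediate auto-associative feedback (line 9)
        c += 1.0                         # area-wide spike count (line 10)

# Addressing with a superposition of two stored assemblies: the first
# spike breaks the symmetry and only one assembly is retrieved.
n = 20
u1 = np.zeros(n); u1[:4] = 1
u2 = np.zeros(n); u2[10:14] = 1
A = ((np.outer(u1, u1) + np.outer(u2, u2)) > 0).astype(float)
y = spike_counter_retrieve(u1 + u2, A)
assert (y == u1).all()                   # one component popped out, the other suppressed
```

After the first spike the term b·(cA − c) becomes strongly negative for all neurons outside the ignited assembly, which is exactly the pop-out-and-suppress effect described above.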

2. A global model for language and action

2.1 A simple MirrorBot scenario

One simple scenario occurring in the MirrorBot project is to give a spoken command “Bot show plum!” to the MirrorBot, indicating that the robot should seek the plum and subsequently point to it (e.g., using a laser pointer mounted on its camera). Although this is quite a simple scenario, it already requires a rather complex architecture.

2.2 Draft of a “minimal” model for the simple MirrorBot scenario

A perhaps minimal architecture to implement the described robot behaviour is shown in Figure 3.

Figure 3. A model draft for a simple scenario of the MirrorBot project, like reacting to the spoken command “Bot show plum!” (see text). The model comprises several inter-connected cortical areas corresponding to auditory, grammar, visual, goal, and motor processing. Additionally the model comprises evaluation fields and activation fields. Each area of this model can be implemented exclusively by spiking associative memory as described in this report. Currently we have implemented only the language areas.

This draft of a model consists of many cortical areas that can be implemented using the spiking associative memories described in the last sections. The model consists of auditory areas to represent spoken language, grammar areas to interpret spoken sentences, visual areas to process visual input, goal areas to represent action schemes (for our scenario: first seek the plum, and then point to it), and motor areas to represent motor output (such as turning the head when seeking an object in the visual field). Additionally we have auxiliary areas or fields to activate and deactivate the cortical areas, to compare corresponding representations in different areas, and to implement attention. Currently we have implemented only the language areas as a proof of concept. For this we have modified the original model draft as described in the following section.

3. Implementation of cortical language areas

3.1 Cortical language areas

In a first step we tried to implement the language processing part of the global model draft proposed in the last section. For this we adapted the language areas for implementation using only spiking Willshaw associative memories as described in section 1. Figure 4 shows the 15 areas of our model for cortical language processing. Each of the areas is modelled as a spiking associative memory of 100 neurons. For each area we defined a priori a set of binary patterns (i.e., subsets of the neurons) constituting the neural assemblies. These patterns were stored auto-associatively in the local synaptic connectivity according to a simple Hebbian learning rule (see Fig. 1 in section 0.2). In addition to the local synaptic connections we also modelled extensive inter-areal hetero-associative connections between the cortical areas (see below).

Figure 4. Cortical language areas. Each of the 15 areas was implemented as a spiking associative memory (size 100 neurons) including auto-associative synaptic connections as described in section 1. Areas A1, A2, A3 (yellow) implement primary auditory processing such as word recognition. Grammar areas A4 and A5 (blue) perform grammatical processing of sentences, e.g. segmenting a sentence into its grammatical components like subject, predicate, and object. At the bottom one can see the activation fields (black), which serve to activate and deactivate the cortical areas (see text for details). The area names do not refer to the names of real brain areas used by neurophysiologists. However, we suggest that areas A1, A2, A3 might be interpreted as parts of Wernicke’s area, and area A4 as a part of Broca’s area. The complex of the A5 areas (A5-S, A5-P, A5-O1-a, A5-O1, A5-O2-a, A5-O2) might be interpreted as parts of Wernicke’s or Broca’s areas.


The model can roughly be divided into three parts.

Primary cortical auditory areas A1, A2, and A3: First, auditory input is represented in area A1 by primary linguistic features (such as phonemes), and subsequently classified with respect to function (area A2) and content (area A3).

Grammatical areas A4, A5-S, A5-O1-a, A5-O1, A5-O2-a, and A5-O2: Area A4 contains information about previously learned sentence structures, for example that a sentence starts with the subject followed by a predicate. The other areas contain representations of the different sentence constituents (such as subject, predicate, or object, see below for details).

Activation fields af-A4, af-A5-S, af-A5-P, af-A5-O1, and af-A5-O2: The activation fields are relatively primitive areas that are connected to the corresponding grammar areas. They serve to activate or deactivate the grammar areas in a rather unspecific way.

We stress that the area names we have chosen should not be interpreted as the names of cortical areas of the real brain used by neurophysiologists. However, we suggest that areas A1,A2,A3 might be interpreted as parts of Wernicke’s area, and area A4 as a part of Broca’s area. The complex of the grammatical role areas A5 might be interpreted as parts of Broca’s or Wernicke’s area. The activation fields can be interpreted as thalamic nuclei.

3.2 Neural assemblies of the language areas

Each area consists of spiking neurons that are connected by local synaptic feedback. Previously learned word features, whole words, or sentence structures are represented by neural assemblies (subgroups of neurons, i.e., binary pattern vectors), and laid down as long-term memory in the local synaptic connectivity according to Hebbian coincidence learning (Hebb 1949). In our implementation we used the Willshaw model of associative memory for binary patterns and binary synapses (see section 0.2). Figure 5 illustrates as an example the situation for the grammatical object area A5-O1. It consists of 10x10=100 neurons. Previously learned entities are represented by neural assemblies of 5 out of 100 neurons (i.e., the patterns are binary vectors of length 100 that contain 5 ones and 95 zeros). Overlaps of different patterns can be used to express similarities between the represented entities.

Figure 5. The architecture of grammatical object area A5-O1 as an example. It consists of an array of 100 neurons. Patterns (or equivalently, neural assemblies) are subsets of the 100 neurons, i.e. binary vectors of length 100. The patterns are generated a priori “by hand”, and can be interpreted as the long-term memory of the model. Shown are three assemblies representing a dog, a cat, and a plum. Similarities between the represented entities can be expressed by the overlaps of the patterns.
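A way to generate such 5-of-100 assemblies with a controlled overlap might be sketched as follows. The helper below is hypothetical: the report's patterns were defined by hand, and the random construction is only our illustration of overlap as a similarity code.

```python
import numpy as np

rng = np.random.default_rng(0)
N, K = 100, 5                # area size and assembly size from the report

def make_assembly(overlap_with=None, shared=0):
    # Draw a K-of-N binary assembly, optionally forcing `shared` neurons
    # in common with an existing assembly to encode semantic similarity.
    idx = set()
    if overlap_with is not None:
        chosen = rng.choice(np.flatnonzero(overlap_with), size=shared, replace=False)
        idx.update(int(i) for i in chosen)
    while len(idx) < K:
        idx.add(int(rng.integers(N)))
    v = np.zeros(N, dtype=np.uint8)
    v[sorted(idx)] = 1
    return v

dog = make_assembly()
cat = make_assembly(overlap_with=dog, shared=2)   # cat shares neurons with dog
plum = make_assembly()                            # unrelated entity

assert dog.sum() == cat.sum() == plum.sum() == K
assert (dog & cat).sum() >= 2                     # similarity as pattern overlap
```

With K = 5 active out of N = 100 neurons the patterns are sparse, as required by the Willshaw model discussed in section 0.3.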


At first glance it may seem artificial to define the neural assemblies a priori “by hand”. However, it is the easiest way of getting our model working as a proof of concept. In a second step we will extend our model such that new words (and possibly also new sentence structures) can be learned online. On the other hand, this will also require additional areas to control learning.

In the following we describe the function and the neural assemblies of the individual areas in more detail:

Area A1 is a primary auditory cortical area which receives preprocessed acoustic input. Currently we have defined one neural assembly for each entity that can occur in the MirrorBot scenarios. In future we plan to implement a more general coding scheme where acoustic input is represented by elementary linguistic features such as phonemes (cf. Fromkin and Rodman 1974).

Area A2 contains neural assemblies ‘_start’, ‘_end’, ‘_blank’, and ‘_word’. This area classifies the representations in area A1. For example, if area A1 contains phonetically the representation of a known word this will activate the assembly ‘_word’ in area A2. If area A1 represents the transition from one word to the next then this would activate the ‘_blank’-assembly. And if the beginning or ending of a sentence is detected this is represented by the ‘_start’ and ‘_end’ assemblies, respectively.

Area A3 contains one assembly for each word that can occur in the MirrorBot scenarios. Currently the assemblies are designed such that similarities between the represented entities are reflected in the overlaps between the different assemblies. Alternatively, the assemblies could also reflect similarities between the linguistic (e.g., phonetic) representations of the entities, i.e., word form rather than word content. Moreover, we have the idea that in the global model area A3 would be reciprocally connected with various non-language areas representing other (e.g., visual, motor, etc.) semantic aspects of the entities.

The grammatical sequence area A4 currently contains assemblies ‘S’, ‘P’, ‘vp1_O1’, ‘vp2_O1’, ‘vp2_O2’, and ‘_end’ representing the grammatical constituents of a sentence like subject, predicate and object. In order to represent grammatically correct sentence structures the elementary assemblies are linked into sequences. Currently we have stored the sequences (S, P, vp1_O1) and (S, P, vp2_O1, vp2_O2) according to two verb patterns. For example, verb pattern vp1 represents verbs expecting only one object (as for the verb “show” in the sentence “Bot show plum!”), and similarly vp2 represents verbs expecting two objects (as for the verb “put” in the sentence “Bot put apple (to) plum!”). Information about the valid sequences is stored in a delayed hetero-associative synaptic feedback connection within area A4. For example, for the sequence (S, P, vp1_O1, _end) we have to store the associations (S, P), (P, vp1_O1), and (vp1_O1, _end). Figure 6 illustrates the currently stored grammatical sequences.
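The stored pairwise associations can be pictured as a transition table. In the sketch below the assembly names stand in for the binary patterns, and the dict encoding is our own illustration of the delayed hetero-associative feedback:

```python
# The two grammatical sequences currently stored in area A4.
sequences = [
    ("S", "P", "vp1_O1", "_end"),
    ("S", "P", "vp2_O1", "vp2_O2", "_end"),
]

# The delayed hetero-associative feedback stores each
# (predecessor, successor) pair of every sequence.
transitions = {}
for seq in sequences:
    for pre, post in zip(seq, seq[1:]):
        transitions.setdefault(pre, set()).add(post)

assert transitions["S"] == {"P"}
# After "P" both verb patterns are possible; feedback from the predicate
# representation (area A5-P) selects the appropriate branch.
assert transitions["P"] == {"vp1_O1", "vp2_O1"}
```

The ambiguity after ‘P’ makes explicit why feedback from the predicate area is needed to choose between the vp1 and vp2 sequences.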

Area A5-S contains assemblies representing the known entities that can be the subject of a sentence. Currently we have only two possible subjects, namely ‘bot’, and ‘sam’ representing the robot and its instructor, respectively.

Area A5-P contains assemblies ‘stop’, ‘go’, ‘move_body’, ‘turn_body’, ‘turn_head’, ‘show’, ‘pick’, ‘lift’, ‘drop’, ‘touch’, and ‘put’, representing the possible predicates of current MirrorBot sentences.

Areas A5-O1 and A5-O2 contain assemblies representing the possible objects of sentences. Currently these are ‘nut’, ‘plum’, ‘dog’, ‘cat’, ‘desk’, ‘wall’, ‘ball’, ‘cup’, ‘apple’, ‘forward’, ‘backward’, ‘left’, ‘right’, ‘up’, and ‘down’.


Areas A5-O1-a and A5-O2-a contain assemblies representing object attributes. Currently these are ‘brown’, ‘blue’, ‘black’, ‘white’, ‘green’, and ‘red’.

Additionally, areas A3, A5-S, A5-P, A5-O1, A5-O1-a, A5-O2, and A5-O2-a also contain an assembly ‘_none’ to represent unknown or inappropriate entities.

The activation fields af-A4, af-A5-S, af-A5-P, af-A5-O1, and af-A5-O2 are rather primitive areas that contain only two or three assemblies: ‘ON’, ‘OFF’, and sometimes ‘OFF_p’. The activation fields serve to activate or deactivate the corresponding cortical area in a rather unspecific way. Therefore we have the idea that the activation fields correspond to sub-cortical (for example thalamic) structures rather than cortical areas.

Figure 6. The grammatical sequences stored in area A4. As for the other cortical areas, area A4 holds the elementary assemblies ‘S’, ‘P’, ‘vp1_O1’, ‘vp2_O1’, ‘vp2_O2’, and ‘_end’, representing grammatical sentence parts such as subject, predicate, and object, in its local auto-associative synaptic feedback (see text for details). Additionally, these elementary assemblies are connected into sequences by delayed hetero-associative synaptic feedback within area A4. Currently the two sequences (S, P, vp1_O1, _end) and (S, P, vp2_O1, vp2_O2, _end) are represented as illustrated.

3.3 Synaptic connections between the cortical areas and activation fields

In addition to the local auto-associative synaptic feedback defining the neural assemblies, the different cortical areas are interconnected by various hetero-associative synaptic connections. These connections generally associate pairs of assemblies in different areas, and together with the local auto-associative synaptic feedback they constitute the long-term memory of the model. Figure 7 shows all the inter-areal connections of our model. In addition, Fig. 7 also shows the connections from or to areas not implemented in our model, which therefore define the interface to the more global model (see section 2). For example, peripheral auditory input (for example “Bot show plum!”) will enter area A1 from more peripheral acoustic preprocessing stages. Similarly, the activation fields can be controlled from external areas. Vice versa, neural activity in areas A5-S, A5-P, A5-O1, A5-O1-a, A5-O2, and A5-O2-a represents the result of the grammatical analysis of auditory input, and can be used by external cortical areas. For example, in our more global model of section 2 information from the grammar areas would be routed on to the goal areas (and from there further to the motor areas), where the action corresponding to a spoken command (such as “Bot show plum!”) can be planned and performed. The circular black arrow of area A4 represents a hetero-associative synaptic feedback connection storing the associations of the local assemblies that constitute sequences representing valid grammatical sentences (see above).


Figure 7. Inter-areal connections of the cortical language model. Each black arrow corresponds to a hetero-associative synaptic connection that specifically associates corresponding assemblies of the involved presynaptic and postsynaptic cortical areas by Hebbian learning. Together with the local auto-associative synaptic feedback of each area, the hetero-associations constitute the long-term memory of the model. In contrast, the blue feedback arrows indicate areas that implement the working memory mechanism described in section 1. For example, it may be necessary for an area such as A5-S to maintain the representation of the subject of a sentence (such as “Bot”), ignited by auditory bottom-up input from areas A1 and A3, while other parts of the sentence are already being processed in the primary auditory areas A1 and A3. When processing a sentence, the most important information stream flows from the primary auditory area A1 via A3 to the grammatical role areas A5-S, A5-P, A5-O1, A5-O1-a, A5-O2, and A5-O2-a. From there, other parts of the global model can use the grammatical information to perform, for example, action planning. The routing of information from area A3 to the grammatical role areas is guided by the grammatical sequence area A4, which receives input mainly via the primary auditory areas A1 and A2. This guiding mechanism of area A4 works by appropriately activating the respective grammatical role areas via the activation fields af-A5-S, af-A5-P, af-A5-O1, and af-A5-O2. Vice versa, feedback from the grammatical role areas can select the appropriate grammatical sequence in area A4. Currently we have only implemented the connection from area A5-P to A4, where the predicate representation can select either the (S, P, vp1_O1, _end) or the (S, P, vp2_O1, vp2_O2, _end) sequence (see above).
Currently we have implemented only a minimal model comprising the inter-areal connections necessary to perform some simple MirrorBot scenarios, such as responding to a single spoken command like “Bot show plum!”. It turned out that the model functions well in this context. Note that most of the connections of our model are unidirectional, while most cortico-cortical projections in the brain seem to be bidirectional. We believe that bidirectional (or reciprocal) inter-areal connections will become necessary for more complex situations, such as a cocktail-party situation with several simultaneously speaking agents, where in the primary areas one single signal must be selected by top-down attentional feedback (see Knoblauch and Palm 2001, 2002_a, 2002_b), or for more complex sentence structures where the correct grammar sequence (as in area A4) can only be selected by top-down “back-tracking” input. In the next section we will see how the different areas of our model interact to segment a simple sentence of the MirrorBot scenario into its grammatical components.

4. Language Processing in the implemented cortical areas

4.1 Scenario

In the following we present the protocol for a simulation of our model of cortical language areas when it is stimulated with a simple spoken command in a MirrorBot scenario. The spoken command was

“Bot put plum green apple!”

which means that the robot Bot should put the plum next to the green apple. This input signal was subsequently enriched during acoustic preprocessing with additional markers that signal, for example, word borders or the beginning and end of a sentence. Thus the input to the primary auditory cortical area A1 was

“_start Bot _blank put _blank plum _blank green _blank apple _end”.

How this sentence is segmented into its grammatical parts by our model of cortical language areas is described in the following simulation protocol.

4.2 Simulation protocol

The sentence “Bot put plum green apple” was processed in 36 simulation steps, where in each step an associative update of the activity state in the areas is computed as described in section 1. We observed that a grammatically correct sentence is processed successfully only if the auditory input satisfies some temporal constraints: the representation of a word in the primary areas must remain active for a minimal number of steps, which turned out to be about 4-5 simulation steps. This is not a real limitation, since we assume that one simulation step corresponds to about 5-10 ms of time in the real brain (Knoblauch and Palm 2001), not including transmission delays between distant areas. Since the duration of syllables is some hundred milliseconds, the constraint would be satisfied even if realistic transmission delays were included in the model. Thus in the following we write the stimulus as seen by the primary auditory area A1, where the number in brackets behind an entity is the number of simulation steps for which that entity was activated.

_start(1) Bot(5) _blank(1) put(5) _blank(1) plum(5) _blank(1) green(5) _blank(1) apple(5) _end(1)
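The bracketed step counts can be expanded mechanically into the step-by-step token stream seen by area A1. A small illustrative helper (the function name and list representation are our own assumptions; the stimulus itself occupies 31 of the 36 simulation steps, the remaining steps presumably letting activity propagate through the higher areas):

```python
import re

def expand_stimulus(s):
    """Expand entries like 'Bot(5)' into one list element per step."""
    steps = []
    for token, count in re.findall(r"(\S+)\((\d+)\)", s):
        steps.extend([token] * int(count))
    return steps

stim = ("_start(1) Bot(5) _blank(1) put(5) _blank(1) plum(5) "
        "_blank(1) green(5) _blank(1) apple(5) _end(1)")
seq = expand_stimulus(stim)
assert len(seq) == 31            # 1 + 5*5 + 4*1 + 1 stimulus steps
assert seq[0] == "_start" and seq[1:6] == ["Bot"] * 5
```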

Figure 8 shows the activity state of the model after the 6th simulation step. Solid arrows indicate currently involved connections, dashed arrows indicate relevant associations in previous simulation steps. At the beginning all activation fields except af-A4 have the ‘OFF’ assembly activated due to input bias from external neuron populations. This means that the grammatical role areas are initially inactive. Only activation field af-A4 enables area A4 to be activated.

Figure 8. Activation state of the model of cortical language areas after simulation step 6. Initially the ‘OFF’ assemblies of the activation fields are active, except for af-A4, whose ‘ON’ assembly is activated by external input. The processing of the sentence beginning activates the ‘_start’ assembly in area A2, which activates ‘S’ in A4, which activates af-A5-S, which primes A5-S. When subsequently the word “Bot” is processed in area A1, this activates the ‘_word’ assembly in area A2 and the ‘bot’ assemblies in areas A3 and A5-S. Solid arrows indicate currently involved connections, dashed arrows indicate relevant associations in previous simulation steps.

This happens, for example, when auditory input enters area A1. First the ‘_start’ representation in area A1 is activated, indicating the beginning of a new spoken sentence. This activates the corresponding ‘_start’ assembly in area A2, which in turn activates the ‘S’ assembly in the sequence area A4, since the next processed word is expected to be the subject of the sentence. The ‘S’ assembly therefore activates the ‘ON’ assembly in the activation field af-A5-S, which primes area A5-S such that the next word processed in area A3 will be routed to A5-S. Indeed, as soon as the word representation of “Bot” is activated in area A1, it is routed two steps further via area A3 to area A5-S.
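The priming mechanism just described amounts to a gate: a grammatical role area accepts the word representation arriving from A3 only while its activation field has the ‘ON’ assembly active. A deliberately simplified symbolic sketch of this routing (dictionary-based, not the spiking implementation):

```python
def route_word(word, activation_fields):
    """Copy the word representation from A3 into every primed role area;
    areas whose activation field is 'OFF' receive nothing."""
    return {area: (word if field == "ON" else None)
            for area, field in activation_fields.items()}

# After '_start', area A4 has activated af-A5-S, so only A5-S is primed.
fields = {"A5-S": "ON", "A5-P": "OFF", "A5-O1": "OFF", "A5-O2": "OFF"}
routed = route_word("bot", fields)
assert routed == {"A5-S": "bot", "A5-P": None,
                  "A5-O1": None, "A5-O2": None}
```

The same gate, with a different field switched ‘ON’ by A4, later routes “put” to A5-P and “plum” to A5-O1.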

In step 7 the ‘_blank’ representation enters area A1, indicating the border between the words “Bot” and “put”. This activates the ‘_blank’ assembly in area A2, which activates the ‘OFF’ assembly in activation field af-A4. This results in a deactivation of the sequence area A4. Since the ‘_blank’ assemblies in areas A1 and A2 are active for only a single simulation step and are immediately followed by the representations of the word “put”, the ‘ON’ assembly in activation field af-A4 becomes active again one step later and reactivates the sequence area A4. However, since the ‘S’ assembly in A4 was intermittently erased, the delayed intra-areal feedback of area A4 switches activity to the next sequence assembly, ‘P’. This means that the next processed word is expected to be the predicate of the sentence. The ‘P’ representation in area A4 activates the ‘ON’ assembly in activation field af-A5-P, which primes area A5-P to be activated by input from area A3. In the meantime the representations of the input word “put” were activated in areas A1 and A3, and from there routed further to area A5-P. Thus we arrive at the situation after simulation step 13 shown in Figure 9.

Figure 9. System state of the model of cortical language areas after simulation step 13. The ‘_blank’ representing the word border between the words “Bot” and “put” is recognized in area A2, which activates the ‘OFF’ representation in af-A4, which deactivates area A4 for one simulation step. Immediately after the ‘_blank’ the word “put” is processed in A1, which activates the ‘_word’ and ‘put’ assemblies in areas A2 and A3, respectively. This again activates area A4 via af-A4. Due to the delayed intra-areal feedback this activates the ‘P’ representation in area A4 as the next sequence assembly. This activates the ‘ON’ assembly in af-A5-P, which primes A5-P to be activated by the ‘put’ representation in area A3.

Next, similarly as before, a word border is processed in areas A1 and A2, which again activates the ‘OFF’ assembly in activation field af-A4 for one step. This switches the sequence assembly in area A4 one step further. At this stage the switching is guided by the predicate representation in area A5-P: since the verb “put” expects two objects, it biases the switching such that the ‘vp2_O1’ assembly is activated in area A4 (cf. Fig. 6). The ‘vp2_O1’ representation activates the ‘ON’ assembly in activation field af-A5-O1, which primes both areas A5-O1-a and A5-O1. The next processed word is “plum”, which activates the assembly ‘plum’ in areas A1 and A3, and again the ‘_word’ representation in area A2. From area A3 the ‘plum’ assembly is routed further to the primed area A5-O1. Since A5-O1-a is also primed but contains no representation corresponding to “plum”, the ‘_none’ assembly is activated in area A5-O1-a. Figure 10 illustrates the situation at simulation step 19. Then the next word border, between the words “plum” and “green”, is processed. This again activates the ‘_blank’ representations in areas A1 and A2, and the ‘OFF’ assembly in activation field af-A4. This switches the sequence assembly in area A4 to its next part, namely the ‘vp2_O2’ representation, which means that the next word is expected to belong to the second object of the sentence.
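The switching behaviour of the sequence area A4, including the predicate-guided choice between the vp1 and vp2 branches, can be summarised as a small state-transition table. The following sketch is purely symbolic; the verb set is a hypothetical stand-in for whatever one-object predicates the MirrorBot lexicon contains:

```python
# The two sequences stored in A4 (cf. Fig. 6).
SEQUENCES = {
    "vp1": ["S", "P", "vp1_O1", "_end"],
    "vp2": ["S", "P", "vp2_O1", "vp2_O2", "_end"],
}
ONE_OBJECT_VERBS = {"show", "lift", "touch"}  # assumed one-object predicates

def next_state(state, predicate=None):
    """Advance A4 by one sequence element, as triggered by one
    OFF/ON blip of af-A4 at a word border."""
    if state == "S":
        return "P"
    if state == "P":
        # Branch selection biased by the predicate representation in A5-P.
        return "vp1_O1" if predicate in ONE_OBJECT_VERBS else "vp2_O1"
    for seq in SEQUENCES.values():
        if state in seq and state != "_end":
            return seq[seq.index(state) + 1]
    return "_end"

# "Bot put plum green apple": the predicate "put" selects the vp2 branch.
states, s = ["S"], "S"
for _ in range(3):
    s = next_state(s, predicate="put")
    states.append(s)
assert states == ["S", "P", "vp2_O1", "vp2_O2"]
```

In the spiking model the “table lookup” is realised by the delayed intra-areal feedback of A4 together with the hetero-associative input from A5-P, not by an explicit data structure.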


Figure 10. Activity state of the model of cortical language areas after simulation step 19. The word border between the words “put” and “plum” activates the ‘_blank’ assemblies in areas A1 and A2, which activates the ‘OFF’ representation in activation field af-A4, which temporarily deactivates area A4. When the ‘plum’ representation enters area A1, this activates the ‘_word’ representation in area A2, which again activates the ‘ON’ assembly in activation field af-A4. This reactivates area A4, where the delayed auto-associative feedback within area A4, together with synaptic input from area A5-P, activates the sequence assembly ‘vp2_O1’. This activates activation field af-A5-O1 and primes areas A5-O1-a and A5-O1. At the same time the ‘plum’ representation is activated in area A3 and routed further to the primed area A5-O1. Since no representation corresponding to ‘plum’ exists in area A5-O1-a, here the ‘_none’ assembly is activated.

The ‘vp2_O2’ representation in area A4 activates the ‘ON’ assembly in activation field af-A5-O2, which in turn primes areas A5-O2-a and A5-O2. The next processed word, “green”, is represented by corresponding assemblies in areas A1 and A3 and from there activates the ‘green’ assembly in area A5-O2-a. However, since no representation corresponding to “green” exists in the object area A5-O2, here the ‘_none’ assembly is activated. The situation after simulation step 25 is illustrated in Figure 11. At this stage the next word border must not switch the sequence assembly in area A4 further, because the second object is not yet complete: we have only processed its attribute (“green”) so far. A rather subtle (but not implausible) mechanism in our model guarantees that the switching is prevented at this stage. The ‘_none’ representation in area A5-O2 has two effects. First, together with the ‘_blank’ assembly in area A2 representing the word border, it activates the ‘OFF_p’ assembly in af-A5-O2. Second, by a projection to activation field af-A4 it prevents the activation of the ‘OFF’ assembly in af-A4 and therefore the switching in the sequence area A4. The ‘OFF_p’ assembly deactivates only area A5-O2, but not A5-O2-a. As soon as A5-O2 is deactivated and the next word, “apple”, is processed and represented by the ‘_word’ assembly in area A2, input from the ‘vp2_O2’ assembly in area A4 again activates the ‘ON’ assembly in activation field af-A5-O2, and thus the ‘apple’ assembly in area A5-O2 is activated via areas A1 and A3. Figure 12 shows the situation after simulation step 30.
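The role of the ‘_none’ assembly can be condensed into two symbolic rules: an unknown word in a primed object area ignites ‘_none’, and ‘_none’ in A5-O2 vetoes the switching of A4 at the next word border. A minimal sketch of these two rules (the vocabularies are illustrative assumptions, not the model’s actual lexicon):

```python
OBJECTS = {"plum", "apple", "nut"}      # assumed vocabulary of A5-O2
ATTRIBUTES = {"green", "red", "blue"}   # assumed vocabulary of A5-O2-a

def process_object_word(word):
    """Return the assemblies ignited in (A5-O2, A5-O2-a) by one word;
    an area with no matching representation ignites '_none'."""
    o2 = word if word in OBJECTS else "_none"
    o2a = word if word in ATTRIBUTES else "_none"
    return o2, o2a

def a4_may_switch(o2_assembly):
    """af-A4 may activate 'OFF' (allowing A4 to switch) only if the
    object slot is filled; '_none' in A5-O2 vetoes the switch."""
    return o2_assembly != "_none"

o2, o2a = process_object_word("green")
assert (o2, o2a) == ("_none", "green")
assert not a4_may_switch(o2)        # 'green' alone must not advance A4
o2, _ = process_object_word("apple")
assert a4_may_switch(o2)            # 'apple' completes the second object
```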


Figure 11. The system state of the model of cortical language areas after simulation step 25. The word border between “plum” and “green” activates the ‘_blank’ representations in areas A1 and A2, which initiates the switching of the sequence assembly in area A4 to its next part as described before (see text). The activation of the ‘vp2_O2’ assembly activates the ‘ON’ assembly in activation field af-A5-O2, which primes areas A5-O2-a and A5-O2. The processing of the word “green” then activates the ‘green’ assembly in area A5-O2-a via areas A1 and A3. Since no corresponding representation exists in area A5-O2, here the ‘_none’ assembly is activated.

After simulation step 30 the processing is almost complete. All grammatical role areas contain the appropriate information, i.e., the sentence has been segmented into its grammatical parts. Finally the end of the sentence is processed and represented by the activation of the ‘_end’ assemblies in areas A1 and A2. Like the ‘_blank’ assembly in area A2, this activates the ‘OFF’ assembly in activation field af-A4, and this deactivates the grammatical sequence area A4. Afterwards areas A1, A2, and A3 become inactive because input is no longer delivered from the acoustic periphery. Therefore the ‘ON’ assembly in activation field af-A4 is again activated by its external bias (see above), and thus the sequence assembly in area A4 is switched to its final part, ‘_end’. The ‘_end’ representation could serve as confirmation that a meaningful sentence has just been processed and trigger the routing of the corresponding information to other external areas. In our more global model draft (see section 2), the information of the grammatical role areas could be routed on to the goal areas, and from there further to motor areas where the action corresponding to the spoken command could be planned and performed. Perhaps one could even identify the grammatical role areas with some of the goal areas of the more global model.

In summary, we have presented a neural model for language processing, consisting exclusively of interconnected spiking associative memories, that is capable of processing relatively simple sentences appropriate for the MirrorBot scenarios. So far we have made no attempt to interpret our model in terms of known areas of the real brain. In the concluding discussion we will relate our model to anatomically known brain areas and discuss its relation to the proposed mirror system. Further, we will discuss if and how our model can be extended to process even more complex sentences.

Figure 12. System state of the model of cortical language areas after simulation step 30. When the word border between “green” and “apple” is processed, the ‘_none’ assembly is still activated in area A5-O2 (see Fig. 11). This prevents the activation of the ‘OFF’ assembly in activation field af-A4 and therefore also the switching in area A4 from the sequence part ‘vp2_O2’ to ‘_end’ (cf. Fig. 6). Furthermore, together with the word border representation ‘_blank’ in area A2, the ‘_none’ assembly in area A5-O2 activates the ‘OFF_p’ assembly in af-A5-O2, which deactivates area A5-O2. As soon as the word “apple” is processed and represented by the ‘_word’ and ‘apple’ assemblies in areas A2 and A3, respectively, the ‘ON’ assembly is again activated in activation field af-A5-O2. This primes area A5-O2 again, and therefore the ‘apple’ representation can be activated via input from area A3. The next word border (or the end of the sentence) switches the sequence assembly in area A4 to its final part (‘_end’) as before.

5. Discussion

5.1 Summary of the presented results

We have developed and implemented a neural model for language processing in the context of the MirrorBot project. In this model, elementarily preprocessed verbal input can be segmented into its grammatical components. The model consists of 15 cortical and possibly subcortical areas, each implemented as a spiking associative memory. The areas are interconnected by hetero-associative connections obtained by simple Hebbian learning of associations between representations in the different areas. The most important part of our model is the sequence area A4 (see Fig. 4 and Fig. 6), where information about the possible structures of sentences is stored. We chose the architecture of our model to obtain a quasi-minimal model for the requirements of the MirrorBot scenarios. These involve the understanding of simple verbal commands like “Bot show plum!” or “Bot put plum (to) green apple!”, which should cause the robot to point towards a plum located somewhere in the visual field, or to move the plum to the location of the green apple. For the latter example we have presented parts of the simulation protocol when our model is stimulated with this input. Moreover, we have developed the draft of a more global model of perception, language, and action, in which the currently implemented model of cortical language areas is embedded.

5.2 Scalability of the model

The current implementation of our model of language areas is designed to process simple spoken sentences of the MirrorBot scenarios (see above). Therefore we have currently stored only two sequences in the grammatical sequence area A4, corresponding to sentences with one object or with two objects. Although these two sentence patterns cover all possible sentences according to the current definition of the MirrorBot grammar (see work package 7), we believe that our approach can be extended to more complex sentences. It will certainly be no major problem to also include the structure of simple questions like “Where is plum?” in our model, although this would introduce a kind of non-determinism, since it is not clear a priori whether the first word will be a subject or a question word. This is in contrast to the static processing of the schemes (S,P,O) or (S,P,O1,O2) implemented so far (see sections 3 and 4). However, the spiking associative memories are adaptable versions of finite automata (as computers are). It is well known that finite automata can only recognise so-called regular languages (Hopcroft and Ullman 1969; cf. Chomsky 1957), while more complex languages such as English or German apparently belong to the class of so-called context-sensitive languages, which would require a Turing machine to be recognised. Note that the only difference between a Turing machine and a finite automaton is that the Turing machine has infinite memory (which it can access arbitrarily, in contrast to a pushdown automaton). Since from a naturalistic perspective humans may themselves be only finite automata, this suggests that the common use of language by humans is also only regular. For example, humans normally use only a very limited stack depth in nested relative clauses, and to decide whether an exorbitantly complex sentence is really correct, humans usually need paper and pencil to extend their limited working memory capacity.
Thus it should be possible, in principle, to implement quite complex language structures using our modelling approach.
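To illustrate the claim that the currently implemented sentence schemes form a regular language, they can be recognised by a small deterministic finite automaton. This sketch is simplified: the word classes are hypothetical stand-ins for the MirrorBot lexicon, and only the optional attribute before the second object (as in the example sentence of section 4) is modelled:

```python
SUBJECTS = {"bot"}
VP1, VP2 = {"show"}, {"put"}           # one- and two-object predicates
OBJECTS, ATTRIBUTES = {"plum", "apple"}, {"green"}

def accepts(words):
    """DFA for  S P1 O  |  S P2 O [A] O  (attribute optional)."""
    state = "start"
    for w in words:
        if state == "start" and w in SUBJECTS:   state = "S"
        elif state == "S" and w in VP1:          state = "O_last"
        elif state == "S" and w in VP2:          state = "O1"
        elif state == "O1" and w in OBJECTS:     state = "O2"
        elif state == "O2" and w in ATTRIBUTES:  state = "O2"
        elif state == "O2" and w in OBJECTS:     state = "end"
        elif state == "O_last" and w in OBJECTS: state = "end"
        else:
            return False
    return state == "end"

assert accepts("bot show plum".split())
assert accepts("bot put plum green apple".split())
assert not accepts("bot put plum".split())   # second object missing
```

Extending the grammar (e.g., by question forms) amounts to adding states and transitions to such an automaton, which in the model corresponds to storing additional sequences in area A4.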

5.3 Relation of our model to known language areas of the real brain

It is well known that language comprehension and language production in humans are impaired in characteristic ways after lesions of certain cortical and sub-cortical areas of the brain, even though the peripheral hearing and speaking apparatus remains unimpaired. The best-investigated phenomena are the diverse aphasia syndromes (e.g., Alexander 2000; cf. Pulvermüller 1995). For example, lesions of Broca’s area in the frontal lobe lead to characteristic difficulties in speech production (but also in language comprehension, cf. Alexander 2000; Pulvermüller 1999, 2003), while semantic content remains normal. In contrast, superior temporal lesions in Wernicke’s area rather preserve fluent language output with normal grammatical elements, while content may be extremely paraphasic or empty. It is also known that certain types of aphasia are accompanied by sub-cortical (e.g., thalamic) lesions. However, investigations of the correspondence of the diverse aphasia syndromes (e.g., Broca’s and Wernicke’s aphasia, conduction aphasia, etc.) to lesions of specific cortical or sub-cortical brain areas are still at a rather coarse-grained level. As a starting point for an extended discussion with experimental brain research groups, we suggest that the Wernicke area of the real brain may correspond to areas A2 and A3 (and possibly also A5), since this part of the model deals predominantly with semantic aspects of language (see Figure 4), and that the Broca area of the real brain may correspond to the grammatical sequence area A4 and possibly also to the complex of areas A5, since these areas engage mainly in grammatical processing. However, these areas will need to interact with many more areas of the brain to obtain the required functionality (cf. Pulvermüller 1999). Finally, we find it plausible to assume that the activation fields in our model may correspond to thalamic nuclei below the associated cortex, since this part of the model contains only rather unspecific representations that serve to activate, deactivate, or modulate the corresponding cortical areas.

5.4 Relation to the mirror system

Mirror neurons are neurons, found for example in the ventral premotor area F5 of monkeys, that are activated not only when a specific action is performed by the monkey but also when the monkey sees the same action performed by another monkey (e.g., Rizzolatti et al. 1999; Rizzolatti and Luppino 2001). In this sense mirror neurons perform an abstraction from a specific modality (e.g., acoustic mirror neurons are investigated in work package 9 of the MirrorBot project). This mirror property can be related to language evolution. The so-called mirror system hypothesis (MSH; see Arbib 2002_a, 2002_b) suggests that the matching of neural code for execution and observation of action (e.g., hand movements), as observed in monkeys, is the precursor of parity, i.e., that an utterance (e.g., spoken words, but also gestures) carries similar meaning for producer (e.g., speaker) and receiver (e.g., hearer). The MSH is supported by evidence that the monkey’s premotor area F5 (where mirror neurons are found) might be the homologue of Broca’s area in humans. This implies that language and symbolic processing are closely related to performing actions (e.g., manipulating objects). As discussed in section 5.3, we suggest as a working hypothesis that the complex of areas A4 and A5 may correspond to Broca’s area. While area A4 contains complex grammatical sequence representations which cannot be expected to occur in monkeys, the complex of areas A5 (in particular area A5-P) deals with action representations that are closely connected to the goal areas in our draft of the global model (see Fig. 3), which might be interpreted as prefrontal areas of the real brain. Thus we suggest that in our model the interface between the complex of areas A5 and the goal areas can be interpreted as the mirror system. Moreover, the design of our model was guided by the idea of global cell assemblies (Palm 1982; Knoblauch and Palm 2002_b; cf. Pulvermüller 1999) spanning many cortical areas. In this view it is very natural to obtain multi-modal neurons, also in the language areas, that represent actions and are activated by hearing the corresponding word, seeing the corresponding gesture, or observing the corresponding action. However, these properties of our model can only be investigated in the future, when the complete MirrorBot model is integrated.

5.5 The next steps

We think that our model of cortical language processing already delivers the functionality necessary to cope with the current MirrorBot scenarios (e.g., the grammar definition of work package 7.1, report 2) and even some extensions. We plan to further extend our model of language processing in three directions. First, we will replace the somewhat artificial representations in areas A1 and A3 by a more natural coding scheme (e.g., according to phonemes). Second, it may become necessary for the extended MirrorBot scenarios to further increase the grammatical repertoire, e.g., to deal with questions like “Where is plum?” or “What is this?”. This will probably also require a certain restructuring of the complex of areas A5 and its connections with area A4 and the corresponding activation fields, and, of course, more sequences will have to be stored in A4. Third, we plan to introduce learning of new words. While learning new grammatical sequences in area A4 will probably be too complex, it should be feasible to learn new names for actions, objects, and attributes. However, the most interesting goal for future work will be to integrate our implementation of cortical language areas with other components of the MirrorBot project, such as visual object and gesture recognition, perceptive and motor associative maps, and speech production. Only within the context of such a global model will it become possible to investigate globally distributed cell assemblies and the mirror neuron system. Another important aspect will be to discuss the different representations in our artificial areas in the light of experimental evidence about the ‘corresponding’ brain areas, with the ultimate goal of finding a more reasonable and experimentally well-founded correspondence.

6. References

Alexander A (2000) Aphasia I: Clinical and anatomic issues. In: Farah M, Feinberg T (eds) Patient-based approaches to cognitive neuroscience. MIT Press, Cambridge MA, pp 165-181

Arbib M (2002_a) The mirror system, imitation, and the evolution of language. In: Nehaniv C, Dautenhahn K (eds.) Imitation in animals and artifacts, MIT Press, Cambridge MA, pp 229-280

Arbib M (2002_b) Language evolution: The Mirror system hypothesis. In: Arbib M (ed.) The handbook of brain theory and neural networks: second edition, MIT Press, Cambridge MA, pp 606-611

Braitenberg V, Schüz A (1991) Anatomy of the cortex. Statistics and geometry. Springer, Berlin Heidelberg New York

Braitenberg V (1978) Cell assemblies in the cerebral cortex. In: Heim R, Palm G (eds) Theoretical approaches to complex systems. Springer, Berlin, pp 171-188

Chomsky N (1957) Syntactic structures. Mouton, The Hague

Fromkin V, Rodman R (1974) An introduction to language. Harcourt Brace College Publishers, Fort Worth TX

Hebb D (1949) The organization of behavior. A neuropsychological theory. Wiley, New York

Hopcroft J, Ullman J (1969) Formal languages and their relation to automata. Addison-Wesley

Hopfield J (1982) Neural networks and physical systems with emergent collective computational abilities. Proceedings of the National Academy of Sciences USA 79:2554-2558

Hopfield J (1984) Neurons with graded response have collective computational properties like those of two-state neurons. Proceedings of the National Academy of Sciences USA 81:3088-3092

Knoblauch A (2003) Optimal matrix compression yields storage capacity 1 for binary Willshaw associative memory. To appear in Proceedings of ICANN 2003

Knoblauch A, Palm G (2001) Pattern separation and synchronization in spiking associative memories and visual areas. Neural Networks 14:763-780

Knoblauch A, Palm G (2002_a) Scene segmentation by spike synchronization in reciprocally connected visual areas. I. Local effects of cortical feedback. Biological Cybernetics 87:151-167

Knoblauch A, Palm G (2002_b) Scene segmentation by spike synchronization in reciprocally connected visual areas. II. Global assemblies and synchronization on larger space and time scales. Biological Cybernetics 87:168-184

Kosko B (1988) Bidirectional associative memories. IEEE Transactions on Systems, Man, and Cybernetics 18:49-60

Palm G (1980) On associative memories. Biological Cybernetics 36:19-31

Palm G (1982) Neural assemblies. An alternative approach to artificial intelligence. Springer, Berlin

Palm G (1990) Cell assemblies as a guideline for brain research. Concepts in Neuroscience 1:133-148

Palm G (1991) Memory capacities of local rules for synaptic modification. A comparative review. Concepts in Neuroscience 2:97-128

Pulvermüller F (1995) Agrammatism: behavioral description and neurobiological explanation. Journal of Cognitive Neuroscience 7:165-181

Pulvermüller F (1999) Words in the brain’s language. Behavioral and Brain Sciences 22:253-336

Pulvermüller F (2003) The neuroscience of language. Cambridge University Press

Ritz R, Gerstner W, Fuentes U, van Hemmen J (1994) A biologically motivated and analytically soluble model of collective oscillations in the cortex. II. Applications to binding and pattern segmentation. Biological Cybernetics 71:349-358

Rizzolatti G, Fadiga L, Fogassi L, Gallese V (1999) Resonance behaviors and mirror neurons. Archives Italiennes de Biologie 137:85-100

Rizzolatti G, Luppino G (2001) The cortical motor system. Neuron 31:889-901

Schwenker F, Sommer F, Palm G (1996) Iterative retrieval of sparsely coded associative memory patterns. Neural Networks 9:445-455

Sommer F, Palm G (1999) Improved bidirectional retrieval of sparse patterns stored by Hebbian learning. Neural Networks 12:281-297

Steinbuch K (1961) Die Lernmatrix. Kybernetik 1:36-45

Willshaw D, Buneman O, Longuet-Higgins H (1969) Non-holographic associative memory. Nature 222:960-962

Womble S, Wermter S (2002) Mirror neurons and feedback learning. In: Stamenov M, Gallese V (eds) Mirror neurons and the evolution of brain and language. John Benjamins Publishing Company, Amsterdam, pp 353-362
