Neurocomputing 70 (2007) 1177–1185
www.elsevier.com/locate/neucom

Dynamical computation reservoir emerging within a biological model network

Carlos Lourenço^{a,b,*,1}

^{a} Faculty of Sciences of the University of Lisbon, Informatics Department, Campo Grande, 1749-016 Lisboa, Portugal
^{b} Instituto de Telecomunicações, Security and Quantum Information Group, Av. Rovisco Pais, 1, 1049-001 Lisboa, Portugal

Available online 12 December 2006

* Corresponding author at: Faculty of Sciences of the University of Lisbon, Informatics Department, Campo Grande, 1749-016 Lisboa, Portugal. Tel.: +351 21 7500601; fax: +351 21 7500084. E-mail address: [email protected]. URL: http://www.di.fc.ul.pt/~csl/.
1 The author acknowledges the partial support of Fundação para a Ciência e a Tecnologia and EU FEDER via the former Center for Logic and Computation and the project ConTComp (POCTI/MAT/45978/2002), and also via the project PDCT/MAT/57976/2004.
Abstract
Chaos in dynamical systems potentially provides many different dynamical states arising from a single attractor. We call this the reservoir property, and here we give a precise meaning to two aspects of this property. In both cases, the high flexibility of chaos comes into play, as compared with more regular regimes. In this article, we especially focus on the fact that chaotic attractors are known to possess an infinite number of embedded unstable periodic orbits. In brain modeling, or for the purpose of suggesting computational devices that could take advantage of chaos, the different embedded dynamical states can be interpreted as different behaviors or computational modes suitable for particular tasks. Previously we proposed a rather abstract neural network model that mimicked cortex to some extent but where biological realism was not the major concern. In the present paper we show that the same potential for computation can be displayed by a more realistic neural model. The latter features spatiotemporal chaos of a type thus far only found in more "artificial" models. We also note that certain network-related properties, previously overlooked, turn out to be essential for the generation of complex behavior.
© 2007 Elsevier B.V. All rights reserved.
PACS: 05.45.Gg; 05.45.Pq; 05.45.Xt; 05.65.+b; 87.10.+e; 87.18.Bb; 87.18.Hf; 87.18.Sn; 87.19.La; 89.75.Fb
Keywords: Chaos; Dynamical reservoir; Computation; Spatiotemporal dynamics; Self-organization; Neurodynamics
1. Spatiotemporal neural chaos and computation
From the mid-1980s onwards, different groups reported the discovery of chaotic electrical signals in the brain, namely human [3] and simian [17]. This started a long-lasting dispute concerning the true chaotic nature of such signals, as well as much speculation regarding the possible roles of chaos in cognition [1,4,8,9]. Our standpoint in previous work and in the present paper is as follows. We take chaos for a fact and assume that natural systems may display it, including the biological case. We then ask what
computational purposes it may serve, and which of those might be relevant for biologically inspired computational devices. In this vein, actual computational models were proposed [1,2,5,6,13]. These particular models adopt a continuous-time setting and feature nonlinear network properties. The investigation of the neurons' dynamics is assisted by general knowledge of properties of nonlinear oscillators, as well as of generic networks. In the references above, rather abstract models such as Ginzburg–Landau and Rössler oscillators are meant to capture the essential oscillatory features of neurons. Particularly in [1,2,13], a full network setting is presented mimicking cortical architecture. Thus an actual spatiotemporal dynamics is unveiled, overcoming the limitations and criticism that result from working with single-unit or otherwise very small networks [13]. Unstable periodic orbits (UPOs) can be stabilized from within chaos, very fast and with minimum perturbation of the original system. The original chaotic attractor contains an infinite number of such dynamical modes, some of which can be stabilized at will according to the requirements
of computational tasks. In [13], this is applied to the processing of spatiotemporal visual input patterns with different symmetries. The units (or "neurons") are topologically arranged in a network, and the simultaneous monitoring of their state variables reveals such spatiotemporal regimes as standing and rotating waves of different symmetries, or a complex mixture thereof. The name "reservoir" in the context of a chaotic computational device has been adopted in [1] and later references by the same authors, where it refers explicitly to the fact that chaos can contain an infinite number of UPOs and these can be used as coding devices or computational modes. Interestingly, the concrete application of chaotic computing featured in [1,2,13] is also an early, albeit partial, demonstration of what is modernly called reservoir computing. The latter is featured namely in Echo State Networks [10] and Liquid State Machines [16], where the dynamics of a large pool of neurons in a recurrent network is perturbed by some input and the resulting transient dynamics is assessed by separate readout neurons, as a process which can provide meaning to the observed dynamics. However, and apart from other differences, the readout mechanism in [1,2,13] is at a more basic level, and in particular it lacks the learning capabilities of [10,16]. The fact that in [1,2,13] a chaotic attractor is perturbed, either in a permanent or a transient fashion, allows the use of a large range of resulting system responses. This also suggests the term "reservoir" as referring to the availability of a rich collection of dynamical input–output transformations, thus in closer agreement with the spirit of [10,16]. This somehow conflicts with the use of the word in the sense of a reservoir of UPOs, and is not exactly the meaning inherent to so-called reservoir computing if in the latter case the reservoir is understood as the pool of neurons itself.
Notwithstanding, this explanation should clarify the term usage in each context. In the two senses of reservoir of UPOs and reservoir of responses, the flexibility of chaos is stressed. Note that the major theme of the present paper is the reservoir of UPOs itself. This constitutes an important module of a system demonstrating computation with perturbations under our model, but a complete application is outside the scope of the present publication. The reader interested in previewing what the final system will look like is referred to the device in [13]. Although it uses a more abstract model, it does display a number of features which will look familiar to those acquainted with Liquid State Machines and similar devices. Namely, a fading memory is present, and the system is perturbed differently according to specific characteristics of each input, for both static and dynamic input.
2. Nonlinear oscillators: from abstract units to more realistic neurons
The mathematical models mentioned above face some criticism when comparisons are made with actual neurons. This is the case with the models in [1,2,5,6,13], but also, to varying degrees, with other models in the literature. Neurons are not oscillators, even if certain cells can display autonomous rhythmic firing (a behavior we are not addressing here, and certainly nothing like an actual oscillator, especially if the membrane potential is the monitored variable). However, groups of neurons can show oscillating electrical activity, sustained by the exchange of excitation and inhibition. Neural coupling is far from the simple linear connectivity of diffusive type considered in [1,2,13] and other studies of networks of the reaction–diffusion type. Rather, neurons are connected via highly nonlinear transfer functions such as sigmoids. Finally, in real life there is an unavoidable delay in signal transmission between all neurons, which is usually not considered as an intrinsic property of the model networks. This includes the networks mentioned above. Hence a more realistic model is sought.

Our purpose is to attain just the "right" level of biological or physical plausibility, while still having a manageable model for dynamical exploration. Although proposed in a different context, the model appearing in [11] and further developed in [7] provides a good compromise. The individual unit is a slightly more complex version of the leaky integrator, and is also called the single-compartment neuron. Passive as well as active membrane properties are considered, along with highly nonlinear coupling and delays in signal transmission between neurons. While aiming at biological realism, the model that we adopt also turns out to fit in the general framework of the Kleinfeld model [12], which has been proposed as a standard model for delayed recurrent neural networks [18]. The model is amenable to synaptic learning, namely by backpropagation [18], but we have not considered that possibility in our own work thus far. An abbreviated account of the UPO reservoir properties of this model was given in [14].
2.1. Deriving the neuron model
The detailed morphology of the neuron need not be taken into account, as would be the case with more complex models with multiple compartments. The emphasis of the modeling is on the overall electrical state of each neuron, and not on the spatial distribution of charge within a neuron. We start with the electrical equivalent of the neural membrane as illustrated by Fig. 1. The state of the neuron is characterized by the membrane potential V. C_m is the membrane capacitance. E_n represents the ionic potential associated with ion species n. The membrane conductance associated with species n is denoted g_n. A Resistive–Capacitive (RC) equation can be written for the potential V:
C_m \frac{dV}{dt} = -g_1 (V - E_1) - g_2 (V - E_2).
Actually E_1 and E_2 can be taken as linear combinations of different ionic potentials [11], but we need not detail them here as the setting is general enough. The ionic conductances are decomposed into a "passive" and an "activated" part,
g_n = g_n^0 + g_n^c.
ARTICLE IN PRESS
Fig. 1. Electrical equivalent of the neural membrane. See text for details.

Fig. 2. Dependence of a neuron's instantaneous firing rate on its own membrane potential. Parameters of the sigmoid f in this figure are a = 0.2 mV^{-1}, V_c = -25 mV, f_{max} = 100 Hz.
The resting conductance g_n^0, specific to species n, is constant in time. The activated part, g_n^c for each species n, takes into account the opening or closing of ionic channels, depending on the level of synaptic excitation which arises through the coupling of neurons. We may write
g_L = g_1^0 + g_2^0,

V_L = \frac{g_1^0 E_1 + g_2^0 E_2}{g_L},

where g_L is the resting conductance of the membrane and V_L is its resting potential. The previous RC equation can thus be transformed into
C_m \frac{dV}{dt} = -g_L (V - V_L) - g_1^c (V - E_1) - g_2^c (V - E_2).
Let g_n^{(k \to i)} denote the local conductance of ion species n at the synapse between neurons k and i. Then, we have

g_{n,i}^c = \sum_k g_n^{(k \to i)},
integrating the contributions from several presynaptic neurons, indexed by k. While g_n^{(k \to i)} depends on the stimulation provided by neuron k, we neglect its dependence on the voltage V of cell i itself. The different equilibrium potentials have the values V_L = -60 mV, E_1 = 50 mV, and E_2 = -80 mV. As with other parameters of the model, these values are suggested by electrophysiological experiments. The activation of channel g_1^c contributes to the increase of the membrane potential toward E_1, whereas the activation of g_2^c tends to decrease the membrane potential toward E_2. The g_1^c and g_2^c are activated by excitatory and inhibitory presynaptic neurons, respectively. The synaptic conductances depend on the firing rate of the presynaptic neuron:
g_n^{(k \to i)}(t) \propto f_k(t - \tau_{ik}),
where \tau_{ik} is the time-delay due to the finite speed of signal propagation. The firing frequency of the presynaptic neuron k saturates at a value f_{max}, say, 100 Hz. We consider a sigmoidal transfer function between the membrane potential V_k and the firing frequency f_k:
f_k = f_{max} F(V_k),

F(V) = \frac{1}{1 + e^{-a (V - V_c)}}.
The transfer function f is illustrated in Fig. 2. By choosing particular parameter values a and V_c, a certain activation threshold is obtained such that the neuron is silent for lower values of the potential V. Past some higher value of V, the firing rate saturates. V_c is fixed at -25 mV. Two different values of a are considered, as indicated in Section 2.2.
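The voltage-to-rate transfer function above is straightforward to evaluate; a minimal sketch using the parameter values quoted for Fig. 2 (V_c = -25 mV, f_{max} = 100 Hz, a = 0.2 mV^{-1}; the function name is an illustrative choice):

```python
import math

# Sigmoidal voltage-to-rate transfer: f(V) = f_max / (1 + exp(-a (V - Vc))).
# Parameter defaults follow the Fig. 2 caption: a = 0.2 /mV, Vc = -25 mV,
# f_max = 100 Hz.

def firing_rate(v, a=0.2, v_c=-25.0, f_max=100.0):
    """Instantaneous firing rate (Hz) as a function of membrane potential (mV)."""
    return f_max / (1.0 + math.exp(-a * (v - v_c)))

# Well below threshold the neuron is essentially silent; at Vc it fires at
# half-maximum; well above threshold the rate saturates near f_max.
low = firing_rate(-75.0)   # near 0 Hz
mid = firing_rate(-25.0)   # exactly f_max / 2
high = firing_rate(25.0)   # near f_max
```

The steepness parameter a controls how sharply the neuron switches between silence and saturation, which is why Section 2.2 assigns different slopes to the excitatory and inhibitory populations.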
2.2. Connecting the neurons
In the isolated model neuron, the response of the membrane to perturbations from the resting state can be characterized by \gamma = g_L / C_m, which is the inverse of the membrane's time-constant. The dynamics in this simple case is relaxatory. However, as several neurons are coupled together, new dynamical regimes may settle in, depending on details of the coupling and on the initial conditions of the population. One can observe multiple steady states, including global quiescence and global saturation, as well as a variety of oscillatory regimes for the electrical activity of the neurons. Although a single neuron, under the present model, does not oscillate, a coupled population may present oscillations due to the interplay of excitatory and inhibitory feedback through synaptic connections.

The variables of the dynamics are the instantaneous membrane potentials of the individual neurons. A stationary state corresponds to the firing of action potentials with a frequency constant in time. This frequency can be close to zero, or have some finite value. An oscillatory state, on the other hand, implies a time-modulation of the firing frequency.

Although there are many types of neurons, for the purpose of the modeling we divide them into two populations: one of excitatory neurons, with membrane potentials X_i, and another of inhibitory neurons, with membrane potentials Y_j. The dynamics of the coupled neuronal populations is described by
\frac{dX_i}{dt} = -\gamma (X_i - V_L) - (X_i - E_1) \sum_{k \neq i} \omega_{ik}^{(1)} F_X[X_k(t - \tau_{ik})] - (X_i - E_2) \sum_{l \neq i} \omega_{il}^{(2)} F_Y[Y_l(t - \tau_{il})],

\frac{dY_j}{dt} = -\gamma (Y_j - V_L) - (Y_j - E_1) \sum_{k \neq j} \omega_{jk}^{(3)} F_X[X_k(t - \tau_{jk})],

i, k = 1, \ldots, N_{ex}; \quad j, l = 1, \ldots, N_{in}.
Fig. 3. Dynamical neural network. Excitatory neurons and inhibitory neurons are represented as triangles and circles, respectively. Connections X_i \to X_{i \pm 1} and X_i \to Y_{i \pm 1} are excitatory, whereas connections Y_i \to X_{i \pm 1} are inhibitory.
The network comprises N_{ex} excitatory neurons X_i and N_{in} inhibitory neurons Y_j. The inverse of the membrane's time-constant takes the value \gamma = 0.25 msec^{-1}. The propagation delay \tau_{ik} between neurons k and i depends on the distance between the two neurons. Contrary to previous versions of the model, here we consider different values of the slope a of the transfer function F, depending on whether the firing neuron is excitatory or inhibitory. We adopt the values a_X = 0.09 mV^{-1} and a_Y = 0.2 mV^{-1}. The synaptic weights \omega characterize the efficiency of channel activation by the presynaptic neuron. They correspond to the coupling constants of the network. The weights \omega_{ik}^{(1)}, \omega_{il}^{(2)}, and \omega_{jk}^{(3)} refer, respectively, to excitatory-to-excitatory, inhibitory-to-excitatory, and excitatory-to-inhibitory connections. We take these values as constant in time. No inhibitory-to-inhibitory connections are considered.
The connectivity in the model is of a local nature. For simplicity, only first-neighbor connections are implemented. Furthermore, we study networks that are invariant under translations in space, where the neurons are equally spaced. Therefore, in the X_i and Y_j equations one can drop the subscripts in the time-delays and synaptic weights: \tau_{ik} = \tau and \omega_{ik}^{(I)} = \omega^{(I)}, where I = 1, 2, 3 designates the different types of interactions. The summations in the X_i and Y_j equations are to be performed with contributions only from presynaptic neurons in the immediate spatial vicinity of the postsynaptic neuron. This local pattern of connectivity may be more relevant for certain parts of the cortex than for others, but this discussion is outside the scope of the present paper.
2.3. Network topology
We have investigated the dynamical behavior of the above equations in different configurations, including the case of bi-dimensional networks with many excitatory and inhibitory neurons. However, for the purpose of this paper, we consider only a 1D spatial arrangement of a moderate number of neurons. Spatiotemporal chaotic solutions are sought, as well as embedded UPOs therein. These UPOs are expected to display diverse spatiotemporal symmetries and could eventually be stabilized via suitable control methods. Fig. 3 shows the network topology. Here N_{ex} = N_{in} = N. Hence the total population size is 2N. For the numerical simulations described below we take N = 8. In accordance with Section 2.2, the network has first-neighbor connectivity. The boundary conditions are of the zero-flux type. For this configuration the dynamical equations of the network become
\frac{dX_i}{dt} = -\gamma (X_i - V_L) - (X_i - E_1) \sum_{j = i \pm 1} \omega_{ij}^{(1)} F_X[X_j(t - \tau_{ij})] - (X_i - E_2) \sum_{j = i \pm 1} \omega_{ij}^{(2)} F_Y[Y_j(t - \tau_{ij})],

\frac{dY_i}{dt} = -\gamma (Y_i - V_L) - (Y_i - E_1) \sum_{j = i \pm 1} \omega_{ij}^{(3)} F_X[X_j(t - \tau_{ij})],

i = 1, \ldots, N.
Recall also that we consider a spatially homogeneous network, thus \omega_{ij}^{(I)} = \omega^{(I)} and \tau_{ij} = \tau. The weights of types (1) and (3) are fixed at \omega^{(1)} = 3.15 and \omega^{(3)} = 2.5, respectively. Parameters \tau and \omega^{(2)} can each have different values.
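As an illustration of how these delayed equations can be integrated numerically, the sketch below uses a forward-Euler scheme with a history buffer for the delay terms. The membrane and coupling parameters (\gamma, V_L, E_1, E_2, a_X, a_Y, V_c, \omega^{(1)}, \omega^{(3)}, \tau) are those quoted in the text; the time step, run length, uniform initial history, and the boundary handling (boundary neurons simply receive input from their single existing neighbor, one simple reading of the zero-flux condition) are assumptions of this sketch, not specifications from the paper.

```python
import math

# Forward-Euler integration of the delayed 1D network of Section 2.3.
# Membrane/coupling parameters follow the text; dt, t_end, the uniform
# initial condition, and the boundary handling are illustrative choices.

N = 8                                        # excitatory = inhibitory size
GAMMA, V_L, E1, E2 = 0.25, -60.0, 50.0, -80.0    # 1/msec and mV
TAU, W1, W3 = 1.8, 3.15, 2.5                 # delay (msec), fixed weights
A_X, A_Y, V_C = 0.09, 0.2, -25.0             # slopes (1/mV), threshold (mV)

def F(v, a):
    """Normalized sigmoidal transfer function, in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-a * (v - V_C)))

def neighbors(i):
    """First neighbors; boundary neurons have a single neighbor."""
    return [j for j in (i - 1, i + 1) if 0 <= j < N]

def simulate(w2, t_end=100.0, dt=0.01, v0=-55.0):
    """Integrate the network for t_end msec; return final (X, Y) profiles."""
    d = int(round(TAU / dt))                 # delay expressed in time steps
    X = [[v0] * N for _ in range(d + 1)]     # history buffers, oldest first
    Y = [[v0] * N for _ in range(d + 1)]
    for _ in range(int(t_end / dt)):
        xd, yd = X[0], Y[0]                  # states delayed by tau
        x, y = X[-1], Y[-1]                  # current states
        xn, yn = [], []
        for i in range(N):
            exc = sum(F(xd[j], A_X) for j in neighbors(i))
            inh = sum(F(yd[j], A_Y) for j in neighbors(i))
            dx = (-GAMMA * (x[i] - V_L)
                  - (x[i] - E1) * W1 * exc
                  - (x[i] - E2) * w2 * inh)
            dy = -GAMMA * (y[i] - V_L) - (y[i] - E1) * W3 * exc
            xn.append(x[i] + dt * dx)
            yn.append(y[i] + dt * dy)
        X.append(xn); X.pop(0)
        Y.append(yn); Y.pop(0)
    return X[-1], Y[-1]

# In the chaotic range identified in Section 2.4 one would take w2 near 1.64.
x_final, y_final = simulate(w2=1.64, t_end=50.0)
```

The explicit Euler step and the uniform initial history are crude; reproducing the bifurcation sequence of Section 2.4 quantitatively would call for a more careful delay-differential-equation integrator and transient handling.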
2.4. Dynamics: routes to spatiotemporal chaos
The network properties of connection topology and delay in signal transmission turn out to be crucial for the unfolding of a rich dynamics. It has been noted by several authors that the delay value can serve as a Hopf bifurcation parameter from stationary states into periodic oscillations. Moreover, it provides a route to chaos and a further increase in the chaotic attractor's dimension as its value is incremented. Both mechanisms were identified in our study of neural networks with other configurations and parameters, thus confirming the importance of the delay. Here, we propose a somewhat different, albeit related, route. We fix \tau = 1.8 msec and vary the value of \omega^{(2)}, which weighs the amount of inhibitory feedback to the network.

Starting from \omega^{(2)} = 17 and taking successively lower values of \omega^{(2)}, while keeping all other parameters fixed as indicated above, we observe a sequence of bifurcations in the neural system. For 16.05 < \omega^{(2)} < 17 a stationary state is observed in which the excitatory (resp., the inhibitory) neurons share a common low membrane potential corresponding to a very low firing rate. At \omega^{(2)} = 16.05, a primary Hopf bifurcation occurs which destabilizes this stationary state, giving rise to uniform oscillations of the network with a critical period of 13.76 msec. For lower values of \omega^{(2)}, the structure of these oscillations becomes increasingly complex. From \omega^{(2)} = 1.69 downwards, two new dynamical processes take place in conjunction: the network no longer oscillates homogeneously, and a cascade of period-doubling bifurcations is seen in the time-variation of any given neuron's membrane potential. The cascade accumulates at a value of \omega^{(2)} close to 1.641. Past that point, spatiotemporal chaos is observed in a narrow but finite parameter range.
3. Chaos in the neural model
By fixing \omega^{(2)} = 1.64, we can investigate the resulting chaotic regime.
3.1. Looking at neurons
The time-variation of each neuron's membrane potential has a bimodal character, with higher-frequency oscillations superimposed on the basic cycle (see Fig. 4, Top). This evolution is aperiodic, and it is one example of rich dynamical behavior not found in networks of simple oscillators.
For the convenience of visualization, we consider "low-pass filters" that eliminate most of the high-frequency behavior. The implementation of the filters consists in defining the array of time-averaged variables
u_i(t) = \frac{1}{\tau} \int_{t - \tau}^{t} X_i(\theta) \, d\theta.
This does not alter the dimension of the dynamics. It just modifies slightly the way we look at the system. We choose \tau = 1.8 msec, coinciding with the intrinsic delay in the network as indicated in Section 2.4. This value is approximately 7% of the average pseudo-period of the cycles in the chaotic regime. Fig. 4, Bottom displays the filtered time-series of a typical excitatory neuron.
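In discrete time this filter amounts to a trailing boxcar average over the last \tau / dt samples. A minimal sketch (the sampling step and the handling of the first, incomplete windows are choices of this illustration, not of the paper):

```python
import math

# Trailing boxcar approximation of u(t) = (1/tau) * integral of X(theta)
# over [t - tau, t], for a signal sampled with step dt. At the start of the
# series, where fewer than tau/dt samples exist, a shorter window is used.

def low_pass(samples, tau=1.8, dt=0.01):
    """Trailing-window time averages of a sampled membrane-potential series."""
    w = max(1, int(round(tau / dt)))    # window length in samples
    out = []
    for n in range(len(samples)):
        window = samples[max(0, n - w + 1): n + 1]
        out.append(sum(window) / len(window))
    return out

# A constant signal is unchanged, while an oscillation much faster than tau
# is strongly attenuated.
u_const = low_pass([-60.0] * 400)
u_fast = low_pass([math.sin(math.pi * k / 10.0) for k in range(400)])
```

Because the window length equals the delay \tau, the fast superimposed oscillations of Fig. 4, Top are averaged away while the slower basic cycle survives.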
3.2. Looking at spatial modes
Most of the interesting dynamical behavior that arises through neuron coupling is best evaluated by following entire spatial modes as they evolve in time. Early dynamical exploration could already show that both the network average value of X (or u) and a profile of local deviations from this average have aperiodic temporal variation. We can make this monitoring more precise and separate the dynamics of the profile of local deviations from that of the spatially averaged network activity. Yet, we remark that the two dynamics are closely coupled in the evolution of the neural network. The deviation profile measures the degree of instantaneous local inhomogeneity,
Fig. 4. Top: chaotic time-series of X_4, a typical excitatory neuron; Bottom: the corresponding time-series of u_4. The latter is a low-pass filtered version of X_4 as explained in the text.
hence spatial structure. The network (spatial) average gives important information about the background activity that the neurons feel, either excitatory or inhibitory. Since we are dealing with spatiotemporal phenomena, we must note that the obtained behavior is conditioned by the moderate size of the network. The spatial correlation length is of the order of the system size. In a larger system, the observed regime would correspond to traveling waves of activity going rightward or leftward in a random fashion, and "riding" on a uniform profile that has itself an irregular temporal variation. In the present network, the waves are not fully developed. Chaos is associated with the destabilization of entire spatial modes. A partial spatial coherence is kept throughout the chaotic regime. This denotes the correlations that exist between neurons, caused by their coupling pattern.

In view of the spatially correlated activity of the neurons,
we adopt a spatial mode analysis that had already been useful with other extended systems, such as physical systems described by partial differential equations [15]. However, here we are dealing with a spatially discrete system and hence we replace the integrals of [15] with finite sums. As in [15], Fourier spatial modes are chosen for their convenience. The modes are monitored by computing their respective coefficients:
A_0(t) = \langle u_i(t) \rangle = \frac{1}{N} \sum_{i=1}^{N} u_i(t),

A_j(t) = \frac{2}{N - 1} \sum_{i=1}^{N} u_i(t) \cos\left( \frac{j \pi \left[ i - (N + 1)/2 \right]}{N - 1} \right),

B_j(t) = \frac{2}{N - 1} \sum_{i=1}^{N} u_i(t) \sin\left( \frac{j \pi \left[ i - (N + 1)/2 \right]}{N - 1} \right),

j = 1, \ldots, N - 1.
Notice that each u_i is the instantaneous time-filtered value of an excitatory neuron's membrane potential. The scaling factor 2/(N - 1) is arbitrary, since we do not aim at recovering the original function u_i(t) \equiv u(i, t) via a sum of the Fourier modes weighted by the A_j and B_j coefficients. Rather, the A_j and B_j are taken as the variables spanning a phase-space where the dynamics can be followed. For example, if all A_j and B_j were constant (i.e., a fixed point in Fourier space), this would correspond to a stationary spatial structure. A limit-cycle in Fourier space, on the other hand, corresponds to a time-periodic spatiotemporal orbit followed by the profile u_i(t), i = 1, \ldots, N.

One may ask what is the minimum number of coefficients required for the dynamics to be adequately monitored. With the present system, a truncation at j = 1 is enough to characterize the chaotic regime. Fig. 5 illustrates the chaotic regime via a projection of the dynamics in Fourier space. The coefficients A_0 and B_1 are monitored during an arbitrary period of time. A_0 is simply the spatially averaged activity of the network. B_1 is the coefficient of the first sine mode, and already indicates
Fig. 5. Chaotic time-series of the Fourier coefficients A_0 and B_1, as defined in the main text.

Fig. 6. Period-two unstable periodic orbit embedded in the chaotic attractor. The top of the figure shows the evolution of the Fourier coefficient B_1 during two complete periods of oscillation. The down-most plots feature the simultaneous variation of the membrane potentials X_1 and X_8. The period of the orbit is T = 50.26 msec. The spatiotemporal orbit has a reflection symmetry around the central position of the network.
a deviation of the instantaneous activity profile from a homogeneous state. Both coefficients show aperiodic behavior. The coefficients of higher Fourier modes, not shown in the figure, also display irregular variation.
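The mode coefficients A_j and B_j defined in the previous section can be computed directly from a snapshot of the filtered profile u_1, ..., u_N. A minimal sketch (N = 8 as in the simulations; the test profiles and the function name are arbitrary illustrative choices):

```python
import math

# Spatial Fourier mode coefficients of a profile u_1..u_N, following the
# definitions in the text: A_0 is the spatial mean; for j >= 1 the cosine
# and sine modes are centered on the middle position (N + 1)/2 of the network.

def mode_coefficients(u, j):
    """Return (A_j, B_j) for a profile u of length N; for j == 0, (A_0, 0)."""
    n = len(u)
    if j == 0:
        return sum(u) / n, 0.0
    a = b = 0.0
    for i in range(1, n + 1):
        phase = j * math.pi * (i - (n + 1) / 2.0) / (n - 1)
        a += u[i - 1] * math.cos(phase)
        b += u[i - 1] * math.sin(phase)
    return 2.0 * a / (n - 1), 2.0 * b / (n - 1)

# A homogeneous profile has A_0 equal to the common value and B_1 = 0;
# B_1 departs from zero as soon as the profile loses left-right symmetry.
a0, _ = mode_coefficients([-60.0] * 8, 0)
_, b1 = mode_coefficients([-60.0] * 8, 1)
```

Because the sine modes are odd around the network center, B_1 vanishes for any left-right symmetric profile, which is what makes it a convenient indicator of instantaneous spatial inhomogeneity.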
3.3. Dynamical reservoir
Finding an ensemble of UPOs embedded in the chaotic attractor has been one of the objectives of this work. For this system it has been possible to gather evidence that such an ensemble exists, most likely being infinite and countable. However, only the first few UPOs of lowest periods are numerically accessible with moderate effort. If a UPO has a period close to the average pseudo-period of the chaotic attractor, we will say that it is period-one. If the UPO's period is approximately twice the chaotic pseudo-period, then it is period-two, and so on. We could identify with adequate precision, from within the chaotic attractor: one period-one UPO, three different period-two UPOs, and at least two different period-four UPOs. Fig. 6 shows one of the period-two UPOs.
It should be emphasized that UPOs such as the one in Fig. 6 are not spontaneously sustained within the chaotic regime, unless a short timespan is considered during which the system visits the vicinity of the UPO. These UPOs can be found, for instance, using the method outlined in Section 3.4. The latter implies scanning the system's evolution for a very long time, whereby it can be detected that an arbitrarily small vicinity of a given UPO is temporarily entered. Instability inevitably leads to the amplification of small differences between the system's state and the exact periodic orbit. However, for the purpose of plotting time series such as the ones in Fig. 6, the unperturbed evolution of the system for a limited number of consecutive cycles can provide a sufficiently good approximation of the respective UPO.
Due to the system being spatially extended, the UPOs can display different spatiotemporal symmetries. For example, the orbit in Fig. 6 has the following reflection symmetry: each half of the network has instantaneous activity different from that of the other half, but the neurons are paired such that X_1 imitates X_8 from half a period before, X_2 imitates X_7 also with a half-period time-lag, and so on. Recall that the total number of excitatory neurons is N = 8. Other UPOs found within the same attractor do not necessarily obey the left-to-right symmetry of the one in Fig. 6.
3.4. Tracking the unstable periodic orbits
The method for tracking UPOs such as the one of Fig. 6, and the other UPOs also mentioned in the text, can appear in different variations. In the following we briefly describe a particular procedure which takes advantage of the low dimensionality of the attractor, albeit in the context of an infinite-dimensional system.

Let us consider a Poincaré section at A_0 = A_0^P = -60, \dot{A}_0 > 0. The time of the nth crossing of the Poincaré plane is denoted as t(n). A time-interval variable is defined as T(n) = t(n) - t(n-1). We can look for period-one orbits of the system in the form of fixed points of the map T(n+1) = \Phi[T(n)], if such a map can be defined. In this case, the period of the orbit and the fixed point have the same numerical value. A period-k orbit with period T^* appears as a discrete-time orbit T_1^* \to T_2^* \to \ldots \to T_k^* obeying \sum_{i=1}^{k} T_i^* = T^*. However, if the underlying spatiotemporal orbit has the type of reflection symmetry displayed by the UPO of Fig. 6, it will appear as a period-k/2 orbit of T(n). In the case of Fig. 6, the featured period-two UPO will appear as a period-one orbit of T(n). This can be explained as follows. The homogeneous mode corresponding to A_0, as well as all the cosine modes corresponding to A_j, j \geq 1, are invariant with respect to reflection around the
central position of the network. The map \Phi happens to be built upon A_0, and thus it will treat two halves of a same orbit as being the same if the second half is a spatial reflection of the first. As it happens, it is the durations of the two halves of such a symmetric orbit that are equal, but via the map \Phi each of these half-periods of length T^*/2 could be interpreted as being the true period of some orbit. Further inspection is required to distinguish between, say, a true period-one UPO and a period-two UPO with reflection symmetry. The first-return map \Phi is shown in Fig. 7, as well as the second-, third-, and fourth-return maps, respectively \Phi^{(2)}, \Phi^{(3)}, and \Phi^{(4)}. Note that any period-four UPO, say, is a fixed point of the map T(n+4) = \Phi^{(4)}[T(n)]. If, in addition, such a UPO displayed spatial reflection symmetry, then it would also be a fixed point of the map T(n+2) = \Phi^{(2)}[T(n)]. It should also be noted that the term "map" is used here in the sense of a multivalued map of a single argument.
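The construction of T(n) from a sampled A_0(t) series reduces to detecting upward crossings of the section and differencing the crossing times. A minimal sketch (the sampling step, the linear interpolation of crossing instants, the function names, and the synthetic test signal are choices of this illustration, not of the paper):

```python
import math

# Extracting the interval variable T(n) from a sampled A0(t) series:
# detect upward crossings of the section level (A0 increasing), estimate
# the crossing instants, and difference consecutive crossing times.

def crossing_times(a0, dt, level=-60.0):
    """Times of upward crossings of `level` in the sampled series a0."""
    times = []
    for n in range(1, len(a0)):
        if a0[n - 1] < level <= a0[n]:
            # linear interpolation sharpens the crossing instant
            frac = (level - a0[n - 1]) / (a0[n] - a0[n - 1])
            times.append((n - 1 + frac) * dt)
    return times

def return_intervals(a0, dt, level=-60.0):
    """The sequence T(n) = t(n) - t(n-1) of inter-crossing intervals."""
    t = crossing_times(a0, dt, level)
    return [t2 - t1 for t1, t2 in zip(t, t[1:])]

# For a strictly periodic test signal crossing the section once per cycle,
# every interval equals the period; chaotic data instead scatter the T(n)
# values over the return map.
dt = 0.1
a0 = [-60.0 + 10.0 * math.sin(2.0 * math.pi * k * dt / 30.0) for k in range(1000)]
intervals = return_intervals(a0, dt)   # all close to the 30 msec period
```

Plotting T(n+1) against T(n) (or T(n+k) against T(n)) from such interval sequences yields return maps of the kind shown in Fig. 7.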
It may seem surprising that an infinite-dimensional dynamical system featuring spatiotemporal chaos can be described by such low-dimensional maps. Yet, that was already the case with the Kuramoto–Sivashinsky partial differential equation investigated in [15]. In the present paper, the high dimensionality comes also from a spatial distribution, but especially from the presence of delay terms in the evolution equations. Notwithstanding, a few dimensions suffice to describe the attractor, and indeed the
Fig. 7. Return maps of orders 1, 2, 3, and 4 for the time-interval variable T(n), with the neural network in a chaotic, uncontrolled regime. These maps are obtained on the Poincaré section A_0 = A_0^P = -60, \dot{A}_0 > 0. Shown are the locations of fixed points corresponding to orbits of, respectively, period-one (× symbol), period-two (circle and square symbols, these corresponding to different orbits), and period-four (ellipse symbol). To avoid clutter, no more than one fixed point for any given UPO is shown in each of the four maps. Otherwise, for the period-four orbit, say, we could also plot three other fixed points of the map \Phi^{(4)}, apart from the one signaled by the ellipse. See text for further explanation and Table 1 for numerical values of the fixed points and respective UPOs.
maps are low-dimensional projections of the dynamics. However, any perturbation of the system immediately pushes it back into higher dimension. Thus, the apparent simplicity of the maps can be misleading. Nevertheless, a very low-dimensional treatment is possible in this case. Each map features a large number of candidate fixed points at the respective period. The majority of them turn out to be false fixed points. A systematic procedure is needed to rule out all of these false candidates: not only do we require that T(n+k) = T(n) for a particular \Phi^{(k)}, but also that a true kth return is observed, in the sense that all monitored variables come back within an arbitrarily small vicinity of their values at t(n). The monitored variables for this purpose can be, e.g., the coefficients of the spatial modes up to some adequate order, or the actual values of the neurons' membrane potentials at time t as well as the values at time t - \tau. Thus, as mentioned in Section 3.3, six different UPOs could be found. Referring to Fig. 7, Table 1 provides numerical data associated with the six UPOs. The first UPO in the list is a period-one orbit. Inspection of the full dynamics of the network in continuous time reveals that this orbit consists in spatially homogeneous oscillations of the membrane potentials. Trivially, the activity is symmetric with respect to reflection around the central position of the network. The continuous time period is 29.98 msec, which coincides with the numerical value of the respective fixed point of map \Phi. This orbit being
20.0 24.0 28.0 32.0
T(n) (msec)
20.0
24.0
28.0
32.0
T(n
+2)
(msec)
20.0 24.0 28.0 32.0
T(n) (msec)
20.0
24.0
28.0
32.0
T(n
+4)
(msec)
on the Poincare section A0 ¼ AP0 ¼ �60,
_A040. These maps are obtained
ions of fixed points corresponding to orbits of, respectively, period-one (�
nt orbits), and period-four (ellipse symbol). To avoid clutter, no more than
e, for the period-four orbit, say, we could also plot three other fixed points
er explanation and Table 1 for numerical values of the fixed points and
ARTICLE IN PRESS
Table 1
Numerical values associated with the UPOs featured in Fig. 7

Discrete-time period of the UPO   Continuous-time period of the UPO (msec)   Fixed point marked in Fig. 7 (msec)   Spatial reflection symmetry?   Multiplicity of the orbit
1                                 29.98                                      29.98 (×)                             yes                            1
2                                 50.26                                      25.13 (○)                             yes                            1
2                                 52.74                                      21.96 (□)                             no                             2
4                                 104.42                                     23.63 (ellipse)                       no                             2

The discrete-time period of a UPO is the number of times that the respective trajectory has to recur on the Poincaré plane, after some initial crossing, until all the variables of the dynamics repeat exactly. In general, a period-k orbit gives rise to k fixed points of map F(k) and also of all maps F(mk), where m is a positive integer. The exception here is the fixed point 25.13 msec of the second UPO, which is doubly degenerate.
period-one, any of its observables will repeatedly assume the same value at successive crossings of the Poincaré plane. In particular, its period of 29.98 msec is itself a fixed point of maps F, F(2), F(3), and F(4). It is signaled with a cross (×) in Fig. 7. Recalling previous remarks, such periodic behavior cannot be spontaneously sustained in the chaotic regime, as is the case with the other UPOs also mentioned. The second UPO in the list is a period-two orbit with a continuous-time period of 50.26 msec. It is precisely the UPO depicted in Fig. 6. It features a spatial reflection symmetry, which was discussed in Section 3.3 and also above in the present section. Hence it appears as a period-one orbit from the viewpoint of map F, implying that it will also give rise to fixed points of maps F(k) for all k. In each case the numerical value of the fixed point thus obtained is 50.26/2 = 25.13 msec. It is signaled with a circle (○). The third UPO is also period-two, but it does not have the spatial reflection symmetry mentioned above. Its continuous-time period is 52.74 msec. This orbit does not give rise to any fixed points of maps F and F(3), but it originates two fixed points in map F(2); since 4 is a multiple of 2, these two fixed points also appear as fixed points of map F(4). Of the two, only the one with numerical value 21.96 msec is explicitly marked, with a square (□), in maps F(2) and F(4) in Fig. 7. It is not hard to see that a UPO which is not symmetric with respect to reflection, such as this one, indeed corresponds to two analogous orbits with exactly the same period, where the activity of each neuron mirrors the activity of the opposite neuron in the related orbit. If we denote the two related UPOs as O and Omirror, respectively, then the activity of, say, excitatory neuron number 1 during one complete period of O will imitate the activity of excitatory neuron number 8 during one complete period of Omirror.
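The mirror relation just described can be checked programmatically. The sketch below (Python; the array layout and tolerance are illustrative assumptions, not code from the paper) tests whether a sampled orbit equals its own spatial reflection across the chain, and assigns the orbit's multiplicity accordingly. For simplicity it ignores possible time-phase shifts between an orbit and its reflection, which a complete test would also scan over.

```python
import numpy as np

def orbit_multiplicity(orbit, tol=1e-6):
    """Multiplicity of a UPO in a spatially homogeneous N-neuron chain.

    orbit : array of shape (T_steps, N); orbit[t, i] is the activity of
            neuron i sampled over one full period of the UPO.
    If the orbit equals its own spatial reflection (neuron i exchanged
    with neuron N-1-i), it is reflection-symmetric and counts once;
    otherwise the reflected orbit is a distinct UPO with the same
    period, so the multiplicity is 2.
    """
    mirrored = orbit[:, ::-1]            # reflect around the chain centre
    symmetric = np.max(np.abs(orbit - mirrored)) < tol
    return 1 if symmetric else 2
```

A spatially homogeneous oscillation (every neuron identical) is its own mirror image and yields multiplicity 1; perturbing a single neuron breaks the symmetry and yields 2.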
The same is true for neurons number 2 and 7, respectively, when compared across the two related UPOs. This imitation is also observed with all other related pairs in our eight-neuron-long spatial network arrangement. This is to be expected from symmetry considerations, since a spatially homogeneous network that provides the substrate for some spatially asymmetric UPO must also display a related UPO which is a spatial reflection of the former. Hence the multiplicity of the representative UPO of period 52.74 msec is taken as 2. The last orbit in Table 1 is period-four and has a continuous-time period of 104.42 msec. Like the previous orbit, it does not have spatial reflection symmetry. Hence it only originates fixed points of map F(4), and not of map F(2). Of the four fixed points of this orbit's discrete dynamics on the Poincaré section, only the one with numerical value 23.63 msec is marked over map F(4) in Fig. 7, with an ellipse. For the same reasons that apply to the previous UPO, the representative orbit of period 104.42 msec has multiplicity 2. This raises the total number of UPOs found in this numerical study to six, while not excluding the possibility of finding new UPOs through further exploration.
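The systematic elimination of false fixed points described earlier in this section, requiring not only T(n+k) = T(n) but a true kth return of all monitored variables, can be sketched as follows. This is a minimal Python illustration; the tolerance values and the layout of the monitored-variable array are assumptions for the example, not values from the paper.

```python
import numpy as np

def find_true_fixed_points(T, states, k, tol_T=0.05, tol_state=0.5):
    """Locate true fixed points of the order-k return map F(k).

    T      : 1-D array of successive Poincare crossing intervals T(n), in msec.
    states : 2-D array; states[n] holds the monitored variables (e.g. spatial
             mode coefficients, or membrane potentials and their delayed
             values) at the n-th crossing.
    A candidate with T(n+k) close to T(n) is kept only if the full monitored
    state also returns to a small vicinity of its value at crossing n
    (a "true k-th return"); otherwise it is a false fixed point.
    """
    hits = []
    for n in range(len(T) - k):
        if abs(T[n + k] - T[n]) < tol_T:               # candidate of F(k)
            if np.linalg.norm(states[n + k] - states[n]) < tol_state:
                hits.append((n, T[n]))                 # true k-th return
    return hits
```

A candidate where the interval recurs but the monitored state does not (a false fixed point) is thereby discarded, which is exactly the distinction the procedure above is meant to enforce.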
4. Discussion
First we note that the UPO "reservoir" property is indeed found in the chaotic neural attractor. Several different behaviors in the form of UPOs can be identified and described as having different spatiotemporal symmetries. Such a range of behaviors can be useful to perform the kind of dynamical computation referred to in Section 1. For instance, neurons at certain parts of the network may provide specialized modulation of input signals, in the spirit of [13]. The orbits that violate the left-to-right spatial symmetry are especially interesting, since they can be used to ensure that the neurons in different parts of the network are consistently doing different things. For this procedure to be effective, we need to be able to
selectively stabilize the desired UPOs according to the task. If left unperturbed, the chaotic attractor will not spontaneously display such individual orbits in a persistent manner. Hence control methods are required to capture and hold these UPOs, and they should be applied at least for the duration of a computational task. As it happens, stabilization processes analogous to those employed with the toy models mentioned in Section 1 can be applied to the present system, which is much more biologically inspired. They share the common feature that the chaotic system is only minimally perturbed; thus no large forcing is required. The control procedure itself is outside the scope of this paper, and will be described in a future publication. Although we have tried several different network
configurations, only an 8+8 neuron arrangement was chosen to illustrate our ideas in the present paper. The system's spatial extension can be considered as proportional to the number of neurons. In analogy with other spatially extended systems [15], the dimension of the chaotic attractor is expected to grow in monotone fashion with spatial extension. The architecture adopted in the present paper should provide the right balance between small networks, where the dynamics is not complex enough
and spatial effects are barely visible, and very large networks, where chaos can be overly developed and numerically difficult to assess in a first approach in view of computation.
It is demonstrated that a network of model neurons akin to leaky integrators, with physically plausible nonlinear connectivity and intrinsic delays in signal transmission, is capable of displaying a rich set of behaviors without the need for any significant parameter deviation from reference values. The model, which is amenable to electronic implementation, takes advantage of fully time-continuous dynamics together with spatial distribution, thus providing an adequate framework for the dynamical computation paradigm that motivated this work. A reservoir of behaviors exists in chaos, which could be explored for the sake of computation. Here the reservoir is spatiotemporal, and the worthiest candidates for practical applications would be the ones taking advantage of this.
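As an illustration of the class of model summarized above, leaky integrators with nonlinear coupling and transmission delays, the following Python sketch integrates a small delayed network with the Euler method. The ring topology, the tanh coupling, and all parameter values are illustrative placeholders; they are not the equations or reference values of the paper's model.

```python
import numpy as np

def simulate(N=8, delay=10.0, dt=0.1, t_end=200.0, tau=20.0, w=2.0, seed=0):
    """Euler integration of a delayed leaky-integrator ring:

        tau * dV_i/dt = -V_i + w * tanh(V_{i-1}(t - delay)),

    i.e. each unit leaks toward rest and is driven through a sigmoid by
    its neighbour's membrane potential one transmission delay in the past.
    """
    rng = np.random.default_rng(seed)
    d = int(round(delay / dt))                    # delay in integration steps
    steps = int(round(t_end / dt))
    V = np.zeros((steps + d, N))
    V[:d] = 0.1 * rng.standard_normal((d, N))     # random initial history
    for s in range(d, steps + d - 1):
        delayed = np.roll(V[s - d], 1)            # V_{i-1} at time t - delay
        V[s + 1] = V[s] + dt * (-V[s] + w * np.tanh(delayed)) / tau
    return V[d:]                                  # drop the initial history
```

Because of the delay, the state of the system is an entire function segment of length `delay`, which is why the integration must carry a stored history; this is the formally infinite-dimensional setting in which the spatiotemporal reservoir arises.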
Acknowledgment
The author wishes to thank the anonymous reviewers for their useful comments and suggestions.
References
[1] A. Babloyantz, C. Lourenço, Computation with chaos: A paradigm
for cortical activity, Proc. Nat. Acad. Sci. USA 91 (1994) 9027–9031.
[2] A. Babloyantz, C. Lourenço, Brain chaos and computation, Int.
J. Neural Systems 7 (1996) 461–471.
[3] A. Babloyantz, J. Salazar, C. Nicolis, Evidence of chaotic dynamics
of brain activity during the sleep cycle, Phys. Lett. A 111 (1985)
152–156.
[4] M. Conrad, What is the use of chaos?, in: A. Holden (Ed.), Chaos,
Manchester University Press, Manchester UK, 1986, pp. 3–14.
[5] N. Crook, W.J. Goh, M. Hawarat, The nonlinear dynamic state
neuron, in: M. Verleysen (Ed.), Proceedings of the 13th European
Symposium on Artificial Neural Networks (ESANN 2005), d-side
pub., April 27–29, Bruges (Belgium), 2005, pp. 37–42.
[6] N. Crook, T. Scheper, A novel chaotic neural network architecture,
in: M. Verleysen (Ed.), Proceedings of the Ninth European
Symposium on Artificial Neural Networks (ESANN 2001), D-Facto
pub., April 25–27, Bruges (Belgium), 2001, pp. 295–300.
[7] A. Destexhe, Ph.D. Dissertation, Université Libre de Bruxelles,
March 1992.
[8] W. Freeman, Simulation of chaotic EEG patterns with a dynamic
model of the olfactory system, Biol. Cybern. 56 (1987) 139–150.
[9] W. Freeman, The physiology of perception, Sci. Am. 264 (1991)
78–85.
[10] H. Jaeger, The "echo state" approach to analyzing and training
recurrent neural networks, GMD Report 148, German National
Research Center for Information Technology, 2001.
[11] L. Kaczmarek, A model of cell firing patterns during epileptic
seizures, Biol. Cybern. 22 (1976) 229–234.
[12] D. Kleinfeld, Sequential state generation by model neural networks,
Proc. Nat. Acad. Sci. USA 83 (1986) 9469–9473.
[13] C. Lourenço, Attention-locked computation with chaotic neural nets,
Int. J. Bifurcation and Chaos 14 (2004) 737–760.
[14] C. Lourenço, Dynamical reservoir properties as network effects, in:
M. Verleysen (Ed.), Proceedings of the 14th European Symposium on
Artificial Neural Networks (ESANN 2006), d-side pub., April 26–28,
Bruges (Belgium), 2006, pp. 503–508.
[15] C. Lourenço, M. Hougardy, A. Babloyantz, Control of low-
dimensional spatiotemporal chaos in Fourier space, Phys. Rev. E
52 (1995) 1528–1532.
[16] W. Maass, T. Natschläger, H. Markram, Real-time computing
without stable states: A new framework for neural computation
based on perturbations, Neural Comput. 14 (2002) 2531–2560.
[17] P. Rapp, I. Zimmerman, A. Albano, G. de Guzman, N. Greenbaun,
Dynamics of spontaneous neural activity in the simian motor cortex:
The dimension of chaotic neurons, Phys. Lett. A 110 (1985) 335–338.
[18] I. Tokuda, R. Tokunaga, K. Aihara, Back-propagation learning of
infinite-dimensional dynamical systems, Neural Networks 16 (2003)
1179–1193.
Carlos Lourenço was born in 1968 in Lisbon, Portugal. He received a Ph.D. degree in 'Sciences Physiques' from the Université Libre de Bruxelles in 1996. He is presently a professor at the Informatics Department of the Faculty of Sciences of Lisbon University, and a researcher at the Instituto de Telecomunicações in Lisbon. His research interests concern dynamical systems and new computational paradigms, including natural computing and neural networks.