
arXiv:1606.06391v1 [nlin.AO] 21 Jun 2016

Phase transitions and self-organized criticality in networks of stochastic spiking neurons

Ludmila Brochini 1, Ariadne de Andrade Costa 2, Miguel Abadi 1, Antônio C. Roque 3, Jorge Stolfi 2, and Osame Kinouchi 3,*

1 Universidade de São Paulo, Departamento de Estatística-IME, São Paulo-SP, 05508-090, Brazil
2 Universidade de Campinas, Instituto de Computação, Campinas-SP, 13083-852, Brazil
3 Universidade de São Paulo, Departamento de Física-FFCLRP, Ribeirão Preto-SP, 14040-901, Brazil
* [email protected]

ABSTRACT

Phase transitions and critical behavior are crucial issues both in theoretical and experimental neuroscience. We report analytic and computational results about phase transitions and self-organized criticality (SOC) in networks with general stochastic neurons. The stochastic neuron has a firing probability given by a smooth monotonic function Φ(V) of the membrane potential V, rather than a sharp firing threshold. We find that such networks can operate in several dynamic regimes (phases) depending on the average synaptic weight and the shape of the firing function Φ. In particular, we encounter both continuous and discontinuous phase transitions to absorbing states. At the continuous transition critical boundary, neuronal avalanches occur whose size and duration distributions are given by power laws, as observed in biological neural networks. We also propose and test a new mechanism to produce self-organized criticality (SOC): the use of dynamic neuronal gains – a form of short-term plasticity probably located at the axon initial segment (AIS) – instead of depressing synapses at the dendrites (as previously studied in the literature). The new self-organization mechanism produces a slightly supercritical state, which we call SOSC, in accord with some intuitions of Alan Turing.

Another simile would be an atomic pile of less than critical size: an injected idea is to correspond to a neutron entering the pile from without. Each such neutron will cause a certain disturbance which eventually dies away. If, however, the size of the pile is sufficiently increased, the disturbance caused by such an incoming neutron will very likely go on and on increasing until the whole pile is destroyed. Is there a corresponding phenomenon for minds, and is there one for machines? There does seem to be one for the human mind. The majority of them seems to be subcritical, i.e., to correspond in this analogy to piles of subcritical size. An idea presented to such a mind will on average give rise to less than one idea in reply. A smallish proportion are supercritical. An idea presented to such a mind may give rise to a whole "theory" consisting of secondary, tertiary and more remote ideas. (...) Adhering to this analogy we ask, "Can a machine be made to be supercritical?"

Alan Turing (1950)1.

Introduction

The Critical Brain Hypothesis2,3 states that (some) biological neuronal networks work near phase transitions because criticality enhances information processing capabilities4–6 and health7. The first discussion of criticality in the brain, in the sense that subcritical, critical and slightly supercritical branching processes of thoughts could describe human and animal minds, was made in the beautiful speculative 1950 Imitation Game paper by Turing1. In 1995, Herz & Hopfield8 noticed that self-organized criticality (SOC) models for earthquakes were mathematically equivalent to networks of integrate-and-fire neurons, and speculated that perhaps SOC would occur in the brain. In 2003, in a landmark paper, these theoretical conjectures found experimental support from Beggs and Plenz9 and, by now, more than five hundred papers can be found on the subject; see some reviews2,3. Although not consensual, the Critical Brain Hypothesis can be considered at least a very fertile idea.

The open question about neuronal criticality is what are the mechanisms responsible for tuning the network towards the critical state. Up to now, the main mechanism studied is some dynamics in the links which, in the biological context, occurs at the synaptic level10–13. Here we propose a whole new mechanism: dynamic neuronal gains, related to the diminution (and recovery) of the firing probability, an intrinsic neuronal property. The neuronal gain is experimentally related to the well-known phenomenon of firing rate adaptation14–16. This new mechanism is sufficient to drive neuronal networks of stochastic neurons towards a critical boundary found, for the first time, for these models. The neuron model we use was proposed by Galves and Löcherbach17 as a stochastic model of spiking neurons inspired by the traditional integrate-and-fire (IF) model.

Introduced in the early 20th century18, IF elements have been extensively used in simulations of spiking neurons16,19–24. Despite their simplicity, IF models have successfully emulated certain phenomena observed in biological neural networks, such as firing avalanches10,11,25 and multiple dynamical regimes26,27. In these models, the membrane potential V(t) integrates synaptic and external currents up to a firing threshold VT28. Then, a spike is generated and V(t) drops to a reset potential VR. The leaky integrate-and-fire (LIF) model extends the IF neuron with a leakage current, which causes the potential V(t) to decay exponentially towards a baseline potential VB in the absence of input signals20,22.

LIF models are deterministic, but it has been claimed that stochastic models may be more adequate for simulation purposes29. Some authors proposed to introduce stochasticity by adding noise terms to the potential20,21,26,27,29–33, yielding the leaky stochastic integrate-and-fire (LSIF) models.

Alternatively, the Galves-Löcherbach (GL) model17,34–37 and also the model used by Larremore et al.38 introduce stochasticity in a different way. Instead of noise inputs, they assume that the firing of the neuron is a random event, whose probability of occurrence in any time step is a firing function Φ(V) of the membrane potential V. By subsuming all sources of randomness into a single function, the GL neuron model simplifies the analysis and simulation of noisy spiking neural networks.

Brain networks are also known to exhibit plasticity: changes in neural parameters over time scales longer than the firing time scale23,39. For example, short-term synaptic plasticity40 has been incorporated in models by assuming that the strength of each synapse is lowered after each firing, and then gradually recovers towards a reference value10,11. This kind of dynamics drives the synaptic weights of the network towards critical values, a phenomenon called self-organized criticality (SOC), which is believed to optimize the network information processing3,4,7,9,41.

In this work, we first study the dynamics of networks of GL neurons by a very simple and transparent mean-field calculation. We find both continuous and discontinuous phase transitions depending on the average synaptic strength and the parameters of the firing function Φ(V). To the best of our knowledge, these phase transitions have never been observed in standard integrate-and-fire neurons. We also find that, at the second-order phase transition, the stimulated excitation of a single neuron causes avalanches of firing events (neuronal avalanches) that are similar to those observed in biological networks3,9.

Second, we present a new mechanism for SOC based on a dynamics on the neuronal gains (a parameter of the neuron probably related to the axon initial segment – AIS28,42), instead of depression of coupling strengths (related to neurotransmitter vesicle depletion at synaptic contacts between neurons) proposed in the literature10–13. This new activity-dependent gain model is sufficient to achieve self-organized criticality, as shown both by simulation evidence and by mean-field calculations. The great advantage of this new SOC mechanism is that it is much more efficient, since we have only one adaptive parameter per neuron instead of one per synapse.

The Model

We assume a network of N GL neurons that change states in parallel at certain sampling times with a uniform spacing ∆. Thus, the membrane potential of neuron i is modeled by a real variable Vi[t] indexed by discrete time t, an integer that represents the sampling time t∆.

Each synapse transmits signals from some presynaptic neuron j to some postsynaptic neuron i, and has a synaptic strength wij. If neuron j fires between discrete times t and t+1, its potential drops to VR = 0. This event increments by wij the potential of every postsynaptic neuron i that does not fire in that interval. The potential of a non-firing neuron may also integrate an external stimulus Ii[t], which can model signals received from sources outside the network. Apart from these increments, the potential of a non-firing neuron decays at each time step towards the baseline voltage VB by a factor µ ∈ [0,1], which models the effect of a leakage current. Since the zero of potential is arbitrary, we assume VB = 0.

We introduce the Boolean variable Xi[t] ∈ {0,1}, which denotes whether neuron i fired between t and t+1. The potentials evolve as:

$$V_i[t+1] = \begin{cases} V_R & \text{if } X_i[t] = 1, \\ \mu V_i[t] + I_i[t] + \sum_{j=1}^{N} w_{ij}\,X_j[t] & \text{if } X_i[t] = 0. \end{cases} \qquad (1)$$


This corresponds to a GL neuron with a geometric leakage function g[t−ts] = µ^(t−ts), where ts is the time of the last spike of neuron i, see17. We have Xi[t+1] = 1 with probability Φ(Vi[t]), where Φ is called the firing function17,34–38. We also have Xi[t+1] = 0 if Xi[t] = 1 (refractory period). The function Φ is sigmoidal, that is, monotonically increasing, with limiting values Φ(−∞) = 0 and Φ(+∞) = 1, and with a single maximum of its derivative. We also assume that Φ(V) is zero up to some threshold potential VT. If Φ is the shifted Heaviside step function Θ, Φ(V) = Θ(V−VT), we have a deterministic discrete-time LIF neuron. Any other choice for Φ(V) gives a stochastic neuron.
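The paper's simulations were written in MATLAB, Fortran90 and C++11 (see Methods); purely as an illustration, a minimal Python sketch of one parallel update (spikes drawn with probability Φ(V[t]), plus equation (1)) could look as follows, with all names hypothetical:

```python
import numpy as np

def gl_step(V, X_prev, w, Phi, mu=0.5, I=0.0, V_R=0.0, rng=None):
    """One parallel GL update: a neuron fires in this step with probability
    Phi(V[t]), unless it fired in the previous step (refractory period);
    the potentials then evolve according to equation (1)."""
    rng = rng or np.random.default_rng()
    X = (rng.random(V.size) < Phi(V)) & ~X_prev    # spikes X[t]
    V_new = np.where(X, V_R, mu * V + I + w @ X)   # equation (1)
    return V_new, X

# Example firing function: linear saturating Phi with gamma = 1, V_T = 0
Phi = lambda V: np.clip(V, 0.0, 1.0)
```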

The network activity is measured by the fraction (or density) ρ[t] of firing neurons:

$$\rho[t] = \frac{1}{N} \sum_{j=1}^{N} X_j[t] \,. \qquad (2)$$

The density ρ[t] can be computed from the probability density p[t](V) of potentials at time t:

$$\rho[t] = \int_{V_T}^{\infty} \Phi(V)\, p[t](V)\, dV \,, \qquad (3)$$

where p[t](V)dV is the fraction of neurons with potential in the range [V, V+dV] at time t.

Neurons that fire between t and t+1 have their potential reset to zero. They contribute to p[t+1](V) a Dirac impulse at potential VR, with amplitude (integral) ρ[t] given by equation (3). In subsequent time steps, the potentials of all neurons will evolve according to equation (1). This process modifies p[t](V) also for V ≠ VR.

Results

In this paper we study only fully connected networks, that is, all neurons have N−1 neighbors. In this case, to obtain sensible results, we must scale the synapses as wij = Wij/N, where the random variables Wij have finite mean W and finite variance. We also study only the case with VR = 0 and Ii[t] = I (constant uniform input). So, for these networks, equation (1) reads:

$$V_i[t+1] = \begin{cases} 0 & \text{if } X_i[t] = 1, \\ \mu V_i[t] + I + \frac{1}{N} \sum_{j=1}^{N} W_{ij}\,X_j[t] & \text{if } X_i[t] = 0. \end{cases} \qquad (4)$$

Mean-field calculation

In the mean-field analysis, we assume that the synaptic weights Wij follow a distribution with average W and finite variance. The mean-field approximation disregards correlations, so the final term of equation (4) becomes:

$$\frac{1}{N} \sum_{j=1}^{N} W_{ij}\,X_j[t] = W \rho[t] \,. \qquad (5)$$

Notice that the variance of the weights Wij becomes immaterial when N tends to infinity.

For now, we assume that the external input I is zero for all neurons and all times. Therefore, every neuron i that does not fire between t and t+1 (that is, with Xi[t] = 0) has its potential changed in the same way:

$$V_i[t+1] = \mu V_i[t] + W \rho[t] \,. \qquad (6)$$

Recall that the probability density p[t](V) has a Dirac impulse at potential U0 = 0, representing all neurons that fired in the previous interval. This Dirac impulse is modified in later steps by equation (6). It follows that, once all neurons have fired at least once, the density p[t](V) will be a combination of discrete impulses with amplitudes η0[t], η1[t], η2[t], ..., at potentials U0[t], U1[t], U2[t], ..., such that $\sum_{k=0}^{\infty} \eta_k = 1$.

The amplitude ηk[t] is the fraction of neurons with firing age k at discrete time t, that is, neurons that fired between times t−k−1 and t−k, and did not fire between t−k and t. The common potential of those neurons, at time t, is Uk[t]. In particular, η0[t] is the fraction ρ[t−1] of neurons that fired in the previous time step. For this type of distribution, the integral of equation (3) becomes a discrete sum:

$$\rho[t] = \sum_{k=0}^{\infty} \Phi(U_k[t])\, \eta_k[t] \,. \qquad (7)$$

According to equation (6), the values ηk[t] and Uk[t] evolve by the equations

$$\eta_k[t+1] = \left(1 - \Phi(U_{k-1}[t])\right) \eta_{k-1}[t] \,, \qquad (8)$$

$$U_k[t+1] = \mu U_{k-1}[t] + W \rho[t] \,, \qquad (9)$$

for all k ≥ 1, with η0[t+1] = ρ[t] and U0[t+1] = 0.

Stationary states for general Φ and µ

A stationary state is a density p[t](V) = p(V) of membrane potentials that does not change with time. In such a regime, the quantities Uk and ηk no longer depend on t. Therefore, equations (8–9) become the recurrence equations $\eta_0 = \rho = \sum_{k=0}^{\infty} \Phi(U_k)\,\eta_k$, $U_0 = 0$, and:

$$\eta_k = \left(1 - \Phi(U_{k-1})\right) \eta_{k-1} \,, \qquad (10)$$

$$U_k = \mu U_{k-1} + W \rho \,, \qquad (11)$$

for all k ≥ 1.

Figure 1. Examples of stationary potential distributions P(V): monomial Φ function with r = 1, γ = 1, µ = 1/2, for different values of W. a) W2 = WB = 2, two peaks; b) W3 = 14/9, three peaks; c) W4 = 488/343, four peaks; d) W∞ ≈ 1.32, infinite number of peaks with U∞ = 1. Notice that for W < W∞ all the peaks in the distribution p(V) lie at potentials Uk < 1. For WB = 2 we have η0 = η1 = 1/2, producing a bifurcation to a 2-cycle. The values of Wm = W2, W3, W4 and W∞ can be obtained analytically by imposing the condition Um = 1 in equations (10–11).

Since equations (10) are homogeneous in the ηk, the normalization condition $\sum_{k=0}^{\infty} \eta_k = 1$ must be included explicitly. Integrating over the density p(V) then leads to a discrete distribution P(V) (see Fig. 1 for a specific Φ).

Equations (10–11) can be solved numerically, e.g. by simulating the evolution of the potential probability density p[t](V) according to equations (8–9), starting from an arbitrary initial distribution, until reaching a stable distribution (the probabilities ηk should be renormalized to unit sum after each time step, to compensate for rounding errors).
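As an illustration of this iteration (a Python sketch assuming a fixed maximum number of peaks kmax; the function name and defaults are hypothetical, not from the paper):

```python
import numpy as np

def stationary_peaks(Phi, W, mu, kmax=100, steps=10_000):
    """Iterate equations (8)-(9) for the peak amplitudes eta_k and potentials
    U_k until an (approximately) stationary distribution is reached."""
    eta = np.full(kmax, 1.0 / kmax)   # arbitrary initial amplitudes
    U = np.zeros(kmax)                # peak potentials, U_0 = 0
    for _ in range(steps):
        rho = np.sum(Phi(U) * eta)                 # equation (7)
        eta[1:] = (1.0 - Phi(U[:-1])) * eta[:-1]   # equation (8)
        U[1:] = mu * U[:-1] + W * rho              # equation (9)
        eta[0], U[0] = rho, 0.0
        eta /= eta.sum()   # renormalize against rounding errors
    return U, eta

# Monomial Phi with r = 1, gamma = 1: reproduces the peak structure of Fig. 1
U, eta = stationary_peaks(lambda V: np.clip(V, 0.0, 1.0), W=1.55, mu=0.5)
```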


The monomial saturating Φ with µ > 0

Now we consider a specific class of firing functions, the saturating monomials. This class is parametrized by a positive degree r and a neuronal gain γ > 0. In all functions of this class, Φ(V) is 0 when V ≤ VT, and 1 when V ≥ VS, where the saturation potential is VS = VT + 1/γ. In the interval VT < V < VS we have the monomial:

$$\Phi(V) = \left(\gamma (V - V_T)\right)^r \,. \qquad (12)$$

Note that these functions can be seen as limiting cases of sigmoidal functions, and that we recover the deterministic LIF model Φ(V) = Θ(V−VT) when γ → ∞. Notice that for such saturating monomials there is a value WB = VT + 2/γ above which the dynamics is deterministic, leading to the 2-cycles; see the Appendix. The case of isolated neurons with the monomial saturating Φ is also studied in the Appendix.

In the analyses that follow, the control parameters are W and γ, and ρ(W,γ) is the order parameter. We obtain numerically ρ(W,γ) and the phase diagram (W,γ) for several values of µ > 0, for the linear (r = 1) saturating Φ with VT = 0 (Fig. 2). Only the first 100 peaks (Uk,ηk) were considered, since, for the given µ and Φ, there was no significant probability density beyond that point. The same numerical method can be used for r ≠ 1, VT ≠ 0.

Figure 2. Results for µ > 0: a) Numerically computed ρ(W) curves with r = 1 and (γ,µ) = (1,1/4), (1,1/2), (1,3/4), (1/2,1/2), (2,1/2). The absorbing state ρ0 loses stability at WC and the nontrivial fixed point ρ > 0 appears. At WB = 2/γ we have ρ = 1/2, and from there 2-cycles appear in the form ρ[t+1] = 1−ρ[t], where ρ[t] can take any value in the region bounded by the lines ρ1 = 1/(γW) and ρ2 = (W−1/γ)/W; see Eq. (31) of the Appendix. b) Numerically computed (γ,W) diagram showing the critical boundary γC(W) = (1−µ)/W and the bifurcation line γB(W) = 2/W to 2-cycles.

Near the critical point, we obtain numerically ρ(W,µ) ≈ C(W−WC)/W, where WC(γ) = (1−µ)/γ and C(µ) is a constant. So, the critical exponent is α = 1, characteristic of the mean-field directed percolation (DP) universality class3,4. The critical boundary in the (W,γ) plane, numerically obtained, seems to be γC(W) = (1−µ)/W (Fig. 2b).

Analytic results for µ = 0

Below we give the results of a simple mean-field analysis in the limits N → ∞ and µ → 0. The latter implies that, at time t+1, the neuron "forgets" its previous potential Vi[t] and integrates only the inputs I[t] + Wij Xj[t]. This scenario is interesting because it enables analytic solutions, yet exhibits all the kinds of behaviors and phase transitions that occur with µ > 0.


When µ = 0 and Ii[t] = I (uniform constant input), the density p[t](V) consists of only two Dirac peaks at potentials U0[t] = VR = 0 and U1[t] = I + Wρ[t−1], with fractions η0[t] and η1[t] that evolve as:

$$\eta_0[t+1] = \rho[t] = \Phi(0)\,\eta_0[t] + \Phi(I + W\eta_0[t])\,(1 - \eta_0[t]) \,, \qquad (13)$$

$$\eta_1[t+1] = 1 - \eta_0[t+1] \,. \qquad (14)$$

Furthermore, if the neurons have no spontaneous firing, that is, Φ(0) = 0, then equation (13) reduces to:

$$\eta_0[t+1] = \rho[t] = \Phi(I + W\eta_0[t])\,(1 - \eta_0[t]) \,. \qquad (15)$$

In a stationary regime, equation (15) simplifies to:

$$\rho = (1 - \rho)\,\Phi(I + W\rho) \,, \qquad (16)$$

since η0 = ρ, η1 = 1−ρ, U0 = 0, and U1 = I + Wρ. Below, all the results refer to the monomial saturating Φs (Fig. 3a).

Continuous phase transitions in networks: the case with r = 1.

When r = 1, we have the linear function Φ(V) = γV for 0 < V < VS = 1/γ. The stationary state condition, equation (16), then becomes:

$$\gamma W \rho^2 + (1 - \gamma W)\rho = 0 \,. \qquad (17)$$

The two solutions are the absorbing state ρ = 0 and the nontrivial state:

$$\rho = \frac{W - W_C}{W} \,, \qquad (18)$$

with WC = 1/γ. Since we must have 0 < ρ ≤ 1/2, this solution is valid only for WC < W ≤ WB = 2/γ (Fig. 3b).

This solution describes a stationary state where a fraction 1−ρ of the neurons are at potential U1 = W−WC. The neurons that will fire in the next step are a fraction Φ(U1) of those, which are again a fraction ρ of the total. For any W > WC, the state ρ = 0 is unstable: any small perturbation of the potentials causes the network to converge to the active stationary state above. For W < WC, the solution ρ = 0 is stable and absorbing. In the ρ(W) plot, the locus of stationary regimes defined by equation (18) bifurcates at W = WB into the two bounds of equation (31) that delimit the 2-cycles (Fig. 3b).

So, at the critical boundary W = 1/γ, we have a standard continuous absorbing state transition ρ(W) ∝ (W−WC)^α with a critical exponent α = 1, which can also be written as ρ(γ) ∝ (γ−γC)^α. In the (γ,W) plane, the phase transition corresponds to a critical boundary γC(W) = 1/W, below the 2-cycle phase transition line γB(W) = 2/W (Fig. 3c).

Discontinuous phase transitions in networks: the case with r > 1.

When r > 1 and W ≤ WB = 2/γ, the stationary state condition is:

$$(\gamma W)^r \rho^r - (\gamma W)^r \rho^{r-1} + 1 = 0 \,. \qquad (19)$$

This equation has a nontrivial solution ρ+ only when 1 ≤ r ≤ 2 and WC(r) ≤ W ≤ WB, for a certain WC(r) > 1/γ. In this case, at W = WC(r), there is a discontinuous (first-order) phase transition to a regime with activity ρ = ρC(r) ≤ 1/2 (Fig. 3d). It turns out that ρC(r) → 0 as r → 1, recovering the continuous phase transition in that limit. For r = 2, the solution to equation (19) is a single point ρ(WC) = ρC = 1/2 at WC = 2/γ = WB (Fig. 3f).

Notice that, in the linear case, the fixed point ρ0 = ρ = 0 is unstable for W > 1 (Fig. 3b). This occurs because the separatrix ρ− (dashed lines, Fig. 3d), for r → 1, collapses with the ρ0 point, so that it loses its stability.

Discontinuous transitions also occur if we have a nonzero firing threshold VT. Analytic results for µ = 0, VT > 0, I > 0 are given in the Appendix.
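As a quick numerical illustration of equation (19) (a sketch, not part of the paper's codes; the helper name is hypothetical and SciPy is assumed):

```python
import numpy as np
from scipy.optimize import brentq

def rho_fixed_points(W, r, gamma=1.0):
    """Nontrivial roots of equation (19) for r > 1: returns the separatrix
    rho_minus and the branch rho_plus, or None below the transition."""
    f = lambda rho: (gamma * W) ** r * (rho ** r - rho ** (r - 1)) + 1.0
    rho_star = (r - 1.0) / r          # minimum of rho^r - rho^(r-1) in (0,1)
    if f(rho_star) > 0.0:             # W < W_C(r): only the absorbing state
        return None
    return brentq(f, 1e-12, rho_star), brentq(f, rho_star, 1.0)

# For r = 1.2 and gamma = 1 the roots first appear near W ≈ 1.57,
# with rho jumping discontinuously to about (r-1)/r (compare Fig. 3d).
print(rho_fixed_points(1.6, 1.2))
```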


Figure 3. Firing densities (with γ = 1) and phase diagram with µ = 0 and VT = 0. a) Examples of monomial firing functions Φ(V) with γ = 1 and r = 0.5, 1 and 2. b) The ρ(W) bifurcation plot for r = 1. The absorbing state ρ0 loses stability for W > WC = 1 (dashed line). The nontrivial fixed point ρ+ bifurcates at WB = 2/γ = 2 into two branches (gray lines) that bound the marginally stable 2-cycles. c) The (γ,W) phase diagram for r = 1. Below the critical boundary γ = γC(W) = 1/W the inactive state ρ = 0 is absorbing and stable; above that line it is also absorbing but unstable. Above the line γ = γB(W) = 2/W there are only the marginally stable 2-cycles. For γC(W) < γ ≤ γB(W) there is a single stationary regime ρ(W) = (W−WC)/W < 1/2, with WC = 1/γ. d) Discontinuous phase transitions for γ = 1 with exponent r = 1.2. The absorbing state ρ0 is now stable (solid line at zero). The nontrivial fixed point ρ+ starts with the value ρC at WC and bifurcates at WB, creating the boundary curves (gray) that delimit possible 2-cycles. At WC there also appears the unstable separatrix ρ− (dashed line). e) Ceaseless activity (no phase transitions) for r = 0.25, 0.5 and 0.75. The activity approaches zero for W → 0 as power laws. f) In the limiting case r = 2 we do not have a ρ > 0 fixed point, but only the stable ρ = 0 (black), the 2-cycles region (gray) and the unstable separatrix (dashed).

Ceaseless activity: the case with r < 1.

When r < 1, there is no absorbing solution ρ = 0 of equation (19). In the W → 0 limit we get ρ(W) = (γW)^(r/(1−r)). These power laws mean that ρ > 0 for any W > WC(r) = 0 (Fig. 3e). We recover the second-order transition WC(r=1) = 1/γ when r → 1 in equation (19). Interestingly, this ceaseless activity ρ > 0 for any W > 0 seems to be similar to that found by Larremore et al.38 with a µ = 0 linear saturating model. Their ceaseless activity, even with r = 1, is perhaps due to the presence of inhibitory neurons in the Larremore et al. model.

Neuronal avalanches

Figure 4. Avalanche size statistics in the static model: simulations at the critical point WC = 1, γC = 1 (with µ = 0). a) Example of an avalanche profile ρ[t] at the critical point. b) Avalanche size distribution PS(s) ≡ P(S = s), for network sizes N = 1000, 2000, 4000, 8000, 16000 and 32000. The dashed reference line is proportional to s^(−β), with β = 3/2. c) Complementary cumulative distribution $C_S(s) = \sum_{k=s}^{\infty} P_S(k)$. Being an integral of PS(s), its power-law exponent is −β+1 = −1/2 (dashed line). d) Data collapse (finite-size scaling) of CS(s)·s^(1/2) as a function of s/N^(cS), with cutoff exponent cS = 1.

Firing avalanches in neural networks have attracted significant interest because of their possible connection to efficient information processing3–5,7,9. Through simulations, we studied the critical point WC = 1, γC = 1 (with µ = 0) in search of neuronal avalanches3,9 (Fig. 4).

An avalanche that starts at discrete time t = a and ends at t = b has duration d = b−a and size $s = N \sum_{t=a}^{b} \rho[t]$ (Fig. 4a). Using the notation S for a random variable and s for its numerical value, we observe a power-law avalanche size distribution PS(s) ≡ P(S = s) ∝ s^(−β), with the mean-field exponent β = 3/2 (Fig. 4b)3,9,11. Since the distribution PS(s) is noisy for large s, for further analysis we use the complementary cumulative function $C_S(s) \equiv P(S \geq s) = \sum_{k=s}^{\infty} P_S(k)$ (which gives the probability of having an avalanche with size equal to or greater than s), because it is very smooth and monotonic (Fig. 4c). Data collapse gives a finite-size scaling exponent cS = 1 (Fig. 4d)12,13.
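For reference, a minimal Python sketch of this estimator (an empirical complementary cumulative distribution from a sample of avalanche sizes; an illustrative helper, not the paper's analysis code):

```python
import numpy as np

def ccdf(sizes):
    """Empirical C_S(s) = P(S >= s) from a sample of avalanche sizes."""
    s = np.sort(np.asarray(sizes))
    c = 1.0 - np.arange(s.size) / s.size   # P(S >= s_k) for the sorted sample
    return s, c

# At criticality, C_S(s) should follow s^(-1/2) over the scaling range,
# i.e. a straight line of slope -1/2 on log-log axes.
```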

We also observed a power-law distribution for the avalanche duration, PD(d) ≡ P(D = d) ∝ d^(−δ) with δ = 2 (Fig. 5a). The complementary cumulative distribution is $C_D(d) \equiv P(D \geq d) = \sum_{k=d}^{\infty} P_D(k)$. From data collapse, we find a finite-size scaling exponent cD = 1/2 (Fig. 5b), in accord with the literature11.

Figure 5. Avalanche duration statistics in the static model: simulations at the critical point WC = 1, γC = 1 (µ = 0) for network sizes N = 1000, 2000, 4000, 8000, 16000 and 32000. a) Probability distribution PD(d) ≡ P(D = d) for the avalanche duration d. The dashed reference line is proportional to d^(−δ), with δ = 2. b) Data collapse of CD(d)·d versus d/N^(cD), with cutoff exponent cD = 1/2. The complementary cumulative function $C_D(d) \equiv \sum_{k=d}^{\infty} P_D(k)$, being an integral of PD(d), has power-law exponent −δ+1 = −1.

The model with dynamic parameters

The results of the previous section were obtained by fine-tuning the network at the critical point γC = WC = 1. Given the conjecture that the critical situation has functional advantages, a biological model should include some homeostatic mechanism capable of tuning the network towards criticality. Without such a mechanism, we cannot truly say that the network self-organizes toward the critical regime.

However, observing that the relevant condition for criticality in our model is the critical boundary γC WC = 1, we propose to work with dynamic gains γi[t] while keeping the synapses Wij fixed. The idea is to reduce the gain γi[t] when the neuron fires, and to let the gain slowly recover towards a higher resting value after that:

$$\gamma_i[t+1] = \gamma_i[t] + \frac{1}{\tau}\left(A - \gamma_i[t]\right) - u\,\gamma_i[t]\,X_i[t] \,. \qquad (20)$$

Here, the factor τ is related to the characteristic recovery time of the gain, A is the asymptotic resting gain, and u ∈ [0,1] is the fraction of gain lost due to the firing. This model is biologically plausible, and can be related to a decrease and recovery, due to the neuron's activity, of the firing probability at the AIS42. Our dynamic γi[t] mimics the well-known phenomenon of spike-frequency adaptation14,15.
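As an illustration (a Python sketch, not the paper's code; the name is hypothetical), equation (20) is a one-line vectorized update that can be combined with the gl_step sketch above, with each neuron's firing function becoming Φ(V) = min(1, γi[t]·V):

```python
import numpy as np

def gain_step(gamma, X, tau=1000.0, A=1.1, u=1.0):
    """Equation (20): each spike depresses the neuron's gain by a fraction u;
    the gain then recovers towards the resting value A with time constant tau."""
    return gamma + (A - gamma) / tau - u * gamma * X
```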

This approach seems sufficient to achieve a state very similar to self-organized criticality. Fig. 6a shows a simulation with all-to-all coupled networks of N neurons and, for simplicity, Wij = W. We observe that the average gain $\bar{\gamma}[t] = \frac{1}{N}\sum_{i=1}^{N} \gamma_i[t]$ seems to converge toward the critical value γC(W) = 1/W = 1, starting from different initial values $\bar{\gamma}[0] \neq 1$. As the network converges to the critical state, we observe power-law avalanche size distributions with exponent −3/2, leading to a cumulative function CS(s) ∝ s^(−1/2) (Fig. 6b). Curiously, the finite-size scaling exponent is cS = 2/3, which is different from that observed in the static model with fixed gains (cS = 1) (Fig. 4d).

Figure 6. Self-organization with dynamic neuronal gains: simulations of a network of GL neurons with fixed Wij = W = 1, γ = 1, u = 1, A = 1.1 and τ = 1000 ms. The dynamic gains γi[t] start with γi[0] uniformly distributed in [0, γmax]. The average initial condition is $\bar{\gamma}[0] \equiv \frac{1}{N}\sum_i^N \gamma_i[0] \approx \gamma_{max}/2$, which produces the different initial conditions $\bar{\gamma}[0]$. a) Self-organization of the average gain $\bar{\gamma}[t]$ over time. The horizontal dashed line marks the value γC = 1. b) Data collapse of CS(s)·s^(1/2) versus s/N^(cS) for several N, with cutoff exponent cS = 2/3.

This empirical evidence is supported by a mean-field analysis of equation (20). Averaging over the sites, we have for the average gain:

$$\gamma[t+1] = \gamma[t] + \frac{1}{\tau}\left(A - \gamma[t]\right) - u\,\rho[t]\,\gamma[t] \,. \qquad (21)$$

In the stationary state, we have γ[t+1] = γ[t] = γ*, so:

$$\left(\frac{1}{\tau} + u\rho^*\right)\gamma^* = \frac{A}{\tau} \,. \qquad (22)$$

But near the critical region we have the relation

$$\rho^* = C\,(\gamma^* - \gamma_C)/\gamma^* \,, \qquad (23)$$

where C is a constant that depends on Φ(V) and µ; for example, with µ = 0, C = 1 for the linear monomial Φ model. So:

$$\left(\frac{\gamma^*}{\tau} + uC\gamma^* - uC\gamma_C\right)\gamma^* = \frac{A\gamma^*}{\tau} \,. \qquad (24)$$

Eliminating the common factor γ* and dividing by uC, we have:

$$\left(1 + \frac{1}{uC\tau}\right)\gamma^* = \gamma_C + \frac{A}{uC\tau} \,. \qquad (25)$$


Now call x = 1/(uCτ). Then we have:

$$\gamma^* = \frac{\gamma_C + Ax}{1 + x} \,. \qquad (26)$$

The fine-tuning solution is to put by hand A = γC, which leads to γ* = γC independently of x. This fine-tuning solution should not be allowed in a true SOC scenario. So, suppose that A = BγC. Then we have:

$$\gamma^* = \gamma_C\,\frac{1 + Bx}{1 + x} \,. \qquad (27)$$

Now we see that, to have a critical or supercritical state (where equation (23) holds), we must have B > 1; otherwise we fall into the subcritical state γ* < γC where ρ* = 0 and this mean-field calculation is not valid. A first-order approximation leads to:

$$\gamma^* = \gamma_C + (A - \gamma_C)\,x + O(x^2) \,. \qquad (28)$$

This mean-field calculation shows that, if x → 0, we obtain a SOC state γ* → γC. However, the strict case x → 0 would require a scaling τ = O(N^a) with an exponent a > 0, as done previously for dynamic synapses in10–13. This scaling is necessary because all-to-all networks are pathological: the same pathology implies the scaling wij = Wij/N, which is also non-biological since it depends on the non-local information given by N. This dependence on N is similar to the scaling Jij/N for magnetic couplings in spin systems with all-to-all networks, which is also non-physical.

However, if we want to avoid the non-biological factor τ(N) = O(N^a), we can use reasonable parameters such as τ ∈ [10,100] ms, u ∈ [0.1,1], C = 1 and A ∈ [1.1,2]γC. In particular, if τ = 100, u = 1 and A = 1.1, we have x = 0.01 and:

$$\gamma^* = 1.001\,\gamma_C + O(10^{-4}) \,. \qquad (29)$$

Even a more conservative value τ = 10 ms gives γ* = 1.01γC. Although not perfect SOC, this result is totally sufficient to explain power-law neuronal avalanches. We call this phenomenon self-organized supercriticality (SOSC), where the supercriticality can be very small.
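These numbers follow directly from equation (26); a two-line check (illustrative Python, with C = 1 and γC = 1 assumed):

```python
# gamma* = (gamma_C + A*x) / (1 + x), equation (26), with x = 1/(u*C*tau)
gamma_star = lambda x, A, gamma_C=1.0: (gamma_C + A * x) / (1.0 + x)
print(gamma_star(x=0.01, A=1.1))   # tau = 100, u = 1  ->  ~1.0010 (eq. 29)
print(gamma_star(x=0.10, A=1.1))   # tau = 10,  u = 1  ->  ~1.0091, i.e. ~1.01
```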

We have yet to determine the volume of the parameter space (τ,A,u) where the SOSC phenomenon holds. In the case of dynamic synapses Wij[t], this parametric volume is very large12,13, and we conjecture that the same occurs for the dynamic gains γi[t]. This shall be studied in detail in another paper.

Discussion

Stochastic model: The stochastic neurons introduced by Galves and Löcherbach17,37 are interesting elements for studies of networks of spiking neurons because they enable exact analytic results and simple numerical calculations. While the LSIF models of Soula et al.30 and Cessac31–33 introduce stochasticity in the neuron's behavior by adding noise terms to its potential, the GL model is agnostic about the origin of noise and randomness (which can be a good thing when several noise sources are present). All the random behavior is grouped into the single firing function Φ(V).

Phase transitions: Networks of GL neurons display a variety of dynamical states with interesting phase transitions. We looked for stationary regimes in such networks, for some specific firing functions Φ(V) with no spontaneous activity at the baseline potential (that is, with Φ(0) = 0 and I = 0). We studied the changes in those regimes as a function of the mean synaptic weight W and mean neuronal gain γ. We found basically three kinds of phase transition, depending on the behavior of Φ(V) ∝ V^r for low V2–4:

r < 1: a ceaseless dynamic regime with no phase transitions (WC = 0), similar to that found by Larremore et al.38;

r = 1: a continuous (second-order) absorbing state phase transition in the directed percolation universality class, usual in SOC models2,3,12,13;

r > 1: discontinuous (first-order) absorbing state transitions.


We also observed discontinuous phase transitions for any r > 0 when the neurons have a firing threshold VT > 0; see the Appendix.

The deterministic LIF neuron models, which do not have noise, do not seem to allow these kinds of transitions23,26,27. The model studied by Larremore et al.38 is equivalent to the GL model with a monomial saturating firing function with r = 1, VT = 0 and γ = 1. They did not report any phase transition (perhaps because of the effect of inhibitory neurons in their network), but found a ceaseless activity very similar to what we observed with r < 1.

Avalanches: In the case of second-order phase transitions (Φ(0) = 0, r = 1, VT = 0), we detected firing avalanches at the critical boundary γC = 1/W, whose size and duration power-law distributions present the standard mean-field exponents β = 3/2 and δ = 2. We observed very good finite-size scaling and data collapse behavior, with finite-size exponents cS = 1 and cD = 1/2.

Self-organized criticality:

One way to achieve this goal is to use dynamic synapses Wij[t], in a way that mimics the loss of strength after a synaptic discharge (presumably due to neurotransmitter vesicle depletion) and the subsequent slow recovery10–13:

$$W_{ij}[t+1] = W_{ij}[t] + \frac{1}{\tau N K_j}\left(A - W_{ij}[t]\right) - u\,W_{ij}[t]\,X_j[t] \,, \qquad (30)$$

where Kj is the number of neighbors of the presynaptic neuron j. The parameters are related to the synaptic recovery time τ, the asymptotic value A, and the fraction u of synaptic strength lost after firing. This has been examined in10–13. For our all-to-all coupled network, we have K = N−1 and N(N−1) dynamic equations for the Wij's. This is a huge number, for example O(10^8) equations even for a moderate network of N = 10^4 neurons12,13. The possibility of well-behaved SOC in bulk dissipative systems with loading is discussed in11,43. Further considerations for systems with conservation on average at the stationary state, as occurs in our model, are made in12,13.

Inspired by the presence of the critical boundary, we proposed a new mechanism for short-scale neural network plasticity, based on dynamic neuronal gains γi[t] instead of the above dynamic synaptic weights. This new mechanism is biologically plausible, probably related to an activity-dependent firing probability at the AIS28,42, and was found to be sufficient to obtain neuronal avalanches. We obtained good data collapse and finite-size behavior for the PS(s) distributions but, in contrast with the static model, we get a finite-size exponent cS = 2/3. The reason for this difference is not clear at present, but we notice that such a cS = 2/3 exponent has been found previously in the Pruessner–Jensen SOC model and explained by a field theory elaborated for such systems43.

The great advantage of this new SOC mechanism is its computational efficiency: when simulating N neurons with K synapses each, there are only N dynamic equations for the gains γi[t], instead of NK equations for the synaptic weights Wij[t]. Notice that, for the all-to-all coupled network studied here, this means O(N^2) equations for dynamic synapses but only O(N) equations for dynamic gains. This makes a huge difference for the network sizes that can be simulated.

We stress that, since we used a finite τ, the criticality is not perfect (γ*/γC ∈ [1.001, 1.01]). So, we called it a self-organized supercriticality (SOSC) phenomenon. If x = 1/(uCτ) ≈ 0.001, the stationary state is experimentally indistinguishable from true SOC. However, if x < 100, large avalanches can be obtained. Interestingly, SOSC would be a concretization of Turing's intuition that the best brain operating point is slightly supercritical1.

We speculate that this slight supercriticality could explain why humans are so prone to supercritical-like pathological states like epilepsy (prevalence 1.7%) and mania (prevalence 2.6%)3. In our mechanism, such pathological states arise from a small gain depression u or a small gain recovery time τ. These parameters are experimentally related to firing rate adaptation, and perhaps our proposal could be experimentally studied in normal and pathological tissues.

We also conjecture that this supercriticality of the whole network could explain the Subsampling Paradox in neuronal avalanches: since the initial experimental protocols9, critical power laws have been seen when using arrays of Ne = 32–512 electrodes, a very small number compared to the full biological network with N = O(10^6–10^9) neurons. This situation Ne ≪ N has been called subsampling44–46.

The paradox occurs because models that present good power laws for avalanches measured over the total number of neurons N present, under subsampling, only exponential tails or log-normal behaviors46. No model, to the best of our knowledge, has solved this paradox. Our dynamic gains, since they produce supercritical states like γ* = 1.01γC, could be a solution if the supercriticality of the whole network, described by a power law with a supercritical bump for large avalanches, turns into an apparent pure power law under subsampling. This possibility will be fully explored in another paper.

Directions for future research: Future research could investigate other network topologies and firing functions, heterogeneous networks, the effect of inhibitory neurons26,38, and network learning. The study of self-organized supercriticality (and subsampling) with GL neurons and dynamic neuronal gains is particularly promising.

Methods

Numerical calculations: All numerical calculations were done using MATLAB. Simulation procedures: the simulation codes were written in Fortran90 and C++11. The avalanche statistics were obtained by simulating the evolution of finite networks of N neurons, with uniform synaptic strengths Wij = W (Wii = 0), monomial linear Φ(V) (r = 1), and critical parameter values WC = 1 and γC = 1. Each avalanche was started with all neuron potentials Vi[0] = VR = 0 and forcing the firing of a single random neuron i by setting Xi[0] = 1.

In contrast to standard integrate-and-fire10,11 or automata networks4,12,13, stochastic networks can fire even after intervals with no firing (ρ[t] = 0), because the membrane voltages V[t] are not necessarily zero and Φ(V) can produce new delayed firings. So, our criterion for defining avalanches is slightly different from the previous literature: the network was simulated according to equation (1) until all potentials had decayed to such low values that $\sum_i^N V_i[t] < 10^{-20}$, so that further spontaneous firing would not be expected to occur for thousands of steps, which defines a stop time. Then, the total number of firings S is counted from the first firing up to this stop time.

The correct finite-size scaling for the avalanche duration is obtained by defining the duration as D = Dbare + 5 time steps, where Dbare is the measured duration in the simulation. These extra five time steps probably arise from the new definition of avalanche used for these stochastic neurons.
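As an illustration of this protocol (a Python sketch under the stated parameters, not the authors' Fortran90/C++11 codes; names are hypothetical):

```python
import numpy as np

def avalanche_size(N=1000, W=1.0, gamma=1.0, mu=0.0, rng=None):
    """Size of one avalanche at the critical point W_C = gamma_C = 1."""
    rng = rng or np.random.default_rng()
    Phi = lambda V: np.clip(gamma * V, 0.0, 1.0)   # linear monomial, V_T = 0
    V = np.zeros(N)
    X = np.zeros(N, dtype=bool)
    X[rng.integers(N)] = True     # force the firing of a single random neuron
    size = 1
    while True:
        V = np.where(X, 0.0, mu * V + (W / N) * X.sum())   # equation (4)
        X = (rng.random(N) < Phi(V)) & ~X                  # fire w.p. Phi(V)
        size += int(X.sum())
        if not X.any() and V.sum() < 1e-20:                # stop-time criterion
            return size

# Sampling many avalanches gives the P_S(s) ~ s^(-3/2) statistics of Fig. 4
sizes = [avalanche_size() for _ in range(1000)]
```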

Appendix

The degenerate 2-cycle regimes

If for some Φ there is a saturating potential VS such that Φ(VS) = 1, then the fixed point ρ loses stability at some bifurcation point W = WB. The value of WB can be obtained by remembering that, due to the refractory period, the maximal fixed point is ρ = ρB = 1/2. The bifurcation point is WB = 2VS because, at this point, we have U1 = VS, Φ(U1) = 1 and Φ(Uk) = 0 for k > 1, so that the stationary condition for ρ reduces to ρB = 1−ρB, i.e. ρB = 1/2.

When W > WB = 2VS, besides the solution ρ = 1/2, there is an infinitude of solutions where the same potential distribution repeats with period 2 (2-cycles), and the activity ρ[t] alternates between ρ[t] = 1/2+ε(W) and ρ[t+1] = 1/2−ε(W); these states are marginally stable. In the ρ(W) curve, these possible periodic states are bounded by the lines:

$$\rho_1(W) = \frac{V_S}{W} \;\leq\; \rho \;\leq\; \frac{W - V_S}{W} = \rho_2(W) \,, \qquad (31)$$

(Fig. 3b). These limits are obtained by using the condition U1[t] = Wρ1(W) = VS and ρ2(W) = 1−ρ1(W).

The degenerate state with ρ[t+1] = ρ[t] = 1/2 and the 2-cycles with ε > 0 are marginally stable because the value of ε(W) is not unique (for a given W): any value ε(W) compatible with the limits of equation (31) can occur. These 2-cycles are not peculiar to the GL model: they also occur in the deterministic LIF model.

Isolated neurons

We analyze the behavior of the GL neuron model under the standard experiment where an isolated neuron in vitro is artificially injected with a current of constant intensity J. That corresponds to setting the external input signal I[t] of that neuron to a constant value I = J∆/C, where C is the effective capacitance of the neuron.

The firing rate of an isolated neuron can be written as:

$$F(I) = \rho(I)\,F_{max} \,, \qquad (32)$$

Figure 7. Firing rate of an isolated neuron: F(I) as a function of the external input I, for exponents r = 0.5, 1 and 2.

where Fmax is the empirical maximum firing rate (measured in spikes per second) of a given neuron and ρ is our previous neuron firing probability per time step. With W = 0 and I > 0 in equation (16), we get:

$$\rho(I) = \Phi(I)\,(1 - \rho(I)) \,. \qquad (33)$$

The solution for the monomial saturating Φ with VT = 0 is:

$$\rho(I) = \frac{(\gamma I)^r}{1 + (\gamma I)^r} \,, \qquad (34)$$

which is less than ρ = 1/2 only if I < 1/γ. For any I ≥ 1/γ the firing rate saturates at ρ = 1/2 (the neuron fires at every other step, alternating between potentials U0 = VR = 0 and U1 = I). So, for I > 0, there is no phase transition. Interestingly, equation (34), known as a generalized Michaelis–Menten function, is frequently used to fit the firing response of biological neurons to DC currents47,48.
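A small illustration of equations (32) and (34) (a Python sketch; Fmax here is an assumed empirical scale, not a value from the paper):

```python
import numpy as np

def F(I, r=1.0, gamma=1.0, Fmax=100.0):
    """Firing-rate response to a DC current, equations (32) and (34).
    Phi(I) saturates at 1 for I >= V_S = 1/gamma, so rho saturates at 1/2."""
    x = np.clip(gamma * np.asarray(I, dtype=float), 0.0, 1.0) ** r
    return Fmax * x / (1.0 + x)

# F grows as a generalized Michaelis-Menten curve and saturates at Fmax/2
print(F([0.25, 0.5, 1.0, 2.0], r=1.0))   # -> [20., 33.3, 50., 50.]
```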

Discontinuous phase transitions in networks: the case with VT > 0 and I > 0.

The standard IF model has VT > 0. If we allow this feature in our models, we find a new ingredient that produces first-order phase transitions. Indeed, in this case, if U1 = Wρ + I < VT then we have a single peak at U0 = 0 with η0 = 1, which means we have a silent state. When U1 = Wρ + I > VT, we have a peak with height η1 = 1−ρ and ρ = η0 = Φ(U1)η1.

For the linear monomial model this leads to the equations:

$$\rho = \gamma (U_1 - V_T)(1 - \rho) \,, \qquad (35)$$

$$\gamma W \rho^2 + (1 - \gamma W - \gamma V_T + \gamma I)\rho + \gamma V_T - \gamma I = 0 \,, \qquad (36)$$

with the solution:

$$\rho_\pm(\gamma, W, V_T, I) = \frac{(\gamma W + \gamma V_T - \gamma I - 1) \pm \sqrt{(\gamma W + \gamma V_T - \gamma I - 1)^2 - 4\gamma^2 W V_T + 4\gamma^2 W I}}{2\gamma W} \,, \qquad (37)$$

where ρ+ is the nontrivial fixed point and ρ− is the unstable fixed point (separatrix). These solutions only exist for γW values such that $\gamma(W + V_T - I) - 1 > 2\gamma\sqrt{W(V_T - I)}$. This produces the condition:

$$\gamma W > \gamma W_C = \left(1 + \sqrt{\gamma (V_T - I)}\right)^2 \,, \qquad (38)$$

which defines a first-order critical boundary. At the critical boundary, the density of firing neurons is:

$$\rho_C = \frac{\sqrt{\gamma (V_T - I)}}{1 + \sqrt{\gamma (V_T - I)}} \,, \qquad (39)$$

Figure 8. Phase transitions for VT > 0: the monomial model with µ = 0, r = 1, γ = 1 and thresholds VT = 0, 0.05 and 0.1. Here the solid black lines represent the stable fixed points, the dashed black lines represent unstable fixed points, and the grey lines correspond to the marginally stable boundaries of the 2-cycles regime. The discontinuity ρC goes to zero for VT → 0.

which is nonzero (discontinuous) for any VT > I. These transitions can be seen in Fig. 8. The solutions of equations (37) and (39) are valid only for ρC < 1/2 (the 2-cycle bifurcation). This implies the maximal value VT = 1/γ + I.

References

1. Turing, A. M. Computing machinery and intelligence. Mind 59, 433–460 (1950).
2. Chialvo, D. R. Emergent complex neural dynamics. Nature Physics 6, 744–750 (2010).
3. Hesse, J. & Gross, T. Self-organized criticality as a fundamental property of neural systems. Criticality as a signature of healthy neural systems: multi-scale experimental and computational studies (2015).
4. Kinouchi, O. & Copelli, M. Optimal dynamical range of excitable networks at criticality. Nature Physics 2, 348–351 (2006).
5. Beggs, J. M. The criticality hypothesis: how local cortical networks might optimize information processing. Philosophical Transactions of the Royal Society of London A: Mathematical, Physical and Engineering Sciences 366, 329–343 (2008).
6. Shew, W. L., Yang, H., Petermann, T., Roy, R. & Plenz, D. Neuronal avalanches imply maximum dynamic range in cortical networks at criticality. The Journal of Neuroscience 29, 15595–15600 (2009).
7. Massobrio, P., de Arcangelis, L., Pasquale, V., Jensen, H. J. & Plenz, D. Criticality as a signature of healthy neural systems. Frontiers in Systems Neuroscience 9 (2015).
8. Herz, A. V. & Hopfield, J. J. Earthquake cycles and neural reverberations: collective oscillations in systems with pulse-coupled threshold elements. Physical Review Letters 75, 1222 (1995).
9. Beggs, J. M. & Plenz, D. Neuronal avalanches in neocortical circuits. The Journal of Neuroscience 23, 11167–11177 (2003).
10. Levina, A., Herrmann, J. M. & Geisel, T. Dynamical synapses causing self-organized criticality in neural networks. Nature Physics 3, 857–860 (2007).
11. Bonachela, J. A., De Franciscis, S., Torres, J. J. & Muñoz, M. A. Self-organization without conservation: are neuronal avalanches generically critical? Journal of Statistical Mechanics: Theory and Experiment 2010, P02015 (2010).
12. Costa, A., Copelli, M. & Kinouchi, O. Can dynamical synapses produce true self-organized criticality? Journal of Statistical Mechanics: Theory and Experiment 2015, P06004 (2015).
13. Campos, J., Costa, A., Copelli, M. & Kinouchi, O. Differences between quenched and annealed networks with dynamical links. arXiv:1604.05779. Submitted to Physical Review E (2016).
14. Ermentrout, B., Pascal, M. & Gutkin, B. The effects of spike frequency adaptation and negative feedback on the synchronization of neural oscillators. Neural Computation 13, 1285–1310 (2001).
15. Benda, J. & Herz, A. V. A universal model for spike-frequency adaptation. Neural Computation 15, 2523–2564 (2003).
16. Buonocore, A., Caputo, L., Pirozzi, E. & Carfora, M. F. A leaky integrate-and-fire model with adaptation for the generation of a spike train. Mathematical Biosciences and Engineering: MBE 13, 483–493 (2016).
17. Galves, A. & Löcherbach, E. Infinite systems of interacting chains with memory of variable length—a stochastic model for biological neural nets. Journal of Statistical Physics 151, 896–921 (2013).
18. Lapicque, L. Recherches quantitatives sur l'excitation électrique des nerfs traitée comme une polarisation. J. Physiol. Pathol. Generale 9, 620–635 (1907). Translation: Brunel, N. & van Rossum, M. C. Quantitative investigations of electrical nerve excitation treated as polarization. Biol. Cybernetics 97, 341–349 (2007).
19. Gerstein, G. L. & Mandelbrot, B. Random walk models for the spike activity of a single neuron. Biophysical Journal 4, 41 (1964).
20. Burkitt, A. N. A review of the integrate-and-fire neuron model: I. Homogeneous synaptic input. Biological Cybernetics 95, 1–19 (2006).
21. Burkitt, A. N. A review of the integrate-and-fire neuron model: II. Inhomogeneous synaptic input and network properties. Biological Cybernetics 95, 97–112 (2006).
22. Naud, R. & Gerstner, W. The performance (and limits) of simple neuron models: generalizations of the leaky integrate-and-fire model. In Computational Systems Neurobiology, 163–192 (Springer, 2012).
23. Brette, R. et al. Simulation of networks of spiking neurons: a review of tools and strategies. Journal of Computational Neuroscience 23, 349–398 (2007).
24. Brette, R. What is the most realistic single-compartment model of spike initiation? PLoS Comput Biol 11, e1004114 (2015).
25. Benayoun, M., Cowan, J. D., van Drongelen, W. & Wallace, E. Avalanches in a stochastic model of spiking neurons. PLoS Comput Biol 6, e1000846 (2010).
26. Ostojic, S. Two types of asynchronous activity in networks of excitatory and inhibitory spiking neurons. Nature Neuroscience 17, 594–600 (2014).
27. Torres, J. J. & Marro, J. Brain performance versus phase transitions. Scientific Reports 5 (2015).
28. Platkiewicz, J. & Brette, R. A threshold equation for action potential initiation. PLoS Comput Biol 6, e1000850 (2010).
29. McDonnell, M. D., Goldwyn, J. H. & Lindner, B. Editorial: Neuronal stochastic variability: influences on spiking dynamics and network activity. Frontiers in Computational Neuroscience 10 (2016).
30. Soula, H., Beslon, G. & Mazet, O. Spontaneous dynamics of asymmetric random recurrent spiking neural networks. Neural Computation 18, 60–79 (2006).
31. Cessac, B. A discrete time neural network model with spiking neurons. Journal of Mathematical Biology 56, 311–345 (2008).
32. Cessac, B. A view of neural networks as dynamical systems. International Journal of Bifurcation and Chaos 20, 1585–1629 (2010).
33. Cessac, B. A discrete time neural network model with spiking neurons: II. Dynamics with noise. Journal of Mathematical Biology 62, 863–900 (2011).
34. De Masi, A., Galves, A., Löcherbach, E. & Presutti, E. Hydrodynamic limit for interacting neurons. Journal of Statistical Physics 158, 866–902 (2015).
35. Duarte, A. & Ost, G. A model for neural activity in the absence of external stimuli. Markov Processes and Related Fields 22, 37–52 (2016).
36. Duarte, A., Ost, G. & Rodríguez, A. A. Hydrodynamic limit for spatially structured interacting neurons. Journal of Statistical Physics 161, 1163–1202 (2015).
37. Galves, A. & Löcherbach, E. Modeling networks of spiking neurons as interacting processes with memory of variable length. J. Soc. Franc. Stat. 157, 17–32 (2016).
38. Larremore, D. B., Shew, W. L., Ott, E., Sorrentino, F. & Restrepo, J. G. Inhibition causes ceaseless dynamics in networks of excitable nodes. Physical Review Letters 112, 138103 (2014).
39. Cooper, S. J. Donald O. Hebb's synapse and learning rule: a history and commentary. Neuroscience & Biobehavioral Reviews 28, 851–874 (2005).
40. Tsodyks, M., Pawelzik, K. & Markram, H. Neural networks with dynamic synapses. Neural Computation 10, 821–835 (1998).
41. Larremore, D. B., Shew, W. L. & Restrepo, J. G. Predicting criticality and dynamic range in complex networks: effects of topology. Physical Review Letters 106, 058101 (2011).
42. Kole, M. H. & Stuart, G. J. Signal processing in the axon initial segment. Neuron 73, 235–247 (2012).
43. Bonachela, J. A. & Muñoz, M. A. Self-organization without conservation: true or just apparent scale-invariance? Journal of Statistical Mechanics: Theory and Experiment 2009, P09009 (2009).
44. Priesemann, V., Munk, M. H. & Wibral, M. Subsampling effects in neuronal avalanche distributions recorded in vivo. BMC Neuroscience 10, 40 (2009).
45. Ribeiro, T. L. et al. Spike avalanches exhibit universal dynamics across the sleep-wake cycle. PLoS ONE 5, e14129 (2010).
46. Ribeiro, T. L., Ribeiro, S., Belchior, H., Caixeta, F. & Copelli, M. Undersampled critical branching processes on small-world and random networks fail to reproduce the statistics of spike avalanches. PLoS ONE 9, e94992 (2014).
47. Lipetz, L. E. The relation of physiological and psychological aspects of sensory intensity. In Principles of Receptor Physiology, 191–225 (Springer, 1971).
48. Naka, K.-I. & Rushton, W. A. S-potentials from luminosity units in the retina of fish (Cyprinidae). The Journal of Physiology 185, 587 (1966).

Acknowledgements

This paper results from research activity on the FAPESP Center for Neuromathematics (FAPESP grant 2013/07699-0). OK and AAC also received support from Núcleo de Apoio à Pesquisa CNAIPS-USP and FAPESP (grant 2016/00430-3). LB, JS and ACR also received CNPq support (grants 165828/2015-3, 310706/2015-7 and 306251/2014-0). We thank A. Galves for suggestions and revision of the paper, and M. Copelli and S. Ribeiro for discussions.

Author contributions statement

LB and AAC performed the simulations and prepared all the figures. OK and JS made the analytic calculations. OK, JS and LB wrote the paper. MA and ACR contributed with ideas, the writing of the paper and citations to the literature. All authors reviewed the manuscript.

Competing financial interests

The authors declare no competing financial interests.
