

  • Arithmetic Computing via Rate Coding in Neural Circuits with Spike-triggered Adaptive Synapses

    Sushrut Thorat
    Department of Physics
    Indian Institute of Technology, Bombay
    Maharashtra, India 400076
    Email: [email protected]

    Bipin Rajendran
    Department of Electrical Engineering
    Indian Institute of Technology, Bombay
    Maharashtra, India 400076
    Email: [email protected]

    Abstract—We demonstrate spiking neural circuits with spike-time dependent adaptive synapses capable of performing a variety of basic mathematical computations. These circuits encode and process information in average spike rates that lie in the range of 40 – 140 Hz. The synapses in our circuit obey simple, local and spike time dependent adaptation rules. We demonstrate that our circuits can perform the fundamental operations – addition, subtraction, multiplication and division – as well as other non-linear transformations such as exponentiation and logarithm, for time dependent signals in real-time. We show that our spiking neural circuits are tolerant to a high degree of noise in the input variables, and illustrate their computational power in an exemplary signal estimation problem. Our circuits can thus be used in a wide variety of hardware and software implementations for navigation, control and computing.

    I. INTRODUCTION

    One of the fundamental questions of neuroscience is how basic computational operations are performed by neural circuits that encode information in the temporal domain. Mathematical operations such as addition and multiplication are central to signal processing, and hence to neural computations. Addition and subtraction are essential for pattern recognition. These operations can alter the number of neurons required to reach threshold and influence the ability to distinguish different patterns of activation [1]. Multiplicative operations occur in a wide range of sensory systems. Computations such as looming stimulus detection in the locust visual system [2], and auditory processing in the barn-owl nervous system [3], rely on multiplication operations. In this paper, we model these operations using Spiking Neural Networks (SNNs) [4] to unravel potential mechanisms underlying these computations in biological systems. The goal of our study is to mathematically describe the connections in the neural circuit, and build SNNs to perform the prescribed mathematical operations. These circuits can then be used to build complex networks to model higher order neural functions.

    There have been various attempts at building SNNs to perform basic mathematical operations. Koch and Segev detailed the role of single neurons in information processing [8]. Srinivasan and Bernard proposed a mechanism for multiplication which involved detecting coincident arrival of spikes [5]. Coincidence detection cannot be used to perform division. Tal and Schwartz used a Log transfer function, generated by creating a refractory period in Leaky-Integrate-and-Fire neurons, and correlated multiplication with the addition of logarithms, but they did not calculate the exact multiplication [6]. Inspired by the barn-owl auditory processing capabilities, Nezis & van Rossum built a SNN for multiplication by approximating multiplication by the minimum function, and using a power-law transfer function [7]. All these approaches used non-linear transfer functions to achieve multiplication.

    In this paper, we adopt a novel strategy to perform mathematical operations using SNNs. We develop small SNNs with linear transfer functions, and spike-dependent plastic synapses using simple weight adaptation rules to perform these operations. We introduce a new token of information, which we call Spike Info, s, to be used to encode information in place of Spike Rate, as required towards the linearisation of the spike response to stimuli.

    We will use the linearized neuron model to develop circuits to perform linear operations such as addition and subtraction. For multiplication and division operations, we use exp(log s1 ± log s2), where s1 and s2 are the input spike infos. Similar computational schemes exist in nature, in the locust visual system [2], although it uses non-linear transfer functions for the logarithm and exponentiation operations. We then detail SNNs for performing transformations such as logarithm, exponentiation, multiplication, and reciprocal. Note that all these operations are in the spike rate domain, and are generated in real-time for biologically plausible signals. We then assess the performance of the developed SNNs, especially their noise resilience, and illustrate their applicability in an exemplary signal estimation problem related to echolocation in real-time. The formalism and the circuits we have developed here can be used to compose large spiking neural circuits for software as well as power efficient hardware implementations.

    II. SPIKING NEURAL NETWORK IMPLEMENTATION

    We use the Adaptive Exponential Integrate and Fire (AEIF) neuron model [9] for modeling the dynamics of the neurons, with the parameters chosen to mimic regular spiking (RS) neurons. The dynamics of the membrane potential V(t), in response to synaptic current Isyn and applied current Iext, is described by the equations,

    C dV/dt = -gL (V(t) - EL) + gL ΔT exp((V(t) - VT)/ΔT) - U(t) + Iext(t) + Isyn(t),  (1a)

    τw dU/dt = a (V(t) - EL) - U(t),  (1b)

    and when V(t) ≥ 0, V(t) → Vr and U(t) → U(t) + b.


  • Synaptic current due to a spike in the pre-synaptic neuron at time tf is given by the equation,

    Isyn(t) = Is w [exp(-(t - tf)/τm) - exp(-(t - tf)/τs)] h(t - tf),  (2)

    where w is the weight of the synapse and h is the Heaviside step function.

    The values of the parameters of the neuron are listed in Appendix A. We simulate the dynamics of our circuits in MATLAB and we obtain convergent results by using Euler integration with a step size of 0.01 ms. The instantaneous spike rates are calculated by using the adaptive Kernel Density Estimation method, as discussed by Shimazaki & Shinomoto [10]. When we do not deal with the pictorial representation of spike rate in real-time (which requires knowledge of the instantaneous spike rate), we identify the time window during the simulation when the inverse of the inter-spike intervals settles down, and we average over that window to find the average spike rate.
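    The integration scheme above is straightforward to reproduce. The sketch below is an illustrative Python port of Eq. (1) under Euler integration with the 0.01 ms step (the paper's own simulations were in MATLAB); the parameter values follow Appendix A, with the conventional negative signs on EL, VT and Vr.

```python
import math

# Euler integration of the AEIF neuron of Eq. (1), using the RS parameters
# of Appendix A. Units: pF, nS, mV, pA, ms, so pA/pF = mV/ms.
C, gL, EL = 200.0, 10.0, -70.0   # capacitance, leak conductance, rest
VT, DT = -50.0, 2.0              # threshold and slope factor of the exp term
a, b, tau_w, Vr = 2.0, 0.0, 30.0, -58.0

def simulate_aeif(I_ext, T=500.0, dt=0.01):
    """Return spike times (ms) of an AEIF neuron driven by a DC current (pA)."""
    V, U, spikes = EL, 0.0, []
    for step in range(int(T / dt)):
        dV = (-gL * (V - EL) + gL * DT * math.exp((V - VT) / DT)
              - U + I_ext) / C
        dU = (a * (V - EL) - U) / tau_w
        V += dt * dV
        U += dt * dU
        if V >= 0.0:                    # spike: reset V, increment U (Eq. 1)
            V, U = Vr, U + b
            spikes.append(step * dt)
    return spikes
```

    With a suprathreshold DC current this reproduces the monotone but non-linear f-I behaviour of Fig. 1; the bias current and self-inhibitory synapse of Sec. III are what flatten it into the linear relation of Eq. (4).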

    III. LINEARIZED NEURON

    The spike rate of a RS (AEIF) neuron in response to the magnitude of the applied DC current is shown in Fig. 1. The response is non-linear, starkly more so in the region of low spike rate. To make the response linear, we design a linearized neuron, by applying a bias current (Ib = 265 nA), and introducing a self-inhibitory synapse (winh = 130) to the AEIF neuron as shown in Fig. 2. The self-inhibitory connection also lets us access a wider domain of input current while keeping the spike rate under 200 Hz (to maintain biologically realistic modelling).

    Fig. 1: The spike rate response at lower DC input currents of an AEIF neuron is non-linear. Since the spike rate response at higher DC input currents is almost linear, this regime is better suited for computations.

    The spike rate of the linearized neuron (Fig. 3a) at zero input current is now f0 = 14.55 Hz. Considering the information coded in the spike rate of the neuron to be offset by this value, we define a quantity called Spike Info, denoted as s, by the relation

    s = f - f0,  (3)

    where f is the observed spike rate. Since the output spike rate is more or less constant, the average value of the synaptic current flowing out of the output neuron for unit weight is also found to increase linearly with spike rate (Fig. 3b). Hence, the cardinal relations obeyed by the linearized neuron are

    sout = α Iin  (4)
    ⟨Iout⟩ = w (β sout + γ)  (5)

    where α = 0.2237 Hz/pA, β = 0.01105 pA/Hz, and γ = 0.1605 pA.

    Fig. 2: Our linearized neuron is an AEIF neuron with a self-inhibitory synapse winh = 130 and with a bias current Ib = 265 nA.

    Fig. 3: By using a bias current to avoid the low current region, and introducing a self-inhibitory connection to reduce the non-linearity, a linear dependence can be obtained. The bias current also creates a positive offset of f0 = 14.55 Hz in the spike rate. So, we introduce a token of information, Spike Info, s, defined as s = f - f0.

    A. Offsetting and Scaling Spike Info

    Fig. 4: By adjusting the values of w12 and Ib0, we can independently obtain the transformations s2 = s1 + s0 and/or s2 = λ s1.

    Now we discuss how to design a SNN that offsets a given spike info (by a factor s0) or scales it (by a factor λ) according to the transformation

    sout = λ sin + s0  (6)


  • Referring to the SNN depicted in Fig. 4, s1 denotes the input spike info and s2 denotes the output spike info. We will show that by adjusting the values of w12 and Ib0, we can obtain this transformation independently. Let the average current flowing into N1 due to s1 be denoted as Iin. Then

    ⟨Iin⟩ = w12 (β s1 + γ)  (7a)
    s2 ≈ α (⟨Iin⟩ + Ib0)  (7b)

    Hence,

    s2 ≈ α β w12 s1 + α (Ib0 + γ w12)  (8)

    Note that (7b) is an approximate relation, as ⟨Iin⟩ is not a DC current. By appropriately tuning the values w12 and Ib0, we can independently add an offset or scale the input spike info (Figs. 5 & 6). The values of these parameters needed for some of the transformations we will use later in the paper are listed in Table I.
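    Equation (8) can be inverted to obtain first-pass parameter values. A minimal sketch, assuming the linear model holds exactly (α, β, γ are the constants of Eqs. (4) and (5); the entries of Table I were tuned on the full spiking circuit, so they differ slightly from these estimates):

```python
# Invert Eq. (8), s2 ≈ α·β·w12·s1 + α·(Ib0 + γ·w12), to estimate the
# synapse weight and bias that implement s_out = λ·s_in + s0.
ALPHA = 0.2237    # Hz/pA, Eq. (4)
BETA  = 0.01105   # pA/Hz, Eq. (5)
GAMMA = 0.1605    # pA,    Eq. (5)

def scaling_params(lam, s0=0.0):
    """Linear-model estimate of (w12, Ib0) for s2 = lam*s1 + s0."""
    w12 = lam / (ALPHA * BETA)       # gain condition: α·β·w12 = λ
    Ib0 = s0 / ALPHA - GAMMA * w12   # offset condition: α·(Ib0 + γ·w12) = s0
    return w12, Ib0
```

    For λ = 1, s0 = 0 this gives w12 ≈ 405 and |Ib0| ≈ 65 pA, in the neighbourhood of the tuned Table I row (400, 75); the residual mismatch comes from (7b) being only approximate for non-DC currents.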

    TABLE I: Parameters for Scaling and Offset, sout = λ sin + s0

    w12     Ib0 (pA)   λ      s0 (Hz)   Fig.
    195     46         0.5    0         5
    400     75         1      0         5
    865     188        2      0         5
    430     28         1.0    30        6
    445     256        1.0    80        6
    201     44         0.5    21        20, 21
    207     123        0.5    40        6, 22
    401     115        0.93   48        15

    Fig. 5: In the range of 0 – 120 Hz spike info, it is possible to linearly scale the spike info by tuning the parameters w12 and Ib0 (Table I).

    IV. LINEAR OPERATIONS

    A. Addition

    We now design a SNN which will perform the addition operation in the spike info domain. The circuit is shown in Fig. 7; the input spike infos are fed into a linearized neuron through two identical excitatory synapses wsum. This neuron also receives an additional bias current Isum to correct for offset errors. Hence, the total average current flowing into the adder neuron is

    ⟨Iinp⟩ ≈ wsum (β s1 + γ) + wsum (β s2 + γ) + Isum  (9)

    Fig. 6: It is possible to introduce an offset, and/or scale the spike info in the range of 0 – 120 Hz by adjusting the parameters w12 and Ib0 (Table I).

    Fig. 7: Sum of spike infos can be computed by feeding the input spikes into a linearized neuron with equal excitatory synapses wsum = 405. The bias current Isum = 190 pA ensures that ssum = s1 + s2.

    As before, this is the average current flowing into a linearized neuron. Hence,

    ssum ≈ α ⟨Iinp⟩ = α β wsum (s1 + s2) + α (2 γ wsum + Isum)  (10)

    Fig. 8: (a) Exemplary spike infos s1 and s2 were generated by feeding sinusoidal and ramp input currents to two linearized neurons. (b) The output generated by the adder has a time dependent spike rate that matches the expected s1 + s2.

  • By appropriately choosing the parameter values (here, wsum = 405 and Isum = 190 pA) we can ensure that ssum = s1 + s2. The circuit faithfully computes the sum of spike infos in real-time. When a slowly varying sinusoidal spike info and a ramp spike info are applied to the adder, it generates a spike stream in real-time, whose spike info closely matches the expected variation (Fig. 8). The input spike infos used in this study were generated by applying appropriate currents to linearized neurons not shown here.
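    The unit-gain condition hidden in Eq. (10) can be checked directly: requiring α·β·wsum = 1 (so that ssum tracks s1 + s2 with unit slope) fixes wsum from the constants of Eqs. (4) and (5) alone. A quick sanity check:

```python
# Unit-gain condition of the adder, from Eq. (10): α·β·w_sum = 1.
ALPHA, BETA, GAMMA = 0.2237, 0.01105, 0.1605
w_sum = 1.0 / (ALPHA * BETA)         # close to the tuned value 405
I_sum_est = -2.0 * GAMMA * w_sum     # zero-offset estimate, about -130 pA
```

    The bias estimate (about 130 pA in magnitude) differs from the tuned 190 pA value because Eqs. (9) and (10) treat the synaptic current as DC, which it is not.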

    B. Subtraction

    On similar lines, a SNN to determine the difference of spike infos can be obtained as shown in Fig. 9 by feeding the input spikes to a linearized neuron, but one through an excitatory synapse and the other through an inhibitory synapse with identical strengths (wdiff). Then,

    sdiff ≈ α ⟨Iinp⟩ = α β wdiff (s1 - s2) + α Idiff  (11)

    A small value of Idiff is required to remove non-linearity, as the synaptic currents are not constant values, but time-dependent. The SNN shown with wdiff = 405 and bias current Idiff = 13 pA computes (s1 - s2) (Fig. 10).

    Fig. 9: The SNN circuit for calculating the difference in spike info uses an inhibitory synapse with equal magnitude as the excitatory synapse, with wdiff = 405 and Idiff = 13 pA.

    Fig. 10: (a) Exemplary spike infos s1 and s2 were generated by feeding sinusoidal input currents to two linearized neurons. (b) The output spike info of the difference SNN matches the expected value (s1 - s2) when s1 > s2. Since all circuits in our scheme process only positive spike info, we have not optimized the circuit to match the response when s1 < s2.

    V. NON-LINEAR OPERATIONS

    To complete the repertoire of basic mathematical operations, we now proceed to build SNNs for multiplication and division. One way to implement these operations is by using exponentiation and logarithm as exp(log s1 ± log s2), where s1 and s2 are the input spike infos.

    A. Logarithm

    We will now develop a SNN whose output spike info will be the natural logarithm of the input spike info. We propose that this can be achieved by feeding the input spikes to a linearized neuron with a spike dependent adaptive synapse wln(t), as shown in Fig. 11. Also note that this linearized neuron receives an additional DC bias current Iln.

    Fig. 11: The adaptive synapse wln(t) responds dynamically to the input spikes. By choosing appropriate parameters for the update, the output spike info can be made proportional to the logarithm of the input spike info.

    We require the spike activity of the output neuron to be sub-linearly proportional to the synaptic current. This can be done by imposing the following weight update rule

    dwln/dt = -k0 (wln - w0) - k1 (Iin - I0),  (12)

    where w0, k0, k1 and I0 are some constants.

    Since the bias current Iln is a constant, (7a) implies that Iin = wln(t)(β s1 + γ). Thus, (12) can be written as

    dwln(t)/dt = -a wln(t) - b s1 wln(t) + c,  (13)

    where a, b and c are constants. To perform computations in the spike domain, we propose the replacement of s1 dt with δ(t - ts), where ts is the time of arrival of a spike. Thus, we employ the weight-adaptation rule

    wln(t + Δt) - wln(t) = (c - a wln(t)) Δt - b wln(t) Σ_ts δ(t - ts)  (14)

    The weight adaptation for three different input spike infos is shown in Fig. 12. The weights do not settle down to particular values, but their averages do. It is also seen that the temporal dynamics of the circuit settle down within 0.1 - 0.2 ms.
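    The qualitative behaviour of the update rule (14) can be checked with a toy simulation: between input spikes the weight relaxes toward c/a, and each spike depresses it by the fraction b, so for a regular input at rate s the time-averaged weight sits near c/(a + b·s). The constants a, b, c below are illustrative only; the paper does not list the values it used.

```python
# Toy integration of Eq. (14): dw = (c - a·w)·dt between spikes, plus a
# depression of b·w at each input spike time. Constants are illustrative.
def mean_weight(rate_hz, a=5.0, b=0.05, c=100.0, T=2.0, dt=1e-4):
    """Average weight under Eq. (14) for a regular spike train of rate_hz."""
    w, isi, next_spike, t = c / a, 1.0 / rate_hz, 0.0, 0.0
    ws = []
    while t < T:
        w += (c - a * w) * dt          # continuous relaxation toward c/a
        if t >= next_spike:            # spike arrival: depress by b·w
            w -= b * w
            next_spike += isi
        ws.append(w)
        t += dt
    settled = ws[len(ws) // 2:]        # discard the initial transient
    return sum(settled) / len(settled)
```

    The mean weight falls as the input rate rises (as in Fig. 13), so the mean input current, proportional to w·s, grows like s/(a + b·s): sub-linear, which is what the logarithmic fit of Eq. (15) captures over 20 – 120 Hz.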

    Thus, as the input spike info increases, the rate of increase of average current into Nln decreases, resulting in the logarithmic dependence (Fig. 14). We have designed our logarithmic SNN such that its domain is 20 – 120 Hz spike info, and in this regime, our logarithmic translator is

    sout = cln log(sin) - fl  (15)

    where cln = 10.73 and fl = 13.73 Hz.


  • Fig. 12: The synaptic weight wln dynamically changes in response to the spike info s1 in Fig. 11. The temporal dynamics of the spike rate at the output neuron Nln is also shown for s1 = 120.8 Hz.

    Fig. 13: The mean and standard deviation of the synaptic weight after the logarithmic circuit settles is shown as a function of s1. The value of wln decreases with increasing input spike info, and hence the rate of increase of current into N2 decreases with increasing input spike info, as the current is proportional to the product of the weight and the input spike info.

    B. Exponentiation

    The SNN to generate an output spike info proportional to the exponential of the input spike info can be designed on similar lines as that of the logarithm, except that we now require the spike activity of the output neuron to be super-linearly proportional to the synaptic current. The weight change rule is hence

    dwexp/dt = -k2 (wexp - w1) + k3 (I - I1),  (16)

    where k2, k3, w1 and I1 are constants. The spike-triggered weight update rule is given by

    wexp(t + Δt) - wexp(t) = (c - a wexp(t)) Δt + b wexp(t) Σ_ts δ(t - ts)  (17)

    However, we found that to accurately reproduce the exponential translation, it is necessary to offset the input spike info by 48 Hz and scale it by a factor 0.93. Hence, the exponentiation operation requires two neurons, as shown in Fig. 15.

    Fig. 14: The logarithmic circuit generates a spike stream, whose spike info is proportional to the logarithm of the spike info of the input spike stream (ranging from 20 – 120 Hz).

    Fig. 15: The adaptive synapse wexp(t) responds dynamically to the input spikes. By choosing appropriate parameters for the update, the output spike info can be made proportional to the exponential of the input spike info.

    Fig. 16: The synaptic weight wexp dynamically changes in response to the spike info s1 in Fig. 18 (top). The temporal dynamics of the spike rate at the output neuron Ne2 is also shown for s1 = 41.3 Hz.

    Dynamics in the weight update follow similar trends as before (Fig. 16), except that the mean value of the synaptic strength now increases super-linearly with input spike info (Fig. 17). The output spike info at Ne2 is exponentially proportional to the input spike info in the range of 30 – 50 Hz (Fig. 18). We also verified that the norm of residuals obtained by fitting the observed data with a square dependence is about a factor of 10 higher than the corresponding value for the exponential fit shown in Fig. 18.

  • Fig. 17: The mean and standard deviation of the synaptic weight after the exponential circuit settles is shown as a function of s1. The value of wexp increases with increasing input spike info, and hence the rate of increase of current into N2 increases with increasing input spike info.

    Fig. 18: The output spike info of the exponential SNN is exponentially proportional to the input spike info in the range of 30 – 50 Hz.

    We have designed our exponential SNN such that its domain is 30 – 50 Hz spike info, and in this regime, our exponential translator is

    sout = βexp exp(cexp sin) + sexp  (18)

    where cexp = 0.1862, βexp = 11.92 × 10⁻³ and the offset sexp = 16.04 Hz.
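    The same averaging argument as for the logarithm explains the super-linear growth of wexp in Fig. 17: with the sign of the spike-triggered term flipped in Eq. (17), balancing the relaxation against the potentiation of b·wexp per spike gives a time-averaged fixed point near c/(a - b·s). A one-line sketch with illustrative constants (the paper does not list its values):

```python
# Time-averaged fixed point of Eq. (17) for a regular input at rate s:
# (c - a·w) + b·w·s = 0  =>  w ≈ c / (a - b·s), valid only for s < a/b.
# The blow-up as s approaches a/b is consistent with the circuit's limited
# 30-50 Hz operating domain. Constants below are illustrative only.
def mean_weight_exp(rate_hz, a=8.0, b=0.05, c=100.0):
    return c / (a - b * rate_hz)
```

    Successive 10 Hz steps in the input produce growing steps in the mean weight, i.e. convex, super-linear growth, unlike the saturating c/(a + b·s) of the logarithm stage.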

    C. Multiplication & Division

    The blocks to compute ln and exp functions can now be combined with the adder circuit, in principle, to generate circuits to perform multiplication and division. However, when cascading these circuits, since cexp·cln = 1.99, we have to incorporate a SNN circuit to scale the spike info by a factor of 1/2. Also, since the range of the adder/difference circuit has to lie within the domain of the exponential circuit, we use a spike info translator. A block diagram of the complete circuit is given in Fig. 19.

    Fig. 19: The SNN for multiplying/dividing two spike infos involves the logarithmic converter and the adder/difference SNN. This output is then scaled and offset before being passed to the exponential SNN. Hence, six neurons are required for multiplication and division.

    Fig. 20: When the multiplier circuit is fed with two identical spike trains whose frequency increases linearly with time, it generates an output spike train whose spike info increases quadratically, in the range of 20 – 120 Hz.

    Fig. 21: When the multiplier circuit is fed with an increasing and a decreasing ramp spike info, it generates an output spike train that closely follows the expected parabolic product.

    The response of the multiplier circuit for two slowly varying spike info signals is shown in Figs. 20 and 21. In both cases, the spike info of the output spike train changes quadratically with time and there is excellent match between the computed and expected values. The offset/scaling circuit used in the multiplier employs λ = 0.5 and s0 = 21 Hz; the circuit parameters are given in Table I.

  • Fig. 22: Using the SNN for division, we demonstrate that it is possible to generate a spike train whose spike info varies inversely proportional to the spike info of the input spike train.

    Fig. 23: An example of a slowly-varying noisy signal (std. dev. 0.1) and its DFT. This is used for noise-resilience tests with the SNNs.

    Similarly, we demonstrate the performance of the SNN for division, by showing that it is possible to generate a spike train whose spike info varies inversely with the spike info of the input spike train (Fig. 22). One of the inputs used in this experiment was a spike train with constant spike info, and the other had a linearly increasing spike info. The offset/scaling circuit now employs λ = 0.5 and s0 = 40 Hz.
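    The factor-of-1/2 scaling stage in both cascades follows directly from the fitted translator constants: in the composition of Fig. 19, the product (or ratio) of s1 and s2 appears in the exponent with the factor cexp·cln·λ, which must equal 1 for a true product. A quick check with the constants of Eqs. (15) and (18):

```python
# Required mid-stage scaling of Fig. 19: c_exp·c_ln·λ = 1 for the cascade
# exp-stage(λ·(log-stage(s1) ± log-stage(s2))) to return a true product/ratio.
c_ln, c_exp = 10.73, 0.1862          # fitted constants, Eqs. (15) and (18)
lam = 1.0 / (c_exp * c_ln)           # comes out at about 0.50
```

    This is exactly the λ = 0.5 row of Table I used for the multiplier and divider circuits.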

    VI. PROCESSING NOISY SIGNALS

    We now study the performance of our SNN in the presence of noise in the input signals. We generated noisy signals with frequency components no bigger than 200 Hz (the maximum frequency we encounter in the listed SNNs), by generating random samples every 5 ms, and interpolating between them using piece-wise cubic Hermite interpolation (pchip) for the time-step of 0.01 ms. An example of a slowly-varying noisy signal with standard deviation of 0.1 is shown with its Discrete Fourier Transform (DFT) [11] in Fig. 23. We characterize the noisy signal with a Coefficient of Variation (CV), given by cv = σ/μ, where σ is the standard deviation and μ is the mean of the analog current used to generate the input spike info.
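    The band-limited noise generator is easy to reproduce. The sketch below mimics it with NumPy only: knot values are drawn every 5 ms and joined by a piecewise cubic Hermite on the simulation grid. MATLAB's pchip limits the knot tangents to preserve monotonicity; the plain finite-difference tangents used here are a simplified stand-in for illustration.

```python
import numpy as np

# Band-limited noisy current: random knots every 5 ms, cubic Hermite
# interpolation down to the simulation time-step (cf. Sec. VI).
def noisy_current(mean, cv, T=0.5, knot_dt=5e-3, dt=1e-5, seed=0):
    rng = np.random.default_rng(seed)
    knots_t = np.arange(0.0, T + knot_dt, knot_dt)
    knots_y = rng.normal(mean, cv * mean, size=knots_t.size)
    m = np.gradient(knots_y, knot_dt)           # knot tangents (not pchip's)
    t = np.arange(0.0, T, dt)
    i = np.minimum((t / knot_dt).astype(int), knots_t.size - 2)
    u = (t - knots_t[i]) / knot_dt              # local coordinate in [0, 1)
    h00 = 2*u**3 - 3*u**2 + 1
    h10 = u**3 - 2*u**2 + u
    h01 = -2*u**3 + 3*u**2
    h11 = u**3 - u**2
    return (h00 * knots_y[i] + h10 * knot_dt * m[i]
            + h01 * knots_y[i + 1] + h11 * knot_dt * m[i + 1])
```

    Sampling every 5 ms keeps the spectrum essentially band-limited, matching the DFT of Fig. 23; cv scales the knot standard deviation so that cv = σ/μ holds for the generated current.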

    Fig. 24: Variation in the average output spike info when the multiplier SNN is fed with a spike stream generated using a noisy analog current (CV: 0.1), as compared with the noise-less scenario.

    We generated a slowly-varying noisy current with a CV of 0.1, provided it as an input to the SNNs for determining the square of the input spike info, and simulated the response of the multiplier SNN for 100 experiments. The mean value of the output spike info with the noisy input waveform closely follows that of the expected spike trains without noise to a high degree of accuracy, as shown in Fig. 24. The SNN can tolerate noisy inputs with CV less than 0.2.

    VII. APPLICATION TO SIGNAL ESTIMATION

    As an exemplary application of these SNN circuits, we demonstrate the use of the multiplier SNN for range detection. We are inspired by the signal processing involved in echolocation, which is widely used by a variety of birds and animals to estimate the distance to prey and obstacles. For instance, it is known that echolocating bats send out time varying sound signals which are reflected by prey or obstacles, and there are specific neural circuits that can detect the differences in the interference patterns caused by the emitted and received sounds [12]. However, it is not clear how spiking neural circuits can determine the differences in interference patterns between the emitted and received signal in real time.

    Fig. 25: A multiplier circuit that takes as its inputs a time varying spike train and its delayed version can produce an output spike train whose maximum frequency will encode the delay τ.

    We propose a simple model that uses time varying spike frequency signals and the multiplier SNN circuit to detect the time delay between the emitted and received signal by real-time signal processing. The multiplier SNN is fed by a time varying spike signal and a time delayed version of the same signal, as shown in Fig. 25. We assume that s1(t) is responsible for the creation of the emitted signal (for example, the neuron stimulating the larynx) and s2(t) is the response of the neuron at the detection end (for instance, the neuron in the ear). We assume that there is some jitter and noise in the channel, hence the reflected signal is not an exact replica of the emitted signal. We monitor the peak spike rate at the output of the multiplier for various values of the time delay between the emitted and received spike trains. The multiplication of the emitted signal and received signal for various time delays can be seen in Fig. 26. As can be seen in Fig. 27, the maximum spike frequency at the output of the multiplier SNN is a strong function of the time delay. So a network containing neurons with different axonal delays at the transmitter and a multiplier SNN could be used to determine the distance to the reflecting object in real time with spike based processing.

  • Fig. 26: The emitted signal, s1, and the signal received after a delay of 3 s, s′2, are shown in (a). The products of s1 and s2 for various values of τ are presented in (b). As the time delay decreases, the overlap of the signals increases, and the spike frequency of the product increases.

    Fig. 27: The maximum spike frequency at the output of the multiplier depends strongly on the time delay between the emitted and reflected signal.

    VIII. CONCLUSION

    We presented a methodology to perform spike based arithmetic computations in neural circuits with spike-time dependent adaptive synapses based on rate coding. Our circuits perform the basic arithmetic operations on analog signals which result in time dependent spike rates in the range of 40 – 140 Hz when fed to spiking neurons. The synapses in our circuits obey simple, local and spike time dependent adaptation rules. The building blocks we have designed in this paper can perform the fundamental operations – addition, subtraction, multiplication and division – as well as other non-linear transformations such as exponentiation and logarithm for such time dependent signals in real-time. We have demonstrated that these circuits can reliably compute even in the presence of highly noisy signals. We have illustrated the power of these circuits to perform complex computations based on the frequency of spike trains in real-time, and thus they can be used in a wide variety of hardware and software implementations for navigation, control and computing. Though our circuits use the AEIF model of the neuron, the design methodology outlined can be readily used to design other SNN circuits that use other spiking neuron models.

    ACKNOWLEDGMENT

    This work was partially supported by the Department of Science and Technology, Government of India.

    APPENDIX

    A. Parameters used for simulation

    1) Adaptive Exponential Integrate-and-Fire neuron model: Regular spiking AEIF neuron parameters used in this study are Cm = 200 pF; gL = 10 nS; EL = -70 mV; VT = -50 mV; ΔT = 2 mV; a = 2 nS; b = 0 pA; τw = 30 ms and Vr = -58 mV.

    REFERENCES

    [1] Silver RA, Neuronal Arithmetic, 2010, Nature Reviews Neuroscience, 11, 474-489.
    [2] Gabbiani F, Krapp HG, Koch C, Laurent G, Multiplication and stimulus invariance in a looming-sensitive neuron, 2004, J Physiol Paris, 98(1-3):19-34.
    [3] Pena JL, Konishi M, Auditory spatial receptive fields created by multiplication, 2001, Science, 292(5515):249-52.
    [4] Maass W, Networks of Spiking Neurons: The Third Generation of Neural Network Models, 1997, Neural Networks, Vol. 10, No. 9, pp. 1659-1671.
    [5] Srinivasan MV, Bernard GD, A Proposed Mechanism for Multiplication of Neural Signals, 1976, Biol. Cybernetics, 21, 227-236.
    [6] Tal D, Schwartz EL, Computing with the leaky integrate-and-fire neuron: logarithmic computation and multiplication, 1997, Neural Comp, 9:305-318.
    [7] Nezis P, van Rossum MCW, Accurate multiplication with noisy spiking neurons, 2011, J. Neural Eng, 8, 034005.
    [8] Koch C, Segev I, The role of single neurons in information processing, 2000, Nature Neuroscience, 3, 1171-1177.
    [9] Brette R, Gerstner W, Adaptive Exponential Integrate-and-Fire Model as an Effective Description of Neuronal Activity, 2005, J Neurophysiol, 94:3637-3642.
    [10] Shimazaki H, Shinomoto S, Kernel bandwidth optimization in spike rate estimation, 2010, J Comput Neurosci, 29:171-182.
    [11] Smith S, The Scientist and Engineer's Guide to Digital Signal Processing, 1999, California Technical Publishing, 2nd edition.
    [12] Ulanovsky N, Moss C, What the bat's voice tells the bat's brain, 2008, PNAS USA, 105(25):8491-8498.