Global asymptotic stability of Cohen–Grossberg neural network with continuously distributed delays



Physics Letters A 342 (2005) 331–340

www.elsevier.com/locate/pla

Global asymptotic stability of Cohen–Grossberg neural network with continuously distributed delays ✩

Li Wan, Jianhua Sun∗

Department of Mathematics, Nanjing University, Nanjing 210008, China

Received 9 September 2004; received in revised form 21 April 2005; accepted 10 May 2005

Available online 23 May 2005

Communicated by A.P. Fordy

Abstract

The convergence dynamical behaviors of Cohen–Grossberg neural network with continuously distributed delays are discussed. By using Brouwer's fixed point theorem, matrix theory and analysis techniques such as the Gronwall inequality, some new sufficient conditions guaranteeing the existence and uniqueness of an equilibrium point and its global asymptotic stability are obtained. An example is given to illustrate the theoretical results. © 2005 Elsevier B.V. All rights reserved.

Keywords: Cohen–Grossberg neural network; Continuously distributed delay; Global asymptotic stability

1. Introduction

As is well known, Cohen–Grossberg neural networks (CGNN) include many models from different research fields, such as neurobiology, population biology and evolutionary theory (see [11]), in particular the well-known Hopfield neural network (HNN). Since the stability of neural networks plays an important role in their potential applications, such as associative content-addressable memories, pattern recognition and optimization, it is both significant and necessary to investigate the stability of CGNN.

The early studies on the stability of HNN and CGNN only considered models without time delays, i.e., the updating and propagation are assumed to occur instantaneously (see [11,16–18]). Later, discrete-time delays were introduced into communication channels, since significant time delays are ubiquitous both in neural processing and

✩ This work was supported in part by the National Natural Science Foundation of China under Grant 10171044, in part by the Natural Science Foundation of Jiangsu Province under Grant BK2001024, and in part by the Foundation for University Key Teachers of the Ministry of Education of China.

* Corresponding author. E-mail addresses: [email protected] (L. Wan), [email protected] (J. Sun).

0375-9601/$ – see front matter © 2005 Elsevier B.V. All rights reserved. doi:10.1016/j.physleta.2005.05.026


in signal transmission. Articles considering HNN and CGNN with discrete delays include, for example, [1–10,12,14,19,21–24,28,29] and [25–27], and references therein.

Although the use of discrete-time delays in models of delayed feedback provides a good approximation in simple circuits consisting of a small number of cells, as pointed out by [31,32], neural networks usually have a spatial extent due to the presence of a multitude of parallel pathways with a variety of axon sizes and lengths. Thus, there will be a distribution of conduction velocities along these pathways, and the distribution of propagation is not instantaneous and cannot be modeled with discrete delays. A more appropriate description is to incorporate continuously distributed delays. Articles investigating HNN models with continuously distributed delays include, for example, [10,14,20,23,29–32], and references therein.

To the best of our knowledge, there are very few results (if any) on the stability of CGNN models with continuously distributed delays. So, in this Letter, we intend to study the global asymptotic stability of CGNN with continuously distributed delays in a very general setting. By using Brouwer's fixed point theorem, under quite general conditions, we prove the existence of an equilibrium point of CGNN, and by applying matrix theory and analysis techniques such as the Gronwall inequality, we provide some new sufficient conditions to guarantee the uniqueness and global asymptotic stability of the equilibrium point.

The rest of the Letter is organized as follows. We introduce in Section 2 the continuously distributed delayed CGNN model with some necessary notations, assumptions and a lemma which will be useful later. Our main results are given in Section 3. An example is given in Section 4 to demonstrate the main results. Finally, conclusions are drawn in Section 5.

2. Model description and preliminaries

Consider the Cohen–Grossberg neural network (CGNN) with continuously distributed delays

$$\dot{x}_i(t) = -a_i\big(x_i(t)\big)\Bigg[b_i\big(x_i(t)\big) - \sum_{j=1}^{n}\int_{-\infty}^{t} t_{ij}\,k_{ij}(t-s)\,s_j\big(x_j(s)\big)\,ds + J_i\Bigg], \qquad x_i(s) = \varphi_i(s),\quad s \le 0, \tag{1}$$

for $1 \le i \le n$ and $t \ge 0$. In the above model, $n \ge 2$ is the number of neurons in the network, $x_i(t)$ is the state variable of the $i$th neuron at time $t$, $s_j(x_j(t))$ denotes the output of the $j$th unit at time $t$, $a_i$ represents an amplification function, $b_i$ can include a constant term indicating a fixed input to the network, $t_{ij}$ weights the strength of the $j$th unit on the $i$th unit at time $t$, and $J_i$ is the constant input from outside of the network. The kernels $k_{ij}$, $i, j = 1, 2, \ldots, n$, are real-valued non-negative continuous functions defined on $[0, +\infty)$ and satisfy $\int_{0}^{+\infty} k_{ij}(t)\,dt = 1$.
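As a concrete illustration of how a system of the form (1) behaves, the distributed-delay integral can be truncated to a finite history window and integrated with a simple Euler scheme. The sketch below is not from the paper: the amplification functions $a_i$, the rates $b_i(x) = x$, the activation $\tanh$, the weights $t_{ij}$, the inputs $J_i$ and the step sizes are all illustrative choices, picked only so that the model assumptions (positive bounded $a_i$, $b_i' \ge \gamma_i > 0$, bounded Lipschitz activations, kernel integrating to 1) hold.

```python
import numpy as np

# Euler integration of a system of form (1) with the delay kernel truncated
# to a finite window. All model functions are illustrative assumptions:
# a_i(x) = 2 + sin(x) in [1, 3], b_i(x) = x (so gamma_i = 1), s_j = tanh
# (Lipschitz constant 1, bounded by 1), kernel k with integral 1.
n = 2
dt, window = 0.01, 10.0              # step size and kernel truncation length
steps_hist = int(window / dt)

t_ij = np.array([[-0.25, -0.125],    # hypothetical connection weights
                 [-0.0625, 1/12]])
J = np.array([0.1, -0.2])            # hypothetical external inputs

def a(x):  return 2.0 + np.sin(x)             # amplification functions
def b(x):  return x                           # rate functions
def k(u):  return 2.0 / (np.pi * (1.0 + u**2))  # kernel, integral over [0,inf) is 1

hist = np.zeros((steps_hist, n))     # zero initial history; hist[0] is "now"
ages = dt * np.arange(steps_hist)    # kernel ages t - s
weights = k(ages) * dt               # quadrature weights for the delay integral

for _ in range(5000):
    x = hist[0]
    # distributed-delay term: sum_j t_ij * int k(t-s) s_j(x_j(s)) ds
    conv = (weights[:, None] * np.tanh(hist)).sum(axis=0)
    delayed = t_ij @ conv
    dx = -a(x) * (b(x) - delayed + J)
    hist = np.vstack(([x + dt * dx], hist[:-1]))

print(hist[0])   # current state; trajectories remain bounded
```

With these weights the stability condition of Section 3 holds, so the trajectory settles toward the unique equilibrium rather than oscillating.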

Throughout this Letter, for system (1), we assume the following:

(A1) $a_i$ is bounded, positive and continuous; furthermore, $0 < \underline{a}_i \le a_i(x) \le \overline{a}_i$ for $i = 1, 2, \ldots, n$.

(A2) $b_i \in C^1(\mathbb{R}, \mathbb{R})$ and $\gamma_i = \inf_{x \in \mathbb{R}} b_i'(x) > 0$.

In general, the activation functions $s_i(x)$, $i = 1, 2, \ldots, n$, are required to be sigmoid, which implies that they are monotone and smooth, i.e.,

(a) $s_i \in C^1(\mathbb{R}, \mathbb{R})$, $s_i'(x) > 0$ for $x \in \mathbb{R}$ and $s_i'(0) = \sup_{x \in \mathbb{R}} s_i'(x) > 0$ for $i = 1, 2, \ldots, n$;

(b) $s_i(0) = 0$ and $\lim_{x \to \pm\infty} s_i(x) = \pm 1$.

However, we learn from [12] that, since amplifiers having neither monotonically increasing nor continuously differentiable input–output functions are frequently adopted in many electronic circuits, such as the piecewise linear function, non-monotonic and not necessarily smooth functions might be better candidates for neuron activation


functions in designing and implementing an artificial neural network for certain purposes. Therefore, we can make a modification of (a) and (b) on $s_i(x)$, $i = 1, 2, \ldots, n$, and assume the following:

(A3) $s_i : \mathbb{R} \to \mathbb{R}$ is globally Lipschitz with Lipschitz constant $L_i$ for $i = 1, 2, \ldots, n$.

(A4) For some constant $M_i > 0$, $|s_i(x)| \le M_i$ for all $x \in \mathbb{R}$, $i = 1, 2, \ldots, n$.

For convenience, we introduce some notation. For two $n \times n$ matrices $A$ and $B$, $A > B$ ($A \ge B$) means that each pair of corresponding elements of $A$ and $B$ satisfies the inequality '$>$' ('$\ge$'). $A \ge 0$ means $A$ is a non-negative matrix. For $x \in \mathbb{R}^n$, we define $[x(t)]^+ = (|x_1(t)|, \ldots, |x_n(t)|)^T$. For $x \in C(\mathbb{R}, \mathbb{R}^n)$, $[x(t)]^+_\infty = (\|x_1(t)\|_\infty, \ldots, \|x_n(t)\|_\infty)^T$, where $\|x_i(t)\|_\infty = \sup_{-\infty < s \le 0} |x_i(t+s)|$, $i = 1, \ldots, n$.

Definition 2.1. The set $S \subset C(\mathbb{R}, \mathbb{R}^n)$ is called a positive invariant set of (1) if for any initial value $\varphi \in S$, the solution satisfies $x(t) \in S$ for $t \ge 0$.

In addition, we need the following lemma (see [15]).

Lemma 2.2. If $M \ge 0$ and $\rho(M) < 1$, then $(I - M)^{-1} \ge 0$, where $I$ denotes the identity matrix and $\rho(M)$ the spectral radius of a square matrix $M$.
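Lemma 2.2 can be checked numerically on a small example; the matrix below is an assumed illustration, not from the paper. The underlying reason is the Neumann series $(I - M)^{-1} = \sum_{k \ge 0} M^k$, which converges when $\rho(M) < 1$ and whose terms are all non-negative when $M \ge 0$.

```python
import numpy as np

# Assumed example matrix: elementwise non-negative with spectral radius < 1.
M = np.array([[0.2, 0.3],
              [0.1, 0.4]])
rho = max(abs(np.linalg.eigvals(M)))   # here rho = 0.5

# Lemma 2.2: (I - M)^{-1} = sum_{k>=0} M^k, a Neumann series of non-negative
# terms, so every entry of the inverse is non-negative.
inv = np.linalg.inv(np.eye(2) - M)
print(rho < 1, (inv >= 0).all())
```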

3. Main results

We first show that system (1) has an equilibrium.

Theorem 3.1. If (A1), (A2) and (A4) hold, then for every input $J = (J_1, \ldots, J_n)^T$, system (1) has an equilibrium.

Proof. Let the input $J = (J_1, \ldots, J_n)^T$ be given. By (A1), we know that system (1) has an equilibrium $x^*$ if and only if $x^* = (x_1^*, \ldots, x_n^*)^T$ is a solution of the equations, for $i = 1, \ldots, n$,

$$b_i(x_i^*) - \sum_{j=1}^{n}\int_{-\infty}^{t} t_{ij}\,k_{ij}(t-s)\,s_j(x_j^*)\,ds + J_i = 0,$$

that is,

$$b_i(x_i^*) - \sum_{j=1}^{n} t_{ij}\,s_j(x_j^*) + J_i = 0. \tag{2}$$

Consider the following system of equations:

$$b_i(x_i) = \sum_{j=1}^{n} t_{ij}\,s_j(x_j) - J_i.$$

By (A4), we have

$$\Bigg|\sum_{j=1}^{n} t_{ij}\,s_j(x_j) - J_i\Bigg| \le \sum_{j=1}^{n} |t_{ij}|\,M_j + |J_i| = P_i.$$


By (A2), $b_i$ is invertible. Therefore, we have, for $i = 1, \ldots, n$,

$$\Bigg|b_i^{-1}\Bigg(\sum_{j=1}^{n} t_{ij}\,s_j(x_j) - J_i\Bigg)\Bigg| \le \max\big\{\big|b_i^{-1}(s)\big| : s \in [-P_i, P_i]\big\} = D_i.$$

Denote $h = (h_1, \ldots, h_n)^T = (b_1^{-1}, \ldots, b_n^{-1})^T$. It can easily be seen that $h$ maps a bounded set $D = D_1 \times \cdots \times D_n$ to itself. It follows from Brouwer's fixed point theorem (Theorem 3.2 of [13]) that $h(x)$ has a fixed point $x^*$ in $D$. So the result holds true and the proof is completed. □

Theorem 3.2. In addition to (A1)–(A4), assume that

(A5) $\rho(AM) < 1$, where $A = \mathrm{diag}\big(\overline{a}_1/\underline{a}_1, \ldots, \overline{a}_n/\underline{a}_n\big)$, $M = (m_{ij})_{n \times n}$, $m_{ij} = \dfrac{|t_{ij}|\,L_j}{\gamma_i}$,

holds. Then the equilibrium point of system (1) is unique.

Proof. Let $x^*, y^* \in \mathbb{R}^n$ be equilibrium points of system (1), and let $u = y^* - x^*$, i.e., $u_i = y_i^* - x_i^*$, $i = 1, \ldots, n$. By (1), we have

$$b_i(y_i^*) - b_i(x_i^*) = b_i'(\xi_i)\big(y_i^* - x_i^*\big) = \sum_{j=1}^{n} t_{ij}\big(s_j(y_j^*) - s_j(x_j^*)\big),$$

where $\xi_i$ lies between $x_i^*$ and $y_i^*$, $i = 1, \ldots, n$. Hence

$$|u_i| = \frac{1}{b_i'(\xi_i)}\Bigg|\sum_{j=1}^{n} t_{ij}\big(s_j(y_j^*) - s_j(x_j^*)\big)\Bigg| \le \frac{\overline{a}_i}{\underline{a}_i}\,\frac{1}{\gamma_i}\sum_{j=1}^{n} |t_{ij}|\,L_j\,|u_j|,$$

that is,

$$|u| \le AM|u|.$$

It follows from Theorem 8.3.2 of [15] that if $u \ne 0$, then $\rho(AM) \ge 1$, which contradicts $\rho(AM) < 1$. So $u = 0$, and the proof is completed. □

Let $x^* = (x_1^*, \ldots, x_n^*)^T$ be an equilibrium point of (1) and let $y_i(t) = x_i(t) - x_i^*$, $i = 1, \ldots, n$. From (2) and replacing $x_i(t)$ in (1) by $y_i(t) + x_i^*$, it is not difficult to obtain

$$\dot{y}_i(t) = -\alpha_i\big(y_i(t)\big)\Bigg[\beta_i\big(y_i(t)\big) - \sum_{j=1}^{n}\int_{-\infty}^{t} t_{ij}\,k_{ij}(t-s)\,f_j\big(y_j(s)\big)\,ds\Bigg], \qquad y_i(s) = \psi_i(s),\quad s \le 0, \tag{3}$$

for $i = 1, \ldots, n$, in which $\alpha_i(y_i(t)) = a_i(y_i(t) + x_i^*)$, $\beta_i(y_i(t)) = b_i(y_i(t) + x_i^*) - b_i(x_i^*)$ and $f_j(y_j(t)) = s_j(y_j(t) + x_j^*) - s_j(x_j^*)$.

It is clear that the trivial solution $y = 0$ of (3) is uniformly stable (US) and globally asymptotically stable (GAS) if and only if $x^*$ is US and GAS for (1), respectively. Furthermore, if $x^*$ is GAS, then it is unique.

Theorem 3.3. If (A1)–(A5) hold, then the equilibrium point $O$ of system (3) is US.

Proof. For any $\varepsilon > 0$, put $P = (I - AM)^{-1}E\varepsilon$, where $E = (1, \ldots, 1)^T$. From Lemma 2.2 and (A5), we have $(I - AM)^{-1} \ge 0$ and $P > 0$. If we can prove that the set

$$S = \big\{\psi \in C\big((-\infty, 0], \mathbb{R}^n\big) : [\psi]^+ \le P\big\}$$


is a positive invariant set of (3), then it follows from the meaning of US that the result holds true. In order to prove that the set $S$ is a positive invariant set of (3), we have to prove that


$$[\psi]^+_\infty \le P \;\Rightarrow\; \big[y(t)\big]^+ \le P \tag{4}$$

for any $t \ge 0$ and $\psi \in C((-\infty, 0], \mathbb{R}^n)$. Therefore, we first show that for $t \ge 0$ and any given constant $c > 1$,

$$[\psi]^+_\infty < cP \;\Rightarrow\; \big[y(t)\big]^+ < cP. \tag{5}$$

In the following computation, we will use the relation $(I - AM)P = E\varepsilon$, that is,

$$\sum_{j=1}^{n} \frac{\overline{a}_i}{\underline{a}_i}\,m_{ij}\,p_j + \varepsilon = p_i, \qquad i = 1, \ldots, n.$$

Suppose (5) is not true. Then there exist some $i$ and $t_1 > 0$ such that

$$\big|y_i(t_1)\big| = cp_i, \qquad \big|y_i(t)\big| < cp_i \ \ (t < t_1), \qquad \big[y(t)\big]^+ \le cP \ \ (0 \le t \le t_1), \tag{6}$$

where $p_i$ is the $i$th component of the vector $P$. From (3), we have

$$\dot{y}_i(t) = -\alpha_i\big(y_i(t)\big)\,b_i'(\xi)\,y_i(t) + \alpha_i\big(y_i(t)\big)\sum_{j=1}^{n}\int_{-\infty}^{t} t_{ij}\,k_{ij}(t-s)\,f_j\big(y_j(s)\big)\,ds,$$

in which $\xi$ lies between $x_i^*$ and $y_i(t) + x_i^*$. For any

$$t \in S_i^1 = \{t : y_i(t) > 0\} \cap (0, t_1],$$

we have

$$\dot{y}_i(t) \le -\alpha_i\big(y_i(t)\big)\,b_i'(\xi)\,y_i(t) + \overline{a}_i\sum_{j=1}^{n}\int_{-\infty}^{t} |t_{ij}|\,k_{ij}(t-s)\,L_j\big|y_j(s)\big|\,ds$$
$$\le -\underline{a}_i\gamma_i\,y_i(t) + \overline{a}_i\sum_{j=1}^{n}\int_{-\infty}^{t} |t_{ij}|\,k_{ij}(t-s)\,L_j\big|y_j(s)\big|\,ds \le -\underline{a}_i\gamma_i\,y_i(t) + c\,\overline{a}_i\sum_{j=1}^{n} |t_{ij}|\,L_j\,p_j.$$

Thus, by the Gronwall inequality, we obtain

$$y_i(t) \le e^{-\underline{a}_i\gamma_i t}\,y_i(0) + c\,\overline{a}_i\sum_{j=1}^{n} |t_{ij}|\,L_j\,p_j\int_{0}^{t} e^{-\underline{a}_i\gamma_i(t-s)}\,ds$$
$$\le e^{-\underline{a}_i\gamma_i t}\,cp_i + c\,\frac{\overline{a}_i}{\underline{a}_i}\sum_{j=1}^{n}\frac{|t_{ij}|\,L_j}{\gamma_i}\,p_j\big(1 - e^{-\underline{a}_i\gamma_i t}\big)$$
$$= e^{-\underline{a}_i\gamma_i t}\Bigg(cp_i - c\,\frac{\overline{a}_i}{\underline{a}_i}\sum_{j=1}^{n} m_{ij}\,p_j\Bigg) + c\,\frac{\overline{a}_i}{\underline{a}_i}\sum_{j=1}^{n} m_{ij}\,p_j = e^{-\underline{a}_i\gamma_i t}\,c\varepsilon + cp_i - c\varepsilon$$
$$< c\varepsilon + cp_i - c\varepsilon = cp_i.$$


Similarly, for any

$$t \in S_i^2 = \{t : y_i(t) < 0\} \cap (0, t_1],$$

we have

$$\dot{y}_i(t) \ge -\overline{a}_i\,b_i'(\xi)\,y_i(t) - \overline{a}_i\sum_{j=1}^{n}\int_{-\infty}^{t} |t_{ij}|\,k_{ij}(t-s)\,L_j\big|y_j(s)\big|\,ds \ge -\underline{a}_i\gamma_i\,y_i(t) - c\,\overline{a}_i\sum_{j=1}^{n} |t_{ij}|\,L_j\,p_j.$$

Thus, by the Gronwall inequality, we obtain

$$y_i(t) \ge e^{-\underline{a}_i\gamma_i t}\,y_i(0) - c\,\overline{a}_i\sum_{j=1}^{n} |t_{ij}|\,L_j\,p_j\int_{0}^{t} e^{-\underline{a}_i\gamma_i(t-s)}\,ds$$
$$\ge -e^{-\underline{a}_i\gamma_i t}\,cp_i - c\,\frac{\overline{a}_i}{\underline{a}_i\gamma_i}\sum_{j=1}^{n} |t_{ij}|\,L_j\,p_j\big(1 - e^{-\underline{a}_i\gamma_i t}\big)$$
$$= -e^{-\underline{a}_i\gamma_i t}\Bigg(cp_i - c\,\frac{\overline{a}_i}{\underline{a}_i\gamma_i}\sum_{j=1}^{n} |t_{ij}|\,L_j\,p_j\Bigg) - c\,\frac{\overline{a}_i}{\underline{a}_i\gamma_i}\sum_{j=1}^{n} |t_{ij}|\,L_j\,p_j = -e^{-\underline{a}_i\gamma_i t}\,c\varepsilon - cp_i + c\varepsilon$$
$$> -c\varepsilon - cp_i + c\varepsilon = -cp_i.$$

Whether $t_1$ belongs to $S_i^1$ or $S_i^2$, $|y_i(t_1)| < cp_i$ holds true, which contradicts the first equality in (6). Obviously, $y_i(t_1) = 0$ contradicts the first equality in (6), too. So (5) holds. Letting $c \to 1$, (4) holds. The proof is completed. □

Corollary 3.4. If (A1)–(A5) hold, then the solutions of (3) are uniformly bounded.

Proof. It is not difficult to see that the result follows from the proof of Theorem 3.3 and (5). □

Theorem 3.5. If (A1)–(A5) hold, then the equilibrium point $O$ of system (3) is globally attractive.

Proof. It is sufficient to prove that for any given $\psi \in C((-\infty, 0], \mathbb{R}^n)$,

$$\limsup_{t \to +\infty}\big[y(t)\big]^+ = 0, \tag{7}$$

where $\limsup_{t\to+\infty}[y(t)]^+ = \big(\limsup_{t\to+\infty}|y_1(t)|, \ldots, \limsup_{t\to+\infty}|y_n(t)|\big)^T$. From Corollary 3.4, it follows that there exists a non-negative constant vector $\lambda = (\lambda_1, \ldots, \lambda_n)^T$ such that

$$\limsup_{t \to +\infty}\big[y(t)\big]^+ = \lambda. \tag{8}$$

Therefore, for a sufficiently small positive constant $\varepsilon$, there exists some $T_1 > 0$ such that for any $t \ge T_1$,

$$\big[y(t)\big]^+ \le (1 + \varepsilon)\lambda. \tag{9}$$

From Corollary 3.4 again, we know that for all $t \in \mathbb{R}$,

$$\big[y(t)\big]^+ \le cP, \tag{10}$$


where $c$, $P$ are the same as in the proof of Theorem 3.3. For the above $\varepsilon$, $c$ and $P$, it follows from $\int_{0}^{+\infty} k_{ij}(s)\,ds = 1$, $i, j = 1, \ldots, n$, that there exists some $T_2 > 0$ such that

$$\sum_{j=1}^{n} \frac{\overline{a}_i}{\underline{a}_i}\,m_{ij}\,cp_j\int_{T_2}^{+\infty} k_{ij}(s)\,ds \le \varepsilon, \qquad i = 1, \ldots, n. \tag{11}$$

For any

$$t \in S_i^1 = \{t : y_i(t) > 0\} \cap (T_1 + T_2, +\infty),$$

we have, using (9), (10) and (11),

$$\dot{y}_i(t) \le -\alpha_i\big(y_i(t)\big)\,b_i'(\xi)\,y_i(t) + \overline{a}_i\sum_{j=1}^{n}\int_{-\infty}^{t} |t_{ij}|\,k_{ij}(t-s)\,L_j\big|y_j(s)\big|\,ds$$
$$\le -\underline{a}_i\gamma_i\,y_i(t) + \overline{a}_i\sum_{j=1}^{n}\int_{-\infty}^{t} |t_{ij}|\,k_{ij}(t-s)\,L_j\big|y_j(s)\big|\,ds$$
$$= -\underline{a}_i\gamma_i\,y_i(t) + \overline{a}_i\sum_{j=1}^{n}\int_{-\infty}^{t-T_2} |t_{ij}|\,k_{ij}(t-s)\,L_j\big|y_j(s)\big|\,ds + \overline{a}_i\sum_{j=1}^{n}\int_{t-T_2}^{t} |t_{ij}|\,k_{ij}(t-s)\,L_j\big|y_j(s)\big|\,ds$$
$$\le -\underline{a}_i\gamma_i\,y_i(t) + \overline{a}_i\sum_{j=1}^{n}\int_{-\infty}^{t-T_2} |t_{ij}|\,k_{ij}(t-s)\,L_j\,cp_j\,ds + \overline{a}_i\sum_{j=1}^{n}\int_{t-T_2}^{t} |t_{ij}|\,k_{ij}(t-s)\,L_j(1+\varepsilon)\lambda_j\,ds$$
$$\le -\underline{a}_i\gamma_i\,y_i(t) + \overline{a}_i\gamma_i\sum_{j=1}^{n}\int_{T_2}^{+\infty} m_{ij}\,k_{ij}(s)\,cp_j\,ds + \overline{a}_i\gamma_i\sum_{j=1}^{n} m_{ij}(1+\varepsilon)\lambda_j$$
$$\le -\underline{a}_i\gamma_i\,y_i(t) + \overline{a}_i\gamma_i\Bigg(\varepsilon + \sum_{j=1}^{n} m_{ij}(1+\varepsilon)\lambda_j\Bigg),$$

in which $\xi$ lies between $x_i^*$ and $y_i(t) + x_i^*$. Thus, by the Gronwall inequality, we obtain

$$y_i(t) \le e^{-\underline{a}_i\gamma_i t}\,y_i(0) + \frac{\overline{a}_i}{\underline{a}_i}\Bigg(\varepsilon + \sum_{j=1}^{n} m_{ij}(1+\varepsilon)\lambda_j\Bigg)\big(1 - e^{-\underline{a}_i\gamma_i t}\big).$$

Similarly, for any

$$t \in S_i^2 = \{t : y_i(t) < 0\} \cap (T_1 + T_2, +\infty),$$

we have, using (9), (10) and (11) again,

$$\dot{y}_i(t) \ge -\alpha_i\big(y_i(t)\big)\,b_i'(\xi)\,y_i(t) - \overline{a}_i\sum_{j=1}^{n}\int_{-\infty}^{t} |t_{ij}|\,k_{ij}(t-s)\,L_j\big|y_j(s)\big|\,ds$$
$$\ge -\underline{a}_i\gamma_i\,y_i(t) - \overline{a}_i\gamma_i\Bigg(\varepsilon + \sum_{j=1}^{n} m_{ij}(1+\varepsilon)\lambda_j\Bigg).$$


Thus, by the Gronwall inequality, we obtain

$$y_i(t) \ge e^{-\underline{a}_i\gamma_i t}\,y_i(0) - \frac{\overline{a}_i}{\underline{a}_i}\Bigg(\varepsilon + \sum_{j=1}^{n} m_{ij}(1+\varepsilon)\lambda_j\Bigg)\big(1 - e^{-\underline{a}_i\gamma_i t}\big).$$

From the definition of superior limit and (8), there must exist a sequence $\{t_n\}_{n=1}^{\infty}$ in $(T_1 + T_2, +\infty)$ such that

$$\lim_{t_n \to +\infty}\big|y_i(t_n)\big| = \lambda_i, \qquad i = 1, \ldots, n. \tag{12}$$

If there is a subsequence $\{t_{n_k}\}_{k=1}^{\infty}$ of $\{t_n\}_{n=1}^{\infty}$ satisfying (12) in

$$S_i^3 = \{t : y_i(t) = 0\} \cap (T_1 + T_2, +\infty),$$

then $\lambda = 0$ and (7) holds. If there is such a subsequence $\{t_{n_k}\}_{k=1}^{\infty}$ in $S_i^1$ or $S_i^2$, then letting $t_{n_k} \to +\infty$ and $\varepsilon \to 0$, we always have

$$\lambda_i \le \frac{\overline{a}_i}{\underline{a}_i}\sum_{j=1}^{n} m_{ij}\,\lambda_j, \qquad i = 1, \ldots, n,$$

that is, $\lambda \le AM\lambda$. Assume $\lambda \ne 0$; by Theorem 8.3.2 of [15] and $\lambda \ge 0$, we know $\rho(AM) \ge 1$, which contradicts (A5). So $\lambda$ is a zero vector and (7) holds. The proof is completed. □

From Theorems 3.3 and 3.5, it is easy to obtain the following theorem.

Theorem 3.6. If (A1)–(A5) hold, then the equilibrium point $O$ of system (3), or equivalently the unique equilibrium point $x^*$ of system (1), is GAS.

Notice that $\rho(A) \le \|A\|$ for any $A \in \mathbb{R}^{n \times n}$, in which $\|\cdot\|$ is an arbitrary matrix norm. Moreover, for any matrix norm and any non-singular matrix $Q$, a matrix norm $\|A\|_Q$ can be given by $\|A\|_Q = \|Q^{-1}AQ\|$. For convenience of calculation, one generally takes $Q = \mathrm{diag}\{q_1, \ldots, q_n\} > 0$. Therefore, corresponding to the widely applied matrix norms, namely the row norm, the column norm and the Frobenius norm, we can obtain the following sufficient conditions guaranteeing $\|A\|_Q < 1$, respectively:

(1) $\sum_{j=1}^{n} \frac{q_i}{q_j}\,|a_{ij}| < 1$ for $1 \le i \le n$;

(2) $\sum_{i=1}^{n} \frac{q_i}{q_j}\,|a_{ij}| < 1$ for $1 \le j \le n$;

(3) $\sum_{i=1}^{n}\sum_{j=1}^{n} \Big(\frac{q_i}{q_j}\,|a_{ij}|\Big)^2 < 1$.

From the above statements, we have the following.

Corollary 3.7. Let (A1)–(A4) hold. Then the unique equilibrium point $x^*$ of system (1) is GAS provided that there exist positive real numbers $q_1, \ldots, q_n$ such that one of the following conditions holds:

(1) $\sum_{j=1}^{n} \frac{q_i}{q_j}\,\frac{\overline{a}_i}{\underline{a}_i}\,\frac{|t_{ij}|\,L_j}{\gamma_i} < 1$ for $1 \le i \le n$;

(2) $\sum_{i=1}^{n} \frac{q_i}{q_j}\,\frac{\overline{a}_i}{\underline{a}_i}\,\frac{|t_{ij}|\,L_j}{\gamma_i} < 1$ for $1 \le j \le n$;

(3) $\sum_{i=1}^{n}\sum_{j=1}^{n} \Big(\frac{q_i}{q_j}\,\frac{\overline{a}_i}{\underline{a}_i}\,\frac{|t_{ij}|\,L_j}{\gamma_i}\Big)^2 < 1$.


4. One illustrative example

In this section, we give one example to illustrate our results. Consider the following system:

$$\begin{pmatrix} \dot{x}_1(t) \\ \dot{x}_2(t) \end{pmatrix} = -\begin{pmatrix} 2 + \sin x_1 & 0 \\ 0 & 2 + \cos x_2 \end{pmatrix}\Bigg[\begin{pmatrix} x_1(t) \\ x_2(t) \end{pmatrix} - \int_{-\infty}^{t}\begin{pmatrix} -\frac{1}{4}k(t-s) & -\frac{1}{8}k(t-s) \\ -\frac{1}{16}k(t-s) & \frac{1}{12}k(t-s) \end{pmatrix}\begin{pmatrix} s_1(x_1(s)) \\ s_2(x_2(s)) \end{pmatrix} ds + \begin{pmatrix} J_1 \\ J_2 \end{pmatrix}\Bigg],$$

where $s_1$ and $s_2$ satisfy (A3) and (A4) with $L_1 = L_2 = 1$, and $k(t) = \frac{2}{\pi(1+t^2)}$. Obviously, $\gamma_1 = \gamma_2 = 1$, $\overline{a}_i = 3$, $\underline{a}_i = 1$, $i = 1, 2$. Thus

$$\rho(AM) = \frac{1}{2} + \sqrt{\frac{17}{128}} < \frac{1}{2} + \sqrt{\frac{1}{4}} = 1.$$

Hence, it follows from Theorem 3.6 that the system has a unique equilibrium, which is GAS.
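The spectral radius claimed in the example can be verified numerically by building $A$ and $M$ from the data above (the construction follows (A5); only the code itself is ours):

```python
import numpy as np

# Data of the example: A = diag(a_bar_i / a_under_i) = diag(3, 3),
# M = (|t_ij| L_j / gamma_i) with the weights t_ij from the system above.
A = np.diag([3.0, 3.0])
T = np.array([[-1/4,  -1/8],
              [-1/16,  1/12]])
L = np.array([1.0, 1.0])
gamma = np.array([1.0, 1.0])
M = np.abs(T) * L[None, :] / gamma[:, None]

rho = max(abs(np.linalg.eigvals(A @ M)))
# Closed form: AM = [[3/4, 3/8], [3/16, 1/4]] has trace 1 and det 15/128,
# so rho(AM) = 1/2 + sqrt(17/128) ≈ 0.8644 < 1, confirming (A5).
print(rho)
```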

5. Conclusions

In this Letter, global asymptotic stability of the equilibrium point of Cohen–Grossberg neural network with continuously distributed delays has been investigated. Some new criteria on the existence and uniqueness of the equilibrium point and its global asymptotic stability have been derived by using Brouwer's fixed point theorem, matrix theory and analysis techniques such as the Gronwall inequality. The neuronal output activation function only needs to satisfy conditions (A3) and (A4) given in this Letter, but need not be continuous, differentiable and monotonically increasing, as usually required by other analysis methods. It is worthwhile to mention that our technical methods are practical, in the sense that all new stability conditions are stated in simple algebraic forms, so their verification and applications are straightforward and convenient. The effectiveness of our results has been demonstrated by a convenient numerical example.

Acknowledgements

The authors would like to thank the anonymous reviewers and the editor for their constructive comments. This revised form was finished while J. Sun was visiting the Centre for Chaos Control and Synchronization at the City University of Hong Kong. J. Sun would like to thank Prof. Guanrong Chen for his critical reading of the manuscript and hospitality.

References

[1] S. Arik, IEEE Trans. Neural Networks 13 (5) (2002) 1239.
[2] S. Arik, IEEE Trans. Circuits Systems I 49 (8) (2002) 1211.
[3] S. Arik, IEEE Trans. Circuits Systems I 50 (1) (2003) 156.
[4] J. Bélair, J. Dynamics Differential Equations 5 (1993) 607.
[5] J. Cao, IEEE Trans. Circuits Systems I 48 (11) (2001) 1330.
[6] J. Cao, Phys. Lett. A 307 (2003) 136.
[7] J. Cao, J. Wang, IEEE Trans. Circuits Systems I 50 (1) (2003) 34.
[8] T. Chen, S. Amari, IEEE Trans. Neural Networks 12 (1) (2001) 159.
[9] T. Chen, Neural Networks 14 (2001) 251.
[10] Y. Chen, Neural Networks 15 (2002) 867.
[11] M. Cohen, S. Grossberg, IEEE Trans. Systems Man Cybernet. SMC-13 (1983) 815.
[12] P. van den Driessche, X. Zou, SIAM J. Appl. Math. 58 (1998) 1878.
[13] K. Gopalsamy, Stability and Oscillations in Delay Differential Equations of Population Dynamics, Kluwer Academic Publishers, Dordrecht, 1992.
[14] K. Gopalsamy, X. He, Physica D 76 (1994) 344.
[15] R.A. Horn, C.R. Johnson, Matrix Analysis, Cambridge Univ. Press, London, 1990.
[16] J.J. Hopfield, Proc. Natl. Acad. Sci. USA 79 (1982) 2254.
[17] J.J. Hopfield, Proc. Natl. Acad. Sci. USA 81 (1984) 3088.
[18] J.J. Hopfield, D.W. Tank, Science 233 (1986) 3088.
[19] X. Liao, Absolute Stability of Nonlinear Control Systems, Kluwer Academic Publishers, Dordrecht, 1993.
[20] S. Mohamad, K. Gopalsamy, Math. Comput. Simulation 53 (2000) 1.
[21] C. Marcus, R. Westervelt, Phys. Rev. A 39 (1989) 347.
[22] A. Quezz, V. Protoposecu, J. Barben, IEEE Trans. Systems Man Cybernet. 18 (1983) 80.
[23] V. Sree Hari Rao, B.R.M. Phaneendra, Neural Networks 12 (1999) 445.
[24] J. Wu, Trans. Amer. Math. Soc. 350 (1999) 4799.
[25] L. Wang, X.F. Zou, Neural Networks 15 (2002) 415.
[26] L. Wang, X.F. Zou, Physica D 170 (2002) 162.
[27] H. Ye, A.N. Michel, K. Wang, Phys. Rev. E 51 (1995) 2611.
[28] Y. Zhang, A. Heng, P. Vadakkepat, IEEE Trans. Circuits Systems I 49 (2) (2002) 256.
[29] J. Zhang, X. Jin, Neural Networks 13 (2000) 745.
[30] Q. Zhang, R. Ma, J. Xu, Commun. Theor. Phys. 39 (2003) 381.
[31] H. Zhao, Phys. Lett. A 297 (2002) 182.
[32] H. Zhao, Neural Networks 17 (2004) 47.