Neurocomputing 79 (2012) 164–167
Contents lists available at SciVerse ScienceDirect
Neurocomputing
journal homepage: www.elsevier.com/locate/neucom
Letters
Attractor and boundedness for stochastic Cohen–Grossberg neural networks with delays
Li Wan a,*, Qinghua Zhou b
a School of Mathematics and Computer, Wuhan Textile University, Wuhan 430073, China
b Department of Mathematics, Zhaoqing University, Zhaoqing 526061, China
Article info
Article history:
Received 7 January 2011
Received in revised form
27 July 2011
Accepted 12 October 2011
Communicated by D. Liu
Available online 11 November 2011
Keywords:
Stochastic Cohen–Grossberg neural networks
Attractor
Delays
0925-2312/$ - see front matter © 2011 Elsevier B.V. All rights reserved.
doi:10.1016/j.neucom.2011.10.008
* Corresponding author.
E-mail addresses: [email protected] (L. Wan), [email protected] (Q. Zhou).
Abstract
By employing the Lyapunov method and a LaSalle-type theorem, the attractor of stochastic Cohen–Grossberg neural networks (CGNN) with delays is investigated for the first time. Novel results and sufficient criteria on the attractor of stochastic CGNN are obtained; almost sure asymptotic stability follows as a special case of our results. The boundedness of stochastic CGNN is also investigated. Finally, one example is presented to illustrate the correctness and effectiveness of our theoretical results.
© 2011 Elsevier B.V. All rights reserved.
1. Introduction
In the past decades, the Cohen–Grossberg neural network proposed by Cohen and Grossberg [1] has attracted increasing interest due to its potential applications in classification, associative memory, parallel computation and optimization problems. Such applications heavily depend on the dynamic behaviors of the networks. Recently, it has been well recognized that stochastic disturbances are ubiquitous in real nervous systems; therefore, it is important and interesting to investigate how stochastic disturbances affect the networks. Many results on stochastic neural networks with delays have been reported in [2–27] and the references therein. Some sufficient criteria on the stability of uncertain stochastic neural networks were derived in [2–5]. Almost sure exponential stability and robust stability of stochastic neural networks were studied in [6–12]. In [13–18], mean square exponential stability and pth moment exponential stability of stochastic neural networks were discussed. Some sufficient criteria on the stability of impulsive stochastic neural networks were established in [19,20]. In [21], the stability of discrete-time stochastic neural networks was analyzed. The stability of stochastic Markovian jumping neural networks was investigated in [22–26]. In [27], the passivity of stochastic neural networks was analyzed. However, the available literature mainly considers the stability of stochastic neural networks.
In fact, besides stability, studies of neural dynamical systems involve uniform boundedness, ultimate boundedness, bifurcation, chaos, attractors, and so on. The attractor and boundedness are foundational concepts for a dynamical system and play an important role in investigating global asymptotic stability, global exponential stability, and so on. However, to our knowledge, the attractor and boundedness of stochastic neural networks with delays have never been investigated.
Motivated by the above discussions, the objective of this paper is to investigate the attractor and boundedness of stochastic Cohen–Grossberg neural networks with delays. The rest of the paper is organized as follows: some preliminaries are given in Section 2; our main results are presented in Section 3; one example and conclusions are given in Sections 4 and 5, respectively.
2. Preliminaries
Consider the following stochastic Cohen–Grossberg neural networks with delays:
$$\mathrm{d}x(t) = -a(x(t))\,[\,b(x(t)) - Af(x(t)) - Bf(x(t-\tau))\,]\,\mathrm{d}t + [\sigma_1 x(t) + \sigma_2 x(t-\tau)]\,\mathrm{d}w(t) \triangleq F(x(t),x(t-\tau))\,\mathrm{d}t + G(x(t),x(t-\tau))\,\mathrm{d}w(t) \quad (2.1)$$

and the initial condition $x(s) = \xi(s)$, $-\mu \le s \le 0$, in which $x = (x_1,\dots,x_n)^T$ is the state vector associated with the neurons, $x(t-\tau) = (x_1(t-\tau_1),\dots,x_n(t-\tau_n))^T$, $a(x(t)) = \operatorname{diag}(a_1(x_1(t)),\dots,a_n(x_n(t)))$ is the amplification function, $b(x(t)) = (b_1(x_1(t)),\dots,b_n(x_n(t)))^T$ is the behaved function, $f(x(t)) = (f_1(x_1(t)),\dots,f_n(x_n(t)))^T$ is the activation function, $f(x(t-\tau)) = (f_1(x_1(t-\tau_1)),\dots,f_n(x_n(t-\tau_n)))^T$, $\sigma_1$ and $\sigma_2$ are the diffusion coefficient matrices, $\tau_i$ is the transmission delay and satisfies $0 \le \tau_i \le \mu$, $w(t)$ is a one-dimensional Brownian motion defined on a complete probability space $(\Omega,\mathcal{F},P)$ with a natural filtration $\{\mathcal{F}_t\}_{t\ge 0}$ generated by $\{w(s) : 0 \le s \le t\}$, and $A = (a_{ij})_{n\times n}$ and $B = (b_{ij})_{n\times n}$ represent the connection weight matrix and the delayed connection weight matrix, respectively. $\xi = \{\xi(s)\in\mathbb{R}^n : -\mu\le s\le 0\} \in C^b_{\mathcal{F}_0}([-\mu,0];\mathbb{R}^n)$, where $C^b_{\mathcal{F}_0}([-\mu,0];\mathbb{R}^n)$ is the family of all $\mathcal{F}_0$-measurable bounded $C([-\mu,0];\mathbb{R}^n)$-valued random variables and $C([-\mu,0];\mathbb{R}^n)$ is the family of all continuous $\mathbb{R}^n$-valued functions defined on $[-\mu,0]$.

Let $A^T$ and $A^{-1}$ denote the transpose and inverse of a matrix $A$, and let $\lambda_{\max}(A)$ represent the maximum eigenvalue of $A$. Let $A > 0$ (respectively, $A \ge 0$) denote that the matrix $A$ is symmetric positive definite (respectively, positive semi-definite). Let $C^{2,1}(\mathbb{R}^n\times\mathbb{R}_+;\mathbb{R}_+)$ denote the family of all nonnegative functions $V(x,t)$ on $\mathbb{R}^n\times\mathbb{R}_+$ which are continuously twice differentiable in $x$ and once differentiable in $t$. For each $V\in C^{2,1}(\mathbb{R}^n\times\mathbb{R}_+;\mathbb{R}_+)$, define an operator $\mathcal{L}V$ from $\mathbb{R}^n\times\mathbb{R}^n\times\mathbb{R}_+$ to $\mathbb{R}$ by
$$\mathcal{L}V(x,y,t) = V_t(x,t) + V_x(x,t)F(x,y) + \tfrac{1}{2}\operatorname{trace}[G^T(x,y)\,V_{xx}(x,t)\,G(x,y)],$$
where
$$V_t(x,t) = \frac{\partial V(x,t)}{\partial t}, \quad V_{xx}(x,t) = \Big(\frac{\partial^2 V(x,t)}{\partial x_i \partial x_j}\Big)_{n\times n}, \quad V_x(x,t) = \Big(\frac{\partial V(x,t)}{\partial x_1},\dots,\frac{\partial V(x,t)}{\partial x_n}\Big).$$
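As a numerical illustration of how sample paths of a system of the form (2.1) can be generated, the following sketch applies a basic Euler–Maruyama discretization. Every concrete choice below (dimension, delay, coefficient matrices, activation) is an illustrative assumption, not data from the paper.

```python
import numpy as np

# Euler-Maruyama sketch for
#   dx = -a(x)[b(x) - A f(x) - B f(x_tau)] dt + [s1 x + s2 x_tau] dw,
# with a scalar Brownian motion w(t).  All concrete parameters here are
# hypothetical placeholders chosen only to make the sketch runnable.
rng = np.random.default_rng(0)
n, dt, T, tau = 2, 0.001, 5.0, 0.1
steps, delay = int(T / dt), int(tau / dt)

A = 0.1 * rng.standard_normal((n, n))   # connection weight matrix (placeholder)
B = 0.1 * rng.standard_normal((n, n))   # delayed connection weight matrix
s1 = 0.05 * np.eye(n)                   # diffusion coefficient matrices
s2 = 0.05 * np.eye(n)

a = lambda x: 2.0 - np.exp(-x ** 2)     # amplification, bounded in [1, 2], x a'(x) >= 0
b = lambda x: 1.5 * x                   # behaved function with b(0) = 0
f = np.tanh                             # activation with f(0) = 0

x = np.zeros((steps + 1, n))
x[0] = rng.standard_normal(n)           # constant initial segment xi(s) = x(0) on [-tau, 0]

for k in range(steps):
    x_tau = x[max(k - delay, 0)]
    drift = -a(x[k]) * (b(x[k]) - A @ f(x[k]) - B @ f(x_tau))
    diff = s1 @ x[k] + s2 @ x_tau
    dw = np.sqrt(dt) * rng.standard_normal()   # one-dimensional Brownian increment
    x[k + 1] = x[k] + drift * dt + diff * dw

print(np.linalg.norm(x[-1]))
```

Printing the terminal norm gives a rough feel for whether a given parameter set keeps the sample path bounded.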
Throughout this paper, we suppose that the following assumptions hold.

(A1) The amplification functions $a_i(\cdot)$ are positive, bounded and continuously differentiable, and there exist constants $\bar{a}_i \ge \underline{a}_i > 0$ such that $\underline{a}_i \le a_i(x) \le \bar{a}_i$ and $x a_i'(x) \ge 0$ for all $x\in\mathbb{R}$, $i = 1,\dots,n$.

(A2) $f(0) = 0$ and there exist constants $l_i^+$ and $l_i^-$ such that
$$l_i^- \le \frac{f_i(x) - f_i(y)}{x - y} \le l_i^+, \quad \forall x, y\in\mathbb{R},\ x\ne y,\ i = 1,\dots,n.$$

(A3) $b(0) = 0$ and there exist constants $\bar\gamma_i \ge \gamma_i > 0$ such that
$$\gamma_i \le \frac{b_i(x) - b_i(y)}{x - y} \le \bar\gamma_i, \quad \forall x, y\in\mathbb{R},\ x\ne y,\ i = 1,\dots,n.$$

Remark 2.1. Assumption (A2) is less conservative than the corresponding assumptions in [2–4], since the constants $l_i^+$ and $l_i^-$ are allowed to be positive, negative or zero; that is, the activation function in (A2) is not required to be monotonic, differentiable or bounded. In addition, assumptions (A2) and (A3) imply that
$$|f_i(x)| \le l_i|x|, \quad l_i = \max\{|l_i^+|, |l_i^-|\}, \quad b_i^2(x) \le \bar\gamma_i^2 x^2, \quad \gamma_i x^2 \le b_i(x)x, \quad i = 1,\dots,n,\ x\in\mathbb{R}. \quad (2.2)$$
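For instance, the first bound in (2.2) follows by taking $y = 0$ in (A2) and using $f_i(0) = 0$:

```latex
% Taking y = 0 in (A2) and using f_i(0) = 0:
\begin{aligned}
l_i^- \le \frac{f_i(x) - f_i(0)}{x - 0} = \frac{f_i(x)}{x} \le l_i^+ \quad (x \ne 0)
\;\Longrightarrow\; |f_i(x)| \le \max\{|l_i^+|,\,|l_i^-|\}\,|x| = l_i\,|x|.
\end{aligned}
```

The bounds on $b_i$ follow in the same way from (A3) with $y = 0$ and $b_i(0) = 0$.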
Definition 2.2. A nonempty set $\mathcal{A}\subset\mathbb{R}^n$ is called the attractor for the solution $x(t;\xi)$ of system (2.1) if
$$\lim_{t\to\infty} d(x(t;\xi), \mathcal{A}) = 0 \quad \text{a.s.}$$
for every $\xi\in C^b_{\mathcal{F}_0}([-\mu,0];\mathbb{R}^n)$, where $d(x,\mathcal{A}) = \inf_{y\in\mathcal{A}}\|x-y\|$.
Lemma 2.3. Let assumptions (A1)–(A3) hold. Then system (2.1) has a unique solution, denoted $x(t;\xi)$, on $t\ge 0$ for any initial data $\xi\in C^b_{\mathcal{F}_0}([-\mu,0];\mathbb{R}^n)$. Moreover, both $F(x,y)$ and $G(x,y)$ are locally bounded in $(x,y)$; that is, for any $l > 0$ there is a $K_l > 0$ such that $\max\{\|F(x,y)\|, \|G(x,y)\|\} \le K_l$ for all $x, y\in\mathbb{R}^n$ with $\max\{\|x\|,\|y\|\}\le l$.
Proof. It follows from [28] that under assumptions (A1)–(A3), system (2.1) has a unique global solution $x(t;\xi)$ on $t\ge 0$ for any initial data $\xi\in C^b_{\mathcal{F}_0}([-\mu,0];\mathbb{R}^n)$.

From (A1)–(A3) and (2.2), one obtains
$$\|F(x,y)\|^2 = F^T(x,y)F(x,y) = [b^T(x) - f^T(x)A^T - f^T(y)B^T]\,a^T(x)a(x)\,[b(x) - Af(x) - Bf(y)]$$
$$\le 3\lambda_{\max}(a^T(x)a(x))\,[b^T(x)b(x) + f^T(x)A^TAf(x) + f^T(y)B^TBf(y)]$$
$$\le 3\max_{1\le i\le n}\{\bar a_i^2\}\Big[\sum_{i=1}^{n}\bar\gamma_i^2 x_i^2 + \lambda_{\max}(A^TA)\sum_{i=1}^{n}l_i^2 x_i^2 + \lambda_{\max}(B^TB)\sum_{i=1}^{n}l_i^2 y_i^2\Big]$$
$$\le 3\max_{1\le i\le n}\{\bar a_i^2\}\Big[\max_{1\le i\le n}\{\bar\gamma_i^2\}\|x\|^2 + |\lambda_{\max}(A^TA)|\max_{1\le i\le n}\{l_i^2\}\|x\|^2 + |\lambda_{\max}(B^TB)|\max_{1\le i\le n}\{l_i^2\}\|y\|^2\Big]$$
and
$$\|G(x,y)\|^2 = G^T(x,y)G(x,y) = [\sigma_1 x + \sigma_2 y]^T[\sigma_1 x + \sigma_2 y] \le 2[x^T\sigma_1^T\sigma_1 x + y^T\sigma_2^T\sigma_2 y] \le 2[\lambda_{\max}(\sigma_1^T\sigma_1)\|x\|^2 + \lambda_{\max}(\sigma_2^T\sigma_2)\|y\|^2].$$

For any $l > 0$ and $\max\{\|x\|,\|y\|\}\le l$, one obtains $\max\{\|F(x,y)\|, \|G(x,y)\|\}\le K_l$, where
$$K_l = l\,\max\Big(\Big\{3\max_{1\le i\le n}\{\bar a_i^2\}\Big[\max_{1\le i\le n}\{\bar\gamma_i^2\} + |\lambda_{\max}(A^TA)|\max_{1\le i\le n}\{l_i^2\} + |\lambda_{\max}(B^TB)|\max_{1\le i\le n}\{l_i^2\}\Big]\Big\}^{1/2},\ \{2[\lambda_{\max}(\sigma_1^T\sigma_1) + \lambda_{\max}(\sigma_2^T\sigma_2)]\}^{1/2}\Big).$$

Therefore, $F(x,y)\in\mathbb{R}^n$ and $G(x,y)\in\mathbb{R}^n$ are locally bounded in $(x,y)$. □
Lemma 2.4 (LaSalle-type theorem, Mao [29]). Assume that system (2.1) has a unique solution, denoted $x(t;\xi)$, on $t\ge 0$ for any initial data $\xi\in C^b_{\mathcal{F}_0}([-\mu,0];\mathbb{R}^n)$, and that $F(x,y)$ and $G(x,y)$ are locally bounded in $(x,y)$. Assume further that there are $V\in C^{2,1}(\mathbb{R}^n\times\mathbb{R}_+;\mathbb{R}_+)$ and $W_1, W_2\in C(\mathbb{R}^n;\mathbb{R}_+)$ such that
$$\mathcal{L}V(x,y,t) \le -W_1(x) + W_2(y), \quad W_1(x)\ge W_2(x), \quad \lim_{\|x\|\to\infty}\inf_{0\le t<\infty}V(x,t) = \infty.$$
Then the set $\mathcal{K} = \{x\in\mathbb{R}^n : W_1(x) - W_2(x) = 0\}\ne\emptyset$ and
$$\lim_{t\to\infty} d(x(t;\xi),\mathcal{K}) = 0 \quad \text{a.s.} \quad (2.3)$$
for every $\xi\in C^b_{\mathcal{F}_0}([-\mu,0];\mathbb{R}^n)$.
3. Main results
We first give the result on the attractor for stochastic Cohen–Grossberg neural networks with delays.
Theorem 3.1. Let assumptions (A1)–(A3) hold. Suppose that there exist matrices $P = \operatorname{diag}(p_1,\dots,p_n) > 0$ and $U_i = \operatorname{diag}(u_{i1},\dots,u_{in})\ge 0$ ($i = 1,2$) such that $S_1\ge S_2\ge 0$, where
$$S_1 = \begin{pmatrix} 2P\gamma - PBP^{-1}B^TP - 2\sigma_1^T[\underline{a}^{-1}P]\sigma_1 + 2U_1L_1 & -PA - U_1L_2 \\ (-PA - U_1L_2)^T & 2U_1 \end{pmatrix},$$
$$S_2 = \begin{pmatrix} 2\sigma_2^T[\underline{a}^{-1}P]\sigma_2 - 2U_2L_1 & U_2L_2 \\ (U_2L_2)^T & -2U_2 + P \end{pmatrix},$$
with $L_1 = \operatorname{diag}(l_1^-l_1^+,\dots,l_n^-l_n^+)$, $L_2 = \operatorname{diag}(l_1^- + l_1^+,\dots,l_n^- + l_n^+)$, $\gamma = \operatorname{diag}(\gamma_1,\dots,\gamma_n)$ and $\underline{a} = \operatorname{diag}(\underline{a}_1,\dots,\underline{a}_n)$.

Then the set $\mathcal{K} = \{x\in\mathbb{R}^n : W_1(x) - W_2(x) = 0\}$ is nonempty and (2.3) holds, where $W_i(x) = (x^T, f^T(x))\,S_i\,(x^T, f^T(x))^T$, $i = 1,2$. In other words, $\mathcal{K}$ is the attractor of system (2.1) and the solution $x(t;\xi)$ approaches $\mathcal{K}$ asymptotically with probability 1.

In particular, if $S_1 > S_2\ge 0$, then $\mathcal{K} = \{0\}$ and the solution of system (2.1) tends to zero asymptotically with probability 1.
Proof. Clearly, $\{0\}\subset\mathcal{K}$, and from $S_1\ge S_2\ge 0$ it follows that $W_1(x)\ge W_2(x)$. We choose the Lyapunov–Krasovskii functional
$$V(x(t)) = 2\sum_{i=1}^{n}p_i\int_0^{x_i(t)}\frac{s}{a_i(s)}\,\mathrm{d}s \quad (3.1)$$
and obtain
$$\mathcal{L}V(x(t),x(t-\tau),t) = 2x^T(t)P[-b(x(t)) + Af(x(t)) + Bf(x(t-\tau))] + \tfrac{1}{2}\operatorname{trace}[G^T(x(t),x(t-\tau))\,V_{xx}(x,t)\,G(x(t),x(t-\tau))], \quad (3.2)$$
where
$$V_{xx}(x,t) = 2\operatorname{diag}\Big(\frac{p_1}{a_1(x_1(t))} - \frac{p_1x_1(t)a_1'(x_1(t))}{a_1^2(x_1(t))},\ \dots,\ \frac{p_n}{a_n(x_n(t))} - \frac{p_nx_n(t)a_n'(x_n(t))}{a_n^2(x_n(t))}\Big).$$

From (A1)–(A3) and (2.2), one obtains
$$V_{xx}(x,t) \le 2\operatorname{diag}\Big(\frac{p_1}{\underline{a}_1},\dots,\frac{p_n}{\underline{a}_n}\Big) = 2\underline{a}^{-1}P,$$
$$\tfrac{1}{2}\operatorname{trace}[G^T(x(t),x(t-\tau))\,V_{xx}(x,t)\,G(x(t),x(t-\tau))] = \tfrac{1}{2}G^T(x(t),x(t-\tau))\,V_{xx}(x,t)\,G(x(t),x(t-\tau)) \le G^T(x(t),x(t-\tau))[\underline{a}^{-1}P]G(x(t),x(t-\tau)) \le 2x^T(t)\sigma_1^T[\underline{a}^{-1}P]\sigma_1 x(t) + 2(x(t-\tau))^T\sigma_2^T[\underline{a}^{-1}P]\sigma_2 x(t-\tau), \quad (3.3)$$
$$-x^T(t)Pb(x(t)) = -\sum_{i=1}^{n}x_i(t)p_ib_i(x_i(t)) \le -\sum_{i=1}^{n}p_i\gamma_ix_i^2(t) = -x^T(t)P\gamma x(t), \quad (3.4)$$
$$2x^T(t)PBf(x(t-\tau)) \le x^T(t)PBP^{-1}B^TPx(t) + f^T(x(t-\tau))Pf(x(t-\tau)), \quad (3.5)$$
$$0 \le -\sum_{i=1}^{n}2u_{1i}[f_i(x_i(t)) - l_i^+x_i(t)][f_i(x_i(t)) - l_i^-x_i(t)] = -2[f^T(x(t))U_1f(x(t)) - f^T(x(t))U_1L_2x(t) + x^T(t)U_1L_1x(t)], \quad (3.6)$$
$$0 \le -\sum_{i=1}^{n}2u_{2i}[f_i(x_i(t-\tau_i)) - l_i^+x_i(t-\tau_i)][f_i(x_i(t-\tau_i)) - l_i^-x_i(t-\tau_i)] = -2[f^T(x(t-\tau))U_2f(x(t-\tau)) - f^T(x(t-\tau))U_2L_2x(t-\tau) + (x(t-\tau))^TU_2L_1x(t-\tau)]. \quad (3.7)$$

From (3.2)–(3.7), one gets
$$\mathcal{L}V(x(t),x(t-\tau),t) \le -2x^T(t)P\gamma x(t) + 2x^T(t)PAf(x(t)) + x^T(t)PBP^{-1}B^TPx(t) + f^T(x(t-\tau))Pf(x(t-\tau)) + 2x^T(t)\sigma_1^T[\underline{a}^{-1}P]\sigma_1x(t) + 2(x(t-\tau))^T\sigma_2^T[\underline{a}^{-1}P]\sigma_2x(t-\tau) - 2[f^T(x(t))U_1f(x(t)) - f^T(x(t))U_1L_2x(t) + x^T(t)U_1L_1x(t)] - 2[f^T(x(t-\tau))U_2f(x(t-\tau)) - f^T(x(t-\tau))U_2L_2x(t-\tau) + (x(t-\tau))^TU_2L_1x(t-\tau)] = -\eta^T(x(t))S_1\eta(x(t)) + \eta^T(x(t-\tau))S_2\eta(x(t-\tau)) = -W_1(x(t)) + W_2(x(t-\tau)), \quad (3.8)$$
where $\eta(x(t)) = (x^T(t), f^T(x(t)))^T$ and $\eta(x(t-\tau)) = ((x(t-\tau))^T, f^T(x(t-\tau)))^T$.

On the other hand, from (3.1) and (A1), one obtains
$$\min_{1\le i\le n}\{\bar a_i^{-1}p_i\}\|x(t)\|^2 \le \sum_{i=1}^{n}\bar a_i^{-1}p_ix_i^2(t) \le V(x(t)) \le \max_{1\le i\le n}\{\underline{a}_i^{-1}p_i\}\|x(t)\|^2. \quad (3.9)$$

Thus it follows from Lemma 2.4 that the desired assertion (2.3) holds. □
Remark 3.2. From Theorem 3.1, we know that the solution of system (2.1) approaches the set $\mathcal{K}$ asymptotically with probability 1 provided $S_1\ge S_2\ge 0$. Let $\nu_1,\dots,\nu_k$ be the mutually orthogonal eigenvectors of $S_1 - S_2$ corresponding to the eigenvalue 0; then the set $\mathcal{K}$ is the linear span of $\nu_1,\dots,\nu_k$, that is,
$$\mathcal{K} = \{a_1\nu_1 + \cdots + a_k\nu_k : a_i\in\mathbb{R},\ i = 1,\dots,k\}.$$
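Numerically, the zero-eigenvalue eigenvectors in Remark 3.2 can be extracted with a symmetric eigendecomposition. The following sketch uses a small hypothetical stand-in matrix for $S_1 - S_2$, not data from the paper.

```python
import numpy as np

# Sketch: extract the mutually orthogonal eigenvectors of a symmetric matrix
# corresponding to the eigenvalue 0, whose span characterizes the set K in
# Remark 3.2.  D is a hypothetical stand-in for S1 - S2.
D = np.diag([2.0, 1.0, 0.0, 0.0])               # symmetric, rank-deficient example

eigvals, eigvecs = np.linalg.eigh(D)            # eigh: symmetric eigendecomposition
tol = 1e-10
null_vecs = eigvecs[:, np.abs(eigvals) < tol]   # orthonormal zero-eigenvalue eigenvectors

print(null_vecs.shape[1])                       # number k of zero eigenvalues; here 2
```

The columns of `null_vecs` are then the $\nu_1,\dots,\nu_k$ of the remark, up to an orthogonal change of basis within the null space.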
Finally, we give the following result on the boundedness of stochastic Cohen–Grossberg neural networks with delays.
Theorem 3.3. Suppose that all conditions of Theorem 3.1 hold. Then there exists a constant $C^* > 0$ such that $E\|x(t)\|^2 \le C^*$ and, for any $\varepsilon\in(0,1)$, $P\{\|x(t)\|\le\sqrt{C^*/\varepsilon}\}\ge 1-\varepsilon$.
Proof. Since the matrix $S_2\in\mathbb{R}^{2n\times 2n}$ is real and symmetric, there exists an orthogonal matrix $Q = (q_{ij})_{2n\times 2n}$ such that $Q^TS_2Q = \operatorname{diag}(\lambda_1,\dots,\lambda_{2n})$, where $\lambda_1,\dots,\lambda_{2n}$ are the eigenvalues of $S_2$. Therefore,
$$\eta^T(x(t-\tau))S_2\eta(x(t-\tau)) = \eta^T(x(t-\tau))\,Q\operatorname{diag}(\lambda_1,\dots,\lambda_{2n})Q^T\,\eta(x(t-\tau)) = \sum_{i=1}^{2n}\lambda_i\Big[\sum_{j=1}^{n}q_{ji}x_j(t-\tau_j) + \sum_{j=1}^{n}q_{(j+n),i}f_j(x_j(t-\tau_j))\Big]^2. \quad (3.10)$$

We choose the Lyapunov–Krasovskii functional
$$V(x(t)) = 2\sum_{i=1}^{n}p_i\int_0^{x_i(t)}\frac{s}{a_i(s)}\,\mathrm{d}s + \sum_{i=1}^{2n}\lambda_i\sum_{j=1}^{n}\int_{t-\tau_j}^{t}\big[q_{ji}x_j(s) + q_{(j+n),i}f_j(x_j(s))\big]^2\,\mathrm{d}s$$
and obtain
$$\mathcal{L}V(x(t),x(t-\tau),t) = 2x^T(t)P[-b(x(t)) + Af(x(t)) + Bf(x(t-\tau))] + \tfrac{1}{2}\operatorname{trace}[G^T(x(t),x(t-\tau))\,V_{xx}(x,t)\,G(x(t),x(t-\tau))] + \sum_{i=1}^{2n}\lambda_i\Big[\sum_{j=1}^{n}q_{ji}x_j(t) + \sum_{j=1}^{n}q_{(j+n),i}f_j(x_j(t))\Big]^2 - \sum_{i=1}^{2n}\lambda_i\Big[\sum_{j=1}^{n}q_{ji}x_j(t-\tau_j) + \sum_{j=1}^{n}q_{(j+n),i}f_j(x_j(t-\tau_j))\Big]^2, \quad (3.11)$$
where
$$V_{xx}(x,t) = 2\operatorname{diag}\Big(\frac{p_1}{a_1(x_1(t))} - \frac{p_1x_1(t)a_1'(x_1(t))}{a_1^2(x_1(t))},\ \dots,\ \frac{p_n}{a_n(x_n(t))} - \frac{p_nx_n(t)a_n'(x_n(t))}{a_n^2(x_n(t))}\Big).$$
From (3.3)–(3.7), (3.10) and (3.11), one obtains
$$\mathcal{L}V(x(t),x(t-\tau),t) \le \eta^T(x(t))(-S_1 + S_2)\eta(x(t)) \le 0. \quad (3.12)$$
Thus, one gets
$$\min_{1\le i\le n}\{\bar a_i^{-1}p_i\}\,E\|x(t)\|^2 \le EV(x(t)) \le EV(x(0)) + E\int_0^t\mathcal{L}V(x(s),x(s-\tau),s)\,\mathrm{d}s \le EV(x(0))$$
$$\le \max_{1\le i\le n}\{\underline{a}_i^{-1}p_i\}\,E\|x(0)\|^2 + 2E\sum_{i=1}^{2n}\lambda_i\int_{-\mu}^{0}\Big\{\Big[\sum_{j=1}^{n}|q_{ji}|\,|x_j(s)|\Big]^2 + \Big[\sum_{j=1}^{n}|q_{(j+n),i}|\,l_j\,|x_j(s)|\Big]^2\Big\}\,\mathrm{d}s$$
$$\le \max_{1\le i\le n}\{\underline{a}_i^{-1}p_i\}\,E\|x(0)\|^2 + 2\mu\sum_{i=1}^{2n}\lambda_i\sum_{j=1}^{n}\big(q_{ji}^2 + q_{(j+n),i}^2l_j^2\big)\int_{-\mu}^{0}E\|x(s)\|^2\,\mathrm{d}s$$
$$\le \Big[\max_{1\le i\le n}\{\underline{a}_i^{-1}p_i\} + 2\mu^2\sum_{i=1}^{2n}\lambda_i\sum_{j=1}^{n}\big(q_{ji}^2 + q_{(j+n),i}^2l_j^2\big)\Big]\sup_{-\mu\le s\le 0}E\|x(s)\|^2.$$

Since $\xi = \{\xi(s)\in\mathbb{R}^n : -\mu\le s\le 0\}\in C^b_{\mathcal{F}_0}([-\mu,0];\mathbb{R}^n)$, there exists a constant $C^* > 0$ such that $E\|x(t)\|^2\le C^*$. For any $\varepsilon\in(0,1)$, it follows from Chebyshev's inequality that
$$P\{\|x(t)\| > \sqrt{C^*/\varepsilon}\} \le \frac{E\|x(t)\|^2}{C^*/\varepsilon} \le \varepsilon,$$
which implies $P\{\|x(t)\|\le\sqrt{C^*/\varepsilon}\}\ge 1-\varepsilon$. □
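The Chebyshev step at the end of the proof can be sanity-checked by Monte Carlo. The sketch below uses an arbitrary distribution with a known second moment as a stand-in for the network state; it is not derived from system (2.1).

```python
import numpy as np

# Monte Carlo sanity check of the Chebyshev step: if E||X||^2 <= C*, then
# P{||X|| > sqrt(C*/eps)} <= eps.  The Gaussian below is an arbitrary stand-in
# with E||X||^2 = 3, not the law of the network state.
rng = np.random.default_rng(1)
samples = rng.standard_normal((100_000, 3))
C_star, eps = 3.0, 0.05
threshold = np.sqrt(C_star / eps)
freq = np.mean(np.linalg.norm(samples, axis=1) > threshold)
print(freq <= eps)
```

The empirical exceedance frequency is far below the Chebyshev bound here, which is expected: the bound is distribution-free and therefore loose for light-tailed laws.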
4. One example
In this section, one example is presented to demonstrate thecorrectness and effectiveness of our results.
Example 1. Consider system (2.1) with $n = 3$, $a_i(y) = 3 - 2e^{-y^2}$, $f_i(y) = \tanh(y)$, $i = 1,2,3$, $b(x(t)) = \gamma x(t)$, and
$$A = \begin{pmatrix} 0.2 & 0.1 & -0.2 \\ 0.3 & -0.1 & 0.1 \\ -0.1 & 0.1 & 0.3 \end{pmatrix}, \quad B = \begin{pmatrix} -0.3 & 0.2 & 0.2 \\ 0.1 & 0.3 & -0.3 \\ 0.2 & -0.1 & -0.2 \end{pmatrix}, \quad \gamma = \begin{pmatrix} 1.5 & 0 & 0 \\ 0 & 1.5 & 0 \\ 0 & 0 & 1.4 \end{pmatrix},$$
$$\sigma_1 = \begin{pmatrix} 0 & 0.1 & 0.2 \\ 0.1 & 0 & 0.2 \\ 0.1 & 0.1 & 0 \end{pmatrix}, \quad \sigma_2 = \begin{pmatrix} 0.1 & 0 & 0 \\ 0 & 0.1 & 0 \\ 0 & 0 & 0.1 \end{pmatrix}.$$

Clearly, $\underline{a} = L_2 = \operatorname{diag}(1,1,1)$ and $L_1 = 0$. By using the Matlab LMI Toolbox, based on Theorem 3.1, the solution of this system approaches $\mathcal{K}$ asymptotically with probability 1 when $P = \operatorname{diag}(5.1873, 5.7782, 5.2487)$, $U_1 = \operatorname{diag}(3.9040, 4.2622, 3.8435)$ and $U_2 = \operatorname{diag}(3.9048, 4.2280, 3.8207)$.
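As a complementary numerical check (a sketch, not part of the original verification), one can assemble $S_1$ and $S_2$ of Theorem 3.1 directly from the data of Example 1 and inspect their eigenvalues with NumPy instead of rerunning the LMI solver.

```python
import numpy as np

# Sketch: assemble S1 and S2 of Theorem 3.1 from the data of Example 1 and
# report the smallest eigenvalues of S1 - S2 and S2.  Theorem 3.1 requires
# S1 >= S2 >= 0 in the positive-semidefinite ordering.
A = np.array([[0.2, 0.1, -0.2], [0.3, -0.1, 0.1], [-0.1, 0.1, 0.3]])
B = np.array([[-0.3, 0.2, 0.2], [0.1, 0.3, -0.3], [0.2, -0.1, -0.2]])
gamma = np.diag([1.5, 1.5, 1.4])
s1 = np.array([[0.0, 0.1, 0.2], [0.1, 0.0, 0.2], [0.1, 0.1, 0.0]])
s2 = 0.1 * np.eye(3)
a_low = np.eye(3)                        # underline-a = diag(1, 1, 1)
L1 = np.zeros((3, 3))                    # l_i^- = 0 for tanh
L2 = np.eye(3)                           # l_i^+ = 1 for tanh
P = np.diag([5.1873, 5.7782, 5.2487])
U1 = np.diag([3.9040, 4.2622, 3.8435])
U2 = np.diag([3.9048, 4.2280, 3.8207])

aP = np.linalg.inv(a_low) @ P
S1_11 = (2 * P @ gamma - P @ B @ np.linalg.inv(P) @ B.T @ P
         - 2 * s1.T @ aP @ s1 + 2 * U1 @ L1)
S1_12 = -P @ A - U1 @ L2
S1 = np.block([[S1_11, S1_12], [S1_12.T, 2 * U1]])
S2_11 = 2 * s2.T @ aP @ s2 - 2 * U2 @ L1
S2 = np.block([[S2_11, U2 @ L2], [(U2 @ L2).T, -2 * U2 + P]])

print(np.linalg.eigvalsh(S1 - S2).min(), np.linalg.eigvalsh(S2).min())
```

Nonnegative printed values would confirm the ordering $S_1 \ge S_2 \ge 0$ numerically for the reported $P$, $U_1$, $U_2$.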
5. Conclusion
Recently, many results on the stability of stochastic neural networks with delays have been reported, but so far there have been no results on the attractor and boundedness of stochastic neural networks. In this paper, new results and sufficient criteria on the attractor are established for stochastic CGNN with delays by using a LaSalle-type theorem and the Lyapunov method. Almost sure asymptotic stability is a special case of our results, and boundedness is obtained under the same criteria. It is necessary and meaningful to discuss the attractor and boundedness of more general stochastic neural network models with Markovian jumping parameters and/or mixed time delays; this is left for future work.
Acknowledgments
The authors thank the editor and the reviewers for their detailed comments and valuable suggestions. This work was supported by the National Natural Science Foundation of China (Nos. 10801109, 10926128 and 11047114), the Science and Technology Research Projects of Hubei Provincial Department of Education (Q20091705, Q20111607 and Q20111611) and the Young Talent Cultivation Projects of Guangdong (LYM09134).
References
[1] M.A. Cohen, S. Grossberg, Absolute stability of global pattern formation and parallel memory storage by competitive neural networks, IEEE Trans. Syst. Man Cybernet. 13 (1983) 815–826.
[2] H. Huang, G. Feng, Delay-dependent stability for uncertain stochastic neural networks with time-varying delay, Physica A 381 (15) (2007) 93–103.
[3] J. Zhang, P. Shi, J. Qiu, Novel robust stability criteria for uncertain stochastic Hopfield neural networks with time-varying delays, Nonlin. Anal. Real World Appl. 8 (4) (2007) 1349–1357.
[4] W.H. Chen, X.M. Lu, Mean square exponential stability of uncertain stochastic delayed neural networks, Phys. Lett. A 372 (7) (2008) 1061–1069.
[5] F. Deng, M. Hua, X. Liu, Y. Peng, J. Fei, Robust delay-dependent exponential stability for uncertain stochastic neural networks with mixed delays, Neurocomputing 74 (2011) 1503–1509.
[6] C. Huang, P. Chen, Y. He, L. Huang, W. Tan, Almost sure exponential stability of delayed Hopfield neural networks, Appl. Math. Lett. 21 (2008) 701–705.
[7] C. Huang, J. Cao, Almost sure exponential stability of stochastic cellular neural networks with unbounded distributed delays, Neurocomputing 72 (2009) 3352–3356.
[8] H. Zhao, N. Ding, L. Chen, Almost sure exponential stability of stochastic fuzzy cellular neural networks with delays, Chaos Solitons Fract. 40 (2009) 1653–1659.
[9] Z. Wang, H. Shu, J. Fang, X. Liu, Robust stability for stochastic Hopfield neural networks with time delays, Nonlin. Anal. Real World Appl. 7 (2006) 1119–1128.
[10] Z. Wang, H. Zhang, W. Yu, Robust stability criteria for interval Cohen–Grossberg neural networks with time varying delay, Neurocomputing 72 (2009) 1105–1110.
[11] T. Li, A. Song, S. Fei, Robust stability of stochastic Cohen–Grossberg neural networks with mixed time-varying delays, Neurocomputing 73 (2010) 542–551.
[12] H. Su, W. Li, K. Wang, X. Ding, Stability analysis for stochastic neural network with infinite delay, Neurocomputing 74 (2011) 1535–1540.
[13] C. Huang, J. Cao, On pth moment exponential stability of stochastic Cohen–Grossberg neural networks with time-varying delays, Neurocomputing 73 (2010) 986–990.
[14] Y. Sun, J. Cao, pth moment exponential stability of stochastic recurrent neural networks with time-varying delays, Nonlin. Anal. Real World Appl. 8 (2007) 1171–1185.
[15] C. Huang, Y. He, L. Huang, W. Zhu, pth moment stability analysis of stochastic recurrent neural networks with time-varying delays, Inform. Sci. 178 (2008) 2194–2203.
[16] C. Huang, Y. He, H. Wang, Mean square exponential stability of stochastic recurrent neural networks with time-varying delays, Comput. Math. Appl. 56 (2008) 1773–1778.
[17] R. Rakkiyappan, P. Balasubramaniam, Delay-dependent asymptotic stability for stochastic delayed recurrent neural networks with time varying delays, Appl. Math. Comput. 198 (2008) 526–533.
[18] Z. Wang, J. Fang, X. Liu, Global stability of stochastic high-order neural networks with discrete and distributed delays, Chaos Solitons Fractals 36 (2008) 388–396.
[19] Q. Song, Z. Wang, Stability analysis of impulsive stochastic Cohen–Grossberg neural networks with mixed time delays, Physica A 387 (2008) 3314–3326.
[20] X.D. Li, Existence and global exponential stability of periodic solution for delayed neural networks with impulsive and stochastic effects, Neurocomputing 73 (2010) 749–758.
[21] Y. Ou, H. Liu, Y. Si, Z. Feng, Stability analysis of discrete-time stochastic neural networks with time-varying delays, Neurocomputing 73 (2010) 740–748.
[22] Z. Wang, Y. Liu, L. Yu, X. Liu, Exponential stability of delayed recurrent neural networks with Markovian jumping parameters, Phys. Lett. A 356 (2006) 346–352.
[23] M. Dong, H. Zhang, Y. Wang, Dynamics analysis of impulsive stochastic Cohen–Grossberg neural networks with Markovian jumping and mixed time delays, Neurocomputing 72 (2009) 1999–2004.
[24] W. Han, Y. Liu, L. Wang, Robust exponential stability of Markovian jumping neural networks with mode-dependent delay, Commun. Nonlin. Sci. Numer. Simulat. 15 (2010) 2529–2535.
[25] Q. Zhu, J. Cao, Exponential stability of stochastic neural networks with both Markovian jump parameters and mixed time delays, IEEE Trans. Syst. Man Cybernet. B 41 (2011) 341–353.
[26] Q. Zhu, C. Huang, X. Yang, Exponential stability for stochastic jumping BAM neural networks with time-varying and distributed delays, Nonlin. Anal. Hybrid Syst. 5 (2011) 52–77.
[27] J. Fu, H. Zhang, T. Ma, Q. Zhang, On passivity analysis for stochastic neural networks with interval time-varying delay, Neurocomputing 73 (2010) 795–801.
[28] X.R. Mao, Stochastic Differential Equations and Applications, Horwood Publishing, 1997.
[29] X.R. Mao, A note on the LaSalle-type theorems for stochastic differential delay equations, J. Math. Anal. Appl. 268 (2002) 125–142.
Li Wan received his Ph.D. from Nanjing University, Nanjing, China, and was a Post-Doctoral Fellow in the Department of Mathematics at Huazhong University of Science and Technology, Wuhan, China. Since August 2006, he has been with the Department of Mathematics and Physics at Wuhan Textile University, Wuhan, China. He is the author or coauthor of more than 20 journal papers. His research interests include nonlinear dynamic systems, neural networks and control theory.
Qinghua Zhou received her Ph.D. from Nanjing University, Nanjing, China. Since August 2007, she has been with the Department of Mathematics at Zhaoqing University, Zhaoqing, China. She is the author or coauthor of more than 15 journal papers. Her research interests include nonlinear dynamic systems and neural networks.