

Neurocomputing 74 (2011) 2967–2971


Ultimate boundedness of stochastic Hopfield neural networks with time-varying delays

Li Wan a,*, Qinghua Zhou b, Pei Wang c

a School of Mathematics and Physics, Wuhan Textile University, Wuhan 430073, China
b Department of Mathematics, Zhaoqing University, Zhaoqing 526061, China
c School of Mathematics and Statistics, Wuhan University, Wuhan 430072, China

Article info

Article history:

Received 15 September 2010

Received in revised form 3 January 2011

Accepted 1 April 2011

Communicated by N. Ozcan

Available online 23 May 2011

Keywords:

Stochastic Hopfield neural networks

Time-varying delays

Ultimate boundedness

0925-2312/$ - see front matter © 2011 Elsevier B.V. All rights reserved.
doi:10.1016/j.neucom.2011.04.025
* Corresponding author.
E-mail addresses: [email protected] (L. Wan), [email protected] (Q. Zhou), [email protected] (P. Wang).

Abstract

By employing Lyapunov functional theory as well as linear matrix inequalities, the ultimate boundedness of stochastic Hopfield neural networks (HNN) with time-varying delays is investigated. Sufficient criteria for the ultimate boundedness of stochastic HNN are obtained for the first time, which fills a gap in the literature and includes deterministic systems as a special case. Finally, numerical simulations are presented to illustrate the correctness and effectiveness of our theoretical results.

© 2011 Elsevier B.V. All rights reserved.

1. Introduction

Neural networks are complex dynamical systems with strong backgrounds and various potential real-world applications. Therefore, neural dynamical systems have been extensively investigated [1–16], covering not only stability but also other dynamical behaviors such as uniform boundedness, ultimate boundedness, bifurcation, and chaos.

Boundedness of a dynamical system is one of its foundational properties, and it plays an important role in investigating the uniqueness of equilibria, global asymptotic stability, global exponential stability, the existence of periodic solutions, and so on. Recently, the ultimate boundedness of several classes of neural networks with time delays has been studied. Some sufficient criteria were derived in [11,12], but those results hold only under constant delays. Subsequently, the globally robust ultimate boundedness of integro-differential neural networks with uncertainties and variable delays was studied in Ref. [13]. After that, some sufficient criteria on the ultimate boundedness of neural networks with both variable and unbounded delays were derived in [14], but the systems concerned are deterministic. In Refs. [15,16], a series of criteria on the boundedness, global exponential stability and the existence of periodic solutions for non-autonomous recurrent neural networks were established.


However, to our knowledge, the boundedness of stochastic neural networks with time-varying delays has never been investigated, even though stochastic disturbances are ubiquitous in real nervous systems. It is therefore important and interesting to investigate how stochastic disturbances affect network properties. Recently, many results on stochastic neural networks with delays have been reported in [17–30] and the references therein. However, the available literature mainly considers stability; there are still no results on the ultimate boundedness of stochastic neural networks.

Motivated by the above issues, we investigate the ultimate boundedness of stochastic HNN for the first time. The rest of the paper is organized as follows. Some preliminaries are given in Section 2; our main results and numerical simulations are presented in Sections 3 and 4, respectively; conclusions are drawn in the last section.

2. Preliminaries

Consider the following stochastic HNN with time-varying delays:

$$dx(t) = [-Cx(t) + Af(x(t)) + Bf(x(t-\tau(t))) + J]\,dt + [\sigma_1 x(t) + \sigma_2 x(t-\tau(t))]\,dw(t), \qquad (2.1)$$

in which $x = (x_1,\ldots,x_n)^T$ is the state vector associated with the neurons; $C = \mathrm{diag}(c_1,\ldots,c_n)$, $c_i > 0$, represents the rate with which the $i$th unit resets its potential to the resting state in isolation when disconnected from the network and the external stochastic perturbation; $A = (a_{ij})_{n\times n}$ and $B = (b_{ij})_{n\times n}$ represent the connection weight matrix and the delayed connection weight matrix, respectively; $J = (J_1,\ldots,J_n)^T$, where $J_i$ denotes the external bias on the $i$th unit; $f_j$ denotes the activation function, $f(x(t)) = (f_1(x_1(t)),\ldots,f_n(x_n(t)))^T$; $\sigma_1$ and $\sigma_2$ are diffusion coefficient matrices; $w(t)$ is a one-dimensional Brownian motion (Wiener process) defined on a complete probability space $(\Omega,\mathcal{F},P)$ with a natural filtration $\{\mathcal{F}_t\}_{t\ge 0}$ generated by $\{w(s): 0\le s\le t\}$; $\tau(t)$ is the transmission delay and satisfies

$$0 \le \tau(t) \le \tau, \quad \dot{\tau}(t) \le \mu < 1. \qquad (2.2)$$

Initially, $x(t)$ satisfies

$$x(s) = \xi(s), \quad -\tau \le s \le 0.$$

Here, $\xi(s) = (\xi_1(s),\ldots,\xi_n(s))^T \in C([-\tau,0];R^n)$, $\xi(s)$ is $\mathcal{F}_0$-measurable and satisfies

$$\|\xi\|_{\tau}^2 = \sup_{-\tau\le s\le 0} E\|\xi(s)\|^2 < \infty,$$

where $\|\cdot\|$ is the Euclidean norm and $C([-\tau,0];R^n)$ is the space of all continuous $R^n$-valued functions defined on $[-\tau,0]$.
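For intuition, the delayed SDE (2.1) can be simulated with an Euler–Maruyama scheme that keeps a history buffer for the delayed state. The sketch below is a minimal scalar illustration: the coefficients `c`, `a`, `b`, `J`, `s1`, `s2`, the activation `f`, and the constant delay are hypothetical choices, not data from this paper.

```python
import numpy as np

def euler_maruyama_delayed(T=10.0, dt=0.001, tau=1.0, seed=0):
    """Simulate a scalar instance of dx = [-c x + a f(x) + b f(x_tau) + J] dt
    + [s1 x + s2 x_tau] dw with constant delay tau (illustrative parameters)."""
    rng = np.random.default_rng(seed)
    c, a, b, J = 1.2, -0.1, 0.1, 0.5        # hypothetical drift coefficients
    s1, s2 = 0.2, 0.1                        # hypothetical diffusion coefficients
    f = lambda x: x + np.sin(x)              # sector-bounded activation
    d = int(round(tau / dt))                 # delay expressed in steps
    n = int(round(T / dt))
    x = np.empty(n + d + 1)
    x[: d + 1] = 0.5                         # constant initial history on [-tau, 0]
    for k in range(d, d + n):
        xt, xlag = x[k], x[k - d]
        dw = rng.normal(0.0, np.sqrt(dt))    # Brownian increment over dt
        drift = -c * xt + a * f(xt) + b * f(xlag) + J
        diff = s1 * xt + s2 * xlag
        x[k + 1] = xt + drift * dt + diff * dw
    return x[d:]                             # trajectory on [0, T]

traj = euler_maruyama_delayed()
print(traj.shape, np.isfinite(traj).all())
```

The history buffer `x[k - d]` plays the role of $x(t-\tau(t))$ for a constant delay; a time-varying delay would index the buffer with `int(round(tau_of_t / dt))` instead.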

Throughout this paper, we suppose the following assumption holds.

(A1) For $f(x(t))$ in (2.1), there exist constants $l_i^{+}$ and $l_i^{-}$ such that

$$l_i^{-} \le \frac{f_i(x)-f_i(y)}{x-y} \le l_i^{+}, \quad \forall x,y \in R,\ x \ne y.$$

It follows from [31] that under assumption (A1), system (2.1) has a global solution for $t \ge 0$. Additionally, we note that assumption (A1) is less conservative than those in [17,21], since the constants $l_i^{+}$ and $l_i^{-}$ are allowed to be positive, negative or zero; that is, the activation function in assumption (A1) is required to be neither monotonic, nor differentiable, nor even bounded.

In the following, $A>0$ (respectively, $A\ge 0$) means that the matrix $A$ is symmetric positive definite (respectively, positive semi-definite). $A^T$ and $A^{-1}$ denote the transpose and the inverse of the matrix $A$. $\lambda_{\max}(A)$ and $\lambda_{\min}(A)$ denote the maximum and minimum eigenvalues of $A$, respectively.

To begin with, we introduce the following definition and a preliminary lemma.

Definition 2.1. System (2.1) is said to be stochastically ultimately bounded if for any $\varepsilon \in (0,1)$, there is a positive constant $C = C(\varepsilon)$ such that the solution $x(t)$ of system (2.1) satisfies

$$\limsup_{t\to\infty} P\{\|x(t)\| \le C\} \ge 1-\varepsilon.$$

Lemma 2.2 (Boyd et al. [32]). Let $Q(x) = Q^T(x)$, $R(x) = R^T(x)$ and $S(x)$ depend affinely on $x$. Then the linear matrix inequality

$$\begin{pmatrix} Q(x) & S(x) \\ S^T(x) & R(x) \end{pmatrix} > 0$$

is equivalent to either of the following:

(1) $R(x) > 0$, $Q(x) - S(x)R^{-1}(x)S^T(x) > 0$;
(2) $Q(x) > 0$, $R(x) - S^T(x)Q^{-1}(x)S(x) > 0$.
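Lemma 2.2 is the well-known Schur complement lemma. Its equivalent conditions are easy to check numerically on a concrete positive definite block matrix; the matrices below are an arbitrary illustrative choice, not taken from the paper.

```python
import numpy as np

# An arbitrary symmetric block matrix [[Q, S], [S^T, R]] chosen to be positive definite.
Q = np.array([[2.0, 0.0], [0.0, 2.0]])
R = np.array([[3.0]])
S = np.array([[1.0], [1.0]])

M = np.block([[Q, S], [S.T, R]])

# The full block matrix is positive definite ...
assert np.linalg.eigvalsh(M).min() > 0

# ... which by Lemma 2.2(1) is equivalent to R > 0 and Q - S R^{-1} S^T > 0,
schur1 = Q - S @ np.linalg.inv(R) @ S.T
assert np.linalg.eigvalsh(schur1).min() > 0

# ... and by Lemma 2.2(2) to Q > 0 and R - S^T Q^{-1} S > 0.
schur2 = R - S.T @ np.linalg.inv(Q) @ S
assert np.linalg.eigvalsh(schur2).min() > 0
print("Schur complement conditions verified")
```

This is exactly the manipulation used at the start of the proof of Theorem 3.1 to fold the $\sigma_i^T P$ column of (A2) into a quadratic term.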

3. Main results

In this section, we will show that the solution of system (2.1) is stochastically ultimately bounded. To this end, we first prove the following theorem.

Theorem 3.1. Suppose that there exist matrices $P > 0$, $Q_i > 0$ $(i=1,2,3,4)$, $U_1 = \mathrm{diag}(u_{11},\ldots,u_{1n}) \ge 0$ and $U_2 = \mathrm{diag}(u_{21},\ldots,u_{2n}) \ge 0$ such that the following linear matrix inequality holds:

(A2)
$$\begin{pmatrix}
\Delta & 0 & PA+L_2U_1 & PB & \sigma_1^T P\\
* & -(1-\mu)Q_1-2L_1U_2 & 0 & L_2U_2 & \sigma_2^T P\\
* & * & Q_3+\tau Q_4-2U_1 & 0 & 0\\
* & * & * & -(1-\mu)Q_3-2U_2 & 0\\
* & * & * & * & -P
\end{pmatrix} < 0,$$

where $\Delta = Q_1+\tau Q_2-PC-CP-2L_1U_1$, $L_1 = \mathrm{diag}(l_1^- l_1^+,\ldots,l_n^- l_n^+)$, $L_2 = \mathrm{diag}(l_1^- + l_1^+,\ldots,l_n^- + l_n^+)$, and $*$ denotes the corresponding symmetric terms.

Then there is a positive constant $C^*$, independent of the initial data, such that the solution $x(t)$ of system (2.1) satisfies

$$\limsup_{t\to\infty} E\|x(t)\|^2 \le C^*. \qquad (3.1)$$

Proof. From (A2) and Lemma 2.2, one easily obtains

$$\begin{pmatrix}
\Delta & 0 & PA+L_2U_1 & PB\\
* & -(1-\mu)Q_1-2L_1U_2 & 0 & L_2U_2\\
* & * & Q_3+\tau Q_4-2U_1 & 0\\
* & * & * & -(1-\mu)Q_3-2U_2
\end{pmatrix}
+ \begin{pmatrix}\sigma_1^T P\\ \sigma_2^T P\\ 0\\ 0\end{pmatrix} P^{-1} \begin{pmatrix}\sigma_1^T P\\ \sigma_2^T P\\ 0\\ 0\end{pmatrix}^T < 0.$$

Hence, there exists a sufficiently small $\lambda > 0$ such that

$$\Sigma_1 = \begin{pmatrix}
\Delta_1 & 0 & PA+L_2U_1 & PB\\
* & \lambda I-(1-\mu)Q_1-2L_1U_2 & 0 & L_2U_2\\
* & * & \lambda I+e^{\lambda\tau}Q_3+\tau Q_4-2U_1 & 0\\
* & * & * & \lambda I-(1-\mu)Q_3-2U_2
\end{pmatrix}
+ \begin{pmatrix}\sigma_1^T P\\ \sigma_2^T P\\ 0\\ 0\end{pmatrix} P^{-1} \begin{pmatrix}\sigma_1^T P\\ \sigma_2^T P\\ 0\\ 0\end{pmatrix}^T < 0,$$

where $\Delta_1 = e^{\lambda\tau}Q_1+\tau Q_2+2\lambda P+\lambda I-PC-CP-2L_1U_1$.

We consider the following Lyapunov functional:

$$V(t) = e^{\lambda t}x^T(t)Px(t) + \int_{t-\tau(t)}^{t} e^{\lambda(s+\tau)}\big[x^T(s)Q_1x(s) + f^T(x(s))Q_3 f(x(s))\big]\,ds$$
$$\qquad + \int_{t-\tau(t)}^{t}\int_{s}^{t} e^{\lambda\theta}\big[x^T(\theta)Q_2 x(\theta) + f^T(x(\theta))Q_4 f(x(\theta))\big]\,d\theta\,ds. \qquad (3.2)$$

Computing the Itô differential of $V(t)$ along system (2.1), one gets

$$dV(t) = M_1(t)\,dw(t) + M_2(t)\,dt + M_3(t)\,dt, \qquad (3.3)$$

where

$$M_1(t) = 2e^{\lambda t}x^T(t)P[\sigma_1 x(t)+\sigma_2 x(t-\tau(t))],$$

$$M_2(t) = e^{\lambda(t+\tau)}\big[x^T(t)Q_1x(t)+f^T(x(t))Q_3f(x(t))\big] - (1-\dot{\tau}(t))e^{\lambda(t-\tau(t)+\tau)}\big[x^T(t-\tau(t))Q_1x(t-\tau(t))+f^T(x(t-\tau(t)))Q_3f(x(t-\tau(t)))\big]$$
$$\qquad + e^{\lambda t}\tau(t)\big[x^T(t)Q_2x(t)+f^T(x(t))Q_4f(x(t))\big] - (1-\dot{\tau}(t))\int_{t-\tau(t)}^{t} e^{\lambda s}\big[x^T(s)Q_2x(s)+f^T(x(s))Q_4f(x(s))\big]\,ds$$
$$\le e^{\lambda(t+\tau)}\big[x^T(t)Q_1x(t)+f^T(x(t))Q_3f(x(t))\big] - (1-\mu)e^{\lambda t}\big[x^T(t-\tau(t))Q_1x(t-\tau(t))+f^T(x(t-\tau(t)))Q_3f(x(t-\tau(t)))\big]$$
$$\qquad + e^{\lambda t}\tau\big[x^T(t)Q_2x(t)+f^T(x(t))Q_4f(x(t))\big], \qquad (3.4)$$

$$M_3(t) = \lambda e^{\lambda t}x^T(t)Px(t) + 2e^{\lambda t}x^T(t)P\big[-Cx(t)+Af(x(t))+Bf(x(t-\tau(t)))+J\big]$$
$$\qquad + e^{\lambda t}\big[\sigma_1x(t)+\sigma_2x(t-\tau(t))\big]^T P\big[\sigma_1x(t)+\sigma_2x(t-\tau(t))\big]$$
$$\le \lambda e^{\lambda t}x^T(t)Px(t) + 2e^{\lambda t}x^T(t)P\big[-Cx(t)+Af(x(t))+Bf(x(t-\tau(t)))\big] + e^{\lambda t}\big[\lambda x^T(t)Px(t)+\lambda^{-1}J^TPJ\big]$$
$$\qquad + e^{\lambda t}\big[\sigma_1x(t)+\sigma_2x(t-\tau(t))\big]^T P\big[\sigma_1x(t)+\sigma_2x(t-\tau(t))\big]. \qquad (3.5)$$

Here we have used the fact that $P = P^T > 0$ and $2x^TPJ \le \lambda x^TPx + \lambda^{-1}J^TP^TP^{-1}PJ = \lambda x^TPx + \lambda^{-1}J^TPJ$ for $\lambda > 0$. From (A1), we then derive, for $i = 1,\ldots,n$,

$$\big[f_i(x_i(t))-f_i(0)-l_i^{+}x_i(t)\big]\big[f_i(x_i(t))-f_i(0)-l_i^{-}x_i(t)\big] \le 0, \qquad (3.6)$$

$$\big[f_i(x_i(t-\tau(t)))-f_i(0)-l_i^{+}x_i(t-\tau(t))\big]\big[f_i(x_i(t-\tau(t)))-f_i(0)-l_i^{-}x_i(t-\tau(t))\big] \le 0. \qquad (3.7)$$

From (3.3)–(3.7), we have

$$dV(t) \le M_1(t)\,dw(t) + M_2(t)\,dt + M_3(t)\,dt + e^{\lambda t}\Big\{-2\sum_{i=1}^{n} u_{1i}\big[f_i(x_i(t))-f_i(0)-l_i^{+}x_i(t)\big]\big[f_i(x_i(t))-f_i(0)-l_i^{-}x_i(t)\big]$$
$$\qquad -2\sum_{i=1}^{n} u_{2i}\big[f_i(x_i(t-\tau(t)))-f_i(0)-l_i^{+}x_i(t-\tau(t))\big]\big[f_i(x_i(t-\tau(t)))-f_i(0)-l_i^{-}x_i(t-\tau(t))\big]\Big\}\,dt$$

$$= M_1(t)\,dw(t) + M_2(t)\,dt + M_3(t)\,dt + e^{\lambda t}\Big\{-2\sum_{i=1}^{n} u_{1i}\big[f_i(x_i(t))-l_i^{+}x_i(t)\big]\big[f_i(x_i(t))-l_i^{-}x_i(t)\big]$$
$$\qquad -2\sum_{i=1}^{n} u_{2i}\big[f_i(x_i(t-\tau(t)))-l_i^{+}x_i(t-\tau(t))\big]\big[f_i(x_i(t-\tau(t)))-l_i^{-}x_i(t-\tau(t))\big]$$
$$\qquad -2\sum_{i=1}^{n} u_{1i}f_i^2(0) + 2\sum_{i=1}^{n} u_{1i}f_i(0)\big[2f_i(x_i(t))-(l_i^{+}+l_i^{-})x_i(t)\big]$$
$$\qquad -2\sum_{i=1}^{n} u_{2i}f_i^2(0) + 2\sum_{i=1}^{n} u_{2i}f_i(0)\big[2f_i(x_i(t-\tau(t)))-(l_i^{+}+l_i^{-})x_i(t-\tau(t))\big]\Big\}\,dt$$

$$\le M_1(t)\,dw(t) + M_2(t)\,dt + M_3(t)\,dt + e^{\lambda t}\Big\{-2\sum_{i=1}^{n} u_{1i}\big[f_i(x_i(t))-l_i^{+}x_i(t)\big]\big[f_i(x_i(t))-l_i^{-}x_i(t)\big]$$
$$\qquad -2\sum_{i=1}^{n} u_{2i}\big[f_i(x_i(t-\tau(t)))-l_i^{+}x_i(t-\tau(t))\big]\big[f_i(x_i(t-\tau(t)))-l_i^{-}x_i(t-\tau(t))\big]$$
$$\qquad +\sum_{i=1}^{n}\big[\,|4u_{1i}f_i(0)f_i(x_i(t))| + |2u_{1i}f_i(0)(l_i^{+}+l_i^{-})x_i(t)|\,\big]$$
$$\qquad +\sum_{i=1}^{n}\big[\,|4u_{2i}f_i(0)f_i(x_i(t-\tau(t)))| + |2u_{2i}f_i(0)(l_i^{+}+l_i^{-})x_i(t-\tau(t))|\,\big]\Big\}\,dt$$

$$\le M_1(t)\,dw(t) + M_2(t)\,dt + M_3(t)\,dt + e^{\lambda t}\Big\{-2\sum_{i=1}^{n} u_{1i}\big[f_i(x_i(t))-l_i^{+}x_i(t)\big]\big[f_i(x_i(t))-l_i^{-}x_i(t)\big]$$
$$\qquad -2\sum_{i=1}^{n} u_{2i}\big[f_i(x_i(t-\tau(t)))-l_i^{+}x_i(t-\tau(t))\big]\big[f_i(x_i(t-\tau(t)))-l_i^{-}x_i(t-\tau(t))\big]$$
$$\qquad +\sum_{i=1}^{n}\big[\lambda f_i^2(x_i(t)) + 4\lambda^{-1}f_i^2(0)u_{1i}^2 + \lambda x_i^2(t) + \lambda^{-1}f_i^2(0)u_{1i}^2(l_i^{+}+l_i^{-})^2\big]$$
$$\qquad +\sum_{i=1}^{n}\big[\lambda f_i^2(x_i(t-\tau(t))) + 4\lambda^{-1}f_i^2(0)u_{2i}^2 + \lambda x_i^2(t-\tau(t)) + \lambda^{-1}f_i^2(0)u_{2i}^2(l_i^{+}+l_i^{-})^2\big]\Big\}\,dt$$

$$\le M_1(t)\,dw(t) + e^{\lambda t}\eta^T(t)\Sigma_1\eta(t)\,dt + e^{\lambda t}C_1\,dt \le M_1(t)\,dw(t) + e^{\lambda t}C_1\,dt,$$

where $\eta(t) = \big(x^T(t),\, x^T(t-\tau(t)),\, f^T(x(t)),\, f^T(x(t-\tau(t)))\big)^T$ and

$$C_1 = \lambda^{-1}J^TPJ + \sum_{i=1}^{n}\big[4\lambda^{-1}f_i^2(0)u_{1i}^2 + \lambda^{-1}f_i^2(0)u_{1i}^2(l_i^{+}+l_i^{-})^2 + 4\lambda^{-1}f_i^2(0)u_{2i}^2 + \lambda^{-1}f_i^2(0)u_{2i}^2(l_i^{+}+l_i^{-})^2\big]. \qquad (3.8)$$

Thus, one obtains

$$V(t) \le V(0) + \int_0^t M_1(s)\,dw(s) + e^{\lambda t}\lambda^{-1}C_1 \qquad (3.9)$$

and

$$E\|x(t)\|^2 \le \frac{e^{-\lambda t}EV(0)+\lambda^{-1}C_1}{\lambda_{\min}(P)} \le \frac{e^{-\lambda t}EV(0)}{\lambda_{\min}(P)} + C^*, \qquad (3.10)$$

where $C^* = \lambda^{-1}C_1/\lambda_{\min}(P)$. Eq. (3.10) implies that (3.1) holds. The proof is thus completed. □

From the above theorem, one can easily obtain the following theorem.

Theorem 3.2. Under the conditions of Theorem 3.1, the solution of system (2.1) is stochastically ultimately bounded.

Proof. From Theorem 3.1, we know that there is $C^* > 0$ such that

$$\limsup_{t\to\infty} E\|x(t)\|^2 \le C^*.$$

For any $\varepsilon > 0$, set $C = \sqrt{C^*/\varepsilon}$. By Chebyshev's inequality, one obtains

$$P\{\|x(t)\| > C\} \le E\|x(t)\|^2/C^2.$$

Thus we have

$$\limsup_{t\to\infty} P\{\|x(t)\| > C\} \le C^*/C^2 = \varepsilon,$$

which implies

$$\limsup_{t\to\infty} P\{\|x(t)\| \le C\} \ge 1-\varepsilon. \qquad \square$$
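The proof converts the mean-square bound of Theorem 3.1 into a probabilistic bound purely through Chebyshev's inequality, so the radius $C = \sqrt{C^*/\varepsilon}$ can be computed directly. A small sketch with hypothetical values of $C^*$ and $\varepsilon$:

```python
import math

def ultimate_radius(c_star, eps):
    """Radius C = sqrt(C*/eps) such that limsup P{||x(t)|| <= C} >= 1 - eps."""
    return math.sqrt(c_star / eps)

# Hypothetical mean-square bound C* = 4: in the limit, the solution stays
# within radius 20 with probability at least 0.99.
print(ultimate_radius(4.0, 0.01))   # -> 20.0
```

Tightening $\varepsilon$ enlarges the radius only like $\varepsilon^{-1/2}$, which is the usual price of the Chebyshev argument.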

Remark 3.3. From (3.10), we obtain

$$\limsup_{T\to\infty} \frac{1}{T}\int_0^T E\|x(t)\|^2\,dt \le C^*,$$

which implies that the solution of system (2.1) is bounded in mean square.

Remark 3.4. Assume that $J=0$ and $f(0)=0$. Then system (2.1) has the trivial solution $x(t)\equiv 0$. Under the conditions of Theorem 3.1, we can prove that the zero solution of system (2.1) is mean-square exponentially stable and almost surely exponentially stable by (3.9), (3.10) and the semi-martingale convergence theorem employed in [18,20,30].

When $\sigma_1 = \sigma_2 = 0$, system (2.1) becomes the following deterministic system:

$$\frac{dx(t)}{dt} = -Cx(t) + Af(x(t)) + Bf(x(t-\tau(t))) + J. \qquad (3.11)$$

Definition 3.5. System (3.11) is said to be uniformly bounded if, for each $H>0$, there exists a constant $M = M(H) > 0$ such that $t_0 \in R$, $\phi \in C([-\tau,0];R^n)$, $\|\phi\| \le H$, $t > t_0$ imply $\|x(t,t_0,\phi)\| \le M$, where $\|\phi\| = \sup_{t\in[-\tau,0]}\|\phi(t)\|$.

Theorem 3.6. Suppose that there exist matrices $P > 0$, $Q_i > 0$ $(i=1,2,3,4)$, $U_1 = \mathrm{diag}(u_{11},\ldots,u_{1n}) \ge 0$ and $U_2 = \mathrm{diag}(u_{21},\ldots,u_{2n}) \ge 0$ such that the following linear matrix inequality holds:

(A3)
$$\begin{pmatrix}
\Delta & 0 & PA+L_2U_1 & PB\\
* & -(1-\mu)Q_1-2L_1U_2 & 0 & L_2U_2\\
* & * & Q_3+\tau Q_4-2U_1 & 0\\
* & * & * & -(1-\mu)Q_3-2U_2
\end{pmatrix} < 0,$$

where $\Delta = Q_1+\tau Q_2-PC-CP-2L_1U_1$, $L_1 = \mathrm{diag}(l_1^- l_1^+,\ldots,l_n^- l_n^+)$, $L_2 = \mathrm{diag}(l_1^- + l_1^+,\ldots,l_n^- + l_n^+)$, and $*$ again denotes the symmetric terms.

Then system (3.11) is uniformly bounded.

Proof. From (A3), there exists a sufficiently small $\lambda > 0$ such that $\Sigma_1 < 0$, where

$$\Sigma_1 = \begin{pmatrix}
\Delta_1 & 0 & PA+L_2U_1 & PB\\
* & \lambda I-(1-\mu)Q_1-2L_1U_2 & 0 & L_2U_2\\
* & * & \lambda I+e^{\lambda\tau}Q_3+\tau Q_4-2U_1 & 0\\
* & * & * & \lambda I-(1-\mu)Q_3-2U_2
\end{pmatrix},$$

$$\Delta_1 = e^{\lambda\tau}Q_1+\tau Q_2+2\lambda P+\lambda I-PC-CP-2L_1U_1.$$

We still consider the Lyapunov functional $V(t)$ in (3.2). From (3.2)–(3.9), we can obtain

$$\|x(t)\|^2 \le \lambda_{\min}^{-1}(P)\big(e^{-\lambda t}V(0)+\lambda^{-1}C_1\big) \le \lambda_{\min}^{-1}(P)\big(V(0)+\lambda^{-1}C_1\big)$$
$$\le \lambda_{\min}^{-1}(P)\Big\{\lambda^{-1}C_1 + \big[\lambda_{\max}(P)+e^{\lambda\tau}\tau\lambda_{\max}(Q_1)+\tau^2\lambda_{\max}(Q_2)\big]\|\xi\|^2 + \big[e^{\lambda\tau}\lambda_{\max}(Q_3)+\tau\lambda_{\max}(Q_4)\big]\int_{-\tau}^{0}\|f(x(s))\|^2\,ds\Big\}$$
$$\le \lambda_{\min}^{-1}(P)\Big\{\lambda^{-1}C_1 + \big[\lambda_{\max}(P)+e^{\lambda\tau}\tau\lambda_{\max}(Q_1)+\tau^2\lambda_{\max}(Q_2)\big]\|\xi\|^2$$
$$\qquad + \big[e^{\lambda\tau}\lambda_{\max}(Q_3)+\tau\lambda_{\max}(Q_4)\big]\Big(2\tau\max_{1\le i\le n}\{(l_i^{-})^2,(l_i^{+})^2\}\|\xi\|^2 + 2\tau\|f(0)\|^2\Big)\Big\},$$

where $\|\xi\|^2 = \sup_{t\in[-\tau,0]}\|\xi(t)\|^2$, which implies that system (3.11) is uniformly bounded. □

4. Numerical simulations

To demonstrate the effectiveness and correctness of our theoretical results, a numerical example is given below.

Example 1. Consider system (2.1) with $J = (0,1)^T$ and

$$A = \begin{pmatrix}-0.1 & 0.4\\ 0.2 & -0.5\end{pmatrix}, \quad B = \begin{pmatrix}0.1 & -1\\ -1.4 & 0.4\end{pmatrix}, \quad C = \begin{pmatrix}1.2 & 0\\ 0 & 1.15\end{pmatrix},$$

$$\sigma_1 = \begin{pmatrix}0.23 & 0.1\\ 0.3 & 0.2\end{pmatrix}, \quad \sigma_2 = \begin{pmatrix}0.1 & -0.2\\ 0.2 & 0.3\end{pmatrix}.$$

The activation functions $f_i(x_i) = x_i + \sin(x_i)$ $(i=1,2)$ satisfy $l_1^- = l_2^- = 0$, $l_1^+ = l_2^+ = 1$. Then we compute $L_1 = 0$, $L_2 = \mathrm{diag}(1,1)$. Using Matlab's LMI Control Toolbox [32], for $\mu = 0.0035$ and $\tau = 1$, Theorem 3.1 shows that the system is stochastically ultimately bounded when $P$, $U_1$, $U_2$, $Q_1$, $Q_2$, $Q_3$ and $Q_4$ satisfy

$$P = \begin{pmatrix}178.2931 & 18.2023\\ 18.2023 & 144.2437\end{pmatrix}, \quad U_1 = \begin{pmatrix}113.8550 & 0\\ 0 & 116.4073\end{pmatrix},$$

$$U_2 = \begin{pmatrix}97.8886 & 0\\ 0 & 62.3666\end{pmatrix}, \quad Q_1 = \begin{pmatrix}105.3748 & 0.3123\\ 0.3123 & 77.9345\end{pmatrix},$$

$$Q_2 = \begin{pmatrix}20.6282 & -0.1193\\ -0.1193 & 18.6245\end{pmatrix}, \quad Q_3 = \begin{pmatrix}111.0239 & -38.5842\\ -38.5842 & 129.9318\end{pmatrix},$$

$$Q_4 = \begin{pmatrix}20.3759 & -3.1185\\ -3.1185 & 24.2611\end{pmatrix}.$$

Fig. 1a shows the time evolution of the solution of Example 1 and Fig. 1b shows its phase portrait, in which $\tau = 1$ and the initial values are taken as $x_1(t) = -0.5$, $x_2(t) = 0.5$ for $-\tau \le t \le 0$. The boundedness of the solution is evident.
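As a cross-check on Fig. 1, Example 1 can be re-simulated with an Euler–Maruyama discretization of (2.1) using the matrices above. This is only a sketch under the assumptions of a constant delay $\tau(t) \equiv 1$, a single scalar Brownian motion as in (2.1), and the stated constant initial history; it is not the authors' original simulation code.

```python
import numpy as np

# Data of Example 1.
A = np.array([[-0.1, 0.4], [0.2, -0.5]])
B = np.array([[0.1, -1.0], [-1.4, 0.4]])
C = np.array([[1.2, 0.0], [0.0, 1.15]])
J = np.array([0.0, 1.0])
s1 = np.array([[0.23, 0.1], [0.3, 0.2]])
s2 = np.array([[0.1, -0.2], [0.2, 0.3]])
f = lambda x: x + np.sin(x)                  # activation of Example 1

def simulate(T=20.0, dt=0.001, tau=1.0, seed=1):
    rng = np.random.default_rng(seed)
    d = int(round(tau / dt))                 # delay in steps
    n = int(round(T / dt))
    x = np.empty((n + d + 1, 2))
    x[: d + 1] = np.array([-0.5, 0.5])       # initial history on [-tau, 0]
    for k in range(d, d + n):
        xt, xlag = x[k], x[k - d]
        dw = rng.normal(0.0, np.sqrt(dt))    # scalar Brownian increment
        drift = -C @ xt + A @ f(xt) + B @ f(xlag) + J
        x[k + 1] = xt + drift * dt + (s1 @ xt + s2 @ xlag) * dw
    return x[d:]

traj = simulate()
print(np.linalg.norm(traj, axis=1).max())    # trajectory remains bounded
```

Plotting `traj[:, 0]` against `traj[:, 1]` reproduces a phase portrait of the kind shown in Fig. 1b.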

Example 2. Consider system (3.11) with the above matrices $A$, $B$, $C$, $J$, $L_1$, $L_2$. Using Matlab's LMI Control Toolbox [32], for $\mu = 0.0035$ and $\tau = 1$, Theorem 3.6 shows that the system is ultimately bounded when $P$, $U_1$, $U_2$, $Q_1$, $Q_2$, $Q_3$ and $Q_4$ satisfy

$$P = \begin{pmatrix}245.3575 & 23.7444\\ 23.7444 & 191.7488\end{pmatrix}, \quad U_1 = \begin{pmatrix}158.7002 & 0\\ 0 & 158.1952\end{pmatrix},$$

$$U_2 = \begin{pmatrix}129.7682 & 0\\ 0 & 88.2257\end{pmatrix}, \quad Q_1 = \begin{pmatrix}147.2493 & 2.9341\\ 2.9341 & 96.2711\end{pmatrix},$$

$$Q_2 = \begin{pmatrix}35.1520 & 0.3407\\ 0.3407 & 29.5580\end{pmatrix}, \quad Q_3 = \begin{pmatrix}149.3715 & -48.3589\\ -48.3589 & 167.3431\end{pmatrix},$$

$$Q_4 = \begin{pmatrix}32.8923 & -5.3518\\ -5.3518 & 37.4534\end{pmatrix}.$$

Remark 4.1. We note that the criteria of Theorem 3.1 in [33] are not applicable to ascertain the boundedness of system (3.11), since there are no positive numbers $\xi_1, \xi_2$ such that

$$\sum_{j=1}^{2}\frac{\xi_j(|a_{j1}|+|b_{j1}|)}{\xi_1 c_j} = \frac{0.2\xi_1}{1.2\xi_1} + \frac{1.6\xi_2}{1.15\xi_1} < 1,$$

$$\sum_{j=1}^{2}\frac{\xi_j(|a_{j2}|+|b_{j2}|)}{\xi_2 c_j} = \frac{1.4\xi_1}{1.2\xi_2} + \frac{0.9\xi_2}{1.15\xi_2} < 1.$$

This shows that the criteria proposed in this paper are less conservative than those in [33].
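Both left-hand sides above depend only on the ratio $r = \xi_1/\xi_2$, so the claimed infeasibility can be confirmed by a numerical sweep over that ratio. This check is an illustrative addition, not part of the original paper:

```python
import numpy as np

# Left-hand sides of the two conditions from [33], written in terms of r = xi1/xi2.
lhs1 = lambda r: 0.2 / 1.2 + 1.6 / (1.15 * r)   # < 1 requires r > 9.6/5.75 ~ 1.67
lhs2 = lambda r: 1.4 * r / 1.2 + 0.9 / 1.15     # < 1 requires r < ~0.19

# No positive ratio can satisfy both conditions simultaneously.
ratios = np.logspace(-6, 6, 100000)
feasible = (lhs1(ratios) < 1) & (lhs2(ratios) < 1)
print(feasible.any())   # -> False
```

The two constraints demand incompatible ranges of $r$, which is exactly the infeasibility asserted in the remark.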

5. Conclusions

A proper Lyapunov functional and linear matrix inequalities are employed in this work to investigate the ultimate boundedness of stochastic HNN with time-varying delays, and several sufficient conditions are derived. From the proposed sufficient conditions, one can easily prove that the zero solution of such a network is mean-square exponentially stable and almost surely exponentially stable by applying the semi-martingale convergence theorem. Our investigation is more realistic and fills a gap concerning the boundedness of stochastic HNN; it may therefore have potential real-world applications.

Acknowledgments

The authors would like to thank the editor and the reviewers for their insightful comments and valuable suggestions. This work was supported by the National Natural Science Foundation of China (Nos. 10801109, 10926128 and 11047114), the Science and Technology Research Projects of Hubei Provincial Department of Education (Q20091705) and the Young Talent Cultivation Projects of Guangdong (LYM09134).

References

[1] C. Bai, Stability analysis of Cohen–Grossberg BAM neural networks with delays and impulses, Chaos, Solitons & Fractals 35 (2) (2008) 263–267.

[2] J. Cao, J. Wang, Global asymptotic stability of a general class of recurrent neural networks with time-varying delays, IEEE Transactions on Circuits and Systems I 50 (1) (2003) 34–44.

[3] O.M. Kwon, J.H. Park, Delay-dependent stability for uncertain cellular neural networks with discrete and distributed time-varying delays, Journal of the Franklin Institute 345 (2008) 766–778.

[4] J. Qiu, J. Cao, Delay-dependent exponential stability for a class of neural networks with time delays and reaction–diffusion terms, Journal of the Franklin Institute 346 (2009) 301–314.

[5] Q. Song, J. Cao, Global exponential robust stability of Cohen–Grossberg neural network with time-varying delays and reaction–diffusion terms, Journal of the Franklin Institute 343 (2006) 705–719.

[6] Y.J. Shen, LMI-based stability criteria with auxiliary matrices for delayed recurrent neural networks, IEEE Transactions on Circuits and Systems II: Express Briefs 55 (2008) 811–815.

[7] X. Yang, F. Li, Y. Long, X. Cui, Existence of periodic solution for discrete-time cellular neural networks with complex deviating arguments and impulses, Journal of the Franklin Institute 347 (2010) 559–566.

[8] Z. Zhang, D. Zhou, Existence and global exponential stability of a periodic solution for a discrete-time interval general BAM neural networks, Journal of the Franklin Institute 347 (2010) 763–780.

[9] W. He, J. Cao, Stability and bifurcation of a class of discrete-time neural networks, Applied Mathematical Modelling 31 (2007) 2111–2122.

[10] H.Y. Zhao, L. Wang, C.X. Ma, Hopf bifurcation and stability analysis on discrete-time Hopfield neural network with delay, Nonlinear Analysis: Real World Applications 9 (2008) 103–113.

[11] X. Liao, J. Wang, Global dissipativity of continuous-time recurrent neural networks with time delay, Physical Review E 68 (2003) 1–7.

[12] S. Arik, On the global dissipativity of dynamical neural networks with time delays, Physics Letters A 326 (2004) 126–132.

[13] Y. Xu, B. Cui, Global robust dissipativity for integro-differential systems modeling neural networks with delays, Chaos, Solitons & Fractals 36 (2008) 469–478.

[14] Q. Song, Z. Zhao, Global dissipativity of neural networks with both variable and unbounded delays, Chaos, Solitons & Fractals 25 (2005) 393–401.

[15] H. Jiang, Z. Teng, Global exponential stability of cellular neural networks with time-varying coefficients and delays, Neural Networks 17 (2004) 1415–1425.

[16] H. Jiang, Z. Teng, Boundedness, periodic solutions and global stability for cellular neural networks with variable coefficients and infinite delays, Neurocomputing 72 (2009) 2455–2463.

[17] W.H. Chen, X.M. Lu, Mean square exponential stability of uncertain stochastic delayed neural networks, Physics Letters A 372 (7) (2008) 1061–1069.

[18] C. Huang, J. Cao, Almost sure exponential stability of stochastic cellular neural networks with unbounded distributed delays, Neurocomputing 72 (2009) 3352–3356.

[19] C. Huang, J. Cao, On pth moment exponential stability of stochastic Cohen–Grossberg neural networks with time-varying delays, Neurocomputing 73 (2010) 986–990.

[20] C. Huang, P. Chen, Y. He, L. Huang, W. Tan, Almost sure exponential stability of delayed Hopfield neural networks, Applied Mathematics Letters 21 (2008) 701–705.

[21] H. Huang, G. Feng, Delay-dependent stability for uncertain stochastic neural networks with time-varying delay, Physica A 381 (15) (2007) 93–103.

[22] C. Huang, Y. He, H. Wang, Mean square exponential stability of stochastic recurrent neural networks with time-varying delays, Computers and Mathematics with Applications 56 (2008) 1773–1778.

[23] R. Rakkiyappan, P. Balasubramaniam, Delay-dependent asymptotic stability for stochastic delayed recurrent neural networks with time varying delays, Applied Mathematics and Computation 198 (2008) 526–533.

[24] Y. Sun, J. Cao, pth moment exponential stability of stochastic recurrent neural networks with time-varying delays, Nonlinear Analysis: Real World Applications 8 (2007) 1171–1185.

[25] Q. Song, Z. Wang, Stability analysis of impulsive stochastic Cohen–Grossberg neural networks with mixed time delays, Physica A 387 (2008) 3314–3326.

[26] Z. Wang, J. Fang, X. Liu, Global stability of stochastic high-order neural networks with discrete and distributed delays, Chaos, Solitons & Fractals 36 (2008) 388–396.

[27] X.D. Li, Existence and global exponential stability of periodic solution for delayed neural networks with impulsive and stochastic effects, Neurocomputing 73 (2010) 749–758.

[28] J. Fu, H. Zhang, T.D. Ma, Q. Zhang, On passivity analysis for stochastic neural networks with interval time-varying delay, Neurocomputing 73 (2010) 795–801.

[29] Y. Ou, H.Y. Liu, Y.L. Si, Z.G. Feng, Stability analysis of discrete-time stochastic neural networks with time-varying delays, Neurocomputing 73 (2010) 740–748.

[30] H. Zhao, N. Ding, L. Chen, Almost sure exponential stability of stochastic fuzzy cellular neural networks with delays, Chaos, Solitons & Fractals 40 (2009) 1653–1659.

[31] X. Mao, Stochastic Differential Equations and Applications, Horwood Publishing, Chichester, 1997.

[32] S. Boyd, et al., Linear Matrix Inequalities in System and Control Theory, SIAM, Philadelphia, PA, 1994.

[33] Z.H. Yuan, L.F. Yuan, L.H. Huang, D.W. Hu, Boundedness and global convergence of non-autonomous neural networks with variable delays, Nonlinear Analysis: Real World Applications 10 (2009) 2195–2206.

Li Wan received his Ph.D. from Nanjing University, Nanjing, China, and was a Post-Doctoral Fellow in the Department of Mathematics at Huazhong University of Science and Technology, Wuhan, China. Since August 2006, he has been with the Department of Mathematics and Physics at Wuhan Textile University, Wuhan, China. He is the author or coauthor of more than 20 journal papers. His research interests include nonlinear dynamical systems, neural networks, and control theory.

Qinghua Zhou received her Ph.D. from Nanjing University, Nanjing, China. Since August 2007, she has been with the Department of Mathematics at Zhaoqing University, Zhaoqing, China. She is the author or coauthor of more than 15 journal papers. Her research interests include nonlinear dynamical systems and neural networks.

Pei Wang is a Ph.D. candidate in the School of Mathematics and Statistics, Wuhan University. He completed his Master's degree in mathematics in 2009, also at Wuhan University. His research interests include systems biology, complex systems and networks, chaos control, and synchronization.