Nonlinear Analysis: Real World Applications 13 (2012) 953–958
Ultimate boundedness and an attractor for stochastic Hopfield neural networks with time-varying delays
Li Wan a,∗, Qinghua Zhou b, Pei Wang c, Jizi Li d
a School of Mathematics and Computer Science, Wuhan Textile University, Wuhan 430073, PR China
b Department of Mathematics, Zhaoqing University, Zhaoqing 526061, PR China
c School of Mathematics and Statistics, Wuhan University, Wuhan 430072, PR China
d School of Management, Wuhan Textile University, Wuhan 430073, PR China
Article history: Received 16 March 2011; Accepted 1 September 2011
Keywords: Hopfield neural networks; Delays; Ultimate boundedness; Weak attractor
Abstract
This paper investigates ultimate boundedness and a weak attractor for stochastic Hopfield neural networks (HNN) with time-varying delays. By employing the Lyapunov method and the matrix technique, some novel results and criteria on ultimate boundedness and an attractor for stochastic HNN with time-varying delays are derived. Finally, a numerical example is given to illustrate the correctness and effectiveness of our theoretical results.
© 2011 Elsevier Ltd. All rights reserved.
1. Introduction
Recently, it has been well recognized that stochastic disturbances are ubiquitous and inevitable in various systems, ranging from electronic implementations to biochemical systems, and are mainly caused by thermal noise, environmental fluctuations, and different orders of ongoing events in the overall systems [1,2]. Therefore, considerable attention has been paid to investigating the dynamics of stochastic neural networks, and many results on stochastic neural networks with delays have been reported in the literature; see e.g. [3–18] and references therein. Among these, some sufficient criteria on the stability of uncertain stochastic neural networks were derived in [4–6]. Almost sure exponential stability of stochastic neural networks was discussed in [7–9]. In [10–14], mean square exponential stability and pth moment exponential stability of stochastic neural networks were investigated. Some sufficient criteria on the exponential stability of the periodic solution for impulsive stochastic neural networks were established in [15]. In [16], the stability of discrete-time stochastic neural networks was analyzed, while exponential stability of stochastic neural networks with Markovian jump parameters was investigated in [17,18]. However, these papers mainly concern the stability of stochastic neural networks.
In fact, besides the stability property, boundedness is also one of the foundational concepts of dynamical systems, and it plays an important role in investigating the uniqueness of equilibrium, global asymptotic stability, global exponential stability, the existence of the periodic solution, control and synchronization [19,20], and so on. Recently, ultimate boundedness of several classes of neural networks with time delays has been reported. Some sufficient criteria were derived in [21,22], but these results hold only under constant delays. Subsequently, in [23], the globally robust ultimate boundedness of integro-differential neural networks with uncertainties and varying delays was studied. After that, some sufficient criteria on the ultimate boundedness of neural networks with both varying and unbounded delays were derived in [24], but the
∗ Corresponding author. Tel.: +86 02759736926. E-mail addresses: [email protected] (L. Wan), [email protected] (Q. Zhou), [email protected] (P. Wang), [email protected] (J.Z. Li).
1468-1218/$ – see front matter © 2011 Elsevier Ltd. All rights reserved. doi:10.1016/j.nonrwa.2011.09.001
systems concerned are deterministic ones. In [25,26], a series of criteria on the boundedness, global exponential stability and the existence of the periodic solution for non-autonomous recurrent neural networks were established. To the best of our knowledge, there are few results on the ultimate boundedness and an attractor for stochastic neural networks. Therefore, the question of ultimate boundedness and an attractor for stochastic Hopfield neural networks with time-varying delays is both important and meaningful.
The rest of the paper is organized as follows. Section 2 gives some preliminaries, Section 3 presents our main results, and a numerical example and conclusions appear in Sections 4 and 5, respectively.
2. Preliminaries
Consider the following stochastic HNN with time-varying delays
dx(t) = [−Cx(t) + Af (x(t)) + Bf (x(t − τ(t))) + J]dt + σ(x(t), x(t − τ(t)))dw(t), (2.1)
where x = (x1, . . . , xn)^T is the state vector associated with the neurons; C = diag{c1, . . . , cn}, ci > 0, represents the rate with which the ith unit resets its potential to the resting state in isolation when disconnected from the network and the external stochastic perturbation; A = (aij)n×n and B = (bij)n×n represent the connection weight matrix and the delayed connection weight matrix, respectively; J = (J1, . . . , Jn)^T, where Ji denotes the external bias on the ith unit; fj denotes the activation function, f(x(t)) = (f1(x1(t)), . . . , fn(xn(t)))^T; σ(·, ·) ∈ R^{n×m} is the diffusion coefficient matrix; w(t) is an m-dimensional Brownian motion defined on a complete probability space (Ω, F, P) with the natural filtration {Ft}t≥0 generated by {w(s) : 0 ≤ s ≤ t}; τ(t) is the transmission delay and satisfies
0 ≤ τ(t) ≤ τ, τ′(t) ≤ µ. (2.2)
The initial conditions are given in the form:
x(s) = ξ(s), −τ ≤ s ≤ 0,
where ξ(s) = (ξ1(s), . . . , ξn(s))^T is a C([−τ, 0]; R^n)-valued, F0-measurable random variable satisfying ‖ξ‖²_τ = sup_{−τ≤s≤0} E‖ξ(s)‖² < ∞, ‖ · ‖ is the Euclidean norm and C([−τ, 0]; R^n) is the space of all continuous R^n-valued functions defined on [−τ, 0]. Throughout this paper, the following assumption will be considered.
(A1) There exist constants l_i^− and l_i^+ such that

l_i^− ≤ [f_i(x) − f_i(y)] / (x − y) ≤ l_i^+, ∀x, y ∈ R, x ≠ y.
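For the common choice f_i = tanh (used in the numerical example of Section 4), the sector condition (A1) holds with l_i^− = 0 and l_i^+ = 1. A quick numerical spot-check of this claim (a sketch; the sampling range and tolerances are our own choices):

```python
import numpy as np

# Spot-check (A1) for f = tanh: difference quotients (f(x) - f(y)) / (x - y)
# should lie in the sector [l_minus, l_plus] = [0, 1].
rng = np.random.default_rng(1)
x = rng.uniform(-10.0, 10.0, 10_000)
y = rng.uniform(-10.0, 10.0, 10_000)
q = (np.tanh(x) - np.tanh(y)) / (x - y)   # x != y almost surely
assert np.all(q >= -1e-12) and np.all(q <= 1.0 + 1e-12)
```

The quotient approaches 1 only near x = y = 0 (where tanh′ = 1), consistent with l^+ = 1 being the tight sector bound.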
It follows from [27] that under assumption (A1), system (2.1) has a global solution on t ≥ 0. We note that assumption (A1) is less conservative than those in [3,6,28], since the constants l_i^+ and l_i^− are allowed to be positive, negative, or zero.
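Solutions of (2.1) can be approximated numerically, for example by the Euler–Maruyama scheme. The sketch below is a hypothetical illustration (the function name, step size, scalar Brownian motion, constant history x(s) = x0 for s ≤ 0, and the choice f = tanh are our own assumptions, not the authors' code):

```python
import numpy as np

def euler_maruyama_hnn(C, A, B, J, G, H, x0, tau, dt=1e-3, T=10.0, seed=0):
    """Euler-Maruyama for dx = [-Cx + A f(x) + B f(x_tau) + J]dt + [Gx + Hx_tau]dw,
    with f = tanh, scalar Brownian motion and constant history x(s) = x0, s <= 0."""
    rng = np.random.default_rng(seed)
    steps = int(T / dt)
    lag = max(int(tau / dt), 1)                  # delay measured in steps
    X = np.tile(np.asarray(x0, float), (steps + lag + 1, 1))
    for k in range(lag, steps + lag):
        x, xd = X[k], X[k - lag]                 # current and delayed states
        drift = -C @ x + A @ np.tanh(x) + B @ np.tanh(xd) + J
        diffusion = G @ x + H @ xd               # linear noise intensity
        X[k + 1] = x + drift * dt + diffusion * rng.normal(0.0, np.sqrt(dt))
    return X[lag:]                               # trajectory on [0, T]
```

With the matrices of Example 1 in Section 4, trajectories started at (50, 80) should settle into the bounded set computed there, mirroring Fig. 1.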
The notation A > 0 (respectively, A ≥ 0) means that the matrix A is symmetric positive definite (respectively, positive semi-definite). A^T denotes the transpose of the matrix A, and λmin(A) denotes the minimum eigenvalue of A. Throughout the paper, all norms are Euclidean 2-norms.
3. Main results
In this section, we give conditions for ultimate boundedness and then, using the ultimate boundedness, construct a compact set BC that serves as a weak attractor for the solutions.
Theorem 3.1. Suppose that there exist matrices P > 0, Qi > 0 (i = 1, 2, 3, 4), σ1 > 0, σ2 > 0, U1 = diag{u11, . . . , u1n} ≥ 0, U2 = diag{u21, . . . , u2n} ≥ 0 and σ3, such that

(A2)

Σ = [ ∆    σ3                        PA + L2U1        PB
      ∗    σ2 − (1 − µ)Q1 − 2L1U2    0                L2U2
      ∗    ∗                         Q3 + τQ4 − 2U1   0
      ∗    ∗                         ∗                −(1 − µ)Q3 − 2U2 ] < 0,
trace[σ^T(x(t), x(t − τ(t))) P σ(x(t), x(t − τ(t)))] ≤ x^T(t)σ1x(t) + x^T(t − τ(t))σ2x(t − τ(t)) + 2x^T(t)σ3x(t − τ(t)),
where ∆ = Q1 + τQ2 + σ1 − PC − CP − 2L1U1, L1 = diag{l_1^− l_1^+, . . . , l_n^− l_n^+}, L2 = diag{l_1^− + l_1^+, . . . , l_n^− + l_n^+}, and ∗ denotes the symmetric terms.
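For a linear diffusion σ(x, y) = Gx + Hy (the case treated in Section 4), the trace condition in (A2) holds with equality for σ1 = G^TPG, σ2 = H^TPH and σ3 = G^TPH. A numerical check of this algebraic identity on random data (illustrative only; dimensions and matrices are arbitrary):

```python
import numpy as np

# trace[(Gx + Hy)' P (Gx + Hy)] = x'G'PGx + y'H'PHy + 2 x'G'PHy
rng = np.random.default_rng(2)
n = 4
G = rng.standard_normal((n, n))
H = rng.standard_normal((n, n))
M = rng.standard_normal((n, n))
P = M @ M.T + n * np.eye(n)                       # a positive definite P
x = rng.standard_normal(n)
y = rng.standard_normal(n)
s = (G @ x + H @ y).reshape(-1, 1)                # sigma as an n x 1 matrix (m = 1)
lhs = np.trace(s.T @ P @ s)
rhs = x @ G.T @ P @ G @ x + y @ H.T @ P @ H @ y + 2 * x @ G.T @ P @ H @ y
assert abs(lhs - rhs) < 1e-9
```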
Then system (2.1) is stochastically ultimately bounded; that is, for any ε ∈ (0, 1), there exists a positive constant C = C(ε) such that the solution x(t) of system (2.1) satisfies

lim sup_{t→∞} P{‖x(t)‖ ≤ C} ≥ 1 − ε. (3.1)
Proof. From (A2), there exists a sufficiently small λ > 0 such that

Σ1 = [ ∆1   σ3   PA + L2U1   PB
       ∗    ∆2   0           L2U2
       ∗    ∗    ∆3          0
       ∗    ∗    ∗           λI − (1 − µ)Q3 − 2U2 ] < 0,

where ∆1 = e^{λτ}Q1 + τQ2 + 2λP + λI + σ1 − PC − CP − 2L1U1, ∆2 = λI + σ2 − (1 − µ)Q1 − 2L1U2, ∆3 = λI + e^{λτ}Q3 + τQ4 − 2U1. Consider the following Lyapunov functional:
V(t) = e^{λt} x^T(t)Px(t) + ∫_{t−τ(t)}^{t} e^{λ(s+τ)} [x^T(s)Q1x(s) + f^T(x(s))Q3f(x(s))] ds
     + ∫_{t−τ(t)}^{t} ∫_{s}^{t} e^{λθ} [x^T(θ)Q2x(θ) + f^T(x(θ))Q4f(x(θ))] dθ ds. (3.2)
Applying the Itô formula [27] to V(t) along system (2.1), one obtains
dV(t) = M1(t)dw(t) + M2(t)dt + M3(t)dt, (3.3)

where

M1(t) = 2e^{λt} x^T(t) P σ(x(t), x(t − τ(t))),
M2(t) = e^{λ(t+τ)}[x^T(t)Q1x(t) + f^T(x(t))Q3f(x(t))] − (1 − τ′(t)) e^{λ(t−τ(t)+τ)}[x^T(t − τ(t))Q1x(t − τ(t)) + f^T(x(t − τ(t)))Q3f(x(t − τ(t)))] + e^{λt}τ(t)[x^T(t)Q2x(t) + f^T(x(t))Q4f(x(t))] − (1 − τ′(t)) ∫_{t−τ(t)}^{t} e^{λs}[x^T(s)Q2x(s) + f^T(x(s))Q4f(x(s))] ds
≤ e^{λ(t+τ)}[x^T(t)Q1x(t) + f^T(x(t))Q3f(x(t))] − (1 − µ) e^{λt}[x^T(t − τ(t))Q1x(t − τ(t)) + f^T(x(t − τ(t)))Q3f(x(t − τ(t)))] + e^{λt}τ[x^T(t)Q2x(t) + f^T(x(t))Q4f(x(t))], (3.4)
M3(t) = λe^{λt} x^T(t)Px(t) + 2e^{λt} x^T(t)P[−Cx(t) + Af(x(t)) + Bf(x(t − τ(t))) + J] + e^{λt} trace[σ^T(x(t), x(t − τ(t))) P σ(x(t), x(t − τ(t)))]
≤ 2λe^{λt} x^T(t)Px(t) + λ^{−1}e^{λt} J^TPJ + 2e^{λt} x^T(t)P[−Cx(t) + Af(x(t)) + Bf(x(t − τ(t)))] + e^{λt}[x^T(t)σ1x(t) + x^T(t − τ(t))σ2x(t − τ(t)) + 2x^T(t)σ3x(t − τ(t))]. (3.5)
From (A1), it follows that, for i = 1, . . . , n,
[f_i(x_i(t)) − f_i(0) − l_i^+ x_i(t)][f_i(x_i(t)) − f_i(0) − l_i^− x_i(t)] ≤ 0, (3.6)

[f_i(x_i(t − τ(t))) − f_i(0) − l_i^+ x_i(t − τ(t))][f_i(x_i(t − τ(t))) − f_i(0) − l_i^− x_i(t − τ(t))] ≤ 0. (3.7)
Further, from (3.3)–(3.7), one derives

dV(t) ≤ M1(t)dw(t) + M2(t)dt + M3(t)dt + e^{λt} { −2 ∑_{i=1}^{n} u_{1i}[f_i(x_i(t)) − f_i(0) − l_i^+ x_i(t)][f_i(x_i(t)) − f_i(0) − l_i^− x_i(t)] − 2 ∑_{i=1}^{n} u_{2i}[f_i(x_i(t − τ(t))) − f_i(0) − l_i^+ x_i(t − τ(t))][f_i(x_i(t − τ(t))) − f_i(0) − l_i^− x_i(t − τ(t))] } dt

= M1(t)dw(t) + M2(t)dt + M3(t)dt + e^{λt} { −2 ∑_{i=1}^{n} u_{1i}[f_i(x_i(t)) − l_i^+ x_i(t)][f_i(x_i(t)) − l_i^− x_i(t)] − 2 ∑_{i=1}^{n} u_{2i}[f_i(x_i(t − τ(t))) − l_i^+ x_i(t − τ(t))][f_i(x_i(t − τ(t))) − l_i^− x_i(t − τ(t))] − 2 ∑_{i=1}^{n} u_{1i} f_i²(0) + 2 ∑_{i=1}^{n} u_{1i} f_i(0)[2f_i(x_i(t)) − (l_i^+ + l_i^−)x_i(t)] − 2 ∑_{i=1}^{n} u_{2i} f_i²(0) + 2 ∑_{i=1}^{n} u_{2i} f_i(0)[2f_i(x_i(t − τ(t))) − (l_i^+ + l_i^−)x_i(t − τ(t))] } dt

≤ M1(t)dw(t) + M2(t)dt + M3(t)dt + e^{λt} { −2 ∑_{i=1}^{n} u_{1i}[f_i(x_i(t)) − l_i^+ x_i(t)][f_i(x_i(t)) − l_i^− x_i(t)] − 2 ∑_{i=1}^{n} u_{2i}[f_i(x_i(t − τ(t))) − l_i^+ x_i(t − τ(t))][f_i(x_i(t − τ(t))) − l_i^− x_i(t − τ(t))] + ∑_{i=1}^{n} [λ f_i²(x_i(t)) + 4λ^{−1} f_i²(0) u_{1i}² + λ x_i²(t) + λ^{−1} f_i²(0) u_{1i}² (l_i^+ + l_i^−)²] + ∑_{i=1}^{n} [λ f_i²(x_i(t − τ(t))) + 4λ^{−1} f_i²(0) u_{2i}² + λ x_i²(t − τ(t)) + λ^{−1} f_i²(0) u_{2i}² (l_i^+ + l_i^−)²] } dt

≤ M1(t)dw(t) + e^{λt} η^T(t) Σ1 η(t) dt + e^{λt} C1 dt ≤ M1(t)dw(t) + e^{λt} C1 dt,
where η(t) = (x^T(t), x^T(t − τ(t)), f^T(x(t)), f^T(x(t − τ(t))))^T and

C1 = λ^{−1} J^TPJ + ∑_{i=1}^{n} [4λ^{−1} f_i²(0) u_{1i}² + λ^{−1} f_i²(0) u_{1i}² (l_i^+ + l_i^−)² + 4λ^{−1} f_i²(0) u_{2i}² + λ^{−1} f_i²(0) u_{2i}² (l_i^+ + l_i^−)²].
Therefore, it follows that

V(t) ≤ V(0) + ∫_{0}^{t} M1(s) dw(s) + e^{λt} λ^{−1} C1

and

E‖x(t)‖² ≤ [e^{−λt} EV(0) + λ^{−1}C1] / λmin(P) ≤ e^{−λt} EV(0)/λmin(P) + C2,

where C2 = λ^{−1}C1/λmin(P). For any ε > 0, set C3 = √(C2/ε). By Chebyshev's inequality, one derives

lim sup_{t→∞} P{‖x(t)‖ > C3} ≤ lim sup_{t→∞} E‖x(t)‖²/C3² ≤ C2/C3² = ε,
which implies that (3.1) holds.
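The last step is the standard Chebyshev bound P{‖x(t)‖ > C3} ≤ E‖x(t)‖²/C3². A small Monte Carlo illustration of this inequality (with standard normal samples standing in for ‖x(t)‖; purely illustrative):

```python
import numpy as np

# Chebyshev: P(|X| > c) <= E[X^2] / c^2, illustrated by sampling.
rng = np.random.default_rng(3)
X = rng.standard_normal(200_000)
c = 2.0
empirical = np.mean(np.abs(X) > c)   # about 0.0455 for N(0, 1)
bound = np.mean(X ** 2) / c ** 2     # about 0.25
assert empirical <= bound
```

The bound is loose here, as Chebyshev typically is; in the proof only its validity matters, not its tightness.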
Theorem 3.1 shows that there exists t0 > 0 such that for any t ≥ t0, P{‖x(t)‖ ≤ C} ≥ 1 − ε. Define the set

BC = {x ∈ R^n : ‖x‖ ≤ C}.

Clearly, BC is closed, bounded and invariant. Moreover,

lim sup_{t→∞} inf_{y∈BC} ‖x(t) − y‖ = 0

with probability no less than 1 − ε, which means that the solution x(t) visits a neighborhood of BC infinitely many times with probability no less than 1 − ε. In other words, BC attracts the solutions infinitely many times with probability no less than 1 − ε, so we may say that BC is a weak attractor for the solutions.
Theorem 3.2. Suppose that all conditions of Theorem 3.1 hold. Then there exists a weak attractor BC for the solutions of system (2.1).
Remark 3.3. Compared with [28,29], assumption (A1) is less conservative than that in [28], and the system considered in this paper is more complex than that in [29]. In particular, we construct a compact set BC as the weak attractor for the solutions by using the ultimate boundedness.
4. Numerical example
In this section, a numerical example is presented to demonstrate the validity and effectiveness of our theoretical results.
Fig. 1. (a) Time trajectories. (b) The set BC and several phase portraits.
Example 1. Consider the following stochastic HNN with time-varying delays
dx(t) = [−Cx(t) + Af (x(t)) + Bf (x(t − τ(t))) + J]dt + [Gx(t) + Hx(t − τ(t))]dw(t),
where

A = [−0.1 0.4; 0.2 −0.5], B = [0.1 −1; −1.4 0.4], C = [1.2 0; 0 1.15],
J = [0.01; 0.05], G = [0.23 0.1; 0.3 0.2], H = [0.1 −0.2; 0.2 0.3],
and f(x) = tanh(x), w(t) is a one-dimensional Brownian motion. Then L1 = 0, L2 = diag{1, 1}, σ1 = G^TPG, σ2 = H^TPH, σ3 = G^TPH. By using the Matlab LMI Control Toolbox [30], for µ = 0.0035 and τ = 1, based on Theorem 3.1, the system is stochastically ultimately bounded when P, U1, U2, Q1, Q2, Q3 and Q4 are chosen as:
P = [176.2695 20.7805; 20.7805 142.6797], U1 = [109.5227 0; 0 112.7215],
U2 = [95.8392 0; 0 59.7006], Q1 = [102.9417 0.6723; 0.6723 75.3567],
Q2 = [20.1614 0.0327; 0.0327 18.1207], Q3 = [107.3896 −36.5024; −36.5024 127.0913],
Q4 = [19.6488 −2.9979; −2.9979 23.5898].
For λ = ε = 0.01, Σ1 < 0 and the constant C = C(ε) = √(J^TPJ / (λmin(P)ελ²)) = 54.5545. Then BC = {x ∈ R² : ‖x‖ ≤ 54.5545} and P(x ∈ BC) ≥ 0.99. For the system in Example 1, Fig. 1(a) shows time trajectories, and Fig. 1(b) shows the set BC and several typical phase portraits, where the initial value for t < 0 is chosen as x(t) = (50, 80). The inset of Fig. 1(a) is an enlargement of the outer figure. In Fig. 1(b), only phase portraits for t ≥ 0 are shown. From Fig. 1, one can easily see that the trajectories are almost all attracted by the set BC.
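The reported bound can be reproduced directly from P, J, λ and ε (a sketch using NumPy in place of the authors' Matlab toolbox; only the bound itself is checked here, not the LMI feasibility):

```python
import numpy as np

# C(eps) = sqrt( J'PJ / (lambda_min(P) * eps * lambda^2) ) for Example 1.
P = np.array([[176.2695, 20.7805],
              [20.7805, 142.6797]])
J = np.array([0.01, 0.05])
lam = eps = 0.01
C = np.sqrt(J @ P @ J / (np.linalg.eigvalsh(P).min() * eps * lam ** 2))
print(round(C, 4))  # close to the paper's value 54.5545
```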
5. Conclusion
Recently, many results on the stability of stochastic neural networks with delays have been reported, but so far there are few published results on the attractor and ultimate boundedness for stochastic neural networks with delays. In this paper, new results and sufficient criteria on the attractor and ultimate boundedness are established for stochastic Hopfield neural networks with delays by using the matrix technique and the Lyapunov method. A numerical example is also presented to demonstrate the correctness of our theoretical results.
Acknowledgments
The authors thank the editor and the reviewers for their detailed comments and valuable suggestions. This work was supported by the National Natural Science Foundation of China (Nos. 10801109, 10926128 and 11047114), the Science and Technology Research Projects of Hubei Provincial Department of Education (Q20091705, Q20111607 and Q20111611) and the Young Talent Cultivation Projects of Guangdong (LYM09134).
References
[1] M. Koern, T.C. Elston, W.J. Blake, J.J. Collins, Stochasticity in gene expression: from theories to phenotypes, Nature Reviews Genetics 6 (6) (2005) 451–464.
[2] K. Sriram, S. Soliman, F. Fages, Dynamics of the interlocked positive feedback loops explaining the robust epigenetic switching in Candida albicans, Journal of Theoretical Biology 258 (2009) 71–88.
[3] H. Huang, G. Feng, Delay-dependent stability for uncertain stochastic neural networks with time-varying delay, Physica A 381 (15) (2007) 93–103.
[4] H.Y. Zhao, N. Ding, L. Chen, Almost sure exponential stability of stochastic fuzzy cellular neural networks with delays, Chaos, Solitons & Fractals 40 (2009) 1653–1659.
[5] H.Y. Zhao, N. Ding, Dynamic analysis of stochastic bidirectional associative memory neural networks with delays, Chaos, Solitons & Fractals 32 (2007) 1692–1702.
[6] W.H. Chen, X.M. Lu, Mean square exponential stability of uncertain stochastic delayed neural networks, Physics Letters A 372 (7) (2008) 1061–1069.
[7] C. Huang, J.D. Cao, Almost sure exponential stability of stochastic cellular neural networks with unbounded distributed delays, Neurocomputing 72 (2009) 3352–3356.
[8] C. Huang, J.D. Cao, On pth moment exponential stability of stochastic Cohen–Grossberg neural networks with time-varying delays, Neurocomputing 73 (2010) 986–990.
[9] C. Huang, P. Chen, Y. He, L. Huang, W. Tan, Almost sure exponential stability of delayed Hopfield neural networks, Applied Mathematics Letters 21 (2008) 701–705.
[10] C. Huang, Y. He, H. Wang, Mean square exponential stability of stochastic recurrent neural networks with time-varying delays, Computers and Mathematics with Applications 56 (2008) 1773–1778.
[11] R. Rakkiyappan, P. Balasubramaniam, Delay-dependent asymptotic stability for stochastic delayed recurrent neural networks with time varying delays, Applied Mathematics and Computation 198 (2008) 526–533.
[12] Y. Sun, J.D. Cao, pth moment exponential stability of stochastic recurrent neural networks with time-varying delays, Nonlinear Analysis: Real World Applications 8 (2007) 1171–1185.
[13] Q. Song, Z. Wang, Stability analysis of impulsive stochastic Cohen–Grossberg neural networks with mixed time delays, Physica A 387 (2008) 3314–3326.
[14] Z. Wang, J. Fang, X. Liu, Global stability of stochastic high-order neural networks with discrete and distributed delays, Chaos, Solitons & Fractals 36 (2008) 388–396.
[15] X.D. Li, Existence and global exponential stability of periodic solution for delayed neural networks with impulsive and stochastic effects, Neurocomputing 73 (2010) 749–758.
[16] Y. Ou, H.Y. Liu, Y.L. Si, Z.G. Feng, Stability analysis of discrete-time stochastic neural networks with time-varying delays, Neurocomputing 73 (2010) 740–748.
[17] Q. Zhu, J. Cao, Exponential stability of stochastic neural networks with both Markovian jump parameters and mixed time delays, IEEE Transactions on Systems, Man and Cybernetics, Part B (Cybernetics) 41 (2011) 341–353.
[18] Q. Zhu, C. Huang, X. Yang, Exponential stability for stochastic jumping BAM neural networks with time-varying and distributed delays, Nonlinear Analysis: Hybrid Systems 5 (2011) 52–77.
[19] P. Wang, D. Li, Q. Hu, Bounds of the hyper-chaotic Lorenz–Stenflo system, Communications in Nonlinear Science and Numerical Simulation 15 (2010) 2514–2520.
[20] P. Wang, D. Li, X. Wu, J. Lü, X. Yu, Ultimate bound estimation of a class of high dimensional quadratic autonomous dynamical systems, International Journal of Bifurcation and Chaos 21 (9) (2011) 1–9.
[21] X. Liao, J. Wang, Global dissipativity of continuous-time recurrent neural networks with time delay, Physical Review E 68 (2003) 1–7.
[22] S. Arik, On the global dissipativity of dynamical neural networks with time delays, Physics Letters A 326 (2004) 126–132.
[23] Y. Xu, B. Cui, Global robust dissipativity for integro-differential systems modeling neural networks with delays, Chaos, Solitons & Fractals 36 (2008) 469–478.
[24] Q. Song, Z. Zhao, Global dissipativity of neural networks with both variable and unbounded delays, Chaos, Solitons & Fractals 25 (2005) 393–401.
[25] H. Jiang, Z. Teng, Global exponential stability of cellular neural networks with time-varying coefficients and delays, Neural Networks 17 (2004) 1415–1425.
[26] H. Jiang, Z. Teng, Boundedness, periodic solutions and global stability for cellular neural networks with variable coefficients and infinite delays, Neural Networks 72 (2009) 2455–2463.
[27] X. Mao, Stochastic Differential Equations and Applications, Horwood Publishing, 1997.
[28] L. Wan, Q.H. Zhou, Attractor and ultimate boundedness for stochastic cellular neural networks with delays, Nonlinear Analysis: Real World Applications 12 (2011) 2561–2566.
[29] L. Wan, Q.H. Zhou, P. Wang, Ultimate boundedness of stochastic Hopfield neural networks with time-varying delays, Neurocomputing 74 (2011) 2967–2971.
[30] P. Gahinet, A. Nemirovski, A.J. Laub, M. Chilali, LMI Control Toolbox User's Guide, The MathWorks, Inc., 1995.