

Neurocomputing 418 (2020) 79–90


Zeroing neural network with comprehensive performance and its applications to time-varying Lyapunov equation and perturbed robotic tracking

https://doi.org/10.1016/j.neucom.2020.08.037
0925-2312/© 2020 Elsevier B.V. All rights reserved.

⇑ Corresponding author.
E-mail addresses: [email protected] (K. Li), [email protected] (L. Xiao).

Zeshan Hu a, Kenli Li a,⇑, Keqin Li a,b, Jichun Li c, Lin Xiao d,⇑

a College of Information Science and Electronic Engineering, Hunan University, Changsha 410082, China
b Department of Computer Science, State University of New York, New Paltz, NY 12561, USA
c School of Computer Science and Electronic Engineering, University of Essex, Colchester CO4 3SQ, UK
d Hunan Provincial Key Laboratory of Intelligent Computing and Language Information Processing, Hunan Normal University, Changsha 410081, China


Article history:
Received 26 November 2019
Revised 21 June 2020
Accepted 4 August 2020
Available online 1 September 2020
Communicated by Long Cheng

Keywords:
Finite-time convergence
Noise tolerance
Stable residual error
Tracking control
Zeroing Neural Network (ZNN)

The time-varying Lyapunov equation is an important problem that has been extensively employed in the engineering field, and the Zeroing Neural Network (ZNN) is a powerful tool for solving such problems. However, unpredictable noises can potentially harm the ZNN's accuracy in practical situations. Thus, comprehensive performance of a ZNN model requires both a fast convergence rate and strong robustness, which are not easy to accomplish simultaneously. In this paper, based on a new neural dynamic, a novel Noise-Tolerance Finite-time convergent ZNN (NTFZNN) model for solving the time-varying Lyapunov equation is proposed. The NTFZNN model simultaneously converges in finite time and maintains a stable residual error even under unbounded time-varying noises. Furthermore, the Simplified Finite-time convergent Activation Function (SFAF), which has a simpler structure, is used in the NTFZNN model to reduce model complexity while retaining finite convergence time. Theoretical proofs and numerical simulations are provided in this paper to substantiate the NTFZNN model's convergence and robustness performances, which are better than those of the ordinary ZNN model and the Noise-Tolerance ZNN (NTZNN) model. Finally, a simulation experiment using the NTFZNN model to control a wheeled robot manipulator under perturbation validates the superior applicability of the NTFZNN model.

© 2020 Elsevier B.V. All rights reserved.

1. Introduction

The Lyapunov equation plays a crucial part in many scientific and engineering fields. For instance, it can be applied in communication [1], control theory [2,3] and automatic control [4,5]. Furthermore, the Lyapunov equation is indispensable in the domain of optimal control. Thus, the solution of the Lyapunov equation has attracted a large amount of effort on account of its extensive applications. Among approaches for solving the Lyapunov equation, traditional numerical algorithms have the longest history. In [6], an iterative algorithm was proposed to tackle the Lyapunov equation with Markov jump. Other types of numerical algorithms for the Lyapunov equation have also been investigated: [7] proposed an iterative algorithm based on the gradient to solve this problem, while [8] presented a method based on minimal residual error. However, the minimum time complexity of these numerical algorithms is about the cube of the dimension of the input matrix. Thus, solving large-scale Lyapunov equations using these numerical algorithms becomes a time-consuming task. The large time cost of numerical algorithms has considerably limited their application range, especially for online Lyapunov equation problems.

Nowadays, many algorithms can be accelerated by taking advantage of hardware- or software-level parallelism, but this approach is hard to apply to the above-mentioned numerical algorithms because of their serial processing nature. Therefore, Artificial Neural Networks (ANNs), including Recurrent Neural Networks (RNNs), have been found efficient in solving various numerical computation, optimization and robot control problems [9–13] due to their parallel and distributed computing properties. The Gradient-based Neural Network (GNN) is a variation of the RNN and was investigated for the online stationary Lyapunov equation [14–17]. GNN uses the norm of the error matrix as its performance index, and the neural network evolves along the gradient-descent direction until its performance index converges to zero. GNN performs well on stationary Lyapunov equation problems,


but it suffers from large residual error when solving time-varying Lyapunov equations, and such error persists even after infinite evolution time, which prompts researchers to overcome this drawback by discovering new neural network models.

Under such a background, the Zeroing Neural Network (ZNN), as a special type of RNN, was proposed and utilized in Refs. [18–21]. This kind of RNN model utilizes the velocity information of time-varying coefficients and uses the error matrix instead of its norm as the performance index, successfully surpassing the GNN model on solving both stationary and nonstationary Lyapunov equations. Nevertheless, the ordinary ZNN model is far from perfect, because it uses a linear activation function and can only converge to the solution exponentially. That is, its error cannot converge to zero in finite time. Recently, many research works on speeding up the convergence of ANNs have sprung up [22–24]. Therefore, to improve the convergence speed of the ordinary ZNN model, a finite-time convergent ZNN model which exploits the Sign-Bi-Power (SBP) activation function was presented in Ref. [25]. Another practical problem is that a ZNN usually works in an environment where various noises exist, which can potentially decrease the accuracy of the neural network, but the general ZNN design formula does not take this factor into account. Thus, [26–28] designed a Noise-Tolerance ZNN (NTZNN) model to enhance the ZNN model's robustness against additive noise. However, the NTZNN's activation function is a fixed linear function, so it cannot converge in finite time. In this paper, we present a Noise-Tolerance Finite-time convergent ZNN (NTFZNN) model for the time-varying Lyapunov equation problem, motivated by the fact that both finite-time convergence and noise suppression ability are highly demanded in ZNN applications [29]. The NTFZNN model not only converges to the accurate solution of the time-varying Lyapunov equation in finite time in noise-free environments, but also has a smaller stable residual error than the NTZNN model in additive-noise-polluted environments.

The main contributions of this paper are summarized as follows:

1) A novel NTFZNN model is developed for solving the time-varying Lyapunov equation problem, which possesses finite-time convergence and powerful additive noise suppression ability at the same time.

2) Using the SFAF (Simplified Finite-time convergent Activation Function), the NTFZNNSFAF (NTFZNN using SFAF) model is proposed, which has a simpler structure and lower calculation complexity than the NTFZNNSBP (NTFZNN using SBP) model while still being finite-time convergent.

3) The convergence time upper bound and stable residual error upper bound of the NTFZNN model have both been quantitatively analyzed and then validated by simulation experiments. Besides, the robustness analysis investigates the more practical time-varying unknown noises rather than the usual constant or bounded known noises.

The rest of this paper is organized into 6 sections. In Section 2, the problem description and some preliminaries of the time-varying Lyapunov equation are provided for the following discussion and analyses. In Section 3, the NTFZNN model is designed for solving the time-varying Lyapunov equation, and two finite-time convergent activation functions are introduced. In Section 4, the NTFZNN model's convergence performance in a noise-free environment and its robustness performance when perturbed by additive noises are analyzed in detail. In Section 5, numerical simulations and comparisons are presented to verify the previous theoretical conclusions. In Section 6, the NTFZNN model is successfully applied to controlling a mobile robot manipulator to track a desired path under additive noise perturbation, which further validates the NTFZNN model's applicability and superiority. Section 7 concludes this paper briefly.

2. Problem description and preliminaries

In this paper, the problem we are mainly concerned with is solving the time-varying Lyapunov equation. Let $M(t) \in \mathbb{R}^{n \times n}$ be a nonstationary coefficient matrix and $Q(t) \in \mathbb{R}^{n \times n}$ a nonstationary symmetric positive-definite matrix. We have the following equation:

$$M^{\mathrm{T}}(t)X(t) + X(t)M(t) = -Q(t). \tag{1}$$

The above equation is widely known as the Lyapunov equation, where $X(t) \in \mathbb{R}^{n \times n}$ is the unique time-varying matrix to be obtained, given that $M(t)$ and $Q(t)$ both satisfy the unique solution condition [17]. In the following, we use $A(t) \in \mathbb{R}^{n \times n}$ to denote the precise solution of (1) for efficient expression. Under such preliminaries, in this paper, we focus on proposing the NTFZNN (Noise-Tolerance Finite-time convergent ZNN) model, which makes use not only of the latest activation function but also of a newly proposed novel activation function.

Generally speaking, in the field of solving time-varying problems, the zeroing neural network is a more powerful tool than the conventional gradient-based neural network, because it eliminates the lagging errors which exist in the latter. Furthermore, [30] shows that the ordinary ZNN model can deal with the time-varying Lyapunov equation efficiently and converge to the accurate solution exponentially. To lay the foundation of this paper, the design process of the ordinary ZNN model for problem (1) can be separated into the following procedures:

1) Firstly, construct a matrix-type error function to evaluate the difference between the state solution of the neural network and the theoretical solution of (1):

$$E(t) = M^{\mathrm{T}}(t)X(t) + X(t)M(t) + Q(t) \in \mathbb{R}^{n \times n}. \tag{2}$$

2) Then, as the unique solution of (1), $A(t)$ is clearly the zero point of the above error function. For the purpose of forcing $E(t)$ to converge to zero, the ZNN model uses the following evolution formula:

$$\mathrm{d}E(t)/\mathrm{d}t = -\lambda \mathcal{A}(E(t)), \tag{3}$$

where $\lambda$ is a design parameter satisfying $\lambda > 0$, and $\mathcal{A}(\cdot): \mathbb{R}^{n \times n} \to \mathbb{R}^{n \times n}$ denotes a mapping array with each of its elements being the same activation function.

3) Finally, substituting (2) into (3) leads to the following ordinary ZNN model for the time-varying Lyapunov equation problem:

$$M^{\mathrm{T}}(t)\dot{X}(t) + \dot{X}(t)M(t) = -\dot{M}^{\mathrm{T}}(t)X(t) - X(t)\dot{M}(t) - \dot{Q}(t) - \lambda \mathcal{A}\big(M^{\mathrm{T}}(t)X(t) + X(t)M(t) + Q(t)\big), \tag{4}$$

where the state solution $X(t)$ changes smoothly along with the evolution of the ZNN model as time goes on. Starting from an initial value $X(0) \in \mathbb{R}^{n \times n}$, $X(t)$ converges to the accurate solution $A(t)$ asymptotically. Note that, if the linear activation function $f(x) = x$ is used, model (4) possesses the exponential convergence property [30].

The ordinary ZNN model (4) is enough for theoretical analysis. But for simulation experiments or hardware implementation, the ordinary ZNN model (4) can be transformed into the following explicit ordinary ZNN model:

$$N_1(t)\dot{x}(t) = -\dot{N}_1(t)x(t) - \dot{N}_2(t) - \lambda \mathcal{A}\big(N_1(t)x(t) + N_2(t)\big), \tag{5}$$


where we define the operation $A \oplus B = A \otimes I + I \otimes B$ with $I$ being the identity matrix. Under such a definition, the other parameters in (5) are: $N_1(t) = M^{\mathrm{T}}(t) \oplus M^{\mathrm{T}}(t) \in \mathbb{R}^{n^2 \times n^2}$, $N_2(t) = \mathrm{vec}(Q(t)) \in \mathbb{R}^{n^2}$ and $x(t) = \mathrm{vec}(X(t)) \in \mathbb{R}^{n^2}$. Besides, the symbol $\otimes$ stands for the well-known Kronecker product, whose details can be found in [17,31]. The symbol $\mathrm{vec}(\cdot)$ denotes the operation that stacks all column vectors of its input matrix into a single column vector. The mapping array in (5) is resized to $\mathcal{A}(\cdot): \mathbb{R}^{n^2} \to \mathbb{R}^{n^2}$ due to vectorization.
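The vectorization above can be checked numerically. The following sketch (a hypothetical NumPy snippet, not from the paper; the matrix size and random seed are illustrative) verifies that $N_1(t)x(t) + N_2(t) = \mathrm{vec}(E(t))$ for a random instance, using column-stacking vectorization:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
M = rng.standard_normal((n, n))          # stands in for M(t) at a fixed t
X = rng.standard_normal((n, n))          # stands in for X(t)
Q = rng.standard_normal((n, n))
Q = Q + Q.T                              # symmetric, as Q(t) in (1)

vec = lambda A: A.reshape(-1, order="F")  # stack columns into one vector

I = np.eye(n)
N1 = np.kron(M.T, I) + np.kron(I, M.T)   # the Kronecker sum M^T (+) M^T
N2 = vec(Q)
x = vec(X)

E = M.T @ X + X @ M + Q                  # error matrix (2)
assert np.allclose(N1 @ x + N2, vec(E))  # vec(E) = N1 x + N2
```

The identity follows from $\mathrm{vec}(AXB) = (B^{\mathrm{T}} \otimes A)\mathrm{vec}(X)$ for column-stacking $\mathrm{vec}(\cdot)$; note NumPy's default row-major flattening must be overridden with `order="F"`.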

3. Design of noise-tolerance finite-time convergent Zeroing Neural Network

The ordinary ZNN model introduced in Section 2 is able to handle the nonstationary Lyapunov equation well in an ideal noise-free environment. However, because continuous ZNN models are mainly implemented in analog circuits, there are many factors that may create noises, including round-off errors, circuit implementation errors and so on, which can result in a great loss of accuracy for ZNN model (4). Therefore, we propose a novel NTFZNN model with better noise suppression ability for the time-varying Lyapunov equation. To monitor the computation process of the problem, NTFZNN uses the same error matrix as the ordinary ZNN model:

$$E(t) = M^{\mathrm{T}}(t)X(t) + X(t)M(t) + Q(t) \in \mathbb{R}^{n \times n}. \tag{6}$$

Then, the design formula of NTFZNN differs from that of the ordinary ZNN model (4), since it introduces an integral term into the formula to enhance its noise-tolerant ability. The NTFZNN model's design formula is presented as below:

$$\dot{E}(t) = -\lambda \mathcal{A}_1(E(t)) - \xi \mathcal{A}_2\Big(E(t) + \lambda \int_0^t \mathcal{A}_1(E(\tau))\,\mathrm{d}\tau\Big), \tag{7}$$

where $\mathcal{A}_1(\cdot): \mathbb{R}^{n \times n} \to \mathbb{R}^{n \times n}$ and $\mathcal{A}_2(\cdot): \mathbb{R}^{n \times n} \to \mathbb{R}^{n \times n}$ represent monotonically increasing mapping arrays, each consisting of the same activation function. Theoretically speaking, $\mathcal{A}_1(\cdot)$ and $\mathcal{A}_2(\cdot)$ can be chosen arbitrarily as long as they satisfy the requirement of being finite-time convergent. In this paper, we set them to be the same to facilitate the following analyses. Besides, $\lambda > 0$ and $\xi > 0$ are design parameters used to adjust the neural network's convergence speed and noise suppression ability. Now, the NTFZNN model (7) can be expanded by substituting (6) into (7), and then we get

$$M^{\mathrm{T}}(t)\dot{X}(t) + \dot{X}(t)M(t) = -\dot{M}^{\mathrm{T}}(t)X(t) - X(t)\dot{M}(t) - \dot{Q}(t) - \lambda \mathcal{A}_1(E(t)) - \xi \mathcal{A}_2\Big(E(t) + \lambda \int_0^t \mathcal{A}_1(E(\tau))\,\mathrm{d}\tau\Big).$$

The above neural dynamic is equivalent to the NTFZNN model (7) and can also be transformed into an explicit NTFZNN model similar to (5):

$$N_1(t)\dot{x}(t) = -\dot{N}_1(t)x(t) - \dot{N}_2(t) - \lambda \mathcal{A}_1\big(N_1(t)x(t) + N_2(t)\big) - \xi \mathcal{A}_2\Big(N_1(t)x(t) + N_2(t) + \lambda \int_0^t \mathcal{A}_1\big(N_1(\tau)x(\tau) + N_2(\tau)\big)\,\mathrm{d}\tau\Big), \tag{8}$$

where $N_1(t)$, $N_2(t)$ and $x(t)$ are all defined the same as in (5).

It should be pointed out that researchers have investigated the performance of the ordinary ZNN model in noise-perturbed environments, and some methods with internal noise-tolerance ability have been proposed to tackle this problem [26,32]. One of these methods is the Noise-Tolerance ZNN (NTZNN) model [26], which is also the inspiration source of our NTFZNN model. The NTZNN model uses the following design formula:

$$\dot{E}(t) = -\lambda E(t) - \xi \int_0^t E(\tau)\,\mathrm{d}\tau, \tag{9}$$

where the definitions of $E(t) \in \mathbb{R}^{n \times n}$, $\lambda > 0$ and $\xi > 0$ are the same as in (7). NTZNN performs well in the presence of perturbation, even if the noise is additive noise that increases linearly with time. But since the NTZNN model lacks an activation function, which is an important element contributing to the convergence speed of the ZNN model, it can only achieve exponential convergence. This motivates us to propose the NTFZNN model to realize both the noise-tolerant and finite-time convergent properties when solving the Lyapunov equation.

In-depth research on ZNN models has revealed that the choice of activation functions plays an important role in the convergence performance of ZNN models and their variations. Furthermore, nonlinear activation functions usually speed up the convergence process of ZNN models. In particular, some functions can even realize finite-time convergence, one of them being the well-known Sign-Bi-Power (SBP) function [33], whose equation is

$$\psi_{\mathrm{SBP}}(e_{ij}) = \frac{1}{2}\mathrm{sgn}^{p}(e_{ij}) + \frac{1}{2}\mathrm{sgn}^{1/p}(e_{ij}), \tag{10}$$

with $p \in (0,1)$, and the definition of $\mathrm{sgn}^{p}(e_{ij})$ is as follows:

$$\mathrm{sgn}^{p}(e_{ij}) = \begin{cases} e_{ij}^{p}, & e_{ij} > 0, \\ 0, & e_{ij} = 0, \\ -|e_{ij}|^{p}, & e_{ij} < 0, \end{cases}$$

where $e_{ij}$ is the $ij$th element of the error matrix $E(t)$. Following the idea of the SBP activation function, researchers found that the SBP function is computationally intensive due to the power operations with floating-point exponents, which not only increases the computation burden but also requires a more complex structure. A novel Simplified Finite-time convergent Activation Function (SFAF) based on the SBP function was thus proposed [34], whose definition is

$$\psi_{\mathrm{SFAF}}(e_{ij}) = \beta_1 \mathrm{sgn}^{p}(e_{ij}) + \beta_2 e_{ij}, \tag{11}$$

where $\beta_1 > 0$ and $\beta_2 > 0$ are design parameters. Clearly, SFAF has a simpler structure than the traditional SBP function. Besides, SFAF can accelerate the ZNN model to finite-time convergence with an even lower upper bound of convergence time.
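For illustration, the two activation functions (10) and (11) can be written compactly as below. This is a hypothetical NumPy sketch (function and parameter names are ours, not the paper's); the default values of $p$, $\beta_1$ and $\beta_2$ are arbitrary choices:

```python
import numpy as np

def sgn_p(e, p):
    """Element-wise sgn^p(e): |e|^p carrying the sign of e (0 at e = 0)."""
    return np.sign(e) * np.abs(e) ** p

def psi_sbp(e, p=0.5):
    """Sign-Bi-Power activation (10): 0.5*sgn^p(e) + 0.5*sgn^(1/p)(e)."""
    return 0.5 * sgn_p(e, p) + 0.5 * sgn_p(e, 1.0 / p)

def psi_sfaf(e, p=0.5, beta1=1.0, beta2=1.0):
    """Simplified finite-time activation (11): beta1*sgn^p(e) + beta2*e."""
    return beta1 * sgn_p(e, p) + beta2 * e
```

Both functions are odd and monotonically increasing, as required by Lemma 2 in Section 4; SFAF drops the second fractional-power evaluation (exponent $1/p$), which is the source of its lower computational cost.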

For concise expression, in the remainder of this paper, the NTFZNN models using the SBP and SFAF activation functions will be named the NTFZNNSBP model and the NTFZNNSFAF model, respectively. Another thing that should be pointed out is that our NTFZNN model only uses SBP (10) and SFAF (11) as activation functions in the following, in order to achieve finite-time convergence.
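To make the construction concrete, the explicit model (8) with the SFAF activation can be integrated numerically. The following is a minimal Euler-discretization sketch, not the analog-circuit implementation the paper targets; the coefficient matrices $M(t)$ and $Q(t)$, the gains $\lambda = \xi = 10$, and the step size are our illustrative choices:

```python
import numpy as np

def sgn_p(e, p):
    return np.sign(e) * np.abs(e) ** p

def psi_sfaf(e, p=0.5, b1=1.0, b2=1.0):   # SFAF activation (11)
    return b1 * sgn_p(e, p) + b2 * e

n, lam, xi = 2, 10.0, 10.0
dt, T = 1e-4, 2.0

M  = lambda t: np.array([[2 + np.sin(t), 0.0], [0.0, 2 + np.cos(t)]])
dM = lambda t: np.array([[np.cos(t), 0.0], [0.0, -np.sin(t)]])
Q  = lambda t: 2.0 * np.eye(n)            # constant symmetric PD choice
dQ = lambda t: np.zeros((n, n))

vec = lambda A: A.reshape(-1, order="F")
I = np.eye(n)
kron_sum = lambda A: np.kron(A, I) + np.kron(I, A)   # A (+) A

x = np.zeros(n * n)        # vec(X(0)) = 0
integ = np.zeros(n * n)    # running integral of A1(e(tau))

for k in range(int(T / dt)):
    t = k * dt
    N1, dN1 = kron_sum(M(t).T), kron_sum(dM(t).T)
    e = N1 @ x + vec(Q(t))                 # error vector
    a1 = psi_sfaf(e)
    rhs = -dN1 @ x - vec(dQ(t)) - lam * a1 - xi * psi_sfaf(e + lam * integ)
    x = x + dt * np.linalg.solve(N1, rhs)  # explicit Euler step of (8)
    integ = integ + dt * a1

X = x.reshape(n, n, order="F")
residual = np.linalg.norm(M(T).T @ X + X @ M(T) + Q(T))
```

In this run the residual $\|M^{\mathrm{T}}X + XM + Q\|_F$ drops from $\|Q(0)\|_F$ to a value limited only by the discretization, consistent with the finite-time convergence analyzed in Section 4.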

4. Theoretical analysis of noise-tolerance finite-time convergent Zeroing Neural Network

In this section, we focus on proving that NTFZNN is globally stable, computing the convergence time upper bound, and analyzing the inherent noise suppression ability of the NTFZNN model (7). All of these theoretical analyses eventually illustrate the superiority of our NTFZNN model. As shown in Section 3, the NTFZNN model formula (7) is equivalent to formula (8), and the latter is mainly used in the following analyses and experiments. In addition, the error matrix $E(t)$ in (7) becomes the error vector $e(t) = N_1(t)x(t) + N_2(t)$ in (8). Therefore, the design formula of the NTFZNN model (7) can be transformed into the vector form:

$$\dot{e}(t) = -\lambda \mathcal{A}_1(e(t)) - \xi \mathcal{A}_2\Big(e(t) + \lambda \int_0^t \mathcal{A}_1(e(\tau))\,\mathrm{d}\tau\Big), \tag{12}$$

where $e(t) \in \mathbb{R}^{n^2}$ is obtained by stacking all the column vectors of $E(t)$ into a single column vector. In addition, due to the vectorization


of (7), $\mathcal{A}_1(\cdot): \mathbb{R}^{n^2} \to \mathbb{R}^{n^2}$ and $\mathcal{A}_2(\cdot): \mathbb{R}^{n^2} \to \mathbb{R}^{n^2}$ become vector-valued mapping arrays with their elements unchanged. Applying the same technique as above, the ordinary ZNN model (3) can be transformed into the following vector-valued form:

$$\dot{e}(t) = -\lambda \mathcal{A}(e(t)), \tag{13}$$

where $\mathcal{A}(\cdot): \mathbb{R}^{n^2} \to \mathbb{R}^{n^2}$ becomes a vector mapping array with the same elements. In order to lay the foundations for the ensuing theoretical analyses, we have the following lemmas.

Lemma 1 [35]. Consider a system $\dot{x} = f(x)$, $x \in \mathbb{R}^n$, $f: \mathbb{R}^n \to \mathbb{R}^n$, which has an equilibrium point $x = 0$, meaning $f(0) = 0$. Besides, there exists a Lyapunov candidate function $V: \mathbb{R}^n \to \mathbb{R}$ that satisfies 1) $V(x) = 0$ if and only if $x = 0$; 2) $V(x) > 0$ for all $x \neq 0$; 3) $\dot{V}(x) < 0$ for all $x \neq 0$. Then, such a system is said to be globally asymptotically stable.

Lemma 2 [36]. Consider the smoothly time-varying error vector $e(t) \in \mathbb{R}^{n^2}$ in (13). If a monotonically increasing odd mapping array $\mathcal{A}(\cdot)$ is used, then the ordinary ZNN dynamic (13) is globally stable and its error vector $e(t)$ converges to the equilibrium point $e(t) = 0$ starting from any initial state $e(0)$.

4.1. Convergence analysis

In this subsection, we prove the global asymptotic stability of the NTFZNN model (7); the convergence time upper bound of the NTFZNN model (7) for solving the time-varying Lyapunov equation will also be computed.

Theorem 1. When solving the time-varying Lyapunov Eq. (1), the NTFZNN model (12) is globally stable, meaning that the state solution of NTFZNN globally converges to the accurate solution of (1).

Proof. The design formula of the NTFZNN model (12) gives every element of $e(t)$ the same inherent dynamics, and thus we only need to consider the $i$th element. This subsystem, for each $i \in \{1,2,3,\ldots,n^2\}$, can be written as:

$$\dot{e}_i(t) = -\lambda \psi_1(e_i(t)) - \xi \psi_2\Big(e_i(t) + \lambda \int_0^t \psi_1(e_i(\tau))\,\mathrm{d}\tau\Big), \tag{14}$$

where $\psi_1(\cdot): \mathbb{R} \to \mathbb{R}$ and $\psi_2(\cdot): \mathbb{R} \to \mathbb{R}$ denote the elements of $\mathcal{A}_1(\cdot)$ and $\mathcal{A}_2(\cdot)$, respectively. We define a new variable $g_i(t)$ to facilitate the following analysis:

$$g_i(t) = e_i(t) + \lambda \int_0^t \psi_1(e_i(\tau))\,\mathrm{d}\tau. \tag{15}$$

Then, the time derivative of $g_i(t)$ (15) can be easily obtained:

$$\dot{g}_i(t) = \dot{e}_i(t) + \lambda \psi_1(e_i(t)). \tag{16}$$

Therefore, substituting (15) and (16) into (14), one gets the following simplified dynamics:

$$\dot{g}_i(t) = -\xi \psi_2(g_i(t)).$$

This formula has the same form as the ordinary ZNN model (3), whose properties have already been well studied. According to Lemma 2, if the activation function $\psi_2(\cdot)$ is odd and monotonically increasing, $g_i(t)$ will converge to zero as the neural network evolves. Since SBP (10) and SFAF (11) both satisfy this condition, the NTFZNN model is globally stable with respect to $g_i(t)$ for all $i \in \{1,2,\ldots,n^2\}$.

To prove the global asymptotic stability of NTFZNN with respect to the variable $e_i(t)$, we define an auxiliary Lyapunov function for the $i$th element of $e(t)$:

$$L_i(t) = \frac{k}{2}e_i^2(t) + \frac{1}{2}g_i^2(t),$$

where $k > 0$ is a parameter to be specified later, and we denote $L_0 = L_i(0) = k e_i^2(0)/2 + g_i^2(0)/2$, with $e_i(0)$ and $g_i(0)$ being the initial values of $e_i(t)$ and $g_i(t)$, respectively. Clearly, $L_0$ is a known value because $e_i(0)$ and $g_i(0)$ are known. It also holds that $L_i(t)$ is positive definite, because $L_i(t) > 0$ for any $e_i(t) \neq 0$ or $g_i(t) \neq 0$, and $L_i(t) = 0$ only when $e_i(t) = g_i(t) = 0$. Then, we calculate the time derivative of $L_i(t)$:

$$\begin{aligned} \frac{\mathrm{d}L_i(t)}{\mathrm{d}t} &= k e_i(t)\dot{e}_i(t) + g_i(t)\dot{g}_i(t) \\ &= k e_i(t)\big[\dot{g}_i(t) - \lambda \psi_1(e_i(t))\big] - \xi g_i(t)\psi_2(g_i(t)) \\ &= -k\xi e_i(t)\psi_2(g_i(t)) - k\lambda e_i(t)\psi_1(e_i(t)) - \xi g_i(t)\psi_2(g_i(t)). \end{aligned} \tag{17}$$

To prove the conclusion of Theorem 1, we are going to prove that $\dot{L}_i(t) \leq 0$ always holds for $t \in [0, +\infty)$. Firstly, suppose there exists a time instant when $L_i(t) \leq L_0$; then the following inequalities hold:

$$\frac{k}{2}e_i^2(t) \leq L_0 \quad \text{and} \quad \frac{1}{2}g_i^2(t) \leq L_0.$$

The above inequalities are equivalent to

$$|e_i(t)| \leq \sqrt{2L_0/k} \quad \text{and} \quad |g_i(t)| \leq \sqrt{2L_0}.$$

Let $\mathcal{V}_e$ and $\mathcal{V}_g$ represent the domains of $e_i(t)$ and $g_i(t)$, respectively, and we obtain:

$$\mathcal{V}_e = \big\{ e_i(t) \in \mathbb{R},\ |e_i(t)| \leq \sqrt{2L_0/k} \big\},$$
$$\mathcal{V}_g = \big\{ g_i(t) \in \mathbb{R},\ |g_i(t)| \leq \sqrt{2L_0} \big\}.$$

For activation functions $\psi_1(\cdot)$ and $\psi_2(\cdot)$ with continuous derivatives, using the classic mean-value theorem within the domain restricted by $\mathcal{V}_g$, we get:

$$\frac{\psi_2(g_i(t)) - \psi_2(0)}{g_i(t) - 0} = \frac{\partial \psi_2(g_i(\delta))}{\partial g_i}\bigg|_{g_i(\delta) \in \mathcal{V}_g}. \tag{18}$$

Because $\psi_2(\cdot)$ is an odd function, $\psi_2(0) = 0$ holds. Combining this with the monotonically increasing feature of $\psi_2(\cdot)$, we obtain $\partial \psi_2(g_i(t))/\partial g_i > 0$. Therefore, (18) leads to the following inequality:

$$|\psi_2(g_i(t))| \leq C_0 |g_i(t)|,$$

where $C_0 = \max\{\partial \psi_2(g_i(t))/\partial g_i\}\big|_{g_i(t) \in \mathcal{V}_g} > 0$ is bounded, since $\mathcal{V}_g$ is a closed interval and $\partial \psi_2(g_i(t))/\partial g_i$ is continuous on this interval. Therefore, we can obtain

$$|e_i(t)\psi_2(g_i(t))| \leq |e_i(t)| \cdot |\psi_2(g_i(t))| \leq C_0 |e_i(t)| \cdot |g_i(t)|. \tag{19}$$

Similar to the derivation of $C_0$, constants $C_1$ and $C_2$ can accordingly be obtained by applying the mean-value method:

$$|\psi_1(e_i(t))| \geq C_1 |e_i(t)|, \qquad |\psi_2(g_i(t))| \geq C_2 |g_i(t)|, \tag{20}$$

where $C_1$ and $C_2$ are defined as

$$C_1 = \min\{\partial \psi_1(e_i(t))/\partial e_i\}\big|_{e_i(t) \in \mathcal{V}_e} > 0,$$
$$C_2 = \min\{\partial \psi_2(g_i(t))/\partial g_i\}\big|_{g_i(t) \in \mathcal{V}_g} > 0.$$

Combining (17) with (19) and (20), the following inequality can be derived:


$$\begin{aligned} \frac{\mathrm{d}L_i(t)}{\mathrm{d}t} &= -k\xi e_i(t)\psi_2(g_i(t)) - k\lambda e_i(t)\psi_1(e_i(t)) - \xi g_i(t)\psi_2(g_i(t)) \\ &\leq k\xi |e_i(t)\psi_2(g_i(t))| - k\lambda e_i(t)\psi_1(e_i(t)) - \xi g_i(t)\psi_2(g_i(t)) \\ &\leq k\xi C_0 |e_i(t)| \cdot |g_i(t)| - k\lambda C_1 e_i^2(t) - \xi C_2 g_i^2(t) \\ &= -k\Big(\sqrt{\lambda C_1}\,|e_i(t)| - \frac{\xi C_0}{2\sqrt{\lambda C_1}}|g_i(t)|\Big)^2 - k\Big(\frac{\xi C_2}{k} - \frac{\xi^2 C_0^2}{4\lambda C_1}\Big) g_i^2(t). \end{aligned} \tag{21}$$

According to the above conclusion (21), we can guarantee $\dot{L}_i(t) \leq 0$ when $L_i(t) \leq L_0$, provided that

$$\frac{\xi C_2}{k} - \frac{\xi^2 C_0^2}{4\lambda C_1} \geq 0 \ \text{ and } \ k > 0 \iff 0 < k \leq \frac{4\lambda C_1 C_2}{\xi C_0^2}.$$

Obviously, we can always find a $k$ that satisfies the above condition. Thus, $\dot{L}_i(t) \leq 0$ is guaranteed whenever $L_i(t) \leq L_0$. Furthermore, at the starting time instant $t = 0$, $L_i(0) \leq L_0$ holds, meaning $\dot{L}_i(0) \leq 0$. We then conclude that $L_i(t)$ is kept in $[0, L_0]$ for all time, i.e., $\dot{L}_i(t) \leq 0$, once $k$ is set properly. Following the above process of proof, we have $\dot{L}_i(t) \leq 0$ for any time instant $t \in [0, +\infty)$.

The above analysis has proved the negative semi-definiteness of $\dot{L}_i(t)$. In addition, it follows from inequality (21) that the upper bound of $\dot{L}_i(t)$ equals zero only when $e_i(t) = g_i(t) = 0$, meaning $\dot{L}_i(t) < 0$ holds when $e_i(t) \neq 0$ or $g_i(t) \neq 0$. Therefore, $\dot{L}_i(t)$ is negative definite. According to the Lyapunov stability theory in Lemma 1, we finally conclude that all subsystems of (12) are globally stable. Therefore, the proof of Theorem 1 is now complete.

Before obtaining the convergence time upper bound of the NTFZNN model (7), we first analyze this upper bound for the ordinary ZNN model when using the two finite-time activation functions SBP (10) and SFAF (11).

Theorem 2. The error vector $e(t)$ of the ordinary ZNN model (13) is able to converge to zero in finite time with finite-time convergent activation functions. When SBP (10) is used in $\mathcal{A}(\cdot)$, the convergence time upper bound $t_{f1}$ satisfies $t_{f1} \leq 2|\varrho_{\max}(0)|^{1-p} / (\lambda(1-p))$. When SFAF (11) is used in $\mathcal{A}(\cdot)$, the upper bound of the convergence time is

$$t_{f2} = \frac{1}{\alpha_2(1-p)} \ln \frac{\alpha_2 |\varrho_{\max}(0)|^{1-p} + \alpha_1}{\alpha_1},$$

where $\alpha_1 = \lambda\beta_1 > 0$, $\alpha_2 = \lambda\beta_2 > 0$, and $\varrho(t) \in \mathbb{R}^{n^2}$ denotes the error vector $e(t)$. In addition, $\varrho_{\max}(t)$ is the element of $\varrho(t)$ which has the largest absolute initial error value, i.e., $|\varrho_{\max}(0)| = \max\{|\varrho_i(0)|,\ \forall i \in \{1,2,\ldots,n^2\}\}$.

Proof. Consider an auxiliary Lyapunov function candidate $u_i(t) = e_i^2(t)$ for the $i$th subsystem of (13).

1) When SBP (10) is applied in $\mathcal{A}(\cdot)$, the proof of the convergence time upper bound $t_{f1} \leq 2|\varrho_{\max}(0)|^{1-p} / (\lambda(1-p))$ for the ZNN model (13) can be found in [37].

2) Under the condition that the ordinary ZNN model applies SFAF (11) in $\mathcal{A}(\cdot)$, to calculate the convergence time $t_{f2}$, we plug (11) into the $\varrho_{\max}(t)$ subsystem of the original design formula (13):

$$\dot{\varrho}_{\max}(t) = -\lambda\big(\beta_1 \mathrm{sgn}^p(\varrho_{\max}(t)) + \beta_2 \varrho_{\max}(t)\big) = -\alpha_1 \mathrm{sgn}^p(\varrho_{\max}(t)) - \alpha_2 \varrho_{\max}(t). \tag{22}$$

Solving the above differential equation should be considered under different conditions. First, when $\varrho_{\max}(0) > 0$, (22) leads to the following equality:

$$\dot{\varrho}_{\max}(t) = -\alpha_1 (\varrho_{\max}(t))^p - \alpha_2 \varrho_{\max}(t),$$

which is equivalent to the differential equation

$$(\varrho_{\max}(t))^{-p} \frac{\mathrm{d}\varrho_{\max}(t)}{\mathrm{d}t} + \alpha_2 (\varrho_{\max}(t))^{1-p} + \alpha_1 = 0.$$

Defining an auxiliary function $h(t) = (\varrho_{\max}(t))^{1-p}$ and substituting it into the above equation, it can be rewritten as

$$\frac{\mathrm{d}h(t)}{\mathrm{d}t} + (1-p)\alpha_2 h(t) + (1-p)\alpha_1 = 0.$$

Solving the above differential equation, we have

$$h(t) = \Big(\frac{\alpha_1}{\alpha_2} + h(0)\Big) \exp\big(-(1-p)\alpha_2 t\big) - \frac{\alpha_1}{\alpha_2}.$$

Clearly, $\varrho_{\max}(t)$ decreases to zero at $t = t_{f2}$. Hence, we have $h(t_{f2}) = (\varrho_{\max}(t_{f2}))^{1-p} = 0$. Setting $t = t_{f2}$ in the above equality leads to

$$t_{f2} = \frac{1}{\alpha_2(1-p)} \ln \frac{\alpha_2 (\varrho_{\max}(0))^{1-p} + \alpha_1}{\alpha_1} = \frac{1}{\alpha_2(1-p)} \ln \frac{\alpha_2 |\varrho_{\max}(0)|^{1-p} + \alpha_1}{\alpha_1}.$$

Then, we consider the case where $\varrho_{\max}(0) \leq 0$. Applying the same method as in the above analysis, we still get

$$t_{f2} = \frac{1}{\alpha_2(1-p)} \ln \frac{\alpha_2 |\varrho_{\max}(0)|^{1-p} + \alpha_1}{\alpha_1}.$$

Summarizing the conclusions of the above two cases, the proof of Theorem 2 is complete.
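The closed-form bound for $t_{f2}$ can be sanity-checked numerically. The sketch below (our illustrative check; $\varrho_{\max}(0) = 1$, $p = 0.5$ and $\alpha_1 = \alpha_2 = 1$ are arbitrary choices) integrates (22) by Euler's method and compares the first time $|\varrho_{\max}|$ falls below a small threshold against the analytic $t_{f2}$:

```python
import numpy as np

p, a1, a2 = 0.5, 1.0, 1.0
rho0 = 1.0

# analytic convergence time from Theorem 2
tf2 = np.log((a2 * abs(rho0) ** (1 - p) + a1) / a1) / (a2 * (1 - p))

def sgn_p(v, q):
    return np.sign(v) * np.abs(v) ** q

dt, rho, t_hit = 1e-5, rho0, None
for k in range(int(2.0 / dt)):
    rho += dt * (-a1 * sgn_p(rho, p) - a2 * rho)   # dynamic (22)
    if abs(rho) <= 1e-6:
        t_hit = (k + 1) * dt
        break

# the trajectory reaches (numerical) zero no later than the analytic bound
```

With these parameters $t_{f2} = 2\ln 2 \approx 1.386$, and the simulated trajectory crosses the threshold slightly earlier, as the bound predicts.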

From the proof of Theorem 1, we know that the NTFZNN model (7) is built on top of the ordinary ZNN design formula (3). With Theorem 2, the convergence time upper bound of the NTFZNN model (7) can now be analyzed.

Theorem 3. The design formula of the NTFZNN model (12) is finite-time convergent, and its convergence time upper bound is
$$t_{SBP} \le \frac{2(\mu+\xi)}{\mu\xi(1-p)}|\rho_{\max}(0)|^{1-p}$$
when using SBP (10) in $\mathcal{A}_1(\cdot)$ and $\mathcal{A}_2(\cdot)$, where the definition of $\rho_{\max}(0)$ is the same as in Theorem 2. When the SFAF activation function is applied in $\mathcal{A}_1(\cdot)$ and $\mathcal{A}_2(\cdot)$, the upper bound of the convergence time is
$$t_{SFAF} \le \frac{\mu+\xi}{\mu\xi b_2(1-p)}\ln\frac{b_2|\rho_{\max}(0)|^{1-p} + b_1}{b_1},$$
where the definitions of $b_1$ and $b_2$ are the same as in (11).

Proof. 1) First, let us investigate the NTFZNN-SBP model. The dynamic formula of the NTFZNN model (12) can be reformulated as $\dot g(t) = -\xi\mathcal{A}_2(g(t))$, where $g(t) = e(t) + \mu\int_0^t \mathcal{A}_1(e(s))\,ds \in \mathbb{R}^{n^2}$. Evidently, this dynamic has the same form as the ordinary ZNN formula (13). Then, according to the conclusion of Theorem 2, $g(t)$ converges to zero in finite time $t_1$ with
$$t_1 \le \frac{2}{\xi(1-p)}|\rho_{\max}(0)|^{1-p},$$
where $|\rho_{\max}(0)| = \max\{|g_i(0)|\} = \max\{|e_i(0)|\},\ \forall i \in \{1,2,\ldots,n^2\}$. After the time period $t_1$, $g(t)$ stays at its equilibrium point, meaning


84 Z. Hu et al. / Neurocomputing 418 (2020) 79–90

$\dot g(t) = 0$. The error vector then satisfies $\dot e(t) = -\mu\mathcal{A}_1(e(t))$. Again, this equation has the same form as the ordinary ZNN model (13). It is known that for the NTFZNN model (12), the absolute value $|e_i(t)|$ always decreases until it reaches zero; thus $|e_i(t_1)| \le |e_i(0)|,\ \forall i \in \{1,2,\ldots,n^2\}$ holds true. According to Theorem 2, $e(t)$ must then converge to zero within time $t_2$:
$$t_2 \le \frac{2}{\mu(1-p)}|\rho_{\max}(0)|^{1-p}.$$

In summary, $e(t)$ of NTFZNN-SBP eventually converges to zero after two time periods, whose upper bounds are $t_1$ and $t_2$, respectively. We can now conclude that the NTFZNN-SBP model for the time-varying Lyapunov equation converges in finite time $t_{SBP}$:
$$t_{SBP} = t_1 + t_2 \le \frac{2(\mu+\xi)}{\mu\xi(1-p)}|\rho_{\max}(0)|^{1-p}.$$

The first part of Theorem 3 has thus been proved.

2) For the NTFZNN-SFAF model, similarly to the proof of the first part of Theorem 3, its convergence process can still be split into two stages with the aid of the auxiliary function $g(t) = e(t) + \mu\int_0^t \mathcal{A}_1(e(s))\,ds$. Following Theorem 2, in the first stage, $g(t)$ converges to zero in finite time $t_3$:
$$t_3 = \frac{1}{\xi b_2(1-p)}\ln\frac{b_2|\rho_{\max}(0)|^{1-p} + b_1}{b_1}.$$

Then, in the second stage, $e(t)$ converges to zero in finite time $t_4$ with
$$t_4 \le \frac{1}{\mu b_2(1-p)}\ln\frac{b_2|\rho_{\max}(0)|^{1-p} + b_1}{b_1}.$$

Finally, we conclude that NTFZNN-SFAF is finite-time convergent with the convergence time upper bound
$$t_{SFAF} = t_3 + t_4 \le \frac{\mu+\xi}{\mu\xi b_2(1-p)}\ln\frac{b_2|\rho_{\max}(0)|^{1-p} + b_1}{b_1}.$$
The proof of Theorem 3 is now complete.
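The two-stage argument can likewise be exercised numerically on one scalar subsystem of (12). The sketch below uses illustrative parameters and the assumed SFAF form $\psi(x) = b_1|x|^p\mathrm{sgn}(x) + b_2x$, and checks that the error is negligible by the bound time $t_{SFAF}$.

```python
import math

def sfaf(x, b1=1.0, b2=1.0, p=0.4):
    """Assumed SFAF form from (11): psi(x) = b1*|x|^p*sgn(x) + b2*x."""
    return b1 * math.copysign(abs(x) ** p, x) + b2 * x

# Illustrative parameters (assumptions).
mu, xi, b1, b2, p = 2.0, 1.0, 1.0, 1.0, 0.4
e0 = 2.0
t_sfaf = (mu + xi) / (mu * xi * b2 * (1 - p)) * \
         math.log((b2 * abs(e0) ** (1 - p) + b1) / b1)

# Euler integration of one scalar subsystem of (12):
#   de/dt = -mu*psi(e) - xi*psi(e + mu * int_0^t psi(e) ds)
dt, e, integ, t = 1e-4, e0, 0.0, 0.0
e_at_bound = None
while t <= 1.2 * t_sfaf:
    if e_at_bound is None and t >= t_sfaf:
        e_at_bound = e  # error sampled at the theoretical bound time
    de = -mu * sfaf(e) - xi * sfaf(e + mu * integ)
    integ += dt * sfaf(e)
    e += dt * de
    t += dt
```

In practice the two stages overlap, so the actual convergence is faster than the sequential bound suggests.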

4.2. Robustness analysis

In this subsection, we investigate the noise suppression ability of the NTFZNN model (12). Consider an unknown additive noise $\Delta N(t) \in \mathbb{R}^{n^2}$ added to the vector form of the NTFZNN model, forming the following inherent dynamics:
$$\dot e(t) = -\mu\mathcal{A}_1(e(t)) - \xi\mathcal{A}_2\!\left(e(t) + \mu\int_0^t \mathcal{A}_1(e(s))\,ds\right) + \Delta N(t). \quad (23)$$
To lay the basis for the following analysis in this subsection, Eq. (23) is called the dynamics of the noise-polluted NTFZNN model.

Theorem 4. If the additive noise $\Delta N(t)$ is constant, then the residual error $e(t)$ of the noise-polluted NTFZNN model (23) globally converges to 0 as the neural network evolves.

Proof. Because the noise $\Delta N(t)$ is constant, we denote it by $\Delta N$; the $i$th subsystem of (23) is then
$$\dot e_i(t) = -\mu\psi_1(e_i(t)) - \xi\psi_2\!\left(e_i(t) + \mu\int_0^t \psi_1(e_i(s))\,ds\right) + d_i, \quad (24)$$
where $d_i$ is the $i$th element of $\Delta N$. We introduce the auxiliary function $g_i(t)$ defined in (15). Substituting $g_i(t)$ and (16) into (24) leads to
$$\dot g_i(t) = -\xi\psi_2(g_i(t)) + d_i.$$

The following Lyapunov function candidate is designed to investigate the stability of (24):
$$v_i(t) = \bigl(\xi\psi_2(g_i(t)) - d_i\bigr)^2/2.$$
Since $v_i(t) \ge 0$ and $v_i(t) = 0$ only when $\dot g_i(t) = 0$, $v_i(t)$ is clearly a positive definite function. Its time derivative is
$$\frac{dv_i}{dt} = \bigl(\xi\psi_2(g_i(t)) - d_i\bigr)\,\xi\,\frac{\partial\psi_2(g_i(t))}{\partial g_i}\dot g_i(t) = -\xi\frac{\partial\psi_2(g_i(t))}{\partial g_i}\bigl(\xi\psi_2(g_i(t)) - d_i\bigr)^2.$$
The conclusion $\partial\psi_2(g_i(t))/\partial g_i > 0$ follows directly from the odd and monotonically increasing property of the activation functions. Thus $\dot v_i(t) \le 0$, and $\dot v_i(t) = 0$ if and only if $\xi\psi_2(g_i(t)) - d_i = 0$, implying that $\dot v_i(t)$ is negative definite. We therefore obtain that $v_i(t)$ always converges to 0 according to Lemma 1. With $v_i(t)$ converging to zero, $\xi\psi_2(g_i(t)) - d_i$ converges to zero too, i.e., $\lim_{t\to+\infty}\dot g_i(t) = -\xi\psi_2(g_i(t)) + d_i = 0$.

Now take $\dot g_i(t) = \dot e_i(t) + \mu\psi_1(e_i(t))$ into account; since $\lim_{t\to+\infty}\dot g_i(t) = 0$, it reduces to the following equality as $t \to +\infty$:
$$\dot e_i(t) = -\mu\psi_1(e_i(t)).$$
The above equality is the same as the $i$th subsystem of the ordinary ZNN model (13). Obviously, $e_i(t)$ converges to zero as $t \to +\infty$ according to Lemma 2.

Finally, combining the above analyses, we come to the conclusion that the NTFZNN model (12) is globally stable even when faced with unknown constant additive noise. Theorem 4 has now been proved.
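A quick numerical illustration of Theorem 4 (a sketch with illustrative parameters and the assumed SFAF form $\psi(x) = b_1|x|^p\mathrm{sgn}(x) + b_2x$, not the paper's experiment): integrating one scalar subsystem (24) under a constant noise drives $e_i$ back to zero while the integral term absorbs the noise.

```python
import math

def sfaf(x, b1=1.0, b2=1.0, p=0.4):
    """Assumed SFAF form from (11)."""
    return b1 * math.copysign(abs(x) ** p, x) + b2 * x

# Illustrative parameters and constant noise (assumptions).
mu, xi, dbar = 3.0, 2.0, 2.0
dt, T = 1e-4, 10.0

# Euler integration of (24): de/dt = -mu*psi(e) - xi*psi(e + mu*int psi(e)) + d.
e, integ, t = 2.0, 0.0, 0.0
while t < T:
    de = -mu * sfaf(e) - xi * sfaf(e + mu * integ) + dbar
    integ += dt * sfaf(e)
    e += dt * de
    t += dt
```

At steady state the inner term satisfies $\xi\psi_2(g_i) \approx d_i$, exactly as the proof predicts, so the residual error itself vanishes.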

The results of Theorem 4 show that the polluted NTFZNN model (23) can still converge to the accurate solution under constant additive noise, which indicates excellent noise resistance. However, time-varying unknown additive noises are more common than this specific type, and thus we investigate them in the following theorem.

Theorem 5. Suppose the unknown time-varying additive noise $\Delta N(t)$ in the disturbed NTFZNN model (23) has a continuous time derivative that is bounded at any time instant $t \ge 0$. Then the computation error $\|e(t)\|_2$ of the disturbed NTFZNN model (23) converges to the interval $[0,\ n|\dot d_{\max}(t)|/(\mu\xi\varrho)]$ as $t \to +\infty$, where $d_i(t)$ is the $i$th element of $\Delta N(t)$, $|\dot d_{\max}(t)|$ is the upper bound of $|\dot d_i(t)|$, $\varrho_i = |\psi_1(e_i(t))|/|e_i(t)| \ge 1$, and $\varrho = \min\{\varrho_i \mid i \in \{1,2,\ldots,n^2\}\}$.

Proof. Consider an auxiliary function $u_i(t) = g_i^2(t)/2 = |g_i(t)|^2/2$ for the $i$th element of $g(t) = e(t) + \mu\int_0^t \mathcal{A}_1(e(s))\,ds$. The following dynamics are easily obtained from the $i$th subsystem of (23):
$$\dot g_i(t) = -\xi\psi_2(g_i(t)) + d_i(t). \quad (25)$$
Therefore, the time derivative of $u_i(t)$ is
$$\dot u_i(t) = \dot g_i(t)g_i(t) = \bigl(-\xi\psi_2(g_i(t)) + d_i(t)\bigr)g_i(t).$$
Considering that $\psi_2(g_i(t))g_i(t) \ge 0$ holds true for any $g_i(t)$, we have $\dot u_i(t) \le 0$ when $d_i(t)g_i(t) \le 0$, meaning $|g_i(t)| = \sqrt{2u_i(t)}$ will not increase. In particular, in the case $d_i(t)g_i(t) \le 0$ with $|g_i(t)| \ne 0$, it follows from (25) that $|g_i(t)|$ always decreases, whether $d_i(t) = 0$ or $d_i(t) \ne 0$. In the other case, $d_i(t)g_i(t) > 0$, $|g_i(t)|$ may increase, which leads to a decrease of $|-\xi\psi_2(g_i(t)) + d_i(t)|$. However, this process stops when $-\xi\psi_2(g_i(t)) + d_i(t) = 0$, which makes $\dot u_i(t) = 0$ again. Thus, as $t \to +\infty$, $|g_i(t)|$ is bounded by


$$|g_i(t)| \le |\psi_2^{-1}(d_i(t)/\xi)|,$$
where $\psi_2^{-1}(\cdot)$ is the inverse function of $\psi_2(\cdot)$. Because $|\psi_2(x)| \ge |x|$, i.e., $|\psi_2^{-1}(x)| \le |x|$, holds for the activation functions used in this paper, the bound on $g_i(t)$ can be reformulated as
$$-|d_i(t)/\xi| \le g_i(t) \le |d_i(t)/\xi|. \quad (26)$$

We now prove that $\mu\psi_1(e_i(t))$ tends to fall into the interval $[-|\dot d_i(t)|/\xi,\ |\dot d_i(t)|/\xi]$ as $t \to +\infty$. Suppose there exists a time instant $t_1 > 0$ such that $g_i(t_1) = e_i(t_1) + \mu\int_0^{t_1}\psi_1(e_i(s))\,ds = S_1$ and $\mu\psi_1(e_i(t_1)) \notin [-|\dot d_i(t_1)|/\xi,\ |\dot d_i(t_1)|/\xi]$. Without loss of generality, assume $\mu\psi_1(e_i(t_1)) > |\dot d_i(t_1)|/\xi > 0$. If $e_i(t)$, starting from $e_i(t_1)$, keeps $\mu\psi_1(e_i(t)) > |\dot d_i(t)|/\xi$ over the following time period, then after a time period $t_2$ we have $g_i(t_1+t_2) = e_i(t_1+t_2) + \mu\int_0^{t_1+t_2}\psi_1(e_i(s))\,ds = S_2$. Calculating the difference between $S_1$ and $S_2$, we have
$$S_2 - S_1 = \bigl(e_i(t_1+t_2) - e_i(t_1)\bigr) + \mu\int_{t_1}^{t_1+t_2}\psi_1(e_i(s))\,ds \ge \mu\int_{t_1}^{t_1+t_2}\psi_1(e_i(s))\,ds - e_i(t_1). \quad (27)$$

Besides, from the above analysis, we obtain
$$|d_i(t_1+t_2)/\xi| - S_1 = \bigl|d_i(t_1)/\xi + [d_i(t_1+t_2)/\xi - d_i(t_1)/\xi]\bigr| - S_1 \le |d_i(t_1)/\xi| + \left|\int_{t_1}^{t_1+t_2}\bigl(\dot d_i(s)/\xi\bigr)\,ds\right| - S_1 \le |d_i(t_1)/\xi| + \int_{t_1}^{t_1+t_2}|\dot d_i(s)/\xi|\,ds - S_1. \quad (28)$$

It follows from (27) and (28) that
$$S_2 - |d_i(t_1+t_2)/\xi| \ge \int_{t_1}^{t_1+t_2}\bigl(\mu\psi_1(e_i(s)) - |\dot d_i(s)/\xi|\bigr)\,ds - \bigl(|d_i(t_1)/\xi| + e_i(t_1) + S_1\bigr).$$
From the previous assumption, $\mu\psi_1(e_i(t)) - |\dot d_i(t)/\xi| > 0$ holds true for $t \in [t_1, t_1+t_2]$, and $|d_i(t_1)/\xi| + e_i(t_1) + S_1$ is a given finite value for any given $t_1$. In addition, inequality (26) implies that $S_2 = g_i(t_1+t_2) \le |d_i(t_1+t_2)/\xi|$. Therefore, we have
$$\int_{t_1}^{t_1+t_2}\bigl(\mu\psi_1(e_i(s)) - |\dot d_i(s)/\xi|\bigr)\,ds \le |d_i(t_1)/\xi| + e_i(t_1) + S_1. \quad (29)$$

It is worth pointing out that the left side of (29) keeps increasing as $t_2$ increases, while the right side is a finite fixed value. Hence, letting $t_2 \to +\infty$, we conclude that $\lim_{t\to+\infty}\bigl(\mu\psi_1(e_i(t)) - |\dot d_i(t)/\xi|\bigr) = 0$, i.e., $\lim_{t\to+\infty}\mu\psi_1(e_i(t)) = |\dot d_i(t)/\xi|$. For the case when $\mu\psi_1(e_i(t_1)) < -|\dot d_i(t_1)|/\xi < 0$ and $\mu\psi_1(e_i(t)) < -|\dot d_i(t)|/\xi$ holds over the following time period, we obtain $\lim_{t\to+\infty}\mu\psi_1(e_i(t)) = -|\dot d_i(t)/\xi|$ in a similar way.

Combining the above analysis results: even if there exists a time instant $t$ such that $\mu\psi_1(e_i(t)) \notin [-|\dot d_i(t)|/\xi,\ |\dot d_i(t)|/\xi]$, $\mu\psi_1(e_i(t))$ still converges to the interval $[-|\dot d_i(t)|/\xi,\ |\dot d_i(t)|/\xi]$ as $t \to +\infty$. This conclusion shows that $e_i(t)$ converges to $[-|\dot d_i(t)|/(\mu\xi\varrho_i),\ |\dot d_i(t)|/(\mu\xi\varrho_i)]$ as $t \to +\infty$. Furthermore, $\|e(t)\|_2$ satisfies
$$0 \le \|e(t)\|_2 = \sqrt{\sum_{i=1}^{n^2}e_i^2(t)} \le n|e_{\max}(t)|,$$
where $|e_{\max}(t)|$ is the largest value among $|e_i(t)|,\ \forall i \in \{1,2,\ldots,n^2\}$. Finally, we conclude that the computation error $\|e(t)\|_2$ of the perturbed NTFZNN model (23) converges to the interval $[0,\ n|\dot d_{\max}(t)|/(\mu\xi\varrho)]$ as $t \to +\infty$. Theorem 5 has now been proved.

Remark 1. As illustrated in Theorem 5, when faced with various kinds of time-varying additive noises, the steady-state error $\|e(t)\|_2$ of the perturbed NTFZNN model (23) remains bounded as long as the noises have bounded time derivatives. Even if an additive noise does not satisfy this requirement, most such noises can be approximated by Fourier series that do. Therefore, the conclusion of Theorem 5 actually demonstrates the NTFZNN model's strong robustness against a wide range of additive noises.

5. Simulative verification and comparison

The theoretical analyses in the previous sections have laid a solid foundation for the NTFZNN model. In this section, we mainly focus on validating the convergence and noise suppression abilities of the NTFZNN model when applied to time-varying Lyapunov equation solving. Without loss of generality, the coefficients in the time-varying Lyapunov problem (1) are selected as
$$M(t) = \begin{bmatrix} -1 - \frac{1}{2}\cos(3t) & \frac{1}{2}\sin(3t) \\ \frac{1}{2}\sin(3t) & -1 + \frac{1}{2}\cos(3t) \end{bmatrix},\qquad Q(t) = \begin{bmatrix} \sin(3t) & \cos(3t) \\ -\cos(3t) & \sin(3t) \end{bmatrix}.$$

Because the coefficient matrices $M(t)$ and $Q(t)$ are given, we can calculate the theoretical solution $A(t)$ of Lyapunov Eq. (1); the $ij$th element $A_{ij}(t)$ of matrix $A(t)$ is written as follows:
$$A_{11}(t) = -\tfrac{1}{3}\sin(3t)(\cos(3t) - 2),$$
$$A_{12}(t) = -\tfrac{1}{6}(2\cos(3t) + 1)(\cos(3t) - 2),$$
$$A_{21}(t) = -\tfrac{1}{6}(2\cos(3t) - 1)(\cos(3t) + 2),$$
$$A_{22}(t) = \tfrac{1}{3}\sin(3t)(\cos(3t) + 2),$$
where the correctness of $A(t)$ can be validated by substituting it into problem (1). We therefore use $A(t)$ to verify model accuracy.
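The theoretical solution can be cross-checked numerically by vectorizing $M^T X + XM = -Q$ with Kronecker products and comparing the diagonal entries of the solved $X$ against the closed-form $A_{11}(t)$ and $A_{22}(t)$; the test instant below is arbitrary.

```python
import numpy as np

t = 0.8  # arbitrary test instant

M = np.array([[-1 - 0.5 * np.cos(3 * t), 0.5 * np.sin(3 * t)],
              [0.5 * np.sin(3 * t), -1 + 0.5 * np.cos(3 * t)]])
Q = np.array([[np.sin(3 * t), np.cos(3 * t)],
              [-np.cos(3 * t), np.sin(3 * t)]])

# Vectorize M^T X + X M = -Q: with column-major vec,
# vec(M^T X) = (I kron M^T) vec(X) and vec(X M) = (M^T kron I) vec(X).
n = 2
K = np.kron(np.eye(n), M.T) + np.kron(M.T, np.eye(n))
X = np.linalg.solve(K, -Q.flatten(order="F")).reshape((n, n), order="F")

residual = np.linalg.norm(M.T @ X + X @ M + Q)

# Closed-form diagonal entries stated in the text.
A11 = -np.sin(3 * t) * (np.cos(3 * t) - 2) / 3
A22 = np.sin(3 * t) * (np.cos(3 * t) + 2) / 3
```

Since $M(t)$ stays Hurwitz for all $t$, the Lyapunov operator is invertible and the solution is unique at every instant.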

In an ideal environment without internal and external disturbances, both NTFZNN-SBP and NTFZNN-SFAF converge to the accurate solution of the Lyapunov equation in finite time, as proved in Theorem 3. For the experiment, every element $e_{ij}(0)$ of the initial error matrix $E(0) \in \mathbb{R}^{2\times2}$ is randomly generated with $|e_{ij}(0)| \in [0,2]$. The other design parameters are selected as $\mu = 2$, $\xi = 1$ in the NTFZNN models, $p = 0.4$ in the $\psi_{SFAF}(\cdot)$ and $\psi_{SBP}(\cdot)$ activation functions, and $b_1 = b_2 = 1$ in $\psi_{SFAF}(\cdot)$. Fig. 1 shows that the residual error $\|M^T(t)X(t) + X(t)M(t) + Q(t)\|_F = \|e(t)\|_2$ of the two NTFZNN models, starting from the same initial state, converges to zero quickly and in finite time. It is worth mentioning that the convergence time upper bounds of these NTFZNN models can be calculated according to Theorem 3, which are derived as follows:

$$t_{SBP} \le \frac{2(2+1)}{2\times1\times(1-0.4)}\times2^{1-0.4} \approx 7.58\ \mathrm{s},$$
$$t_{SFAF} \le \frac{2+1}{2\times1\times(1-0.4)}\ln\frac{2^{1-0.4}+1}{1} \approx 2.31\ \mathrm{s},$$

where $t_{SBP}$ and $t_{SFAF}$ are the theoretical upper bounds on the convergence times of NTFZNN-SBP and NTFZNN-SFAF, respectively. It can be seen in Fig. 1 that $\|e(t)\|_2$ of these models takes about 3.5 s and 2.3 s to reach zero, which satisfies the conclusion. What is more, Fig. 2 illustrates the output transients of the NTFZNN-SFAF model, where the red dotted line denotes the theoretical solution and the blue


Fig. 1. Trajectory of residual error $\|M^T(t)X(t) + X(t)M(t) + Q(t)\|_F$ synthesized by the NTFZNN model (7) when using the SFAF (blue solid line) and SBP (red dotted line) activation functions. (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.)


solid line represents the neural network solution. Clearly, the NTFZNN-SFAF model tracks the accurate solution quickly and precisely. The performance of NTFZNN-SBP is similar and is thus omitted.
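Plugging the experiment's parameters into the bounds of Theorem 3 reproduces the quoted figures (the arithmetic only, under the same parameter choices):

```python
import math

# Parameters from the experiment: mu = 2, xi = 1, p = 0.4, b1 = b2 = 1,
# and the largest possible initial error |e_ij(0)| = 2.
mu, xi, p, b1, b2 = 2.0, 1.0, 0.4, 1.0, 1.0
rho0 = 2.0

t_sbp = 2 * (mu + xi) / (mu * xi * (1 - p)) * abs(rho0) ** (1 - p)
t_sfaf = (mu + xi) / (mu * xi * b2 * (1 - p)) * \
         math.log((b2 * abs(rho0) ** (1 - p) + b1) / b1)
```

The observed convergence times (about 3.5 s and 2.3 s) sit below these bounds, as they must.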

Next, we examine the performance of the perturbed NTFZNN model (23). This time we choose $\psi_{SFAF}(\cdot)$ as the activation function used in $\psi_1(\cdot)$ and $\psi_2(\cdot)$ because of its better performance. We have also conducted simulations using the ordinary ZNN model (4) and the NTZNN model (9) as a contrast group, where $\mathcal{A}(\cdot)$ in the ZNN model uses $\psi_{SFAF}(\cdot)$ too. The other parameters are set to $p = 0.4$, $\mu = 3$, $\xi = 2$ and $b_1 = b_2 = 1$. When facing the constant noise $d_i(t) = 2$, the residual error trajectories of the neural network models are depicted in Fig. 3, which shows that the residual errors $\|e(t)\|_2$ of the NTFZNN and NTZNN models decrease to zero with time, while that of the ordinary ZNN model stays at a very large level. These facts imply that the NTFZNN and NTZNN models can converge to the theoretical solution but the ordinary ZNN model cannot, which also verifies the conclusion of Theorem 4. To demonstrate the superior noise toleration ability of the NTFZNN model, we set the additive noise to the linear type $d_i(t) = 2t$, with design parameters $\mu = 5$, $\xi = 2$ accordingly; the simulation results are shown in Fig. 4. As demonstrated in Fig. 4, even under such a large perturbation, the NTFZNN model maintains its effectiveness with a very small stable residual error $\|e(t)\|_2$ near zero, while the stable residual error of the NTZNN model is about 2 and that of the ordinary ZNN model is even larger and increasing. Furthermore, the convergence speed of the NTFZNN model is much faster than that of the NTZNN model in both Fig. 3 and Fig. 4, because the latter lacks nonlinear activation functions.

Fig. 2. Elements of neural state $X(t)$ synthesized by the NTFZNN model (7) using the SFAF (11) activation function.

Theorem 5 illustrates that, in the face of unknown nonstationary additive noise, the stable residual error $\|e(t)\|_2$ of the NTFZNN model is bounded and predictable, given that the noise's time derivative is bounded. This conclusion shows the superiority of the NTFZNN model under noise-polluted conditions, and thus we have designed two simulation experiments to verify Theorem 5. Note that in these two experiments we use $f(x) = x$ as the activation function in $\psi_1(\cdot)$ and $\psi_2(\cdot)$ for convenience of analysis, i.e., $\varrho = 1$. First, the design parameters are set as $\mu = 2$, $\xi = 1$ and the noise $d_i(t)$ changes from $0.5t$ and $t$ to $2t$; thus the steady-state residual error upper bound of Theorem 5 changes from 0.5 and 1 to 2, which matches

Fig. 3. Computing the time-varying Lyapunov equation problem in the presence of additive constant noise $d_i(t) = 2$, using the NTFZNN model (blue solid line), the NTZNN model (green dotted line) and the ordinary ZNN model (red dotted line). (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.)

Fig. 4. Computing the time-varying Lyapunov equation problem facing additive increasing noise $d_i(t) = 2t$, using the NTFZNN model (blue solid line), the NTZNN model (green dotted line) and the ordinary ZNN model (red dotted line). (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.)

the results shown in Fig. 5(a) perfectly. In the second experiment, $d_i(t)$ is fixed to $2t$ while the design parameters are set as $\mu = 2$, $\xi = 1$; $\mu = 4$, $\xi = 1$; and $\mu = 4$, $\xi = 2$. As depicted in Fig. 5(b), the stable $\|e(t)\|_2$ changes from about 1 and 0.5 to 0.25, validating Theorem 5 again. Obviously, the robustness of the NTFZNN model is excellent. Moreover, the convergence speed and noise suppression ability can be improved by increasing $\mu$ and $\xi$ and by adopting better nonlinear activation functions.
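The trend of the first experiment can be reproduced from the error dynamics (23) alone. The sketch below uses linear activations $\psi(x) = x$ (so $\varrho = 1$) and the same $\mu = 2$, $\xi = 1$: the steady-state error per element approaches $\dot d/(\mu\xi)$, growing proportionally with the noise slope (the $\|e(t)\|_2$ values in Fig. 5(a) are $n = 2$ times these per-element errors).

```python
# Scalar error dynamics of (23) with psi(x) = x, under ramp noise d(t) = c*t.
mu, xi, dt, T = 2.0, 1.0, 1e-3, 20.0

def steady_error(c):
    """Integrate de/dt = -mu*e - xi*(e + mu*int e) + c*t and return |e(T)|."""
    e, integ, t = 2.0, 0.0, 0.0
    while t < T:
        de = -mu * e - xi * (e + mu * integ) + c * t
        integ += dt * e
        e += dt * de
        t += dt
    return abs(e)

errors = [steady_error(c) for c in (0.5, 1.0, 2.0)]
```

The three per-element errors come out near 0.25, 0.5 and 1.0, i.e., $c/(\mu\xi)$, confirming the proportionality predicted by Theorem 5.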

6. Application to perturbed mobile manipulator control

In the previous section, we used the NTFZNN model (7) to solve the nonstationary Lyapunov equation problem, and only a relatively simple Lyapunov equation was solved for concise demonstration. To validate the efficacy of the proposed NTFZNN model in a real-world task, we introduce a mobile manipulator to test it, as in [13,38,39]. The robot is a wheeled mobile manipulator, whose model and mechanical structure can be found in [40]. Furthermore, one can learn from [40] that the kinematics model of this robot manipulator can be formulated as
$$r(t) = y(\Theta(t)) \in \mathbb{R}^k,$$
where $\Theta(t) = [\phi^T(t), \theta^T(t)]^T \in \mathbb{R}^{n+2}$ represents the angle vector consisting of the mobile platform angle vector $\phi(t) = [\phi_L(t), \phi_R(t)]^T$ and the manipulator angle vector $\theta(t) = [\theta_1(t), \ldots, \theta_n(t)]^T$. Besides, $r(t) = [r_x(t), r_y(t), r_z(t)]^T$ denotes the position of the end-effector in Cartesian space, and $y(\cdot)$ denotes a smooth function mapping $\Theta(t)$ to $r(t)$ in a nonlinear manner.

Following the inverse-kinematics control method and the NTFZNN model, the dynamic equation used to accomplish the tracking control task of this mobile manipulator can be obtained as
$$J(\Theta(t))\dot\Theta(t) = \dot r(t) - \mu\psi_1(e(t)) - \xi\psi_2\!\left(e(t) + \mu\int_0^t e(s)\,ds\right),$$
where $J(\Theta(t)) = \partial y(\Theta(t))/\partial\Theta \in \mathbb{R}^{m\times(n+2)}$ and $e(t) = y(\Theta(t)) - r(t)$ is the error vector of the end-effector position. Considering that the most important advantage of the NTFZNN model is its ability to work reliably even under large noise, we inject an increasing noise $d(t) = 2t$ into the above dynamics:

$$J(\Theta(t))\dot\Theta(t) = \dot r(t) - \mu\psi_1(e(t)) - \xi\psi_2\!\left(e(t) + \mu\int_0^t e(s)\,ds\right) + d(t). \quad (30)$$
In the simulation, we set $\mu = 100$, $\xi = 5$, and $\Theta(0) = [0, 0, \pi/3, \pi/12, \pi/12, \pi/12, \pi/12, \pi/12]$. The tracking duration is 10 s, and the activation function used is SFAF (11) with $p = 0.5$ and $b_1 = b_2 = 1$. Under these conditions, we use the perturbed manipulator control dynamic (30) to track a four-leaf clover path; the results are shown in Figs. 6 and 7(a). Fig. 6 illustrates the whole tracking process of the mobile manipulator controlled by model (30), where Fig. 6(a) plots a 3D view and Fig. 6(b) a top view of the process. Evidently, although under large linear noise, the model accomplishes the tracking task well; this is further verified in Fig. 7(a), which depicts the desired path and the actual end-effector trajectory. In particular, in Fig. 7(a) we find that the position error of the path is on the order of $10^{-5}$ m along all three axes, even under such large noise perturbation.

It has been demonstrated in Fig. 4 that the NTFZNN model (12) outperforms the ordinary ZNN model (13) and the NTZNN model (9) when facing time-varying additive noises. Therefore, it is necessary to verify whether the NTFZNN model (12) still outperforms these two models in the perturbed robotic tracking task. Similarly to the construction of the perturbed NTFZNN-based robot control model (30), the perturbed NTZNN-based robot control model is
$$J(\Theta(t))\dot\Theta(t) = \dot r(t) - \mu e(t) - \xi\int_0^t e(s)\,ds + d(t),$$
and the perturbed ordinary ZNN-based robot control model is
$$J(\Theta(t))\dot\Theta(t) = \dot r(t) - \mu\psi_1(e(t)) + d(t).$$
In these three robot control models, all activation functions $\psi_1(\cdot)$, $\psi_2(\cdot)$ are set to SFAF (11), while the other parameters are set as $d_i(t) = 2t\sin(t)$, $p = 0.5$, $b_1 = b_2 = 1$, $\xi = 1$ and $\mu = 10, 11, \ldots, 20$. With these settings and starting from the same initial condition $\Theta(0)$, the three perturbed robot control models are used to track the same desired path as in the previous experiment. Note that the Maximum Steady-State Position Error (MSSPE), defined as $\max\{err_p(t)\},\ \forall t \in [7,10]$ s, is adopted as a new evaluation index in the simulation results of Fig. 8. From Fig. 8, evidently, the MSSPEs of the perturbed NTFZNN-based robot control model are always more than 100 times smaller than those of the other two perturbed robot control models, which shows that the NTFZNN model works much better than the NTZNN model (9) and the ordinary ZNN model (13) in perturbed robot manipulator control.

Fig. 5. Residual error $\|M^T(t)X(t) + X(t)M(t) + Q(t)\|_F$ generated by the perturbed NTFZNN model (23). (a) Under different additive noises $d_i(t) = 0.5t$ (blue solid line), $d_i(t) = t$ (red dotted line) and $d_i(t) = 2t$. (b) Using different design parameters $\mu$ and $\xi$. (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.)

Fig. 6. Results of tracking the four-leaf clover path using the mobile manipulator controlled by the perturbed NTFZNN model (30). (a) Complete tracking movement process. (b) Top view of the corresponding tracking trajectories.

Fig. 7. Profiles produced by the mobile manipulator controlled by the perturbed NTFZNN model (30) during the tracking task. (a) Desired path (red solid line) and actual path (green dots). (b) Tracking position error along the three axes and as a whole in Cartesian space.

Fig. 8. MSSPE trajectories of the three perturbed robot manipulator control models, based on the NTFZNN, NTZNN or ZNN models, during tracking tasks with different $\mu$.

In summary, the above simulation results show that the NTFZNN-based robot control model (30) can handle the mobile manipulator tracking task well in a heavily perturbed environment.

7. Conclusion

In order to accelerate the convergence speed to finite time and to perform reliably in the presence of various types of internal and external noise, a novel Noise-Tolerance Finite-time convergent ZNN (NTFZNN) model has been established to deal with the time-varying Lyapunov equation. Equipped with two finite-time convergent activation functions, the advanced properties of the NTFZNN model were first proved theoretically. Numerical simulation experiments validated that the NTFZNN model converges to the accurate solution of time-varying Lyapunov equations. It has also been proved and validated by numerical simulations that when the additive noise has a bounded time derivative, the stable residual error of the perturbed NTFZNN model is bounded and can be computed. Furthermore, the NTFZNN design method was adopted to control a wheeled mobile manipulator under increasing additive noise in real time, successfully tracking the desired path with high accuracy. Future work includes finding ways to further enhance the convergence speed of the NTFZNN model; transforming the NTFZNN model into a discrete model and exploiting it in more practical applications is also worth in-depth research.

CRediT authorship contribution statement

Zeshan Hu: Data curation, Writing - original draft, Visualization. Kenli Li: Supervision, Validation. Keqin Li: Supervision, Validation. Jichun Li: Writing - review & editing. Lin Xiao: Conceptualization, Methodology, Supervision, Investigation, Writing - review & editing.

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgments

This work was supported by the National Key Research and Development Program of China (Grant No. 2016YFB0201800); the National Natural Science Foundation of China under grants 61866013, 61503152, 61976089, 61473259, and 61563017; and the Natural Science Foundation of Hunan Province of China under grants 2019JJ50478, 18A289, JGY201926, 2016JJ2101, 2018TP1018, and 2018RS3065.

References

[1] I. Mutlu, F. Schrödel, N. Bajcinca, D. Abel, M.T. Söylemez, Lyapunov equation based stability mapping approach: A MIMO case study, IFAC-PapersOnLine 49 (9) (2016) 130–135.

[2] P. Benner, Solving large-scale control problems, IEEE Control Syst. Mag. 24 (1) (2004) 44–59.

[3] Y. Qian, W. Pang, An implicit sequential algorithm for solving coupled Lyapunov equations of continuous-time Markovian jump systems, Automatica 60 (2015) 245–250.

[4] B. Zhou, G. Duan, Z. Lin, A parametric periodic Lyapunov equation with application in semi-global stabilization of discrete-time periodic systems subject to actuator saturation, Automatica 47 (2) (2011) 316–325.

[5] L. Xiao, B. Liao, S. Li, Z. Zhang, L. Ding, L. Jin, Design and analysis of FTZNN applied to the real-time solution of a nonstationary Lyapunov equation and tracking control of a wheeled mobile manipulator, IEEE Trans. Ind. Inform. 14 (1) (2018) 98–105.

[6] Z. Li, B. Zhou, J. Lam, Y. Wang, Positive operator based iterative algorithms for solving Lyapunov equations for Itô stochastic systems with Markovian jumps, Appl. Math. Comput. 217 (21) (2011) 8179–8195.

[7] F. Ding, H. Zhang, Gradient-based iterative algorithm for a class of the coupled matrix equations related to control systems, IET Control Theory Appl. 8 (15) (2014) 1588–1595.

[8] Y. Lin, V. Simoncini, Minimal residual methods for large scale Lyapunov equations, Appl. Numer. Math. 72 (2013) 52–71.

[9] L. Jin, S. Li, B. Hu, RNN models for dynamic matrix inversion: A control-theoretical perspective, IEEE Trans. Ind. Inform. 14 (1) (2017) 189–199.

[10] Y. Zhang, Y. Yang, G. Ruan, Performance analysis of gradient neural network exploited for online time-varying quadratic minimization and equality-constrained quadratic programming, Neurocomputing 74 (10) (2011) 1710–1719.

[11] D. Guo, C. Yi, Y. Zhang, Zhang neural network versus gradient-based neural network for time-varying linear matrix equation solving, Neurocomputing 74 (17) (2011) 3708–3712.

[12] P. Stanimirovic, M. Petkovic, Gradient neural dynamics for solving matrix equations and their applications, Neurocomputing 306 (2018) 200–212.

[13] C. Yang, G. Peng, L. Cheng, J. Na, Z. Li, Force sensorless admittance control for teleoperation of uncertain robot manipulator using neural networks, IEEE Trans. Syst. Man Cybern. Syst.

[14] Y. Zhang, K. Chen, X. Li, C. Yi, H. Zhu, Simulink modeling and comparison of Zhang neural networks and gradient neural networks for time-varying Lyapunov equation solving, in: 2008 Fourth International Conference on Natural Computation, vol. 3, IEEE, 2008, pp. 521–525.

[15] Y. Zhang, K. Chen, H. Tan, Performance analysis of gradient neural network exploited for online time-varying matrix inversion, IEEE Trans. Automat. Control 54 (8) (2009) 1940–1945.

[16] L. Xie, Y. Liu, H. Yang, Gradient based and least squares based iterative algorithms for matrix equations $AXB + CX^TD = F$, Appl. Math. Comput. 217 (5) (2010) 2191–2199.

[17] L. Xiao, B. Liao, S. Li, K. Chen, Nonlinear recurrent neural networks for finite-time solution of general time-varying linear matrix equations, Neural Netw. 98 (2018) 102–113.

[18] L. Xiao, Y. Zhang, Solving time-varying inverse kinematics problem of wheeled mobile manipulators using Zhang neural network with exponential convergence, Nonlinear Dyn. 76 (2) (2014) 1543–1559.

[19] L. Xiao, A finite-time recurrent neural network for solving online time-varying Sylvester matrix equation based on a new evolution formula, Nonlinear Dyn. 90 (3) (2017) 1581–1591.

[20] Y. Zhang, B. Qiu, B. Liao, Z. Yang, Control of pendulum tracking (including swinging up) of IPC system using zeroing-gradient method, Nonlinear Dyn. 89 (1) (2017) 1–25.

[21] Z. Zhang, L. Zheng, J. Weng, Y. Mao, W. Lu, L. Xiao, A new varying-parameter recurrent neural-network for online solution of time-varying Sylvester equation, IEEE Trans. Cybern. 48 (11) (2018) 3135–3148.

[22] Y. Xue, T. Tang, A. Liu, Large-scale feedforward neural network optimization by a self-adaptive strategy and parameter based particle swarm optimization, IEEE Access 7 (2019) 52473–52483.

[23] Y. Xue, B. Xue, M. Zhang, Self-adaptive particle swarm optimization for large-scale feature selection in classification, ACM Trans. Knowl. Discov. Data 13 (5) (2019) 1–27.


[24] Y. Xue, T. Tang, W. Peng, A. Liu, Self-adaptive parameter and strategy basedparticle swarm optimization for large-scale feature selection problems withmultiple classifiers, Appl. Soft Comput. 88 (2020) 106031.

[25] L. Xiao, A finite-time convergent neural dynamics for online solution of time-varying linear complex matrix equation, Neurocomputing 167 (2015) 254–259.

[26] L. Jin, Y. Zhang, S. Li, Integration-enhanced Zhang neural network for real-time-varying matrix inversion in the presence of various kinds of noises, IEEETrans. Neural Netw. Learn. Syst. 27 (12) (2015) 2615–2627.

[27] L. Jin, Y. Zhang, S. Li, Y. Zhang, Noise-tolerant ZNN models for solving time-varying zero-finding problems: A control-theoretic approach, IEEE Trans.Automat. Control 62 (2) (2016) 992–997.

[28] L. Jin, Y. Zhang, S. Li, Y. Zhang, Modified ZNN for time-varying quadraticprogramming with inherent tolerance to noises and its application tokinematic redundancy resolution of robot manipulators, IEEE Trans. Ind.Electron. 63 (11) (2016) 6978–6988.

[29] L. Xiao, K. Li, M. Duan, Computing time-varying quadratic optimization withfinite-time convergence and noise tolerance: A unified framework for Zeroingneural network, IEEE Trans. Neural Netw. Learn Syst. 99 (1) (2019) 1–10.

[30] C. Yi, Y. Chen, X. Lan, Comparison on neural solvers for the Lyapunov matrixequation with stationary & nonstationary coefficients, Appl. Math. Model. 37(4) (2013) 2495–2502.

[31] Y. Zhang, C. Yi, W. Ma, Simulation and verification of Zhang neural network foronline time-varying matrix inversion, Simul. Model Pract. Theory 17 (10)(2009) 1603–1617.

[32] Q. Xiang, B. Liao, L. Xiao, L. Jin, A noise-tolerant Z-type neural network fortime-dependent pseudoinverse matrices, Optik 165 (2018) 16–28.

[33] S. Li, S. Chen, B. Liu, Accelerating a recurrent neural network to finite-timeconvergence for solving time-varying Sylvester equation by using a sign-bi-power activation function, Neural Process. Lett. 37 (2) (2013) 189–205.

[34] L. Xiao, Z. Zhang, S. Li, Solving time-varying system of nonlinear equations byfinite-time recurrent neural networks with application to motion tracking ofrobot manipulators, IEEE Trans. Syst. Man Cybern. Syst. 99 (1) (2019) 1–11.

[35] A. Lamperski, A. Ames, Lyapunov theory for Zeno stability, IEEE Trans.Automat. Control 58 (1) (2012) 100–112.

[36] Y. Zhang, C. Yi, D. Guo, J. Zheng, Comparison on Zhang neural dynamics andgradient-based neural dynamics for online solution of nonlinear time-varyingequation, Neural Comput. Appl. 20 (1) (2011) 1–7.

[37] D. Guo, Y. Zhang, Li-function activated ZNN with finite-time convergenceapplied to redundant-manipulator kinematic control via time-varyingJacobian matrix pseudoinversion, Appl. Soft Comput. 24 (2014) 158–168.

[38] D. Chen, Y. Zhang, S. Li, Zeroing neural-dynamics approach and its robust andrapid solution for parallel robot manipulators against superposition ofmultiple disturbances, Neurocomputing 275 (2018) 845–858.

[39] L. Jin, Y. Zhang, Discrete-time Zhang neural network of O(s3) pattern for time-varying matrix pseudoinversion with application to manipulator motion generation, Neurocomputing 142 (2014) 165–173.

[40] L. Xiao, Y. Zhang, A new performance index for the repetitive motion of mobile manipulators, IEEE Trans. Cybern. 44 (2) (2014) 280–292.

Zeshan Hu received the B.S. degree in Computer Science and Technology from Fuzhou University, Fuzhou, China, in 2018. He is currently a postgraduate with the College of Computer Science and Electronic Engineering, Hunan University, Changsha, China. His main research interests include neural networks and robotics.

Lin Xiao received the Ph.D. degree in Communication and Information Systems from Sun Yat-sen University, Guangzhou, China, in 2014. He is currently a Professor with the College of Information Science and Engineering, Hunan Normal University, Changsha, China. He has authored over 100 papers in international conferences and journals, such as the IEEE-TNNLS, the IEEE-TCYB, the IEEE-TII and the IEEE-TSMCA. His main research interests include neural networks, robotics, and intelligent information processing.

Kenli Li received the Ph.D. degree in computer science from the Huazhong University of Science and Technology, Wuhan, China, in 2003. He was a Visiting Scholar with the University of Illinois at Urbana-Champaign, Champaign, IL, USA, from 2004 to 2005. He is currently a Full Professor of Computer Science and Technology with Hunan University, Changsha, China, and also the Deputy Director of the National Supercomputing Center, Changsha. His current research interests include parallel computing, cloud computing, big data computing, and neural computing.

Jichun Li received his Ph.D. degree in mechanical engineering from King's College London, University of London, U.K., in 2013. He is currently a Lecturer with the School of Computer Science and Electronic Engineering, University of Essex, U.K. His main research interests include neural networks, robotics, intelligent transportation and communication, control, and IoT and AI solutions for bespoke automated systems in the chemical, medical, energy, agri-food and biosciences industries. He is a member of the IEEE, IET, and IMechE.

Keqin Li is a SUNY Distinguished Professor of computer science with the State University of New York. He is also a Distinguished Professor at Hunan University, China. His current research interests include cloud computing, fog computing and mobile edge computing, energy-efficient computing and communication, embedded systems and cyber-physical systems, heterogeneous computing systems, big data computing, high-performance computing, CPU-GPU hybrid and cooperative computing, computer architectures and systems, computer networking, machine learning, and intelligent and soft computing.