
Research Article
An Improved Elman Network for Stock Price Prediction Service

Bo Liu, Qilin Wu, and Qian Cao

School of Information Engineering, Chaohu University, Chaohu 238000, China

Correspondence should be addressed to Qian Cao; [email protected]

Received 16 June 2020; Revised 8 August 2020; Accepted 19 August 2020; Published 3 September 2020

Academic Editor: Xiaolong Xu

Copyright © 2020 Bo Liu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

The rapid development of edge computing is driving the rapid growth of stock market prediction services on terminal equipment. However, traditional prediction service algorithms are not suitable in terms of stability and efficiency. In view of this challenge, an improved Elman neural network is proposed in this paper. The Elman neural network is a typical dynamic recurrent neural network that can be used to provide a stock price prediction service. First, the prediction model parameters and the build process are analysed in detail. Then, the historical data of the closing price of the Shanghai composite index and the opening price of the Shenzhen composite index are collected for training and testing, so as to predict the prices of the next trading day. Finally, the experimental results validate that predicting the short-term future stock price with the improved Elman neural network model is effective.

1. Introduction

The stock market can be regarded as a complex nonlinear system, and many factors affect the stock price; in particular, the recent historical price has a strong influence on the future short-term price. It is therefore difficult, but valuable, to provide a stock price prediction service. Fortunately, with the development of edge computing and neural network technologies, commercial service providers can benefit from low-latency edge resources and the nonlinear expression ability of neural networks to offer their users more efficient stock price prediction services. Based on this well-known observation, we can design a neural network that predicts the stock price of the next period from the historical stock price [1–4]. In this paper, we use the historical closing prices of the Shanghai composite index to predict its closing price on the next trading day, and the historical opening prices of the Shenzhen composite index to predict its opening price on the next trading day. In addition, our research could make stock prediction algorithms deployed on edge terminals more efficient.

Over the years, many scholars have established a large number of mathematical models to predict stock prices, but these models have not achieved good results and have had little impact. The rise of big data and artificial intelligence technology provides another effective solution for stock price prediction, and this is the motivation of our research. Specifically, we hope to establish a reasonable artificial intelligence model that makes a more accurate prediction of the future short-term stock price from the latest price history. We expect this model to offer a reference for stock investors.

In this paper, we propose an improved Elman neural network model to predict stock prices, and our main contributions include the following:

In order to apply traditional stock prediction algorithms to terminal devices such as edge computing nodes and mobile phones, we build a stock price prediction model based on an improved Elman network, with the aim of making stock price prediction simpler and more stable. We give the specific model parameters and the build process.

In order to reflect the latest stock market situation, we train and test the proposed model on the latest datasets, namely, the Shanghai composite index and the Shenzhen composite index in 2018, 2019, and 2020.


To analyse the new algorithm model more clearly, we quantitatively analysed the performance of the model with a variety of mathematical tools and error analysis methods. In addition, a large number of diagrams and tables are provided to further clarify the model.

The rest of this paper is organized as follows. Section 2 presents preliminaries, in which the principle of the Elman neural network is clarified. Section 3 reviews and summarizes the related work in order to clarify the significance of this study. In Section 4, we propose our model and introduce the specific model construction procedure in detail. Section 5 presents the experiments, in which the model is built, trained, and tested; we also devote a great deal of space to the analysis of the results in this section. Finally, Section 6 concludes this paper.

2. Preliminaries

The Elman neural network is a typical and widely used feedback neural network model, which is generally divided into four layers: the input layer, the hidden layer, the bearing layer (i.e., the context layer), and the output layer [5, 6].

Figure 1 shows the basic structure of an Elman neural network. The connections among the input layer, hidden layer, and output layer are similar to those of a feedforward network. The input layer units only transmit the signal, and the output layer units play a weighting role. The hidden layer units have linear and nonlinear excitation functions, and the excitation function usually takes the nonlinear sigmoid function. The bearing layer is used to remember the output of the hidden layer units at the previous time step and can be considered a one-step delay operator. Through the delay and storage of this layer, the output of the hidden layer is fed back to the input of the hidden layer, which makes the network sensitive to historical data. In other words, the Elman neural network adds a bearing layer to the hidden layer as a one-step delay operator to achieve memory, so that the system can adapt to time-varying characteristics and the global stability of the network is enhanced [7–9]. The mathematical expression of the network is

$$
\begin{aligned}
y(k) &= g\bigl(w^{3}x(k)\bigr),\\
x(k) &= f\bigl(w^{1}x_{c}(k) + w^{2}u(k-1)\bigr),\\
x_{c}(k) &= x(k-1),
\end{aligned}
\tag{1}
$$

where y(k) is the output node vector, x(k) is the node vector of the hidden layer, u(k-1) is the input vector, x_c(k) is the feedback state vector, w^3 is the connection weight from the hidden layer to the output layer, w^2 is the connection weight from the input layer to the hidden layer, w^1 is the connection weight from the bearing layer to the hidden layer, g is the transfer function of the output neuron (a linear combination of the hidden layer outputs), and f is the transfer function of the hidden layer neurons, usually the sigmoid function.
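To relate equation (1) to code, the following minimal MATLAB sketch runs the recurrence forward over a toy input sequence. It is an illustration only, not the authors' implementation; the weight matrices, the input sequence, and the transfer functions are assumptions chosen for the example.

% Minimal sketch of the Elman recurrence in equation (1) (illustrative only).
inputSize = 7; hiddenSize = 18;         % sizes used later in the paper
rng(0);
w1 = 0.1 * randn(hiddenSize, hiddenSize);   % bearing layer -> hidden layer
w2 = 0.1 * randn(hiddenSize, inputSize);    % input layer -> hidden layer
w3 = 0.1 * randn(1, hiddenSize);            % hidden layer -> output layer
u  = randn(inputSize, 5);                   % toy input sequence; u(:,k) plays the role of u(k-1)
f  = @(z) 1 ./ (1 + exp(-z));               % sigmoid transfer function of the hidden layer
g  = @(z) z;                                % linear transfer function of the output layer
y  = zeros(1, size(u, 2));
xc = zeros(hiddenSize, 1);                  % bearing (context) state, initially zero
for k = 1:size(u, 2)
    x    = f(w1 * xc + w2 * u(:, k));       % hidden state x(k)
    y(k) = g(w3 * x);                       % output y(k)
    xc   = x;                               % bearing layer stores x(k) for the next step
end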

This research takes MATLAB as the experimental platform. The two datasets used in this study are the closing prices of 490 trading days of the Shanghai composite index from September 26, 2017 to September 30, 2019, and the opening prices of 420 trading days of the Shenzhen composite index from August 15, 2018 to May 12, 2020. We use the same model for training and testing on these two datasets.

3. Related Work

In fact, many researchers have studied stock price forecasting for years; some of these studies improve existing models, and some further process the data. However, these studies are not perfect: some of the models are too complex, and some of the processing procedures are tedious. These shortcomings increase the instability of the models and limit the application and extension of the research results.

Shi et al. considered that traditional stock forecasting methods cannot fit and analyse the highly nonlinear, multifactor stock market well, so they suffer from problems such as low prediction accuracy and slow training speed. They therefore proposed a prediction method based on the Elman neural network combined with principal component analysis; to compare the results better, a BP network and an Elman network with the same structure were established to predict the stock data [10]. Yu et al. used an improved Elman neural network as the forecasting model to forecast the market price of Zhongji company (No. 000039) on the Shenzhen stock market; their experimental results show higher precision, a steadier forecasting effect, and faster convergence [11]. Zheng et al. studied the forecast of the opening stock price based on the Elman neural network in 2015; they selected the opening prices of the Shanghai stock index for 337 trading days from December 2012 to April 2014 as the raw data for a simulated forecast, and the result proves the validity of their forecast model [12]. Zhang et al. successfully applied an Elman recurrent neural network to the prediction of the stock opening price; specifically, the authors described the Particle Swarm Optimization (PSO) algorithm for learning optimization of the Elman recurrent neural network, and the results showed that the model based on LSTM was more accurate than other machine learning models [13]. Jun used the Adaptive Whale Optimization Algorithm and an Elman neural network to predict the stock price and achieved better results in their experiments [14]. Javad Zahedi et al. used an artificial neural network model and principal component analysis to evaluate the predictability of stock prices on the Tehran stock exchange with 20 accounting variables; the goodness of fit of the principal component analysis was determined with actual values, and the effective factors of the Tehran stock exchange price were accurately predicted and modelled by a new model composed of all variables [15]. Han et al. designed a three-layer BP network and the corresponding mathematical model; using 140 days of actual prices of stock 600688 as a sample, the network was trained in MATLAB, and 10-day predictions of the stock price were made with a dispersion of Q = 0.0146 relative to the practical data [16].

Although scholars have made outstanding contributions in using artificial intelligence to predict stock prices, neither the stability of the models nor the accuracy of the predictions is satisfactory. Based on this fact, this study seeks to exploit a neural network model for stock price prediction based on the Elman network while balancing simplicity, stability, and accuracy.

4. Proposed Model

The general steps to build the proposed model of this study include data collection, data loading, sample set construction, division of the sample set into training and test sets, construction of the Elman neural network, and training of the neural network model. The specific flow chart is shown in Figure 2.
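As a rough illustration of these steps, a top-level MATLAB script might look like the sketch below. The file name, the variable names, and the position of the train/test split are illustrative assumptions rather than the authors' exact code; the sample matrix corresponds to the procedure of Section 4.1.

% Rough sketch of the workflow in Figure 2 (illustrative assumptions throughout).
prices  = readmatrix('shanghai_close.csv');   % data collection / data load (hypothetical file)
prices  = prices(:)';                         % one row of closing prices
N       = 8;                                  % window length used in Section 4.1
samples = hankel(prices(1:N), prices(N:end)); % sample set construction: N x (numel(prices)-N+1)
split   = 400;                                % division into training and test sets (assumed split index)
trainSet = samples(:, 1:split);
testSet  = samples(:, split+1:end);
net = elmannet(1:2, 18, 'traingdx');          % construction of the Elman neural network (Section 4.2)
net = train(net, trainSet(1:N-1, :), trainSet(N, :));   % training of the neural network model
% (The paper additionally normalizes all data with mapminmax before training; see Section 5.1.)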

4.1. Construction of the Sample Set. The stock price prediction problem in this study is actually a time series problem, which can be expressed by the following formula:

$$x_{n} = f\bigl(x_{n-1}, x_{n-2}, \ldots, x_{n-N}\bigr) \tag{2}$$

This formula means that the closing prices of the previous trading days can be used to predict the closing price of the next trading day. The 490 closing prices were divided into training samples and test samples. For the training samples, x1~xN are selected to form the first sample, where (x1, x2, ..., x_{N-1}) are the independent variables and xN is the dependent variable; x2~x_{N+1} are selected to form the second sample, where (x2, x3, ..., xN) are the independent variables and x_{N+1} is the dependent variable. Finally, a training matrix is formed as follows:

$$
\begin{pmatrix}
x_{1} & x_{2} & \cdots & x_{i} & \cdots\\
x_{2} & x_{3} & \cdots & x_{i+1} & \cdots\\
\vdots & \vdots & & \vdots & \\
x_{N-1} & x_{N} & \cdots & x_{i+N-2} & \cdots\\
x_{N} & x_{N+1} & \cdots & x_{i+N-1} & \cdots
\end{pmatrix}
\tag{3}
$$

In this matrix, each column is a sample, and the last row is the expected output. These samples are fed into the Elman neural network for training, and then the network model can be obtained [17–19].

In this study, x1~xN are selected to form the first sample, x2~x_{N+1} are selected to form the second sample, and the remaining samples are constructed in the same manner. Here, N is set to 8, which means that the closing price of the day is determined by the closing prices of the previous seven trading days.
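Concretely, the sample matrix can be produced with a few lines of MATLAB. The sketch below only illustrates the windowing described above, under the assumption that the closing prices are already stored in a vector named closePrice.

% Build the N x (L-N+1) sample matrix from a price vector (illustrative sketch).
% closePrice is assumed to be a 1 x 490 vector of closing prices, so the result
% is the 8 x 483 matrix described in the text.
N = 8;
L = numel(closePrice);
samples = zeros(N, L - N + 1);
for k = 1:(L - N + 1)
    samples(:, k) = closePrice(k:k+N-1)';   % column k: 7 inputs followed by 1 target
end
inputs  = samples(1:N-1, :);                % first 7 rows: independent variables
targets = samples(N, :);                    % last row: the value to be predicted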

Take the Shanghai composite index dataset as an example. The closing prices of the first eight trading days are 3343.58, 3345.27, 3339.64, 3348.94, 3374.38, 3382.99, 3388.28, and 3386.10, which means that 3343.58, 3345.27, 3339.64, 3348.94, 3374.38, 3382.99, and 3388.28 are used to forecast the eighth value, 3386.10, which we have already observed. The closing prices of the last eight trading days are 2999.28, 3006.45, 2977.08, 2985.34, 2955.43, 2929.09, 2932.17, and 2905.19; by the same principle, 2999.28, 3006.45, 2977.08, 2985.34, 2955.43, 2929.09, and 2932.17 are used to forecast the eighth value, 2905.19, which we have also already observed. Therefore, the 490 closing prices are converted into an 8 × 483 matrix; the 483 columns correspond to 483 samples, in which the first 7 entries in each column are the independent variables and the eighth entry is the value to be predicted. The 8 × 483 matrix is shown as follows:

Figure 1: The basic structure of an Elman neural network (input layer u(t-1), hidden layer x(t), bearing layer x_c(t), and output layer y(t), connected by weights w2, w1, and w3).

Figure 2: Elman neural network model construction steps (data collection, data load, sample set construction, division of sample set and training set, construction of Elman neural network, training of neural network model).


$$
\begin{pmatrix}
3343.58 & 3390.52 & \cdots & 2999.28\\
3345.27 & 3378.47 & \cdots & 3006.45\\
3339.64 & 3372.04 & \cdots & 2977.08\\
3348.94 & 3381.79 & \cdots & 2985.34\\
3374.38 & 3370.17 & \cdots & 2955.43\\
3382.99 & 3378.65 & \cdots & 2929.09\\
3388.28 & 3380.70 & \cdots & 2932.17\\
3386.10 & 3388.25 & \cdots & 2905.19
\end{pmatrix}
\tag{4}
$$

The Shenzhen composite index dataset is 8786.3497, 8470.9094, 8573.5693, 8355.0002, 8419.7868, 8533.4289, 8446.9836, 8480.2244, 8511.3743, 8731.6394, 8716.8172, 8666.9025, 8509.2723, 8440.9528, 8454.1357, 8519.5698, ..., 10477.7614, 10460.9947, 10575.5242, 10618.1651, 10899.9169, 10923.6123, 11053.8157, 10972.0503. In the same way, the Shenzhen composite index dataset is formed into an 8 × 413 matrix, which is as follows:

$$
\begin{pmatrix}
8786.3497 & 8511.3743 & \cdots & 10477.7614\\
8470.9094 & 8731.6394 & \cdots & 10460.9947\\
8573.5693 & 8716.8172 & \cdots & 10575.5242\\
8355.0002 & 8666.9025 & \cdots & 10618.1651\\
8419.7868 & 8509.2723 & \cdots & 10899.9169\\
8533.4289 & 8440.9528 & \cdots & 10923.6123\\
8446.9836 & 8454.1357 & \cdots & 11053.8157\\
8480.2244 & 8519.5698 & \cdots & 10972.0503
\end{pmatrix}
\tag{5}
$$

The 413 columns correspond to 413 samples, in which the first 7 entries in each column are the independent variables and the eighth entry is the value to be predicted.

4.2. Construction of the Elman Neural Network. Figure 3 shows the proposed model structure, where u1, ..., u7 are the input data, x1, ..., x18 are the hidden-layer data, and xc1, ..., xc18 are the bearing-layer data. With the help of the MATLAB neural network toolbox, the Elman neural network can be built easily. To be specific, the toolbox provides the elmannet function, and the Elman network construction can be completed by setting three parameters of this function, namely, the delay time, the number of hidden-layer neurons, and the training function. In this case, the number of hidden-layer neurons is set to 18, and TRAINGDX is chosen as the training function [20–22]. TRAINGDX, whose full name is gradient descent with momentum and adaptive learning rate backpropagation, is a network training function that updates weight and bias values according to gradient descent momentum and an adaptive learning rate; it returns a trained network and the training record. In addition, the maximum number of iterations in the training is set to 3000, the maximum number of validation failures is set to 6, and the error tolerance is set to 0.00001, which means that the training stops once the error goal is reached [23, 24]. Figure 4 shows the model structure graphic automatically generated by MATLAB.

To construct the Elman neural network, the MATLAB code can be written as follows. First, the three parameters of the elmannet function are set:

net = elmannet(1:2, 18, 'traingdx');          (6)

Second, the maximum number of iterations in the training is set to 3000:

net.trainParam.epochs = 3000;          (7)

Third, the maximum number of validation failures is set to 6 and the error tolerance is set to 0.00001:

net.trainParam.max_fail = 6;
net.trainParam.goal = 0.00001;          (8)

Finally, the network is initialized:

net = init(net);          (9)

Figure 3: Model structure (input layer u1, ..., u7; hidden layer x1, ..., x18; bearing layer xc1, ..., xc18; output layer y).


Figure 4: The model structure graphic generated by MATLAB (7 inputs, a hidden layer of 18 neurons with delays 1:2, and 1 output).

Figure 5: Test results for training data (left: Shanghai composite index; right: Shenzhen composite index). Each panel plots the actual value and the Elman network output value.

Figure 6: Residuals of test results on training data (left: Shanghai composite index; right: Shenzhen composite index).


Figure 7: Test results for testing data (left: Shanghai composite index; right: Shenzhen composite index, with prices in units of 10^4). Each panel plots the actual value and the Elman network output value.

Figure 8: Residuals of test results on testing data (left: Shanghai composite index; right: Shenzhen composite index).

Table 1: Relative error of test (Shanghai composite index).
Number 1–7: 0.006674, −0.001090, 0.010719, −0.010057, −0.025153, 0.001690, 0.000677
Number 8–14: 0.012402, −0.003898, −0.002381, −0.015004, −0.023218, −0.007697, −0.002238
Number 15–21: 0.009203, −0.000263, −0.009339, 0.000532, −0.022964, −0.000520, 0.007789
Number 22–28: 0.006250, −0.004937, 0.023693, −0.000488, 0.004250, −0.005476, −0.003717
Number 29–35: −0.005492, 0.003612, 0.001676, 0.010246, −0.009194, 0.010889, −0.007296
Number 36–42: −0.006752, −0.008459, −0.001553, −0.000139, −0.003553, 0.004742, 0.005815
(Table 1 is continued below.)


After all the above steps, the construction of the Elman neural network is completed [25–27].

5. Experiments

5.1. Training of the Proposed Model. Once the Elman neural network is built, the model can be trained, but all the data have to be normalized first in consideration of the performance and stability of the model. The normalization operation can use the mapminmax function provided by the MATLAB toolbox, and the default normalization interval of the mapminmax function is [−1, 1]. The detailed MATLAB code is as follows:

[train_x1, st1] = mapminmax(train_x);
train_ty1 = sim(net, train_x1);
train_ty = mapminmax('reverse', train_ty1, st2);          (10)

After the normalization operation on the training data train_x, the normalized data train_x1 are obtained (together with the settings st1). The normalized training data (train_x1) are input into the network model to obtain the current network output (train_ty1), which is then reverse-normalized into normal data to obtain train_ty, the corresponding stock price for the training data; here, st2 denotes the mapminmax settings obtained when the target values are normalized. What we want to emphasize is that the data used in the test should also be normalized first, and the corresponding output should then be reverse-normalized.
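Equation (10) shows only part of the pipeline; the settings structure st2 and the actual training call do not appear there. The following sketch spells out the full normalize, train, simulate, and denormalize sequence that the description above implies; the variable names train_x, train_y, test_x, and test_y are assumptions for the 7-row input blocks and 1-row target rows taken from the sample matrices.

% Sketch of the complete pipeline implied by Section 5.1 (assumed variable names).
[train_x1, st1] = mapminmax(train_x);              % normalize inputs to [-1, 1]
[train_y1, st2] = mapminmax(train_y);              % normalize targets; st2 is reused by 'reverse'
net = train(net, train_x1, train_y1);              % train the Elman network built in Section 4.2
train_ty1 = sim(net, train_x1);                    % network output on the training data
train_ty  = mapminmax('reverse', train_ty1, st2);  % map the output back to the price scale

test_x1 = mapminmax('apply', test_x, st1);         % test inputs reuse the training settings
test_ty = mapminmax('reverse', sim(net, test_x1), st2);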

5.2. The Test Results and the Quantitative Analysis. Figure 5 shows a graph of the actual and predicted values on the training data; the blue solid line is the actual value, and the red dotted line represents the Elman network output value. Apparently, the model fits the training data well. In addition, we further calculated the residuals of the test results on the training data, which are shown in Figure 6; in mathematical statistics, a residual is the difference between the actual observed value and the estimated (fitted) value.

Figure 7 shows a graph of the actual and predicted values on the testing data; the black solid line is the actual value, and the red dotted line represents the Elman network output value. In addition, we further calculate the residuals of the test results on the testing data, which are shown in Figure 8. The relative error of each prediction is also calculated for further study and analysis; all relative error values are listed in Tables 1 and 2. By analysing these graphs and data, it is clear that the prediction performance of the model is quite good.
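For reference, the quantities plotted in Figures 6 and 8 and listed in Tables 1 and 2 can be computed as in the short sketch below; test_y and test_ty denote the actual and predicted test prices, and the sign convention of the relative error is an assumption, since the paper does not state it explicitly.

% Residuals and relative errors of the predictions (assumed variable names).
residual = test_y - test_ty;             % residual: actual value minus fitted value
relError = (test_ty - test_y) ./ test_y; % relative error of each prediction (sign convention assumed)
fprintf('max |relative error| = %.4f\n', max(abs(relError)));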

6. Conclusions

This study is based on a basic premise: the historical stock price has a great impact on the future short-term stock price. On this premise, we established an improved Elman model and collected the historical data of the Shanghai composite index and the Shenzhen composite index as the datasets for the experiment.

Table 2: Relative error of test (Shenzhen composite index).
Number 1–7: −0.013747, 0.002825, −0.017177, 0.004370, 0.030084, −0.001853, −0.047231
Number 8–14: 0.017591, −0.017927, 0.007011, 0.010986, 0.026179, −0.044038, 0.035607
Number 15–21: 0.061603, −0.043408, 0.047926, 0.008911, 0.028250, −0.016142, 0.039523
Number 22–28: −0.012140, −0.021609, 0.001765, −0.010977, 0.036256, −0.005063, 0.006179
Number 29–35: 0.003674, −0.022697, −0.014897, −0.002466, −0.006432, 0.001260, 0.023451
Number 36–42: −0.009376, −0.016781, 0.009971, −0.019686, 0.001902, −0.002325, 0.014443
Number 43–49: −0.024677, 0.009285, 0.008341, −0.003239, −0.000341, −0.010959, −0.005216
Number 50–56: −0.026052, −0.000483, −0.012797, 0.007071

Table 1: Continued.
Number 43–49: 0.013786, 0.014388, 0.014745, 0.000828, −0.010776, 0.005480, −0.012749
Number 50–56: 0.009279, −0.004690, 0.000227, −0.006596, −0.019610, −0.002958, −0.001138
Number 57–63: 0.000246, −0.007637, 0.009974, −0.016051, 0.002405, −0.002644, 0.003361
Number 64–70: −0.015557, −0.002587, −0.012756, −0.009015, −0.007153, −0.009234, −0.001688
Number 71–77: 0.002176, −0.008728, −0.003110, 0.015200, −0.002282, −0.005979, −0.006169
Number 78–83: 0.009414, −0.002398, 0.010743, 0.006008, −0.001105, 0.006332


As for dataset processing, we divided each dataset into two parts, one for training and the other for testing; in addition, the data were normalized. Regarding model building, we took MATLAB as the platform, set the number of hidden-layer neurons to 18, and chose TRAINGDX as the training function. In terms of training, the maximum number of iterations was set to 3000, the maximum number of validation failures was set to 6, and the error tolerance was set to 0.00001. Finally, we used the model to test both the training data and the test data. In order to analyse the experimental results, we also calculated the relative errors and the residuals and plotted them. Based on the Elman network, this study predicted the short-term future stock price and achieved a good prediction effect. However, predicting the long-term future stock price is unrealistic and difficult to achieve [28–30]. This study provides an effective experimental method for predicting the near-future stock price.

Data Availability

All of the data used in this study are already available on the Internet and are easily accessible.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This work was supported by the Key Project of the Natural Science Research of Higher Education Institutions in Anhui Province (Grant no. KJ2018A0461), the Anhui Province Key Research and Development Program Project (Grant no. 201904a05020091), the Provincial Quality Engineering Project from the Department of Education, Anhui Province (Grant no. 2019mooc283), and the Domestic and Foreign Research and Study Program for Outstanding Young Backbone Talents in Colleges and Universities (Grant no. Gxgnfx2019034).

References

[1] Q. Shayea, Neural Networks to Predict Stock Market Price, World Congress on Engineering and Computer Science, San Francisco, CA, USA, 2017.

[2] X. Xu, B. Shen, X. Yin et al., "Edge server quantification and placement for offloading social media services in industrial cognitive IoV," IEEE Transactions on Industrial Informatics, 2020.

[3] V. Rohit, C. Pkumar, and S. Upendra, "Neural networks through stock market data prediction," in Proceedings of the 2017 International Conference of Electronics, Coimbatore, India, April 2017.

[4] D. Das, A. S. Sadiq, N. B. Ahmad, and J. Lloret, "Stock market prediction with big data through hybridization of data mining and optimized neural network techniques," Journal of Multiple-Valued Logic and Soft Computing, vol. 29, no. 1-2, pp. 157–181, 2017.

[5] J. Zahedi and M. Rounaghi, "Application of artificial neural network models and principal component analysis method in predicting stock prices on Tehran stock exchange," Physica A: Statistical Mechanics and its Applications, vol. 38, pp. 178–187, 2015.

[6] X. Han, "Stock price prediction with neural network based on MATLAB," Systems Engineering, 2003.

[7] X. Xu, X. Zhang, H. Gao, Y. Xue, L. Qi, and W. Dou, "BeCome: blockchain-enabled computation offloading for IoT in mobile edge computing," IEEE Transactions on Industrial Informatics, vol. 16, no. 6, pp. 4187–4195, 2020.

[8] R. Mahanta, T. N. Pandey, A. K. Jagadev, and S. Dehuri, "Optimized radial basis functional neural network for stock index prediction," in Proceedings of the IEEE Conference Publications, Chennai, India, March 2016.

[9] https://www.mathworks.com/help/deeplearning/ref/elmannet.html

[10] Y. Zhang, G. Cui, S. Deng et al., "Efficient query of quality correlation for service composition," IEEE Transactions on Services Computing, p. 1, 2018.

[11] X. Xu, X. Zhang, X. Liu, J. Jiang, L. Qi, and M. Z. Alam Bhuiyan, "Adaptive computation offloading with edge for 5G-envisioned Internet of connected vehicles," IEEE Transactions on Intelligent Transportation Systems, 2020.

[12] Y. Zhang, C. Yin, Q. Wu et al., "Location-aware deep collaborative filtering for service recommendation," IEEE Transactions on Systems, Man, and Cybernetics: Systems (TSMC), 2019.

[13] J. Yu and P. Guo, "Stock price forecasting model based on improved Elman neural network," Computer Technology and Development, 2008.

[14] H. Shi and X. Liu, "Application on stock price prediction of Elman neural networks based on principal component analysis method," in Proceedings of the 2014 11th International Computer Conference on Wavelet Active Media Technology & Information Processing, Chengdu, China, December 2014.

[15] X. Zhang, S. Qu, J. Huang, B. Fang, and P. Yu, "Stock market prediction via multi-source multiple instance learning," IEEE Access, vol. 6, no. 99, pp. 50720–50728, 2018.

[16] M. Billah, S. Waheed, and A. Hanifa, "Predicting closing stock price using artificial neural network and adaptive neuro fuzzy inference system (ANFIS): the case of the Dhaka stock exchange," International Journal of Computer Applications, vol. 129, no. 11, pp. 1–5, 2015.

[17] https://www.mathworks.com/help/deeplearning/index.html?s_tid=CRUX_lftnav

[18] Z. Zhang, Y. Shen, and G. Zhang, "Short-term prediction for opening price of stock market based on self-adapting variant PSO-Elman neural network," in Proceedings of the IEEE International Conference on Software Engineering and Service Science (ICSESS), Beijing, China, November 2017.

[19] V. Andrea and L. Karel, "MatConvNet: convolutional neural networks for MATLAB," in Proceedings of the 23rd ACM International Conference on Multimedia, pp. 689–692, Brisbane, Australia, October 2015.

[20] L. Ren, Y. Liu, Z. Rui, H. Li, and R. Feng, "Application of Elman neural network and MATLAB to load forecasting," in Proceedings of the International Conference on Information Technology and Computer Science, Kiev, Ukraine, July 2009.

[21] K. Kim and W. Lee, "Stock market prediction using artificial neural networks with optimal feature transformation," Neural Computing & Applications, vol. 13, no. 3, pp. 255–260, 2004.


[22] H. Grigoryan, "Stock market prediction using artificial neural networks: case study of TAL1T, Nasdaq OMX Baltic Stock," Database Systems Journal, 2015.

[23] S. Nayak, B. Misra, and H. Behera, "An adaptive second order neural network with genetic-algorithm-based training (ASONN-GA) to forecast the closing prices of the stock market," International Journal of Applied Metaheuristic Computing, vol. 7, no. 2, pp. 39–57, 2016.

[24] Y. Zhang, K. Wang, Q. He et al., "Covering-based web service quality prediction via neighborhood-aware matrix factorization," IEEE Transactions on Services Computing, 2019.

[25] P. Kai, H. Huang, S. Wan, and V. Leung, "End-edge-cloud collaborative computation offloading for multiple mobile users in heterogeneous edge-server environment," Wireless Networks, vol. 2020, 2020.

[26] C. Goutami and S. Chattopadhyay, "Monthly sunspot number time series analysis and its model construction through autoregressive artificial neural network," The European Physical Journal Plus, vol. 127, no. 4, 2012.

[27] Z. Guo, J. Wu, H. Lu, and J. Wang, "A case study on a hybrid wind speed forecasting method using BP neural network," Knowledge-Based Systems, vol. 24, no. 7, pp. 1048–1056, 2011.

[28] Y. Zhang and L. Wu, "Stock market prediction of S&P 500 via combination of improved BCO approach and BP neural network," Expert Systems with Applications, vol. 36, no. 5, pp. 8849–8854, 2008.

[29] X. Han, "Stock price prediction with neural network based on MATLAB," Systems Engineering, vol. 2003, 2003.

[30] K. Peng, B. Zhao, S. Xue, and Q. Huang, "Energy- and resource-aware computation offloading for complex tasks in edge environment," Complexity, vol. 2020, Article ID 9548262, 14 pages, 2020.


Page 2: Research Article ...downloads.hindawi.com/journals/scn/2020/8824430.pdf · Elman neural network is a typical feedback neural network model widely used, which is generally divided

To analyse the new algorithm model more clearly wequantitatively analysed the performance of the modelwith a variety of mathematical tools and error analysismethods In addition a large number of diagrams andtables are provided to further clarify the model

+e rest of this paper is organized as follows Section 2reviews and summarizes the related work on this basis toclarify the significance of this study Section 3 is prelimi-naries in which the principle of Elman neural network isclarified In Section 4 we proposed our model and thespecific model construction procedures are introduced indetail Section 5 is experiments in which the model is builttrained and tested In addition we devoted a great deal ofspace to the analysis of the results in this section FinallySection 6 concludes this paper

2 Preliminaries

Elman neural network is a typical feedback neural networkmodel widely used which is generally divided into fourlayers input layer hidden layer bearing layer and outputlayer [5 6]

Figure 1 shows the basic structure of an Elman neuralnetwork the connection of the input layer hidden layer andoutput layer are similar to the feedforward network +einput layer unit only plays the role of signal transmissionand the output layer unit plays the role of weighting +ereare linear and nonlinear excitation functions in the hiddenlayer element and the excitation function usually takes thenonlinear function of Sigmoid +e bearing layer is used toremember the output value of the hidden layer unit at theprevious time which can be considered as a delay operatorwith one-step delay +e output of the hidden layer is self-linked to the input of the hidden layer by accepting the delayand storage of the layer whichmakes it sensitive to historicaldata In other words Elman neural network adds a bearinglayer to the hidden layer as a one-step delay operator toachieve the purpose of memory so that the system has theability to adapt to time-varying characteristics and enhancethe global stability of the network [7ndash9] +e mathematicalexpression of its network is

y(k) g w3x(k)( 1113857

x(k) f w1xc(k) + w2(u(k minus 1))( 1113857

xc(k) x(k minus 1)

(1)

where y(t) is the output node vector x(t) is the nodalelement vector of the hidden layer u(t minus 1) is the inputvector xc(t) is the feedback state vector w3 is the con-nection weight from the hidden layer to the output layer w2is the connection weight from the input layer to the hiddenlayer w1 is the connection weight of the connecting layer tothe hidden layer g is the transfer function of the outputneuron and the linear combination of the output of thehidden layer and f is the transfer function of the hiddenlayer neuron usually using the S function

+is research takes MATLAB as the experimentalplatform And the two datasets used in this study are the

closing prices of 490 trading days of the Shanghai compositeindex from September 26 2017 to September 30 2019 andthe opening prices of 420 trading days from August 15 2018toMay 12 2020We will use the samemodel for training andtesting based on these two datasets

3 Related Work

In fact many researchers have been studying stock priceforecasts for years some of these studies have improved theexisting models and some have further processed the dataHowever these studies are not perfect and some of themodels are too complex and some of the processing pro-cedures are tedious +ese shortcomings will increase theinstability of the models and limit the application and ex-tension of the research results

Shi et al considered that traditional stock forecastingmethods could not fit and analyse highly nonlinear and mul-tifactors of stock market well so there are problems such as lowprediction accuracy and slow training speed +erefore theyproposed a prediction method of the Elman neural networkmodel based on the principal component analysis method Inorder to compare the results better BP network and Elmannetwork with the same structure are established to predict thestock data [10] Yu et al used an improved Elman neuralnetwork as the forecasting model and the market price ofZhongji company (No 000039) in Shenzhen stock market isforecasted their experiment results get higher precision steadierforecasting effect and more rapid convergence speed [11]Zheng et al studied the forecast of opening stock price based onElman neural network in 2015 and they selected the openingprices of Shanghai stock index of 337 trading days from De-cember 2012 to April 2014 as the raw data for stimulatedforecast and the result proves the validity of their forecastmodel[12] Zhang et al successfully applied Elman regression neuralnetwork to the prediction of stock opening price Specificallythe authors described the Particle Swarm Optimization (PSO)algorithm for learning optimization of the Elman RecurrentNeural Network and the results showed that the model basedon LSTM was more accurate than other machine learningmodels [13] Jun used AdaptiveWhale Optimization Algorithmand Elman neural network to predict the stock price andachieved better results based on their experiments [14] JavadZahedi et al used the artificial neural network model andprincipal component analysis to evaluate the predictability ofstock price in Teheran stock exchange with 20 accountingvariables Finally the goodness of fit of principal componentanalysis was determined by actual values and the effectivefactors of Teheran stock exchange price were accurately pre-dicted and modelled by a new model composed of all variables[15] Han et al designed a three-ply BP network and thecorresponding mathematical model +erefore using 140 daysactual price of the stock 600688 as a sample the network wastrained through MATLAB thereby the 10 days predictions ofthe stock price and the dispersion Q 0 0146 to the practicaldata were made [16]

Although scholars have made outstanding contributionsin using artificial intelligence to predict stock prices neitherthe stability of themodels nor the accuracy of the predictions

2 Security and Communication Networks

is satisfactory Based on this fact this study seeks to exploitthe neural network model for the prediction of stock pricebased on Elman network with balancing the simplicitystability and accuracy

4 Supposed Model

+e general steps to build the supposed model of thisstudy include data collection data load sample setconstruction division of sample set and training setconstruction of Elman neural network and training ofthe neural network model +e specific flow chart isshown in Figure 2

41 Construction of Sample Set +e stock price predictionproblem in this study is actually a time series problem whichcan be expressed by the following formula

xn f xnminus1 xnminus2 middot middot middot xnminusN( 1113857 (2)

+is formula means that the closing price of theprevious N trading day can be used to predict the closingprice of the next trading day +e data of 490 closingprices were divided into training samples and test sam-ples for the training samples x1~xN are selected to formthe first sample where (x1 x2 middot middot middot xNminus1) are the inde-pendent variable and xN is the dependent variable andx2~xN+1 are selected to form the second sample where(x2 x3 middot middot middot xN) are the independent variable and xN+1 isthe dependent variable finally a training matrix isformed as follows

x1 x2 xi

x2 x3 xi+1

middot middot middot middot middot middot middot middot middot

xNminus1 xN

xN xN+1

⎛⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎝

⎞⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎠

(3)

In this matrix each column is a sample and the last rowis the expected output +ese samples are fed into the Elmanneural network for training and then the networkmodel canbe obtained [17ndash19]

In this study x1~xN are selected to form the first sampleand x2~xN+1 are selected to form the second sample the restcan be carried out in the samemanner Here N is randomly setto 8 whichmeans that the closing price of the day is determinedby the closing price of the previous seven trading days

Take the Shanghai composite index dataset as anexample the closing prices of the first eight trading daysare 334358 334527 333964 334894 337438 338299338828 and 338610 which means 334358334527333964 334894 337438 338299 and 338828will be used to forecast the eighth data 338828 which wehave already obtained +e closing prices of the first eighttrading days are 299928 300645 297708 298534295543 292909 293217 and 290519 and the sameprinciple 299928 300645 297708 298534 295543292909 and 293217 will be used to forecast the eighthdata 290519 which we have already obtained +erefore490 pieces of data will be converted into a 8 times 483 matrix483 columns mean 483 samples in which the first 7 datain each column are independent variables and the eighthdata is the data to be predicted +e 8 times 483 matrix isshown as follows

Input layer

Hidden layer Output layer

Bearing layer

u (t ndash 1) w2

w1

w3

xc (t)

x (t) y (t)

Figure 1 +e basic structure of an Elman neural network

Data collection

Data load

Sample set construction

Division of sample set and training set

Construction of Elman neural network

Training of neural network model

Figure 2 Elman neural network model construction steps

Security and Communication Networks 3

334358

334527

333964

339052 middot middot middot 299928

337847 middot middot middot 300645

337204 middot middot middot 297708

334894

337438338299338828

33861

338179 middot middot middot 298534

337017 middot middot middot 295543337865 middot middot middot 29290933807 middot middot middot 293217

338825 middot middot middot 290519

⎛⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎝

⎞⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎠

(4)

+e Shenzhen composite index dataset is 8786349784709094 85735693 83550002 84197868 8533428984469836 84802244 85113743 87316394 8716817286669025 85092723 84409528 84541357 85195698middot middot middot middot middot middot 104777614 104609947 105755242 106181651108999169 109236123 110538157 109720503 In thesame way the Shenzhen composite index dataset isformed as a 8 times 413 matrix which is as follows

87863497

84709094

85735693

85113743 middot middot middot 104777614

87316394 middot middot middot 104609947

87168172 middot middot middot 105755242

83550002

841978688533428984469836

84802244

86669025 middot middot middot 106181651

85092723 middot middot middot 10899916984409528 middot middot middot 10923612384541357 middot middot middot 110538157

85195698 middot middot middot 109720503

⎛⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎝

⎞⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎠

(5)

413 columns mean 413 samples in which the first 7 datain each column are independent variables and the eighthdata is the data to be predicted

42 Construction of Elman Neural Network Figure 3 showsthe proposed model structure where u1 middot middot middot u7 are inputdata x1 middot middot middot x18 are hidden-layer data and xc1 middot middot middot xc18 arebearing-layer data With the help of MATLAB neuralnetwork toolbox Elman neural network can be easily

built To be specific the MATLAB neural networktoolbox provides an Elmannet function and Elmannetwork construction can be completed by setting threeparameters in the Elmannet function which are the delaytime the number of hidden layer neurons and thetraining function respectively In this case the number ofhidden-layer neurons is set to be 18 and TRAINGDX ischosen to be the training function [20ndash22] TRAINGDXwhich is named gradient descent with momentum andadaptive learning rate backpropagation is a networktraining function that updates weight and bias valuesaccording to gradient descent momentum and anadaptive learning rate It will return a trained net and thetraining record In addition the maximum number ofiterations in the training is set to 3000 the maximumnumber of validation failures is set to 6 and the errortolerance is set to 000001 which means that the trainingcan be stopped if the error value is reached [23 24]Figure 4 shows the model structure graphic automaticallygenerated by MATLAB

To construct the Elman neural network the MATLABcode can be like this Firstly three parameters in theElmannet function are set and codes are as follows

net Elmannet 1 2 18 lsquoTRAINGDXrsquo( ) (6)

Secondly the maximum number of iterations in thetraining is set to 3000 and codes are as follows

nettrainParamepochs 3000 (7)

+irdly the maximum number of validation failures isset to 6 and the error tolerance is set to 000001 and codesare as follows

nettrainParammax fail 6

nettrainParamgoal 000001(8)

Finally initialize the network and codes are as follows

net init(net) (9)

Inputlayer

Hiddenlayer

Outputlayer

hellip hellip hellip hellip hellip

hellip hellip hellip

hellip

hellip hellip

hellip

Bearinglayer

y

xc1hellipxc18

x1hellipx18

u1hellipu7

Figure 3 Model structure

4 Security and Communication Networks

W

W

b b

WInput (t)

Hidden

Output

Output (t)

7

181

1

1

2

0

+ +

Figure 4 +e model structure graphic generated by MATLAB

50 100 150 200 250 300 350 400

) )

0 50 100 150 200 250 300 350 4000

3600

3400

3200

3000

2800

2600

2400

12000

11500

11000

10500

10000

9500

9000

8500

8000

7500

7000

Test results for training data (Shanghai composite index) Test results for training data (Shenzhen composite index)

Actual valueElman network output value

Actual valueElman network output value

Figure 5 Test results for training data

0 50 100 150 200 250 300 350 400 0 50 100 150 200 250 300 350 400

200

150

100

50

0

ndash50

ndash100

ndash150

ndash200

1400

1200

1000

800

600

400

200

0

ndash200

ndash400

ndash600

Residuals of test results on training data(Shanghai composite index)

Residuals of test results on training data(Shenzhen composite index)

Figure 6 Residuals of test results on training data

Security and Communication Networks 5

0 10 20 30 40 50 60 70 80 90 0 10 20 30 40 50 60

3050

3000

2950

2900

2850

2800

2750

12

115

11

105

1

095

Test results for testing data(Shanghai composite index)

Test results for testing data(Shenzhen composite index)times104

Actual valueElman network output value

Actual valueElman network output value

Figure 7 Test results for testing data

0 10 20 30 40 50 60 70 80 90 0 10 20 30 40 50 60

80

60

40

20

0

ndash20

ndash40

ndash60

ndash80

600

400

200

0

ndash200

ndash400

ndash600

Residuals of test results on testing data(Shanghai composite index)

Residuals of test results on testing data(Shenzhen composite index)

Figure 8 Residuals of test results on testing data

Table 1 Relative error of test (Shanghai composite index)Number 1 2 3 4 5 6 7Relative error 0006674 minus0001090 0010719 minus0010057 minus0025153 0001690 0000677Number 8 9 10 11 12 13 14Relative error 0012402 minus0003898 minus0002381 minus0015004 minus0023218 minus0007697 minus0002238Number 15 16 17 18 19 20 21Relative error 0009203 minus0000263 minus0009339 0000532 minus0022964 minus0000520 0007789Number 22 23 24 25 26 27 28Relative error 0006250 minus0004937 0023693 minus0000488 0004250 minus0005476 minus0003717Number 29 30 31 32 33 34 35Relative error minus0005492 0003612 0001676 0010246 minus0009194 0010889 minus0007296Number 36 37 38 39 40 41 42Relative error minus0006752 minus0008459 minus0001553 minus0000139 minus0003553 0004742 0005815Number 43 44 45 46 47 48 49

6 Security and Communication Networks

After all the above steps construction of Elman neuralnetwork is completed [25ndash27]

5 Experiments

51 Training of the SupposedModel When the Elman neuralnetwork is built the model can be trained but all the datahas to be normalized first considering of the performanceand stability of the model +e normalization operation canuse the mapminmax function provided by MATLAB tool-box and the default normalization interval of mapminmaxfunction is [minus1 1] +e detailed MATLAB code is as follows

trainx1 st11113858 1113859 mapminmax(trainx)

train ty1 sim net train x1( 1113857

train ty mapminmax lsquoreversersquo train ty1 st2( 1113857

(10)

After the normalized operation of training data trainxand trainx1 were obtained +e normalized training data(trainx1) were input into the network model to obtain thecurrent network output (train_ty1) and then reverselynormalized into normal data to obtain train_ty which isthe corresponding stock price of the training data Whatwe want to emphasize is that the data used in the testshould be normalized first and then the output should beunnormalized

52 e Test Results and the Quantitative AnalysisFigure 5 shows a graph of the actual and predicted values theblue solid line is the actual value and the red dotted linerepresents the Elman network output value Apparently themodel fits the training data well In addition we furthercalculated the residuals of test results on training Figure 6shows the residuals of training results on training data andresidual in mathematical statistics refers to the differencebetween the actual observed value and the estimated value(the fitting value)

Figure 7 shows a graph of the actual and predictedvalues the black solid line is the actual value and the reddotted line represents the Elman network output value Inaddition we further calculate the residuals of test results ontesting data Figure 8 shows the residuals of test results ontesting data And the relative errors of each prediction arealso calculated for further study and analysis All relativeerror values are shown in Tables 1 and 2 By analysing thesegraphs and data it is clear that the prediction effect of themodel is pretty good

6 Conclusions

+is study is based on a basic premise that the historicalstock price will have a great impact on the future short-termstock price On this premise we established an improvedElman model and collected the historical data of theShanghai composite index and the Shenzhen composite

Table 2 Relative error of test (Shenzhen composite index)Number 1 2 3 4 5 6 7Relative error minus0013747 0002825 minus0017177 0004370 0030084 minus0001853 minus0047231Number 8 9 10 11 12 13 14Relative error 0017591 minus0017927 0007011 0010986 0026179 minus0044038 0035607Number 15 16 17 18 19 20 21Relative error 0061603 minus0043408 0047926 0008911 0028250 minus0016142 0039523Number 22 23 24 25 26 27 28Relative error minus0012140 minus0021609 0001765 minus0010977 0036256 minus0005063 0006179Number 29 30 31 32 33 34 35Relative error 0003674 minus0022697 minus0014897 minus0002466 minus0006432 0001260 0023451Number 36 37 38 39 40 41 42Relative error minus0009376 minus0016781 0009971 minus0019686 0001902 minus0002325 0014443Number 43 44 45 46 47 48 49Relative error minus0024677 0009285 0008341 minus0003239 minus0000341 minus0010959 minus0005216Number 50 51 52 53 54 55 56Relative error minus0026052 minus0000483 minus0012797 0007071

Table 1 ContinuedRelative error 0013786 0014388 0014745 0000828 minus0010776 0005480 minus0012749Number 50 51 52 53 54 55 56Relative error 0009279 minus0004690 0000227 minus0006596 minus0019610 minus0002958 minus0001138Number 57 58 59 60 61 62 63Relative error 0000246 minus0007637 0009974 minus0016051 0002405 minus0002644 0003361Number 64 65 66 67 68 69 70Relative error minus0015557 minus0002587 minus0012756 minus0009015 minus0007153 minus0009234 minus0001688Number 71 72 73 74 75 76 77Relative error 0002176 minus0008728 minus0003110 0015200 minus0002282 minus0005979 minus0006169Number 78 79 80 81 82 83Relative error 0009414 minus0002398 0010743 0006008 minus0001105 0006332

Security and Communication Networks 7

index as a dataset for the experiment As for dataset pro-cessing we divided two datasets one for training and theother for testing In addition the data were normalizedRegarding model building we take MATLAB as the plat-form and set the number of hidden-layer neurons to be 18TRAINGDX is chosen to be the training function In termsof training the maximum number of iterations in thetraining is set to 3000 the maximum number of validationfailures is set to 6 and the error tolerance is set to 000001Finally we use the model to test the training data and the testdata In order to analyse the experimental results we alsocalculated relative error and the residuals and drew a pictureto show them Based on Elman network this study predictedthe short-term stock price in the future and achieved a goodprediction effect However it is unrealistic to predict thelong-term stock price in the future which is difficult toachieve [28ndash30] +is study provides an effective experi-mental method for predicting the near future stock price

Data Availability

All of the data used in this study are publicly available on the Internet and are easily accessible.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This work was supported by the key project of the Natural Science Research of Higher Education Institutions in Anhui Province (Grant no. KJ2018A0461), the Anhui Province Key Research and Development Program Project (Grant no. 201904a05020091), the Provincial Quality Engineering Project from the Department of Education, Anhui Province (Grant no. 2019mooc283), and the Domestic and Foreign Research and Study Program for Outstanding Young Backbone Talents in Colleges and Universities (Grant no. Gxgnfx2019034).

References

[1] Q. Shayea, Neural Networks to Predict Stock Market Price, World Congress on Engineering and Computer Science, San Francisco, CA, USA, 2017.
[2] X. Xu, B. Shen, X. Yin et al., "Edge server quantification and placement for offloading social media services in industrial cognitive IoV," IEEE Transactions on Industrial Informatics, 2020.
[3] V. Rohit, C. Pkumar, and S. Upendra, "Neural networks through stock market data prediction," in Proceedings of the 2017 International Conference of Electronics, Coimbatore, India, April 2017.
[4] D. Das, A. S. Sadiq, N. B. Ahmad, and J. Lloret, "Stock market prediction with big data through hybridization of data mining and optimized neural network techniques," Journal of Multiple-Valued Logic and Soft Computing, vol. 29, no. 1-2, pp. 157–181, 2017.
[5] J. Zahedi and M. Rounaghi, "Application of artificial neural network models and principal component analysis method in predicting stock prices on Tehran stock exchange," Physica A: Statistical Mechanics and its Applications, vol. 38, pp. 178–187, 2015.
[6] X. Han, "Stock price prediction with neural network based on MATLAB," Systems Engineering, 2003.
[7] X. Xu, X. Zhang, H. Gao, Y. Xue, L. Qi, and W. Dou, "BeCome: blockchain-enabled computation offloading for IoT in mobile edge computing," IEEE Transactions on Industrial Informatics, vol. 16, no. 6, pp. 4187–4195, 2020.
[8] R. Mahanta, T. N. Pandey, A. K. Jagadev, and S. Dehuri, "Optimized radial basis functional neural network for stock index prediction," in Proceedings of the IEEE Conference Publications, Chennai, India, March 2016.
[9] https://www.mathworks.com/help/deeplearning/ref/elmannet.html.
[10] Y. Zhang, G. Cui, S. Deng et al., "Efficient query of quality correlation for service composition," IEEE Transactions on Services Computing, p. 1, 2018.
[11] X. Xu, X. Zhang, X. Liu, J. Jiang, L. Qi, and M. Z. Alam Bhuiyan, "Adaptive computation offloading with edge for 5G-envisioned Internet of connected vehicles," IEEE Transactions on Intelligent Transportation Systems, 2020.
[12] Y. Zhang, C. Yin, Q. Wu et al., "Location-aware deep collaborative filtering for service recommendation," IEEE Transactions on Systems, Man, and Cybernetics: Systems (TSMC), 2019.
[13] J. Yu and P. Guo, "Stock price forecasting model based on improved Elman neural network," Computer Technology and Development, 2008.
[14] H. Shi and X. Liu, "Application on stock price prediction of Elman neural networks based on principal component analysis method," in Proceedings of the 2014 11th International Computer Conference on Wavelet Active Media Technology & Information Processing, Chengdu, China, December 2014.
[15] X. Zhang, S. Qu, J. Huang, B. Fang, and P. Yu, "Stock market prediction via multi-source multiple instance learning," IEEE Access, vol. 6, no. 99, pp. 50720–50728, 2018.
[16] M. Billah, S. Waheed, and A. Hanifa, "Predicting closing stock price using artificial neural network and adaptive neuro fuzzy inference system (ANFIS): the case of the Dhaka stock exchange," International Journal of Computer Applications, vol. 129, no. 11, pp. 1–5, 2015.
[17] https://www.mathworks.com/help/deeplearning/index.html?s_tid=CRUX_lftnav.
[18] Z. Zhang, Y. Shen, and G. Zhang, "Short-term prediction for opening price of stock market based on self-adapting variant PSO-Elman neural network," in Proceedings of the IEEE International Conference on Software Engineering and Service Science (ICSESS), Beijing, China, November 2017.
[19] V. Andrea and L. Karel, "MatConvNet: convolutional neural networks for MATLAB," in Proceedings of the 23rd ACM International Conference on Multimedia, pp. 689–692, Brisbane, Australia, October 2015.
[20] L. Ren, Y. Liu, Z. Rui, H. Li, and R. Feng, "Application of Elman neural network and MATLAB to load forecasting," in Proceedings of the International Conference on Information Technology and Computer Science, Kiev, Ukraine, July 2009.
[21] K. Kim and W. Lee, "Stock market prediction using artificial neural networks with optimal feature transformation," Neural Computing & Applications, vol. 13, no. 3, pp. 255–260, 2004.
[22] H. Grigoryan, "Stock market prediction using artificial neural networks: case study of TAL1T, Nasdaq OMX Baltic stock," Database Systems Journal, 2015.
[23] S. Nayak, B. Misra, and H. Behera, "An adaptive second order neural network with genetic-algorithm-based training (ASONN-GA) to forecast the closing prices of the stock market," International Journal of Applied Metaheuristic Computing, vol. 7, no. 2, pp. 39–57, 2016.
[24] Y. Zhang, K. Wang, Q. He et al., "Covering-based web service quality prediction via neighborhood-aware matrix factorization," IEEE Transactions on Services Computing, 2019.
[25] P. Kai, H. Huang, S. Wan, and V. Leung, "End-edge-cloud collaborative computation offloading for multiple mobile users in heterogeneous edge-server environment," Wireless Networks, vol. 2020, 2020.
[26] C. Goutami and S. Chattopadhyay, "Monthly sunspot number time series analysis and its model construction through autoregressive artificial neural network," The European Physical Journal Plus, vol. 127, no. 4, 2012.
[27] Z. Guo, J. Wu, H. Lu, and J. Wang, "A case study on a hybrid wind speed forecasting method using BP neural network," Knowledge-Based Systems, vol. 24, no. 7, pp. 1048–1056, 2011.
[28] Y. Zhang and L. Wu, "Stock market prediction of S&P 500 via combination of improved BCO approach and BP neural network," Expert Systems with Applications, vol. 36, no. 5, pp. 8849–8854, 2008.
[29] X. Han, "Stock price prediction with neural network based on MATLAB," Systems Engineering, vol. 2003, 2003.
[30] K. Peng, B. Zhao, S. Xue, and Q. Huang, "Energy- and resource-aware computation offloading for complex tasks in edge environment," Complexity, vol. 2020, Article ID 9548262, 14 pages, 2020.

52 e Test Results and the Quantitative AnalysisFigure 5 shows a graph of the actual and predicted values theblue solid line is the actual value and the red dotted linerepresents the Elman network output value Apparently themodel fits the training data well In addition we furthercalculated the residuals of test results on training Figure 6shows the residuals of training results on training data andresidual in mathematical statistics refers to the differencebetween the actual observed value and the estimated value(the fitting value)

Figure 7 shows a graph of the actual and predictedvalues the black solid line is the actual value and the reddotted line represents the Elman network output value Inaddition we further calculate the residuals of test results ontesting data Figure 8 shows the residuals of test results ontesting data And the relative errors of each prediction arealso calculated for further study and analysis All relativeerror values are shown in Tables 1 and 2 By analysing thesegraphs and data it is clear that the prediction effect of themodel is pretty good

6 Conclusions

+is study is based on a basic premise that the historicalstock price will have a great impact on the future short-termstock price On this premise we established an improvedElman model and collected the historical data of theShanghai composite index and the Shenzhen composite

Table 2 Relative error of test (Shenzhen composite index)Number 1 2 3 4 5 6 7Relative error minus0013747 0002825 minus0017177 0004370 0030084 minus0001853 minus0047231Number 8 9 10 11 12 13 14Relative error 0017591 minus0017927 0007011 0010986 0026179 minus0044038 0035607Number 15 16 17 18 19 20 21Relative error 0061603 minus0043408 0047926 0008911 0028250 minus0016142 0039523Number 22 23 24 25 26 27 28Relative error minus0012140 minus0021609 0001765 minus0010977 0036256 minus0005063 0006179Number 29 30 31 32 33 34 35Relative error 0003674 minus0022697 minus0014897 minus0002466 minus0006432 0001260 0023451Number 36 37 38 39 40 41 42Relative error minus0009376 minus0016781 0009971 minus0019686 0001902 minus0002325 0014443Number 43 44 45 46 47 48 49Relative error minus0024677 0009285 0008341 minus0003239 minus0000341 minus0010959 minus0005216Number 50 51 52 53 54 55 56Relative error minus0026052 minus0000483 minus0012797 0007071

Table 1 ContinuedRelative error 0013786 0014388 0014745 0000828 minus0010776 0005480 minus0012749Number 50 51 52 53 54 55 56Relative error 0009279 minus0004690 0000227 minus0006596 minus0019610 minus0002958 minus0001138Number 57 58 59 60 61 62 63Relative error 0000246 minus0007637 0009974 minus0016051 0002405 minus0002644 0003361Number 64 65 66 67 68 69 70Relative error minus0015557 minus0002587 minus0012756 minus0009015 minus0007153 minus0009234 minus0001688Number 71 72 73 74 75 76 77Relative error 0002176 minus0008728 minus0003110 0015200 minus0002282 minus0005979 minus0006169Number 78 79 80 81 82 83Relative error 0009414 minus0002398 0010743 0006008 minus0001105 0006332

Security and Communication Networks 7

index as a dataset for the experiment As for dataset pro-cessing we divided two datasets one for training and theother for testing In addition the data were normalizedRegarding model building we take MATLAB as the plat-form and set the number of hidden-layer neurons to be 18TRAINGDX is chosen to be the training function In termsof training the maximum number of iterations in thetraining is set to 3000 the maximum number of validationfailures is set to 6 and the error tolerance is set to 000001Finally we use the model to test the training data and the testdata In order to analyse the experimental results we alsocalculated relative error and the residuals and drew a pictureto show them Based on Elman network this study predictedthe short-term stock price in the future and achieved a goodprediction effect However it is unrealistic to predict thelong-term stock price in the future which is difficult toachieve [28ndash30] +is study provides an effective experi-mental method for predicting the near future stock price

Data Availability

All of the data used in this study are already available on theInternet and is easily accessible

Conflicts of Interest

+e authors declare that there are no conflicts of interestregarding the publication of this paper

Acknowledgments

+is work was supported by the key project of the NaturalScience Research of Higher Education Institutions in AnhuiProvince (Grant no KJ2018A0461) the Anhui Province KeyResearch and Development Program Project (Grant no201904a05020091) the Provincial Quality EngineeringProject from Department of Education Anhui Province(Grant no 2019mooc283) and the Domestic and ForeignResearch and Study Program for Outstanding YoungBackbone Talents in Colleges and Universities (Grant noGxgnfx2019034)

References

[1] Q Shayea Neural Networks to Predict Stock Market PriceWorld Congress on Engineering and Computer Science SanFrancisco CA USA 2017

[2] X Xu B Shen X Yin et al ldquoEdge server quantification andplacement for offloading social media services in industrialcognitive IoVrdquo IEEE Transactions on Industrial Informatics2020

[3] V Rohit C Pkumar and S Upendra ldquoNeural networksthrough stock market data predictionrdquo in Proceedings of the2017 International Conference of Electronics CoimbatoreIndia April 2017

[4] D Das A S Sadiq N B Ahmad and J Lloret ldquoStock marketprediction with big data through hybridization of data miningand optimized neural network techniquesrdquo Journal of Mul-tiple-Valued Logic and Soft Computing vol 29 no 1-2pp 157ndash181 2017

[5] J Zahedi and M Rounaghi ldquoApplication of artificial neuralnetwork models and principal component analysis method inpredicting stock prices on tehran stock exchangerdquo Physica AStatistical Mechanics and its Application vol 38 pp 178ndash1872015

[6] X Han ldquoStock price prediction with neural network based onMATLABrdquo Systems Engineering 2003

[7] X Xu X Zhang H Gao Y Xue L Qi and W Dou ldquoBe-Come blockchain-enabled computation offloading for IoT inmobile edge computingrdquo IEEE Transactions on IndustrialInformatics vol 16 no 6 pp 4187ndash4195 2020

[8] R Mahanta T N Pandey A K Jagadev and S DehurildquoOptimized radial basis functional neural network for stockindex predictionrdquo in Proceedings of the IEEE ConferencePublications Chennai India March 2016

[9] httpswwwmathworkscomhelpdeeplearningrefElmannethtml

[10] Y Zhang G Cui S Deng et al ldquoEfficient query of qualitycorrelation for service compositionrdquo IEEE Transactions onServices Computing p 1 2018

[11] X Xu X Zhang X Liu J Jiang L Qi and M Z AlamBhuiyan ldquoAdaptive computation offloading with edge for 5G-envisioned Internet of connected vehiclesrdquo IEEE Transactionson Intelligent Transportation Systems 2020

[12] Y Zhang C Yin Q Wu et al ldquoLocation-aware deep col-laborative filtering for service recommendationrdquo IEEETransactions on Systems Man and Cybernetics Systems(TSMC) 2019

[13] J Yu and P Guo ldquoStock price forecasting Model Based onImproved Elman Neural Networkrdquo Computer Technology andDevelopment 2008

[14] H Shi and X Liu ldquoApplication on stock price prediction ofElman neural networks based on principal componentanalysis methodrdquo in Proceedings of the 201411th InternationalComputer Conference on Wavelet Active Media Technology ampInformation Processing Chengdu China December 2014

[15] X Zhang S Qu J Huang B Fang and P Yu ldquoStock marketprediction via multi-source multiple instance learningrdquo IEEEAccess vol 6 no 99 pp 50720ndash50728 2018

[16] M Billah S Waheed and A Hanifa ldquoPredicting closing stockprice using artificial neural network and adaptive neuro fuzzyinference system (ANFIS the case of the Dhaka stock ex-changerdquo International Journal of Computer Applicationsvol 129 no 11 pp 1ndash5 2015

[17] httpswwwmathworkscomhelpdeeplearningindexhtmls_tidCRUX_lftnav

[18] Z Zhang Y Shen and G Zhang ldquoShort-term prediction foropening price of stock market based on self-adapting variantPSO-elman neural networkrdquo in Proceedings of the IEEE In-ternational Conference on Software Engineering and ServiceScience (ICSESS) Beijing China November 2017

[19] V Andrea and L Karel ldquoMatConvNet-convolutional neuralnetworks for MATLABrdquo in Proceedings of the 23rd ACMInternational Conference on Multimedia pp 689ndash692 Bris-bane Australia October 2015

[20] L Ren Y Liu Z Rui H Li and R Feng ldquoApplication ofelman neural network and MATLAB to load forecastingrdquo inProceedings of the International Conference on InformationTechnology and Computer Science Kiev Ukraine July 2009

[21] K Kim and W Lee ldquoStock market prediction using artificialneural networks with optimal feature transformationrdquoNeuralComputing amp Applications vol 13 no 3 pp 255ndash260 2004

8 Security and Communication Networks

[22] H Grigoryan ldquoStock market prediction using artificial neuralnetworks Case Study of TAL1T Nasdaq OMX Baltic StockrdquoDatabase Systems Journal 2015

[23] S Nayak B Misra and H Behera ldquoAn adaptive second orderneural network with genetic-algorithm-based training(ASONN-GA) to forecast the closing prices of the stockmarketrdquo International Journal of Applied MetaheuristicComputing vol 7 no 2 pp 39ndash57 2016

[24] Y Zhang K Wang Q He et al ldquoCovering-based web servicequality prediction via neighborhood-aware matrix factor-izationrdquo IEEE Transactions on Services Computing 2019

[25] P Kai H Huang S Wan and V Leung ldquoEnd-edge-cloudcollaborative computation offloading for multiple mobileusers in heterogeneous edge-server environmentrdquo WirelessNetwork vol 2020 2020

[26] C Goutami and S Chattopadhyay ldquoMonthly sunspot numbertime series analysis and its model construction throughautoregressive artificial neural networkrdquo e EuropeanPhysical Journal Plus vol 127 no 4 2012

[27] Z Guo J Wu H Lu and J Wang ldquoA case study on a hybridwind speed forecasting method using BP neural networkrdquoKnowledge-Based Systems vol 24 no 7 pp 1048ndash1056 2011

[28] Y Zhang and L Wu ldquoStock market prediction of SampP 500 viacombination of improved BCO approach and BP neuralnetworkrdquo Expert Systems with Applications vol 36 no 5pp 8849ndash8854 2008

[29] X Han ldquoStock Price Prediction with Neural Network Basedon MATLABrdquo Systems Engineering vol 2003 2003

[30] K Peng B Zhao S Xue and Q Huang ldquoEnergy- and re-source-aware computation offloading for complex tasks inedge environmentrdquo Complexity vol 2020 Article ID9548262 14 pages 2020

Security and Communication Networks 9

Page 4: Research Article ...downloads.hindawi.com/journals/scn/2020/8824430.pdf · Elman neural network is a typical feedback neural network model widely used, which is generally divided

334358

334527

333964

339052 middot middot middot 299928

337847 middot middot middot 300645

337204 middot middot middot 297708

334894

337438338299338828

33861

338179 middot middot middot 298534

337017 middot middot middot 295543337865 middot middot middot 29290933807 middot middot middot 293217

338825 middot middot middot 290519

⎛⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎝

⎞⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎠

(4)

+e Shenzhen composite index dataset is 8786349784709094 85735693 83550002 84197868 8533428984469836 84802244 85113743 87316394 8716817286669025 85092723 84409528 84541357 85195698middot middot middot middot middot middot 104777614 104609947 105755242 106181651108999169 109236123 110538157 109720503 In thesame way the Shenzhen composite index dataset isformed as a 8 times 413 matrix which is as follows

87863497

84709094

85735693

85113743 middot middot middot 104777614

87316394 middot middot middot 104609947

87168172 middot middot middot 105755242

83550002

841978688533428984469836

84802244

86669025 middot middot middot 106181651

85092723 middot middot middot 10899916984409528 middot middot middot 10923612384541357 middot middot middot 110538157

85195698 middot middot middot 109720503

⎛⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎝

⎞⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎠

(5)

413 columns mean 413 samples in which the first 7 datain each column are independent variables and the eighthdata is the data to be predicted

42 Construction of Elman Neural Network Figure 3 showsthe proposed model structure where u1 middot middot middot u7 are inputdata x1 middot middot middot x18 are hidden-layer data and xc1 middot middot middot xc18 arebearing-layer data With the help of MATLAB neuralnetwork toolbox Elman neural network can be easily

built To be specific the MATLAB neural networktoolbox provides an Elmannet function and Elmannetwork construction can be completed by setting threeparameters in the Elmannet function which are the delaytime the number of hidden layer neurons and thetraining function respectively In this case the number ofhidden-layer neurons is set to be 18 and TRAINGDX ischosen to be the training function [20ndash22] TRAINGDXwhich is named gradient descent with momentum andadaptive learning rate backpropagation is a networktraining function that updates weight and bias valuesaccording to gradient descent momentum and anadaptive learning rate It will return a trained net and thetraining record In addition the maximum number ofiterations in the training is set to 3000 the maximumnumber of validation failures is set to 6 and the errortolerance is set to 000001 which means that the trainingcan be stopped if the error value is reached [23 24]Figure 4 shows the model structure graphic automaticallygenerated by MATLAB

To construct the Elman neural network the MATLABcode can be like this Firstly three parameters in theElmannet function are set and codes are as follows

net Elmannet 1 2 18 lsquoTRAINGDXrsquo( ) (6)

Secondly the maximum number of iterations in thetraining is set to 3000 and codes are as follows

nettrainParamepochs 3000 (7)

+irdly the maximum number of validation failures isset to 6 and the error tolerance is set to 000001 and codesare as follows

nettrainParammax fail 6

nettrainParamgoal 000001(8)

Finally initialize the network and codes are as follows

net init(net) (9)

Inputlayer

Hiddenlayer

Outputlayer

hellip hellip hellip hellip hellip

hellip hellip hellip

hellip

hellip hellip

hellip

Bearinglayer

y

xc1hellipxc18

x1hellipx18

u1hellipu7

Figure 3 Model structure

4 Security and Communication Networks

W

W

b b

WInput (t)

Hidden

Output

Output (t)

7

181

1

1

2

0

+ +

Figure 4 +e model structure graphic generated by MATLAB

50 100 150 200 250 300 350 400

) )

0 50 100 150 200 250 300 350 4000

3600

3400

3200

3000

2800

2600

2400

12000

11500

11000

10500

10000

9500

9000

8500

8000

7500

7000

Test results for training data (Shanghai composite index) Test results for training data (Shenzhen composite index)

Actual valueElman network output value

Actual valueElman network output value

Figure 5 Test results for training data

0 50 100 150 200 250 300 350 400 0 50 100 150 200 250 300 350 400

200

150

100

50

0

ndash50

ndash100

ndash150

ndash200

1400

1200

1000

800

600

400

200

0

ndash200

ndash400

ndash600

Residuals of test results on training data(Shanghai composite index)

Residuals of test results on training data(Shenzhen composite index)

Figure 6 Residuals of test results on training data

Security and Communication Networks 5

0 10 20 30 40 50 60 70 80 90 0 10 20 30 40 50 60

3050

3000

2950

2900

2850

2800

2750

12

115

11

105

1

095

Test results for testing data(Shanghai composite index)

Test results for testing data(Shenzhen composite index)times104

Actual valueElman network output value

Actual valueElman network output value

Figure 7 Test results for testing data

0 10 20 30 40 50 60 70 80 90 0 10 20 30 40 50 60

80

60

40

20

0

ndash20

ndash40

ndash60

ndash80

600

400

200

0

ndash200

ndash400

ndash600

Residuals of test results on testing data(Shanghai composite index)

Residuals of test results on testing data(Shenzhen composite index)

Figure 8 Residuals of test results on testing data

Table 1 Relative error of test (Shanghai composite index)Number 1 2 3 4 5 6 7Relative error 0006674 minus0001090 0010719 minus0010057 minus0025153 0001690 0000677Number 8 9 10 11 12 13 14Relative error 0012402 minus0003898 minus0002381 minus0015004 minus0023218 minus0007697 minus0002238Number 15 16 17 18 19 20 21Relative error 0009203 minus0000263 minus0009339 0000532 minus0022964 minus0000520 0007789Number 22 23 24 25 26 27 28Relative error 0006250 minus0004937 0023693 minus0000488 0004250 minus0005476 minus0003717Number 29 30 31 32 33 34 35Relative error minus0005492 0003612 0001676 0010246 minus0009194 0010889 minus0007296Number 36 37 38 39 40 41 42Relative error minus0006752 minus0008459 minus0001553 minus0000139 minus0003553 0004742 0005815Number 43 44 45 46 47 48 49

6 Security and Communication Networks

After all the above steps construction of Elman neuralnetwork is completed [25ndash27]

5 Experiments

51 Training of the SupposedModel When the Elman neuralnetwork is built the model can be trained but all the datahas to be normalized first considering of the performanceand stability of the model +e normalization operation canuse the mapminmax function provided by MATLAB tool-box and the default normalization interval of mapminmaxfunction is [minus1 1] +e detailed MATLAB code is as follows

trainx1 st11113858 1113859 mapminmax(trainx)

train ty1 sim net train x1( 1113857

train ty mapminmax lsquoreversersquo train ty1 st2( 1113857

(10)

After the normalized operation of training data trainxand trainx1 were obtained +e normalized training data(trainx1) were input into the network model to obtain thecurrent network output (train_ty1) and then reverselynormalized into normal data to obtain train_ty which isthe corresponding stock price of the training data Whatwe want to emphasize is that the data used in the testshould be normalized first and then the output should beunnormalized

52 e Test Results and the Quantitative AnalysisFigure 5 shows a graph of the actual and predicted values theblue solid line is the actual value and the red dotted linerepresents the Elman network output value Apparently themodel fits the training data well In addition we furthercalculated the residuals of test results on training Figure 6shows the residuals of training results on training data andresidual in mathematical statistics refers to the differencebetween the actual observed value and the estimated value(the fitting value)

Figure 7 shows a graph of the actual and predictedvalues the black solid line is the actual value and the reddotted line represents the Elman network output value Inaddition we further calculate the residuals of test results ontesting data Figure 8 shows the residuals of test results ontesting data And the relative errors of each prediction arealso calculated for further study and analysis All relativeerror values are shown in Tables 1 and 2 By analysing thesegraphs and data it is clear that the prediction effect of themodel is pretty good

6 Conclusions

+is study is based on a basic premise that the historicalstock price will have a great impact on the future short-termstock price On this premise we established an improvedElman model and collected the historical data of theShanghai composite index and the Shenzhen composite

Table 2 Relative error of test (Shenzhen composite index)Number 1 2 3 4 5 6 7Relative error minus0013747 0002825 minus0017177 0004370 0030084 minus0001853 minus0047231Number 8 9 10 11 12 13 14Relative error 0017591 minus0017927 0007011 0010986 0026179 minus0044038 0035607Number 15 16 17 18 19 20 21Relative error 0061603 minus0043408 0047926 0008911 0028250 minus0016142 0039523Number 22 23 24 25 26 27 28Relative error minus0012140 minus0021609 0001765 minus0010977 0036256 minus0005063 0006179Number 29 30 31 32 33 34 35Relative error 0003674 minus0022697 minus0014897 minus0002466 minus0006432 0001260 0023451Number 36 37 38 39 40 41 42Relative error minus0009376 minus0016781 0009971 minus0019686 0001902 minus0002325 0014443Number 43 44 45 46 47 48 49Relative error minus0024677 0009285 0008341 minus0003239 minus0000341 minus0010959 minus0005216Number 50 51 52 53 54 55 56Relative error minus0026052 minus0000483 minus0012797 0007071

Table 1 ContinuedRelative error 0013786 0014388 0014745 0000828 minus0010776 0005480 minus0012749Number 50 51 52 53 54 55 56Relative error 0009279 minus0004690 0000227 minus0006596 minus0019610 minus0002958 minus0001138Number 57 58 59 60 61 62 63Relative error 0000246 minus0007637 0009974 minus0016051 0002405 minus0002644 0003361Number 64 65 66 67 68 69 70Relative error minus0015557 minus0002587 minus0012756 minus0009015 minus0007153 minus0009234 minus0001688Number 71 72 73 74 75 76 77Relative error 0002176 minus0008728 minus0003110 0015200 minus0002282 minus0005979 minus0006169Number 78 79 80 81 82 83Relative error 0009414 minus0002398 0010743 0006008 minus0001105 0006332

Security and Communication Networks 7

index as a dataset for the experiment As for dataset pro-cessing we divided two datasets one for training and theother for testing In addition the data were normalizedRegarding model building we take MATLAB as the plat-form and set the number of hidden-layer neurons to be 18TRAINGDX is chosen to be the training function In termsof training the maximum number of iterations in thetraining is set to 3000 the maximum number of validationfailures is set to 6 and the error tolerance is set to 000001Finally we use the model to test the training data and the testdata In order to analyse the experimental results we alsocalculated relative error and the residuals and drew a pictureto show them Based on Elman network this study predictedthe short-term stock price in the future and achieved a goodprediction effect However it is unrealistic to predict thelong-term stock price in the future which is difficult toachieve [28ndash30] +is study provides an effective experi-mental method for predicting the near future stock price

Data Availability

All of the data used in this study are already available on theInternet and is easily accessible

Conflicts of Interest

+e authors declare that there are no conflicts of interestregarding the publication of this paper

Acknowledgments

+is work was supported by the key project of the NaturalScience Research of Higher Education Institutions in AnhuiProvince (Grant no KJ2018A0461) the Anhui Province KeyResearch and Development Program Project (Grant no201904a05020091) the Provincial Quality EngineeringProject from Department of Education Anhui Province(Grant no 2019mooc283) and the Domestic and ForeignResearch and Study Program for Outstanding YoungBackbone Talents in Colleges and Universities (Grant noGxgnfx2019034)

References

[1] Q Shayea Neural Networks to Predict Stock Market PriceWorld Congress on Engineering and Computer Science SanFrancisco CA USA 2017

[2] X Xu B Shen X Yin et al ldquoEdge server quantification andplacement for offloading social media services in industrialcognitive IoVrdquo IEEE Transactions on Industrial Informatics2020

[3] V Rohit C Pkumar and S Upendra ldquoNeural networksthrough stock market data predictionrdquo in Proceedings of the2017 International Conference of Electronics CoimbatoreIndia April 2017

[4] D Das A S Sadiq N B Ahmad and J Lloret ldquoStock marketprediction with big data through hybridization of data miningand optimized neural network techniquesrdquo Journal of Mul-tiple-Valued Logic and Soft Computing vol 29 no 1-2pp 157ndash181 2017

[5] J Zahedi and M Rounaghi ldquoApplication of artificial neuralnetwork models and principal component analysis method inpredicting stock prices on tehran stock exchangerdquo Physica AStatistical Mechanics and its Application vol 38 pp 178ndash1872015

[6] X Han ldquoStock price prediction with neural network based onMATLABrdquo Systems Engineering 2003

[7] X Xu X Zhang H Gao Y Xue L Qi and W Dou ldquoBe-Come blockchain-enabled computation offloading for IoT inmobile edge computingrdquo IEEE Transactions on IndustrialInformatics vol 16 no 6 pp 4187ndash4195 2020

[8] R Mahanta T N Pandey A K Jagadev and S DehurildquoOptimized radial basis functional neural network for stockindex predictionrdquo in Proceedings of the IEEE ConferencePublications Chennai India March 2016

[9] httpswwwmathworkscomhelpdeeplearningrefElmannethtml

[10] Y Zhang G Cui S Deng et al ldquoEfficient query of qualitycorrelation for service compositionrdquo IEEE Transactions onServices Computing p 1 2018

[11] X Xu X Zhang X Liu J Jiang L Qi and M Z AlamBhuiyan ldquoAdaptive computation offloading with edge for 5G-envisioned Internet of connected vehiclesrdquo IEEE Transactionson Intelligent Transportation Systems 2020

[12] Y Zhang C Yin Q Wu et al ldquoLocation-aware deep col-laborative filtering for service recommendationrdquo IEEETransactions on Systems Man and Cybernetics Systems(TSMC) 2019

[13] J Yu and P Guo ldquoStock price forecasting Model Based onImproved Elman Neural Networkrdquo Computer Technology andDevelopment 2008

[14] H Shi and X Liu ldquoApplication on stock price prediction ofElman neural networks based on principal componentanalysis methodrdquo in Proceedings of the 201411th InternationalComputer Conference on Wavelet Active Media Technology ampInformation Processing Chengdu China December 2014

[15] X Zhang S Qu J Huang B Fang and P Yu ldquoStock marketprediction via multi-source multiple instance learningrdquo IEEEAccess vol 6 no 99 pp 50720ndash50728 2018

[16] M Billah S Waheed and A Hanifa ldquoPredicting closing stockprice using artificial neural network and adaptive neuro fuzzyinference system (ANFIS the case of the Dhaka stock ex-changerdquo International Journal of Computer Applicationsvol 129 no 11 pp 1ndash5 2015

[17] httpswwwmathworkscomhelpdeeplearningindexhtmls_tidCRUX_lftnav

[18] Z Zhang Y Shen and G Zhang ldquoShort-term prediction foropening price of stock market based on self-adapting variantPSO-elman neural networkrdquo in Proceedings of the IEEE In-ternational Conference on Software Engineering and ServiceScience (ICSESS) Beijing China November 2017

[19] V Andrea and L Karel ldquoMatConvNet-convolutional neuralnetworks for MATLABrdquo in Proceedings of the 23rd ACMInternational Conference on Multimedia pp 689ndash692 Bris-bane Australia October 2015

[20] L Ren Y Liu Z Rui H Li and R Feng ldquoApplication ofelman neural network and MATLAB to load forecastingrdquo inProceedings of the International Conference on InformationTechnology and Computer Science Kiev Ukraine July 2009

[21] K Kim and W Lee ldquoStock market prediction using artificialneural networks with optimal feature transformationrdquoNeuralComputing amp Applications vol 13 no 3 pp 255ndash260 2004

8 Security and Communication Networks

[22] H Grigoryan ldquoStock market prediction using artificial neuralnetworks Case Study of TAL1T Nasdaq OMX Baltic StockrdquoDatabase Systems Journal 2015

[23] S Nayak B Misra and H Behera ldquoAn adaptive second orderneural network with genetic-algorithm-based training(ASONN-GA) to forecast the closing prices of the stockmarketrdquo International Journal of Applied MetaheuristicComputing vol 7 no 2 pp 39ndash57 2016

[24] Y Zhang K Wang Q He et al ldquoCovering-based web servicequality prediction via neighborhood-aware matrix factor-izationrdquo IEEE Transactions on Services Computing 2019

[25] P Kai H Huang S Wan and V Leung ldquoEnd-edge-cloudcollaborative computation offloading for multiple mobileusers in heterogeneous edge-server environmentrdquo WirelessNetwork vol 2020 2020

[26] C Goutami and S Chattopadhyay ldquoMonthly sunspot numbertime series analysis and its model construction throughautoregressive artificial neural networkrdquo e EuropeanPhysical Journal Plus vol 127 no 4 2012

[27] Z Guo J Wu H Lu and J Wang ldquoA case study on a hybridwind speed forecasting method using BP neural networkrdquoKnowledge-Based Systems vol 24 no 7 pp 1048ndash1056 2011

[28] Y Zhang and L Wu ldquoStock market prediction of SampP 500 viacombination of improved BCO approach and BP neuralnetworkrdquo Expert Systems with Applications vol 36 no 5pp 8849ndash8854 2008

[29] X Han ldquoStock Price Prediction with Neural Network Basedon MATLABrdquo Systems Engineering vol 2003 2003

[30] K Peng B Zhao S Xue and Q Huang ldquoEnergy- and re-source-aware computation offloading for complex tasks inedge environmentrdquo Complexity vol 2020 Article ID9548262 14 pages 2020

Security and Communication Networks 9

Page 5: Research Article ...downloads.hindawi.com/journals/scn/2020/8824430.pdf · Elman neural network is a typical feedback neural network model widely used, which is generally divided

W

W

b b

WInput (t)

Hidden

Output

Output (t)

7

181

1

1

2

0

+ +

Figure 4 +e model structure graphic generated by MATLAB

50 100 150 200 250 300 350 400

) )

0 50 100 150 200 250 300 350 4000

3600

3400

3200

3000

2800

2600

2400

12000

11500

11000

10500

10000

9500

9000

8500

8000

7500

7000

Test results for training data (Shanghai composite index) Test results for training data (Shenzhen composite index)

Actual valueElman network output value

Actual valueElman network output value

Figure 5 Test results for training data

0 50 100 150 200 250 300 350 400 0 50 100 150 200 250 300 350 400

200

150

100

50

0

ndash50

ndash100

ndash150

ndash200

1400

1200

1000

800

600

400

200

0

ndash200

ndash400

ndash600

Residuals of test results on training data(Shanghai composite index)

Residuals of test results on training data(Shenzhen composite index)

Figure 6 Residuals of test results on training data

Security and Communication Networks 5

0 10 20 30 40 50 60 70 80 90 0 10 20 30 40 50 60

3050

3000

2950

2900

2850

2800

2750

12

115

11

105

1

095

Test results for testing data(Shanghai composite index)

Test results for testing data(Shenzhen composite index)times104

Actual valueElman network output value

Actual valueElman network output value

Figure 7 Test results for testing data

0 10 20 30 40 50 60 70 80 90 0 10 20 30 40 50 60

80

60

40

20

0

ndash20

ndash40

ndash60

ndash80

600

400

200

0

ndash200

ndash400

ndash600

Residuals of test results on testing data(Shanghai composite index)

Residuals of test results on testing data(Shenzhen composite index)

Figure 8 Residuals of test results on testing data

Table 1 Relative error of test (Shanghai composite index)Number 1 2 3 4 5 6 7Relative error 0006674 minus0001090 0010719 minus0010057 minus0025153 0001690 0000677Number 8 9 10 11 12 13 14Relative error 0012402 minus0003898 minus0002381 minus0015004 minus0023218 minus0007697 minus0002238Number 15 16 17 18 19 20 21Relative error 0009203 minus0000263 minus0009339 0000532 minus0022964 minus0000520 0007789Number 22 23 24 25 26 27 28Relative error 0006250 minus0004937 0023693 minus0000488 0004250 minus0005476 minus0003717Number 29 30 31 32 33 34 35Relative error minus0005492 0003612 0001676 0010246 minus0009194 0010889 minus0007296Number 36 37 38 39 40 41 42Relative error minus0006752 minus0008459 minus0001553 minus0000139 minus0003553 0004742 0005815Number 43 44 45 46 47 48 49

6 Security and Communication Networks

After all the above steps construction of Elman neuralnetwork is completed [25ndash27]

5 Experiments

51 Training of the SupposedModel When the Elman neuralnetwork is built the model can be trained but all the datahas to be normalized first considering of the performanceand stability of the model +e normalization operation canuse the mapminmax function provided by MATLAB tool-box and the default normalization interval of mapminmaxfunction is [minus1 1] +e detailed MATLAB code is as follows

trainx1 st11113858 1113859 mapminmax(trainx)

train ty1 sim net train x1( 1113857

train ty mapminmax lsquoreversersquo train ty1 st2( 1113857

(10)

After the normalized operation of training data trainxand trainx1 were obtained +e normalized training data(trainx1) were input into the network model to obtain thecurrent network output (train_ty1) and then reverselynormalized into normal data to obtain train_ty which isthe corresponding stock price of the training data Whatwe want to emphasize is that the data used in the testshould be normalized first and then the output should beunnormalized

52 e Test Results and the Quantitative AnalysisFigure 5 shows a graph of the actual and predicted values theblue solid line is the actual value and the red dotted linerepresents the Elman network output value Apparently themodel fits the training data well In addition we furthercalculated the residuals of test results on training Figure 6shows the residuals of training results on training data andresidual in mathematical statistics refers to the differencebetween the actual observed value and the estimated value(the fitting value)

Figure 7 shows a graph of the actual and predictedvalues the black solid line is the actual value and the reddotted line represents the Elman network output value Inaddition we further calculate the residuals of test results ontesting data Figure 8 shows the residuals of test results ontesting data And the relative errors of each prediction arealso calculated for further study and analysis All relativeerror values are shown in Tables 1 and 2 By analysing thesegraphs and data it is clear that the prediction effect of themodel is pretty good

6 Conclusions

+is study is based on a basic premise that the historicalstock price will have a great impact on the future short-termstock price On this premise we established an improvedElman model and collected the historical data of theShanghai composite index and the Shenzhen composite

Table 2 Relative error of test (Shenzhen composite index)Number 1 2 3 4 5 6 7Relative error minus0013747 0002825 minus0017177 0004370 0030084 minus0001853 minus0047231Number 8 9 10 11 12 13 14Relative error 0017591 minus0017927 0007011 0010986 0026179 minus0044038 0035607Number 15 16 17 18 19 20 21Relative error 0061603 minus0043408 0047926 0008911 0028250 minus0016142 0039523Number 22 23 24 25 26 27 28Relative error minus0012140 minus0021609 0001765 minus0010977 0036256 minus0005063 0006179Number 29 30 31 32 33 34 35Relative error 0003674 minus0022697 minus0014897 minus0002466 minus0006432 0001260 0023451Number 36 37 38 39 40 41 42Relative error minus0009376 minus0016781 0009971 minus0019686 0001902 minus0002325 0014443Number 43 44 45 46 47 48 49Relative error minus0024677 0009285 0008341 minus0003239 minus0000341 minus0010959 minus0005216Number 50 51 52 53 54 55 56Relative error minus0026052 minus0000483 minus0012797 0007071

Table 1 ContinuedRelative error 0013786 0014388 0014745 0000828 minus0010776 0005480 minus0012749Number 50 51 52 53 54 55 56Relative error 0009279 minus0004690 0000227 minus0006596 minus0019610 minus0002958 minus0001138Number 57 58 59 60 61 62 63Relative error 0000246 minus0007637 0009974 minus0016051 0002405 minus0002644 0003361Number 64 65 66 67 68 69 70Relative error minus0015557 minus0002587 minus0012756 minus0009015 minus0007153 minus0009234 minus0001688Number 71 72 73 74 75 76 77Relative error 0002176 minus0008728 minus0003110 0015200 minus0002282 minus0005979 minus0006169Number 78 79 80 81 82 83Relative error 0009414 minus0002398 0010743 0006008 minus0001105 0006332

Security and Communication Networks 7

index as a dataset for the experiment As for dataset pro-cessing we divided two datasets one for training and theother for testing In addition the data were normalizedRegarding model building we take MATLAB as the plat-form and set the number of hidden-layer neurons to be 18TRAINGDX is chosen to be the training function In termsof training the maximum number of iterations in thetraining is set to 3000 the maximum number of validationfailures is set to 6 and the error tolerance is set to 000001Finally we use the model to test the training data and the testdata In order to analyse the experimental results we alsocalculated relative error and the residuals and drew a pictureto show them Based on Elman network this study predictedthe short-term stock price in the future and achieved a goodprediction effect However it is unrealistic to predict thelong-term stock price in the future which is difficult toachieve [28ndash30] +is study provides an effective experi-mental method for predicting the near future stock price

Data Availability

All of the data used in this study are already available on theInternet and is easily accessible

Conflicts of Interest

+e authors declare that there are no conflicts of interestregarding the publication of this paper

Acknowledgments

+is work was supported by the key project of the NaturalScience Research of Higher Education Institutions in AnhuiProvince (Grant no KJ2018A0461) the Anhui Province KeyResearch and Development Program Project (Grant no201904a05020091) the Provincial Quality EngineeringProject from Department of Education Anhui Province(Grant no 2019mooc283) and the Domestic and ForeignResearch and Study Program for Outstanding YoungBackbone Talents in Colleges and Universities (Grant noGxgnfx2019034)

References

[1] Q. Shayea, Neural Networks to Predict Stock Market Price, World Congress on Engineering and Computer Science, San Francisco, CA, USA, 2017.

[2] X. Xu, B. Shen, X. Yin et al., "Edge server quantification and placement for offloading social media services in industrial cognitive IoV," IEEE Transactions on Industrial Informatics, 2020.

[3] V. Rohit, C. Pkumar, and S. Upendra, "Neural networks through stock market data prediction," in Proceedings of the 2017 International Conference of Electronics, Coimbatore, India, April 2017.

[4] D. Das, A. S. Sadiq, N. B. Ahmad, and J. Lloret, "Stock market prediction with big data through hybridization of data mining and optimized neural network techniques," Journal of Multiple-Valued Logic and Soft Computing, vol. 29, no. 1-2, pp. 157–181, 2017.

[5] J. Zahedi and M. Rounaghi, "Application of artificial neural network models and principal component analysis method in predicting stock prices on Tehran Stock Exchange," Physica A: Statistical Mechanics and its Applications, vol. 38, pp. 178–187, 2015.

[6] X. Han, "Stock price prediction with neural network based on MATLAB," Systems Engineering, 2003.

[7] X. Xu, X. Zhang, H. Gao, Y. Xue, L. Qi, and W. Dou, "BeCome: blockchain-enabled computation offloading for IoT in mobile edge computing," IEEE Transactions on Industrial Informatics, vol. 16, no. 6, pp. 4187–4195, 2020.

[8] R. Mahanta, T. N. Pandey, A. K. Jagadev, and S. Dehuri, "Optimized radial basis functional neural network for stock index prediction," in Proceedings of the IEEE Conference Publications, Chennai, India, March 2016.

[9] https://www.mathworks.com/help/deeplearning/ref/elmannet.html.

[10] Y. Zhang, G. Cui, S. Deng et al., "Efficient query of quality correlation for service composition," IEEE Transactions on Services Computing, p. 1, 2018.

[11] X. Xu, X. Zhang, X. Liu, J. Jiang, L. Qi, and M. Z. Alam Bhuiyan, "Adaptive computation offloading with edge for 5G-envisioned Internet of connected vehicles," IEEE Transactions on Intelligent Transportation Systems, 2020.

[12] Y. Zhang, C. Yin, Q. Wu et al., "Location-aware deep collaborative filtering for service recommendation," IEEE Transactions on Systems, Man, and Cybernetics: Systems (TSMC), 2019.

[13] J. Yu and P. Guo, "Stock price forecasting model based on improved Elman neural network," Computer Technology and Development, 2008.

[14] H. Shi and X. Liu, "Application on stock price prediction of Elman neural networks based on principal component analysis method," in Proceedings of the 2014 11th International Computer Conference on Wavelet Active Media Technology & Information Processing, Chengdu, China, December 2014.

[15] X. Zhang, S. Qu, J. Huang, B. Fang, and P. Yu, "Stock market prediction via multi-source multiple instance learning," IEEE Access, vol. 6, no. 99, pp. 50720–50728, 2018.

[16] M. Billah, S. Waheed, and A. Hanifa, "Predicting closing stock price using artificial neural network and adaptive neuro fuzzy inference system (ANFIS): the case of the Dhaka Stock Exchange," International Journal of Computer Applications, vol. 129, no. 11, pp. 1–5, 2015.

[17] https://www.mathworks.com/help/deeplearning/index.html?s_tid=CRUX_lftnav.

[18] Z. Zhang, Y. Shen, and G. Zhang, "Short-term prediction for opening price of stock market based on self-adapting variant PSO-Elman neural network," in Proceedings of the IEEE International Conference on Software Engineering and Service Science (ICSESS), Beijing, China, November 2017.

[19] V. Andrea and L. Karel, "MatConvNet: convolutional neural networks for MATLAB," in Proceedings of the 23rd ACM International Conference on Multimedia, pp. 689–692, Brisbane, Australia, October 2015.

[20] L. Ren, Y. Liu, Z. Rui, H. Li, and R. Feng, "Application of Elman neural network and MATLAB to load forecasting," in Proceedings of the International Conference on Information Technology and Computer Science, Kiev, Ukraine, July 2009.

[21] K. Kim and W. Lee, "Stock market prediction using artificial neural networks with optimal feature transformation," Neural Computing & Applications, vol. 13, no. 3, pp. 255–260, 2004.

[22] H. Grigoryan, "Stock market prediction using artificial neural networks: case study of TAL1T, Nasdaq OMX Baltic Stock," Database Systems Journal, 2015.

[23] S. Nayak, B. Misra, and H. Behera, "An adaptive second order neural network with genetic-algorithm-based training (ASONN-GA) to forecast the closing prices of the stock market," International Journal of Applied Metaheuristic Computing, vol. 7, no. 2, pp. 39–57, 2016.

[24] Y. Zhang, K. Wang, Q. He et al., "Covering-based web service quality prediction via neighborhood-aware matrix factorization," IEEE Transactions on Services Computing, 2019.

[25] P. Kai, H. Huang, S. Wan, and V. Leung, "End-edge-cloud collaborative computation offloading for multiple mobile users in heterogeneous edge-server environment," Wireless Networks, vol. 2020, 2020.

[26] C. Goutami and S. Chattopadhyay, "Monthly sunspot number time series analysis and its model construction through autoregressive artificial neural network," The European Physical Journal Plus, vol. 127, no. 4, 2012.

[27] Z. Guo, J. Wu, H. Lu, and J. Wang, "A case study on a hybrid wind speed forecasting method using BP neural network," Knowledge-Based Systems, vol. 24, no. 7, pp. 1048–1056, 2011.

[28] Y. Zhang and L. Wu, "Stock market prediction of S&P 500 via combination of improved BCO approach and BP neural network," Expert Systems with Applications, vol. 36, no. 5, pp. 8849–8854, 2008.

[29] X. Han, "Stock price prediction with neural network based on MATLAB," Systems Engineering, vol. 2003, 2003.

[30] K. Peng, B. Zhao, S. Xue, and Q. Huang, "Energy- and resource-aware computation offloading for complex tasks in edge environment," Complexity, vol. 2020, Article ID 9548262, 14 pages, 2020.

