


Research Article
A Deep Learning Prediction Model Based on Extreme-Point Symmetric Mode Decomposition and Cluster Analysis

Guohui Li, Songling Zhang, and Hong Yang

School of Electronic Engineering, Xi'an University of Posts and Telecommunications, Xi'an, Shaanxi 710121, China

Correspondence should be addressed to Hong Yang; uestcyhong@163.com

Received 14 July 2017; Accepted 5 December 2017; Published 27 December 2017

Academic Editor: Simone Bianco

Copyright © 2017 Guohui Li et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Hindawi, Mathematical Problems in Engineering, Volume 2017, Article ID 8513652, 6 pages, https://doi.org/10.1155/2017/8513652

Aiming at the irregularity of nonlinear signals and the difficulty of predicting them, a deep learning prediction model based on extreme-point symmetric mode decomposition (ESMD) and clustering analysis is proposed. Firstly, the original data is decomposed by ESMD to obtain a finite number of intrinsic mode functions (IMFs) and the residual. Secondly, fuzzy c-means is used to cluster the decomposed components, and then the deep belief network (DBN) is used to predict them. Finally, the reconstructed IMFs and residual form the final prediction results. Six kinds of prediction models are compared: the DBN prediction model, the EMD-DBN prediction model, the EEMD-DBN prediction model, the CEEMD-DBN prediction model, the ESMD-DBN prediction model, and the model proposed in this paper. The same sunspot time series is predicted with the six prediction models. The experimental results show that the proposed model has better prediction accuracy and smaller error.

1. Introduction

At present, there are still many difficulties in predicting nonlinear signals such as sunspots and underwater acoustic signals. Sunspots are the basic parameters of the solar activity level. They are closely related to geomagnetic disturbance and ionospheric electron concentration. Prediction of sunspots is an important part of spatial forecasting, which can provide important reference information for communication, navigation, and positioning. Some scholars have conducted extensive research on the theory of forecasting [1, 2]. In time-frequency signal analysis, the commonly used method is the Fourier transform, which mainly maps the time-domain signal to the frequency-domain energy spectrum space, but the Fourier transform only applies to stationary signals. Artificial neural networks have the characteristic of independent learning compared with previous regression analysis, which makes them especially suitable for nonlinear signal processing. However, due to the limitation of synchronous instantaneous input, the cumulative effect of a continuous signal over time cannot be reflected, and the prediction accuracy is low [3]. The wavelet neural network combines the characteristics of artificial neural networks and wavelet analysis and has been widely applied to the processing of nonlinear signals. Li and Wang [1] proposed a prediction model based on complementary ensemble empirical mode decomposition and wavelet neural network. Although its prediction accuracy is improved to a certain extent, there is room for further improvement. The emergence of empirical mode decomposition (EMD) [4] provides an idea for the processing of nonlinear signals. It does not need to select a basis function, but it is difficult to determine the number of screenings, and there are many defects in Hilbert spectral analysis. The extreme-point symmetric mode decomposition (ESMD) method [5-7] is a further improvement of EMD, whose envelope interpolation changes from the original external interpolation over extreme points to internal upper and lower extreme-point symmetric interpolation. The residual modal component is optimized by the least squares method, which adaptively and globally determines the number of screenings. ESMD uses the direct interpolation (DI) method, which differs from the Fourier transform only in the idea of transforming the integral algorithm. In view of the advantages of ESMD, this paper selects the ESMD method to decompose the nonlinear time series. Then, fuzzy c-means [8, 9] clustering analysis is used to aggregate the data with the same membership to facilitate the prediction analysis of the model. Finally, the deep belief network (DBN) [10-13] is trained to achieve the expected output value, and then the predicted output values are reconstructed to obtain the final predicted value.

2. ESMD Method

ESMD is a new development of the Hilbert-Huang transform, and its algorithm is as follows.

(1) Find all the extreme points (maxima and minima) of the data Z and record them as $X = \{x_1, x_2, \ldots, x_n\} \in R^{p \times n}$.

(2) Connect the adjacent extreme points with line segments and record their midpoints as $B_i$ ($i = 1, 2, \ldots, n-1$).

(3) Supplement the left and right boundary midpoints $B_0$ and $B_n$ by appropriate methods.

(4) Use the obtained n + 1 midpoints to construct p interpolation curves and calculate their mean curve.

(5) Repeat the above steps until the number of screenings reaches the preset maximum value; the first decomposed empirical mode is then recorded as $M_1$.

(6) Repeat the above steps for $Z - M_1$ to obtain $M_2, M_3, \ldots$ until the final margin R has only a certain number of extreme points.

(7) Let the maximum number of screenings D vary over the integer interval $[D_{\min}, D_{\max}]$ and cycle through the above process to obtain a series of components; then calculate the variance ratio $\sigma/\sigma_0$ and plot it against D, where $\sigma$ is the relative standard deviation of Z - R and $\sigma_0$ is the standard deviation of the original data.

(8) Select the maximum number of screenings $D_0$ that corresponds to the minimum variance ratio $\sigma/\sigma_0$ in the interval $[D_{\min}, D_{\max}]$ and repeat the first six steps with it to output the decomposition results (a minimal sketch of this selection loop is given below).
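For illustration only, the following minimal Python sketch shows how the optimal maximum sifting number $D_0$ in steps (7) and (8) could be selected. It assumes a hypothetical routine esmd_decompose(z, max_sift) that returns the IMFs and the residual R for a given maximum sifting number; it is not the authors' implementation.

    import numpy as np

    def select_max_sift(z, esmd_decompose, d_min=1, d_max=40):
        # Sweep the maximum sifting number D over [d_min, d_max] and keep the
        # value D0 whose variance ratio sigma/sigma0 is smallest (steps 7-8).
        sigma0 = np.std(z)                    # standard deviation of the original data
        best_d, best_ratio = d_min, np.inf
        for d in range(d_min, d_max + 1):
            imfs, residual = esmd_decompose(z, max_sift=d)
            sigma = np.std(z - residual)      # standard deviation of Z - R
            if sigma / sigma0 < best_ratio:
                best_d, best_ratio = d, sigma / sigma0
        return best_d, best_ratio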

3. Clustering Algorithm

The fuzzy clustering algorithm was originally proposed by Dunn [14] and further developed by Bezdek [15], and it is now applied in many fields. Its operation can be expressed as follows: the sample set $X = \{x_1, x_2, \ldots, x_n\} \in R^{p \times n}$ is divided into c classes. The membership degree of any sample element $x_k$ in class i is recorded as $u_{ik}$. After clustering, the fuzzy membership matrix is recorded as $U = [u_{ik}] \in R^{c \times n}$ and satisfies the following conditions:

$$u_{ik} \in [0, 1], \ \forall i, k; \qquad 0 < \sum_{k} u_{ik} < n, \ \forall i; \qquad \sum_{i} u_{ik} = 1, \ \forall k. \quad (1)$$

The fuzzy c-means clustering is obtained by minimizing the objective function $J_m(U, V)$:

$$J_m(U, V) = \sum_{k=1}^{n} \sum_{i=1}^{c} (u_{ik})^m \, d_{ik}^2(x_k, v_i), \quad (2)$$

where $U = [u_{ik}]$ is the membership matrix, $V = \{v_1, v_2, \ldots, v_c\} \in R^{p \times c}$ represents the set of c cluster centers, and $m \in [1, \infty)$ is the weighting exponent. The fuzzy clustering is transformed into hard-mean clustering [14] when m is 1. The ideal range of m is [1.5, 2.5]; usually m = 2.

The distance from the kth sample to the ith class center is

$$d_{ik}^2(x_k, v_i) = \|x_k - v_i\|_A^2 = (x_k - v_i)^T A (x_k - v_i), \quad (3)$$

where A is a positive definite p × p matrix; in the special case A = I, (3) is the Euclidean distance. FCM [16] is achieved by continuously optimizing the objective function. The FCM algorithm process is as follows.

(1) Initialize the cluster centers $V = \{v_1, v_2, \ldots, v_c\}$.

(2) Calculate the membership matrix:

$$u_{ik} = \left[ \sum_{j=1}^{c} \left( \frac{d_{ik}(x_k, v_i)}{d_{jk}(x_k, v_j)} \right)^{2/(m-1)} \right]^{-1}, \quad k = 1, 2, \ldots, n. \quad (4)$$

(3) Calculate the new cluster centers:

$$v_i = \frac{\sum_{k=1}^{n} (u_{ik})^m x_k}{\sum_{k=1}^{n} (u_{ik})^m}, \quad i = 1, 2, \ldots, c. \quad (5)$$

(4) Repeat steps (2) and (3) until the memberships converge. When $d_{ik}(x_k, v_i) = 0$, a singular value is generated and the membership cannot be calculated by (4); the membership of such a sample is then assigned directly, in accordance with constraint (1), as 1 for the class with zero distance and 0 for the other classes. (A sketch of this procedure is given below.)
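As an illustration of (1)-(5), the following short Python sketch implements a standard FCM iteration with the Euclidean distance (A = I). The function and variable names are our own, and the code is a sketch rather than the implementation used in this paper.

    import numpy as np

    def fcm(X, c, m=2.0, max_iter=100, tol=1e-5, seed=0):
        # X: (n, p) array of samples; returns membership matrix U (c, n)
        # and cluster centers V (c, p).
        rng = np.random.default_rng(seed)
        n = X.shape[0]
        U = rng.random((c, n))
        U /= U.sum(axis=0, keepdims=True)          # enforce sum_i u_ik = 1, eq. (1)
        for _ in range(max_iter):
            Um = U ** m
            V = (Um @ X) / Um.sum(axis=1, keepdims=True)                # eq. (5)
            d = np.linalg.norm(X[None, :, :] - V[:, None, :], axis=2)   # d_ik with A = I
            d = np.fmax(d, 1e-12)                  # guard against the singular case d_ik = 0
            U_new = d ** (-2.0 / (m - 1.0))
            U_new /= U_new.sum(axis=0, keepdims=True)                   # eq. (4)
            if np.abs(U_new - U).max() < tol:
                return U_new, V
            U = U_new
        return U, V

In the proposed model, this kind of grouping is applied to the ESMD components before the DBN prediction step.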

4. Forecasting Model

4.1. DBN Network Structure. The DBN [17, 18] is organized as a stack of restricted Boltzmann machine (RBM) models. The visible layer of the RBM model is similar to an input layer, and the hidden layer is similar to an output layer. Layer-by-layer learning across a number of RBM models is used to complete the final operation. The specific structure of the RBM model is shown in Figure 1. The units of the visible layer and the units of the hidden layer can be interconnected with each other, while the units inside a layer are not connected. The units of the hidden layer can thus capture a close correlation between the units of the visible layer.

The core of the DBN is the restricted Boltzmann machine unit. The RBM is a typical artificial neural network, and it is a special log-linear Markov random field [19]. The RBM model has three parameters: the offset vector $V = (V_1, V_2, \ldots, V_m)$ represents the offset of each node of the visible layer.


Figure 1: Network structure diagram of the RBM (visible units 1, ..., m with offsets $a_1, \ldots, a_m$; hidden units $h_1, \ldots, h_n$ with offsets $b_1, \ldots, b_n$; weight matrix $w_{n \times m}$).

Figure 2: Deep belief network architecture (input data at the bottom; stacked RBMs with layers $V_0, H_0, V_1, H_1, V_2$ and weights $w_0, w_1, w_2$; a BP layer at the top producing the data output; information propagates upward, with counterpropagation and fine-tuning from the top down).

The offset vector $H = (H_1, H_2, \ldots, H_n)$ represents the offset of each node of the hidden layer, and w represents the weight matrix between the nodes of the two layers. These three parameters directly determine how the model encodes m-dimensional data into n-dimensional data; thus the conversion between features is realized.

The DBN is composed of a number of RBM models stacked from the bottom to the top, with a layer of BP neural network at the top, as shown in Figure 2. The bottom is the sample input waiting for training; $V_0$ and $H_0$ are the nodes of the RBM visible layer and hidden layer in the first layer, respectively, and $w_0$ represents the weight between the visible and hidden layers [20].
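As a minimal illustration of the three RBM parameters, the sketch below shows how an RBM with weight matrix w (of size n × m), visible offsets a, and hidden offsets b maps m-dimensional visible data to n hidden features and reconstructs it back. The sigmoid units are an assumption made for illustration, not a detail stated in the paper.

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def rbm_encode(v, w, b):
        # p(h = 1 | v) = sigmoid(w v + b): m-dimensional visible data -> n hidden features.
        return sigmoid(v @ w.T + b)

    def rbm_decode(h, w, a):
        # p(v = 1 | h) = sigmoid(w^T h + a): reconstruction of the visible layer.
        return sigmoid(h @ w + a)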

4.2. DBN Training Process

(1) The input sample is entered from the bottom level.

(2) The first RBM model is trained, and its output is then passed to the second RBM model for training; training continues in this way until the training of the top RBM model is also complete.

(3) After the training is completed, the training data can supervise the operation, adopting the maximum likelihood estimation method to fine-tune the network model.

(4) Finally, the BP model is used to fine-tune the model parameters of the top layer so as to minimize the value of the loss function. (A sketch of the layer-wise training in steps (1) and (2) is given below.)
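The layer-wise pretraining of steps (1) and (2) is commonly realised with one-step contrastive divergence (CD-1). The following Python sketch illustrates that approach under this assumption; it is generic illustrative code, not the authors' implementation.

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def train_rbm_cd1(data, n_hidden, lr=0.1, epochs=100, momentum=0.0, seed=0):
        # Train a single RBM with one-step contrastive divergence (CD-1).
        rng = np.random.default_rng(seed)
        n_visible = data.shape[1]
        w = 0.01 * rng.standard_normal((n_hidden, n_visible))
        a = np.zeros(n_visible)                  # visible offsets
        b = np.zeros(n_hidden)                   # hidden offsets
        dw = np.zeros_like(w)
        for _ in range(epochs):
            h0 = sigmoid(data @ w.T + b)         # positive phase
            v1 = sigmoid(h0 @ w + a)             # reconstruction
            h1 = sigmoid(v1 @ w.T + b)           # negative phase
            dw = momentum * dw + lr * (h0.T @ data - h1.T @ v1) / len(data)
            w += dw
            a += lr * (data - v1).mean(axis=0)
            b += lr * (h0 - h1).mean(axis=0)
        return w, a, b

    def pretrain_dbn(data, layer_sizes, lr=0.1, epochs=100):
        # Stack RBMs bottom-up: the hidden activations of one RBM become
        # the input data of the next (steps (1) and (2) above).
        rbms, x = [], data
        for n_hidden in layer_sizes:
            w, a, b = train_rbm_cd1(x, n_hidden, lr=lr, epochs=epochs)
            rbms.append((w, a, b))
            x = sigmoid(x @ w.T + b)
        return rbms

For example, pretrain_dbn(x_train, [20, 10]) would build the two-hidden-layer structure with 20 and 10 neurons described in Section 5.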

4.3. The Training Method of Deep Learning

(1) Unsupervised Learning from Bottom Up (Pretraining). Unlabeled data are used to train the parameters layer by layer. This is an unsupervised training method, which is the biggest difference from the traditional neural network, and it can also be regarded as a feature-learning process. The first layer is trained with unlabeled data and the first-layer parameters are obtained; the output of the first layer is then used as the input of the second layer so as to train the second layer, and finally the parameters of each layer are obtained.

(2) Top-Down Supervised Learning (Tuning). After the first step is completed, the network adopts discriminative training using labeled data, and the error is transmitted from top to bottom. The first step plays the role that random initialization plays in a traditional neural network; the difference is that in deep learning the initial values are obtained through learning from unlabeled data rather than by random initialization. The initial values are therefore closer to the global optimum, so the effect of deep learning comes mainly from the pretraining of the first step.
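To make the top-down tuning concrete, the sketch below performs one supervised BP update on a network whose hidden layers are initialised from the pretraining above. The squared-error loss and the single linear output unit are assumptions made for illustration; the paper does not specify these details.

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def finetune_step(x, y, Ws, bs, lr=0.1):
        # Ws, bs: pretrained hidden-layer weights/offsets plus a final linear
        # output layer appended at the end (lists of numpy arrays, updated in place).
        acts = [x]
        for w, b in zip(Ws[:-1], bs[:-1]):           # forward pass through hidden layers
            acts.append(sigmoid(acts[-1] @ w.T + b))
        out = acts[-1] @ Ws[-1].T + bs[-1]           # linear output layer
        delta = (out - y) / len(x)                   # gradient of 0.5 * mean squared error
        grads_w = [delta.T @ acts[-1]]
        grads_b = [delta.sum(axis=0)]
        delta = (delta @ Ws[-1]) * acts[-1] * (1.0 - acts[-1])
        for i in range(len(Ws) - 2, -1, -1):         # error transmitted from top to bottom
            grads_w.insert(0, delta.T @ acts[i])
            grads_b.insert(0, delta.sum(axis=0))
            if i > 0:
                delta = (delta @ Ws[i]) * acts[i] * (1.0 - acts[i])
        for w, b, gw, gb in zip(Ws, bs, grads_w, grads_b):
            w -= lr * gw
            b -= lr * gb
        return 0.5 * np.mean((out - y) ** 2)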

4.4. ESMD and DBN Prediction Model Based on Clustering. An ESMD and DBN prediction model based on clustering is proposed, whose structure is described as follows.

(1) The original sequence is decomposed by ESMD, and a finite number of IMFs and the residual are obtained.

(2) Fuzzy c-means clustering analysis is performed on each IMF component and the residual, and the frequency fluctuation rule is thus obtained.

(3) A DBN model is established for each IMF component and the residual, respectively, and the predicted value of each component is obtained.

(4) Reconstruct the IMF predicted values to obtain the final prediction results (see the pipeline sketch below).
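The four steps above can be summarised by the following minimal Python sketch. The helpers esmd_decompose, cluster_component, and fit_dbn are hypothetical stand-ins for the ESMD decomposition, the fuzzy c-means analysis, and a DBN regressor; their names and signatures are assumptions for illustration, not the code used in the experiments.

    import numpy as np

    def esmd_dbn_forecast(series, horizon, esmd_decompose, cluster_component, fit_dbn):
        imfs, residual = esmd_decompose(series)      # step (1): ESMD decomposition
        components = imfs + [residual]
        prediction = np.zeros(horizon)
        for comp in components:
            grouped = cluster_component(comp)        # step (2): fuzzy c-means analysis of the component
            model = fit_dbn(grouped)                 # step (3): one DBN per component
            prediction += model.predict(horizon)
        return prediction                            # step (4): reconstruct by summation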

5. Data Simulation and Analysis

The monthly mean total sunspot number from 1963 to 2012 was used as the original data; there are a total of 600 data points, shown in Figure 3. The original data is decomposed by EMD and by ESMD, with the results shown in Figures 4 and 5, respectively.

In Figure 5, modal components IMF1~IMF6 are shown from top to bottom, and the instantaneous frequency of modal components IMF3~IMF6 is basically stable; these modal components can achieve relatively good prediction results after they are decomposed and predicted.


Figure 3: Monthly mean total sunspot number in 1963–2012 (x-axis: time in months, 0–600; y-axis: monthly mean total sunspot number, 0–250).

Figure 4: The results of EMD decomposition (IMF1–IMF6 and the residual R, plotted against time in months).

But IMF1~IMF2 are still quite complex compared with the other components; their instantaneous frequency is very large and their nonstationarity is strong. So fuzzy c-means clustering analysis is performed, and the results are shown in Figure 6.

The DBN structure contains two hidden layers; the number of neurons is 2 and 12, and the learning rate is 1. The DBN network includes two hidden layers; the number of neurons is 20 and 10, the learning rate is 0.1, the number of training cycles is set to 100, and the momentum is set to 0. After the training is completed, each layer of the RBM model obtains its initialization parameters, which constitute the simplest model of the DBN.

Figure 5: The results of ESMD decomposition (IMF1–IMF6 and the residual R, plotted against time in months).

Figure 6: Modal components after fuzzy clustering (IMF1–IMF6, plotted against time in months).

The prediction experiment steps are shown in Figure 7.

The predicted results of the ESMD-DBN prediction model based on clustering are shown in Figure 8. The same sunspot time series is also predicted by DBN directly, named the DBN prediction. In addition, the same sunspot time series is decomposed by EMD, EEMD, CEEMD, and ESMD, respectively, and a finite number of IMF components and the residue are obtained.


Figure 7: Steps of sunspot prediction based on DBN (experimental data → training set and testing set → set up parameters → train the DBN → training and predictions for the sunspot).

Figure 8: The predicted results based on the clustering ESMD-DBN model (actual sunspot series and the proposed model, over 100 months).

A DBN model is established for each IMF component and the residue, respectively, and the predicted value of each component is obtained. The resulting models are named the EMD-DBN, EEMD-DBN, CEEMD-DBN, and ESMD-DBN prediction models, respectively.

The purple line in Figure 8 represents the predicted number of sunspots, and the blue line represents the actual number of sunspots. It can be seen that the clustering ESMD-DBN model proposed in this paper fits the original data well and can predict the number of sunspots well. Figure 9 shows the comparison results of the DBN, EMD-DBN, EEMD-DBN, CEEMD-DBN, and ESMD-DBN prediction models. In order to distinguish the predicted results, local predicted results are shown in Figure 10.

In order to verify the prediction results, the root mean square error (RMSE) and the mean absolute percentage error (MAPE) are used to measure the prediction performance.

Figure 9: Predicted results of sunspot numbers for each model (actual sunspot series, DBN, EMD-DBN, EEMD-DBN, CEEMD-DBN, ESMD-DBN, and the proposed model, over 100 months).

Figure 10: Local predicted results of sunspot numbers for each model (months 90–100).

The formulas are as follows:

$$\mathrm{RMSE} = \sqrt{\frac{1}{n} \sum_{i=1}^{n} \left[ \hat{x}(i) - x(i) \right]^2},$$

$$\mathrm{MAPE} = \frac{1}{n} \sum_{i=1}^{n} \left| \frac{\hat{x}(i) - x(i)}{x(i)} \right| \times 100\%, \quad (6)$$

where n is the number of sample data points, $\hat{x}(i)$ is the ith value of the predicted data, and $x(i)$ is the ith value of the actual data. The performance comparison of the six models is shown in Table 1.
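Equation (6) translates directly into code; the two short Python functions below are our own straightforward rendering of the formulas, given for illustration.

    import numpy as np

    def rmse(predicted, actual):
        # Root mean square error of eq. (6).
        predicted, actual = np.asarray(predicted, float), np.asarray(actual, float)
        return np.sqrt(np.mean((predicted - actual) ** 2))

    def mape(predicted, actual):
        # Mean absolute percentage error of eq. (6), in percent; actual values must be nonzero.
        predicted, actual = np.asarray(predicted, float), np.asarray(actual, float)
        return np.mean(np.abs((predicted - actual) / actual)) * 100.0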


Table 1: Performance comparison of the six models.

Models              RMSE    MAPE
DBN                 8.4398  26.52
EMD-DBN             5.8374  12.36
EEMD-DBN            3.0837   7.58
CEEMD-DBN           1.3405   3.67
ESMD-DBN            1.0543   2.45
The proposed model  0.8760   1.36

As shown in Table 1, the RMSE and MAPE of the proposed model are smaller than those of the other five models. Therefore, the proposed model can better predict the sunspot number and the trend of the sunspot time series, and it is an effective prediction model.

6. Conclusions

In this paper, a deep learning prediction model based on extreme-point symmetric mode decomposition and clustering analysis is proposed to predict the sunspot monthly mean time series. Compared with the other models, such as DBN, EMD-DBN, EEMD-DBN, CEEMD-DBN, and ESMD-DBN, the RMSE and MAPE of the proposed model are the smallest. The experimental results show that the proposed model can improve the prediction precision and reduce the error compared with the other models in predicting the same sunspot time series. It can also be applied to other fields after some modification and has high application value.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (no. 51709228).

References

[1] G. Li and S. Wang, "Sunspots time-series prediction based on complementary ensemble empirical mode decomposition and wavelet neural network," Mathematical Problems in Engineering, vol. 2017, Article ID 3513980, 7 pages, 2017.

[2] E. M. Roshchina and A. P. Sarychev, "Approximation of periodicity in sunspot formation and prediction of the 25th cycle," Geomagnetism and Aeronomy, vol. 55, no. 7, pp. 892–895, 2015.

[3] S. D. Taabu, F. M. D'ujanga, and T. Ssenyonga, "Prediction of ionospheric scintillation using neural network over East African region during ascending phase of sunspot cycle 24," Advances in Space Research, vol. 57, no. 7, pp. 1570–1584, 2016.

[4] N. E. Huang, Z. Shen, S. R. Long et al., "The empirical mode decomposition and the Hilbert spectrum for nonlinear and non-stationary time series analysis," in Proceedings of the Royal Society of London A: Mathematical, Physical and Engineering Sciences, vol. 454, pp. 903–995, 1998.

[5] W.-A. Yang, W. Zhou, W. Liao, and Y. Guo, "Identification and quantification of concurrent control chart patterns using extreme-point symmetric mode decomposition and extreme learning machines," Neurocomputing, vol. 147, no. 1, pp. 260–270, 2015.

[6] J. R. Lei, Z. H. Liu, L. Bai, Z. S. Chen, J. H. Xu, and L. L. Wang, "The regional features of precipitation variation trends over Sichuan in China by the ESMD method," Mausam, vol. 67, no. 11, pp. 849–860, 2016.

[7] X. Tian, Y. Li, H. Zhou, X. Li, L. Chen, and X. Zhang, "Electrocardiogram signal denoising using extreme-point symmetric mode decomposition and nonlocal means," Sensors, vol. 16, no. 10, article 1584, 2016.

[8] M. Fatehi and H. H. Asadi, "Application of semi-supervised fuzzy c-means method in clustering multivariate geochemical data: a case study from the Dalli Cu-Au porphyry deposit in central Iran," Ore Geology Reviews, vol. 81, pp. 245–255, 2017.

[9] P. R. Meena and S. K. R. Shantha, "Spatial fuzzy c means and expectation maximization algorithms with bias correction for segmentation of MR brain images," Journal of Medical Systems, vol. 41, article 15, 2017.

[10] H. B. Huang, X. R. Huang, R. X. Li, T. C. Lim, and W. P. Ding, "Sound quality prediction of vehicle interior noise using deep belief networks," Applied Acoustics, vol. 113, pp. 149–161, 2016.

[11] Z. Zhao, L. Jiao, J. Zhao, J. Gu, and J. Zhao, "Discriminant deep belief network for high-resolution SAR image classification," Pattern Recognition, vol. 61, pp. 686–701, 2017.

[12] Y.-J. Hu and Z.-H. Ling, "DBN-based spectral feature representation for statistical parametric speech synthesis," IEEE Signal Processing Letters, vol. 23, no. 3, pp. 321–325, 2016.

[13] A.-R. Mohamed, G. E. Dahl, and G. Hinton, "Acoustic modeling using deep belief networks," IEEE Transactions on Audio, Speech, and Language Processing, vol. 20, no. 1, pp. 14–22, 2012.

[14] J. C. Dunn, "A fuzzy relative of the ISODATA process and its use in detecting compact well-separated clusters," Journal of Cybernetics, vol. 3, no. 3, pp. 32–57, 1973.

[15] J. C. Bezdek, Pattern Recognition with Fuzzy Objective Function Algorithms, Plenum Press, New York, NY, USA, 1981.

[16] S. Niazmardi, S. Homayouni, and A. Safari, "An improved FCM algorithm based on the SVDD for unsupervised hyperspectral data classification," IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 6, no. 2, pp. 831–839, 2013.

[17] F. Liu, L. Jiao, B. Hou, and S. Yang, "POL-SAR image classification based on Wishart DBN and local spatial information," IEEE Transactions on Geoscience and Remote Sensing, vol. 54, no. 6, pp. 3292–3308, 2016.

[18] A. Courville, G. Desjardins, J. Bergstra, and Y. Bengio, "The spike-and-slab RBM and extensions to discrete and sparse data distributions," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 36, no. 9, pp. 1874–1887, 2014.

[19] Y. Yuan, J. Lin, and Q. Wang, "Hyperspectral image classification via multitask joint sparse representation and stepwise MRF optimization," IEEE Transactions on Cybernetics, vol. 46, no. 10, 2015.

[20] K. Orphanou, A. Stassopoulou, and E. Keravnou, "DBN-extended: a dynamic Bayesian network model extended with temporal abstractions for coronary heart disease prognosis," IEEE Journal of Biomedical and Health Informatics, vol. 20, no. 3, pp. 944–952, 2016.



Page 3: A Deep Learning Prediction Model Based on Extreme-Point

Mathematical Problems in Engineering 3

ℎ1 ℎ2 ℎ3 ℎn

b1 b2 b3 bn

a1 a2 a3 a4 am

1 2 3 4 m

b

a

wntimesmwntimesm

middot middot middot

middot middot middot

Figure 1 Network structure diagram of RBM

Data output

BP

RMB

Counterpropagation

Fine tuning

Information

RMB

Input data

Fine tuning

w2

w1

w0

V2

H1

H0

V0

V1

Figure 2 Deep belief network architecture

layer 119867 = (1198671 1198672 119867119899) represents the offset of eachnode of the hidden layer and 119908 represents the weight matrixbetween the nodes of two layer These three parametersdirectly determine themodel to encode the119898dimension datainto 119899 dimension data thus the conversion between featuresis realized

DBN is composed of a large number of RBM modelsfrom the bottom to the top and the top of a layer of BPneural network which is shown in Figure 2The bottom is thesample input which is waiting for training 1198810 and1198670 are thenodes of RBM visual layer and hidden layer in the first layerrespectively 1199080 represents the weight between the visual andhidden layers [20]

42 DBN Training Process

(1) The input sample is entered from the bottom level

(2) The first RBM model was trained and then passedto the second RBM model for training followed by

continuous training until the training of the top of theRBMmodel is also complete

(3) After the training is completed the training data cansupervise the operation and adopt themaximum like-lihood estimation method to fine-tune the networkmodel

(4) Finally the BP model is used to fine-tune the modelparameters of the top layer so as tominimize the valueof the loss function

43 The Training Method of Deep Learning

(1) Unsupervised Learning from Bottom Up (Pretraining)Using unlabeled data to train each parameter hierarchicallythis is an unsupervised training method which is the biggestdifference from the traditional neural network and also canbe regarded as the process of feature learning The firstlayer is first trained with unlabeled data and the first layerparameters are obtained The output of the first layer is usedas the input of the second layer so as to train the second layersand finally obtain the parameters of each layer

(2) Top-Down Supervised Study (Tuning) After the first stepis completed the network adopted discriminative trainingusing labeled data and the error is transmitted from top tobottom The first step is similar to the random initializationof the traditional neural network The difference is that thefirst step of deep learning is obtained through the studyof unlabeled data rather than random initialization So theinitial value is closer to the overall optimal so the effect ofdeep learning is mainly the pretraining of the first step

44 ESMD and DBN Prediction Model Based on ClusteringESMD and DBN prediction model based on clustering isproposed whose structure is described as follows

(1) The original sequence is decomposed by ESMD thenthe finite number of IMFs and residuals is obtained

(2) The fuzzy 119888-means clustering analysis is performedfor each IMF component and residual then thefrequency fluctuation rule is got

(3) The DBN model is established for each IMF com-ponent and residual respectively then the predictedvalue of each component is obtained

(4) Reconstruct IMF predicting value to obtain the finalpredicting results

5 Data Simulation and Analysis

The monthly mean total sunspot number from 1963 to 2012was used as the original data There are a total of 600 datapoints shown in Figure 3The original data is decomposed byEMD and ESMD respectively shown in Figures 4 and 5

In Figure 5 modal components IMF1simIMF6 are shownfrom top to bottom and the instantaneous frequency ofmodal components IMF3simIMF6 is basically stable Themodal components can achieve relatively high prediction

4 Mathematical Problems in Engineering

0 100 200 300 400 500 600Time (months)

0

50

100

150

200

250

Mon

thly

mea

n to

tal s

unsp

ot n

umbe

r

Figure 3 Monthly mean total sunspot number in 1963ndash2012

0 100 200 300 400 500 600minus50

050

IMF1

0 100 200 300 400 500 600minus50

050

IMF2

0 100 200 300 400 500 600minus20

020

IMF3

0 100 200 300 400 500 600minus50

050

IMF4

0 100 200 300 400 500 600minus100

0100

IMF5

0 100 200 300 400 500 600minus50

050

IMF6

0 100 200 300 400 500 6006080

100

Time (months)

R

Figure 4 The results of EMD decomposition

results after they are decomposed and predicted But IMF1simIMF2 are still quite complex compared to other componentsthe instantaneous frequency is very large and nonstationaryis strong So the fuzzy 119888-means clustering analysis is per-formed and the results are shown in Figure 6

The DBN structure contains two hidden layers Thenumber of neurons is 2 and 12 and the learning rate is 1The DBN network includes two hidden layers The numberof neurons is 20 and 10 the learning rate is 01 the cyclenumber is set to 100 and the momentum is set to 0 Afterthe training is completed each layer of the RBM model canobtain initialization parameters which constitute the simplest

Figure 5: The results of ESMD decomposition (components IMF1–IMF6 and residual R versus time in months).

Figure 6: Modal components after fuzzy clustering (IMF1–IMF6 versus time in months).

model of DBN. The prediction experimental steps are shown in Figure 7.
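Taking the second quoted configuration (two hidden layers of 20 and 10 neurons, learning rate 0.1, 100 training cycles, momentum 0), the supervised fine-tuning stage can be read as plain gradient descent on a regression network whose hidden layers are initialized from the pretrained RBMs. The sketch below is an illustration under stated assumptions, not the authors' code: the linear output unit, the mean-squared-error loss, and the weight handover from the RBM sketch of Section 4.3 are assumptions.

```python
# Sketch of top-down fine-tuning (Section 4.3, step (2)) for a DBN
# regressor with hidden layers of 20 and 10 units, learning rate 0.1,
# 100 epochs, and zero momentum (plain gradient descent), matching the
# settings quoted in the text.  Initialization from pretrained RBMs and
# the linear output unit are assumptions for illustration.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def finetune(X, y, rbms, lr=0.1, epochs=100):
    """X: (n, d) input windows, y: (n, 1) targets, rbms: pretrained RBMs
    whose W and b_h attributes initialize the hidden layers."""
    W1, b1 = rbms[0].W.copy(), rbms[0].b_h.copy()      # 20 hidden units
    W2, b2 = rbms[1].W.copy(), rbms[1].b_h.copy()      # 10 hidden units
    rng = np.random.default_rng(0)
    W3, b3 = 0.01 * rng.standard_normal((W2.shape[1], 1)), np.zeros(1)
    n = len(X)
    for _ in range(epochs):
        # Forward pass.
        h1 = sigmoid(X @ W1 + b1)
        h2 = sigmoid(h1 @ W2 + b2)
        out = h2 @ W3 + b3                              # linear output unit
        # Backpropagation of the mean-squared error.
        d3 = (out - y) / n
        d2 = (d3 @ W3.T) * h2 * (1.0 - h2)
        d1 = (d2 @ W2.T) * h1 * (1.0 - h1)
        W3 -= lr * h2.T @ d3;  b3 -= lr * d3.sum(axis=0)
        W2 -= lr * h1.T @ d2;  b2 -= lr * d2.sum(axis=0)
        W1 -= lr * X.T @ d1;   b1 -= lr * d1.sum(axis=0)
    return W1, b1, W2, b2, W3, b3
```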

The predicted results of the ESMD-DBN prediction model based on clustering are shown in Figure 8. The same sunspot time series is predicted by DBN directly, named the DBN prediction. The same sunspot time series is also decomposed, respectively, by EMD, EEMD, CEEMD, and ESMD, and the finite number of IMF components and the residue are obtained.


Figure 7: Steps of sunspot prediction based on DBN (experimental data → training set and testing set → setup parameters → training DBN → training and projections for the sunspot).
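The "training set and testing set" step in Figure 7 can be illustrated as a sliding-window construction over each component series, holding out the last part for testing. The window length of 12 below is an assumption for illustration, and the 100-point test horizon simply matches the 100 months shown in Figures 8–10.

```python
# Illustrative sliding-window split for one component series.  The
# window length of 12 is an assumption; the 100-month test horizon
# follows the horizon shown in Figures 8-10.
import numpy as np

def make_windows(series, window=12):
    X = np.array([series[i:i + window] for i in range(len(series) - window)])
    y = np.array(series[window:]).reshape(-1, 1)
    return X, y

def train_test_split_last(X, y, n_test=100):
    return (X[:-n_test], y[:-n_test]), (X[-n_test:], y[-n_test:])
```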

Figure 8: The predicted results based on clustering ESMD-DBN (monthly mean total sunspot number versus time in months; legend: Sunspot, The proposed model).

The DBN model is established for each IMF component and the residue, respectively; then the predicted value of each component is obtained. These models are named, respectively, the EMD-DBN, EEMD-DBN, CEEMD-DBN, and ESMD-DBN prediction models.

The purple line in Figure 8 represents the predicted number of sunspots, and the blue line represents the actual number of sunspots. It can be seen that the clustering ESMD-DBN model proposed in this paper fits the original data well and can predict the number of sunspots accurately. Figure 9 shows the comparison results of the DBN, EMD-DBN, EEMD-DBN, CEEMD-DBN, and ESMD-DBN prediction models. In order to distinguish the predicted results more clearly, local predicted results are shown in Figure 10.

In order to verify the prediction results, the root mean square error (RMSE) and the mean absolute percentage error

Figure 9: Predicted results of sunspot numbers for each model (legend: Sunspot, DBN, EMD-DBN, EEMD-DBN, CEEMD-DBN, ESMD-DBN, The proposed model).

Figure 10: Local predicted results of sunspot numbers for each model (months 90–100).

(MAPE) are used to measure the prediction performance. The formulas are as follows:

\[
\mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left[\hat{x}(i)-x(i)\right]^{2}},
\qquad
\mathrm{MAPE} = \frac{1}{n}\sum_{i=1}^{n}\left|\frac{\hat{x}(i)-x(i)}{x(i)}\right|\times 100\%,
\tag{6}
\]

where \(n\) is the number of sample data points, \(\hat{x}(i)\) is the \(i\)th value of the predicted data, and \(x(i)\) is the \(i\)th value of the actual data. Performance comparison of the six models is shown in Table 1.
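A direct translation of (6), assuming the predicted and actual series are NumPy arrays of equal length and that no actual value is zero (otherwise MAPE is undefined):

```python
# RMSE and MAPE as defined in (6); x_hat holds predicted values, x actual values.
import numpy as np

def rmse(x_hat, x):
    return np.sqrt(np.mean((x_hat - x) ** 2))

def mape(x_hat, x):
    return np.mean(np.abs((x_hat - x) / x)) * 100.0
```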


Table 1: Performance comparison of the six models.

Models                RMSE      MAPE (%)
DBN                   8.4398    26.52
EMD-DBN               5.8374    12.36
EEMD-DBN              3.0837     7.58
CEEMD-DBN             1.3405     3.67
ESMD-DBN              1.0543     2.45
The proposed model    0.8760     1.36

As shown in Table 1, the RMSE and MAPE of the proposed model are smaller than those of the other five models. Therefore, the proposed model can better predict the sunspot number and the trend of the sunspot time series, and it is an effective prediction model.

6. Conclusions

In this paper, a deep learning prediction model based on extreme-point symmetric mode decomposition and clustering analysis is proposed to predict the sunspot monthly mean time series. Compared with the other models, such as DBN, EMD-DBN, EEMD-DBN, CEEMD-DBN, and ESMD-DBN, the RMSE and MAPE of the proposed model are the smallest. The experimental results show that the proposed model can improve the prediction precision and reduce the error compared with the other models in predicting the same sunspot time series. It can also be applied to other fields after some modification and has high application value.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (no. 51709228).

References

[1] G. Li and S. Wang, "Sunspots time-series prediction based on complementary ensemble empirical mode decomposition and wavelet neural network," Mathematical Problems in Engineering, vol. 2017, Article ID 3513980, 7 pages, 2017.
[2] E. M. Roshchina and A. P. Sarychev, "Approximation of periodicity in sunspot formation and prediction of the 25th cycle," Geomagnetism and Aeronomy, vol. 55, no. 7, pp. 892–895, 2015.
[3] S. D. Taabu, F. M. D'ujanga, and T. Ssenyonga, "Prediction of ionospheric scintillation using neural network over East African region during ascending phase of sunspot cycle 24," Advances in Space Research, vol. 57, no. 7, pp. 1570–1584, 2016.
[4] N. E. Huang, Z. Shen, S. R. Long et al., "The empirical mode decomposition and the Hilbert spectrum for nonlinear and non-stationary time series analysis," in Proceedings of the Royal Society of London A: Mathematical, Physical and Engineering Sciences, vol. 454, pp. 903–995, 1998.
[5] W.-A. Yang, W. Zhou, W. Liao, and Y. Guo, "Identification and quantification of concurrent control chart patterns using extreme-point symmetric mode decomposition and extreme learning machines," Neurocomputing, vol. 147, no. 1, pp. 260–270, 2015.
[6] J. R. Lei, Z. H. Liu, L. Bai, Z. S. Chen, J. H. Xu, and L. L. Wang, "The regional features of precipitation variation trends over Sichuan in China by the ESMD method," Mausam, vol. 67, no. 11, pp. 849–860, 2016.
[7] X. Tian, Y. Li, H. Zhou, X. Li, L. Chen, and X. Zhang, "Electrocardiogram signal denoising using extreme-point symmetric mode decomposition and nonlocal means," Sensors, vol. 16, no. 10, article no. 1584, 2016.
[8] M. Fatehi and H. H. Asadi, "Application of semi-supervised fuzzy c-means method in clustering multivariate geochemical data: a case study from the Dalli Cu-Au porphyry deposit in central Iran," Ore Geology Reviews, vol. 81, pp. 245–255, 2017.
[9] P. R. Meena and S. K. R. Shantha, "Spatial fuzzy c means and expectation maximization algorithms with bias correction for segmentation of MR brain images," Journal of Medical Systems, vol. 41, article 15, 2017.
[10] H. B. Huang, X. R. Huang, R. X. Li, T. C. Lim, and W. P. Ding, "Sound quality prediction of vehicle interior noise using deep belief networks," Applied Acoustics, vol. 113, pp. 149–161, 2016.
[11] Z. Zhao, L. Jiao, J. Zhao, J. Gu, and J. Zhao, "Discriminant deep belief network for high-resolution SAR image classification," Pattern Recognition, vol. 61, pp. 686–701, 2017.
[12] Y.-J. Hu and Z.-H. Ling, "DBN-based spectral feature representation for statistical parametric speech synthesis," IEEE Signal Processing Letters, vol. 23, no. 3, pp. 321–325, 2016.
[13] A.-R. Mohamed, G. E. Dahl, and G. Hinton, "Acoustic modeling using deep belief networks," IEEE Transactions on Audio, Speech, and Language Processing, vol. 20, no. 1, pp. 14–22, 2012.
[14] J. C. Dunn, "A fuzzy relative of the ISODATA process and its use in detecting compact well-separated clusters," Journal of Cybernetics, vol. 3, no. 3, pp. 32–57, 1973.
[15] J. C. Bezdek, Pattern Recognition with Fuzzy Objective Function Algorithms, Plenum Press, New York, NY, USA, 1981.
[16] S. Niazmardi, S. Homayouni, and A. Safari, "An improved FCM algorithm based on the SVDD for unsupervised hyperspectral data classification," IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 6, no. 2, pp. 831–839, 2013.
[17] F. Liu, L. Jiao, B. Hou, and S. Yang, "POL-SAR image classification based on Wishart DBN and local spatial information," IEEE Transactions on Geoscience and Remote Sensing, vol. 54, no. 6, pp. 3292–3308, 2016.
[18] A. Courville, G. Desjardins, J. Bergstra, and Y. Bengio, "The spike-and-slab RBM and extensions to discrete and sparse data distributions," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 36, no. 9, pp. 1874–1887, 2014.
[19] Y. Yuan, J. Lin, and Q. Wang, "Hyperspectral image classification via multitask joint sparse representation and stepwise MRF optimization," IEEE Transactions on Cybernetics, vol. 46, no. 10, 2015.
[20] K. Orphanou, A. Stassopoulou, and E. Keravnou, "DBN-extended: a dynamic Bayesian network model extended with temporal abstractions for coronary heart disease prognosis," IEEE Journal of Biomedical and Health Informatics, vol. 20, no. 3, pp. 944–952, 2016.

