
2010 Sixth International Conference on Natural Computation (ICNC 2010)

An Integrated Faults Classification Approach Based on LW-MWPCA and PNN

Qing Yang1, 2,

1) School of Information Science Shenyang Ligong University

Shenyang, China

Feng Tian1, Dongsheng Wu1, 2,

2) College of Optical and Electronical Changchun University of Science and Technology

Changchun, China

Dazhi Wang2, 3

3) College of Information Science Northeast University

Shenyang, China

Abstract—This paper presents an algorithm based on lifting wavelets, moving window principal component analysis and a probabilistic neural network (LW-MWPCA and PNN) for classifying industrial system faults. The proposed technique consists of a pre-processing unit based on the lifting wavelet transform in combination with moving window principal component analysis (MWPCA), followed by a PNN classifier. First, the data are pre-processed to remove noise with lifting scheme wavelets, which are faster than first-generation wavelets; MWPCA is then used to reduce the dimensionality, and finally the PNN is used to diagnose faults. To validate the performance and effectiveness of the proposed scheme, the LW-MWPCA and PNN method is applied to diagnose faults in the Tennessee Eastman (TE) process. Simulation studies show that the proposed algorithm not only provides an acceptable degree of accuracy in fault classification under different fault conditions, but is also a reliable, fast and computationally efficient tool.

Keywords—Fault detection and diagnosis; fault classification; lifting wavelets; LW-MWPCA and PNN; TE process.

I. INTRODUCTION

A quick and correct fault diagnosis system helps to avoid product quality problems and facilitates preventive maintenance. One widely adopted fault diagnosis approach is based on artificial neural networks (ANN) [1-6]. Several types of artificial neural networks have been used for classification, but the probabilistic neural network (PNN) is usually preferred because of its advantages. The PNN is a feed-forward neural network with supervised learning that uses the Bayes decision rule and Parzen windows. The PNN offers the following advantages: (1) rapid training: the PNN trains more than five times faster than back-propagation; (2) guaranteed convergence to a Bayes classifier if enough training examples are provided; (3) fast incremental training. In spite of the above advantages, the quality of a classifier based on the conventional PNN depends on its input data. For better classifier performance, pre-processing of the input data is a key step.

Recently, the wavelet transform used as a data pre-processor has attracted much attention. References [7-11] proposed neural-network-based fault diagnosis methods that use the wavelet transform as a pre-processor to reduce noise in the input data, or principal component analysis (PCA) to reduce the number of input features to the neural network. However, the computational speed of the wavelet transform and PCA is a problem.

For the above reasons, a fast integrated fault diagnosis approach based on lifting scheme wavelets, moving window PCA and a PNN is presented in this paper.

II. LIFTING WAVELETS, MWPCA AND PNN

A. Lifting Scheme Wavelets

The lifting scheme, shown in Fig. 1, was first proposed by Wim Sweldens [12-13]. The original motivation for developing lifting was to build second generation wavelets, i.e., wavelets adapted to situations that do not allow translation and dilation, such as non-Euclidean spaces.

1) The original signal $X[n]$ is split into two non-intersecting subsets $c[n]$ and $d[n]$. The greater the correlation between them, the better the split. Usually the signal sequence is split into even and odd subsequences: the even-indexed samples $X_e[n]$ and the odd-indexed samples $X_o[n]$.

2) Using the similarity of the data, we can predict $d[n]$ from $c[n]$ by using a prediction operator $P$ which is independent of the dataset. Of course the predictor need not be exact, so we need to record the difference, or detail, $d$:
$$d[n] = X_o[n] - P(X_e[n]) \qquad (1)$$
Given the detail $d$ and the even samples, we can immediately recover the odd samples as
$$X_o[n] = P(X_e[n]) + d[n] \qquad (2)$$

3) The running average of $X_e[n]$ is, in general, not the same as that of the original samples $X[n]$. To correct this, a second lifting step replaces the evens with smoothed values $c[n]$ by applying an update operator $U$ to the details:
$$c[n] = X_e[n] + U(d[n]) \qquad (3)$$
From this we see that if $P$ is a good predictor, then $d$ will be approximately sparse; in other words, we expect the first-order entropy to be smaller for $d$ than for $X_o$.
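To make the split-predict-update steps concrete, the following minimal sketch implements one lifting level in Python (NumPy), assuming the simplest choice of operators: each odd sample is predicted by its left even neighbour (a Haar-like $P$) and the evens are updated with half of the detail ($U(d) = d/2$). The function names lifting_forward and lifting_inverse are illustrative, not from the paper.

```python
import numpy as np

def lifting_forward(x):
    x = np.asarray(x, dtype=float)       # assumes an even-length signal
    xe, xo = x[0::2], x[1::2]            # 1) split into even and odd samples
    d = xo - xe                          # 2) detail, Eq. (1): d[n] = X_o[n] - P(X_e[n])
    c = xe + d / 2                       # 3) update, Eq. (3): c[n] = X_e[n] + U(d[n])
    return c, d

def lifting_inverse(c, d):
    xe = c - d / 2                       # undo the update step
    xo = d + xe                          # undo the predict step, Eq. (2)
    x = np.empty(2 * len(c))
    x[0::2], x[1::2] = xe, xo            # merge back into one signal
    return x

x = np.arange(8, dtype=float)
c, d = lifting_forward(x)
print(np.allclose(lifting_inverse(c, d), x))   # True: perfect reconstruction
```

Because both steps are simple in-place additions and subtractions, the inverse transform is obtained by running the same operations with reversed signs, which is what makes lifting faster than the classical filter-bank implementation.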

This work is supported by Chinese Liaoning Provincial Department of Education Science and Technology Project under Grant 2005354.



Figure 1. The forward and inverse wavelet transform using lifting

In the de-noising procedure, we first perform the lifting wavelet transform on the process data, and the detail signals at each level are processed with a thresholding method. As a result, the noise in the wavelet coefficients is reduced.
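A minimal sketch of this thresholding step is given below, assuming a soft threshold chosen from the usual median-absolute-deviation noise estimate and the universal threshold $\sigma\sqrt{2\ln N}$; the paper does not state which threshold rule is used, so this is only one plausible choice. Here d stands for the detail signal produced by the lifting transform at one decomposition level.

```python
import numpy as np

def soft_threshold(d):
    """Soft-threshold one level of detail coefficients."""
    d = np.asarray(d, dtype=float)
    sigma = np.median(np.abs(d)) / 0.6745          # robust noise-level estimate
    t = sigma * np.sqrt(2.0 * np.log(d.size))      # universal threshold
    return np.sign(d) * np.maximum(np.abs(d) - t, 0.0)

rng = np.random.default_rng(2)
d = rng.normal(scale=0.1, size=256)                # detail coefficients: mostly noise
d[10] = 2.0                                        # one genuine feature
d_clean = soft_threshold(d)
print(np.count_nonzero(d_clean))                   # most noise coefficients are zeroed
```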

B. Moving Window Principal Component Analysis

1) PCA Algorithm: The PCA technique determines combinations of variables that describe the major trends, or variation, in a data set. For a data matrix $X$ with $n$ rows (measurements or samples) and $m$ columns (variables), the covariance matrix of $X$ is defined as
$$COV(X) = \frac{X^T X}{n-1} \qquad (4)$$

where the columns of $X$ have been centered to zero mean and scaled to unit standard deviation. PCA decomposes the data matrix $X$ as the sum of outer products of score vectors $t_i$ and loading vectors $p_i$, plus a residual matrix $E$:
$$X = TP^T + E = t_1 p_1^T + t_2 p_2^T + \cdots + t_k p_k^T + E \qquad (5)$$
It is now possible to generate a new data set based on the number of retained principal components, $k$. In general, $k$ is much smaller than the number of variables in the original data.
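A minimal PCA sketch of Eqs. (4)-(5) in Python (NumPy): the scaled data give the correlation matrix, its leading eigenvectors are used as loadings, and scores and residuals follow. The function name pca and the eigendecomposition route are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def pca(X, k):
    X = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)   # zero mean, unit variance
    cov = X.T @ X / (X.shape[0] - 1)                   # Eq. (4)
    eigval, eigvec = np.linalg.eigh(cov)
    order = np.argsort(eigval)[::-1]                   # sort by decreasing variance
    P = eigvec[:, order[:k]]                           # loadings p_1 ... p_k
    T = X @ P                                          # scores   t_1 ... t_k
    E = X - T @ P.T                                    # residual, Eq. (5): X = T P^T + E
    return T, P, E

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 6))
T, P, E = pca(X, k=2)    # the 6-dimensional data are reduced to 2 scores per sample
```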

2) MWPCA Algorithm: The details of the two-step procedure [14] are shown in Fig. 2 for a window size $L$:


Figure 2. Two-step adaptation to construct new data window

The three matrices in Fig. 2 represent the data in the previous window (Matrix I), the result of removing the oldest sample $x_k^0$ (Matrix II), and the current window of selected data (Matrix III) produced by adding the new sample $x_{k+L}^0$ to Matrix II.

Mean of Matrix I, $\mu_k$:
$$\mu_k = \frac{1}{L}\sum_{i=k}^{k+L-1} x_i \qquad (6)$$

Because
$$\sum_{i=k+1}^{k+L-1} x_i = L\mu_k - x_k^0 \qquad (7)$$

the mean of Matrix II, $\tilde{\mu}$, is
$$\tilde{\mu} = \frac{1}{L-1}\sum_{i=k+1}^{k+L-1} x_i = \frac{1}{L-1}\left(L\mu_k - x_k^0\right) \qquad (8)$$

Difference between the means:
$$\Delta\tilde{\mu}_k = \mu_k - \tilde{\mu} \qquad (9)$$

Scale the discarded sample:
$$\bar{x}_k = \Sigma_k^{-1}\left(x_k^0 - \mu_k\right) \qquad (10)$$

Bridge over Matrix I and Matrix III:
$$R^{*} = R_k - \Sigma_k^{-1}\Delta\tilde{\mu}_k\Delta\tilde{\mu}_k^T\Sigma_k^{-1} - \frac{1}{L-1}\bar{x}_k\bar{x}_k^T \qquad (11)$$

Mean of Matrix III:
$$\mu_{k+1} = \frac{1}{L}\left[(L-1)\tilde{\mu} + x_{k+L}^0\right] \qquad (12)$$

Difference between the means of Matrix III and Matrix II:
$$\Delta\mu_{k+1} = \mu_{k+1} - \tilde{\mu} \qquad (13)$$

Standard deviation of Matrix III:
$$\sigma_{k+1}^2(i) = \sigma_k^2(i) + \Delta\mu_{k+1}^2(i) - \Delta\tilde{\mu}_k^2(i) + \frac{\left[x_{k+L}^0(i) - \mu_{k+1}(i)\right]^2 - \left[x_k^0(i) - \mu_k(i)\right]^2}{L-1} \qquad (14)$$

$$\Sigma_{k+1} = \mathrm{diag}\{\sigma_{k+1}(1)\ \cdots\ \sigma_{k+1}(m)\} \qquad (15)$$

Scale the new sample:
$$\bar{x}_{k+L} = \Sigma_{k+1}^{-1}\left(x_{k+L}^0 - \mu_{k+1}\right) \qquad (16)$$

Correlation matrix of Matrix III:
$$R_{k+1} = \Sigma_{k+1}^{-1}\Sigma_k R^{*}\Sigma_k\Sigma_{k+1}^{-1} + \Sigma_{k+1}^{-1}\Delta\mu_{k+1}\Delta\mu_{k+1}^T\Sigma_{k+1}^{-1} + \frac{1}{L-1}\bar{x}_{k+L}\bar{x}_{k+L}^T \qquad (17)$$
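The two-step adaptation of Eqs. (6)-(17) can be written compactly as in the sketch below, under the assumption that $\Sigma_k$ is the diagonal matrix of window standard deviations; the final comparison against a direct recomputation is only a sanity check of the recursion, not part of the paper. Function names (scaled_corr, mwpca_update) are illustrative.

```python
import numpy as np

def scaled_corr(X):
    """Correlation matrix of window X (rows = samples), with L-1 in the denominator."""
    L = X.shape[0]
    mu = X.mean(axis=0)
    sigma = X.std(axis=0, ddof=1)
    Xs = (X - mu) / sigma
    return Xs.T @ Xs / (L - 1), mu, sigma

def mwpca_update(R_k, mu_k, sigma_k, x_old, x_new, L):
    """One two-step window update (Eqs. (6)-(17)): drop x_old, add x_new."""
    # Step 1: remove the oldest sample (Matrix I -> Matrix II).
    mu_t = (L * mu_k - x_old) / (L - 1)                          # Eq. (8)
    dmu_t = mu_k - mu_t                                          # Eq. (9)
    xbar_old = (x_old - mu_k) / sigma_k                          # Eq. (10)
    R_star = (R_k
              - np.outer(dmu_t / sigma_k, dmu_t / sigma_k)
              - np.outer(xbar_old, xbar_old) / (L - 1))          # Eq. (11)
    # Step 2: add the newest sample (Matrix II -> Matrix III).
    mu_k1 = ((L - 1) * mu_t + x_new) / L                         # Eq. (12)
    dmu_k1 = mu_k1 - mu_t                                        # Eq. (13)
    var_k1 = (sigma_k**2 + dmu_k1**2 - dmu_t**2
              + ((x_new - mu_k1)**2 - (x_old - mu_k)**2) / (L - 1))   # Eq. (14)
    sigma_k1 = np.sqrt(var_k1)                                   # Eq. (15)
    xbar_new = (x_new - mu_k1) / sigma_k1                        # Eq. (16)
    scale = sigma_k / sigma_k1
    R_k1 = (scale[:, None] * R_star * scale[None, :]
            + np.outer(dmu_k1 / sigma_k1, dmu_k1 / sigma_k1)
            + np.outer(xbar_new, xbar_new) / (L - 1))            # Eq. (17)
    return R_k1, mu_k1, sigma_k1

# Sanity check: the recursion matches a direct recomputation on random data.
rng = np.random.default_rng(0)
data = rng.normal(size=(60, 5))
L = 20
R, mu, sigma = scaled_corr(data[:L])
for k in range(1, 10):
    R, mu, sigma = mwpca_update(R, mu, sigma, data[k - 1], data[k + L - 1], L)
R_direct, _, _ = scaled_corr(data[9:9 + L])
print(np.allclose(R, R_direct))   # True: recursive and direct results agree
```

The point of the recursion is that each window update costs only a few rank-one corrections instead of recomputing the full correlation matrix from the window data, which is what makes MWPCA fast enough for online monitoring.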

C. Probabilistic Neural Network

The probabilistic neural network used in this study, shown in Fig. 3, mainly includes a radial basis layer and a competitive layer. The radial basis layer contains the same number of neurons as there are training samples. Each neuron calculates the probability that an input feature vector is associated with a specific class.

The radial basis layer biases are all set as
$$b = \frac{\left[-\log(0.5)\right]^{1/2}}{\mathrm{spread}} \qquad (18)$$

where spread is the extended coefficient of the RBF. With an input vector $X$, a radial basis neuron compares it with its weight vector $W_{ji}$ and multiplies the distance by the bias $b$ to calculate the probability
$$O_j = \mathrm{radbas}(v_j) \qquad (19)$$
where radbas can be any of several candidate radial functions. In this paper, the radial function is selected as a Gaussian function:
$$O_j = e^{-v_j^2} \qquad (20)$$



Figure 3. The sketch of LWPNN

In Eqs. (19)-(20),
$$v_j = \left\|W_{ji} - X\right\| \cdot b \qquad (21)$$
where $\|\cdot\|$ denotes the Euclidean distance. Consequently, as the distance between $W_{ji}$ and $X$ decreases, the output $O_j$ increases, reaching its maximum of 1 when $W_{ji} = X$. The sensitivity of the radial basis neurons can be adjusted by varying the value of $b$ through the extended coefficient, spread.

The competitive layer then determines the maximum of these probabilities and assigns 1 to the associated class and 0 to the other classes.
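A sketch of the forward pass of Eqs. (18)-(21) follows, assuming (as in a standard PNN) that the outputs of radial basis neurons belonging to the same class are summed before the competitive layer picks the maximum; the function name pnn_classify, the toy data, and spread = 0.7 (the value used later in the case study) are illustrative assumptions.

```python
import numpy as np

def pnn_classify(x, W, labels, spread=0.7):
    """One forward pass through the PNN of Eqs. (18)-(21).
    W: training vectors (one radial basis neuron per row); labels: their classes."""
    b = np.sqrt(-np.log(0.5)) / spread                 # Eq. (18): radial basis bias
    v = np.linalg.norm(W - x, axis=1) * b              # Eq. (21): Euclidean distance times bias
    O = np.exp(-v**2)                                  # Eq. (20): Gaussian radial basis output
    # Competitive layer: sum the evidence per class and pick the maximum
    # (the per-class summation is an assumption; the paper names only a competitive layer).
    classes = np.unique(labels)
    scores = np.array([O[labels == c].sum() for c in classes])
    return classes[np.argmax(scores)]

# Toy usage: two training samples per class, classify a new vector.
W = np.array([[0.0, 0.0], [0.1, 0.1], [1.0, 1.0], [0.9, 1.1]])
labels = np.array([1, 1, 2, 2])
print(pnn_classify(np.array([0.05, 0.0]), W, labels))   # -> 1
```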

D. LW-MWPCA-PNN Classification Algorithm

The integrated algorithm based on LW-MWPCA and PNN is shown in Fig. 4.

Figure 4. The sketch of LW-MWPCA-PNN

III. CASE STUDY

An example of the application of the proposed LW-MWPCA and PNN strategy is presented and compared with the plain PNN on the TE process [15-16] (see Fig. 5).

The TE process is a benchmark problem in process engineering. Downs and Vogel presented this particular process at an AIChE meeting in 1990 as a plant-wide control problem. The simulator of the Tennessee Eastman process consists of five major unit operations: a reactor, a product condenser, a vapor-liquid separator, a recycle compressor, and a product stripper. Two products are produced by two simultaneous gas-liquid exothermic reactions, and a byproduct is generated by two additional exothermic reactions. The process has 12 manipulated variables, 22 continuous process measurements, and 19 compositions. The simulator can generate 21 different types of fault, listed in Table I and Table II. Once a fault enters the process, it affects almost all

Figure 5. Control system of the Tennessee Eastman process

TABLE I. PROCESS FAULTS FOR THE TENNESSEE EASTMAN PROCESS

Variable  Disturbance                                                  Type
1         A/C feed ratio, B composition constant                      Step
2         B composition, A/C ratio constant                           Step
3         D feed temperature                                          Step
4         Reactor cooling water inlet temperature                     Step
5         Condenser cooling water inlet temperature                   Step
6         A feed loss                                                 Step
7         C header pressure loss - reduced availability               Step
8         A, B, C feed composition                                    Random variation
9         D feed temperature                                          Random variation
10        C feed temperature                                          Random variation
11        Reactor cooling water inlet temperature                     Random variation
12        Condenser cooling water inlet temperature                   Random variation
13        Reaction kinetics                                           Slow drift
14        Reactor cooling water valve                                 Sticking
15        Condenser cooling water valve                               Sticking
16-20     Unknown                                                     Unknown
21        Valve for Stream 4 fixed at the steady-state position       Constant position

Figure 6. PNN Fault Classification of Fault 1-15


TABLE II. FAULT TYPES FOR THE TENNESSEE EASTMAN PROCESS

Classification  Type
1               Normal
2               Fault 1
3               Fault 2
4               Fault 3
5               Fault 4
6               Fault 5
7               Fault 6
8               Fault 7
9               Fault 8
10              Fault 9
11              Fault 10
12              Fault 11
13              Fault 12
14              Fault 13
15              Fault 14
16              Fault 15

Figure 7. LW-MWPCA-PNN Fault Classification of Fault 1-15

state variables in the process.

In this paper, a total of 11 manipulated variables, selected for monitoring purposes, are used as monitored variables. Fifteen known fault types are considered; they represent typical faults encountered in an industrial process. In the simulation, the extended coefficient of the RBF is set to 0.7, and the structure of the PNN is 11-480-1. Fig. 6 and Fig. 7 show part of the simulation results. When the PNN judges the input data to be in the normal condition, 1 is output; when the network judges the input data to contain a type-$i$ fault, $i$ is output. From Figs. 6-7, we find that the diagnosis accuracy of the approach that combines LW-MWPCA and PNN is better than that of the method based on PNN alone.

IV. CONCLUSIONS

Fault diagnosis in industrial processes remains a challenging problem. In this paper, an integrated method based on LW-MWPCA and PNN is proposed and used to diagnose faults in the Tennessee Eastman process. The simulation results show that the integrated LW-MWPCA and PNN algorithm is faster than an algorithm based on wavelets, PCA and PNN, and that it provides higher diagnosis accuracy than the PNN alone.

REFERENCES

[1] L. H. Chiang, E. L. Russell, and R. D. Braatz, Fault Detection and Diagnosis in Industrial Systems. London: Springer, 2001.
[2] Y. Power and P. A. Bahri, "Integration techniques in intelligent operational management: a review," Knowledge-Based Systems, vol. 18, pp. 89-97, 2005.
[3] V. Venkatasubramanian, R. Rengaswamy, S. N. Kavuri, and K. Yin, "A review of process fault detection and diagnosis: Part III: Process history based methods," Computers & Chemical Engineering, vol. 27, pp. 327-346, 2003.
[4] M. Kano and Y. Nakagawa, "Data-based process monitoring, process control and quality improvement: recent developments and applications in steel industry," Computers & Chemical Engineering, vol. 32, pp. 12-24, 2008.
[5] S. Simani and C. Fantuzzi, "Fault diagnosis in power plant using neural networks," Information Sciences, vol. 127, pp. 125-136, 2000.
[6] V. Uraikul, C. W. Chan, and P. Tontiwachwuthikul, "Artificial intelligence for monitoring and supervisory control of process systems," Engineering Applications of Artificial Intelligence, vol. 20, pp. 115-131, 2007.
[7] B. R. Bakshi, "Multiscale PCA with application to multivariate statistical process monitoring," AIChE Journal, vol. 44, pp. 1596-1610, 1998.
[8] A. Maulud, D. Wang, and J. A. Romagnoli, "A multi-scale orthogonal nonlinear strategy for multi-variate statistical process monitoring," Journal of Process Control, vol. 16, pp. 671-683, 2006.
[9] Q. Yang, L. Gu, D. Wang, and D. S. Wu, "Fault diagnosis approach based on probabilistic neural network and wavelet analysis," Proceedings of the 7th World Congress on Intelligent Control and Automation (WCICA'08), pp. 1796-1799, 2008.
[10] T. Jiang, M. H. He, and W. J. Zhai, "Application of wavelet neural network to radar signal sorting," Radar Science and Technology, vol. 2, pp. 341-344, 2003.
[11] Y. He, Y. Tan, and Y. Sun, "Wavelet neural network approach for fault diagnosis of analogue circuits," IEE Proceedings - Circuits, Devices and Systems, vol. 151, pp. 379-384, 2004.
[12] W. Sweldens, Construction and Applications of Wavelets in Numerical Analysis. Ph.D. thesis, Department of Computer Science, Katholieke Universiteit Leuven, Belgium, 1994.
[13] W. Sweldens, "The lifting scheme: a construction of second generation wavelets," SIAM Journal on Mathematical Analysis, vol. 29, pp. 511-546, 1998.
[14] X. Wang, U. Kruger, and G. W. Irwin, "Process monitoring approach using fast moving window PCA," Industrial & Engineering Chemistry Research, vol. 44, pp. 5691-5702, 2005.
[15] J. J. Downs and E. F. Vogel, "A plant-wide industrial process control problem," Computers & Chemical Engineering, vol. 17, pp. 245-255, 1993.
[16] T. J. McAvoy and N. Ye, "Base control for the Tennessee Eastman problem," Computers & Chemical Engineering, vol. 18, pp. 383-393, 1994.