
A MULTI-SENSOR DATA FUSION SYSTEM FOR ASSESSING THE INTEGRITY OF GAS TRANSMISSION PIPELINES

Joseph A. Oagaro, Philip J. Kulick, Min T. Kim, Robi Polikar, John C. Chen,and Shreekanth Mandayam

College of EngineeringRowan University

Glassboro, NJ 08028U.S.A.

ABSTRACT

Accurate and reliable characterization of the pipe-wall condition of gas transmission pipelines requires inspection using more than one method of non-destructive testing. This paper describes a suite of sensor data fusion algorithms that aims to synergistically combine information present in heterogeneous sensors (for example, magnetic, ultrasonic and thermal). The objective of the data fusion algorithms is to improve the accuracy and reliability of pipeline monitoring by providing the location, size and shape of pipe-wall anomalies.

The multi-sensor data fusion algorithms are employed in two stages. In the first stage, data from multiple inspection modalities are fused to identify and separate pipe-wall anomalies from benign indications. A machine-learning algorithm that is capable of incremental learning is employed for this purpose. In the second stage, the multi-sensor data is fused to predict the size and shape of those indications that are identified as anomalies. Models that are based on human stereoscopic vision are used to design a data fusion process in order to extract information that is redundant and complementary among different sets of sensors.

Results demonstrating the effectiveness of the data fusion algorithms are presented. This research work is supported by the U.S. Department of Energy under grant no. DE-FC26-02NT41648.

INTRODUCTION

No single nondestructive evaluation (NDE) method is capable of inspecting everything and extracting all required information – a combination of methods must be used and the resulting data fused to extract relevant information. This is especially true for the in-line inspection and characterization of the integrity of the nation’s gas transmission pipeline network. Accurate and reliable identification and characterization of pipe-wall anomalies in terms of location, size and shape require multi-sensor interrogation and, consequently, multi-sensor data fusion. A variety of signal and image processing techniques have been explored for fusing NDE data from multiple sources [1-5]. Artificial neural networks have played a significant role in the signal inversion process for NDE data interpretation [6-8].

The effectiveness of the various data fusion algorithms has typically been demonstrated by showing that the fused NDE signature contains features that cannot be


readily discerned in the original NDE signatures that are input to the algorithm. There have not been sufficient attempts to define quantitative measures for this purpose. In this paper, we attempt to address the measurement of the effectiveness of an NDE multi-sensor data fusion technique by explicitly defining the information that is expected as a result of the fusion process. Furthermore, when pipeline segments are inspected using multiple methods, it is essential to effectively manage the “new” information that results from a later inspection that uses a different technique. We present an incremental learning algorithm that can be tailored for this purpose.

A two-stage approach is employed for applying the multi-sensor data fusion algorithms. In the first stage, data from multiple inspection modalities are fused to identify and separate pipe-wall anomalies from benign indications. In the second stage, the multi-sensor data is fused to predict the size and shape of those indications that are identified as anomalies (defects). Defect-related information that is redundant and complementary among different sets of sensors is extracted as part of the data fusion process. The data fusion algorithms are exercised on a suite of test-specimens that is fabricated to be representative of pipe-wall indications, both benign and anomalous. The test-specimen suite is subjected to interrogation using magnetic flux leakage (MFL), ultrasonic testing (UT) and thermal imaging techniques.

APPROACH

The objectives of this research work are to:

1. Synergistically combine multi-sensor NDE data for information extraction, interpretation, analysis and decision-making

2. Incrementally learn when presented with new data with or without forgetting portions of previously learned information

3. Adapt the learning process when presented with missing or partially complete data

4. Extract redundant and complementary information embedded within multi-sensor data sets

5. Develop quantitative measures that demonstrate the effectiveness of multi-sensor data fusion for NDE

We have adapted and developed previously established methods for modeling the following biological processes – vision and learning – for arriving at a comprehensive set of tools that are required for performing information fusion from multiple sensors and/or multiple databases. These are described below.

Learn++: A model for incremental learning

Learn++ is a novel algorithm capable of incremental learning of additional data, estimating classification confidence and combining information from different sources [9]. Learn++ employs an ensemble-of-classifiers approach for this purpose. Figure 1 conceptually illustrates the underlying idea of the Learn++ algorithm. The white curve represents a simple hypothetical decision boundary to be learned. The classifier’s job is to identify whether a given point is inside the boundary. Decision boundaries (hypotheses) generated by base classifiers (BCi, i=1,2,…,8) are illustrated with simple geometric figures. Hypotheses decide whether a data point is within their decision boundary. They


are hierarchically combined through weighted majority voting to form composite hypotheses Ht, t=1,…,7, which are then combined to form the final hypothesis Hfinal. In this study, different ensemble classifiers, each trained with signals of a different modality used as features, were incorporated into the NDE signal identification system. To work in data fusion mode, Learn++ was modified according to the structure in Figure 2, combining the pertinent information from all identifiers.

Figure 1. Conceptual illustration of the Learn++ algorithm.

Figure 2. Learn++ for data fusion.
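The weighted majority voting step that combines the base hypotheses can be sketched as follows. The threshold classifiers, error rates and class labels below are hypothetical stand-ins for trained networks; the vote weights are computed AdaBoost-style as log(1/β) with β = ε/(1−ε), which is the weighting used in Learn++ [9].

```python
import math
from collections import defaultdict

def vote_weight(error):
    """Learn++/AdaBoost-style voting weight: log(1/beta), beta = err/(1-err)."""
    beta = error / (1.0 - error)
    return math.log(1.0 / beta)

def weighted_majority_vote(classifiers, weights, x):
    """Sum each classifier's weight into the bin of its predicted label
    and return the label with the largest total weight."""
    votes = defaultdict(float)
    for clf, w in zip(classifiers, weights):
        votes[clf(x)] += w
    return max(votes, key=votes.get)

# Hypothetical base classifiers: simple threshold rules on a scalar feature.
h1 = lambda x: "anomaly" if x > 0.3 else "benign"
h2 = lambda x: "anomaly" if x > 0.6 else "benign"
h3 = lambda x: "benign"  # a weak classifier; its high error yields a low weight

weights = [vote_weight(e) for e in (0.10, 0.20, 0.45)]
print(weighted_majority_vote([h1, h2, h3], weights, 0.5))  # → "anomaly"
```

Note that the single accurate classifier (h1) outvotes the two weaker ones because its lower training error earns it a larger voting weight.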



Geometric transformations: A model for stereoscopic vision

A mathematical model of human monoscopic vs. stereoscopic vision (see Figure 3) can be employed for developing a data fusion algorithm. Consider an instance where there are two similar objects in one’s field of view. One object is smaller and relatively closer to the observer, while the second object is situated farther from the observer and is larger than the first. If one were to view this scene with only one eye, the image perceived on the retina would be that of two objects of identical size. Viewing the scene with both eyes, a separate and unique image is developed in each eye – with two separate, dissimilar images of the same scene, the brain can then fuse the images to develop fairly accurate estimates of the size and distance of the objects.

Figure 3. Model of human vision system - monoscopic vs. stereoscopic vision.

This stereoscopic vision process can be modeled mathematically with the following equation [10]:

h(d) = g(x1, x2)    (1)

where x1 and x2 represent the images seen by each eye, which are dependent on object distance, d, and object size, l. The resulting perception of the distances of each of the objects is provided by the function h(d). The function g fuses the information present in each of the original images, x1 and x2, and can be modeled as a universal approximator given by the radial basis function neural network [11]

g(x) = Σj wj φj(x)    (2)

where wj denotes the weights of the hidden layer nodes in the network and φj is a Gaussian basis given by

φj(x) = exp( −Σi (xi − cij)² / (2σj²) )    (3)

where cij is the basis center (mean) and σj is the radius (variance) of the Gaussian kernel.
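A minimal sketch of evaluating an RBF network of the form in Equations (2) and (3); the centers, radii and weights below are hypothetical stand-ins for trained parameters.

```python
import math

def rbf_predict(x, centers, radii, weights):
    """Evaluate g(x) = sum_j w_j * phi_j(x), where
    phi_j(x) = exp(-sum_i (x_i - c_ij)^2 / (2 * sigma_j^2))."""
    total = 0.0
    for c, sigma, w in zip(centers, radii, weights):
        sq_dist = sum((xi - ci) ** 2 for xi, ci in zip(x, c))
        total += w * math.exp(-sq_dist / (2.0 * sigma ** 2))
    return total

# Toy network with two hidden nodes (illustrative values only)
centers = [(0.0, 0.0), (1.0, 1.0)]
radii = [0.5, 0.5]
weights = [1.0, -1.0]
print(rbf_predict((0.0, 0.0), centers, radii, weights))  # ≈ 0.9817
```

In practice the weights would be obtained by least-squares training on the fused-image targets, but the forward evaluation is exactly this weighted sum of Gaussian kernels.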



This stereovision model can be employed for multi-sensor data fusion, as follows. When NDE images obtained from different inspection methods are fused, the fused image can be assumed to contain two main types of information related to the characteristics of the test object: redundant and complementary information. Redundant information is the defect-related information that is common among the different inspection methods and can be used to increase the reliability of the defect characterization result. Complementary information is the defect-related information that is unique to each inspection method and can be used to improve the accuracy of defect characterization. Figure 4 pictorially depicts the redundant and complementary information in the data fusion process.

Figure 4: Illustration of redundant and complementary information in multi-sensor data fusion.

The stereovision model is now applied as follows. Let x1(r, c1) and x2(r, c2) be two different NDE images that are the results of the inspection of the same object using two different inspection modalities. The variable r represents the redundant information features and is the same for both images. Likewise, the variables c1 and c2 represent the complementary information features for each image. A function fR that extracts the redundant defect-related information, r, between x1 and x2 can be defined as:

r = fR(x1, x2)    (4)

and a function fC that extracts the complementary information can be defined as:

(c1, c2) = fC(x1, x2)    (5)

Both these functions are RBF neural networks, as described in Equation (2).



IMPLEMENTATION RESULTS

Test-specimen Suite

These data fusion techniques were exercised on a test-specimen suite fabricated to mimic a few common indications (both benign and anomalous) that occur in gas transmission pipelines. All of the specimens were machined from ASTM-836 steel and have length and width dimensions of 6” x 4”. However, three separate specimen thicknesses – 5/16, 3/8 and 1/2 inch – were incorporated into the test-specimen suite to account for varying pipe-wall thicknesses. A total of nine slotted defect specimens, developed to mimic pitting corrosion defects, were produced in a milling machine with 0.1, 0.2 and 0.3-inch deep defects for all three specimen thicknesses. Stress corrosion cracking was simulated with a saw-cut, while a hydraulic punch was used to create a dent or gouge in the specimen. A weld specimen was also fabricated for each specimen thickness. Details of the entire test-specimen suite are shown in Table 1.

Table 1: Test-specimen suite.

Specimen #   Plate Thickness (in)   Indication           Defect Depth (in)
A1           0.5                    None                 N/A
A2           0.5                    Pitting              0.3005
A3           0.5                    Pitting              0.198
A4           0.5                    Pitting              0.0945
A5           0.5                    Crack                80% Saw Cut
A6           0.5                    Mechanical Damage    Varies
A7           0.5                    Weld                 Varies
B1           0.375                  None                 N/A
B2           0.375                  Pitting              0.298
B3           0.375                  Pitting              0.199
B4           0.375                  Pitting              0.1105
B5           0.375                  Crack                80% Saw Cut
B6           0.375                  Mechanical Damage    Varies
B7           0.375                  Weld                 Varies
C1           0.3125                 None                 N/A
C2           0.3125                 Pitting              0.303
C3           0.3125                 Pitting              0.1955
C4           0.3125                 Pitting              0.0995
C5           0.3125                 Crack                80% Saw Cut
C6           0.3125                 Mechanical Damage    Varies
C7           0.3125                 Weld                 Varies


Multi-sensor Interrogation

The test-specimen suite described in the previous section was subjected to multi-sensor interrogation using ultrasonic testing, magnetic flux leakage and thermal imaging methods. NDE images obtained from each inspection modality are shown below.

Ultrasonic testing (UT): The laboratory setup for the UT testing consists of a typical immersion ultrasound test station that allows for pulse-echo testing using piezoelectric transducers operating in the pitch-catch mode. Precision linear actuators and controlled stepper motors were interfaced via custom hardware to a PC, providing real-time control and display of A-scan, C-scan, time-of-flight, and amplitude ultrasound data to be utilized for defect characterization. Each specimen was submerged in water and scanned with a 10 MHz UT transducer. Figure 5 shows the resulting time-of-flight (TOF) ultrasound images. Rows 1 through 4 show the progression of increasing defect depth, starting with no defect, followed by 0.1”, 0.2”, and 0.3” deep defects. Rows 5 through 7 show the stress corrosion cracking, mechanical damage, and welded specimens, respectively. Columns A, B, and C display the varying specimen thicknesses, with Column A being 1/2” thick, B 3/8” thick, and C 5/16” thick, as in Table 1.
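As an illustration of how a pulse-echo TOF reading maps to metal loss, the sketch below converts a back-wall echo time to remaining wall thickness. The velocity is a nominal longitudinal wave speed for steel and the echo time is a hypothetical value, not a measurement from this study.

```python
# Nominal longitudinal wave velocity in steel, in mm per microsecond
# (an assumed textbook value, not a calibrated one).
V_STEEL_MM_PER_US = 5.9

def wall_thickness(tof_us):
    """Pulse-echo: the pulse travels to the back wall and returns,
    so remaining thickness = v * TOF / 2."""
    return V_STEEL_MM_PER_US * tof_us / 2.0

def defect_depth(nominal_thickness_mm, tof_us):
    """Depth of a back-wall-connected pit = nominal wall minus remaining wall."""
    return nominal_thickness_mm - wall_thickness(tof_us)

# For a 0.5" (12.7 mm) plate, a hypothetical echo at 2.58 us implies
# roughly a 5 mm deep pit.
print(round(defect_depth(12.7, 2.58), 2))
```

The TOF images in Figure 5 encode exactly this kind of thickness information pixel by pixel, which is why defect depth is recoverable from them.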

Figure 5: UT images of the test-specimen suite.



Magnetic flux leakage (MFL): The laboratory setup for the MFL data collection utilizes an FW Bell Gaussmeter with a Hall-effect probe connected to a set of linear actuators and stepper motors. Each specimen is magnetized by a direct current of 200 A flowing through the specimen. Figure 6 shows the resulting magnetic images.

Figure 6: MFL images of the test-specimen suite.

Thermal imaging: The thermal imaging laboratory setup includes a 110 W halogen lamp heat source that is sinusoidally excited at a rate of 8-10 seconds/cycle. The specimen is placed on an optical table and is thermally insulated with an aluminum honeycomb panel. The thermal images of the test specimens are captured using a FLIR Systems microbolometer camera. The images were obtained at 1-second intervals over the excitation cycle. Five images at equally spaced time intervals over each cycle were



processed to extract the phase images shown in Figure 7. The images are scaled and registered to a resolution of 100 pixels/inch. Note that the defect-related information in the thermal phase images is less than that contained in the UT and MFL images.

Figure 7: Thermal images of the test-specimen suite.
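The per-pixel phase extraction from the five equally spaced samples per excitation cycle can be sketched as a single-bin DFT, in the style of lock-in thermography. This is an illustrative reconstruction under that assumption, not the exact processing chain used in the study.

```python
import math

def phase_image_pixel(samples):
    """Phase at the excitation frequency from N equally spaced samples
    over one cycle: the angle of the first DFT bin."""
    n = len(samples)
    re = sum(s * math.cos(2 * math.pi * k / n) for k, s in enumerate(samples))
    im = sum(s * math.sin(2 * math.pi * k / n) for k, s in enumerate(samples))
    return math.atan2(im, re)

# A pixel whose temperature follows cos(wt - phi) should recover phi exactly.
phi = 0.7
n = 5
samples = [math.cos(2 * math.pi * k / n - phi) for k in range(n)]
print(round(phase_image_pixel(samples), 6))  # → 0.7
```

Phase images are attractive for this application because the phase is largely insensitive to uneven surface heating, which corrupts raw amplitude images.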

Data Fusion for Signal Classification

The Learn++ algorithm’s data fusion capabilities were evaluated on the MFL and UT data obtained from the specimen suite described earlier. We were particularly interested in whether we could improve the generalization performance of the algorithm (the correct classification performance on validation data not seen by the algorithm during training) when data from two different NDE modalities (namely, MFL and UT) are fused. To test this, we partitioned the test-specimen suite into two sections: ten specimens for training the algorithm and eleven for validation. There were a total of five categories: pitting, crack, mechanical damage, weld and no defect. Our goal was to compare the classification performance when trained with only MFL or UT data to the performance when the data were combined through the Learn++ algorithm. As discussed in the following paragraphs, our initial results were extremely promising.



We used the multilayer perceptron (MLP) neural network as the base classifier with Learn++ and tested the algorithm with several MLP architectures, ranging from 10 to 40 hidden layer nodes [11]. All networks were trained with an error goal of 0.1. Table 2 summarizes our initial results. We note that the results provided below are on the eleven validation (test) specimens not seen by the algorithm during training.

Table 2: Data fusion results for signal identification.

The first column indicates the number of hidden layer nodes used in the MLP architecture. The selection of this parameter is more of an art than a science. As commonly reported in the literature, too few hidden layer nodes adversely affect the expressive power of the network, whereas too many hidden layer nodes cause memorization of the noise in the training data. The second column indicates the number of networks used in creating the Learn++ classifier ensemble using the UT images only, whereas the third column provides the generalization performance of the algorithm when trained with UT images only. The fourth and fifth columns indicate the number of classifiers used in the ensemble and the corresponding performance of the algorithm when trained with MFL images only. Finally, the last column shows the generalization performance of Learn++ when information from the UT and MFL images was combined.

Several observations can be made from this table. First, the proper selection of the number of hidden layer nodes is indeed crucial to the performance of the MLP base classifiers. In general, we notice that increasing the number of hidden layer nodes beyond 25 provides little improvement (and only on MFL images) at the cost of a substantially increased number of networks. This is a significant cost, as training a large number of classifiers can be time-consuming. It is reasonable to deduce that a larger number of classifiers are generated and combined by the algorithm simply to remove the overfitting errors associated with larger networks.

Second, for all three cases of using 10, 20 or 25 hidden layer nodes, the data-fusion performance of Learn++ is improved over using UT or MFL alone. In particular, while the individual performances were in the 64%-82% range, Learn++ was able to achieve a 91% classification performance by fusing the information coming from UT and MFL.

Third, in all other trials with a larger number of hidden layer nodes, the combined performance was never worse than the individual ones. In other words, the Learn++ data


fusion algorithm performed as well as – if not better than – the classifiers trained with only one of the modalities.

We must emphasize that we had very little training and evaluation data in obtaining the results shown. In particular, on a validation dataset of 11 images, the difference between 72.7% and 81.8% performance is due to only one additional image being correctly classified or misclassified, as is the difference between 81.8% and 90.9%. Therefore, it is essential for us to substantially expand our training and test datasets to further validate the statistical significance of the improved performance. Nevertheless, we are extremely optimistic that with increased training and test data, the performance of the algorithm will improve, because these results were obtained with only 10 images used for training. In general, training with additional data can only improve the performance, provided that the additional data is relevant to the classification problem.

In summary, Learn++ proved its feasibility as a viable data-fusion algorithm, even after training on a very limited dataset. We believe that the algorithm will achieve even further improved data-fusion results after training on additional data. Our future work will include not only collecting and using additional data, but also testing the algorithm with different error goals, different feature extraction schemes and different classifier combination strategies.

Data Fusion for Defect Characterization

In order to train the artificial neural network for defect characterization, it is necessary to indicate the desired complementary and redundant information between the two NDE inspection methods. In this paper, since the actual defect size, shape, depth and location are known for the specimen suite, these definitions can be made by comparing the NDE signature from each inspection method with the size, shape, depth and location of the defect. Figure 8 illustrates this definition process. Complementary information in two NDE images is defined as those distinct pixels in each of the NDE signatures that are present in the defect region but are not shared between them. Redundant information in two NDE images is defined as those common pixels that are present in both NDE signatures and are also present in the defect region.

Figure 8: NDE image signatures used to define redundant and complementary information.
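Treating each NDE signature and the known defect region as sets of pixel coordinates, the definitions above reduce to simple set operations; the coordinates below are hypothetical toy values.

```python
def redundant_complementary(sig1, sig2, defect):
    """Pixel-set training targets:
    redundant = pixels indicated by BOTH signatures within the defect region;
    complementary = pixels indicated by exactly one signature within it.
    Each argument is a set of (row, col) pixel coordinates."""
    redundant = (sig1 & sig2) & defect
    comp1 = (sig1 - sig2) & defect   # unique to signature 1
    comp2 = (sig2 - sig1) & defect   # unique to signature 2
    return redundant, comp1, comp2

# Toy signatures from two modalities and a known defect mask
sig1 = {(0, 0), (0, 1), (1, 1)}
sig2 = {(0, 1), (1, 1), (2, 2)}
defect = {(0, 0), (0, 1), (1, 1), (2, 2), (3, 3)}
print(redundant_complementary(sig1, sig2, defect))
```

Pixel (3, 3) lies in the defect region but is indicated by neither signature, so it appears in no output set: it is defect information that both modalities missed.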


Typical defect characterization test results for each combination of two testing modalities can be seen in Figures 9-11. Each result figure shows the inputs, outputs, and desired outputs in three rows, respectively, with the first row containing the test image from each inspection modality. These tests employ the above definitions of redundant and complementary information for neural network training. The network inputs and outputs were the spectral coefficients of the images obtained using the Discrete Cosine Transform (DCT). The results show that the proposed technique is able to successfully extract redundant and complementary information related to the geometry of the defect. The best results are obtained for the fusion of UT and MFL data, and the poorest performance results from combining thermal imaging with any other method. These results are consistent with the quality and amount of the information contained in the respective modalities.
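The spectral features can be illustrated with a naive (unnormalized) 2-D DCT-II; a production implementation would use an optimized transform library, and the 2x2 image below is a toy stand-in for an NDE image.

```python
import math

def dct2_coeffs(image):
    """Naive unnormalized 2-D DCT-II of a 2-D image (list of rows).
    Low-order coefficients C[u][v] serve as compact spectral features."""
    m = len(image)
    n = len(image[0])
    coeffs = [[0.0] * n for _ in range(m)]
    for u in range(m):
        for v in range(n):
            s = 0.0
            for i in range(m):
                for j in range(n):
                    s += (image[i][j]
                          * math.cos(math.pi * (2 * i + 1) * u / (2 * m))
                          * math.cos(math.pi * (2 * j + 1) * v / (2 * n)))
            coeffs[u][v] = s
    return coeffs

img = [[1.0, 1.0], [1.0, 1.0]]
print(dct2_coeffs(img)[0][0])  # DC term of a constant image = sum of pixels → 4.0
```

Because most image energy concentrates in the low-order DCT coefficients, truncating the coefficient list compacts the image representation before it is fed to the fusion networks.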

Figure 9: MFL & UT data fusion.



Figure 10: Thermal & UT data fusion.

Figure 11: Thermal & MFL data fusion.

CONCLUSIONS

Research in multi-sensor data fusion has seen phenomenal growth in recent years, as the opportunities for sensor deployment have increased and the signal processing algorithms for managing the data have become more sophisticated. In this paper we have presented:

1. The development of a generalized technique for fusing data from multiple, heterogeneous NDE interrogations of the same test-specimen.

2. The design and development of a data fusion algorithm that can extract redundant and complementary defect-related information present in distinct NDE signals.



3. The design and development of a data fusion algorithm that incrementally learns to classify multi-sensor NDE signals.

4. The validation of the data fusion algorithms using laboratory generated data that is representative of the nondestructive evaluation of gas transmission pipelines. In particular, combinations of MFL, UT and thermal imaging of metallic test specimens embedded with benign and anomalous indications are chosen as candidates for data fusion.

The task of combining data from multiple sources and extracting meaningful information from the combination continues to be challenging. Considerable work remains to be done to arrive at a comprehensive technique for performing multi-sensor data fusion. The algorithms presented in this paper show considerable promise, as indicated by the implementation results; however, the following issues need to be addressed before the technique is ready for field-testing:

1. The size and diversity of the training and test data sets require enhancement, even though this requires an outlay of significant resources in time, personnel and supplies.

2. A variety of image preprocessing techniques need to be explored for information compaction and feature extraction.

3. Although the data fusion algorithms described in this paper have been exercised with experimental data, the robustness of the technique using noisy real-world NDE signals remains to be tested.

4. In this paper, all of the data used to test the data fusion algorithms consisted of spatial-domain 2-D images. In any real-world data fusion application, the information-bearing data sets are heterogeneous – i.e., the data would consist of a combination of time-domain 1-D signals, spatial-domain 2-D images, singular events describing time-history, anecdotal evidence and a priori knowledge. The data fusion algorithm proposed in this paper must be augmented to handle such heterogeneous data sets.

ACKNOWLEDGEMENTS

This research work is supported by the U.S. Department of Energy under grant no. DE-FC26-02NT41648.

REFERENCES

[1] M. Mina, J. Yim, S. S. Udpa, L. Udpa, W. Lord and K. Sun, “Two-dimensional multi-frequency eddy current data fusion,” Review of Progress in Quantitative Nondestructive Evaluation, Vol. 15, Plenum Press, New York, pp. 2125-2132, 1996.

[2] X. E. Gros, J. Bousigue and K. Takahashi, “NDT data fusion at pixel level,” NDT & E International, Vol. 32, Issue 5, pp. 283-292, July 1999.


[3] D. Horn and W. R. Mayo, “NDE reliability gains from combining eddy-current and ultrasonic testing,” NDT & E International, Vol. 33, Issue 6, pp. 351-362, September 2000.

[4] J. Yim, S. S. Udpa, M. Mina, L. Udpa, “Optimum filter based techniques for data fusion,” Review of Progress in Quantitative Nondestructive Evaluation, Vol. 15, Plenum Press, New York, 1996, pp. 773-780.

[5] Y. W. Song, NDE Data Fusion Using Morphological Approaches, Ph.D. Dissertation, Iowa State University, Ames, Iowa, 1997.

[6] K. Hwang, S. Mandayam, S. S. Udpa, L. Udpa, W. Lord and M. Afzal, “Characterization of gas pipeline inspection signals using wavelet basis function neural networks,” NDT & E International, Vol. 33, Issue 8, pp. 531-545, December 2000.

[7] S. Mandayam, L. Udpa, S. S. Udpa and W. Lord, “Wavelet based permeability compensation technique for characterizing magnetic flux leakage images,” NDT & E International, Vol. 30, No. 5, pp. 297-303, 1997.

[8] S. Mandayam, L. Udpa, S. S. Udpa and W. Lord, “Signal processing for in-line inspection of gas transmission pipelines,” Research in Nondestructive Evaluation, Vol. 8, no. 4, pp. 233-247, 1996.

[9] R. Polikar, et al., “Learn++: An incremental learning algorithm for supervised neural networks,” IEEE Transactions on Systems, Man and Cybernetics, Part C, Special Issue on Knowledge Management, Vol. 31, No. 4, pp. 497-508, 2001.

[10] S. Mandayam, L. Udpa, S. S. Udpa and W. Lord, “Invariance transformations for magnetic flux leakage signals,” IEEE Transactions on Magnetics, Vol. 32, No. 3, pp. 1577-1580, May 1996.

[11] S. Haykin, Neural Networks - A Comprehensive Foundation, Second Edition, Prentice Hall, Upper Saddle River, NJ, 1999.