ORIGINAL ARTICLE
PSO-optimized modular neural network trained by OWO-HWO algorithm for fault location in analog circuits
Mansour Sheikhan • Amir Ali Sha’bani
Received: 29 September 2011 / Accepted: 12 April 2012 / Published online: 25 April 2012
© Springer-Verlag London Limited 2012
Abstract Fault diagnosis of analog circuits is a key
problem in the theory of circuit networks and has been
investigated by many researchers in recent decades. In this
paper, an active filter circuit is used as the circuit under test
(CUT) and is simulated in both fault-free and faulty con-
ditions. A modular neural network model is proposed in
this paper for soft fault diagnosis of the CUT. To optimize
the structure of neural network modules in the proposed
scheme, particle swarm optimization (PSO) algorithm is
used to determine the number of hidden layer nodes of
neural network modules. In addition, the output weight
optimization–hidden weight optimization (OWO-HWO)
training algorithm is employed, instead of conventional
output weight optimization–backpropagation (OWO-BP)
algorithm, to improve convergence speed in training of the
neural network modules in the proposed modular model. The
performance of the proposed method is compared to that of
monolithic multilayer perceptrons (MLPs) trained by
OWO-BP and OWO-HWO algorithms, K-nearest neighbor
(KNN) classifier and a related system with the same CUT.
Experimental results show that the PSO-optimized modular
neural network model which is trained by the OWO-HWO
algorithm offers a higher correct fault location rate in the analog circuit fault diagnosis application as compared to the investigated classic and monolithic neural models.
Keywords Fault diagnosis · Modular neural model · Analog circuits · PSO algorithm · OWO-HWO algorithm
1 Introduction
Fault diagnosis and testing of electronic circuits and systems is a crucial and complex task in the microelectronics and semiconductor industry. This subject
has been investigated by many researchers in the recent
three decades [1–7]. The techniques for digital circuit
diagnosis and testing are well developed and effective, whereas the testing of analog and mixed-signal systems is more intricate and less well understood [8–11].
Fault detection, fault location or identification, and finally
fault prediction are the three major aims of network testing
and diagnosis [12]. Fault detection is obviously a minimum
requirement for fault location or identification.
Fault detection is the most basic diagnosis test. The
main purpose of fault detection is to determine whether the circuit under test (CUT) is faulty. Fault detection
methods can be classified in the following main groups:
(a) signal-based methods that are focused on analyzing
signal features. Change detection is measured as a devia-
tion from normal behavior. For this purpose, statistical
(based on mean, variance, or entropy estimation) [13],
frequency-based (based on filtering or spectral estimation),
probabilistic (such as Bayes decision), fuzzy logic [14],
and neural network [15–17] approaches can be used as
sample methods; (b) model-based methods that are focused
on residual generation. Residuals are obtained as changes
or discrepancies in special features of the process obtained
from process variables (for example output signals, state
variables) or coefficients (for example estimated parame-
ters or other calculated ratios). To achieve this goal, data
M. Sheikhan (✉) · A. A. Sha’bani
Department of Electrical Engineering, Faculty of Engineering,
Islamic Azad University, South Tehran Branch,
P.O. Box: 11365-4435, Tehran, Iran
e-mail: [email protected]
A. A. Sha’bani
e-mail: [email protected]
Neural Comput & Applic (2013) 23:519–530
DOI 10.1007/s00521-012-0947-9
obtained from the process is compared to the data supplied
by models representing normal operating conditions;
(c) knowledge-based methods that are suitable strategies in
the case of noticeable modeling uncertainty. Instead of
output signals, any kind of symptoms can be used and the
robustness can be attained by restricting to only those
symptoms that are not strongly dependent upon the sys-
tem’s uncertainty [18].
Finding the source of fault(s) is possible using a fault
isolation procedure. There are three diagnosis procedures:
simulation before test (SBT), simulation after test (SAT),
and built-in self-test (BIST). The SBT methods identify
faults by comparing the measured circuit responses with their counterparts in the fault dictionary associated with
predefined fault values. For the SAT methods, fault diag-
nosis is achieved by calculating the circuit parameters from
the measured responses of the CUT. The SAT method
takes more time for online computation than the SBT
method, which is based on fault dictionaries that can be
generated offline. The BIST method requires designing the whole circuit in a way that allows for an independent
diagnosis of chosen test blocks [19]. In this way, various
types of fault diagnosis techniques have been proposed for
analog circuits [20]. These techniques can be broadly cat-
egorized as rule-based [21], fault model–based [22–24],
and behavioral model–based [25, 26].
In the fault prediction problem, the response of the
network is constantly monitored to identify whether any of
the network elements is about to fail. The main concern is to replace these elements before an actual failure occurs while sacrificing as little as possible of the replaced elements’ remaining lifetime.
Research on the fault diagnosis of analog circuits started in the 1960s and became an active field in the
1970s. In general, a fault refers to any change in the value
of an element with respect to its nominal value, which can
result in the failure of whole circuit. The faults could be
catastrophic faults (hard faults) when the faulty element
yields either a short circuit or an open circuit [27, 28], or
deviation faults (soft faults) when the faulty element
deviates from its nominal value without reaching its extreme limits [29, 30]. Catastrophic faults lead to
drastic malfunction and are usually detected by simple
direct current (DC) tests. It is important to mention that
manufacturing tolerances, aging, or parasitic effects could
culminate in soft faults. Soft faults, also known as parametric faults, are the most difficult to model and test
[31–41]. Many fault location techniques only address the
case when just one parameter causes the fault. This is
referred to as a single fault. It is worth noting, however, that multiple faults, i.e., simultaneous changes in several parameters, may also occur. The values of analog circuits’ input and output
signals and the component parameters are continuous, and
meanwhile, there are inevitable tolerance and nonlinear
components in the analog circuits; therefore, the presence
of these factors increases the complexity of analog cir-
cuits fault diagnosis [42, 43].
Several artificial neural networks (ANN)-based approa-
ches have been proposed for fault diagnosis of analog
circuits [17, 44–51] because of their outstanding ability in
solving classification and nonlinear function approximation
problems, but the conventional monolithic ANNs have
poor generalization ability. On the other hand, the modular
system design approach offers some advantages such as
simplicity and economy of design, computational effi-
ciency, fault tolerance, and better extendibility.
In this paper, a modular neural network model is pro-
posed for analog circuit fault diagnosis. Particle swarm
optimization (PSO) algorithm is also adopted to determine
the optimal structure of ANN modules in the proposed
modular scheme. In addition, output weight optimization–
hidden weight optimization (OWO-HWO) training algo-
rithm [52] is used as a superior technique [53] in training
PSO-optimized ANN modules as compared to conven-
tional output weight optimization–backpropagation (OWO-
BP) algorithm. The performance of proposed model is
compared to that of standard monolithic multilayer per-
ceptrons (MLPs) trained by OWO-BP and OWO-HWO
algorithms, K-nearest neighbor (KNN) classifier, and a
related system with the same CUT. Experimental results
show that the proposed model succeeds in locating
faults effectively. In addition, validation of the methodol-
ogy using an active three-mode filter circuit as CUT shows
that further application to more complex analog circuits,
employing a large number of electronic components, is
possible because of the proposed modular design.
The rest of this paper is organized as follows. Related
work on the fault diagnosis of analog circuits and opti-
mizing ANNs using intelligent approaches is reviewed in
Sect. 2. Section 3 introduces the preliminaries including
OWO-HWO and PSO algorithms. The CUT and corre-
sponding faults are introduced in Sect. 4. The structure of
proposed modular ANN is described in Sect. 5. Simulation
and experimental results are reported in Sect. 6. Finally,
the paper is concluded in Sect. 7.
2 Related work
The techniques used in the fault diagnosis can be divided
into two broad categories: the estimation methods and
pattern recognition methods. The estimation methods
[2, 54] require mathematical process models that represent
the real process satisfactorily, and a component is identified
as a faulty one when the calculated value is beyond its
tolerance range. If the model is complex, computation can
easily become very time-consuming. Thus, the application
of estimation methods is very limited in practice. However,
no mathematical model of the process is required in the
pattern recognition methods [16, 32, 55] as the operation of
process is classified by matching the measurement data.
Formally, this is a mapping from measurement space into
decision space.
With the aim of fault diagnosis for analog circuit response
analysis, several transforms have been used such as Fourier
transform [56], wavelet transform [11, 17, 57, 58], and
bilinear transform [59, 60]. To reduce the dimensionality of
candidate features so as to obtain the optimal features as
inputs to the classifier, several approaches have been
employed such as principal component analysis (PCA) [17],
evolutionary algorithms [61], and maximal class separabil-
ity–based kernel principal component analysis [51].
In the recent decades, several methods have been pro-
posed for fault classification in linear and nonlinear analog
circuits such as support vector machine (SVM) [6], support
vector data description (SVDD) [58], relevance vector
machine [62], evolutionary algorithms (EAs) [63], fuzzy
logic approach [64], rough sets [65], swarm intelligence
(SI) algorithms [34], higher-order statistical (HOS) systems
[35], ANNs [11, 43, 49, 51, 57, 66, 67], and hybrid
approaches [37–39]. It is noted that in using ANNs for this
purpose, the networks are trained with circuit signatures,
obtained by measuring circuit input and output signals,
which are considered in a fault dictionary [2, 48–50].
As examples of SVM-based fault classifiers, Wang et al.
[32] have used S-transform time–frequency analysis to
extract the features corresponding to various faults in
power electronics circuits. Then, the fault types were
identified by SVM. Similarly, Long et al. [33] have
employed least squares SVM (LS-SVM) for fault diagnosis
of four-opamp biquad highpass filter circuit. To reduce the
fault feature vectors to train LS-SVM, they used the energy
of high frequency of wavelet transform coefficients (detail
signals) of various levels.
As an example of EA-based fault diagnosis system, Luo
et al. [68] have proposed a module-level fault diagnosis
method in which the transfer function was constructed first.
Every system parameter of the transfer function was
expressed by several component parameters. Genetic
algorithm (GA) was adopted to solve nonlinear equations
obtained by multifrequency testing, and the module-level
faults were detected by comparing the estimated system
parameters to their normal values.
As an example of SI-based fault diagnosis system, Zhou
et al. [34] have proposed a single soft fault diagnosis
method for analog circuit with tolerance based on PSO.
Node-voltage incremental equations based on the sensi-
tivity analysis were built as constraints of a linear pro-
gramming (LP) equation. Through inducing the penalty
coefficient, the LP equation was set as the fitness function
for the PSO algorithm. After evaluating the best position of
particles, it was apparent whether the actual parameter is
within tolerance range or not.
As an example of an HOS-based fault diagnosis system, Yuan et al. [35] have developed an approach based on higher-order statistics in signal processing (the third- and fourth-order moments or cumulants and their frequency-domain counterparts) for fault diagnosis of an analog Sallen–Key bandpass filter.
As examples of ANN-based fault classifiers, Deng et al.
[69] have proposed an MLP-based method for fault diag-
nosis of an analog resistive circuit with tolerances.
Mohammadi et al. [70] have used radial basis function (RBF)
and MLP for analog fault diagnosis in a resistive circuit
and a JFET amplifier. Han and Wu [71] have employed a
sort of improved multiple-input multiple-output compact
type of wavelet neural network [72] and adopted adaptive
learning rate and additional momentum BP algorithm to
carry out training in fault diagnosis of an analog negative
feedback amplifier circuit. Yuan et al. [73] have employed
a preprocessing technique based on the kurtosis and
entropy of signals, which have been used to measure the
high-order statistics of signals, for the MLP classifier to
simplify the network architecture, to reduce the training
time, and to improve the performance of the network.
Sallen–Key bandpass filter and four-opamp biquad high-
pass filter were used as the CUTs in [73]. He et al. [74]
have used the wavelet transform to extract appropriate
feature vectors from the signals sampled from an active
filter as the CUT. The optimal feature vectors were selected
to train the wavelet neural networks by PCA and normal-
ization of approximation and detail coefficients. Xiao and
Feng [36] have developed a fault diagnosis approach of
analog circuits based on linear ridgelet network after two
preprocessing stages: wavelet-based fractal analysis and
kernel principal components analysis (kernel PCA). They
also adopted the kernel PCA to select the proper numbers
of hidden ridgelet neurons of the linear ridgelet networks.
As an example of hybrid fault diagnosis system, Bo
et al. [37] have combined fuzzy logic and ANN approaches
for fault diagnosis of a negative feedback amplifier circuit.
For the training of ANN, they adopted resilient backprop-
agation (RPROP) algorithm.
In addition, many artificial intelligence (AI), swarm, and
evolutionary algorithms have been proposed for the opti-
mization of structure and parameters of ANNs. As sample
researches in this field, Lee and Ko [75] have developed a
nonlinear time-varying evolution PSO (NTVE-PSO) algo-
rithm to determine the optimal structure of RBF neural
network. Yu et al. [76] have proposed an improved PSO
and discrete PSO (DPSO) for joint optimization of three-
layer feedforward ANN structure and weights and biases.
Leung et al. [77] have used PSO algorithm to optimize the
structure of RBF neural network, including the weights and
controlling parameters. Luitel and Venayagamoorthy [78]
have developed a training algorithm from combined con-
cepts of swarm intelligence and quantum principles, called
PSO-QI, for training a recurrent neural network. Zhang
et al. [79] have proposed a hybrid algorithm combining
PSO with BP algorithm to train the weights of feedforward
neural network. Shen et al. [80] have used artificial fish
swarm algorithm (AFSA) to optimize the learning process
of RBF. Li and Liu [81] have used a modified PSO and
simulated annealing (MPSO-SA) algorithm to optimize the
parameters of RBF neural network.
As examples of hybrid fault diagnosis systems in which
the parameters or architecture of their classifiers are opti-
mized by EA or SI algorithms, Li and Zhang [39] have used
an SVM-based classifier for fault diagnosis of a negative
feedback amplifier circuit. The GA was used to select
appropriate parameters of SVM and improve the classifica-
tion accuracy of classifier. Tang et al. [38] have developed an
analog filter fault diagnosis system in which the best value of
the SVM classifier parameters and the optimized feature
subspaces were selected by PSO algorithm. He and Wang
[56] have used an RBF neural network trained by PSO
algorithm to provide robust diagnosis of the soft faults. Li
et al. [82] have developed a fault diagnosis method based on
chaos differential evolution (CDE) algorithm and wavelet
neural network (WNN). In this way, the architecture and
parameters of WNN were optimized by CDE algorithm.
It is noted that in order to improve the performance of
the single neural networks, two independent approaches
have been adopted: ensemble-based and modular [83]. The
ensemble-based approach deals with the determination of
an optimal combination of already trained ANNs. Each
member in the ensemble is trained to learn the same task
and the outputs of each member are combined to improve
the performance [84]. On the other hand, in the modular
approach, complex tasks are decomposed into simpler
subtasks using the principle of divide and conquer [83]. It
is noted that modular neural networks can achieve perfor-
mance improvement and are more attractive than a con-
ventional monolithic global neural network design
approach because of model complexity reduction [85],
robustness [86], scalability [87], computational efficiency
[88], learning capacity [89], knowledge integration [90],
and immunity to crosstalk [91]. Several modular ANN
architectures have been proposed such as decoupled mod-
ules [92], hierarchical network [93], hierarchical competi-
tive modular ANN [94], cooperative modular ANN [95],
merge-and-glue network [96], adaptive mixture of local
experts [97] along with its variants [98, 99].
As examples of ensemble/modular ANN-based fault
diagnosis systems, Stosovic and Litovski [40] have applied
ANNs to the diagnosis of mixed-mode electronic circuits.
In order to tackle the circuit complexity and to reduce the
number of test points, hierarchical approach to the diag-
nosis generation was implemented with two levels of
decision: the system level and the circuit level. For each
level, using the SBT approach, fault dictionary was created
first. ANNs were used to model the fault dictionaries. At
the topmost level, the fault dictionary was split into parts
simplifying the implementation of the concept. A voting
system was created at the topmost level in order to dis-
tinguish which ANN’s output should be accepted as the
final diagnostic statement. Liu et al. [41] have proposed a
method for fault diagnosis of analog circuits with tolerance
based on NN ensemble method with cross-validation. In
this way, bias-variance decomposition was used to choose
the component networks when composing the ensemble.
Then, Bagging algorithm was employed to produce the
different training sets in order to train the different com-
ponent networks, and cross-validation technique was used
to further improve fault diagnosis accuracy. Finally, the
outputs of the component ensemble members were com-
bined to isolate the CUT faults. Also, a hierarchical neural
network (HNN) method has been proposed in [100] for
fault diagnosis of large-scale circuits.
In this paper, a variant of modular ANN model is pro-
posed for fault diagnosis in which the expert networks and
gating network are PSO-optimized MLPs.
3 Preliminaries
3.1 OWO-HWO algorithm
A critical problem in multilayer perceptron (MLP) neural
networks has been the long training time required. Several
fast training techniques that require the solution of sets of
linear equations have been devised [101, 102].
In the output weight optimization–backpropagation
(OWO-BP) algorithm, a set of linear equations are solved
to find the output weights, and the backpropagation algo-
rithm is used to find hidden weights [52]. Unfortunately,
the backpropagation is not a very effective method for
updating hidden weights [103].
A non-batching approach for finding all the MLP weights,
by minimizing separate error functions for each hidden unit,
has been proposed in [104]. Although this technique is more
effective than backpropagation algorithm, it does not use the
OWO algorithm to find the output weights optimally. The idea
of minimizing a separate error function for each hidden unit is
adapted to find the hidden weights and is termed hidden weight optimization (HWO) [52].
In this paper, the OWO-HWO algorithm is used as a
superior technique in terms of convergence as compared with
the standard OWO-BP. In this section, the notations and error
functions in MLP network are introduced first, and then, the
OWO-HWO algorithm is described. In the MLP, if the jth unit
is a hidden unit, then the net input, netp(j), and the output
activation, Op(j), for the pth training pattern are as follows:
$$\mathrm{net}_p(j) = \sum_{i=1}^{N+1} w(j,i)\, x_p(i) \qquad (1)$$

$$O_p(j) = f\big(\mathrm{net}_p(j)\big) \qquad (2)$$
where the ith unit is in any previous layer and N is the
number of nodes in the previous layer, xp(i) is the ith input
from the previous layer, and w(j, i) denotes the weight
connecting the ith unit to the jth unit. For the kth output
unit, the net input netop(k) for the pth training pattern and
the output activation Oop(k), with the linear property
assumption of the output units, are as follows:
$$\mathrm{neto}_p(k) = \sum_{i=1}^{M} w_o(k,i)\, O_p(i) \qquad (3)$$

$$\mathrm{Oo}_p(k) = \mathrm{neto}_p(k) \qquad (4)$$
where wo(k, i) denotes the output weight connecting the ith
unit to the kth output unit and M is the number of nodes in
the hidden layer. In order to train a neural network in batch
mode, the error for the kth output unit is defined as follows:
$$E(k) = \frac{1}{N_v} \sum_{p=1}^{N_v} \big[ T_p(k) - \mathrm{Oo}_p(k) \big]^2 \qquad (5)$$
in which Nv is the number of training patterns {(xp, Tp)}
and Tp(k) denotes the target value at the kth output unit for
the pth training pattern.
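The forward pass and error measure of Eqs. (1)–(5) can be sketched in NumPy as follows; the array shapes, the bias handling, and the tanh nonlinearity are illustrative assumptions, not details fixed by the paper:

```python
import numpy as np

def forward(x, W_hid, W_out, f=np.tanh):
    """Forward pass of the MLP described by Eqs. (1)-(4).

    x     : one input pattern with the bias term appended, shape (N+1,)
    W_hid : hidden weights w(j, i), shape (M, N+1)
    W_out : output weights wo(k, i), shape (K, M)
    Returns hidden activations O_p and linear output activations Oo_p.
    """
    net = W_hid @ x        # Eq. (1): net_p(j)
    O = f(net)             # Eq. (2): O_p(j)
    Oo = W_out @ O         # Eqs. (3)-(4): linear output units
    return O, Oo

def output_error(T, Oo):
    """Eq. (5): per-output-unit MSE over Nv training patterns.
    T, Oo : target and output arrays of shape (Nv, K)."""
    return np.mean((T - Oo) ** 2, axis=0)
```

With all hidden weights at zero, the tanh activations and hence the linear outputs are zero, which is a convenient sanity check when wiring the shapes together.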
In this paper, the conjugate gradient approach is used to
minimize E(k) [52]. For hidden weight changes, it is desirable
to optimize the hidden weights by minimizing separate error
functions for each hidden unit. By minimizing many simple
error functions, instead of a large one, it is hoped that the
training speed and convergence can be improved. The desired
hidden net function can be approximated by a current net
function plus a net change. That is, for the jth unit and pth pattern, a desired net function can be constructed as follows [104]:

$$\mathrm{net}_{pd}(j) = \mathrm{net}_p(j) + z\,\delta_p(j) \qquad (6)$$
where z is the learning factor and the delta $\delta_p(j)$ for output units and hidden units is given, respectively, by:

$$\delta_p(j) = f'\big(\mathrm{net}_p(j)\big)\,\big[ T_p(j) - O_p(j) \big] \qquad (7)$$

$$\delta_p(j) = f'\big(\mathrm{net}_p(j)\big) \sum_n \delta_p(n)\, w(n,j) \qquad (8)$$
Similarly, the hidden weights can be updated as follows:
$$w(j,i) \leftarrow w(j,i) + z\, e(j,i) \qquad (9)$$
where e(j, i) is the weight change and serves the same
purpose as the negative gradient in backpropagation
algorithm. By defining an objective function in terms of
mean squared error (MSE) for the jth unit as follows:
$$E_d(j) = \sum_{p=1}^{N_v} \Big[ \delta_p(j) - \sum_i e(j,i)\, O_p(i) \Big]^2 \qquad (10)$$
and taking the gradient of Ed(j) with respect to the weight
changes and setting it to zero, the following linear
equations are achieved:
$$\sum_i e(j,i)\, R_{oo}(i,m) = -\frac{\partial E}{\partial w(j,m)} \qquad (11)$$

where

$$R_{oo}(i,m) = \sum_{p=1}^{N_v} O_p(i)\, O_p(m) \qquad (12)$$
The steps of the OWO-HWO algorithm are listed below:

Step 1: Initialize all the weights and thresholds.
Step 2: Increase n by 1 and stop if n > N_it (N_it is the number of iterations).
Step 3: Apply the training pattern and calculate the output activation.
Step 4: Use the conjugate gradient approach to minimize error.
Step 5: If MSE(n) > MSE(n − 1), then reduce the value of z (learning factor), reload the previous best hidden weights, and go to step 9.
Step 6: If MSE(n) ≤ MSE(n − 1), then accumulate the cross-correlation $R_{\delta o}(m)$ and autocorrelation $R_{oo}(i,m)$ for hidden units:
$$R_{\delta o}(m) = \sum_{p=1}^{N_v} \delta_p(j)\, O_p(m)$$
$$R_{oo}(i,m) = \sum_{p=1}^{N_v} O_p(i)\, O_p(m)$$
Step 7: Solve the linear equations for the changes in hidden weights:
$$\sum_i e(j,i)\, R_{oo}(i,m) = R_{\delta o}(m)$$
Step 8: Calculate the learning factor:
$$z = \frac{-0.05\,E}{\sum_j \sum_i \dfrac{\partial E}{\partial w(j,i)}\, e(j,i)}$$
Step 9: Update the hidden weights: $w(j,i) \leftarrow w(j,i) + z\, e(j,i)$.
Step 10: Go to step 2.
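The HWO core of this loop (steps 6, 7, and 9) amounts to accumulating correlations over the training set and solving one linear system per hidden unit. A minimal sketch, assuming the deltas have already been computed and using a fixed learning factor in place of step 8:

```python
import numpy as np

def hwo_update(W_hid, X, delta, z=0.1):
    """One HWO hidden-weight update (steps 6, 7, 9), sketched.

    W_hid : hidden weights w(j, i), shape (M, N+1)
    X     : hidden-unit inputs x_p(i) for Nv patterns, shape (Nv, N+1)
    delta : desired net changes delta_p(j), shape (Nv, M)
    z     : learning factor (fixed here for simplicity; the paper
            computes it adaptively in step 8)
    """
    # Step 6: accumulate autocorrelation Roo(i, m) of the hidden-unit
    # inputs and cross-correlation Rdo between deltas and inputs.
    Roo = X.T @ X          # shape (N+1, N+1)
    Rdo = delta.T @ X      # shape (M, N+1)
    # Step 7: solve sum_i e(j, i) Roo(i, m) = Rdo(j, m) for the weight
    # changes e; lstsq guards against a singular Roo.
    E = np.linalg.lstsq(Roo, Rdo.T, rcond=None)[0].T   # shape (M, N+1)
    # Step 9: w(j, i) <- w(j, i) + z * e(j, i)   (Eq. (9))
    return W_hid + z * E
```

Because the system in step 7 is the normal equations of the per-unit least-squares problem in Eq. (10), feeding in deltas that are exactly a linear function of the inputs recovers that linear map.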
3.2 PSO algorithm
PSO was first proposed by Kennedy and Eberhart [105].
The main principle behind this optimization method is
communication. In PSO, there is a group of particles that
look for the best solution within the search area. If a par-
ticle finds a better value for the objective function, the
particle will communicate this result to the rest of the
particles. All the particles in the PSO algorithm have “memory,” and they modify these memorized values as the
optimization routine advances. In this algorithm, each
particle has a velocity and a position as follows [105]:
$$v_i(k+1) = v_i(k) + c_{1i}\big(P_i - x_i(k)\big) + c_{2i}\big(G - x_i(k)\big) \qquad (13)$$

$$x_i(k+1) = x_i(k) + v_i(k+1) \qquad (14)$$
where i is the particle index, k is the discrete time index,
vi is the velocity of ith particle, xi is position of ith particle,
Pi is the best position found by ith particle (personal best),
G is the best position found by swarm (global best), and c1i
and c2i are random numbers in the interval [0, 1] applied to
ith particle. In our simulations, the following equation is
used for velocity [106]:
$$v_i(k+1) = u(k)\, v_i(k) + a_1\big[ c_{1i}\big(P_i - x_i(k)\big) \big] + a_2\big[ c_{2i}\big(G - x_i(k)\big) \big] \qquad (15)$$
in which u(k) is the inertia function and a1 and a2 are the
acceleration constants.
A number of modifications to the basic PSO algorithm
have been developed to improve speed of convergence and
the quality of solutions found by PSO algorithm. These
modifications include the introduction of an inertia weight,
velocity clamping, velocity constriction, different ways of
determining the personal best and global best positions, and
different velocity models [107].
The constant a1 expresses how much confidence a par-
ticle has in itself, while a2 expresses how much confidence
a particle has in its neighbors. Particles draw their strength
from their cooperative nature and are most effective when
a1 and a2 coexist in a good balance, that is, a1 ≈ a2. Low
values for a1 and a2 result in smooth particle trajectories,
allowing particles to roam far from good regions to explore
before being pulled back toward good regions. High values
cause more acceleration, with abrupt movement toward or
past good regions.
In the early applications of the basic PSO algorithm, it
was found that the velocity quickly explodes to large
values, especially for particles far from the neighborhood
best and personal best positions. Consequently, particles
have large position updates, which result in particles
leaving the boundaries of the search space (divergence of
particles). To control the global exploration of particles,
velocities are clamped to stay within boundary constraints
[108]. If a particle’s velocity exceeds a specified maximum
velocity, the particle’s velocity is set to the maximum
velocity.
Let Vmax denote the maximum allowed velocity. Particle
velocity is then adjusted before the position update using the following equation:

$$v_i(k+1) = \begin{cases} v_i(k+1) & \text{if } v_i(k+1) < V_{\max} \\ V_{\max} & \text{if } v_i(k+1) \ge V_{\max} \end{cases} \qquad (16)$$
The value of Vmax is very important, since it controls the
granularity of the search by clamping escalating velocities.
Large values of Vmax facilitate global exploration, while
smaller values encourage local exploitation. If Vmax is too
small, the swarm may not explore sufficiently beyond
locally good regions. Also, too small values for Vmax
increase the number of time steps to reach an optimum.
Furthermore, the swarm may become trapped in a local
optimum, with no means of escape. On the other hand, too
large values of Vmax risk the possibility of missing a good
region.
In this paper, linear decreasing strategy has been used in
which an initially large inertia weight (i.e., 0.9) is linearly
decreased to a small value (i.e., 0.2) as follows:
$$u(k) = \big[ u(0) - u(N_T) \big] \frac{N_T - k}{N_T} + u(N_T) \qquad (17)$$
where NT is the maximum number of time steps for which
the algorithm is executed, u(0) is the initial inertia weight,
u(NT) is the final inertia weight, and u(k) is the inertia at
time step k.
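Equations (14)–(17) together can be sketched as a minimal PSO loop; the parameter values and the symmetric form of the velocity clamp are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def pso_minimize(f, dim, n_particles=20, n_steps=100,
                 a1=2.0, a2=2.0, u0=0.9, uT=0.2,
                 vmax=0.5, lo=-1.0, hi=1.0, seed=0):
    """Minimal PSO with inertia weight (Eq. 15), velocity clamping
    (Eq. 16, applied symmetrically here), and linearly decreasing
    inertia (Eq. 17). Returns the best position and its fitness."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros((n_particles, dim))
    pbest = x.copy()
    pbest_f = np.array([f(p) for p in x])
    g = pbest[np.argmin(pbest_f)].copy()        # global best
    for k in range(n_steps):
        u = (u0 - uT) * (n_steps - k) / n_steps + uT       # Eq. (17)
        c1 = rng.random((n_particles, dim))
        c2 = rng.random((n_particles, dim))
        v = u * v + a1 * c1 * (pbest - x) + a2 * c2 * (g - x)  # Eq. (15)
        v = np.clip(v, -vmax, vmax)                        # Eq. (16)
        x = x + v                                          # Eq. (14)
        fx = np.array([f(p) for p in x])
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        g = pbest[np.argmin(pbest_f)].copy()
    return g, pbest_f.min()
```

On a smooth unimodal objective such as the sphere function, this loop contracts the swarm toward the global best as the inertia decays.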
4 CUT and soft faults
In this paper, the CUT is a three-mode active filter circuit
that includes two integrator-loop active circuits. As shown
in Fig. 1, the CUT operates as lowpass, bandpass, and highpass filters at nodes 1, 2, and 3, respectively. The
circuit parameters are adjusted to achieve a cutoff fre-
quency of 10 kHz. The amplitude of input voltage to the
CUT is considered as 1 volt. Firstly, for each of the eight
circuit components, the soft faults are simulated by deviating each element by ±5 % around its nominal value. The
nominal values of the mentioned elements are listed in
Table 1.
Eight elements of the feature vector in the training data
set are selected among voltages of four points in the CUT
(i.e., V1, V2, V3, and V as shown in Fig. 1) and currents of
four branches in the CUT (i.e., I1, I4, I5, and If corre-
sponding to R1, R4, R5, and Rf, respectively). The men-
tioned values are normalized in our simulations. It is noted
that the trained neural models can perform the soft fault
diagnosis of the mentioned eight components at their out-
put layer.
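Normalizing the eight-element feature vectors can be sketched as below; min–max scaling is an assumption, since the paper states only that the values are normalized:

```python
import numpy as np

def normalize_features(F):
    """Column-wise min-max normalization of a feature matrix F whose
    rows are [V1, V2, V3, V, I1, I4, I5, If] measurements for one
    circuit condition. Each feature is scaled to [0, 1]."""
    fmin = F.min(axis=0)
    frange = np.ptp(F, axis=0)      # per-feature max - min
    frange[frange == 0] = 1.0       # guard against constant features
    return (F - fmin) / frange
```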
5 Proposed modular ANN for fault location
Neurobiological research suggests that modularity is a key element in the efficient and intelligent working of human and animal brains. Similarly, an artificial neural computational system can be considered to have a modular architecture with two or more subsystems, in which each
individual subsystem evaluates either distinct inputs or the
same inputs without communicating with other subsys-
tems. The overall output of the modular system depends on
integration/gating unit. Furthermore, in structural modu-
larization, a priori knowledge about a task can be intro-
duced into the structure of a neural network, which gives it
a meaningful structural representation.
In this paper, based on the fault location rate of eight
elements in the CUT, we use two expert networks in the
proposed modular ANN model, each with 8 inputs and 4
outputs (Fig. 2). In other words, soft faults corresponding
to R1, R2, R5, and C2 elements which have lower fault
location rates in our simulations using the base MLP
classifier trained by OWO-BP algorithm are considered as
a group and form the outputs of expert network 1 (their rates are shown, rounded, in the second column of Table 7 in the next section). Similarly, soft
faults corresponding to R3, R4, Rf, and C1 elements which
have higher fault location rates in our simulations, using
the mentioned base classifier, are considered as another
group and form the outputs of expert network 2.
There is an MLP-based gating network in the proposed
model, too. As shown in Fig. 2, all of the networks (both
expert and gating networks) receive the same input vector.
However, the only difference is that the gating network
uses this input vector to determine the confidence level for
the outputs of two expert networks and helps us to select
one of the expert networks in locating fault. On the other
hand, the expert networks use the input to generate an
estimate of the desired output value (correct location of
fault).
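The gating logic can be sketched as follows. The element groupings follow the paper; the callable interfaces for the gating and expert networks are assumptions for illustration:

```python
import numpy as np

# Element groupings from the proposed modular model.
EXPERT1_FAULTS = ["R1", "R2", "R5", "C2"]   # lower base location rates
EXPERT2_FAULTS = ["R3", "R4", "Rf", "C1"]   # higher base location rates

def locate_fault(x, gate, expert1, expert2):
    """Modular inference, sketched: the gating network scores the two
    experts on the shared 8-element input vector; the winning expert's
    highest output names the faulty element. gate/expert1/expert2 are
    assumed to be callables mapping the input to output activations."""
    confidence = gate(x)            # two scores, one per expert network
    if np.argmax(confidence) == 0:
        return EXPERT1_FAULTS[int(np.argmax(expert1(x)))]
    return EXPERT2_FAULTS[int(np.argmax(expert2(x)))]
```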
As mentioned earlier, PSO algorithm is adopted to
determine the optimal structure of expert and gating net-
works by optimizing the number of hidden layer nodes.
Fig. 1 Lowpass, bandpass, and highpass active filter circuit (CUT in
this study)

Table 1 Nominal values of elements in the CUT

Element        Nominal value
R1 = R2 = Rf   10 kΩ
R3             30 kΩ
R4 = R5        15.9 kΩ
C1 = C2        1 nF
[Fig. 2 Proposed PSO-optimized modular ANN for soft fault diagnosis
of analog filter circuit. The figure shows three optimized-structure
single-layer MLPs trained by OWO-HWO (expert network 1, expert
network 2, and the gating network), each receiving the same measured
inputs (labeled V1, V2, and If) and each structurally optimized by the
PSO algorithm; expert network 1 outputs the fault classes R1, R2, R5,
and C2, and expert network 2 outputs R3, R4, Rf, and C1.]
Neural Comput & Applic (2013) 23:519–530 525
123
6 Simulation and experimental results
The fault dictionary in our simulations consists of 4,800
records. We use 4,000 records as the training set and 800
records as the test set.
To evaluate the performance of the proposed PSO-optimized
modular ANN soft fault classifier, three benchmark models
have been simulated: (a) a conventional monolithic MLP
trained by the OWO-BP algorithm, (b) a conventional
monolithic MLP trained by the OWO-HWO algorithm, and (c) a
KNN-based classifier. In this work, all the simulation
programs have been written and compiled in MATLAB 7.10 and
run on a PC with an Intel Pentium E5300 CPU and 2 GB of RAM.
In the simulation of the first benchmark model, over several
experiments, setting the topology to 8–14–16–8 gives the
best average fault location rate when the "hyperbolic
tangent sigmoid" and "linear" functions are selected as the
transfer functions of the two hidden layers and the output
layer, respectively. "trainrp" is selected as the
backpropagation training function in the simulation of the
first benchmark model; it updates the weight and bias values
according to the resilient backpropagation (RPROP) algorithm
[109]. The number of epochs is set to 20,000. The training
parameters are set as follows: minimum performance
gradient = 1e-6, learning rate = 0.01, increment to weight
change = 1.2, decrement to weight change = 0.5, initial
weight change = 0.07, and maximum weight change = 50. The
training time of the MLP trained by the OWO-BP algorithm in
this simulation is 1,760 s, and the average fault location
rate of this system reaches 88.1 %.
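The RPROP rule behind "trainrp" can be sketched for a single weight using the parameter values listed above. This is a toy illustration on a one-dimensional quadratic, not the paper's code; the sign-flip handling (skipping the update when the gradient changes sign) follows the common iRPROP− variant.

```python
def rprop_step(w, grad, prev_grad, delta,
               inc=1.2, dec=0.5, delta_max=50.0, delta_min=1e-6):
    """One RPROP update for a single weight.

    Only the sign of the gradient is used; the per-weight step size
    `delta` grows by `inc` while the gradient keeps its sign and
    shrinks by `dec` when the sign flips (increment/decrement/max
    values match the paper's training setup).
    """
    s = grad * prev_grad
    if s > 0:                       # same direction: accelerate
        delta = min(delta * inc, delta_max)
    elif s < 0:                     # sign flip: back off, skip this step
        delta = max(delta * dec, delta_min)
        grad = 0.0
    if grad > 0:
        w -= delta
    elif grad < 0:
        w += delta
    return w, grad, delta

# Minimize f(w) = (w - 3)^2, whose gradient is 2*(w - 3).
w, prev_g, delta = 0.0, 0.0, 0.07   # initial step size from the paper
for _ in range(60):
    g = 2.0 * (w - 3.0)
    w, prev_g, delta = rprop_step(w, g, prev_g, delta)
print(round(w, 4))
```

Because only gradient signs are used, the step sizes adapt per weight, which is what makes RPROP less sensitive to the learning-rate setting than plain backpropagation.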
In the simulation of the second benchmark model, the
monolithic MLP trained by the OWO-HWO algorithm, several
single-layer MLPs have been simulated to achieve the best
fault location rate (Table 2). In our simulations, the number
of iterations (Nit) and the initial learning factor (z) are
set to 300 and 0.5, respectively.
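In OWO, the linear output weights are found by solving a set of linear equations rather than by gradient descent [52]. A toy version of that idea, for one linear output fed by two hidden activations, is sketched below; this is our own illustration, not the authors' implementation.

```python
def owo_output_weights(H, t):
    """Solve the linear output layer by least squares via the 2x2
    normal equations (each sample in H has two hidden activations).

    This is the OWO idea in miniature: with the hidden activations
    fixed, the optimal output weights solve a linear system.
    """
    # Accumulate H^T H (entries a, b, d) and H^T t (entries r0, r1).
    a = sum(h[0] * h[0] for h in H)
    b = sum(h[0] * h[1] for h in H)
    d = sum(h[1] * h[1] for h in H)
    r0 = sum(h[0] * ti for h, ti in zip(H, t))
    r1 = sum(h[1] * ti for h, ti in zip(H, t))
    # Solve the 2x2 system by Cramer's rule.
    det = a * d - b * b
    w0 = (d * r0 - b * r1) / det
    w1 = (a * r1 - b * r0) / det
    return w0, w1

# Targets generated exactly as t = 2*h0 - 1*h1, so the least-squares
# solution recovers the weights (2, -1).
H = [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0), (0.5, 0.2)]
t = [2.0 * h0 - 1.0 * h1 for h0, h1 in H]
w0, w1 = owo_output_weights(H, t)
print(round(w0, 6), round(w1, 6))
```

In the full algorithm one such linear solve replaces many gradient steps for the output layer, while HWO handles the hidden weights, which is why OWO-HWO converges faster than OWO-BP in the paper's experiments.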
As can be seen, the OWO-HWO training algorithm improves the
network convergence speed as compared to the OWO-BP training
algorithm. Also, the 8–80–8 topology gives the best average
fault location rate when the "hyperbolic tangent sigmoid"
and "linear" functions are selected as the transfer functions
of the hidden layer and the output layer, respectively. The
training time of the MLP with the 8–80–8 topology trained by
the OWO-HWO algorithm in this simulation is 693 s.
As the third benchmark model, the KNN algorithm is used, in
which an object is classified based on the closest training
examples in the feature space. The average fault location
rate for several different odd values of K is reported
in Table 3.
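The KNN benchmark can be sketched as follows. The distance metric and the toy one-dimensional "fault dictionary" are our assumptions; the paper only specifies classification by the closest training examples, with K = 7 giving the best rate in Table 3.

```python
import math
from collections import Counter

def knn_classify(x, train, k=7):
    """Classify x by majority vote among its k nearest training
    samples under Euclidean distance.

    `train` is a list of (feature_vector, label) pairs.
    """
    dist = lambda a, b: math.sqrt(sum((u - v) ** 2 for u, v in zip(a, b)))
    nearest = sorted(train, key=lambda s: dist(x, s[0]))[:k]
    vote = Counter(label for _, label in nearest)
    return vote.most_common(1)[0][0]

# Toy fault dictionary: two clusters of 1-D "measurements", one per
# hypothetical fault class.
train = [([0.1 * i], "R1 fault") for i in range(5)] + \
        [([1.0 + 0.1 * i], "R2 fault") for i in range(5)]
print(knn_classify([0.15], train, k=3))
```

Odd values of K (as in Table 3) avoid tied votes in two-class neighborhoods, which is why the table sweeps K = 1, 3, 5, ….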
To avoid experiments based on trial and error, the structure
of the modules in the proposed modular ANN model, which are
single-layer MLPs, is optimized by the PSO algorithm; that
is, the optimum number of hidden layer nodes is determined
by PSO. The PSO parameters in our simulations are set to the
values shown in Table 4.
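With the Table 4 settings, each PSO iteration updates a particle's velocity as v ← w·v + a1·r1·(pbest − x) + a2·r2·(gbest − x), with the inertia weight w decreasing linearly from 0.9 to 0.2 and |v| clamped at 4. A minimal sketch of how such a swarm could search the number of hidden nodes follows; the fitness function is a toy stand-in for the validation fault location rate of a trained module, and the search bounds and iteration count are our assumptions.

```python
import random

def pso_hidden_nodes(fitness, lo=5, hi=100, pop=20, iters=30,
                     vmax=4.0, a1=2.0, a2=2.0, w_start=0.9, w_end=0.2):
    """Search the number of hidden nodes with PSO, using the parameter
    settings of Table 4 (population 20, vmax 4, a1 = a2 = 2, inertia
    weight 0.9 -> 0.2). Positions are real-valued and rounded when
    evaluated, a common way to handle an integer design variable.
    """
    random.seed(0)
    x = [random.uniform(lo, hi) for _ in range(pop)]
    v = [0.0] * pop
    pbest = x[:]
    pfit = [fitness(round(p)) for p in x]
    g = pbest[pfit.index(max(pfit))]
    for it in range(iters):
        w = w_start + (w_end - w_start) * it / (iters - 1)  # 0.9 -> 0.2
        for i in range(pop):
            v[i] = (w * v[i]
                    + a1 * random.random() * (pbest[i] - x[i])
                    + a2 * random.random() * (g - x[i]))
            v[i] = max(-vmax, min(vmax, v[i]))   # clamp particle velocity
            x[i] = max(lo, min(hi, x[i] + v[i]))
            f = fitness(round(x[i]))
            if f > pfit[i]:
                pbest[i], pfit[i] = x[i], f
        g = pbest[pfit.index(max(pfit))]
    return round(g)

# Toy objective peaking at 77 hidden nodes (the gating network's
# optimum in Table 5); the real objective would train and validate a
# module for each candidate size.
best = pso_hidden_nodes(lambda n: -abs(n - 77))
print(best)
```

In the actual scheme each fitness evaluation is expensive (it trains an MLP with OWO-HWO), which is why the swarm and iteration budget in Table 4 are kept small.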
The specifications and average fault location rate of the
proposed PSO-optimized modular ANN model and its modules,
when the component values are tolerated by ±5 %, are
reported in Table 5.
It is noted that the modules of the proposed model are
trained using the OWO-HWO algorithm. In our
Table 2 Performance of different MLPs trained by OWO-HWO algorithm

Network topology   Number of epochs   Fault location rate (%)
8–15–8             200                70.5
8–20–8             800                67.1
8–25–8             1,000              73.3
8–35–8             1,000              74.4
8–40–8             1,000              84.9
8–45–8             1,000              83.5
8–50–8             1,000              84.6
8–70–8             300                86.4
8–80–8             300                90.5
8–85–8             300                86.6
8–90–8             500                87.4
Table 3 Average fault location rate when using KNN algorithm

K    Fault location rate (%)
1    87.1
3    88.1
5    88.6
7    88.8
9    87.2
11   86.8
13   86.5
15   86.1
Table 4 PSO parameters in simulations of proposed modular neural model

Parameter                   Value
Population size             20
Maximum particle velocity   4
a1 = a2                     2
Initial inertia weight      0.9
Final inertia weight        0.2
simulations, the number of iterations (Nit) and the initial
learning factor (z) are set to 200 and 0.5, respectively. The
average fault location rate of the proposed PSO-optimized
modular ANN model and its modules for some other tolerance
values of the components is reported in Table 6.
The performance of the simulated models in this study for
different single soft faults is given in Table 7.
As shown in Table 7, the PSO-optimized modular neural model
trained by the OWO-HWO algorithm performs better in fault
diagnosis, especially for some soft faults such as the
parametric fault in R2, as compared to the other simulated
models, with a significantly reduced training time. Also,
the overall fault location rate of the proposed model is
about 98.6 %, which is the best among the simulated models
in this study.
7 Conclusion
In this paper, a modular neural model has been proposed for
soft fault diagnosis of analog circuits. A three-mode active
filter circuit has been selected as the circuit under test
(CUT). To optimize the structure of the ANN modules in the
proposed scheme, the PSO algorithm has been used to
determine the number of hidden layer nodes. In addition, the
OWO-HWO training algorithm has been employed, instead of the
conventional OWO-BP algorithm, to improve the convergence
speed of the ANN modules of the proposed modular model.
The performance of the proposed scheme has been compared to
monolithic MLPs trained by the OWO-BP and OWO-HWO
algorithms, the conventional KNN algorithm, and a similar
work with the same CUT (Table 8). It is noted that the
component tolerance in this paper is set to different values
(±5, ±10, ±30, and ±40 %), whereas in the system reported in
[110] this tolerance was ±50 %, which simplified the fault
location task as compared to our study.
Experimental results have shown that the PSO-optimized
Table 5 Specifications and performance of proposed PSO-optimized
modular neural model, tolerance of components: ±5 %

Network                  Network topology     Number of         Number of     Number of  Training  Fault location
                         (optimized by PSO)   training samples  test samples  epochs     time (s)  rate (%)
Gating network           8–77–2               4,000             800           100        164       98.8
Expert network 1         8–54–4               2,000             396           600        292       99.5
Expert network 2         8–51–4               2,000             404           250        206       97.8
Proposed modular model   –                    –                 –             –          662       98.6
Table 6 Fault location rate of proposed PSO-optimized modular neural
model, tolerance of components: ±10, ±30, and ±40 %

Network                  Tolerance of components (%)
                         10     30     40
Gating network           98.9   99.0   99.9
Expert network 1         99.0   99.8   99.8
Expert network 2         98.8   99.0   99.5
Proposed modular model   98.9   99.4   99.6
Table 7 Performance of simulated models for different soft single faults

Element   Fault location rate (%)
          MLP trained   MLP trained    KNN         PSO-optimized modular
          by OWO-BP     by OWO-HWO     algorithm   neural model
R1        99            89             100         97
R2        12            76             48          100
R3        100           77             63          100
R4        99            91             100         100
R5        98            97             99          98
Rf        100           95             100         98
C1        99            100            100         96
C2        98            99             100         100
Table 8 Performance comparison of different models in soft fault
location of 3-mode active filter circuit

Model                                               Fault location rate (%)
Monolithic MLP [110]                                99.3 (a)
Monolithic MLP with 2 hidden layers trained by
  OWO-BP algorithm (simulated in this study)        88.1 (b, c)
Monolithic MLP with single hidden layer trained
  by OWO-HWO algorithm (simulated in this study)    90.5 (b, c)
KNN classifier (K = 7) (simulated in this study)    88.8 (c)
PSO-optimized modular ANN trained by OWO-HWO
  algorithm (proposed in this study)                98.6 (c), 98.9 (d), 99.4 (e), 99.6 (f)

(a) The component values were tolerated by ±50 %
(b) The best result among different simulated MLPs is reported
(c) The component values were tolerated by ±5 %
(d) The component values were tolerated by ±10 %
(e) The component values were tolerated by ±30 %
(f) The component values were tolerated by ±40 %
modular neural model trained by the OWO-HWO algorithm offers
higher correct fault location rates in analog circuit fault
diagnosis as compared to the classic and monolithic neural
models investigated.
References
1. Parten C, Saeks R, Pap R (1991) Fault diagnosis and neural
networks. In: The proceedings of the IEEE international con-
ference on systems, man and cybernetics, pp 1517–1521
2. Catelani M, Gori M (1996) On the application of neural
networks to fault diagnosis of electronic analog circuits. Mea-
surement 17:73–80
3. El-Gamal MA, Abu El-Yazeed MF (1999) A combined clus-
tering and neural network approach for analog multiple hard
fault classification. J Elec Testing Theory Appl 14:207–217
4. El-Gamal MA, Abdulghafour M (2003) Fault isolation in analog
circuits using a fuzzy inference system. Comput Electr Eng
29:213–229
5. Ma H-G, Zhu X-F, Ai M-S, Wang J-D (2007) Fault diagnosis for
analog circuits based on chaotic signal excitation. J Franklin Inst
344:1102–1112
6. Cui J, Wang Y (2011) A novel approach of analog circuit fault
diagnosis using support vector machine classifier. Measurement
44:281–289
7. Shashank SB, Wajid M, Mandavalli S (2012) Fault detection in
resistive ladder network with minimal measurements. Microelectron
Reliab (Article in Press, doi:10.1016/j.microrel.2011.12.012,
Published online 21 Jan 2012)
8. Qu H, Xu W, Yu Y (2007) Design of neural network output
layer in fault diagnosis of analog circuit. In: The proceedings of
the IEEE international conference on electronic measurement
and instruments, pp 3-639–3-642
9. Fan B, Xue P, Liu J, Dong M (2009) Analog circuit fault
diagnosis based on neural network and fuzzy logic. In: The
proceedings of control and decision conference, pp 199–202
10. Gu Y, Hu Z, Liu T (2010) Fault diagnosis for analog circuits
based on support vector machines. In: The proceedings of
international conference on wireless networks and information
systems, pp 197–200
11. Zuo L, Hou L, Wu W, Wang J, Geng S (2009) Fault diagnosis of
analog IC based on wavelet neural network ensemble. Lecture
Notes Comput Sci 5553:772–779
12. Chruszczyk L (2011) Fault diagnosis of analog electronic
circuits with tolerances in mind. In: The proceedings of inter-
national conference on mixed design of integrated circuits and
systems, pp 496–501
13. Epstein BR, Czigler M, Miller SR (1993) Fault detection and
classification in linear integrated circuits: an application of
discrimination analysis and hypothesis testing. IEEE Trans
Comput-Aided Design 12:102–113
14. Catelani M, Fort A, Alippi C (2002) A fuzzy approach for soft
fault detection in analog circuits. Measurement 32:73–83
15. El-Gamal MA (1997) A knowledge-based approach for fault
detection and isolation in analog circuits. In: The proceedings of
international conference on neural networks, vol 3, pp 1580–1584
16. Catelani M, Fort A (2002) Soft fault detection and isolation in
analog circuits: some results and a comparison between a fuzzy
approach and radial basis function networks. IEEE Trans
Instrum Meas 51:196–202
17. Kalpana P, Gunavathi K (2008) Wavelet based fault detection in
analog VLSI circuits using neural networks. Appl Soft Comput
8:1592–1598
18. Isermann R (1997) Supervision, fault-detection and fault diag-
nosis methods: an introduction. Control Eng Prac 5:639–652
19. Jantos P, Grzechca D, Golonek T, Rutkowski J (2008) Heuristic
methods to test frequencies optimization for analogue circuit
diagnosis. Bull Polish Acad Sci Tech Sci 56:29–38
20. Fenton W, McGinnity TM, Maguire LP (2001) Fault diagnosis
of electronic systems using intelligent techniques: a review.
IEEE Trans Syst Man Cybern C Appl Rev 31:269–281
21. Erdogan ES, Ozev S, Cauvet P (2008) Diagnosis of assembly
failures for system-in-package RF tuners. In: The proceedings of
the IEEE international symposium on circuits and systems,
pp 2286–2289
22. Somayajula SS, Sanchez-Sinencio E, Pineda de Gyvez J (1996)
Analog fault diagnosis based on ramping power supply current
signature clusters. IEEE Trans Circuits Syst II Analog Digit
Signal Process 43:703–712
23. Maidon Y, Jervis BW, Dutton N, Lesage S (1997) Diagnosis of
multifaults in analogue circuits using multilayer perceptrons.
Proc Inst Electr Eng-Circuits Devices Syst 144:149–154
24. Chakrabarti S, Cherubal S, Chatterjee A (1999) Fault diagnosis
for mixed-signal electronic systems. In: The proceedings of the
IEEE aerospace conference, pp 169–179
25. Cota EF, Negreiros M, Carro L, Lubaszewski M (2000) A new
adaptive analog test and diagnosis system. IEEE Trans Instrum
Meas 49:223–227
26. Simeu E, Mir S (2005) Parameter identification based diagnosis in
linear and nonlinear mixed-signal systems. In: The proceedings of
international workshop on mixed-signals test, pp 140–147
27. Ke H, Stratigopoulos HG, Mir S (2010) Fault diagnosis of
analog circuits based on machine learning. In: The proceedings
of the European conference and exhibition on design, automa-
tion and test, pp 1761–1766
28. Kyziol P, Grzechca D, Rutkowski J (2009) Multidimensional
search space for catastrophic faults diagnosis in analog elec-
tronic circuits. In: The proceedings of the European conference
on circuit theory and design, pp 555–558
29. Tadeusiewicz M, Sidyk P, Halgas S (2007) A method for
multiple fault diagnosis in dynamic analogue circuits. In: The
proceedings of the European conference on circuit theory and
design, pp 834–837
30. Chruszczyk L, Rutkowski J (2008) Excitation optimization in
fault diagnosis of analog electronic circuits. In: The proceedings
of the IEEE workshop on design and diagnostics of electronic
circuits and systems, pp 1–4
31. Nissar AI, Upadhyaya SJ (1999) Fault diagnosis of mixed signal
VLSI systems using artificial neural networks. In: The pro-
ceedings of the southwest symposium on mixed-signal design,
pp 93–98
32. Wang R, Zhan Y, Zhou H (2012) Application of S transform in
fault diagnosis of power electronics circuits. Scientia Iranica
(Article in Press, doi:10.1016/j.scient.2011.06.013, Published
online 18 Jan 2012)
33. Long B, Huang J, Tian S (2008) Least squares support vector
machine based analog-circuit fault diagnosis using wavelet trans-
form as preprocessor. In: The proceedings of international confer-
ence on communications, circuits and systems, pp 1026–1029
34. Zhou L-F, Shi Y-B, Zhang W (2009) Soft fault diagnosis of
analog circuit based on particle swarm optimization. J Electron
Sci Technol China 7:358–361
35. Yuan T, Xie Y, Chen G (2009) Analog circuit fault diagnosis
using higher order cumulants. In: The proceedings of interna-
tional conference on electronic measurement and instruments,
vol 4, pp 1023–1027
36. Xiao Y, Feng L (2012) A novel linear ridgelet network approach
for analog fault diagnosis using wavelet-based fractal analysis
and kernel PCA as preprocessors. Measurement 45:297–310
37. Bo F, Peng X, Junjie L, Ming D (2009) Analog circuit fault diag-
nosis based on neural network and fuzzy logic. In: The proceedings
of Chinese conference on control and decision, pp 199–202
38. Tang J, Shi Y, Jiang D (2009) Analog circuit fault diagnosis
with hybrid PSO-SVM. In: The proceedings of the IEEE circuits
and systems international conference on testing and diagnosis,
pp 1–5. doi:10.1109/CAS-ICTD.2009.4960778
39. Li H, Zhang Y (2009) An algorithm of soft fault diagnosis for
analog circuit based on the optimized SVM by GA. In: The
proceedings of international conference on electronic measure-
ment and instruments, vol 4, pp 1023–1027
40. Stosovic MA, Litovski V (2010) Hierarchical approach to
diagnosis of electronic circuits using ANNs. In: The proceedings
of 10th symposium on neural network applications in electrical
engineering, pp 117–122
41. Liu H, Chen G, Song G, Han T (2009) Analog circuit fault
diagnosis using bagging ensemble method with cross-validation.
In: The proceedings of international conference on mechatronics
and automation, pp 4430–4434
42. Grasso F, Luchetta A, Manetti S, Piccirilli MC (2010) Symbolic
techniques in neural network based fault diagnosis of analog
circuits. In: The proceedings of international workshop on
symbolic and numerical methods, modeling and applications to
circuit design, pp 1–4
43. Li X, Zhang Y, Wang S, Zhai G (2011) A method for analog
circuits fault diagnosis by neural network and virtual instru-
ments. In: The proceedings of international workshop on intel-
ligent systems and applications, pp 1–5
44. Rutkowski G (1992) A neural approach to fault location in
nonlinear DC circuits. In: The proceedings of international
conference on artificial neural networks, pp 1123–1126
45. Yu S, Jervis B, Eckersall K, Bell I, Hall A, Taylor G (1994)
Neural network approach to fault diagnosis in CMOS opamps
with gate oxide short faults. Electron Lett 30:695–696
46. Fanni A, Giua A, Sandoli E (1996) Neural networks for multiple fault
diagnosis in analog circuits. In: Neural networks theory, technology
and applications (IEEE technology update series), pp 745–752
47. Liu H, Chen G, Jiang S, Song G (2008) A survey of feature
extraction approaches in analog circuit fault diagnosis. In: The
proceedings of the IEEE Pacific-Asia workshop on computa-
tional intelligence and industrial application, pp 676–680
48. Catelani M, Fort A (2000) Fault diagnosis of electronic analog
circuits using a radial basis function network classifier.
Measurement 28:147–158
49. Litovski V, Andrejevic M, Zwolinski M (2006) Analogue
electronic circuit diagnosis based on ANNs. Microelectron
Reliab 46:1382–1391
50. Mohammadi K, Seyyed Mahdavi SJ (2008) On improving
training time of neural networks in mixed signal circuit fault
diagnosis applications. Microelectron Reliab 48:781–793
51. Xiao Y, He Y (2011) A novel approach for analog fault diag-
nosis based on neural networks and improved kernel PCA.
Neurocomputing 74:1102–1115
52. Chen HH, Manry MT, Chandrasekaran H (1996) A neural
network training algorithm utilizing multiple sets of linear equa-
tions. In: The proceedings of Asilomar conference on signals,
systems and computers, pp 1166–1170
53. Sheikhan M, Sha’bani AA (2009) Fast neural intrusion detection
system based on hidden weight optimization algorithm and feature
selection. World Appl Sci J 7 (Special Issue of Computer and
IT):45–53
54. Bandler JW (1985) Fault diagnosis of analog circuits. Proc IEEE
73:1279–1325
55. Xiao Y, Feng L (2012) A novel neural-network approach of
analog fault diagnosis based on kernel discriminant analysis and
particle swarm optimization. Appl Soft Comput 12:904–920
56. He W, Wang P (2010) Analog circuit fault diagnosis based on
RBF neural network optimized by PSO algorithm. In: The
proceedings of international conference on intelligent compu-
tation technology and automation, vol 1, pp 628–631
57. Yin S, Chen G, Xie Y (2006) Wavelet neural network based
fault diagnosis in nonlinear analog circuits. J Syst Eng Electron
17:521–526
58. Luo H, Wang Y, Cui J (2011) A SVDD approach of fuzzy
classification for analog circuit fault diagnosis with FWT as
preprocessor. Expert Syst Appl 38:10554–10561
59. Czaja Z, Zielonko R (2004) On fault diagnosis of analogue
electronic circuits based on transformations in multi-dimen-
sional spaces. Measurement 35:293–301
60. He Y, Zhu W (2009) Fault diagnosis of nonlinear analog circuits
using neural networks and multi-space transformations. Lect
Notes Comput Sci 5553:714–723
61. Seyyed Mahdavi SJ, Mohammadi K (2009) Evolutionary deri-
vation of optimal test sets for neural network based analog and
mixed signal circuits fault diagnosis approach. Microelectron
Reliab 49:199–208
62. Jain V, Pillai GN, Gupta I (2011) Fault diagnosis in analog
circuits using multiclass relevance vector machine. In: The
proceedings of international conference on emerging trends in
electrical and computer technology, pp 641–643
63. Jantos P, Grzechca D, Golonek T, Rutkowski J (2007) Gene
expression programming-based method of optimal frequency set
determination for purpose of analogue circuits’ diagnosis. Adv
Soft Comput 45:794–801
64. Bilski P, Wojciechowski JM (2007) Automated diagnostics of
analog systems using fuzzy logic approach. IEEE Trans Instrum
Meas 56:2175–2185
65. Shen L, Tay FEH, Qu L, Shen Y (2000) Fault diagnosis using
rough sets theory. Comput Ind 43:61–72
66. Hu M, Wang H, Hu G, Yang S (2007) Soft fault diagnosis for
analog circuits based on slope fault features and BP neural
networks. Tsinghua Sci Technol 12:26–31
67. Wang L, Liu Y, Li X, Guan J, Song Q (2010) Analog circuit
fault diagnosis based on distributed neural network. J Comput
5:1747–1754
68. Luo H, Wang Y, Lin H, Jiang Y (2012) Module level fault diagnosis
for analog circuits based on system identification and genetic
algorithm. Measurement (Article in Press, doi:10.1016/
j.measurement.2011.12.010, Published online 8 Jan 2012)
69. Deng Y, He Y, Sun Y (2000) Fault diagnosis of analog circuits
with tolerances using artificial neural networks. In: The pro-
ceedings of the IEEE Asia-Pacific conference on circuits and
systems, pp 292–295
70. Mohammadi K, Mohseni Monfared AR, Molaei Nejad A (2002)
Fault diagnosis of analog circuits with tolerances by using RBF
and BP neural networks. In: The proceedings of student con-
ference on research and development, pp 317–321
71. Han B, Wu H (2009) Based on compact type of wavelet neural
network tolerance analog circuit fault diagnosis. In: The proceed-
ings of international conference on information engineering and
computer science, pp 1–4. doi:10.1109/ICIECS.2009.5363065
72. Kugarajah T, Zhang Q (2005) Multidimensional wavelet frames.
IEEE Trans Neural Netw 6:1552–1556
73. Yuan L, He Y, Huang J, Sun Y (2010) A new neural-network-based
fault diagnosis approach for analog circuits by using kurtosis and
entropy as a preprocessor. IEEE Trans Instrum Meas 59:586–595
74. He Y, Tan Y, Sun Y (2004) Wavelet neural network approach
for fault diagnosis of analogue circuits. IEEE Proc Circ Devices
Syst 151:379–384
75. Lee CM, Ko CN (2009) Time series prediction using RBF
neural networks with a nonlinear time-varying evolution PSO
algorithm. Neurocomputing 73:449–460
76. Yu J, Wang S, Xi L (2008) Evolving artificial neural networks
using an improved PSO and DPSO. Neurocomputing 71:1054–
1060
77. Leung SYS, Tang Y, Wong WK (2011) A hybrid particle swarm
optimization and its application in neural networks. Expert
Syst Appl (Article in Press, doi:10.1016/j.eswa.2011.07.028,
Published online 22 July 2011)
78. Luitel B, Venayagamoorthy GK (2010) Quantum inspired PSO
for the optimization of simultaneous recurrent neural networks
as MIMO learning systems. Neural Netw 23:583–586
79. Zhang JR, Zhang J, Lok TM, Lyu MR (2007) A hybrid particle
swarm optimization-back propagation algorithm for feedforward
neural network training. Appl Math Comput 185:1026–1037
80. Shen W, Guo X, Wu C, Wu D (2011) Forecasting stock indices
using radial basis function neural networks optimized by artifi-
cial swarm algorithm. Knowl-Based Syst 24:378–385
81. Li J, Liu X (2011) Melt index prediction by RBF neural network
optimized with an MPSO-SA hybrid algorithm. Neurocomput-
ing 74:735–740
82. Li M, He Y, Yuan L (2010) Fault diagnosis of analog circuit
based on wavelet neural networks and chaos differential evo-
lution algorithm. In: The proceedings of international confer-
ence on electrical and control engineering, pp 986–989
83. Sharkey AJ (1996) On combining artificial neural networks.
Connect Sci 8:299–313
84. Sharkey AJ (1997) Modularity, combining and artificial neural
nets. Connect Sci 9:3–10
85. Gomi H, Kawato M (1993) Recognition of manipulated objects
by motor learning with modular architecture networks. Neural
Netw 6:485–497
86. Lee T (1991) Structure level adaptation for artificial neural
networks. Kluwer, The Netherlands
87. Osherson DN, Weinstein S, Stoli M (1990) Modular learning.
In: Computational Neuroscience, pp 369–377
88. Kosslyn SM (1994) Image and brain. MIT Press, Cambridge
89. Haykin S (1994) Neural networks, a comprehensive foundation.
Macmillan College Publishing Company, UK
90. Jordan MI, Jacobs RA (1991) Task decomposition through
competition in a modular connectionist architecture: the what
and where vision tasks. Cogn Sci 15:219–250
91. French RM (1999) Catastrophic forgetting in connectionist
networks. Trends Cogn Sci 3:128–135
92. Auda G, Kamel M (1998) Modular neural networks classifiers: a
comparative study. J Intell Rob Syst 21:117–129
93. Raafat H, Rashwan M (1993) A tree structured neural network.
In: The proceedings of international conference on document
analysis and recognition, pp 939–942
94. Auda G, Kamel M, Raafat H (1996) Modular neural network
architecture for classification. In: The proceedings of the IEEE
international conference on neural networks, vol 2, pp 1279–1284
95. Auda G, Kamel M, Raafat H (1995) Modular neural network
architecture for classification. In: The proceedings of the IEEE
international conference on neural networks, vol 3, pp 1240–
1243
96. Waibel A (1989) Modular construction of time-delay neural
networks for speech recognition. Neural Comput 1:39–46
97. Jacobs RA, Jordan MI, Nowlan SJ, Hinton GE (1991) Adaptive
mixtures of local experts. Neural Comput 3:79–87
98. Becker S, Hinton GE (1993) Learning mixture-models of spatial
coherence. Neural Comput 5:267–277
99. Alpaydin E, Jordan MI (1996) Local linear perceptrons for
classification. IEEE Trans Neural Netw 7:788–792
100. Tan Y, He Y, Fang G (2007) Hierarchical neural networks
method for fault diagnosis of large-scale analog circuits.
Tsinghua Sci Technol 12:260–265
101. Sartori MA, Antsaklis PJ (1991) A simple method to derive
bounds on the size and to train multilayer neural networks. IEEE
Trans Neural Netw 2:467–471
102. Rohani K, Chen MS, Manry MT (1992) Neural subnet design
by direct polynomial mapping. IEEE Trans Neural Netw
3:1024–1026
103. Werbos P (1988) Backpropagation: past and future. In: The
proceedings of the IEEE international conference on neural
networks, pp 343–353
104. Scalero RS, Tepedelenlioglu N (1992) A fast new algorithm for
training feedforward neural networks. IEEE Trans Signal
Process 40:202–210
105. Kennedy J, Eberhart R (1995) Particle swarm optimization. In:
The proceedings of the IEEE international conference on neural
networks, vol 4, pp 1942–1948
106. Shi Y, Eberhart R (1998) Parameter selection in particle swarm
optimization. In: The proceedings of international conference on
evolutionary programming, pp 591–601
107. Engelbrecht AP (2007) Computational intelligence: an intro-
duction, 2nd edn. Wiley, Chapter 16, pp 289–357
108. Eberhart RC, Simpson PK, Dobbins RW (1996) Computational
intelligence PC tools, 1st edn. Academic Press Professional
109. Riedmiller M, Braun H (1993) A direct adaptive method for
faster backpropagation learning: The RPROP algorithm. In: The
proceedings of the IEEE international conference on neural
networks, pp 586–591
110. Abu El-Yazeed MF, Mohsen AAK (2003) A preprocessor for
analog circuit fault diagnosis based on Prony's method. Int J
Electron Commun 57:16–22