amit seminar report(viii sem) final
TRANSCRIPT
-
8/7/2019 AMIT SEMINAR REPORT(VIII SEM) FINAL
1/25
1
DIRECT TORQUE CONTROL OF INDUCTION MOTOR
USING ARTIFICIAL NEURAL NETWORK
A
SEMINAR REPORT
Submitted for the Partial fulfillment of the requirement of the
Degree
Of
BACHELOR OF TECHNOLOGY
in
ELECTRICAL ENGINEERING
Guided By: Mr. Harish Khyani (Sr. Lecturer)
Submitted By: Mr. Amit Mathur (IV B.Tech., Electrical Engg.)
Department of Electrical Engineering
Jodhpur Institute of Engineering & Technology, Jodhpur
Rajasthan Technical University, Kota (Raj.)
2011
CERTIFICATE
This is to certify that the Seminar Report entitled FACE RECOGNITION
USING ARTIFICIAL NEURAL NETWORK submitted by Mr. Nikhil
Mathur for the partial fulfillment of the requirement of the Degree of Bachelor
of Technology in Electrical Engineering of Jodhpur Institute of Engineering &
Technology, Jodhpur, is a record of the seminar work carried out by him.
Mr. Harish Khyani (Sr. Lecturer)
Prof. Kusum Agarwal (Head, Electrical Engg.)
Date:
Place: Jodhpur
ACKNOWLEDGEMENT
The compilation of this seminar would not have been possible without
the support and guidance of the following people and organizations. With a
deep sense of gratitude, I thank my respected teachers for supporting this topic
of my seminar. This seminar report provided me with an opportunity to put
my knowledge of advanced technology into practice. I hereby take this
opportunity to thank my guide and my friends, whose help and guidance made
this study possible.
As a student, I learn many things, but unless I combine them with practical
knowledge of how things really work and what problems generally
arise, I cannot expect to be an efficient student. I therefore consider the summer
project an indispensable part of the course.
My guide's dedication and sincerity towards the project helped me greatly in
completing the project report and gave it its present attractive look.
Last but not least, I would again like to express my sincere thanks to
all project guides for their constant, friendly guidance during the entire stretch of
this report. Every new step I took was due to their persistent, enthusiastic
backing, and I acknowledge this with a deep sense of gratitude.
Date: Mr. Amit Mathur
Place: Jodhpur (IV B.Tech., Electrical Engg.)
ABSTRACT
This paper presents an improved direct torque control of an induction machine based on
artificial neural networks. This intelligent technique is used to replace, on the one hand,
the conventional comparators and the selection table in order to reduce the torque, flux
and stator current ripple, and, on the other hand, the classic proportional-integral (PI)
controller in order to improve the response time of the system, to optimize the
performance of the closed-loop control, and to adapt the regulator parameters to changes
in the reference level. The rotor speed is then estimated using the Model Reference
Adaptive System (MRAS) method, based on measurements of the electrical quantities of
the machine. The validity of the proposed methods is confirmed by the simulation results.
CONTENTS
1. Introduction to A.N.N.
2. Resemblance with brain
3. Structure of neural network
4. Architecture of neural networks
5. Principle of DTC
6. Principle of artificial neural network
7. Learning algorithms in neural networks
8. ANN structure for direct torque control
9. Simulation and interpretation of results
10. Conclusion and future work
CHAPTER 1.
INTRODUCTION
A neural network is a powerful data modeling tool that is able to capture and represent
complex input/output relationships. In the broader sense, a neural network is a collection of
mathematical models that emulate some of the observed properties of biological nervous
systems and draw on the analogies of adaptive biological learning. It is composed of a large
number of highly interconnected processing elements that are analogous to neurons and are
tied together with weighted connections that are analogous to synapses.
To be more clear, let us study the model of a neural network with the help of Figure 1.1.
The most common neural network model is the multilayer perceptron (MLP). It is composed
of hierarchical layers of neurons arranged so that information flows from the input layer to the
output layer of the network. The goal of this type of network is to create a model that correctly
maps the input to the output using historical data, so that the model can then be used to produce
the output when the desired output is unknown.
Figure 1.1. Graphical representation of MLP
CHAPTER 2.
RESEMBLANCE WITH BRAIN
The brain is principally composed of about 10 billion neurons, each connected to
about 10,000 other neurons. Each neuron receives electrochemical inputs from other neurons
at the dendrites. If the sum of these electrical inputs is sufficiently powerful to activate the
neuron, it transmits an electrochemical signal along the axon and passes this signal to the
other neurons whose dendrites are attached at any of the axon terminals. These attached
neurons may then fire.
So, our entire brain is composed of these interconnected electrochemical
transmitting neurons. From a very large number of extremely simple processing units (each
performing a weighted sum of its inputs, and then firing a binary signal if the total input
exceeds a certain level) the brain manages to perform extremely complex tasks. This is the
model on which artificial neural networks are based.
A neural network is a sequence of neuron layers. A neuron is a building block of a neural
net. It is very loosely based on the brain's nerve cell. Neurons receive inputs via weighted
links from other neurons. These inputs are processed according to the neuron's activation
function, and signals are then passed on to other neurons.
In a more practical way, neural networks are made up of interconnected processing
elements called units, which are equivalent to the brain's counterpart, the neurons.
A neural network can be considered as an artificial system that can perform
"intelligent" tasks similar to those performed by the human brain. Neural networks resemble
the human brain in the following ways:
1. A neural network acquires knowledge through learning.
2. A neural network's knowledge is stored within inter-neuron connection strengths
known as synaptic weights.
3. Neural networks modify their own topology, just as neurons in the brain can die and
new synaptic connections can grow.
Graphically, let us compare an artificial neuron and a neuron of the brain with the help of
Figures 2.1 and 2.2 given below.
Figure 2.1. Neuron of an artificial neural network
Figure 2.2. Neuron of a brain
CHAPTER 3.
STRUCTURE OF NEURAL NETWORK
According to Frank Rosenblatt's theory in 1958, the basic element of a neural network
is the perceptron, which in turn has 5 basic elements: an n-vector input, weights, a summing
function, a threshold device, and an output. Outputs are in the form of -1 and/or +1. The
threshold device has a setting which governs the output based on the summation of the input
vectors. If the summation falls below the threshold setting, -1 is the output. If the summation
exceeds the threshold setting, +1 is the output. Figure 3.1 depicts the structure of a basic
perceptron, which is also called an artificial neuron.
Figure 3.1. Artificial Neuron (Perceptron)
The perceptron can also be treated as a mathematical model of a biological neuron.
While in actual neurons the dendrite receives electrical signals from the axons of other
neurons, in the perceptron these electrical signals are represented as numerical values.
A more technical investigation of a single-neuron perceptron shows that it can have an
input vector X of N dimensions (as illustrated in Figure 3.2). These inputs go through a vector W
of weights of N dimensions. Processed by the summation node, "a" is generated, where "a" is
the dot product of vectors X and W plus a bias. "a" is then processed through an activation
function which compares the value of "a" to a predefined threshold. If "a" is below the
threshold, the perceptron will not fire. If it is above the threshold, the perceptron will fire one
pulse whose amplitude is predefined.
Figure 3.2. Mathematical model of a perceptron
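As a sketch of the threshold behaviour described above, the perceptron can be written in a few lines of Python. The input, weight and bias values below are illustrative only, not taken from the report:

```python
import numpy as np

def perceptron(x, w, bias, threshold=0.0):
    """Basic perceptron: output +1 if the weighted sum of the inputs
    (dot product of X and W, plus the bias) exceeds the threshold,
    otherwise output -1."""
    a = np.dot(x, w) + bias          # summation node
    return 1 if a > threshold else -1

x = np.array([0.5, -1.0, 2.0])       # example 3-dimensional input vector X
w = np.array([0.4, 0.3, 0.6])        # example weight vector W
print(perceptron(x, w, bias=0.1))    # weighted sum is 1.2 > 0, so +1
```

Here the activation function is a hard limiter, matching the -1/+1 outputs described in this chapter.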
CHAPTER 4.
ARCHITECTURE OF NEURAL NETWORK
4.1. Feed-forward networks:
Feed-forward ANNs allow signals to travel one way only: from input to output. There
is no feedback (loops), i.e. the output of any layer does not affect that same layer. Feed-
forward ANNs tend to be straightforward networks that associate inputs with outputs. They
are extensively used in pattern recognition. This type of organisation is also referred to as
bottom-up or top-down.
4.2. Feed-back networks:
Feed-back networks can have signals travelling in both directions by introducing loops
in the network. Feedback networks are very powerful and can get extremely complicated.
Feedback networks are dynamic; their 'state' changes continuously until they reach an
equilibrium point. They remain at the equilibrium point until the input changes and a new
equilibrium needs to be found. Feedback architectures are also referred to as interactive or
recurrent, although the latter term is often used to denote feedback connections in single-layer
organisations.
4.3. Network layers:
The commonest type of artificial neural network consists of three groups, or
layers, of units: a layer of input units is connected to a layer of hidden units, which
is connected to a layer of output units.
1. The activity of the input units represents the raw information that is fed into the
network.
2. The activity of each hidden unit is determined by the activities of the input units and
the weights on the connections between the input and the hidden units.
3. The behaviour of the output units depends on the activity of the hidden units and the
weights between the hidden and output units.
This simple type of network is interesting because the hidden units are free to construct
their own representations of the input. The weights between the input and hidden units
determine when each hidden unit is active, and so by modifying these weights, a hidden unit
can choose what it represents.
We also distinguish single-layer and multi-layer architectures. The single-layer
organisation, in which all units are connected to one another, constitutes the most general case
and is of more potential computational power than hierarchically structured multi-layer
organisations. In multi-layer networks, units are often numbered by layer, instead of following
a global numbering.
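As an illustration of the three-layer organisation described above, the following sketch propagates an input vector through one hidden layer to the output layer. The layer sizes and random weights are arbitrary, chosen only for demonstration:

```python
import numpy as np

def forward(x, w_hidden, w_output):
    """Forward pass: hidden activities depend on the input units and the
    input-to-hidden weights; the outputs depend on the hidden activities
    and the hidden-to-output weights."""
    hidden = np.tanh(w_hidden @ x)     # hidden layer activities
    return np.tanh(w_output @ hidden)  # output layer activities

rng = np.random.default_rng(0)
x = rng.normal(size=3)                 # 3 input units (raw information)
w_h = rng.normal(size=(4, 3))          # 4 hidden units
w_o = rng.normal(size=(2, 4))          # 2 output units
y = forward(x, w_h, w_o)
print(y.shape)                         # (2,)
```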
CHAPTER-5
PRINCIPLE OF THE DTC
The diagram of the conventional DTC (CDTC) for an induction motor drive is shown in
Figure 5.1. Te* and ψs* are the torque and flux reference values; Te and ψs are the estimated
torque and stator flux values; ω* is the command speed value; ω is the real speed value and
θs is the stator flux angle.
Figure 5.1. Diagram of the CDTC method
A PI or PID controller is used to determine the reference torque, based on the difference
between the reference and the instantaneous speed of the motor. The basic idea of the DTC
concept is to choose the best voltage vector, which makes the flux rotate and produce
the desired torque. During this rotation, the amplitude of the flux remains in a pre-defined
band. In order to control the induction motor, the supply voltage and stator current are
sampled. Only two phase currents, iA and iB, need to be measured; the third phase current
can be calculated as iC = -iA - iB. The stator flux on the stationary reference axes is
estimated as follows:
ψs = ∫ (Vs - Rs·Is) dt        (1)
where ψs is the stator flux and Rs is the stator resistance. The module of the stator flux is
given by equation (2), the developed electromagnetic torque Te of the motor can be evaluated
by equation (3) and the angle θs between the referential and ψs is given by equation (4):
|ψs| = √(ψsα² + ψsβ²)        (2)
Te = (3/2)·p·(ψsα·isβ - ψsβ·isα)        (3)
θs = arctan(ψsβ / ψsα)        (4)
where p is the number of pole pairs.
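A discrete-time sketch of equations (1)-(4), assuming a simple fixed-step Euler integration for the flux and the 3/2·p torque expression; the function and variable names are hypothetical, as the report gives no implementation:

```python
import math

def dtc_estimator(v_a, v_b, i_a, i_b, Rs, Ts, psi_a, psi_b, p):
    """One sampling step: update the stator flux components by integrating
    (v - Rs*i) per equation (1), then evaluate the flux magnitude (2),
    electromagnetic torque (3) and flux angle (4)."""
    psi_a += (v_a - Rs * i_a) * Ts               # eq. (1), alpha axis
    psi_b += (v_b - Rs * i_b) * Ts               # eq. (1), beta axis
    psi_mag = math.hypot(psi_a, psi_b)           # eq. (2)
    Te = 1.5 * p * (psi_a * i_b - psi_b * i_a)   # eq. (3), p = pole pairs
    theta = math.atan2(psi_b, psi_a)             # eq. (4)
    return psi_a, psi_b, psi_mag, Te, theta

# With Rs = 0, 100 V on the alpha axis applied for 1 ms builds 0.1 Wb of flux
print(dtc_estimator(100.0, 0.0, 0.0, 1.0, 0.0, 1e-3, 0.0, 0.0, 2))
```

In a real drive this step would run once per sampling period, feeding the hysteresis comparators described below.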
The estimated values of the torque and stator flux are compared to the command values, Te*
and ψs* respectively. It can be seen from Figure 5.1 that the error between the estimated torque
Te and the reference torque Te* is the input of a three-level hysteresis comparator, whereas the
error between the estimated stator flux magnitude ψs and the reference stator flux magnitude
ψs* is the input of a two-level hysteresis comparator. Finally, the outputs of the comparators,
together with the stator flux sector in which the stator flux space vector is located, select an
appropriate inverter voltage vector from the switching Table 1. The selected voltage vector is
applied to the induction motor at the end of the sample time.
Table 1. The switching table for basic DTC
Vectors V1, …, V6 represent the six active vectors that can be generated by a voltage source
inverter (VSI), while V0 and V7 are the two zero voltage vectors. Figure 5.2 gives the partition
of the complex plane into six angular sectors Si, i = 1, …, 6.
Figure 5.2. Partition of the complex plan in six angular sectors
When the flux is in sector i, vector Vi+1 or Vi-1 is selected to increase the level of the flux, and
Vi+2 or Vi-2 is selected to decrease it. At the same time, vector Vi+1 or Vi+2 is selected to
increase the torque, and Vi-1 or Vi-2 is selected to decrease it.
If V0 or V7 is selected, the rotation of the flux is stopped and the torque decreases, whereas the
amplitude of the flux remains unchanged. This shows that the choice of the voltage vector
depends on the sign of the flux and torque errors, independently of their amplitude. This
explains why the outputs of the flux and torque hysteresis comparators must be Boolean
variables. A band of hysteresis can be added around zero to avoid useless commutations when
the flux error is very small. With this type of hysteresis comparator, we can easily control
and maintain the end of the flux vector within a circular ring.
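The vector-selection rule above can be sketched as a small lookup function. This is an illustrative simplification with two-level torque logic only; the actual scheme uses a three-level torque comparator whose middle state selects the zero vectors V0/V7:

```python
def select_vector(sector, flux_up, torque_up):
    """Pick the active voltage vector for flux sector 1..6.
    flux_up / torque_up are the Boolean hysteresis comparator outputs."""
    if flux_up:
        step = 1 if torque_up else -1   # Vi+1 raises torque, Vi-1 lowers it
    else:
        step = 2 if torque_up else -2   # Vi+2 raises torque, Vi-2 lowers it
    return (sector - 1 + step) % 6 + 1  # wrap the index into 1..6

print(select_vector(1, True, True))     # flux and torque up in sector 1 -> V2
print(select_vector(6, True, True))     # wraps around: sector 6 -> V1
```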
CHAPTER-6
PRINCIPLE OF ARTIFICIAL NEURAL NETWORK
One of the most important features of Artificial Neural Networks (ANN) is their
ability to learn and improve their operation using training data. The basic element of an
ANN is the neuron, which has a summer and an activation function as shown in Figure 6.1.
The mathematical model of a neuron is given by:
y = φ( Σ wi·xi + b ), with the sum taken over i = 1, …, N
where (x1, x2, …, xN) are the input signals of the neuron, (w1, w2, …, wN) are their
corresponding weights and b is a bias parameter. φ is a tangent sigmoid function and y is the
output signal of the neuron.
Figure 6.1. Representation of the artificial neuron
The ANN shown in Figure 6.2 can be trained by a learning algorithm which performs the
adaptation of the weights of the network iteratively until the error between the target vectors
and the output of the ANN is less than a predefined threshold. Nevertheless, it is possible that
the learning algorithm does not produce an acceptable solution for all input-output association
problems. In any case, the results depend on several factors:
1. Network architecture (number of layers, number of neurons in each layer, etc.).
2. Initial parameter values w(0).
3. The details of the input-output mapping.
4. The selected training data set (pairs of inputs and their corresponding desired outputs).
5. The learning-rate constant.
Figure 6.2. Structure of neural network
CHAPTER-7
Learning Algorithms in Neural Networks
The most popular supervised learning algorithm is back-propagation, which consists
of a forward and a backward step. In the forward step, the free parameters of the network are
fixed, and the input signals are propagated through the network from the first layer to the
last layer. In the forward phase, we compute a mean square error:
E(k) = (1/(2N)) Σ (di - yi)², with the sum taken over i = 1, …, N
where di is the desired response, yi is the actual output produced by the network in
response to the input xi, k is the iteration number and N is the number of input-output training
data. In the second step, the backward phase, the error signal E(k) is propagated through
the network of Figure 6.2 in the backward direction in order to perform adjustments upon the
free parameters of the network so as to decrease the error E(k) in a statistical sense. The
weights associated with the output layer of the network are therefore updated using the
following formula:
wji(k+1) = wji(k) - η · ∂E(k)/∂wji(k)        (12)
where wji is the weight connecting the jth neuron of the output layer to the ith neuron
of the previous layer, and η is the constant learning rate. Large values of η may accelerate the
ANN learning and consequently yield faster convergence, but may cause oscillations in the
network output, whereas low values will cause slow convergence. Therefore, the value of η
has to be chosen carefully to avoid instability.
Figure 7.1. Flowchart for training backpropagation networks
To ensure fast convergence, the update formula of equation (12) is modified as shown in
equation (13), where α is a positive constant called the momentum constant:
Δwji(k) = -η · ∂E(k)/∂wji(k) + α · Δwji(k-1)        (13)
The concrete back-propagation training process is shown in the flowchart of Figure
7.1. Once the ANN is trained properly, it should be adequately tested using data which is
different from the training set in order to test the validity of the model.
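A minimal sketch of the forward/backward steps above for a one-hidden-layer network, with learning rate η (`eta`) and momentum α (`alpha`) as in equations (12) and (13). The network size, XOR-style training data and constants are illustrative only:

```python
import numpy as np

def train(X, D, hidden=4, eta=0.2, alpha=0.8, epochs=2000, seed=0):
    """One-hidden-layer back-propagation with a momentum term."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(scale=0.5, size=(hidden, X.shape[1]))
    W2 = rng.normal(scale=0.5, size=(1, hidden))
    dW1 = np.zeros_like(W1)
    dW2 = np.zeros_like(W2)
    for _ in range(epochs):
        H = np.tanh(X @ W1.T)              # forward step, hidden layer
        Y = np.tanh(H @ W2.T)              # forward step, output layer
        G2 = (Y - D) * (1 - Y**2)          # output-layer local gradients
        G1 = (G2 @ W2) * (1 - H**2)        # back-propagated hidden gradients
        dW2 = alpha * dW2 - eta * G2.T @ H / len(X)   # eq. (13): momentum
        dW1 = alpha * dW1 - eta * G1.T @ X / len(X)
        W2 += dW2
        W1 += dW1
    return W1, W2

X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
D = np.array([[-1.], [1.], [1.], [-1.]])               # desired responses
W1, W2 = train(X, D)
Y = np.tanh(np.tanh(X @ W1.T) @ W2.T)
print(Y.shape)
```

The gradient expressions use the tanh derivative 1 - y²; a momentum term smooths successive weight changes, as described in the text.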
CHAPTER-8
ANN structure for Direct Torque Control
The basic structure of the Direct Torque Neural Network Control (DTNNC) method for
the induction machine is presented in Figure 8.1. The artificial neural network replaces the
switching table selector block and the two hysteresis controllers. After several tests, we
chose a 3-10-10-3 architecture, i.e. with two hidden layers, trained for 3000 epochs with an
error goal of 10^-3. The ANN inputs are the error between the estimated flux value
and its reference value, the difference between the estimated electromagnetic torque and the
torque reference, and the position of the stator flux vector, represented by the number of the
corresponding sector. The ANN output layer is composed of three neurons. Each neuron
represents the state of one of the three pairs of vectors that will be applied to the induction
motor. The rest of the system is the same as the classical structure of DTC presented
in Figure 8.1.
Figure 8.1. Structure of DTC using ANN strategy
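For illustration, a forward pass through the 3-10-10-3 architecture described above could be sketched as follows. The random weights stand in for the trained parameters, which the report does not list:

```python
import numpy as np

sizes = [3, 10, 10, 3]   # inputs -> two hidden layers -> three outputs
rng = np.random.default_rng(1)
weights = [rng.normal(scale=0.3, size=(m, n)) for n, m in zip(sizes, sizes[1:])]

def dtnnc(x):
    """Forward pass: flux error, torque error and sector number in,
    three switching-state neurons out."""
    for W in weights:
        x = np.tanh(W @ x)
    return x

out = dtnnc(np.array([0.05, -0.2, 3.0]))  # [flux error, torque error, sector]
print(out.shape)                           # (3,)
```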
CHAPTER-9
SIMULATION AND INTERPRETATION OF RESULTS
To test the performance of the fuzzy logic and neural network control with direct
torque control, the simulation of the system was conducted using MATLAB. The motor
parameters used for the simulation are given in Table 2. Figures 9.1-9.4 show a comparison
between the CDTC, DTFC and DTNNC.
The torque and flux references used in the simulation results of the fuzzy direct
torque control strategy are 10 N·m and 0.91 Wb respectively. The machine is running at 100
rad/sec. The sampling period of the system is 50 µs. All four figures are the responses to a
step change in the torque command from zero to 10 N·m, applied at 0 sec.
The simulation results in Figure 9.1 (a, b and c) show the response of the electromagnetic
torque of the CDTC, fuzzy DTC and neural network DTC respectively. It can be seen that the
torque ripple with fuzzy direct torque control in steady state is significantly reduced
compared to the conventional and neural network DTC. It is obvious from Figure 9.1.d that in
fuzzy direct torque control, the torque trajectory is established more quickly than in the
conventional or the neural network DTC. The torque trajectories with conventional and
neural network DTC at start-up are almost similar.
(a) CDTC. (b) DTFC. (c) DTNNC. (d) Conventional, fuzzy and neural DTC plots
Figure 9.1. Electromagnetic torque response
Figure 9.2 (a, b and c) illustrates the response of the stator flux magnitude of the CDTC,
fuzzy DTC and neural network DTC respectively. Compared with the CDTC, the ripple of the stator flux
with fuzzy and neural network DTC is reduced significantly. The stator flux of the fuzzy
DTC has the fastest response time in the transient state, as shown in Figure 9.2.d.
(a) CDTC. (b) DTFC. (c) DTNNC. (d) Conventional, fuzzy and neural DTC
Figure 9.2. The stator flux magnitude
The simulation results in Figure 9.3 (a, b and c) show that the stator current ripple with
direct torque neural network control in steady state is significantly reduced compared to the
CDTC. Compared to the neural DTC, the stator current ripple with fuzzy DTC is almost
similar.
(a) Fuzzy DTC. (b) Neural network DTC. (c) CDTC.
Figure 9.3. The stator current magnitude
Figure 9.4 (a, b and c) shows the stator flux vector trajectory, which is almost
circular. In this figure it can be noticed that the fuzzy controller offers the fastest transient
response and has better performance than the CDTC method. Compared to the CDTC, the
ripple of the stator flux trajectory of the neural network DTC is significantly reduced.
(a) Fuzzy DTC. (b) Neural network DTC. (c) CDTC.
Figure 9.4. The stator flux vector trajectory
In all the simulations presented here, we can easily observe that our methods achieve
better performance than the CDTC method with respect to reducing the torque, flux and
current ripple while maintaining a good torque response.
Table 2. Induction Motor parameters
CHAPTER-10
CONCLUSION AND FUTURE WORK
In this paper, an improvement of the direct torque control algorithm for the induction
machine is proposed using two intelligent approaches, which consist of replacing the
switching table selector block and the two hysteresis controllers. Simulations have shown that
the two proposed strategies have better performance than the CDTC. In fact, they allow
significantly reduced torque and stator flux ripples and a good starting behaviour. Using the
intelligent techniques, the selection of the voltage vector becomes more convenient, and the
switching state can be obtained as soon as the torque and stator flux errors are available. The
validity of the proposed control is confirmed by the simulation results. None of the known
advantages of the CDTC is impacted by the proposed methods. It has been found that the
direct torque fuzzy control strategy allows a higher dynamic performance than the conventional
and neural network DTC. In future research, the simulation results will be carried over to an
experimental system to validate the proposed neural network and fuzzy logic control. A
digital implementation of these intelligent controls may be performed using different devices
such as custom designs, programmable logic, etc. In a Field Programmable Gate Array
(FPGA), which is a family of programmable devices, multiple operations can be executed in
parallel so that algorithms can run faster, which is required for control systems.