Dr. G. Nagamani
Assistant Professor
Dept. of Mathematics GRI
History of the Artificial Neural Networks (ANNs)
Background of ANNs
Basic Neuronal Model
Basic Structure of Neural Networks
Learning Process
Dynamic Neural Network Model
Stability Analysis of ANNs
History of the Artificial Neural Networks
The history of ANNs stems from the 1940s, the decade of
the first electronic computers.
However, the first important step took place in 1957, when
Rosenblatt introduced the first concrete neural model, the
perceptron. Rosenblatt also took part in constructing the
first successful neurocomputer, the Mark I Perceptron.
After this, the development of ANNs proceeded as
described in the figure.
Since then, new versions of the Hopfield network have
been developed. The Boltzmann machine has been
influenced by both the Hopfield network and the MLP.
Rosenblatt's original perceptron model contained only one layer.
From this, a multi-layered model was derived in 1960. At first, the
use of the multi-layer perceptron (MLP) was complicated by the
lack of an appropriate learning algorithm.
In 1974, Werbos introduced a so-called back-propagation
algorithm for the three-layered perceptron network.
The application area of MLP networks remained rather limited
until the breakthrough of 1986, when a general back-propagation
algorithm for the multi-layered perceptron was introduced by
Rumelhart and McClelland.
In 1982, Hopfield brought out his idea of a neural network. Unlike
the neurons in an MLP, the Hopfield network consists of only one
layer whose neurons are fully connected with each other.
In 1988, Radial Basis Function (RBF) networks were first
introduced by Broomhead & Lowe. Although the basic idea of
RBF had been developed some 30 years earlier under the name
"method of potential functions", the work by Broomhead & Lowe
opened a new frontier in the neural network community.
In 1982, Kohonen introduced a totally different kind of network
model, the Self-Organizing Map (SOM). The SOM is a certain
kind of topological map which organizes itself based on the
input patterns it is trained with.
The SOM originated from the LVQ (Learning Vector
Quantization) network, whose underlying idea was also
Kohonen's, dating from 1972.
A Neural Network (NN) is a massively parallel distributed
processor made up of simple processing units, which has a
natural propensity for storing experiential knowledge and
making it available for use. It resembles the brain in two
respects:
• Knowledge is acquired by the network from its environment through a learning process.
• Interneuron connection strengths, known as synaptic weights, are used to store the acquired knowledge.
Nonlinearity: An artificial neuron can be linear or nonlinear. A neural network, made up of an interconnection of nonlinear neurons, is itself nonlinear.
Input-Output Mapping: A popular paradigm of learning called learning with a teacher or supervised learning involves modification of the synaptic weights of a neural network by applying a set of labeled training samples or task examples.
Adaptivity: NNs have a built-in capability to adapt their synaptic weights to changes in the surrounding environment.
Evidential Response: In the context of pattern classification, a NN can be designed to provide information not only about which particular pattern to select, but also about the confidence in the decision made.
Contextual Information: Knowledge is represented by the very structure and activation state of a NN.
Fault Tolerance: A NN, implemented in hardware form, has the potential to be inherently fault tolerant, or capable of robust computation, in the sense that its performance degrades gracefully under adverse operating conditions.
Uniformity of Analysis and Design: Basically, NNs enjoy universality as information processors, in the sense that the same notation is used in all domains involving the application of NNs. This feature manifests itself in different ways:
Neurons, in one form or another, represent an ingredient common to all NNs.
This commonality makes it possible to share theories and learning algorithms across different applications of NNs.
Modular networks can be built through a seamless integration of modules.
VLSI (Very-Large-Scale Integration) Implementability: The massively parallel nature of a NN makes it potentially fast for the computation of certain tasks, and well suited to implementation in VLSI technology.
Neurobiological Analogy: The design of a NN is motivated by analogy with the brain, which is a living proof that fault tolerant parallel processing is not only physically possible but also fast and powerful.
The human nervous system may be viewed as a three-stage system, depicted as follows:
The receptors convert stimuli from the human body or the external environment into electrical impulses that convey information to the neural net (brain).
The effectors convert electrical impulses generated by the neural net into discernible responses as system outputs.
The basic building block of the central nervous system, including the brain, retina, and spinal cord, is the neuron.
Figure: A Simple Neuronal Model
A neuron is an information-processing unit that is fundamental to the operation of a NN. The three basic elements of the neuronal model are as follows:
A set of synapses or connecting links, each of which is characterized by a weight or strength of its own.
An adder for summing the input signals, weighted by the respective synapses of the neuron; the operations described here constitute a linear combiner.
An activation function for limiting the amplitude of the output of a neuron.
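As a rough sketch, these three elements can be expressed in a few lines of Python; the weights, bias, input values, and the choice of a logistic activation below are illustrative, not taken from the text:

```python
import math

def neuron_output(inputs, weights, bias):
    """Compute a neuron's output from its three basic elements."""
    # 1) Synapses: each input signal is weighted by its own strength.
    # 2) Adder: the weighted inputs are summed (a linear combiner).
    v = sum(w * x for w, x in zip(weights, inputs)) + bias
    # 3) Activation function: the output amplitude is limited to (0, 1).
    return 1.0 / (1.0 + math.exp(-v))

y = neuron_output([1.0, 0.5], [0.2, -0.4], bias=0.1)
```

With zero net input the logistic activation returns 0.5, and it saturates toward 0 or 1 for large negative or positive fields.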
Figure: Nonlinear Model of a Neuron
For some applications of NNs, it is desirable to base the analysis on a stochastic neuronal model.
In an analytically tractable approach, the activation function of the McCulloch-Pitts model is given a probabilistic interpretation, in which the state of the neuron is

$$x = \begin{cases} +1 & \text{with probability } P(v), \\ -1 & \text{with probability } 1 - P(v), \end{cases}$$

and $P(v)$ is the sigmoid-shaped function

$$P(v) = \frac{1}{1 + e^{-v/T}},$$

where $T$ is a pseudo-temperature that is used to control the noise level.
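A minimal sketch of sampling this stochastic neuron in Python (the field, the temperature, and the sample count are illustrative choices):

```python
import math
import random

def stochastic_neuron(v, T, rng):
    """Fire +1 with probability P(v) = 1/(1 + exp(-v/T)), otherwise -1."""
    P = 1.0 / (1.0 + math.exp(-v / T))
    return 1 if rng.random() < P else -1

rng = random.Random(0)
# With v = 2 and T = 0.5, P(v) is about 0.98, so the neuron almost always fires.
samples = [stochastic_neuron(2.0, 0.5, rng) for _ in range(1000)]
frac_firing = samples.count(1) / len(samples)
```

As T approaches 0 the decision becomes the deterministic McCulloch-Pitts threshold; larger T injects more noise.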
Input Layer: The activity of the input units represents the raw information that is fed into the network.
Hidden Layer: The activity of each hidden unit is determined by the activities of the input units and the weights on the connections between the input and the hidden units.
Output Layer: The behavior of the output units depends on the activity of the hidden units and the weights between the hidden and output units.
This simple type of network is interesting because the hidden units are free to construct their own representations of the input.
The weights between the input and hidden units determine when each hidden unit is active, and so by modifying these weights, a hidden unit can choose what it represents.
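A small forward pass illustrates the three layers; the weights and biases here are invented for illustration, and the hidden activities are the network's internal representation of the input:

```python
import math

def layer(inputs, weights, biases):
    """Fully connected layer of logistic units: out_k = sigma(w_k . inputs + b_k)."""
    return [1.0 / (1.0 + math.exp(-(sum(w * x for w, x in zip(row, inputs)) + b)))
            for row, b in zip(weights, biases)]

x = [0.0, 1.0]                      # input layer: raw information fed in
W_hidden = [[2.0, -1.0],            # input-to-hidden weights (illustrative)
            [-1.5, 2.5]]
hidden = layer(x, W_hidden, [0.0, 0.0])   # hidden activities: learned representation
W_out = [[1.0, 1.0]]                # hidden-to-output weights
output = layer(hidden, W_out, [-1.0])     # output depends only on hidden activities
```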
Figure: A simple Neural Network
The output is a function of the input, the weights, and the transfer functions.
The perceptron is the simplest form of a neural network used for the classification of patterns said to be linearly separable.
Figure: Single Layer Perceptron
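A minimal sketch of Rosenblatt's learning rule on a linearly separable problem (logical AND); the learning rate and epoch count are arbitrary choices:

```python
def train_perceptron(samples, epochs=20, eta=1):
    """Perceptron rule: update weights only when a sample is misclassified."""
    n = len(samples[0][0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, t in samples:                  # t is the desired output, -1 or +1
            y = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else -1
            if y != t:                        # correct only on a mistake
                w = [wi + eta * t * xi for wi, xi in zip(w, x)]
                b += eta * t
    return w, b

# logical AND is linearly separable
data = [([0, 0], -1), ([0, 1], -1), ([1, 0], -1), ([1, 1], 1)]
w, b = train_perceptron(data)

def predict(x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else -1
```

The convergence theorem guarantees a finite number of corrections whenever the classes are linearly separable; for patterns that are not separable (e.g. XOR) the single-layer perceptron never settles.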
The input signal propagates through the network in a forward direction, on a layer-by-layer basis.
An NN can be viewed as a weighted directed graph in which artificial neurons are nodes and directed weighted edges represent connections between neurons.
The flow of signals in the various parts of the graph is dictated by three basic rules:
Rule 1. A signal flows along a link only in the direction defined by the arrow on the link.
Two different types of links may be distinguished:
Synaptic links, whose behavior is governed by a linear input-output relation.
Activation links, whose behavior is governed by a nonlinear input-output relation.
Rule 2. A node signal equals the algebraic sum of all signals entering the pertinent node via the incoming links.
Rule 3. The signal at a node is transmitted to each outgoing link originating from that node, with the transmission being entirely independent of the transfer functions of the outgoing links.
A neural network is a directed graph consisting of nodes with interconnecting synaptic and activation links, and is characterized by four properties:
Each neuron is represented by a set of linear synaptic links, an externally applied bias, and a possibly nonlinear activation link. The bias is represented by a synaptic link connected to an input fixed at +1.
The synaptic links of a neuron weight their respective input signals.
The weighted sum of the input signals defines the induced local field of the neuron in question.
The activation link squashes the induced local field of the neuron to produce an output.
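The bias convention in the first property can be checked directly: treating the bias as one more synaptic weight attached to an input fixed at +1 yields the same induced local field (the numbers are illustrative):

```python
import math

x = [0.5, -1.0]        # input signals
w = [0.8, 0.3]         # synaptic weights
b = 0.2                # externally applied bias

# induced local field with an explicit bias term
v_explicit = sum(wi * xi for wi, xi in zip(w, x)) + b
# same field with the bias as a weight on a constant +1 input
v_augmented = sum(wi * xi for wi, xi in zip(w + [b], x + [1.0]))

# the activation link squashes the field either way
y = math.tanh(v_explicit)
```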
Figure: Signal-flow graph of a neuron
In general, we may identify three fundamentally different classes of network architectures:
Single-Layer Feed-forward Networks: An input layer of source nodes projects onto an output layer of neurons, but not vice versa. This network is strictly of a feed-forward or acyclic type.
Multilayer Feed-forward Networks: This class distinguishes itself by the presence of one or more hidden layers, whose computation nodes are correspondingly called hidden neurons or hidden units.
Recurrent Networks: A recurrent neural network distinguishes itself from a feed-forward NN in that it has at least one feedback loop. This loop has a profound impact on the learning capability of the network and on its performance.
Learning is a process by which the free parameters of a neural network are adapted through a process of stimulation by the environment in which the network is embedded. The type of learning is determined by the manner in which the parameter changes take place.
Basic Learning rules:
Error-correction learning
Memory-based learning
Hebbian learning
Competitive learning
Boltzmann learning
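As one concrete instance, the Hebbian rule strengthens a weight when presynaptic and postsynaptic activities coincide; a toy sketch with made-up signals and learning rate:

```python
def hebbian_update(w, x, y, eta=0.1):
    """Hebb's postulate: delta w_i = eta * x_i * y (correlation of activities)."""
    return [wi + eta * xi * y for wi, xi in zip(w, x)]

w = [0.0, 0.0]
for _ in range(5):
    x = [1.0, 0.0]     # only the first input is active
    y = 1.0            # postsynaptic activity (taken as given here)
    w = hebbian_update(w, x, y)
# only the weight of the co-active synapse has grown
```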
Adaptive networks are NNs that allow the weights in their connections to change.
The learning methods can be classified into three categories:
• Supervised Learning
• Unsupervised Learning
• Reinforcement Learning
Supervised learning incorporates an external teacher, so that each output unit is told what its desired response to input signals ought to be.
An important issue concerning supervised learning is the problem of error convergence, i.e., the minimization of the error between the desired and computed unit values.
The aim is to determine a set of weights which minimizes the error. One well-known method, which is common to many learning paradigms, is least mean square (LMS) convergence.
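A hedged sketch of the LMS idea for a single linear unit; the training pairs and learning rate are invented, and the "teacher" here is the linear map d = 2·x1 − x2:

```python
def lms_step(w, x, d, eta=0.1):
    """One LMS update: w <- w + eta * (d - y) * x, where y is the computed value."""
    y = sum(wi * xi for wi, xi in zip(w, x))
    e = d - y                      # error between desired and computed values
    return [wi + eta * e * xi for wi, xi in zip(w, x)]

data = [([1.0, 0.0], 2.0), ([0.0, 1.0], -1.0), ([1.0, 1.0], 1.0)]
w = [0.0, 0.0]
for _ in range(200):               # repeated passes shrink the error
    for x, d in data:
        w = lms_step(w, x, d)
# w converges toward the teacher's weights [2, -1]
```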
In this sort of learning, the human teacherโs experience is used to tell the NN which outputs are correct and which are not.
This does not mean that a human teacher needs to be present at all times; only the correct classifications gathered from the human teacher on a domain need to be available.
The network then learns from its errors; that is, it changes its weights to reduce its prediction error.
Unsupervised learning uses no external teacher and is based upon only local information. It is also referred to as self-organization, in the sense that it self-organizes data presented to the network and detects their emergent collective properties.
The network is then used to construct clusters of similar patterns.
This is particularly useful in domains where instances are checked against previous scenarios, for example detecting credit card fraud.
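A minimal sketch of such self-organization, using a winner-take-all (competitive) update on made-up two-dimensional data; each prototype drifts toward the cluster of patterns it wins:

```python
def competitive_train(patterns, prototypes, eta=0.2, epochs=30):
    """Winner-take-all: move only the nearest prototype toward each pattern."""
    for _ in range(epochs):
        for p in patterns:
            # winner: prototype with the smallest squared distance to the input
            k = min(range(len(prototypes)),
                    key=lambda i: sum((wi - xi) ** 2
                                      for wi, xi in zip(prototypes[i], p)))
            prototypes[k] = [wi + eta * (xi - wi)
                             for wi, xi in zip(prototypes[k], p)]
    return prototypes

# two obvious clusters, near (0, 0) and near (5, 5)
patterns = [[0.1, 0.0], [0.0, 0.2], [5.0, 5.1], [4.9, 5.0]]
protos = competitive_train(patterns, [[1.0, 1.0], [4.0, 4.0]])
```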
Reinforcement Learning
Like supervised learning, but:
Weight adjustment is not directly related to the error value.
The error value is used to randomly shuffle the weights.
Learning is relatively slow due to this randomness.
Computers vs. Neural Networks

| "Standard" Computers | Neural Networks |
| --- | --- |
| one CPU | highly parallel processing |
| fast processing units | slow processing units |
| reliable units | unreliable units |
| static infrastructure | dynamic infrastructure |
Brain vs. Digital Computers
Computers require hundreds of cycles to simulate the firing of a single neuron.
The brain can fire all of its neurons in a single step.
Parallelism
Serial computers require billions of cycles to perform some tasks on which the brain takes less than a second, e.g. face recognition.
Brain vs. Digital Computers
Future: combine the parallelism of the brain with the switching speed of the computer.
How are the weights initialized?
How many hidden layers and how many neurons?
How many examples in the training set?
In general, initial weights are randomly chosen, with typical values between -1.0 and 1.0 or -0.5 and 0.5.
There are two types of NNs:
• Fixed Networks, where the weights are fixed.
• Adaptive Networks, where the weights are changed to reduce prediction error.
Rule of thumb:
The number of training examples should be at least five to ten times the number of weights of the network.
Other rule:

$$N \geq \frac{|W|}{1 - a}$$

where
|W| = number of weights
a = expected accuracy on the test set
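Plugging illustrative numbers into this rule:

```python
def min_training_examples(num_weights, accuracy):
    """Rule of thumb N >= |W| / (1 - a), with a the expected test accuracy."""
    return num_weights / (1.0 - accuracy)

# e.g. 100 weights and a target accuracy of 90% -> about 1000 examples
n = min_training_examples(100, 0.90)
```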
Feedforward networks are static in the sense that the output depends only on the present input, in exactly the same way a combinational logic circuit produces an output for a specific Boolean input vector. Mathematically,

$$y = f(s),$$

where $y = [y_1, \ldots, y_m]^{T} \in \mathbb{R}^m$ is the output signal generated in response to an input vector $s \in \mathbb{R}^n$.
Feedforward NNs are memory-less in the sense that their response to an input is independent of the previous network state.
Recurrent or feedback networks are classical examples of nonlinear dynamical systems.
When a new input pattern is presented, the neuron outputs are computed as usual; but because these outputs are fed back as inputs to the system, the activations of the neurons are subsequently modified, leading the network to a new state.
Dynamic neural units (DNUs) are the basic elements of dynamic NNs.
The mathematical model of a dynamic single neuron with multiple nonlinear feedback operations is

$$\frac{dx(t)}{dt} = -\alpha x(t) + \sum_{i=1}^{n} b_i f\big(c_i x(t) + d_i\big) + v.$$

By considering $d = [d_1\; d_2\; \ldots\; d_n]^{T}$, $b = [b_1\; b_2\; \ldots\; b_n]^{T}$, $c = [c_1\; c_2\; \ldots\; c_n]^{T}$, this becomes

$$\frac{dx(t)}{dt} = -\alpha x(t) + b^{T} f\big(c\,x(t) + d\big) + v,$$

where $f\big(c\,x(t)+d\big) = \big[f(c_1 x(t) + d_1), \ldots, f(c_n x(t) + d_n)\big]^{T}$.
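A numerical sketch of a single DNU, integrated with a crude forward-Euler step; all coefficients, the tanh nonlinearity, and the step size are illustrative choices:

```python
import math

def simulate_dnu(alpha, b, c, d, v, x0=0.0, dt=0.01, steps=2000):
    """Euler integration of dx/dt = -alpha*x + sum_i b_i f(c_i x + d_i) + v, f = tanh."""
    x = x0
    for _ in range(steps):
        feedback = sum(bi * math.tanh(ci * x + di)
                       for bi, ci, di in zip(b, c, d))
        x += dt * (-alpha * x + feedback + v)
    return x

# one neuron with two nonlinear feedback branches; the state settles near 0.16
x_final = simulate_dnu(alpha=1.0, b=[0.5, -0.3], c=[1.0, 2.0], d=[0.0, 0.1], v=0.2)
```

Because the right-hand side here is strictly decreasing in x, this particular neuron has a single, globally attracting equilibrium.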
A continuous-time dynamic NN containing $n$ DNUs,
introduced by Hopfield (1984), is described by the following
nonlinear differential equations:

$$C_i \frac{dx_i(t)}{dt} = -\frac{x_i(t)}{R_i} + \sum_{j=1}^{n} w_{ij} f_j\big(x_j(t)\big) + s_i(t)$$

for $i = 1, 2, \ldots, n$.

A general form of the Hopfield dynamic NN can be expressed as

$$\frac{dx_i(t)}{dt} = -\alpha_i x_i(t) + \sum_{j=1}^{n} w_{ij} f_j\big(x_j(t)\big) + s_i(t). \tag{4}$$
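Equation (4) can be simulated directly. The sketch below integrates a two-neuron network with forward Euler; the weights, inputs, and tanh activations are invented for illustration, and the final drift value confirms the state has essentially reached an equilibrium:

```python
import math

def hopfield_rates(x, alpha, W, s):
    """Right-hand side of dx_i/dt = -alpha_i x_i + sum_j w_ij f_j(x_j) + s_i."""
    f = [math.tanh(xj) for xj in x]
    return [-a * xi + sum(wij * fj for wij, fj in zip(row, f)) + si
            for xi, a, row, si in zip(x, alpha, W, s)]

alpha = [1.0, 1.0]
W = [[0.0, 0.5], [0.5, 0.0]]       # symmetric weights, no self-connections
s = [0.1, -0.1]
x = [1.0, -1.0]
dt = 0.01
for _ in range(5000):              # integrate to t = 50
    x = [xi + dt * ri for xi, ri in zip(x, hopfield_rates(x, alpha, W, s))]

drift = max(abs(r) for r in hopfield_rates(x, alpha, W, s))
```

With these small weights the dynamics are contracting, so the trajectory converges to a unique equilibrium regardless of the initial state.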
Continuous-time NN:

$$\frac{dx(t)}{dt} = -A x(t) + W f\big(x(t)\big) + s, \tag{5}$$

where
$x = [x_1 \; \ldots \; x_n]^{T}$ – state vector
$s = [s_1 \; \ldots \; s_n]^{T}$ – input vector
$f(x) = [f_1(x_1) \; \ldots \; f_n(x_n)]^{T}$ – output vector
$A = \operatorname{diag}\{\alpha_1, \ldots, \alpha_n\}$ – self-feedback matrix
$W = \begin{bmatrix} w_{11} & \cdots & w_{1n} \\ \vdots & \ddots & \vdots \\ w_{n1} & \cdots & w_{nn} \end{bmatrix}$ – synaptic weight matrix
The Pineda model, also called the continuous-time recurrent NN, consists of a single layer of fully interconnected neurons and contains recurrent and intra-layer connections:

$$\frac{dx_i(t)}{dt} = -\alpha_i x_i(t) + \beta_i \sum_{j=1}^{n} w_{ij} f_j\big(x_j(t)\big) + s_i.$$

Equivalently,

$$\frac{dx(t)}{dt} = -A x(t) + B W f\big(x(t)\big) + s.$$
Cohen and Grossberg represented a class of dynamic neural systems by a competitive dynamic system described as
$$\frac{dx_i(t)}{dt} = a_i(x_i(t)) \left[ b_i(x_i(t)) - \sum_{j=1}^{n} w_{ij} f_j(x_j(t)) \right], \qquad i = 1, 2, \dots, n.$$
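The Cohen–Grossberg form above can also be simulated with a simple Euler step. The particular choices $a_i(x) = 1$, $b_i(x) = -x$ and $f_j = \tanh$ below are illustrative assumptions (with these choices the model reduces to a Hopfield-type network), and the weight matrix is an arbitrary demo value.

```python
import numpy as np

def cohen_grossberg_step(x, W, dt):
    """One Euler step of dx_i/dt = a_i(x_i) * (b_i(x_i) - sum_j w_ij f_j(x_j)),
    with the illustrative choices a_i(x) = 1, b_i(x) = -x, f_j = tanh."""
    a = np.ones_like(x)            # amplification function a_i(x_i)
    b = -x                         # self-signal function b_i(x_i)
    return x + dt * a * (b - W @ np.tanh(x))

W = np.array([[0.0, 0.4],
              [0.4, 0.0]])         # illustrative competition weights
x = np.array([0.8, -0.6])
for _ in range(3000):
    x = cohen_grossberg_step(x, W, dt=0.01)
# With these bounded activations and small weights the trajectory settles
# at the unique equilibrium of the reduced system.
```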
Stability: An equilibrium state is stable if, whenever the initial state is near that point, the state remains near it, perhaps even tending toward the equilibrium point as time increases.
Instability: For nonlinear systems, instability can be subtle: the state may initially tend away from the equilibrium state of interest but subsequently return to it.
Asymptotic Stability: There is a domain of convergence such that, whenever the initial condition belongs to this domain, the solution approaches the equilibrium state as time increases. Asymptotic stability says nothing about how long it takes to converge to a prescribed neighborhood of the equilibrium point.
[Figure 1. Concepts of stability: trajectories starting from x0. Curve 1: asymptotically stable; Curve 2: marginally stable; Curve 3: unstable.]
Eigenvalue analysis does not carry over to nonlinear systems.
Nonlinear systems can have multiple equilibrium points and limit cycles.
The stability behavior of nonlinear systems need not be global (unlike linear systems).
A systematic approach is needed, one that can also be exploited for control design.
Different Approaches to Analyze the Stability Condition
• Phase Plane Approach
• Differential Geometry (Feedback Linearization)
• Lyapunov Stability Theory
Consider the continuous-time NN
$$\frac{dx(t)}{dt} = f(x(t), s(t), w), \tag{5}$$
where $f(\cdot)$ is a continuously differentiable vector-valued function.
Equilibrium State: The state $x(t) = x^*$ of NN (5) is said to be an equilibrium if $f(x^*, s(t), w) = 0$.
Stability: The equilibrium state $x = x^*$ of NN (5) is said to be stable if, for each $\varepsilon > 0$, there is $\delta = \delta(\varepsilon) > 0$ such that
$$\|x(0) - x^*\| < \delta \;\Rightarrow\; \|x(t) - x^*\| < \varepsilon, \quad \forall t \ge 0.$$
Locally Asymptotically Stable: If it is stable and $\delta$ can be chosen such that
$$\|x(0) - x^*\| < \delta \;\Rightarrow\; \lim_{t \to \infty} \|x(t) - x^*\| = 0,$$
then the equilibrium state $x^*$ of NN (5) is locally asymptotically stable.
The most useful and general approach for studying the stability of nonlinear systems is the theory introduced in the late 19th century by the Russian mathematician Aleksandr Mikhailovich Lyapunov. Lyapunov's work, The General Problem of the Stability of Motion, published in 1892, includes two methods for stability analysis: the so-called linearization method and the direct method.
LYAPUNOV STABILITY THEORY
The linearization method draws conclusions about a
nonlinear systemโs local stability around an
equilibrium point from the stability properties of its
linear approximation.
The direct method is not restricted to local motion: it determines the stability properties of a nonlinear system by constructing a scalar "energy-like" function for the system and examining that function's variation in time.
Jacobian at an equilibrium: Let $x = x^*$ be an equilibrium point of the NN (5). The Jacobian at $x = x^*$ is
$$J(x^*) = \left. \frac{\partial f(t, x(t))}{\partial x} \right|_{x = x^*}.$$
Lyapunov's First Method (Indirect Method): Let $\lambda(J(x^*))$ denote the eigenvalues of the Jacobian $J(x^*)$. Then
1. $x^*$ is asymptotically stable if $\mathrm{Re}\,\lambda(J(x^*)) < 0$ for all eigenvalues of $J(x^*)$.
2. $x^*$ is unstable if $\mathrm{Re}\,\lambda(J(x^*)) > 0$ for one or more of the eigenvalues of $J(x^*)$.
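The indirect method is straightforward to apply in code: linearize at the equilibrium and inspect the eigenvalues of the Jacobian. The sketch below assumes the network $dx/dt = -Ax + W\tanh(x) + s$, for which $J(x^*) = -A + W\,\mathrm{diag}(1 - \tanh^2(x^*))$; the matrices are illustrative demo values.

```python
import numpy as np

def classify_equilibrium(A, W, x_star):
    """Classify an equilibrium of dx/dt = -A x + W tanh(x) + s by the
    eigenvalues of the Jacobian J = -A + W diag(1 - tanh(x*)^2)."""
    fprime = 1.0 - np.tanh(x_star) ** 2      # slopes of tanh at x*
    J = -A + W @ np.diag(fprime)
    eig = np.linalg.eigvals(J)
    if np.all(eig.real < 0):
        return "asymptotically stable"
    if np.any(eig.real > 0):
        return "unstable"
    return "inconclusive (eigenvalues on the imaginary axis)"

A = np.diag([1.0, 2.0])                      # illustrative self-feedback
W = np.array([[0.0, 0.5],
              [0.5, 0.0]])                   # illustrative weights
verdict = classify_equilibrium(A, W, x_star=np.zeros(2))
```

The third branch mirrors the remark above: when eigenvalues sit on the imaginary axis (here, the boundary case $\mathrm{Re}\,\lambda = 0$), the first method cannot decide.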
Remark: When some of the eigenvalues are located on the imaginary axis in the complex plane, this approach fails to determine the stability of the equilibrium point.
Example: Consider a two-neuron system described by
$$\frac{dx_1(t)}{dt} = -\alpha_1 x_1 + w_{12} f(x_2), \qquad \alpha_1 > 0,$$
$$\frac{dx_2(t)}{dt} = -\alpha_2 x_2 + w_{21} f(x_1), \qquad \alpha_2 > 0.$$
Then the condition for the stability of the equilibrium states of this neural system is $w_{12} w_{21} < \alpha_1 \alpha_2$.
A quadratic form $V(x) = x^T Q x$ is
• positive definite if $V(x) > 0$ for all $x \ne 0$;
• positive semidefinite if $V(x) \ge 0$ for all $x$;
• negative definite if $V(x) < 0$ for all $x \ne 0$;
• negative semidefinite if $V(x) \le 0$ for all $x$.
Positive-definite functions constitute the basic building block of Lyapunov theory.
Lyapunov Stability Theory
(Continuous-time NNs)
Lyapunov's Second Method (Direct Method): Let $x^*$ be an equilibrium point of the NN (5), and let $V : \mathbb{R}^n \to \mathbb{R}$ be a continuously differentiable function such that
i) $V(x^*) = 0$ and $V(x) > 0$, $\forall x \ne x^*$;
ii) $V(x) \to \infty$ when $\|x\| \to \infty$;
iii) $\dfrac{dV(x)}{dt} < 0$, $\forall x \ne x^*$.
Then $x = x^*$ is globally asymptotically stable.
Example: Consider the following system
$$\frac{dx_1}{dt} = x_1(x_1^2 + x_2^2 - 2) - 4 x_1 x_2^2,$$
$$\frac{dx_2}{dt} = 4 x_1^2 x_2 + x_2(x_1^2 + x_2^2 - 2).$$
To study the equilibrium point at the origin we define
$$V(x) = \tfrac{1}{2}(x_1^2 + x_2^2).$$
Along trajectories,
$$\dot V(x) = \nabla V^T f(x) = \frac{\partial V}{\partial x_1} f_1(x) + \frac{\partial V}{\partial x_2} f_2(x)
= x_1^2(x_1^2 + x_2^2 - 2) - 4 x_1^2 x_2^2 + 4 x_1^2 x_2^2 + x_2^2(x_1^2 + x_2^2 - 2)
= (x_1^2 + x_2^2)(x_1^2 + x_2^2 - 2).$$
Thus $V(x) > 0$ and $dV(x)/dt < 0$ provided that $0 < x_1^2 + x_2^2 < 2$, and it follows that the origin is an asymptotically stable equilibrium point.
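The sign of $\dot V$ in this example can be checked numerically. The sketch below samples random points in the punctured disk $0 < x_1^2 + x_2^2 < 2$ and confirms that $\dot V = x_1 f_1 + x_2 f_2$ is negative at every sample (the sampling scheme is just a spot check, not a proof).

```python
import numpy as np

def vdot(x1, x2):
    """dV/dt along trajectories, with V = (x1^2 + x2^2)/2; expands to
    (x1^2 + x2^2) * (x1^2 + x2^2 - 2)."""
    f1 = x1 * (x1**2 + x2**2 - 2) - 4 * x1 * x2**2
    f2 = 4 * x1**2 * x2 + x2 * (x1**2 + x2**2 - 2)
    return x1 * f1 + x2 * f2

rng = np.random.default_rng(0)
ok = True
for _ in range(10000):
    r = rng.uniform(0.05, 1.4)           # keeps x1^2 + x2^2 = r^2 < 2
    th = rng.uniform(0.0, 2 * np.pi)
    x1, x2 = r * np.cos(th), r * np.sin(th)
    ok = ok and vdot(x1, x2) < 0
# ok remains True: dV/dt < 0 everywhere in the sampled punctured disk.
```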
Exponential Stability: The equilibrium $x^*$ of NN (5) is exponentially stable in $B$ if $V(x)$ satisfies condition (i) and
$$\frac{dV(x)}{dt} \le -2\alpha V(x), \quad \forall x \in B,$$
where $\alpha > 0$ is the degree of exponential stability and $B = \{x : \|x\| < r\} \subset \mathbb{R}^n$ with constant $r > 0$.
Consider the continuous-time NN
$$\frac{dx_i(t)}{dt} = -\alpha_i x_i(t) + \sum_{j=1}^{n} w_{ij} f_j(x_j(t)) + s_i(t). \tag{6}$$
Local Asymptotic Stability: Let $x^*$ be an equilibrium state of the system (6). If
$$-\alpha_i + w_{ii} f_i'(x_i^*) + \sum_{j=1,\, j \ne i}^{n} \frac{|w_{ij}| + |w_{ji}|}{2}\, f_j'(x_j^*) < 0, \qquad i = 1, 2, \dots, n,$$
then $x^*$ is a locally asymptotically stable equilibrium state of the neural system (6).
Consider the continuous-time NN
$$\frac{dx(t)}{dt} = -A x(t) + W f(x(t)) + s. \tag{7}$$
Global Asymptotic Stability: Let all $\alpha_i \ge 0$ in NN (7). The system (7) is globally asymptotically stable if, for a positive diagonal matrix $D = \mathrm{diag}\{d_1, d_2, \dots, d_n\}$ with $d_i > 0$, there exists a positive definite matrix $Q > 0$ such that
$$D W + W^T D = -Q + 2 D A.$$
Local Exponential Stability: Let all $\alpha_i > 0$ in the system given in (7). For a constant $r > 0$, the equilibrium point $x^*$ of this system is locally exponentially stable in $B = \{z : \|z\| < r\}$ if there exists a constant $\alpha > 0$ such that, for all $z \in B$ with $z \ne 0$,
$$2\alpha P - Q - P W F(z) - F(z) W^T P \ge 0,$$
where $Q = Q^T > 0$ is an arbitrary positive symmetric matrix and $P = P^T > 0$ is the unique positive symmetric solution of the Lyapunov equation
$$P A + A P = Q.$$
Global Exponential Stability: Let all $\alpha_i > 0$ in NN (7). The system (7) is globally exponentially stable if
$$\|W\| < \frac{\alpha_{\min}}{k},$$
where $\alpha_{\min} = \min\{\alpha_i : i = 1, 2, \dots, n\}$ and $k$ is the maximum slope (Lipschitz constant) of the activation functions. In this case, the exponential convergence degree is
$$\alpha = \alpha_{\min} - k\,\|W\|.$$
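A norm condition of this kind is easy to verify numerically: compute the spectral norm of $W$ and compare it with $\alpha_{\min}/k$. The sketch below uses $f = \tanh$ (so the activation Lipschitz constant is $k = 1$) and illustrative matrices, not values from the slides.

```python
import numpy as np

# Check the global exponential stability condition ||W|| < alpha_min / k
# for dx/dt = -A x + W f(x) + s with f = tanh.
A = np.diag([2.0, 3.0])                   # illustrative self-feedback gains
W = np.array([[0.5, -0.4],
              [0.3, 0.6]])                # illustrative weight matrix
k = 1.0                                   # maximum slope of tanh

alpha_min = float(np.min(np.diag(A)))
W_norm = float(np.linalg.norm(W, 2))      # spectral norm ||W||

condition_holds = W_norm < alpha_min / k
# When the condition holds, the convergence degree alpha_min - k*||W||
# is strictly positive, giving an exponential decay rate estimate.
rate = alpha_min - k * W_norm
```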
Lyapunov Stability Theory
(Discrete-time NN)
Consider the discrete-time NN
$$x(k+1) = f(x(k), u(k), w), \tag{8}$$
where $f(\cdot)$ is a continuously differentiable vector-valued function.
Jacobian at the equilibrium: Let $x = 0$ be an equilibrium point of the NN (8). The Jacobian at $x = 0$ is
$$J(x) = \left. \frac{\partial f(k, x(k))}{\partial x} \right|_{x = 0}.$$
Lyapunov's first method tests the positions of all the eigenvalues of the Jacobian of the neural system (8).
Lyapunov's Indirect Method: Let $x^* = 0$ be an equilibrium point of NN (8). Then
1. The origin is locally asymptotically stable if all the eigenvalues of $J(x^*)$ lie inside the unit circle in the complex plane.
2. The origin is unstable if one or more of the eigenvalues of $J(x^*)$ lie outside the unit circle in the complex plane.
Lyapunov Direct Method: Let $x = 0$ be an equilibrium point of the NN (8), and let $V : \mathbb{R}^n \to \mathbb{R}$ be a continuously differentiable function such that
i) $V(0) = 0$ and $V(x) > 0$, $\forall x \ne 0$;
ii) $V(x) \to \infty$ when $\|x\| \to \infty$;
iii) $\Delta V(x) = V(x(k+1)) - V(x(k)) < 0$ for all $x \ne 0$.
Then $x = 0$ is globally asymptotically stable.
Consider the discrete-time NN
$$x(k+1) = W f(\Lambda x(k)) + s. \tag{9}$$
Global Asymptotic Stability: The neural system (9) is globally asymptotically stable if, for a positive diagonal matrix $D = \mathrm{diag}\{d_1, d_2, \dots, d_n\}$ with $d_i > 0$ for all $1 \le i \le n$, there exists a positive definite matrix $Q > 0$ such that
$$W^T D W - \Lambda^{-2} D = -Q,$$
where $\Lambda^{-2} = \Lambda^{-1} \Lambda^{-1} = \mathrm{diag}\{\lambda_1^{-2}, \lambda_2^{-2}, \dots, \lambda_n^{-2}\}.$
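Iterating the discrete-time map $x(k+1) = Wf(\Lambda x(k)) + s$ shows the convergence such conditions guarantee. The sketch below uses $f = \tanh$ and a simpler sufficient condition than the matrix equation above, namely that the map is a contraction ($\|W\|\,\max_i |\lambda_i| < 1$ for unit-slope activations); all values are illustrative.

```python
import numpy as np

# Iterate the discrete-time network x(k+1) = W f(Lambda x(k)) + s, f = tanh.
W = np.array([[0.3, -0.2],
              [0.1, 0.4]])                 # illustrative weights
Lam = np.diag([0.8, 0.9])                  # Lambda = diag{lambda_1, lambda_2}
s = np.array([0.2, -0.1])                  # constant input

x = np.array([5.0, -5.0])                  # start far from the fixed point
for _ in range(200):
    x = W @ np.tanh(Lam @ x) + s

# A fixed point satisfies x* = W tanh(Lambda x*) + s; check the residual.
residual = float(np.linalg.norm(x - (W @ np.tanh(Lam @ x) + s)))
```

Because the composed map here has Lipschitz constant well below one, the iterates converge geometrically to the unique fixed point regardless of the initial state.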
Since neural networks are best at identifying patterns or trends in data, they are well suited for prediction or forecasting needs including:
sales forecasting
industrial process control
customer research
data validation
risk management
ANNs are also used in the following specific paradigms: recognition of speakers in communications; diagnosis of hepatitis; undersea mine detection; texture analysis; three-dimensional object recognition; handwritten word recognition; and facial recognition.
Another reason that justifies the use of NN technology is the ability of NNs to provide sensor fusion, the combining of values from several different sensors.
Sensor fusion enables the NNs to learn complex relationships among the individual sensor values, which would otherwise be lost if the values were individually analyzed.
In medical modeling and diagnosis, this implies that even though each sensor in a set may be sensitive only to a specific physiological variable, NNs are capable of detecting complex medical conditions by fusing the data from the individual biomedical sensors.
Artificial Neural Networks (ANNs) are currently a 'hot' research area in medicine, and it is believed that they will receive extensive application to biomedical systems in the next few years.
At the moment, the research is mostly on modeling parts of
the human body and recognizing diseases from various scans
(e.g. cardiograms, CAT scans, ultrasonic scans, etc.).
Neural networks are ideal in recognizing diseases using
scans since there is no need to provide a specific algorithm on
how to identify the disease.
Neural networks learn by example so the details of how to recognize the disease are not needed.
What is needed is a set of examples that are representative of all the variations of the disease.
The quantity of examples is not as important as the 'quality'. The examples need to be selected very carefully if the system is to perform reliably and efficiently.
Madan M. Gupta, Liang Jin and Noriyasu Homma: Static and Dynamic Neural Networks, Wiley, 2003.
S. Russell and P. Norvig: Artificial Intelligence: A Modern Approach, Prentice Hall, Upper Saddle River, NJ, 1995.
S. Haykin: Neural Networks: A Comprehensive Foundation, Prentice Hall.
H. J. Marquez: Nonlinear Control Systems: Analysis and Design, Wiley, 2003.
J.-J. E. Slotine and W. Li: Applied Nonlinear Control, Prentice Hall, 1991.
H. K. Khalil: Nonlinear Systems, Prentice Hall, 1996.
Thank You!