Seminar Report on "Neural Networks and Their Applications"


Page 1: Seminar Report on "neural network and their applications"

SESSION: 2010-2011

A SEMINAR REPORT ON

“Neural Networks & Their Applications”

UNIVERSITY ROLL NO - 0806331112

SUBMITTED BY: VIVEK YADAV, EC (branch), 3rd (year), G (section), 53 (roll no.)

SUBMITTED TO: Mr. MANISH KASHYAP (seminar in-charge)


ACKNOWLEDGEMENT

I express my sincere gratitude and would like to thank the entire staff of G.L.A.I.T.M, Mathura for their help and kind cooperation during my entire seminar preparation. I am extremely thankful to them for providing me with vital information about the topic. I also express my deep gratitude to the Department of Electronics & Communication Engineering, G.L.A.I.T.M, Mathura for its indispensable guidance, generous help, perpetual encouragement, and constant attention offered throughout the preparation of the seminar.

I take this opportunity to pay my sincere thanks to Mr. MANISH KASHYAP, seminar in-charge & lecturer, Electronics & Communication Engineering Department, G.L.A. Institute of Technology & Management, Mathura, for giving me the golden opportunity to present the seminar.

Last but not the least, I would like to thank my parents and all my peers, who have been a constant source of encouragement and inspiration in every walk of life.

VIVEK YADAV


CERTIFICATE

This is to certify that the work presented in the seminar report entitled “NEURAL NETWORKS & THEIR APPLICATIONS” is submitted by me in partial fulfillment of the requirement for the award of the Bachelor of Technology degree in Electronics & Communication Engineering at G.L.A. Institute of Technology & Management, Mathura, affiliated to Uttar Pradesh Technical University, Lucknow.

The matter embodied in this dissertation has not been submitted by me for the award of any other degree.

DATE: 21 APRIL 2011

This is to certify that the above statement made by the candidate is correct to the best of my knowledge.

SUBMITTED BY: VIVEK YADAV, B.TECH. III YEAR (EC), ROLL NO. 0806331112

SUBMITTED TO: MR. MANISH KASHYAP, SEMINAR IN-CHARGE


CONTENTS

1. Abstract
2. Introduction
3. Biological neuron
4. Artificial neuron
5. Different models of artificial neuron
6. Classical activation functions
7. Artificial neural network
8. Qualities of neural network
9. Different architectures of ANN
10. Learning of ANN
11. Advantages & disadvantages of ANN
12. Applications of ANN
13. Recent advances in the field of ANN
14. Conclusion
15. Bibliography


ABSTRACT

Neural networks are inspired by biological nervous systems and are composed of many simple computational elements operating in parallel. In this study the basic components of neural networks are introduced, with a brief description of how they work. The concept of the activation function is also discussed, and different learning algorithms are listed. The advantages, disadvantages, and recent developments in the field of ANNs are also covered.


Introduction

Neural networks have seen an explosion of interest over the last few

years and are being successfully applied across an extraordinary range

of problem domains, in areas as diverse as finance, medicine,

engineering, geology, physics and biology. The excitement stems from

the fact that these networks are attempts to model the capabilities of

the human brain. From a statistical perspective, neural networks are interesting because of their potential use in prediction and classification problems.

Artificial neural networks (ANNs) are a non-linear, data-driven, self-adaptive approach, as opposed to traditional model-based methods. They are powerful tools for modelling, especially when the underlying data relationship is unknown. ANNs can identify and learn correlated patterns between input data sets and corresponding target values. After training, ANNs can be used to predict the outcome of new independent input data. ANNs imitate the learning process of the human brain and can process problems involving non-linear and complex data even if the data are imprecise and noisy. Thus they are ideally suited for the modelling of agricultural data, which are known to be complex and often non-linear.

These networks are “neural” in the sense that they may have been inspired by neuroscience, but not necessarily because they are faithful models of biological neural or cognitive phenomena. In fact, the majority of these networks are more closely related to traditional mathematical and/or statistical models, such as non-parametric pattern classifiers, clustering algorithms, nonlinear filters, and statistical regression models, than they are to neurobiological models.

Neural networks (NNs) have been used for a wide variety of

applications where statistical methods are traditionally employed.

They have been used in classification problems, such as identifying

underwater sonar currents, recognizing speech, and predicting the

secondary structure of globular proteins. In time-series applications,

NNs have been used in predicting stock market performance. These problems are traditionally solved through classical statistical methods, such as discriminant analysis, logistic regression, Bayes analysis, multiple regression, and ARIMA time-series models. It is, therefore, time to recognize neural networks as a powerful tool for data analysis.


The biological neuron

Neurons can be of many types and shapes, but ultimately they function in a similar way and are connected to each other in a rather complex network via strands of fibre called axons. A neuron's axon acts as a transmission line and is connected to another neuron via that neuron's dendrites, which are fibres that emanate from the cell body (soma) of the neuron. The junction that allows transmission between the axons and the dendrites is the synapse. Synapses are elementary structural and functional units that create the signal connection between two or more neurons; sometimes the term refers to the connection as a whole.


FIG 1-: Biological neuron

The artificial neuron

Artificial neurons are information-processing units that are only

approximations (usually very crude ones) of the biological neuron.

Three basic elements of the artificial neuron can be identified:

FIG 2-: Artificial neuron

Input (xi)

Typically, these values are external stimuli from the environment or

come from the outputs of other artificial neurons. They can be

discrete values from a set, such as {0,1}, or real-valued numbers.


Weights (wi)

These are real-valued numbers that determine the contribution of

each input to the neuron's weighted sum and eventually its output.

The goal of neural network training algorithms is to determine the

best possible set of weight values for the problem under

consideration. Finding the optimal set is often a trade-off between

computation time and minimizing the network error.

Threshold (u)

The threshold is referred to as a bias value. In this case, the real

number is added to the weighted sum. For simplicity, the threshold

can be regarded as another input/weight pair, where w0 = u and x0 = −1.

Activation Function (f)

The activation function for the original McCulloch-Pitts neuron was

the unit step function. However, the artificial neuron model has been

expanded to include other functions such as the sigmoid, piecewise

linear, and Gaussian.
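Putting the four elements together, a single artificial neuron can be sketched in a few lines of Python. This is a minimal illustration, not code from the report: the step activation and the sample inputs and weights are arbitrary choices, and the threshold u is folded in as the extra input/weight pair w0 = u, x0 = −1 described above.

```python
# Minimal sketch of a single artificial neuron: inputs xi, weights wi,
# a threshold u folded in as an extra input/weight pair (w0 = u, x0 = -1),
# and an activation function f applied to the weighted sum.

def neuron_output(x, w, u, f):
    s = u * (-1) + sum(wi * xi for wi, xi in zip(w, x))  # weighted sum with bias
    return f(s)

def step(s):
    return 1 if s >= 0 else 0  # unit step activation

print(neuron_output([1, 1], [0.6, 0.6], 1.0, step))  # sum = -1.0 + 1.2 = 0.2 -> 1
```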

Different models of artificial neuron

1. Adaline model
2. Madaline model
3. Rosenblatt model
4. McCulloch-Pitts model
5. Widrow-Hoff model
6. Kohonen model

1. The adaptive linear element (Adaline)

In a simple physical implementation

Fig 3: The Adaline

this device consists of a set of controllable resistors connected to a circuit which can sum up currents caused by the input voltage signals. Usually the central block, the summer, is also followed by a quantiser which outputs either +1 or −1, depending on the polarity of the sum. Although the adaptive process is here exemplified in a case where there is only one output, it should be clear that a system with many parallel outputs is directly implementable by multiple units of the above kind.

If the input conductances are denoted by wi, i = 0, 1, …, n, and the input and output signals by xi and y, respectively, then the output of the central block is defined to be:

    y = Σ (i = 1 to n) wi·xi + θ,    where θ = w0.
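The Adaline forward pass described above can be sketched in Python: a weighted sum of the inputs plus the bias term θ (= w0), followed by a quantiser that outputs +1 or −1 depending on the polarity of the sum. The variable names and sample values are illustrative, not from the report.

```python
# Sketch of a single Adaline unit: summer block plus polarity quantiser.

def adaline_output(inputs, weights, theta):
    """Return the quantised output (+1 or -1) of a single Adaline unit."""
    s = theta + sum(w * x for w, x in zip(weights, inputs))  # summer block
    return 1 if s >= 0 else -1                               # quantiser

# Example: two inputs with hand-picked weights.
print(adaline_output([1.0, -0.5], [0.4, 0.8], 0.1))  # sum = 0.1 + 0.4 - 0.4 = 0.1 -> +1
```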

2. The McCulloch-Pitts model

The earliest model of an artificial neuron was introduced by Warren McCulloch and Walter Pitts in 1943. The McCulloch-Pitts neural model is also known as the linear threshold gate. It has a set of inputs and one output. The linear threshold gate simply classifies the set of inputs into two different classes, so the output is binary. Such a function can be described mathematically using these equations:

    s = Σ (i = 1 to n) wi·xi        (2.1)

    y = 1 if s ≥ T, otherwise 0     (2.2)

Here w1, w2, …, wn are weight values normalized in the range of either (0, 1) or (−1, 1) and associated with each input line, s is the weighted sum, and T is a threshold constant. The function is a linear step function at threshold T, as shown in Fig 4. The symbolic representation of the linear threshold gate is shown in Fig 5.

Fig 4: Linear Threshold Function

Fig 5: The McCulloch-Pitts Model of Neuron
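Equations (2.1) and (2.2) can be sketched directly in Python. The AND-gate weights and threshold below are an illustrative choice, not taken from the report:

```python
# Sketch of the linear threshold gate: the weighted sum s is compared
# against a threshold T to give a binary output.

def linear_threshold_gate(inputs, weights, T):
    s = sum(w * x for w, x in zip(weights, inputs))  # equation (2.1)
    return 1 if s >= T else 0                        # equation (2.2)

# With weights (1, 1) and threshold T = 2 the gate computes logical AND.
for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2, linear_threshold_gate([x1, x2], [1, 1], 2))
```

With weights (1, 1) and threshold T = 2, the gate fires only when both inputs are 1, i.e. it classifies the four input pairs into the two classes of the AND function.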


Classical activation functions

While it is possible to define some arbitrary cost function, frequently a particular cost will be used, either because it has desirable properties (such as convexity) or because it arises naturally from a particular formulation of the problem (e.g., in a probabilistic formulation the posterior probability of the model can be used as an inverse cost). Ultimately, the cost function will depend on the desired task.

Different types of activation functions can be used, and three of them are described here. The most commonly used is

the nonlinear sigmoid activation function, such as the logistic function. A logistic function assumes a continuous range of values from 0 to 1, in contrast to the discrete threshold function. A binary threshold function was used in the first model of an artificial neuron back in 1943, the so-called McCulloch-Pitts model. Threshold functions go by many names, e.g. step function, Heaviside function, hard-limiter, etc. Common to all is that they produce one of two scalar output values (usually 1 and −1, or 0 and 1) depending on the value of the threshold. Another type of activation function is the linear function, sometimes called the identity function since the activation is just the input. In general, if the task is to approximate some function then the output nodes are linear, and if the task is classification then sigmoidal output nodes are used.
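The three function families described above (threshold/step, logistic sigmoid, and linear) can be sketched as plain Python functions; this is a minimal illustration under the definitions given in the text:

```python
import math

# Sketches of the three classical activation functions: the binary
# threshold (step/Heaviside) function, the logistic sigmoid, and the
# linear (identity) function.

def step(s, threshold=0.0):
    return 1 if s >= threshold else 0       # one of two discrete output values

def logistic(s):
    return 1.0 / (1.0 + math.exp(-s))       # continuous range (0, 1)

def linear(s):
    return s                                # the activation is just the input

print(step(0.3), round(logistic(0.0), 2), linear(0.3))  # 1 0.5 0.3
```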

Artificial neural network

Computational models inspired by the human brain:

1. Massively parallel, distributed systems made up of simple processing units (neurons).

2. Synaptic connection strengths among neurons are used to store the acquired knowledge.

3. Knowledge is acquired by the network from its environment through a learning process.


FIG 6: Simple artificial neural network

Qualities of artificial neural network

1. Real time operation

2. Parallel processing

3. Fault tolerance

4. Self-organising

5. Ability to generalize

6. Complete computability

7. Continuous adaptability


Different architectures of ANN

Fig: Back-propagation network

Fig: Multi-layered perceptron network


Fig: Hopfield network

Fig: Kohonen network

Learning of ANN


An ANN learns from its experience. The usual process of learning involves three tasks:

1. Compute the output(s).

2. Compare the outputs with the desired patterns and feed back the error.

3. Adjust the weights and repeat the process.

The learning process starts by setting the weights according to some rule. The difference between the actual output (y) and the desired output (z) is called the error (delta). The objective is to minimize the error, driving it toward zero; the reduction in error is achieved by changing the weights.
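The three tasks above can be sketched for a single linear unit using the delta (Widrow-Hoff) rule; the learning rate, training pair, and iteration count below are illustrative choices, not from the report:

```python
# Sketch of the compute / compare / adjust learning loop for one linear unit.

def train_step(x, z, w, lr=0.1):
    y = sum(wi * xi for wi, xi in zip(w, x))               # 1. compute output
    delta = z - y                                          # 2. compare with desired output z
    return [wi + lr * delta * xi for wi, xi in zip(w, x)]  # 3. adjust the weights

# Repeating the process drives the error toward zero.
w = [0.0, 0.0]
for _ in range(50):
    w = train_step([1.0, 2.0], 1.0, w)
print([round(wi, 2) for wi in w])  # -> [0.2, 0.4], for which y = 0.2 + 0.8 = 1.0
```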

1. Supervised learning, or associative learning, in which the network is trained by providing it with input and matching output patterns. These input-output pairs can be provided by an external teacher, or by the system which contains the neural network (self-supervised).

2. Unsupervised learning, or self-organisation, in which an (output) unit is trained to respond to clusters of patterns within the input. In this paradigm the system is supposed to discover statistically salient features of the input population. Unlike the supervised learning paradigm, there is no a priori set of categories into which the patterns are to be classified; rather, the system must develop its own representation of the input stimuli.


3. Reinforcement learning. This type of learning may be considered an intermediate form of the above two types. Here the learning machine performs some action on the environment and gets a feedback response from the environment. The learning system grades its action as good (rewarding) or bad (punishable) based on the environmental response and adjusts its parameters accordingly. Generally, parameter adjustment is continued until an equilibrium state occurs, after which there are no further changes in the parameters. Self-organizing neural learning may be categorized under this type of learning.
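The act-feedback-adjust loop described above can be sketched with a toy two-action setup; the "environment", its reward, and the update rule below are entirely illustrative assumptions, not from the report:

```python
import random

# Toy sketch of reinforcement learning: the learner tries actions, receives
# a graded reward from the environment, and nudges its value estimates
# until they settle toward an equilibrium.

random.seed(0)  # make the run reproducible

def reward(action):
    return 1.0 if action == "good" else 0.0  # environment grades the action

values = {"good": 0.0, "bad": 0.0}
for _ in range(200):
    action = random.choice(list(values))          # act on the environment
    r = reward(action)                            # feedback response
    values[action] += 0.1 * (r - values[action])  # adjust parameters
print(values["good"] > values["bad"])  # True: the rewarded action is valued higher
```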


Advantages & Disadvantages of ANN

A) Advantages

1. Adapts to unknown situations

2. Robustness: fault tolerance due to network redundancy

3. Autonomous learning and generalization

B) Disadvantages

1. Not exact

2. Large complexity of the network structure


APPLICATIONS OF ANN


RECENT ADVANCES IN ANN FIELD

Integration of fuzzy logic into neural networks

1. Fuzzy logic is a type of logic that recognizes more than simple true and false values, hence better simulating the real world. For example, the statement "today is sunny" might be 100% true if there are no clouds, 80% true if there are a few clouds, 50% true if it's hazy, and 0% true if it rains all day. Hence, it takes into account concepts like "usually", "somewhat", and "sometimes".

2. Fuzzy logic and neural networks have been integrated for uses as

diverse as automotive engineering, applicant screening for jobs,

the control of a crane, and the monitoring of glaucoma.
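The "degrees of truth" in the sunny-day example above can be sketched with a simple membership function; the linear mapping from cloud cover to truth degree is an illustrative assumption, not from the report:

```python
# Toy sketch of fuzzy truth: a membership function assigns the statement
# "today is sunny" a degree of truth between 0 and 1 instead of a strict
# true/false value.

def sunny_truth(cloud_cover):
    """Degree of truth of 'today is sunny' for cloud cover in [0, 1]."""
    return max(0.0, min(1.0, 1.0 - cloud_cover))

print(sunny_truth(0.0), sunny_truth(0.2), sunny_truth(1.0))  # 1.0 0.8 0.0
```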

Pulsed neural networks


1. "Most practical applications of artificial neural networks are based

on a computational model involving the propagation of continuous

variables from one processing unit to the next. In recent years,

data from neurobiological experiments have made it increasingly

clear that biological neural networks, which communicate

through pulses, use the timing of the pulses to transmit

information and perform computation. This realization has

stimulated significant research on pulsed neural networks,

including theoretical analyses and model development,

neurobiological modeling, and hardware implementation."

Hardware specialized for neural networks

1. Some networks have been hardcoded into chips or analog devices; this technology will become more useful as the networks we use become more complex.

2. The primary benefit of directly encoding neural networks onto

chips or specialized analog devices is SPEED!

3. NN hardware currently runs in a few niche areas, such as those

areas where very high performance is required (e.g. high energy

physics) and in embedded applications of simple, hardwired

networks (e.g. voice recognition).

4. Many NNs today use fewer than 100 neurons and only need occasional training. In these situations, software simulation is usually sufficient.

When NN algorithms develop to the point where useful things can be done with thousands of neurons and tens of thousands of synapses, high-performance NN hardware will become essential for practical operation.

CONCLUSION

1. All current NN technologies will most likely be vastly improved upon in the future. Everything from handwriting and speech recognition to stock market prediction will become more sophisticated as researchers develop better training methods and network architectures.

2. Although neural networks do seem to be able to solve many problems, we must keep our exuberance in check; they are not magic! Overconfidence in neural networks can result in costly mistakes, as a rather funny story about the government and neural networks illustrates.

NNs might, in the future, allow:

a. Robots that can see, feel, and predict the world around them

b. Improved stock prediction

c. Common usage of self-driving cars

d. Composition of music

e. Handwritten documents to be automatically transformed into formatted word-processing documents

f. Trends found in the human genome to aid in the understanding of the data compiled by the Human Genome Project

g. Self-diagnosis of medical problems using neural networks

h. And much more!

REFERENCES

1. Hertz, J., Krogh, A., Palmer, R.G. (1991). Introduction to the Theory of Neural Computation. Perseus Books. ISBN 0-201-51560-1.

2. Yegnanarayana, B. (2010). Artificial Neural Networks. PHI Learning. ISBN 978-81-203-1253-1.

3. Haykin, S. Neural Networks: A Comprehensive Foundation.
