
Mrs. A. D. Belsare, Course Teacher 2014-15

Nagar Yuwak Shikshan Sanstha’s Yeshwantrao Chavan College of Engineering

(An Autonomous Institution affiliated to Rashtrasant Tukadoji Maharaj Nagpur University) Hingna Road, Wanadongri, Nagpur - 441 110

Electronics & Telecommunication Engineering

VIII: ET-422: PE5: Fuzzy Logic & Neural Network
VII BE: ET-411: Free Elective: Soft Computing

Question Bank

Neural Network
Supervised Neural Network

1. Using the linear separability concept, obtain the response for the OR function. Use bipolar inputs and targets.

2. Implement AND function using MP Neuron.
3. Implement XOR function using MP Neuron.
4. Implement ANDNOT function using a) MP Neuron model b) Perceptron neural network c) ADALINE NN d) Hebb NN (bipolar inputs & targets)
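A minimal MATLAB sketch of a McCulloch-Pitts (MP) neuron computing the AND function of question 2 is given below; the unit weights and the threshold of 2 are illustrative assumptions, not prescribed values.

% McCulloch-Pitts neuron for AND (illustrative; weights and threshold assumed)
x = [0 0; 0 1; 1 0; 1 1];    % binary input patterns, one per row
w = [1; 1];                  % excitatory weights
theta = 2;                   % threshold: the neuron fires only when both inputs are 1
y = double(x * w >= theta);  % hard-limit (step) activation
disp([x y])                  % third column is the AND output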

5. Implement OR function using Hebb NN (bipolar inputs & targets).
6. Implement XOR function using Hebb NN (bipolar inputs & targets).
7. What are the different learning laws of neural networks? Determine the weights of the network shown in the figure below after one iteration using Hebb's law, the Perceptron law and the LMS law for the following set of input vectors: [1 1 0 0]T, [0 0 1 1]T, [1 0 0 1]T and [0 1 1 0]T. Use the bipolar binary output function.
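For the Hebb-learning parts of questions 5-7, the update is a single pass of delta_w = t*x over the training pairs, as in this MATLAB sketch; the OR targets and zero initial weights are assumptions for illustration.

% One epoch of Hebb learning on bipolar patterns (illustrative sketch)
x = [1 1; 1 -1; -1 1; -1 -1];   % bipolar inputs
t = [1; 1; 1; -1];              % bipolar OR targets (assumed example)
w = [0 0]; b = 0;               % initial weights and bias
for p = 1:size(x, 1)
    w = w + t(p) * x(p, :);     % Hebb rule: delta_w = t * x
    b = b + t(p);               % bias treated as a weight on a constant input of 1
end
disp(w), disp(b)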

8. What are the different activation functions used in neural networks?
9. Using the Hebb rule, find the weights required to perform the following classification of the given input patterns, where '+' => +1 and an empty square ('.') => -1; for 'I' the target is +1 and for 'O' the target is -1:
   'I':  + + +        'O':  + + +
         . + .              + . +
         + + +              + + +
10. For the logic networks shown in the figure below, using the McCulloch-Pitts neuron model, find the truth tables and the logic functions that are implemented by networks (a), (b), (c) and (d).



11. Using the Hebb rule, find the weights required to perform the following classification: the vectors (-1 -1 1 -1) and (1 1 1 -1) belong to class 1 (target = +1); the vectors (-1 -1 1 1) and (1 1 -1 -1) do not belong to class 1 (target = -1). Also, using each of the training vectors as input, test the response of the net.

12. Implement the OR function with binary inputs & bipolar targets using the perceptron training algorithm, up to 3 epochs.
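A sketch of the perceptron training loop asked for in question 12, in MATLAB; the zero initial weights and unit learning rate are assumptions.

% Perceptron training for OR: binary inputs, bipolar targets (illustrative)
x = [0 0; 0 1; 1 0; 1 1];       % binary inputs
t = [-1; 1; 1; 1];              % bipolar targets
w = [0 0]; b = 0; eta = 1;      % initial weights, bias and learning rate (assumed)
for epoch = 1:3
    for p = 1:size(x, 1)
        y = sign(w * x(p, :)' + b);          % bipolar step output
        if y ~= t(p)                         % update only on a wrong response
            w = w + eta * t(p) * x(p, :);
            b = b + eta * t(p);
        end
    end
end
disp([w b])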

13. Explain the logic functions performed by the following networks with binary neurons.
14. What is meant by the topology of artificial neural networks? Give a few basic topological structures of artificial neural networks.
15. Explain different performance evaluation criteria for neural networks.
16. Train the perceptron for the following sequence with η = 1, beginning from w0 = [1 1 1]:
a. Class 1: (3,1), (4,2), (5,3), (6,4)
b. Class 2: (2,2), (1,3), (2,6)

[Figure for question 10: four McCulloch-Pitts networks (a)-(d) built from units with inputs x1, x2, x3 (the last network has inputs x1, x2 only and one weight of -1), unit weights elsewhere, outputs O(x1, x2, x3) / O(x1, x2), and thresholds T = 1, T = 2, T = 2 and T = 1 / T = 0.]


17. The network shown in the figure uses neurons with a continuous activation function. The outputs are measured as o1 = 0.28 and o2 = -0.73. Find the input vector X = [x1 x2]T that has been applied to the network.

18. The network shown in the figure, using neurons with activation function f(net), has been designed to assign the input vectors X1, X2, X3 to cluster 1 or 2. The cluster number is identical to the number of the neuron yielding the larger response. Determine the most likely cluster membership for each of the following three vectors, assuming λ = 2 and the activation function o = 1 / (1 + exp(-λ·net)):
X1 = [0.866, -0.15]T, X2 = [0.985, 0.174]T, X3 = [-0.342, -0.94]T

19. Explain the neural network which uses a nonlinear output function.
20. Calculate the output of neuron Y for the net shown in Fig. 1. Use the binary and bipolar sigmoid activation functions.
21. Classify the two-dimensional patterns shown in Fig. 2 using a perceptron network ('C': target value +1, 'A': target value -1).

22. Implement AND function using McCulloch Pitts neuron with i) bipolar inputs & targets ii) binary inputs & targets

23. Assume the initial weight vector WT = [1 1 0 0.5]. The set of three input vectors is X1 = [1 -2 1.0 0]T, X2 = [1 -0.5 -2 -1.5]T, X3 = [0 1 -1 1.5]T. Obtain the output of the neuron using the binary sigmoidal and bipolar sigmoidal activation functions.


24. Classify Class 1: (0.03,0.01), (0.04,0.02), (0.05,0.03), (0.06,0.04) and Class 2: (0.02,0.02), (0.01,0.03) using a Perceptron neural net with bipolar targets, bias = 1 and initial weights (1, 1, 1).

25. Assume the initial weight vector WT = [1 1 0 0.5]. The set of three input vectors is X1 = [1 -2 1.0 0]T, X2 = [1 -0.5 -2 -1.5]T, X3 = [0 1 -1 1.5]T. Obtain the output of the neuron using the binary sigmoidal activation function.

26. Classify Class 1: (0.03,0.01), (0.04,0.02) and Class 2: (0.02,0.02), (0.01,0.03) using a Perceptron neural net with bipolar targets, bias = 1 and initial weights (1, 1, 1).

27. Implement OR function using ADALINE network with bipolar inputs and targets.

28. Implement an Adaline network to describe the function X1 X2. The learning rate is 0.1 and the initial weights are 0.2, 0.2 with bias 0.2, up to 2 epochs. Use bipolar inputs and targets.
29. Implement an Adaline network to describe the function X1 X2. The learning rate is 0.1 and the initial weights are 0.2, 0.2 with bias 0.2, up to 2 epochs. Use bipolar inputs and targets.

30. Derive the learning rule of Adaline network and explain the algorithm.
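A compact MATLAB sketch of the Adaline delta-rule (LMS) update asked about in questions 27-31; the learning rate 0.1 and initial weights/bias of 0.2 follow the values quoted in those questions, while the OR targets are an assumption.

% Adaline trained with the delta (LMS) rule for two epochs (illustrative)
x = [1 1; 1 -1; -1 1; -1 -1];    % bipolar inputs
t = [1; 1; 1; -1];               % bipolar targets (OR assumed)
w = [0.2 0.2]; b = 0.2;          % initial weights and bias from the question
eta = 0.1;                       % learning rate
for epoch = 1:2
    for p = 1:size(x, 1)
        yin = w * x(p, :)' + b;          % linear net input
        e = t(p) - yin;                  % error
        w = w + eta * e * x(p, :);       % delta rule: dw = eta*(t - y_in)*x
        b = b + eta * e;
    end
end
disp([w b])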

31. Implement an Adaline network to describe the function X1 X2. The learning rate is 0.1 and the initial weights are 0.2, 0.2 with bias 0.2, for 2 epochs. Use bipolar inputs and targets.
32. Illustrate the development of the weights of an ADALINE for the following data: class 1: (0,0,0), (1,1,1), (4,4,4); class 2: (2,2,2), (3,3,3).
33. Discuss the parameters affecting the weight updation in a back-propagation NN, with a suitable example.
34. Derive the weight update rule for the RBF neural network & explain RBF nets.
35. Derive the back-propagation neural network learning algorithm for a nonlinearly separable task and determine the appropriate values of the weight changes (Δw) for the bipolar sigmoidal function.

36. How does the setting of the parameter values in the back-propagation algorithm affect the performance of the network? Explain.
37. Derive the back-propagation training algorithm for the case where the activation function is an arctan function.
38. Discuss the method used for accelerating the learning process of the back-propagation algorithm.
39. Derive the back-propagation neural network learning rule and determine the appropriate equations of the weight changes in a two-layer neural network.
40. Derive the back-propagation neural network learning algorithm for a nonlinearly separable task and determine the appropriate values of the weight changes (Δw) for the bipolar sigmoidal function in a two-layer network.
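As a companion to the derivation questions 35-41, here is a one-step backpropagation sketch in MATLAB for a 2-2-1 network with binary sigmoid units; all weights, biases and the learning rate are placeholder values, not the figure of question 41.

% Single backpropagation step for a 2-2-1 network with binary sigmoid (illustrative)
sig = @(a) 1 ./ (1 + exp(-a));            % binary sigmoid
x = [0; 1]; t = 1; eta = 0.25;            % input pattern, target and learning rate
V = [0.1 -0.2; 0.3 0.4]; bz = [0.1; 0.1]; % input-to-hidden weights and biases (assumed)
W = [0.2 -0.1];          by = 0.1;        % hidden-to-output weights and bias (assumed)
z = sig(V * x + bz);                      % hidden activations
y = sig(W * z + by);                      % output activation
dy = (t - y) * y * (1 - y);               % output error term
dz = (W' * dy) .* z .* (1 - z);           % hidden error terms
W = W + eta * dy * z';   by = by + eta * dy;   % output-layer update
V = V + eta * dz * x';   bz = bz + eta * dz;   % hidden-layer update
disp(y)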


41. Calculate the new weights for the network shown in the figure, using the backpropagation algorithm, for the input pattern [0, 1], target output = 1, learning rate = 0.25 and the binary sigmoidal activation function.

Unsupervised Neural Network

1. Derive the weight update rule for a recurrent neural network with the Williams & Zipser training procedure.

2. Construct & test the Hamming network to cluster three vectors, given e(1) = [1 -1 -1 -1] and e(2) = [-1 -1 -1 1]; the bipolar input vectors are x1 = [-1 -1 1 -1], x2 = [-1 -1 1 1], x3 = [-1 -1 -1 -1].

3. Cluster the following input vectors into three nodes, using the competitive learning algorithm:
T = {i1 = (1.1, 1.7, 1.8), i2 = (0, 0, 0), i3 = (0, 0.5, 1.5), i4 = (1, 0, 0), i5 = (0.5, 0.5, 0.5), i6 = (1, 1, 1)}

4. Explain the learning vector quantization (LVQ) algorithm.

5. Apply the adaptive resonance theory algorithm on the following set of vectors: i1 = {1,1,0,0,0,0,1}, i2 = {0,0,1,1,1,1,0}, i3 = {1,0,1,1,1,0}, i4 = {0,0,0,1,1,1,0}, i5 = {1,1,0,1,1,1,0}. Use vigilance parameter ρ = 0.7.

6. Apply a self-organizing map to the following set of vectors to cluster them into three clusters with the following topology network:
T = {i1 = (1.1, 1.7, 1.8), i2 = (0, 0, 0), i3 = (0, 0.5, 1.5), i4 = (1, 0, 0), i5 = (0.5, 0.5, 0.5), i6 = (1, 1, 1)}

7. Develop the training algorithm for a supervised recurrent network. The network output uses a sigmoid function.

[Figure: a 2-2-1 feedforward network with inputs X1, X2, hidden units Z1, Z2 and output Y1; the connection weights shown are 0.6, -0.1, -0.3, 0.4, 0.5, 0.3, -0.2, 0.1 and 0.4, and the applied input pattern is [0, 1]. This is the network referred to in question 41 of the supervised-network section.]


8. Cluster the following input vectors into three nodes, using the competitive learning algorithm:
T = {i1 = (1.1, 1.7, 1.8), i2 = (0, 0, 0), i3 = (0, 0.5, 1.5), i4 = (1, 0, 0), i5 = (0.5, 0.5, 0.5), i6 = (1, 1, 1)}

9. Apply the adaptive resonance theory algorithm on the following set of vectors: i1 = {1,1,0,0,0,0,1}, i2 = {0,0,1,1,1,1,0}, i3 = {1,0,1,1,1,0}, i4 = {0,0,0,1,1,1,0}, i5 = {1,1,0,1,1,1,0}. Use vigilance parameter ρ = 0.

10. Explain the learning vector quantization algorithm with an example.
11. Illustrate with a neat figure the ART1 architecture & discuss its training algorithm.

12. Design a MaxNet with δ = 0.15 to cluster the input pattern x = [x1, x2, x3, x4] = [0.7, 0.6, 0.1, 0.8]. Show the step-by-step execution of the clustering algorithm.
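For question 12, the MaxNet competition is a repeated mutual-inhibition update until only one node stays positive; a short MATLAB sketch on the quoted activations follows (the stopping rule used here is the usual one and is an assumption).

% MaxNet competition with delta = 0.15 (illustrative sketch)
a = [0.7 0.6 0.1 0.8];                     % initial activations from the question
delta = 0.15;                              % mutual inhibition weight
while nnz(a > 0) > 1                       % stop when a single positive node remains
    a = max(0, a - delta * (sum(a) - a));  % each node is inhibited by all the others
    disp(a)
end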

13. Illustrate with a neat figure the two basic units of an ART1 network.
14. Consider an ART1 net with 5 input units & 3 cluster units. After some training the net attains the bottom-up and top-down weight matrices shown below:
B = [0.2 0 0.2; 0.5 0.8 0.2; 0.5 0.5 0.2; 0.5 0.8 0.2; 0.1 0 0.2]   (5 x 3, bottom-up)
T = [1 1 1 1 1; 0 1 1 1 0; 1 1 1 1 1]                               (3 x 5, top-down)
Show the behaviour of the net when it is presented with the training pattern s = [1, 1, 1, 1, 0]. Assume L = 2 and ρ = 0.8.

15. Explain the Hamming network used for unsupervised learning. Classify the input vector i = (1,1,1,-1,-1) for a Hamming network which is trained with the following patterns: i1 = (1,-1,-1,1,1), i2 = (-1,1,-1,1,-1), i3 = (1,-1,1,-1,1).

16. Check the auto-associative network for the input vector [1 1 -1]. Form the weight matrix with no self-connections. Test whether the net is able to recognize the vector with one missing entry.
17. Develop the training algorithm for a supervised recurrent network. The network output uses a sigmoid function.
18. Explain the Hamming network used for unsupervised learning. Classify the input vector i = {1,1,1,-1,-1} for a Hamming network which is trained with the following patterns: i1 = {1,-1,-1,1,1}, i2 = {-1,1,-1,1,-1}, i3 = {1,-1,1,-1,1}.
19. What are the results obtained using a five-node MaxNet with the initial node output vector (0.5, 0.9, 1, 1, 0.9)? What would be a more desirable value?
20. Explain associative memory.
21. Use the non-iterative procedure of association (matrix associative memory) for the following input and output patterns: (1,1) -> (-1,1) and (1,-1) -> (-1,-1).


22. Use the least-squares procedure to solve the hetero-association problem for the following input and output patterns: (1,1,1 : -1,1), (1,-1,1 : -1,-1), (-1,1,1 : 1,-1).
23. Explain Bidirectional Associative Memory (BAM). Perform hetero-association using BAM for the set of pairs:
A1 = (1 0 0 0 0 1), B1 = (1 1 0 0 0)
A2 = (0 1 1 0 0 0), B2 = (1 0 1 0 0)
A3 = (0 0 1 0 1 1), B3 = (0 1 1 1 0)

24. Consider the following patterns: A1 = (-1, 1, -1, 1), A2 = (1, 1, 1, -1), A3 = (-1, -1, -1, 1). Use a Hopfield associative memory to recognize the stored patterns as well as the noisy pattern A' = (1, 1, 1, 1).

25. Establish the association between the following input and output pairs using BAM:
A1 = (1, 1, -1, -1), B1 = (1, 1)
A2 = (1, 1, 1, 1),   B2 = (1, -1)
A3 = (-1, -1, 1, 1), B3 = (-1, 1)

26. Explain with an example the concept of the energy function for BAM.
27. Recall the noisy pattern A' = (1, 1, 1, -1, -1, 1) by exponential bidirectional association, given the stored pairs:
A1 = (-1, -1, -1, -1, -1, 1), B1 = (1, 1, -1, -1, -1)
A2 = (-1, 1, 1, -1, -1, -1),  B2 = (1, -1, 1, -1, -1)
A3 = (-1, -1, 1, -1, 1, 1),   B3 = (-1, 1, 1, 1, -1)


Fuzzy Logic

1. For the two fuzzy sets
A = 1/1.0 + 0.75/1.5 + 0.3/2.0 + 0.15/2.5 + 0/3.0
B = 1/1.0 + 0.6/1.5 + 0.2/2.0 + 0.1/2.5 + 0/3.0
find A ∪ B, A ∩ B and the complement (A ∪ B)c.
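Question 1 can be checked numerically with element-wise max, min and complement operations on the membership vectors, as in this MATLAB sketch.

% Standard fuzzy union, intersection and complement (illustrative check for Q1)
xs = [1.0 1.5 2.0 2.5 3.0];     % common support
A  = [1 0.75 0.3 0.15 0];       % membership grades of A
B  = [1 0.6  0.2 0.1  0];       % membership grades of B
unionAB   = max(A, B);          % A union B
interAB   = min(A, B);          % A intersection B
compUnion = 1 - unionAB;        % complement of (A union B)
disp([xs; unionAB; interAB; compUnion])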

2. For the following fuzzy sets defined on the interval X = [0, 10] of real numbers with membership functions A(x) = 2^(-x) and B(x) = 1 / (1 + 10(x - 2)^2), find the algebraic sum and the bounded sum.

3. Let the membership grade functions of fuzzy sets A & B be A(x) = x / (x + 2) and B(x) = 1 / (x + 10) on the universal set X = {0, 1, 2, ..., 10}. Let f(x) = x^2 for all x ∈ X. Using the extension principle, find f(A) and f(B).

4. Let P = Q = {0, 1, 2} & R = {0, 0.5, 1.5, 2} be crisp sets and let f: P × Q → R be a function signifying the mean of two numbers, f(x, y) = (x + y)/2 for all x ∈ P, y ∈ Q. Construct the function table for f. Now consider the fuzzy set A representing the closeness of a number x to 1 and B representing the distance from 1; A & B are defined on the reference sets P & Q respectively as A = 0.5/0 + 1/1 + 0.5/2 and B = 1/0 + 0/1 + 1/2. Extend the function f: P × Q → R to f: A × B → C using the extension principle.

5. Let A & B be fuzzy sets defined on the universal set X = Z whose membership functions are given by A = 0.5/(-1) + 1/0 + 0.5/1 + 0.3/2 and B = 0.5/2 + 1/3 + 0.5/4 + 0.3/5. Let a function f: X × X → X be defined for all x1, x2 ∈ X by f(x1, x2) = x1 + x2. Calculate f(A, B).

6. State and prove the third decomposition theorem of fuzzy sets. Consider A = 0.5/x1 + 0.4/x2 + 0.7/x3 + 0.8/x4 + 1/x5.
7. Consider the fuzzy sets defined on the interval X = [0, 5] of real numbers by the membership grade functions μA(x) = x / (x + 2) and μB(x) = 2^(-x). Determine the mathematical formulae & graphs of the membership grade functions of each of the following sets: i. Ā ∪ B ii. A ∩ B iii. the complement of (A ∪ B).

8. If the fuzzy numbers 4 and 3 are given by 4 = 0.5/2 + 1/3 + 0.5/4 and 3 = 0.5/5 + 1/6 + 0.5/7, then find 12 and 7 using the extension principle.

9. Given a t-norm i and an involutive fuzzy complement c, the binary operation u on [0, 1] defined by u(a, b) = c(i(c(a), c(b))) for all a, b ∈ [0, 1] is a t-conorm such that <i, u, c> is a dual triple. Prove only the boundary condition, monotonicity and associativity axioms for the t-conorm.


10. Give the axiomatic skeleton for t-conorms. Define & calculate the standard union, algebraic sum, bounded sum & drastic union for the fuzzy sets A = 0.65/x + 0.1/y + 0.5/z and B = 0.65/x + 0/y + 1/z.

11. For a fuzzy set, a normal, convex membership function is defined for the fuzzy number 1 as 1 = 0.2/0 + 1/1 + 0.2/2. Perform 1 + 1 using the extension principle.
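A brute-force MATLAB sketch of the extension principle for question 11: every pair of supports is pushed through f(x, y) = x + y and the grade of each result is the sup of the min of the two memberships.

% Extension principle for the sum of two fuzzy numbers (illustrative, Q11)
xs = [0 1 2]; mu = [0.2 1 0.2];    % the fuzzy number 1 from the question
zs = 0:4;                          % possible sums of the supports
muz = zeros(size(zs));
for i = 1:numel(xs)
    for j = 1:numel(xs)
        z = xs(i) + xs(j);                        % f(x, y) = x + y
        k = find(zs == z);
        muz(k) = max(muz(k), min(mu(i), mu(j)));  % sup-min over all pairs
    end
end
disp([zs; muz])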

12. The fuzzy sets A & B are given as A = 0.2/1 + 1/2 + 0.7/4 and B = 0.5/1 + 1/2. Find f(A, B) = A × B using the extension principle.

13. Consider the crisp domains P = {3, 4, 5}, Q = {6, 7, 8} & R = {0, 1, 2}. The following table shows the function f: P × Q → R, where f is defined as addition modulo 3:
          Q = 6   7   8
  P = 3       0   1   2
      4       1   2   0
      5       2   0   1
Now consider the fuzzy sets A & B on P & Q respectively: A = 0.1/3 + 0.8/4 + 0.5/5 and B = 0.6/6 + 0.2/7 + 0.7/8. Using the extension principle, extend the function f = addition modulo 3 & fuzzy addition modulo 3 from A × B to C, where C is a fuzzy set defined on R (x ∈ R for the new set).

14. Consider a LAN of interconnected workstations that communicate using Ethernet protocols at a maximum rate of 12 Mbit/s. The two fuzzy sets given below represent the loading of the LAN, where s stands for 'silent' & c stands for 'congestion':
μs(x) = 1/0 + 1/1 + 0.8/2 + 0.2/5 + 0.1/7 + 0/9 + 0/10
μc(x) = 0/0 + 0/1 + 0/2 + 0.5/5 + 0.7/7 + 0.8/9 + 1/10
Perform the algebraic sum, algebraic product, bounded sum & bounded difference over these two fuzzy sets.

15. Let A, B be two fuzzy numbers with
A(x) = 0 for x ≤ 3 and x > 5;  A(x) = x - 3 for 3 < x ≤ 4;  A(x) = 5 - x for 4 < x ≤ 5
B(x) = 0 for x ≤ 12 and x > 32;  B(x) = (x - 12)/8 for 12 < x ≤ 20;  B(x) = (32 - x)/12 for 20 < x ≤ 32
Solve the following equations for X: A·X = B and A + X = B.


16. Let
P(x) = 0 for x ≤ -1 and x > 3;  P(x) = (x + 1)/2 for -1 < x ≤ 1;  P(x) = (3 - x)/2 for 1 < x ≤ 3
Q(x) = 0 for x ≤ 1 and x > 5;  Q(x) = (x - 1)/2 for 1 < x ≤ 3;  Q(x) = (5 - x)/2 for 3 < x ≤ 5
Perform P - Q and P + Q.

17. Determine the equivalent resistance of the circuit shown in the figure, where R1 and R2 are fuzzy sets describing the resistance of resistors R1 and R2, respectively, expressed in ohms. Since the resistors are in series, they can be added arithmetically. Using the extension principle, find the equivalent resistance Req = R1 + R2. The membership functions for the two resistors are
R1 = 0.5/3 + 1.0/4 + 0.6/5
R2 = 0.4/8 + 1.0/9 + 0.3/10

18. Let μy = 0.5/20 + 1/25 + 0.5/30 and μt = 0.5/175 + 1/180 + 1/185 + 1/190, where μy & μt are fuzzy sets of 'young men' & 'tall men' respectively. Find the relations 'young tall men' & 'young but not tall men'.

19. Consider a universe of discourse of TA marks out of 10. If
good score = 0.1/5 + 0.4/6 + 0.6/7 + 0.9/8 + 1/9 + 1/10
bad score = 1/1 + 0.9/2 + 0.8/3 + 0.7/4 + 0.4/5 + 1/0
then construct the following compound fuzzy sets:
a. Very Good Score
b. Not Bad Score
c. Not Bad Score But Not Very Good Score

20. Let U = V = {0, 1, 2, 3, 4} be the universe of discourse on which the fuzzy set small = 1.0/0 + 0.5/1 + 0.2/2 + 0.1/3 + 0.0/4 is defined. Again, let R be the fuzzy relation 'more or less the same', defined by the relation matrix
            0    1    2    3    4
R =  0  [  1   0.5  0.1   0    0 ]
     1  [ 0.5   1   0.5  0.1   0 ]
     2  [ 0.1  0.5   1   0.5  0.1]
     3  [  0   0.1  0.5   1   0.5]
     4  [  0    0   0.1  0.5   1 ]
If the premise & the rule are stated as
Premise: x is small
Rule: x is more or less the same as y
then apply a suitable fuzzy rule of inference to obtain the conclusion & express it suitably as a relation.

21. Consider a set P = {P1, P2, P3, P4} of four varieties of paddy plants, a set D = {D1, D2, D3, D4} of the various diseases affecting the plants, and S = {S1, S2, S3, S4}, the common symptoms of the various diseases. Let R be the relation on P × D and S be the relation on D × S:
            D1   D2   D3   D4
R =  P1  [ 0.6  0.6  0.9  0.8 ]
     P2  [ 0.1  0.2  0.9  0.8 ]
     P3  [ 0.9  0.3  0.4  0.8 ]
     P4  [ 0.9  0.8  0.1  0.2 ]

            S1   S2   S3   S4
S =  D1  [ 0.1  0.2  0.7  0.9 ]
     D2  [  1    1   0.4  0.6 ]
     D3  [  0    0   0.5  0.9 ]
     D4  [ 0.9   1   0.8  0.2 ]
Obtain the association of the plants with the different symptoms of the diseases using max-min composition.

22. State and prove the third decomposition theorem of fuzzy sets. Consider A = 0.5/x1 + 0.4/x2 + 0.7/x3 + 0.8/x4 + 1/x5.

23. A fuzzy tolerance relation R is reflexive and symmetric. Find the equivalence relation Re and then classify it according to the λ-cut levels {0.9, 0.7, 0.5, 0.4}.
R = [  1   0.7   0   0.2  0.1
      0.7   1   0.9   0   0.4
       0   0.9   1    0   0.3
      0.2   0    0    1   0.5
      0.1  0.4  0.3  0.5   1  ]

24. In a pattern recognition test, four unknown patterns need to be classified according to three known patterns (primitives) a, b and c. The relationship between the primitives and the unknown patterns is given in the following table:
       X1   X2   X3   X4
  a   0.6  0.2  0.0  0.8
  b   0.3  0.3  0.8  0.1
  c   0.1  0.5  0.2  0.1
If the λ-cut level is 0.5, then into how many classes can these patterns be divided? Hint: use the max-min method to first generate a fuzzy similarity relation R.

25. In the field of computer networking there is an imprecise relationship between the level of use of the network communication bandwidth and the latency experienced in peer-to-peer communication. Let X be a fuzzy set of use levels (in terms of the percentage of full bandwidth used) and Y be a fuzzy set of latencies (in milliseconds), with the following membership functions:
X = 0.2/10 + 0.5/20 + 0.8/40 + 1.0/60 + 0.6/80 + 0.1/100
Y = 0.3/0.5 + 0.6/1 + 0.9/1.5 + 1.0/4 + 0.6/8 + 0.3/20
a. Find the Cartesian product represented by the relation R = X × Y. Now suppose we have a second fuzzy set of bandwidth usage given by
Z = 0.3/10 + 0.6/20 + 0.7/40 + 0.9/60 + 1/80 + 0.5/100

26. Find S(1x6) = Z(1x6) ∘ R(6x6) using (1) max-min composition and (2) max-product composition.

27. The three variables of interest in the MOSFET are the amount of current that can be switched, the voltage that can be switched and the cost. The following membership functions for the transistor were developed:
current I = 0.4/0.8 + 0.7/0.9 + 1/1 + 0.8/1.1 + 0.6/1.2
voltage V = 0.2/30 + 0.8/45 + 1/60 + 0.9/75 + 0.7/90
cost C = 0.4/0.5 + 1/0.6 + 0.5/0.7
28. The power is given by P = V·I.
a. Find the fuzzy Cartesian product P = V × I.
b. Find the fuzzy Cartesian product T = I × C.
c. Using max-min composition, find E = P ∘ T.
d. Using max-product composition, find E = P ∘ T.

29. Relating earthquake intensity to ground acceleration is an imprecise science. Suppose we have a universe of earthquake intensities I = {5, 6, 7, 8, 9} and a universe of accelerations A = {0.2, 0.4, 0.6, 0.8, 1.0, 1.2} in g's. The following fuzzy relation R exists on the Cartesian space I × A:
R = [ 0.75   1   0.65  0.4   0.2  0.1
      0.5   0.9   1    0.65  0.3   0
      0.1   0.4  0.7    1    0.6   0
      0.1   0.2  0.4   0.9    1   0.6
       0    0.1  0.3   0.45  0.8   1  ]
The fuzzy set 'intensity about 7' is defined as I7 = 0.1/5 + 0.6/6 + 1/7 + 0.8/8 + 0.4/9. Determine the fuzzy membership of I7 on the universe of accelerations A.

30. Two companies bid for a contract. The fuzzy sets of the two companies, B1 and B2, are shown in the following figure. Find the defuzzified value z* using different methods.


31. Amplifier capacity on a normalized universe, say [0, 100], can be linguistically defined by fuzzy variables such as
powerful = 0/1 + 0.2/10 + 0.6/50 + 0.9/100
weak = 0.9/1 + 0.8/10 + 0.2/50 + 0.1/100
Find the membership functions for the following linguistic phrases used to describe the capacity of various amplifiers:
a. Powerful and not weak
b. Very powerful or very weak
c. Very, very powerful and not weak

32. In a computer system, performance depends to a large extent on the relative speed of the components making up the system. The 'speeds' of the CPU and memory are important factors in determining the limits of operating speed in terms of instructions executed per unit time:
fast = 0/0 + 0/1 + 0.1/4 + 0.3/8 + 0.5/20 + 0.7/45 + 1/100
slow = 1/0 + 0.9/1 + 0.8/4 + 0.5/8 + 0.2/20 + 0.1/45 + 0/100
33. Calculate the membership functions for the phrases:
- Not very fast and slightly slow
- Very, very fast and not slow
- Very slow and not fast

34. Give artificial neural network architectures: write the types of architectures, draw the architecture diagrams and write the formulas.
35. Explain the single-layer feed-forward network: draw the network diagram and write the step-by-step procedure.
36. Explain the multilayer feed-forward network: draw the network diagram and write the step-by-step procedure.


Genetic Algorithm

1. Explain the roulette-wheel selection method in Genetic Algorithms.
2. Explain the mutation operation in Genetic Algorithms with a suitable example.
3. Maximize the function f(x) = x^2 with x = [13, 24, 8, 19]. Use binary encoding and roulette-wheel selection.
4. Explain the methods of crossover with a suitable example.
5. What are the different types of encoding? Explain value encoding and binary encoding in genetic algorithms from the known lower and upper bound values of a 4-bit string.
6. Maximize the function f(x) = x^2 with x = [13, 24, 8, 19]. Use binary encoding and rank selection.
7. Explain the methods of crossover with a suitable example.
8. What are the different types of encoding? Explain value encoding and binary encoding in genetic algorithms from the known lower and upper bound values of a 4-bit string.
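For questions 1, 3 and 6, fitness-proportionate (roulette-wheel) selection can be sketched in a few lines of MATLAB; the population below is the one quoted in question 3, and drawing one parent per wheel spin is an assumed convention.

% Roulette-wheel selection for f(x) = x^2 (illustrative sketch)
x = [13 24 8 19];                  % initial population from question 3
f = x .^ 2;                        % fitness values
p = f / sum(f);                    % selection probabilities
c = cumsum(p);                     % cumulative wheel
picked = zeros(1, numel(x));
for k = 1:numel(x)                 % one spin per slot in the mating pool
    r = rand;
    picked(k) = x(find(c >= r, 1, 'first'));
end
disp(picked)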


One Mark Problems:

Neural network

1. Use the non-iterative procedure of association (matrix associative memory) for the following input and output patterns: (1,1) -> (-1,1) and (1,-1) -> (-1,-1).
2. What are the results obtained using a five-node MaxNet with the initial node output vector (0.5, 0.9, 1, 1, 0.9)?
3. Find the winner among the three nodes with the weight matrix
[0.2 0.7 0.3
 0.1 0.1 0.9
 1   1   1  ]
if the pattern presented is (1.1, 1.7, 1.8), and also update the corresponding weights using competitive learning.

4. What is the difference between LVQ and clustering?
5. Draw the ADALINE neural net model and write the delta rule for the adjustment of weights in the ADALINE network.
6. What is winner-takes-all or competitive learning?
7. Discuss the important features of Kohonen self-organizing maps.
8. Implement the OR function using the MP neuron model.
9. Draw the network architectures for MADALINE I and MADALINE II.
10. Implement the AND logic function using the MP neuron model.
11. Discuss linear separability using an example. Is the XOR gate linearly separable?
12. What are the different learning methods in neural networks? Give one example of each.
13. Discuss the stability-plasticity dilemma & explain how the ART network solves this problem.
14. Give the difference between auto-associative & hetero-associative neural models. Also give the learning laws for them.
15. Implement the XOR logic function using the MP neuron model.
16. Realize a Hebb net for the OR function with bipolar inputs & targets.
17. Write the weight updation rule for a recurrent network & draw a fully connected recurrent NN.
18. Give the features of KSOM.
19. Describe BAM.
20. Implement the OR function using the MP neuron model.
21. Draw the network architectures for MADALINE I and MADALINE II and state the difference between the two.
22. How does the initialization of weights affect the performance of the BPN training algorithm?
23. What is an RBF net? How can it be applied to the corner isolation problem with one RBF node and four RBF nodes? Draw a labelled NN architecture.


Fuzzy Logic

1. Verify the fuzzy De Morgan's laws with the following fuzzy sets A & B: A = 1/0 + 0.5/1 + 0/2 and B = 0/0 + 0.5/1 + 1/2.
2. Find & draw the alpha cuts & strong alpha cuts for the following fuzzy sets: A = 1/0 + 0.5/1 + 0/2 and B = 0/0 + 0.5/1 + 1/2.

3. Give different fuzzy complement definitions & prove the involutive property for the fuzzy complement.
4. Prove the second decomposition theorem for a fuzzy set A using a graph; also give its statement.

5. Find all alpha cuts and strong alpha cuts for the fuzzy set A = 0.5/x1 + 0.4/x2 + 0.7/x3 + 0.8/x4.
6. List the axioms required for fuzzy t-norms and define the algebraic product for t-norms.

7. Does the function c(a) = (1 - a)^w qualify for each w > 0 as a fuzzy complement? Plot the function for w > 1; w = [0, 5].

8. List the properties of alpha cuts for fuzzy sets. Give and plot all alpha cuts for the following fuzzy set: A = 0.6/a + 0.1/b + 0.5/c + 0.4/d + 0.7/e.

9. Find the defuzzified value by the weighted average method for the membership functions shown in the figure.
10. Find the defuzzified values using (a) the centre of sums method and (b) the centre of largest area method for the figure shown.
11. State and prove the excluded middle laws and De Morgan's laws for classical sets.
12. What are the basic elements of a fuzzy logic control system?


Genetic Algorithm
1. What is value encoding and binary encoding in GA?
2. Perform the crossover operation on the parents given below with mask crossover: P1: 1 0 1 1 0 1 1 and P2: 0 1 1 0 0 1 0.
3. What are the three basic operators of a simple genetic algorithm?
4. Perform crossover and inversion on P1: 0 0 1 1 0 0 1 and P2: 1 1 1 0 0 1 1 with crossover sites 3 and 6.


Programming Exercise

1. Write a MATLAB program to generate the following activation functions that are used in neural networks, using their basic equations. Also plot them, showing grid lines, a title and an xlabel; use axis square. The activation functions are defined as:
a. Linear activation: f(x) = x, for all x
b. Binary sigmoid function: f(x) = 1 / (1 + exp(-x))
c. Bipolar sigmoid function: f(x) = (1 - exp(-x)) / (1 + exp(-x))
d. Radial basis function: f(x) = exp(-I), where I = sum_{i=1..N} (Wi(t) - Xi(t))^2 / (2*sigma^2)
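A minimal sketch of exercise 1 for the two sigmoid curves; the plotting range of -5 to 5 is an arbitrary choice.

% Plot binary and bipolar sigmoid activation functions (illustrative sketch)
x = -5:0.1:5;                          % plotting range (assumed)
fb  = 1 ./ (1 + exp(-x));              % binary sigmoid
fbp = (1 - exp(-x)) ./ (1 + exp(-x));  % bipolar sigmoid
plot(x, fb, x, fbp), grid on, axis square
title('Sigmoid activation functions'), xlabel('x')
legend('binary sigmoid', 'bipolar sigmoid')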

6. Write a MATLAB function for a McCulloch-Pitts neural net and generate the OR function using this created function.

7. Using the Hebb rule, find the weights required to perform the following classification: the vectors (1 -1 1 -1) and (1 1 1 -1) belong to the class with target +1; the vectors (-1 -1 1 1) and (1 1 -1 -1) do not belong to the class (target -1).

8. Write a MATLAB program for a Hebb net to classify the two-dimensional input patterns in bipolar form with the targets given below; '*' indicates +1 and '.' indicates -1.
[Pixel patterns of the letters 'E' and 'F', drawn with '*' and '.'; (E) has target +1 and (F) has target -1.]

9. Using the unit step activation function, determine a set of weights using a McCulloch-Pitts neural net, by writing an M-file, for the following data:
X1     X2     O/P
-0.2   0.5    0
0.2   -0.5    0
0.8   -0.5    1
0.8    0.8    1


10. Consider the surface described by z = sin(x)cos(y), defined on the square -3 ≤ x ≤ 3, -3 ≤ y ≤ 3. Plot the surface z as a function of x & y and design a neural network which will fit the data. Hint: use 25 neurons in the first layer; set show = 50, learning rate = 0.05, goal = 1e-3 and 300 epochs before training the network.

11. Design a Hebb net as a gate traffic controller across the road, to control the accidents occurring on the road regularly. The gate across the road comes down when the traffic light is red & the pedestrian light is green. Also plot the classification line and the input/output target vectors. The state table for the traffic controller is shown below:
Traffic Signal   Pedestrian Signal   Gate
Green (2)        Red (0)             Up (0)
Yellow (1)       Yellow (1)          Up (0)
Red (0)          Green (2)           Down (1)
Hint: first encode the light colours as given in brackets to translate the state table into numbers; the learning function for the input weights and layer weights is to be set to the Hebb learning rule 'learnh'; use 'trainr' as the train function for the net and 'trains' as the train function for adaption; the number of epochs should be 150.

12. Implement the XOR logic function using the perceptron learning algorithm.
13. Using the Perceptron Learning Law, design a classifier for the following problem:
Class C1: [-2 2]', [-2 1.5]', [-2 0]', [1 0]' and [3 0]'
Class C2: [1 3]', [3 3]', [1 2]', [3 2]' and [10 0]'
14. For the following 2-class problem, determine the decision boundaries obtained by perceptron learning laws:
Class C1: [-2 2]', [-2 3]', [-1 1]', [-1 4]', [0 0]', [0 1]', [0 2]', [0 3]' and [1 1]'
Class C2: [1 0]', [2 1]', [3 -1]', [3 1]', [3 2]', [4 -2]', [4 1]', [5 -1]' and [5 0]'
15. Determine the weights of a network with 4 input and 2 output units using the Perceptron Learning Law for the following input-output pairs:
Input: [1100]' [1001]' [0011]' [0110]'
Output: [11]' [10]' [01]' [00]'
Discuss your results for different choices of the learning rate parameter. Use suitable values for the initial weights.

16. Design a backpropagation NN for the diagnosis of diabetes. Assume the data as per the table below (0: not diabetic, 1: diabetic, 2: probably diabetic):
Family History  Obese  Thirst  Increased Urination  Increased Urination (night)  Adult  Diagnosis
1  1  1  1  1  1  1
1  1  0  0  0  1  2
1  1  1  0  0  1  2
1  1  1  1  1  0  1
0  1  1  1  1  1  1
0  0  1  1  1  1  1
0  0  0  1  1  0  0
1  0  0  0  0  1  0
0  1  1  0  0  1  0
0  0  1  0  1  1  1
0  1  0  0  1  1  0
Use NN commands to train & test the BPN. Keep hidden neurons = 25, learning rate = 0.1, show parameter = 50, momentum constant = 0.9, epochs = 15000 and performance goal = 1e-15. Hint: you can use newpr, train, sim. Test the network for the data given below:
Family History  Obese  Thirst  Increased Urination  Increased Urination (night)  Adult  Diagnosis
1  1  1  1  1  1  ?
0  0  0  0  0  1  ?

17. Write a MATLAB program to test an auto-associative network using the outer product rule for storing the input vectors x1 = [1 -1 1 -1] and x2 = [1 1 -1 -1].
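Exercise 17 amounts to summing outer products of the stored vectors and zeroing the diagonal; a rough MATLAB sketch (not a full test program) follows.

% Auto-associative memory by the outer-product (Hebb) rule (illustrative)
x1 = [1 -1 1 -1]; x2 = [1 1 -1 -1];   % vectors to be stored
W = x1' * x1 + x2' * x2;              % sum of outer products
W = W - diag(diag(W));                % remove self-connections
y = sign([1 -1 1 -1] * W);            % recall with one of the stored vectors
disp(y)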

18. Write a program to implement Kohonen self-organizing feature maps for the given input pattern x = [1 1 0 0; 0 0 0 1; 1 0 0 0; 0 0 1 1], using a learning rate of 0.6.
% Program
clear all; clc;
disp('Kohonen self organizing feature maps');
disp('The input patterns are');
x = [1 1 0 0; 0 0 0 1; 1 0 0 0; 0 0 1 1]
t = 1;
alpha(t) = 0.6;
e = 1;
disp('Since we have 4 input patterns and the number of cluster units to be formed is 2, the weight matrix is');
w = [0.2 0.8; 0.6 0.4; 0.5 0.7; 0.9 0.3]
disp('The learning rate of this epoch is');
alpha
[w, alpha] = ksomfea(x, w, e, alpha);
disp('Weight updation');
w
disp('Learning rate updated for epoch');
alpha
Write a function 'ksomfea' for Kohonen self-organizing feature maps as given below:
function [w, alpha] = ksomfea(x, w, e, alpha)
% x is the input matrix
% w is the weight matrix
% e is the number of epochs
% alpha is the learning rate for the self-organizing map
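A minimal body for the 'ksomfea' helper used by the program above, written as a sketch: the nearest cluster unit is found by squared Euclidean distance, the winner's weights are pulled toward the input, and the learning rate is halved after every epoch. The exact update schedule is an assumption; adapt it to the lecture notes.

function [w, alpha] = ksomfea(x, w, e, alpha)
% x is the input matrix, one pattern per row
% w is the weight matrix, one column per cluster unit
% e is the number of epochs
% alpha is the learning rate vector, one entry per epoch (extended inside)
for t = 1:e
    for p = 1:size(x, 1)
        d = sum((w - repmat(x(p, :)', 1, size(w, 2))).^2, 1);  % squared distances to each unit
        [~, J] = min(d);                                       % winning cluster unit
        w(:, J) = w(:, J) + alpha(t) * (x(p, :)' - w(:, J));   % move the winner toward the input
    end
    alpha(t + 1) = 0.5 * alpha(t);                             % decay the learning rate (assumed schedule)
end
end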

19. Write a MATLAB program for an ART1 neural net with four F1 units & three F2 units. The weights after some training are given as bij = [0.57 0 0.3; 0 0 0.3; 0 0.57 0.3; 0 0.47 0.3] and tij = [1 1 0 0; 1 0 0 1; 1 1 1 1]. Find the new weights after the vector [1 0 1 1] is presented, if the vigilance parameter is 0.4.

20. Using MATLAB commands, draw the triangular & Gaussian membership functions for x = 0 to 10 with an increment of 0.1. The triangular membership function is defined by [5 6 7] & the Gaussian function is defined by the parameters 2 & 4.
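Exercise 20 maps directly onto the Fuzzy Logic Toolbox membership functions; a sketch follows, reading the Gaussian parameters as [sigma c] = [2 4] (an assumption).

% Triangular and Gaussian membership functions (requires the Fuzzy Logic Toolbox)
x = 0:0.1:10;                 % universe with an increment of 0.1
mf1 = trimf(x, [5 6 7]);      % triangular MF defined by [5 6 7]
mf2 = gaussmf(x, [2 4]);      % Gaussian MF with sigma = 2 and centre = 4 (assumed reading)
plot(x, mf1, x, mf2), grid on
xlabel('x'), legend('triangular [5 6 7]', 'Gaussian sigma = 2, c = 4')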

21. Using the following equation, find the membership function for a fuzzy set:
μ(x) = 0                           if x ≤ a
μ(x) = 2((x - a) / (b - a))^2      if a ≤ x ≤ m
μ(x) = 1 - 2((x - b) / (b - a))^2  if m ≤ x ≤ b
μ(x) = 1                           if x ≥ b

22. Consider the three fuzzy sets and one null set
A = 0/2 + 1/4 + 0.5/6 + 0.4/8 + 0.6/10
B = 0/2 + 0.5/4 + 0.7/6 + 0.8/8 + 0.4/10
C = 0.3/2 + 0.9/4 + 0.2/6 + 0/8 + 1/10
Write a program to implement the fuzzy set operations and properties.
23. Consider the following fuzzy sets:
A = 0.8/10 + 0.3/15 + 0.6/20 + 0.2/25
B = 0.4/10 + 0.2/15 + 0.9/20 + 0.1/25
Verify De Morgan's laws, complement(A ∪ B) = complement(A) ∩ complement(B) and complement(A ∩ B) = complement(A) ∪ complement(B), using a MATLAB program.


25. An engineer is testing the properties, strength and weight, of steel. Suppose he has two fuzzy sets: A, defined on a universe of three discrete strengths {s1, s2, s3}, and B, defined on a universe of three discrete weights {w1, w2, w3}. Suppose A & B represent a 'high-strength steel' & a 'near-optimum weight', respectively, as shown:
A = 1/s1 + 0.5/s2 + 0.2/s3
B = 1/w1 + 0.5/w2 + 0.3/w3
a. Find the fuzzy relation for the Cartesian product of A & B. Here the Cartesian product would represent the strength-weight characteristics of a near-maximum steel quality.
b. Suppose we introduce another fuzzy set C, which represents a set of moderately good steel strengths, as given below:
C = 0.1/s1 + 0.6/s2 + 1/s3
Find the relation between C & B using the Cartesian product.
c. Find C o R using max-min composition & C . R using max-product composition.

26. Find whether the given matrix is transitive or not:
R = [1 1 0 0 0
     1 1 0 0 0
     0 0 1 0 0
     0 0 0 1 1
     0 0 0 1 1]

28. Find whether the following relation is an equivalence relation or not, using a MATLAB program:
R = [ 1   0.8  0.4  0.5  0.8
     0.8   1   0.4  0.5  0.9
     0.4  0.4   1   0.4  0.4
     0.5  0.5  0.4   1   0.5
     0.8  0.9  0.4  0.5   1 ]

29. Find whether the following relation is a tolerance relation or not, by writing a MATLAB program:
R = [1 1 0 0 0
     1 1 0 0 0
     0 0 1 0 0
     0 0 0 1 1
     0 0 0 1 1]

30. Using a MATLAB program, find the crisp λ-cut relation for λ = 0.6. The fuzzy matrix is given by
R = [0.1  0.6  0.8   1
      1   0.7  0.4  0.2
      0   0.6   1   0.5
     0.1  0.5   1   0.9]
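The λ-cut of exercise 30 is just an element-wise threshold comparison, for example:

% Crisp lambda-cut relation for lambda = 0.6 (illustrative sketch)
R = [0.1 0.6 0.8 1; 1 0.7 0.4 0.2; 0 0.6 1 0.5; 0.1 0.5 1 0.9];
lambda = 0.6;
Rcut = double(R >= lambda);   % 1 where the membership is >= lambda, else 0
disp(Rcut)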

31. Design a fuzzy controller for the following systems using the Fuzzy Logic Toolbox of MATLAB: a. Train braking system b. Water heater controller


Mrs. A. D. Belsare

Asst. Prof.

ET, YCCE

Nagpur