
Page 1:

11/27/2011. TÁMOP – 4.1.2-08/2/A/KMR-2009-0006 1

Development of Complex Curricula for Molecular Bionics and Infobionics Programs within a consortial* framework**

Consortium leader

PETER PAZMANY CATHOLIC UNIVERSITY

Consortium members

SEMMELWEIS UNIVERSITY, DIALOG CAMPUS PUBLISHER

The Project has been realised with the support of the European Union and has been co-financed by the European Social Fund ***

**Complex development of the curricula of the Molecular Bionics and Infobionics programs within a consortium framework (original title in Hungarian)

***The project is realised with the support of the European Union, co-financed by the European Social Fund (original statement in Hungarian).


Page 2:

Peter Pazmany Catholic University

Faculty of Information Technology

Signal Processing by a Single Neuron

www.itk.ppke.hu

(Signal Processing with an Artificial Neuron)

Treplán Gergely


Digital- and Neural Based Signal Processing & Kiloprocessor Arrays

Page 3:

Signal processing on digital, neural, and kiloprocessor based architectures: Signal Processing by a Single Neuron

Outline

• Historical notes

• Artificial neuron (McCulloch-Pitts neuron)

• Elementary set separation by a single neuron

• Implementation of a single logical function by a single neuron

• Pattern recognition by a single neuron

• The learning algorithm

• Questions

• Example problems

Page 4:

Historical notes

• Threshold Logic Unit (TLU) proposed by Warren McCulloch and Walter Pitts in 1943;

• Hebb’s first rule for self-organized learning in 1949;

• Perceptron developed by Frank Rosenblatt in 1957;

• ADALINE (Adaptive Linear Neuron or later Adaptive Linear Element) by Hoff and Widrow in 1960;

• Perceptron learning rule (LMS algorithm) by Widrow in 1960;

• Limitations of the perceptron (Minsky and Papert-1969);

• Back-propagation algorithm (1986);

• Radial-basis function network (Broomhead and Lowe-1988).

Page 5:

The artificial neuron (1)

• The artificial neuron is an information-processing unit that is the elementary building block of an artificial neural network.

• It is abstracted from the biological model.

Page 6:

The artificial neuron (2)

• A crude simplification of a nerve cell is depicted in the Figure. Stimuli arrive from other neurons. From the synapses, the dendrites carry the signals to the nerve cell body, where they are summed up; if the sum reaches a certain level, an output is generated. A synapse is called excitatory if stimulating it increases the probability of generating an output, and inhibitory if a stimulus on the synapse attenuates the overall sum.

(Figure: schematic of a nerve cell. Labels: Stimuli, Synapse, Output stimulus (response).)

Page 7:

The artificial neuron (3)

• The following artificial model is only a crude copy of the nerve cell; nevertheless, some important features can be extracted from it.

(Figure: anatomy of a biological neuron. Labels: Dendrite, Soma, Nucleus, Myelin sheath, Schwann cell, Nodes of Ranvier, Axon terminal.)

Page 8:

The artificial neuron (4)

• The artificial neuron is connected to the outside world through its input signals xi, where the synaptic strength is represented by the weights wi.

• What arrives at the artificial neuron (AN) is essentially a weighted sum of the input signals. Each wi quantifies two possible effects: if wi > 0 the input is amplified, otherwise it is attenuated; in this sense wi is the descriptor of the synapse.

• There is also a threshold b, to which the neuron compares the weighted sum of its inputs.

Page 9:

The artificial neuron (5)

• The output value is determined by a nonlinearity φ(·), which is also called the threshold function. To obtain a more compact form, an extended weight vector can be defined by introducing a new component w0 = −b, so that the threshold is absorbed into the weighted sum. This representation can be seen in the Figure.

Page 10:

The artificial neuron (6)

• Using this interpretation we obtain the same model as above if the input x0 belonging to w0 is the constant 1; it is now easy to see that the final output is a nonlinearity applied to the inner product of the weight vector and the input vector. Mathematically, the output y is:

y = φ( Σ_{i=0}^{N} w_i x_i ) = φ(wᵀx)

• Or, using the threshold notation:

y = φ( Σ_{i=1}^{N} w_i x_i − b )
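The equivalence of the two notations can be checked numerically. A minimal sketch with made-up numbers (the weights, inputs, and threshold below are illustrative, not from the lecture):

```python
def phi(u):
    # hard nonlinearity used later in the lecture: +1 if u >= 0, else -1
    return 1 if u >= 0 else -1

b = 1.5                  # threshold
w = [2.0, 1.0]           # weights w1, w2
x = [0.5, 1.0]           # inputs x1, x2

# threshold notation: y = phi(sum_i w_i x_i - b)
u_threshold = sum(wi * xi for wi, xi in zip(w, x)) - b

# extended notation: prepend w0 = -b and the constant input x0 = 1
u_extended = sum(wi * xi for wi, xi in zip([-b] + w, [1.0] + x))

y1, y2 = phi(u_threshold), phi(u_extended)
print(u_threshold, u_extended, y1, y2)  # the two sums and outputs coincide
```

Absorbing the threshold into w0 is purely a notational convenience; it lets every later formula be written as a single inner product.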

Page 11:

The artificial neuron (7)
Activation function (1)

• Let the activation (threshold) function φ(·) be a monotonically increasing, differentiable function; for example, a sigmoid similar in shape to arctan(·). Let it be called the soft nonlinearity function, shown in the next Figure.

• In this case the output is

y = φ(λ, u) = 2 / (1 + exp(−λu)),

• where

u = Σ_{i=1}^{N} w_i x_i − b.
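The soft nonlinearity can be evaluated directly; a short sketch (the sample points and λ values are arbitrary):

```python
import math

def soft_phi(lam, u):
    # soft nonlinearity from the slide: phi(lambda, u) = 2 / (1 + exp(-lambda * u))
    return 2.0 / (1.0 + math.exp(-lam * u))

# monotonically increasing in u, with values in the open interval (0, 2)
us = [-2.0, -1.0, 0.0, 1.0, 2.0]
vals = [soft_phi(1.0, u) for u in us]

# a larger lambda makes the transition around u = 0 steeper
steep = soft_phi(10.0, 1.0)
print(vals, steep)
```

At u = 0 the value is exactly 1, the midpoint of the output range; λ controls how sharply the function switches between its two extremes.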

Page 12:

The artificial neuron (8)
Activation function (2)

• Sigmoid nonlinearity function (Figure).

Page 13:

The artificial neuron (8)
Activation function (3)

• If the activation function is the sgn(·) function, then it is the so-called hard nonlinearity function, shown in the Figure. This is the formula that fully describes the operation of the neuron.

• In this case the output is

y = sgn(u) = { +1, if u ≥ 0; −1, else },

• where

u = Σ_{i=1}^{N} w_i x_i − b.

Page 14:

The artificial neuron (8)
Activation function (4)

• Signum (hard nonlinearity) function (Figure).

Page 15:

The artificial neuron (9)

• Relation between the activation functions (shown in the Figure).

Page 16:

The artificial neuron (10)

• The operation of a McCulloch-Pitts (artificial) neuron is fully described as follows. When an input vector arrives at the neuron, it
1. computes the weighted sum of the inputs,
2. compares the result with a threshold,
3. passes the result through a nonlinearity function.

• Again: this is a gross simplification that captures some important features of the biological nerve cell.

• It will soon be revealed that this is nevertheless a powerful model, with which hard computational problems can be solved.
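The three-step operation can be sketched directly; a minimal illustration (the weights, inputs, and threshold are made up):

```python
def neuron(x, w, b):
    # 1. weighted sum of the inputs
    u = sum(wi * xi for wi, xi in zip(w, x))
    # 2. comparison with the threshold
    u -= b
    # 3. hard nonlinearity (sgn)
    return 1 if u >= 0 else -1

y_low  = neuron([1, -1], [1.0, 1.0], 1.5)   # weighted sum 0, below the threshold
y_high = neuron([1,  1], [1.0, 1.0], 1.5)   # weighted sum 2, above the threshold
print(y_low, y_high)  # -> -1 1
```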

Page 17:

Elementary set separation by a single neuron (1)

• Let φ(·) be the hard nonlinearity function; then the output is a discrete −1 or +1 under this assumption:

φ(u) = sgn(u) = { +1, if u ≥ 0; −1, else }.

• Rewriting the formula by substituting wᵀx for u, the output is +1 if the weighted sum of the input is greater than or equal to zero, and −1 if it is smaller than zero:

y = φ(wᵀx) = sgn(wᵀx) = { +1, if wᵀx ≥ 0; −1, else }.

Page 18:

Elementary set separation by a single neuron (2)

• This is a very important formula: looked at geometrically, it describes a separation of the input space by a linear hyperplane. From elementary geometry we know that the following is the equation of a hyperplane:

wᵀx = 0

• Since this is the equation of an N-dimensional hyperplane, the weights of the artificial neuron represent a linear decision boundary in a two-class pattern-classification problem.
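In the extended notation, which side of the hyperplane wᵀx = 0 a point falls on is simply the sign of the inner product. A small sketch (the weight vector and test points are illustrative):

```python
def side(w, x):
    # sign of the inner product w.x tells which half-space the point x lies in
    u = sum(wi * xi for wi, xi in zip(w, x))
    return 1 if u >= 0 else -1

w = [-1.0, 1.0, 1.0]              # hyperplane -1 + x1 + x2 = 0 (extended vector, x0 = 1)
p_above = side(w, [1.0, 2.0, 2.0])  # point (2, 2), above the line
p_below = side(w, [1.0, 0.0, 0.0])  # point (0, 0), below the line
print(p_above, p_below)  # -> 1 -1
```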

Page 19:

Elementary set separation by a single neuron (3)

• Illustration of the hyperplane (in this example, a straight line) as the decision boundary for a two-dimensional, two-class pattern-classification problem.

Page 20:

Elementary set separation by a single neuron (4)

• If we consider the hyperplane in a 2-D input space, the equation determines a simple straight line. What is above that line is classified as +1, and what is below it as −1.

• The simplest artificial neuron with a 2-D input:

(Figure: the simplest artificial neuron with a 2-D input, a potential decision boundary, and the truth table of x1 AND x2.)

x1   x2   y
 1    1    1
 1   −1   −1
−1    1   −1
−1   −1   −1

y = sgn(w1 x1 + w2 x2 + w0)

Page 21:

Elementary set separation by a single neuron (5)

• To give a specific example, let the weight vector be w = (3, 2, 1), so the hyperplane can be described by the following equation:

3 + 2x1 + x2 = 0

• Explicitly, this means:

x2 = −2x1 − 3

• The following Figure shows the decision line of this equation.

Page 22:

Elementary set separation by a single neuron (6)

• Decision boundary of the example.

Page 23:

Elementary set separation by a single neuron (7)

• The decision domains can easily be identified according to the sign of w2. Hence, for a given weight vector, this is how the set separation can be visualized.

• Furthermore, if we change the components of the weight vector, the numbers in the equation change, which means a different separation line.

• Thus the weight vector w represents the programmability of the artificial neuron, as the figure shown above illustrates.

Page 24:

Elementary set separation by a single neuron (8)

• Why is set separation by a hyperplane so important?

• One application arises when we would like to implement logic functions.

• Furthermore, plenty of mathematical and computational tasks can be reduced to a set separation problem with a linear hyperplane.

• Let us examine the implementation of logical functions by a single neuron!

Page 25:

Implementation of a single logical function by a single neuron (1)

• First, let us focus on the 2-D AND function. We can write down the truth table of the logical AND function. The Figure also shows the input space with its simple geometric interpretation, where x1 and x2 are the inputs.

• 2-D AND: from truth table to visualization.

Page 26:

Implementation of a single logical function by a single neuron (2)

• All the input points can be seen in the plot. The 2-D AND function amounts to set separation: only one point has to be classified as +1, and all the others must be −1. The correct decision can be implemented if we choose the right weights. The truth table determines for each input vector whether the output is +1 or −1. On the left side the geometric interpretation can be seen, and it is easy to notice that this is indeed a problem that can be solved with a linear separator. Once we know the decision boundary, we can read off the weights from the equation of the line.

Page 27:

Implementation of a single logical function by a single neuron (3)

• What we have in the figure is the separation surface we needed, which mathematically is the following equation:

−1.5 + x1 + x2 = 0

• As a result:

x2 = 1.5 − x1

• This separation surface looks as shown in the figure, and it can easily be seen that the weight vector is w = (−1.5, 1, 1).

Page 28:

Implementation of a single logical function by a single neuron (4)

• The next figure shows how to implement the 2-D AND function with an artificial neuron.

• Solution of the logical 2-D AND by a single neuron.
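The design can be checked against the truth table; a short sketch using the weight vector w = (−1.5, 1, 1) derived above:

```python
def neuron2(x1, x2, w0, w1, w2):
    # single neuron with 2-D input and extended weight w0 (constant input x0 = 1)
    u = w0 + w1 * x1 + w2 * x2
    return 1 if u >= 0 else -1

AND_TABLE = {(1, 1): 1, (1, -1): -1, (-1, 1): -1, (-1, -1): -1}
results = {inp: neuron2(*inp, -1.5, 1.0, 1.0) for inp in AND_TABLE}
print(results == AND_TABLE)  # -> True
```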

Page 29:

Implementation of a single logical function by a single neuron (5)

• Furthermore, instead of 2-D, we can consider the R-dimensional AND function.

• The corresponding weight vector implementing an R-dimensional AND function is the following.

• The weights corresponding to the inputs are all 1, and the threshold should be R − 0.5. As a result, the actual weights of the neuron are:

wᵀ = (−(R − 0.5), 1, …, 1)
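These weights can be verified exhaustively over all ±1 input combinations; a brief sketch:

```python
import itertools

def and_neuron(xs):
    # R-dimensional AND: input weights all 1, threshold R - 0.5
    R = len(xs)
    u = sum(xs) - (R - 0.5)
    return 1 if u >= 0 else -1

# the output is +1 only when every input is +1 (sum R vs. at most R - 2 otherwise)
ok = all(
    and_neuron(xs) == (1 if all(x == 1 for x in xs) else -1)
    for R in (2, 3, 5)
    for xs in itertools.product((-1, 1), repeat=R)
)
print(ok)  # -> True
```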

Page 30:

Implementation of a single logical function by a single neuron (6)

• In the same way, the OR function can also be implemented by a single artificial neuron, since it is again a linear separation problem, as shown in the next Figure; with the ±1 encoding used above, the weight vector must be w = (0.5, 1, 1).

• 2-D OR problem solved by a linear separator.

Page 31:

Implementation of a single logical function by a single neuron (7)

• However, we cannot implement every logical function by a linear hyperplane. Unfortunately, there are some that cannot be implemented by a single neuron; for example, the exclusive OR (XOR) is such a function, which entails the separation shown in the next Figure.

• The 2-D XOR problem cannot be solved by one linear separator.

Page 32:

Implementation of a single logical function by a single neuron (8)

• As can be seen, XOR cannot be separated by one linear hyperplane, so more neurons are needed: in this example one neuron implements one line, another implements the other line, and then a third neuron realizes the AND function to combine the two separations.

• Is there any neural based solution for nonlinear separation problems?

• Let us build neural networks!

Page 33:

Implementation of a single logical function by a single neuron (9)

• As a result, even the XOR function can be implemented by a neural network, for example with 3 neurons in a feed-forward arrangement.

• The 2-D XOR problem can be solved by a network of neurons.
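One possible set of weights for such a three-neuron feed-forward network (these particular values are a sketch, not taken from the slides): the first hidden neuron implements OR, the second implements NAND, and the output neuron ANDs the two half-planes together.

```python
def sgn_neuron(x, w, b):
    u = sum(wi * xi for wi, xi in zip(w, x)) - b
    return 1 if u >= 0 else -1

def xor_net(x1, x2):
    h1 = sgn_neuron([x1, x2], [1.0, 1.0], -0.5)    # OR: one separating line
    h2 = sgn_neuron([x1, x2], [-1.0, -1.0], -1.5)  # NAND: the other line
    return sgn_neuron([h1, h2], [1.0, 1.0], 1.5)   # AND combines the two regions

XOR_TABLE = {(1, 1): -1, (1, -1): 1, (-1, 1): 1, (-1, -1): -1}
xor_ok = all(xor_net(*inp) == out for inp, out in XOR_TABLE.items())
print(xor_ok)  # -> True
```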

Page 34:

Implementation of a single logical function by a single neuron (10)

• The very important conclusion is that the elementary artificial neuron is a linear set separator in the N-dimensional input space, where programmability is ensured by changing the free parameters of the system according to how well it classifies. A single neuron can therefore implement a class of logical functions, more precisely the linearly separable ones. To implement an arbitrary logical function, we need to deploy several neurons, which results in a network: an artificial neural network (ANN).

Page 35:

Implementation of a single logical function by a single neuron (11)

• Feed-forward artificial neural network.

Page 36:

Pattern recognition by a single neuron (1)

• We now show how to solve an elementary pattern recognition task with a single neuron. This is the reason the artificial neuron is also called a perceptron: it can recognize patterns.

• Let us assume that the input is a speech pattern, which is a continuous signal, and that the speaker says either "yes" or "no".

• Many ATM systems in the US already follow this pattern: no client orders are executed without verbal validation (your own "yes-or-no" verbal confirmation is usually needed to proceed with any financial action).

Page 37:

Pattern recognition by a single neuron (2)

• Let us assume that the input is a speech pattern in the form of a continuous signal representing a "yes" or a "no".

• This continuous signal goes into an A/D converter, where it is transformed into a digital signal.

• The next step is to extract the features with an FFT, yielding a feature vector x.

• All of its components are then fed into an artificial neuron, which computes the weighted sum of the input and compares it with a threshold; if the sum is greater than or equal to the threshold, the output is +1, otherwise −1.
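The whole chain (digitized signal, FFT-style features, single neuron) can be sketched on a toy signal. Everything below is illustrative: the "speech" is a pure tone, a naive DFT stands in for the FFT, and the weights are hand-picked rather than learned:

```python
import math

def dft_magnitudes(samples, n_features):
    # naive DFT magnitude spectrum as a stand-in for the FFT feature extractor
    N = len(samples)
    feats = []
    for k in range(n_features):
        re = sum(s * math.cos(2 * math.pi * k * n / N) for n, s in enumerate(samples))
        im = sum(-s * math.sin(2 * math.pi * k * n / N) for n, s in enumerate(samples))
        feats.append(math.hypot(re, im))
    return feats

def neuron(x, w, b):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) - b >= 0 else -1

N = 64
low  = [math.sin(2 * math.pi * 2 * n / N) for n in range(N)]  # toy "yes": low tone
high = [math.sin(2 * math.pi * 9 * n / N) for n in range(N)]  # toy "no": high tone

w = [0.0] * 16
w[2] = 1.0   # hand-picked weight: respond only to the low-frequency bin
y_yes = neuron(dft_magnitudes(low, 16), w, 10.0)
y_no  = neuron(dft_magnitudes(high, 16), w, 10.0)
print(y_yes, y_no)  # -> 1 -1
```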

Page 38:

Pattern recognition by a single neuron (3)

• The block diagram of the speech pattern recognition by an artificial neuron.

Page 39:

Pattern recognition by a single neuron (4)

• This is how we would like to solve the speech pattern recognition task: by separation with a linear hyperplane. The model is now in place; the next question is what the weights of this neuron are, i.e. what the program of the perceptron should be to obtain correct recognition.

• In a more general view (next Figure), any arriving pattern takes one of two possible values, s(1) or s(2), which represent the Fourier transforms of "yes" and "no", respectively.

Page 40:

Pattern recognition by a single neuron (5)

• Generalization of a pattern recognition task by an artificial neuron.

Page 41:

Pattern recognition by a single neuron (6)

• The pattern enters the system, and a preprocessing calculation is applied, depending on the specific task.

• After the preprocessing, an artificial neuron is implemented that finally provides +1 or −1.

• Having elaborated this model, a mathematical analysis of the pattern recognition task will be derived, which proves that an elementary artificial neuron can decide optimally under suitable assumptions; in other words, a linear separator is good enough to carry out this pattern recognition task.

• Furthermore, a method will be given for computing the optimal weight vector.

Page 42:

Pattern recognition by a single neuron (7)
Pattern recognition under Gaussian noise (1)

• To address the first theorem, let us examine whether the task can be solved under Gaussian noise.

• The main difficulty is that everyone pronounces "yes" in many different ways, so there are several versions of each pattern.

• Let us assume that we have a standard "yes" as s(1) and a standard "no" as s(2).

• As mentioned earlier, a standard pattern represents either "yes" or "no".

Page 43:

Pattern recognition by a single neuron (8) Pattern recognition under Gaussian noise (2)

• However, we want to be sure that a particular input "yes" is going to be classified as "yes".

• Let x be the observation which belongs to the spoken speech pattern.

• This observation differs from the standard one subject to a multidimensional Gaussian distribution with 0 mean and covariance matrix K.

• That is the most general assumption when the aim is to design a system.


Pattern recognition by a single neuron (9) Pattern recognition under Gaussian noise (3)

• Formally it means the following equality for the observation x:

x = ξ + ν,

• where the original signal must be one of the standard patterns:

ξ ∈ {s(1), s(2)},

• and the noise is:

ν ~ N(0, K).
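As an illustrative sketch, the observation model above can be simulated with NumPy (the pattern values, the covariance matrix, and the helper `observe` are our own made-up assumptions, not from the lecture):

```python
import numpy as np

rng = np.random.default_rng(0)

s1 = np.array([1.0, 1.0])    # standard "yes" pattern (made-up values)
s2 = np.array([-1.0, -1.0])  # standard "no" pattern (made-up values)
K = np.array([[0.2, 0.05],
              [0.05, 0.1]])  # noise covariance matrix (made-up values)

def observe(xi, n=1):
    """Draw n noisy observations x = xi + nu, with nu ~ N(0, K)."""
    return xi + rng.multivariate_normal(np.zeros(2), K, size=n)

x = observe(s1, n=500)
print(x.shape)  # (500, 2); the sample mean is close to s1
```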


Pattern recognition by a single neuron (10) Pattern recognition under Gaussian noise (4)

• The block diagram of the task can be seen in the next Figure.

• Figure: pattern recognition under Gaussian noise solved by an AN.


Pattern recognition by a single neuron (11) Pattern recognition under Gaussian noise (5)

• Due to these statistics, the conditional probability densities of the observed speech vector, given that the standard was "yes" or "no", are the following:

P(x | ξ = s(1)) = 1 / ((2π)^{N/2} √(det K)) · exp( -(1/2) (x - s(1))^T K^{-1} (x - s(1)) ),

P(x | ξ = s(2)) = 1 / ((2π)^{N/2} √(det K)) · exp( -(1/2) (x - s(2))^T K^{-1} (x - s(2)) ).


Pattern recognition by a single neuron (12) Pattern recognition under Gaussian noise (6)

• These are the traditional multidimensional Gaussian density functions, with different expected values depending on the condition.

• Geometrically speaking, this means again that a kind of separation problem is given. If we observe x, which before the distortion was s(1) or s(2), the best we can do is to find the closest original point in the probability space.


Pattern recognition by a single neuron (13) Pattern recognition under Gaussian noise (7)

• Pattern classification from a geometrical point of view.


Pattern recognition by a single neuron (14) Pattern recognition under Gaussian noise (8)

• This is the classical Bayesian decision.

• This is how we can guarantee minimal error probability.

• The Bayes decision is optimal, because it always chooses according to the likelihood functions above, i.e. it chooses the more probable original point.


Pattern recognition by a single neuron (15) Pattern recognition under Gaussian noise (9)

• Formally, the following inequality has to be evaluated to decide which standard is the more probable:

s(1): P(x | ξ = s(1)) > P(x | ξ = s(2)),

i.e.

1/((2π)^{N/2} √(det K)) · exp( -(1/2)(x - s(1))^T K^{-1}(x - s(1)) ) > 1/((2π)^{N/2} √(det K)) · exp( -(1/2)(x - s(2))^T K^{-1}(x - s(2)) ).


Pattern recognition by a single neuron (16) Pattern recognition under Gaussian noise (10)

• Which can be rewritten as:

-(1/2)(x - s(1))^T K^{-1}(x - s(1)) > -(1/2)(x - s(2))^T K^{-1}(x - s(2)).

• After decomposition:

-(1/2) x^T K^{-1} x + s(1)^T K^{-1} x - (1/2) s(1)^T K^{-1} s(1) > -(1/2) x^T K^{-1} x + s(2)^T K^{-1} x - (1/2) s(2)^T K^{-1} s(2).


Pattern recognition by a single neuron (17) Pattern recognition under Gaussian noise (11)

• Rearranging the inequality:

(s(1) - s(2))^T K^{-1} x > (1/2) s(1)^T K^{-1} s(1) - (1/2) s(2)^T K^{-1} s(2).

• And now it can be seen that if we choose:

w^T = (s(1) - s(2))^T K^{-1},

• and

b = (1/2) s(1)^T K^{-1} s(1) - (1/2) s(2)^T K^{-1} s(2),

• then the decision reduces to:

w^T x > b.


Pattern recognition by a single neuron (18) Pattern recognition under Gaussian noise (12)

• Therefore we decide for s(1) if:

w^T x > b.

• As a result, this is a linear set separation problem, so it can be implemented on an artificial neuron as the next Figure shows, and it can carry out the solution of the task under Gaussian noise.
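The closed-form separator can be sketched in a few lines of NumPy (the pattern and covariance values below are our own assumptions; since K is symmetric, w^T = (s(1) - s(2))^T K^{-1} is equivalent to w = K^{-1}(s(1) - s(2))):

```python
import numpy as np

s1 = np.array([1.0, 0.5])    # standard "yes" (made-up values)
s2 = np.array([-0.5, -1.0])  # standard "no" (made-up values)
K = np.array([[0.3, 0.1],
              [0.1, 0.2]])   # noise covariance (made-up values)

Kinv = np.linalg.inv(K)
w = Kinv @ (s1 - s2)                           # w = K^{-1}(s1 - s2)
b = 0.5 * (s1 @ Kinv @ s1 - s2 @ Kinv @ s2)    # threshold b

def classify(x):
    """Return +1 for the 'yes' pattern s1, -1 for the 'no' pattern s2."""
    return 1 if w @ x > b else -1

# The standard patterns themselves are classified correctly:
print(classify(s1), classify(s2))  # 1 -1
```

Note that w @ s1 - b = (1/2)(s1 - s2)^T K^{-1}(s1 - s2) > 0 whenever K is positive definite, so the noiseless patterns always land on the correct side.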


Pattern recognition by a single neuron (19) Pattern recognition under Gaussian noise (13)

• Figure: implementation of the AN solving the pattern recognition task.


Pattern recognition by a single neuron (20) Pattern recognition under Gaussian noise (14)

• Basically, once the optimal weights have been loaded (they can be computed offline when the standard patterns and the covariance matrix are given), the AN can decide for any observed pattern that arrives.

• The conclusion is that an elementary artificial neuron can solve any pattern recognition task in which two patterns have to be distinguished.


Pattern recognition by a single neuron (21) Pattern recognition under Gaussian noise (15)

• This model is well defined if the parameters are fully given: when s(1), s(2) and K are known, the free parameters w and b can be calculated, and the actual neuron can be implemented.

• On the other hand, in a real-life application these quantities are not known, and this is why the next issue is how to obtain w and b in the absence of these parameters.

• The next topic provides a learning algorithm, with which neurons update themselves optimally.


The learning algorithm (1)

• We do not know what the standard patterns actually are, and the covariance matrix is also unknown, so what we have is only a set of examples, which is called the learning set:

X+ = {x : d = +1},

X− = {x : d = −1}.

• Such examples can always be given, because they can be labeled by a human expert. An artificial neuron can only function properly if the two classes X+ and X− are linearly separable.


The learning algorithm (2)

• This, in turn, means that the patterns to be classified must be sufficiently separated from each other to ensure that the decision surface is a hyperplane.

• The question is how to develop an algorithm which, based on these examples, can find the right decision even though the original parameters are unknown.

• So instead of the actual parameters, only a learning set is available to us.


The learning algorithm (3)

• Suppose then that the input variables of the perceptron originate from two linearly separable classes X+ and X−.

• Given the sets of vectors X+ and X− to train the classifier (the perceptron), the training process involves the adjustment of the weight vector wopt in such a way that the two classes X+ and X− are linearly separated.

• Furthermore, separability means that there exists an optimal vector wopt for which the whole of X+ and X− fulfills the following relationships:

X+ = {x : wopt^T x ≥ 0},

X− = {x : wopt^T x < 0}.


The learning algorithm (4)

• Furthermore, this linear separation can be carried out with the artificial neuron shown in the next Figure; the only problem is that the program wopt of the neuron is fully unknown.

• Figure: general artificial neuron.


The learning algorithm (5)

• But we have some examples, represented as a training set:

τ(K) := {(x(k), d(k)), k = 1, ..., K}.

• Actually, we are looking for the optimal weights with which the neuron performs perfectly on the learning set. Formally, the objective is to find w such that:

d = sgn(w^T x) = +1 if x ∈ X+, and −1 if x ∈ X−.


The learning algorithm (6)

• Therefore we basically have to develop a recursive algorithm, called learning, which learns step by step, based on the previous weight vector, the desired output and the actual output of the system.

• On these specific examples it recursively adapts the weight vector in order to converge to wopt. This can be described formally as follows:

w(k+1) = Ψ(w(k), d(k), y(k)), w(k) → wopt.


The learning algorithm (7)

• In a more ambitious way it can be called intelligent, because artificial intelligence is a kind of philosophy that we can learn from examples, even though the parameters are fully hidden.

• The so-called Rosenblatt learning algorithm, which is a manifestation of learning, can be expressed with a special update rule.

• Given the sets of vectors X+ and X− and an initial weight vector w(0), this algorithm finds an optimal weight vector wopt for the perceptron.


The learning algorithm (8)

• The algorithm for adapting the weight vector of the elementary perceptron may now be formulated as follows:

1. Initialization. Set w(0) = 0. Then perform the following computations for time steps k = 1, 2, ...

2. Activation. At time step k, activate the perceptron by applying the continuous-valued input vector x(k) and the desired output d(k).

3. Computation of the actual response. Compute the actual response of the perceptron:

y(k) = sgn(w^T(k) x(k)).


The learning algorithm (9)

4. Adaptation of the weight vector. Update the weight vector of the perceptron according to the rule:

w(k+1) = w(k) + [d(k) − y(k)] x(k),

where

d(k) = +1 if x(k) belongs to class X+, and −1 if x(k) belongs to class X−,

and

ε(k) = d(k) − y(k)

is the error function.


The learning algorithm (10)

5. Continuation. Increment the time step k by one and go back to step 2.
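The five steps above can be sketched as a short training loop (a minimal sketch in Python; the toy learning set, here the 2-input AND function in ±1 logic with a constant bias input x0 = 1, is our own choice):

```python
import numpy as np

# Learning set: 2-input AND in +-1 logic, with bias input x0 = 1.
X = np.array([[1, -1, -1],
              [1, -1,  1],
              [1,  1, -1],
              [1,  1,  1]], dtype=float)
d = np.array([-1, -1, -1, 1])        # desired outputs

w = np.zeros(3)                      # step 1: initialization, w(0) = 0
for epoch in range(100):
    errors = 0
    for xk, dk in zip(X, d):         # step 2: apply x(k) and d(k)
        y = 1 if w @ xk >= 0 else -1 # step 3: y(k) = sgn(w^T(k) x(k))
        if y != dk:
            w = w + (dk - y) * xk    # step 4: w(k+1) = w(k) + [d(k)-y(k)] x(k)
            errors += 1
    if errors == 0:                  # step 5: cycle until no misclassification
        break

preds = [1 if w @ xk >= 0 else -1 for xk in X]
print(preds)  # [-1, -1, -1, 1], i.e. the learning set is reproduced
```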

• Basically, we feed back the error signal to adapt the weights more efficiently; the block diagram of the algorithm can be seen in the next Figure.

• One can come up with the following questions:

• does the algorithm converge to a fixed point?

• if there is a fixed point, what is the speed of convergence?


The learning algorithm (11)

• Figure: block diagram of the Rosenblatt learning algorithm (learning with a teacher). The perceptron computes y(k) = sgn(w0(k) + Σ_{i=1}^{N} wi(k) xi(k)) = sgn(w^T(k) x(k)); the training algorithm observes x(k), y(k) and the desired output d(k), and updates w(k+1) = Ψ(w(k), x(k), d(k), y(k)) so that w(k) → wopt.


The learning algorithm (12) Proof of the algorithm convergence (1)

• Let us define the following sets:

X+ = {x : wopt^T x ≥ 0},

X− = {x : wopt^T x < 0},

• and:

X̃− = {−x : wopt^T (−x) ≥ 0},

X = X̃− ∪ X+ = {x : wopt^T x > 0}.


The learning algorithm (13) Proof of the algorithm convergence (2)

• Suppose that w^T(k) x(k) < 0 for k = 1, 2, ..., and the input vector x(k) belongs to the class X+. That is, the perceptron incorrectly classifies the vectors x(0), x(1), x(2), ...

• In this case the perceptron learning rule is as follows:

w(k+1) = w(k) + [d(k) − y(k)] x(k) = w(k) + ε(k) x(k),

• where the error signal may be 0 (no error) or 2 (erroneous decision with d(k) = +1 and y(k) = −1):

ε(k) = d(k) − y(k) ∈ {0, 2}.


The learning algorithm (14) Proof of the algorithm convergence (3)

• We define two sets at stage k (k = 1, 2, ...) with the updated w(k):

XkNOK = {x : w^T(k) x < 0},

• and

XkOK = {x : w^T(k) x > 0},

• where XkNOK is the Not OK (incorrect classification) set, and XkOK is the OK (correct classification) set.


The learning algorithm (15) Proof of the algorithm convergence (4)

• Then the input vectors are elements of the XkNOK sets, and the weight vector w(k) is updated by the learning rule:

x(0) ∈ X0NOK, ε(0) = 2 ⇒ w(1) = w(0) + 2 x(0),

x(1) ∈ X1NOK, ε(1) = 2 ⇒ w(2) = w(1) + 2 x(1),

...

x(k−1) ∈ X(k−1)NOK, ε(k−1) = 2 ⇒ w(k) = w(k−1) + 2 x(k−1).


The learning algorithm (16) Proof of the algorithm convergence (5)

• If the initial condition is w(0) = 0, then we get:

w(1) = 2 x(0); w(2) = 2 (x(0) + x(1)); ... ; w(k) = 2 Σ_{i=0}^{k−1} x(i).

• Hence, multiplying both sides by wopt^T, we get:

wopt^T w(k) = 2 Σ_{i=0}^{k−1} wopt^T x(i) ≥ 2 k dmin,

• where dmin is defined as a positive number with:

wopt^T x ≥ dmin > 0.


The learning algorithm (17) Proof of the algorithm convergence (6)

• Next we make use of the Cauchy–Schwarz inequality. Given two vectors a and b, it states that:

(a^T b)^2 ≤ ||a||^2 · ||b||^2,

• where ||.|| denotes the Euclidean norm of the enclosed vector, and the inner product a^T b is a scalar quantity.

• Applying it here:

||wopt||^2 ||w(k)||^2 ≥ (wopt^T w(k))^2 ≥ 4 k^2 dmin^2.


The learning algorithm (18) Proof of the algorithm convergence (7)

• By taking the squared Euclidean norm of both sides of the update w(k) = w(k−1) + 2 x(k−1), we obtain:

||w(k)||^2 = ||w(k−1) + 2 x(k−1)||^2 = ||w(k−1)||^2 + 4 w^T(k−1) x(k−1) + 4 ||x(k−1)||^2.

• But, under the assumption that the perceptron incorrectly classifies the input vector x(k−1) ∈ X(k−1)NOK, we have:

w^T(k−1) x(k−1) < 0.


The learning algorithm (19) Proof of the algorithm convergence (8)

• We therefore deduce that:

||w(k)||^2 ≤ ||w(k−1)||^2 + 4 ||x(k−1)||^2,

• or equivalently:

||w(k)||^2 − ||w(k−1)||^2 ≤ 4 ||x(k−1)||^2 ≤ 4 dmax^2,

• where dmax is a positive number defined by:

dmax = max_{x∈X} ||x||.


The learning algorithm (20) Proof of the algorithm convergence (9)

• Adding these inequalities for k = 1, 2, ..., and invoking the assumed initial condition w(0) = 0, we get the following inequality:

||w(k)||^2 ≤ 4 Σ_{n=1}^{k} ||x(n−1)||^2 ≤ 4 k dmax^2.

• Then we get an upper bound:

||w(k)||^2 ≤ 4 k dmax^2.


The learning algorithm (21) Proof of the algorithm convergence (10)

• Analyzing the upper and lower bounds at the same time:

4 k^2 dmin^2 / ||wopt||^2 ≤ ||w(k)||^2 ≤ 4 k dmax^2,

• we can observe that the lower bound increases faster (O(k^2)) than the upper bound (O(k)):

4 k^2 dmin^2 / ||wopt||^2 ≤ 4 k dmax^2.

• Consequently, the number of corrective steps cannot exceed kmax = dmax^2 ||wopt||^2 / dmin^2, so the algorithm converges after a finite number of updates.
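The bound derived above can be checked numerically (a sketch with our own made-up separable data and a known separating vector wopt; the factor 2 of the update cancels out of the bound):

```python
import numpy as np

rng = np.random.default_rng(1)
wopt = np.array([1.0, 2.0, -1.5])       # a known separating weight vector
X = rng.uniform(-1, 1, size=(200, 3))
X = X[np.abs(X @ wopt) > 0.1]           # keep a margin so that dmin > 0
d = np.sign(X @ wopt)

Z = X * d[:, None]                      # sign-corrected samples: wopt^T z > 0
dmin = np.min(Z @ wopt)
dmax = np.max(np.linalg.norm(Z, axis=1))
bound = dmax**2 * (wopt @ wopt) / dmin**2   # kmax from the proof

w = np.zeros(3)
updates = 0
changed = True
while changed:                          # cycle until every sample is classified
    changed = False
    for z in Z:
        if w @ z <= 0:                  # mistake -> corrective update
            w = w + 2 * z               # epsilon(k) = 2, as in the proof
            updates += 1
            changed = True

print(updates <= bound)  # True: the update count stays below kmax
```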


Problems and Questions (1)

• Describe the perceptron convergence theorem (algorithm), including the proof of convergence and the order of convergence!

• Design an analog circuit to realize a single artificial neuron!

• Give the weights and the biases of the neurons implementing the 5-dimensional NAND and NOR logical functions! Give the weights in the case of a 12-dimensional input!
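One candidate answer can be verified exhaustively (a sketch; the ±1 logic encoding and the particular weights w = (−1, ..., −1) with b = N−1 for NAND and b = −(N−1) for NOR are our own construction, not given in the lecture):

```python
import itertools
import numpy as np

def neuron(x, w, b):
    """Single threshold neuron: y = sgn(w^T x + b) in +-1 logic."""
    return 1 if np.dot(w, x) + b >= 0 else -1

def check(N):
    """Brute-force check of the candidate NAND/NOR weights over all inputs."""
    w = -np.ones(N)
    for bits in itertools.product([-1, 1], repeat=N):
        x = np.array(bits)
        nand = -1 if np.all(x == 1) else 1    # NAND truth value
        nor = 1 if np.all(x == -1) else -1    # NOR truth value
        assert neuron(x, w, N - 1) == nand
        assert neuron(x, w, -(N - 1)) == nor

check(5)
check(12)   # 4096 input combinations, still instant
print("ok")
```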


Problems and Questions (2)

• Why is it impossible to solve the XOR problem with a single neuron? Give a network implementation of the XOR problem using 3 artificial neurons!

• Can a pattern recognition task (recognizing only two distinct patterns under Gaussian noise) be solved by a single neuron? Justify your answer!
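One possible three-neuron construction for XOR can be sketched and checked as follows (our own construction in ±1 logic, using XOR(x1, x2) = AND(OR(x1, x2), NAND(x1, x2)); the weights are assumptions, not from the lecture):

```python
import numpy as np

def neuron(x, w, b):
    """Single threshold neuron: y = sgn(w^T x + b) in +-1 logic."""
    return 1 if np.dot(w, x) + b >= 0 else -1

def xor_net(x1, x2):
    h1 = neuron([x1, x2], [1, 1], 1)     # OR:   +1 unless both inputs are -1
    h2 = neuron([x1, x2], [-1, -1], 1)   # NAND: -1 only when both are +1
    return neuron([h1, h2], [1, 1], -1)  # AND of the two hidden outputs

print([xor_net(a, b) for a in (-1, 1) for b in (-1, 1)])  # [-1, 1, 1, -1]
```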


Example problems (1)

• Solve the following classification task using a single neuron!

• Solve the problem by designing the weight vector analytically!

• Solve the problem by adapting the weights with the Rosenblatt learning algorithm, where the initial weights and the learning rate are w1(0) = 0.7, w2(0) = 2, b(0) = 0.9 and µ = 0.5.



Example problems (2)

• In a bearing factory the quality of the produced bearings has to be analyzed. The bearings have to be classified according to two parameters, each given as a mean value with a limited range:

Radius: r = 10 mm ± 1 mm

Sleekness: d = 0.4 mm ± 0.2 mm

• The task has to be solved by a neural network. If the given bearing has its parameters in the limited ranges, the output of the neural network has to be +1, otherwise it has to be −1.

• Give the weights by analytical solution, in such a way that the network separates well! Plot your solution!

• Give the block diagram of the separator network!
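One possible two-layer construction can be sketched as follows (our own construction, not the official solution: four threshold neurons test the interval bounds r ∈ [9, 11] mm and d ∈ [0.2, 0.6] mm, and an output neuron ANDs their ±1 outputs):

```python
import numpy as np

def neuron(x, w, b):
    """Single threshold neuron: y = sgn(w^T x + b) in +-1 logic."""
    return 1 if np.dot(w, x) + b >= 0 else -1

def classify(r, d):
    h = [neuron([r, d], [ 1,  0],  -9.0),   # fires when r >=  9
         neuron([r, d], [-1,  0],  11.0),   # fires when r <= 11
         neuron([r, d], [ 0,  1],  -0.2),   # fires when d >= 0.2
         neuron([r, d], [ 0, -1],   0.6)]   # fires when d <= 0.6
    return neuron(h, [1, 1, 1, 1], -3)      # AND: +1 only if all four hold

print(classify(10.0, 0.4), classify(12.0, 0.4))  # 1 -1
```

A single neuron cannot realize this box-shaped decision region, which is why the second (AND) layer is needed.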


Example problems (3)

• Give the number of logic functions which can be implemented on a single AN with 2 inputs!

• Adapt the weights optimally using the perceptron learning rule to realize the NOR logic function on the perceptron!

• The initial weights are w(0) = (0, −0.5, 1), and the learning rate is µ = 1!

• Plot the sample points and the separator lines at time step 0 and after the training!


Example problems (4)

• Which logic function is implemented by the given network?

• Can another one-layer perceptron network realize this function? If yes, give the network!


Example problems (5)

• The subscribers of an ISP have to be categorized into 4 classes according to the download and upload speed. We would like to make a neuron-based classification, which can decide which class a subscriber belongs to.

• Give the structure of the perceptron which can solve the task!

• Give the weights, computing them analytically, if the ISP defines the following classes:

• Draw the separator lines in the x1 – x2 input space!


Example problems (6)
• A two-layer neural network is given, as the Figure shows. Give the weights with which the network can solve the set-separation problem given in the Table!

• Plot the solution giving the decision boundaries!

Example problems (7)
• Give a specific neural network which solves the three-class separation problem given in the Figure!

• Is it possible to train the given network with the Rosenblatt learning rule?

Example problems (8)
• A perceptron network and a classification task are given in the Figure. Train the network to decide optimally using the perceptron learning rule! Let the initial weights be as follows:

Example problems (9)
• Plot the decision regions of the following perceptron network!

Example problems (10)
• We would like to implement the A⊕B + C logic function with the perceptron network given in the Figure.

• Give the weights of the network!

• How could this function be implemented on a two-layer perceptron network? Draw the network and give the weights!
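One hand-derived two-layer solution can be checked by enumerating the truth table. Interpreting A⊕B + C as (A XOR B) OR C, the weights below are one valid choice, not the only one:

```python
# A minimal two-layer perceptron realizing f = (A XOR B) OR C.
# Hidden layer detects the two XOR cases; the output neuron ORs them with C.
# Weights on the augmented input (1, A, B): bias first.

def step(v):
    return 1 if v >= 0 else 0

def f_network(A, B, C):
    h1 = step(-0.5 + A - B)            # fires only for (A, B) = (1, 0)
    h2 = step(-0.5 - A + B)            # fires only for (A, B) = (0, 1)
    return step(-0.5 + h1 + h2 + C)    # OR of h1, h2, C

# verify against the Boolean definition for all 8 input combinations
for A in (0, 1):
    for B in (0, 1):
        for C in (0, 1):
            assert f_network(A, B, C) == ((A ^ B) | C)
print("truth table matches (A xor B) or C")
```

The hidden layer is needed because XOR is not linearly separable, while each of the two "exactly one input fires" cases is.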

Example problems (11)
• The classification task is given in the Figure.

• Give a perceptron network (with concrete weights) that solves the problem.

• Can the network be trained with the perceptron learning rule?

• How many classes can be separated with this network?

Example problems (12)
• The classification task is given in the Figure.

• Give a perceptron network (with concrete weights) that solves the problem.

• Can the network be trained with the perceptron learning rule?

Example problems (13)
• We would like to approximate the nonlinear decision curve of a set-separation task given in the Figure with lines, using a network that contains three perceptrons in the hidden layer.

• Give the weights of the network! Create a training set τ(5) with K = 5 elements for the nonlinear set-separation problem!

• Draw the polygonal decision boundary of the network!
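The structure of such a network can be sketched as three hidden perceptrons, each detecting a half-plane, ANDed by the output neuron into a polygonal region. The actual decision curve appears only in the Figure, so the triangle with vertices (0,0), (4,0), (2,3) below is an assumed stand-in, as is the 5-element training set.

```python
# Three hidden half-plane perceptrons + an AND output neuron give a
# triangular decision region (the polygonal approximation of the curve).
# The triangle and the training set tau(5) are illustrative assumptions.

def step(v):
    return 1 if v >= 0 else 0

def inside(x1, x2):
    g1 = step(x2)                    # half-plane above the line x2 = 0
    g2 = step(3*x1 - 2*x2)           # half-plane right of edge (0,0)-(2,3)
    g3 = step(12 - 3*x1 - 2*x2)      # half-plane left of edge (4,0)-(2,3)
    return step(g1 + g2 + g3 - 2.5)  # AND: inside all three half-planes

# An assumed K = 5 training set tau(5): ((x1, x2), desired label)
tau5 = [((2, 1), 1), ((1, 1), 1), ((0, 3), 0), ((5, 0), 0), ((2, -1), 0)]
print(all(inside(x1, x2) == d for (x1, x2), d in tau5))  # → True
```

The output threshold of 2.5 turns the three hidden outputs into a 3-input AND, so the polygonal decision boundary is exactly the intersection of the three half-planes.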

Summary
• The McCulloch-Pitts neuron can implement many logical functions; moreover, a combination of such neurons (a neural network) can implement any logical function.

• The McCulloch-Pitts (or artificial) neuron acts as a linear separator.

• Pattern recognition can be solved using a perceptron (one artificial neuron, or one layer of neurons).

• The artificial neuron adapts to its environment using a training set (input and desired-output pairs) in polynomial time.
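The first two summary points can be demonstrated in a few lines: a single threshold unit realizes AND and OR, each a linear separation of {0,1}². The weight values below are one standard choice.

```python
# A single McCulloch-Pitts neuron (threshold unit) realizing AND and OR,
# each of which is a linear separation of the four points of {0,1}^2.

def mcp(w0, w1, w2):
    # bias w0 plus weighted inputs, hard threshold at 0
    return lambda x1, x2: 1 if w0 + w1 * x1 + w2 * x2 >= 0 else 0

AND = mcp(-1.5, 1, 1)   # fires only when x1 + x2 >= 1.5, i.e. both inputs 1
OR  = mcp(-0.5, 1, 1)   # fires when x1 + x2 >= 0.5, i.e. at least one input 1

print([AND(a, b) for a in (0, 1) for b in (0, 1)])  # → [0, 0, 0, 1]
print([OR(a, b)  for a in (0, 1) for b in (0, 1)])  # → [0, 1, 1, 1]
```

XOR, by contrast, is not linearly separable, which is why it needs the two-layer construction used in the example problems above.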

Next lecture: the Hopfield network; the Hopfield net as associative memory and combinatorial optimizer.