
Page 1:

Introduction to Neural Networks

Page 2:

ECLT 5810 Classification ‐ Neural Networks

Demonstrating Some Intelligence 

“Mastering the game of Go with Deep Neural Networks and Tree Search”, Nature 529, Jan 28, 2016.

Page 3:

Basic Unit

• Consider a supervised learning problem where we have access to labeled training examples $(x^{(i)}, y^{(i)})$.

• Neural networks give a way of defining a complex, non‐linear form of hypotheses $h_{W,b}(x)$, with parameters $W, b$ that we can fit to our data.

Page 4:

Basic Unit

• We will introduce neural networks by describing the simplest possible neural network, one which comprises a single “neuron.” The following diagram is used to denote a single neuron:

[Figure: a single neuron with inputs $x_1, x_2, x_3$ and a +1 intercept term, producing output $h_{W,b}(x)$.]

Page 5:

Basic Unit

• This “neuron” is a computational unit that takes as input $x_1, x_2, x_3$ (and a +1 intercept term), and outputs

$$h_{W,b}(x) = f(W^T x) = f\left( \sum_{i=1}^{3} W_i x_i + b \right),$$

where $f : \mathbb{R} \mapsto \mathbb{R}$ is called the activation function.

• We will choose $f(\cdot)$ to be the sigmoid function:

$$f(z) = \frac{1}{1 + e^{-z}}$$

Page 6:

Basic Unit

• Our single neuron corresponds exactly to the input‐output mapping defined by logistic regression.

• Although we will use the sigmoid function, it is worth noting that another common choice for $f$ is the hyperbolic tangent, or tanh, function:

$$f(z) = \tanh(z) = \frac{e^{z} - e^{-z}}{e^{z} + e^{-z}}$$

Page 7:

Basic Unit

• Recent research has found that a different activation function, the rectified linear function, often works better in practice for deep neural networks.

• This activation function is different from sigmoid and tanh because it is not bounded or continuously differentiable.

• The rectified linear activation function is given by:

$$f(z) = \max(0, z)$$
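For concreteness, the three activation functions can be written as a short NumPy sketch (the function names are illustrative choices, not fixed by the slides):

```python
import numpy as np

def sigmoid(z):
    # f(z) = 1 / (1 + exp(-z)); output lies in (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

def tanh(z):
    # hyperbolic tangent; output lies in (-1, 1)
    return np.tanh(z)

def relu(z):
    # rectified linear: max(0, z); 0 for z < 0, unbounded above
    return np.maximum(0.0, z)
```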

Page 8:

Basic Unit

The following diagram shows plots of the sigmoid, tanh, and rectified linear functions:
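If the original figure is unavailable, the plots are easy to regenerate; a short sketch, assuming matplotlib:

```python
import numpy as np
import matplotlib.pyplot as plt

z = np.linspace(-4, 4, 200)
plt.plot(z, 1 / (1 + np.exp(-z)), label="sigmoid")
plt.plot(z, np.tanh(z), label="tanh")
plt.plot(z, np.maximum(0, z), label="rectified linear")
plt.xlabel("z")
plt.ylabel("f(z)")
plt.legend()
plt.show()
```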

Page 9:

Basic Unit

• The $\tanh(z)$ function is a rescaled version of the sigmoid, and its output range is $[-1, 1]$ instead of $[0, 1]$.

• The rectified linear function is piece‐wise linear and saturates at exactly 0 whenever the input $z$ is less than 0.

Page 10:

Basic Unit

One identity that’ll be useful later:

• If $f(z) = \frac{1}{1 + e^{-z}}$ is the sigmoid function, then its derivative is given by $f'(z) = f(z)\left(1 - f(z)\right)$.

• If $f$ is the tanh function, then its derivative is given by $f'(z) = 1 - \left(f(z)\right)^2$.
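These identities are easy to sanity-check numerically; the following sketch compares each analytic derivative against a finite-difference estimate (the test point and step size are arbitrary choices):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

z, eps = 0.7, 1e-6

# Sigmoid: f'(z) = f(z) * (1 - f(z))
analytic = sigmoid(z) * (1 - sigmoid(z))
numeric = (sigmoid(z + eps) - sigmoid(z - eps)) / (2 * eps)
print(abs(analytic - numeric))  # ~1e-12

# tanh: f'(z) = 1 - f(z)^2
analytic_t = 1 - np.tanh(z) ** 2
numeric_t = (np.tanh(z + eps) - np.tanh(z - eps)) / (2 * eps)
print(abs(analytic_t - numeric_t))  # ~1e-12
```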

Page 11:

Basic Unit

• The rectified linear function has gradient 0 when $z < 0$ and gradient 1 when $z > 0$.

• The gradient is undefined at $z = 0$, though this doesn’t cause problems in practice because we average the gradient over many training examples during optimization.

Page 12:

Network Model

• A neural network is put together by hooking together many of our simple “neurons,” so that the output of a neuron can be the input of another. 

• For example, the following diagram shows a small neural network.

Page 13:

Network Model

[Figure: the example neural network: three inputs plus a bias unit, one hidden layer of three units, and a single output unit.]

Page 14:

Network Model

• Circles denote the inputs to the network. The circles labeled “+1” are called bias units, and correspond to the intercept term.

• The leftmost layer of the network is called the input layer, and the rightmost layer the output layer (which in this example has only one node).

Page 15:

Network Model

• The middle layer of nodes is called the hidden layer, because its values are not observed in the training set. 

• Our example neural network has 3 input units (not counting the bias unit), 3 hidden units, and 1 output unit.

Page 16:

Network Model

• Let $n_l$ denote the number of layers in our network; thus $n_l = 3$ in our example.

• Label layer $l$ as $L_l$, so layer $L_1$ is the input layer, and layer $L_{n_l}$ the output layer.

Page 17:

Network Model

• The neural network has parameters $(W, b) = (W^{(1)}, b^{(1)}, W^{(2)}, b^{(2)})$, where we write $W^{(l)}_{ij}$ to denote the parameter (or weight) associated with the connection between unit $j$ in layer $l$, and unit $i$ in layer $l+1$. (Note the order of the indices.)

Page 18:

Network Model

• $b^{(l)}_i$ is the bias associated with unit $i$ in layer $l+1$. Thus, in our example, we have $W^{(1)} \in \mathbb{R}^{3 \times 3}$, and $W^{(2)} \in \mathbb{R}^{1 \times 3}$.

• Note that bias units don’t have inputs or connections going into them, since they always output the value +1.

• We also let $s_l$ denote the number of nodes in layer $l$ (not counting the bias unit).

Page 19:

Network Model

• $a^{(l)}_i$ denotes the activation (meaning output value) of unit $i$ in layer $l$.

• For $l = 1$, we use $a^{(1)}_i = x_i$ to denote the $i$‐th input.

• Given a fixed setting of the parameters $W, b$, our neural network defines a hypothesis $h_{W,b}(x)$ that outputs a real number.

Page 20:

Network Model

• Specifically, the computation that the previous neural network represents is given by:

$$a^{(2)}_1 = f\left( W^{(1)}_{11} x_1 + W^{(1)}_{12} x_2 + W^{(1)}_{13} x_3 + b^{(1)}_1 \right)$$
$$a^{(2)}_2 = f\left( W^{(1)}_{21} x_1 + W^{(1)}_{22} x_2 + W^{(1)}_{23} x_3 + b^{(1)}_2 \right)$$
$$a^{(2)}_3 = f\left( W^{(1)}_{31} x_1 + W^{(1)}_{32} x_2 + W^{(1)}_{33} x_3 + b^{(1)}_3 \right)$$
$$h_{W,b}(x) = a^{(3)}_1 = f\left( W^{(2)}_{11} a^{(2)}_1 + W^{(2)}_{12} a^{(2)}_2 + W^{(2)}_{13} a^{(2)}_3 + b^{(2)}_1 \right)$$

Page 21:

Network Model

• In the sequel, we let $z^{(l)}_i$ denote the total weighted sum of inputs to unit $i$ in layer $l$, including the bias term (e.g., $z^{(2)}_i = \sum_{j=1}^{n} W^{(1)}_{ij} x_j + b^{(1)}_i$), so that $a^{(l)}_i = f(z^{(l)}_i)$.

• Note that this easily lends itself to a more compact notation.

Page 22:

Network Model

• Specifically, if we extend the activation function $f(\cdot)$ to apply to vectors in an element‐wise fashion (i.e., $f([z_1, z_2, z_3]) = [f(z_1), f(z_2), f(z_3)]$), then we can write the previous equations more compactly as follows:

$$z^{(2)} = W^{(1)} x + b^{(1)}$$
$$a^{(2)} = f\left( z^{(2)} \right)$$
$$z^{(3)} = W^{(2)} a^{(2)} + b^{(2)}$$
$$h_{W,b}(x) = a^{(3)} = f\left( z^{(3)} \right)$$

• We call this step forward propagation.

Page 23:

Network Model

• Recalling that we also use $a^{(1)} = x$ to denote the values from the input layer, then given layer $l$’s activations $a^{(l)}$, we can compute layer $l+1$’s activations $a^{(l+1)}$ as:

$$z^{(l+1)} = W^{(l)} a^{(l)} + b^{(l)}$$
$$a^{(l+1)} = f\left( z^{(l+1)} \right)$$

• By organizing our parameters in matrices and using matrix‐vector operations, we can take advantage of fast linear algebra routines to quickly perform calculations in our network. A minimal sketch of this layer-by-layer computation appears below.
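The following NumPy sketch implements exactly this loop, assuming sigmoid activations and a hypothetical 3-3-1 network like the example above (weights and the test input are arbitrary):

```python
import numpy as np

def forward(x, Ws, bs, f=lambda z: 1.0 / (1.0 + np.exp(-z))):
    """Forward propagation: starting from a^(1) = x, repeatedly
    compute z^(l+1) = W^(l) a^(l) + b^(l) and a^(l+1) = f(z^(l+1))."""
    a = x
    for W, b in zip(Ws, bs):
        z = W @ a + b   # total weighted input to the next layer
        a = f(z)        # element-wise activation
    return a            # h_{W,b}(x) = a^(n_l)

# Hypothetical 3-3-1 network matching the slides' example.
rng = np.random.default_rng(0)
Ws = [0.01 * rng.standard_normal((3, 3)), 0.01 * rng.standard_normal((1, 3))]
bs = [np.zeros(3), np.zeros(1)]
print(forward(np.array([1.0, 2.0, 3.0]), Ws, bs))
```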

Page 24:

Network Model

• We focused on one example neural network, but one can also build neural networks with other architectures (meaning patterns of connectivity between neurons), including ones with multiple hidden layers. 

• The most common choice is an $n_l$‐layered network where layer $L_1$ is the input layer, layer $L_{n_l}$ is the output layer, and each layer $L_l$ is densely connected to layer $L_{l+1}$.

Page 25:

Network Model

• In this setting, to compute the output of the network, we can successively compute all the activations in layer $L_2$, then layer $L_3$, and so on, up to layer $L_{n_l}$, using the forward propagation step.

• This is one example of a feedforward neural network, since the connectivity graph does not have any directed loops or cycles.

Page 26:

Network Model


Neural networks can also have multiple output units. 

Page 27:

Network Model

• To train this network, we need training examples $(x^{(i)}, y^{(i)})$ where $y^{(i)}$ is a vector with one component per output unit.

• This sort of network is useful if there are multiple outputs that you’re interested in predicting.

• For example, in a medical diagnosis application, the vector $x$ might give the input features of a patient, and the different outputs $y_i$’s might indicate presence or absence of different diseases.

Page 28:

Backpropagation Algorithm

• Suppose we have a fixed training set $\{(x^{(1)}, y^{(1)}), \ldots, (x^{(m)}, y^{(m)})\}$ of $m$ training examples. We can train our neural network using batch gradient descent.

• For a single training example $(x, y)$, we define the cost function with respect to that single example to be:

$$J(W, b; x, y) = \frac{1}{2} \left\| h_{W,b}(x) - y \right\|^2$$

Page 29:

Backpropagation Algorithm

• This is a (one‐half) squared‐error cost function. Given a training set of $m$ examples, we then define the overall cost function to be:

$$J(W, b) = \left[ \frac{1}{m} \sum_{i=1}^{m} J\left( W, b; x^{(i)}, y^{(i)} \right) \right] + \frac{\lambda}{2} \sum_{l=1}^{n_l - 1} \sum_{i=1}^{s_l} \sum_{j=1}^{s_{l+1}} \left( W^{(l)}_{ji} \right)^2$$
$$= \left[ \frac{1}{m} \sum_{i=1}^{m} \frac{1}{2} \left\| h_{W,b}\left( x^{(i)} \right) - y^{(i)} \right\|^2 \right] + \frac{\lambda}{2} \sum_{l=1}^{n_l - 1} \sum_{i=1}^{s_l} \sum_{j=1}^{s_{l+1}} \left( W^{(l)}_{ji} \right)^2$$
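A direct transcription of this cost into NumPy, assuming a forward(x, Ws, bs) helper like the sketch on Page 23 (note the bias terms are deliberately left out of the decay term, as discussed next):

```python
import numpy as np

def cost(Ws, bs, X, Y, lam, forward):
    """J(W,b): average half squared error plus weight decay on the W's
    (the biases b are not regularized)."""
    m = len(X)
    err = sum(0.5 * np.sum((forward(x, Ws, bs) - y) ** 2)
              for x, y in zip(X, Y)) / m
    decay = (lam / 2) * sum(np.sum(W ** 2) for W in Ws)
    return err + decay
```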

Page 30:

Backpropagation Algorithm

• The first term in the definition of $J(W, b)$ is an average sum‐of‐squares error term.

• The second term is a regularization term (called a weight decay term) that tends to decrease the magnitude of the weights, and helps prevent overfitting.

• Usually weight decay is not applied to the bias terms $b^{(l)}_i$, as reflected in our definition for $J(W, b)$.

Page 31:

Backpropagation Algorithm

• Applying weight decay to the bias units usually makes only a small difference to the final network.

• The weight decay parameter $\lambda$ controls the relative importance of the two terms.

• Note also the slightly overloaded notation: $J(W, b; x, y)$ is the squared error cost with respect to a single example; $J(W, b)$ is the overall cost function, which includes the weight decay term.

Page 32:

Backpropagation Algorithm

• The cost function above is often used both for classification and for regression problems.

• For classification, we let $y = 0$ or $1$ represent the two class labels (the sigmoid activation function outputs values in $[0, 1]$; if we were using a tanh activation function, we would instead use −1 and +1 to denote the labels).

Page 33:

Backpropagation Algorithm

• For regression problems, we first scale our outputs to ensure that they lie in the $[0, 1]$ range (or, if we were using a tanh activation function, the $[-1, 1]$ range).

Page 34:

Backpropagation Algorithm

• Our goal is to minimize $J(W, b)$ as a function of $W$ and $b$.

• To train our neural network, we initialize each parameter $W^{(l)}_{ij}$ and each $b^{(l)}_i$ to a small random value near zero (say according to a $\mathrm{Normal}(0, \epsilon^2)$ distribution for some small $\epsilon$, say 0.01), and then apply an optimization algorithm such as batch gradient descent.
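A minimal initialization sketch in NumPy, assuming the example's 3-3-1 layer sizes and drawing each parameter from a Normal(0, ε²) distribution with ε = 0.01:

```python
import numpy as np

rng = np.random.default_rng(0)
eps = 0.01                 # small standard deviation, per the slides
sizes = [3, 3, 1]          # s_1 (input), s_2 (hidden), s_3 (output)

# W^(l) has shape s_{l+1} x s_l; b^(l) has length s_{l+1}.
Ws = [eps * rng.standard_normal((sizes[l + 1], sizes[l]))
      for l in range(len(sizes) - 1)]
bs = [eps * rng.standard_normal(sizes[l + 1])
      for l in range(len(sizes) - 1)]
```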

Page 35:

Backpropagation Algorithm

• Since $J(W, b)$ is a non‐convex function, gradient descent is susceptible to local optima.

• However, in practice gradient descent usually works fairly well.

• Note that it is important to initialize the parameters randomly, rather than to all 0’s.

Page 36:

Backpropagation Algorithm

• If all the parameters start off at identical values, then all the hidden layer units will end up learning the same function of the input (more formally, $W^{(1)}_{ij}$ will be the same for all values of $i$, so that $a^{(2)}_1 = a^{(2)}_2 = a^{(2)}_3 = \cdots$ for any input $x$).

• The random initialization serves the purpose of symmetry breaking.

Page 37:

Backpropagation Algorithm

• One iteration of gradient descent updates the parameters $W, b$ as follows:

$$W^{(l)}_{ij} := W^{(l)}_{ij} - \alpha \frac{\partial}{\partial W^{(l)}_{ij}} J(W, b)$$
$$b^{(l)}_i := b^{(l)}_i - \alpha \frac{\partial}{\partial b^{(l)}_i} J(W, b)$$

where $\alpha$ is the learning rate.
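The update itself is a single vectorized step. In this sketch the gradient entries are placeholder values standing in for $\partial J / \partial W$, which backpropagation will supply below:

```python
import numpy as np

alpha = 0.1                           # hypothetical learning rate
W = np.array([[0.5, -0.2],
              [0.1,  0.3]])
grad_W = np.array([[0.04, 0.01],
                   [-0.02, 0.00]])    # placeholder gradient values
W = W - alpha * grad_W                # W := W - alpha * dJ/dW
print(W)
```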

Page 38:

Backpropagation Algorithm

• The key step is computing the partial derivatives above.

• We will describe the backpropagation algorithm, which gives an efficient way to compute these partial derivatives.

• We first describe how backpropagation can be used to compute $\frac{\partial}{\partial W^{(l)}_{ij}} J(W, b; x, y)$ and $\frac{\partial}{\partial b^{(l)}_i} J(W, b; x, y)$, the partial derivatives of the cost function $J(W, b; x, y)$ defined with respect to a single example $(x, y)$.

Page 39:

Backpropagation Algorithm

• Once we can compute these, we see that the derivative of the overall cost function can be computed as:

$$\frac{\partial}{\partial W^{(l)}_{ij}} J(W, b) = \left[ \frac{1}{m} \sum_{i=1}^{m} \frac{\partial}{\partial W^{(l)}_{ij}} J\left( W, b; x^{(i)}, y^{(i)} \right) \right] + \lambda W^{(l)}_{ij}$$
$$\frac{\partial}{\partial b^{(l)}_i} J(W, b) = \frac{1}{m} \sum_{i=1}^{m} \frac{\partial}{\partial b^{(l)}_i} J\left( W, b; x^{(i)}, y^{(i)} \right)$$

• The two lines above differ slightly because weight decay is applied to $W$ but not $b$.

Page 40:

Backpropagation Algorithm

• The intuition behind the backpropagation algorithm is as follows.

• Given a training example $(x, y)$, we first run a “forward pass” to compute all the activations throughout the network, including the output value of the hypothesis $h_{W,b}(x)$.

• Then, for each node $i$ in layer $l$, we compute an “error term” $\delta^{(l)}_i$ that measures how much that node was “responsible” for any errors in our output.

Page 41:

Backpropagation Algorithm

• For an output node, we can directly measure the difference between the network’s activation and the true target value, and use that to define $\delta^{(n_l)}_i$ (where layer $n_l$ is the output layer).

• For hidden units, we compute $\delta^{(l)}_i$ based on a weighted average of the error terms of the nodes that use $a^{(l)}_i$ as an input.

Page 42:

Backpropagation Algorithm

• Here is the backpropagation algorithm:

1. Perform a feedforward pass, computing the activations for layers $L_2$, $L_3$, and so on up to the output layer $L_{n_l}$.

2. For each output unit $i$ in layer $n_l$ (the output layer), set

$$\delta^{(n_l)}_i = \frac{\partial}{\partial z^{(n_l)}_i} \frac{1}{2} \left\| y - h_{W,b}(x) \right\|^2 = -\left( y_i - a^{(n_l)}_i \right) \cdot f'\left( z^{(n_l)}_i \right)$$

Page 43:

Backpropagation Algorithm

3. For $l = n_l - 1, n_l - 2, \ldots, 2$, for each node $i$ in layer $l$, set

$$\delta^{(l)}_i = \left( \sum_{j=1}^{s_{l+1}} W^{(l)}_{ji} \delta^{(l+1)}_j \right) f'\left( z^{(l)}_i \right)$$

4. Compute the desired partial derivatives, which are given as:

$$\frac{\partial}{\partial W^{(l)}_{ij}} J(W, b; x, y) = a^{(l)}_j \delta^{(l+1)}_i$$
$$\frac{\partial}{\partial b^{(l)}_i} J(W, b; x, y) = \delta^{(l+1)}_i$$

Page 44:

Backpropagation Algorithm

• We can also re‐write the algorithm using matrix‐vectorial notation. 

• We will use “$\bullet$” to denote the element‐wise product operator (denoted .* in Matlab or Octave, and also called the Hadamard product), so that if $a = b \bullet c$, then $a_i = b_i c_i$.

• Similar to how we extended the definition of $f(\cdot)$ to apply element‐wise to vectors, we do the same for $f'(\cdot)$ (so that $f'([z_1, z_2, z_3]) = [f'(z_1), f'(z_2), f'(z_3)]$).

Page 45:

Backpropagation Algorithm

• The algorithm can be written:

1. Perform a feedforward pass, computing the activations for layers $L_2$, $L_3$, up to the output layer $L_{n_l}$, using the equations defining the forward propagation steps.

2. For the output layer (layer $n_l$), set

$$\delta^{(n_l)} = -\left( y - a^{(n_l)} \right) \bullet f'\left( z^{(n_l)} \right)$$

Page 46:

Backpropagation Algorithm

3. For $l = n_l - 1, n_l - 2, \ldots, 2$, set

$$\delta^{(l)} = \left( \left( W^{(l)} \right)^T \delta^{(l+1)} \right) \bullet f'\left( z^{(l)} \right)$$

4. Compute the desired partial derivatives:

$$\nabla_{W^{(l)}} J(W, b; x, y) = \delta^{(l+1)} \left( a^{(l)} \right)^T$$
$$\nabla_{b^{(l)}} J(W, b; x, y) = \delta^{(l+1)}$$
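Putting steps 1 to 4 together, a minimal NumPy sketch of backpropagation for one example, assuming sigmoid activations throughout (so that $f'(z) = a(1 - a)$, as the next slide notes):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def backprop(x, y, Ws, bs):
    """Gradients of J(W,b;x,y) w.r.t. each W^(l) and b^(l) for one example."""
    # Step 1: feedforward pass, storing every layer's activations a^(l).
    activations = [x]
    for W, b in zip(Ws, bs):
        activations.append(sigmoid(W @ activations[-1] + b))
    # Step 2: output-layer error, delta = -(y - a) . f'(z) = -(y - a) * a * (1 - a).
    a_out = activations[-1]
    delta = -(y - a_out) * a_out * (1 - a_out)
    dWs, dbs = [], []
    # Steps 3-4: walk backwards through the layers.
    for l in reversed(range(len(Ws))):
        a_prev = activations[l]
        dWs.insert(0, np.outer(delta, a_prev))   # grad_W = delta (a^(l))^T
        dbs.insert(0, delta)                     # grad_b = delta
        if l > 0:                                # propagate the error back
            delta = (Ws[l].T @ delta) * a_prev * (1 - a_prev)
    return dWs, dbs
```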

Page 47:

Backpropagation Algorithm

• In steps 2 and 3, we need to compute $f'(z^{(l)}_i)$ for each value of $i$.

• Assuming $f(z)$ is the sigmoid activation function, we would already have $a^{(l)}_i$ stored away from the forward pass through the network.

• Using the expression that we worked out earlier for $f'(z)$, we can compute this as $f'\left( z^{(l)}_i \right) = a^{(l)}_i \left( 1 - a^{(l)}_i \right)$.

Page 48:

Backpropagation Algorithm

• We are ready to describe the full gradient descent algorithm.

• In the following pseudo‐code, $\Delta W^{(l)}$ is a matrix (of the same dimension as $W^{(l)}$), and $\Delta b^{(l)}$ is a vector (of the same dimension as $b^{(l)}$).

• In this notation, “$\Delta W^{(l)}$” is a matrix, and in particular it is not “$\Delta$ times $W^{(l)}$.”

Page 49:

Backpropagation Algorithm

• We implement one iteration of batch gradient descent as follows:

1. Set $\Delta W^{(l)} := 0$, $\Delta b^{(l)} := 0$ (matrix/vector of zeros) for all $l$.

2. For $i = 1$ to $m$:

   1. Use backpropagation to compute $\nabla_{W^{(l)}} J\left( W, b; x^{(i)}, y^{(i)} \right)$ and $\nabla_{b^{(l)}} J\left( W, b; x^{(i)}, y^{(i)} \right)$.

   2. Set $\Delta W^{(l)} := \Delta W^{(l)} + \nabla_{W^{(l)}} J\left( W, b; x^{(i)}, y^{(i)} \right)$.

   3. Set $\Delta b^{(l)} := \Delta b^{(l)} + \nabla_{b^{(l)}} J\left( W, b; x^{(i)}, y^{(i)} \right)$.

Page 50:

Backpropagation Algorithm

3. Update the parameters:

$$W^{(l)} := W^{(l)} - \alpha \left[ \left( \frac{1}{m} \Delta W^{(l)} \right) + \lambda W^{(l)} \right]$$
$$b^{(l)} := b^{(l)} - \alpha \left[ \frac{1}{m} \Delta b^{(l)} \right]$$

• To train our neural network, we can now repeatedly take steps of gradient descent to reduce our cost function $J(W, b)$.
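Assembling everything, a sketch of one such iteration of batch gradient descent, assuming the backprop helper sketched on Page 46 (alpha and lam are hypothetical hyperparameter choices):

```python
import numpy as np

def batch_gd_step(X, Y, Ws, bs, alpha, lam, backprop):
    """One iteration of batch gradient descent, following the pseudo-code:
    accumulate per-example gradients in DeltaW/Deltab, then update."""
    m = len(X)
    DWs = [np.zeros_like(W) for W in Ws]   # Delta W^(l) := 0
    Dbs = [np.zeros_like(b) for b in bs]   # Delta b^(l) := 0
    for x, y in zip(X, Y):
        dWs, dbs = backprop(x, y, Ws, bs)
        DWs = [D + d for D, d in zip(DWs, dWs)]
        Dbs = [D + d for D, d in zip(Dbs, dbs)]
    # W := W - alpha [ (1/m) DeltaW + lambda W ];  b := b - alpha (1/m) Deltab
    Ws = [W - alpha * (D / m + lam * W) for W, D in zip(Ws, DWs)]
    bs = [b - alpha * (D / m) for b, D in zip(bs, Dbs)]
    return Ws, bs
```

Repeatedly calling batch_gd_step drives down $J(W, b)$, which is the whole training procedure described in these slides.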
