Hopfield Networks

1. KANCHANA RANI G, MTECH R2, Roll No: 08

2. Hopfield Nets
Hopfield developed a number of neural networks based on fixed weights and adaptive activations. These nets can serve as associative memory nets and can be used to solve constraint satisfaction problems such as the "Travelling Salesman Problem". There are two types: the Discrete Hopfield Net and the Continuous Hopfield Net.

3. Discrete Hopfield Net
The net is a fully interconnected neural net, in the sense that each unit is connected to every other unit. The net has symmetric weights with no self-connections, i.e., $w_{ij} = w_{ji}$ and $w_{ii} = 0$.

4. A Hopfield net differs from an iterative autoassociative net in two respects:
1. Only one unit updates its activation at a time (based on the signal it receives from each other unit).
2. Each unit continues to receive an external signal in addition to the signal from the other units in the net.

5. The asynchronous updating of the units allows a function, known as an energy function, to be found for the net. The existence of such a function enables us to prove that the net will converge to a stable set of activations, rather than oscillating. The original formulation of the discrete Hopfield net showed the usefulness of the net as a content-addressable memory.

6. Architecture

7. Algorithm
There are several versions of the discrete Hopfield net.
Binary Input Vectors: to store a set of binary patterns $s(p)$, $p = 1, \ldots, P$, where

$s(p) = (s_1(p), \ldots, s_i(p), \ldots, s_n(p)),$

8. the weight matrix $W = \{w_{ij}\}$ is given by

$w_{ij} = \sum_p [2 s_i(p) - 1][2 s_j(p) - 1]$ for $i \ne j$, and $w_{ii} = 0$.

9. Bipolar Inputs: to store a set of bipolar patterns $s(p)$, $p = 1, \ldots, P$, where $s(p) = (s_1(p), \ldots, s_i(p), \ldots, s_n(p))$, the weight matrix $W = \{w_{ij}\}$ is given by

$w_{ij} = \sum_p s_i(p) s_j(p)$ for $i \ne j$, and $w_{ii} = 0$.

10. The application algorithm is stated for binary patterns; the activation function can be modified easily to accommodate bipolar patterns.

11. Application Algorithm for the Discrete Hopfield Net
Step 0. Initialize weights to store patterns. (Use the Hebb rule.)
While the activations of the net are not converged, do Steps 1-7.
Step 1. For each input vector x, do Steps 2-6.
Step 2. Set the initial activations of the net equal to the external input vector x: $y_i = x_i$ ($i = 1, \ldots, n$).
Step 3. Do Steps 4-6 for each unit $Y_i$. (Units should be updated in random order.)
Step 4. Compute the net input: $y\_in_i = x_i + \sum_j y_j w_{ji}$.

12. Step 5. Determine the activation (output signal): $y_i = 1$ if $y\_in_i > \theta_i$; $y_i$ remains unchanged if $y\_in_i = \theta_i$; $y_i = 0$ if $y\_in_i < \theta_i$.
Step 6. Broadcast the value of $y_i$ to all other units. (This updates the activation vector.)
Step 7. Test for convergence.
The threshold $\theta_i$ is usually taken to be zero. The order of update of the units is random, but each unit must be updated at the same average rate.

13. Applications
A binary Hopfield net can be used to determine whether an input vector is a "known" or an "unknown" vector. The net recognizes a "known" vector by producing a pattern of activation on the units of the net that is the same as a vector stored in the net. If the input vector is an "unknown" vector, the activation vectors produced as the net iterates will converge to an activation vector that is not one of the stored patterns.

14. Example
Consider an example in which the vector (1, 1, 1, 0) (or its bipolar equivalent (1, 1, 1, -1)) was stored in a net. The binary input vector used (with mistakes in the first and second components) is (0, 0, 1, 0). Although the Hopfield net uses binary vectors, the weight matrix is bipolar. The units update their activations in a random order. For this example the update order is $Y_1, Y_4, Y_3, Y_2$.

15. Step 0. Initialize weights to store patterns (the bipolar outer product of (1, 1, 1, -1), with zero diagonal):

$W = \begin{bmatrix} 0 & 1 & 1 & -1 \\ 1 & 0 & 1 & -1 \\ 1 & 1 & 0 & -1 \\ -1 & -1 & -1 & 0 \end{bmatrix}$

Step 1. The input vector is x = (0, 0, 1, 0). For this vector:
Step 2. y = (0, 0, 1, 0).
Step 3. Choose unit $Y_1$ to update its activation:
Step 4. $y\_in_1 = x_1 + \sum_j y_j w_{j1} = 0 + 1 = 1$.
Step 5. $y\_in_1 = 1 > 0$, so $y_1 = 1$.
Step 6. y = (1, 0, 1, 0).

16. Step 3. Choose unit $Y_4$ to update its activation:
Step 4. $y\_in_4 = x_4 + \sum_j y_j w_{j4} = 0 + (-2) = -2$.
Step 5. $y\_in_4 = -2 < 0$, so $y_4 = 0$.
Step 6. y = (1, 0, 1, 0).
Step 3. Choose unit $Y_3$ to update its activation:
Step 4. $y\_in_3 = x_3 + \sum_j y_j w_{j3} = 1 + 1 = 2$.
Step 5. $y\_in_3 = 2 > 0$, so $y_3 = 1$.
Step 6. y = (1, 0, 1, 0).

17. Step 3. Choose unit $Y_2$ to update its activation:
Step 4. $y\_in_2 = x_2 + \sum_j y_j w_{j2} = 0 + 2 = 2$.
Step 5. $y\_in_2 = 2 > 0$, so $y_2 = 1$.
Step 6. y = (1, 1, 1, 0).
Step 7. Test for convergence. Since y = (1, 1, 1, 0) is the stored pattern, further updates produce no change and the net has converged.
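The storage rule of slides 8-9 and the application algorithm of slides 11-12 can be sketched in a few lines of Python. This is a minimal illustration assuming NumPy; the names store_patterns and recall are illustrative, not from the slides. Run on the example above, it recovers the stored pattern (1, 1, 1, 0) from the noisy input (0, 0, 1, 0).

```python
import numpy as np

def store_patterns(patterns):
    """Hebb-rule weight matrix for bipolar patterns, zero diagonal (slide 9)."""
    s = np.array(patterns)            # shape (P, n), entries +1 / -1
    w = s.T @ s                       # w_ij = sum_p s_i(p) s_j(p)
    np.fill_diagonal(w, 0)            # no self-connections: w_ii = 0
    return w

def recall(w, x, theta=0.0, max_sweeps=10, seed=0):
    """Asynchronous recall of a binary input vector x (slides 11-12)."""
    rng = np.random.default_rng(seed)
    y = x.copy()
    for _ in range(max_sweeps):
        changed = False
        for i in rng.permutation(len(y)):     # units update in random order
            y_in = x[i] + y @ w[:, i]         # y_in_i = x_i + sum_j y_j w_ji
            if y_in > theta and y[i] != 1:
                y[i], changed = 1, True
            elif y_in < theta and y[i] != 0:
                y[i], changed = 0, True       # y_in == theta leaves y_i unchanged
        if not changed:                       # converged: full sweep with no change
            break
    return y

w = store_patterns([[1, 1, 1, -1]])           # bipolar form of (1, 1, 1, 0)
print(recall(w, np.array([0, 0, 1, 0])))      # -> [1 1 1 0]
```

To store binary rather than bipolar patterns, only store_patterns would change, using the factors $2 s_i(p) - 1$ from slide 8.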
18. Analysis
Energy Function. An energy function is a function that is bounded below and is a nonincreasing function of the state of the system. For a neural net, the state of the system is the vector of activations of the units. Thus, if an energy function can be found for an iterative neural net, the net will converge to a stable set of activations.

19. The energy function for the discrete Hopfield net is given by

$E = -0.5 \sum_i \sum_{j \ne i} y_i y_j w_{ij} - \sum_i x_i y_i + \sum_i \theta_i y_i.$

If the activation of the net changes by an amount $\Delta y_i$, the energy changes by an amount

$\Delta E = -\Big[ \sum_j y_j w_{ij} + x_i - \theta_i \Big] \Delta y_i.$

20. Consider the two cases in which a change $\Delta y_i$ will occur in the activation of neuron $Y_i$.
If $y_i$ is positive, it will change to zero if $x_i + \sum_j y_j w_{ji} < \theta_i$. This gives a negative change for $y_i$; in this case, $\Delta E < 0$.
If $y_i$ is zero, it will change to positive if $x_i + \sum_j y_j w_{ji} > \theta_i$. This gives a positive change for $y_i$; in this case, $\Delta E < 0$.

21. Storage Capacity. Hopfield found experimentally that the number of binary patterns that can be stored and recalled in a net with reasonable accuracy is given approximately by $P \approx 0.15\,n$, where $n$ is the number of neurons in the net.
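The convergence argument of slides 19-20 can be checked numerically on the worked example. Below is a minimal sketch assuming NumPy; the function name energy is illustrative. It replays the update order $Y_1, Y_4, Y_3, Y_2$ and prints the energy after each update; the values never increase.

```python
import numpy as np

def energy(w, x, y, theta=0.0):
    # E = -1/2 sum_{i != j} y_i y_j w_ij - sum_i x_i y_i + sum_i theta_i y_i
    # (the i != j restriction holds automatically since w has a zero diagonal)
    return -0.5 * (y @ w @ y) - x @ y + theta * y.sum()

w = np.array([[ 0,  1,  1, -1],
              [ 1,  0,  1, -1],
              [ 1,  1,  0, -1],
              [-1, -1, -1,  0]])
x = np.array([0, 0, 1, 0])
y = x.copy()
print(energy(w, x, y))                 # -1.0 for the initial state
for i in [0, 3, 2, 1]:                 # update order Y1, Y4, Y3, Y2 (slide 14)
    y_in = x[i] + y @ w[:, i]          # net input of unit i
    if y_in > 0:
        y[i] = 1
    elif y_in < 0:
        y[i] = 0                       # y_in == 0 would leave y[i] unchanged
    print(y, energy(w, x, y))          # non-increasing: -2.0, -2.0, -2.0, -4.0
```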
22. Continuous Hopfield Net
A modification of the discrete Hopfield net with continuous-valued output functions can be used either for associative memory problems or for constrained optimization problems such as the travelling salesman problem. Here $u_i$ denotes the internal activity of a neuron; its output signal is $v_i = g(u_i)$.

23. If we define an energy function

$E = -0.5 \sum_i \sum_{j \ne i} w_{ij} v_i v_j - \sum_i \theta_i v_i,$

then the net will converge to a stable configuration that is a minimum of the energy function as long as $dE/dt \le 0$. For this form of the energy function, the net will converge if the activity of each neuron changes with time according to the differential equation

$\frac{du_i}{dt} = -\frac{\partial E}{\partial v_i} = \sum_{j \ne i} w_{ij} v_j + \theta_i.$

24. In the original presentation of the continuous Hopfield net, the energy function is

$E = -0.5 \sum_i \sum_{j \ne i} w_{ij} v_i v_j - \sum_i \theta_i v_i + \frac{1}{\tau} \sum_i \int_0^{v_i} g^{-1}(v)\, dv,$

where $\tau$ is the time constant. If the activity of each neuron changes with time according to the differential equation

$\frac{du_i}{dt} = -\frac{u_i}{\tau} + \sum_{j \ne i} w_{ij} v_j + \theta_i,$

the net will converge.

25. In the Hopfield-Tank solution of the travelling salesman problem, each unit has two indices: the first index (x, y, etc.) denotes the city, the second (i, j, etc.) the position in the tour. The Hopfield-Tank energy function for the travelling salesman problem is

$E = \frac{A}{2} \sum_x \sum_i \sum_{j \ne i} v_{x,i} v_{x,j} + \frac{B}{2} \sum_i \sum_x \sum_{y \ne x} v_{x,i} v_{y,i} + \frac{C}{2} \Big[ \sum_x \sum_i v_{x,i} - n \Big]^2 + \frac{D}{2} \sum_x \sum_{y \ne x} \sum_i d_{x,y} v_{x,i} (v_{y,i+1} + v_{y,i-1}).$

26. The differential equation for the activity of unit $U_{x,i}$ is

$\frac{du_{x,i}}{dt} = -\frac{u_{x,i}}{\tau} - A \sum_{j \ne i} v_{x,j} - B \sum_{y \ne x} v_{y,i} - C \Big( \sum_y \sum_j v_{y,j} - N \Big) - D \sum_{y \ne x} d_{x,y} (v_{y,i+1} + v_{y,i-1}).$

The output signal is given by applying the sigmoid function (with range between 0 and 1), which Hopfield and Tank expressed as

$v_{x,i} = \frac{1}{2} \big[ 1 + \tanh(\alpha u_{x,i}) \big].$

27. Architecture
The units used to solve the 10-city travelling salesman problem are arranged in a 10 x 10 array, with each row corresponding to a city and each column to a position in the tour.

28. There is a contribution to the energy if two units in the same row are "on." More explicitly, the weights between units $U_{x,i}$ and $U_{y,j}$ are

$w(x,i;\, y,j) = -A\, \delta_{x,y} (1 - \delta_{i,j}) - B\, \delta_{i,j} (1 - \delta_{x,y}) - C - D\, d_{x,y} (\delta_{j,i+1} + \delta_{j,i-1}),$

where $\delta_{i,j}$ is the Kronecker delta, which is 1 if $i = j$ and 0 otherwise. In addition, each unit receives an external input signal. The parameter N is usually taken to be somewhat larger than the number of cities, n.

29. Algorithm

30. Application
Hopfield and Tank used the following parameter values in their solution of the problem: A = B = 500, C = 200, D = 500, N = 15, $\alpha$ = 50. Hopfield and Tank claimed a high rate of success in finding valid tours: they found 16 valid tours from 20 starting configurations. Approximately half of the trials produced one of the two shortest paths. The best tour found was A D E F G H I J B C, with length 2.71.

31. Best tour for the travelling salesman problem found by Hopfield and Tank.
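To make the Hopfield-Tank energy of slide 25 concrete, here is a minimal sketch assuming NumPy; the function name tsp_energy and the random 10-city coordinates are illustrative assumptions, not from Hopfield and Tank. For a valid tour, i.e. a permutation matrix v, the A, B, and C penalty terms vanish and E reduces to D times the tour length.

```python
import numpy as np

def tsp_energy(v, d, n, A=500.0, B=500.0, C=200.0, D=500.0):
    """Slide-25 energy. v[x, i]: output of unit U_{x,i} (city x, tour position i);
    d: distance matrix with zero diagonal, so the y != x restriction is automatic."""
    nxt = np.roll(v, -1, axis=1)      # v[y, i+1], positions taken modulo n
    prv = np.roll(v, 1, axis=1)       # v[y, i-1]
    a = (v.sum(axis=1) ** 2 - (v ** 2).sum(axis=1)).sum()  # same city, two positions
    b = (v.sum(axis=0) ** 2 - (v ** 2).sum(axis=0)).sum()  # same position, two cities
    c = (v.sum() - n) ** 2                                 # total activity away from n
    dd = np.einsum('xy,xi,yi->', d, v, nxt + prv)          # 2 * tour length on a valid tour
    return 0.5 * (A * a + B * b + C * c + D * dd)

rng = np.random.default_rng(1)
pts = rng.random((10, 2))                       # hypothetical city coordinates
d = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
tour = rng.permutation(10)
v = np.zeros((10, 10))
v[tour, np.arange(10)] = 1.0                    # valid tour as a permutation matrix
length = d[tour, np.roll(tour, -1)].sum()
print(tsp_energy(v, d, n=10), 500.0 * length)   # equal: the penalty terms are zero
```

In a full solver, the net would instead integrate the slide-26 differential equation from a noisy initial state, and the penalty terms push the continuous outputs toward such a permutation matrix.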