Image Compression Using Neural Networks
Vishal Agrawal (Y6541), Nandan Dubey (Y6279)
Overview
Introduction to neural networks
Back Propagated (BP) neural network
Image compression using BP neural network
Comparison with existing image compression techniques
What is a Neural Network?
An artificial neural network is a powerful data-modeling tool that can capture and represent complex input/output relationships. It can perform "intelligent" tasks similar to those performed by the human brain.
Neural Network Structure
A neural network is an interconnected group of neurons
A Simple Neural Network
Neural Network Structure
An Artificial Neuron
Activation Function
Depending on the problem, a variety of activation functions is used:
Linear activation functions, such as the step function
Nonlinear activation functions, such as the sigmoid function
Typical Activation Functions
F(x) = 1 / (1 + e^(−k ∑ wi·xi)), shown for k = 0.5, 1 and 10.
Using a nonlinear function that approximates a linear threshold allows a network to approximate nonlinear functions using only a small number of nodes.
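A minimal Python sketch of this activation; the weights and inputs are made-up illustrative values, not from the slides:

```python
import math

def sigmoid(x, k=1.0):
    """Logistic activation F(x) = 1 / (1 + e^(-k*x)); larger k approaches a step."""
    return 1.0 / (1.0 + math.exp(-k * x))

# Weighted sum of inputs, then activation (hypothetical w_i and x_i)
weights = [0.4, -0.2, 0.7]
inputs = [1.0, 0.5, -1.0]
s = sum(w * x for w, x in zip(weights, inputs))

for k in (0.5, 1, 10):
    print("k =", k, "F =", sigmoid(s, k))
```

As k grows, the output saturates toward 0 or 1, approximating a step function.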
What can a Neural Net do?
Compute a known function
Approximate an unknown function
Pattern recognition
Signal processing
Learn to do any of the above
Learning Neural Networks
Learning (training) a neural network means adjusting the weights of the connections so that the cost function is minimized. Cost function:
Ĉ = (1/N) ∑ (xi − xi′)²
where the xi′ are the desired outputs and the xi are the outputs of the neural network.
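The cost function translates directly into a few lines of Python (an illustrative sketch, not the authors' code):

```python
def cost(outputs, targets):
    """Mean squared error: C = (1/N) * sum_i (x_i - x_i')^2."""
    n = len(outputs)
    return sum((x - t) ** 2 for x, t in zip(outputs, targets)) / n

# Hypothetical network outputs versus desired outputs
print(cost([0.9, 0.1, 0.5], [1.0, 0.0, 0.5]))
```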
Learning Neural Networks: Back Propagation
Main idea: distribute the error function across the hidden layers, corresponding to their effect on the output.
Works on feed-forward networks.
Back Propagation
Repeat:
Choose a training pair and copy it to the input layer
Cycle that pattern through the net
Calculate the error derivative between the output activation and the target output
Back-propagate the summed product of the weights and errors in the output layer to calculate the error on the hidden units
Update the weights according to the error on that unit
Until the error is low or the net settles
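The loop above can be sketched as a tiny feed-forward net trained by back-propagation; the 2-3-1 layout, learning rate, XOR-style training pairs, and epoch count are all illustrative assumptions, not values from the slides:

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Tiny 2-3-1 feed-forward net; a bias is modeled as an extra +1 input.
n_in, n_hid = 2, 3
w_hid = [[random.uniform(-1, 1) for _ in range(n_in + 1)] for _ in range(n_hid)]
w_out = [random.uniform(-1, 1) for _ in range(n_hid + 1)]

def forward(x):
    xb = x + [1.0]
    h = [sigmoid(sum(w * v for w, v in zip(row, xb))) for row in w_hid]
    y = sigmoid(sum(w * v for w, v in zip(w_out, h + [1.0])))
    return h, y

def train_step(x, t, lr=0.5):
    h, y = forward(x)
    xb, hb = x + [1.0], h + [1.0]
    delta_out = (t - y) * y * (1.0 - y)          # (t - y) * f'(h) at the output
    for j in range(n_hid):
        # Hidden delta: back-propagated error times f'(h_j)
        delta_j = delta_out * w_out[j] * h[j] * (1.0 - h[j])
        for i in range(n_in + 1):
            w_hid[j][i] += lr * delta_j * xb[i]
    for j in range(n_hid + 1):
        w_out[j] += lr * delta_out * hb[j]
    return (t - y) ** 2

data = [([0.0, 0.0], 0.0), ([0.0, 1.0], 1.0),
        ([1.0, 0.0], 1.0), ([1.0, 1.0], 0.0)]
err0 = sum((t - forward(x)[1]) ** 2 for x, t in data)
for _ in range(5000):
    err = sum(train_step(x, t) for x, t in data)
print("error before/after training:", err0, err)
```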
Back Propagation: Sharing the Blame
We update the weights of each connection in the neural network. This is done using the Delta Rule.
Delta Rule
ΔWji = η · δj · xi
δj = (tj − yj) · f′(hj)
where η is the learning rate of the neural network, tj and yj are the target and actual outputs of the jth neuron, hj is the weighted sum of the neuron's inputs, and f′ is the derivative of the activation function f.
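A sketch of one delta-rule step for a single sigmoid neuron (the learning rate and sample values are illustrative):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def delta_rule_step(w, x, t, lr=0.1):
    """One delta-rule update for a single sigmoid neuron:
    delta_j = (t_j - y_j) * f'(h_j), with f'(h) = y(1 - y) for the sigmoid,
    and each weight changes by dW_ji = lr * delta_j * x_i."""
    h = sum(wi * xi for wi, xi in zip(w, x))   # weighted input sum h_j
    y = sigmoid(h)                             # actual output y_j
    delta = (t - y) * y * (1.0 - y)
    return [wi + lr * delta * xi for wi, xi in zip(w, x)]

print(delta_rule_step([0.0, 0.0], [1.0, 1.0], 1.0))
```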
Delta Rule for Multilayer Neural Networks
The problem with a multilayer network is that we don't know the target output values for the hidden-layer neurons. This can be solved by a trick:
δi = (∑k δk · Wki) · f′(hi)
The first factor, the sum over k, is an approximation to (ti − ai) for the hidden layers when we don't know ti.
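A minimal sketch of this hidden-layer delta, assuming a sigmoid activation so that f′(hi) = ai·(1 − ai); all numbers are illustrative:

```python
def hidden_delta(deltas_out, weights_to_out, a_i):
    """delta_i = (sum_k delta_k * W_ki) * f'(h_i).
    For a sigmoid, f'(h_i) = a_i * (1 - a_i), where a_i is the unit's activation."""
    back = sum(d * w for d, w in zip(deltas_out, weights_to_out))
    return back * a_i * (1.0 - a_i)

# Example: two output neurons feed their deltas back into hidden unit i
print(hidden_delta([0.1, -0.05], [0.4, 0.6], 0.5))
```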
Image Compression using BP Neural Network
Future of image coding (analogous to our visual system): the narrow channel acts like a K-L transform; compression comes from the entropy coding of the state vector hi's at the hidden layer.
Image Compression using BP Neural Network continued…
A set of image samples is used to train the network. This is equivalent to compressing the input into the narrow channel and then reconstructing the input from the hidden layer.
Image Compression using BP Neural Network continued…
Transform coding with a multilayer neural network: the image is subdivided into non-overlapping blocks of n × n pixels each. Each block represents an N-dimensional vector x, N = n × n, in N-dimensional space. The transformation process maps this set of vectors into:
y = W·x (encoding), output = W⁻¹·y (decoding)
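A sketch of the block-splitting step, assuming the image dimensions are multiples of n; the 4×4 test image is illustrative:

```python
def to_blocks(img, n):
    """Split an H x W image (list of rows) into non-overlapping n x n blocks,
    each flattened into an N = n*n vector; assumes H and W are multiples of n."""
    H, W = len(img), len(img[0])
    blocks = []
    for r in range(0, H, n):
        for c in range(0, W, n):
            blocks.append([img[r + i][c + j] for i in range(n) for j in range(n)])
    return blocks

# 4x4 test image with pixel values 0..15
img = [[r * 4 + c for c in range(4)] for r in range(4)]
print(to_blocks(img, 2))
```

Each flattened block is then the vector x fed through the network's narrow channel.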
Image Compression continued…
The inverse transformation needs to reconstruct the original image with a minimum of distortion.
Analysis
The bit rate can be defined as follows:
(mKt + NKT) / (mN) bits/pixel
where the input image is divided into m blocks of N pixels each, K is the number of hidden neurons, t is the number of bits used to encode each hidden-neuron output, and T the number of bits for each coupling weight from the hidden layer to the output layer. The weight term NKT is small compared to mKt and can be ignored.
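The bit-rate formula evaluated numerically; the block size, hidden-layer width, and bit widths below are illustrative choices, not values from the slides:

```python
def bit_rate(m, N, K, t, T):
    """Bits per pixel: (m*K*t + N*K*T) / (m*N).
    m blocks of N pixels each; K hidden neurons; t bits per hidden-neuron
    output (sent per block); T bits per coupling weight (sent once)."""
    return (m * K * t + N * K * T) / (m * N)

# e.g. a 256x256 image in 8x8 blocks: m = 1024 blocks of N = 64 pixels,
# K = 16 hidden neurons, t = 8 bits per output, T = 8 bits per weight
print(bit_rate(1024, 64, 16, 8, 8))
```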
Output of this Compression Algorithm
Other Neural Network Techniques
Hierarchical back-propagation neural network
Predictive coding
Depending on the weight function, we have Hebbian-learning-based image compression:
Wi(t+1) = (Wi(t) + α·hi(t)·X(t)) / ||Wi(t) + α·hi(t)·X(t)||
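A sketch of the normalized Hebbian update above; α and the sample vectors are illustrative:

```python
import math

def hebbian_update(w, x, h, alpha=0.01):
    """Normalized Hebbian step:
    W_i(t+1) = (W_i(t) + alpha*h_i(t)*X(t)) / ||W_i(t) + alpha*h_i(t)*X(t)||,
    where h is the neuron's output for input vector X."""
    u = [wi + alpha * h * xi for wi, xi in zip(w, x)]
    norm = math.sqrt(sum(ui * ui for ui in u))
    return [ui / norm for ui in u]

print(hebbian_update([1.0, 0.0], [0.0, 1.0], 1.0, alpha=1.0))
```

The division keeps each weight vector at unit length, which prevents the plain Hebbian rule from growing weights without bound.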
References
Neural network, Wikipedia (http://en.wikipedia.org/wiki/Neural_network)
Ivan Vilović: An Experience in Image Compression Using Neural Networks
Robert D. Dony, Simon Haykin: Neural Network Approaches to Image Compression
Constantino Carlos Reyes-Aldasoro, Ana Laura Aldeco: Image Segmentation and Compression Using Neural Networks
J. Jiang: Image Compression with Neural Networks – A Survey
Questions??
Thank You