Laboratory Manual
Digital Image Processing
CSE-407

Supervised by
Dr. Abdul Bais
Department of Computer Systems Engineering
N-W.F.P University of Engineering and Technology, Peshawar
[email protected]

Prepared by
Muhammad Ikram
Department of Computer Systems Engineering
N-W.F.P University of Engineering and Technology, Peshawar
[email protected]

February 2008




    Preface

This Digital Image Processing (DIP) Lab Manual is designed for the 8th-semester students of the Department of Computer Systems Engineering (DCSE), Spring 2008. The following are the prerequisites for the labs.

    DSP First

    Signals and Systems

    MATLAB Basics

    The students are advised to obtain a copy of the following.

    MATLAB

DIPUM toolbox (P-code)

The labs will be carried out by first simulating the objective in MATLAB. The code for the labs is not given in the manual; instead, the students are guided through the procedures, in order to encourage them to use their own thoughts and ideas in implementing the labs. Students will observe strict discipline in following all the labs and will submit a DIP-related project at the end of the semester. The details of the projects will be decided during the labs.

    The textbooks used in this lab are [?] and [?].


Contents

1 MATLAB Specifics Digital Image Processing
  1.1 Basic MATLAB Functions Related to a Digital Image
  1.2 Data Classes and Image Types
  1.3 Image Types
    1.3.1 Intensity Images
    1.3.2 Binary Images
  1.4 Converting between Data Classes and Image Types
    1.4.1 Converting between Data Classes
    1.4.2 Converting between Image Classes and Types
  1.5 Indexing Rows and Columns in a 2-D Image and Image Rotation
  1.6 Practical
    1.6.1 Activity No.1
    1.6.2 Activity No.2
    1.6.3 Hint for Activity No.2
    1.6.4 Activity No.3
    1.6.5 Hint for Activity No.3
    1.6.6 Questions

2 Image Shrinking and Zooming
  2.1 Shrinking of an Image
  2.2 Zooming of an Image
  2.3 Practical
    2.3.1 Activity No.1
    2.3.2 Hint for Activity No.1
    2.3.3 Activity No.2
    2.3.4 Hint for Activity No.2
    2.3.5 Activity No.3
    2.3.6 Questions

3 Basic Relationships Between Pixels
  3.1 Neighbor of a Pixel in an Image
  3.2 Adjacency of a Pixel in an Image
  3.3 Connectivity, Regions, and Boundaries in an Image
  3.4 Practical
    3.4.1 Activity No.1
    3.4.2 Hint for Activity No.1
    3.4.3 Activity No.2
    3.4.4 Hint for Activity No.2
    3.4.5 Activity No.3
    3.4.6 Activity No.4
    3.4.7 Questions

4 Basic Gray Level Transformation
  4.1 Thresholding
  4.2 Negative Transformation
  4.3 Log Transformation
  4.4 Power-law Transformation
  4.5 Practical
    4.5.1 Activity No.1
    4.5.2 Activity No.2
    4.5.3 Activity No.3
    4.5.4 Activity No.4
    4.5.5 Questions

5 Spatial Domain Processing
  5.1 Contrast Stretching
  5.2 Gray Level Slicing
  5.3 Bit Plane Slicing
  5.4 Practical
    5.4.1 Activity No.1
    5.4.2 Activity No.2
    5.4.3 Activity No.3
    5.4.4 Activity No.4
    5.4.5 Activity No.5
    5.4.6 Questions

6 Histogram Processing
  6.1 Histogram of an Image
  6.2 Histogram Equalization
  6.3 Histogram Specification
  6.4 Practical
    6.4.1 Activity No.1
    6.4.2 Activity No.2
    6.4.3 Hint for Activity No.2
    6.4.4 Activity No.3
    6.4.5 Activity No.4
    6.4.6 Questions

7 Local Enhancement and Spatial Domain Filtering
  7.1 Enhancement Based on Local Histogram Processing
  7.2 Introduction to Local Enhancement Based on Neighborhood Statistics
  7.3 Local Enhancement Based on Spatial (Mask) Filtering
    7.3.1 Smoothing Spatial Filtering
    7.3.2 Order Statistic Nonlinear Filters
    7.3.3 Median (50th Percentile) Filter
    7.3.4 Min (0th Percentile) Filter
    7.3.5 Max (100th Percentile) Filter
  7.4 Practical
    7.4.1 Activity No.1
    7.4.2 Activity No.2
    7.4.3 Hint for Activity No.2
    7.4.4 Activity No.3
    7.4.5 Activity No.4
    7.4.6 Hint for Activity No.4
    7.4.7 Questions

8 Spatial Domain Sharpening Filters
  8.1 Introduction to Sharpening Filters
  8.2 Using Sharpening Filters to Recover Background
  8.3 Unsharp Masking and High-boost Filtering
  8.4 Image Enhancement Using Gradients
  8.5 Practical
    8.5.1 Activity No.1
    8.5.2 Hint for Activity No.1
    8.5.3 Activity No.2
    8.5.4 Activity No.3
    8.5.5 Activity No.4
    8.5.6 Activity No.5
    8.5.7 Questions

9 Enhancement in Frequency Domain
  9.1 Introduction to Fourier Transform
  9.2 Basic Filtering in Frequency Domain
  9.3 Smoothing Frequency Domain Filters
  9.4 Sharpening Frequency Domain Filters
  9.5 Unsharp Masking, High Boost Filtering, and High Frequency Filtering
  9.6 Practical
    9.6.1 Activity No.1
    9.6.2 Hint for Activity No.1
    9.6.3 Activity No.2
    9.6.4 Hint for Activity No.2
    9.6.5 Activity No.3
    9.6.6 Hint for Activity No.3
    9.6.7 Activity No.4
    9.6.8 Questions

10 Image Degradation and Restoration Process
  10.1 Introduction
  10.2 Restoration in the Presence of Noise Only
    10.2.1 Mean Filters
    10.2.2 Order Statistic Filters
    10.2.3 Least Mean Square (LMS) Wiener Filtering
  10.3 Practical
    10.3.1 Activity No.1
    10.3.2 Activity No.2
    10.3.3 Hint for Activity No.2
    10.3.4 Activity No.3
    10.3.5 Hint for Activity No.3
    10.3.6 Activity No.4
    10.3.7 Questions

11 Color Image Processing
  11.1 Introduction to Color Image Processing
  11.2 Practical
    11.2.1 Activity No.1
    11.2.2 Activity No.2
    11.2.3 Activity No.2
    11.2.4 Hint for Activity No.2
    11.2.5 Questions

12 Color Image Smoothing and Sharpening
  12.1 Color Image Smoothing
  12.2 Color Image Sharpening
  12.3 Practical
    12.3.1 Activity No.1
    12.3.2 Activity No.2
    12.3.3 Questions

13 Wavelets and Multiresolution Process
  13.1 Introduction to Wavelet Transform
  13.2 Practical
    13.2.1 Activity No.1
    13.2.2 Activity No.2
    13.2.3 Questions

14 Image Compression
  14.1 Practical
    14.1.1 Activity No.1
    14.1.2 Activity No.2
    14.1.3 Activity No.3
    14.1.4 Activity No.4
    14.1.5 Questions

15 Discrete Cosine Transform
  15.1 Introduction to Discrete Cosine Transform
  15.2 Practical
    15.2.1 Activity No.1
    15.2.2 Activity No.2
    15.2.3 Questions

16 Image Segmentation
  16.1 Practical
    16.1.1 Activity No.1
    16.1.2 Activity No.2
    16.1.3 Questions


    Abbreviations

    IPT Image Processing Toolbox

    D-Neighbors Diagonal Neighbors

    4-Neighbors Four Neighbors

    8-Neighbors All Neighbors

    FT Fourier Transform

    DFT Discrete Fourier Transform

    LSB Least Significant Bit

    MSB Most Significant Bit

    FFT Fast Fourier Transform

    LUT Look Up Table

    PDF Probability Density Function

CDF Cumulative Distribution Function

    LPF Lowpass Filter

    HPF Highpass Filter

    IFFT Inverse Fast Fourier Transform

    IIR Infinite Impulse Response

    LF Linear Filter

    NLF Nonlinear Filter

DFS Discrete Fourier Series

    ILPF Ideal Lowpass Filter

    BLPF Butterworth Lowpass Filter

    GLPF Gaussian Lowpass Filter

    IHPF Ideal Highpass Filter

    BHPF Butterworth Highpass Filter

    GHPF Gaussian Highpass Filter

    DIPUM Digital Image Processing Using MATLAB

    LMS Least Mean Square

    LZW Lempel-Ziv-Welch coding

    RGB Red Green Blue

    DWT Discrete Wavelet Transform

    DCT Discrete Cosine Transform


    Lab 1

MATLAB Specifics Digital Image Processing

The objective of this lab is to read a digital image and to access its pixels for applying different arithmetic operations. The following functions will be discussed in this lab: imread(), imwrite(), imshow(), imfinfo(), im2uint8(), im2uint16(), mat2gray(), im2double(), and im2bw().

1.1 Basic MATLAB Functions Related to a Digital Image

A digital image is a two-dimensional array with a finite number of rows and columns; in MATLAB it is stored in a two-dimensional matrix. For example, Fig. 1.1 shows the pixels in an image. The top left corner of this array is the origin (0,0). In MATLAB, however, the origin starts at (1,1), because MATLAB addresses the first element of a two-dimensional matrix as <matrix name>(1,1).

This digital image can be retrieved into a MATLAB matrix using the imread() function, which loads an image stored at a specific location into a 2-D array. Its syntax is:

F = imread('filename.extension')

The imwrite() function is used for writing an image to disk. The syntax of the imwrite() command is:

imwrite(A, 'filename.extension', ...)

The imshow() function is used for displaying stored images, either from a matrix or directly from a file. The syntax of the imshow() function is:

imshow(F)   % or imshow('filename.extension')

The imfinfo command collects statistics about an image in a structure variable. The output of the command is given in Fig. 1.2.
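Putting these functions together, a minimal sketch (the filename lena.tif is illustrative and assumed to exist in the work directory):

```matlab
% Read an image from disk into a 2-D array (3-D for RGB images).
F = imread('lena.tif');          % hypothetical filename

% Display the stored image.
imshow(F);

% Write the array back to disk, here re-encoded as JPEG.
imwrite(F, 'lena_copy.jpg');

% Collect statistics about the file in a structure variable.
info = imfinfo('lena_copy.jpg');
disp(info.Width);                % image width in pixels
disp(info.FileSize);             % size on disk in bytes
```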


    Figure 1.1: Digital Image represented in Matrix

    Figure 1.2: Output of imfinfo command

1.2 Data Classes and Image Types

MATLAB and the IPT support various data classes for representing pixel values. These data classes include: double, uint8, uint16, uint32, int8, int16, int32, single, char, and logical. The first eight are numeric data classes and the last two are character and logical data classes,


respectively. The double data type is the most frequently used, and all numeric computations in MATLAB are performed with quantities represented in it. The uint8 data class is also used frequently, especially when reading data from storage devices, as 8-bit images are the most common representation found in practice. We will focus mostly on these two data types in MATLAB. Data type double requires 8 bytes per quantity; uint8 and int8 require 1 byte each; uint16 and int16 require 2 bytes; and uint32, int32, and single require 4 bytes each. The char data class holds characters in Unicode representation; a character string is merely a 1 x n array of characters. A logical array contains only the values 0 and 1, with each element stored in memory using one byte. Logical arrays are created by using the function logical or by using relational operations.

    1.3 Image Types

In this section we will study the four types of images supported by the IPT:

    Intensity images

    Binary images

    Indexed images

    RGB Images

We will initially focus on intensity images and binary images, because most monochrome image processing operations are carried out using these two image types.

    1.3.1 Intensity Images

An intensity image is a data matrix whose values have been scaled to represent intensities. When the elements of an intensity image are of class uint8 or class uint16, they have integer values in the range [0, 255] or [0, 65535], respectively. Values are scaled to the range [0, 1] when the double class is used to represent the values in the data matrix.

    1.3.2 Binary Images

Binary images are logical arrays of 0s and 1s. A logical array, B, is obtained from a numeric array, A, using the logical function:

B = logical(A)   % converting a numeric array into a logical array

The islogical function is used to test whether B is logical:

islogical(B)     % to check whether B is logical or not
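For example, a sketch with an arbitrary numeric matrix:

```matlab
A = [0 1 0; 1 1 0; 0 0 1];   % numeric array of 0s and 1s (class double)
B = logical(A);              % B is now a binary (logical) image
islogical(B)                 % returns true (1)
islogical(A)                 % returns false (0): A is still numeric
```

In practice a binary image is often obtained by thresholding, e.g. B = A > 128 for a uint8 intensity image, which also yields a logical array.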


    1.4 Converting between Data Classes and Image Types

It is frequently necessary in IPT applications to convert between data classes and image types.

    1.4.1 Converting between Data Classes

The general syntax for converting between data classes is:

B = data_class_name(A)

We frequently need to convert an array A of class uint8 into a double-precision (floating-point) array B with B = double(A) before performing numerical operations. Conversely, C = uint8(B) converts a double array B with values in [0, 255] into a uint8 array C, rounding off any fractional parts.
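A short sketch of this round trip:

```matlab
A = uint8([25 50 128 200]);   % 8-bit gray levels
B = double(A);                % class double: safe for arithmetic
B = B * 1.5;                  % some numeric operation
C = uint8(B);                 % back to uint8: values are rounded and
                              % clipped to the range [0, 255]
```

Note that uint8() saturates: any result above 255 is stored as 255, and fractional parts are rounded to the nearest integer.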

    1.4.2 Converting between Image Classes and Types

The table in Fig. 1.3 contains the necessary functions for converting between image classes and types. The conversion of a 3 x 3 double image into a uint8 image is given in Fig. 1.4.

    Figure 1.3: Functions for converting between image classes and types.

1.5 Indexing Rows and Columns in a 2-D Image and Image Rotation

As we studied in Sec. 1.1, a digital image is represented as a 2-D matrix, and its pixels, rows, and columns are addressed as:

Individual pixel: <matrix name>(row, col)

Complete row: <matrix name>(row, :)

Complete column: <matrix name>(:, col)

This array indexing can be used to rotate an image by 90°, as shown in Fig. 1.5.
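A sketch of this rotation using indexing alone (equivalent to the built-in rot90() for a grayscale image):

```matlab
M = [1 2; 3 4];          % small test matrix standing in for an image
R = M(:, end:-1:1).';    % reverse the columns, then transpose:
                         % a 90° counterclockwise rotation, giving [2 4; 1 3]
```

The same indexing applied to a 2-D image matrix rotates the whole image; for an RGB (3-D) array each color plane must be rotated separately.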


    Figure 1.4: Conversion between double and uint8 data classes

Figure 1.5: Original Lena (left) and 90° rotated Lena (right)

    1.6 Practical

Use these basic concepts and perform the following activities:

    1.6.1 Activity No.1

Read various images, having different file extensions, stored in the work directory and at c:\mydoc\mypic using the imread() function, and show them using the imshow() function.

    1.6.2 Activity No.2

Use MATLAB help to compress an image with the .jpeg extension using the imwrite() function, and find the compression ratio with respect to the original image.


    1.6.3 Hint for Activity No.2

Use Fig. 1.2 for finding the sizes of the original and compressed images, and then divide the size of the original image by the size of the compressed image to find the compression ratio.

    1.6.4 Activity No.3

Read an image and rotate it by 180° and 270° using array indexing. Add the original image and the rotated image using the imadd() function. Repeat this activity for the imsubtract(), immultiply(), imabsdiff(), imlincomb(), and imcomplement() functions.

    1.6.5 Hint for Activity No.3

First convert the images from gray levels into double, then apply these functions, and finally convert back to gray levels using the mat2gray() function.

    1.6.6 Questions

1. Why do we convert gray levels into double before using numeric operations?

    2. What is the effect of compression of an image on its quality?


    Lab 2

    Image Shrinking and Zooming

The objective of this lab is to implement two operations on a digital image: shrinking and zooming. These two topics are related to image sampling and quantization: zooming can be viewed as oversampling and shrinking as undersampling. However, they differ from sampling and quantization because they are applied to an image that is already digital.

    2.1 Shrinking of an Image

Spatial resolution refers to the smallest number of discernible line pairs per unit distance, e.g., 500 line pairs per millimeter. Shrinking of an image, Fig. 2.1, in the spatial domain can be achieved by deleting its rows and columns. In Fig. 2.2, alternate rows and columns are deleted to shrink the image. Similarly, we can extend this to produce the outputs in Fig. 2.3, Fig. 2.4, and Fig. 2.5.
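The row/column deletion above can be sketched with array indexing (keeping every second row and column halves each dimension; the filename is illustrative):

```matlab
A = imread('lena.tif');      % hypothetical 512 x 512 image
B = A(1:2:end, 1:2:end);     % 256 x 256: alternate rows/columns deleted
C = B(1:2:end, 1:2:end);     % 128 x 128: shrink again
```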

    2.2 Zooming of an Image

As discussed in Sec. 2.1, shrinking requires undersampling, while zooming requires oversampling, i.e., the creation of new pixel locations and the assignment of gray levels to those new locations, as shown in Fig. 2.7. We can assign values to the newly created pixel locations using one of three techniques:

Nearest Neighbor Interpolation

Pixel Replication

Bilinear Interpolation

In nearest neighbor interpolation, the values of the newly created pixel locations in the imaginary zoomed image are assigned from the pixels in the nearest neighborhood. In pixel replication, the pixel locations and gray levels are predetermined; it is used when we want to increase the size of an image an integer number of times, and it is achieved by duplicating the rows and columns of the image. Bilinear


    Figure 2.1: Lena 512 x 512

Figure 2.2: The zoomed-out lena (256 x 256) at left and (128 x 128) at right

interpolation is a more sophisticated gray-level assignment technique. The four neighbors, Fig. 2.6, of a new pixel location v(x, y) are used to calculate its gray level using the relation

v(x, y) = ax + by + cxy + d

where the four coefficients are determined from the four equations in four unknowns that can be written using the four nearest neighbors of the point (x, y).
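Pixel replication can be sketched with the reverse of the shrinking index trick, repeating each source index an integer number of times (a sketch; the filename is illustrative and k is the zoom factor):

```matlab
A = imread('lena.tif');      % hypothetical 256 x 256 grayscale image
k = 2;                       % integer zoom factor
[M, N] = size(A);
% ceil((1:k*M)/k) produces 1,1,2,2,3,3,... so every row and column
% of A is duplicated k times.
B = A(ceil((1:k*M)/k), ceil((1:k*N)/k));   % (k*M) x (k*N) zoomed image
```

The same index vector, built from the coordinates of the nearest source pixel, is also the basis of nearest neighbor interpolation for non-integer zoom factors.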


Figure 2.3: The zoomed-out lena (64 x 64) at left and (32 x 32) at right

Figure 2.4: The zoomed-out lena (16 x 16) at left and (8 x 8) at right

    2.3 Practical

    Use these basic concept and perform the following activities:

    2.3.1 Activity No.1

Write MATLAB scripts (your own programming code) for obtaining the results discussed in Sec. 2.1.

Figure 2.5: The zoomed-out lena (4 x 4)


    Figure 2.6: Newly created pixel (red) and four neighbors (blue)

Figure 2.7: The zoomed lena (512 x 512) at right, from the lena (256 x 256) at left

    2.3.2 Hint for Activity No.1

Take Fig. 2.1 as input and delete the relevant number of rows and columns using array indexing to obtain the required results.

    2.3.3 Activity No.2

Write MATLAB scripts for zooming a digital image from various spatial resolutions, i.e., 32 x 32, 64 x 64, and 128 x 128, to a spatial resolution of 512 x 512 using pixel replication.

    2.3.4 Hint for Activity No.2

Build an index vector that repeats each row and column index the required integer number of times, and use it with array indexing to replicate the rows and columns of the image until the 512 x 512 size is reached.

    2.3.5 Activity No.3

Repeat the problem of Sec. 2.3.3 using nearest neighbor interpolation and bilinear interpolation.


    2.3.6 Questions

1. What is the effect of severe shrinking of a digital image in terms of its size and its quality?

    2. What is the effect of zooming from low resolution to high resolution?

3. What are the advantages and disadvantages of pixel replication over nearest neighbor interpolation?


    Lab 3

    Basic Relationships Between Pixels

The objective of this lab is to find the basic relationships between pixels. We will discuss various types of neighborhoods, adjacency, connectivity, regions, and boundaries.

    3.1 Neighbor of a Pixel in an Image

Fig. 3.1 shows the three types of neighborhoods:

    4-neighborhood

    D-neighborhood

    8-neighborhood

    Figure 3.1: (a) 4-neighborhood, (b) D-neighborhood, (c) 8-neighborhood

The spatial coordinates of the 4-neighbors (blue), N4(V), of a pixel (red), V(x, y), in Fig. 3.1(a) are:

(x - 1, y) spatial coordinate of V1

(x, y - 1) spatial coordinate of V2


    (x , y + 1) spatial coordinate of V3

    (x + 1, y) spatial coordinate of V4

The spatial coordinates of the D-neighbors (blue), ND(V), of a pixel (red), V(x, y), in Fig. 3.1(b) are:

    (x - 1, y - 1) spatial coordinate of V1

    (x - 1, y + 1) spatial coordinate of V2

    (x + 1 , y - 1) spatial coordinate of V3

    (x + 1, y + 1) spatial coordinate of V4

The spatial coordinates of the 8-neighbors (blue), N8(V), of a pixel (red), V(x, y), in Fig. 3.1(c) are:

    (x - 1, y - 1) spatial coordinate of V1

    (x - 1, y) spatial coordinate of V2

    (x - 1 , y + 1) spatial coordinate of V3

    (x , y - 1) spatial coordinate of V4

    (x , y + 1) spatial coordinate of V5

    (x + 1, y - 1) spatial coordinate of V6

    (x + 1, y) spatial coordinate of V7

    (x + 1, y + 1) spatial coordinate of V8
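These neighborhoods can be written down directly as coordinate lists (a sketch for a pixel V at (x, y); boundary checks at the image edges are omitted):

```matlab
x = 3; y = 3;                                % pixel V(x, y)
N4 = [x-1 y; x y-1; x y+1; x+1 y];           % 4-neighbors
ND = [x-1 y-1; x-1 y+1; x+1 y-1; x+1 y+1];   % diagonal (D-) neighbors
N8 = [N4; ND];                               % 8-neighbors = N4 union ND
```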

    3.2 Adjacency of a pixel in an Image

Connectivity between pixels provides the foundation for understanding other digital image processing concepts. Two pixels are said to be connected if they are neighbors and either have the same gray level or their gray levels satisfy an already-decided criterion of similarity.

Let Z = {1} be the set of gray-level values used to define adjacency between pixels of the binary image shown in Fig. 3.2(a). There are three types of adjacency between pixels:

4-adjacency. Two pixels p and q with values from Z are 4-adjacent if q is in the set N4(p). The 4-adjacency of the pixel p is shown by green lines in Fig. 3.2(b).

8-adjacency. Two pixels p and q with values from Z are 8-adjacent if q is in the set N8(p). The 8-adjacency of the pixel p is shown by green lines in Fig. 3.2(c).

m-adjacency. Two pixels p and q with values from Z are m-adjacent, as shown in Fig. 3.3, if

q is in N4(p), or

q is in ND(p) and the set N4(p) ∩ N4(q) has no pixels whose values are from Z.


Figure 3.2: (a) Binary image, (b) pixels that are 4-adjacent (shown by green lines) to the center pixel, (c) pixels that are 8-adjacent to the center pixel

Figure 3.3: Pixels that are m-adjacent (shown by green line) to the center pixel

    3.3 Connectivity, Regions, and Boundaries in an Image

As discussed in Sec. 3.2, connectivity between pixels provides the foundation for understanding other digital image processing concepts, i.e., digital paths, regions, and boundaries. Two pixels are said to be connected if they are neighbors and either have the same gray level or their gray levels satisfy an already-decided criterion of similarity.

A digital path or curve from pixel p with coordinates (x, y) to pixel q with coordinates (s, t) is a sequence of distinct pixels with coordinates

    (x0, y0), (x1, y1), ... , (xn, yn)

where (x0, y0) = (x, y), (xn, yn) = (s, t), and pixels (xi, yi) and (xi−1, yi−1) are adjacent for 1 ≤ i ≤ n. In this case, n is the length of the path. For a closed path, (x0, y0) = (xn, yn). Depending upon 4-, 8-, or m-adjacency we can define a 4-path, 8-path, or m-path, respectively.

Two pixels p and q are said to be connected if there exists a path between them and they belong to S, a subset of pixels in an image. For any pixel p in S, the set of pixels that


are connected to it in S is called a connected component of S. If S has only one connected component, then S is called a connected set.

Let R be a subset of pixels in an image. We call R a region of the image if R is a connected set. The boundary (also called contour or border) of a region R is the set of pixels in the region that have one or more neighbors that are not in R. If R happens to be an entire image, then its boundary is defined as the set of pixels in the first and last rows and columns of the image.

    3.4 Practical

Use these basic concepts and perform the following activities:

    3.4.1 Activity No.1

Write a MATLAB script to find the 4-neighbors, D-neighbors, and 8-neighbors of a pixel.

    3.4.2 Hint for Activity No.1

Take an arbitrary 5 x 5 image using the magic() function and prompt the user to input the pixel for which the required neighborhoods are to be found.

    3.4.3 Activity No.2

Write a MATLAB script to find the 4-adjacency, 8-adjacency, and m-adjacency of a pixel in a binary image, taking Z = {0}.

    3.4.4 Hint for Activity No.2

Take an arbitrary 3 x 3 binary image using the rand() and round() functions and prompt the user to input the pixel for which the required adjacency is to be found.

    3.4.5 Activity No.3

Write a MATLAB script to find the 4-adjacency, 8-adjacency, and m-adjacency of pixels of the image shown in Fig. 2.1, taking Z = {50:80}.

    3.4.6 Activity No.4

Modify your MATLAB code from Act. 3.4.3 to find the 4-path, 8-path, and m-path of a pixel in a binary image, taking Z = {0}. Extend this idea to find the regions of the image in Fig. 2.1 by taking R = {65:80}.


    3.4.7 Questions

1. Why do we convert gray levels to double before performing numeric operations?

2. What is the effect of compression of an image on its quality?


    Lab 4

    Basic Gray Level Transformation

In this lab we will focus on spatial domain enhancement: gray level transformations or mapping functions. In spatial domain methods pixels are processed directly, and these processes are generally represented as

    g(x, y) = T[f(x, y)]

where f(x, y) and g(x, y) represent the input and the processed output image, respectively. T[·] represents a spatial domain process that operates on an individual pixel or on a group of related pixels, i.e. neighbors. When T[·] processes a neighborhood of size 1 x 1, g depends only on the value of f at (x, y) and T[·] becomes a gray level transformation function of the form

    s = T[r]

    4.1 Thresholding

Fig. 4.1 shows the thresholded version of Fig. 2.1. In this method we increase the contrast by darkening the gray levels below m and brightening the levels above m in the original image. As a result we obtain a narrower range of gray levels, Fig. 4.2.

    4.2 Negative Transformation

Generally, the negative transformation of an image with gray levels in [0, L − 1] is obtained as

    s = L - 1 - r .
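As a quick illustration of s = L − 1 − r (the lab work itself is done in MATLAB, so treat this NumPy sketch as a concept demonstration, not a lab solution):

```python
import numpy as np

def negative(img, L=256):
    """Gray level negative transformation: s = (L - 1) - r."""
    # widen the type first so the subtraction cannot wrap around on uint8
    return (L - 1) - img.astype(np.int32)
```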

Fig. 4.3 shows the negative transformed Lena image and its histogram, while the original Lena image and its histogram are shown in Fig. 4.4. The intensity levels in the histogram of the negative transformed image are reversed with respect to those of the input image.


    Figure 4.1: Lena after Thresholding

    Figure 4.2: Gray level transformations for contrast stretching

    4.3 Log Transformation

The log transformation is obtained by the following relation between the input image, r, and the output image, s:

s = c·log(r + 1)

The log transformation expands the values of dark pixels in an image while compressing the higher-level values; the inverse is true for the inverse log transformation. The log transformation compresses the dynamic range of images with large variations in pixel values, as shown in Fig. 4.5.
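A minimal sketch of s = c·log(1 + r), shown here in Python for illustration (the choice of c so that the output spans [0, L − 1] is a common convention, not prescribed by the text):

```python
import numpy as np

def log_transform(img, L=256):
    """s = c*log(1 + r), with c chosen so the brightest input maps to L - 1."""
    img = img.astype(np.float64)
    c = (L - 1) / np.log(1.0 + img.max())
    return c * np.log1p(img)
```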

    4.4 Power-law Transformation

The power law transformation between the input image, r, and the output image, s, is given by

s = c·r^γ

    18

  • 5/18/2018 Dip Manual

    27/74

    Lab 4 Basic Gray Level Transformation

    Figure 4.3: (B) Negative transformed image, LenahB(l) Histogram of negative transformed

    image

    Figure 4.4: (A) Original image, Lena hA(l) Histogram of Lena

where c and γ are positive constants. For example, Fig. 4.6(b) shows the power law transformation of the image in Fig. 4.6(a).
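A sketch of s = c·r^γ in Python (illustrative only; normalizing intensities to [0, 1] before applying the power and rescaling afterwards is our assumption, a common way to keep the output in range):

```python
import numpy as np

def gamma_transform(img, gamma, c=1.0, L=256):
    """Power law transformation s = c * r**gamma on normalized intensities."""
    r = img.astype(np.float64) / (L - 1)      # map to [0, 1]
    s = np.clip(c * r ** gamma, 0.0, 1.0)
    return (s * (L - 1)).round().astype(np.uint8)
```

With gamma < 1 (e.g. the 0.2 used for Fig. 4.6) the midtones are brightened, as the test below confirms.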

    4.5 Practical

Use these basic concepts and perform the following activities:

    4.5.1 Activity No.1

    Write MATLAB script to implement thresholding.

    4.5.2 Activity No.2

    Write MATLAB script to implement negative transformation of an input image.


Figure 4.5: (a) Fourier spectrum (b) Result after applying the log transformation

Figure 4.6: (a) Original Lena (b) Result after applying the power law transformation with γ = 0.2

    4.5.3 Activity No.3

    Write MATLAB code to implement log transformation of an image.

    4.5.4 Activity No.4

    Write MATLAB code to implement power law transformation of an image.

    4.5.5 Questions

1. What are the situations in which the negative transformation is useful?

2. How are the input gray levels mapped to the output gray levels by the negative transformation?


3. In the negative transformation, are the lower input gray levels assigned to lower or to higher output gray levels?

4. How are the input gray levels mapped to the output gray levels by the log/antilog transformation?


    Lab 5

    Spatial Domain Processing

In this lab we will focus on some complementary spatial domain enhancement functions, the piecewise linear transformations, following on from Lab 4. Piecewise linear functions can be arbitrarily complex and require more user input; we discuss them in the following sections.

    5.1 Contrast Stretching

Low contrast images can result from poor illumination, lack of dynamic range in the imaging sensor, a wrong lens aperture setting during image acquisition, and many other factors. So, our goal is to increase the dynamic range of the gray levels to enhance the visual appearance of an image.

Let us consider the contrast stretching function of Eq. 5.1, plotted in Fig. 5.1(a), applied to the low contrast image of Fig. 5.1(b) to produce the high contrast image shown in Fig. 5.1(c).

s = T(r) =

    a1·r,                0 ≤ r < r1, with s1 = T(r1);
    a2·(r − r1) + s1,    r1 ≤ r < r2, with s2 = T(r2);
    a3·(r − r2) + s2,    r2 ≤ r ≤ (L − 1).

(5.1)

Here a1, a2, and a3 control the result of contrast stretching. If a1 = a2 = a3 = 1 then there will be no change in the gray levels. Conversely, if a1 = a3 = 0 and r1 = r2, then T(·) is the thresholding function discussed in Sec. 4.1. In general, if r1 ≤ r2 and T(r1) ≤ T(r2), then T(·) is a single-valued and monotonically increasing function, which preserves the order of gray levels and prevents the creation of intensity artifacts.
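The three-segment function of Eq. 5.1 can be sketched directly from the two control points (r1, s1) and (r2, s2), with the slopes a1, a2, a3 implied by those points (this NumPy version is illustrative, not the lab's MATLAB solution; it assumes r1 > 0 and r2 < L − 1):

```python
import numpy as np

def stretch(r, r1, s1, r2, s2, L=256):
    """Piecewise-linear contrast stretch of Eq. 5.1 through (r1, s1), (r2, s2)."""
    r = np.asarray(r, dtype=np.float64)
    out = np.empty_like(r)
    lo = r < r1
    mid = (r >= r1) & (r < r2)
    hi = r >= r2
    out[lo] = s1 / r1 * r[lo]                                   # slope a1
    out[mid] = (s2 - s1) / (r2 - r1) * (r[mid] - r1) + s1       # slope a2
    out[hi] = (L - 1 - s2) / (L - 1 - r2) * (r[hi] - r2) + s2   # slope a3
    return out
```

Pushing (r1, s1) below the diagonal and (r2, s2) above it stretches the middle band of gray levels, exactly the behavior of Fig. 5.1(a).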

    5.2 Gray Level Slicing

Gray level slicing is used to highlight a specific range of gray levels in an image, e.g. to enhance features such as masses of water in satellite images or flaws in x-ray images.


    Figure 5.1: (a) Contrast stretching function, (b) Low contrast input image, (c) High con-trast processed output image

The methods to achieve gray level slicing fall into two categories, given by Eq. 5.2 and Eq. 5.3 and graphed in Fig. 5.2(a) and Fig. 5.2(b), respectively.

s = T(r) = sH if r is in the range of interest, and sL otherwise. (5.2)

s = T(r) = sH if r is in the range of interest, and r otherwise. (5.3)
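Both variants of slicing differ only in what happens outside the range of interest, so one sketch covers Eq. 5.2 and Eq. 5.3 (Python for illustration; the `preserve` flag selects between the two equations):

```python
import numpy as np

def slice_highlight(img, a, b, sH=255, sL=0, preserve=False):
    """Gray level slicing: highlight levels in [a, b].
    preserve=False -> Eq. 5.2 (others forced to sL);
    preserve=True  -> Eq. 5.3 (others kept unchanged)."""
    img = np.asarray(img)
    mask = (img >= a) & (img <= b)
    background = img if preserve else np.full_like(img, sL)
    return np.where(mask, sH, background)
```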

5.3 Bit Plane Slicing

In this section we will discuss the importance of the individual bits in the appearance of an image. Bit plane slicing aids in determining the adequacy of the number of bits used to quantize each pixel, which is useful for compression. In a gray scale image each pixel is represented by 8 bits. We can imagine the image as consisting of eight 1-bit planes, Fig. 5.4, ranging from bit-plane 0 for the least significant bit (LSB) to bit-plane 7 for the most significant bit (MSB). The results of bit plane slicing with specific masking bit patterns are shown in Fig. 5.3.
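Bit-plane extraction and masking are plain bitwise operations; as an illustration (the lab solution is to be written in MATLAB):

```python
import numpy as np

def bit_plane(img, k):
    """Extract bit-plane k (0 = LSB, 7 = MSB) of an 8-bit image."""
    return (img >> k) & 1

def zero_planes(img, planes):
    """Return a copy of img with the listed bit-planes set to 0."""
    mask = 0xFF
    for k in planes:
        mask &= ~(1 << k) & 0xFF
    return img & mask
```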

    5.4 Practical

Use these basic concepts and perform the following activities:

    5.4.1 Activity No.1

Write a MATLAB script to obtain g(l), the piecewise transformation function of l shown in Fig. 5.5.


    Figure 5.2: (a) Function that highlights [A,B] gray levels and reducing others (b) Func-tion that highlights [A,B] gray levels and preserve others (c) Input image (d)Processed output image by Transfer function in (a)

    5.4.2 Activity No.2

    Write MATLAB script to implement negative transformation of an input image.

    5.4.3 Activity No.3

    Write MATLAB code to take an input image and to implement gray level slicing discussedin Sec. 5.2.

    5.4.4 Activity No.4

    Write MATLAB code to implement bit-plane slicing.

    5.4.5 Activity No.5

Take an image, find its histogram, and compare it with the histogram of the equalized image.


Figure 5.3: (a) 4th bit-plane set to 0 (b) 4th & 5th bit-planes set to 0 (c) 4th, 5th, & 6th bit-planes set to 0 (d) 4th, 5th, 6th, & 7th bit-planes set to 0

Figure 5.4: Bit planes representing an 8-bit image

    5.4.6 Questions

1. What is the impact on the appearance of an image when bit-plane 0 is set to zero?

2. What is the impact on the appearance of an image when bit-plane 7 is set to zero?

3. What effect would setting the lower-order bit planes to zero have on the histogram of an image in general?

4. What would be the effect on the histogram if we set the higher-order bit planes to zero instead?


    Figure 5.5: Contrast Stretching


    Lab 6

    Histogram Processing

In this lab we will find the histogram of an image and interpret the image in terms of its histogram. We will focus on different histogram processing techniques to enhance an image.

    6.1 Histogram of an Image

A histogram provides global statistics about an image. Let S be a set and define |S| to be the cardinality of this set, i.e. |S| is the number of elements in S. The histogram hA(l) (l = 0, ..., 255) of the image A is defined as:

hA(l) = |{(i, j) | A(i, j) = l, i = 0, ..., N − 1, j = 0, ..., M − 1}| (6.1)

Σ_{l=0}^{255} hA(l) = number of pixels in A (6.2)

In other words, the histogram is a discrete function p(rk) versus rk, where

p(rk) = nk / n (6.3)

rk = kth gray level, 0 ≤ k ≤ L − 1 (6.4)

and n is the total number of pixels in the image. The function p(rk) estimates the probability of occurrence of gray level rk. The histogram of a 250 x 250 image A, Fig. 6.1(a), is shown in Fig. 6.1(b). As we can see, image A is half black and half white, and its histogram shows an equal number of occurrences of black and white pixels.

The histogram of an image does not depend upon its shape, as demonstrated in Fig. 6.2. Again we have equal black and white portions, indicated in Fig. 6.2(a), but the shape is different from the image in Fig. 6.1(a). Nevertheless, the histogram in Fig. 6.2(b) is exactly the same as the histogram in Fig. 6.1(b).
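Counting occurrences per gray level, Eq. 6.1, can be sketched with a single pass over the pixels (Python for illustration; the labs use MATLAB, where a LUT or loops serve the same purpose):

```python
import numpy as np

def histogram(img, levels=256):
    """Count occurrences h(l) of each gray level l, as in Eq. 6.1."""
    h = np.zeros(levels, dtype=np.int64)
    for value in img.ravel():
        h[value] += 1          # one increment per pixel
    return h
```

The counts necessarily sum to the number of pixels, which is exactly Eq. 6.2.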


    Figure 6.1: (a) Input imageA (b) Histogram of image A

    Figure 6.2: (a) Input imageA (b) Histogram of image A

    6.2 Histogram Equalization

Histogram equalization is also called histogram flattening. In histogram equalization we transform an image so that each gray level appears an (approximately) equal number of times, i.e. the resulting probability density function is uniformly distributed.

Let us consider a continuous, single-valued, monotonically increasing, and normalized gray level transformation function T(·), Eq. 6.5. For Eq. 6.5 to perform histogram equalization it must satisfy the following conditions:

1. T(r) is single-valued and monotonically increasing in the interval 0 ≤ r ≤ 1, to preserve the order from black to white; and

2. 0 ≤ T(r) ≤ 1 for 0 ≤ r ≤ 1.

s = T(r) (6.5)


Let pr(r) and ps(s) be the probability density functions of the input and output gray levels, respectively. If the above two conditions are true then

r = T^-1(s) (6.6)

Generally, we know pr(r) and T(r), and T^-1(s) satisfies condition (1); then the probability density function ps(s) of the transformed variable s can be obtained using a rather simple formula:

ps(s) = pr(r) |dr/ds| (6.7)

For discrete variables, the probability density function is defined by Eq. 6.3, and Eq. 6.8 defines histogram equalization or histogram linearization.

sk = T(rk) = Σ_{j=0}^{k} pr(rj) = Σ_{j=0}^{k} nj/n (6.8)

Fig. ?? shows the histograms of the input image and of the equalized output image. We can see that the output histogram has a wider dynamic range of gray levels compared to the histogram of the input image.

    6.3 Histogram Specification

Histogram specification is also called histogram matching. In histogram specification an input image is transformed according to a specified gray level histogram, whereas the histogram equalization method generates only one result, i.e. an output image with an approximately uniform histogram (without any flexibility). Fig. 6.3 shows the implementation of histogram matching, which has the following steps.

1. Obtain the transformation function T(rk), Eq. 6.9.

2. Obtain the transformation function G(zk), Eq. 6.10.

3. Obtain the inverse function G^-1(·).

4. Finally, obtain the output image zk, Eq. 6.11.

sk = T(rk) = Σ_{j=0}^{k} pr(rj) = Σ_{j=0}^{k} nj/n, k = 0, ..., L − 1 (6.9)

vk = G(zk) = Σ_{j=0}^{k} pz(zj) = sk, k = 0, ..., L − 1 (6.10)

zk = G^-1(vk) = G^-1[T(rk)], k = 0, ..., L − 1 (6.11)
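The four steps above can be sketched compactly: build T from the input histogram (Eq. 6.9), build G from the specified histogram (Eq. 6.10), and realize G^-1 as a search for the smallest z with G(z) ≥ s (one common way to invert a discrete CDF; illustrative Python, assuming `target_p` is the specified probability vector):

```python
import numpy as np

def match_histogram(img, target_p, L=256):
    """Histogram specification: map through T (Eq. 6.9) and G^-1 (Eq. 6.11)."""
    h = np.bincount(img.ravel(), minlength=L)
    T = np.cumsum(h) / img.size        # s_k for the input image, Eq. 6.9
    G = np.cumsum(target_p)            # v_k for the specified histogram, Eq. 6.10
    # G^-1: for each s_k pick the smallest z with G(z) >= s_k
    lut = np.searchsorted(G, T).clip(0, L - 1).astype(np.uint8)
    return lut[img]
```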


    Figure 6.3: Histogram specification

    6.4 Practical

Use these basic concepts and perform the following activities:

    6.4.1 Activity No.1

    Prove that the images in Fig. 6.1(a) and Fig. 6.2(a) have the same histogram.

    6.4.2 Activity No.2

Prove that dark, bright, low contrast, and high contrast images have histograms similar to those in Fig. 6.4(a), Fig. 6.4(b), Fig. 6.4(c), and Fig. 6.4(d), respectively.

    6.4.3 Hint for Activity No.2

Take the dark, bright, low contrast, and high contrast images that accompany this lab and sketch their histograms using either a LUT or for loops.

    6.4.4 Activity No.3

    Apply histogram specification on a low contrast image. Sketch and compare the histogramof input and output image.

    6.4.5 Activity No.4

    Write MATLAB code to implement bit-plane slicing.


    Figure 6.4: (a) Histogram of dark image (b) Histogram of bright image (c) Histogram oflow contrast image (d) Histogram of high contrast image

    6.4.6 Questions

1. What are the situations where histogram equalization and histogram specification are suitable?

2. What issues are related to the use of histogram equalization?

3. Explain the difference between the histogram of a low contrast image and that of a high contrast image.

4. What will be the visual effect of histogram equalization on a high contrast input image?

5. What will be the visual effect of histogram equalization on a low contrast input image?

6. Explain why the discrete histogram equalization technique does not, in general, yield a flat histogram.

7. Suppose that a digital image is subjected to histogram equalization. Show that a second pass of histogram equalization will produce exactly the same result as the first pass.


    Lab 7

    Local Enhancement and SpatialDomain Filtering

The histogram processing methods discussed in Lab 6 are global: the transformation functions are designed according to the gray-level distribution over the entire range of gray levels of an image. They are good when we want to enhance the entire range of gray levels, but they are not suitable for enhancing details over small areas, since such small areas have negligible influence on the design of the global transformation function. In this lab our objective is to study transformation functions that are designed for small areas of gray levels in an image.

    7.1 Enhancement Based on Local Histogram Processing

In local histogram processing a square or rectangular neighborhood (block) is defined and the histogram in the local block is computed. The design of the transformation function is based on the local gray-level distribution. A local enhancement transformation, e.g. the histogram equalization or specification method, is then used to generate the transformation function, which performs the gray level mapping for each pixel in the block. The center of the block is moved to an adjacent pixel location and the process is repeated until we reach the last pixel in the image.

Since the block shifts by only one pixel each time, the local histogram can be updated each time without re-computing the histogram over all pixels in the new block. If a non-overlapping region shift is used instead, the processed image usually has an undesirable checkerboard effect.

Fig. 7.1 shows the difference between global and local histogram processing. Fig. 7.1(a), Fig. 7.1(b), and Fig. 7.1(c) show a noisy and blurred image, the output from applying global histogram processing, and the result of local histogram processing, respectively. We can see significant visual enhancement in Fig. 7.1(c) over Fig. 7.1(b). In Fig. 7.1(b) the noise content was also enhanced, while in Fig. 7.1(c) its effect was minimized due to the use of a 7 x 7 neighborhood, which has too small an influence on the global histogram specification.


    Figure 7.1: (a) Input noised and blurred image (b) Output image after global histogramprocessing (c) Output image after local histogram processing

7.2 Introduction to Local Enhancement Based on Neighborhood Statistics

In addition to histogram processing, local enhancement functions can also be based on other statistical properties of the gray levels in the block, i.e. the mean μS(x, y) and the variance σS²(x, y). The mean measures average brightness and the variance measures contrast in an image. Let Sxy represent a neighborhood (block) subimage of size NSxy centered at (x, y); μS(x, y) in Eq. 7.1 and σS(x, y) in Eq. 7.2 represent the gray level mean and standard deviation in Sxy, respectively. Let μG and σG denote the global mean and standard deviation of the image f(x, y).

μS(x, y) = (1/NSxy) Σ_{(s,t)∈Sxy} f(s, t) (7.1)

σS²(x, y) = (1/NSxy) Σ_{(s,t)∈Sxy} [f(s, t) − μS(x, y)]² (7.2)

There are two methods that use these statistical properties for local enhancement. Mathematically, the first method is shown in Eq. 7.3, where A(x, y) is called the local gain factor and is inversely proportional to the standard deviation, as shown in Eq. 7.4.

g(x, y) = A(x, y)·[f(x, y) − μS(x, y)] + μS(x, y) (7.3)

A(x, y) = k·μG / σS(x, y); 0 < k < 1 (7.4)

The method represented by Eq. 7.3 is applied to the input image of Fig. 7.2(a) with a 15 x 15 block, which results in the locally enhanced output image shown in Fig. 7.2(b).

The second method is represented by Eq. 7.2(a). Here the successful selection of the parameters (E, k0, k1, k2) requires experimentation for different images, where E cannot be too large, so as not to affect the general visual balance of the image, and k0 < 0.5. The neighborhood Sxy should be as small as possible to preserve detail and reduce the computational load.

g(x, y) = E·f(x, y), if [μS(x, y) ≤ k0·μG] and [k1·σG ≤ σS(x, y) ≤ k2·σG]; f(x, y), otherwise. . . . Eq. 7.2(a)
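The first method, Eq. 7.3 with the gain of Eq. 7.4, can be sketched directly with a sliding block (illustrative Python, not the MATLAB lab solution; edge padding and the small guard against division by zero are our assumptions):

```python
import numpy as np

def local_stats_enhance(f, n=15, k=0.8):
    """Local enhancement of Eq. 7.3: g = A*(f - mu_S) + mu_S,
    with local gain A = k * mu_G / sigma_S (Eq. 7.4)."""
    f = f.astype(np.float64)
    pad = n // 2
    fp = np.pad(f, pad, mode='edge')       # replicate borders
    g = np.empty_like(f)
    for x in range(f.shape[0]):
        for y in range(f.shape[1]):
            block = fp[x:x + n, y:y + n]
            mu, sigma = block.mean(), block.std()
            A = k * f.mean() / max(sigma, 1e-6)   # avoid division by zero
            g[x, y] = A * (f[x, y] - mu) + mu
    return g
```

On a perfectly flat image f − μS is zero everywhere, so the output equals the input regardless of the gain, which is a handy sanity check.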


    Figure 7.2: (a)Input image (b) Locally enhanced output image with 15 x 15 block

    7.3 Local Enhancement Based on Spatial(Mask) Filtering

In this section we will focus on spatial filtering and its contribution to the enhancement of an image. A subimage, Fig. 7.3(b), called a filter, mask, kernel, template, or window, is convolved with the input image, Fig. 7.3(a), as in Eq. 7.5. The values in the window are called filter coefficients. Based upon the filter coefficients, spatial filters can be classified as linear filters (LF), nonlinear filters (NLF), lowpass filters (LPF), and highpass filters (HPF).

g(x, y) = Σ_{i=−a}^{a} Σ_{j=−b}^{b} w(i, j)·f(x + i, y + j) (7.5)

Specifically, the response of a 3 x 3 mask on a subimage with gray levels z1, z2, ..., z9 is given in Eq. 7.6:

R = w1·z1 + w2·z2 + ... + w9·z9 = Σ_{i=1}^{9} wi·zi (7.6)

Figure 7.3: Spatial mask with w1, w2, etc. as filter coefficients

When a mask operates near the image border, part of the mask lies outside the image plane. To handle this problem we can:


1. Discard the problem pixels (e.g. a 512 x 512 input becomes a 510 x 510 output image if the mask is 3 x 3), or

2. Zero-pad the first and last rows and columns, so that a 512 x 512 image becomes a 514 x 514 intermediate image; to get the final 512 x 512 output image, the added first and last rows and columns are discarded.
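Eq. 7.5 with the zero-padding strategy above can be sketched in a few lines (illustrative Python; in MATLAB the same job is typically done with toolbox functions, which the lab leaves to the student):

```python
import numpy as np

def filter2(img, w):
    """Apply mask w to img (Eq. 7.5) using zero padding at the borders,
    so the output has the same size as the input."""
    a, b = w.shape[0] // 2, w.shape[1] // 2
    padded = np.pad(img.astype(np.float64), ((a, a), (b, b)))  # zeros
    out = np.zeros(img.shape)
    for x in range(img.shape[0]):
        for y in range(img.shape[1]):
            out[x, y] = np.sum(w * padded[x:x + w.shape[0], y:y + w.shape[1]])
    return out
```

Note how the zero padding shows up in the result: with a 3 x 3 averaging mask on a constant image, interior pixels keep their value while corner pixels are pulled down, because five of their nine masked neighbors are padded zeros.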

    7.3.1 Smoothing Spatial Filtering

Smoothing filters are also called LPFs, because they attenuate or eliminate the high-frequency components that characterize edges and sharp details in an input image. This results in blurring or noise reduction. Blurring is usually used in preprocessing steps, e.g., to remove small details from an image prior to object extraction or to bridge small gaps in lines or curves. These filters are based on neighborhood averaging; in general, an M x M mask with identical filter coefficients, Eq. 7.7, is used to design an LPF. Some filters have weighted masks: to emphasize the nearest neighbors, the coefficients are weighted with some weight factor. These filters produce some undesirable edge blurring.

wi = 1/M², 1 ≤ i ≤ M² (7.7)

    7.3.2 Order Statistic Nonlinear Filters

The order statistic filters are nonlinear filters whose response is based on ordering (ranking) the pixels contained in the image area covered by the filter; the center value is then replaced with the value determined by the ranking result. Based on the ranking criterion, we can classify order statistic filters into three types.

    7.3.3 Median( 50th Percentile) Filter

The transformation function of the median filter is given in Eq. 7.8. Median filters are useful in situations with impulse (salt and pepper) noise. The median of the masked neighbors zk(x, y) is found by ranking them in ascending order of gray level, so that half of the pixels lie above the median and half below it. The median is then assigned to the output pixel R(x, y) at (x, y).

R(x, y) = median(zk(x, y) | k = 1, 2, . . . , 9) (7.8)

Generally, the transfer function of the median filter forces the output gray levels to be more similar to their neighbors. Isolated groups of pixels with area A ≤ n²/2 are eliminated by an n x n median filter; conversely, larger clusters are less affected by it.


    7.3.4 Min (0th Percentile) Filter

The transformation function of the min filter is given by Eq. 7.9.

Rk(x, y) = min(zk | k = 1, 2, 3, . . . , 9) (7.9)

These filters are applied, similarly to the median filter, on input images to produce the masked intermediate outputs zk(x, y), but the ranking and assignment criteria differ from those of the median filter. The assignment of the minimum value in the neighborhood to the output makes this filter useful for removing salt noise.

    7.3.5 Max (100th Percentile) Filter

The transformation function of the max filter is given by Eq. 7.10.

Rk(x, y) = max(zk | k = 1, 2, 3, . . . , 9) (7.10)

These filters are applied, similarly to the min filter, on input images to produce the masked intermediate outputs zk(x, y), but the ranking and assignment criteria differ from those of both the min and the median filters. The assignment of the maximum value in the neighborhood to the output makes this filter useful for removing pepper noise.
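All three order statistic filters differ only in which rank of the sorted 3 x 3 neighborhood is kept, so one sketch covers Eqs. 7.8-7.10 (illustrative Python; edge replication at the borders is our assumption):

```python
import numpy as np

def rank_filter(img, rank):
    """3x3 order-statistic filter: rank=4 is the median (Eq. 7.8),
    rank=0 the min (Eq. 7.9), rank=8 the max (Eq. 7.10)."""
    pad = np.pad(img, 1, mode='edge')
    out = np.empty_like(img)
    for x in range(img.shape[0]):
        for y in range(img.shape[1]):
            z = np.sort(pad[x:x + 3, y:y + 3], axis=None)  # rank the 9 neighbors
            out[x, y] = z[rank]
    return out
```

The test below shows the behavior described in the text: a single salt impulse is removed by both the median and the min filter, while the max filter keeps it.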

    7.4 Practical

Use these basic concepts and perform the following activities:

    7.4.1 Activity No.1

Use the figures that accompany this lab and generate the results discussed in Sec. 7.1.

    7.4.2 Activity No.2

Take the image in Fig. 7.4 and bring out the hidden object in the background using the techniques discussed in Sec. 7.1.

    7.4.3 Hint for Activity No.2

Apply a 3 x 3 moving average filter to the input image, detect areas of higher contrast such as edges, and then use the coefficients in g(x, y) to produce the output image.

    7.4.4 Activity No.3

Take the input image shown in Fig. 7.5 and apply smoothing filter masks of order: (i) 3 x 3, (ii) 5 x 5, (iii) 9 x 9, and (iv) 15 x 15. Sketch the output images and comment on your results.


    Figure 7.4: Input image having hidden object in background

    Figure 7.5: Input image having hidden object in background

    7.4.5 Activity No.4

Take a noisy image and determine experimentally which order statistic filter performs best.

    7.4.6 Hint for Activity No. 4

Read an image and add salt and pepper noise to it using the imnoise() function. Then write your own code to implement Eq. 7.8, Eq. 7.9, and Eq. 7.10.

    7.4.7 Questions

    1. What is enhancement and what is its objective?

2. What is the basic difference between histogram processing and spatial domain filtering?


3. Explain the difference between a local histogram and a global histogram.

4. How is the checkerboard effect produced in an image, and how can we avoid it?

5. Discuss the limiting effect of repeatedly applying a 3 x 3 lowpass spatial filter to a digital image, ignoring border effects.

6. What will be the size of the intermediate output image when we use padding for a mask of order 7 x 7?

7. What will be the size of the output image when we do not use padding for a mask of order 7 x 7?

8. What is salt and pepper noise, and which filter is able to remove both of its components at once?

9. What are the factors which affect the design of order statistic filters?

10. Which type of filter is useful for bridging gaps?


    Lab 8

    Spatial Domain Sharpening Filters

Sometimes we need to highlight the fine details or enhance the blurred details in an image. As discussed in Lab 7, blurring is done by averaging the gray levels in a neighborhood. Sharpening is exactly the opposite of blurring, so we require the opposite mathematical tool, the derivative (continuous domain) or the difference (digital domain), to define sharpening, just as we used averaging for blurring. The difference operation is equivalent to digital differentiation: it highlights discontinuities (noise or edges) in a digital image and deemphasizes slowly varying regions.

In this lab we will discuss how to design and apply sharpening filters and will cover their different types.

    8.1 Introduction of Sharpening Filters

As discussed in Sec. 7.3, spatial domain filtering is performed by designing a mask or window and convolving it with the input image. Sharpening filters are isotropic (rotation invariant) and are obtained by approximating the second order derivative, Eq. 8.1, in the digital domain, as described in Eq. 8.2 to Eq. 8.4. The implementation of Eq. 8.4 as the simplest window is given in Fig. 8.1, and its application to the input image of Fig. 8.2(a) is shown in Fig. 8.2(b).

    Figure 8.1: Sharpening Mask

∇²f = ∂²f/∂x² + ∂²f/∂y² (8.1)


    Figure 8.2: Sharpening Mask

∂²f/∂x² = f(x + 1, y) + f(x − 1, y) − 2f(x, y) (8.2)

∂²f/∂y² = f(x, y + 1) + f(x, y − 1) − 2f(x, y) (8.3)

∇²f = f(x + 1, y) + f(x − 1, y) + f(x, y + 1) + f(x, y − 1) − 4f(x, y) (8.4)
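Eq. 8.4 translates directly into array shifts (illustrative Python; leaving the one-pixel border at zero is our simplification of the border handling discussed in Lab 7):

```python
import numpy as np

def laplacian(f):
    """Digital Laplacian of Eq. 8.4, computed on interior pixels;
    the one-pixel border is left at zero."""
    f = f.astype(np.float64)
    out = np.zeros_like(f)
    out[1:-1, 1:-1] = (f[2:, 1:-1] + f[:-2, 1:-1]      # f(x+1,y) + f(x-1,y)
                       + f[1:-1, 2:] + f[1:-1, :-2]    # f(x,y+1) + f(x,y-1)
                       - 4.0 * f[1:-1, 1:-1])
    return out
```

A single bright pixel yields −4 at its own location and +1 at its 4-neighbors, i.e. exactly the mask of Fig. 8.1; on a constant image the Laplacian is zero everywhere, which is why it deemphasizes flat regions.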

    8.2 Using Sharpening Filters to Recover Background

As said, sharpening (Laplacian) filters deemphasize slowly varying gray level regions and highlight gray level discontinuities, i.e. edges, in an image. The background can be recovered by simply adding the Laplacian output to the input image, as given in Eq. 8.5 (when the center coefficient of the Laplacian mask is negative) and Eq. 8.6 (when the center coefficient of the Laplacian mask is positive). In this way the background tonality can be perfectly preserved while details are enhanced, as shown in Fig. 8.3.

    Figure 8.3: (a) Input image (b) Laplacian output (c) Enhanced Image

g(x, y) = f(x, y) − ∇²f(x, y) (8.5)

g(x, y) = f(x, y) + ∇²f(x, y) (8.6)


    8.3 Unsharp masking and high-boost filtering

An image can be sharpened, fs(x, y), by subtracting a blurred version of it, f̄(x, y), from the original f(x, y). This technique of sharpening is called unsharp masking, represented in Eq. 8.7.

fs(x, y) = f(x, y) − f̄(x, y) (8.7)

High-boost filtering is a generalization of Eq. 8.7 and is represented in Eq. 8.8, where A ≥ 1. From Eq. 8.7 and Eq. 8.8 we can express high-boost filtering in terms of unsharp masking, Eq. 8.9.

fhb(x, y) = A·f(x, y) − f̄(x, y) (8.8)

fhb(x, y) = (A − 1)·f(x, y) + fs(x, y) (8.9)

We can also express high-boost filtering in terms of the Laplacian masks in Fig. 8.4. Eq. 8.10 and Eq. 8.11 give high-boost filtering when the center coefficient of the Laplacian mask is negative and positive, respectively.

fhb(x, y) = A·f(x, y) − ∇²f(x, y) (8.10)

fhb(x, y) = A·f(x, y) + ∇²f(x, y) (8.11)

The relative sharpening effect is inversely related to the constant A: as A increases, the contribution of the sharpened detail decreases with respect to the original image.
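Unsharp masking and high-boost filtering, Eqs. 8.7-8.8, can be sketched with a 3 x 3 mean blur standing in for f̄(x, y) (an assumption on our part; any blur works, and the labs implement this in MATLAB):

```python
import numpy as np

def unsharp(f, A=1.0):
    """Unsharp masking / high-boost (Eqs. 8.7-8.8): f_hb = A*f - blurred(f).
    A 3x3 mean blur with edge padding serves as the blurred version."""
    f = f.astype(np.float64)
    p = np.pad(f, 1, mode='edge')
    blur = sum(p[i:i + f.shape[0], j:j + f.shape[1]]
               for i in range(3) for j in range(3)) / 9.0
    return A * f - blur
```

With A = 1 this is plain unsharp masking, so a constant image maps to zero; with A = 2 the constant image is returned unchanged, showing how larger A preserves more of the original.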

    Figure 8.4: High-boost Masks

    8.4 Image Enhancement Using Gradients

The 2-D first derivative in image processing is implemented using the magnitude of the gradient. As the gray levels increase or decrease, ∇f(x, y) has a local maximum or a local minimum, respectively, and ∇²f(x, y) has a zero crossing wherever there is a discontinuity in the image. The gradient is given by Eq. 8.12.

∇f = [Gx, Gy] = [∂f/∂x, ∂f/∂y] (8.12)

The magnitude of the gradient, Eq. 8.13, is approximated by the Prewitt operator in Eq. 8.14 and implemented as 3 x 3 masks in Fig. 8.5. We can use another operator, called the Sobel operator, Eq. 8.15, to find the magnitude of the gradient in the horizontal and vertical directions, represented in Fig. 8.6.

∇f(x, y) = [ (∂f(x, y)/∂x)² + (∂f(x, y)/∂y)² ]^(1/2) (8.13)


∇f(x, y) ≈ |(z7 + z8 + z9) − (z1 + z2 + z3)| + |(z3 + z6 + z9) − (z1 + z4 + z7)| (8.14)

where z1 denotes f(x − 1, y − 1), z5 represents f(x, y), and so on.

Figure 8.5: (a) Pixels arrangement (b) Mask for extracting horizontal edges (c) Mask for extracting vertical edges

∇f(x, y) ≈ |(z7 + 2z8 + z9) − (z1 + 2z2 + z3)| + |(z3 + 2z6 + z9) − (z1 + 2z4 + z7)| (8.15)

Figure 8.6: (a) Pixels arrangement (b) Sobel mask for extracting horizontal edges (c) Sobel mask for extracting vertical edges
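A NumPy sketch of the gradient magnitudes of Eq. 8.14 and Eq. 8.15 (the function name and edge padding are my own choices; the manual's labs use MATLAB):

```python
import numpy as np

def gradient_magnitude(f, operator="sobel"):
    """|Gx| + |Gy| with the Prewitt (Eq. 8.14) or Sobel (Eq. 8.15)
    3 x 3 masks; the center weight c is 2 for Sobel, 1 for Prewitt."""
    c = 2 if operator == "sobel" else 1
    gx = np.array([[-1, -c, -1],
                   [ 0,  0,  0],
                   [ 1,  c,  1]], dtype=float)   # horizontal edges
    gy = gx.T                                    # vertical edges
    pad = np.pad(f.astype(float), 1, mode="edge")
    def corr(mask):
        out = np.zeros_like(f, dtype=float)
        for i in range(3):
            for j in range(3):
                out += mask[i, j] * pad[i:i + f.shape[0], j:j + f.shape[1]]
        return out
    return np.abs(corr(gx)) + np.abs(corr(gy))

# A vertical step edge gives a strong response along the edge.
img = np.zeros((5, 5)); img[:, 2:] = 10.0
print(gradient_magnitude(img, "sobel")[2, 2])    # 40.0
```

The Sobel response (40) is larger than the Prewitt response (30) on the same edge because of the extra weight of 2 on the center row/column.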

    8.5 Practical

Use these basic concepts to perform the following activities:

    8.5.1 Activity No.1

Take a blurred image, apply Laplacian masks to it, and sketch the resultant image. Also, prove that the mask in Fig. 8.7 is isotropic for rotations in increments of 90° and that the mask in Fig. 8.7 is rotation invariant in increments of 45°.

    8.5.2 Hint for Activity No.1

Compare the results of applying the Laplacian masks to the input images with the results on rotated images (rotation is discussed in Sec. 1.5).


    Figure 8.7: Laplacian Mask

    8.5.3 Activity No.2

Prove that we can obtain the same results for the masks given in Fig. 8.8.

    Figure 8.8: Laplacian Masks

    8.5.4 Activity No.3

Obtain a sharpened image from a blurred image using unsharp masking and high-boost masks. Repeat the boosting for different values of A.

    8.5.5 Activity No.4

Compare and contrast the Prewitt and Sobel operators.

∇f ≈ |Gx| + |Gy| (8.16)

    8.5.6 Activity No. 5

Compare and contrast the sharpened images obtained by applying the Sobel operators and the Laplacian masks.


    8.5.7 Questions

    1. What is meant by rotation invariant masks?

    2. Give a 3 x 3 mask for performing unsharp masking in a single pass through an image.

3. Show that subtracting the Laplacian from an image is proportional to unsharp masking. Use the definition for the Laplacian.

4. Show that the magnitudes of the gradient in Eq. 8.14 and Eq. 8.15 are isotropic operations.

5. Show that the isotropic property is lost in general if the gradient is computed using Eq. 8.16.


    Lab 9

    Enhancement in Frequency Domain

Transform theory is important in image processing; its applications include image enhancement, restoration, encoding, and description. In this lab we will concentrate on the 2-D Fourier transform and its use in the representation of digital images. We will also study frequency domain techniques for image enhancement.

    9.1 Introduction to Fourier Transform

Frequency is the number of times that a periodic function repeats the same sequence of values during a unit variation of the independent variable. In image processing we have digital images, which are 2-D signals, so frequency can be defined as the number of repeated patterns of gray levels in an image. We can transform the spatial domain representation into the frequency domain with the help of the Fourier Transform (FT). MATLAB uses the Discrete Fourier Transform (DFT) and the Fast Fourier Transform (FFT) algorithm to obtain the FT of an image.

Let us consider an image f(x, y), Eq. 9.1. This aperiodic image is represented as one period of a periodic sequence f̃(x, y) with period M x N in Eq. 9.2. Here, M and N are the periods in the x and y spatial coordinates, respectively. The Discrete Fourier Series (DFS) pair of the approximated image f̃(x, y) and the image in the frequency domain F(u, v) is given in Exp. 9.3 and illustrated in Fig. 9.1.

f(x, y) = { f(x, y), 0 ≤ x ≤ M − 1 and 0 ≤ y ≤ N − 1; 0, otherwise. (9.1)

f̃(x, y) = Σ_{r1=−∞}^{∞} Σ_{r2=−∞}^{∞} f(x − r1·M, y − r2·N) (9.2)

f̃(x, y) ⟺ F(u, v) (9.3)

After mathematical simplification we can express the DFT in the analysis equation, Eq. 9.4, and its inverse in the synthesis equation, Eq. 9.5.


Figure 9.1: (a) Image in spatial domain (x, y) (b) Image in frequency domain (u, v)

F(u, v) = (1/MN) · Σ_{x=0}^{M−1} Σ_{y=0}^{N−1} f(x, y) · exp[−j2π(ux/M + vy/N)] (9.4)

f(x, y) = Σ_{u=0}^{M−1} Σ_{v=0}^{N−1} F(u, v) · exp[j2π(ux/M + vy/N)] (9.5)

where x (0 ≤ x ≤ M − 1) and y (0 ≤ y ≤ N − 1) represent spatial coordinates, while u (0 ≤ u ≤ M − 1) and v (0 ≤ v ≤ N − 1) denote the frequency variables. It has been observed that the dynamic range of the Fourier spectrum is usually large, and we need a logarithmic function to compress the dynamic range of the Fourier magnitude spectrum, as shown in Fig. 9.2.

Figure 9.2: (a) Input image (b) Fourier spectra (c) Fourier spectra after application of log transformation
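The large dynamic range and its log compression can be seen in a short NumPy sketch (np.fft.fft2 and np.fft.fftshift mirror MATLAB's fft2/fftshift; the test image is an arbitrary assumption):

```python
import numpy as np

f = np.zeros((64, 64))
f[24:40, 28:36] = 1.0                  # a bright rectangle on black

F = np.fft.fftshift(np.fft.fft2(f))    # DFT with the DC term centered
spectrum = np.abs(F)                   # Fourier magnitude spectrum
log_spectrum = np.log1p(spectrum)      # log(1 + |F|) compresses the range

# The DC term dominates the raw spectrum; the log narrows the ratio
# between the largest and the typical coefficient.
print(spectrum[32, 32] == spectrum.max())                                  # True
print(spectrum.max() / spectrum.mean() > log_spectrum.max() / log_spectrum.mean())  # True
```

Displaying log_spectrum instead of spectrum is what makes the off-center frequency content visible, as in Fig. 9.2(c).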

    9.2 Basic Filtering in Frequency Domain

The basic steps for filtering an image in the frequency domain are illustrated in Fig. 9.3 and are listed below.

1. Multiply the input image by (−1)^(x+y) to center the transform

2. Compute the DFT of the shifted image, i.e. F(u, v)


3. Multiply F(u, v) by a filter function H(u, v) to obtain the output transform G(u, v)

G(u, v) = H(u, v)·F(u, v) (9.6)

4. Compute the iDFT of G(u, v)

5. Extract the real part of the result

6. Multiply the real part by (−1)^(x+y) to shift the image back to its original position

    Figure 9.3: Steps for filtering in frequency domain
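The six steps above can be sketched in NumPy (the manual performs them in MATLAB; freq_filter is an illustrative name, and H is assumed to be a centered, real transfer function of the image's size):

```python
import numpy as np

def freq_filter(f, H):
    """Filtering in the frequency domain: steps 1-6 of Fig. 9.3."""
    M, N = f.shape
    x, y = np.meshgrid(np.arange(M), np.arange(N), indexing="ij")
    shift = (-1.0) ** (x + y)          # step 1: (-1)^(x+y) centering
    F = np.fft.fft2(f * shift)         # step 2: DFT of the shifted image
    G = H * F                          # step 3: G = H F, Eq. 9.6
    g = np.real(np.fft.ifft2(G))       # steps 4-5: iDFT, keep real part
    return g * shift                   # step 6: undo the centering

# With H identically 1 the image passes through unchanged.
f = np.random.default_rng(0).random((8, 8))
print(np.allclose(f, freq_filter(f, np.ones((8, 8)))))   # True
```

Any of the transfer functions of Secs. 9.3 and 9.4 can be passed as H.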

    9.3 Smoothing Frequency Domain Filters

In this section we will implement some basic types of smoothing filters: the ideal lowpass filter (ILPF) Hilp(u, v), Eq. 9.7; the Butterworth lowpass filter (BLPF) Hblp(u, v), Eq. 9.8; and the Gaussian lowpass filter (GLPF) Hglp(u, v), Eq. 9.9.

Hilp(u, v) = { 1, if D(u, v) ≤ D0; 0, if D(u, v) > D0 (9.7)

Hblp(u, v) = 1 / (1 + [D(u, v)/D0]^(2n)) (9.8)

Hglp(u, v) = exp[−D²(u, v)/(2σ²)] (9.9)

where D0 is a specified nonnegative cutoff (with σ = D0 for the GLPF), D(u, v) is the distance of (u, v) from the center of the frequency rectangle, and n is the order of the Butterworth lowpass filter.
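The three transfer functions can be generated directly from D(u, v); here is a NumPy sketch playing the role of DIPUM's lpfilter() (the name, argument order, and defaults are my own assumptions):

```python
import numpy as np

def lpfilter(kind, M, N, D0, n=2):
    """Centered lowpass transfer functions: 'ideal' (Eq. 9.7),
    'btw' Butterworth (Eq. 9.8), 'gaussian' with sigma = D0 (Eq. 9.9)."""
    u = np.arange(M) - M // 2
    v = np.arange(N) - N // 2
    U, V = np.meshgrid(u, v, indexing="ij")
    D = np.sqrt(U**2 + V**2)            # distance from the center
    if kind == "ideal":
        return (D <= D0).astype(float)
    if kind == "btw":
        return 1.0 / (1.0 + (D / D0) ** (2 * n))
    return np.exp(-D**2 / (2.0 * D0**2))

H = lpfilter("btw", 64, 64, D0=16)
print(H[32, 32])    # 1.0 at the zero-frequency center
```

All three filters equal 1 at the center and fall off with D(u, v); only the shape of the roll-off differs (abrupt, polynomial, Gaussian).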

    9.4 Sharpening Frequency Domain Filters

Sharpening filters attenuate the lower frequencies without disturbing the higher frequencies. In this section we will discuss highpass filters, generally expressed in Eq. 9.10. We will implement


the ideal highpass filter (IHPF) Hihp(u, v), Eq. 9.11; the Butterworth highpass filter (BHPF) Hbhp(u, v), Eq. 9.12; and the Gaussian highpass filter (GHPF) Hghp(u, v), Eq. 9.13.

Hhp(u, v) = 1 − Hlp(u, v) (9.10)

Hihp(u, v) = { 0, if D(u, v) ≤ D0; 1, if D(u, v) > D0 (9.11)

Hbhp(u, v) = 1 / (1 + [D0/D(u, v)]^(2n)) (9.12)

Hghp(u, v) = 1 − exp[−D²(u, v)/(2σ²)] (9.13)

9.5 Unsharp Masking, High Boost Filtering, and High Frequency Filtering

The background of an image is reduced to near black when it is passed through a highpass filter. So we need to add the original image to the highpass-filtered image to preserve the gray levels of the background. This technique is called unsharp masking, Eq. 9.14, and its generalization is called high-boost filtering, Eq. 9.15.

fum(x, y) = f(x, y) − flp(x, y) (9.14)

fhb(x, y) = A·f(x, y) − flp(x, y) (9.15)

where flp(x, y) is the image filtered by any of the lowpass filters discussed in Sec. 9.3. The outputs fum(x, y) and fhb(x, y) are the high-boosted images.

Sometimes it is advantageous to emphasize the contribution to enhancement made by the high-frequency components of an image. In this case we simply multiply the HPF by a constant, b, and add an offset, a, so that the zero-frequency component is not eliminated by the filter. This process is called high-frequency emphasis. We will implement it using the transfer function Hhfe(u, v) in Eq. 9.16.

Hhfe(u, v) = a + b·Hhp(u, v) (9.16)

where Hhp(u, v) can be any highpass filter discussed in Sec. 9.4.

    9.6 Practical

Use these basic concepts to perform the following activities:


    9.6.1 Activity No.1

Compute the Fourier transform of an image and show the Fourier spectrum. Comment on the PSD of the image by indicating the lower and higher frequencies.

    9.6.2 Hint for Activity No. 1

Use the fft2() and fftshift() functions to take the Fourier transform of the image and shift it, then display the Fourier spectrum.

    9.6.3 Activity No. 2

Implement and compare the results of the ILPF, BLPF, and GLPF for different values of D0.

    9.6.4 Hint for Activity No. 2

You can use either your own MATLAB code or the DIPUM function lpfilter() to obtain the transfer functions of these filters, then apply them to the DFT of the image to find the final output.

    9.6.5 Activity No. 3

Implement and compare the results of the IHPF, BHPF, and GHPF for different values of D0.

    9.6.6 Hint for Activity No. 3

You can use either your own MATLAB code or the DIPUM function hpfilter() to obtain the transfer functions of these filters, then apply them to the DFT of the image to find the final output.

    9.6.7 Activity No. 4

    Take an image and implement the sharpening techniques discussed in Sec. 9.5.

    9.6.8 Questions

1. Predict the frequencies corresponding to slowly varying and rapidly varying gray levels in an image and show them in the frequency rectangle.

2. Why do we need to multiply an image by (−1)^(x+y)? What happens if we do not?

3. What are the effects on the output image when the cutoff frequency in the ILPF and BLPF is increased or decreased?


4. What causes ringing effects, and how can we reduce them?

5. What are the effects on the output image when the standard deviation is changed in the GLPF and GHPF?

6. Why do we use the offset in high-frequency emphasis filters?


    Lab 10

Image Degradation and Restoration Process

In this lab we will study different models of noise and techniques for restoring the degradation caused by noise and the degradation function.

    10.1 Introduction

Image restoration is an objective process, as compared to the subjective one of image enhancement. The basic concept of restoration is to reconstruct or recover a degraded image using a priori knowledge of the degradation phenomenon and the statistical nature of the noise. Statistical models are used to model the noise produced during image acquisition, transmission, sudden switching of the camera, etc. Our approach in this lab will be to apply the reverse process of degradation. We will use spatial domain restoration when the noise is additive to the input image, and frequency domain restoration for degradations such as image blur.

Fig. ?? shows a model of the degradation process and its restoration. The input image f(x, y) is degraded by a degradation function H and additive noise η(x, y). The degraded output g(x, y), Eq. 10.1, is restored to an estimate f̂(x, y), Eq. 10.2, by restoration filters.

g(x, y) = H[f(x, y)] + η(x, y) (10.1)

f̂(x, y) = R[g(x, y)] (10.2)

Principal sources of noise are image acquisition and transmission. Noise is either assumed to be independent of spatial coordinates, or it is periodic noise, which depends on the spatial coordinates.

    10.2 Restoration in the Presence of Noise Only

We will use spatial filtering when only additive noise is present in the image. Its application is similar to the spatial filters discussed in Lab 8, but here the filters have a different computational nature.


    10.2.1 Mean Filters

In this section of the lab we will implement spatial filters that restore a noisy image. Let Sxy represent the set of coordinates in a rectangular subimage window of size m x n, centered at point (x, y). The arithmetic mean filter, Eq. 10.3, computes the average value of the corrupted image g(s, t) in the area defined by Sxy.

f̂(x, y) = (1/mn) · Σ_{(s,t)∈Sxy} g(s, t) (10.3)

Geometric mean filtering, Eq. 10.4, of a noisy image results in a smoother image but loses some detail.

f̂(x, y) = [ Π_{(s,t)∈Sxy} g(s, t) ]^(1/mn) (10.4)

The harmonic mean and contraharmonic filters are given in Eq. 10.5 and Eq. 10.6, respectively. The harmonic mean filter works well for salt noise but fails to produce the desired restored image for pepper noise, while the contraharmonic filter, Eq. 10.6, depends upon the value of the filter order Q.

f̂(x, y) = mn / Σ_{(s,t)∈Sxy} [1/g(s, t)] (10.5)

f̂(x, y) = Σ_{(s,t)∈Sxy} g(s, t)^(Q+1) / Σ_{(s,t)∈Sxy} g(s, t)^Q (10.6)
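A NumPy sketch of the contraharmonic filter of Eq. 10.6 (Q = 0 recovers the arithmetic mean of Eq. 10.3; the edge-padded window handling is my own assumption):

```python
import numpy as np

def contraharmonic(g, size=3, Q=1.5):
    """Eq. 10.6: sum(g^(Q+1)) / sum(g^Q) over each size x size window
    S_xy.  Positive Q removes pepper noise, negative Q removes salt."""
    r = size // 2
    pad = np.pad(g.astype(float), r, mode="edge")
    num = np.zeros_like(g, dtype=float)
    den = np.zeros_like(g, dtype=float)
    for i in range(size):
        for j in range(size):
            w = pad[i:i + g.shape[0], j:j + g.shape[1]]
            num += w ** (Q + 1)
            den += w ** Q
    return num / den

# A pepper impulse (0) in a flat 100-level patch is removed for Q > 0.
g = np.full((5, 5), 100.0); g[2, 2] = 0.0
print(round(contraharmonic(g, Q=1.5)[2, 2], 6))   # 100.0
```

Choosing the wrong sign of Q makes things worse (e.g. a negative Q amplifies pepper noise), which is the Q-dependence noted above.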

    10.2.2 Order Statistic Filters

As discussed in Sec. 7.2, the response of order statistic filters is based on the ranking of the pixels in the neighborhood area. We will use the median filter, Eq. 10.7, the max, Eq. 10.8, and min, Eq. 10.9, filters, the midpoint filter, Eq. 10.10, and the alpha-trimmed mean filter, Eq. 10.11, to restore a noisy image.

f̂(x, y) = median_{(s,t)∈Sxy} [g(s, t)] (10.7)

f̂(x, y) = max_{(s,t)∈Sxy} [g(s, t)] (10.8)

f̂(x, y) = min_{(s,t)∈Sxy} [g(s, t)] (10.9)

f̂(x, y) = (1/2) · [ max_{(s,t)∈Sxy} [g(s, t)] + min_{(s,t)∈Sxy} [g(s, t)] ] (10.10)

f̂(x, y) = (1/(mn − d)) · Σ_{(s,t)∈Sxy} gr(s, t) (10.11)
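The order-statistic filters of Eqs. 10.7-10.10 in NumPy (edge padding assumed; the alpha-trimmed filter of Eq. 10.11 would additionally sort each window and drop d/2 values from each end before averaging):

```python
import numpy as np

def order_statistic(g, stat="median", size=3):
    """Median (Eq. 10.7), max (10.8), min (10.9) and midpoint (10.10)
    filters over each size x size neighborhood S_xy."""
    r = size // 2
    pad = np.pad(g.astype(float), r, mode="edge")
    out = np.zeros_like(g, dtype=float)
    for x in range(g.shape[0]):
        for y in range(g.shape[1]):
            w = pad[x:x + size, y:y + size]
            if stat == "median":
                out[x, y] = np.median(w)   # removes salt AND pepper
            elif stat == "max":
                out[x, y] = w.max()        # good against pepper noise
            elif stat == "min":
                out[x, y] = w.min()        # good against salt noise
            else:                          # midpoint, Eq. 10.10
                out[x, y] = 0.5 * (w.max() + w.min())
    return out

# Isolated salt (255) and pepper (0) impulses vanish under the median.
g = np.full((5, 5), 50.0); g[1, 1] = 255.0; g[3, 3] = 0.0
print(order_statistic(g, "median")[1, 1])   # 50.0
```

Unlike the mean filters, the median leaves a flat region exactly flat while still removing isolated impulses, which is why it is the usual choice for salt-and-pepper noise.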


    Figure 10.1: Sharpening Mask

    10.2.3 Least Mean Square (LMS) Wiener Filtering

Fig. 10.1 compares the implementation of least mean square (LMS) or Wiener filtering with inverse filtering. The Wiener filter incorporates both the degradation function and the noise. It restores an estimate f̂(x, y), Eq. 10.12, of the original image f(x, y) such that the mean square error between them is minimum.

F̂(u, v) = [ (1/H(u, v)) · |H(u, v)|² / (|H(u, v)|² + Sη(u, v)/Sf(u, v)) ] · G(u, v) (10.12)
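A NumPy sketch of Eq. 10.12, with the usually unknown power-spectrum ratio Sη/Sf replaced by a constant K (a common simplification; the function name is my own):

```python
import numpy as np

def wiener_restore(g, H, K=0.01):
    """Eq. 10.12 with S_eta/S_f ~ K:
    F_hat = [|H|^2 / (H (|H|^2 + K))] G."""
    G = np.fft.fft2(g)
    H = H.astype(complex)
    H2 = np.abs(H) ** 2
    F_hat = (H2 / (H * (H2 + K))) * G
    return np.real(np.fft.ifft2(F_hat))

# Sanity check: with no blur (H = 1) and K -> 0 the input is recovered.
f = np.random.default_rng(1).random((8, 8))
print(np.allclose(f, wiener_restore(f, np.ones((8, 8)), K=1e-9), atol=1e-6))  # True
```

In practice K is tuned interactively; K = 0 reduces Eq. 10.12 to plain inverse filtering, which amplifies noise wherever |H| is small.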

    10.3 Practical

Use these basic concepts to perform the following activities:

    10.3.1 Activity No.1

Compare and contrast an image, given in Fig. ??, and its histogram after adding each of the following noise types to it.

    1. Gaussian

    2. Raleigh

3. Gamma

4. Exponential

5. Uniform

6. Salt-and-pepper

    10.3.2 Activity No. 2

    10.3.3 Hint for Activity No. 2

You can use either your own MATLAB code or the DIPUM function lpfilter() to obtain the transfer functions of these filters, then apply them to the DFT of the image to find the final output.


    10.3.4 Activity No. 3

Implement and compare the results of the IHPF, BHPF, and GHPF for different values of D0.

    10.3.5 Hint for Activity No. 3

You can use either your own MATLAB code or the DIPUM function hpfilter() to obtain the transfer functions of these filters, then apply them to the DFT of the image to find the final output.

    10.3.6 Activity No. 4

Take a degraded image and implement the Wiener filter discussed in Sec. 10.2.3.

    10.3.7 Questions

1. Which noise model best describes the electrical and electromechanical interference during image acquisition?

2. Which noise model best describes the phenomena of electronic noise and sensor noise due to poor illumination and/or high temperature?

    3. Which noise model best characterizes the phenomena in range imaging?

4. Which noise model best describes the phenomena of laser imaging?

5. Which noise model best describes the situation when faulty switching takes place during image acquisition?

6. Which type of noisy situation is the uniform density noise model used to describe?


    Lab 11

    Color Image Processing

In this lab we will expand our image processing from gray-scale images to color images. We will implement color image enhancement techniques.

    11.1 Introduction to Color Image Processing

Color slicing highlights an object of interest in an image by separating it from surrounding objects, as shown in Fig. 11.1. Color image histogram processing techniques uniformly

    Figure 11.1: Sharpening Mask

spread the intensity components without affecting hue and saturation.

    11.2 Practical

    11.2.1 Activity No.1

Write MATLAB code that takes a color image and plots its various color space components.


    11.2.2 Activity No. 2

Prove that the RGB transformation functions and the complement of a color image are identical.

11.2.3 Activity No. 3

Find the histogram of a color image and use histogram equalization to uniformly distribute the intensity components.

    11.2.4 Hint for Activity No. 2

    Read a color image and take its complement and RGB transformation.

    11.2.5 Questions

    1. What will be the color components in the complement of an RGB image?

2. Does the intensity equalization process alter the values of hue and saturation of an image?


    Lab 12

Color Image Smoothing and Sharpening

In this lab we will implement techniques to smooth and sharpen a color image. The implementation of these techniques is similar to those studied in Lab 7.

    12.1 Color Image Smoothing

The main difference between gray-scale image smoothing and color image smoothing is that we deal with component vectors of the form given in Eq. 12.1.

c(x, y) = [cR(x, y), cG(x, y), cB(x, y)]ᵀ = [R(x, y), G(x, y), B(x, y)]ᵀ (12.1)

The average of the pixels in the neighborhood Sxy centered at (x, y) is given by Eq. 12.2, and the average of the component vectors in Eq. 12.3.

c̄(x, y) = (1/K) · Σ_{(s,t)∈Sxy} c(s, t) (12.2)

c̄(x, y) = [ (1/K)·Σ_{(s,t)∈Sxy} R(s, t), (1/K)·Σ_{(s,t)∈Sxy} G(s, t), (1/K)·Σ_{(s,t)∈Sxy} B(s, t) ]ᵀ (12.3)

The averaging is carried out on a per-color-plane basis, i.e. the Red, Green, and Blue color planes are averaged independently.
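The per-plane form of Eq. 12.3 can be sketched in NumPy (window size and edge padding are my own assumptions; the labs themselves use MATLAB):

```python
import numpy as np

def smooth_color(c, size=5):
    """Average each of the R, G, B planes independently over a
    size x size neighborhood -- Eq. 12.3, the per-plane form of Eq. 12.2."""
    r = size // 2
    out = np.empty_like(c, dtype=float)
    for k in range(3):                  # per-color-plane basis
        pad = np.pad(c[:, :, k].astype(float), r, mode="edge")
        plane = np.zeros(c.shape[:2])
        for i in range(size):
            for j in range(size):
                plane += pad[i:i + c.shape[0], j:j + c.shape[1]]
        out[:, :, k] = plane / (size * size)
    return out

# A constant-colour image is unchanged by averaging.
c = np.zeros((6, 6, 3)); c[..., 0] = 200.0; c[..., 2] = 40.0
print(smooth_color(c)[3, 3])    # R, G, B planes unchanged
```

Averaging the vectors of Eq. 12.2 and averaging the planes of Eq. 12.3 give identical results, which is what makes the per-plane implementation valid.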

    12.2 Color Image Sharpening

Color image sharpening is carried out using Laplacian masks. The Laplacian of a color image is equal to the Laplacian of each individual scalar component of the input vector, as given in Eq. 12.4.

∇²[c(x, y)] = [∇²cR(x, y), ∇²cG(x, y), ∇²cB(x, y)]ᵀ = [∇²R(x, y), ∇²G(x, y), ∇²B(x, y)]ᵀ (12.4)
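A per-plane Laplacian in NumPy, directly mirroring Eq. 12.4 (the mask choice and names are illustrative assumptions):

```python
import numpy as np

def laplacian_color(c):
    """Eq. 12.4: stack the Laplacian of each scalar plane back into a
    vector image (negative-center 3 x 3 mask, edge padding)."""
    mask = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], dtype=float)
    out = np.empty_like(c, dtype=float)
    for k in range(3):
        pad = np.pad(c[:, :, k].astype(float), 1, mode="edge")
        lap = np.zeros(c.shape[:2])
        for i in range(3):
            for j in range(3):
                lap += mask[i, j] * pad[i:i + c.shape[0], j:j + c.shape[1]]
        out[:, :, k] = lap
    return out

# The Laplacian of a constant-colour image is zero in every plane.
c = np.full((5, 5, 3), 80.0)
print(np.count_nonzero(laplacian_color(c)))   # 0
```

Sharpening then follows the gray-scale recipe of Eq. 8.5 applied per plane: subtract this Laplacian from the image.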


    12.3 Practical

    12.3.1 Activity No.1

Read a color image and smooth it with a 5 x 5 averaging mask.

    12.3.2 Activity No. 2

Read a color image and sharpen it with a Laplacian mask.

    12.3.3 Questions

1. Sharpen the edges of a color image using the gradient masks in Fig. 12.1 by summing the three individual gradient vectors. Are there any erroneous results? How can we solve them?

Figure 12.1: (a) Pixels arrangement (b) Mask for extracting horizontal edges (c) Mask for extracting vertical edges
