
  • 7/22/2019 04-Unit4.pdf

    1/17

    Digital Image Processing Unit 4

    Sikkim Manipal University Page No. 58

    Unit 4 Image Enhancement

    Structure

    4.1 Introduction

    Objective

    4.2 Contrast Manipulation

    Amplitude Scaling

    4.3 Histogram Modification

    4.4 Noise Cleaning

    Linear Noise Cleaning

    Nonlinear Noise Cleaning

    4.5 Edge Crispening

    Linear Technique

    Statistical Differencing

    4.6 Color Image Enhancement

    Natural Color Image enhancement

    Pseudo Color

    False Color

    4.7 Multispectral Image Enhancement

    4.8 Summary

    4.9 Terminal questions

    4.10 Answers

    4.1 Introduction

    In the previous unit we studied representation of digital images and dealt

    with basic aspects of pixel and image operations on pixels. The focus was

    on sampling and quantization in representing the image digitally. Continuing

    with this, we now focus on the image enhancement. Image enhancement

    processes improve the visual appearance of an image. There is no general

    unifying theory of image enhancement at present because there is no

    general standard of image quality that can serve as a design criterion for an

    image enhancement processor. The emphasis will be on studying these

    different techniques.


    Objectives

    By the end of this unit you will understand:

    Contrast manipulation using amplitude scaling.

    Histogram modification technique

    Methods for noise cleaning.

    Linear edge crispening technique.

    Image enhancement techniques for color images

    Multispectral image enhancement

    4.2 Contrast Manipulation

    Poor contrast is a common defect of photographic and electronic images, and it typically results from a reduced, and perhaps nonlinear, image amplitude range. Image contrast can often be improved by amplitude rescaling of each pixel.

    Fig. 4.1 illustrates a transfer function for contrast enhancement of a typical

    continuous amplitude low-contrast image. For continuous amplitude images,

    the transfer function operator can be implemented by photographic

    techniques, but it is often difficult to realize an arbitrary transfer function

    accurately. For quantized amplitude images, implementation of the transfer

    function is a relatively simple task. However, in the design of the transfer

    function operator, consideration must be given to the effects of amplitude quantization.

    Figure 4.1: Continuous image contrast enhancement


    With reference to Fig. 4.2, suppose that an original image is quantized to J

    levels, but it occupies a smaller range. The output image is also assumed to be restricted to J levels, and the mapping is linear. In the mapping strategy,

    the output level chosen is that level closest to the exact mapping of an input

    level. It is obvious from the diagram that the output image will have

    unoccupied levels within its range, and some of the gray scale transitions

    will be larger than in the original image. The latter effect may result in

    noticeable gray scale contouring. If the output image is quantized to more

    levels than the input image, it is possible to approach a linear placement of

    output levels, and hence, decrease the gray scale contouring effect.

    Figure 4.2: Quantized image contrast enhancement
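    The contouring effect described above can be demonstrated numerically; the function name and the 64..191 input range are illustrative choices, not values from the text.

```python
def stretch_level(f, lo, hi, levels=256):
    """Map input level f in [lo, hi] linearly onto `levels` output levels,
    choosing the output level closest to the exact mapping."""
    g = (f - lo) / (hi - lo) * (levels - 1)
    return int(round(g))

# An image occupying only levels 64..191 of a 256-level scale, stretched
# to the full range: half the output levels stay unoccupied, and gray
# scale transitions grow, which can produce visible contouring.
occupied = {stretch_level(f, 64, 191) for f in range(64, 192)}
unoccupied = 256 - len(occupied)
print(unoccupied)  # 128
```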

    4.2.1 Amplitude Scaling

    A digitally processed image may occupy a range different from the range of

    the original image. In fact, the numerical range of the processed image may

    encompass negative values, which cannot be mapped directly into a light

    intensity range. There are several possibilities of scaling an output image

    back into the domain of values occupied by the original image.

    By the first technique as shown in Fig 4.3 (a), the processed image is

    linearly mapped over its entire range. Fig. 4.3 illustrates the amplitude

    scaling of the Q component of the YIQ transformation of a monochrome

    image containing negative pixels. Fig. 4.3(b) presents the result of amplitude

    scaling with the linear function of Fig. 4.3(a) over the amplitude range of the

    image. In this example, the most negative pixels are mapped to black (0.0),

    and the most positive pixels are mapped to white (1.0).
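    The full-range linear mapping of Fig. 4.3(a) can be sketched as follows; the function name and the sample Q-component values are illustrative.

```python
def scale_full_range(pixels):
    """Linearly map an image (possibly containing negative values) onto
    [0.0, 1.0]: the most negative pixel goes to black (0.0), the most
    positive to white (1.0)."""
    lo, hi = min(pixels), max(pixels)
    return [(p - lo) / (hi - lo) for p in pixels]

# Hypothetical Q-component samples with negative pixels:
q = [-0.147, -0.05, 0.0, 0.08, 0.169]
print(scale_full_range(q))  # first value 0.0, last value 1.0
```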


    Figure 4.3 (a): Linear image scaling (b) Example [full range, 0.147 to 0.169]

    In the second technique shown in Fig. 4.4 (a), the extreme amplitude values

    of the processed image are clipped to maximum and minimum limits. The

    second technique is often subjectively preferable, especially for images in

    which a relatively small number of pixels exceed the limits. Contrast

    enhancement algorithms often possess an option to clip a fixed percentage

    of the amplitude values on each end of the amplitude scale. In medical

    image enhancement applications, the contrast modification operation shown in Fig. 4.4(a), for a ≥ 0, is called a window-level transformation. The window value is the width of the linear slope, b − a; the level is located at the

    midpoint c of the slope line. Amplitude scaling in which negative value pixels

    are clipped to zero is shown in Fig. 4.4(b). The black regions of the image

    correspond to negative pixel values of the Q component.
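    The clip-and-stretch mapping of Fig. 4.4(a) can be sketched as a window-level transformation; the function name and the sample amplitudes are illustrative.

```python
def window_level(f, a, b):
    """Linear scaling with clipping: values at or below a map to black (0.0),
    values at or above b map to white (1.0), and the segment between them is
    linear. The window is b - a; the level is the midpoint (a + b) / 2."""
    if f <= a:
        return 0.0
    if f >= b:
        return 1.0
    return (f - a) / (b - a)

# Negative pixels clipped to zero, as in Fig. 4.4(b):
print([round(window_level(f, 0.0, 0.169), 3)
       for f in (-0.1, 0.0, 0.0845, 0.169, 0.2)])  # [0.0, 0.0, 0.5, 1.0, 1.0]
```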


    Fig. 4.4 (a): Linear image scaling with clipping

    (b) Example [Clipping, 0.000 to 0.169]

    The third technique of amplitude scaling, shown in Fig. 4.5(a), utilizes an

    absolute value transformation for visualizing an image with negatively

    valued pixels. This is a useful transformation for systems that utilize the

    two's complement numbering convention for amplitude representation. In

    such systems, if the amplitude of a pixel overshoots +1.0 (maximum luminance white) by a small amount, it wraps around by the same amount to −1.0, which is also maximum luminance white. Similarly, pixel undershoots

    remain near black. Absolute value scaling is presented in Fig. 4.5(b).
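    The absolute value transformation of Fig. 4.5(a) can be sketched as follows; the function name and the peak amplitude are illustrative.

```python
def absolute_value_scale(f, peak):
    """Absolute value transformation: a negative amplitude displays at the
    same luminance as its positive counterpart, stretched so that `peak`
    maps to white (1.0)."""
    return min(abs(f) / peak, 1.0)

# With the example range of Fig. 4.5(b), -0.169 and +0.169 both display white:
print(absolute_value_scale(-0.169, 0.169), absolute_value_scale(0.0, 0.169))
```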


    Figure 4.5 (a) Absolute value scaling

    (b) Example [Absolute value, 0.000 to 0.169]

    4.3 Histogram Modification

    The luminance histogram of a typical natural scene that has been linearly quantized is usually highly skewed toward the darker levels; a majority of

    the pixels possess a luminance less than the average. In such images,

    detail in the darker regions is often not perceptible. One means of

    enhancing these types of images is a technique called histogram

    modification, in which the original image is rescaled so that the histogram of

    the enhanced image follows some desired form. Andrews, Hall and others

    have produced enhanced imagery by a histogram equalization process for

    which the histogram of the enhanced image is forced to be uniform.

    Fig. 4.6 gives an example of histogram equalization. In the figure, HF(c) for

    c = 1, 2, ... C, represents the fractional number of pixels in an input image

    whose amplitude is quantized to the cth reconstruction level.


    Figure 4.6: Approximate gray level histogram equalization with unequal

    number of quantization levels.

    Histogram equalization seeks to produce an output image field G by point

    rescaling such that the normalized gray-level histogram HG(d) = 1/D for

    d = 1, 2,..., D. In the example shown in Fig. 4.6, the number of output levels

    is set at one-half of the number of input levels.

    The scaling algorithm is developed as follows. The average value of the

    histogram is computed. Then, starting at the lowest gray level of the original,

    the pixels in the quantization bins are combined until the sum is closest to

    the average. All of these pixels are then rescaled to the new first

    reconstruction level at the midpoint of the enhanced image first quantization

    bin. The process is repeated for higher-value gray levels. If the number of

    reconstruction levels of the original image is large, it is possible to rescale

    the gray levels so that the enhanced image histogram is almost constant. It

    should be noted that the number of reconstruction levels of the enhanced

    image must be less than the number of levels of the original image to

    provide proper gray scale redistribution if all pixels in each quantization level

    are to be treated similarly. This process results in a somewhat larger

    quantization error. It is possible to perform the gray scale histogram

    equalization process with the same number of gray levels for the original


    and enhanced images, and still achieve a constant histogram of the

    enhanced image, by randomly redistributing pixels from input to output quantization bins.
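    The bin-combining procedure described above can be sketched on toy data; the function name and sample values are illustrative, and output bin indices stand in for the bin-midpoint reconstruction levels.

```python
def equalize_by_bin_combining(pixels, in_levels, out_levels):
    """Approximate histogram equalization: starting at the lowest gray level,
    input quantization bins are combined until the group's pixel count
    reaches the target count (total / out_levels); every input level in a
    group maps to the same output bin. Works on a flat list for brevity."""
    hist = [0] * in_levels
    for p in pixels:
        hist[p] += 1
    target = len(pixels) / out_levels
    mapping, group, count, out_bin = {}, [], 0, 0
    for level in range(in_levels):
        group.append(level)
        count += hist[level]
        if count >= target and out_bin < out_levels - 1:
            for g in group:
                mapping[g] = out_bin
            group, count, out_bin = [], 0, out_bin + 1
    for g in group:  # remaining levels fall into the last output bin
        mapping[g] = out_levels - 1
    return [mapping[p] for p in pixels]

# Sixteen pixels over 4 input levels, equalized to half as many output
# levels (as in Fig. 4.6): levels 0,1 -> bin 0 and levels 2,3 -> bin 1.
flat16 = [0, 0, 0, 0, 1, 1, 1, 1, 2, 2, 2, 2, 3, 3, 3, 3]
print(equalize_by_bin_combining(flat16, 4, 2))
```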

    Fig. 4.6 provides an example of histogram equalization for an x-ray of a

    projectile. The original image and its histogram are shown in Fig. 4.6 (a) and

    (b). In the histogram equalized result of Fig. 4.7 (a) and (b), ablating material

    from the projectile, not seen in the original, is clearly visible. Histogram

    equalization usually performs best on images with detail hidden in dark

    regions. Good quality originals are often degraded by histogram

    equalization.

    Figure 4.6 (a): Original image (b) Original image histogram

    Figure 4.7 (a): Enhanced image (b) Enhanced image histogram


    4.4 Noise Cleaning

    An image may be subject to noise and interference from several sources, including electrical sensor noise, photographic grain noise and channel

    errors. Image noise arising from a noisy sensor or channel transmission

    errors usually appears as discrete isolated pixel variations that are not

    spatially correlated. Pixels that are in error often appear visually to be

    markedly different from their neighbors.

    4.4.1 Linear Noise Cleaning

    Noise added to an image generally has a higher-spatial-frequency spectrum

    than the normal image components because it is spatially decorrelated.

    Hence, simple low-pass filtering can be effective for noise cleaning. We will now discuss the convolution method of noise cleaning.

    A spatially filtered output image G(j,k) can be formed by discrete convolution

    of an input image F(m,n) with an L × L impulse response array H(j,k)

    according to the relation

    G(j, k) = Σm Σn F(m, n) H(m − j + C, n − k + C), where C = (L + 1)/2 [Eq 4.8]

    For noise cleaning, H should be of low-pass form, with all positive elements.

    Several common pixel impulse response arrays of low-pass form are used

    and two such forms are in common use.

    These arrays, called noise cleaning masks, are normalized to unit weighting

    so that the noise-cleaning process does not introduce an amplitude bias in

    the processed image.
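    The convolution of Eq. 4.8 with a unit-weight mask can be sketched as follows. The two masks shown are common low-pass forms consistent with the description (uniform averaging and a center-weighted average); they are stand-ins, since the textbook's own mask figures are not reproduced in this transcript.

```python
# Two common unit-weight low-pass noise cleaning masks (illustrative):
MASK_UNIFORM = [[1 / 9] * 3 for _ in range(3)]
MASK_CENTER = [[1 / 10, 1 / 10, 1 / 10],
               [1 / 10, 2 / 10, 1 / 10],
               [1 / 10, 1 / 10, 1 / 10]]

def convolve3x3(image, mask):
    """Discrete convolution of Eq. 4.8 for a 3x3 mask; the one-pixel border
    is left unchanged for brevity."""
    rows, cols = len(image), len(image[0])
    out = [row[:] for row in image]
    for j in range(1, rows - 1):
        for k in range(1, cols - 1):
            out[j][k] = sum(image[j + a][k + b] * mask[a + 1][b + 1]
                            for a in (-1, 0, 1) for b in (-1, 0, 1))
    return out

# Unit weighting introduces no amplitude bias: a flat field stays flat.
flat = [[0.5] * 5 for _ in range(5)]
print(round(convolve3x3(flat, MASK_CENTER)[2][2], 6))  # 0.5
```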

    Another linear noise cleaning technique is homomorphic filtering, a useful technique for image enhancement when an image is subject to multiplicative noise or interference. Fig. 4.9 describes the process.


    Figure 4.9: Homomorphic Filtering

    The input image F(j,k) is assumed to be modeled as the product of a noise-

    free image S(j,k) and an illumination interference array I(j,k). Thus,

    F(j,k) = S(j,k) I(j,k)

    Taking the logarithm yields the additive linear result

    log{F(j, k)} = log{I(j, k)} + log{S(j, k)}

    Conventional linear filtering techniques can now be applied to reduce the log

    interference component. Exponentiation after filtering completes the

    enhancement process.
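    The log–filter–exponentiate chain of Fig. 4.9 can be sketched on a 1-D toy signal; the constant illumination factor and the global-mean interference estimate are illustrative assumptions, not from the text.

```python
import math

def homomorphic(observed, estimate_log_interference):
    """Homomorphic filtering: the logarithm turns the multiplicative model
    F = S * I into an additive one, a linear operation estimates the
    log-interference component, and exponentiation after subtracting that
    estimate completes the enhancement."""
    log_f = [math.log(v) for v in observed]
    log_i = estimate_log_interference(log_f)
    return [math.exp(a - b) for a, b in zip(log_f, log_i)]

# Toy case: hypothetical constant illumination I = 2.0, with a global mean
# standing in for the low-pass estimate of the log-interference component.
signal = [0.2, 0.5, 0.8, 0.5]
observed = [s * 2.0 for s in signal]
mean_est = lambda xs: [sum(xs) / len(xs)] * len(xs)
restored = homomorphic(observed, mean_est)
# The illumination factor cancels: pixel ratios match the original signal.
```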

    4.4.2 Nonlinear Noise Cleaning

    The linear processing techniques described previously perform reasonably

    well on images with continuous noise, such as additive uniform or Gaussian

    distributed noise. However, they tend to provide too much smoothing for

    impulse like noise. Nonlinear techniques often provide a better trade-off

    between noise smoothing and the retention of fine image detail.

    Median filtering is a nonlinear signal processing technique that is useful for noise suppression in images. In one-dimensional form, the median

    filter consists of a sliding window encompassing an odd number of pixels.

    The center pixel in the window is replaced by the median of the pixels in the

    window. The median of a discrete sequence a1, a2,..., aN for N odd is that

    member of the sequence for which (N − 1)/2 elements are smaller or equal in value and (N − 1)/2 elements are larger or equal in value. For example, if

    the values of the pixels within a window are 0.1, 0.2, 0.9, 0.4, 0.5, the center

    pixel would be replaced by the value 0.4, which is the median value of the

    sorted sequence 0.1, 0.2, 0.4, 0.5, 0.9. In this example, if the value 0.9 were

    a noise spike in a monotonically increasing sequence, the median filter would result in a considerable improvement. On the other hand, the value 0.9 might represent a valid signal pulse for a wide-bandwidth sensor, and

    the resultant image would suffer some loss of resolution. Thus, in some


    cases the median filter will provide noise suppression, while in other cases it

    will cause signal suppression.
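    The one-dimensional median filter described above can be sketched directly; the endpoint handling is an implementation choice.

```python
def median_filter_1d(seq, window=5):
    """Slide an odd-length window over the sequence and replace the center
    sample by the median of the window; samples too close to either end are
    left unchanged here."""
    half = window // 2
    out = list(seq)
    for i in range(half, len(seq) - half):
        out[i] = sorted(seq[i - half:i + half + 1])[half]
    return out

# The example from the text: the spike 0.9 is replaced by the median 0.4
# of the sorted window 0.1, 0.2, 0.4, 0.5, 0.9.
print(median_filter_1d([0.1, 0.2, 0.9, 0.4, 0.5])[2])  # 0.4
```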

    4.5 Edge Crispening

    Psychophysical experiments indicate that a photograph or visual signal with

    accentuated or crispened edges is often more subjectively pleasing than an

    exact photometric reproduction. We will discuss Linear and Statistical

    differencing technique for edge crispening.

    4.5.1 Linear Edge Crispening

    Edge crispening can be performed by discrete convolution, as defined by

    Eq. 4.8 in which the impulse response array H is of high-pass form. Several

    common high-pass masks are in use.

    These masks possess the property that the sum of their elements is unity, to

    avoid amplitude bias in the processed image. Figure 4.10 provides an example of edge crispening on a monochrome image with mask 2.

    Figure 4.10 (a): Original (b): Result with mask 2
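    Edge crispening by Eq. 4.8 with a high-pass mask can be sketched as follows. The two masks shown are common unity-sum high-pass forms consistent with the description; they are stand-ins, since the textbook's own mask figures are not reproduced in this transcript.

```python
# Two common unity-sum high-pass masks (illustrative):
MASK_1 = [[0, -1, 0],
          [-1, 5, -1],
          [0, -1, 0]]
MASK_2 = [[-1, -1, -1],
          [-1, 9, -1],
          [-1, -1, -1]]

def sharpen3x3(image, mask):
    """Edge crispening by the discrete convolution of Eq. 4.8 with a
    high-pass mask; the one-pixel border is left unchanged for brevity."""
    rows, cols = len(image), len(image[0])
    out = [row[:] for row in image]
    for j in range(1, rows - 1):
        for k in range(1, cols - 1):
            out[j][k] = sum(image[j + a][k + b] * mask[a + 1][b + 1]
                            for a in (-1, 0, 1) for b in (-1, 0, 1))
    return out

# The elements sum to unity, so a flat field is preserved (no amplitude bias):
print(sharpen3x3([[0.5] * 5 for _ in range(5)], MASK_2)[2][2])  # 0.5
```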


    4.5.2 Statistical Differencing

    Statistical differencing involves the generation of an image by dividing each pixel value by its estimated standard deviation D(j, k) according to the basic

    relation

    G(j, k) = F(j, k) / D(j, k)

    where the estimated standard deviation

    D(j, k) = [ (1/W^2) Σm Σn (F(m, n) − M(m, n))^2 ]^(1/2)

    is computed at each pixel over some W × W neighborhood, where W = 2w + 1 and the sums run over m = j − w, ..., j + w and n = k − w, ..., k + w. The function M(j, k) is the estimated mean value of the original image at point (j, k), which is computed as

    M(j, k) = (1/W^2) Σm Σn F(m, n)

    The enhanced image G(j,k) is increased in amplitude with respect to the

    original at pixels that deviate significantly from their neighbors, and is

    decreased in relative amplitude elsewhere.
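    The statistical differencing relation can be sketched as follows; the window size, the stabilizing floor on D(j, k), and the border handling are implementation choices, not part of the text's definition.

```python
def statistical_differencing(image, w=1):
    """G(j,k) = F(j,k) / D(j,k): each pixel is divided by the standard
    deviation estimated over its W x W neighborhood, W = 2w + 1. Border
    pixels are skipped for brevity, and a small floor on D keeps the
    division stable in perfectly flat regions."""
    rows, cols = len(image), len(image[0])
    out = [row[:] for row in image]
    n = (2 * w + 1) ** 2
    for j in range(w, rows - w):
        for k in range(w, cols - w):
            win = [image[m][p] for m in range(j - w, j + w + 1)
                   for p in range(k - w, k + w + 1)]
            mean = sum(win) / n
            var = sum((v - mean) ** 2 for v in win) / n
            out[j][k] = image[j][k] / max(var ** 0.5, 1e-6)
    return out

# A pixel that deviates strongly from its neighbors is rescaled by its
# local standard deviation:
print(statistical_differencing([[0, 0, 0], [0, 3, 0], [0, 0, 0]])[1][1])
```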

    4.6 Color Image Enhancement

    The image enhancement techniques discussed previously have all been

    applied to monochrome images. We will now consider the enhancement of

    natural color images and introduce the pseudocolor and false color image

    enhancement methods. Pseudocolor produces a color image from a

    monochrome image, while false color produces an enhanced color image

    from an original natural color image or from multispectral image bands.


    4.6.1 Natural Color Image Enhancement

    The monochrome image enhancement methods described previously can be applied to natural color images by processing each color component individually. This is accomplished by intracomponent and intercomponent processing algorithms.

    Intracomponent Processing: Typically, color images are processed in the

    RGB color space. This approach works quite well for noise cleaning

    algorithms in which the noise is independent between the R, G and B

    components. Edge crispening can also be performed on an intracomponent

    basis, but more efficient results are often obtained by processing in other

    color spaces. Contrast manipulation and histogram modification

    intracomponent algorithms often result in severe shifts of the hue and saturation of color images. Hue preservation can be achieved by using a

    single point transformation for each of the three RGB components. For

    example, form a sum image, and then compute a histogram equalization

    function, which is used for each RGB component.

    Intercomponent Processing: The intracomponent processing algorithms

    previously discussed provide no means of modifying the hue and saturation

    of a processed image in a controlled manner. One means of doing so is to

    transform a source RGB image into a three component image, in which the

    three components form separate measures of the brightness, hue and

    saturation (BHS) of a color image. Ideally, the three components should be

    perceptually independent of one another. Once the BHS components are

    determined, they can be modified by the amplitude scaling methods discussed in Section 4.2.1.
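    A minimal intercomponent sketch, using the standard-library HSV space as a stand-in for the brightness/hue/saturation components the text describes; the function name and gain are illustrative.

```python
import colorsys

def boost_saturation(rgb_pixels, gain=1.3):
    """Intercomponent processing sketch: convert each RGB pixel to HSV,
    scale the saturation component in a controlled way while leaving hue
    and value untouched, and convert back to RGB."""
    out = []
    for r, g, b in rgb_pixels:
        h, s, v = colorsys.rgb_to_hsv(r, g, b)
        out.append(colorsys.hsv_to_rgb(h, min(s * gain, 1.0), v))
    return out

# A gray pixel (zero saturation) is unchanged; a reddish pixel keeps its
# hue and brightness but becomes more saturated.
print(boost_saturation([(0.5, 0.5, 0.5), (0.8, 0.2, 0.2)]))
```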

    4.6.2 Pseudocolor

    Pseudocolor is a color mapping of a monochrome image array which is

    intended to enhance the detectability of detail within the image. The

    pseudocolor mapping of an array is defined as

    R(j, k) = OR{F(j, k)}

    G(j, k) = OG{F(j, k)}

    B(j, k) = OB{F(j, k)}


    where R(j, k) , G(j, k), B(j, k) are display color components and

    OR{F(j, k)}, OG{F(j, k)}, OB{F(j, k)} are linear or nonlinear functional operators. This mapping defines a path in three-dimensional color space

    parametrically in terms of the array F(j, k). Figure 4.11 illustrates the RGB

    color space and two color mappings that originate at black and terminate at

    white. Mapping A represents the achromatic path through all shades of gray;

    it is the normal representation of a monochrome image. Mapping B is a

    spiral path through color space. Another class of pseudocolor mappings

    includes those mappings that exclude all shades of gray. Mapping C, which

    follows the edges of the RGB color cube, is such an example.

    Figure 4.11: Black-to-white and RGB perimeter pseudocolor mappings
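    The three functional operators OR, OG, OB can be sketched for one simple path; this particular "cold-to-hot" mapping is a hypothetical example, not one of the mappings of Figure 4.11.

```python
def pseudocolor(f):
    """Pseudocolor mapping of a monochrome value f in [0, 1] through three
    functional operators: dark values map toward blue, mid-gray toward
    green, and bright values toward red."""
    r = max(0.0, min(1.0, 2.0 * f - 1.0))  # red ramps up in the top half
    g = 1.0 - abs(2.0 * f - 1.0)           # green peaks at mid-gray
    b = max(0.0, min(1.0, 1.0 - 2.0 * f))  # blue dominates the dark end
    return r, g, b

print(pseudocolor(0.0), pseudocolor(0.5), pseudocolor(1.0))
# (0.0, 0.0, 1.0) (0.0, 1.0, 0.0) (1.0, 0.0, 0.0)
```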

    4.6.3 False color

    False color is a point-by-point mapping of an original color image, described by its three primary colors (or of a set of multispectral image planes of a scene), to a color space defined by display tristimulus values that are linear or nonlinear functions of the original image pixel values. A

    common intent is to provide a displayed image with objects possessing

    different or false colors from what might be expected. For example, blue sky in a normal scene might be converted to appear red, and green grass

    transformed to blue. One possible reason for such a color mapping is to

    place normal objects in a strange color world so that a human observer will

    pay more attention to the objects than if they were colored normally.


    Another reason for false color mappings is the attempt to color a normal

    scene to match the color sensitivity of a human viewer. For example, it is known that the luminance response of cones in the retina peaks in the green

    region of the visible spectrum. Thus, if a normally red object is false colored

    to appear green, it may become more easily detectable. Another

    psychophysical property of color vision that can be exploited is the contrast

    sensitivity of the eye to changes in blue light. In some situations it may be

    worthwhile to map the normal colors of objects with fine detail into shades of

    blue.

    In a false color mapping, the red, green and blue display color components are related to the natural or multispectral images Fi by

    RD = OR{F1, F2, ...}

    GD = OG{F1, F2, ...}

    BD = OB{F1, F2, ...}

    where OR{ }, OG{ }, OB{ } are general functional operators. As a simple

    example, the set of red, green and blue sensor tristimulus values (RS = F1, GS = F2, BS = F3) may be interchanged according to the relation

    RD = GS,  GD = BS,  BD = RS

    Green objects in the original will appear red in the display, blue objects will

    appear green and red objects will appear blue. A general linear false color

    mapping of natural color images can be defined as

    [RD]   [m11 m12 m13] [RS]
    [GD] = [m21 m22 m23] [GS]
    [BD]   [m31 m32 m33] [BS]

    where the mij are the coefficients of the mapping.


    This color mapping should be recognized as a linear coordinate conversion

    of colors reproduced by the primaries of the original image to a new set of primaries.
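    The channel interchange described above (display red takes the sensor green, display green takes sensor blue, display blue takes sensor red) can be sketched directly; the function name is illustrative.

```python
def interchange_channels(pixels):
    """False color by tristimulus interchange: RD = GS, GD = BS, BD = RS,
    so green objects display red, blue objects display green, and red
    objects display blue."""
    return [(g, b, r) for (r, g, b) in pixels]

# A pure-green sensor pixel displays as pure red:
print(interchange_channels([(0.0, 1.0, 0.0)]))  # [(1.0, 0.0, 0.0)]
```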

    4.7 Multispectral Image Enhancement

    Enhancement procedures are often performed on multispectral image bands

    of a scene in order to accentuate salient features to assist in subsequent

    human interpretation or machine analysis. These procedures include

    individual image band enhancement techniques, such as contrast

    stretching, noise cleaning and edge crispening, as discussed earlier. Other

    methods involve the joint processing of multispectral image bands.

    Multispectral image bands can be subtracted in pairs according to the

    relation

    Dm,n(j, k) = Fm(j, k) − Fn(j, k)

    in order to accentuate reflectivity variations between the multispectral bands.

    An associated advantage is the removal of any unknown but common bias

    components that may exist. Another simple but highly effective means of

    multispectral image enhancement is the formation of ratios of the image

    bands. The ratio image between the mth and nth multispectral bands is

    defined as

    Rm,n(j, k) = Fm(j, k) / Fn(j, k)

    It is assumed that the image bands are adjusted to have nonzero pixel

    values. In many multispectral imaging systems, the image band Fn( j, k) can

    be modeled by the product of an object reflectivity function Rn( j, k) and an

    illumination function I(j, k) that is identical for all multispectral bands.

    Ratioing of such imagery provides an automatic compensation of the

    illumination factor. The ratio Fm(j, k) / [Fn(j, k) ± ε(j, k)], in which ε(j, k) represents a quantization level uncertainty, can vary considerably if Fn(j, k)

    is small. This variation can be reduced significantly by forming the logarithm

    of the ratios defined by

    Lm,n(j, k) = log Rm,n(j, k) = log Fm(j, k) − log Fn(j, k)
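    The band difference, ratio and log-ratio operations can be sketched together; the epsilon offset stands in for the quantization-uncertainty term, and the toy reflectivity and illumination arrays are illustrative.

```python
import math

def band_difference(fm, fn):
    """D_mn(j,k) = F_m(j,k) - F_n(j,k): accentuates reflectivity variations
    between bands and removes any common bias component."""
    return [[a - b for a, b in zip(ra, rb)] for ra, rb in zip(fm, fn)]

def band_ratio(fm, fn, eps=1e-3):
    """R_mn(j,k) = F_m(j,k) / F_n(j,k); the small offset keeps the ratio
    stable where F_n is small."""
    return [[a / (b + eps) for a, b in zip(ra, rb)] for ra, rb in zip(fm, fn)]

def log_ratio(fm, fn, eps=1e-3):
    """L_mn(j,k) = log F_m(j,k) - log F_n(j,k), which compresses the large
    swings a direct ratio shows when the denominator is small."""
    return [[math.log(a + eps) - math.log(b + eps)
             for a, b in zip(ra, rb)] for ra, rb in zip(fm, fn)]

# With an illumination function identical for both bands, ratioing cancels
# the illumination factor: F_n(j,k) = R_n(j,k) * I(j,k).
reflect_m, reflect_n, illum = [[0.6, 0.2]], [[0.3, 0.4]], [[1.0, 2.0]]
band_m = [[r * i for r, i in zip(ra, ia)] for ra, ia in zip(reflect_m, illum)]
band_n = [[r * i for r, i in zip(ra, ia)] for ra, ia in zip(reflect_n, illum)]
# band_ratio(band_m, band_n) recovers the reflectivity ratios 2.0 and 0.5
# regardless of the illumination values.
```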


    4.8 Summary

    We have studied different image enhancement techniques for manipulating contrast, for noise cleaning and for edge crispening.

    We studied the histogram modification technique for image enhancement.

    We discussed image enhancement techniques for color images and also multispectral image enhancement.

    Self Assessment Questions

    1. Amplitude scaling is a method for ______.

    2. An example of a linear noise cleaning technique is ______.

    3. In linear edge crispening, the masks used possess the property that the sum of their elements is ______.

    4. ______ produces a color image from a monochrome image.

    4.9 Terminal Questions

    1. Discuss the amplitude scaling method for contrast manipulation.

    2. Explain the histogram modification technique.

    3. Explain different linear methods for noise cleaning.

    4. Write a note on the statistical differencing technique for edge crispening.

    5. Discuss different methods for color image enhancement.

    4.10 Answers

    Self Assessment Questions

    1. Contrast Manipulation

    2. Homomorphic Filtering or Convolution method

    3. unity

    4. Pseudo color

    Terminal Questions

    1. Refer Section 4.2.1

    2. Refer Section 4.3

    3. Refer Section 4.4

    4. Refer Section 4.5.2

    5. Refer Section 4.6