

GEOLOGY

Paper: Remote Sensing and GIS

Module: Digital Image Fusion

Subject: Geology
Paper No and Title: Remote Sensing and GIS
Module No and Title: Digital Image Fusion
Module Tag: RS & GIS XI

Principal Investigator: Prof. Talat Ahmad, Vice-Chancellor, Jamia Millia Islamia, Delhi
Co-Principal Investigator: Prof. Devesh K Sinha, Department of Geology, University of Delhi, Delhi
Co-Principal Investigator: Prof. P. P. Chakraborty, Department of Geology, University of Delhi, Delhi

Paper Coordinator: Dr. Atiqur Rahman, Department of Geography, Faculty of Natural Sciences, Jamia Millia Islamia, Delhi
Content Writer: Dr. Iqbal Imam, Aligarh Muslim University, Aligarh
Reviewer: Dr. Atiqur Rahman, Department of Geography, Faculty of Natural Sciences, Jamia Millia Islamia, Delhi


Table of Contents

1. Introduction

2. Concept of image fusion

2.1. At Pixel level

2.2. At Feature level

2.3. At Decision level

3. Objectives of image fusion

4. Image fusion techniques

4.1. Numerical Method

4.1.1. Multiplicative Algorithm

4.1.2. The Brovey transform image fusion technique

4.1.3. Fusion technique based on subtractive method

4.1.4. Wavelet image fusion technique

4.2. Colour-related technique

4.2.1. The intensity-hue-saturation (IHS) image fusion technique

4.3. Statistical Method

4.3.1. Principal Component Analysis (PCA)

4.3.2. Fusion technique based on high-pass filter

4.4. Feature level technique

4.4.1. Ehlers method

5. Applications of image fusion

5.1. Object identification

5.2. Classification

5.3. Change Detection


1. Introduction

In remote sensing, image fusion is the combination of two or more different images

to form a new image by using a certain algorithm to obtain more and better

information about an object or a study area.

Remote sensing image fusion is an effective way to use the large volume of data from multisensor images. Most earth observation satellites, such as SPOT, Landsat 7, IKONOS and QuickBird, provide both panchromatic (PAN) images at a higher spatial resolution and multispectral (MS) images at a lower spatial resolution. Many remote sensing applications, especially GIS-based ones, require both high spatial and high spectral resolution, and an effective image fusion technique can produce such remotely sensed images. Image fusion offers several benefits: wider spatial and temporal coverage, decreased uncertainty, improved reliability, and increased robustness of system performance.

The objective of information fusion is to improve the accuracy of image interpretation and analysis by making use of complementary information. Many image fusion techniques have been developed to merge a PAN image and an MS image into a multispectral image with simultaneously high spatial and spectral resolution. An ideal image fusion technique should satisfy three essential requirements: high computational efficiency, preservation of high spatial resolution, and low colour distortion.

Image fusion is performed at three different processing levels (pixel level, feature level and decision level) according to the stage at which the fusion takes place. In the past few years, many image fusion methods have been proposed, such as


intensity-hue-saturation (IHS), Brovey transform (BT), principal component analysis (PCA) and wavelet transforms.

2. Concept of image fusion

Data fusion is a process dealing with data and information from multiple sources to achieve refined or improved information for decision-making. A general definition of image fusion is: 'Image fusion is the combination of two or more different images to form a new image by using a certain algorithm'. Image fusion is performed at three different processing levels according to the stage at which the fusion takes place:

2.1. At Pixel level

Image fusion at pixel level means fusion at the lowest processing level, referring to the merging of measured physical parameters. It uses raster data that is at least co-registered, but most commonly geocoded. Geocoding plays an essential role because mis-registration causes artificial colours or features in multisensor data sets, which falsify the interpretation. Pixel-level fusion also includes the resampling of image data to a common pixel spacing and map projection.

2.2. At Feature level

Fusion at feature level requires the extraction of objects recognised in the various data sources, e.g. using segmentation procedures. Features correspond to characteristics extracted from the initial images that depend on their environment, such as extent, shape and neighbourhood. Similar objects from multiple sources are assigned to each other and then fused for further assessment using statistical approaches or artificial neural networks (ANN).

2.3. At Decision level

Decision- or interpretation-level fusion represents a method that uses value-added data, in which the input images are processed individually for information extraction. The obtained information is then combined by applying decision rules


to reinforce common interpretation, resolve differences, and furnish a better

understanding of the observed objects.

3. Objectives of image fusion

Image fusion is a tool to combine multisource imagery using advanced image

processing techniques. It aims at the integration of disparate and complementary data

to enhance the information apparent in the images as well as to increase the reliability

of the interpretation. This leads to more accurate data and increased utility. It is also

stated that fused data provides for robust operational performance, i.e., increased

confidence, reduced ambiguity, improved reliability and improved classification.

Image fusion is applied to digital imagery in order to:

sharpen images;
improve geometric corrections;
provide stereo-viewing capabilities for stereo-photogrammetry;
enhance certain features not visible in either of the single data sets alone;
complement data sets for improved classification;
detect changes using multi-temporal data;
substitute missing information in one image with signals from another sensor image; and
replace defective data.

4. Image Fusion Techniques

The standard methods of image fusion are based on Red-Green-Blue (RGB) to

Intensity-Hue-Saturation (IHS) transformation. The usual steps involved in satellite

image fusion are as follows:

1. Resize the low-resolution multispectral image to the same size as the panchromatic image and co-register the two so that they coincide on a pixel-by-pixel basis, taking account of the height variations in the area covered by the data. The data can then be fused using one of the fusion techniques described below.

2. Transform the R, G and B bands of the multispectral image into IHS components.

3. Modify the panchromatic image with respect to the multispectral image. This is usually performed by histogram matching of the panchromatic image with the intensity component of the multispectral image as reference (a sketch of this step follows the list).

4. Replace the intensity component with the panchromatic image and perform the inverse transformation to obtain a high-resolution multispectral image.
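Step 3 above, histogram matching, can be illustrated with a short NumPy sketch based on quantile mapping. This is a minimal, hypothetical implementation written for this module, not a library routine; the function name and array conventions are assumptions.

```python
import numpy as np

def match_histogram(source, reference):
    """Remap `source` grey levels so their distribution matches `reference`.

    Both inputs are 2-D arrays (e.g. PAN and the intensity component);
    returns an array shaped like `source`.
    """
    src_vals, src_idx, src_counts = np.unique(
        source.ravel(), return_inverse=True, return_counts=True)
    ref_vals, ref_counts = np.unique(reference.ravel(), return_counts=True)

    # Empirical CDFs of both images.
    src_cdf = np.cumsum(src_counts) / source.size
    ref_cdf = np.cumsum(ref_counts) / reference.size

    # For each source quantile, look up the reference value at that quantile.
    matched = np.interp(src_cdf, ref_cdf, ref_vals)
    return matched[src_idx].reshape(source.shape)
```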

Following are some methods used for image fusion:

4.1. Numerical Method

4.1.1. Multiplicative Algorithm: The multiplicative transformation is a simple multiplication-based fusion method intended to improve the quality of spatial and spectral information. The fused image reflects the mixed information of the low-resolution and high-resolution images. The fusion algorithm can be written as:

ML_ijk = (XS_ijk x PN_ij)^(1/2)

where ML_ijk is the fused image pixel value, XS_ijk is the pixel value of the multispectral image, and PN_ij is the pixel value of the panchromatic image.

The multiplicative algorithm is the simplest fusion technique. It stretches the histogram of all the MS bands and decreases their standard deviation values. The technique helps in the detection of small targets such as cars and trees, and facilitates the mapping of buildings. However, multiplicative fusion changes the colours of the original images and makes photo-interpretation more difficult: when the blue band is used in a natural-colour combination, the colour of vegetation shifts from green to blue. The multiplicative algorithm improves the spatial resolution of the input MS image, but the resultant image has a darker tone than the input MS image, which results in loss of shadow (Fig. 2).


Fig. 2 (a) Original PAN image, (b) original MS image, (c) MLT-fused image, (d) MB-fused image, (e) HPF-fused image, (f) SFIM-fused image.
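The multiplicative rule ML_ijk = (XS_ijk x PN_ij)^(1/2) translates into a one-line array operation. The sketch below is a hypothetical NumPy rendering of that formula; it assumes the MS image has already been resampled to the PAN grid.

```python
import numpy as np

def multiplicative_fusion(ms, pan):
    """Band-wise geometric mean of MS and PAN: ML = sqrt(XS * PN).

    ms  : (rows, cols, bands) multispectral image on the PAN grid,
    pan : (rows, cols) panchromatic image.
    """
    ms = ms.astype(np.float64)
    pan = pan.astype(np.float64)
    # Broadcast PAN across the band axis and apply the square root.
    return np.sqrt(ms * pan[..., np.newaxis])
```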

4.1.2. The Brovey transform image fusion technique: The Brovey transform (BT) was first introduced by Bob Brovey and is also known as colour-normalised fusion. It is a simple method for merging data from different sensors that preserves the relative spectral contribution of each pixel but replaces its overall brightness with that of the high spatial resolution image. Mathematically, the Brovey transform combines the panchromatic (PAN) and multispectral (MS) images: each MS band is divided by the sum of the MS bands and multiplied by the PAN image. The fused R, G and B images are defined by the following equations:

R_new = [R / (R + G + B)] x PAN   (1)
G_new = [G / (R + G + B)] x PAN   (2)
B_new = [B / (R + G + B)] x PAN   (3)


Many researchers have used the BT to fuse an RGB image with a high-resolution image. BT image fusion has been used to combine Landsat TM and radar SAR (ERS-1) images, and it also gives an improved image when a SPOT MS image is fused with a PAN image. The advantages of the Brovey transform are that it is a simple and fast method for merging data from different sensors, it provides a superior visual, high-resolution multispectral image, and it is very useful for visual interpretation. Its disadvantage is that it ignores the requirement of high-quality synthesis of spectral information and therefore produces spectral distortion. Results of the Brovey transformation are presented in Fig. 3.

Fig. 3: Brovey transformation
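Equations (1) to (3) translate directly into array code. The sketch below is a hypothetical NumPy version; the small `eps` term, added to avoid division by zero in dark pixels, is an assumption not present in the equations.

```python
import numpy as np

def brovey_fusion(r, g, b, pan, eps=1e-6):
    """Colour-normalised (Brovey) fusion of three MS bands with a PAN band."""
    total = r + g + b + eps          # eps guards against division by zero
    r_new = (r / total) * pan        # Eq. (1)
    g_new = (g / total) * pan        # Eq. (2)
    b_new = (b / total) * pan        # Eq. (3)
    return r_new, g_new, b_new
```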

4.1.3. Fusion technique based on subtractive method: Subtractive resolution merge uses a subtractive algorithm to PAN-sharpen multispectral (MS) images. The input consists of overlapping PAN and MS images; the output is an MS image that retains the spectral content of the input MS image while gaining the spatial detail of the PAN image. The algorithm was designed to be fast, user-friendly and radiometrically accurate while still producing quality results for the most common types of merges. Specifically, it was designed for QuickBird, IKONOS and FORMOSAT images in which the PAN and MS data are acquired simultaneously, all four MS bands are present, and the ratio between the MS and PAN pixel sizes is approximately 4:1. The PAN and MS images to be fused with this approach should therefore come from the same system (QuickBird, IKONOS, and so on) and should be acquired simultaneously. By using this technique, fused images with highly preserved spatial and spectral resolution are produced. The output fused image has a lighter tone than the input MS image, which results in the merging of shadow with dark-toned areas.
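The exact subtractive resolution merge algorithm is proprietary to the software that implements it, but the general idea, subtracting a synthetic PAN derived from the MS bands from the observed PAN and injecting the difference as spatial detail, can be sketched as below. Everything here (the bilinear upsampling, the band mean as synthetic PAN) is an illustrative assumption, not the published method.

```python
import numpy as np
from scipy.ndimage import zoom

def subtractive_merge(ms, pan, scale=4):
    """Generic subtractive-style PAN sharpening (illustrative only).

    ms  : (rows, cols, 4) MS image at coarse resolution,
    pan : (rows*scale, cols*scale) PAN image (the assumed 4:1 pixel ratio).
    """
    # Upsample MS to the PAN grid (order=1 -> bilinear interpolation).
    ms_up = zoom(ms.astype(np.float64), (scale, scale, 1), order=1)
    # Synthetic low-resolution PAN: here simply the mean of the MS bands.
    synth_pan = ms_up.mean(axis=2)
    # Spatial detail = observed PAN minus what the MS bands already explain.
    detail = pan.astype(np.float64) - synth_pan
    # Inject the detail into every band.
    return ms_up + detail[..., np.newaxis]
```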

4.1.4. Wavelet image fusion technique: Wavelet theory, developed in the 1980s, is related to multi-resolution analysis (MRA). In fusion it extracts spatial detail from a high-resolution PAN image and adds it to the MS bands. Because of its MRA characteristic, the wavelet transform has in recent years been introduced into the image fusion domain. The MRA rests on the discrete wavelet transform. Wavelets are characterised by two functions: the scaling function f(x) and the wavelet function, or mother wavelet, ψ(x). The mother wavelet undergoes translation and scaling operations to give a self-similar wavelet series. Traditional wavelet-based image fusion decomposes the two input images separately into approximation coefficients and detail coefficients; the detail coefficients of the multispectral image are then replaced with those of the PAN image. The new wavelet coefficients of the multispectral image are transformed with the inverse wavelet transform to obtain the fused multispectral image. The wavelet technique can improve the spatial resolution while preserving the spectral characteristics to a maximum degree. However, because the method discards the low-frequency component of the PAN image completely, the colour in the output fused image does not appear smoothly integrated into the spatial features; details of small objects are lost and the edges of buildings are distorted in the output image. The results of image fusion by wavelet transformation are presented in Fig. 4.

Fig. 4 Wavelets based image fusion
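The decomposition, substitution and reconstruction cycle described above can be sketched with the PyWavelets package. This is a single-level, single-band illustration under the assumption that the MS band has already been resampled to the PAN grid; real implementations typically use several decomposition levels.

```python
import numpy as np
import pywt

def wavelet_fusion(ms_band, pan, wavelet="haar"):
    """Single-level wavelet fusion of one MS band with a co-registered PAN.

    Approximation (low-frequency) coefficients are kept from the MS band;
    detail (high-frequency) coefficients are taken from the PAN image.
    """
    ms_approx, _ms_detail = pywt.dwt2(ms_band.astype(np.float64), wavelet)
    _pan_approx, pan_detail = pywt.dwt2(pan.astype(np.float64), wavelet)
    # Keep the MS approximation, substitute the PAN detail, invert.
    return pywt.idwt2((ms_approx, pan_detail), wavelet)
```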


4.2. Colour-related technique

4.2.1. The intensity-hue-saturation (IHS) image fusion technique: The IHS technique is a standard procedure in image fusion, with the major limitation that only three bands are involved. The IHS fusion technique is used for sharpening; it works best in image analysis for colour enhancement, feature enhancement, improvement of spatial resolution and the fusion of disparate data sets. The technique converts a colour image from the red, green and blue (RGB) space into the IHS colour space. The intensity band (I) is replaced by a high-resolution PAN image, and the result is transformed back into the original RGB space together with the previous hue (H) and saturation (S) bands, giving an IHS-fused image.

Four steps are used in IHS fusion:

1. Transform the red, green and blue (RGB) channels (corresponding to three multispectral bands) into IHS components.

2. Match the histogram of the panchromatic image to the intensity component.

3. Replace the intensity component with the stretched panchromatic image.

4. Inverse-transform the IHS channels back to RGB channels. The resultant colour composite will then have a higher spatial resolution in terms of topographic texture information.


The HSI components can be defined as follows:

I = (R + G + B) / 3   (1)
H = (B - R) / [3(I - R)], S = 1 - R/I, when R = min(R, G, B)   (2)
H = (R - G) / [3(I - G)], S = 1 - G/I, when G = min(R, G, B)   (3)
H = (G - B) / [3(I - B)], S = 1 - B/I, when B = min(R, G, B)   (4)

where I, H and S stand for the intensity, hue and saturation components respectively, and R, G and B are the red, green and blue bands of the multispectral image. The advantages of this technique are that it is a simple method of merging image attributes; it provides high spatial quality and better visual effect, giving good results for the fusion of remote sensing images; it requires neither radiometric corrections nor radiometric enhancements; it does not require the assessment of training areas; and it produces a new data set in which burned areas are well discriminated. The disadvantages are that it produces significant colour distortion with respect to the original image and suffers from artefacts and noise, which tend towards higher contrast. The major limitation is that only three bands are involved (Fig. 5).

Fig. 5 Intensity-hue-saturation image fusion technique
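For the linear intensity model I = (R + G + B)/3 used in equation (1), replacing I with a matched PAN and inverting the transform is algebraically the same as adding (PAN - I) to each band, a shortcut often described as fast IHS fusion. The sketch below uses that shortcut; the mean and standard-deviation match standing in for full histogram matching is an assumption.

```python
import numpy as np

def ihs_fusion(r, g, b, pan):
    """IHS substitution in its additive ('fast IHS') form."""
    intensity = (r + g + b) / 3.0
    # Crude histogram match: align PAN to the intensity's mean and std.
    pan = (pan - pan.mean()) / pan.std() * intensity.std() + intensity.mean()
    # Replacing I by PAN and inverting adds the same offset to each band.
    delta = pan - intensity
    return r + delta, g + delta, b + delta
```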


4.3. Statistical Method

4.3.1. Principal Component Analysis (PCA): Principal component analysis is a mathematical method that transforms correlated variables into uncorrelated ones; the redundancy of an image can be reduced or eliminated using this technique.

PCA fusion is a spatial-domain technique in which pixel values are modified to achieve the final result. Its advantage is that the redundancy of the data is decreased and a large number of inputs is reduced without actual loss of information in the output image. The fusion is accomplished as a weighted average of the images to be fused: the eigenvector associated with the largest eigenvalue of the covariance matrix of the sources is used to obtain the weight for each source image. PCA computes a compact, optimal description of the data set. The first principal component lies along the direction of maximum variance; the second lies in the subspace perpendicular to the first; the third lies along the direction of maximum variance within the subspace perpendicular to the first two, and so on.

The information flow of the PCA-based image fusion algorithm is shown in Fig. 1. The input images (the images to be fused) I1(x, y) and I2(x, y) are arranged as two column vectors and their empirical means are subtracted. The resulting array has dimension n x 2, where n is the length of each image vector. The eigenvectors and eigenvalues of its covariance matrix are computed and the eigenvector corresponding to the larger eigenvalue is taken. The normalised components P1 and P2 (i.e., P1 + P2 = 1) are computed from this eigenvector.
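The flow just described (column vectors, de-meaning, covariance eigenvector, normalised weights with P1 + P2 = 1) maps onto a few lines of NumPy. A minimal sketch, assuming two co-registered single-band images of equal size:

```python
import numpy as np

def pca_fusion(img1, img2):
    """PCA-weighted average of two co-registered images."""
    # Arrange the images as two column vectors (n x 2) and subtract means.
    data = np.stack([img1.ravel(), img2.ravel()], axis=1).astype(np.float64)
    data -= data.mean(axis=0)
    # Eigen-decomposition of the 2 x 2 covariance matrix.
    eig_vals, eig_vecs = np.linalg.eigh(np.cov(data, rowvar=False))
    principal = eig_vecs[:, np.argmax(eig_vals)]
    # Normalise so the weights sum to one (P1 + P2 = 1); assumes the two
    # components share the same sign, as they do for correlated images.
    p1, p2 = principal / principal.sum()
    return p1 * img1 + p2 * img2
```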

4.3.2. Fusion technique based on high-pass filter: The high-pass filter (HPF) image fusion technique combines high-resolution PAN data with lower-resolution MS data, producing an output with both excellent spatial detail and a realistic representation of the spectral content of the original MS scene. The process involves convolving the high-resolution data with a high-pass filter and then combining the result with the lower-resolution MS data. The general steps of the HPF algorithm, sketched in code after this list, are as follows:

1. Read the pixel sizes from the image files and calculate R, the ratio of the MS cell size to the high-resolution cell size.

2. High-pass filter the high spatial resolution image.

3. Resample the MS image to the pixel size of the high-pass image.

4. Add the HPF image to each MS band, weighting it relative to the global standard deviation of that MS band.

5. Stretch the new MS image to match the mean and standard deviation of the original (input) MS image.

The output fused image has a lighter tone than the input MS image. In addition, the texture of the image is very smooth and the sharpness is reduced, which results in less clarity of objects.
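The five steps can be sketched with NumPy and SciPy. The box filter standing in for the high-pass convolution kernel, and the `kernel` and `weight` values, are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter, zoom

def hpf_fusion(ms, pan, ratio=4, kernel=5, weight=0.5):
    """High-pass-filter fusion following the steps listed above.

    ms : (rows, cols, bands) at coarse resolution; pan : fine-resolution PAN.
    """
    pan = pan.astype(np.float64)
    # Step 2: high-pass = original minus a low-pass (box-filtered) version.
    hp = pan - uniform_filter(pan, size=kernel)
    # Step 3: resample the MS image to the PAN pixel size (bilinear).
    ms_up = zoom(ms.astype(np.float64), (ratio, ratio, 1), order=1)
    fused = np.empty_like(ms_up)
    for k in range(ms_up.shape[2]):
        band = ms_up[:, :, k]
        # Step 4: add the HPF image, weighted by the band's std deviation.
        out = band + hp * (weight * band.std() / hp.std())
        # Step 5: stretch back to the input band's mean and std deviation.
        fused[:, :, k] = (out - out.mean()) / out.std() * band.std() + band.mean()
    return fused
```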

4.4. Feature level technique

4.4.1. Ehlers method: The Ehlers fusion algorithm was developed by Prof. Manfred Ehlers. It is based on an IHS transform coupled with adaptive filtering in the Fourier domain, and it is extended to more than three bands by applying multiple IHS transforms until the number of bands is exhausted. Using the fast Fourier transform (FFT), the spatial components to be enhanced or suppressed can be accessed directly: the panchromatic spectrum is high-pass filtered and the intensity spectrum is low-pass filtered. After filtering, both images are transformed back into the spatial domain with an inverse FFT and added together to form a fused intensity component carrying the high-frequency information of the high-resolution PAN image and the low-frequency information of the low-resolution MS image. As the last step, an inverse IHS transformation produces a fused RGB image. These steps can be repeated with successive three-band selections until all bands are fused with the PAN image. The Ehlers fusion shows the best spectral preservation but also takes the longest computation time. It improves the resolution of the fused image but decreases the tonal variance of the result; some buildings are found to be merged with other objects, while trees appear clearer in the fused image.
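The Fourier-domain step of the Ehlers method, low-pass filtering the intensity and high-pass filtering the PAN before recombining them, can be illustrated with NumPy's FFT. The ideal (hard-cutoff) filter mask below is a simplifying assumption; the actual method uses adaptive filter design.

```python
import numpy as np

def fourier_detail_swap(intensity, pan, cutoff=0.15):
    """FFT low-pass on the intensity, high-pass on PAN, then recombine.

    `cutoff` is an illustrative normalised frequency radius.
    """
    rows, cols = intensity.shape
    fy = np.fft.fftfreq(rows)[:, None]
    fx = np.fft.fftfreq(cols)[None, :]
    low_pass = (np.hypot(fy, fx) <= cutoff).astype(float)  # ideal mask

    i_low = np.fft.ifft2(np.fft.fft2(intensity) * low_pass).real
    p_high = np.fft.ifft2(np.fft.fft2(pan) * (1.0 - low_pass)).real
    # Fused intensity: MS low frequencies plus PAN high frequencies.
    return i_low + p_high
```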

5. Applications of Image Fusion

Following are some applications of image fusion:

5.1. Object identification

Image fusion increases the capability for enhancing features. The feature enhancement capability of image fusion is visually apparent in VIR/VIR combinations, which often result in images that are superior to the original data. Fused images are therefore useful products for maximising the amount of information extracted from satellite image data.

5.2. Classification

The new image obtained after fusion can increase classification accuracy. Classification is one of the key tasks of remote sensing applications, and the classification accuracy of remote sensing images is improved when multiple-source image data are introduced into the processing. Images from microwave and optical sensors offer complementary information that helps in discriminating the different classes.

5.3. Change Detection

Change detection is the process of identifying differences in the state of an object or phenomenon by observing it at different times. Image fusion plays a greater role here by making the objects clearer. Image fusion for change detection takes advantage of the different configurations of the platforms carrying the sensors; the combination of temporal images of the same place enhances information on changes that may have occurred in the observed area.
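As a minimal illustration, two co-registered, radiometrically comparable images of the same area taken at different times (for example, two fused products) can simply be differenced and thresholded. The threshold value is an assumption that depends on the data.

```python
import numpy as np

def change_mask(img_t1, img_t2, threshold=0.1):
    """Flag pixels whose value changed by more than `threshold` between dates."""
    diff = np.abs(img_t2.astype(np.float64) - img_t1.astype(np.float64))
    return diff > threshold
```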

Frequently Asked Questions

Q1. What do you mean by image fusion in remote sensing?

Ans: In remote sensing, image fusion is the combination of two or more different

images to form a new image by using a certain algorithm to obtain more and

better information about an object or a study area. The objective of information fusion is to improve the accuracy of image interpretation and analysis. Merging a PAN image of high spatial resolution with an MS image of low spatial resolution to obtain a multispectral image with simultaneously high spatial and spectral resolution is a typical example of image fusion in remote sensing.


Q2. How is image fusion done? Discuss the standard method in brief.

Ans: The standard methods of image fusion are based on the Red-Green-Blue (RGB) to Intensity-Hue-Saturation (IHS) transformation. The following steps are usually taken in satellite image fusion:

1. Resize the low-resolution multispectral image to the same size as the panchromatic image and co-register the two so that they coincide on a pixel-by-pixel basis, taking account of the height variations in the area covered by the data. The data can then be fused using one of the fusion techniques described in this module.

2. Transform the R, G and B bands of the multispectral image into IHS components.

3. Modify the panchromatic image with respect to the multispectral image. This is usually performed by histogram matching of the panchromatic image with the intensity component of the multispectral image as reference.

4. Replace the intensity component with the panchromatic image and perform the inverse transformation to obtain a high-resolution multispectral image.

Q3. What are the applications of image fusion in remote sensing?

Ans: Following are the main applications of image fusion in remote sensing:

Object identification: Image fusion enhances features; this enhancement is visually apparent in VIR/VIR combinations, and the new fused images are often superior to the original data.

Classification: The new image obtained after fusion increases classification accuracy. The classification accuracy of remote sensing images is improved when multiple-source image data are introduced into the processing; images from microwave and optical sensors offer complementary information that helps in discriminating the different classes.

Change detection: Change detection is the process of identifying differences in the state of an object or phenomenon by observing it at different times. Image fusion plays a greater role here by making the objects clearer. Image fusion for change detection takes advantage of the different configurations of the platforms carrying the sensors; the combination of temporal images of the same place enhances information on changes that may have occurred in the observed area.

Q4. Discuss the Principal Component Analysis (PCA) technique used for image fusion in remote sensing.

Ans: Principal component analysis (PCA) is a mathematical method that transforms correlated variables into uncorrelated ones. It is a spatial-domain technique of image fusion in which pixel values are modified to achieve the final result. The fusion is accomplished as a weighted average of the images to be fused: the eigenvector associated with the largest eigenvalue of the covariance matrix of the sources is used to obtain the weight for each source image. PCA computes a compact, optimal description of the data set in terms of (i) the first, (ii) the second and (iii) the third principal components.

The figure shows the flow diagram of the PCA-based image fusion algorithm. The input images (the images to be fused) I1(x, y) and I2(x, y) are arranged as two column vectors and their empirical means are subtracted. The resulting array has dimension n x 2, where n is the length of each image vector. The eigenvectors and eigenvalues of its covariance matrix are computed and the eigenvector corresponding to the larger eigenvalue is taken. The normalised components P1 and P2 (i.e., P1 + P2 = 1) are computed from this eigenvector.


Q5. How is image fusion performed at the pixel processing level?

Ans: Images can be fused at the pixel, feature and decision levels. Image fusion at pixel level means fusion at the lowest processing level, referring to the merging of measured physical parameters. It uses raster data that is at least co-registered but most commonly geocoded. Geocoding plays an essential role because mis-registration causes artificial colours or features in multisensor data sets, which falsify the interpretation. Pixel-level fusion also includes the resampling of image data to a common pixel spacing and map projection.

Multiple Choice Questions

1. Images can be fused at the level of

(a) Pixel

(b) Feature

(c) Decision

(d) All of the above

Ans: d

2. Which of the following statements is not true with respect to image fusion?

(a) Image fusion does not require registration of images.

(b) Transformation of R, G and B bands of the multispectral image into IHS

components.

(c) Replacement of the intensity component by the panchromatic image and

performing inverse transformation to obtain a high resolution

multispectral image.

(d) Fusion of data by using one of the fusion techniques.

Ans: a

3. In the subtractive method of image fusion, the ratio between the MS and PAN image pixel sizes is considered to be approximately

(a) 4:1

(b) 5:1

(c) 1:4

(d) All of the above

Ans: a


4. Which of the following statements is not true with respect to IHS?

(a) Transform the red, green, and blue (RGB) channels to IHS components.

(b) No need to match the histogram of the panchromatic image with the

intensity component.

(c) Replace the intensity component with the stretched panchromatic image

(d) Inverse transform IHS channels to RGB channels

Ans: b

5. In which of the following techniques is Fourier-domain filtering needed while doing image fusion?

(a) The intensity-hue-saturation (IHS) image fusion technique

(b) Principal Component Analysis (PCA)

(c) Ehlers method

(d) Wavelet image fusion technique

Ans: c

Suggested Readings:

1. Dahiya, S., Garg, P. K., & Jat, M. K. (2013). A comparative study of various

pixel-based image fusion techniques as applied to an urban

environment. International Journal of Image and Data Fusion, 4(3), 197-213.

2. Sahu, D. K., & Parsai, M. P. (2012). Different image fusion techniques–a

critical review. International Journal of Modern Engineering Research

(IJMER), 2(5), 4298-4301.

3. Helmy, A. K., Nasr, A. H., & El-Taweel, G. S. (2010). Assessment and

evaluation of different data fusion techniques. International Journal of

Computers, 4(4), 107-115.

4. Kaur, A., & Khullar, S. (2013). Image Fusion using HIS, PCA and Wavelet

Technique. International Journal of Computer Science and Communication

Engineering, 2(2).


5. Lemeshewsky, G. P. (2002, July). Multispectral image sharpening using a

shift-invariant wavelet transform and adaptive processing of multiresolution

edges. In Aero Sense 2002 (pp. 189-200). International Society for Optics and

Photonics.

6. Prasad, N., Saran, S., Kushwaha, S. P. S., & Roy, P. S. (2001). Evaluation of

various image fusion techniques and imaging scales for forest features

interpretation. Current Science, 1218-1224.

7. Zhang, Y. (2004). Understanding image fusion. Photogrammetric Engineering and Remote Sensing, 70(6), 657-661.