
IET Image Processing

Research Article

Pansharpening scheme using filtering in two-dimensional discrete fractional Fourier transform

ISSN 1751-9659
Received on 8th September 2017
Revised 22nd November 2017
Accepted on 19th January 2018
E-First on 15th February 2018
doi: 10.1049/iet-ipr.2017.0961
www.ietdl.org

Nidhi Saxena1 , Kamalesh K. Sharma1

1Department of Electronics and Communication Engineering, Malaviya National Institute of Technology, Jaipur 302017, Rajasthan, India E-mail: [email protected]

Abstract: The aim of the pansharpening scheme is to improve the spatial information of multispectral images using the panchromatic (PAN) image. In this study, a novel pansharpening scheme based on the two-dimensional discrete fractional Fourier transform (2D-DFRFT) is proposed. In the proposed scheme, the PAN and intensity images are transformed using the 2D-DFRFT and filtered by highpass filters, respectively. The filtered images are inverse transformed and further used to generate the pansharpened image using an appropriate fusion rule. The additional degree of freedom in terms of the angle parameters associated with the 2D-DFRFT is exploited for obtaining better results in the proposed pansharpening scheme. Simulation results of the proposed technique, carried out in MATLAB, are presented for IKONOS and GeoEye-1 satellite images and compared with existing fusion methods in terms of both visual observation and quality metrics. It is seen that the proposed pansharpening scheme has improved spectral and spatial resolution as compared to the existing schemes.

1 Introduction

Many remote sensing satellites, such as SPOT, IRS, Landsat 7, IKONOS, QuickBird, Pleiades and WorldView-2, capture large numbers of images for topographic mapping and map updating, land use, agriculture and forestry, flood monitoring, ice and snow monitoring and so on. These images can be categorised into multispectral (MS) and panchromatic (PAN) images. The MS images have high spectral resolution and low spatial resolution, while the PAN image has high spatial resolution and low spectral resolution of the same captured area [1]. Due to technical constraints, it is difficult to obtain an image directly from the satellite sensors that has both high spatial and high spectral resolution of the captured area [2]. Many pansharpening schemes have been proposed for achieving the goal of high spatial and spectral resolution in a single image [3]. Pansharpening refers to the fusion of information derived from PAN and MS images captured simultaneously over the same area [4, 5]. Pansharpening methods are most commonly classified into four categories: component substitution (CS), multiresolution analysis (MRA), hybrid and model-based methods [3]. CS-based pansharpening methods based on intensity-hue-saturation (IHS) transformation, Gram–Schmidt (GS) orthogonalisation, principal component analysis and so on are discussed in [6–8]. MRA-based pansharpening schemes using Hilbert vibration decomposition (HVD), wavelet packets and the non-subsampled shearlet transform have been presented in [9–11]. Hybrid pansharpening methods make use of both CS and MRA techniques, as discussed in [12, 13]. Model-based pansharpening methods built on online coupled dictionary learning, spatial correlation modelling, MRF models and compressive sensing are discussed in [14–17]. The effect of aliasing and mis-registration on pansharpened imagery is discussed in depth in [18].

Recently, an HVD-based pansharpening scheme was proposed [9]. This method decomposes the PAN image into components of decreasing energy, described in terms of instantaneous amplitude and frequency, and the highest-energy component of the PAN image provides instantaneous-frequency-based lowpass filtering.

A fast Fourier transform (FFT)-based pansharpening scheme was proposed in [19]. This method is based on the IHS transform with FFT filtering of both the PAN image and the intensity component of the original MS images. One of the drawbacks of this algorithm is that the original MS images must contain only three bands, which restricts its general applicability to a few cases such as LANDSAT and SPOT satellite images, whose MS sensors have three bands in the visible spectrum and a fourth band in the infrared. The infrared sensor has a spatial resolution much lower than the three visible sensors, and it is usually discarded for tasks related to the production of visual products or applications [12]. It may be mentioned here that, due to the finite spatial size of the MS and PAN images, these images will not be band limited in the FFT domain, and therefore FFT-based pansharpening may not be suitable in such cases. In addition, the PAN and MS images collected through different sensors in the satellite environment suffer from different types of noise which may not be stationary in nature. Therefore, the conventional Fourier transform is not suitable for handling such non-stationary signals and noise [20, 21]. The fractional Fourier transform (FRFT) is known to handle such non-stationary noise more effectively than the conventional Fourier transform [22]. It would, therefore, be interesting to investigate the intermediate domains (known as FRFT domains) between the spatial and FFT domains for pansharpening. The FRFT is a generalised version of the conventional Fourier transform. The angle parameters of the two-dimensional discrete FRFT (2D-DFRFT) can be varied to provide infinitely many representations of the given signal in different 2D-DFRFT domains, each corresponding to a different value of the angle parameters. The 2D-DFRFT thus provides an additional degree of freedom in terms of its angle parameters. The 2D-DFRFT has been applied in many image processing applications [23–31], but the use of the discrete FRFT (DFRFT) in pansharpening has not been investigated so far.

In this paper, we extend the pansharpening scheme using the conventional Fourier transform presented in [19] to the DFRFT domains. Simulations are carried out on IKONOS and GeoEye-1 satellite datasets. The results are compared with existing pansharpening schemes based on HVD (HVD_F) [9], the optimal filter (OMF) [32], adaptive IHS (AIHS) [6], the generalised Laplacian pyramid with modulation transfer function matched filter [33], the decimated wavelet transform using an additive injection model (Indusion) [34] and the à trous wavelet transform using model 2 (ATWTM2) [35]. It is seen that the proposed scheme improves the quality of the fused image as compared to the existing schemes, which may be attributed to the fractional-domain filtering provided by the 2D-DFRFT-based approach. We have also investigated the effects of aliasing and mis-registration on our proposed method and compared it with other existing pansharpening methods using the methodology discussed in [18].

The rest of the paper is organised as follows. In Section 2, the details of the FRFT are explained; Section 3 provides the details of the proposed pansharpening scheme; Section 4 describes the simulation results and a comparative analysis of the proposed scheme with existing schemes. Conclusions are drawn in Section 5.

2 Review of FRFT

The FRFT is a generalisation of the conventional Fourier transform [36] and has been applied in different applications [25–27]. The FRFT is a linear operator which provides a representation of the signal along an axis making an angle α with the time axis [37], and it reduces to the conventional Fourier transform for an angle parameter equal to π/2.

Algorithms for the 2D-DFRFT have also been proposed [36] and give results similar to those of the 2D continuous FRFT. The 2D-DFRFT of an input image h(i, j) of size U × V is computed as [36]

$$H_{\alpha,\beta}(m,n) = \mathcal{F}_{\alpha,\beta}[h(i,j)] = \sum_{i=0}^{U-1} \sum_{j=0}^{V-1} h(i,j)\, \mathcal{R}_{\alpha,\beta}(i,j,m,n) \qquad (1)$$

where $\mathcal{R}_{\alpha,\beta}(i,j,m,n)$ is the 2D transform kernel defined as [38, 39]

$$\mathcal{R}_{\alpha,\beta} = \mathcal{R}_{\alpha} \otimes \mathcal{R}_{\beta}, \qquad (2)$$

where $\mathcal{R}_{\alpha}$ and $\mathcal{R}_{\beta}$ are the 1D-DFRFT transformation matrices and the symbol $\otimes$ denotes the Kronecker product of matrices given in [38, 39]. The angle parameters α and β can be varied to obtain a better signal representation.

For α = β = π/2, the 2D-DFRFT reduces to the conventional 2D discrete Fourier transform, while for α = β = 0, the 2D-DFRFT returns the original signal itself.

The inverse 2D-DFRFT (2D-IDFRFT) is computed as [36]

$$h(i,j) = \sum_{m=0}^{U-1} \sum_{n=0}^{V-1} H_{\alpha,\beta}(m,n)\, \mathcal{R}_{-\alpha,-\beta}(i,j,m,n). \qquad (3)$$
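Because the kernel in (2) is separable, (1) and (3) amount in practice to multiplying the image on each side by a 1D-DFRFT matrix. The paper's simulations are carried out in MATLAB; the Python sketch below is only illustrative. It uses a generic eigendecomposition-style construction of the 1D matrix (in the spirit of [24]) as a stand-in for the Pei–Yeh kernel of [36, 38, 39], so the exact kernel values may differ from the authors' implementation; the helper names `dfrft_matrix`, `dfrft2` and `idfrft2` are ours, not the paper's.

```python
import numpy as np

def dfrft_matrix(N, alpha):
    # Illustrative 1D DFRFT matrix of angle alpha (alpha = a*pi/2), built from a
    # real eigenbasis of the DFT obtained via a commuting tridiagonal-like matrix
    # and a Hermite-style zero-crossing ordering (cf. [24]). This stands in for
    # the Pei-Yeh kernel of [36, 38, 39] and is an assumption, not the paper's code.
    n = np.arange(N)
    S = np.diag(2.0 * np.cos(2.0 * np.pi * n / N))
    S += np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1)
    S[0, -1] = S[-1, 0] = 1.0                      # matrix commuting with the DFT
    _, V = np.linalg.eigh(S)                       # real orthonormal eigenvectors
    order = np.argsort([np.count_nonzero(np.diff(np.sign(v)) != 0) for v in V.T])
    V = V[:, order]                                # sort by zero-crossing count
    k = np.arange(N, dtype=float)
    if N % 2 == 0:
        k[-1] = N                                  # usual even-length index convention
    return (V * np.exp(-1j * k * alpha)) @ V.T     # V diag(e^{-j k alpha}) V^T

def dfrft2(img, alpha, beta):
    # Separable 2D-DFRFT of eqs. (1)-(2): the Kronecker kernel R_alpha (x) R_beta
    # amounts to applying the 1D matrices along the two image dimensions.
    U, W = img.shape
    return dfrft_matrix(U, alpha) @ img @ dfrft_matrix(W, beta).T

def idfrft2(spec, alpha, beta):
    # Inverse 2D-DFRFT of eq. (3): the same operation with angles (-alpha, -beta).
    return dfrft2(spec, -alpha, -beta)

# Sanity checks: the transform is invertible, and alpha = beta = 0 returns the image.
img = np.random.rand(64, 64)
a = 0.98 * np.pi / 2
assert np.allclose(idfrft2(dfrft2(img, a, a), a, a), img, atol=1e-8)
assert np.allclose(dfrft2(img, 0.0, 0.0), img, atol=1e-8)
```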

3 Proposed pansharpening scheme

The images captured from different sensors in remote sensing applications may suffer from spatial and spectral distortion problems [40]. Some pansharpening schemes improve the spatial quality and reduce the spectral distortion in the pansharpened image obtained from the MS and PAN images by performing filtering in the time domain or in joint time–frequency domains [35, 41]. These filtering-based pansharpening schemes, however, assume that the image is band limited in the conventional Fourier domains, whereas the finite size of the acquired images prevents this assumption from holding true. In addition, the PAN and MS images collected through different sensors in the satellite environment suffer from different types of noise which may not be stationary in nature. Therefore, the conventional Fourier transform is not suitable for handling such non-stationary signals and noise [20, 21]. The FRFT is known to handle such non-stationary noise more effectively than the conventional Fourier transform [22]. The angle parameters of the 2D-DFRFT can be varied to provide infinitely many representations of the given signal in different DFRFT domains, each corresponding to a different value of the angle parameters, so the DFRFT provides an additional degree of freedom. Therefore, it is interesting to extend the filtering-based pansharpening approach to the intermediate domains called FRFT domains.

In this section, we present a generalisation of the FFT-based pansharpening scheme proposed in [19]. The proposed method is based on 2D-DFRFT filtering of both the PAN image and the intensity component of the original MS images.

The block diagram of the proposed method is given in Fig. 1. In this method, the low-resolution MS input images are up-sampled to the size of the PAN image using the interpolation scheme described in [42]. These up-sampled MS images are converted into the intensity image (I) using an averaging operation. The intensity image I is transformed using the 2D-DFRFT defined in (1) as given below:

$$I_{\alpha_1,\beta_1}(m,n) = \sum_{i=0}^{U-1} \sum_{j=0}^{V-1} I(i,j)\, \mathcal{R}_{\alpha_1,\beta_1}(i,j,m,n), \qquad (4)$$

where $I_{\alpha_1,\beta_1}$ is the 2D-DFRFT domain representation of the image $I$ corresponding to the angles $\alpha_1 = a_1\pi/2$ and $\beta_1 = b_1\pi/2$.
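The up-sampling and intensity-averaging steps that produce I can be sketched as follows, using SciPy's cubic-spline zoom as a stand-in for the shift-free interpolation of [42] (the exact interpolator used by the authors is not reproduced here, and the function names are illustrative).

```python
import numpy as np
from scipy.ndimage import zoom

def upsample_ms(ms, scale):
    # ms: (bands, H, W) low-resolution MS cube. Cubic-spline zoom (order=3)
    # stands in for the shift-free interpolation of [42]; bands are not zoomed.
    return zoom(ms, (1, scale, scale), order=3)

def intensity_image(ms_up):
    # Intensity image I: plain average of the up-sampled MS bands.
    return ms_up.mean(axis=0)

# Example: a 4-band MS cube at 1/4 of the PAN resolution (IKONOS/GeoEye-1 ratio).
ms = np.random.rand(4, 80, 80)
ms_up = upsample_ms(ms, 4)      # -> shape (4, 320, 320)
I = intensity_image(ms_up)      # -> shape (320, 320)
```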

The image $I_{\alpha_1,\beta_1}$ given in (4) is filtered by the highpass filter, and the filtered image $\hat{I}_{\alpha_1,\beta_1}$ is then transformed using the 2D-IDFRFT given in (3) to obtain

$$I'_{\alpha_1,\beta_1}(i,j) = \sum_{m=0}^{U-1} \sum_{n=0}^{V-1} \hat{I}_{\alpha_1,\beta_1}(m,n)\, \mathcal{R}_{-\alpha_1,-\beta_1}(m,n,i,j), \qquad (5)$$

where $I'_{\alpha_1,\beta_1}$ is the 2D-IDFRFT of the filtered image $\hat{I}_{\alpha_1,\beta_1}$.

The high-resolution PAN image P is transformed using the 2D-DFRFT given in (1) to obtain

$$P_{\alpha_1,\beta_1}(m,n) = \sum_{i=0}^{U-1} \sum_{j=0}^{V-1} P(i,j)\, \mathcal{R}_{\alpha_1,\beta_1}(i,j,m,n), \qquad (6)$$

where $P_{\alpha_1,\beta_1}$ is the 2D-DFRFT domain representation of the image $P$ corresponding to the angles $\alpha_1$ and $\beta_1$.

The image $P_{\alpha_1,\beta_1}$ given in (6) is filtered by the highpass filter, and the filtered image $\hat{P}_{\alpha_1,\beta_1}$ is then transformed using the 2D-IDFRFT defined in (3) as given below:

$$P'_{\alpha_1,\beta_1}(i,j) = \sum_{m=0}^{U-1} \sum_{n=0}^{V-1} \hat{P}_{\alpha_1,\beta_1}(m,n)\, \mathcal{R}_{-\alpha_1,-\beta_1}(m,n,i,j), \qquad (7)$$

where $P'_{\alpha_1,\beta_1}$ is the 2D-IDFRFT of the filtered image $\hat{P}_{\alpha_1,\beta_1}$.

Fig. 1  Block diagram of the proposed pansharpening method


The high-frequency filtered image $P'_{\alpha_1,\beta_1}$ given in (7) and the image $I'_{\alpha_1,\beta_1}$ given in (5) are added to obtain the new image $I''_{\alpha_1,\beta_1}$, as given by

$$I''_{\alpha_1,\beta_1}(i,j) = P'_{\alpha_1,\beta_1}(i,j) + I'_{\alpha_1,\beta_1}(i,j). \qquad (8)$$

In (8), the image $I''_{\alpha_1,\beta_1}$ carries spatial information from both the PAN and MS images.

Using the image $I''_{\alpha_1,\beta_1}$, we propose the following pansharpening rule:

$$\mathrm{MS}(\alpha_1,\beta_1)_r = \mathrm{MS}_r + \beta G_r I''_{\alpha_1,\beta_1}, \quad r = 1, 2, \ldots, N, \qquad (9)$$

where $\mathrm{MS}(\alpha_1,\beta_1)_r$ is the high spatial and spectral resolution pansharpened image, $\mathrm{MS}_r$ is the $r$th interpolated MS image out of the total of $N$ band images, $\beta$ is a tuning factor obtained through simulation trials, and $G_r$ are the injection coefficients obtained from the regression between each MS band image and the image $I''_{\alpha_1,\beta_1}$ [40]. The injection coefficients are calculated as [43]

$$G_r = \frac{\mathrm{cov}(\mathrm{MS}_r,\, I''_{\alpha_1,\beta_1})}{\mathrm{var}(I''_{\alpha_1,\beta_1})}, \quad r = 1, \ldots, N, \qquad (10)$$

where $\mathrm{var}(I''_{\alpha_1,\beta_1})$ is the variance of the image $I''_{\alpha_1,\beta_1}$ and $\mathrm{cov}(\mathrm{MS}_r, I''_{\alpha_1,\beta_1})$ is the covariance between the images $\mathrm{MS}_r$ and $I''_{\alpha_1,\beta_1}$.

Algorithm 1: 2D-DFRFT-based pansharpening algorithm

1: Obtain up-sampled MS images of the size of the PAN image;
2: Obtain the IHS images from the up-sampled MS images by RGB-to-IHS conversion;
3: Compute the image $I_{\alpha_1,\beta_1}$ using the 2D-DFRFT of the $I$ image corresponding to the angles $\alpha_1$ and $\beta_1$;
4: Compute the image $\hat{I}_{\alpha_1,\beta_1}$ using lowpass filtering;
5: Obtain the image $I'_{\alpha_1,\beta_1}$ using the 2D-IDFRFT of the $\hat{I}_{\alpha_1,\beta_1}$ image;
6: Obtain the histogram-matched image $P$ using the input PAN and $I$ images;
7: Compute the image $P_{\alpha_1,\beta_1}$ using the 2D-DFRFT of the $P$ image corresponding to the angles $\alpha_1$ and $\beta_1$;
8: Compute the image $\hat{P}_{\alpha_1,\beta_1}$ using highpass filtering;
9: Obtain the image $P'_{\alpha_1,\beta_1}$ using the 2D-IDFRFT of the $\hat{P}_{\alpha_1,\beta_1}$ image;
10: Obtain $I''_{\alpha_1,\beta_1}$ by adding the $I'_{\alpha_1,\beta_1}$ and $P'_{\alpha_1,\beta_1}$ images;
11: Calculate the injection coefficients $G_r$;
12: Obtain the high spatial and spectral resolution pansharpened image $\mathrm{MS}(\alpha_1,\beta_1)_r$ using the pansharpening rule

$$\mathrm{MS}(\alpha_1,\beta_1)_r = \mathrm{MS}_r + \beta G_r I''_{\alpha_1,\beta_1}, \quad r = 1, 2, \ldots, N,$$

where β is the tuning factor.
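For illustration, the sketch below strings the steps of Algorithm 1 and eqs. (4)–(10) together in Python. It assumes the `dfrft2`/`idfrft2` helpers sketched in Section 2, takes the two precomputed frequency-domain filter masks as inputs (e.g. the Butterworth masks described in Section 4), and uses a simple mean/variance match as a stand-in for the histogram matching of step 6; all function names and the exact matching rule are our assumptions, not the authors' code.

```python
import numpy as np

def histogram_match(pan, ref):
    # Mean/variance match of PAN to the intensity image; a simple stand-in for
    # the histogram matching of step 6 (assumption).
    return (pan - pan.mean()) * (ref.std() / pan.std()) + ref.mean()

def dfrft_filter(img, mask, alpha, beta):
    # Eqs. (4)-(7): forward 2D-DFRFT, pointwise filtering by `mask`, inverse
    # 2D-DFRFT. Relies on dfrft2/idfrft2 from the Section 2 sketch; the small
    # imaginary residue of the inverse transform is discarded.
    return np.real(idfrft2(dfrft2(img, alpha, beta) * mask, alpha, beta))

def pansharpen(ms_up, pan, alpha, beta, mask_i, mask_p, beta_tune):
    # ms_up: (N, H, W) up-sampled MS bands; pan: (H, W) PAN image.
    I = ms_up.mean(axis=0)                              # intensity by band averaging
    P = histogram_match(pan, I)                         # step 6
    I_f = dfrft_filter(I, mask_i, alpha, beta)          # steps 3-5, eq. (5)
    P_f = dfrft_filter(P, mask_p, alpha, beta)          # steps 7-9, eq. (7)
    detail = P_f + I_f                                  # eq. (8)
    fused = np.empty_like(ms_up, dtype=float)
    for r, band in enumerate(ms_up):                    # eqs. (9)-(10)
        g = np.cov(band.ravel(), detail.ravel())[0, 1] / np.var(detail)
        fused[r] = band + beta_tune * g * detail
    return fused
```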

4 Simulation results

To test the proposed 2D-DFRFT-based pansharpening scheme, datasets collected by the IKONOS and GeoEye-1 satellites are used. The site selected for the IKONOS satellite is China-Sichuan (58208_0000000.20001108), collected in May 2008, and for the GeoEye-1 satellite it is Hobart, Tasmania, Australia, collected in February 2009. The spatial resolutions of the MS and PAN images are 4 and 1 m for IKONOS and 2 and 0.5 m for GeoEye-1, respectively. Twenty image pairs from each satellite dataset are used for the simulations, in which the sizes of the MS and PAN images of each pair are 320 × 320 and 1280 × 1280, respectively.

For obtaining the pansharpening results, we follow Wald's protocol [44], in which the PAN and MS images are degraded to a lower resolution so that the pansharpened image can be compared with the reference original MS images [40]. The quality metrics used for evaluating the pansharpened images in this paper are the Q-index (Q4) [45], the spectral angle mapper (SAM) [46], the relative dimensionless global error (ERGAS) [44] and quality with no reference (QNR). The QNR is composed of a spectral (Dλ) and a spatial (DS) distortion index and does not require a high-resolution reference MS image [47].
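For reference, minimal implementations of two of these metrics, SAM [46] and ERGAS [44], following their standard definitions, are sketched below (Q4 and QNR are more involved and are not reproduced here); the MS cubes are assumed to be stacked as (bands, height, width).

```python
import numpy as np

def sam_degrees(ref, fused, eps=1e-12):
    # Spectral angle mapper [46]: mean angle (degrees) between reference and
    # fused spectral vectors, taken pixel by pixel. Inputs: (bands, H, W).
    num = np.sum(ref * fused, axis=0)
    den = np.linalg.norm(ref, axis=0) * np.linalg.norm(fused, axis=0) + eps
    return np.degrees(np.mean(np.arccos(np.clip(num / den, -1.0, 1.0))))

def ergas(ref, fused, ratio=1.0 / 4.0):
    # ERGAS [44]: 100 * (d_h / d_l) * sqrt(mean_b[(RMSE_b / mu_b)^2]), where
    # d_h / d_l is the PAN-to-MS resolution ratio (1/4 for IKONOS and GeoEye-1).
    rmse = np.sqrt(np.mean((ref - fused) ** 2, axis=(1, 2)))
    mu = np.mean(ref, axis=(1, 2))
    return 100.0 * ratio * np.sqrt(np.mean((rmse / mu) ** 2))
```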

In the block diagram of the proposed scheme shown in Fig. 1, the highpass filters are chosen as first-order Butterworth filters with cutoff radius 40 at the degraded scale and 350 at full scale, and the values of the tuning factor β are 2 and 4 for the IKONOS and GeoEye-1 satellite images, respectively. The cutoff radius and tuning factor β were selected through simulation trials. The input PAN and MS images for the IKONOS and GeoEye-1 satellite datasets are shown in Figs. 2a and b and Figs. 2c and d, respectively. The value of N is four for the MS images, which comprise red, blue, green and infrared component images.
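A sketch of such a first-order Butterworth highpass mask is given below; it is built on a centred frequency grid, which is our assumption, since the paper does not state how the mask is laid out with respect to the transform output.

```python
import numpy as np

def butterworth_highpass(shape, cutoff, order=1):
    # Radial Butterworth highpass mask H(u, v) = 1 - 1 / (1 + (D / D0)^(2n)),
    # where D is the distance from the centre of the grid and D0 = cutoff.
    U, V = shape
    u = np.arange(U) - U / 2.0
    v = np.arange(V) - V / 2.0
    D = np.sqrt(u[:, None] ** 2 + v[None, :] ** 2)
    lowpass = 1.0 / (1.0 + (D / float(cutoff)) ** (2 * order))
    return 1.0 - lowpass

# Masks with the cutoff radii quoted in the text: 40 at the degraded scale
# (320 x 320 images) and 350 at full scale (1280 x 1280 images).
mask_degraded = butterworth_highpass((320, 320), cutoff=40)
mask_full = butterworth_highpass((1280, 1280), cutoff=350)
```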

Simulation results for the spatial information image $I''_{\alpha_1,\beta_1}$ given in (8), obtained with different values of the angle parameters (α1, β1) of the 2D-DFRFT for the IKONOS satellite dataset, are shown in Figs. 3a–f, and the corresponding pansharpened images are shown in Figs. 4a–f. It is seen from Figs. 3c and 4c that the angle parameters α1 = β1 = 0.98π/2 of the 2D-DFRFT provide the images with the maximum spatial information compared with the other images, whereas Figs. 3d and 4d, obtained for α1 = β1 = π/2 (the conventional Fourier transform case), show some spatial distortion.

The quality assessment of the pansharpened images produced by the proposed scheme on the 20 image pairs of the IKONOS satellite dataset, for different values of the angle parameters (α1, β1) of the 2D-DFRFT, is tabulated in Table 1 in terms of Q4, SAM and ERGAS with mean and standard deviation (SD). It is observed from Table 1 that the angle parameters α1 = β1 = 0.98π/2 of the 2D-DFRFT give the best values of the quality metrics Q4, SAM and ERGAS compared with the other values of the angle parameters.

Fig. 2  Input dataset images: (a) PAN image for the IKONOS satellite, (b) MS images for the IKONOS satellite, (c) PAN image for the GeoEye-1 satellite, (d) MS images for the GeoEye-1 satellite


Simulations are also performed to compare the pansharpened images, in terms of the mean and SD of the various performance parameters, of the IKONOS and GeoEye-1 satellite dataset images obtained using the proposed pansharpening scheme with the angle parameters α1 = β1 = 0.98π/2 of the 2D-DFRFT against the existing pansharpening schemes based on HVD (HVD_F) [9], OMF [32], AIHS [6], Indusion [34] and ATWTM2 [35].

The pansharpened images obtained using the proposed method, with the angle parameters α1 = β1 = 0.98π/2 of the 2D-DFRFT, and the existing methods are shown in Figs. 5a–f and 6a–f. It can be observed from the middle portion of the image shown in Fig. 5a that this fused image reveals more spatial information than the fused images obtained by the existing methods. Similarly, the spectral quality of the fused image shown in Fig. 5a is better than that of the results obtained by the other existing methods. Similar remarks hold for the other set of images shown in Fig. 6, particularly for its left corner portion. Using the pansharpened images shown in Figs. 5a and 6a, the quality metrics Q4, SAM and ERGAS for degraded-scale assessment and Dλ, DS and QNR for full-scale assessment are computed, and the results are tabulated in Tables 2 and 3.

Fig. 3  Spatial information image $I''_{\alpha_1,\beta_1}$ for the IKONOS satellite dataset at different values of the angle parameters of the 2D-DFRFT: (a) α1 = β1 = 0.94π/2, (b) α1 = β1 = 0.96π/2, (c) α1 = β1 = 0.98π/2, (d) α1 = β1 = π/2, (e) α1 = β1 = 1.02π/2, (f) α1 = β1 = 1.04π/2

Fig. 4  Pansharpened images of the proposed scheme for the IKONOS satellite dataset at different values of the angle parameters of the 2D-DFRFT: (a) α1 = β1 = 0.94π/2, (b) α1 = β1 = 0.96π/2, (c) α1 = β1 = 0.98π/2, (d) α1 = β1 = π/2, (e) α1 = β1 = 1.02π/2, (f) α1 = β1 = 1.04π/2


The best values of the performance measures are shown in bold in the tables. It can be seen from the quantitative results given in Table 2 for the IKONOS dataset that the proposed method outperforms the other methods in terms of the SAM, DS and QNR quality metrics, while its Q4, ERGAS and Dλ values are comparable to those obtained by the other existing methods. Similarly, from the results given in Table 3 for the GeoEye-1 dataset, it can be observed that the proposed method outperforms the other methods in terms of the SAM, ERGAS, DS and QNR quality metrics, while its Q4 and Dλ values are comparable to those of the other existing methods.

Pansharpening results are also obtained using the proposed method for different values of the angle parameters (α1, β1) of the 2D-DFRFT on the IKONOS and GeoEye-1 satellite datasets, and it is observed that the pansharpened images obtained for the values (0.98π/2, 0.98π/2) provide the best quality and are more robust against the effects of aliasing and misregistration errors than the existing schemes.

5 Conclusion

In this paper, a 2D-DFRFT-based pansharpening scheme is proposed. The additional degree of freedom in terms of the angle parameters associated with the 2D-DFRFT is exploited for obtaining better results in the proposed pansharpening scheme. The qualitative and quantitative analyses of the presented simulation results show that the proposed technique provides a fused image of improved spectral and spatial quality as compared to some of the existing pansharpening techniques for the IKONOS and GeoEye-1 satellite datasets.

Table 1 Quality assessment of the pansharpened images of the proposed scheme using different values of the angle parameters (α1, β1) of the 2D-DFRFT for the IKONOS satellite dataset (degraded scale)

| (α1, β1) | Q4 mean | Q4 SD | SAM mean | SAM SD | ERGAS mean | ERGAS SD |
| --- | --- | --- | --- | --- | --- | --- |
| Ref. val. | 1 | — | 0 | — | 0 | — |
| (0.94π/2, 0.94π/2) | 0.429 | 0.0951 | 5.091 | 1.562 | 6.697 | 3.182 |
| (0.96π/2, 0.96π/2) | 0.536 | 0.095 | 3.654 | 0.865 | 3.693 | 1.388 |
| (0.98π/2, 0.98π/2) | 0.648 | 0.095 | 2.881 | 0.435 | 2.363 | 0.507 |
| (π/2, π/2) | 0.574 | 0.129 | 3.267 | 0.623 | 2.979 | 0.756 |
| (1.02π/2, 1.02π/2) | 0.633 | 0.099 | 2.971 | 0.514 | 2.491 | 0.630 |
| (1.04π/2, 1.04π/2) | 0.509 | 0.112 | 3.982 | 1.076 | 4.285 | 1.870 |

Fig. 5  Pansharpened images using different pansharpening methods for the IKONOS satellite dataset: (a) proposed MS(α1, β1)r method, (b) HVD_F method, (c) OMF method, (d) AIHS method, (e) Indusion method, (f) ATWTM2 method

Fig. 6  Pansharpened images using different pansharpening methods for the GeoEye-1 satellite dataset: (a) proposed MS(α1, β1)r method, (b) HVD_F method, (c) OMF method, (d) AIHS method, (e) Indusion method, (f) ATWTM2 method

Table 2 Quality assessment of the pansharpened images for the IKONOS dataset at degraded and full scale with the optimised 2D-DFRFT angle parameters α1 = 0.98π/2 and β1 = 0.98π/2

| Method | Q4 mean | Q4 SD | SAM mean | SAM SD | ERGAS mean | ERGAS SD | Dλ mean | Dλ SD | DS mean | DS SD | QNR mean | QNR SD |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Ref. val. | 1 | — | 0 | — | 0 | — | 0 | — | 0 | — | 1 | — |
| MS(α1, β1)r (proposed) | 0.648 | 0.095 | **2.881** | 0.440 | 2.363 | 0.502 | 0.127 | 0.037 | **0.135** | 0.047 | **0.755** | 0.071 |
| HVD_F | **0.675** | 0.082 | 2.935 | 0.469 | **2.297** | 0.398 | **0.122** | 0.017 | 0.147 | 0.034 | 0.747 | 0.037 |
| OMF | 0.574 | 0.096 | 3.058 | 0.484 | 2.692 | 0.522 | 0.172 | 0.023 | 0.223 | 0.048 | 0.642 | 0.049 |
| AIHS | 0.561 | 0.065 | 3.976 | 0.708 | 2.737 | 0.433 | 0.170 | 0.031 | 0.191 | 0.024 | 0.67 | 0.043 |
| Indusion | 0.586 | 0.099 | 2.997 | 0.483 | 2.603 | 0.502 | 0.168 | 0.035 | 0.214 | 0.049 | 0.654 | 0.064 |
| ATWTM2 | 0.637 | 0.074 | 3.048 | 0.474 | 2.378 | 0.393 | 0.166 | 0.027 | 0.19 | 0.056 | 0.675 | 0.063 |

Table 3 Quality assessment of the pansharpened images for the GeoEye-1 dataset at degraded and full scale with the optimised 2D-DFRFT angle parameters α1 = 0.98π/2 and β1 = 0.98π/2

| Method | Q4 mean | Q4 SD | SAM mean | SAM SD | ERGAS mean | ERGAS SD | Dλ mean | Dλ SD | DS mean | DS SD | QNR mean | QNR SD |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Ref. val. | 1 | — | 0 | — | 0 | — | 0 | — | 0 | — | 1 | — |
| MS(α1, β1)r (proposed) | 0.767 | 0.028 | **3.742** | 0.28 | **2.717** | 0.209 | **0.0482** | 0.027 | **0.046** | 0.015 | **0.908** | 0.039 |
| HVD_F | **0.77** | 0.025 | 3.788 | 0.269 | 2.782 | 0.204 | 0.07 | 0.011 | 0.088 | 0.008 | 0.847 | 0.005 |
| OMF | 0.665 | 0.051 | 4.219 | 0.343 | 3.371 | 0.346 | 0.056 | 0.015 | 0.128 | 0.016 | 0.821 | 0.013 |
| AIHS | 0.701 | 0.031 | 4.529 | 0.306 | 3.2 | 0.186 | 0.109 | 0.017 | 0.105 | 0.019 | 0.796 | 0.032 |
| Indusion | 0.646 | 0.048 | 4.085 | 0.309 | 3.424 | 0.296 | 0.092 | 0.01 | 0.093 | 0.012 | 0.822 | 0.014 |
| ATWTM2 | 0.69 | 0.032 | 4.418 | 0.332 | 3.071 | 0.198 | 0.085 | 0.007 | 0.105 | 0.010 | 0.817 | 0.007 |


6 References

[1] Song, Y., Wu, W., Liu, Z., et al.: 'An adaptive pansharpening method by using weighted least squares filter', IEEE Geosci. Remote Sens. Lett., 2016, 13, (1), pp. 18–22
[2] Zhang, Y.: 'Understanding image fusion', Photogramm. Eng. Remote Sens., 2004, 70, (6), pp. 657–661
[3] Ghassemian, H.: 'A review of remote sensing image fusion methods', Inf. Fusion, 2016, 32, pp. 75–89
[4] Chen, C., Li, Y., Liu, W., et al.: 'SIRF: simultaneous satellite image registration and fusion in a unified framework', IEEE Trans. Image Process., 2015, 24, (11), pp. 4213–4224
[5] Shah, V.P., Younan, N.H., King, R.L.: 'An efficient pan-sharpening method via a combined adaptive PCA approach and contourlets', IEEE Trans. Geosci. Remote Sens., 2008, 46, (5), pp. 1323–1335
[6] Rahmani, S., Strait, M., Merkurjev, D., et al.: 'An adaptive IHS pan-sharpening method', IEEE Geosci. Remote Sens. Lett., 2010, 7, (4), pp. 746–750



[7] Laben, C.A., Brower, B.V.: 'Process for enhancing the spatial resolution of multispectral imagery using pan-sharpening'. US Patent 6,011,875, January 4, 2000
[8] Sides, S.C., Anderson, J.A.: 'Comparison of three different methods to merge multiresolution and multispectral data: Landsat TM and SPOT panchromatic', Photogramm. Eng. Remote Sens., 1991, 57, (3), pp. 295–303
[9] Saxena, N., Sharma, K.K.: 'A novel pansharpening approach using Hilbert vibration decomposition', IET Image Process., doi: 10.1049/iet-ipr.2017.0133
[10] Czaja, W., Doster, T., Murphy, J.M.: 'Wavelet packet mixing for image fusion and pan-sharpening'. Proc. SPIE Defense + Security, 2014, p. 908803
[11] Moonon, A.-U., Hu, J., Li, S.: 'Remote sensing image fusion method based on nonsubsampled shearlet transform and sparse representation', Sens. Imaging, 2015, 16, (1), p. 23
[12] Nunez, J., Otazu, X., Fors, O., et al.: 'Multiresolution-based image fusion with additive wavelet decomposition', IEEE Trans. Geosci. Remote Sens., 1999, 37, (3), pp. 1204–1211
[13] González-Audícana, M., Saleta, J.L., Catalán, R.G., et al.: 'Fusion of multispectral and panchromatic images using improved IHS and PCA mergers based on wavelet decomposition', IEEE Trans. Geosci. Remote Sens., 2004, 42, (6), pp. 1291–1299
[14] Guo, M., Zhang, H., Li, J., et al.: 'An online coupled dictionary learning approach for remote sensing image fusion', IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens., 2014, 7, (4), pp. 1284–1294
[15] Mirzapour, F., Ghassemian, H.: 'Improving hyperspectral image classification by combining spectral, texture, and shape features', Int. J. Remote Sens., 2015, 36, (4), pp. 1070–1096
[16] Golipour, M., Ghassemian, H., Mirzapour, F.: 'Integrating hierarchical segmentation maps with MRF prior for classification of hyperspectral images in a Bayesian framework', IEEE Trans. Geosci. Remote Sens., 2016, 54, (2), pp. 805–816
[17] Ghahremani, M., Ghassemian, H.: 'A compressed-sensing-based pansharpening method for spectral distortion reduction', IEEE Trans. Geosci. Remote Sens., 2016, 54, (4), pp. 2194–2206
[18] Baronti, S., Aiazzi, B., Selva, M., et al.: 'A theoretical analysis of the effects of aliasing and misregistration on pansharpened imagery', IEEE J. Sel. Top. Signal Process., 2011, 5, (3), pp. 446–453
[19] Ling, Y., Ehlers, M., Usery, E.L., et al.: 'FFT-enhanced IHS transform method for fusing high-resolution satellite images', ISPRS J. Photogramm. Remote Sens., 2007, 61, (6), pp. 381–392
[20] Lu, W., Xie, J., Wang, H., et al.: 'Non-stationary component extraction in noisy multicomponent signal using polynomial chirping Fourier transform', SpringerPlus, 2016, 5, (1), p. 1177
[21] Ozaktas, H.M., Barshan, B., Mendlovic, D.: 'Convolution and filtering in fractional Fourier domains', Opt. Rev., 1994, 1, (1), pp. 15–16
[22] Kutay, A., Ozaktas, H.M., Arikan, O., et al.: 'Optimal filtering in fractional Fourier domains', IEEE Trans. Signal Process., 1997, 45, (5), pp. 1129–1143
[23] Miah, K.H., Potter, D.K.: 'Geophysical signal parameterization and filtering using the fractional Fourier transform', IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens., 2014, 7, (3), pp. 845–852
[24] Candan, Ç., Kutay, M.A., Ozaktas, H.M.: 'The discrete fractional Fourier transform', IEEE Trans. Signal Process., 2000, 48, (5), pp. 1329–1337
[25] Ozaktas, H.M., Erkaya, N., Kutay, M.A.: 'Effect of fractional Fourier transformation on time-frequency distributions belonging to the Cohen class', IEEE Signal Process. Lett., 1996, 3, (2), pp. 40–41
[26] Ozaktas, H.M., Mendlovic, D.: 'Fractional Fourier transforms and their optical implementation. II', J. Opt. Soc. Am. A, 1993, 10, (12), pp. 2522–2531
[27] Pei, S.-C., Ding, J.-J.: 'Eigenfunctions of Fourier and fractional Fourier transforms with complex offsets and parameters', IEEE Trans. Circuits Syst. I, Regul. Pap., 2007, 54, (7), pp. 1599–1611
[28] Sharma, K.K., Joshi, S.D.: 'Time delay estimation using fractional Fourier transform', Signal Process., 2007, 87, (5), pp. 853–865
[29] Zayed, A.I.: 'Advances in Shannon's sampling theory' (CRC Press, Florida, 1993)
[30] Sharma, K.: 'Approximate signal reconstruction using nonuniform samples in fractional Fourier and linear canonical transform domains', IEEE Trans. Signal Process., 2009, 57, (11), pp. 4573–4578
[31] Sharma, K.K., Joshi, S.D.: 'Image registration using fractional Fourier transform'. Proc. APCCAS 2006, IEEE Asia Pacific Conf. on Circuits and Systems, 2006, pp. 470–473
[32] Shahdoosti, H.R., Ghassemian, H.: 'Fusion of MS and PAN images preserving spectral quality', IEEE Geosci. Remote Sens. Lett., 2015, 12, (3), pp. 611–615
[33] Lee, J., Lee, C.: 'Fast and efficient panchromatic sharpening', IEEE Trans. Geosci. Remote Sens., 2010, 48, (1), pp. 155–163
[34] Khan, M.M., Chanussot, J., Condat, L., et al.: 'Indusion: fusion of multispectral and panchromatic images using the induction scaling technique', IEEE Geosci. Remote Sens. Lett., 2008, 5, (1), pp. 98–102
[35] Ranchin, T., Wald, L.: 'Fusion of high spatial and spectral resolution images: the ARSIS concept and its implementation', Photogramm. Eng. Remote Sens., 2000, 66, (1), pp. 49–61
[36] Pei, S.-C., Yeh, M.-H.: 'Two dimensional discrete fractional Fourier transform', Signal Process., 1998, 67, (1), pp. 99–108
[37] Narayanan, V.A., Prabhu, K.: 'The fractional Fourier transform: theory, implementation and error analysis', Microprocess. Microsyst., 2003, 27, (10), pp. 511–521
[38] Pei, S.-C., Yeh, M.-H.: 'Discrete fractional Fourier transform'. Proc. 1996 IEEE Int. Symp. on Circuits and Systems (ISCAS '96), vol. 2, 1996, pp. 536–539
[39] Pei, S.-C., Yeh, M.-H.: 'Improved discrete fractional Fourier transform', Opt. Lett., 1997, 22, (14), pp. 1047–1049
[40] Vivone, G., Alparone, L., Chanussot, J., et al.: 'A critical comparison among pansharpening algorithms', IEEE Trans. Geosci. Remote Sens., 2015, 53, (5), pp. 2565–2586
[41] Vivone, G., Restaino, R., Dalla Mura, M., et al.: 'Contrast and error-based fusion schemes for multispectral image pansharpening', IEEE Geosci. Remote Sens. Lett., 2014, 11, (5), pp. 930–934
[42] Aiazzi, B., Baronti, S., Selva, M., et al.: 'Bi-cubic interpolation for shift-free pan-sharpening', ISPRS J. Photogramm. Remote Sens., 2013, 86, pp. 65–76
[43] Aiazzi, B., Baronti, S., Selva, M.: 'Improving component substitution pansharpening through multivariate regression of MS + Pan data', IEEE Trans. Geosci. Remote Sens., 2007, 45, (10), pp. 3230–3239
[44] Wald, L.: 'Data fusion: definitions and architectures: fusion of images of different spatial resolutions' (Presses des MINES, 2002), pp. 81–84
[45] Wang, Z., Bovik, A.C.: 'A universal image quality index', IEEE Signal Process. Lett., 2002, 9, (3), pp. 81–84
[46] Yuhas, R.H., Goetz, A.F., Boardman, J.W.: 'Discrimination among semi-arid landscape endmembers using the spectral angle mapper (SAM) algorithm', 1992
[47] Alparone, L., Aiazzi, B., Baronti, S., et al.: 'Multispectral and panchromatic data fusion assessment without reference', Photogramm. Eng. Remote Sens., 2008, 74, (2), pp. 193–200
