ENVISAT ASAR Wide Swath and SPOT-VEGETATION Image Fusion for Wetland Mapping: Evaluation of Different Wavelet-based Methods

Toon Westra, Koen C. Mertens & Robert R. De Wulf
Laboratory of Forest Management and Spatial Information Techniques, Faculty of Bio-Science Engineering, University of Ghent, Coupure Links 653, B-9000, Belgium
Published online: 02 Jan 2008 (Taylor & Francis).

To cite this article: Toon Westra, Koen C. Mertens & Robert R. De Wulf (2005) ENVISAT ASAR Wide Swath and SPOT-VEGETATION Image Fusion for Wetland Mapping: Evaluation of Different Wavelet-based Methods, Geocarto International, 20:2, 21-31
To link to this article: http://dx.doi.org/10.1080/10106040508542342




Geocarto International, Vol. 20, No. 2, June 2005. E-mail: [email protected]. Published by Geocarto International Centre, G.P.O. Box 4122, Hong Kong. Website: http://www.geocarto.com

ENVISAT ASAR Wide Swath and SPOT-VEGETATION Image Fusion for Wetland Mapping: Evaluation of Different Wavelet-based Methods

Toon Westra, Koen C. Mertens and Robert R. De Wulf
Laboratory of Forest Management and Spatial Information Techniques
Faculty of Bio-Science Engineering
University of Ghent, Coupure Links 653
B-9000 Belgium
E-mail: [email protected], [email protected], [email protected]

Abstract

Three wavelet-based fusion methods (ARSIS, PSIMA and the à trous method) are applied to combine ENVISAT ASAR Wide Swath and SPOT-VEGETATION images. The objectives of the data fusion are feature enhancement and improvement of the classification accuracy of a tropical wetland located in the Chad basin, Africa. The fusion results are compared to those obtained by the intensity hue saturation (IHS) method and the principal component (PC) method. Several quantitative tests and a visual inspection are performed to evaluate the different methods. The fused images are classified by means of a maximum likelihood classifier and the classification accuracies are calculated.

The results show that the fusion methods based on the wavelet transform perform better than the IHS and PC methods for both objectives. Of all methods, the ARSIS and à trous methods best preserve the spectral contents of the SPOT-VEGETATION image. However, the à trous method outperforms the ARSIS method in terms of spatial information preservation. When the PSIMA method is used, artefacts are minimized. The highest classification accuracies are obtained with the PSIMA and à trous methods.

Introduction

The African wetlands are of exceptional economic and ecological importance. Spreading across the borders of Niger, Nigeria, Cameroon and Chad, the Interior Niger Delta, Lake Chad and adjacent floodplains cover an extensive area, up to 320 000 square km, in the Sahelian zone. The extent and inaccessibility of major floodplains render remote sensing the only feasible method for monitoring flooding and vegetation cover.

Remote sensing uses different portions of the electromagnetic spectrum at different spatial, temporal, and spectral resolutions to observe the earth's surface. Image fusion techniques enable the combination of data with different characteristics. According to Pohl and Van Genderen (1998), image fusion is the combination of two or more images to create a new image containing more information by using a certain algorithm.

In this study, Synthetic Aperture Radar (SAR) Wide Swath data is combined with optical data. SAR data is very suitable for mapping wetlands, since data acquisition is weather independent and can occur day and night. Both vegetation characteristics, such as density, distribution, orientation, dielectric constant and height, and sensor characteristics, such as polarization, incidence angle, and wavelength, are important in determining the amount of radiation backscattered towards the radar antenna (Hess et al. 1990). When vegetation is flooded, an enhanced radar backscatter can be observed under certain conditions, due to the double bounce interaction of the microwave radiation with the water surface and the emerging vegetation. Hence flooded vegetation can easily be separated from non-flooded vegetation on SAR images. However, not all land cover classes can be as easily detected using SAR data. Optical data, relying on spectral reflectance response patterns, provides complementary information, facilitating the image interpretation process. The benefits of combining SAR and optical data for monitoring wetlands have been illustrated by Kushwaha et al. (2000). Held et al. (2003) used high resolution hyperspectral and SAR data for high resolution mapping of tropical mangrove ecosystems. They obtained more accurate classification results when both datasets were integrated.

Image fusion can be performed at three different processing levels: pixel, feature and decision level (Pohl and Van Genderen, 1998). In this study, only pixel level fusion techniques are used. Apart from well-known image fusion methods based on the intensity, hue and saturation (IHS) color transformation and on the principal component (PC) transformation, more advanced techniques based on multi-scale wavelet analysis are used. The "Amélioration de la Résolution Spatiale par Injection de Structures" (ARSIS) method (Ranchin and Wald, 2000) uses the wavelet transform to fuse high resolution panchromatic images with low resolution multi-spectral images. This method adds the modeled finer frequency components of the high resolution image to the coarser resolution image, improving the spatial resolution of the multi-spectral image while preserving its spectral contents. A similar approach is followed by Núñez et al. (1999) and Teggi et al. (2003), but they applied a different algorithm, known as the "à trous" ("with holes") algorithm, to decompose the data into wavelet coefficients. Du et al. (2003) proposed a method similar to ARSIS, called "Preserving Spatial Information and Minimizing Artefacts" (PSIMA), to combine images having significantly different spectral properties and spatial resolutions. They used this method to fuse RADARSAT-1 ScanSAR data (125 m spatial resolution) with NOAA AVHRR data (1100 m spatial resolution).

Comparing different fused images is a difficult task, since no universal quantitative measure of image fusion quality exists. The quality of fused images depends on the purpose of the image fusion. Some criteria for assessing the quality of fused images have been suggested by Wald et al. (1997) for the fusion of panchromatic and multi-spectral images. In that case, the main objective is to improve the spatial resolution of a multi-spectral image while preserving its spectral contents. In this paper, the main objectives are feature enhancement and improved classification accuracy. Therefore, quality assessment is performed by visual inspection of the fusion result and by quantitatively evaluating the classification results, using a parametric classifier.

In this paper, both spatial resolution and pixel size are used to characterize an image. It is important to make a clear distinction between the two terms. Spatial resolution defines the size of the smallest spatial structures that can still be observed by a sensor. The pixel size of an image simply defines the ground area covered by a pixel, independent of the spatial resolution. Often the pixel size of an image is equal to the spatial resolution. Yet when, for example, such an image is expanded (the number of pixels is increased), a smaller pixel size is obtained while the spatial resolution remains unchanged.

Study area

The Logone floodplain is located in the Lake Chad basin, Africa (Figure 1). It stretches from Nigeria across northern Cameroon to Chad, approximately between 12°30'N and 10°50'N, and between 14°0'E and 15°20'E. The floodplain measures about 200 km from north to south and its width is 40 km. The average altitude of the study area is 300 m a.s.l. (Burgis and Symoens, 1987).

Flooding of the plain starts with the onset of the rainy season in early July. In September, the Logone River rises and the entire plain is flooded with up to several meters of water. In November, the flood starts receding and by the end of February, the plain is entirely dry again.

The wetland vegetation consists mainly of different species of grasses that have adapted to varying degrees of inundation. Oryza longistaminata A. Chev. & Roehr. and Echinochloa pyramidalis (Lam.) Hitchc. & Chase constitute single- or two-species stands on the floodplain (Figure 2, Figure 3), but Vetiveria nigritana (Benth.) Stapf is abundant on the levees of the drainage ditches and on the higher parts of the intact plain. At the margins of the plains, which are only irregularly inundated, thickets of woody species, such as Acacia seyal Del. and Piliostigma reticulatum (DC.) Hochst., occur.

Figure 1 Lake Chad and adjacent floodplains, location of the study area

Figure 2 Dense wetland vegetation (Oryza longistaminata A. Chev. & Roehr.)



The natural dry land vegetation consists of woodland savanna, but it has now largely been replaced by small-scale agriculture and grassy areas with thorny shrubs (Figure 4, Figure 5). Depending upon edaphic conditions, this degraded vegetation may vary from sparse and short to dense and tall grasses.

Woody vegetation may likewise vary from small (< 0.5 m) bushes to tall (> 5 m) trees.

Satellite data

The data used consists of a SPOT-VEGETATION D-10 (VGT) image and an ENVISAT Advanced Synthetic Aperture Radar (ASAR) Wide Swath standard image. The main characteristics of these images are listed in Table 1.

The VEGETATION instrument is optimized for global scale vegetation monitoring. The 10-day synthesis image product is designed to minimize the effects of cloud cover. The images were acquired between the 11th and the 20th of February 2003, near the end of the flooding cycle.

The ENVISAT ASAR Wide Swath standard image is generated using the ScanSAR technique and processed to 150 m resolution. The product includes slant range to ground range corrections. The image was acquired on the 15th of February 2003. The pixel values of the product are expressed in radar brightness values (β0). These values were transformed to gamma naught (γ0) values, to normalize the effect of the varying incidence angle (α), and expressed in dB, to obtain a normal pixel distribution:

γ0 = 10 · log(β0 · tan(α))     (1)
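As an illustration, the conversion of Equation (1) can be written in a few lines. This is a sketch, not the authors' processing chain: the brightness value and incidence angles below are hypothetical, and β0 is assumed to be in linear (not dB) units.

```python
import math

def beta0_to_gamma0_db(beta0, incidence_deg):
    """Convert a radar brightness value (beta naught, linear units) to
    gamma naught in dB, normalizing for the local incidence angle as in
    Equation (1): gamma0 = 10 * log10(beta0 * tan(alpha))."""
    alpha = math.radians(incidence_deg)
    return 10.0 * math.log10(beta0 * math.tan(alpha))

# Hypothetical brightness value observed at two incidence angles:
# the correction changes the dB value with the angle.
print(round(beta0_to_gamma0_db(0.05, 45.0), 2))  # -13.01 (tan 45 deg = 1)
print(round(beta0_to_gamma0_db(0.05, 30.0), 2))
```

The dB scale is what gives the approximately normal pixel distribution mentioned in the text.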

Wavelet transform and multi-resolution analysis

Similar to the Fourier transform, the wavelet transform performs a decomposition of a signal on a base of elementary functions, called wavelets. The Fourier transform only provides information on the frequency content of an image, but does not provide any spatial information. The wavelet representation, however, is able to provide localization in both the time and space domains. The base is generated by dilations and translations of a single function ψ, called the mother wavelet:

ψa,b(x) = (1/√a) · ψ((x − b)/a)     (2)

a = dilation step
b = translation step
a, b ∈ ℝ, a > 0

The wavelet transform can be used to decompose an image into a set of approximation images, representing the original scene at a lower scale or resolution, and a set of detail images, containing information about the details that exist between two successive resolution levels. For the practical implementation of the wavelet transform, different algorithms exist. Two algorithms that have often been used for image fusion are the Discrete Wavelet Transform (DWT) and the à trous Wavelet Transform (AWT).

Figure 3 Dense wetland vegetation (Echinochloa pyramidalis (Lam.) Hitchc. & Chase)

Figure 4 Dense and tall dry land vegetation: woodland savanna

Figure 5 Dry land with moderate vegetation cover: grasses and shrubs

The Discrete Wavelet Transform for 2-D signals

The DWT can be implemented using a filter bank structure, as shown in Figure 6. Ij(x, y) represents a digital image at resolution level j. H and G are the four-tap orthogonal wavelet filters designed by Daubechies (1988); H is a low-pass filter, G is a high-pass filter. The impulse responses of H and G are represented by h(n) and g(n) respectively, and their relation is:

g(k) = (−1)^k h(1 − k)     (3)

The filter coefficients of H are listed in Table 2. All coefficients have to be divided by √2 for normalization purposes.

First, filter H and filter G are applied to the columns of Ij(x, y). Both resulting images are sub-sampled by removing one out of two columns. Then, H and G are applied to the rows of each sub-sampled image. Again the resulting images are sub-sampled, this time by removing one out of two rows. This results in one approximation image Ij+1(x, y), with half the spatial resolution of Ij(x, y), and three detail images DHj+1(x, y), DVj+1(x, y) and DDj+1(x, y), giving information about the horizontal, vertical and diagonal details respectively, with sizes comprised between resolution levels j and j + 1.

If the original image I0(x, y) has a spatial resolution of, for example, 10 m, the resolution of I1(x, y) would be 20 m and DH1(x, y) would contain the horizontal spatial structures with sizes between 10 and 20 m. If I0(x, y) consists of 100 rows and 100 columns, I1(x, y) would consist of 50 rows and 50 columns, which means the pixel size doubles.
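The filter-and-subsample step described above can be sketched for a 1-D signal (the 2-D case applies the same filters first to the columns and then to the rows). This is an illustrative sketch, not the authors' implementation: it uses the Daubechies four-tap coefficients of Table 2, a periodic boundary, and the quadrature-mirror relation in the equivalent form g(k) = (−1)^k h(L−1−k).

```python
import math

# Daubechies four-tap low-pass filter h (Table 2), normalized by sqrt(2)
s3, s2 = math.sqrt(3.0), math.sqrt(2.0)
h = [(1 + s3) / (4 * s2), (3 + s3) / (4 * s2),
     (3 - s3) / (4 * s2), (1 - s3) / (4 * s2)]
# High-pass filter from the quadrature-mirror relation g(k) = (-1)^k h(L-1-k)
g = [((-1) ** k) * h[len(h) - 1 - k] for k in range(len(h))]

def analysis_step(signal):
    """One 1-D DWT level: convolve with h and g (periodic boundary),
    keeping only every second output sample (the sub-sampling step)."""
    n = len(signal)
    approx, detail = [], []
    for i in range(0, n, 2):
        approx.append(sum(h[k] * signal[(i + k) % n] for k in range(len(h))))
        detail.append(sum(g[k] * signal[(i + k) % n] for k in range(len(g))))
    return approx, detail

approx, detail = analysis_step([4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0])
print(len(approx), len(detail))  # 4 4 -- half the original length each
```

Note that h sums to √2 and g sums to 0, so the approximation carries the low frequencies and the detail signal the high frequencies, as the filter-bank scheme of Figure 6 requires.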

Figure 7 represents the results of a three-level wavelet decomposition of an image I0(x, y), in the scheme proposed by Ranchin and Wald (1993), in which every portion has distinct frequency and spatial properties.

From the approximation image Ij+1(x, y) and the detail images DHj+1(x, y), DVj+1(x, y) and DDj+1(x, y), the original image Ij(x, y) can be reconstructed exactly.

Table 1 Main characteristics of the SPOT-VEGETATION D-10 and ENVISAT ASAR Wide Swath images

                          SPOT-VEGETATION D-10                     ENVISAT ASAR Wide Swath
Spectral characteristics  B0: blue (0.43-0.47 µm)                  C-band (5.6 cm)
                          B2: red (0.61-0.68 µm)
                          B3: near infrared (0.79-0.89 µm)
                          MIR: short wave infrared (1.58-1.75 µm)
Polarization              n.a.                                     VV
Pixel spacing (m)         1000                                     75
Spatial resolution (m)    1000                                     125
Cover                     Africa                                   400 km x 400 km
Acquisition date          February 11th-20th 2003                  February 15th 2003

Table 2 Coefficients of H, used to generate the four-tap Daubechies wavelet

Position   Value
h0         (1 + √3) / (4√2)
h1         (3 + √3) / (4√2)
h2         (3 − √3) / (4√2)
h3         (1 − √3) / (4√2)

Figure 6 Implementation of the discrete wavelet transform into a filter bank structure (Ranchin and Wald, 2000)

Figure 7 Three-level wavelet decomposition of an image I0 (Ranchin and Wald, 2000)



Wavelet image fusion

Assume a fusion of a low resolution image A0 with a high resolution image B0, with pixel sizes PA0 and PB0 respectively, and with pixel size ratio S = PA0 / PB0. Image fusion can be accomplished by applying the wavelet decomposition to image B0, replacing the approximation image Bj by image A0 and finally performing the inverse wavelet transform using the (modeled) detail images of B0. In order to replace the approximation image Bj with the low resolution image A0, both should have the same pixel size (PA0 = PBj). Depending on which algorithm is used for the wavelet decomposition, different scenarios should be considered.

In the case of the DWT, the pixel size doubles when moving from one decomposition level to the next, due to the sub-sampling operation in rows and columns. This means that, in order to fuse images A0 and B0, the pixel size ratio S should be a power of 2. If this is not the case, either image A0 or image B0 should be resampled appropriately, so that the new pixel size ratio S' equals 2^n.

When the AWT is used, the pixel size of the consecutive approximation images remains unchanged (PBj = PB0), since no sub-sampling is applied. Image fusion can be accomplished by resampling the low resolution image A0 to the same pixel size as the high resolution image B0. In this way the detail images Dj can simply be added pixel by pixel to the resampled image A'0.

Methods

In this study the ASAR image (1670 x 1670) with a pixel size of 75 m was fused with the red, the near infrared and the mid infrared bands of the VGT image (128 x 128) with a pixel size of approximately 1000 m. The five methods used (three wavelet-based methods, the IHS method and the PC method) are explained in detail below.

ARSIS method

In the ARSIS (Amélioration de la Résolution Spatiale par Injection de Structures) method the DWT algorithm is used. Since the pixel size ratio was not a power of two, resampling one of the two images was needed. It was decided to resample the ASAR image to a pixel size of approximately 125 m, obtaining a 1024 x 1024 image. This way, only one resampling was required instead of three (one for each VGT band). A bilinear resampling procedure was used in order to remove part of the speckle noise in the image.

Since we wanted both input images to contribute equally to the fusion result, the ASAR image was rescaled to match the variances of the VGT bands. Then, for each rescaled ASAR image, a three-level wavelet decomposition was applied and the third-level approximation (128 x 128) was replaced by the corresponding VGT band.

Figure 8 Implementation of the inverse discrete wavelet transform into a filter bank structure (Ranchin and Wald, 2000)

The original image Ij(x, y) can be reconstructed exactly from these components. The synthesis process is illustrated in Figure 8. First, an over-sampling is applied to the rows of the approximation image and the detail images. This is done by adding a zero between the pixels. Then, either H̃ or G̃ is applied to the rows of each over-sampled image. Results are summed two by two, as shown in Figure 8, and an over-sampling is applied to the columns. When H̃ or G̃ are applied to the columns, the resulting images are added and the sum is multiplied by four, resulting in the original image Ij(x, y).

The à trous wavelet transform

When the AWT is used, an approximation image Ij(x, y) is obtained through convolution of the approximation image obtained at the previous step with a dilated filter F:

Ij(x, y) = F * Ij−1(x, y)     (4)

The filter F has a B3 cubic spline profile (Núñez et al. 1999). The use of a B3 cubic spline leads to a convolution with a 5 x 5 mask:

          1   4   6   4   1
          4  16  24  16   4
 (1/256)  6  24  36  24   6     (5)
          4  16  24  16   4
          1   4   6   4   1

The dilation of the filter is obtained by inserting (2^j − 1) zeros between the non-zero elements of F.

The detail image Dj(x, y) is calculated as the difference between two consecutive approximation images:

Dj(x, y) = Ij−1(x, y) − Ij(x, y)     (6)

The original image I0(x, y) can be reconstructed by adding all the detail images to the last approximation image.

As with the DWT, an approximation image Ij(x, y) has half the resolution of the previous approximation image Ij−1(x, y). However, the number of pixels in the consecutive approximations remains the same as in the original image, since no sub-sampling is applied. This means that, for example, Ij(x, y) might have a pixel size of 10 m but a spatial resolution of 20 m, i.e. it contains no spatial structures smaller than 20 m.
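The à trous scheme of Equations (4)-(6) can be sketched for a 1-D signal (the 2-D case uses the 5 x 5 mask of Equation (5), which is the outer product of the 1-D kernel with itself). This is an illustrative sketch under a periodic-boundary assumption, not the authors' implementation; the key points are the zero-inserted ("holey") dilation of the kernel and the exact reconstruction by summation.

```python
def atrous_decompose(signal, levels):
    """1-D a trous decomposition with the B3 spline kernel [1,4,6,4,1]/16.
    At level j the kernel taps are 2^j samples apart (equivalent to
    inserting 2^j - 1 zeros between them); no sub-sampling is applied,
    so every approximation keeps the original number of samples."""
    base = [1 / 16, 4 / 16, 6 / 16, 4 / 16, 1 / 16]
    approx = list(signal)
    details = []
    for j in range(levels):
        step = 2 ** j                     # dilation of the filter
        n = len(approx)
        smoothed = [sum(base[k] * approx[(i + (k - 2) * step) % n]
                        for k in range(5))   # periodic boundary
                    for i in range(n)]
        # Equation (6): detail = previous approximation - new approximation
        details.append([a - s for a, s in zip(approx, smoothed)])
        approx = smoothed
    return approx, details

sig = [2.0, 4.0, 8.0, 9.0, 7.0, 5.0, 3.0, 2.0]
approx, details = atrous_decompose(sig, 3)
# Exact reconstruction: last approximation plus all detail planes
rec = [a + sum(d[i] for d in details) for i, a in enumerate(approx)]
print(all(abs(r - s) < 1e-12 for r, s in zip(rec, sig)))  # True
```

The telescoping sum I0 = IJ + D1 + ... + DJ is what makes the fusion step of simply adding selected detail planes possible.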



The third-level approximation of each rescaled ASAR image was thus replaced by the corresponding VGT band. Finally, the inverse wavelet transform was performed for each VGT band, resulting in three fused images with a pixel size of 125 m.

PSIMA method

The PSIMA (Preserving Spatial Information and Minimizing Artefacts) method was designed by Du et al. (2003) for radar and visible-infrared image fusion. Their main objective was feature enhancement. This was done by fusing the lower frequency components at lower scales, and retaining the higher frequency components at finer scales for conservation of spatial information.

Since the PSIMA method also uses the DWT algorithm, the same resampling procedure as in the ARSIS method was applied. Again, as with the ARSIS method, the ASAR image was rescaled to match the variances of the different VGT bands and a three-level wavelet decomposition was applied to each rescaled ASAR image. Fusion of the low frequency components was conducted by multiplying the third-level approximation with the corresponding VGT band. The multiplication algorithm enhances the stronger common features and rejects the weaker common features; features that only exist in individual images will normally be retained (Du et al., 2003). Next, the fused result for each band was rescaled to match the variance and mean of the original band and an inverse wavelet transform was performed, resulting in three fused images with a pixel size of 125 m.
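The low-frequency fusion step of PSIMA (multiplication followed by matching the mean and variance of the original band) can be sketched as follows. The pixel values below are hypothetical stand-ins for the third-level approximations; this is an illustrative sketch, not the paper's implementation.

```python
import statistics

def rescale_to(values, target_mean, target_std):
    """Linearly rescale a series to a target mean and standard deviation,
    as in the matching step applied after the multiplication."""
    m, s = statistics.mean(values), statistics.pstdev(values)
    return [(v - m) / s * target_std + target_mean for v in values]

# Hypothetical third-level approximation of one rescaled ASAR band and
# the corresponding VGT band, on the same pixel grid.
asar_approx = [10.0, 12.0, 9.0, 14.0, 11.0, 13.0]
vgt_band = [55.0, 60.0, 52.0, 66.0, 58.0, 63.0]

# Pixel-wise multiplication enhances features common to both images...
fused = [a * v for a, v in zip(asar_approx, vgt_band)]
# ...then the result is matched back to the VGT band's statistics.
fused = rescale_to(fused, statistics.mean(vgt_band),
                   statistics.pstdev(vgt_band))
print(round(statistics.mean(fused), 6), round(statistics.mean(vgt_band), 6))
```

The rescaling keeps the fused band radiometrically comparable to the original band before the inverse wavelet transform is applied.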

À trous method

In order to be able to use the AWT method, the three VGT bands were first resampled to a pixel size of 75 m (1670 x 1670), using a bilinear resampling procedure. As in the two previous methods, the ASAR image was rescaled to match the variances of the VGT bands. Then, each rescaled image was decomposed to the fourth level. For each band, this resulted in four detail images D75-150, D150-300, D300-600 and D600-1200, all having a pixel size of 75 m. Fusion was accomplished by adding D150-300, D300-600 and D600-1200 to the corresponding resampled VGT bands, resulting in three fused images with a pixel size of 75 m. The detail image D75-150, representing the structures with scales between 75 m and 150 m, was not used in the image fusion, since this image contained mainly speckle noise.
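Because the à trous detail planes keep the original pixel grid, the fusion step reduces to a pixel-by-pixel sum. The short rows below are hypothetical stand-ins for full 1670 x 1670 images; as in the text, the finest plane (75-150 m) is left out because it mainly holds speckle.

```python
# Hypothetical detail planes of one rescaled ASAR band, 75 m pixels.
d150_300 = [0.8, -0.5, 0.3, -0.2]
d300_600 = [0.4, 0.1, -0.3, 0.2]
d600_1200 = [0.1, -0.1, 0.2, -0.2]
# The corresponding VGT band, resampled to the 75 m grid.
vgt_resampled = [40.0, 41.0, 39.5, 40.5]

# Fusion: add the retained detail planes to the resampled VGT band.
fused = [v + a + b + c for v, a, b, c in
         zip(vgt_resampled, d150_300, d300_600, d600_1200)]
print(fused)
```

Dropping the finest detail plane acts as a simple speckle filter while keeping the coarser ASAR structures.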

IHS method

The IHS method consists of the following steps. The resampled VGT bands are transformed from RGB color space into IHS space. Then, the intensity component is substituted by the ASAR image. Before doing so, the ASAR image is first rescaled to match the standard deviation and the mean of the intensity image. Finally, an inverse color transformation is performed, resulting in three fused bands.
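The substitution step can be sketched per pixel with the standard library's colorsys module. Note the hedge: colorsys provides an HSV (hue/saturation/value) transform, which here stands in for the paper's IHS transform; the pixel values below are hypothetical and assumed scaled to [0, 1].

```python
import colorsys

def ihs_substitute(rgb_pixel, asar_value):
    """Sketch of the IHS-style substitution for one pixel: transform the
    (resampled) VGT RGB triple to hue/saturation/value, replace the
    intensity-like V component with the (already rescaled) ASAR value,
    and transform back.  HSV stands in for the IHS transform."""
    h, s, _v = colorsys.rgb_to_hsv(*rgb_pixel)
    return colorsys.hsv_to_rgb(h, s, asar_value)

fused = ihs_substitute((0.2, 0.5, 0.3), 0.8)
print(round(max(fused), 2))  # 0.8 -- the new value component
```

Because hue and saturation are kept, the color relationships of the VGT composite survive, while the brightness pattern now comes from the ASAR image.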

PC method

The PC method is very similar to the IHS method. First, a PC transformation is applied to the resampled VGT bands. Then, the first PC is replaced with the ASAR image. Again, the ASAR image is first rescaled to match the mean and standard deviation of the first PC. Finally, an inverse PC transform is performed, resulting in three merged bands.
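The PC substitution can be sketched for the two-band case, where the first principal component has a closed form. This is illustrative only (the paper uses three bands); the series below are hypothetical pixel values.

```python
import math
import statistics

def pca2_fuse(band1, band2, asar):
    """Two-band sketch of the PC substitution: compute the first
    principal component of (band1, band2), replace its scores with the
    ASAR values rescaled to the same (zero) mean and standard deviation,
    and invert the rotation.  Closed-form 2x2 eigenvector."""
    m1, m2 = statistics.mean(band1), statistics.mean(band2)
    x = [v - m1 for v in band1]
    y = [v - m2 for v in band2]
    n = len(x)
    a = sum(v * v for v in x) / n          # covariance matrix entries
    c = sum(v * v for v in y) / n
    b = sum(u * v for u, v in zip(x, y)) / n
    theta = 0.5 * math.atan2(2 * b, a - c)  # direction of PC 1
    ct, st = math.cos(theta), math.sin(theta)
    pc1 = [ct * u + st * v for u, v in zip(x, y)]
    pc2 = [-st * u + ct * v for u, v in zip(x, y)]
    # Rescale ASAR to PC 1's statistics (mean 0, same std)
    am, asd = statistics.mean(asar), statistics.pstdev(asar)
    psd = statistics.pstdev(pc1)
    pc1_new = [(v - am) / asd * psd for v in asar]
    # Inverse rotation back to band space, restoring the means
    f1 = [m1 + ct * p - st * q for p, q in zip(pc1_new, pc2)]
    f2 = [m2 + st * p + ct * q for p, q in zip(pc1_new, pc2)]
    return f1, f2

f1, f2 = pca2_fuse([10.0, 12.0, 14.0, 16.0],
                   [20.0, 23.0, 27.0, 30.0],
                   [5.0, 7.0, 6.0, 9.0])
print(round(sum(f1) / 4, 6))  # 13.0 -- band means are preserved
```

Only the dominant (shared-brightness) component is replaced; the remaining component, which carries the between-band contrast, is kept unchanged.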

Evaluation of fused images

The quality of the fused images was tested both quantitatively and by visual inspection of a color composite image of the merged bands.

In order to evaluate how the different sensors contribute to the fusion results, for each fused band the correlation with the corresponding VGT band and with the ASAR image was computed. To do so, the original VGT image was resampled to the appropriate pixel size, using the nearest neighbor method. This way the original spectral values were preserved.

For each method, the correlation matrix of the fused bands was calculated and compared to the correlation matrix of the original VGT bands. When the correlation between bands increases, it may be more difficult to discriminate between different spectral classes.
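The measure behind both tests is the Pearson correlation coefficient over the flattened band values. A minimal sketch, with hypothetical pixel series:

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length
    pixel series (flattened image bands)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

# A fused band that closely tracks the VGT band correlates near 1.
vgt = [40.0, 42.0, 39.0, 45.0, 41.0]
fused = [40.5, 42.2, 38.8, 45.4, 41.1]
print(pearson(vgt, fused) > 0.99)  # True
```

A high correlation with the VGT band indicates preserved spectral content; a high correlation with the ASAR image indicates injected radar information.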

Another test was based on the analysis of classification results obtained by using a maximum likelihood classifier. For each method, the fused bands together with the ASAR image were classified and the Kappa (K) coefficient (Cohen, 1960) was calculated. The following methodology was used.

1. An unsupervised classification, using the isoclust algorithm, was performed using the original VGT bands and the ASAR image.

2. Class signatures were constructed based on the outcome of the unsupervised classification and terrain information (vegetation description, previously classified images, ...).

3. The class signatures were analyzed by means of scatter plots and preliminary supervised classification results. Some classes were merged, others were split up. At the end of this stage, seven land cover/land use classes were kept: one water class, three wetland classes and three dry land classes.

4. From the class signatures, a training set and a test set were selected at random.

5. The training set was used to perform a supervised classification using the maximum likelihood classifier with the fused bands and the ASAR image, and with the resampled VGT bands and the ASAR image.

6. The overall accuracy and Kappa coefficient were calculated using the test set.

The Z-test for normal distributions was used to detect significant differences between the kappa coefficients of classified image pairs (Sharma and Sarkar, 1998):

Z = (K2 − K1) / √(σ²K1 + σ²K2)     (7)
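Equation (7) is straightforward to evaluate; |Z| > 1.96 indicates a significant difference at the 5% level. The kappa values and variances below are hypothetical, not results from the paper:

```python
import math

def kappa_z(k1, var_k1, k2, var_k2):
    """Z statistic of Equation (7) for comparing two Kappa coefficients
    with estimated variances var_k1 and var_k2."""
    return (k2 - k1) / math.sqrt(var_k1 + var_k2)

# Hypothetical kappas and variances for two classifications
z = kappa_z(0.82, 0.0004, 0.88, 0.0005)
print(abs(z) > 1.96)  # True -- significant at the 5% level
```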



Table 3 Correlation between the fused bands and the corresponding VGT bands, and between the fused bands and the ASAR image

Method    Merged band   Corresponding VGT band   ASAR
ARSIS     ASAR-B2       0.830                    0.171
          ASAR-B3       0.834                    0.365
          ASAR-MIR      0.836                    0.405
PSIMA     ASAR-B2       0.604                    0.679
          ASAR-B3       0.672                    0.763
          ASAR-MIR      0.684                    0.782
À trous   ASAR-B2       0.818                    0.210
          ASAR-B3       0.829                    0.388
          ASAR-MIR      0.829                    0.434
IHS       ASAR-B2       0.050                    0.972
          ASAR-B3       0.213                    0.980
          ASAR-MIR      0.124                    0.965
PC        ASAR-B2       0.032                    0.936
          ASAR-B3       0.143                    0.987
          ASAR-MIR      0.297                    0.939

Table 4 Correlation between the VGT bands and the ASAR image

           ASAR
VGT-B2    -0.100
VGT-B3     0.131
VGT-MIR    0.176

Results and discussion

Five methods were used to fuse the ENVISAT ASAR Wide Swath image with the red (B2), the near infrared (B3) and the short wave infrared (MIR) bands of the SPOT-VEGETATION image.

Each fusion resulted in three merged bands. Figure 9 shows color composite images of the resulting merged bands for the different methods, together with the original ASAR image and a color composite of the original VGT bands.

Correlation between fused images and original images

For each method, Table 3 shows the correlation between the merged bands and the corresponding resampled VGT bands, and between the merged bands and the ASAR image. The correlation between the original VGT bands and the ASAR image is listed in Table 4. This correlation is low for all bands (a negative coefficient is even observed for B2), which can be explained by the different characteristics of the two sensors.

The highest correlation between the merged bands and the original VGT bands is observed for the ARSIS and à trous methods (0.818 to 0.836). This indicates that these two methods better preserve the spectral contents of the original VGT image compared to the other methods. The same conclusion is drawn when Figure 9b is compared to Figures 9c and 9d. A low correlation can be observed for the IHS and PC methods (0.032 to 0.297), resulting in completely altered spectral values, as can be observed in Figures 9e and 9f. This can be explained by the low correlation between the ASAR image and the image that is substituted (the intensity component and PC 1 respectively).

The correlation between the merged bands and the ASAR image is highest for the IHS and PC methods (around 0.95). A much lower correlation exists for the ARSIS and à trous methods, which is the result of the fusion technique used, in which only the high frequency components of the ASAR image are added. The correlation coefficients for the à trous method are slightly higher compared to the ARSIS method. Comparison of Figure 9c and Figure 9d reveals that the spatial structures are more apparent in Figure 9d. It can therefore be concluded that, for the image combination used in this study, the à trous method better preserves the spatial details compared to the ARSIS method.

Intermediate correlation coefficients are observed for the PSIMA method. Compared to the other wavelet transform methods, the PSIMA merged bands show a higher correlation with the ASAR image (0.679 to 0.782), since low-frequency information of the ASAR image is incorporated as well. As a result of the multiplication operation, the original spectral values are changed, which explains the lower correlations observed in Table 3. The same conclusion can be drawn when comparing Figure 9b with Figure 9e.
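The multiplicative behaviour described here can be caricatured with a very simplified sketch: the low-frequency parts of the two images are merged by multiplication, so low-frequency SAR information enters the fused band, at the cost of altering the original spectral values. This is NOT the published PSIMA algorithm (see Du et al. 2003 for the real definition); the decomposition, normalization, and function names below are all simplifying assumptions made for illustration only.

```python
import numpy as np
from scipy import ndimage

def psima_like_fusion(vgt_band, asar_img, sigma=4.0):
    """Schematic multiplicative merge (not the exact PSIMA algorithm):
    the low-frequency approximations are combined multiplicatively and
    the high-frequency SAR detail is added on top."""
    vgt_low = ndimage.gaussian_filter(vgt_band.astype(float), sigma)
    asar_low = ndimage.gaussian_filter(asar_img.astype(float), sigma)
    asar_high = asar_img - asar_low
    # Normalize the SAR approximation so the product stays near the
    # radiometric range of the VGT band.
    gain = asar_low / (asar_low.mean() + 1e-12)
    return vgt_low * gain + asar_high
```

Even in this toy form, the multiplication modulates the VGT approximation by the SAR approximation, which is why the fused bands correlate more with ASAR and less with the original VGT values than the purely additive methods.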

Correlation between merged bands for the different fusion methods

In Table 5 the correlation matrices of the original VGT bands and of the merged bands are shown. Compared to the correlation matrix of the VGT bands, higher correlation coefficients are observed for all fusion methods. For the ARSIS and à trous method, the increase in correlation is comparable; it is caused by the addition of similar (high-resolution) information to all bands. The highest increase is noticed for the PSIMA and IHS method. Because of this high correlation, automatic classification procedures may yield poor results. In the case of the PC method, only a very small increase in correlation is observed; yet this method changes the spectral content of the original bands completely, as mentioned above.

Classification results

For each fusion method a classification was performed, as explained above, using the merged bands together with the ASAR image. For comparison, the VGT bands together with the ASAR image, and both data sources separately, were classified as well. Only the top left part of the images, containing the Logone floodplain, was classified. Seven land cover/land use classes were considered: open water, three wetland types and three dry land types. Table 6 provides a more detailed description of the classes.

Overall accuracy and Kappa coefficients are shown in Table 7. All classifications, except for the classification of the ASAR image, show high levels of accuracy, since only well separable classes were considered. When the original VGT bands are used in the classification together with the ASAR image, a significantly higher Kappa coefficient is obtained than when both images are classified separately (0.9642 compared to 0.8708 for VGT and 0.6283 for ASAR). This proves that both data sources provide complementary information.

The highest kappa coefficients are obtained with the results of the à trous method and the PSIMA method, and when the VGT + ASAR bands are used (0.9592, 0.9575 and 0.9642 respectively). There is no significant difference (95% confidence level) between these kappa coefficients, as can be seen in Table 8, which summarizes the results of the Z-test for each image pair. A significantly lower kappa coefficient is obtained using the ARSIS method (0.9283). The classifications obtained using the IHS and PC method show the lowest kappa values of all fusion methods (0.8533 and 0.8275 respectively).

Producer accuracies and consumer accuracies are shown in Table 9 and Table 10. It is apparent that the accuracies for both Water and Wetland type III are very high for all fusion methods. This is because all classifications make use of the ASAR information, in which Water shows very low backscatter due to specular reflection, and Wetland type III shows very high backscatter due to double-bounce interactions between the water surface and the vegetation. Of all land cover/land use classes, Wetland type II is mapped with the lowest accuracy. In the case of the à trous method, a producer accuracy of 98% and a much lower consumer accuracy of 87% are observed; hence the extent of Wetland type II seems to be overestimated in the classification result. The opposite is true for the PSIMA method, which has a consumer accuracy of 92% and a producer accuracy of 88% for Wetland type II.

The classification results with the highest Kappa coefficients are shown in Figure 10, together with the classification obtained when only the VGT bands are used. When Figure 10a is compared to Figure 10b, it can be noticed that using the high-resolution ASAR data in the classification introduces considerable spatial detail. Especially the classes Water and Wetland type III appear more detailed: for example, in the north of the lake, rice fields are well separated, and in the south-east of the lake, small water bodies are mapped. Other classes are mapped with lower spatial detail, which indicates that these classes are separated mainly on the basis of the low-resolution VGT spectral data. With the à trous and PSIMA methods, spatial detail is introduced for all classes without loss of accuracy (Figure 10c and Figure 10d). This can be observed more clearly in Figure 11, which is a more detailed subscene of the classifications in Figure 10.

Figure 9 Original images and color composite images of fused bands (blue=B2; green=B3; red=MIR): (a) ENVISAT ASAR Wide Swath image; (b) SPOT-VEGETATION image; (c) ARSIS; (d) à trous; (e) PSIMA; (f) IHS; (g) PC

Figure 10 Classification results with (a) SPOT-VEGETATION; (b) SPOT-VEGETATION + ASAR; (c) PSIMA + ASAR; (d) à trous + ASAR

Table 5 Correlation matrix of the SPOT-VEGETATION bands and correlation matrices of the fused bands

VGT       B2     B3     MIR        ARSIS      ASAR-B2  ASAR-B3  ASAR-MIR
B2        1.000  0.897  0.744      ASAR-B2    1.000    0.924    0.811
B3        0.897  1.000  0.886      ASAR-B3    0.924    1.000    0.917
MIR       0.744  0.886  1.000      ASAR-MIR   0.811    0.917    1.000

PSIMA     ASAR-B2  ASAR-B3  ASAR-MIR       À trous    ASAR-B2  ASAR-B3  ASAR-MIR
ASAR-B2   1.000    0.964    0.904          ASAR-B2    1.000    0.922    0.804
ASAR-B3   0.964    1.000    0.960          ASAR-B3    0.922    1.000    0.917
ASAR-MIR  0.904    0.960    1.000          ASAR-MIR   0.804    0.917    1.000

PC        ASAR-B2  ASAR-B3  ASAR-MIR       IHS        ASAR-B2  ASAR-B3  ASAR-MIR
ASAR-B2   1.000    0.905    0.776          ASAR-B2    1.000    0.969    0.894
ASAR-B3   0.905    1.000    0.912          ASAR-B3    0.969    1.000    0.942
ASAR-MIR  0.776    0.912    1.000          ASAR-MIR   0.894    0.942    1.000
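The Kappa statistic and the pairwise Z-test underlying Tables 7 and 8 follow Cohen (1960) and the standard large-sample comparison of two kappa coefficients. A compact sketch (the function names are ours, not the authors'; the numeric values in the example come from Table 7):

```python
import numpy as np

def kappa(cm: np.ndarray) -> float:
    """Cohen's kappa from a confusion matrix (rows = reference, cols = map)."""
    cm = cm.astype(float)
    n = cm.sum()
    po = np.trace(cm) / n                                  # observed agreement
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2    # chance agreement
    return (po - pe) / (1.0 - pe)

def kappa_z(k1, var1, k2, var2) -> bool:
    """Pairwise Z-test on two kappa coefficients; True if the difference is
    significant at the 95% confidence level (|Z| > 1.96)."""
    z = abs(k1 - k2) / np.sqrt(var1 + var2)
    return z > 1.96

# Table 7 values: à trous + ASAR vs ARSIS + ASAR — significant difference.
assert kappa_z(0.9642, 0.000029, 0.9283, 0.000056)
# À trous + ASAR vs VGT + ASAR — no significant difference, as in Table 8.
assert not kappa_z(0.9642, 0.000029, 0.9592, 0.000033)
```

This reproduces the pattern of Table 8: the ARSIS classification differs significantly from the top three, while the à trous, PSIMA and VGT + ASAR results are statistically indistinguishable.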

A negative effect of the wavelet fusion methods is that artefacts may appear, resulting in classification errors. This is especially the case for the à trous and ARSIS methods. When the PSIMA fusion method is used, this problem is minimized, due to the multiplication operation performed. This can be seen in Figure 12, showing a detailed subscene containing Water and Wetland type III. In the classification resulting from the à trous method, classification errors caused by artefacts can be observed at the border between the two classes. This is not the case for the PSIMA method.

Figure 11 Subscene taken from the classification with à trous + ASAR (left) and with SPOT-VEGETATION + ASAR (right)

Figure 12 Subscene taken from the classification with PSIMA + ASAR (top) and with à trous + ASAR (bottom)

Table 6 Description of land cover/land use classes

Land cover/land use class   Description
Open water                  Clear open water
Wetland type I              Drying floodplain with sparse wetland vegetation
Wetland type II             Recently flooded floodplains: wet soils with varying degree of vegetation cover
Wetland type III            Floodplain with dense wetland vegetation and rice fields
Dry land type I             Sparsely vegetated dry land, mainly grasses
Dry land type II            Densely vegetated dry land including marginal woodland
Dry land type III           Dry land with intermediate vegetation cover

Table 7 Overall accuracy (O.A.), Kappa coefficient (K) and Kappa variance (σK²) of the classifications obtained using the different methods

Method            O.A. (%)  K       σK²
À trous + ASAR    97        0.9642  0.000029
VGT + ASAR        97        0.9592  0.000033
PSIMA + ASAR      96        0.9575  0.000034
ARSIS + ASAR      94        0.9283  0.000056
VGT               88        0.8708  0.000096
IHS + ASAR        87        0.8533  0.000107
PC fusion + ASAR  85        0.8275  0.000122
ASAR              68        0.6283  0.000211

Conclusions

Three wavelet fusion methods, ARSIS, à trous and PSIMA, were used to combine ENVISAT ASAR and SPOT-VEGETATION data. The purpose of the data fusion was feature enhancement and the improvement of the classification accuracy of a tropical wetland located in the Lake Chad basin, Africa. The fusion methods based on the wavelet transform showed better results for both objectives than the PC and IHS methods.

Both the ARSIS and the à trous method added high-frequency details of the ASAR image to the SPOT-VEGETATION data, while the spectral values of the VGT image were well preserved. These high-frequency details appeared much clearer when the à trous method was used, as could be observed in a color composite image of the à trous fused bands. When the PSIMA method was used, the common lower scales of both images were fused as well. As a consequence, fewer artefacts occurred in the fused images than with the ARSIS and à trous methods. However, in terms of preservation of spectral content, the PSIMA method was outperformed by the ARSIS and à trous methods.


The highest classification accuracies were obtained using the results of the à trous and the PSIMA fusion. Although the Kappa coefficients were not significantly different, visual inspection revealed that some misclassification errors, resulting from artefacts, occurred in the classification of the à trous-fused images.

In remote sensing, multi-scale wavelet fusion methods have primarily been used to combine panchromatic and multi-spectral data. This work has shown that, in the case of wetland mapping, wavelet fusion methods can also be effective for the combination of ENVISAT ASAR data and SPOT-VEGETATION optical data, and can lead to improved classification results. Furthermore, since both image types have a high temporal resolution and cover large areas, the proposed fusion methods can be incorporated in wetland monitoring schemes.

Acknowledgments

Funding of this project was provided by the Federal Office for Scientific, Technical and Cultural Affairs (OSTC, Brussels, Belgium) through a PRODEX contract (project No 15447). The ENVISAT ASAR data were made available by ESA in the framework of AO ID 467. Thanks are due to Eva De Clercq, Frieke Van Coillie and Lieven Verbeke for their valuable comments.

Table 8 Significance of the difference between Kappa coefficients (Y = statistically significant difference at the 95% confidence level)

                VGT +  ARSIS +  PSIMA +  À trous +  IHS +
                ASAR   ASAR     ASAR     ASAR       ASAR
ARSIS + ASAR    Y
PSIMA + ASAR    /      Y
À trous + ASAR  /      Y        /
IHS + ASAR      Y      Y        Y        Y
PC + ASAR       Y      Y        Y        Y          /

Table 9 Class producer accuracies (%) for the different methods (A=Water; B=Wetland I; C=Wetland II; D=Wetland III; E=Dry land I; F=Dry land II; G=Dry land III)

Method            A    B    C    D    E    F    G
VGT + ASAR        100  89   92   100  100  100  96
VGT               97   87   87   86   93   88   86
ASAR              98   76   24   88   46   83   62
ARSIS + ASAR      100  91   80   99   98   99   92
PSIMA + ASAR      100  95   88   100  99   98   96
À trous + ASAR    100  90   98   100  98   100  94
IHS + ASAR        100  77   84   95   77   87   93
PC fusion + ASAR  100  85   80   98   79   87   68

Table 10 Class consumer accuracies (%) for the different methods (A=Water; B=Wetland I; C=Wetland II; D=Wetland III; E=Dry land I; F=Dry land II; G=Dry land III)

Method            A    B    C    D    E    F    G
VGT + ASAR        100  98   86   100  100  99   94
VGT               98   94   78   91   99   88   78
ASAR              100  54   47   96   66   59   58
ARSIS + ASAR      100  90   83   100  98   98   88
PSIMA + ASAR      100  95   92   100  99   98   91
À trous + ASAR    100  97   87   100  99   98   99
IHS + ASAR        100  84   76   99   85   78   93
PC fusion + ASAR  100  79   73   98   82   84   81
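Producer and consumer (user's) accuracies as reported in Tables 9 and 10 are standard per-class ratios over a confusion matrix. A short sketch with a hypothetical two-class matrix (the function name and example numbers are ours) shows the over-mapping pattern discussed for Wetland type II — a high producer accuracy combined with a lower consumer accuracy:

```python
import numpy as np

def producer_consumer_accuracies(cm: np.ndarray):
    """Per-class accuracies from a confusion matrix with rows = reference
    (ground truth) and columns = classification result.

    Producer accuracy: correct / reference total (sensitive to omission).
    Consumer (user) accuracy: correct / classified total (sensitive to
    commission).
    """
    cm = cm.astype(float)
    diag = np.diag(cm)
    producer = diag / cm.sum(axis=1)
    consumer = diag / cm.sum(axis=0)
    return producer, consumer

# Hypothetical example: class B is over-mapped, so its producer accuracy
# is high while its consumer accuracy drops.
cm = np.array([[90, 10],
               [ 2, 98]])
prod, cons = producer_consumer_accuracies(cm)
assert prod[1] == 98 / 100   # producer accuracy of class B: 98%
assert cons[1] == 98 / 108   # consumer accuracy of class B: ~91%
```

The asymmetry in the example (producer 98% vs consumer ~91% for class B) is the same signature as the à trous result for Wetland type II, where the class extent is overestimated.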

Dow

nloa

ded

by [

Uni

vers

ity o

f W

ater

loo]

at 0

7:13

11

Nov

embe

r 20

14

Page 12: ENVISAT ASAR Wide Swath and SPOT‐VEGETATION Image Fusion for Wetland Mapping: Evaluation of Different Wavelet‐based Methods

31

References

Burgis, M.J. and J.J. Symoens, 1987. African wetlands and shallow water bodies. Éditions de l'ORSTOM, Institut Français de recherche scientifique pour le développement en coopération. Collection Travaux et Documents no. 211. Paris. ISBN 2-7099-0881-6.

Cohen, J., 1960. A coefficient of agreement for nominal scales. Educational and Psychological Measurement 20(1): 37-46.

Daubechies, I., 1988. Orthonormal bases of compactly supported wavelets. Communications on Pure and Applied Mathematics 41(7): 909-996.

Du, Y., P.W. Vachon and J.J. van der Sanden, 2003. Satellite image fusion with multiscale analysis for marine applications: preserving spatial information and minimizing artifacts (PSIMA). Canadian Journal of Remote Sensing 29(1): 14-23.

Held, A., C. Ticehurst, L. Lymburner and N. Williams, 2003. High resolution mapping of tropical mangrove ecosystems using hyperspectral and radar remote sensing. International Journal of Remote Sensing 24(13): 2739-2759.

Hess, L.L., J.M. Melack and D.S. Simonett, 1990. Radar detection of flooding beneath the forest canopy: a review. International Journal of Remote Sensing 11(7): 1313-1325.

Kushwaha, S.P.S., R.S. Dwivedi and B.R.M. Rao, 2000. Evaluation of various digital image processing techniques for detection of coastal wetlands using ERS-1 SAR data. International Journal of Remote Sensing 21(3): 565-579.

Núñez, J., X. Otazu, O. Fors, A. Prades, V. Palà and R. Arbiol, 1999. Multiresolution-based image fusion with additive wavelet decomposition. IEEE Transactions on Geoscience and Remote Sensing 37(3): 1204-1211.

Pohl, C. and J.L. Van Genderen, 1998. Multisensor image fusion in remote sensing: concepts, methods and applications. International Journal of Remote Sensing 19(5): 823-854.

Ranchin, T. and L. Wald, 1993. The wavelet transform for the analysis of remotely sensed data. International Journal of Remote Sensing 14(3): 615-619.

Ranchin, T. and L. Wald, 2000. Fusion of high spatial and spectral resolution images: the ARSIS concept and its implementation. Photogrammetric Engineering & Remote Sensing 66(1): 49-61.

Sharma, K.M.S. and A. Sarkar, 1998. A modified contextual classification technique for remote sensing data. Photogrammetric Engineering & Remote Sensing 64(4): 273-280.

Teggi, S., R. Cecchi and F. Serafini, 2003. TM and IRS-1C-PAN data fusion using multiresolution decomposition methods based on the 'à trous' algorithm. International Journal of Remote Sensing 24(6): 1287-1301.

Wald, L., T. Ranchin and M. Mangolini, 1997. Fusion of satellite images of different spatial resolutions: assessing the quality of resulting images. Photogrammetric Engineering & Remote Sensing 63(6): 691-699.
