
IEEE GEOSCIENCE AND REMOTE SENSING LETTERS, VOL. 11, NO. 10, OCTOBER 2014 1727

Structural Feature Modeling of High-Resolution Remote Sensing Images Using Directional Spatial Correlation

Yixiang Chen, Kun Qin, Shunzi Gan, and Tao Wu

Abstract—In the classification of high-resolution remote sensing images, spatial correlations between pixel values are important spatial information. Traditional methods of measuring spatial correlation are inadequate for the extraction of spatial information about the shape and structure of object classes. In this letter, we propose a novel method using directional spatial correlation (DSC) to model and extract spatial information in neighborhoods of pixels. Two sets of descriptors, DSC_I and DSC_C, are defined to describe spatial structural features. The effectiveness of the proposed method was tested by image classification on two data sets. Results show that the DSC-based approach can drastically improve the classification, and comparisons show that it performs better than some existing methods.

Index Terms—Classification, directional spatial correlation (DSC), high resolution, structural feature.

I. INTRODUCTION

HIGH-resolution remote sensing images can provide more detailed geometric information but also show larger spectral heterogeneity, leading to the inadequacy of conventional spectral-based classification for image interpretation. A large number of studies have shown that combining spatial information and spectral information is a good strategy to improve image classification. Features extracted by using gray-level cooccurrence matrices (GLCMs), Gabor wavelets, and Markov random fields have been widely used in the literature to model spatial information in neighborhoods of pixels [1]. Some other spatial feature extraction methods concerning structure [2] and shape [3] have also been proposed.

Another approach to modeling spatial structural features is the use of spatial statistics. Spatial autocorrelation statistics, such as Moran's I and Geary's C indexes, can be applied in measuring the spatial correlation in neighborhoods of pixels, providing valuable spatial information for image interpretation. The strength of spatial correlation can reflect the differing

Manuscript received June 12, 2013; revised October 3, 2013 and January 17, 2014; accepted February 6, 2014. This work was supported in part by the National Key Basic Research and Development Program under Grant 2012CB719903 and in part by the Research Project on High Resolution Remote Sensing for Transportation under Grant 07-Y30A05-9001-12/13.

Y. Chen, K. Qin, and S. Gan are with the School of Remote Sensing and Information Engineering, Faculty of Information Sciences, Wuhan University, Wuhan 430079, China (e-mail: [email protected]).

T. Wu is with the School of Information Science and Technology, Zhanjiang Normal University, Zhanjiang 524048, China.

Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org.

Digital Object Identifier 10.1109/LGRS.2014.2306972

spatial structures of smooth and rough surfaces. For instance, Myint et al. [4] used the global statistics Geary's C and Getis indexes to measure the spatial correlation among neighborhood pixels and compared their performance in improving the classification. Mallinis et al. [5] used the local statistics Moran's I, Geary's C, and Getis indexes as texture measures to detect local patterns in a QuickBird image.

However, most existing studies use a window-based approach to measure the spatial correlation. The main problems of this approach are: 1) the window size has an important impact on both the validity of feature extraction and the computational cost; and 2) a square window is often used. As a result, the window-based spatial statistical approach can only extract information about spectral variability or surface texture, and it is difficult to extract geometric information about the shape and structure of object classes. In high-resolution images, object classes with a geometric shape and structure usually correspond to a range of relatively homogeneous connected pixels. Therefore, the spatial arrangement of these homogeneous pixels is important spatial information, which can reflect the shape and structure of object classes.

In this letter, we propose a novel method to model spatial structural information in neighborhoods of pixels, which considers both the strength and the range of spatial correlation. For each pixel, spatial correlations in eight directions are considered, and two sets of descriptors, i.e., DSC_I and DSC_C, are defined. To validate the proposed method, the features extracted by DSC_I and DSC_C are each used for image classification.

II. METHODOLOGY

A. Spatial Correlation

Spatial correlation (also called spatial autocorrelation) is a measure of the spatial dependence between pixel values in an image. Moran's I and Geary's C are two spatial statistics commonly used in measuring spatial correlation. The two statistics are defined as follows [6]:

I = \frac{\sum_{i=1}^{n}\sum_{j=1}^{n} w_{ij}(x_i - \bar{x})(x_j - \bar{x})}{\sum_{i=1}^{n}\sum_{j=1}^{n} w_{ij}\, S_I^2} \qquad (1)

C = \frac{\sum_{i=1}^{n}\sum_{j=1}^{n} w_{ij}(x_i - x_j)^2}{2\sum_{i=1}^{n}\sum_{j=1}^{n} w_{ij}\, S_C^2} \qquad (2)

1545-598X © 2014 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission. See http://www.ieee.org/publications_standards/publications/rights/index.html for more information.


Fig. 1. DSC. (a) Object classes in a QuickBird image with anisotropic spatial correlation. (b) Eight directions with equal spacing. (c) Polygon produced by sequentially connecting the endpoints of the eight directional lines. (d) Normalized DSC feature values for three classes from the image shown in (a); 1, 2, and 3 on the horizontal axis represent S', I_k, and C_k, respectively.

where n is the number of pixels within a predefined window, x_i is the pixel value at location i, and w_{ij} is the spatial weight; a binary spatial weight matrix is often used, i.e., if i and j are adjacent, then w_{ij} = 1; otherwise, w_{ij} = 0. In addition, \bar{x} = (\sum_{i=1}^{n} x_i)/n is the mean of the pixel values, and S_I^2 = (\sum_{i=1}^{n}(x_i - \bar{x})^2)/n and S_C^2 = (\sum_{i=1}^{n}(x_i - \bar{x})^2)/(n - 1) are two different forms of the variance.

B. DSC

The traditional window-based approaches of spatial autocorrelation statistics mainly utilize information about the degree or strength of spatial correlation. Using a square neighborhood window, it is difficult to extract the features of the shape and geometric structure of object classes. Given a pixel belonging to some object class [e.g., road A, building B, or sports ground C shown in Fig. 1(a)], its homogeneous pixels along different directional lines usually vary in spectral variability and spatial extension. From the perspective of spatial statistics, this characteristic can be called the anisotropy of spatial correlation, and it can provide more useful spatial information.

In this letter, we use directional spatial correlation (DSC) to describe the spatial dependence in neighborhoods of pixels. For each pixel in the image, we consider its neighboring pixels along eight directional lines with equal spacing [as shown in Fig. 1(b)]. To describe the properties of spatial correlation in each direction, both the strength and the length of DSC are taken into consideration in the following.

In the kth (k = 1, 2, ..., 8) direction, the strength of DSC is measured using the following two revised formulas derived from Moran's I and Geary's C, respectively:

I_k = \frac{\sum_{i=1}^{N_k}\sum_{j=1}^{N_k} w_{ij}^{k}(x_i - \bar{x})(x_j - \bar{x})}{\sum_{i=1}^{N_k}\sum_{j=1}^{N_k} w_{ij}^{k}\, S_{II}^2} \qquad (3)

C_k = \frac{\sum_{i=1}^{N_k}\sum_{j=1}^{N_k} w_{ij}^{k}(x_i - x_j)^2}{2\sum_{i=1}^{N_k}\sum_{j=1}^{N_k} w_{ij}^{k}\, S_{CC}^2} \qquad (4)

where N_k is the number of pixels in the kth directional line, and w_{ij}^{k} is the spatial adjacency weight, i.e., if x_i and x_j are adjacent in the kth directional line, then w_{ij}^{k} = 1; otherwise, w_{ij}^{k} = 0. In addition, \bar{x} = (\sum_{i=1}^{N} x_i)/N, S_{II}^2 = (\sum_{i=1}^{N}(x_i - \bar{x})^2)/N, and S_{CC}^2 = (\sum_{i=1}^{N}(x_i - \bar{x})^2)/(N - 1), where N is the number of all pixels in the image. In (3) and (4), the parameter N_k is unknown; therefore, we first need to decide how to calculate N_k for each pixel. Referring to the concept of the directional line of a pixel and its calculation method proposed in [3], we give the following algorithm to calculate the number of pixels in each direction.

In each direction, take the central pixel as the starting pixel and sequentially compare the spectral similarity between each new pixel and the central pixel; if the following condition holds:

|pl − pc| < T (5)

then continue to the next comparison. Here, T is a predefined threshold, p_c represents the value of the central pixel, and p_l is the value of the lth pixel along this directional line. The search process ceases when condition (5) is not met. In each direction, all the pixels between the central pixel and the end pixel are used for the description of DSC, and N_k is equal to the number of these pixels.
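The search above can be sketched as follows. This is a minimal illustration rather than the authors' implementation: the direction ordering, the inclusion of the central pixel in N_k, and unit grid steps along the diagonals are all assumptions not spelled out in the letter.

```python
import numpy as np

# Eight directions with equal angular spacing, as in Fig. 1(b):
# E, NE, N, NW, W, SW, S, SE (row/column offsets on the pixel grid).
DIRECTIONS = [(0, 1), (-1, 1), (-1, 0), (-1, -1),
              (0, -1), (1, -1), (1, 0), (1, 1)]

def directional_lengths(image, row, col, T):
    """For the central pixel at (row, col), walk along each of the eight
    directional lines while |p_l - p_c| < T (condition (5)) holds and the
    image border is not crossed. Returns N_k for k = 1..8, counting the
    central pixel itself."""
    img = np.asarray(image, dtype=float)
    pc = img[row, col]
    lengths = []
    for dr, dc in DIRECTIONS:
        n_k = 1  # the central pixel
        r, c = row + dr, col + dc
        while (0 <= r < img.shape[0] and 0 <= c < img.shape[1]
               and abs(img[r, c] - pc) < T):
            n_k += 1
            r, c = r + dr, c + dc
        lengths.append(n_k)
    return lengths
```

For a pixel on a bright one-pixel-wide horizontal line, the east and west walks run until the image border while all other directions stop immediately, which is exactly the anisotropy the letter exploits.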

For each pixel in the image, N_k may take different values in different directions due to the anisotropy of spatial correlation. Therefore, we define a new parameter L_k to characterize the length or range of DSC and let

L_k = N_k. \qquad (6)

In this letter, we use (L_k, I_k) and (L_k, C_k) to quantitatively describe the spatial correlation in the kth direction. Here, L_k characterizes the length or range of DSC, which is correlated with the size and shape of an object class [as shown in Fig. 1(a)], whereas I_k and C_k characterize the strength or amplitude of DSC, which is correlated with the spectral variability of the object class.

C. Feature Extraction

Based on the aforementioned methods, we can extract structural features for each pixel, which can be used to complement spectral features in image classification. It is expected that pixels in the same object class will have the same or similar spatial features. For this purpose, we compute two descriptors, DSC_I and DSC_C, which integrate the properties of spatial correlation in different directions. The detailed steps are as follows.

1) In each direction, calculate I_k, C_k, and L_k for each pixel (k = 1, 2, ..., 8).

2) Use the average strength \bar{I} or \bar{C} of DSC to describe the surface texture in neighborhoods of pixels, i.e.,

\bar{I} = \left(\sum_{k=1}^{8} I_k\right) \Big/ 8 \qquad (7)

\bar{C} = \left(\sum_{k=1}^{8} C_k\right) \Big/ 8. \qquad (8)


Both averages can reflect the spectral dependence or the spectral variability. Compared with a window-based computation of Moran's I and Geary's C, this method may lose some spatial information, but it can greatly reduce the computational cost since it only uses pixels along eight directional lines instead of in a square window.

3) Sequentially connecting the endpoints of the eight directional lines, we obtain a polygon [see Fig. 1(c)], which can reflect the size, shape, and geometric structure of object classes. Use the area S of the polygon to describe these features in neighborhoods of pixels. By the triangle area formula, we have

S = \frac{1}{2} \sum_{k=1}^{8} L_k \cdot L_{k+1} \cdot \sin\frac{2\pi}{8} \qquad (9)

where L_9 = L_1. In the calculation, the constant factor in (9) can be neglected, and S can be replaced by its simplified version S', i.e., S' = \sum_{k=1}^{8} L_k \cdot L_{k+1}.

4) Let DSC_I = (\bar{I}, S') and DSC_C = (\bar{C}, S'); then, DSC_I and DSC_C are the final two sets of feature vectors or descriptors characterizing the spatial correlation in neighborhoods of pixels.

To better see how DSC_I and DSC_C behave, the histograms of their values for the three classes (road A, building B, and sports ground C) shown in Fig. 1(a) are given in Fig. 1(d). Both DSC_I and DSC_C take different feature values for the different classes; therefore, the proposed DSC-based features could help in improving the image classification.

III. EXPERIMENTS

In this section, the effectiveness of the features DSC_I and DSC_C in improving the image classification is tested on two data sets. Moreover, some other spatial feature extraction methods, including window-based Moran's I (W_I), window-based Geary's C (W_C), GLCMs, a differential morphological profile (DMP), and wavelet transformation (WT), are also employed for performance comparisons. Four commonly used parameters, namely, contrast, correlation, energy, and homogeneity [7], are employed in the GLCM feature extraction, and the parameters used for the WT feature extraction include log energy, Shannon's index, angular second moment, and entropy [8]. The DMP feature uses a multiscale approach based on a range of structuring element (SE) sizes [2]. In our experiments, a circular SE with a step-size increment of 1 was used. The coding platform for feature extraction was MATLAB (R2010b) running on a personal computer with a Pentium Dual-Core CPU at 2.62 GHz and 1.96 GB of RAM.

In order to realize an effective integration of spectral and spatial features, a support vector machine (SVM) is employed for the classification of the combined features [9]. In the experiments, the classification procedure was implemented using the LIBSVM package [10], and the selected kernel function is the radial basis function (RBF), which has proven very effective in many classification problems. The implementation of an SVM with an RBF kernel involves two parameters, i.e., the regularization factor and the kernel parameter. A

Fig. 2. (a) Original image. (b)–(d) Classification maps with spectral features, DSC_I + spectral features, and DSC_C + spectral features, respectively.

commonly used method for optimal parameter selection is a grid search with cross validation. In this letter, the parameters were empirically obtained by comparing the results under different parameter settings. Two statistics, i.e., the overall accuracy (OA) and the kappa coefficient (KC), are used to evaluate the classification results. The statistical significance of the differences in classification accuracy derived using different spatial features was assessed by McNemar's test, which is based on the standardized normal test statistic [11] as follows:

Z = (f_{12} - f_{21}) \big/ \sqrt{f_{12} + f_{21}} \qquad (10)

where f_{12} and f_{21} are the numbers of samples that are correctly classified with one method but incorrectly classified with the other. The difference in accuracy between two classifications is considered statistically significant at the 95% confidence level if |Z| > 1.96.
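The test statistic itself is a one-liner; a minimal sketch (not from the letter):

```python
from math import sqrt

def mcnemar_z(f12, f21):
    """Standardized McNemar statistic (10): f12 counts samples correct
    under method 1 but wrong under method 2; f21 counts the reverse."""
    return (f12 - f21) / sqrt(f12 + f21)

# |Z| > 1.96 indicates a significant accuracy difference at the 95% level.
```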

A. Data Set 1

The first data set was acquired by the ZiYuan-3 (ZY-3) high-resolution satellite of China. The panchromatic band, with a size of 1200 × 800 pixels and a resolution of 2.1 m, over Xingyang, China, was used for the experiment. The object classes in this image [see Fig. 2(a)] include built-up areas (BAs), roads (R), water (W), vegetation (V), and bare soil (BS). Table I shows the number of samples in the training and test sets for each object class.


TABLE I. NUMBER OF THE TRAINING AND TEST SAMPLES FOR THE FIRST DATA SET

Fig. 3. Relationships between the KCs and the threshold T for the proposed features. (a) Results for the first data set. (b) Results for the second data set.

TABLE II. CLASSIFICATION RESULTS OF THE FIRST DATA SET EMPLOYING DIFFERENT FEATURES

Since the computation of the proposed features depends on the threshold T, we first tested how the value of T influences the classification results. Fig. 3(a) shows the relationships between the KCs and the threshold T, ranging from 1 to 45 with a step length of 1. It can be seen that, as a whole, the KCs first increase and then decrease. In particular, at T = 21 and T = 22, the features DSC_I and DSC_C, respectively, yield their best classifications. For the former, the highest KC is 0.8450, whereas for the latter, it is 0.8572. The corresponding classification maps produced by the features DSC_I and DSC_C are shown in Fig. 2(c) and (d), respectively. Fig. 2(b) shows the classification map employing only the gray feature, with the OA and the KC being 62.69% and 0.4464, respectively. In this map, the misclassification is obvious, and the road class and the BA class cannot be discriminated due to their gray-level similarity.

For a fair comparison with the other spatial features, we tested different window sizes (3 × 3, 5 × 5, ..., 21 × 21) for the Moran's I, Geary's C, and GLCM approaches. For the WT-based approach, only four window sizes (3 × 3, 5 × 5, 7 × 7, and 9 × 9) were tested, due to the unacceptable computation time of larger windows. For example, with a 9 × 9 window, the computation time for the WT-based feature is as high as 57.82 min, whereas for the DSC-based approaches, the computation time is only 5.13 min for the DSC_I feature (with T = 21) and 5.35 min for the DSC_C feature (with T = 22). For the DMP feature extraction, we tested different SE sizes. Table II presents the highest accuracy obtained and the related parameters (threshold/window size/SE size). In

TABLE III. STATISTICAL SIGNIFICANCE OF THE DIFFERENCES IN THE CLASSIFICATION ACCURACY FOR THE FIRST DATA SET

Fig. 4. (a) Original image. (b)–(d) Classification maps with spectral features, DSC_I + spectral features, and DSC_C + spectral features, respectively.

Table II, it is shown that the features DSC_I and DSC_C improve the classification more than the others. The McNemar's test results are reported in Table III, where the first row gives the differences in accuracy between the spectral and spectral/spatial classifications, whereas the last two rows are the comparisons between the DSC features and the other spatial features. It is clear that |Z| > 1.96 for each test, which indicates that the differences are statistically significant.

B. Data Set 2

The second data set is a pan-sharpened QuickBird multispectral image of Wuhan, China, with a size of 1000 × 800 pixels and a resolution of 0.6 m; the RGB bands were used for classification. The color composite image is shown in Fig. 4(a). According to the ground truth, the main object classes in this image include water (W), trees (T), asphalt roads (R1), cement roads (R2), red buildings (B1), green buildings (B2), deep-gray buildings (B3), high buildings (B4), and shadow (S). Table IV shows the distribution of samples in the training and test sets among the nine classes.

The classification result is poor when employing only spectral features, with the OA and the KC being 78.48% and 0.7419, respectively. After adding the spatial features, the classification


TABLE IV. NUMBER OF THE TRAINING AND TEST SAMPLES FOR THE SECOND DATA SET

TABLE V. CLASSIFICATION RESULTS OF THE SECOND DATA SET EMPLOYING DIFFERENT FEATURES

TABLE VI. STATISTICAL SIGNIFICANCE OF THE DIFFERENCES IN THE CLASSIFICATION ACCURACY FOR THE SECOND DATA SET

is improved. Fig. 3(b) shows how the KCs vary with the threshold T, ranging from 0 to 30 with a step length of 1; the curves again show an overall trend of first increasing and then decreasing. The DSC_I feature achieves its highest KC at T = 20, with the OA and the KC being 86.77% and 0.8410, respectively, whereas the DSC_C classification peaks at T = 18, with the OA and the KC being 86.03% and 0.8324, respectively (see Table V).

Table V presents the accuracy evaluation of the classification with different spatial features, where "Param" is the optimal parameter of the classification. As a whole, the proposed features and the WT-based feature show better accuracy than the others. In particular, the WT-based feature achieved the highest OA and KC among these features. However, a drawback of the WT-based feature is its computation time: in this experiment, it consumed 140.85 min, whereas the features DSC_I and DSC_C needed only 9.96 and 9.88 min, respectively. Again, Table VI shows that the differences in classification accuracy are statistically significant, as |Z| > 1.96 for each test.

IV. CONCLUSION

In this letter, we have used the proposed DSC-based method to model spatial correlation in neighborhoods of pixels. The experiments show that the proposed method can significantly improve the classification of high-resolution images and shows competitive performance against some existing approaches in accuracy and computation time. In addition, the DSC-based method involves a threshold T characterizing the spectral homogeneity between pixels. In our experiments, the optimal values of T were empirically obtained by comparing the results under different settings. In fact, with selected training and testing samples, this process can be automated by testing values of T from 0 to 2^b − 1 (where b denotes the number of bits per pixel) with a step size of 1 and picking the value with the highest KC. Usually, the first local maximum is the optimal or an approximately optimal value of T. The automatic selection of the threshold T will be studied further in future work.
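The sweep described above can be sketched as follows. This is a hedged illustration of the selection rule only: `kappa_for` is a hypothetical caller-supplied hook that trains and evaluates a classifier with DSC features computed at threshold T and returns the resulting KC.

```python
def select_threshold(kappa_for, b):
    """Sweep T = 0 .. 2^b - 1 (b = bits per pixel) and return the first
    local maximum of the kappa coefficient, falling back to the global
    maximum if the KC curve is monotone."""
    kcs = [kappa_for(t) for t in range(2 ** b)]
    for t in range(1, len(kcs) - 1):
        if kcs[t - 1] < kcs[t] >= kcs[t + 1]:  # first local maximum
            return t, kcs[t]
    best = max(range(len(kcs)), key=kcs.__getitem__)
    return best, kcs[best]
```

In practice each call to `kappa_for` is a full feature-extraction and classification run, so caching or a coarser initial step over T would likely be worthwhile.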

REFERENCES

[1] H. G. Akcay and S. Aksoy, "Automatic detection of geospatial objects using multiple hierarchical segmentations," IEEE Trans. Geosci. Remote Sens., vol. 46, no. 7, pp. 2097–2111, Jul. 2008.

[2] J. A. Benediktsson, M. Pesaresi, and K. Arnason, "Classification and feature extraction for remote sensing images from urban areas based on morphological transformations," IEEE Trans. Geosci. Remote Sens., vol. 41, no. 9, pp. 1940–1949, Sep. 2003.

[3] L. Zhang, X. Huang, B. Huang, and P. Li, "A pixel shape index coupled with spectral information for classification of high spatial resolution remotely sensed imagery," IEEE Trans. Geosci. Remote Sens., vol. 44, no. 10, pp. 2950–2961, Oct. 2006.

[4] S. W. Myint, E. A. Wentz, and S. J. Purkis, "Employing spatial metrics in urban land-use/land-cover mapping: Comparing the Getis and Geary indices," Photogramm. Eng. Remote Sens., vol. 73, no. 12, pp. 1403–1415, Dec. 2007.

[5] G. Mallinis, N. Koutsias, M. Tsakiri-Strati, and M. Karteris, "Object-based classification using Quickbird imagery for delineating forest vegetation polygons in a Mediterranean test site," ISPRS J. Photogramm. Remote Sens., vol. 63, no. 2, pp. 237–250, Mar. 2008.

[6] A. D. Cliff and J. K. Ord, Spatial Processes: Models and Applications. London, U.K.: Pion, 1981.

[7] R. M. Haralick, K. Shanmugam, and I. Dinstein, "Textural features for image classification," IEEE Trans. Syst., Man, Cybern., vol. SMC-3, no. 6, pp. 610–621, Nov. 1973.

[8] S. W. Myint, N. S.-N. Lam, and J. M. Tyler, "Wavelets for urban spatial feature discrimination: Comparisons with fractal, spatial autocorrelation, and spatial co-occurrence approaches," Photogramm. Eng. Remote Sens., vol. 70, no. 7, pp. 803–812, Jul. 2004.

[9] C. Huang, L. S. Davis, and J. R. G. Townshend, "An assessment of support vector machines for land cover classification," Int. J. Remote Sens., vol. 23, no. 4, pp. 725–749, Feb. 2002.

[10] C.-C. Chang and C.-J. Lin, LIBSVM: A library for support vector machines, 2001. [Online]. Available: http://www.csie.ntu.edu.tw/~cjlin/libsvm

[11] G. M. Foody, "Thematic map comparison: Evaluating the statistical significance of differences in classification accuracy," Photogramm. Eng. Remote Sens., vol. 70, no. 5, pp. 627–633, May 2004.