
European Spatial Data Research

EuroSDR Projects

Evaluation of Building Extraction
Report by Harri Kaartinen and Juha Hyyppä

Change Detection
Report by Klaus Steinnocher and Florian Kressler

Sensor and Data Fusion Contest: Information for Mapping from Airborne SAR and Optical Imagery (Phase I)

Report by Anke Bellmann and Olaf Hellwich

Automated Extraction, Refinement, and Update of Road Databases from Imagery and Other Data

Report by Helmut Mayer, Emmanuel Baltsavias, and Uwe Bacher

Official Publication No 50

November 2006


The present publication is the exclusive property of European Spatial Data Research

All rights of translation and reproduction are reserved on behalf of EuroSDR. Published by EuroSDR

Printed by Gopher, Utrecht, The Netherlands


EUROPEAN SPATIAL DATA RESEARCH

PRESIDENT 2006 – 2008: Stig Jönsson, Sweden

SECRETARY-GENERAL: Kevin Mooney, Ireland

DELEGATES BY MEMBER COUNTRY:

Austria: Michael Franzen
Belgium: Ingrid Vanden Berghe
Cyprus: Christos Zenonos; Michael Savvides
Denmark: Joachim Höhle; Lars Jørgensen
Finland: Risto Kuittinen; Juha Vilhomaa
France: Marc Pierrot-Deseilligny; Franck Jung
Germany: Dietmar Grünreich; Günter Nagel; Dieter Fritsch
Hungary: Arpad Barsi
Ireland: Colin Bray
Italy: Carlo Cannafoglia; Riccardo Galetto
Netherlands: Jantien Stoter; Aart-jan Klijnjan
Norway: Jon Arne Trollvik; Ivar Maalen-Johansen
Portugal: Berta Cipriano
Spain: Antonio Arozarena; Francisco Papí Montanel
Sweden: Stig Jönsson; Anders Östman
Switzerland: Francois Golay; André Streilein-Hurni
United Kingdom: Keith Murray; David Chapman

COMMISSION CHAIRPERSONS:

Sensors, Primary Data Acquisition and Georeferencing: Michael Cramer, Germany
Image Analysis and Information Extraction: Juha Hyyppä, Finland
Production Systems and Processes: Eberhard Gülch, Germany
Core GeoInformation Databases: Keith Murray, United Kingdom
Integration and Delivery of Data and Services: Mike Jackson

OFFICE OF PUBLICATIONS:

Bundesamt für Kartographie und Geodäsie (BKG)
Publications Officer: Andreas Busch
Richard-Strauss-Allee 11
60598 Frankfurt
Germany
Tel.: +49 69 6333 312
Fax: +49 69 6333 441


CONTACT DETAILS:

Web: www.eurosdr.net
President: [email protected]
Secretary-General: [email protected]
Secretariat: [email protected]

EuroSDR Secretariat
Faculty of the Built Environment
Dublin Institute of Technology
Bolton Street
Dublin 1
Ireland

Tel.: +353 1 4023933

The official publications of EuroSDR are peer-reviewed.


EuroSDR-Project

Commission 2 “Image analysis and information content”

In conjunction with the IEEE GRSS Data Fusion Technical Committee (DFC)

“Sensor and Data Fusion Contest: Information for Mapping from Airborne SAR and Optical Imagery”

Final Report on Phase I

Report by Anke Bellmann and Olaf Hellwich

Computer Vision & Remote Sensing - University of Technology, Berlin


Abstract

The EuroSDR “Sensor and Data Fusion Contest” is concerned with the question of whether mapping information usually obtained from optical imagery can also be derived from radar data with comparable quality. In the first phase of the contest, both optical and SAR data were processed visually in four test regions. To prepare a standardized evaluation procedure, reference data sets were generated for each test region, providing the basis for qualitative as well as quantitative data comparison. In the first part of this report the four test site areas are introduced. Subsequently, the generation of the reference data is described, and the evaluation methods are then explained in detail. The comparison of contest participants' map data with the reference data yields percentage values of false positive and false negative extractions with respect to existing objects, complemented by a visual interpretation of the map data. Furthermore, taking the individual nature of each participant's results into consideration, we conclude that a pixel-accurate evaluation is not possible for all information types. Finally, short conclusions summarize the results of the visual interpretation phase of the contest.

1 Introduction

In the last few years, high-quality images of the earth acquired by synthetic aperture radar (SAR) systems carried on a variety of airborne and spaceborne platforms have become increasingly available. During the past couple of decades, much research has been performed on SAR satellite systems such as ERS-1/2, JERS, and RADARSAT-1, demonstrating the potential of SAR for a wide variety of applications, such as deriving DEMs, geomorphological mapping, or extracting objects of different types.

1.1 Aim of the Project

The aim of the contest was to answer two questions: (1) Can state-of-the-art airborne SAR compete with optical sensors in the mapping domain? (2) What can be gained when SAR and optical images are used in conjunction, i.e. when methods for information fusion are applied?

On the one hand, there are indications that airborne SAR images will play a major role in topographic mapping in the future, due to their major advantages, namely independence of daylight and cloud cover. On the other hand, the interpretation of SAR images is difficult for a number of reasons: the geometry and spectral range differ from those of optical imagery and, more importantly, from the imagery perceived by the human eye. In addition, the reflectance properties of objects in the microwave range depend on the frequency band used and may differ significantly from the usual assumption of more or less diffuse reflection at the earth's surface. This effect can be particularly strong for buildings or metal surfaces. Speckle, a consequence of the coherent radiation needed to exploit the SAR principle, and other disturbing factors further complicate the interpretation. Therefore, mapping staff, for example photogrammetric operators, often have difficulties interpreting SAR imagery for topographic mapping.

The EuroSDR sensor and data fusion contest was initiated in order to find out to what extent SAR imagery can compete with high resolution optical imagery for topographic mapping applications and to investigate the potential of fusing data of both sensor types.


The participant table (Table 2) records, for each institute, the contest phases (I-III), the image data used (SAR, optical), and the object classes addressed (roads, railways, rivers, agriculture, forest, built-up areas, lakes). The participating institutes are:

• Budapest University of Technology and Economics, Photogrammetry Department, HUNGARY
• General Command of Mapping, TURKEY
• University of Hannover, Institute for Photogrammetry and GeoInformation, GERMANY
• University of Rome “La Sapienza”, Department Info-Com, ITALY
• Fraunhofer Institute for Information and Data Processing, GERMANY
• Technical University Munich, Photogrammetry and Remote Sensing, GERMANY
• Ness A.T Inc., ISRAEL
• TNO Human Factors, THE NETHERLANDS
• Chinese Academy of Surveying and Mapping, CHINA

1.2 Contest Phases

In Phase I, test participants visually interpreted the imagery and interactively captured topographic objects from ortho-images of one or more study areas. In the second phase, test participants used automatic methods for object detection and classification. Finally, data fusion will be used in the third phase to derive results using both sensors (Table 1).

Phase I Visual Interpretation

Phase II Automatic Object Extraction and Classification

Phase III Sensor Fusion

Table 1: Contest Phases

In a preliminary phase of the contest, the test site data were organised and prepared for use in the contest. After this work, the first phase started in July 2003. The duration of each contest phase was approximately one year.

1.3 Participants

In total, nine institutes are participating in the contest. Table 2 shows a list of these institutes, the phases they are taking part in, and their interests in terms of sensor type and object classes.

Table 2: EuroSDR Contest Participants


2 Test Sites and Test Data

The test sites investigated in the contest were selected to be neither too complex nor to contain overly simplistic structures. For example, very densely built-up urban regions are hardly found among them. Such regions would certainly have posed a particularly difficult challenge for SAR imaging, which is why it seemed appropriate not to investigate them intensively. The selected test areas therefore include agricultural and forested rural regions as well as industrial and urban areas. Four test sites with different contents were selected for the contest. The test areas were located in

• Germany – Trudering: area around the new fair in Munich; industrial and rural regions

• Germany – Oberpfaffenhofen: campus and runway of DLR; agricultural and airport regions

• Denmark – Copenhagen: city of Copenhagen; residential and industrial areas

• Sweden – Fjärdhundra: rural area; forested and agricultural regions

2.1 Specification of the Test Data

The data used in the contest consisted of optical color as well as black-and-white photography of photogrammetric quality and state-of-the-art airborne SAR with multiple polarizations and different wavelengths. To meet the objectives, only single-pass SAR data were used. In order to account for the large variety of modern SAR systems, different frequencies as well as polarimetric data were selected. Certainly, there are even more parameters strongly influencing the appearance of topographic objects in SAR imagery, such as incidence angle and direction. In spite of their importance regarding object appearance, these parameters could not be considered, as the high demands of the contest regarding resolution, and other more practical matters such as data costs and copyright, already impose restrictive conditions. The specifications of the SAR images are shown in Table 3.

Test Site                   Sensor   Polarisation    Test Data   Resolution   Looks
Trudering, Germany          AeS-1    None            X-Band      1.5 m        16
Oberpfaffenhofen, Germany   E-SAR    Lexicographic   L-Band      3.0 m        4
Copenhagen, Denmark         EMISAR   Pauli           C-Band      4.0 m        8
Fjärdhundra, Sweden         EMISAR   Pauli           C-Band      4.0 m        8

Table 3: SAR images used for different test areas

The resolution of the test data ranges from 1.5 m to 4.0 m and depends on the resolution of the SAR sensors. The corresponding optical images were of higher resolution, so they had to be resampled to the pixel size of the SAR images to enable a fair comparison. Thus, the resolutions of the corresponding optical images are the same as those of the SAR images. The following figures (Figure 1 - Figure 4) show the four chosen test data sets. On the left side of each figure the optical image is shown; on the right side the corresponding SAR image. The sections shown are co-registered with pixel accuracy.
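As an illustration of the resampling step, a block-averaging downsampler can be sketched in a few lines of NumPy. The function name, the factor, and the toy image are hypothetical; the report used ERDAS tooling and does not specify the resampling algorithm:

```python
import numpy as np

def resample_to_coarser(img, factor):
    """Downsample a 2-D image by block averaging so that its pixel
    size matches a coarser image (e.g. optical down to SAR pixel size)."""
    h, w = img.shape
    h, w = h - h % factor, w - w % factor       # crop to a multiple of factor
    blocks = img[:h, :w].reshape(h // factor, factor, w // factor, factor)
    return blocks.mean(axis=(1, 3))

# Toy 4x4 "optical" image reduced by a hypothetical factor of 2
optical = np.arange(16, dtype=float).reshape(4, 4)
coarse = resample_to_coarser(optical, 2)        # shape (2, 2)
```

Each output pixel is the mean of a factor-by-factor block, so radiometry is averaged rather than merely subsampled.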


Figure 1: Data set Copenhagen, urban and suburban areas

Figure 2: Data set Trudering, industrial and rural areas


Figure 3: Data set Fjärdhundra, agricultural and forested areas

Figure 4: Data set Oberpfaffenhofen, agricultural and airport areas


The pictures show the same display level of detail but do not possess the same geocoding. To create reference data, an identical geocoding of corresponding images was necessary so that the pictures could be displayed pixel-accurately in the same position. This operation was carried out using ERDAS software from Leica Geosystems. It should be noted, however, that the information of interest in the contest is not a correct interpretation in terms of terrestrial coordinates but the correct detection, interpretation, and classification of objects. Therefore, a superordinate georeferencing based on terrestrial coordinates was not essential.

The SAR data sets from Copenhagen and Fjärdhundra were provided by DDRE, Copenhagen; the data set from Oberpfaffenhofen by DLR-IHR, Oberpfaffenhofen; and the data set from Trudering by AeroSensing GmbH (now Intermap). The optical data set from Trudering was provided by Bayerisches Landesvermessungsamt, Munich; from Oberpfaffenhofen by DLR-IHR; from Fjärdhundra by Lantmäteriet, Sweden; and finally the data set from Copenhagen was provided by KAMPSAX, DDOland Denmark.

The acquisition times were different for the four test sites. Fjärdhundra imagery was taken in July 1995, Copenhagen in June 1996, Oberpfaffenhofen in May 1999 and finally the imagery from Trudering was taken in May 2000.

2.2 Additional Data

In addition to SAR and optical data, ground truth data were needed as additional sources of information about the test regions and object types. Therefore, topographic maps of the areas around the test sites were collected. Topographic maps are, in general, approximations in that small objects are broadened, but they also include more detailed information about the type of land use and classification. Table 4 shows the topographic maps used to create the reference data for the four test areas.

Test Site          Map Name                        Scale      Provider, Publication Date
Copenhagen         Danmark, 1514 II SØ, Holte      1:25.000   Kort- & Matrikelstyrelsen, 1995
Fjärdhundra        Gröna kartan, Enköping 11H NV   1:50.000   Lantmäteriverket Gävle, 1993
Trudering          München – Trudering, 7836       1:25.000   Bayerisches Landesvermessungsamt München, 1997
Oberpfaffenhofen   Weßling, 7933                   1:25.000   Bayerisches Landesvermessungsamt, 2002

Table 4: Topographic maps for the test areas


The map of Sweden is only available at a scale of 1:50.000 because no larger scale was offered for this area. Furthermore, the maps from Germany were also obtainable as digital maps. The maps from Sweden and Denmark were digitized and georeferenced to be subsequently used in digital form. Figure 5 shows the topographic maps used.

Figure 5: Topographic maps: a) Trudering, b) Oberpfaffenhofen, c) Copenhagen (scanned), d) Fjärdhundra (scanned)


3 Reference Data

In order to have a basis for analyzing the interpretations produced as part of the contest, it was necessary to create reference data sets for the different test sites. Reference data contents should ideally include information from optical and SAR imagery as well as from topographic maps, accounting for the fact that visual interpretation of image data alone (as conducted by the test participants) can only achieve an image-specific interpretation of reality, excluding non-image information. The outcome of reference data compilation was therefore mapping content considered optimal for the test's purposes from a mapping expert's point of view, admittedly a somewhat subjective one. These data were used as the basis for comparing participants' results.

3.1 Generation of Reference Data

The scanned and the digital topographic maps had to be registered with the corresponding optical and SAR images. For reasons of simplicity, geometric reference was taken from the optical images to register the maps. After registration, the maps were geometrically compatible with optical and SAR imagery. However, quantitative (pixel-accurate) interpretations of georeferenced topographic maps were not possible due to inaccuracies of these types of maps. They were only used to examine the content.

Reference data were generated using the vector-based desktop GIS software ArcView from ESRI. Information from SAR and optical data was overlaid and used to produce an exact reference map; missing information was obtained from the topographic map. Interaction and thorough processing guaranteed highly accurate reference data for all data sets (Figure 6). To avoid a bias of the reference maps towards either optical or SAR imagery, the reference information was extracted from both sensors' images on equal terms.

Figure 6: From three basic information sources to the reference map (left: optical image, topographic map, SAR image; right: reference map of Copenhagen)


3.2 Accounting for Different Acquisition Times

Since data acquisition times were not the same for the optical and SAR images, the data sets show marginal discrepancies in some places. To nevertheless obtain comparable data from both sensor types, masks were created to cut out objects and terrain that were not clearly defined (Figure 7). Areas covered by the masks were not included in the analysis, in order to avoid incorrect interpretation of the information content. The same procedure had to be adopted for a part of the airport area of Oberpfaffenhofen.
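The masking principle can be sketched with NumPy boolean indexing; all array values below are hypothetical illustrations, not contest data:

```python
import numpy as np

# Hypothetical binary object layer (1 = object present) and exclusion mask
# (1 = content differs between acquisition dates, so the pixel is excluded).
reference = np.array([[1, 1, 0],
                      [0, 1, 0],
                      [0, 0, 1]])
mask = np.array([[0, 0, 0],
                 [0, 0, 0],
                 [0, 0, 1]])

valid = mask == 0                    # True where the pixel may be analysed
pixels_analysed = reference[valid]   # masked pixels never enter the statistics
```

Applying the same mask to reference and interpretation layers guarantees that ambiguous regions influence neither the false-positive nor the false-negative counts.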

Figure 7: Areas with different content in SAR and optical images as well as in the topographic map (left); region with different content covered with a mask (right)

In order to achieve quantitative results, and because the data analysis was conducted on raster images, each layer of the reference maps was converted into a raster data set with a pixel size corresponding to the imagery. This data conversion was carried out using ERDAS Imagine from Leica Geosystems.

3.3 Reference Maps

As a basis for data evaluation, object classes were categorized into linear and areal (extended) area classes. Object classes used for test sites were:

- Built-up - Railways

- Watercourses - Roads

- Agriculture - Alleys

- Forest

In addition to these general classes, which were present in all data sets, region-specific object classes were defined for each data set considering the specific data content (Table 5). For some areas in the data sets of Copenhagen and Trudering it was not possible to obtain a correct classification. Neither the image content nor the map information allowed the unambiguous definition of a class. These areas were classified as undefined spots and not included in the analysis.




Copenhagen       Fjärdhundra      Trudering           Oberpfaffenhofen
Sport field      Sport field      Industrial areas    Industrial areas
Green corridor   Green corridor   Green corridor
River                             Excavation area     Excavation area
Trench           Railway          Parking area
Railway
                                  Only in SAR image   Airport area
undefined                         undefined

Table 5: Additional object classes

Figure 8 shows the reference maps compiled manually with a vector-based GIS software system. For each object class a separate layer was created. The reference maps are of the same size as the optical and SAR images. To compare the reference data with the participants' data to pixel accuracy, each layer of the vector-based maps was converted into a raster data layer of the same pixel size as the original images.

Figure 8: Reference Maps, a) Trudering, b) Oberpfaffenhofen, c) Copenhagen, d) Fjärdhundra


4 Analysis

4.1 Basic Information

In the initial stages of analyzing the interpreted data, numerous problems became apparent. One problem that needed to be dealt with is the individual interpretation style of each interpreter. As previously shown in ALBERTZ (1970), the quality of image interpretation is very much affected by the interpreter's level of experience. This is especially true for SAR imagery, because an understanding of its different imaging characteristics is required. A second issue concerned the different programs used by the interpreters. Some interpreters used vector-based programs with separate layers for different objects, while others worked with raster-based programs and did not separate layers.

To achieve a consistent analysis, the following basic principles of data analysis were predefined:

• Interpretations delivered by participants as vector data were converted into raster images, since data analyses are conducted on raster images.
• Raster images were separated into different layers, one layer for each object class.
• Comparisons of the results were conducted for corresponding layers.
• Results of the data analysis are expressed as percentage values; distances and area sizes were not computed.

4.2 Evaluation Procedure

To evaluate the interpretation results, objects were separated into areal and linear object types, since the analysis differs between the two. Image processing for both types of analysis was carried out with ERDAS Imagine from Leica.

4.2.1 Areal Areas

Comparison of areal objects can take place with pixel accuracy. Each object class was evaluated separately. Therefore, each layer from the reference image was subtracted from the corresponding layer in the participants’ images.

Figure 9: a) Reference data, b) participant’s data, c) result of subtraction



Figure 9 shows an example of the image processing inputs and result for part of a built-up area analysis. The reference data set is displayed on the left, the participant's interpretation in the middle, and the result of the image subtraction on the right-hand side. The different colours in the resulting image are defined as follows:

• Black: correctly interpreted areas
• Orange: false positively interpreted areas
• Yellow: false negatively interpreted areas

Correctly interpreted areas are those parts which were located in the interpreter's image as well as in the reference image. The parts defined as false positively interpreted are those areas which were located by the interpreter but do not appear in the reference map of the corresponding object class. Finally, the false negatively interpreted areas were located in the reference map but not detected by the interpreter. Each object class layer of all interpreters was compared with the corresponding layer of the reference data in this way.

Figure 10: Image processing procedure

To achieve comparable information for the results, the number of pixels of reference content of each object layer was set to 100%. Thus the numbers of pixels of false negatively interpreted areas (yellow) and correctly interpreted areas (grey) add up to 100% in the resulting image. Areas which were false positively classified (orange) by the interpreter were added as percentage values on top of the 100% (Figure 10).
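Under the assumption that each object-class layer is a binary raster, the subtraction-based comparison and this percentage convention can be sketched as follows. Function and variable names are hypothetical; the report used ERDAS Imagine, not Python:

```python
import numpy as np

def areal_scores(reference, interpreted, valid=None):
    """Compare one binary object-class layer against the reference layer.

    Following the report's convention, correct and false-negative
    percentages sum to 100 % of the reference pixels, while false
    positives are reported on top of that.
    """
    ref = reference.astype(bool)
    itp = interpreted.astype(bool)
    if valid is not None:                # optionally exclude masked areas
        ref, itp = ref[valid], itp[valid]
    n_ref = ref.sum()                    # reference content defines 100 %
    correct = (ref & itp).sum() / n_ref * 100
    false_neg = (ref & ~itp).sum() / n_ref * 100
    false_pos = (~ref & itp).sum() / n_ref * 100
    return correct, false_neg, false_pos

ref = np.array([[1, 1], [1, 0]])         # toy reference layer
itp = np.array([[1, 0], [1, 1]])         # toy interpretation layer
correct, false_neg, false_pos = areal_scores(ref, itp)
```

Because all rates are normalised by the reference pixel count, correct and false-negative shares always close to 100 %, while the false-positive share can push the plotted total above it, exactly as in the bar charts of Section 5.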



4.2.2 Linear Objects

Since it is neither meaningful nor practically possible to compare one-pixel-wide lines in two images pixel by pixel, another approach to comparing linear features had to be developed. To compare linear objects, buffer zones were created around the reference and interpreted linear objects. The lines were buffered twice. Since an ordinary expressway is about 25 m to 30 m wide, the first buffer zone was created with a width of 30 m. This ensures that marginal discrepancies in interpretation do not directly imply a failure of the interpretation. The second buffer zone was created with twice the width of the first. For a particular comparison, a buffered line and a one-pixel-wide line were processed, stemming either from the reference or from the interpretation. If the one-pixel-wide reference line is compared to the buffered interpreter's line, one obtains the false negative part outside of the buffer. Conversely, if the one-pixel-wide interpreter's line is compared to the buffered reference line, one obtains the false positive part outside of the buffer.

Figure 11: Definition of false positively and false negatively classified linear objects

Using image processing techniques, the image of the one-pixel-wide line and the image of the buffered lines were combined in one image. If the one-pixel-wide line is placed in the first buffer (Figure 11, black line) the line is correctly located. If the one-pixel-wide line is placed outside of the first buffer but within the second buffer (Figure 11, dashed line), the linear object is considered detected, but in a minimally incorrect position. If the line is placed outside of the second buffer (Figure 11, dotted line), it is considered as false positive or false negative. Comparing the one-pixel-wide reference line with the buffered interpreter’s line yields the false negative part and vice versa for the false positive part.
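A raster approximation of this buffer comparison can be sketched with NumPy. The pixel radius is a stand-in for the report's 30 m vector buffers, the sketch implements only the outside-the-buffer test for a single buffer width, and all names and toy lines are hypothetical:

```python
import numpy as np

def buffer_mask(line, radius_px):
    """Grow a one-pixel-wide raster line into a buffer zone of the
    given pixel radius (a stand-in for the report's vector buffers)."""
    ys, xs = np.nonzero(line)
    yy, xx = np.mgrid[0:line.shape[0], 0:line.shape[1]]
    dist = np.full(line.shape, np.inf)
    for y, x in zip(ys, xs):             # distance to the nearest line pixel
        dist = np.minimum(dist, np.hypot(yy - y, xx - x))
    return dist <= radius_px

def linear_misses(ref_line, itp_line, radius_px):
    """Line pixels falling outside the other line's buffer:
    reference vs. buffered interpretation -> false-negative pixels,
    interpretation vs. buffered reference -> false-positive pixels."""
    fn = ref_line.astype(bool) & ~buffer_mask(itp_line, radius_px)
    fp = itp_line.astype(bool) & ~buffer_mask(ref_line, radius_px)
    return int(fn.sum()), int(fp.sum())

ref = np.zeros((5, 5), int)
ref[2, :] = 1                            # horizontal reference road
itp = np.zeros((5, 5), int)
itp[3, :] = 1                            # interpretation, one pixel off
```

With a one-pixel buffer the offset interpretation produces no misses; with a zero buffer every pixel of both lines counts as a miss, illustrating why the buffer tolerance matters.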



To obtain standardised values for the quality of interpretation, the numbers of pixels of correctly classified objects (from negative and positive analysis) and the numbers of pixels of incorrectly positioned objects were averaged to give AVcor and AVinc respectively. To calculate the total number of pixels, which will represent 100%, the following measure was used:

100% = AVcor + AVinc + fn, where fn is the number of false negative pixels (1)

As was the case for areal objects, false positively classified pixels (orange) were added to 100% to display the number of all interpreted pixels. An example of a graph resulting from analysing linear objects is shown in Figure 12.

Whereas the grey part corresponds to correct interpretations, the green part is well classified with a marginally incorrect position. The yellow part represents false negatively interpreted classifications, i.e. not detected, and the orange part represents false positively interpreted classifications.
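Equation (1) can be turned into a small calculation; the pixel counts below are invented example values, and the function name is hypothetical:

```python
def linear_percentages(cor_neg, cor_pos, inc_neg, inc_pos, fn):
    """Normalise linear-object pixel counts following equation (1):
    100 % = AVcor + AVinc + fn."""
    av_cor = (cor_neg + cor_pos) / 2    # averaged correctly classified pixels
    av_inc = (inc_neg + inc_pos) / 2    # averaged incorrectly positioned pixels
    total = av_cor + av_inc + fn        # this sum represents 100 %
    return {
        "correct": av_cor / total * 100,
        "incorrect position": av_inc / total * 100,
        "not detected": fn / total * 100,
    }

# Invented example pixel counts from the negative and positive analyses
scores = linear_percentages(cor_neg=700, cor_pos=740,
                            inc_neg=60, inc_pos=80, fn=210)
```

The three returned shares always sum to 100 %; the false-positive share would be computed separately against the same total and stacked on top, as for areal objects.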

Figure 12: Example for linear analysis

4.2.3 Qualitative Comparison

In addition to the quantitative analysis of areal and linear objects, a qualitative comparison was conducted to evaluate special objects which were not detected by every interpreter, such as sports fields, bridges, or land use boundaries. The qualitative comparison was carried out visually and was necessary since some interpreters provided a more detailed classification than others. Similarly, the detection of boundaries, ruins, barracks, and parking lots cannot be evaluated quantitatively, because these detections depend on the interpreter's classification and can neither be examined pixel by pixel nor validated against the topographic maps.


5 Results of Phase I

5.1 Test Site Copenhagen

5.1.1 Areal and Linear Objects

The Copenhagen test data set contains urban and suburban areas at a resolution of 4 m per pixel. The images have a size of 1077 x 729 pixels. The optical image was evaluated by four participants and the SAR image by seven participants. The following figures (Figure 13 - Figure 17) show the results of the analysis for areal and linear objects.

Figure 13: Copenhagen Result, Agricultural Area (chart values in %)

  optical (Int 2, Int 3, Int 7, Int 8; three value columns as plotted):
    correct          85.7   78.2   85.6
    incorrect neg.   14.3   21.8   14.4
    incorrect pos.    7.6    6.5   10.1

  SAR (Int 1 - Int 6; five value columns as plotted):
    correct          81.5   81.0   68.1   78.1   77.4
    incorrect neg.   18.5   19.0   31.9   21.9   22.6
    incorrect pos.   12.5    7.7    9.3    4.5    9.6

Figure 14: Copenhagen Result, Built-Up Area (chart values in %)

  optical (Int 2, Int 3, Int 7, Int 8; three value columns as plotted):
    correct          92.2   90.9   86.0
    incorrect neg.    8.6    4.2    2.5
    incorrect pos.    7.8    9.1   14.0

  SAR (Int 1 - Int 6; five value columns as plotted):
    correct          89.6   90.1   87.1   80.7   70.5
    incorrect neg.   10.4    9.9   12.9   19.3   29.5
    incorrect pos.    8.3    9.4    8.4    4.0    6.8


[Bar charts: per-interpreter classification percentages for forested areas, lakes, and railways & roads (the latter also showing incorrectly positioned shares), in optical and SAR imagery; see Figure 15 - Figure 17.]

Figure 15: Copenhagen Result, Forested Area

Figure 16: Copenhagen Result, Lakes

Figure 17: Copenhagen Result, Linear Objects
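The four linear-object categories in Figure 17 (correct, incorrect negative, incorrect position, incorrect positive) suggest a buffer-based comparison of extracted lines against reference centerlines. The report's exact definition (Figure 11) is not reproduced here, so the sketch below is a plausible reading under assumed tolerances `tight` and `loose` (both our choice): reference pixels matched within `tight` count as correct, those matched only within `loose` as incorrectly positioned, the remainder as incorrect negatives, and extracted pixels farther than `loose` from any reference pixel as incorrect positives.

```python
import numpy as np

def _min_dist(points_from, points_to):
    """Distance from each point in `points_from` (N, 2) to the nearest
    point in `points_to` (M, 2); infinity if `points_to` is empty."""
    if len(points_to) == 0:
        return np.full(len(points_from), np.inf)
    diff = points_from[:, None, :].astype(float) - points_to[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=2)).min(axis=1)

def linear_scores(reference, extraction, tight=1.0, loose=3.0):
    """Compare extracted linear objects against reference centerlines,
    both given as boolean rasters. All results are percentages of the
    reference length (pixel count), so the first three sum to 100 while
    'incorrect positive' may exceed 100."""
    ref_pts = np.argwhere(np.asarray(reference, dtype=bool))
    ext_pts = np.argwhere(np.asarray(extraction, dtype=bool))
    ref_len = len(ref_pts)
    d_ref = _min_dist(ref_pts, ext_pts)   # reference -> nearest extraction
    d_ext = _min_dist(ext_pts, ref_pts)   # extraction -> nearest reference
    correct = int(np.sum(d_ref <= tight))
    displaced = int(np.sum((d_ref > tight) & (d_ref <= loose)))
    missed = ref_len - correct - displaced
    false_pos = int(np.sum(d_ext > loose))
    pct = lambda n: 100.0 * n / ref_len
    return {"correct": pct(correct),
            "incorrect position": pct(displaced),
            "incorrect negative": pct(missed),
            "incorrect positive": pct(false_pos)}
```

The brute-force nearest-neighbour search keeps the sketch dependency-free; for the image sizes used in the contest a distance transform would be the practical choice.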


5.1.2 Qualitative Comparison

In addition to the numerical analysis, a qualitative comparison of special objects was performed, because some participants classified more, and more detailed, objects than others. The results of the qualitative comparison are shown in Table 6.

SAR image | Optical image
Freeway: detected | detected
Main Road: predominantly detected | detected
Alleys: few to none | predominantly detected
Trenches: none | few to none
Lakes: big = good, small = partially | big = all, small = predominantly
Trees: depends on interpreter, but possible | separate ones mostly detected
Buildings: big ones detected, smaller ones partially but not always correct | big and small ones detected exemplary
Golf Course: only one detection | only two detections
Additionally detected (depending on interpreter): land use boundary, ruins, barracks, parking lots
Sports Field: partially detected | nearly all of them

Table 6: Qualitative Comparison of Copenhagen

5.1.3 Summary – Copenhagen

The results of the visual interpretation of Copenhagen show that large and contiguous areas such as forest, agricultural and built-up areas were detected equally well in SAR and in optical imagery. Smaller areal objects like lakes and separate buildings are more difficult to extract from SAR imagery. Concerning linear objects, it can be ascertained that highways and main roads were detected equally well in both types of imagery. Smaller linear objects, such as alleys, trenches and secondary roads, could not be detected in SAR imagery, but partially in optical imagery. In addition, distinguishing roads from alleys appeared to be easier in optical images.

The golf course, located on the left side of the images, was mostly detected as a green corridor. Only two interpreters of optical and one interpreter of SAR imagery classified that area as a golf course. The classification of this area in the ground truth (reference) data could only be done reliably with information from the topographic map.

The visual analysis shows that some interpreters used a more detailed classification than others: they additionally detected land use boundaries, ruins, barracks and parking lots. These objects cannot be confirmed with certainty, because there is no evidence for them in the available ground truth data.


[Bar charts: per-interpreter classification percentages for agricultural and built-up areas in optical and SAR imagery; see Figure 18 and Figure 19.]

5.2 Test Site Fjärdhundra

5.2.1 Areal and Linear Objects

The test site data set of Fjärdhundra contains mainly agricultural and forested areas and has a resolution of 4 m per pixel. The images have a size of 910 x 956 pixels. The optical image was evaluated by five participants and the SAR image by seven participants. The following figures (Figure 18 - Figure 22) show the results of the analysis for areal and linear objects.

Figure 18: Fjärdhundra Result, Agricultural Area

Figure 19: Fjärdhundra Result, Built-Up Area


[Bar charts: per-interpreter classification percentages for forested areas, railways & roads, and the river (linear results also showing incorrectly positioned shares), in optical and SAR imagery; see Figure 20 - Figure 22.]

Figure 20: Fjärdhundra Result, Forested Area

Figure 21: Fjärdhundra Result, Linear Objects: Roads

Figure 22: Fjärdhundra Result, Linear Objects: River


5.2.2 Qualitative Comparison

As in the analysis of Copenhagen, a qualitative comparison was performed in addition to the analysis of the main object classes. The results are shown in Table 7.

SAR image | Optical image
Main Road: detected | detected
Alleys: few to none | predominantly detected
Trenches: none | few to none
Trees: depends on interpreter, but possible | separate ones mostly detected
Little grown Tree Areas: detection hardly possible, they do not look like forest areas | detection easier, trees can be found even if little grown
Built-Up Areas: big contiguous areas mostly well detected, but small areas hardly detected | big contiguous areas well detected, even small areas can be found
Sports Field: partially detected | nearly all of them
Roads: predominantly detected | detected
River: mostly detected | well detected
Additionally detected (depending on interpreter): land use boundary, gardening

Table 7: Qualitative Comparison of Fjärdhundra

5.2.3 Summary – Fjärdhundra

The results are similar to those of Copenhagen. Only the continuous and large areas of agriculture and forest are detected equally well in SAR and in optical imagery. Even though lakes were not included in the reference map (they were not present in the topographic map either), some participants nevertheless classified small lakes in the SAR imagery.

Considering the graph of the built-up areas, it is obvious that there are more difficulties in classifying and identifying such areas in SAR images. Some of the built-up areas are located separately and next to forest areas, so that they can easily be overlooked or classified as nearby objects. This also happened in the optical imagery, but not to the same degree.

Regarding the linear objects, large rivers were present in these data sets in addition to the roads. Main roads and the rivers were detected in SAR imagery as successfully as in the optical images, whereas smaller streets were detected only in the optical imagery. The smaller streets (alleys, secondary roads) were not included in the quantitative analysis. Concerning the results of the river classification, it is obvious that some parts are not detectable in SAR imagery (yellow part of the graph). In the optical image, interpreter 11 made a major error by classifying land use boundaries as rivers, which leads to the large value (orange part) in the graph.


[Bar charts: per-interpreter classification results for agricultural areas in SAR (Int 1, 3, 5, 8, 10) and optical (Int 2, 4, 9) imagery; see Figure 23.]

5.3 Test Site Oberpfaffenhofen

5.3.1 Areal and Linear Objects

The data set of the Oberpfaffenhofen test site contains agricultural and industrial areas, especially in the airport area, and has a resolution of 3 m per pixel. The images have a size of 907 x 658 pixels. The optical image was evaluated by four participants and the SAR image by six participants.

The airport area, which covers one third of the image, caused more problems in interpretation than expected. The participants’ classifications of this region turned out to be extremely diverse, and it was impossible to analyze them consistently. Hence, a mask was created to exclude this region from the quantitative analysis. The result of a visual analysis can be found in the next chapter. Additionally, there were some excavation areas which were, surprisingly, found by nearly all interpreters. Therefore, an additional class “excavation area” was introduced for the quantitative analysis.
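Excluding a region from a pixel-by-pixel comparison can be done by clearing the masked pixels in both the reference and the interpretation raster before counting, so that they contribute neither correct nor incorrect pixels. A minimal sketch of this step (the function name is ours, not the report's):

```python
import numpy as np

def exclude_region(reference, interpretation, mask):
    """Remove a problematic region (e.g. the airport area) from a
    per-class comparison: pixels inside `mask` are cleared in both
    rasters, so they count neither as correct nor as incorrect."""
    keep = ~np.asarray(mask, dtype=bool)
    ref = np.asarray(reference, dtype=bool) & keep
    interp = np.asarray(interpretation, dtype=bool) & keep
    return ref, interp
```

The same mechanism handles the regions with different content in SAR and optical images mentioned earlier (Figure 7).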

The following figures (Figure 23 - Figure 28) show the results of quantitative analysis for areal and linear objects.

Figure 23: Oberpfaffenhofen Result, Agricultural Area


[Bar charts: per-interpreter classification results for built-up areas, forested areas and water bodies in SAR and optical imagery; see Figure 24 - Figure 26.]

Figure 24: Oberpfaffenhofen Result, Built-Up Area

Figure 25: Oberpfaffenhofen Result, Forested Area

Figure 26: Oberpfaffenhofen Result, Water bodies


[Bar charts: per-interpreter classification results for excavation areas and roads in SAR and optical imagery; see Figure 27 and Figure 28.]

Figure 27: Oberpfaffenhofen Result, Excavation Area

Figure 28: Oberpfaffenhofen Result, Roads

5.3.2 Qualitative Comparison

As stated before, the airport area was not analysed with pixel accuracy because of the differing approaches to its classification. Nevertheless, a qualitative comparison of these areas is given here. The big runways were detected equally well in SAR and in optical imagery. Even the smaller tarmac-covered areas could be located, but they were not classified correctly. Some smaller objects which cannot be classified are located in the airport area. These small objects were completely located in both sensors’ images; some interpreters classified them as hangars or parking areas for airplanes. Even small spots on the green fields were located in the SAR images. Concerning the airport area, there are no big differences between SAR and optical image interpretation and classification.

Regarding the interpretation of linear objects, the result is similar to the test sites of Copenhagen and Fjärdhundra. Railways are the exception: they were classified and detected in all cases of optical interpretation, but not in the SAR images. The lines of the railways were also detected by all SAR interpreters, but three out of five participants classified the railways as main roads. This leads to the large false positive part (up to 30%) in the results in Figure 28. It shows that the detection of different linear objects is straightforward, in contrast to their classification.


Detection of buildings was not a big problem in either SAR or optical imagery, due to the large size of the industrial buildings. Regarding the classification of parking areas, the big ones were detected and classified in optical and even in SAR images, whereas the smaller ones caused more problems in SAR imagery. The same is essentially true for bridges and land use boundaries. Table 8 shows an overview of the qualitative comparison.

SAR image | Optical image
Main Road: detected | detected
Alleys: few to none | predominantly detected
Built-Up Areas: well detected but no distinction between industrial and urban areas; separate big houses were detected well | well detected but no distinction between industrial and urban areas; separate houses were detected well, even small ones
Roads: predominantly detected | detected
Railway: partially detected | all interpreters detected the railways well
Runway: detected | detected
Parking: big parking areas could be detected | mostly all parking areas were detected
Additionally detected (depending on interpreter): land use boundary, bridges

Table 8: Qualitative Comparison of Oberpfaffenhofen

5.3.3 Summary – Oberpfaffenhofen

Although the resolution of the sensor data is somewhat higher than for the test sites of Copenhagen and Fjärdhundra (3 m instead of 4 m), there are no significant discrepancies between the interpretations of these test sites. The accuracy and quality of interpreting forest, agricultural and built-up areas as well as linear objects show results similar to those in Copenhagen and Fjärdhundra.

Regarding the special areas of this test site, it was surprising that nearly all participants detected and classified the excavation area correctly. More problems occurred in the interpretation of the airport area. Special regions on this site could be detected, but were not classified well. These problems, however, were equal in SAR and optical imagery. Thus, it appears that industrial areas with very special land uses and objects are almost impossible to interpret correctly.

5.4 Test Site Trudering

5.4.1 Areal and Linear Objects

The data set of the Trudering test site mainly contains rural and industrial areas and has a resolution of 1.5 m per pixel, the highest resolution among the test imagery. The images have a size of 2813 x 2289 pixels. The optical image was evaluated by four participants and the SAR image by seven participants.

This test site predominantly consists of agricultural and built-up areas. The forest areas are small parts with few trees and cover only one percent of the image area. A small lake is located in the north-west of the images, which appears different in the SAR and optical images due to the exposure dates of the images. Therefore, two different reference maps were created for this small area. Two other small parts of the images have different content, so they were covered with a small mask to avoid discrepancies in the analysis. The following figures (Figure 29 - Figure 34) show the results of the analyses for areal and linear objects.

[Bar charts: per-interpreter percentages of correct, incorrect positive and incorrect negative classification for agricultural, built-up and forested areas in SAR (Interpreters 1, 3, 5, 7, 9) and optical (Interpreters 2, 4, 8, 10) imagery; see Figure 29 - Figure 31.]

Figure 29: Trudering Result, Agricultural Area

Figure 30: Trudering Result, Built-Up Area

Figure 31: Trudering Result, Forested Area


[Bar charts: per-interpreter percentages for water bodies, excavation areas and roads (roads also showing incorrectly positioned shares) in SAR and optical imagery; see Figure 32 - Figure 34.]

Figure 32: Trudering Result, Water bodies

Figure 33: Trudering Result, Excavation Area

Figure 34: Trudering Result, Roads


5.4.2 Qualitative Comparison

The railways in the north of the images were located by all interpreters (just one interpreter evaluated only roads) and classified correctly by all SAR interpreters; two optical interpreters, however, classified them as roads. Some SAR and optical interpreters included several houses in the analysis. In several cases, both optical and SAR images allow the detection and location of houses, even the smaller ones. In addition, some interpreters, of SAR as well as of optical imagery, detected land use boundaries, bridges, parking areas, trees and hedges. It was possible to detect them well in both optical and SAR images.

Some undefined lines, which look like historical objects, can be found in a small portion of the SAR image. These lines are only visible in the SAR image. Three interpreters of the SAR image located them well; the others classified this region as an undefined area. That is the reason why it is not included in the quantitative analyses.

5.4.3 Summary – Trudering

The figures relating to the quantitative analyses show the best results of all test sites. The continuous and large parts of the agricultural and built-up areas were located and classified with the best results (mostly around 90% correct or better). Even the smaller areas, such as water bodies and excavation areas, were identified with a high degree of accuracy. The interpretation of forests – usually conducted very accurately – resulted in more failures; some interpreters added hedges and single trees to the forest class, which might be a reason for the misinterpretation. Due to the high resolution of the image data, it was possible to detect individual houses and trees comparatively well.

The detection of linear objects shows equally good results in SAR and in optical imagery. Only the classification of railways was difficult, surprisingly in the optical imagery. The high resolution of this test site’s data leads to the best visual interpretation result among all test sites. The accuracy and quality for SAR and optical imagery are equal, even for smaller objects.


6 Conclusion – Phase I

The results of all test sites show a consistent status of visual interpretation. The goal of the first phase was to compare optical versus SAR image interpretation. The question to be answered was whether it would be possible to obtain the same accuracy and quality when interpreting SAR instead of optical image data.

In general, all of the linear and areal features could be interpreted quite well in SAR images, and the results for the four test sites look similar. Main roads and highways, i.e. big linear objects, can be detected and located well in both types of imagery. Although locating big linear objects is no problem, classification appears to be much more difficult in SAR images, as railways were sometimes classified as streets. Smaller linear objects, such as smaller roads, alleys and little rivers, lead to more errors when locating and classifying them in SAR images.

Comparing areal objects, large contiguous areas are usually well detected, while small areas are barely detectable in SAR imagery, whereas they can be found in optical images. Nevertheless, large buildings can also be detected and located well in SAR while, as before, the smaller ones are not always located correctly. The same holds for sports fields, bridges and single trees, which were only partially detected in SAR but nearly completely detected in optical imagery.

Small forest areas are barely detectable in SAR because they do not look like forest areas in the images (which may also depend on the experience of the interpreters). Another problem is the detection of water surfaces: large ones can be located and classified well in SAR images, but smaller ones cannot be found, unlike in optical images.

Concerning the optical data, there is no difference in performance between panchromatic and color images. This can be inferred from the results of the Fjärdhundra and Copenhagen test sites, which have the same resolution, whereas Fjärdhundra is panchromatic and Copenhagen is color image data. The results for the Trudering test site make it obvious that a higher resolution leads to better interpretation results for SAR images. Consequently, the discrepancies between SAR and optical sensing shrink with increasing image resolution.

To answer one of the main questions of this contest: SAR performs as well as optical sensing for large areal and linear objects but is not satisfactory with respect to smaller objects. Even though the optical imagery was resampled to the pixel size of the SAR sensing, it allows a much more detailed analysis at resolutions of about three or four meters. This changes when using SAR data with higher resolution, at least as far as the Trudering test site is concerned. Results could be even better with modern SAR data at decimeter resolution, as speckle filtering could then be applied intensively without losing the interpretability of the data.
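As an illustration of the speckle filtering mentioned above, a basic Lee filter for a SAR intensity image can be sketched as follows. The window size and the model of multiplicative speckle with a noise coefficient of variation of 1/sqrt(looks) are our illustrative assumptions, not parameters used in the contest.

```python
import numpy as np

def lee_filter(img, window=5, looks=1):
    """Minimal Lee speckle filter for a single-channel SAR intensity image.

    Assumes multiplicative speckle with `looks` independent looks, i.e. a
    squared noise coefficient of variation of 1/looks. Homogeneous areas
    (local variation near the noise level) are replaced by the local mean,
    while strong local structure (edges, point targets) is preserved.
    Image borders are handled by reflection padding.
    """
    img = np.asarray(img, dtype=float)
    cu2 = 1.0 / looks                       # squared noise coefficient of variation
    pad = window // 2
    padded = np.pad(img, pad, mode="reflect")
    # all window x window neighbourhoods, one per output pixel
    win = np.lib.stride_tricks.sliding_window_view(padded, (window, window))
    mean = win.mean(axis=(-2, -1))
    var = win.var(axis=(-2, -1))
    ci2 = var / np.maximum(mean, 1e-12) ** 2    # squared local coefficient of variation
    # adaptive weight: 0 -> pure local mean, 1 -> keep the original pixel
    w = np.clip(1.0 - cu2 / np.maximum(ci2, 1e-12), 0.0, 1.0)
    return mean + w * (img - mean)
```

Stronger smoothing (a larger window or more aggressive variants) trades away exactly the small objects the interpreters struggled with, which is why decimeter-resolution data would leave more room for filtering.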

Without training in finding small or thin objects of a kind they are not familiar with in SAR imagery, it is nearly impossible for interpreters to extract those objects correctly. Due to the similarities between the human visual system and optical imaging, this is much easier with optical images. The analysis shows that, under the conditions set by the test regarding the resolution of the data and the experience of the interpreters, SAR sensing alone is not satisfactory for mapping small objects with a size of a few pixels in one or two dimensions.


Acknowledgments

We thank all participants of the contest for contributing their work and analyses. Further thanks go to Gert König, Hartmut Lehmann and Andreas Reigber, who helped to initiate the contest, as well as to Sübeyde Demirok and Andreas Krause, who completed their seminar theses analysing a part of the data.

Support of several authorities in providing data for the fusion contest, especially Intermap Corp. and Bayrisches Landesvermessungsamt München, IHR/DLR (E-SAR data), Aerosensing GmbH (AeS-1 data) as well as the DDRE (EmiSAR data) and KAMPSAX is gratefully acknowledged.



Index of Figures

Figure 1: Data set Copenhagen, urban and suburban areas .......................... 188
Figure 2: Data set Trudering, industrial and rural areas .......................... 188
Figure 3: Data set Fjärdhundra, agricultural and forested areas .................. 189
Figure 4: Data set Oberpfaffenhofen, agricultural and airport areas .............. 189
Figure 5: Topographic Maps: a) Trudering, b) Oberpfaffenhofen, c) Copenhagen (scanned), d) Fjärdhundra (scanned) .......................... 191
Figure 6: From three basic information sources to the reference map (left: optical image, topographic map, SAR image; right: reference map of Copenhagen) .......................... 192
Figure 7: Areas with different content in SAR and optical images as well as in topographic map (left); region with different content covered with a mask (right) .......................... 193
Figure 8: Reference Maps, a) Trudering, b) Oberpfaffenhofen, c) Copenhagen, d) Fjärdhundra .......................... 194
Figure 9: a) Reference data, b) participant’s data, c) result of subtraction ..... 195
Figure 10: Image processing procedure ............................................ 196
Figure 11: Definition of false positively and false negatively classified linear objects .......................... 197
Figure 12: Example for linear analysis ........................................... 198
Figure 13: Copenhagen Result, Agricultural Area .................................. 199
Figure 14: Copenhagen Result, Built-Up Area ...................................... 199
Figure 15: Copenhagen Result, Forested Area ...................................... 200
Figure 16: Copenhagen Result, Lakes .............................................. 200
Figure 17: Copenhagen Result, Linear Objects ..................................... 200
Figure 18: Fjärdhundra Result, Agricultural Area ................................. 202
Figure 19: Fjärdhundra Result, Built-Up Area ..................................... 202
Figure 20: Fjärdhundra Result, Forested Area ..................................... 203
Figure 21: Fjärdhundra Result, Linear Objects: Roads ............................. 203
Figure 22: Fjärdhundra Result, Linear Objects: River ............................. 203
Figure 23: Oberpfaffenhofen Result, Agricultural Area ............................ 205
Figure 24: Oberpfaffenhofen Result, Built-Up Area ................................ 206
Figure 25: Oberpfaffenhofen Result, Forested Area ................................ 206
Figure 26: Oberpfaffenhofen Result, Water bodies ................................. 206
Figure 27: Oberpfaffenhofen Result, Excavation Area .............................. 207
Figure 28: Oberpfaffenhofen Result, Roads ........................................ 207
Figure 29: Trudering Result, Agricultural Area ................................... 209
Figure 30: Trudering Result, Built-Up Area ....................................... 209
Figure 31: Trudering Result, Forested Area ....................................... 209
Figure 32: Trudering Result, Water bodies ........................................ 210
Figure 33: Trudering Result, Excavation Area ..................................... 210
Figure 34: Trudering Result, Roads ............................................... 210



Index of Tables

Table 1: Contest Phases .......................................................... 186
Table 2: EuroSDR Contest Participants ............................................ 186
Table 3: SAR images used for different test areas ................................ 187
Table 4: Topographic Maps for the test areas ..................................... 190
Table 5: Additional object classes ............................................... 194
Table 6: Qualitative Comparison of Copenhagen .................................... 201
Table 7: Qualitative Comparison of Fjärdhundra ................................... 204
Table 8: Qualitative Comparison of Oberpfaffenhofen .............................. 208