
ASPRS 2006 Annual Conference, Reno, Nevada, May 1-5, 2006

RECONSTRUCTION OF BUILDINGS FROM AIRBORNE LASER SCANNING DATA

Abo Akel Nizar, Sagi Filin, Yerach Doytsher
Dept. of Transportation and Geo-Information
Technion – Israel Institute of Technology, Haifa 32000, Israel
[email protected], [email protected], [email protected]

ABSTRACT

For mapping applications, the reconstruction of buildings from the point cloud offers promising prospects for the rapid generation of large-scale 3D models, e.g., for city modeling. Reconstruction requires, however, knowledge of a variety of parameters that refer both to the point cloud and to the modeled building. For the point cloud, one concern is the separation of the laser points describing the buildings from the rest of the data; the other is ensuring that this subset indeed reflects a building and not another object, e.g., vegetation. Generating a building model from this subset then requires learning the roof parts from the data and converting these elements into an actual building model that complies with topological and geometrical rules.

The complexity of this task has led many researchers to use external information, mostly in the form of detailed ground plans, to identify the subset of the point cloud and to provide a first approximation of the building shape. This information, however, is not available everywhere and generally cannot be taken for granted. In this paper we present a reconstruction model that automatically detects buildings within the point cloud and reconstructs their shape. For the detection we develop safeguards that reduce the chance of misclassification to a minimum. The reconstruction involves the aggregation of the point set into individual faces and learning the building shape from these aggregates. We show the effect of imposing geometric constraints on the reconstruction to generate realistic models of the buildings.

INTRODUCTION

Building information for city modeling is fundamental for a growing number of applications, among them urban planning, telecommunication, and environmental monitoring. Over areas as large as those that urban environments span, extraction by manual techniques is time consuming and labor intensive. As a result, a considerable amount of photogrammetric research has focused on the development of automatic and semi-automatic techniques to reconstruct the shape of buildings from aerial photography, satellite-based imagery, and other remotely sensed data. A review of the work related to building extraction shows that most strategies make use of elevation data that were either acquired by active sensors or derived from optical ones. In this regard, Light Detection and Ranging (LiDAR) technology is proving to be the most promising data source for the extraction of such information. By directly providing surface-height measurements with high point density and a high level of accuracy, it yields a dense 3D description of the surveyed surfaces from which building information can subsequently be extracted.

LiDAR data have indeed been used as a source of information for the reconstruction of buildings with complex shapes by a growing number of researchers (see, e.g., Vosselman, 1999; Wang and Schenk, 2000; Brenner and Haala, 1998; Brenner, 2000). The reconstruction process requires the detection of buildings in the point cloud, followed by the reconstruction of their shape. Detection concerns separating buildings from the background, so methods that find regions that are high relative to their surroundings are usually applied. Among the common approaches that have been proposed for building detection, Oda et al. (2004) apply morphological opening filters to identify the terrain and then subtract it from the digital surface model (DSM) to isolate building patches. Alharthy and Bethel (2004) propose local segmentation to identify detached solid objects. Seo and Schenk (2003) propose using contour graphs to compute the slopes between contours and, by analyzing


them, extract building boundaries. Wang (1998) uses edge operators to localize buildings in a point cloud. Others, e.g., Vosselman and Dijkman (2001) and Schwalbe et al. (2004), make use of external information in the form of ground plans to localize the buildings; the plans allow extracting a subset of the point cloud where the building is expected to appear.

While the detection part concerns localizing the buildings in the point cloud, the reconstruction part concerns analyzing their shape. The representation of building shape can be carried out in various forms that generally dictate the reconstruction process. Shape representations can be classified into three main categories: boundary representation, parametric models, and CSG models. The boundary representation (B-rep) model is similar to polyhedral models, where the object surfaces are described by their boundary lines and surfaces. This model can represent generic types of buildings without geometric constraints among the entities. However, if there are missing features in the data, the grouping process may be hampered and the corresponding object structure may not be represented well. A parametric model represents a building by a set of parameters such as length, width, and height. This model can be used to extract buildings via semi-automatic methods, where it reduces the number of point measurements and also retains the geometric constraints in building structures. In automatic reconstruction, missing features can be predicted by the geometric constraints in the predefined model structures. However, the building shapes that can be reconstructed are limited to certain types of predefined building models (Maas and Vosselman, 1999; Brenner and Haala, 1998). A Constructive Solid Geometry (CSG) model represents a building as a combination of building parts. A complete building model is described by a tree structure, where the nodes represent the building parts and the edges the operations between building parts, such as union, intersection, and difference. The operations for CSG model generation are useful for grouping building parts into complete building models during building reconstruction (Vosselman and Dijkman, 2001).

Despite the large body of research conducted on this subject, no automatic and generic solution to the problem of reconstructing building shapes has yet been given. Existing algorithms succeed in dealing with a narrow spectrum of building types and, in some cases, are limited when dealing with complex buildings. The algorithm we propose for building extraction and reconstruction does not assume any prior information about building shape.

BUILDING RECONSTRUCTION

Detection

Our algorithm begins with the separation of detached objects from the ground; it is followed by classification of the

detached objects into buildings and other objects. The separation of detached objects from the ground reduces the overhead in analyzing the point cloud to well-defined point clusters rather than the whole point cloud (see Figure 1.b). Our separation strategy is driven by a filtering approach that uses global functions in the form of orthogonal polynomials for a coarse separation between the terrain and the detached objects, followed by surface refinement (see Abo-Akel et al., 2004 for greater detail). The polynomial coefficients are estimated robustly under the guiding assumption that when a function is fitted to a mixture of terrain and off-terrain points, off-terrain points will have positive residuals while terrain points will have negative ones. To reduce the effect of off-terrain points on the fitted function, the weight of points with a positive residual is reduced between iterations, thereby reducing their influence (using robust estimation for detecting terrain points has also been suggested by Kraus and Pfeifer (1998); however, those authors use local functions to describe the terrain). The orthogonal polynomials are relatively insensitive to noise and are numerically stable, so the process begins with a high-degree polynomial that passes between the laser points. In subsequent iterations, the shape of the polynomial is simplified by decreasing its degree; the controlled shape of the function, coupled with the reduced influence of the off-terrain points, further limits their influence while offering a closer description of the terrain. At the final iteration the shape of the terrain does not change, namely the off-terrain points have ceased to influence the shape of the polynomial and only the terrain points influence the result.
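To make the reweighting idea concrete, the following minimal sketch fits an orthogonal (Legendre) polynomial to a 1D height profile and down-weights points with positive residuals between iterations while lowering the degree. It is only an illustration of the principle under assumed parameters (degree schedule, weight function, thresholds), not the authors' implementation, which operates on the full point cloud.

```python
# Robust terrain fit on a 1D profile: down-weight suspected off-terrain points
# (positive residuals) while simplifying the polynomial between iterations.
import numpy as np
from numpy.polynomial import legendre


def robust_terrain_fit(x, z, degrees=(9, 7, 5, 3), k=1.0):
    """Return fitted terrain heights and a boolean mask of terrain points."""
    w = np.ones_like(z)                      # start with equal weights
    for deg in degrees:                      # simplify the polynomial each iteration
        coeffs = legendre.legfit(x, z, deg, w=w)
        fit = legendre.legval(x, coeffs)
        residuals = z - fit
        sigma = np.std(residuals[w > 0.5]) + 1e-9
        # points well above the fitted surface are likely off-terrain:
        # reduce their weight so they pull the next fit less
        w = np.where(residuals > k * sigma,
                     np.exp(-(residuals / sigma) ** 2), 1.0)
    terrain_mask = (z - fit) < k * sigma     # detached points sit above the surface
    return fit, terrain_mask


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = np.linspace(-1, 1, 400)
    z = 2.0 * np.sin(2 * x) + 0.1 * rng.normal(size=x.size)
    z[150:200] += 8.0                        # a "building" sitting on the terrain
    fit, mask = robust_terrain_fit(x, z)
    print("detached points found:", np.count_nonzero(~mask))
```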


Figure 1. Building detection: a) shaded relief of a point cloud acquired over an urbanscape, b) detached objects separated from the terrain, c) segments that were kept following the size and height filtering.

The detached segments are further filtered by eliminating segments that are too small to be considered buildings (namely, an area too small to form a building) and ones that are too close to the bare earth (implemented by a height threshold). With these two filters applied, a significant amount of spurious segments is removed. The subsequent phase of segment classification and further segmentation into surfaces eliminates vegetation segments that were left in the data.
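A minimal sketch of the two filters, assuming each detached segment is given as an array of 3D points together with the local terrain height; the rough area estimate and the threshold values are illustrative assumptions.

```python
# Keep only detached segments that are large enough and high enough to be buildings.
import numpy as np


def filter_segments(segments, terrain_height, min_area=20.0, min_height=2.5,
                    point_density=1.2):
    """segments: list of (N, 3) arrays; returns the segments passing both filters."""
    kept = []
    for pts in segments:
        area = pts.shape[0] / point_density      # rough footprint area from point count
        height = np.median(pts[:, 2]) - terrain_height
        if area >= min_area and height >= min_height:
            kept.append(pts)
    return kept
```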

Data Classification and Segmentation

Further classification and segmentation of the laser data is performed by clustering the point cloud. Clustering is motivated by the recognition that points that constitute a segment tend to cluster when represented by adequate measures. The clustering is based on grouping points with similar surface-texture measures. Evaluation of attributes that suffice to separate surface types shows that height variation and surface trend allow distinguishing between surfaces. So, for each point a four-dimensional feature vector is created. The vector consists of a height-difference measure and the three parameters defining the tangent plane at the point. A feature space with dimensions matching the feature vector is formed, where the coordinates of each laser point are determined by the values of its feature vector. In this space, solid surfaces such as roofs will tend to cluster while vegetation or vegetation-like segments will not. Therefore, the segmentation into building parts also filters out non-building segments.
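A possible sketch of the per-point feature vector, assuming a fixed-radius 2D neighborhood and a simple least-squares plane fit for the tangent plane; the radius value and helper names are assumptions.

```python
# Four-dimensional feature vector per point: [height variation, a, b, c]
# where the local tangent plane is z ~ a*x + b*y + c.
import numpy as np
from scipy.spatial import cKDTree


def point_features(points, radius=2.0):
    """points: (N, 3) array; returns an (N, 4) feature array."""
    tree = cKDTree(points[:, :2])
    features = np.zeros((points.shape[0], 4))
    for i, p in enumerate(points):
        idx = tree.query_ball_point(p[:2], r=radius)
        nbrs = points[idx]
        dz = nbrs[:, 2].max() - nbrs[:, 2].min()          # height variation
        # least-squares tangent plane over the neighborhood
        A = np.c_[nbrs[:, 0], nbrs[:, 1], np.ones(len(idx))]
        plane, *_ = np.linalg.lstsq(A, nbrs[:, 2], rcond=None)
        features[i] = [dz, *plane]
    return features
```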

Clusters in this space constitute only a "surface class" containing all points that share similar features. A surface class may consist of more than one point cluster, so following the surface-class extraction, point clusters are identified in object space by proximity measures. Clustering of the data is conducted via unsupervised classification: surface clusters are extracted by a mode-seeking algorithm, which is efficient, well suited for identifying planar surface elements, and does not require the number of classes to be predefined. Following the extraction of surfaces, they are validated via surface fitting; if the cluster is big enough and a planar representation is inadequate, a smooth surface model is tested (the determination of the actual surface shape, planar or smooth, is done at a later stage). Validation involves testing whether the cluster is homogeneous and indeed composed of only one surface class and, if that is the case, validating that all points in the cluster belong to the same class. The algorithm handles these scenarios as follows: the null hypothesis is that the cluster represents only one class; therefore, the existence of outliers is tested first. Its failure is an indication that the cluster may be composed of more than one surface. Testing for the existence of more than one surface is implemented here by local evaluation of the set of points. The extraction and validation of clusters defines elemental segments in the data; these are then refined by extending them with unsegmented points and by merging clusters that are part of the same surface. Merging of clusters is decided by testing whether neighboring clusters share a similar mean (the estimated surface parameters) and standard deviation. The size of the segments is controlled by the standard deviation thresholds that are set. In addition to the upper limit σ_max, a lower bound σ_min is also set to avoid under-segmentation. Its value is set in accordance with the expected accuracy of the laser points themselves. When a segment is extended and its standard deviation falls below the minimum threshold, σ_min is used instead.
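The merging test can be sketched as follows, assuming each segment is a set of 3D points and the cluster "mean" is the parameter vector of a fitted plane; σ_max and σ_min play the roles described above, while the numeric tolerances are assumptions.

```python
# Merge two neighboring segments when their plane parameters agree and the
# merged fit remains within the standard-deviation bounds.
import numpy as np


def fit_plane(pts):
    """Least-squares plane z = a*x + b*y + c; returns (params, std of residuals)."""
    A = np.c_[pts[:, 0], pts[:, 1], np.ones(len(pts))]
    params, *_ = np.linalg.lstsq(A, pts[:, 2], rcond=None)
    return params, np.std(pts[:, 2] - A @ params)


def should_merge(seg_a, seg_b, sigma_max=0.10, sigma_min=0.05, param_tol=0.05):
    """Decide whether two neighboring segments belong to the same roof face."""
    pa, sa = fit_plane(seg_a)
    pb, sb = fit_plane(seg_b)
    # never let a segment claim to be more precise than the sensor allows
    sa, sb = max(sa, sigma_min), max(sb, sigma_min)
    if np.any(np.abs(pa - pb) > param_tol):          # dissimilar surface parameters
        return False
    _, s_merged = fit_plane(np.vstack([seg_a, seg_b]))
    return s_merged <= sigma_max                     # merged fit still tight enough
```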


Figure 2. Result of the segmentation of a building (resolution 1.2 points/m²), a) a shaded relief of a point cloud acquired over an urbanscape, b) detached objects separated from the terrain, c) segments that were kept following the size and height filtering.

Figure 2 demonstrates the segmentation result on a hipped roof with a dormer. The σ_max value is set to 10 cm, σ_min to 5 cm, and the minimal segment size is set to 10 points. The results show that the five dominant faces of the roof were identified, but some holes can be noticed. Observing Figure 2.b, one sees that these holes relate either to small subsets of points (< 10 points) or to subsets that do not form a plane. Treating those voids is carried out in the following steps. The delineation of the segments is performed by identifying boundary points in each segment (points that neighbor points from other segments) and then linking them.
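A minimal sketch of the boundary-point test just described, assuming the segmentation is given as one label per point; the neighborhood size k is an assumed value.

```python
# Mark points whose nearest neighbors include points from a different segment.
import numpy as np
from scipy.spatial import cKDTree


def boundary_points(points, labels, k=8):
    """points: (N, 3) array, labels: (N,) segment ids; returns a boolean mask."""
    k = min(k, len(points))
    tree = cKDTree(points[:, :2])
    _, nbr_idx = tree.query(points[:, :2], k=k)
    # a point lies on a segment boundary if any neighbor carries another label
    return np.any(labels[nbr_idx] != labels[:, None], axis=1)
```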

Analysis of Roof Topology and Approximated Shape

Establishing the topology between roof faces is an important step in the reconstruction of buildings. Topology records the relations between the different roof parts and allows transforming the extracted segments into higher-level entities describing the roof geometry. The most relevant topological relations in our case are adjacency and inclusion. Adjacency relations between features can be represented by an adjacency graph, which links adjacent faces by graph edges. Surface boundary lines are used as the entities from which we infer whether two segments are indeed adjacent. Usually, adjacent segments form a crease edge where the two surface patches intersect. The adjacency graph can also help infer the type of roof being processed.
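A possible sketch of the adjacency-graph construction, assuming the roof segments are available as labeled point sets; two faces are declared adjacent when their points come within an assumed gap distance, which stands in for the boundary-line analysis described above.

```python
# Adjacency graph as a dict-of-sets: segments whose points come close are linked.
import numpy as np
from scipy.spatial import cKDTree


def adjacency_graph(segments, max_gap=1.5):
    """segments: dict {label: (N, 3) array}; returns {label: set(adjacent labels)}."""
    graph = {lab: set() for lab in segments}
    trees = {lab: cKDTree(pts[:, :2]) for lab, pts in segments.items()}
    labs = list(segments)
    for i, a in enumerate(labs):
        for b in labs[i + 1:]:
            # any pair of points from the two segments closer than max_gap?
            pairs = trees[a].query_ball_tree(trees[b], r=max_gap)
            if any(pairs):
                graph[a].add(b)
                graph[b].add(a)
    return graph
```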

For the building we are studying, one finds four adjacent roof parts (each connected to two other parts, one on the left and one on the right) and one roof part that is included within another part (the dormer). The adjacency graph is featured in Figure 3.a (the graph nodes are placed at the segment centroids, therefore for the left segment the


node appears inside the dormer).


Figure 3. Hipped roof with a dormer: a) adjacency graph, b) the crease edges resulting from it.

Following the establishment of the roof faces, the crease edges between them can be computed by plane intersection. These edges define the roof structure very accurately. In Figure 3.b we show the crease edges that were computed for the hipped roof we study. Notice the intersection of the dormer with the roof part that includes it.
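A sketch of the plane-intersection computation for a crease edge, assuming each roof face has been fitted with a plane of the form z = a·x + b·y + c; the parameterization and function names are illustrative, not the authors' code.

```python
# Intersect two roof planes to obtain a crease edge (a point and a direction).
import numpy as np


def crease_edge(plane1, plane2, eps=1e-9):
    """plane = (a, b, c) for z = a*x + b*y + c; returns (point, direction) or None."""
    n1 = np.array([plane1[0], plane1[1], -1.0])      # plane normals
    n2 = np.array([plane2[0], plane2[1], -1.0])
    direction = np.cross(n1, n2)
    if np.linalg.norm(direction) < eps:              # parallel planes: no crease
        return None
    # find one point on the line: fix the coordinate most aligned with the
    # direction to zero and solve the remaining 2x2 system n1.p = -c1, n2.p = -c2
    A = np.vstack([n1, n2])
    d = np.array([-plane1[2], -plane2[2]])
    k = np.argmax(np.abs(direction))
    cols = [j for j in range(3) if j != k]
    point = np.zeros(3)
    point[cols] = np.linalg.solve(A[:, cols], d)
    return point, direction / np.linalg.norm(direction)
```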

Following the determination of the crease edges in our building scheme, we turn to delineating the building boundary. Lacking another source of knowledge, we assume that the location of the building boundary is at the mid-point of each boundary edge. As the extracted points provide a discrete representation of the boundary, we use the Hough transform to detect the edges that form the linear boundary representation. The lines derived by the Hough transform provide an approximate solution, which we then improve via least-squares adjustment.
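The boundary lines can be sketched with a compact Hough transform over the 2D boundary points: votes are accumulated in (θ, ρ) space, the strongest cell is taken as the approximate line, and ρ is then refined by least squares over the points that support it. Bin sizes, the inlier tolerance, and refining only ρ (rather than a full least-squares line adjustment) are simplifying assumptions.

```python
# Dominant-line detection on boundary points via a small Hough accumulator.
import numpy as np


def hough_line(points_xy, n_theta=180, rho_res=0.25, tol=0.5):
    """Return (theta, rho) of the dominant line x*cos(t) + y*sin(t) = rho."""
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    rhos = points_xy @ np.vstack([np.cos(thetas), np.sin(thetas)])   # (N, n_theta)
    rho_min = rhos.min()
    bins = ((rhos - rho_min) / rho_res).astype(int)
    acc = np.zeros((n_theta, bins.max() + 1), dtype=int)
    for j in range(n_theta):                       # vote per orientation
        np.add.at(acc[j], bins[:, j], 1)
    j_best, r_best = np.unravel_index(np.argmax(acc), acc.shape)
    theta = thetas[j_best]
    rho = rho_min + (r_best + 0.5) * rho_res
    # least-squares refinement over the winning cell's inliers (rho only)
    d = points_xy @ np.array([np.cos(theta), np.sin(theta)])
    inliers = np.abs(d - rho) < tol
    rho = d[inliers].mean() if np.any(inliers) else rho
    return theta, rho
```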

Geometric Constraints

Man-made objects tend to be characterized by relatively simple shapes. Most buildings, for example, are characterized by lines that are parallel or perpendicular to one another. The lines extracted through the process we presented, both crease edges and boundaries, will generally not follow those rules and therefore should be fixed. Fixing them involves setting constraints between the entities involved; those constraints should fix the geometry without, however, tampering with the topology. We therefore apply the corrections in the form of an adjustment of the whole system via 3D constraints. For the implementation of such an adjustment, the accuracy and reliability of the extracted lines should be considered, as different types of lines were extracted with different levels of accuracy. We classify the extracted lines into the following three categories:

1. Horizontal crease edges: horizontal lines extracted as intersection lines between two segments.
2. Non-horizontal crease edges: lines extracted as intersection lines between two segments.
3. Border lines: lines extracted using the Hough transform.

This classification provides us with a good strategy for weighting the constraints imposed by each line in the adjustment process.

To form the constraints we apply an automatic procedure that checks for possible parallelism or perpendicularity between lines. We first check for these relations between lines from the first category (termed Type I); then lines from the second category are checked against Type I lines and against themselves. Finally, lines from the third category are checked against lines from the first and second categories and against themselves. Performing the tests in this order allows us to assign the appropriate weight to each relation.

For the adjustment we use the adjustment-with-constraints model given in Equation (1):

$Cx = W$   (1)


with C the coefficient matrix of the constraints, x the corrections to the coordinates of the unknown points, and W the residuals of the constraints before the adjustment. This model is solved by

$x = P^{-1} C^{T} \left( C P^{-1} C^{T} \right)^{-1} W$   (2)
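Equation (2) can be evaluated directly with NumPy. In the sketch below, x is taken to be the vector of coordinate corrections and W the misclosure of the constraints; the toy example (a single constraint forcing two X coordinates to coincide) and the weight values are illustrative assumptions.

```python
# Direct transcription of Equation (2): x = P^-1 C^T (C P^-1 C^T)^-1 W
import numpy as np


def constrained_corrections(C, P, W):
    """Return the corrections x for constraint matrix C, weights P, misclosure W."""
    P_inv = np.linalg.inv(P)
    N = C @ P_inv @ C.T
    return P_inv @ C.T @ np.linalg.solve(N, W)


if __name__ == "__main__":
    # unknowns: corrections (dX1, dX2); constraint: (X1 + dX1) - (X2 + dX2) = 0
    X1, X2 = 10.3, 10.7
    C = np.array([[1.0, -1.0]])
    W = np.array([-(X1 - X2)])          # misclosure of the constraint before adjustment
    P = np.diag([1.0, 4.0])             # point 2 considered more reliable (higher weight)
    x = constrained_corrections(C, P, W)
    print(x, X1 + x[0], X2 + x[1])      # both adjusted coordinates coincide
```

Note that the more heavily weighted point receives the smaller correction, which is the behavior the line-category weighting above is meant to produce.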

Figure 4. Geometrical constraints between the extracted building primitives.

Figure 4 illustrates the type of process we apply in constraining the building geometry. The realization of these constraints is as follows:

i) Parallelism

Constraining two lines to be parallel in 3D space requires correcting each end point of an edge by $\Delta X_i, \Delta Y_i, \Delta Z_i$, with $i \in \{A, B, C, D\}$, so that

$\dfrac{(X_A+\Delta X_A)-(X_B+\Delta X_B)}{a} = \dfrac{(Y_A+\Delta Y_A)-(Y_B+\Delta Y_B)}{b} = \dfrac{(Z_A+\Delta Z_A)-(Z_B+\Delta Z_B)}{c}$

$\dfrac{(X_C+\Delta X_C)-(X_D+\Delta X_D)}{a} = \dfrac{(Y_C+\Delta Y_C)-(Y_D+\Delta Y_D)}{b} = \dfrac{(Z_C+\Delta Z_C)-(Z_D+\Delta Z_D)}{c}$   (3)

with $X_i, Y_i, Z_i$ the coordinates of each end point as obtained from the reconstruction. Denoting the adjusted coordinates by $X_{ai} = X_i + \Delta X_i$ (and similarly for Y and Z), the following constraints can be written from (3) to model parallelism:

$(X_{aB}-X_{aA})(Y_{aD}-Y_{aC}) - (X_{aD}-X_{aC})(Y_{aB}-Y_{aA}) = 0$

$(X_{aB}-X_{aA})(Z_{aD}-Z_{aC}) - (X_{aD}-X_{aC})(Z_{aB}-Z_{aA}) = 0$   (4)

ii) Perpendicularity between line pairs

Two lines will be perpendicular if their scalar product is equal to zero, which provides the next constraint, again with $X_{ai} = X_i + \Delta X_i$:

$(X_{aB}-X_{aA})(X_{aD}-X_{aC}) + (Y_{aB}-Y_{aA})(Y_{aD}-Y_{aC}) + (Z_{aB}-Z_{aA})(Z_{aD}-Z_{aC}) = 0$   (5)
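To connect the constraints with the adjustment above, the short sketch below evaluates the residuals of Equations (4) and (5) for two edges AB and CD; these misclosures are what would enter W. The coordinate values in the example are illustrative only.

```python
# Residuals of the parallelism (Eq. 4) and perpendicularity (Eq. 5) constraints.
import numpy as np


def parallelism_residuals(A, B, C, D):
    """Two residuals that vanish when edge AB is parallel to edge CD (Equation 4)."""
    u, v = np.asarray(B) - np.asarray(A), np.asarray(D) - np.asarray(C)
    return np.array([u[0] * v[1] - v[0] * u[1],     # XY cross term
                     u[0] * v[2] - v[0] * u[2]])    # XZ cross term


def perpendicularity_residual(A, B, C, D):
    """Scalar product of AB and CD; vanishes when they are perpendicular (Equation 5)."""
    u, v = np.asarray(B) - np.asarray(A), np.asarray(D) - np.asarray(C)
    return float(u @ v)


if __name__ == "__main__":
    A, B = (0.0, 0.0, 0.0), (4.0, 0.1, 0.0)      # nearly east-west ridge line
    C, D = (0.0, 5.0, 0.0), (4.0, 5.0, 0.0)      # a border line it should parallel
    print(parallelism_residuals(A, B, C, D))      # small, but not exactly zero
    print(perpendicularity_residual(A, B, (0, 0, 0), (0, 3, 0)))
```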



We point out that in the formation process, vertices that appear in close proximity to one another are merged, unless the merging would violate a topological rule.

Figure 5. Lines extracted in the reconstruction process: a) extraction without constraints, b) extracted lines following the application of the constraints, c) the reconstructed building in 3D.

With the adjustment of the bounding lines (and in fact their end points), the reconstruction of the building shape is completed. In the following section we demonstrate the application of our proposed approach on buildings with varying levels of complexity.

EXAMPLES

We demonstrate our building reconstruction algorithm on a small neighborhood consisting of four buildings. The point density is ~1.2 points/m². The buildings in this example comprise hipped roofs, saddle-back roofs, and cross-hipped roofs. Figure 6 shows the subset we are processing, where Figure 6.b presents the point set after the buildings were detected and the detached objects were segmented. As can be seen, all roof parts have been detected well. In this regard it is worth noting the potential complexity of segmenting roof parts from data at such a resolution, where small facets may be described by only a small number of points.


Figure 6. A residential area with several buildings: a) shaded relief depicting the point cloud, b) the results after building detection and segmentation.

The processing steps are illustrated in Figure 7, beginning with the segmented point cloud and following with the delineated boundaries and the evaluation of the roof-face topology. Notice that even though the bounding polygons do not depict an ideal shape of a roof face, the topology is still extracted correctly.




Figure 7. Result of the segmentation of a building (resolution 1.2 points/m²): a) segments that were kept following the size and height filtering, b, c) segment boundaries, d) adjacency graph, e) the crease edges resulting from it, f) extraction without constraints, g) extracted lines following the application of the constraints.


The crease and bounding lines that form the building (Figure 7.f) are far from their final shape; this may be attributed to the point density. Figure 7.g shows, however, that the geometrical adjustment that was applied managed to recover their shape correctly. Furthermore, one can see that for the building at the lower right, the reconstruction recovered two levels of roofs. From Figure 7.e one sees that this reconstruction follows the actual way the roofs are built. It therefore shows that no destructive operations are applied while adjusting the shape.

Figure 8. Rendering of the buildings in their reconstructed shape.

SUMMARY

In this paper we have described an algorithm for the autonomous reconstruction of buildings from laser scanning point clouds. The algorithm requires no prior knowledge, e.g., in the form of ground plans. It has been demonstrated that even at a point density lower than what is commonly used, buildings can be correctly reconstructed. As the examples show, the geometric constraints we introduced improve the reconstruction, making it more robust to errors and, in many regards, to the density of points.

REFERENCES

Alharthy, A., Bethel, J., 2004. Detailed building reconstruction from airborne laser data using a moving surface method. In: IAPRSIS, Vol. XXXV, Part B3, pp. 213-218.

Abo Akel, N., Zilberstein, O., Doytsher, Y., 2004. A robust method used with orthogonal polynomials and road network for automatic terrain surface extraction from LiDAR data in urban areas. International Archives of Photogrammetry and Remote Sensing, ISPRS.

Brenner, C. and Haala, N., 1998. Rapid acquisition of virtual reality city models from multiple data sources. In: H. Chikatsu and E. Shimizu (eds), IAPRS, Vol. 32, Part 5, pp. 323-330.

Brenner, C., 2000. Towards fully automated generation of city models. In: IAPRS, Vol. XXXIII, Amsterdam, 2000.

Haala, N., Brenner, C., Anders, K.-H., 1998. 3D urban GIS from laser altimeter and 2D map data. International Archives of Photogrammetry and Remote Sensing, Vol. 32, Part 3, pp. 339-346.

Maas, H.-G. and Vosselman, G., 1999. Two algorithms for extracting building models from raw laser altimetry data. ISPRS Journal of Photogrammetry and Remote Sensing, 54(2-3):153-163.


Oda, K., Takano, T., Doihara, T., Shibasaki, R., 2004. Automatic building extraction and 3-D city modeling from LiDAR data based on Hough transform. International Archives of Photogrammetry and Remote Sensing, Commission III, WG III/3.

Schwalbe, E., 2004. 3D building model generation from airborne laser scanner data by straight line detection in specific orthogonal projections. International Archives of Photogrammetry and Remote Sensing, Vol. 35, Part B, pp. 249-254.

Seo, S., Schenk, T., 2003. A study of integration methods of aerial imagery and LiDAR data for a high level of automation in 3D building reconstruction. In: SPIE AeroSense 2003, Multisensor, Multisource Information Fusion: Architectures, Algorithms and Applications VII, Orlando, FL.

Vosselman, G., 1999. Building reconstruction using planar faces in very high density height data. In: ISPRS Conference on Automatic Extraction of GIS Objects from Digital Imagery, Munich, 8-10 September, pp. 87-92.

Vosselman, G., Dijkman S., 2001. 3D building model reconstruction from point clouds and ground plans. International Archives of Photogrammetry and Remote Sensing. 34(3/W4): 37–43.

Wang, Z. and Schenk, T., 2000. Building extraction and reconstruction from lidar data. In: International Archives of Photogrammetry and Remote Sensing, 33(B3):958-964.

Wang, Z., 1998. Extracting building information from LIDAR data. In: ISPRS Commission III Symposium on Object Recognition and Scene Classification from Multi-Spectral and Multi-Sensor Pixels, Columbus, Ohio.