Department of Spatial Sciences

Cooperative Research Centre for Spatial Information

Classification and Segmentation of 3D Terrestrial Laser

Scanner Point Clouds

David Belton

This thesis is presented for the degree of

Doctor of Philosophy

of

Curtin University of Technology

April 2008

Declaration

This thesis contains no material which has been accepted for the award of any

other degree or diploma in any university. To the best of my knowledge and belief

this thesis contains no material previously published by any other person except

where due acknowledgement has been made.

Signature:...............................................

Date:...............

To whoever wants it.

Abstract

With the use of terrestrial laser scanning, it is possible to efficiently capture

a scene as a 3D point cloud. As such, it is seeing increasing deployment in

traditional surveying and photogrammetric fields, as well as being adapted to

applications not traditionally associated with surveying and photogrammetry. The

problem with utilising the technology is that, since the point cloud captured

is so densely populated, the processing of the data can be extremely labour-

intensive. This is due to the large volume of data that must be examined to

identify the features sampled and to remove extraneous information. Research

into automated processing techniques aims to alleviate this bottleneck in the

work-flow of terrestrial laser scanner (TLS) processing.

A segmentation method is proposed in this thesis to identify and isolate the

salient surfaces that comprise a scene sampled as a 3D point cloud. The cut-plane

based region growing (CPRG) segmentation method uses the classification results,

approximated surface normals, and the directions of principal curvature to locally

define the extents of the surfaces present in a point cloud. These generalised

surfaces can be of arbitrary structure, as long as they satisfy the imposed surface

conditions. These conditions are that, within the identified extents of the surface,

the surface is considered to be continuous and without discontinuities. As such,

a novel metric is introduced to determine points sampled near discontinuities or

changes in the surface structure; this metric is independent of the underlying structure

of the surfaces. In addition, an iterative method of neighbourhood correction

is also introduced to remove the effects of multiple surfaces and outliers on the

attributes calculated through the use of local neighbourhoods.

The CPRG segmentation method is tested on practical 3D point clouds captured by a

TLS. These point clouds contain a variety of different scenes and objects, as well

as different resolutions, sampling densities, and attributes. It was shown that the

majority of surfaces contained within the point clouds are isolated as long as they

have sufficient sampling to be resolved. In addition, different surface types,

such as corrugated surfaces, cylinders, planes and other complex smooth surfaces,

are segmented and treated similarly, regardless of the underlying structure. This

illustrates the CPRG segmentation method’s ability to segment arbitrary surface

types without a priori knowledge.

Acknowledgements

First up, I would like to express my gratitude to my initial PhD supervisor,

Dr Derek D. Lichti. He provided me with advice, guidance and an excellent

environment in order to pursue my research. He also helped to keep

me (mostly) on track. Next I would like to thank Dr Jonathan F. Kirby, who

took over the task of supervision and, more importantly, helped turn an incoherent

bunch of words and pages into a coherent thesis. This task was greatly aided by

Lori Patterson and my co-supervisor and friend, Dr Kwang-Ho Bae who, alongside

Cuong Q. Tang and Johnny Lo, I had the pleasure of sharing a lab and

many discussions with, only a few of which probably made any sense.

I would also like to thank the members of the Western Australian Centre for Geodesy,

the CRC for Spatial Information and the staff and students of the Department of

Spatial Sciences at Curtin University for the support, friendship and copious amounts

of beer when things got too much. In addition, thanks go out to Dale and Baz

at McMullan and Nolan Surveying, along with Stuart, Neil and Glen at AAM

Hatch, who helped greatly with the direction of the project, provided equipment and data for

testing, and assisted a poor mathematician/computer scientist in understanding what

surveying is all about.

I would also like to thank Curtin University's post-graduate scholarship award

and the Cooperative Research Centre for Spatial Information for funding of the

research in this thesis.

Finally, most importantly and most un-originally, thanks go to my family and

friends, and most especially my parents. Their help and support through a lot

of shit during the last few years, some of it project related, some of it not, was

what got me through it. Sorry about the ulcers, but at least you can stop saying

’Dave? He’s still at Uni’.

Contents

1 Introduction
   1.1 Terrestrial Laser Scanning and Applications
   1.2 Motivation
   1.3 Previous Work
   1.4 Objective of the Thesis
   1.5 Thesis Organisation

2 Background
   2.1 3D Point Cloud Data Structure
   2.2 Classification and Segmentation of Spectral Information
   2.3 Classification and Segmentation of Geometric Information
      2.3.1 Edge-Based Techniques
      2.3.2 Surface-Based Techniques
      2.3.3 Other Approaches
   2.4 Region Growing and Clustering
   2.5 Summary

3 Classification
   3.1 Classes of Low-Level Features
      3.1.1 Surface Points
      3.1.2 Boundary Points
      3.1.3 Edge Points
      3.1.4 Summary of Classes
   3.2 Point Attributes for Classification
      3.2.1 Local Neighbourhood Selection
      3.2.2 Principal Component Analysis
      3.2.3 Curvature Approximation
      3.2.4 Variance of Curvature
         3.2.4.1 Properties of Variance of Curvature
         3.2.4.2 Example of the Variance of Curvature Metric
         3.2.4.3 Summary of Variance of Curvature
      3.2.5 Metric for Detection of Boundary Points
         3.2.5.1 Boundary Points Through Examination of Interior Angles

         3.2.5.2 Boundary Points Through First Order Tensor Framework
         3.2.5.3 Boundary Points Through Unorganised Neighbourhood Examination
      3.2.6 Summary of Point Attributes
   3.3 Classification Decision Rules
   3.4 Summary of Classification

4 Refining Classification Results and Attributes
   4.1 Improving Neighbourhood Selection Through Iterative Updating
      4.1.1 Internal and External Relationship Between Points
      4.1.2 Iteratively Updating the Neighbourhood Point Weights
      4.1.3 2D Case Study
      4.1.4 Test with a 3D Point Cloud
      4.1.5 Summary of Improving Neighbourhood
   4.2 Extending Curvature Attributes
      4.2.1 Principal Curvature Directions
      4.2.2 Radius of Curvature Approximation
      4.2.3 Test with a 3D Point Cloud
   4.3 Summary

5 Segmentation
   5.1 Basics of Region Growing
   5.2 Cut-Plane Region Growing (CPRG) Segmentation Procedure
   5.3 Refining Segmentation Results
      5.3.1 Unsegmented Points
         5.3.1.1 Singular Points
         5.3.1.2 Edge and Resolvable Points
         5.3.1.3 Complex and Potentially Irresolvable Features
      5.3.2 Over-Segmentation
      5.3.3 Under-Segmentation
   5.4 Segmentation Summary

6 3D Point Cloud Results
   6.1 Simple Building Facade
   6.2 Industrial Plant
      6.2.1 Classification of the Processing Plant Results
      6.2.2 Enhancing the Information of the Processing Plant
      6.2.3 Segmentation Results for the Processing Plant
   6.3 Large-scale Building Scene
   6.4 Selection of Threshold Values
   6.5 Summary of 3D Point Cloud Results

7 Conclusions and Discussion
   7.1 Summary of Thesis

   7.2 Conclusion
   7.3 Future Directions

References

A Overview of Principal Component Analysis

B Neighbourhood Correction Methods
   B.1 Outlier Detection
   B.2 RANSAC
   B.3 Anisotropic Filtering
   B.4 Optimisation
   B.5 Voting Methods

List of Figures

1.1 Various laser scanners compared in Mechelke et al. (2007). From left to right: Trimble GX (Trimble, 2008), Leica ScanStation (Leica Geosystems HDS, 2008), FARO LS 880 (FARO, 2008) and Z+F IMAGER 5006 (Zoller+Frohlich, 2008).

1.2 Diagram of a laser scanner sampling a surface by pulse lasers.

1.3 Point clouds of (a) Agia Sanmarina church in Greece and (b) an industrial scene (Leica Geosystems HDS, 2008).

2.1 Projection of a square onto a 2D coordinate system in the direction of the arrows. This is not a one-to-one projection since the projection can have more than one sampled point associated with it.

2.2 Cross section of a centre kerb strip in a freeway. (a) Residuals from a first order plane fit against the fitted planar surface. (b) The intensity values against the fitted planar surface.

3.1 Example of the size of the nearest neighbourhood selection. The white and black points denote different surfaces.

3.2 (a) The effect of neighbourhood size on λ0. (b) The effect of neighbourhood size on curvature approximation.

3.3 (a) 2D sample of an intersection between two surfaces. (b) is the curvature approximation at each point based on a neighbourhood of 30 points. (c) is the variance of curvature approximation at each point based on a neighbourhood of 30 points.

3.4 (a) Grey scale of curvature approximation. (b) Grey scale of variance of curvature.

3.5 Ordered and normalised values for curvature and the variance of curvature for the point cloud in Figure 3.4.

3.6 (a) Histogram of curvature approximation. (b) Histogram of variance of curvature. Values calculated for the point cloud in Figure 3.4 and contains 500 bins for each histogram.

3.7 Ordered points in the neighbourhood surrounding a point of interest po close to a boundary.

3.8 (a) is the projected neighbourhood for a point of interest (X) within the interior of a surface, while (b) is the projected neighbourhood for a point of interest (X) near the extent of a surface. The ellipses denote the 39.4% confidence interval. The intersection of the ellipse with the axis ei denotes a value of √λi for the corresponding eigenvalues and eigenvectors.

3.9 Illustrates the differences when utilising the condition in Eq. 3.10. (a) Without the imposed condition of the curvature being less than the average of the curvature in the local neighbourhood, and (b) with the additional condition. White points denote classified edges and green points are the classified surface points.

4.1 Example of an internal relationship. The threshold is defined in red by Eq. 4.3 with all points inside considered to have an internal relationship. (a) shows the case for an intersection and (b) for a slightly curving, noisy surface.

4.2 Example of an external relationship. The threshold is defined in red by Eq. 4.4 with all points inside considered to have an external relationship. (a) shows the case for an intersection and (b) for a slightly curving, noisy surface.

4.3 Normal directions of points on a 2D intersection example. The blue lines represent the initial normal approximation and red lines represent the correct normal approximations without points from the non-dominant surface included in the neighbourhoods.

4.4 Angle of the normal orientation. The values for the surfaces should be approximately -45 and 45 degrees. The blue lines represent the orientation of the initial normal approximation and red lines represent the correct values without points from the non-dominant surface included in the neighbourhoods.

4.5 Updated normal (red) from the original (blue) using just the internal relationship. The top plots show the normal directions overlayed with the structure and the bottom plot shows the orientation angle.

4.6 Updated normal (red) from the original (blue) using just the external relationship. The top plots show the normal directions overlayed with the structure and the bottom plot shows the orientation angle.

4.7 Trend of the weights for points in a neighbourhood affected by the presence of multiple surfaces. At the initial neighbourhood, all points are weighted the same. As iterations occur, the values for weights either tend to zero or a non-zero constant. The line represents the value of the weights for the points in the neighbourhood and how they change with iterations of the procedure.

4.8 Updated normal (red) from the original (blue) using both internal relationships and external relationships, with the centroid of the neighbourhood used in the calculations set as the mean of the neighbourhood, x. The top plots show the normal directions overlayed with the structure and the bottom plot shows the orientation angle.

4.9 Updated normal (red) from the original (blue) using both the internal relationships and external relationships, with the centroid of the neighbourhood used in the calculations set as the point of interest, x0. The top plots show the normal directions overlayed with the structure and the bottom plot shows the orientation angle.

4.10 Point cloud sampled from a section of a door archway with a Leica ScanStation. Axis units are in metres and the colour reflects elevation changes.

4.11 (a) The Gaussian sphere of the uncorrected normal directions. (b) The Gaussian sphere of the corrected normal directions. The colour indicates the density of normal directions, from blue representing zero to red representing in excess of a hundred.

4.12 Histograms of the angles of orientation for the normal directions. (a) The uncorrected normal approximations. (b) The corrected normal approximations. Theta and phi in the histogram are the two angular ordinations defining the normal direction.

4.13 Histograms for edge points of the orientation angles for the normal directions. (a) The uncorrected normal approximations. (b) The corrected normal approximations. Theta and phi in the histogram are the two angular ordinations defining the normal direction. Peaks in the histogram denote the orientation of the surfaces present in the point cloud.

4.14 (a) Principal component directions for a neighbourhood containing points sampled from a cylinder. (b) The projected neighbourhood onto the two largest principal component directions with the ellipse representing the 90% confidence interval.

4.15 (a) Normal directions and their negated values for the neighbourhood given in Figure 4.14. (b) The normal values projected onto the local tangential plane.

4.16 (a) Conic section for the neighbourhood of point coordinates. (b) Conic section for the neighbourhood of point normal directions.

4.17 Neighbourhood of point normals overlayed on the neighbourhood of point coordinates. This illustrates the scaling that occurs along the normal direction of each point, between the neighbourhood of point coordinates and the neighbourhood of normal directions.

4.18 Point cloud sampled from an industrial scene containing multiple pipe sections, consisting of 6696 points with an average spacing of 0.020 m. The colours are based on a simple threshold on the radius of curvature values to delineate different pipes.

5.1 Depiction of how the region growing process can traverse a surface. The valid legs between points are denoted by black solid lines, with invalid legs denoted by red dashed lines. Classified surface and edge points are represented by empty circles and striped circles, respectively.

5.2 Depiction of how the region growing process can traverse a surface with a misclassification of edge points as surface points. The valid legs between points are denoted by black solid lines, with invalid legs denoted by red dashed lines. Classified surface and edge points are represented by empty circles and striped circles, respectively, with the misclassified points denoted by a cross.

5.3 Depiction of how the region growing process can traverse a surface with a misclassification of edge points as surface points. The valid legs between points are denoted by black solid lines, with invalid legs denoted by red dashed lines. Classified surface and edge points are represented by empty circles and striped circles, respectively. The cut planes for edge points are shown as red dotted lines through the edge points.

5.4 Identified non-surface points (red) for the intersection of two surfaces with different sampling densities. A bias of points towards the sparsely sampled surface can be clearly seen.

5.5 Recombination of non-surface points that lie near an edge. The dotted lines represent the residuals of the points to the extended planar surfaces for each neighbouring segment. Points 1 and 2 will be candidates for adding to segment 1, and points 3 and 4 will be candidates for segment 2.

5.6 Recombination of non-surface points that lie near an edge. The bias in the local surface fit causes points 2 and 3 to be more probable candidates for segment 1, and points 1 and 4 for candidates for segment 2.

5.7 Calculated normal approximation for points on the intersection of segments with differing sample densities.

5.8 (a) The segmentation before non-segmented points are absorbed into candidate segments. (b) The results after the absorption procedure takes place. Different isolated surface segments are denoted by different colours, with white points representing edge and boundary points.

5.9 (a) Initial segmentation of a wall section containing windows and down pipes, illustrating how a continuous wall section can be broken up by features on its surface. (b) Recombination of the segments. White points indicate edge points and differing colours highlight different surface segments.

5.10 Segmentation of a section of wall containing recessed windows. (a) Over-segmentation caused by changes in sampling density being detected as discontinuities. (b) Recombining the segments by the proposed method. White points indicate points classified as discontinuities and differing colours highlight different surface segments.

5.11 Two surface segments where the difference between the local surfaces is not insignificant and cannot be considered continuous. The points belonging to different surface segments are represented by different circles.

5.12 Two segments where the differences between the local surfaces and alignments are insignificant and so can be considered continuous and differentiable across the gap between the extents of the two segments.

5.13 Vent comprising four angled slats. (a) Under-segmenting the slats into a single segment. (b) Slats being correctly isolated by tightening the thresholds and reducing the neighbourhood size.

5.14 Two planar surfaces joined by a small arc that ensures a smooth transition from one surface to the other.

6.1 Point cloud of a building facade displaying intensity returns using the HSI colour model. Approximate dimensions of the building are given as (H, W, L) ≈ (21.25 m, 12.8 m, 6 m), where H, W and L represent the height, width and length of the point cloud.

6.2 Values of the attributes used in classification. (a) The curvature metric values. (b) The variance of curvature. (c) The values of the boundary metric.

6.3 Classification results of the building facade. White points indicate classified edge points, red and green points indicate classified surface points and blue points denote classified boundary points.

6.4 Segmentation results produced by the CPRG segmentation method. (a) The initial segments produced. (b) The segments after re-incorporation of valid edge points and removal of insignificant segments. The colouring of the segments has been randomised so a different colour reflects a different surface.

6.5 Results of the top left windows on the front of the building facade with a change in sampling density from Figure 6.1. (a) Classification results. (b) Segmentation results.

6.6 Results of the bottom left windows on the front of the building facade from Figure 6.1. (a) Classification results. (b) Segmentation results.

6.7 Point cloud of an industrial scene provided through Leica (Leica Geosystems HDS, 2008). Approximate dimensions of the point cloud are given as (H, W, L) ≈ (19.6 m, 19.2 m, 27.8 m), where H, W and L represent the height, width and length of the point cloud.

6.8 Values of the attributes used in classification. (a) Curvature metric values. (b) The variance of curvature. (c) The values of the boundary metric.

6.9 Classification of the point cloud with white denoting edge points and blue denoting boundary points. Red and green points both denote surface points, with green points having curvature less than the mean value for the neighbourhood.

6.10 Details of the vent in box 1 from Figure 6.9. (a) Results with a variance of curvature threshold of 5.0 × 10⁻⁵. (b) Results with a variance of curvature threshold of 2.0 × 10⁻⁵. (c) The profile of the vent.

6.11 Details of the corrugated surface in box 2 from Figure 6.9. (a) Results with a variance of curvature threshold of 5.0 × 10⁻⁵. (b) The profile of the surface.

6.12 Details of the pipe in box 4 from Figure 6.9. (a) The results with a variance of curvature threshold of 5.0 × 10⁻⁵. (b) The profile of the surface.

6.13 Details of the complex structure in box 5 from Figure 6.9, with (a) showing the results with a variance of curvature threshold of 5.0 × 10⁻⁵ and (b) showing the meshed surface.

6.14 Approximate radius of curvature for the point cloud of the processing plant. (a) Radius of curvature in the direction of maximum curvature. (b) Radius of curvature in the direction of minimum curvature. (c) The approximation of mean curvature. (d) The approximation of Gaussian curvature.

6.15 (a) Section of a pipe where the connector perturbs the normal direction. (b) Histogram for the angle of alignment of both the uncorrected and corrected normals to the z (vertical) axis.

6.16 Histograms displaying the alignment of the uncorrected and corrected normal directions for a cross section of the vent in Figure 6.10. The alignment is to the z (vertical) axis.

6.17 Segmentation results of the processing plant point cloud. (a) Before re-incorporation of the edge and boundary points. (b) After all valid edge and boundary points have been absorbed into segments.

6.18 Detail of the vent in box 1 from Figure 6.17(b), with (a) showing the results with a variance of curvature threshold of 5.0 × 10⁻⁵ and (b) showing the results with a variance of curvature threshold of 2.0 × 10⁻⁵.

6.19 Details of the structure in box 2 from Figure 6.17(b), with (a) showing the results with a variance of curvature threshold of 5.0 × 10⁻⁵ and (b) showing the results with a variance of curvature threshold of 2.0 × 10⁻⁵.

6.20 Segmentation results of the pipes in box 3 from Figure 6.17(b), with (a) and (b) being the segmentation results before and after absorption of all the possible edge points, respectively.

6.21 Segmentation results of the pipes in box 4 from Figure 6.17(b), with (a) and (b) being the segmentation results before and after absorption of all the possible edge points, respectively.

6.22 Elevation map of a large point cloud taken from a high elevation containing a scene including a building facade and site works. Approximate dimensions of the scene are given as (H, W, L) ≈ (18 m, 153 m, 169 m), where H, W and L represent the height, width and length of the point cloud.

6.23 Values of the attributes used in the classification defined in Chapter 3. (a) Curvature metric values. (b) The variance of curvature. (c) Values of the boundary metric.

6.24 Classification results of the building scene. White points indicate classified edge points, red and green points indicate classified surface points and blue points denote classified boundary points.

6.25 Segmentation results of the building scene. White points indicate un-incorporated edge and boundary points, while the segmented surfaces have been randomly coloured.

6.26 Segmentation results of the main building present in the point cloud.

6.27 Segmentation results of the construction site to the left of the building in the point cloud.

A.1 A neighbourhood of points and the principal components found through decomposition of the covariance matrix.

B.1 Process of removing the worst point, circled in red, based on the residuals as shown in steps (a) to (f). Because the neighbourhood is balanced around the intersection, the removal process will not lead to the normal approximation aligning to a surface normal of just one surface.

B.2 The dominant surface based on area will not necessarily be the same as the dominant surface by number of points, although (due to the equal neighbourhood size in all directions) the point will most likely belong to the dominant surface in terms of area covered by points.

B.3 Example of systematic sampling at regular intervals around the point of interest.

B.4 (a) shows a simple mask based on an angular span from the point of interest while (b) shows the sliding of the sampling window in a sampling direction.

List of Tables

4.1 Surface attributes associated with mean and Gaussian curvature.

4.2 Surface attributes and tensors associated with λ1^(n) and λ2^(n).

4.3 Results for the approximated radius of curvature.

Chapter 1

Introduction

Terrestrial laser scanning (TLS) is still a developing technology in terms of hard-

ware, software and potential applications. As such, the need for developing auto-

mated techniques for the various stages in the workflow of TLS is recognised as

an important research focus (Stanek, 2004; Pfeifer and Briese, 2007). The vari-

ous stages in the TLS workflow start from the 3D point cloud acquisition, and

proceed through calibration (Amiri Parian and Grun, 2005; Lichti and Franke,

2005; Lichti and Licht, 2006), registration (Rusinkiewicz and Levoy, 2001; Sharp

et al., 2002; Grun and Akca, 2005; Bae and Lichti, 2008; Barnea and Filin, 2008;

Brenner et al., 2008), and classification and segmentation of features (Belton and

Lichti, 2006; von Hansen et al., 2006; Brenner and Dold, 2007; Rabbani et al.,

2007). It is the classification and segmentation stage that commands the majority

of the research (Pfeifer and Briese, 2007) since the objectives of the large number

of various applications lead to a large number of distinct features that need to

be isolated. Since the features are distinct, varying and often complex in nature,

this frequently forces the need for manual processing. Combined with the large

volume of point cloud data captured, this translates into a significant proportion

of time and resources in the workflow of TLS being allocated to classification and

segmentation. With automated feature extraction techniques, the amount of user

intervention and processing time can be significantly reduced.

In this thesis, a novel and automated procedure for isolating and segmenting

arbitrary surface features will be presented. The proposed process will comprise

two main steps. The first is a method for classifying the points into

geometric classes (Kalaiah and Varshney, 2003; Belton and Lichti, 2006). The

second step is to segment the point cloud based on the information gained in

the classification stage. In addition, the practical application and effectiveness of

the proposed procedure will be demonstrated as applied to real world 3D point

clouds from various applications.

1.1 Terrestrial Laser Scanning and Applications

Figure 1.1: Various laser scanners compared in Mechelke et al. (2007). From left to right: Trimble GX (Trimble, 2008), Leica ScanStation (Leica Geosystems HDS, 2008), FARO LS 880 (FARO, 2008) and Z+F IMAGER 5006 (Zoller+Frohlich, 2008).

A point cloud contains a 3D representation of the surfaces of objects sampled

at fixed regular intervals. Similar datasets (often limited to 2D or 2.5D (Schneider

and Weinrich, 2004)) were acquired in the past by various methods including

range imaging, radar (radio detection and ranging), sonar (sound navigation and

ranging) and photogrammetry systems. With the increasing rate of technological

development, more cost effective and accurate systems to acquire 3D information

have been developed, with one such system being laser scanners. Several examples

of commercial terrestrial laser scanners are presented in Figure 1.1. Laser scanner

technology builds a 3D representation by measuring the distance to a surface at

regular intervals in a systematic manner as illustrated in Figure 1.2. The range

can be measured by various techniques such as time of flight, phase-based or

optical triangulation. An overview of the different methods with their limitations

and benefits is detailed in Schulz and Ingensand (2004). Further summaries and

discussions into laser scanning technology can be found in Schulz and Ingensand

(2004), Slob and Hack (2004), Gordon (2005) and Mechelke et al. (2007).

Figure 1.2: Diagram of a laser scanner sampling a surface by pulse lasers.
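
To make the basic measurement geometry concrete, the sketch below converts a time-of-flight observation into a range and then converts the polar observations (range, horizontal angle, vertical angle) into local Cartesian coordinates. It is an illustration only: the function names, timing value and angle convention are assumptions, not the specification of any particular scanner.

```python
import numpy as np

C = 299_792_458.0  # speed of light (m/s)

def pulse_range(round_trip_time):
    """Time-of-flight range: the pulse travels to the surface and back,
    so the one-way distance is half the round-trip distance."""
    return 0.5 * C * round_trip_time

def polar_to_cartesian(rng, horizontal_angle, vertical_angle):
    """Convert scanner observations (range in metres, horizontal and
    vertical angles in radians) into local Cartesian coordinates."""
    x = rng * np.cos(vertical_angle) * np.cos(horizontal_angle)
    y = rng * np.cos(vertical_angle) * np.sin(horizontal_angle)
    z = rng * np.sin(vertical_angle)
    return np.column_stack([x, y, z])

# Example: a pulse returning after ~66.7 ns corresponds to a range of ~10 m.
r = pulse_range(66.7e-9)
xyz = polar_to_cartesian(np.array([r]), np.array([0.3]), np.array([0.1]))
```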

Technological development has considerably reduced the limitations of early TLS

systems in areas such as data capture rate, spatial and angular resolution, errors

in range and angular measurement, and the volume of data. This has also reduced

the overall cost of TLS, which has encouraged increased adoption in many

fields of traditional photogrammetry and surveying as well as in new applications.

Some of these applications include: structural deformation (Gordon et al., 2003;

Schafer et al., 2004), modelling of industrial plants as built features (Staiger, 2002;

Sternberg et al., 2004), maritime architecture (Bishup et al., 2007), recording and

cataloguing historical and cultural heritage sites (Langer et al., 2000; Abmayr

et al., 2005), landslide mapping (Ono et al., 2000), building facades and virtual

cities (Bohm, 2005; Becker and Haala, 2007; Boulaassal et al., 2007), inventory

management (Thies and Spiecker, 2004) and forensic investigations (Pagounis

et al., 2006).

1.2 Motivation

There are many advantages to utilising TLS technology, such as fast data capture,

non-contact surface measurement, and the complete sampling of a scene (Slob

and Hack, 2004). These benefits occur mainly in the acquisition stage of the

workflow. The primary disadvantage of TLS occurs within the data processing

stage. Due to the completeness and size of the sampled point cloud, it will

contain a large amount of redundant and extraneous information (Lichti, 2005).

Furthermore, the point cloud will consist of a vast number of features varying

in size and complexity that need to be processed and identified, often through

manual intervention. This causes the processing stage of the workflow to be

several orders of magnitude longer than the acquisition stage (Stanek, 2004),

which results in the majority of the resources being consumed by the processing

stage.

For example, the features of a building facade can range from simple elements

(e.g. walls), through more complex features (e.g. windows and doors), to their

individual components (e.g. door frames, handles, panels,

brickwork, etc.) (Boulaassal et al., 2007; Pu and Vosselman, 2007). Similarly, a

point cloud of an industrial scene has a large and complex catalogue of structures

that include pipes, beams, flanges, and individual bolts (Tangelder et al., 2003;

Rabbani and van den Heuvel, 2004). Point clouds for such examples are presented

in Figure 1.3. By developing automated algorithms and techniques to help with

isolating the features and content of point clouds, the amount of resources spent

on this stage is significantly reduced. Therefore this thesis aims to contribute to

the field of TLS by presenting a novel and automated procedure for classification

and segmentation of 3D point clouds.

1.3 Previous Work

The previous section highlighted the importance of research into automated pro-

cessing techniques. In this section, a brief survey of classification and segmenta-

tion procedures previously employed in 3D point clouds from laser scanners will

be summarised. An overview and discussion on existing methods can be found in

Hoover et al. (1996), Zhoa and Zhang (1997), Vosselman et al. (2004), Rabbani

et al. (2006) and Pfeifer and Briese (2007).

Figure 1.3: Point clouds of (a) Agia Sanmarina church in Greece and (b) an industrial scene (Leica Geosystems HDS, 2008).

Rabbani et al. (2006) detailed a segmentation procedure based on normal approx-

imations and Gaussian sphere methods (Horn, 1984; Varady et al., 1998). The

methods were tested for the identification of geometric structures such as planes

and cylinders in industrial scenes. A similar segmentation method was utilised on

airborne laser systems (ALS) point clouds by Rottensteiner et al. (2005) to de-

lineate roof sections. Vosselman et al. (2004) studied Hough transforms (Hough,

1962; Illingworth and Kittler, 1988) for recognition of geometric structures. Vos-

selman and Dijkman (2001) and Pu and Vosselman (2007) presented an example

of the application of Hough transforms on building facades. Details on Gaussian

spheres, segmentation and surface fitting can be found in Varady et al. (1998).

Exploration of special surface structures was given in Pottmann et al. (2002) and

Peternell (2004) with emphasis on how they could be identified with Gaussian

spheres.

Bauer et al. (2003), von Hansen et al. (2006) and Boulaassal et al. (2007) investi-

gated the usage of RANSAC (random sample consensus), developed by Fischler

and Bolles (1981), for detecting planes in 3D point clouds. Its usage in detecting

and fitting cylinders was tested by Chaperon and Goulette (2001) and the auto-

mated detection of other geometric shapes was outlined in Schnabel et al. (2007).

Conventional fitting of geometric primitives through least squares was explored

in Lukas et al. (1998) and Shakarji (1998). Besl and Jain (1988b) presented a

method for segmentation through fitting of variable order surfaces while Taubin

(1991) outlined a segmentation method by the fitting of implicit surfaces. Briese

(2006) also utilised surface fitting to extract linear structures in 3D point clouds.

Pauly et al. (2002) presented a curvature approximation based on principal com-

ponents (Johnson and Wichern, 2002) that has been used in many applications

(Adamson and Alexa, 2003; Pauly et al., 2003; Cohen-Steiner et al., 2004; Kobbelt

and Botsch, 2004). Tang and Medioni (1999) also presented a robust curvature

approximation with the utilisation of the tensor voting methodology for point

cloud processing which was outlined in Tang and Medioni (2002). A method for

segmentation can be found in Visintini et al. (2006), which uses approximated

Gaussian and mean curvatures.

The work presented in this thesis will be related to the edge-based segmentation

method, as outlined in Zhoa and Zhang (1997). Edges will be detected

from a metric derived from PCA and the work of Pauly et al. (2003), but will

focus on the change in curvature rather than the curvature value itself. In this way, there

is no reliance on the surface structure or surface fitting to perform segmentation

of the point cloud.

1.4 Objective of the Thesis

The overall objective of this thesis is to provide a robust classification and segmen-

tation procedure to isolate the salient features contained in a TLS point cloud.

The definition of a salient feature to be extracted is any smooth surface structure

which is considered to be continuous and differentiable throughout the sampled

region of the surface. To this end, the objectives for this thesis are:

• Define attributes from the geometric information for use in classification

of points into geometric primitives. The defined attributes should only

be minimally affected by the properties of the point cloud such as noise,

resolution and sample density and be primarily affected only by the surface

structure within the point cloud.

• To refine and extend these attributes in order to retrieve observable infor-

mation such as radius and direction of curvature.

• To outline an automated segmentation process that utilises these attributes

to segment the point cloud into salient surface features with a minimal num-

ber of parameters. These parameters should not be significantly affected

by the point cloud attributes. In other words the effects of the point cloud

attributes on the values of the parameters need to be easily understood and

predicted.

• No prior knowledge of the point cloud properties and the features it contains

should be required for determining the segmentation parameters.

• Within reasonable limitations, the procedure should perform consistently

regardless of the features and structures that the point cloud contains. For

example, a building facade should give a similar degree of results as an

industrial scene.

• The surface segments should only comprise points that are sampled from

the salient feature that each segment represents. All points should either belong to

a surface segment or be deemed unresolvable due to noise, resolution or

sample density at the point.

1.5 Thesis Organisation

To achieve these objectives, the thesis will be divided into chapters and appen-

dices, outlining and describing the proposed method at each stage.

Chapter 2 presents more detail about the research involving point cloud

processing, with attention to how the techniques are employed and related to the

objective of processing point clouds.

Chapter 3 outlines the basic metrics to approximate curvature, change in surface

structure, and the effect of occlusion in sampling. These will be used to classify

points into the categories of surfaces, edges and surface boundaries.

Chapter 4 explains the procedures for refining and extending the results from the

classification results explained in Chapter 3. The curvature and change in surface

will be used to approximate the radius and principal directions of curvature at

each sampled point. Edge and boundary points will be refined from a region of

classified points to a string of points. Finally, methods for removing the influence

of multiple surface structures from the normal direction approximation and local

neighbourhood will be given.

Chapter 5 outlines how the information described in the previous two chapters is

used in a segmentation procedure to isolate each of the resolvable salient surface

features contained within the point cloud. In addition, further techniques to clean

the segmentation results are presented.

Chapter 6 shows the results produced from each of the preceding chapters, as

well as the final segmented point cloud as applied to several data sets ranging

from industrial scenes to building facades that will vary in resolution, attributes

and complexity.

A summary of the thesis will be given in Chapter 7. It highlights the achieve-

ments of the techniques presented as well as the drawbacks and directions for

additional research. The appendices contain additional information, derivations

and techniques.

Chapter 2

Background

Many research disciplines have studied classification and segmentation of different

data sets. Some of these fields include photogrammetry, remote sensing, computer

vision, pattern recognition, artificial intelligence and medical imaging (Haralick

and Shapiro, 1993; Pham et al., 2000; McGlone et al., 2004). Regardless of the

data sets involved, the fundamental principles and concepts in one research field

have been frequently applied to problems in other areas.

In the case of 3D point clouds from TLS, the goal is to segment the data through

an automated process into salient surface features. In general, these salient fea-

tures consist of geometric or surface primitives that comprise the scanned scene.

The process for isolating these features can be preformed by utilising the geomet-

ric and spectral properties of the points in the TLS data set.

This chapter will briefly introduce some of these important characteristics of

TLS point clouds, as well as review some of the relevant methods and principles

from these different fields. How these methods can be applied to the problem of

classifying and segmenting TLS point clouds will also be presented.

2.1 3D Point Cloud Data Structure

One of the most important aspects of the research on TLS point clouds is develop-

ing data structures for managing the large volumes of data (Barber et al., 2003).

A large amount of memory is required to store several million points, which can

sometimes restrict the practical application (Huising and Pereira, 1998). Further-

more, the efficient retrieval of points and their nearby neighbourhoods is required

for efficient computation of local point cloud properties (Arya et al., 1998; Lalonde

et al., 2005). With the increasing resolution and decreasing capture time of each

new laser scanner, the volume of data that must be dealt with will continue to

increase.

Another significant problem with TLS data is the requirement that the point

clouds are treated as being unorganised in nature. In the case of 2D imagery,

a regular 2D data structure can be used since the imagery is regularly sampled

and organised into rows and columns (Haralick and Shapiro, 1993). This allows

a pixel and the surrounding neighbourhood to be easily retrieved. The 2.5D

nature of airborne laser scanner (ALS) point clouds means that the data can also

be stored in a 2D data structure (Maas and Vosselman, 1999; Filin and Pfeifer,

2006). For each easting and northing coordinate value (X,Y) in the ALS point

cloud, a single elevation value (Z) can often be associated since, unlike TLS, the

occluded surfaces (in the elevation direction) will never be sampled in ALS data

sets (Maas, 2002). Therefore, the elevation coordinate can be represented either

by a one-to-one function with respect to the easting and northing coordinates, or

be reconstructed as a 2-manifold[1] surface (Lambers et al., 2007). Both allow for a

2D data structure to store and retrieve the points. In the presence of overlapping

scans (Dorninger and Nothegger, 2007) and the allowance for multiple return

values (Persson et al., 2005), the data behaves more like a fully 3D data set,

however the mentality behind ALS point clouds often still allows the data to be

stored in a 2D structure.

[1] A two-manifold M is a topological space in which each point has a neighbourhood that is homeomorphic to an open disk of R² (Falcidieno and Ratto, 1992).

For TLS point clouds, this is often not the case. If the point cloud comprises a

scan from a single location, it is possible to either project it onto a 2D plane as

done for ALS point clouds, or use the regular interval in the angular coordinate

system as done for 2D imagery. However, in most cases, TLS data will comprise a

registered point cloud from multiple scan locations. Figure 2.1 shows how a simple

projection cannot be performed without occluding sampled data. Therefore, there

is no simple projection to reduce the number of dimensions, nor can a single 2-

manifold be fitted to the data.

Figure 2.1: Projection of a square onto a 2D coordinate system in the direction of the arrows. This is not a one-to-one projection since the projection can have more than one sampled point associated with it.

The simplest way to cope with this problem is to evenly divide the 3D domain

into voxels or bins (Gorte and Pfeifer, 2004). This method can be considered to

provide a 3D raster with each voxel representing a 3D pixel that contains all the

points whose coordinates fall inside the voxel. This method provides a fast

retrieval time for a voxel. However, since TLS data comprises sampled surfaces

and is unorganised so that it is not regularly sampled throughout the 3D space,

the point membership will be unevenly distributed throughout the voxels, with

many voxels having no member points. This means that the method may not

be regarded as an efficient methodology in terms of memory management.
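
A minimal sketch of this voxel binning is given below; the function name and voxel size are illustrative assumptions. Storing only the occupied voxels in a dictionary avoids allocating the many empty voxels mentioned above.

```python
from collections import defaultdict
import numpy as np

def build_voxel_grid(points, voxel_size):
    """Group point indices by the voxel their coordinates fall into.
    Only occupied voxels are stored, so empty voxels use no memory."""
    keys = np.floor(points / voxel_size).astype(int)
    grid = defaultdict(list)
    for index, key in enumerate(map(tuple, keys)):
        grid[key].append(index)
    return grid

points = np.random.rand(10000, 3) * 5.0           # stand-in for a scanned scene
grid = build_voxel_grid(points, voxel_size=0.25)
# Nearby points are retrieved by looking up the voxel containing a query
# point together with its 26 neighbouring voxels.
```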

A more efficient storage structure is one that utilises tree-based structures such as

oct-tree and kd-tree (Sedgewick, 1988; Samet, 1989, 1990). The oct-tree works by

recursively splitting the point cloud domain into eight segments (in half along each

coordinate axis) to obtain a new leaf node (Bucksch and van Wageningen, 2006).

The splitting continues in this manner for each leaf node until a leaf has only

one point associated with it. The searching and creation time and methodology

are similar to those of ordinary trees (Zach et al., 2004). However, the resulting tree may

be unbalanced and therefore have its search time hindered (Zach et al., 2004).

The kd-tree is similar in construction except it splits the domain in half through

a point so that the number of points on one side of the split is the same as on

the other side (Bentley, 1975). Because of the method of splitting, unless points

are added or deleted, the tree that will be created will be balanced allowing for

optimal search time (Arya et al., 1998).

The classification and segmentation method in this dissertation is not dependent

on a specific data structure, and hence any of the mentioned methods can be

used in theory. In the practical application of the proposed classification and seg-

mentation method in this thesis, the kd-tree data structure developed by Arya

et al. (1998) was utilised. This choice was made primarily because of the bal-

anced nature of the created tree, its comparable efficiency with other methods, and

the simplicity of retrieving the k-nearest neighbourhood, which consists of the k

points closest to a point of interest.
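
A minimal sketch of such a k-nearest neighbourhood query is given below. It uses SciPy's cKDTree purely for illustration; the implementation used in this thesis is the kd-tree of Arya et al. (1998), and the variable names and the choice of k = 30 are assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

points = np.random.rand(50000, 3)   # stand-in for an unorganised TLS point cloud
tree = cKDTree(points)              # balanced kd-tree, built once

def k_nearest_neighbourhood(tree, points, point_index, k=30):
    """Indices of the k points closest to the point of interest
    (the point of interest itself is included in the result)."""
    _, indices = tree.query(points[point_index], k=k)
    return indices

neighbourhood = k_nearest_neighbourhood(tree, points, point_index=0, k=30)
```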

2.2 Classification and Segmentation of Spectral Information

The type of information available for classification and segmentation can be

broadly categorised as either spectral information or geometric information. Of

the two, the geometric properties are most widely used since spectral informa-

tion, such as intensity and colour, is dependent on the surface properties such as

reflectivity, surface texture, incidence angle and scanner specifications (Lichti and

Harvey, 2002; Pfeifer et al., 2007). In particular, when multiple scans from different

scanner locations are registered, a common point can have a large variation in

the different sampled intensity values, necessitating a filter or correction method

(Hofle, 2007).

Once the intensity values have been corrected or normalised so that they are

locally consistent, they can be examined to identify local discontinuities using

methods from 2D photogrammetry and computer vision such as the Canny, Sobel or

Laplace operators (Ziou and Tabbone, 1998). Since TLS point clouds

usually do not have a simple global structure, the local neighbourhoods around

points of interest are independently examined. The simplest method to perform

this is by projecting the 3D neighbourhood onto a 2D reference plane (Pauly

et al., 2002) and assigning the intensities as a response value.
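
A minimal sketch of this projection step is given below, assuming the reference plane is taken as the best-fit (PCA) plane of the local neighbourhood; the function and variable names are illustrative only. A 2D edge operator can then be applied to the resulting in-plane coordinates with intensity as the response value.

```python
import numpy as np

def project_to_reference_plane(neighbourhood_xyz, intensities):
    """Project a local 3D neighbourhood onto its best-fit (PCA) plane and
    attach the intensity of each point as a response value."""
    centroid = neighbourhood_xyz.mean(axis=0)
    centred = neighbourhood_xyz - centroid
    # Eigen-decomposition of the 3x3 covariance matrix; the eigenvectors
    # with the two largest eigenvalues span the local reference plane.
    eigenvalues, eigenvectors = np.linalg.eigh(np.cov(centred.T))
    plane_axes = eigenvectors[:, np.argsort(eigenvalues)[::-1][:2]]
    uv = centred @ plane_axes        # 2D in-plane coordinates
    return np.column_stack([uv, intensities])
```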

Using intensity values is beneficial for the classification of features from a con-

tinuous surface. Two examples for this are given in the classification of glacier

surfaces (Hofle, 2007) and the differentiation between mortar and bricks in a

scanned wall (Gordon et al., 2001). This is also useful for automatic target iden-

tification (Valanis and Tsakiri, 2004). However, the intensity value can contain a

large level of noise when compared with the residual values from the 2D reference

plane, which limits its adoption without a filtering procedure (Hofle, 2007). Fig-

ure 2.2 shows the noise in the unfiltered intensity compared to that in residual

values for a first order plane fit.

Figure 2.2: Cross section of a centre kerb strip in a freeway. (a) Residuals from a first order plane fit against the fitted planar surface. (b) The intensity values against the fitted planar surface.

Research into the utilisation of RGB colour information has recently seen an

increase. The driving factor is that more scanners come with in-built colour

acquisition capabilities and more processing software supports overlaying 2D im-

agery onto the 3D point cloud coordinate system. There have been some issues in

using RGB information, which have included insufficient resolution of the colour

imagery compared to the resolution of the 3D point cloud, misalignment or poor

registration between the coordinate systems of the colour imagery and 3D point

cloud, temporal effects if not acquired in the same epoch, and occlusion if cap-

tured from different setup locations (Jansa et al., 2004; Abdelhafiz and Niemeier,

2006). These issues are becoming less of a concern with the incorporation of

higher resolution imagery, and the development of external registration tech-

niques between 2D colour imagery and 3D point clouds to compensate for these

shortcomings (Forkuo and King, 2004; Abdelhafiz and Niemeier, 2006; Al-Manasir

and Fraser, 2006).

Compared to intensity, the colour information does not suffer from the same

shortcomings since it is considered consistent over the entire point cloud and

between different scanners and setup locations, allowing for global processing

techniques to be used. In practice, the classification and segmentation methods

applied to colour information in point clouds are often derived from methods

in remote sensing and photogrammetry, such as learning techniques, supervised

and unsupervised methods and so on (Lillesand and Kiefer, 2000; McGlone et al.,

2004; Jensen, 2005). An example of a multi-spectral classification of a point cloud

through colour information was given in Lichti (2005). In addition, the classifi-

cation and segmentation can be performed on the 2D image and the results can

then be applied to the 3D point cloud. The benefit in applying the segmenta-

tion from the 2D imagery to the 3D point cloud is that the topic has been well

researched in the fields of computer vision and photogrammetry.

The majority of research into the fusion of 2D imagery and 3D point cloud fo-

cuses on combining the strengths and overcoming the shortcomings of using them

individually (Jansa et al., 2004). For this thesis, the spectral information will not

be utilised. One reason is because many point clouds still do not have available

spectral information. The other, more important, reason is that the primary goal


of this thesis is to present a procedure to segment the salient surface features,

which can be accomplished by using the 3D coordinate information of a point

cloud alone to examine local geometric information.

2.3 Classification and Segmentation of Geometric Information

Most classification and segmentation procedures rely purely on geometric infor-

mation derived from the 3D point coordinates. One reason behind this is that

the information is common to all point clouds regardless of scanner hardware or

setup. Another reason is that the objective of the classification and segmentation

procedures is often targeted towards extracting geometric features. Procedures

using geometric information can be categorised as edge-based or surface-based

(Zhoa and Zhang, 1997).

2.3.1 Edge-Based Techniques

In edge-based techniques, point clouds are segmented by first identifying the sur-

face extents such as boundaries and intersections. Then each surface is isolated

from each other by the identified extents. The surface points can be grouped

together into common segments enclosed by the identified surface extents. An

overview of these methods was given by Wani and Batchelor (1994). The varia-

tions between the different edge-based procedures arise primarily from the method

employed to determine the surface extents.

Edge points are determined by a metric that represents either curvature or surface

variation (Rabbani et al., 2006). The simplest approach is to estimate a first order

planar surface through the local neighbourhood surrounding a point of interest


in order to approximate the surface normal (Hoppe et al., 1992). This is often

performed through principal component analysis (PCA) (Johnson and Wichern,

2002) or least squares regression, with both methods producing equivalent results

(Shakarji, 1998). The amount of variation in the surface normal direction provides

an indication of the level of curvature or local surface change (Pauly et al., 2002).

A problem with this method is that there is no directional component to the

curvature approximation; it is dimensionless and unitless, and is also affected by

noise (Mitra et al., 2004).

A method to overcome these shortcomings is to estimate a higher order surface,

either explicitly to the local coordinate system or implicitly to the global coor-

dinate system (Bolle and Vemuri, 1991). In general, it is sufficient to represent

the underlying surface2 in a neighbourhood using a second-order polynomial or

quadratic surface since the size of the neighbourhood restricts the complexity of

the structure that it contains (Yang and Lee, 1999). The primary advantage of

using a higher order surface is that the estimated parameters can be used for

the approximation of the local algebraic curvature and the associated directions

(Ohtake et al., 2004). This method is computationally expensive since it requires

an iterative least-squares fitting method (Mikhail and Ackermann, 1976). In ad-

dition, the surface discontinuity at edges cannot properly be represented with

this method (Briese, 2006). In order to accurately model the discontinuity, the

piecewise nature of the surface must be taken into account to recover the exact

edge location (Gumhold et al., 2001; Briese, 2006).

A less computationally expensive approach is to examine the estimated surface

normal directions for each point within a local neighbourhood. The vari-

ation in the normal orientation provides an indication of whether there is sur-

face change or curvature present in the local neighbourhood (Page et al., 2002).

Second-order tensor voting is a common framework for examining the normal di-

rections (Tang and Medioni, 2002). The tensors are derived from the eigenvalue

decomposition of the covariance matrix formed from the approximated normal

directions. The tensors are then examined using a set of prototypes, namely

2 The underlying surface is the true surface description that the points are sampled from. An estimated surface approximates this underlying surface.


stick, plate and ball tensors, to determine the surface type. A similar method

is proposed in Jiang et al. (2005) to estimate surface change and the associated

principal curvature directions. Their method provides the ability to approximate

not only a curvature metric, but also the mean and Gaussian curvature as well.

Although this method can provide the directions of curvature, the curvature met-

ric is still dimensionless. A method for applying a unit of measurement to the

curvature approximation will be presented in Chapter 4.

2.3.2 Surface-Based Techniques

In surface-based segmentation methods, points that exhibit similar surface at-

tributes are grouped together. They can be categorised as either following a

bottom-up or top-down approach (Rabbani et al., 2006). A top-down approach

works by iteratively sub-dividing the point cloud until each sub-division contains

points that all exhibit the same properties. A bottom-up approach performs by

selecting a seed point and adding surrounding points exhibiting the same proper-

ties into a common region until no more points can be added. Existing methods

predominantly employ a bottom-up approach.

A commonly employed bottom-up method for segmentation is to estimate a sur-

face model to a neighbourhood of points and then to include all surrounding points

that comply with the approximated surface parameters. Most often a planar sur-

face patch is used because of its simple nature (Rottensteiner and Briese, 2003;

Pu and Vosselman, 2007). More complex surfaces such as geometric primitives

(Marshall et al., 2001; Schnabel et al., 2007) or higher order polynomial surfaces

(Gotardo et al., 2004) are also used to allow for more complex structures. When

fitting higher order surfaces, care must be taken to ensure that problems such as

over-fitting or surface irregularities will not occur (Lempitsky and Boykov, 2007;

Barnea et al., 2007). To provide a good initial surface approximation, techniques

such as RANSAC are employed to define the initial neighbourhood (Fischler and

Bolles, 1981; Schnabel et al., 2007). An alternative to the bottom-up approach of

these methods is to fit a surface to the entire point cloud and then sub-divide into


smaller surface regions. Examples of these top-down approaches include the split

and merge procedure (Xiang and Wang, 2004) and fitting variational surfaces

(Wu and Kobbelt, 2005).

An alternative approach is to examine the parameter space of the point cloud by

techniques such as Hough transformation (Hough, 1962). This is performed by

clustering points that exhibit similar surface parameters into common elements.

The most common surface structures used for the Hough transformation

are planar surfaces (Vosselman and Dijkman, 2001). More complex surfaces such

as cylinders (Rabbani and van den Heuvel, 2005) and spheres (Ogundana et al.,

2007) have also been examined (Vosselman et al., 2004). However, the use of

more complex structures may lead to increased computational complexity since

the parameter space has a higher dimensional order than the 3D coordinate space

(Khoshelham, 2007).

Similar approaches to Hough parametric transformations are Gaussian sphere

methods (Horn, 1984; Dold, 2005). They remove some of the increased complexity

by examining the approximated normal directions through their projection onto

a unit sphere. For example, points on a plane are mapped to a single point on the

sphere. In a similar manner, the points on a cylinder will be mapped to a circle

on a plane through the origin of the unit sphere (Rabbani and van den Heuvel,

2005). Similar methods using normal approximations are presented by Peternell

(2004) and Pottmann et al. (2002) for segmenting developable surfaces produced

by revolution, or molding along a path.
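The mapping can be illustrated with a small synthetic check (an illustrative sketch only, assuming NumPy and analytically known normals): when projected onto the Gaussian sphere, the normals of a cylinder lie on the great circle perpendicular to its axis, while the normals of a plane collapse to a single point.

    import numpy as np

    axis = np.array([0.0, 0.0, 1.0])                  # cylinder axis
    theta = np.linspace(0.0, 2.0 * np.pi, 100)
    cylinder_normals = np.column_stack((np.cos(theta), np.sin(theta), np.zeros_like(theta)))

    # On the unit (Gaussian) sphere the cylinder normals lie on the plane through the
    # origin perpendicular to the axis, so their dot product with the axis is zero.
    print(np.allclose(cylinder_normals @ axis, 0.0))        # True

    # All normals of a planar surface map to the same single point on the sphere.
    plane_normals = np.tile(np.array([0.0, 0.0, 1.0]), (100, 1))
    print(np.allclose(plane_normals, plane_normals[0]))     # True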

These types of methods are generally used when there is a priori information on

the structure contained in the sampled point cloud. As such, they are usually

optimised and targeted for specific applications.


2.3.3 Other Approaches

Examining the individual scan lines can reduce the complexity of
searching in the 3D coordinate domain by reducing the problem to 2D (Jiang and

Bunke, 1994). Individual scan lines in the point cloud must be retained in order

to use this approach. Otherwise, the point cloud must be either re-sampled

into a regularised structure or examined by extracting profiles taken at regular

intervals through the point cloud (Sithole and Vosselman, 2005). This limits the

usefulness of this method as it is an assumption for this thesis that the point

cloud is unorganised in nature.

For extraction of complex structures, point clouds are, in most cases, broken into

a set of relationships between simpler geometric primitives and these primitives

are then searched for (Tangelder et al., 2003; Rabbani and van den Heuvel, 2004).

Once a primitive is found, the surrounding locality is tested to determine if any

other primitives are located nearby and whether their relationship to each other

satisfies the relation between primitives in the prototype definition (Rappoport

and Spitz, 1997). Searching the point cloud for the complex structure without

reducing the structure into primitives is computationally prohibitive as it is sim-

ply a registration exercise. The drawbacks to using complex prototypes are that

they need to be defined prior to segmentation and the library of such prototypes

can grow large.

2.4 Region Growing and Clustering

Surface segmentation methods need to utilise either region growing or clustering.

A region growing method starts by examining a seed point, or a point known to belong to
a segment. The segment is then grown by interrogating the points surrounding
the seed point, starting from the closest, until the candidate points are exhausted,
with each point either rejected as unsuitable or included in the segment.

This decision is based on whether the points exhibit common properties that are


shared by the entire segment, where the properties are examined at a local level

around a point. An overview of region growing methods can be found in Wani

and Batchelor (1994), von Hansen et al. (2006) and Rabbani et al. (2006).

Clustering can be categorised as being either hierarchical or partitional. Hierar-

chical clustering starts by examining a point (or cluster) with its closest neigh-

bouring point (or cluster). If the difference between the two is small, then they

are merged together. This method is repeated until the difference between two

clusters or its point members is considered too great for the clusters to be merged.

A partitional clustering method is performed in reverse to a hierarchical cluster-

ing method. It is performed by continually splitting clusters until the difference

between two split clusters is considered insignificant. An overview of clustering

methodologies is presented in Jain and Dubes (1988).

While there are similarities between region growing and clustering, there are

advantages of using one over the other. One of the benefits of region growing

methods is that they can be performed in one pass through the point cloud with

each point being examined once, while clustering methods often require multiple

passes. However, while there is some opportunity for parallelisation in region

growing, clustering lends itself much more to parallel processing because of the

recursive nature of the process (Co et al., 2003). The primary means of segmen-

tation used in this thesis is the region growing technique.
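A minimal sketch of the bottom-up region growing idea is given below (illustrative only, assuming NumPy and SciPy; the similarity test here is a simple angular threshold on pre-computed unit normals rather than the criteria developed later in this thesis):

    import numpy as np
    from scipy.spatial import cKDTree

    def grow_region(points, normals, seed, k=30, angle_deg=10.0):
        """Grow one segment from a seed index, adding neighbours whose unit normals
        lie within angle_deg of the seed normal. Returns the set of member indices."""
        tree = cKDTree(points)
        cos_thres = np.cos(np.radians(angle_deg))
        segment, candidates = {seed}, [seed]
        while candidates:
            current = candidates.pop()
            _, neighbours = tree.query(points[current], k=k)
            for j in neighbours:
                if j not in segment and abs(np.dot(normals[j], normals[seed])) >= cos_thres:
                    segment.add(j)
                    candidates.append(j)
        return segment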

2.5 Summary

This chapter has provided an overview of the properties of a point cloud captured

with TLS systems. Background on the current methods and techniques was

presented, along with their advantages and limitations. The next chapter will

outline a novel edge-detection and classification procedure for classification of a

3D point cloud. This will provide the basis for segmentation of the salient surface

features.


Chapter 3

Classification

Classification aims at reducing a data set into groups of points that are deemed

to belong to the same class. These classes are determined by examining the

attributes of the different points (e.g. curvature, intensity, geometric properties,

etc.) and then grouping points together that exhibit similar properties. The

benefit of a classification method is that it reduces the associated attributes of

a point into a single discrete value (Dash and Liu, 1997). A general overview of

classification and techniques is provided in Michie et al. (1994) and Dash and Liu

(1997).

This chapter will present a classification process for TLS point clouds. First,

some important classes for 3D point clouds will be defined, and their relationship

to segmentation will be shown. Then a description of the attributes used in

the classification process will be given, followed by an explanation of how these

attributes relate to the defined point classes and the physical properties they

reflect. Finally, the proposed decision model for determining the point class

membership using the described attributes will be outlined.


3.1 Classes of Low-Level Features

Often in applications of classification, there is no prior knowledge on what the

analysed data set contains. Hence the goals and classes are undefined and the

classification method needs to naturally evolve the definitions of the classes. How-

ever, in many applications, such as point cloud processing, it is known what types

of feature the data contains and the expected goal has been defined prior to pro-

cessing. As such, the resulting classes and attributes can be chosen prior to

classification in order to ensure the best results (Dash and Liu, 1997).

For the method presented in this thesis, the goal is to isolate and extract smooth

surface segments. The algebraic properties exhibited by these segments are that

they are continuous and differentiable over the entire region of the surface seg-

ment. Continuity means that for any two points on the surface segment, it is

possible to traverse the surface from one point to the other without leaving the

surface. In a similar manner, if the surface is differentiable over the entire region,

then the surface segment can be represented by one smooth function. From these

surface properties, three classes are defined prior to the classification procedure,

comprising surface, boundary and edge points.

3.1.1 Surface Points

Surface points are those points that are sampled from within the extents of a

surface segment. Since a point only contains the coordinate information, the

neighbourhood of a point of interest must be taken into account when determining

whether the local surface properties are continuous and differentiable. As such,

two attributes must primarily be used to reflect continuity and differentiability.

Continuity, in this case, describes whether the surrounding points denote that

the surface continues in all directions around a point of interest. Differentiability

describes whether surrounding points reflect a smooth surface that undergoes a

consistent change. As such, it relies on the calculation of surface change at each


point in the neighbourhood to determine whether a consistent rate of

change is present throughout the neighbourhood. Hence, two attributes reflecting

continuity and differentiability are sufficient for determining points belonging to

the surface class under the defined conditions for surfaces in this thesis.

3.1.2 Boundary Points

When the surface in the neighbourhood of a point is not continuous, that point

is classed as a boundary point. Boundary points are those that are located at the

extent of a sampled surface. The attribute used to determine this case is found by

examining the neighbourhood. For surface points, there should be sampled points

in all directions for the neighbourhood around the point of interest. If the point of

interest is not completely surrounded by neighbouring points in all directions, this

indicates a discontinuity in the sampling method. A discontinuity in sampling is

primarily caused by the point being located on or near the boundary of a surface.

Note that the boundary is that of the sampled surface, hence it can be caused

not only by the extent of the true surface, but also by surface occlusions that

create discontinuities in the sampling method (Sotoodeh, 2006).

3.1.3 Edge Points

Edge points are those that occur either on or near an intersection between two or

more surface segments. Since the local neighbourhood will contain points from

different surfaces, the properties of the points, such as local surface change, will

not exhibit the same value locally. Hence the surface will not be considered to be

differentiable. In this definition, edge points will contain not only intersections

between two surfaces, but will also contain corner points and points on highly

complex surfaces where the individual surface segments cannot be isolated due

to either insufficient sampling or underlying complex structures.


3.1.4 Summary of Classes

These three defined classes will be required for the segmentation process pre-

sented in Chapter 5, which is based on the edge-based segmentation techniques

highlighted in Chapter 2. Surface points will be used to identify which points

belong to the internal regions of surface segments. The combination of edge and

boundary points are employed to define the extents of the surface regions and

separate them from one another (Gumhold et al., 2001).

The surface class may contain sub-classes such as parabolic, hyperbolic and el-

liptical surfaces often determined through mean and Gaussian curvature values

(Visintini et al., 2006). More commonly-used surface classes are based on geomet-

ric structures such as planes, cylinders, spheres, cones and tori (Marshall et al.,

2001; Schnabel et al., 2007). In a similar manner, the edge class can also con-

tain sub-classes such as edge points between two surfaces, corner points between

three or more, and points on highly complex surfaces. Further classes have also

been used, such as points belonging to a line or singular points. In addition, the

classes can be based on the function of the object that a point belongs to, such as

windows, doors, walls, pipes etc (Pu and Vosselman, 2007). These further classes

are aimed at the identification of the surface type. Since the aim of this thesis is

to isolate general surface segments and not identify specific surface types, they

will not be used in the classification procedure for this thesis. For classification

of the points into the proposed classes, it has been illustrated in this Section that

attributes for describing continuity, local surface change and differentiability need

to be utilised. The next Section will present a description of the properties for

these attributes and how they are calculated.

3.2 Point Attributes for Classification

There are many attributes that can be ascribed to a point. These come from either

geometric or spectral information, as outlined in Chapter 2. In the case of the


classes defined in the previous section, the focus will be on geometric information.

Specifically, the attributes will come from examining the local neighbourhood of

a point of interest. Principal component analysis (Appendix A) is applied to the

neighbourhood to derive the attributes that describe surface continuity, surface

change and surface differentiability or consistency. The values of these attributes

will be dependent on the properties of the neighbourhood of a point of interest,

and on how it is selected.

3.2.1 Local Neighbourhood Selection

The first step in many classification procedures is selecting a neighbourhood sur-

rounding a point of interest. A neighbourhood is selected by choosing the closest

points based on a distance metric such as fixed Euclidean distance, geodesic

distance and k nearest neighbours (Arya et al., 1998; Page et al., 2002; Mitra

et al., 2004).

The benefit of a fixed Euclidean distance is that it can be chosen using the size of

the surface features to be extracted. However, this makes it difficult to adapt the

neighbourhood size based on spatial density and spacing of points. Geodesic

distance is more adaptable as it works by forming concentric circles using the

surrounding points around the point of interest, similar to a mesh or web (Page

et al., 2002). A method using the number of concentric circles as a distance metric

is more computationally expensive when forming the neighbourhoods, and can

suffer problems when dealing with discontinuities in sampling which causes a

break in the formation of the circles.

A simpler method is to form the neighbourhood with k points closest to the

point of interest. This allows the scaling of the neighbourhood size based on the

density of points. A sparse sampling will result in a larger neighbourhood and a

dense sampling will result in a smaller neighbourhood, with each neighbourhood

having a guaranteed level of redundancy. Generally, a smaller neighbourhood

allows for smaller surface features to be resolved. However, if an insufficient level


of redundancy or neighbourhood size is used, then the underlying structure will

not be sufficiently sampled and will not be resolved from the surface noise.

Figure 3.1: Example of the size of the nearest neighbourhood selection. The white and black points denote different surfaces.

For example, if a neighbourhood of k = 10 was chosen (Figure 3.1), then any

attributes calculated from it would be affected by only one surface. However if

a neighbourhood of k = 30 was chosen, then the attributes calculated will be

affected by two surfaces. Therefore, the size of the neighbourhood must be large

enough so that the structure can be adequately resolved regardless of sampling

density and surface noise, but small enough to minimise the probability of a

neighbourhood containing points from multiple surfaces.

This introduces the concept of equating a 3D neighbourhood to a 2D pixel. In a

neighbourhood (pixel), if an attribute derived from that neighbourhood indicates

a surface property (such as an edge), then the property can occur anywhere

within the neighbourhood (pixel). To refine the location, a small neighbourhood

size should be used, which equates to a finer resolution.

This also leads to another important concept when using neighbourhoods to cal-

culate attributes for determining the properties of a point. If an attribute that is
based on zero order information (that which is sampled at the point) is used to
determine discontinuities or some other property, then the discontinuity or
property will be located at the sampled point. If a first order attribute (one that


is based on a neighbourhood and examining zero order attributes) is used to find

discontinuities or some other property, then the location of the discontinuity or

property will be within the limits of the neighbourhood. This can be extended:
when an n-th order attribute is used to find discontinuities or some other structure,
the location will be contained within n neighbourhoods around the point.

Some methods are available to modify the neighbourhood to remove the effect

of points belonging to different surfaces, and these will be discussed in Chapter

4. In most cases, however, the neighbourhood size is set so that attributes of

the smallest-scale feature can be calculated without being affected by multiple

surfaces. While the presented classification uses the k nearest neighbourhood,

another method may be employed in its place. The k nearest neighbourhood

was chosen for its simplicity in formulating the neighbourhoods and its ability to

scale the neighbourhood when dealing with changes in point density and sparsely

sampled surfaces. Varying point density is a common problem with TLS point
clouds unless a re-sampling method has been used. Once a neighbourhood for a point
is selected, the next stage in the classification process, principal component
analysis, can be performed.

3.2.2 Principal Component Analysis

Principal component analysis (PCA) uses eigenvalue decomposition on the covari-

ance matrix to get both the local orientation of a neighbourhood of points and the

variance in the principal directions. Some common applications of PCA in point

cloud processing have included approximating the normal direction (Berkmann

and Caelli, 1994; Mitra et al., 2004), fitting first order planar surfaces (Wein-

garten et al., 2003), defining the tensors for tensor voting (Tang and Medioni,

2002; Tong et al., 2004), and providing a local point coordinate system (Bolle and

Vemuri, 1991; Daniels et al., 2007). Appendix A provides a detailed overview of

the formulation of the covariance matrix and how to derive the principal compo-

nents, with their associated properties highlighted in regards to a neighbourhood

of 3D points.


The major use of principal component analysis in this thesis lies in defining the

local orientation of a neighbourhood of points and the calculation of the attributes

required for classification. The first of these attributes is an approximation of

surface curvature.

3.2.3 Curvature Approximation

Curvature is defined as the ratio of changes in both the surface normal direction

and the tangential directions of a point on a surface (Stewart, 1995). There are

a variety of techniques to approximate curvature such as surface fitting (Besl

and Jain, 1988a,b), variation in surface normals (Berkmann and Caelli, 1994;

Jiang et al., 2005), tensor voting (Tang and Medioni, 2002), and angles between

neighbourhood members (Dyn et al., 2001). Pauly et al. (2002) utilised the results

of the PCA on a local neighbourhood in order to approximate surface variation

as a measure of curvature. It is this method that is used primarily as a means of

approximating the level of curvature in this thesis.

From Appendix A, eigenvector e0 (associated with the smallest eigenvalue λ0),

approximates the surface normal direction (Pauly et al., 2002). If the neigh-

bourhood contains a planar surface with no surface noise, then the eigenvalue λ0

should have a value of zero. If the surface is not planar, then λ0 will be non-zero

and provides an indication of the level of curvature in the neighbourhood. How-

ever, λ0 will not only be affected by geometric curvature, it will also be affected

by noise in the sampling. These two contributing factors cannot be separated

without a priori knowledge of the surface and its error model. In most cases, the

curvature will have the dominant effect on λ0 and noise will have an insignifi-

cant effect (Pauly et al., 2002). This is mainly because a sufficiently large size of

the neighbourhood may suppress the noise of point clouds with the assumption

that the distribution of the neighbourhood is well balanced. As such, λ0 can, in

principle, be used to approximate the level of curvature and for classification of

points.


Using λ0 to approximate curvature does suffer from some shortcomings. One of

these is that λ0 is not only affected by surface noise and curvature, but also by the

size of the neighbourhood when curvature is non-zero. The effect of using different

neighbourhood size can be seen in Figure 3.2(a) where a larger neighbourhood size

causes more variation in the approximate surface normal direction. This means

that identical surface structures will give different results if the neighbourhoods

have different sample densities.

Figure 3.2: (a) The effect of neighbourhood size on λ0. (b) The effect of neighbourhood size on curvature approximation.

To compensate for this problem illustrated in Figure 3.2(a), the percentage of

the population variance in the normal direction can be used to approximate the

level of surface curvature. This was presented by Pauly et al. (2002), where the

approximate curvature (κ) is calculated as:

κ ≈ λ0 / (λ0 + λ1 + λ2)   (3.1)

As shown in Figure 3.2(b), the value of κ approximated in Eq. 3.1 exhibits a more

constant value regardless of the size and density of a neighbourhood. The reason

for this is that the effect of the size of the neighbourhood exists in all eigenvalues

and when λ0 is divided by λ0 + λ1 + λ2, the numerator and denominator cancel

out the effect of the size of the neighbourhood. Similarly, this also means that

if the covariance is calculated by either the sample or population statistics, the

results for κ will be equivalent for both methods. This is because the degrees of


freedom used in Eq. A.1 are both present in the numerator and denominator and

will cancel out. It should be noted that the approximate curvature value from Eq.

3.1 is unitless and dimensionless, thus it cannot provide the direction and radius

of the estimated curvature values. However, it does provide a robust metric for

curvature that is simple to calculate when compared to the computational complexity
of the approximation derived through the fitting of a higher order surface (Pratt,

1987). These properties are why it is used as the method of curvature estimation

in this thesis.
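A minimal sketch of this approximation is given below (illustrative only, assuming NumPy and a k x 3 array of neighbourhood points); it computes the eigenvalues of the neighbourhood covariance matrix and evaluates Eq. 3.1 directly:

    import numpy as np

    def pca_curvature(neighbourhood):
        """Approximate the surface normal and the curvature metric of Eq. 3.1
        for a k x 3 neighbourhood of points."""
        centred = neighbourhood - neighbourhood.mean(axis=0)
        eigvals, eigvecs = np.linalg.eigh(np.cov(centred.T))   # eigenvalues in ascending order
        lam0, lam1, lam2 = eigvals                              # lam0 is the smallest
        normal = eigvecs[:, 0]                                  # e0 approximates the surface normal
        kappa = lam0 / (lam0 + lam1 + lam2)                     # Eq. 3.1
        return normal, kappa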

The main concern of classification is in determining the local discontinuity, which

was explained in Section 3.1. In theory, if a point of interest is on a surface

discontinuity, then it will be non-differentiable and its curvature value must be in-

finite. In practice, regardless of the curvature approximation used, the curvature

value will assume an underlying continuous and differentiable surface. This is due

primarily to the discrete sampling and the use of the neighbourhood of points to

calculate attributes. The next Section will focus on presenting an attribute to

highlight these discontinuous and non-differentiable points between surfaces using

the curvature approximation presented in this Section.

3.2.4 Variance of Curvature

In this section, a novel metric will be presented for use in detecting surface inter-

sections. If a straightforward curvature approximation is used for classification

of surface, boundary or edge features, it can suffer from a few shortcomings

regardless of the method for approximating curvature. First the curvature ap-

proximation is affected by noise and surface texture, although methods examining

the surface normal approximations can alleviate these effects (Jiang et al., 2005).

Second, because most approximation methods assume an underlying continuous

and differentiable surface in the point neighbourhood, the curvature approxima-

tion for edges is treated as a highly curved surface. Theoretically, the curvature

value for edges should be infinite because of the discontinuity between surfaces.

To separate them from other curved surfaces, a threshold must be used which is


dependent on the sampling density, noise and neighbourhood size, which are not

constant throughout a point cloud. Finally, the curvature approximation does

not accurately reflect complex surfaces. Most of these shortcomings arise because, when

curvature is approximated, it is done under the assumption of a single underlying

surface that is continuous and differentiable. This, in conjunction with insufficient

sampling to resolve simple features and the effects of complex structures, causes

problems in the curvature approximation and its subsequent use in classification.

If a neighbourhood contains points scanned by a laser scanner which belong to

one surface, then the geometric properties should have similar values locally (with

the differences being insignificant). Conversely, if a point is near an extent of a

surface such that the neighbourhood contains points scanned from more than

one surface, the geometric properties of the points in the neighbourhood should

show significant variation. The majority of curvature approximations will return

a value on the assumption that the neighbourhood of a point contains a single

surface feature, e.g. Eq. 3.1. A new measure is proposed in this thesis to highlight

those neighbourhoods for which this assumption is invalid and are affected by the

intersection of more than one surface structure. This measure is based on the

variance of the curvature approximation and is defined as follows:

var(κ) = E(κ²) − E(κ)² = (1/k) Σ_{i=1}^{k} (κi − κ̄)²   (3.2)
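A minimal sketch of how this metric might be evaluated is given below (illustrative only, assuming NumPy and SciPy and the pca_curvature function from the previous sketch): κ is first computed for every member of the neighbourhood of the point of interest, each from its own neighbourhood, and var(κ) is then the population variance of those values, as in Eq. 3.2.

    import numpy as np
    from scipy.spatial import cKDTree

    def variance_of_curvature(points, index, k=30):
        """Evaluate var(kappa) of Eq. 3.2 for the k-neighbourhood of points[index].
        Assumes pca_curvature(neighbourhood) from the earlier sketch is available."""
        tree = cKDTree(points)
        _, idx = tree.query(points[index], k=k)
        kappas = []
        for j in idx:                       # curvature of each neighbourhood member
            _, jdx = tree.query(points[j], k=k)
            _, kappa_j = pca_curvature(points[jdx])
            kappas.append(kappa_j)
        kappas = np.asarray(kappas)
        return np.mean((kappas - kappas.mean()) ** 2)   # population variance, Eq. 3.2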

3.2.4.1 Properties of Variance of Curvature

Regardless of the underlying surface structure and the attributes of the surface to which a point
of interest belongs, if all the points in the neighbourhood belong to the same surface, then the points

should exhibit similar attributes locally. In the presented metric in Eq. 3.2, the

attribute being examined is the level of curvature (although it is possible to

examine other attributes). If the level of curvature is constant locally, then the

value for var(κ) should be zero. Conversely, if there is an intersection between two

or more surfaces, then the level of curvature will not be constant as it fluctuates

from a high approximate value on the edge to a lower value for points further


away from the edge. As such, the value of var(κ) will not be zero.

Because var(κ) relies on the first order approximation of curvature κ, it can be

argued that var(κ) reflects the second order approximation of change in surface.

The argument behind this is that κ reflects the first order change, and var(κ)

reflects the change in the first order value, hence reflecting the second order

change. For a feature like a cylinder, the first order change in surface would be

non-zero, but constant over the entire surface, resulting in a second order change
of zero. This is reflected in var(κ) as κ would have a constant non-

zero value over the surface resulting in var(κ) being calculated as zero, reflecting

the expected value of the second order change in surface.

Even for highly textured or noisy surfaces such as corrugated surfaces, the vari-

ance of curvature of each point within the neighbourhood remains nominally zero

whereas their curvature value from Eq. 3.1 will not be. These properties of var(κ)

significantly help a classification method to achieve a high level of success
when identifying surface intersections (Belton and Lichti, 2006).

3.2.4.2 Example of the Variance of Curvature Metric

The effect of geometrically inconsistent surface curvature is shown for a 2D inter-

section in Figure 3.3. The var(κ) will be non-zero if the neighbourhood contains

a point whose curvature is affected by an intersection, as the curvature values of

the neighbourhood widely fluctuate, as shown in Figure 3.3(b). If the neighbour-

hood does not contain a point affected by an intersection, the curvature values

should be nearly constant, resulting in a nominally zero value for var(κ).

Figure 3.4 presents the values of the curvature approximation and the respective

values of the variance of curvature in the same neighbourhood of a 3D point cloud.

If the pipe in the foreground is examined, the curvature values do not appear to

be significantly different from the values for the edge intersection between the pipe


Figure 3.3: (a) 2D sample of an intersection between two surfaces. (b) The curvature approximation at each point based on a neighbourhood of 30 points. (c) The variance of curvature approximation at each point based on a neighbourhood of 30 points.


and the floor. In contrast, the value of the variance of curvature does demonstrate

a significant difference between the pipe and the intersection, with the points on

the pipes exhibiting a value closer to that of the other surfaces present.

Figure 3.4: (a) Grey scale of curvature approximation. (b) Grey scale of variance of curvature.

Figure 3.5 shows the difference between the distribution of values for the curva-

ture and variance of curvature approximation. For a scene that is populated by

simple planar surfaces, the distribution of the values of curvature and variance

of curvature is such that most of the values will be small (nominally zero). For

a scene comprising high order or highly complex surfaces with widely varying

curvature values and few intersections, the values for curvature will vary widely,

while the values for variance of curvature will be distributed so that most of the

values are still small (nominally zero). It is only when the scene is complex con-

sisting of many intersections between surfaces that the values for the variance

of curvature are widely distributed. Whereas the distribution of curvature values

relies on the surface structures, the distribution of variance of curvature relies on

the complexity of a scene and the number of intersections present. This trend is

shown in Figure 3.5 by the green arrow (flatness) for curvature, and the blue arrow

(smoothness) for variance of curvature. The separation between the two becomes

clear in industrial scenes such as Figure 3.4 where there a wide range of curvature

values, but a small number of surface segments and hence intersections between

them. This is further illustrated in Figure 3.6 where only approximately 12000

points were identified with zero curvature, whereas approximately 50000 points
were identified with zero variance of curvature. When the classification process de-


scribed in Section 3.3 was applied, approximately 90000 points were identified as

surface points.

Figure 3.5: Ordered and normalised values for curvature and the variance of curvature for the point cloud in Figure 3.4.

3.2.4.3 Summary of Variance of Curvature

There are some minor drawbacks in using this method. While the computational

order is the same as for curvature, it does increase the computational time by

requiring another pass through the data set, or to be run in conjunction with the

curvature calculations. However, this cost is not substantial when compared with

the time required to compile the Kd-tree for calculating the neighbourhood.

The other drawback is that the edge effect extends noticeably further from the
edge than it does with the curvature value alone. This results from the con-
siderations highlighted in Section 3.2.1: as the variance of curvature is a second
order attribute, an intersection will affect points within a dis-
tance of two neighbourhoods, whereas curvature as a first order attribute will be

affected within a distance of only one neighbourhood. Figure 3.3 illustrates that

an intersection affects points further from its location with the variance of curva-

ture metric compared to just curvature. This extended effect of intersections on


Figure 3.6: (a) Histogram of curvature approximation. (b) Histogram of variance of curvature. Values calculated for the point cloud in Figure 3.4, with 500 bins in each histogram.

variance of curvature will not prohibit the ability to find edge points. A method

to remove the extended area of the effect on variance of curvature when finding

edge points will be proposed later in the chapter. This will be based on using the

curvature approximation in conjunction with the variance of curvature value.

The variance of curvature can determine whether a neighbourhood contains a

surface that can be considered to be differentiable. Hence it is useful in detecting

whether a point belongs to a surface or is sampled near an intersection. The next

section will discuss an attribute, which will help determine if a neighbourhood

contains a continuous sampling or not.

3.2.5 Metric for Detection of Boundary Points

As mentioned, boundary points are important because, in conjunction with edge

points, they help define the extents and discontinuities that encapsulate surface

features. This section outlines the commonly used techniques employing interior

angles and PCA. From these techniques, a definition of a new simple metric

examining the relationship between the neighbourhood centroid and the point of

interest through the chi-squared test is presented.


3.2.5.1 Boundary Points Through Examination of Interior Angles

A conventional method for determining boundary points examines the maximum

angle between the ordered neighbourhood of points around a point of interest

(Gumhold et al., 2001). This is done by projecting the neighbourhood onto a

reference plane found through principal component analysis. The points are then

ordered around the point of interest by the angles ∠pjp0pj+1, as depicted in

Figure 3.7. The angles between the points should be roughly equal with a value

of 2π/(k − 1) (with k as the number of points in the neighbourhood), unless the

point of interest is close to a boundary point, as shown in Figure 3.7. If this

is the case, then there will be a value θj significantly greater than the others,

indicating the neighbourhood is possibly affected by a boundary. A value of π for

θj means that the point p0 is likely to be on a straight boundary. This method

is more often used on mesh surfaces because it requires the neighbourhood to be

organised.

Figure 3.7: Ordered points in the neighbourhood surrounding a point of interest p0 close to a boundary.
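A minimal sketch of this interior-angle test is given below (illustrative only, assuming NumPy; the neighbourhood array here excludes the point of interest p0 itself):

    import numpy as np

    def max_angular_gap(neighbourhood, p0):
        """Project the neighbourhood onto its PCA reference plane and return the
        largest angle between consecutive neighbours ordered around p0."""
        centred = neighbourhood - neighbourhood.mean(axis=0)
        _, eigvecs = np.linalg.eigh(np.cov(centred.T))
        e1, e2 = eigvecs[:, 2], eigvecs[:, 1]               # in-plane directions
        rel = neighbourhood - p0
        angles = np.sort(np.arctan2(rel @ e2, rel @ e1))
        gaps = np.diff(np.concatenate((angles, angles[:1] + 2.0 * np.pi)))
        # Roughly equal gaps indicate an interior point; a gap near pi suggests
        # that p0 lies on or near a straight boundary of the sampled surface.
        return gaps.max()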

3.2.5.2 Boundary Points Through First Order Tensor Framework

In order to remove the need for an organised neighbourhood of points, a method

will be proposed to examine the eigenvalue decomposition of the unorganised

neighbourhood of points. An established method for this is first order tensor

voting (Tong et al., 2004), and is achieved by examining the two largest eigenvalues

λ1 and λ2.


Figure 3.8(a) shows how a neighbourhood around a point in the interior of the

surface will have similar values for λ1 and λ2 as the variation in the direction of

e1 and e2 are similar. Figure 3.8(b) highlights a neighbourhood for a point near

the extent or boundary of the surface and shows that the values for λ1 and λ2

will be significantly different as the variation in the direction of e1 and e2 is not

similar. By examining the difference between the two eigenvalues, a boundary

can be detected (Gumhold et al., 2001). A problem with examining the difference

between λ1 and λ2 is that the eigenvalues are dependent on the size and density

of a neighbourhood. Since these factors are assumed not to be constant for a

TLS point cloud, it makes setting a global threshold difficult.

Figure 3.8: (a) The projected neighbourhood for a point of interest (X) within the interior of a surface; (b) the projected neighbourhood for a point of interest (X) near the extent of a surface. The ellipses denote the 39.4% confidence interval. The intersection of an ellipse with the axis ei denotes a value of √λi for the corresponding eigenvalue and eigenvector.

3.2.5.3 Boundary Points Through Unorganised Neighbourhood Examination

Because of the limitations of examining the eigenvalues, it is proposed to examine

the distance between the centroid of the neighbourhood and the point of interest.

As illustrated in Figure 3.8, an interior surface point will be close to the centroid


of the neighbourhood, while a point on or near the boundary or extent of a surface

will have the centroid of the neighbourhood significantly biased away from it.

This distance will also be dependent on the size and density of a neighbourhood.

To reduce this impact in the proposed method, the ratio of the distance between

the centroid (c) and the point of interest (p0) against the radius or span of the

neighbourhood can be used as a dimensionless metric. This can be calculated as:

rc,p0 = ‖p0 − c‖ / r   (3.3)

with ‖p0 − c‖ being the distance between c and p0, and r the radius or span
of the neighbourhood. Note that r may be calculated in the specific direction of the vector from c to p0

since the neighbourhood forms an ellipsoid.

However, a better result can be achieved from utilising the confidence interval

(Belton and Lichti, 2006). For the point of interest, the significance level can be

found by examining the equation:

(u0)² / λ1 + (v0)² / λ2 = c²   (3.4)

with ui and vi being the projected coordinates of point pti (with pt0
denoting the point of interest), where

ui = e1 · (pti − µ)   (3.5)
vi = e2 · (pti − µ)   (3.6)

with a·b denoting the dot product between a and b and µ representing the centroid

of the neighbourhood. Note that the value of c² corresponding to a significance level α can be calculated from

the chi-square distribution as follows:

c² = χ²₂(α)   (3.7)

If the point of interest is highly significant (α ≈ 1) then the point will be an


interior point. If the point is not significant (α ≪ 1), then it will be an exterior
or boundary point. By examining the value of c², a point can be tested to see if it is

influenced by a boundary within the neighbourhood.

The benefit of the proposed method is that the metric is not dependent on the

neighbourhood size and sampling density and can be examined through statistical

testing. Additionally, the direction of the boundary can be approximated by a

vector that is orthogonal to both the normal direction, and the vector from p0 to

µ.
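A minimal sketch of the proposed test is given below (illustrative only, assuming NumPy and SciPy, a k x 3 neighbourhood array and the point of interest p0); it evaluates c² from Eqs. 3.4 to 3.6 and compares it against the chi-squared threshold of Eq. 3.7, using the 39.4% confidence interval recommended above:

    import numpy as np
    from scipy.stats import chi2

    def boundary_test(neighbourhood, p0, confidence=0.394):
        """Return (c2, is_boundary) for the point of interest p0, using Eqs. 3.4 to 3.7."""
        mu = neighbourhood.mean(axis=0)
        centred = neighbourhood - mu
        eigvals, eigvecs = np.linalg.eigh(np.cov(centred.T))   # eigenvalues ascending
        lam1, lam2 = eigvals[1], eigvals[2]                    # the two largest eigenvalues
        e1, e2 = eigvecs[:, 1], eigvecs[:, 2]
        u0, v0 = np.dot(e1, p0 - mu), np.dot(e2, p0 - mu)      # Eqs. 3.5 and 3.6
        c2 = u0 ** 2 / lam1 + v0 ** 2 / lam2                   # Eq. 3.4
        thres_c2 = chi2.ppf(confidence, df=2)                  # Eq. 3.7, approximately 1.0
        return c2, c2 > thres_c2     # boundary if p0 falls outside the confidence ellipse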

3.2.6 Summary of Point Attributes

For all of the classes that have been specified, various attributes have been out-

lined to help with determining to which class a point will belong. For points

that are near a surface discontinuity, an approximated curvature metric and the

proposed variance of curvature can be used to determine if a point is affected.

The proposed chi-square metric based on the distance between the centroid of the

neighbourhood and a point of interest can be used to gauge the effect caused by a

surface boundary. Surface points can be analysed using a combination of metrics.

Now that these attributes are defined, and how they are affected by properties

within the neighbourhood, decision rules can be developed to classify the points.

3.3 Classification Decision Rules

From the definition of the class descriptions and necessary attributes, the classi-

fication decision model will be specified. The first step is to define a neighbour-

hood surrounding a point. Experience from applying the classification procedure

to various point clouds (some of which are highlighted in Chapter 6) has shown

that the closest 30 or 50 neighbours usually gives a good sample on which to per-


form classification; however, the impact of the previously outlined considerations

in Section 3.2.1 must be taken into account when selecting the neighbourhood.

The next step is to determine whether a point is on a surface or near an edge.

By examination of the curvature approximation κ, a threshold value thresκ can

be set such that a point can be classified as:

surface = { true, if κ ≤ thresκ;  false, if κ > thresκ }   (3.8)

Using a threshold on curvature is a commonly used method (Zhoa and Zhang,

1997; Gumhold et al., 2001; Pauly et al., 2003). As previously illustrated, since

the curvature approximation is affected by surface texture and noise as well as

curvature, this threshold must take these factors into consideration. If the noise

was known beforehand, an individual threshold could be calculated locally (e.g.

Bae et al., 2005). However, since the proposed metric of variance of curvature

eliminates some of the effects of these factors and is affected primarily by inter-

section, it can provide a better method of classifying surface points based on the

decision model:

surface = { true, if var(κ) ≤ thresvar(κ);  false, if var(κ) > thresvar(κ) }   (3.9)

This will test whether the curvature values in the neighbourhood are consistent

with being sampled from a surface. The value thresvar(κ) is set as a tolerance for

a near zero value. From the conditions mentioned in Section 3.2.1, and since the

variance of curvature is a second order attribute, the effects produced by an edge

intersection (or discontinuity) can extend to points within two neighbourhoods

around the edge location. The effect on curvature values can extend up to within

one neighbourhood of the edge location. Figure 3.3 previously illustrated this

property. To limit the distance from the intersection that a point will be clas-

sified as an edge, a threshold can be placed on both the curvature and variance

of curvature. It should be obvious that if a point of interest is on an edge, its

curvature should be significantly greater than the other values in the neighbour-

hood. Therefore, a good value for thresκ, if it is assumed that the neighbourhood


contains an edge (by the threshold in Eq. 3.9 being satisfied), is:

thresκ = κ̄ + m √var(κ)   (3.10)

where κ̄ is the mean of the curvature values and m is the number of standard de-

viations from the mean based on the normal distribution. In the results presented

in Chapter 6, m is set to zero. In this case, if a neighbourhood surrounding a point

on the edge is taken, then all the values classed as edge points should be within

one neighbourhood from the intersection as only these values will be greater

than the mean value for the neighbourhood. Figure 3.9 shows the difference

in classification when utilising the extra condition placed on the curvature value.

Classification was done on a neighbourhood size of 40 and thresvar(κ) = 2.0 × 10⁻⁵.

Figure 3.9: The differences when utilising the condition in Eq. 3.10. (a) Without the imposed condition of the curvature being less than the average of the curvature in the local neighbourhood, and (b) with the additional condition. White points denote classified edge points and green points are the classified surface points.

In the classification process presented in this thesis, the threshold in Eq. 3.10 is

applied simultaneously with the variance of curvature threshold, described in Eq.

3.9, to all points. Only if both thresholds are satisfied is the point considered

to be classed as an edge point. The different colours for surface points used in

Chapter 6 depict whether Eq. 3.10 is satisfied or not. In addition, Eq. 3.10 can

be applied after Eq. 3.9 is found to be true, or vice versa, instead of being tested

simultaneously. However, an edge point will only be classed as such if both Eq.

3.9 and Eq. 3.10 are satisfied.


Initially, boundary points will be classed as surfaces because only one surface will

be seen in the neighbourhood (although it is not necessary to test if it is a surface

first). To determine if the point is also a boundary point, the value c² is tested
against a threshold thresc² such that:

boundary = { true, if c² > thresc²;  false, if c² ≤ thresc² }   (3.11)

The value thresc² can be set, as previously stated, from the chi-squared distribution
as:

thresc² = χ²₂(α)   (3.12)

where α is the significance level, with a boundary causing the point of interest

to be outside the confidence interval of 1 − α. A confidence interval of 39.4% is

recommended from testing with 3D point clouds.

The total classification method for a point is summarised in Algorithm 1. This

makes use of the three metrics discussed in this chapter to classify it into three

primary classes. This information can then be used to segment the surface with

relative ease. Before this is done, some additional attributes for points and their

neighbourhoods will be defined in the next chapter to enhance the classification

results and segmentation process.

3.4 Summary of Classification

In this chapter, the attributes and methods for classifying the points in a point

cloud as being affected by a surface, edge or boundary entity within their local
neighbourhood were outlined. The results of the application of the process will be

presented in Chapter 6. These classes reduce the problem in the segmentation

stage from examining the entire point cloud and attributes, to a problem of just

extracting the surfaces containing classified surface points surrounded by the dis-

continuities highlighted by the boundary and edge points.


Algorithm 1 The classification algorithm.

1: procedure Classify(Point p0)
2:    Get k neighbours surrounding p0
3:    Calculate var(κ) for the neighbourhood
4:    Calculate c² for the neighbourhood
5:    if κ > κ̄ + m √var(κ) and var(κ) > thresvar(κ) then
6:        p0 = Edge
7:    else
8:        if c² < thresc² then
9:            p0 = Surface
10:       else
11:           p0 = Boundary
12:       end if
13:   end if
14:   return
15: end procedure
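The decision rules above could be sketched in code as follows (illustrative only, assuming NumPy and that the point's curvature κ, the neighbourhood mean κ̄ and var(κ), and the boundary statistic c² have been computed, for instance with the earlier sketches; the default thresholds echo the values quoted in this chapter, with thres_c2 = 1.0 corresponding approximately to the recommended 39.4% confidence interval):

    import numpy as np

    def classify_point(kappa, kappa_mean, var_kappa, c2,
                       thres_var=2.0e-5, m=0.0, thres_c2=1.0):
        """Three-way classification following Algorithm 1."""
        # Edge: Eqs. 3.9 and 3.10 satisfied simultaneously.
        if var_kappa > thres_var and kappa > kappa_mean + m * np.sqrt(var_kappa):
            return "edge"
        # Surface: inside the confidence ellipse of the boundary test.
        if c2 < thres_c2:
            return "surface"
        return "boundary"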

While the classified points can be directly used to segment the point cloud, there are several tech-

niques that can be applied to enhance the classification results, and to extend and

correct the attributes from this chapter to allow for a more robust segmentation

procedure. Therefore, the next chapter will overview these techniques before the

segmentation process is outlined.
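To make the decision rule of Algorithm 1 concrete, a compact sketch is given below. The attribute names (the curvature value, its neighbourhood mean and variance, and the boundary metric c2) and the threshold parameters are placeholders for the quantities defined in this chapter; the sketch is illustrative only.

```python
# Sketch of the decision rule in Algorithm 1 for a single point, assuming the
# neighbourhood attributes have already been computed. Names are placeholders
# for the metrics defined in this chapter.
import math

def classify_point(kappa, kappa_mean, var_kappa, c2,
                   m, thres_var_kappa, thres_c2):
    """Return 'Edge', 'Surface' or 'Boundary' for one point."""
    # Edge test: curvature well above the neighbourhood mean and a high
    # variance of curvature (Eqs. 3.9 and 3.10 applied together).
    if kappa > kappa_mean + m * math.sqrt(var_kappa) and var_kappa > thres_var_kappa:
        return "Edge"
    # Boundary metric applied to the remaining points (Eq. 3.11).
    if c2 < thres_c2:
        return "Surface"
    return "Boundary"
```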


Chapter 4

Refining Classification Results

and Attributes

The previous chapter outlined the basis of the proposed classification procedure.

This included an overview on the formulation of the attributes, the physical

properties they reflect, and how they are utilised in the decision model. These

attributes included an approximation of curvature, variance of curvature and a

metric for boundary detection which allowed the points to be categorised into

classes consisting of surface, boundary and (near) edge points.

The result from this proposed classification procedure is sufficient to be utilised in

an edge-based segmentation method, e.g. Wani and Batchelor (1994). However,

these attributes do not reflect all the surface properties, leading to some loss

of information that is useful in the segmentation stage. The most prominent is

that the curvature approximation does not have either a directional component

or an associated unit of measurement. In addition, the attributes (e.g. surface

orientation) can be adversely affected by the presence of multiple discrete surface

entities in the local neighbourhood.


This chapter is aimed at refining the attributes and results from the classification

method proposed in Chapter 3. The first refinement is for the neighbourhood

definition to ensure that multiple surface features can be filtered out. A novel it-

erative procedure is presented to refine the neighbourhood solution. This solution will be a reduced neighbourhood, in which outliers and multiple surface entities have been removed from the original neighbourhood selection.

The next refinement is to improve the curvature approximation to incorporate

the principal directions of curvature, along with the radius of curvature through

simple principal component analysis (PCA). These improvements allow for addi-

tional information to be used in the segmentation procedure presented in Chapter

5, to produce the results in a more robust manner.

4.1 Improving Neighbourhood Selection Through

Iterative Updating

Most of the attributes used in point cloud processing rely on the examination of

a local neighbourhood around a point of interest. As such, they rely on a good

neighbourhood definition to obtain an accurate value. Insufficient neighbourhood

size, outliers and the presence of multiple surfaces can adversely affect attributes

such as surface normal approximation (Dey et al., 2005), local surface fitting

(OuYang and Feng, 2005) and local coordinate systems (Daniels et al., 2007).

As described in Chapter 3, a larger neighbourhood size means an increase in

the likelihood of multiple discrete surfaces being present in the neighbourhood.

The presence of these multiple surfaces will produce a bias in the attributes

being calculated, e.g. surface normal approximation. While reducing the size of

the neighbourhood will reduce this probability, the size must be sufficiently large to reduce the effects of random errors and noise. Mitra et al. (2004) and

Bae et al. (2005) outlined some practical considerations for effectively choosing

the neighbourhood size. While this reduces the effect of noise and random errors,

it still does not eliminate the possibility of the neighbourhood being affected by


both outliers and multiple surfaces.

Existing methods for removing the effects of outliers and multiple surfaces are

presented in Appendix B. These include outlier detection (Danuser and Striker,

1998), random sampling (Fischler and Bolles, 1981), filtering (Clode et al., 2005;

Tang et al., 2007), voting (Page et al., 2002) and optimisation techniques. Most

of the existing techniques rely on a random or systematic re-sampling. In this

section, the aim is to present a method that will iteratively converge to the correct solution for a neighbourhood. This will be achieved by adjusting the weights of

points within a neighbourhood depending on the likelihood of that point being

sampled from the dominant surface entity sampled within the neighbourhood.

The likelihood will be determined by examining the principal components of

each neighbourhood at each iteration and using statistical significance testing

to both determine and adjust weights for each point. This will be done until

the method converges to a stable solution (i.e. the weights remain constant).

It will be demonstrated that the solution has a uniform weighting for those

points that are determined to belong to a dominant surface structure within the

neighbourhood, and zero for those that do not.

The first stage of the proposed method consists of outlining how to determine

the relationship between two points within a neighbourhood and whether they

belong to the same surface entity. This relationship is separated into two models:

an internal relationship between a point and the neighbourhood being corrected,

and an external relationship between the point of interest and the neighbourhood

of another point.

4.1.1 Internal and External Relationship Between Points

An internal relationship is defined as the relation of a point xi to the neighbour-

hood N0 surrounding a point of interest x0, where xi ∈ N0. On the other hand,

an external relationship is defined as the relation of a point of interest x0 to the

neighbourhood Ni surrounding xi, where xi ∈ N0. Illustration of the internal and


external relationship is given in Figure 4.1 and Figure 4.2, respectively. Basically,

an internal relationship of xi to x0 will be equivalent to an external relationship

of x0 to xi.

Figure 4.1: Example of an internal relationship. The threshold is defined in red by Eq. 4.3, with all points inside considered to have an internal relationship. (a) shows the case for an intersection and (b) for a slightly curving, noisy surface.

Figure 4.2: Example of an external relationship. The threshold is defined in red by Eq. 4.4, with all points inside considered to have an external relationship. (a) shows the case for an intersection and (b) for a slightly curving, noisy surface.

The main concept of this method is that a geometric attribute of a point x0, which

is related to its underlying surface, is dependent on its internal and external

relationship with other surrounding points. In other words, the surrounding

neighbourhood N0 for the point x0 should reflect not only the attributes within

the neighbourhood, but also the attributes for the neighbourhood around xi if

they are to be considered to belong to the same surface entity.


To determine these relationships, two definitions of distance are introduced. Let

c0 and n0 denote the centroid and normal approximation for neighbourhood N0

around point x0, respectively. The equation for the distance for the internal relationship is defined as:

dist_{ir} = | (c_0 - x_i) \cdot n_0 |    (4.1)

where c0 is the mean value of the neighbourhood of point coordinates and n0

is specified through the use of PCA. In a similar manner, let ci and ni denote

the centroid and normal approximation for neighbourhood Ni around point xi,

respectively. The equation for the distance for the external relationship is defined

as:

dist_{er} = | (c_i - x_0) \cdot n_i |    (4.2)

Again, ci is calculated from the mean of the neighbourhood and ni is specified

through using PCA.

In order to determine whether an internal or external relationship exists, the

Boolean operators can be defined, respectively, as:

ir = \begin{cases} \text{true}, & \text{if } dist_{ir} \le t_{\nu,\alpha/2}\, s_0 \\ \text{false}, & \text{if } dist_{ir} > t_{\nu,\alpha/2}\, s_0 \end{cases}    (4.3)

for the internal relationship and:

er = \begin{cases} \text{true}, & \text{if } dist_{er} \le t_{\nu,\alpha/2}\, s_i \\ \text{false}, & \text{if } dist_{er} > t_{\nu,\alpha/2}\, s_i \end{cases}    (4.4)

for the external relationship. These Boolean operators come from a statistical significance test to determine whether the relationship is likely to exist. s0 and si are the errors in the approximate normal direction for neighbourhoods N0 and Ni, respectively. These values can be set as \sqrt{\lambda_0} from the results of the PCA of the respective neighbourhoods. In addition, t_{\nu,\alpha/2} is the t-test statistic with ν degrees of freedom and a significance level of α.
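The two relationship tests can be written compactly as in the sketch below. This is a minimal example assuming each neighbourhood has already been summarised by its centroid, unit normal and s = √λ₀, with the Student-t quantile taken from SciPy; the choice of degrees of freedom ν is left as a parameter.

```python
# Minimal sketch of the internal/external relationship tests (Eqs. 4.1-4.4),
# assuming each neighbourhood is summarised by (centroid, unit normal, s),
# where s is sqrt(lambda_0) from the PCA of that neighbourhood.
import numpy as np
from scipy.stats import t as student_t

def point_to_plane_distance(centroid, normal, point):
    """Absolute distance from a point to the plane through the centroid."""
    return abs(np.dot(centroid - point, normal))

def relationships(x0, c0, n0, s0, xi, ci, ni, si, nu, alpha=0.05):
    """Return the (ir, er) Boolean flags for points x0 and xi."""
    t_crit = student_t.ppf(1.0 - alpha / 2.0, nu)   # two-tailed critical value
    dist_ir = point_to_plane_distance(c0, n0, xi)   # Eq. 4.1
    dist_er = point_to_plane_distance(ci, ni, x0)   # Eq. 4.2
    ir = dist_ir <= t_crit * s0                     # Eq. 4.3
    er = dist_er <= t_crit * si                     # Eq. 4.4
    return ir, er
```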


4.1.2 Iteratively Updating the Neighbourhood Point Weights

To obtain the initial approximation for the variance and normal directions, the

PCA is performed as previously stated, but the proposed method in this thesis

will use a slightly modified version of the covariance matrix formula such that:

\Sigma = \sum_{i=1}^{k} p_i (x_i - \bar{x})(x_i - \bar{x})^T    (4.5)

where pi is the weight of point xi in the neighbourhood. Initially, the points will

all be equally weighted as pi = 1/k, with k being the number of points in the

neighbourhood. In a similar manner, the formula for the centroid value will be

modified to:

\bar{x} = \sum_{i=1}^{k} p_i x_i    (4.6)

If a point xi is not related either internally or externally to the point of interest x0, it is likely that they are not sampled from the same surface. This then means that its weighting is decreased. Conversely, if a point xi has both an internal and an external relationship with point x0, then it is likely that they belong to the same surface,

resulting in its weighting being increased. If there is only one relationship, then it

is likely that at least one of the neighbourhoods (N0 or Ni) is affected by multiple

surfaces, and as such the weights are left unchanged until the neighbourhoods

become more refined. From the internal and external relationships, the rules for

updating the weights can be specified as:

p'_i = \begin{cases} p_i + \delta, & \text{if } (ir = \text{true}) \text{ and } (er = \text{true}) \\ p_i - \delta, & \text{if } (ir = \text{false}) \text{ and } (er = \text{false}) \\ p_i, & \text{otherwise} \end{cases}    (4.7)

where p′i are the adjusted weights and δ is a small change. If p′i assumes a negative

weighting, then it is set to be zero in order to ensure all weights are non-negative.

The new weights are then normalised by:

p''_i = \frac{p'_i}{\sum_{j=1}^{k} p'_j}    (4.8)


so that the summation of the weights equals unity. p′′i is then used as the weight

in the recalculation of the covariance matrix for the next iteration.

The value δ represents the change in weight for each point between successive

iterations. As δ tends towards zero, a continuous change in the weight will be

observed. If a large value for δ is chosen, then the process will converge within

fewer iterations than for a small value of δ. The problem with a large value is

that it does not adequately simulate the continuous change in the weight, and

can lead to convergence to an erroneous solution. From experience, δ is chosen to

take on a value that is less than 5% of the possible value of pi. Ideally the value of

δ should be a function of the properties of the internal and external relationship,

but more research is required to deal with the binary nature of the relationships.
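One way of realising a single iteration of Eqs. 4.5-4.8 is sketched below. The ir/er flags for each neighbour are assumed to come from the relationship tests above, and δ is set here to a small fraction of the uniform starting weight, in line with the guidance above; this is an illustrative sketch rather than the thesis implementation.

```python
# Sketch of one weight-update iteration (Eqs. 4.5-4.8). The ir/er flags for
# each neighbour are assumed to come from the relationship tests above.
import numpy as np

def weighted_pca(points, weights):
    """Weighted centroid and covariance of a neighbourhood (Eqs. 4.5, 4.6)."""
    centroid = np.sum(weights[:, None] * points, axis=0)
    diffs = points - centroid
    cov = (weights[:, None, None] * diffs[:, :, None] * diffs[:, None, :]).sum(axis=0)
    eigvals, eigvecs = np.linalg.eigh(cov)     # eigenvalues in ascending order
    return centroid, eigvals, eigvecs

def update_weights(weights, ir_flags, er_flags, delta):
    """Adjust, clip and normalise the neighbourhood weights (Eqs. 4.7, 4.8)."""
    new_w = weights.copy()
    new_w[ir_flags & er_flags] += delta        # both relationships hold
    new_w[~ir_flags & ~er_flags] -= delta      # neither relationship holds
    new_w = np.clip(new_w, 0.0, None)          # weights must stay non-negative
    return new_w / new_w.sum()                 # normalise to sum to one

# Example initialisation for a neighbourhood of k points (assumed values):
# weights = np.full(k, 1.0 / k); delta = 0.05 / k   (under 5% of 1/k)
```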

4.1.3 2D Case Study

Figure 4.3: Normal directions of points on a 2D intersection example. The blue lines represent the initial normal approximation and the red lines represent the correct normal approximations without points from the non-dominant surface included in the neighbourhoods.

In this section, a 2D example of an intersection will be presented in order to test

and demonstrate the proposed method and its effectiveness. A test data set is

presented in Figure 4.3, with every point examined based on a neighbourhood

size of 20 points. As can be seen, the surface normals for the points near the


edges are initially perturbed away from the true normal direction of the surface

that they are sampled from, since the neighbourhood is affected by more than

one discrete surface structure. As the proposed method is applied, the weights

are iteratively updated until they become stable, i.e. successive iterations do not

affect the solution and p''_i ≈ p_i in Eq. 4.8. The results are presented in Figure

4.4.

Figure 4.4: Angle of the normal orientation. The values for the surfaces should be approximately -45 and 45 degrees. The blue lines represent the orientation of the initial normal approximation and the red lines represent the correct values without points from the non-dominant surface included in the neighbourhoods.

If the weights were updated using only the internal relationships, only the points

not significantly affected by the presence of multiple surfaces within the neigh-

bourhood would be corrected. This is because the internal relationship shares

similarities with outlier removal methods, and will behave accordingly. If only

the external relationships are used, while the results are similar to those when

both are used, the process becomes unstable with the possibility of all the points

in the neighbourhoods being removed. Examples of using just the internal or just the external relationship to determine the weighting are shown in Figure 4.5 and Figure 4.6, re-

spectively. In the case of Figure 4.6, the number of iterations was stopped before

instability occurred. If the weights of the points are examined at every iteration,

as shown in Figure 4.7, then they stabilise at either a zero value, or a uniform value for those that are non-zero.

It should be noted that the point closest to the intersection did not get corrected,

and hence its normal direction did not become aligned with one of the surfaces

present within its local neighbourhood. This happened because the surfaces are

equally represented in the neighbourhood and therefore one surface cannot be considered better than the other.

Figure 4.5: Updated normal (red) from the original (blue) using just the internal relationship. The top plots show the normal directions overlaid with the structure and the bottom plot shows the orientation angle.

Figure 4.6: Updated normal (red) from the original (blue) using just the external relationship. The top plots show the normal directions overlaid with the structure and the bottom plot shows the orientation angle.

Figure 4.7: Trend of the weights for points in a neighbourhood affected by the presence of multiple surfaces. In the initial neighbourhood, all points are weighted the same. As iterations occur, the values of the weights either tend to zero or to a non-zero constant. Each line represents the value of the weight for a point in the neighbourhood and how it changes with iterations of the procedure.

The normal can be forced to align with one

of the surfaces by replacing the centroid x̄ with x0. Figure 4.8 and Figure 4.9 display the comparisons between specifying the centroid as x̄ or x0, respectively. The instability this causes in the covariance matrix allows the normal to become aligned to one of the surfaces present; however, it can also have detrimental effects on the normal approximation, providing biased solutions.

4.1.4 Test with a 3D Point Cloud

This section will demonstrate the outlined correction procedure as applied to a 3D

point cloud. The point cloud, presented in Figure 4.10, is scanned from a door

arch with a Leica ScanStation (Leica Geosystems HDS, 2008) with a nominal

point spacing of 0.01m. The correction procedure is applied to the point cloud

with a neighbourhood size of 30 and the threshold for determining internal and

external relationships is set at a significance level of α = 0.05.

In Figure 4.11(a), the initial normal approximations are displayed on a Gaussian

sphere.

Figure 4.8: Updated normal (red) from the original (blue) using both the internal and external relationships, with the centroid of the neighbourhood used in the calculations set as the mean of the neighbourhood, x̄. The top plots show the normal directions overlaid with the structure and the bottom plot shows the orientation angle.

Figure 4.9: Updated normal (red) from the original (blue) using both the internal and external relationships, with the centroid of the neighbourhood used in the calculations set as the point of interest, x0. The top plots show the normal directions overlaid with the structure and the bottom plot shows the orientation angle.

Figure 4.10: Point cloud sampled from a section of a door archway with a Leica ScanStation. Axis units are in metres and colour reflects elevation changes.

The clusters on the sphere represent the presence of surfaces with a

specific normal orientation. The striping effect occurring between the clusters

represents the normal approximations being affected by more than one surface.

Figure 4.11(b) shows the Gaussian sphere of the corrected normal approximations

using the proposed method. As can be seen, the striping effect is significantly

reduced as those points affected by more than one surface are corrected to align

with the dominant surface element in the neighbourhood. The neighbourhood

weights are stabilised within 50 iterations of the procedure.

Figure 4.11: (a) The Gaussian sphere of the uncorrected normal directions. (b) The Gaussian sphere of the corrected normal directions. The colour indicates the density of normal directions, from blue representing zero to red representing in excess of a hundred.

On examining those points near a surface intersection, Figure 4.12(a) shows the

angles for the uncorrected approximations and Figure 4.12(b) presents the cor-

rected approximations. Because of the high number of points on planar surfaces

with the initial solution already providing a stable weighting, it is difficult to

observe the benefits in the histogram. If only the points classed as edge points

were examined, as in Figure 4.13, it is easier to observe the benefits of applying

this method, particularly for non-surface points. This illustrates an increase in

the accuracy of normal alignment after the correction with approximately 90% of

the edge points now being aligned to within 5 degrees of their correct orientation.

Figure 4.12: Histograms of the angles of orientation for the normal directions. (a) The uncorrected normal approximations. (b) The corrected normal approximations. Theta and phi in the histograms are the two angles defining the normal direction.

4.1.5 Summary of Improving Neighbourhood

Many approximated attributes such as normal direction are calculated under the

assumption that the neighbourhood contains only one underlying surface struc-

ture. If the neighbourhood contains more than one surface entity or outliers, then

a bias will be introduced into the attributes, perturbing them from their true val-

ues. To alleviate the problem, the neighbourhood can be examined to determine

if it is influenced by these factors, through a variety of methods. This section has presented an iterative method, based on the relationship of the points to surrounding neighbourhoods, that filters out both outliers and multiple surfaces.

Figure 4.13: Histograms for edge points of the orientation angles for the normal directions. (a) The uncorrected normal approximations. (b) The corrected normal approximations. Theta and phi in the histograms are the two angles defining the normal direction. Peaks in the histograms denote the orientations of the surfaces present in the point cloud.

Correction of the neighbourhood does have computational cost associated with it.

In cases where there is only one surface entity affecting the local neighbourhood, a

simple outlier detection method will be sufficient to remove any erroneous points

quickly and efficiently. For points that are affected by more than one surface, e.g.

the classified edge points, an outlier detection method will not remove the effect of

multiple surface entities, and a more rigorous method is required. Therefore, for

more efficient processing, an outlier removal method should be used on the surface

points, and the presented method should be limited to the points classified as edges.

In this way, the approximated attributes calculated on local neighbourhoods can

be calculated free from these biases, if required.

4.2 Extending Curvature Attributes

The previously defined curvature approximation was adequate for determining

the relative value of curvature of a point compared to the others in the point

cloud for the classification procedure. The shortcoming of this curvature approximation is that several properties usually associated with curvature were not retained. These properties include the maximum and minimum curvature

directions and a unit of measurement related to the radius of curvature.

The curvature directions are important in approximating values for mean and

Gaussian curvature, which can be used to separate corner points from edges.

The principal curvature directions can also be used to determine the alignment

of the edge direction for edge tracking procedures used in developing a line model

representation of the point cloud, or to determine the axis alignment of cylinders

for finding the path of pipes. Finally, without an associated unit of measurement,

it is difficult to equate the approximate curvature value to the true radius of

curvature, which is useful in separating and identifying features, e.g. pipe work.

The principal curvature directions are of primary importance to the segmentation

procedure to be outlined in Chapter 5 since they are used to define the orien-

tation of local cut-planes on edges to limit the region growing procedure. This

section will present a method to calculate the principal curvature directions and

approximate the radius of curvature associated with these metrics. This will be

done by utilising PCA on a neighbourhood of points in conjunction with PCA

on the neighbourhood of normal approximations.

4.2.1 Principal Curvature Directions

There are many uses for principal curvature directions. These include finding

axis of pipes and cylinders (Pottmann et al., 2002), determining edge directions

and tracking edges (Briese, 2006), calculating and orientating a local coordinate

system (Daniels et al., 2007), and solving surface flow problems (Schafhitzel et al.,

2007). For this thesis, the primary consideration lies in determining the local

orientation of edges to develop local cut-planes between surface intersections.

These will be utilised in the segmentation procedure to be outlined in Chapter 5

in order to limit the region-growing on surface segments and to create a robust

segmentation procedure.


First, PCA on the neighbourhood of points that are sampled from the surface

of a cylinder, shown in Figure 4.14(a), will be examined. From the eigenvalue

decomposition, the eigenvector associated with the largest eigenvalue (e2) will be

aligned to the direction of minimum curvature. The reason for this is that the

change in spatial sampling density across the neighbourhood, displayed in Figure

4.14(b), causes a difference in the variation of the neighbourhood in this direction.

However, this direction is caused by the variation in the sampling density and it

is only in this instance that the surface structure causes the variation to align

with the principal directions of curvature. In practice, due to the inconsistent

sampling density and noise in TLS point clouds, this method is not effective and

a higher order method must be applied.

Figure 4.14: (a) Principal component directions for a neighbourhood containing points sampled from a cylinder. (b) The neighbourhood projected onto the two largest principal component directions, with the ellipse representing the 90% confidence interval.

To do this, the PCA of the normal approximations is examined instead of the PCA of the coordinate values. The method presented here is a variation of the method outlined by Jiang et al. (2005). It also shares similarities with the second-order tensor voting framework (Tang and Medioni, 2002) and the use of

Gaussian spheres (Varady et al., 1998). By examining the normal directions

of the neighbourhood in Figure 4.14(a), as depicted in Figure 4.15(a), it can

be clearly seen that the variation in the normal directions corresponds to the

principal curvature directions. If the normals are projected onto the tangential

surface for the point of interest, shown in Figure 4.15(b), then the direction of

the largest variation is the direction of maximum curvature, and the direction

of the smallest variation is the direction of minimum curvature. This comes


from the fact that curvature is defined by the change of the tangential surface,

and the normal directions represent the tangential orientation throughout the

neighbourhood. Therefore the variation in the normal should reflect the changes

in both surfaces and curvature. It is also possible to transform the points into

a local coordinate system to examine this information; however, the formulation

of the problem will be the same and requires the solution for the directions of

curvature to be transformed back into the global coordinate system.

Figure 4.15: (a) Normal directions and their negated values for the neighbourhood given in Figure 4.14. (b) The normal values projected onto the local tangential plane.

To find the directions of maximum and minimum curvature, PCA can again be

utilised, but on the normal directions instead of point coordinates. The first step,

before the PCA is performed, is to ensure that the normal directions are aligned

to a common orientation. One method is to check the angle between the normal direction n0 of the point of interest p0 and the normal direction nj of a point pj in the neighbourhood around p0. The angle θj can

be simply calculated using the dot product as:

\theta_j = \cos^{-1}(n_0 \cdot n_j)    (4.9)

If θj is greater than 90°, then the normal direction nj can be inverted. Another

method is to examine the negated normal directions in conjunction with the

normal directions. Since the negated normal directions should also have the same

distribution as the normal directions, the combined properties of both follow the distribution of just the normal or negated normal directions.


From the definition of curvature, the direction of maximum and minimum cur-

vature should occur on the tangential plane to the surface. This means that

these directions must be orthogonal to the normal direction of the point of inter-

est. In order to constrain the solution, the normal directions are projected onto

the tangential plane, and these values are examined. The projection is performed

using the previous eigenvalue decomposition on the neighbourhood of point co-

ordinates, which is defined as:

n^{(p)}_j = n_j - (n_j \cdot n_0)\, n_0    (4.10)

with n^{(p)}_j representing the normal direction projected onto the tangential plane defined by n0. The covariance matrix of the normal directions can then be specified as:

\Sigma^{(n)} = \frac{1}{k} \sum_{j=1}^{k} n^{(p)}_j \, (n^{(p)}_j)^T    (4.11)

If the negated normal values are used in conjunction with the normal values, the

covariance matrix is specified as:

\Sigma^{(n)} = \frac{1}{2k} \sum_{j=1}^{k} \left[ n^{(p)}_j (n^{(p)}_j)^T + (-n^{(p)}_j)(-n^{(p)}_j)^T \right]    (4.12)

The eigenvalue decomposition of the covariance matrix \Sigma^{(n)} is performed in order to obtain the principal components such that:

\Sigma^{(n)} = \sum_{i=0}^{2} \lambda^{(n)}_i e^{(n)}_i e^{(n)T}_i = \begin{pmatrix} e^{(n)}_0 & e^{(n)}_1 & e^{(n)}_2 \end{pmatrix} \begin{pmatrix} \lambda^{(n)}_0 & 0 & 0 \\ 0 & \lambda^{(n)}_1 & 0 \\ 0 & 0 & \lambda^{(n)}_2 \end{pmatrix} \begin{pmatrix} e^{(n)T}_0 \\ e^{(n)T}_1 \\ e^{(n)T}_2 \end{pmatrix}    (4.13)

In this case, since the covariance matrix in either Eq. 4.11 or Eq. 4.12 was

calculated on the projected normals, e^{(n)}_0 will be in the same direction as e_0, and \lambda^{(n)}_0 will be zero since there is no variation in the normal direction. In addition, e^{(n)}_1 will be the direction of minimum curvature and e^{(n)}_2 will be the direction of maximum curvature, since they represent the directions with the minimum and maximum variation in the normal direction, respectively. Furthermore, e^{(n)}_0, e^{(n)}_1 and e^{(n)}_2 will form an orthogonal basis. This basis will reflect the local surface change, with e^{(n)}_0, e^{(n)}_1 and e^{(n)}_2 encoding the surface normal, the minimum curvature direction and the maximum curvature direction, respectively.
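The construction of the projected-normal covariance and its decomposition can be sketched as below. The neighbour normals are assumed to be pre-computed unit vectors; because outer products are insensitive to the sign of each normal, the sketch behaves equivalently whether or not the negated normals of Eq. 4.12 are added explicitly.

```python
# Sketch of the principal curvature direction estimate (Eqs. 4.10-4.13):
# project the neighbour normals onto the tangent plane of the point of
# interest and take the PCA of the projected normals.
import numpy as np

def principal_curvature_directions(n0, neighbour_normals):
    n0 = n0 / np.linalg.norm(n0)
    # Eq. 4.10: remove the component along n0 from each neighbour normal.
    projected = neighbour_normals - np.outer(neighbour_normals @ n0, n0)
    # Eq. 4.11: covariance of the projected normals.
    cov = projected.T @ projected / len(neighbour_normals)
    eigvals, eigvecs = np.linalg.eigh(cov)        # ascending eigenvalues
    lam_n1, lam_n2 = eigvals[1], eigvals[2]       # lambda^(n)_1, lambda^(n)_2
    d_min, d_max = eigvecs[:, 1], eigvecs[:, 2]   # e^(n)_1, e^(n)_2
    return lam_n1, lam_n2, d_min, d_max
```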

In addition to these directions, \lambda^{(n)}_1 and \lambda^{(n)}_2 provide a measure of the curvature in each of the directions and can be used to approximate the mean (H) and Gaussian (K) curvature, as suggested in Jiang et al. (2005), by the following:

H = \frac{\sqrt{\lambda^{(n)}_1} + \sqrt{\lambda^{(n)}_2}}{2}    (4.14)

K = \sqrt{\lambda^{(n)}_1}\,\sqrt{\lambda^{(n)}_2}    (4.15)

with the attributes associated with these approximations shown in Table 4.1 (Visintini et al., 2006). \lambda^{(n)}_1 and \lambda^{(n)}_2 can also approximate the two largest tensors in second-order tensor voting (Tang and Medioni, 2002). These approximate tensors are given in Table 4.2.

Table 4.1: Surface attributes associated with mean and Gaussian curvature.

Gaussian curvature | Mean curvature | Property
K = 0              | H = 0          | Flat surface (change in no direction)
K = 0              | H > 0          | Cylinder or edge (change in only one direction)
K > 0              | H > 0          | Sphere or corner (change in both directions)

Note that these values \lambda^{(n)}_1 and \lambda^{(n)}_2, while having a directional property, do not

have a specific unit of measurement. In addition, as was outlined in Chapter 3,

the eigenvalues will be dependent on the size of the neighbourhood. It is possible

to remove this dependency by combining the results of the PCA on the coordinate

values with the results of the PCA on the normal directions.

Table 4.2: Surface attributes and tensors associated with \lambda^{(n)}_1 and \lambda^{(n)}_2.

\lambda^{(n)}_1     | \lambda^{(n)}_2     | Property
\lambda^{(n)}_1 ≈ 0 | \lambda^{(n)}_2 ≈ 0 | Stick tensor, or a planar surface (surface not changing in any direction)
\lambda^{(n)}_1 ≈ 0 | \lambda^{(n)}_2 ≫ 0 | Plate tensor, or an edge or cylinder (surface changing in only one direction)
\lambda^{(n)}_1 ≫ 0 | \lambda^{(n)}_2 ≫ 0 | Ball tensor, or a corner or sphere (surface changing in both directions)
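As a small illustration of the labelling implied by Table 4.2, the two eigenvalues can be thresholded to give a coarse shape class. The tolerance deciding when an eigenvalue is "approximately zero" is an assumed parameter and would need tuning against the data.

```python
# Sketch of the coarse shape labelling implied by Table 4.2. The tolerance
# eps is an assumed parameter; lam_n1 <= lam_n2 is expected from the PCA.
def shape_from_eigenvalues(lam_n1, lam_n2, eps=1e-3):
    if lam_n2 <= eps:                      # both eigenvalues approximately zero
        return "planar surface (stick tensor)"
    if lam_n1 <= eps:                      # change in one direction only
        return "edge or cylinder (plate tensor)"
    return "corner or sphere (ball tensor)"
```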

4.2.2 Radius of Curvature Approximation

In this section, PCA on both the neighbourhood of coordinate values and normal

directions will be combined to not only remove the dependency on the neigh-

bourhood size, but also to approximate the radius of curvature measurement in

each direction. The curvature approximation utilised in the previous chapter was

essentially unitless.

It allows for the comparison of the curvature values between different points, but

the lack of a unit of measurement makes comparisons between the curvature of

the point and the structure it was sampled from difficult, if not impossible. By

approximating the radius of curvature, a direct comparison can be made between

the curvature at a point and the structure it is sampled from, e.g. the curvature at

a point can be compared to the radius of the cylinder from which it was sampled.

The proposed method of approximating the radius of curvature in this section

will be derived under the assumption that the neighbourhood contains a single,

curved surface structure.

First, the properties of the PCA on the neighbourhood of point coordinates will

be re-examined, with the focus on how λ1 and λ2 are related to the span of the

neighbourhood. In the conic section displayed in Figure 4.16(a), a 2D neighbour-

hood of point coordinates is presented. If the distance d and the angle θ are

known, then it becomes a simple matter of determining the radius of curvature

by the following equation:

\sin\theta = \frac{d}{r}    (4.16)

Since the value for d is currently unknown, it can be statistically approximated

using the confidence interval as:

d = s\sqrt{\lambda_1}    (4.17)

where λ1 is the variance in this direction calculated from PCA of the neighbour-

hood, and s is the scale factor or number of standard deviations on either side of

the mean that covers the span of the points. The value of s is set from the normal

distribution so that a distance of d on either side of the centroid will contain a

certain percentage the neighbourhood of points. This leaves the only unknown

in the formula as θ.

Examining the conic section of the 2D neighbourhood of normal directions (Figure

4.16(b)), it can be seen that it shares similarities with the conic section for the

neighbourhood of point coordinates. In this case, θ can be approximated as:

\sin\theta = d^{(n)}    (4.18)

with d^{(n)} again being statistically approximated as:

d^{(n)} = t\sqrt{\lambda^{(n)}_1}    (4.19)

where \lambda^{(n)}_1 is the variance in this direction calculated from the PCA of the normal directions, and t is the number of standard deviations that covers the span of the normal variation. While \sqrt{\lambda^{(n)}_1} \le 1, t must be set so that the condition d^{(n)} \le 1 is satisfied. In most instances, this condition is easily satisfied.

If the values for θ are the same for the neighbourhood of point coordinates and

normal directions, then the information from both can be combined to remove the

effect of the neighbourhood size. This will produce a curvature approximation for

the principal directions of curvature that is not dependent on the neighbourhood

size.

Figure 4.16: (a) Conic section for the neighbourhood of point coordinates. (b) Conic section for the neighbourhood of point normal directions.

If the conic sections were overlaid, as in Figure 4.17, under the assumption

that they contain a single curved surface, then the values of θ will be the same.

This is because it can be seen that the neighbourhood for the normal directions is

a scaled down version of the neighbourhood of point coordinates by a factor of r.

The reason is that the scaling of the coordinate values would occur along the

normal direction for each point. Because of this, the results of the PCA on the

normal and point coordinates should have the same type of distribution (since

one is just a scaled version of the other), and the eigenvalues for both should be

equally affected by the neighbourhood size. This gives rise to the possibility of a

curvature approximation that has a directional component, based on the PCA of

the normal directions, and will have the effect of the size of the neighbourhood

cancelled out. One such possibility is that the approximate curvature κ is specified

as:

\kappa \approx \sqrt{\frac{\lambda^{(n)}_1}{\lambda_1}}    (4.20)

The information for the PCA on the neighbourhood of point normals can be used

to solve for the value of θ.

This means that Eq. 4.16 and Eq. 4.18 can be combined to produce:

\frac{d}{r} = d^{(n)}    (4.21)

Figure 4.17: Neighbourhood of point normals overlaid on the neighbourhood of point coordinates. This illustrates the scaling that occurs along the normal direction of each point, between the neighbourhood of point coordinates and the neighbourhood of normal directions.

which can be rearranged into the form:

r = \frac{s\sqrt{\lambda_1}}{t\sqrt{\lambda^{(n)}_1}}    (4.22)

to provide an approximation of the radius of curvature. As previously mentioned,

the neighbourhood of point coordinates will have the same type of distribution as

the neighbourhood of normal directions since one is a scaled version of the other.

Thus s and t will take the same value and will cancel each other, leaving

the approximate radius of curvature r as follows:

r = \sqrt{\frac{\lambda_1}{\lambda^{(n)}_1}}    (4.23)

Since the curvature is defined as:

\kappa = \frac{1}{r}    (4.24)

the results of Eq. 4.23 can be substituted into the curvature definition to obtain

the final approximation of curvature in the direction of e_1 as:

\kappa = \sqrt{\frac{\lambda^{(n)}_1}{\lambda_1}}    (4.25)

which was the approximation given previously in Eq. 4.20. This allows for the

approximation of the radius of curvature and the curvature value by combining

the results of PCA of both the normal directions and point coordinates for a local

neighbourhood.

The approximation in Eq. 4.25 assumes that the direction of e_1 is aligned with e^{(n)}_1. For the 2D case, e_0 and e_1 are aligned with e^{(n)}_0 and e^{(n)}_1. This does not carry over into the 3D case, as only e_0 and e^{(n)}_0 will be aligned. Therefore, a method is required to calculate the values for \lambda'_1 and \lambda'_2, such that they correspond to the same directions as the values for \lambda^{(n)}_1 and \lambda^{(n)}_2, respectively. A simple method to ensure alignment will be based on the error ellipse defined by:

1 = \frac{((x - c) \cdot e_1)^2}{\lambda_1} + \frac{((x - c) \cdot e_2)^2}{\lambda_2}    (4.26)

The ellipse will define the standard deviation from the PCA on a neighbourhood

of point coordinates in any direction around the centroid. This can then be used

to calculate the variances \lambda'_1 and \lambda'_2 in the directions defined by e^{(n)}_1 and e^{(n)}_2. This is performed as follows:

\frac{1}{\lambda'_1} = \frac{(e^{(n)}_1 \cdot e_1)^2}{\lambda_1} + \frac{(e^{(n)}_1 \cdot e_2)^2}{\lambda_2}    (4.27)

\frac{1}{\lambda'_2} = \frac{(e^{(n)}_2 \cdot e_1)^2}{\lambda_1} + \frac{(e^{(n)}_2 \cdot e_2)^2}{\lambda_2}    (4.28)

Now \lambda'_1 and \lambda'_2 will correspond to the same directions as \lambda^{(n)}_1 and \lambda^{(n)}_2, and can be used to calculate the curvature approximation, regardless of any misalignment

between the eigenvalue decomposition of the PCA of point coordinates and the

PCA of point normal directions. This reduces the final curvature approximation

for both the maximum and minimum directions of curvature to the form of:

\kappa_{min} = \sqrt{\frac{\lambda^{(n)}_1}{\lambda'_1}}    (4.29)

\kappa_{max} = \sqrt{\frac{\lambda^{(n)}_2}{\lambda'_2}}    (4.30)

where the maximum and minimum directions of curvature are defined as:

d_{min} = e^{(n)}_1    (4.31)

d_{max} = e^{(n)}_2    (4.32)
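Combining the two PCAs as in Eqs. 4.27-4.32 can be sketched as below. The eigen-pairs are assumed to come from the PCA of the point coordinates and the PCA of the projected normals of the same neighbourhood, both sorted in ascending order of eigenvalue; the sketch is illustrative only.

```python
# Sketch of the directional curvature approximation (Eqs. 4.27-4.32),
# combining the PCA of point coordinates (lam, e) with the PCA of the
# projected normals (lam_n, e_n). Eigenvalues are assumed ascending and
# eigenvectors stored as matrix columns.
import numpy as np

def directional_curvature(lam, e, lam_n, e_n):
    def ellipse_variance(direction):
        # Variance of the coordinate neighbourhood along the given direction,
        # from the error ellipse of Eq. 4.26 (Eqs. 4.27 and 4.28).
        inv = (np.dot(direction, e[:, 1]) ** 2) / lam[1] \
            + (np.dot(direction, e[:, 2]) ** 2) / lam[2]
        return 1.0 / inv

    lam1_p = ellipse_variance(e_n[:, 1])
    lam2_p = ellipse_variance(e_n[:, 2])
    kappa_min = np.sqrt(lam_n[1] / lam1_p)   # Eq. 4.29
    kappa_max = np.sqrt(lam_n[2] / lam2_p)   # Eq. 4.30
    # The radius of curvature in each direction is r = 1/kappa (Eq. 4.24),
    # which is only meaningful where kappa is non-zero.
    return kappa_min, kappa_max, e_n[:, 1], e_n[:, 2]
```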

4.2.3 Test with a 3D Point Cloud

This section will present the approximate radius of curvature measure applied

to a small practical test. The point cloud used is shown in Figure 4.18 and was

captured with a Leica 4500 (Leica Geosystems HDS, 2008).

The results for the radius of curvature are shown in Table 4.3, where r is the radius of curvature obtained through the pipe fitting routine in Cyclone (Leica Geosystems HDS, 2008), r̄ is the mean value of the approximated radius of curvature for the pipe section, and σ_r is the standard deviation of the approximated

values. The noise in a practical data set can cause errors and variations in the

results when compared to simulated data sets. In this case, the error between the

actual radius and the mean of the approximate radius is within 0.035 m. All error

values will fall within one standard deviation of the approximate results except

for Pipe 1, and all are within a 99% confidence interval. The errors and standard

deviations of the radius approximation may indicate insufficient accuracy to be

used to definitively define the radius of a pipe over the results produced by a pipe

fitting method utilising every point. However, it does provide a good approximate

value of the radius of curvature that can be used for classification or detecting

the presence of multiple pipe sections of varying radii in a point cloud, which is

the intention of this metric. In addition, it does not require multiple iterations,

as required by higher order surface fitting, allowing for faster computations.

Figure 4.18: Point cloud sampled from an industrial scene containing multiple pipe sections, consisting of 6696 points with an average spacing of 0.020 m. The colours are based on a simple threshold on the radius of curvature values to delineate different pipes.

The results may be improved by applying a smoothing filter beforehand.

Table 4.3: Results for the approximated radius of curvature.

Pipe Label |    r    |    r̄    |  r − r̄   |   σ_r
    1      | 0.215 m | 0.240 m | −0.025 m | 0.0147 m
    2      | 0.310 m | 0.334 m | −0.024 m | 0.0298 m
    3      | 0.588 m | 0.600 m | −0.012 m | 0.0204 m
    4      | 0.820 m | 0.823 m | −0.003 m | 0.0751 m
    5      | 0.630 m | 0.646 m | −0.016 m | 0.0837 m

4.3 Summary

The previously defined curvature approximation in Chapter 3 may suffer from

some problems related to the fact that it was both unitless and directionless.

Through the simple examination of the approximated normals and the geometry

of the neighbourhood, the approximate curvature directions and the radius of

curvature in these directions were formalised in this chapter. These allow the use of a metric that has a direction and a unit that is the same as that of the point

coordinates of the point cloud, regaining much of the information that was lost

with the original curvature approximation. The next chapter will utilise this

additional information and the classification procedure in Chapter 3 to propose

a segmentation procedure.


Chapter 5

Segmentation

The previous chapters were aimed at providing background on TLS point clouds

and procedures, as well as presenting the necessary attributes and results to be

utilised in the segmentation process. This information included the classification

results from Chapter 3, and the correction to the neighbourhood and additional

curvature information presented in Chapter 4. These provide sufficient informa-

tion in order to segment the point cloud into salient surface features and segments.

In this chapter, a procedure for segmentation named Cut-Plane Region Growing

(CPRG) will be proposed. This will include a brief overview of the basic seg-

mentation procedure, the goal of the segmentation process, and how the basic

segmentation procedure was utilised and modified to achieve these goals. Causes

of under- and over-segmentation with this procedure will be identified and meth-

ods to help overcome and alleviate these limitations will be given.


5.1 Basics of Region Growing

The objective of classification procedures is to group either similar discrete entities

or data points into common classes. For segmentation, the goal is to identify and

isolate the discrete entities from one another. For example, many of the points

present in the point cloud can be categorised as being sampled from a surface

feature. In most cases, these points, while belonging to the same class, will be

sampled from different discrete and disjoint surface entities. Therefore, the goal

of the segmentation procedure to be presented in this chapter is to take all the

classified surface points, and segment them into the separate surface entities that

they were sampled from.

There are two main methods to achieve the surface segmentation: region grow-

ing and clustering, which were briefly described in Chapter 2. The proposed

segmentation method of CPRG in this Chapter is based on the region growing

methodology (Wani and Batchelor, 1994; Rabbani et al., 2006). Region growing

is performed by initially examining a seed point. A seed point is a point known to

belong to a segment. Often it is located nominally at the centre of the segment,

although this is not a prerequisite, and its attributes are usually representative

of those exhibited by other points that belong to the segment. The segment is

then grown from the seed point by interrogating the surrounding neighbouring

points, starting from the closest point. Surrounding points are either added to

the segment or rejected, based on the attributes and properties displayed by the

points. Region growing continues until all points have been either included or

excluded. In this case, a new seed point can be selected and the process repeated

until all points are exhausted.

This decision on whether a point is included or excluded from a segment can be

made by examining the attributes and properties exhibited by the points and

whether they reflect those exhibited by the segment, such as the attributes of a

geometric primitive (Marshall et al., 2001) and variable order surfaces (Besl and

Jain, 1988b) or the normal difference from the reference normal at the seed (von

Hansen et al., 2006). In addition, the process can be performed by examining the

attributes of the segment at a local level instead, in order to observe whether the

change in properties is either small or consistent, as the distance from the seed

point increases, such as in the difference between surface normals (Rabbani et al.,

2006) and curvature (Visintini et al., 2006). A combination of these methods can also be used.

5.2 Cut-Plane Region Growing (CPRG) Segmen-

tation Procedure

The goal of this thesis is to isolate and segment the surfaces of which the point

clouds are composed. A surface segment has been defined as being continuous and

differentiable throughout the region of points that belong to the surface segment,

as described in Chapter 3. This means that two points within a segment must

be reachable from each other by traversing a common surface entity without

encountering a discontinuity. Because of this property, region growing is the

ideal basis for the CPRG segmentation method in this thesis. As the method

grows a surface from a seed point, if the correct attributes are utilised it will only

include common points into a region.

In the CPRG segmentation method, three principal conditions for the proposed segmentation procedure are established, where a point cloud (P) comprised of n points (pi) can be segmented into m regions (Rj):

1. P = R_1 ∪ R_2 ∪ … ∪ R_m

2. If p_i ∈ R_j, then p_i ∉ R_k for 1 ≤ (k ≠ j) ≤ m.

3. If p_i ∈ R_k and p_j ∈ R_k, then there must be a relationship between p_i and p_j such that C(p_i, p_j) = true, where C(p_i, p_j) is the condition(s) necessary for p_i and p_j to be considered to belong to the same surface.


In practice, these conditions are often relaxed to a certain degree. For the CPRG

segmentation method outlined in this thesis, the first condition will not be strictly

adhered to. The first condition specifies that every point should belong to one

of the regions. This is not always possible because of the presence of outliers

and unresolvable features. Outliers cannot be considered to belong to a surface region since they are caused by physical and environmental effects (e.g. dust on

the mirrors, flaring on surface edges, intensity saturation, scene contamination

(Sotoodeh, 2006)) and therefore are not sampled from a true surface. In addition

to these outliers, there will be points that are sampled from surfaces that cannot

be identified or segmented. These surfaces are the ones that were either insuffi-

ciently sampled or lacking in resolution to identify the composite surface entities.

For fuzzy segmentation methods (Biosca and Lerma, 2008), the second condition

is relaxed. Since TLS utilises a surface sampling method, only one surface can

be sampled at a time, hence a sampled point will only be allowed to belong to

one surface. For the CPRG segmentation method, this condition is proposed to

be incorporated into the definition of C(pi, pj). As such, the third condition will

be the one of primary importance for the CPRG segmentation procedure in this

thesis.

Based on the description of a surface segment, C(pi, pj) = true if pj is reachable

from pi without crossing a surface discontinuity, otherwise C(pi, pj) = false. In

Chapter 3, points classified as belonging to either the edge or boundary class

were identified as representing a surface discontinuity. Therefore, if the region

growing procedure was performed from pi to pj such that an edge or boundary

point was not encountered, then these two points are reachable from each other

and C(pi, pj) = true. For point pj to be reachable from pi, it can be said that there is a valid walk between the two points. In the CPRG segmentation method, a walk W(pi, pj) is defined as:

W(p_i, p_j) = L(p_i, p_k) \cup \ldots \cup L(p_k, p_l) \cup \ldots \cup L(p_l, p_j)    (5.1)

where L(pk, pl) is a leg of the walk. A valid leg L(pk, pl) means that point pl is

within the neighbourhood of point pk, both points are classified as surface points, and no discontinuity exists between pk and pl.

Figure 5.1: Depiction of how the region growing process can traverse a surface. The valid legs between points are denoted by black solid lines, with invalid legs denoted by red dashed lines. Classified surface and edge points are represented by empty circles and striped circles, respectively.

A simple way to test if there is a discontinuity is to see if there is a non-surface point closer to pk than pl. If this is the case, then L(pk, pl) will probably cross

a discontinuity. Otherwise, it can then be assumed that the leg does not cross a

discontinuity and therefore the leg is valid. An example is shown in Figure 5.1

where the valid legs between points are shown by black solid lines and invalid

legs are represented by red dashed lines. Points connected together by traversing only valid legs will be on the same surface segment.
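A sketch of this proximity-based leg test is given below. It assumes the point classes from Chapter 3 and the coordinates of the neighbours of pk are available, and it deliberately implements only the simple rule described above.

```python
# Sketch of the simple leg validity test described above: a leg between two
# surface points is rejected if a non-surface (edge or boundary) point lies
# closer to p_k than p_l does. Classes and coordinates are assumed given.
import numpy as np

def leg_is_valid(p_k, p_l, neighbour_points, neighbour_classes):
    """neighbour_points: (m, 3) array of the neighbours of p_k."""
    max_dist = np.linalg.norm(p_l - p_k)
    for q, cls in zip(neighbour_points, neighbour_classes):
        if cls != "Surface" and np.linalg.norm(q - p_k) < max_dist:
            return False          # the leg probably crosses a discontinuity
    return True
```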

The CPRG segmentation procedure will depend on how well the classification

was performed. It may be possible to obtain a situation as depicted in Figure 5.2

where a leg can cross a discontinuity due to either a gap in the point sampling,

misclassification or some other erroneous effects. To improve this, it is proposed

that the method is modified so that for every point defined as a discontinuity,

a cut-plane is created. The cut-plane is defined for an arbitrary point x on the

plane as:

(x - p_i)^T (n_i \times dir_{min,i}) = 0    (5.2)

where ni is the surface normal of an edge point pi, dir_{min,i} is the direction of minimum curvature at the edge point, and a × b is the cross product between two vectors.

Figure 5.2: Depiction of how the region growing process can traverse a surface with a misclassification of edge points as surface points. The valid legs between points are denoted by black solid lines, with invalid legs denoted by red dashed lines. Classified surface and edge points are represented by empty circles and striped circles, respectively, with the misclassified points denoted by a cross.

This means that if there is a non-surface point pi within the neighbour-

hood of p0, then for a leg L(pk, pl) between two surface points (pk and pl) within

the neighbourhood of p0 to be considered valid, pk and pl must be on the same

side of the cut-plane. This condition can be tested with:

d_k = (p_k - p_i)^T (n_i \times dir_{min,i})    (5.3)

d_l = (p_l - p_i)^T (n_i \times dir_{min,i})    (5.4)

If d_k d_l > 0, then the two points are on the same side of the cut-plane. Otherwise, they are on opposite sides of the cut-plane. A value of d_k d_l = 0 is only possible if at least one point lies on the cut-plane. The addition of the concept of cut-planes means that the effect of misclassification is reduced, as illustrated in Figure 5.3.
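The cut-plane test of Eqs. 5.2-5.4 can be sketched as follows; the edge point's normal and minimum curvature direction are assumed to come from the attributes derived in Chapter 4.

```python
# Sketch of the cut-plane side test (Eqs. 5.2-5.4). The cut-plane passes
# through the edge point p_i and contains both its surface normal n_i and its
# minimum curvature (edge) direction, so the plane normal is their cross product.
import numpy as np

def same_side_of_cut_plane(p_k, p_l, p_i, n_i, dir_min_i):
    plane_normal = np.cross(n_i, dir_min_i)
    d_k = np.dot(p_k - p_i, plane_normal)    # Eq. 5.3
    d_l = np.dot(p_l - p_i, plane_normal)    # Eq. 5.4
    return d_k * d_l > 0                     # same side if strictly positive
```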

Without cut-planes, the gaps caused in the classified discontinuities allow the

region growing procedure to cross from one surface to another. By using cut-

planes, nearby edge points will limit the region growing process by not allowing

the region to pass the locally defined cut-plane. Hence a small gap caused by

misclassification will not affect the region growing process. A cut-plane based on


the edge point pi will restrict the region growing process for all surface points

whose neighbourhood includes point pi.

Figure 5.3: Depiction of how the region growing process can traverse a surface with a misclassification of edge points as surface points. The valid legs between points are denoted by black solid lines, with invalid legs denoted by red dashed lines. Classified surface and edge points are represented by empty circles and striped circles, respectively. The cut-planes for edge points are shown as red dotted lines through the edge points.

From these definitions, the CPRG segmentation method can be summarised as

follows. Starting from a surface point p0, the points pi within the neighbourhood

that have a valid leg between p0 and pi are added to the surface region. The

CPRG process is then repeated in a recursive manner for all new points to be

added to the region until no more valid legs can be found. A new surface point

that has not been grouped into the region is then selected as a new seed point and

the process is repeated. Selection of a new seed point continues until every surface point has either been included in a region or been examined.

Algorithm 2 contains the algorithm for the iterative implementation of the CPRG

segmentation method.

Algorithm 2 The CPRG segmentation algorithm.

procedure Segmentation(points X)
    for i = 1 to n do
        if X_i is a surface point then
            if X_i is unlabelled then
                X_i ← new label
            end if
            Get the k nearest neighbours (N_j) for X_i
            if no edge is present among the N_j then
                for j = 1 to k do
                    if N_j is unlabelled then
                        Label N_j the same as X_i
                    else
                        Relabel all points labelled the same as N_j with the label of X_i
                    end if
                end for
            else
                for j = 1 to k do
                    if N_j is on the same side of the cut-plane then
                        if N_j is unlabelled then
                            Label N_j the same as X_i
                        else
                            Relabel all points labelled the same as N_j with the label of X_i
                        end if
                    else
                        Break from the loop, since a discontinuity was encountered
                    end if
                end for
            end if
        end if
    end for
    return
end procedure

As previously stated, not all points will be grouped into the regions due to outliers and features that cannot be resolved. If a region contains too few points, then it usually contains too little information to be of use. Therefore, after the region growing process is completed, if there is any region Rj that contains less than a threshold number of points, then the region can be ignored on the basis

of insufficient information to resolve the underlying surface of the region. In

addition, using this method no classified edge or boundary points have been

included in the regions. The next section will examine how to possibly improve

and refine the segmentation results. This will include re-incorporating edge and

boundary points, as well as examining the effects of over- and under-segmentation.
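An illustrative, non-optimised rendering of the CPRG growing loop of Algorithm 2 is sketched below. The per-point classes from Chapter 3, the per-edge-point normals and minimum curvature directions from Chapter 4, and a nearest-neighbour index are assumed to be available; a union-find structure stands in for the relabelling steps of the pseudocode, and the handling of multiple edge points in one neighbourhood is a simplifying assumption.

```python
# Illustrative sketch of the CPRG segmentation loop (Algorithm 2), not an
# exact reproduction of the thesis implementation. Assumes per-point classes,
# per-edge-point normals and minimum curvature directions, and uses a k-d tree.
import numpy as np
from scipy.spatial import cKDTree

def cprg_segment(points, classes, normals, dir_min, k=30):
    n = len(points)
    tree = cKDTree(points)
    parent = list(range(n))                    # union-find labels

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a

    def union(a, b):
        parent[find(a)] = find(b)

    def same_side(i, j, e):                    # cut-plane test (Eqs. 5.2-5.4)
        pn = np.cross(normals[e], dir_min[e])
        return np.dot(points[i] - points[e], pn) * np.dot(points[j] - points[e], pn) > 0

    for i in range(n):
        if classes[i] != "Surface":
            continue
        _, nbrs = tree.query(points[i], k=k)   # neighbours ordered by distance
        edges = [j for j in nbrs if classes[j] != "Surface"]
        for j in nbrs:
            if j == i or classes[j] != "Surface":
                continue
            if edges and not all(same_side(i, j, e) for e in edges):
                break                          # a discontinuity was encountered
            union(i, j)                        # merge the two surface labels

    return [find(i) if classes[i] == "Surface" else -1 for i in range(n)]
```

Small regions returned by this sketch would then be discarded against a minimum point-count threshold, as described above.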

5.3 Refining Segmentation Results

The aim of a segmentation process in this and other bodies of work is to perfectly

isolate each of the surface entities of which a point cloud is composed. In reality, it

is improbable that all features will be fully resolved. While the CPRG procedure

is developed to achieve the best results possible, external factors including noise,

sampling interval and density, and poor parameter values may cause problems in

the execution of the procedure. Three problems commonly arise in a segmentation

procedure, which are categorised as over-segmentation, under-segmentation and

un-resolved points (Cho and Meer, 1997). The CPRG method detailed in this

thesis is also not exempt from these anomalies, even though its design attempts

to ensure that the procedure is resilient to such problems.

5.3.1 Unsegmented Points

As mentioned, not every point in a point cloud will be incorporated into a surface

segment utilising the proposed CPRG segmentation procedure. For points that

are considered to be outliers or belong to unrecovered surfaces, this is desired.

If they were included, they would create errors and biases in the segmentation

results. However, as mentioned before, none of the edge or boundary points will

be included in the segmented regions at this stage. While these include points that are outliers or belong to unresolved features, the vast majority can be deemed

to belong to the previously identified segments. In addition, the points that

can not be segmented by the proposed CPRG method may still encode useful

information. As such, this section is aimed at identifying the three primary causes

of unsegmented points.

5.3.1.1 Singular Points

Singular points are points that are considered to be gross errors, or that can

not be considered to belong to the features and structures that comprise a

point cloud. Some causes could be particles in the atmosphere such as dust or

rain, as well as on the scanner lenses or mirrors, and material properties that

interfere with the laser signal returned by either multiple reflections or intensity

flaring. In addition, they include scattering of the beam on an edge of a structure

causing multiple returns from a single point sampling (often referred to as flaring),

and dynamic objects such as moving parts, cars and people (Sotoodeh, 2006). In

most cases, these effects are not repeatable between individual scans and different

setup locations.

A singular point will be identified by the fact that it is isolated from nearby

points. One method is to compare the point to the underlying surface definition

and determine if it is an outlier through statistical testing. Another method is

to check the points (pi) in the closest neighbourhood to the singular point (p0).

It is unlikely that a singular point will be contained in the neighbourhoods of its

neighbouring points (pi) since it is isolated from its neighbours. The benefit of

this method is that it can be easily incorporated into the region growing process

stage, as done for the results presented in Chapter 6. Finally, a singular point can

often be identified by checking whether the sampling of the point is repeatable.

If a point cloud is composed of multiple scan setups, then the singular point will

not exist in more than one overlapping scan setup. Of these methods, the first is

predominantly used.
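As a concrete illustration of the mutual-neighbourhood test described above, the sketch below flags a point as singular when it does not appear in the neighbourhoods of its own nearest neighbours; the neighbourhood size and the min_mutual count are illustrative assumptions only.

import numpy as np
from scipy.spatial import cKDTree

def flag_singular_points(points, k=40, min_mutual=1):
    # A point is marked singular if fewer than min_mutual of its k nearest
    # neighbours also contain it in their own k-neighbourhoods.
    tree = cKDTree(points)
    _, nbrs = tree.query(points, k=k + 1)    # first column is the point itself
    nbrs = nbrs[:, 1:]
    neighbour_sets = [set(row) for row in nbrs]
    singular = np.zeros(len(points), dtype=bool)
    for i, row in enumerate(nbrs):
        mutual = sum(1 for j in row if i in neighbour_sets[j])
        singular[i] = mutual < min_mutual
    return singular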

Since these singular points are isolated from nearby segments, they do not un-

duly influence the results of a segmentation method. If the singular points do

influence surrounding points, then these points can be processed in the same

manner as edge points resulting in the surrounding points being re-incorporated

into previously defined segments, and the singular points being rejected by the

re-absorption method. This method will be presented in the next section.

5.3.1.2 Edge and Resolvable Points

The majority of the points that do not belong to an identified segment consist

of classified edge and boundary points. These points were utilised by the CPRG

segmentation method to restrict the region growing process when determining the

surface segments. Therefore, these points can probably be determined to belong

to one of the surface segments that they define the extents for. As previously

highlighted, the edge points will form a region of points encompassing an inter-

section. It is possible to refine these points into just the sampled points closest to

the intersection (Belton and Lichti, 2005) or to approximate the true intersection

location (Cooper and Campbell, 2004; Belton and Lichti, 2006) prior to applying

the segmentation procedure. By doing so, the number of edge and boundary

points that are not incorporated into the surface segments can be significantly

reduced. However, it is also a simple matter to re-incorporate the classified edge

and boundary points after segmentation has been performed, by utilising the

information about the defined surface segments.

The first step of this re-incorporation process is finding the candidate surface

segments to which they may belong. First a neighbourhood of sufficient size

around the edge point is retrieved. The neighbourhood size should be large

enough that any edge point is likely to have a neighbourhood that contains surface

points on either side of the intersection. From the discussion in Chapter 3, this

size should be larger than the neighbourhood size used in the classification stage.

However, if it is too large, it is possible that surface points belonging to surfaces

other than those creating the intersection will be retrieved.

Once the candidate segments are found, the next step is to determine to which

segment the point is best associated. One method is to select the segment to

which the edge point is closest. In principle, it should be closer to the surface

it was sampled from, than to other surrounding surfaces. A problem for TLS

point clouds is that intersecting surfaces can have different sampling densities,

as highlighted in Figure 5.4. The result is that an edge point may be closer to

a surface point belonging to a different surface than to the one it was sampled

from.

Figure 5.4: Identified non-surface points (red) for the intersection of two surfaces with different sampling densities. A bias of points towards the sparsely sampled surface can be clearly seen.

A more rigorous method relies on comparing the classified edge and boundary

points to the properties of the candidate surfaces. The surface can be defined by

fitting a surface to points in the entire surface segment. To reduce the associated

computational cost, it is simpler to examine only the surface at a local level by

fitting a surface to the local neighbourhood of a point belonging to the candidate

segments. The surface point selected for each candidate segment is the one in

the neighbourhood closest to the edge point. As already highlighted by OuYang

and Feng (2005), if only a small local neighbourhood is used, then a first order

planar surface can be used with satisfactory results. This is easy to define with

the surface normal approximation found previously through PCA. The first order

planar surface can then be used to extrapolate the surface segment at the location

of the edge point. Figure 5.5 shows an example of this, where the first order planar

surface is fitted to the extents of the segments and extended to the intersection.

The residuals of the non-segmented points to each candidate surface point are

calculated as:

r_{i,j} = (p_i − s_j) · n_j    (5.5)

where r_{i,j} is the residual of point p_i to the surface point s_j in the direction of the surface normal n_j.

In this manner, the surface segment that the edge point is deemed to belong to

is determined by the surface with the smallest residual formed by the edge point.
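A minimal sketch of this residual test (Eq. 5.5) is given below; the candidate surface points, their approximate unit normals and their segment labels are assumed to have already been gathered from the neighbouring segments.

import numpy as np

def assign_by_residual(p, cand_points, cand_normals, cand_labels):
    # r_{i,j} = (p_i - s_j) . n_j for every candidate surface point s_j
    residuals = np.einsum('ij,ij->i', p - cand_points, cand_normals)
    best = int(np.argmin(np.abs(residuals)))
    # The edge point is assigned to the segment with the smallest residual.
    return cand_labels[best], residuals[best]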

Figure 5.5: Recombination of non-surface points that lie near an edge. The dotted lines represent the residuals of the points to the extended planar surfaces for each neighbouring segment. Points 1 and 2 will be candidates for adding to segment 1, and points 3 and 4 will be candidates for segment 2.

At this stage, the most likely surface candidate can be chosen, but this does not

guarantee that any of the surface candidates is valid. A tolerance on r_{i,j} can be used to ensure validity. The tolerance is usually set through experience; however, it can also be based on the sample density, noise and orientation of the

scanner, and may be calculated by the analysis of the scanner errors as done by

Bae et al. (2005). In most cases, it is simpler to perform a statistical test on the

residual against the neighbourhood of the surface segment to decide if it can be

considered an inlier.

When a segment is curved (concave or convex), or if the local neighbourhood

of a candidate surface point contains points not belonging to the same surface,

a problem can arise. Figure 5.6 shows how edge points may be incorporated

into the wrong surface segments when the surface normal is biased by erroneous

points.

Figure 5.6: Recombination of non-surface points that lie near an edge. The bias in the local surface fit causes points 2 and 3 to be more probable candidates for segment 1, and points 1 and 4 to be candidates for segment 2.

From the definition in Chapter 3, a surface point should not be influenced within

the neighbourhood by any point not considered to be in the same segment. In the

majority of cases, a bias in the surface normal of the plane towards the adjacent

segment will not be an issue. To further ensure that the neighbourhood is free

from bias, the proposed neighbourhood correction in Chapter 4 can be performed

on the neighbourhood.

In addition, the previous method of re-incorporating edge points can be combined

with the residual method by adding a weight to the residual distances based on

the distance between the edge point and the surface candidate point. Adding a

distance-based weight to the residuals helps to blend the metric between the closest segment and the smallest surface residual. If this were utilised, re-incorporation of the wrong points in Figure 5.6 would not occur. This is because, even though a

point fits the surface definition of the wrong segment better, the weighted residual

would mean that the metric would favour the correct segment. An example of

such a weighting could be developed using multiplication (Eq. 5.6) or addition

(Eq. 5.7). Examples of such metrics, m_{i,j}, are:

m_{i,j} = d_{i,j} r_{i,j}    (5.6)

m_{i,j} = λ d_{i,j} + (1 − λ) r_{i,j}    (5.7)

where d_{i,j} is the distance between the edge point p_i and the surface point s_j.
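The two weighting schemes of Eqs. 5.6 and 5.7 could be sketched as follows; the value of λ and the use of the residual magnitude are illustrative assumptions.

import numpy as np

def metric_multiplicative(d, r):
    # m_{i,j} = d_{i,j} * r_{i,j}  (Eq. 5.6); only the residual magnitude is used here
    return d * np.abs(r)

def metric_additive(d, r, lam=0.5):
    # m_{i,j} = lambda * d_{i,j} + (1 - lambda) * r_{i,j}  (Eq. 5.7)
    # lam = 0.5 is an illustrative default only.
    return lam * d + (1.0 - lam) * np.abs(r)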

The third method of re-absorbing points is by examining the attributes of the edge

points to see which surface candidate point shares the most similarities. These

attributes can be based on spectral information, intensity, normal direction or

some combination of these and others, as previously detailed in Chapter 2. The

most common attribute to use is the normal orientation. In most circumstances,

apart from where there is a large discrepancy in sampling density, the

normal will be most closely aligned with that of the segment it is most closely

related to, as seen in Figure 5.7. This can be further enhanced by the use of

the neighbourhood correction method in Chapter 4 to eliminate the effects of

multiple surfaces in the neighbourhood of edge points. A threshold can also be

incorporated, as with the surface residual method, to limit the allowable deviation

of the normal, although this may result in some points not being re-incorporated

into their appropriate segments.

Figure 5.7: Calculated normal approximation for points on the intersection of segments with differing sample densities.

From the presented methodologies, a combination of metrics is proposed to en-

sure robust results. To do this, the candidate segments are chosen as those that

are within a neighbourhood of 2k of the point of interest, where k is the number

of points in the neighbourhood used in the classification stage. Using a neigh-

bourhood of this size helps ensure that all the adjacent segments are found as

candidates while limiting the probability of finding non-adjacent segments. Any

point within this neighbourhood that belongs to a segment is used as a candidate

surface point. A surface and the residual of the edge point with respect to this

surface is calculated for every candidate surface point.

The segment with the smallest standardised residual within a set confidence in-

terval will be chosen as the best candidate segment. To verify this, the deviation

of the approximate surface normal at the point of interest to the approximate

surface normal of the point on the candidate surface segment is calculated. If

this deviation indicates that the normal direction of the edge point is nominally

aligned, or has the best alignment with the best candidate segment found from

the surface residual, then the edge point is re-incorporated. Otherwise, if the

normal deviation and the surface residual indicate different candidate surfaces,

then the point will not be incorporated with any surface segment.
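A sketch of this combined decision rule is given below, assuming the candidate segment labels, their standardised residuals and their approximate unit normals have already been collected for the edge point within the 2k neighbourhood; the confidence bound is an illustrative value.

import numpy as np

def reincorporate_edge_point(point_normal, cand_labels, std_residuals,
                             cand_normals, max_std_residual=2.0):
    # Best candidate according to the standardised surface residual.
    best_res = int(np.argmin(np.abs(std_residuals)))
    if np.abs(std_residuals[best_res]) > max_std_residual:
        return None                  # no candidate within the confidence interval
    # Best candidate according to normal alignment with the edge point.
    alignment = np.abs(cand_normals @ point_normal)
    best_norm = int(np.argmax(alignment))
    # Re-incorporate only if both criteria indicate the same segment.
    if cand_labels[best_norm] == cand_labels[best_res]:
        return cand_labels[best_res]
    return None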

While there are other methods that can be used, the proposed method outlined in

this section is simple and robust, and does not explicitly require that all surfaces

adjacent to the point be identified. For the case where a segment is next to

a complex structure that cannot be resolved, all likely points surrounding the

structure will be absorbed by the surrounding segments, as shown in Figure 5.8.

This also extends to points affected by singular erroneous points, as previously

mentioned.

5.3.1.3 Complex and Potentially Irresolvable Features

Not all points that were not initially associated with a segmented surface are recoverable, nor can they all be deemed outliers to the point cloud. The point cloud contains

(a) (b)

Figure 5.8: (a) The segmentation before non-segmented points are absorbed into candidate segments. (b) The results after the absorption procedure takes place. Different isolated surface segments are denoted by different colours, with white points representing edge and boundary points.

many elements whose composite surface features cannot be resolved with

the current information. These remaining points are deemed to be sampled from

features with a complex structure that cannot be either identified or segmented

into their composite features due to lack of information. Such examples include

vegetation, valves and gauges in industrial scenes, door handles, window recesses

and many others.

What makes these features unresolvable is a lack of sufficient information, usu-

ally a lack of sampling density or low spatial accuracy. If an object is densely

sampled with very low noise, it is a simple matter to segment the object into

detailed surface segments. However, this is not always the case with TLS since

obstructions and access to sites limit the achievable spatial resolution, and noise

in the sampling method will place a limit on how small the sample spacing can

be. Insufficient information can also be in the form of lack of a priori knowledge

of a structure. If it is known that a point cloud contains certain complex struc-

tures, e.g. valves, then it may be possible to search the unresolved points and

apply this a priori knowledge of the structure to identify and resolve them in the

point cloud. In most cases, there is no knowledge of what a point cloud will be

comprised of, which is the assumption made in this thesis.

Therefore, in most cases, there is not much that can be done with these points.

One option is to ignore them as they do not represent a significant surface feature

present in the point cloud. Another option is to cluster these points into regions.

In this way, a cluster of points will contain a complex feature and can be treated

as a single entity. Most often these clusters will be separated by identified segments

and hence the clusters can be formed by using a simple Euclidean-based distance

metric. An example of this is often done with vegetation such as trees (Thies and

Spiecker, 2004; Bae et al., 2007). This allows the definition of an unknown entity

for use in applications such as collision detection.
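Such a Euclidean clustering of the remaining points can be sketched as a simple breadth-first grouping over a fixed distance threshold; the 0.1 m radius is only an illustrative value.

import numpy as np
from collections import deque
from scipy.spatial import cKDTree

def euclidean_cluster(points, radius=0.1):
    # Group unresolved points into clusters using a fixed distance threshold.
    tree = cKDTree(points)
    labels = np.full(len(points), -1, dtype=int)
    current = 0
    for seed in range(len(points)):
        if labels[seed] != -1:
            continue
        labels[seed] = current
        queue = deque([seed])
        while queue:
            i = queue.popleft()
            for j in tree.query_ball_point(points[i], r=radius):
                if labels[j] == -1:
                    labels[j] = current
                    queue.append(j)
        current += 1
    return labels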

Another option is to fit a complex surface to the cluster of points to form a surface

representation of the points. This can be done by using a mesh, NURBS surface,

variational surface or some other developable surface method (Remondino, 2003;

Pauly et al., 2003; Amenta and Kil, 2004; Cohen-Steiner et al., 2004; Peternell,

2004; Wu and Kobbelt, 2005). However, these methods assume that the noise

level and sampling density allows for this. The last option is to fit a pre-defined

model or prototype to see if the cluster of points sufficiently matches the model

representation. This is an exhaustive process of searching for the correct proto-

type and relies on a priori knowledge of the structures within a point cloud, which is not assumed to be available in this thesis.

5.3.2 Over-Segmentation

Over-segmentation occurs when a single surface is erroneously divided into mul-

tiple segments. Usually this happens when the tolerances or parameters are set

too strictly for a particular point cloud. In the case of the CPRG segmentation

method, it usually occurs when points in the classification stage are mislabelled

as edge or boundary points because the threshold values are set too tightly. The result is that a surface is segmented at a discontinuity that does not physically

exist. For some procedures of segmentation, over-segmentation is often intention-

ally produced. An example of such is described by Cohen-Steiner et al. (2004)

for the fitting of variational surfaces. The reason behind intentionally causing over-

segmentation is that it is much simpler to identify and correct over-segmentation,

while it is much harder to identify and correct cases of under-segmentation, with-

out user intervention. Therefore, in most methodologies, it is under-segmentation

that is avoided at the cost of possible over-segmentation (Tovari and Pfeifer,

2005).

The CPRG segmentation method does limit the occurrence of over-segmentation

as follows: the edge points in the CPRG segmentation method are determined

as such if the neighbourhood is influenced by the presence of some non-surface

feature or discontinuity. This feature causes the values of the attributes in the

neighbourhood to vary significantly. If there is no variation of the attributes, then

they are considered to be similar because they are all taken from one surface.

Therefore, it limits how strictly the tolerance in the classification procedure can

be set since it is more likely that the entire surface segment will be classified as a

region of non-surface points than for a surface to be over-segmented, unless there

is a physical entity causing a discontinuity.

This means that some physical cause must be present to create a discontinuity,

or there is a gap in the sampling of the surface. An example of such is presented

in Figure 5.9. Here it is shown that a wall has been segmented into many surface

entities because of rainwater pipes affixed to the surface. Similarly the bars in

the windows cause each pane of glass to be isolated. In addition, changes in

sampling densities caused by registering overlapping scans into a single point

cloud can be classified as discontinuous. Another example is presented in Figure

5.10, where a problem is caused by points along the change in sampling density

being classified as boundary points by the classification procedure in Chapter 3.

The reason is that the change in sampling density displays similar characteristics

to the boundary points in that the centroid of the neighbourhood is significantly

distanced from the point of interest. Strictly speaking, this is not a case of over-

segmentation since each surface segment is separated by a discontinuity. However,

it may be beneficial to group these segments together if they can be deemed to

belong to the same underlying surface.

(a) (b)

Figure 5.9: (a) Initial segmentation of a wall section containing windows and down pipes, illustrating how a continuous wall section can be broken up by features on its surface. (b) Recombination of the segments. White points indicate edge points and differing colours highlight different surface segments.

Determining whether two or more segments can be merged into a common un-

derlying surface is done in a similar fashion to the incorporation of edge points

into surface segments. Two adjacent surfaces can be found by looking at a lo-

cal neighbourhood. If a surface point is on the extent of the segment, then a

neighbourhood of sufficient size will likely result in the presence of points belong-

ing to another segment. A simple method to see if they can be merged is to fit

surfaces to the two adjacent segments and test whether the parameters are the same.

The problem is that doing this for surfaces other than geometric primitives or

low-order surfaces becomes computationally complex. An easier solution is to

examine the segments at a local level to see if they can be merged.

If two segments were merged, then the points between the segments must be con-

sidered to be continuous and differentiable, as discussed in Chapter 3. Therefore,

if a local surface is fitted to the adjacent points belonging to different surfaces,

then the properties of these surfaces should reflect one another, i.e. the sur-

faces are close to one another and the surface normal orientations are nominally

aligned.

(a) (b)

Figure 5.10: Segmentation of a section of wall containing recessed windows. (a) Over-segmentation caused by changes in sampling density being detected as discontinuities. (b) Recombining the segments by the proposed method. White points indicate points classified as discontinuities and differing colours highlight different surface segments.

To do this, a local surface is fitted to the points on the adjacent extents. These

points are found by a nearest neighbourhood search with the size of the neigh-

bourhood being set large enough to ensure that points from both surfaces will be

found. If two points from different surfaces are present in the same neighbour-

hood, then it is reasonable to assume that the two points belong to segments

that are adjacent to each other, and that the points lie either on or near the

extents of the surface segments. In most cases, these points are found before the

re-incorporation of edge and boundary points. This is to ensure that the adjacent

points reflect the surface and to remove the possibility of errors in the re-absorption

of edge points affecting the process.

For two segments to be considered for merging, the resulting surface must be con-

sidered to still be continuous and differentiable. In practical terms, this translates

to the tangential surfaces being aligned, and with no significant gap between the

tangential surfaces at the location between the two surface segments.

Firstly, to test if a significant gap between the tangential surfaces occurs, a po-

sition roughly halfway between the two segment extent points is calculated, and

the distance between the tangential surfaces at this point is tested. If the gap is

less than a set tolerance, or the gap is within a confidence interval calculated by

each surface, then it can be assumed that the surface is continuous. One method

that does not utilise approximation of a point between the two surface segments

uses a statistical test for the residual of each of the adjacent points against the

fitted surface of the other, based on the standard deviation from the surface fit.

If both are within a set tolerance or confidence interval for the surface, then it

can be assumed that the segments are continuous. Figure 5.11 shows this process

for a step edge where it can be seen that, while the segments are aligned at the

extents, there is a significant gap.

It is not sufficient just for two local surfaces to be considered to meet (to be

continuous), but the two surfaces must nominally be aligned in terms of their

surface normal orientation (to be differentiable). To do this, the surface normals

are examined. If a higher order surface were used, then the normal would be

calculated at the closest position of intersection to both points or just to the

Figure 5.11: Two surface segments where the difference between the local surfaces is not insignificant and cannot be considered continuous. The points belonging to different surface segments are represented by different circles.

fitted surface normal. As with the incorporated edge points, the neighbourhood

can be refined by the method presented in Chapter 4 in order to ensure that it

is free of biases. If the two normals are closely aligned, then it can be assumed

that there is a smooth transition from one surface to the next; otherwise there

is probably an intersection between two different surfaces. To determine if the

normals are closely aligned, a simple threshold on the angles between the two

normal directions is often used, such as described by Rabbani et al. (2006).
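Both tests can be combined into a single merge check, sketched below under the assumption that a local plane (a point and a unit normal) has already been fitted to the adjacent extent of each segment; the gap tolerance and the angle threshold are illustrative values.

import numpy as np

def can_merge(p1, n1, p2, n2, max_gap=0.01, max_angle_deg=10.0):
    # Continuity: the gap between the two tangential planes, evaluated at the
    # midpoint between the extent points, must be small.
    mid = 0.5 * (p1 + p2)
    gap = abs(np.dot(mid - p1, n1)) + abs(np.dot(mid - p2, n2))
    # Differentiability: the two normals must be nominally aligned.
    angle = np.degrees(np.arccos(np.clip(abs(np.dot(n1, n2)), 0.0, 1.0)))
    return gap <= max_gap and angle <= max_angle_deg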

Figure 5.12 illustrates the case when two adjacent segments satisfy both criteria

and can be considered to be likely candidates for merging. Whether they will be

merged depends on the goal of the segmentation. As stated, in most cases, an

over-segmentation will occur using the outlined methodology when the surface is

physically bisected by a surface feature (whether it is resolvable or not). In such cases, the segments should not be merged. Also, this method works at a local level,

depending on the size of the neighbourhood used to search for adjacent surfaces.

It will not merge two segments that are separated by a large gap, often caused

by a shadow in the point cloud. In this case, there is no statistical confidence in

merging the segments by the described method and it is necessary to resort to a

Figure 5.12: Two segments where the differences between the local surfaces and alignments are insignificant and so can be considered continuous and differentiable across the gap between the extents of the two segments.

global surface fitting for each segment to see if they are candidates for merging.

Even so, depending on the size of the gap and the point cloud properties, there

still may be a significant discrepancy occurring unless both segments are fitted

simultaneously, which is an exhaustive process.

5.3.3 Under-Segmentation

Under-segmentation occurs when more than one surface feature has been labelled

as belonging to a single surface. For the proposed CPRG segmentation method,

this is normally caused by poor classification results due to the threshold values

described in Chapter 3. An example of this occurrence is presented in Figure

5.13 where tightening the thresholds in the classification stage removes the cause

of under-segmentation.

Apart from mis-classification, under-segmentation can also be caused by physical

properties or structures within a point cloud. An example is depicted in Figure

(a) (b)

Figure 5.13: Vent comprising four angled slats. (a) Under-segmentation of the slats into a single segment. (b) Slats correctly isolated by tightening the thresholds and reducing the neighbourhood size.

5.14, where the structure consists primarily of two planar segments. However,

because there is a smooth arc section between both planes, the segmentation

process will treat this structure as one segment, and will not decompose it into

the correct two planar and one arc segment. The reason is that it will not violate

the surface segment definition used, as it is possible to traverse from one plane

to the other without crossing a discontinuity. However, it should be pointed out

that if a sufficient neighbourhood size is used in segmentation, then the cut-plane formed at the intersection of the two planar surfaces may extend through the arc to segment the structure into two surfaces, but it will not allow for the isolation of the arc surface segment.

Figure 5.14: Two planar surfaces joined by a small arc that ensures a smooth transition from one surface to the other.

Another prominent occurrence of this type of under-segmentation is pipe work. The segmentation process will group a single pipe run into one segment,

but it may, in fact, be made up of multiple segments such as cylinders, elbows

and reducers. It is possible to separate these by examining the other attributes

which are local to a segment. For the case presented in Figure 5.14, if clustering

was performed on the normal directions, the planar segments would easily be sep-

arated out based on the peaks in the clustering. In a similar manner, for surface

segments containing a pipe run, if clustering was performed on the direction of

minimum curvature (which will correspond to the direction of the pipe) and the

radius of curvature, it is possible to isolate the components that form the pipe

run. Examples of these are presented by Vosselman et al. (2004) and Rabbani

and van den Heuvel (2005). This can also be applied to other segments to see if

they contain multiple components that are joined by a smooth transition.
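As a sketch of this idea, the points of an under-segmented region could be re-clustered on a local attribute such as the approximated normal direction (for the case in Figure 5.14); the use of k-means and the assumed number of clusters are illustrative choices rather than part of the proposed method.

import numpy as np
from sklearn.cluster import KMeans

def split_by_normals(normals, n_components=2):
    # Flip normals into a common hemisphere so opposite orientations of the
    # same plane do not form separate clusters.
    flipped = np.where(normals[:, 2:3] < 0, -normals, normals)
    # Cluster on the normal directions; n_components is assumed known here.
    return KMeans(n_clusters=n_components, n_init=10).fit_predict(flipped)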

The problem is that it is difficult to identify under-segmentation without human

intervention. This is why it is often best to force over-segmentation to occur,

since this is easier to cope with through automation than coping with under-

segmentation. As illustrated, many suspected cases of under-segmentation do

not violate the definition of the surface segments, and hence are considered valid.

However, it is possible to apply another more intensive method to the segments

extracted by the CPRG segmentation method, as the computational costs will

be significantly reduced by looking at individual segments instead of the entire

point cloud.

5.4 Segmentation Summary

In this chapter, the CPRG segmentation method was proposed utilising the for-

mation of local cut-planes to restrict the region growing process. The cut-planes

were based on the classification results in Chapter 3 and the principal curvature

directions specified in Chapter 4. As identified, common problems in many seg-

mentation routines are the presence of unincorporated points, under-segmentation

and over-segmentation. The effects and causes of these concerns were outlined

with regards to the proposed segmentation process, and several methods and

techniques were presented to help alleviate their presence. It should be men-

tioned that vegetation, such as trees and bushes, will not be segmented into a

common surface segment. This is due to the fact that the points will exhibit a

large variation of curvature values due to their complexity. As such, points sam-

pled from vegetation will be classed as non-surface points. The next chapter will

present the application of the techniques discussed so far in this thesis to several

real world sampled data sets.

Chapter 6

3D Point Cloud Results

The previous chapters have proposed the methodologies and techniques employed

for segmenting a point cloud into surface features. Chapter 3 outlined the at-

tributes and metrics used in classification of surface, edge and boundary points

with attention to how they were derived, what properties they reflect, and how

they are utilised in the proposed classification procedure. This led to refinement

of some of the attributes defined in Chapter 4 with attention was focused on

removing the presence of multiple surfaces from neighbourhoods for removing

biases in the calculated attributes. In addition, the method extended the cur-

vature approximation. The extension of the curvature approximation included

the directions of maximum and minimum curvatures, as well as an approxima-

tion of the radius of curvature using just the PCA detailed in Appendix A. The

results provided by these chapters were utilised in the creation of the proposed

CPRG segmentation method in Chapter 5. From combining these methods and

results, a point cloud can be segmented into isolated surface features. These

surface segments are represented by a region of sampled points that are consid-

ered to be continuous and differentiable throughout the region, with no surface

discontinuities being present within the extents of the regions.

Results of the work performed in the previous chapters are now presented within

this chapter. The CPRG segmentation procedure, and other methodologies for

its improvement, are applied to a variety of point clouds containing differing

structures. These point clouds include a simple building facade, a more complex

industrial plant, a building with differing elements and features, and finally a

section from a complex industrial scene. The results will be interrogated to

illustrate the workings of the procedure, along with the benefits and eccentricities

involved.

6.1 Simple Building Facade

Processing results of a building facade captured with a Leica Scan station (Le-

ica Geosystems HDS, 2008) are presented in this section, with the point cloud

comprising three scan setups registered together and totalling just over 2.2 million points. The point cloud is presented in Figure

6.1 with the majority of points sampled with a spacing between 0.5 cm and 3 cm.

The first step in applying the CPRG segmentation method is to calculate the val-

ues of required attributes for the classification procedure, as described in Chapter

3. These values are displayed in the grey-scaled images in Figure 6.2. They were

calculated utilising a neighbourhood size of 40 points, which was selected based

on the considerations outlined in Chapter 3. Note that neighbourhoods of size

30 to 50 were also tested without significantly affecting the classification results.

This neighbourhood size allowed the recovery of small scale features, while negat-

ing the majority of effects from surface noise. The classification decision model

(Algorithm 1) was then applied with a threshold of 1.5× 10−5 on the variance of

curvature attribute and a threshold of 0.9 on the boundary point metric.

The final results for the classification procedure are presented in Figure 6.3. White

points are classified as edge points and blue points are classified as boundary

points. These points are used to define the extents of the surface in the CPRG

segmentation procedure proposed in Chapter 5. The red and green points are

Figure 6.1: Point cloud of a building facade displaying intensity returns using the HSI colour model. Approximate dimensions of the building are given as (H, W, L) ≈ (21.25 m, 12.8 m, 6 m), where H, W and L represent the height, width and length of the point cloud.

(a) (b) (c)

Figure 6.2: Values of the attributes used in classification. (a) The curvature metric values. (b) The variance of curvature. (c) The values of the boundary metric.

those points classified as surface points. Red points satisfy the extra condition

in Eq. 3.10 where the curvature approximation is less than the mean curvature

present in the local neighbourhood, while green points do not satisfy this condi-

tion.

The results of the classification procedure were then utilised by the CPRG seg-

mentation process to isolate the simple surface features, using a neighbourhood

size of 40 for the region growing procedure. Initial segmentation results are

presented in Figure 6.4(a) with 578 segments being found. The majority of

these segments are not valid segments of the building since they come from over-

segmentation of the ground by the presence of debris from re-construction and

extraneous objects, and the identification of small sections such as the recesses

of doorways and walls. In most cases, they do not contain a sufficient number of

points to be considered valid sections (typically less than 70 points), and so are

removed from the segmented surfaces. The edge points are then re-incorporated

into adjacent surface segments if they are deemed valid and the final results are

shown in Figure 6.4(b), with the building facade consisting of just over 300 segments.

Figure 6.3: Classification results of the building facade. White points indicate classified edge points, red and green points indicate classified surface points and blue points denote classified boundary points.


(a) (b)

Figure 6.4: Segmentation results produced by the CPRG segmentation method. (a) The initial segments produced. (b) The segments after re-incorporation of valid edge points and removal of insignificant segments. The colouring of the segments has been randomised so a different colour reflects a different surface.

A vast majority of the segments come from the individual panes of glass that form

the windows, though not all these panes were isolated correctly. As illustrated

in Figure 6.5, there are two regions of different sampling density. The top region

is sampled at a spacing of approximately 2.2 cm and the bottom region has

a sample spacing of approximately 1.4 cm. It can be seen in Figure 6.5(b), that

the panes of glass are separately segmented in the region of denser sampling, but

are not segmented in the region of sparser sampling. The problem is that in the

top region, the frames between the panes are not being detected. While it is

possible to tighten the threshold for the classification procedure, the thickness

of the framing is less than 3 cm and does not differ greatly from the surface

noise and the sampling spacing. As such, the sampling density is required to be

higher (as it is for the bottom region) for the framing to be detected as being

significantly different from the surface texture. This fact is further illustrated by

the edge points on the framing being detected as valid points for re-incorporation

into the surface segments.

(a) (b)

Figure 6.5: Results of the top left windows on the front of the building facade with a change in sampling density from Figure 6.1. (a) Classification results. (b) Segmentation results.

Figure 6.6 presents a case where the CPRG segmentation method is not hindered

by small mis-classifications. At the bottom of the left window frame, not all

the edge points have been correctly classified since the frame has been

eroded at this point to leave a smoother transition between the window and

the wall, as presented in Figure 6.6(a). Figure 6.6(b) shows how the use of cut-

planes employed in the CPRG segmentation method has still restricted the region

growing process. If they were not employed, then the region growing process

would be able to grow across the discontinuity.

(a) (b)

Figure 6.6: Results of the bottom left windows on the front of the building facade from Figure 6.1. (a) Classification results. (b) Segmentation results.

6.2 Industrial Plant

This section presents the processing results of a point cloud containing an in-

dustrial plant, shown in Figure 6.7. It comprises a single scan setup captured using a Cyrax 2500 scanner (Leica Geosystems HDS, 2008), and is provided by Leica for demonstration purposes. As such, it has previously been used to demonstrate other seg-

mentation methodologies. The point cloud is sampled in a range of 4 m to 12 m,

with the sample intervals ranging from approximately 0.01 m to 0.15 m.

Figure 6.7: Point cloud of an industrial scene provided through Leica (Leica Geosystems HDS, 2008). Approximate dimensions of the point cloud are given as (H, W, L) ≈ (19.6 m, 19.2 m, 27.8 m), where H, W and L represent the height, width and length of the point cloud.

(a) (b) (c)

Figure 6.8: Values of the attributes used in classification. (a) Curvature metric values. (b) The variance of curvature. (c) The values of the boundary metric.

6.2.1 Classification of the Processing Plant Results

The first step is to calculate the necessary attributes to be used in the classification

process. The values of these attributes are presented in Figure 6.8, and were

calculated on a neighbourhood of size 40 points. This size was chosen since it

allowed the recovery of small scale features, but was large enough to remove the

effects of surface noise. Note that neighbourhoods of size 30 to 50 were also tested

without significantly affecting the classification results. For the classification of

surface points, a threshold value of 5.0× 10−5 for the variance of curvature and a

threshold value of 1.0 for the decision of boundary points were used to classify the

point cloud. Figure 6.9 shows the results, with blue points denoting boundary

points, white denoting edge points, and red and green points denoting surface

points. The difference between red and green points is that the green points also

satisfy the condition in Eq. 3.10 that the curvature value is less than the mean

curvature value, while the red points signify that the curvature value is greater

than the mean curvature value. As was stated previously, without this additional

condition, the region of points classified as edges will extend further away from

the true edge location.

The boxes in Figure 6.9 highlight the regions of interest that will be further ex-

amined. Box 1 contains points sampled from a vent consisting of two recessed

sections each containing four angled slats. The detailed results of the vent are

presented in Figure 6.10. Figure 6.10(a) illustrates the classification results with

a variance of curvature threshold of 5.0×10−5. The regular green and red regions

inside the vent indicate that a regular structure may have been missed, e.g. the individual slats. If the threshold is tightened to 2.0 × 10−5, then the intervals be-

tween the slats are classified as edges, as shown in Figure 6.10(b). Figure 6.10(c)

is a cross section of the vent and illustrates that the results of classification with

tightened thresholds define a surface discontinuity. However, it also indicates how

little difference there is from one slat to the other, under 3.4 cm. This difference

is small, especially when noise and sample spacing are considered. While this

result will lead to a segmentation of the individual slats, it highlights how the

differences between surfaces, compared to the surface noise and sample spacing,

are hard to determine.

Figure 6.9: Classification of the point cloud with white denoting edge points, blue denoting boundary points. Red and green points both denote surface points, with green points having curvature less than the mean value for the neighbourhood.

(a) (b) (c)

Figure 6.10: Details of the vent in box 1 from Figure 6.9. (a) Results with a variance of curvature threshold of 5.0 × 10−5. (b) Results with a variance of curvature threshold of 2.0 × 10−5. (c) The profile of the vent.

Boxes 2 and 3 in Figure 6.9 show a similar occurrence of regular surface structures.

In these instances, they contain a corrugated surface structure, as highlighted in

Figure 6.11. Since the structure is consistently corrugated, it will be segmented

into one surface (which will be presented shortly). Again, it is possible to tighten

the threshold to classify and separate each groove, but since it is a surface com-

prised of a regular and consistent structure, by the definition in Chapter 5, it

should be one surface.

(a) (b)

Figure 6.11: Details of the corrugated surface in box 2 from Figure 6.9. (a) Results with a variance of curvature threshold of 5.0 × 10−5. (b) The profile of the surface.

Box 4, highlighted in Figure 6.9, contains points sampled from a pipe, with Figure

6.12 providing a more detailed view. The classification in Figure 6.12(a) indicates

that there may be three slight bands around the pipe from the three regular rings

shown in red. This is supported by the cross section in Figure 6.12(b). Again

it may be possible to classify these as edges, but Figure 6.12(b) shows how the

surface change is not substantially significant, and is less than approximately

0.025 m. Another occurrence of this can just be seen on the wall that contains

the vent, as illustrated by the striping effect of the green points. For most surfaces,

the green and red points should be randomly and evenly distributed, as is the case

for most of the other surfaces. This is because the red points correspond to points

that are greater than the average curvature, and the green points correspond to

points that are less than the average curvature. If these points belong to a single

smooth surface, then the green and red points will be based on noise in the surface

sample, which should be uniform over the surface.

Finally, box 5 in Figure 6.9 depicts a complex structure that was not classified

properly. Figure 6.13 shows the classification results as well as the surface structure.

(a) (b)

Figure 6.12: Details of the pipe in box 4 from Figure 6.9. (a) The results with a variance of curvature threshold of 5.0 × 10−5. (b) The profile of the surface.

Because of the number of points, density and the low surface change, it

is hard to accurately identify and classify the points belonging to the structure.

Most techniques will struggle with this structure, or else miss it entirely. It will

be shown that the CPRG segmentation procedure will also suffer difficulties in

isolating 100% of the underlying segments; however, it still manages to retrieve

a significant number of the segments.

(a) (b)

Figure 6.13: Details of the complex structure in box 5 from Figure 6.9 with (a) showing the results with a variance of curvature threshold of 5.0 × 10−5 and (b) showing the meshed surface.

In most instances, the points were classified such that all the extents of the

significant and retrievable surface features were identified as either boundary or

edge points. These results will be utilised by the CPRG segmentation method.

Also presented is how the presence of regular surface structures in the point

cloud can form regular patterns in the classification results, through the imposed

condition in Eq. 3.10, which provides a distinction between points that have

curvature above or below the mean curvature values in a neighbourhood. This

may indicate the possibility of using a method based on counting the types of

points or a non-parametric statistic within the neighbourhood to detect these

structures.

6.2.2 Enhancing the Information of the Processing Plant

The CPRG segmentation method can be applied to the results of the classifica-

tion. Before this is done however, the techniques outlined in Chapter 4 will be

applied to illustrate how the attributes can be extended or improved. Specifically,

the radius and direction of curvature approximations introduced in Chapter 4 will

be presented, as well as the effect of the normal correction.

Figure 6.14 shows the approximation of the radius of curvature for the direction

of maximum and minimum curvature as calculated by the method in Chapter

4, along with the mean and Gaussian curvature values calculated with these

approximate values. These values can be considered noisier than those found

through higher order surface fitting of the individual segments. These can be

more accurately approximated if a larger neighbourhood size is used, although the approximation is already sufficiently accurate to provide results for further interrogation of the

point cloud.

The results from the normal correction method will now be examined for the

section of pipe presented in Figure 6.15(a). The normal alignment to the vertical

axis is given in Figure 6.15(b) for both the uncorrected and corrected normal

values. The majority of the points should be nominally aligned at 90◦. As can

be seen in the histogram, just under 80% of points have been corrected to an

alignment within 5◦ of the correct values.

(a) (b)

(c) (d)

Figure 6.14: Approximate radius of curvature for the point cloud of the processing plant. (a) Radius of curvature in the direction of maximum curvature. (b) Radius of curvature in the direction of minimum curvature. (c) The approximation of mean curvature. (d) The approximation of Gaussian curvature.

(a) (b)

Figure 6.15: (a) Section of a pipe where the connector perturbs the normal direction. (b) Histogram for the angle of alignment of both the uncorrected and corrected normals to the z (vertical) axis.

The correction process was also applied to the vent in Figure 6.10. Figure 6.16

displays the histogram of the angular alignment of both the uncorrected and cor-

rected normal direction to the horizontal axis. These points should be nominally

aligned to 90◦ for the area around the vent, and nominally 85◦ for the slats in

the vent. The histogram shows an improvement of approximately 75% of the

misaligned normal directions, with the corrected normals being within 7◦ of the

true angular alignment.

Figure 6.16: Histograms displaying the alignment of the uncorrected and corrected normal directions for a cross section of the vent in Figure 6.10. The alignment is to the z (vertical) axis.

6.2.3 Segmentation Results for the Processing Plant

Based on the results and information in the previous sections, the CPRG segmen-

tation process can be performed. Application of the procedure results in an initial

segmentation shown in Figure 6.17(a) with 103 surfaces being extracted that con-

tain at least 40 associated points. Each surface is assigned a distinctive colour.

The region growing process was performed on a neighbourhood of 40 points.

As can be observed, the classified edge points have not been included in the segmenta-

tion. Figure 6.17(b) presents the segmentation results after the re-incorporation

method proposed in Chapter 5 was applied.

(a) (b)

Figure 6.17: Segmentation results of the processing plant point cloud. (a) Before re-incorporation of the edge and boundary points. (b) After all valid edge and boundary points have been absorbed into segments.

For a more detailed look, the points inside the boxes of Figure 6.17(b) will be

individually examined. Again, box 1 contains the vent previously shown in Figure

6.10. Figure 6.18(a) shows the segmentation results of the points with the variance

of curvature threshold set at 5.0×10−5. If the threshold is tightened to 2.0×10−5,

it can be seen in Figure 6.18(b) that the individual slats can be correctly isolated

and segmented.

Box 2 in Figure 6.17(b) contains the complex structure that was introduced in

Figure 6.13. Figure 6.19(a) shows the segmentation results of the points with the

(a) (b)

Figure 6.18: Detail of the vent in box 1 from Figure 6.17(b) with (a) showing the results with a variance of curvature threshold of 5.0 × 10−5 and (b) showing the results with a variance of curvature threshold of 2.0 × 10−5.

variance of curvature threshold set at 5.0 × 10−5 and Figure 6.19(b) shows the

results if the threshold was tightened to 2.0 × 10−5. As can be seen, it is very

difficult to recover the entire structure because of factors such as point sampling

density, noise and low change in surface structures. Even with these factors,

Figure 6.19(b) does demonstrate that several planar facets were recovered.

(a) (b)

Figure 6.19: Details of the structure in box 2 from Figure 6.17(b) with (a) showing the results with a variance of curvature threshold of 5.0 × 10−5 and (b) showing the results with a variance of curvature threshold of 2.0 × 10−5.

Finally, the points contained in Box 3 and Box 4 from Figure 6.17(b) are shown.

The structures present are relatively simple and are easily retrieved. However, a

vast majority of point clouds contain such structures, and they are displayed to

highlight the effect of the re-absorption of the edge points under closer examina-

tion. Figures 6.20 and 6.21 show the results for Boxes 3 and 4 respectively.

(a) (b)

Figure 6.20: Segmentation results of the pipes in box 3 from Figure 6.17(b) with (a) and (b) being the segmentation results before and after absorption of all the possible edge points, respectively.

(a) (b)

Figure 6.21: Segmentation results of the pipes in box 4 from Figure 6.17(b) with (a) and (b) being the segmentation results before and after absorption of all the possible edge points, respectively.

6.3 Large-scale Building Scene

This section presents the processing results of a large scene which includes a

building facade and adjacent site works, depicted in Figure 6.22. The point

cloud was captured with a Leica Scan station (Leica Geosystems HDS, 2008) and

consists of over 2 million points. Sampling spacing is approximately between 1

cm and 10 cm throughout the majority of the point cloud.

Figure 6.22: Elevation map of a large point cloud taken from a high elevation containing a scene including a building facade and site works. Approximate dimensions of the scene are given as (H, W, L) ≈ (18 m, 153 m, 169 m), where H, W and L represent the height, width and length of the point cloud.

Figure 6.23 presents the attributes required for classification, which were calculated

on a neighbourhood size of 40 points. A threshold value of 1.0 × 10−5 for the

variance of curvature and a threshold value of 0.9 for the decision of boundary

points were used to classify the point cloud. Figure 6.24 shows the results, with

blue points denoting boundary points, white denoting edge points, and red and

green points denoting surface points. The difference between red and green points

is that the red points also satisfy the condition in Eq. 3.10 that the curvature

value is less than the mean curvature value, while the green points do not.


Figure 6.23: Values of the attributes used in the classification defined in Chapter 3. (a) Curvature metric values. (b) The variance of curvature. (c) Values of the boundary metric.


Figure 6.24: Classification results of the building scene. White points indicate classified edge points, red and green points indicate classified surface points and blue points denote classified boundary points.


Figure 6.25: Segmentation results of the building scene. White points indicate un-incorporated edge and boundary points, while the segmented surfaces have been randomly coloured.


The results from classification are then utilised by the CPRG segmentation process in order to isolate the simple surface features, using a neighbourhood size of 40 for the region growing procedure. The segmentation results, after the CPRG segmentation method has been applied and all valid edge points have been incorporated into the isolated segments, are presented in Figure 6.25, with 988 segments found.

A closer view of the segmentation results focusing on just the building is dis-

played in Figure 6.26. Observed here is how the majority of the elements such

as windows, chimneys and air-conditioning units have been segmented, although

the segments often contain sparse sampling and few point members. Similarly,

a closer view of the construction site on the left of the building is displayed in

Figure 6.27. Even though the sampling was sparse, since this part of the scene

was not the primary focus of the point capture, structures such as cars, boxes,

rolls of cabling and the concrete barrier were still isolated into their component

sections.

6.4 Selection of Threshold Values

In the procedures presented in Chapters 3 to 5, which have been applied to point clouds in this chapter, there are three main values that need to be set. These are the size of the neighbourhood, the threshold on the variance of curvature, and the threshold on boundary points. Guidelines for selecting the size of the neighbourhood have already been highlighted in Section 3.2.5.

The threshold for boundary selection is set under the assumption that a surface point not located near a scan extent should lie approximately at the centroid of its neighbourhood. This is tested by Eq. 3.12. If a small value is chosen, then the region of points classified as boundary points will be larger than if a larger threshold were selected. In most cases, a value of approximately $\mathrm{thres}_{c2} = 1$ is sufficient to detect boundary points.


Figure 6.26: Segmentation results of the main building present in the point cloud.

Figure 6.27: Segmentation results of the construction site to the left of the building in the point cloud.


However, if the point cloud is more densely populated with a low level of noise, a

smaller value can be used, as is the case in the point clouds presented in Sections

6.1 and 6.3.

The threshold on the variance of curvature tests whether there is zero variation in the curvature values within a neighbourhood. For neighbourhoods that contain an edge, there will be a large variation in the curvature values, so the method is not significantly sensitive to this threshold. The use of Eq. 3.10 also restricts how sensitive the classification is when the threshold used in Eq. 3.9 is over-tightened, as highlighted in Figure 3.9.

Where the threshold is sensitive is in the presence of small-scale features (as shown in Section 6.2.1). This is because there is very little difference between the variation of surface curvature caused by the structure of the feature and the variation caused by noise in data capture. The result is that it is difficult to set the threshold accurately enough to resolve the small-scale features from the noise in the data. As such, some trial and error is involved to differentiate between the two cases, and the threshold has to be set greater than the variation of curvature caused by noise in the point sampling. In general, a point cloud consisting of a high density of points sampled with low noise can take a tighter threshold value.
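As a compact illustration of these three settings, a minimal Python sketch follows (the parameter names are hypothetical and the values are those used for the building scene in Section 6.3, not a definitive configuration):

from dataclasses import dataclass

@dataclass
class ClassificationThresholds:
    # Hypothetical parameter grouping; names are illustrative, not from the thesis implementation.
    neighbourhood_size: int = 40          # number of neighbours used for the PCA attributes
    var_curvature_thresh: float = 1.0e-5  # tighter values suit dense, low-noise clouds
    boundary_thresh: float = 0.9          # approximately 1 is sufficient in most cases

building_scene_params = ClassificationThresholds()

Grouping the values in this way simply makes explicit that these are the only quantities a user would need to adjust between data sets.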

6.5 Summary of 3D Point Cloud Results

This chapter has presented the results of applying the processing procedures proposed in Chapters 3 to 5 to several different point clouds. These results have included an exploration of the different attributes and metrics involved, as well as the effect that different surface structures have on the results. The majority of the salient surface features have been identified and isolated, including difficult small-scale features, where the surface elements that compose the feature are only separated from each other by a small-scale change in the underlying structure.


Chapter 7

Conclusions and Discussion

This dissertation has proposed and outlined the CPRG segmentation method for 3D point clouds predominantly captured by TLS. The method utilised the classification scheme outlined in Chapter 3, together with additional information such as surface normals and the directions of principal curvature, to identify and define the extents of the surface segments at a local level for limiting the region growing process. Results from the application of the segmentation method were presented in Chapter 6, which illustrated how the methodology was applied and the information retrieved from a 3D point cloud through the use of the CPRG segmentation method.

7.1 Summary of Thesis

The aim and motivation of the research presented in this thesis, as outlined in Chapter 1, was to develop a segmentation method to isolate and identify the surface segments that comprise a TLS point cloud. A surface segment was defined as a general surface that satisfies the conditions of being continuous and


differentiable within the extents of the surface, i.e. any point within the surface segment can be reached from another without encountering a surface discontinuity. Previous contributions to this research were outlined in Chapter 1, with the generalised processing techniques highlighted in Chapter 2. These existing procedures utilised a combination of numerous spectral or geometric attributes and were mainly based on either region growing or clustering techniques. Common methods mainly employ region growing on the geometric attributes and are categorised as surface- or edge-based methods (Zhao and Zhang, 1997).

The CPRG segmentation method can be categorised as an edge-based method, since it utilises the classification of the points into surface, edge and boundary points to define the extents of a surface segment. The extents of a surface are defined in Chapter 3 by discontinuities in the surface segments. These discontinuities are marked by either edge points (the intersection of two sampled surfaces) or boundary points (the sampling extent of a surface).

identified, regardless of the underlying surface structure, a metric was introduced

derived from the variance of curvature (Eq. 3.2) in a local neighbourhood. The

benefit of this metric over others is that it only takes on a significant value if

there is a change in the local underlying surface structure. Such a change is normally caused only by an intersection of two or more surfaces, since the large variation in curvature values between the points on the surfaces and the points sampled near the intersection results in the variance of curvature metric taking on a significant

value. The boundary points are then found by examining the distance between

the point of interest and the centroid of the surrounding local neighbourhood,

through the attributes from the PCA and a chi-squared test, as given in Eq. 3.4.

The classification of the points can then be performed, as explained in Algorithm

1.

The attributes derived in the classification stage through the examination of the

local neighbourhood, i.e. the surface normal direction, can be perturbed by out-

liers and the presence of points sampled from multiple surfaces, especially for

points classified as edge points. Such information was used by the CPRG seg-

mentation method to help isolate the surface segments and then re-incorporate


the edge points that were not initially attributed to a surface segment. To estimate attributes free from such erroneous effects, neighbourhood correction methods can be applied, some of which are highlighted in Appendix B. Most of the commonly employed methods are based on either random or systematic re-sampling of the neighbourhood. The re-sampling is performed until a neighbourhood sampling

is found that is deemed to be adequate. An iterative method was proposed in

Chapter 4 that, based on a first order fit, altered the normalised weights of the

points within a neighbourhood until they converged to a stable solution. This

stable solution was shown to take on uniform weights for those points deemed to

belong to the dominant surface within the neighbourhood, or a weight value of

zero otherwise.
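As a rough illustration of this idea (a hypothetical NumPy sketch, not the exact weighting scheme derived in Chapter 4), such an iteratively reweighted first-order fit might be structured as follows:

import numpy as np

def reweighted_plane_fit(pts, iters=20, k=2.0):
    """Hypothetical sketch: iteratively down-weight points far from the current
    first-order (planar) fit until the normalised weights stabilise.
    'k' is an illustrative residual cut-off in units of the weighted RMS."""
    pts = np.asarray(pts, dtype=float)
    w = np.full(len(pts), 1.0 / len(pts))                    # normalised weights
    n, mu = np.zeros(3), pts.mean(axis=0)
    for _ in range(iters):
        mu = (w[:, None] * pts).sum(axis=0) / w.sum()        # weighted centroid
        d = pts - mu
        cov = (w[:, None, None] * d[:, :, None] * d[:, None, :]).sum(axis=0) / w.sum()
        evals, evecs = np.linalg.eigh(cov)                   # eigenvalues ascending
        n = evecs[:, 0]                                      # plane normal ~ smallest eigenvector
        r = np.abs(d @ n)                                    # point-to-plane residuals
        rms = np.sqrt((w * r**2).sum() / w.sum()) + 1e-12
        w_new = np.where(r < k * rms, 1.0, 0.0)              # keep points near the dominant plane
        w_new = w_new / max(w_new.sum(), 1e-12)
        if np.allclose(w_new, w):
            break
        w = w_new
    return n, mu, w

In the converged state the weights behave as described above: approximately uniform over the dominant surface and zero for outliers or points belonging to a second surface.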

Also presented in Chapter 4 was a method to extend the available curvature information beyond the unitless and directionless approximation given in Chapter 3, which was initially proposed by Pauly et al. (2002). The method, based on the work by Jiang et al. (2005), utilised the PCA of the approximated normal directions to determine the principal directions of curvature. This information was then

combined with the PCA of the point coordinates in order to propose a metric

for curvature that had a directional component, and eliminated the dependency

of the value on the neighbourhood size. It was shown that this metric approxi-

mates the radius of curvature, under the assumption that there is a single curved

surface.

The information from the classification of the points, the normal directions and the principal curvature directions was used in the proposed CPRG segmentation method. The method works by creating cut-planes to define the extents of the surface segments at a local level. A cut-plane is formed at an identified edge point as the plane defined by the surface normal direction and the direction of minimum curvature, as formulated by Eq. 5.2. Region growing was performed on the classified surface points, using the cut-planes to define the extents of the surface segments and limit the region growing process. The entire process is summarised in Algorithm 2. A benefit of this method is that misclassified or un-identified sections of a surface discontinuity will not lead to region growing occurring across multiple surfaces, as the cut-planes will limit this by closing


such gaps in the discontinuity by extrapolating the local intersection through

the un-identified edge points. Once this is done, points not associated with a

surface segment, i.e. classified edge points, were then re-incorporated into the

identified segments if they did not violate the local surface at the extents of the

surface segments. Instances and causes of over- and under-segmentation were then examined with regard to the CPRG segmentation method.
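As a rough sketch of the cut-plane construction and test (hypothetical helper names; Eq. 5.2 itself is not reproduced here), a cut-plane through an edge point can be represented by that point together with a plane normal perpendicular to both the surface normal and the direction of minimum curvature, and a candidate point can then be checked against the side of the plane occupied by the growing region:

import numpy as np

def cut_plane(edge_pt, surf_normal, min_curv_dir):
    """Hypothetical sketch: the cut-plane contains the surface normal and the
    direction of minimum curvature, so its own normal is their cross product."""
    n = np.cross(surf_normal, min_curv_dir)
    return edge_pt, n / np.linalg.norm(n)

def same_side(candidate, seed, plane_pt, plane_n):
    """True if the candidate lies on the same side of the cut-plane as the seed,
    i.e. growth has not crossed the local extent of the surface segment."""
    return np.sign((candidate - plane_pt) @ plane_n) == np.sign((seed - plane_pt) @ plane_n)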

Results from the proposed methods and procedures were presented in Chapter

6, as applied to several practical data sets. These included scenes that captured

a variety of different elements with multiple scan setups and differing proper-

ties. They included a simple building facade, an industrial plant, and a complex

building scene capturing features with different scales, complexity and resolution.

Details of each point cloud were closely examined to highlight how the process was applied, the types of surfaces and features resolved, the benefits of the proposed CPRG segmentation method, and any shortcomings that became evident.

7.2 Conclusion

The classification method, attributes and CPRG segmentation procedure proposed in this thesis provide a means for isolating and identifying the surface segments that comprise a point cloud scene. Using the metrics introduced, the discontinuities between surfaces were identified based on changes in structure,

not on specific surface properties. This allows arbitrary surfaces to be segmented

regardless of their underlying surface structure, as long as they satisfy the con-

dition that the surface segment is considered to be continuous and differentiable

within the extents of the segment. A benefit of the proposed method is the low

number of thresholds required. These were reduced to three: the neighbourhood size, the threshold for boundary detection, and the threshold on the variance of curvature. Another benefit of primary importance is that these thresholds do not rely on the geometric structure of the surface, and so can be applied to the classification of arbitrary surfaces.


In addition to the classification proposed, a method for neighbourhood correction and a method for approximating the direction and radius of curvature through the PCA were also presented. These additional attributes contribute to the robustness

of the CPRG segmentation procedure. An advantage of the CPRG was that,

since it utilised the classification results, only the neighbourhood size for region

growing was required to be specified. In addition, because of the use of local cut-planes to restrict the region growing process, gaps in the classified surface discontinuities will not affect the CPRG segmentation process detrimentally, as they would for other edge-based region growing procedures. Finally, the proposed CPRG segmentation procedure is aimed at segmenting arbitrary surface regions regardless of the underlying geometric structure of the surface segment. The results of the procedure are illustrated by its application to practical point clouds.

These point clouds contain a variety of different properties and features. Many

such point clouds were provided through the involvement of industries that utilise

TLS point clouds, to help present a cross-section of the different applications for

point clouds.

7.3 Future Directions

Although much development has been done in the area of 3D point cloud automation, there is still much research to explore, especially considering the continuous development of the technology and hardware. The aim of this thesis was to develop a generalised segmentation method for point clouds, regardless of their underlying structure or properties. One area for further work is processing time, since the efficiency of the CPRG segmentation method was not the primary focus of the thesis. However, because of the use of cut-planes to restrict segmentation, not all the points along a discontinuity need to be classified. This means that a reduced point cloud can be used to identify the extents of the surface segments. The result

is that only a portion of the entire point cloud is required to be interrogated,

reducing the number of points to be processed significantly since the cut-planes

will close any gap in the discontinuities caused by down-sampling.


In addition, the neighbourhood correction method outlined in Chapter 4 can

be used to define relationships between a point and its surrounding neighbours.

These relationships can be given a metric to define the strength (in terms of the

relation of a point to its neighbours) and those points that share a strong interde-

pendency can be grouped together to form segments. Since the strength is locally

determined using internal and external relationships, it will be independent of the underlying surface structure and can be applied to arbitrary surfaces.

Finally, the isolated surface segments should be able to be replaced by a surface

model representation. Some methods for this were outlined in Chapter 2. Since

the isolated surfaces were extracted under the assumption of being continuous and differentiable, the points belonging to each of these segments should satisfy

the definition of an algebraic surface and could be modelled as such.


References

Abdelhafiz, A. and W. Niemeier 2006. Developed technique for automatic point

cloud texturing using multi images applied to a complex site. International

Archives of the Photogrammetry, Remote Sensing and Spatial Information Sci-

ences XXXVI (part 5), 1–7.

Abmayr, T., F. Hartl, M. Reinkster, and C. Frohlich 2005. Terrestrial laser

scanning - applications in cultural heritage conservation and civil engineering.

International Archives of Photogrammetry, Remote Sensing and Spatial Infor-

mation Sciences XXXVI (part 5/W17), 18–23.

Adamson, A. and M. Alexa 2003. Approximating and intersecting surfaces from

points. In SGP ’03: Proceedings of the 2003 Eurographics/ACM SIGGRAPH

symposium on Geometry processing, Aire-la-Ville, Switzerland, pp. 230–239.

Eurographics Association.

Al-Manasir, K. and C. S. Fraser 2006. Registration of terrestrial laser scanner

data using imagery. The Photogrammetric Record 21 (115), 255–268.

Amenta, N. and Y. J. Kil 2004. Defining point-set surfaces. ACM Transactions

on Graphics 23 (3), 264–270.

Amiri Parian, J. and A. Grun 2005. Integrated laser scanner and intensity image

calibration and accuracy assessment. International Archives of Photogramme-

try, Remote Sensing and Spatial Information Sciences XXXVI (part 3/W19),

18–23.

Arya, S., D. Mount, N. S. Netanyahu, R. Silverman, and A. Y. Wu 1998. An

optimal algorithm for approximate nearest neighbour searching. Journal of the ACM (Association for Computing Machinery) 45, 891–923.


Bae, K. H., D. Belton, and D. D. Lichti 2005. A framework for position un-

certainty of unorganised three-dimensional point clouds from near-monostatic

laser scanners using covariance analysis. In Proceedings of the ISPRS Workshop

Laser scanning 2005, Enschede, Netherlands, pp. 7–12. IRSPS.

Bae, K.-H., D. Belton, and D. D. Lichti 2007. Pre-processing procedures for raw

point clouds from terrestrial laser scanners. Journal of Spatial Science 52 (2),

65–74.

Bae, K.-H. and D. D. Lichti 2008. A method for automated registration of un-

organised point clouds. ISPRS Journal of Photogrammetry and Remote Sens-

ing 63 (1), 36–54.

Barber, D., J. Mills, and P. Bryan 2003. Towards a standard specification for

terrestrial laser scanning of cultural heritage. In Proceedings of CIPA XIX

International Symposium, Antalya, Turkey, pp. 619–625.

Barnea, S. and S. Filin 2008. Keypoint based autonomous registration of ter-

restrial laser point-clouds. ISPRS Journal of Photogrammetry and Remote

Sensing 63 (1), 19–35.

Barnea, S., S. Filin, and V. Alchanatis 2007. A supervised approach for object

extraction from terrestrial laser point clouds demonstrated on trees. Interna-

tional Archives of Photogrammetry, Remote Sensing and Spatial Information

Sciences XXXVI (part 3/W49A), 135–140.

Bauer, J., K. Karner, K. Schindler, A. Klaus, and C. Zach 2003. Segmentation

of building from dense 3D point-clouds. In 27th Workshop of the Austrian

Association for Pattern Recognition, Laxenburg, Austria, pp. 253–259.

Becker, S. and N. Haala 2007. Refinement of building facades by integrated

processing of lidar and image data. International Archives of Photogrammetry,

Remote Sensing and Spatial Information Sciences XXXVI (part 3/W49A), 7–

12.

Belton, D. and D. D. Lichti 2005. Classification and feature extraction of 3d

point clouds from terrestrial laser scanners. In Proceedings of SSC 2005 Spatial

Intelligence, Innovation and Praxis: The national biennial Conference of the

Spatial Institue, pp. 39–48. Melbourne, Australia.


Belton, D. and D. D. Lichti 2006. Classification and segmentation of terres-

trial laser scanner point clouds using local variance information. International

Archives of the Photogrammetry, Remote Sensing and Spatial Information Sci-

ences XXXVI (part 5), 44–49.

Bentley, J. L. 1975. Multidimensional binary search trees used for associative

searching. Communications of the ACM 18 (9), 509–517.

Berkmann, J. and T. Caelli 1994. Computation of surface geometry and segmen-

tation using covariance techniques. IEEE Transaction on Pattern Analysis and

Machine Intelligence 16 (11), 1114–1116.

Besl, P. J. and R. C. Jain 1988a. Invariant surface characteristics for 3D object

recognition in range images. Computer Vision, Graphics, and Image Process-

ing 33 (1), 33–80.

Besl, P. J. and R. C. Jain 1988b. Segmentation through variable-order surface fit-

ting. IEEE Transactions on Pattern Analysis and Machine Intelligence 10 (2),

167–192.

Biosca, J. M. and J. L. Lerma 2008. Unsupervised robust planar segmentation of

terrestrial laser scanner point clouds based on fuzzy clustering methods. ISPRS

Journal of Photogrammetry and Remote Sensing 63 (1), 84–98.

Bishup, K., P. Arias, H. Lorenzo, and J. Armesto 2007. Application of terrestrial

laser scanning for shipbuilding. International Archives of Photogrammetry,

Remote Sensing and Spatial Information Sciences XXXVI (part 3/W52), 56–

61.

Bohm, J. 2005. Terrestrial laser scanning a supplementary approach for 3d

documentation and animation. In Photogrammetric Week ’05, pp. 263–271.

Wichmann, Heidelberg.

Bolle, R. M. and B. C. Vemuri 1991. On three-dimensional surface reconstruc-

tion methods. IEEE Transactions on Pattern Analysis and Machine Intelli-

gence 13 (1), 1–13.

Boulaassal, H., T. Landes, P. Grussenmeyer, and F. Tarsha-Kurdi 2007. Auto-

matic segmentation of building facades using terrestrial laser data. Interna-


tional Archives of Photogrammetry, Remote Sensing and Spatial Information

Sciences XXXVI (part 3/W52), 65–70.

Brenner, C. and C. Dold 2007. Automatic relative orientation of terrestrial

laser scans using planar structures and angle constraints. International

Archives of the Photogrammetry, Remote Sensing and Spatial Information Sci-

ences XXXVI (part 3/W52), 84–89.

Brenner, C., C. Dold, and N. Ripperda 2008. Coarse orientation of terrestrial

laser scans in urban environments. ISPRS Journal of Photogrammetry and

Remote Sensing 63 (1), 4–18.

Briese, C. 2006. Structure line modelling based on terrestrial laserscanner data.

International Archives of the Photogrammetry, Remote Sensing and Spatial

Information Sciences XXXVI (part 5).

Bucksch, A. and H. A. van Wageningen 2006. Skeletonization and segmentation of

point clouds using octrees and graph theory. International Archives of the Pho-

togrammetry, Remote Sensing and Spatial Information Sciences XXXVI (part

5).

Burden, R. L. and J. D. Faires 2001. Numerical Analysis (7th ed.). California,

USA: Brooks/Coles.

Chaperon, T. and F. Goulette 2001. Extracting cylinders in full 3D data using a

random sampling method and the gaussian image. In VMV ’01: Proceedings

of the Vision Modeling and Visualization Conference 2001, pp. 35–42.

Cho, K. and P. Meer 1997. Image segmentation from consensus information.

Computer Vision and Image Understanding 68 (1), 72–89.

Chum, O. and J. Matas 2005. Matching with prosac - progressive sample consen-

sus. In IEEE Computer Society Conference on Computer Vision and Pattern

Recognition 2005. (CVPR), Volume I, pp. 220–226. San Diego, CA.

Chum, O., J. Matas, and J. Kittler 2003. Locally optimized ransac. In DAGM-

Symposium, Volume 2781, pp. 236–243. Magdeburg, Germany Springer.

Clode, S. P., F. Rottensteiner, and P. Kootsookos 2005. Improving city model

determination by using road detection from LIDAR data. International


Archives of Photogrammetry, Remote Sensing and Spatial Information Sci-

ences XXXVI (part 3/W24), 159–164.

Co, C. S., B. Heckel, H. Hagen, B. Hamann, and K. I. Joy 2003. Hierarchical

clustering for unstructured volumetric scalar fields. In VIS ’03: Proceedings of

the 14th IEEE Visualization 2003.

Cohen-Steiner, D., P. Alliez, and M. Desbrun 2004. Variational shape approxi-

mation. ACM Transactions on Graphics 23 (3), 905–914.

Cooper, O. and N. Campbell 2004. Augmentation of sparsely populated point

clouds using planar intersection. In Visualisation, Image and Image Processing

(VIIP), pp. 359–364. Spain.

Daniels, J. D., L. Ha, T. Ochotta, and C. T. Silva 2007. Robust smooth feature

extraction from point clouds. In IEEE International Conference on Shape

Modeling and Applications 2007 (SMI ’07), pp. 123–136. Lyon, France.

Danuser, C. and M. Striker 1998. Parametric model fitting: From inlier characterization to outlier detection. IEEE Transactions on Pattern Analysis and Machine Intelligence 20 (2), 263–280.

Dash, M. and H. Liu 1997. Feature selection for classification. Intelligent Data

Analysis 1 (3), 131–156.

Dey, T. K., G. Li, and J. Sun 2005. Normal estimation for point clouds: a

comparison study for a voronoi based method. In 2005 Eurographics/IEEE

VGTC Symposium on Point-Based Graphics, pp. 39–46. Stony Brook, New

York, USA.

Dold, C. 2005. Extended gaussian images for the registration of terrestrial scan

data. International Archives of Photogrammetry, Remote Sensing and Spatial

Information Sciences XXXVI (part 3/W19), 180–185.

Dorninger, P. and C. Nothegger 2007. 3D segmentation of unstructured point

clouds for building modelling. International Archives of Photogrammetry, Re-

mote Sensing and Spatial Information Sciences XXXVI (part 3/W49A), 191–

196.

Du, D.-Z. and P. M. Pardalos 1998. Handbook of Combinatorial Optimization,

Volume 1. Springer.


Dyn, N., K. Hormann, S. J. Kim, and D. Levin 2001. Optimizing 3d triangulations

using discrete curvature analysis. In Proceedings of Mathematical methods for

curves and surfaces (Oslo 2000), pp. 135–146.

Falcidieno, B. and O. Ratto 1992. Two-manifold cell-decomposition of r-sets.

Computer Graphics Forum 11 (3), 391–404.

FARO 2008. [Website] http://www.faro.com. accessed: March 2008.

Filin, S. and N. Pfeifer 2006. Segmentation of airborne laser scanning data using

a slope adaptive neighborhood. ISPRS Journal of Photogrammetry & Remote

Sensing 60, 71–80.

Fischler, M. A. and R. C. Bolles 1981. Random sample consensus: a paradigm for

model fitting with applications to image analysis and automated cartography.

Communications of the ACM 24 (6), 381–395.

Forkuo, E. K. and B. King 2004. Automatic fusion of photogrammetric imagery

and laser scanner point clouds. International Archives of Photogrammetry,

Remote Sensing and Spatial Information Sciences XXXV.

Golub, G. H. and C. F. V. Loan 1989. Matrix Computations (2nd ed.). Baltimore,

MD: Johns Hopkins Press.

Gordon, S. J. 2005. Structural Deformation Measurement using Terrestrial Laser

Scanners. Ph. D. thesis, Department of Spatial Sciences, Curtin University of

Technology.

Gordon, S. J., D. Lichti, and M. Stewart 2001. Application of a high-resolution,

ground-based laser scanner for deformation measurements. In Proceedings of

10th International FIG Symposium on Deformation Measurements, Orange,

California, pp. 23–32.

Gordon, S. J., D. D. Lichti, and M. P. Stewart 2003. Structural deformation

measurement using terrestrial laser scanners. In Proceedings of 11th Interna-

tional FIG Symposium on Deformation Measurements. Santorini Island, Greece

[CD-ROM].

Gorte, B. and N. Pfeifer 2004. Structuring laser scanned trees using 3d math-

ematical morphology. International Archives of the Photogrammetry, Remote

Sensing and Spatial Information Sciences XXXV (part B5), 151–174.


Gotardo, P. F. U., O. R. P. Bellon, K. L. Boyer, and L. Silva 2004. Range

image segmentation into planar and quadric surfaces using an improved robust

estimator and genetic algorithm. IEEE Transactions on Systems, Man, and

Cybernetics, Part B 34 (6), 2303–2316.

Grun, A. and D. Akca 2005. Least squares 3d surface and curve matching. ISPRS

Journal of Photogrammetry and Remote Sensing 59 (3), 151–174.

Gumhold, S., X. Wang, and R. MacLeod 2001. Feature extraction from point

clouds. In 10th International Meshing Roundtable, Sandia National Laborato-

ries, pp. 293–305.

Haralick, R. M. and L. G. Shapiro 1993. Computer and Robot Vision, Volume 2.

Addison-Wesley Publishing Company.

Hartley, R. I. and A. Zisserman 2004. Multiple View Geometry in Computer

Vision (Second ed.). Cambridge University Press.

Hofle, B. and N. Pfeifer 2007. Correction of laser scanning intensity data: data and

model-driven approaches. ISPRS Journal of Photogrammetry and Remote

Sensing doi:10.1016/j.isprsjprs.2007.05.008 (in press).

Hoover, A., G. Jean-Baptiste, X. Jiang, P. J. Flynn, H. Bunke, D. Goldgof,

K. Bowyer, D. Eggert, A. Fitzgibbon, and R. Fisher 1996. An experimental

comparison of range image segmentation algorithms. IEEE Transactions on

Pattern Analysis and Machine Intelligence 18 (7), 673–689.

Hoppe, H., T. DeRose, T. Duchamp, J. McDonald, and W. Stuetzle 1992. Surface

reconstruction from unorganized points. Computer Graphics 26 (2), 71–78.

Horn, B. K. P. 1984. Extended gaussian images. In Proceedings of the IEEE,

Volume 72, pp. 1671–1686.

Hough, P. V. C. 1962. Method and means for recognizing complex patterns. US

Patent 3069654.

Huising, E. J. and L. M. G. Pereira 1998. Errors and accuracy estimates of laser

data acquired by various laser scanning systems for topographic applications.

ISPRS Journal of Photogrammetry and Remote Sensing 53 (5), 245–261.


Illingworth, J. and J. Kittler 1988. A survey of the hough transform. Computer

Vision, Graphics, and Image Processing 44, 87–116.

Jain, A. K. and R. C. Dubes 1988. Algorithms for clustering data. Upper Saddle

River, NJ, USA: Prentice-Hall, Inc.

Jansa, J., N. Studnicka, G. Forkert, A. Haring, and H. Kager 2004. Terrestrial

laserscanning and photogrammetry - acquisition techniques complementing one

another. International Archives of Photogrammetry, Remote Sensing and Spa-

tial Information Sciences XXXV.

Jensen, J. R. 2005. Introductory Digital Image Processing: A Remote Sensing

Perspective (3rd ed.). Upper Saddle River, New Jersey: Prentice Hall.

Jiang, J., Z. Zhang, and Y. Ming 2005. Data segmentation for geometric feature

extraction from lidar point clouds. Geoscience and Remote Sensing Symposium,

2005. IGARSS ’05. Proceedings. 2005 IEEE International 5, 3277–3280.

Jiang, X. and H. Bunke 1994. Fast segmentation of range images into planar

regions by scan line grouping. Machine Vision and Applications 7 (2), 115–

122.

Johnson, R. A. and D. W. Wichern 2002. Applied Multivariate Statistical Analysis

(5th ed.). New Jersey, USA: Prentice Hall.

Kalaiah, A. and A. Varshney 2003. Statistical point geometry. In Eurographics

Symposium on Geometry Processing, pp. 107–115. Aachen, Germany.

Kamberov, G. and G. Kamberova 2004. Topology and geometry of unorganized

point clouds. In Proceedings of the 2nd International Symposium on 3D Data

Processing, Visualization, and Transmission (3DPVT04), pp. 743–750. Thes-

saloniki, Greece.

Kanatani, K. 1996. Statistical optimization for geometric computation: theory

and practice (first ed.). Elsevier Science.

Khoshelham, K. 2007. Extending generalized hough transform to detect 3D ob-

jects in laser range data. International Archives of Photogrammetry, Remote

Sensing and Spatial Information Sciences XXXVI (part 3/W52), 206–210.


Kobbelt, L. and M. Botsch 2004. A survey of point-based techniques in computer

graphics. Computers & Graphics 28 (6), 801–814.

Lalonde, J.-F., N. Vandapel, and M. Hebert 2005. Data structure for efficient

processing in 3-D. In Robotics: Science and Systems 1. MIT Press.

Lambers, K., H. Eisenbeiss, M. Sauerbier, D. Kupferschmidt, T. Gaisecker, S. So-

toodeh, and T. Hanusch 2007. Combining photogrammetry and laser scanning

for the recording and modelling of the late intermediate period site of Pin-

chango Alto, Palpa, Peru. Journal of Archaeological Science 34, 1702–1712.

Langer, D., M. Mettenleiter, F. Hartl, and C. Frohlich 2000. Imaging ladar for

3-D surveying and CAD modeling of real-world environments. International

Journal of Robotics Research 19 (11), 1075–1088.

Leica Geosystems HDS 2008. [Website] http://www.leica-geosystems.com/hds.

accessed: March 2008.

Lempitsky, V. and Y. Boykov 2007. Global optimization for shape fitting. In

IEEE Conference on Computer Vision and Pattern Recognition, CVPR ’07,

pp. 1–8. Minneapolis, Minnesota.

Lichti, D. D. 2005. Spectral filtering and classification of terrestrial laser scanner

point clouds. The Photogrammetric Record 20 (111), 218–240(23).

Lichti, D. D. and J. Franke 2005. Self-calibration of the iqsun 880 laser scanner.

Volume I, pp. 112–121. Vienna, Austria.

Lichti, D. D. and B. Harvey 2002. Effects of reflecting surface mate-

rial properties on time-of-flight laser scanner measurements. International

Archives of the Photogrammetry, Remote Sensing and Spatial Information Sci-

ences XXXIV (part 4).

Lichti, D. D. and M. G. Licht 2006. Experience with terrestrial laser scanner mod-

elling and accuracy assessment. International Archives of the Photogrammetry,

Remote Sensing and Spatial Information Sciences XXXVI (part 5), 155–160.

Lillesand, T. M. and R. W. Kiefer 2000. Remote Sensing and Image Interpretation

(4th ed.). New York, USA: John Wiley & Sons Inc.


Lukas, G., R. Martin, and D. Marshall 1998. Faithful least-squares fitting of

spheres, cylinders, cones and tori for reliable segmentation. In Proceedings of the

5th European Conference on Computer Vision - ECCV’98, Volume 1406/1998,

pp. 671–686. Springer-Verlag, London, UK.

Maas, H. G. 2002. Methods for measuring height and planimetry discrepancies in

airborne laserscanner data. Photogrammetric Engineering and Remote Sens-

ing 68 (9), 933–940.

Maas, H. G. and G. Vosselman 1999. Two algorithms for extracting building

models from raw laser altimetry data. ISPRS Journal of Photogrammetry &

Remote Sensing 54, 153–163.

Marshall, D., G. Lukacs, and R. Martin 2001. Robust segmentation of primitives

from range data in the presence of geometric degeneracy. IEEE Transactions

on Pattern Analysis and Machine Intelligence 23 (3), 304–314.

McGlone, C., E. Mikhail, and J. Bethel 2004. Manual of Photogrammetry (5th

ed.). American Society of Photogrammetry and Remote Sensing (ASPRS).

Mechelke, K., T. P. Kersten, and M. Lindstaedt 2007. Comparative investigations

into the accuracy behaviour of the new generation of terrestrial laser scanning

systems. In Optical 3-D Measurement Techniques VIII, Volume I, pp. 319–327.

Zurich, Switzerland.

Michaelsen, E., W. von Hansen, M. Kirchhof, J. Meidow, and U. Stilla 2006.

Estimating the essential matrix: Goodsac versus ransac. In Photogrammetric

Computer Vision, PCV’06, pp. 161–166. Bonn, Germany.

Michie, D., D. J. Spiegelhalter, C. C. Taylor, and J. Campbell (Eds.) 1994. Ma-

chine learning, neural and statistical classification. Upper Saddle River, NJ,

USA: Ellis Horwood.

Mikhail, E. M. and F. E. Ackermann 1976. Observations and least-squares.

Thomas Y. Crowell company.

Mitra, N. J., A. Nguyen, and L. Guibas 2004. Estimating surface normals in

noisy point cloud data. International Journal of Computational Geometry and

Applications 14 (4,5), 261–276.


Ogundana, O. O., C. R. Coggrave, R. L. Burguete, and J. M. Huntley 2007. Fast

hough transform for automated detection of spheres in three-dimensional point

clouds. Optical Engineering 42 (5).

Ohtake, Y., A. Belyaev, and H.-P. Seidel 2004. Ridge-valley lines on meshes via

implicit surface fitting. ACM Transactions on Graphics (TOG) 23 (3), 609–612.

Ono, N., N. Tonoko, and K. Sato 2000. A case study on landslide by 3D laser

mirror scanner. International Archives of Photogrammetry, Remote Sensing

and Spatial Information Sciences 35 (B5), 593–598.

OuYang, D. and H.-Y. Feng 2005. On the normal estimation for point cloud data

from smooth surfaces. Computer-Aided Design 37 (10), 1071–1079.

Page, D. L., Y. Sun, A. F. Koschan, J. Paik, and M. A. Abidi 2002. Normal vec-

tor voting: Crease detection and curvature estimation on large, noisy meshes.

Journal of Graphical Models 64 (3/4), 199–229.

Pagounis, V., M. Tsakiri, S. Palaskas, B. Biza, and E. Zaloumi 2006. 3D laser

scanning for road safety and accident reconstruction. In FIG 2006 : Proceedings

of the conference : Shaping the change, XXIII FIG congress. Munich, Germany

[CD-ROM].

Pauly, M., M. Gross, and L. P. Kobbelt 2002. Efficient simplification of point-

sampled surfaces. In VIS ’02: Proceedings of the conference on Visualization

’02, Boston, Massachusetts, pp. 163–170. IEEE Computer Society.

Pauly, M., R. Keiser, and M. Gross 2003. Multi-scale feature extraction on point-

sampled surfaces. Computer Graphics Forum 22 (3), 281–289.

Persson, A., U. Soderman, J. Topel, and S. Ahlberg 2005. Visualization and anal-

ysis of full-waveform airborne laser scanner data. In Proceedings of the ISPRS

Workshop Laser scanning 2005, Enschede, Netherlands, pp. 7–12. IRSPS.

Peternell, M. 2004. Developable surface fitting to point clouds. Computer Aided

Geometric Design 22, 785–803.

Pfeifer, N. and C. Briese 2007. Geometrical aspects of airborne laser scanning and

terrestrial laser scanning. International Archives of Photogrammetry, Remote

Sensing and Spatial Information Sciences XXXVI (part 3/W52), 311–319.


Pfeifer, N., P. Dorninger, A. Haring, and H. Fan 2007. Investigating terrestrial

laser scanning intensity data: Quality and functional relations. In Optical 3-D

Measurement Techniques VIII, pp. 328–337. Zurich, Switzerland.

Pham, D. L., C. Xu, and J. L. Prince 2000. Current methods in medical image

segmentation. Annual Review of Biomedical Engineering, Annual Reviews 2,

315–337.

Pottmann, H., S. Leopoldseder, J. Wallner, and M. Peternell 2002. Recogni-

tion and reconstruction of special surfaces from point clouds. International

Archives of the Photogrammetry, Remote Sensing and Spatial Information Sci-

ences XXXIV (part 3A), 271–276.

Pratt, V. 1987. Direct least-squares fitting of algebraic surfaces. SIGGRAPH

Computer Graphics 21 (4), 145–152.

Pu, S. and G. Vosselman 2007. Extracting windows from terrestrial laser scan-

ning. International Archives of Photogrammetry, Remote Sensing and Spatial

Information Sciences XXXVI (part 3/W52), 320–325.

Rabbani, T., S. Dijkman, F. van den Heuvel, and G. Vosselman 2007. An inte-

grated approach for modelling and global registration of point clouds. ISPRS

Journal of Photogrammetry and Remote Sensing 61 (6), 355–370.

Rabbani, T. and F. van den Heuvel 2004. 3d industrial reconstruction by fitting

csg models to a combination of images and point clouds. International Archives

of Photogrammetry, Remote Sensing and Spatial Information Sciences XXXV.

Rabbani, T. and F. van den Heuvel 2005. Efficient hough transform for automatic

detection of cylinders in point clouds. International Archives of Photogramme-

try, Remote Sensing and Spatial Information Sciences XXXVI (part 3/W19),

60–65.

Rabbani, T., F. A. van den Heuvel, and G. Vosselman 2006. Segmentation of

point clouds using smoothness constraint. International Archives of the Pho-

togrammetry, Remote Sensing and Spatial Information Sciences XXXVI (part

5), 248–253.

Rappoport, A. and S. Spitz 1997. Interactive boolean operations for conceptual

design of 3-d solids. Computer Graphics 31, 269–278.


Remondino, F. 2003. From point cloud to surface: the modeling and visualiza-

tion problem. International Archives of Photogrammetry, Remote Sensing and

Spatial Information Sciences XXXIV-5.

Roth, G. and M. D. Levine 1993. Extracting geometric primitives. CVGIP: Image

Understanding 58 (1), 1–22.

Rottensteiner, F. and C. Briese 2003. Automatic generation of building mod-

els from lidar data and the integration of aerial images. International

Archives of Photogrammetry, Remote Sensing and Spatial Information Sci-

ences XXXIV (part 3/W13), 174–180.

Rottensteiner, F., J. Trinder, S. Clode, and K. Kubik 2005. Automated delin-

eation of roof planes from lidar data. International Archives of Photogramme-

try, Remote Sensing and Spatial Information Sciences XXXVI (part 3/W52),

221–226.

Rusinkiewicz, S. and M. Levoy 2001. Efficient variant of the icp algorithm. In

Proceedings of 3-D Digital Imaging and Modelling (3DIM), pp. 145–152. Que-

bec.

Samet, H. 1989. The design and analysis of spatial data structures. Addison-

Wesley Publishing Company.

Samet, H. 1990. Application of spatial data structures, computer graphics, image

processing, and GIS. Addison-Wesley Publishing Company.

Schafer, T., T. Weber, P. Kyrinovie, and M. Zameenikova 2004. Deformation

measurement using terrestrial laser scanning at the hydropower station of gab-

cikovo. In INGEO 2004 and FIG Regional Central and Eastern European Con-

ference on Engineering Surveying. Bratislava, Slovakia.

Schafhitzel, T., E. Tejada, D. Weiskopf, and T. Ertl 2007. Point-based stream

surfaces and path surfaces. In GI ’07: Proceedings of Graphics Interface 2007,

pp. 289–296. Montreal, Canada.

Schnabel, R., R. Wahl, and R. Klein 2007. Efficient ransac for point-cloud shape

detection. Computer Graphics Forum 26 (2), 214–226.


Schneider, M. and B. E. Weinrich 2004. An abstract model of three-dimensional

spatial data types. In GIS ’04: Proceedings of the 12th annual ACM interna-

tional workshop on Geographic information systems.

Schulz, T. and H. Ingensand 2004. Terrestrial laser scanning - investigation and

applications for high precision scanning. In FIG Working Week. Athens, Greece.

Sedgewick, R. 1988. Algorithms (second ed.). Addison-Wesley Publishing Com-

pany.

Shakarji, C. M. 1998. Least-squares fitting algorithms of the NIST algorithm

testing system. Journal of Research of the National Institute of Standards and

Technology 103 (6), 633–641.

Sharp, G. C., S. W. Lee, and D. K. Wehe 2002. ICP registration using invari-

ant features. IEEE Transactions on Pattern Analysis and Machine Intelli-

gence 24 (1), 90–102.

Sithole, G. and G. Vosselman 2005. Filtering of airborne laser scanner data based

on segmented point clouds. International Archives of Photogrammetry, Remote

Sensing and Spatial Information Sciences XXXVI (part 3/W19), 66–71.

Slob, S. and R. Hack 2004. Lecture Notes in Earth Sciences: Engineering Ge-

ology for Infrastructure Planning in Europe: A European Perspective, Volume

104, Chapter 3D Terrestrial Laser Scanning as a New Field Measurement and

Monitoring Technique, pp. 179–189. Springer Berlin / Heidelberg.

Sotoodeh, S. 2006. Outlier detection in laser scanner point clouds. International

Archives of the Photogrammetry, Remote Sensing and Spatial Information Sci-

ences XXXVI (part 5), 297–302.

Staiger, R. 2002. Laser scanning in an industrial environment. In FIG XXII

International Congress, Washington, D.C. USA.

Stanek, H. 2004. Terrestrial laser-scanning universal method or a specialists tool?

In INGEO 2004 and FIG Regional Central and Eastern European Conference

on Engineering Surveying. Bratislava, Slovakia.

Sternberg, H., T. Kersten, I. Jahn, and R. Kinzel 2004. Terrestrial 3D laser scan-

ning - data acquisition and object modelling for industrial as-built documenta-


tion and architectural applications. International Archives of Photogrammetry,

Remote Sensing and Spatial Information Sciences XXXV (part B5).

Stewart, J. 1995. Calculus (3rd ed.). California, USA: Brooks/Cole Publishing

Company.

Tang, C. K. and G. Medioni 1999. Robust estimation of curvature information

from noisy 3d data for shape description. In Proceedings of the Seventh Inter-

national Conference on Computer Vision, Kerkyra, Greece, pp. 426–433.

Tang, C. K. and G. Medioni 2002. Curvature-augmented tensor voting for shape

inference from noisy data. IEEE Transactions on Pattern Analysis and Ma-

chine Intelligence 24 (6), 858–864.

Tang, Q., N. Sang, and T. Zhang 2007. Extraction of salient contours from

cluttered scenes. Pattern Recognition 40 (11), 3100–3109.

Tangelder, J. W. H., P. Ermes, G. Vosselman, and F. A. van den Heuvel 2003. CAD-based photogrammetry for reverse engineering of industrial installations. Computer-Aided Civil and Infrastructure Engineering 18, 264–274.

Taubin, G. 1991. Estimation of planar curves, surfaces, and nonplanar space

curves defined by implicit equations with applications to edge and range image

segmentation. IEEE Transactions on Pattern Analysis and Machine Intelli-

gence 13 (11), 1115–1138.

Thies, M. and H. Spiecker 2004. Evaluation and future prospects of ter-

restrial laser scanning for standardized forest inventories. International

Archives of Photogrammetry, Remote Sensing and Spatial Information Sci-

ences XXXVI (part 8/W2), 192–197.

Tong, W.-S., C.-K. Tang, P. Mordohai, and G. Medioni 2004. First order aug-

mentation to tensor voting for boundary inference and multiscale analysis in

3D. IEEE Transactions on Pattern Analysis and Machine Intelligence 26 (5),

294–611.

Torr, P. and A. Zisserman 2000. MLESAC: A new robust estimator with applica-

tion to estimating image geometry. Computer Vision and Image Understand-

ing 78, 138–156.


Tovari, D. and N. Pfeifer 2005. Segmentation based robust interpolation a new

approach to laser data filtering. International Archives of Photogrammetry,

Remote Sensing and Spatial Information Sciences XXXVI (part 3/W19), 79–

84.

Trimble 2008. [Website] http://www.trimble.com/. accessed: March 2008.

Valanis, A. and M. Tsakiri 2004. Automatic target identification for laser scan-

ners. International Archives of Photogrammetry, Remote Sensing and Spatial

Information Sciences XXXV.

Varady, T., P. Benko, and G. Kos 1998. Reverse engineering regular objects:

simple segmentation and surface fitting procedures. International Journal of

Shape Modelling 4, 127–141.

Visintini, D., F. Crosilla, and F. Sepic 2006. Laser scanning survey of the aquileia

basilica (italy) and automatic modeling of the volumetric primitives. Interna-

tional Archives of the Photogrammetry, Remote Sensing and Spatial Informa-

tion Sciences XXXVI (part 5).

von Hansen, W., E. Michaelsen, and U. Thonnessen 2006. Cluster analysis and

priority sorting in huge point clouds for building reconstruction. In Proceedings

of the 18th International Conference on Pattern Recognition (ICPR’06), pp.

23–26. Washington, DC, USA.

Vosselman, G. and S. Dijkman 2001. 3D building model reconstruction from

point clouds and ground plans. International Archives of the Photogrammetry,

Remote Sensing and Spatial Information Sciences XXXIV (part 3/W4), 37–44.

Vosselman, G., B. G. H. Gorte, G. Sithole, and T. Rabbani 2004. Recognising

structure in laser scanner point clouds. International Archives of Photogram-

metry, Remote Sensing and Spatial Information Sciences XXXVI (part 8/W2),

33–38.

Walpole, R. E., R. H. Myers, and S. L. Myers 1998. Probability and Statistics for

Engineers and Scientists (6th ed.). Upper Saddle River, New Jersey: Prentice

Hall International Inc.


Wani, M. and B. Batchelor 1994. Edge-region based segmentation of range im-

ages. IEEE Transactions on Pattern Analysis and Machine Intelligence 16 (3),

314–319.

Weingarten, J., G. Gruener, and R. Siegwart 2003. A fast and robust 3d feature

extraction algorithm for structured environment reconstruction. In Proceedings

of 11th International Conference on Advanced Robotics (ICAR). Portugal.

Wu, J. and L. Kobbelt 2005. Structure recovery via hybrid variational surface

approximation. Computer Graphics Forum 24 (3), 277–284.

Xiang, R. and R. Wang 2004. Range image segmentation based on split-merge

clustering. In 17th International Conference on Pattern Recognition (ICPR’04),

Volume 3, Cambridge, pp. 614–617.

Yang, M. and E. Lee 1999. Segmentation of measured point data using a paramet-

ric quadric surface approximation. Computer-Aided Design 31 (7), 449–457.

Zach, C., M. Grabner, and K. Karner 2004. Improved compression of topology

for view-dependent rendering. In Proceedings of the 20th spring conference

on Computer graphics, Budmerice, Slovakia, pp. 168–176. SIGGRAPH: ACM

Special Interest Group on Computer Graphics and Interactive Techniques:

ACM,New York, NY, USA.

Zhao, D. and X. Zhang 1997. Range-data-based object surface segmentation

via edges and critical points. IEEE Transactions on Image Processing 6 (6),

826–830.

Ziou, D. and S. Tabbone 1998. Edge detection techniques - an overview. Inter-

national Journal on Pattern Recognition and Image Analysis 8 (4), 537–559.

Zoller+Frohlich 2008. [Website] http://www.zf-laser.com. accessed: March 2008.


Appendix A

Overview of Principal Component Analysis

Principal component analysis (PCA) is used in many techniques of point cloud

processing as it provides a method to describe a neighbourhood’s properties

through simple statistical analysis. Sometimes referred to as covariance anal-

ysis, PCA is concerned with explaining the structure and variance of a set of

data through the linear combinations of the variables (Johnson and Wichern,

2002). In the case of point cloud processing, the data set comprises points within

a local neighbourhood surrounding a point of interest. The principal components are the linear combinations of the variables, given by the eigenvectors of the covariance matrix, that form an orthogonal basis for the data set. The significance attributed to each component is related to the amount

of variation that the component contributes to the total variation exhibited by

the data. From this basis, the significant variables or interactions between vari-

ables can be determined, and often the complexity of the data can be reduced to

the key significant components (Johnson and Wichern, 2002).

For example, a plane in 3D space has only two significant components: the two

eigenvectors parallel to the surface. In the case of a line and a point, there will be

one significant component parallel to the line direction and no significant compo-


nents for the point. This illustrates its potential use in point cloud processing.

The first step in finding the principal components is to populate the covariance

matrix for the coordinate data for a neighbourhood. The covariance matrix is

defined as:

\[
\Sigma =
\begin{pmatrix}
\sigma_x^2 & \sigma_{xy} & \sigma_{xz} \\
 & \sigma_y^2 & \sigma_{yz} \\
\mathrm{Sym} & & \sigma_z^2
\end{pmatrix}
= \frac{1}{k} \sum_{i=1}^{k} (\mathbf{p}_i - \boldsymbol{\mu})(\mathbf{p}_i - \boldsymbol{\mu})^T \tag{A.1}
\]

where $\sigma_x^2$ and $\sigma_{xy}$ are the variance in the direction of the $x$ coordinate for the neighbourhood and the covariance between the $x$ and $y$ coordinates, respectively, $\mathbf{p}_i$ is the position vector of the $i$th point in the neighbourhood, and $\boldsymbol{\mu}$ denotes the mean or centroid of the neighbourhood. They are defined as:

\[
\mathbf{p}_i = \begin{pmatrix} x_i & y_i & z_i \end{pmatrix}^T, \qquad
\boldsymbol{\mu} = \begin{pmatrix} \bar{x} & \bar{y} & \bar{z} \end{pmatrix}^T
\]

The variances and covariances used in the given definition are the population statistics, whereas $\frac{1}{k}$ would be replaced with $\frac{1}{k-1}$ for the unbiased sample statistics (Walpole et al., 1998). The choice of which one to use depends on whether the neighbourhood is viewed as a sample from a surface or point cloud (Kamberov and Kamberova, 2004), or whether the neighbourhood is considered a unique population, since it does not necessarily reflect the properties of a single surface structure or point cloud (Berkmann and Caelli, 1994). For this thesis, the covariance information will be utilised in such a manner that the effect of the denominator either cancels out or is insignificant.

Once the covariance matrix has been specified, the principal directions need to be extracted. This is done by the use of eigenvalue decomposition (Golub and


Loan, 1989), which reduces the covariance matrix to the following form:

\[
\Sigma = \sum_{i=0}^{2} \lambda_i \mathbf{e}_i \mathbf{e}_i^T
= \begin{pmatrix} \mathbf{e}_0 & \mathbf{e}_1 & \mathbf{e}_2 \end{pmatrix}
\begin{pmatrix} \lambda_0 & & \\ & \lambda_1 & \\ & & \lambda_2 \end{pmatrix}
\begin{pmatrix} \mathbf{e}_0^T \\ \mathbf{e}_1^T \\ \mathbf{e}_2^T \end{pmatrix} \tag{A.2}
\]

with $\lambda_i$ denoting the $i$th eigenvalue and $\mathbf{e}_i$ the associated eigenvector, and with the eigenvalues arranged such that $0 \leq \lambda_0 \leq \lambda_1 \leq \lambda_2$.

These eigenvectors represent the principal components of the neighbourhood and the associated eigenvalues denote the variance in these directions (Johnson and Wichern, 2002). The eigenvalues, representing the variance in the principal directions, will be non-negative since the covariance matrix is positive semi-definite (Golub and Loan, 1989).

Figure A.1: A neighbourhood of points and the principal components found through decomposition of the covariance matrix.

Figure A.1 shows the eigenvectors for a neighbourhood of points representing a

planar surface, where $\mathbf{e}_0$ approximates the surface normal direction. As such, it

is a simple process to fit a planar surface to the data by solving the following


equation:
\[
\left( \begin{pmatrix} x & y & z \end{pmatrix}^T - \boldsymbol{\mu} \right) \cdot \mathbf{e}_0 = 0 \tag{A.3}
\]

where $\mathbf{e}_0$ is the eigenvector associated with the smallest eigenvalue, and $\boldsymbol{\mu}$ is the centroid or mean of the neighbourhood. This is equivalent to fitting a plane by least squares and solving the system by singular value decomposition (Shakarji, 1998). The RMS value of the plane fit is given by $\sqrt{\lambda_0}$.
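As an illustrative sketch of this procedure (a minimal NumPy example; the function name and return values are only for illustration), the plane fit via PCA of a neighbourhood can be computed as follows:

import numpy as np

def pca_plane_fit(neighbourhood):
    """Illustrative sketch of Eqs. A.1-A.3: PCA of a k x 3 array of neighbourhood
    points, returning the centroid, the approximate surface normal (eigenvector of
    the smallest eigenvalue) and the RMS of the plane fit (sqrt of that eigenvalue)."""
    pts = np.asarray(neighbourhood, dtype=float)
    mu = pts.mean(axis=0)                      # centroid of the neighbourhood
    d = pts - mu
    cov = (d.T @ d) / len(pts)                 # population covariance matrix (Eq. A.1)
    evals, evecs = np.linalg.eigh(cov)         # eigenvalues in ascending order (Eq. A.2)
    normal = evecs[:, 0]                       # e0: approximate surface normal
    rms = np.sqrt(max(evals[0], 0.0))          # RMS of the least-squares plane fit
    return mu, normal, rms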


Appendix B

Neighbourhood Correction Methods

There are many existing correction methods that can be applied to the prob-

lem of refining the neighbourhood selection. This appendix will outline some of

the existing techniques to remove the effects of outliers and multiple surfaces in

a local neighbourhood, including outlier detection methods, random sampling,

anisotropic filtering, optimisation and voting techniques.

B.1 Outlier Detection

Outlier detection is one of the simplest methods for alleviating adverse effects on the estimation of the geometric parameters (Danuser and Striker, 1998). Outliers are usually described as points in a data set that do not agree with

a fitted model (Walpole et al., 1998). In the case of local neighbourhoods from

point clouds, the model is most commonly either a first order planar surface fit or

a second order quadratic surface (OuYang and Feng, 2005). Because of the size


of the local neighbourhood, a planar surface is often sufficient for approximating

the local attributes since higher order surfaces do not guarantee a better solution

(OuYang and Feng, 2005) and may be over-fitted due to noise (Lempitsky and

Boykov, 2007).

With the assumption that the error in the surface normal direction follows a Gaus-

sian distribution (Mitra et al., 2004), the z-scores are then calculated (Walpole

et al., 1998). In the case of a planar surface, this is done by:

$$z_i = \frac{(p_i - \bar{p}) \cdot n}{\sqrt{\lambda_0}} \qquad (B.1)$$

where p_i is the ith point in the neighbourhood, p̄ denotes the centroid and is calculated as the mean of the neighbourhood, n is the approximated surface normal and √λ_0 is the calculated standard deviation. Both n and √λ_0 are calculated by PCA, as shown in Appendix A. A significance test is then performed as follows:

$$z_i < N_{\alpha/2}(0, 1) \quad \text{or} \quad z_i > N_{1-\alpha/2}(0, 1) \qquad (B.2)$$

If Eq. B.2 is true, then point p_i can be considered a potential outlier to the defined surface and can be removed. Note that α denotes the significance level and N_{α/2}(0, 1) comes from the standard normal distribution, with a mean of zero and a standard deviation of one.
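A minimal sketch of this test is given below (Python with NumPy and SciPy; the use of scipy.stats.norm.ppf to obtain the two-sided thresholds is an assumption about tooling rather than part of the thesis):

```python
import numpy as np
from scipy.stats import norm

def plane_outliers(points, centroid, normal, lambda0, alpha=0.05):
    """Flag potential outliers to a PCA-fitted plane using Eq. B.1 and B.2."""
    std = np.sqrt(lambda0)
    z = (points - centroid) @ normal / std          # Eq. B.1
    lower = norm.ppf(alpha / 2)                     # N_{alpha/2}(0, 1)
    upper = norm.ppf(1 - alpha / 2)                 # N_{1-alpha/2}(0, 1)
    return (z < lower) | (z > upper)                # Eq. B.2: True marks a potential outlier
```

For α = 0.05 the two thresholds are approximately ∓1.96, so points whose signed plane residual exceeds about two standard deviations are flagged.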

Using an outlier detection method to correct the neighbourhood definition may

suffer some problems. The first is the assumption of Gaussian noise. In most

instances, this assumption is acceptable since the error distribution is close to

being Gaussian. However, the noise in the normal direction is not truly Gaussian

in nature (Bae et al., 2005).

The more important problem is that it will not perform well in removing the effects caused by the presence of multiple surfaces.

surface is significantly biased since the assumed error model is a combination of all

surfaces in the neighbourhood. For example, Figure B.1 presents an illustration

of the effect of removing outliers when the neighbourhood contains multiple sur-


faces. It shows how a conventional outlier detection method will not remove the

effect of multiple surfaces, and instead will retain the orientation and structure

of the incorrectly fitted surface.

Figure B.1: Process of removing the worst point, circled in red, based on the residuals, as shown in steps (a) to (f). Because the neighbourhood is balanced around the intersection, the removal process will not lead to the normal approximation aligning with the surface normal of just one surface.

While an outlier detection method will work in the presence of outliers, it does not

have the ability to remove multiple surfaces from a neighbourhood. Therefore,

its use is limited to the classified surface points and a more intensive method is

required for edge points.

B.2 RANSAC

The Random Sample Consensus (RANSAC) method for finding a set of inliers was proposed by Fischler and Bolles (1981). The method and its variations are often employed for model definition and inlier selection because of their simple and robust nature, being reported to be able to handle


data sets containing outliers in excess of 50% of the data (Roth and Levine, 1993).

The method is performed by first randomly selecting a sub-sample of the data set

to sufficiently solve the parameters of the model being applied to the points. In

the case of point clouds, the data set is usually the neighbourhood of points and

the model being applied is a surface definition. When a point from the neigh-

bourhood is within a specified threshold of the surface fitted to the sub-sample,

then it can be declared that the point agrees with the determined parameters. A

consensus set is then determined by all points that agree with the fitted surface

(Fischler and Bolles, 1981). Once a consensus set is found, a new sub-sample

of the neighbourhood is randomly selected and the process is repeated multiple

times. The fitted surface with the largest consensus set is selected as the best

representation of the neighbourhood of points, with the consensus set forming

the inliers of the neighbourhood (Hartley and Zisserman, 2004).

The problem with the RANSAC method is determining how many sub-samples must

be generated until a good surface fit is found. It is possible simply to perform the RANSAC procedure until convergence is achieved, either when the variance of the model is adequately determined or when no better consensus set has been found after a certain number of iterations (Hartley and Zisserman, 2004). The more

accepted and adopted method is to set a maximum number of iterations based

on the probable number of inliers. In this way, the number of iterations that are

performed is the number that ensures that the probability of a good consensus

being found is high (Hartley and Zisserman, 2004). This value is defined as:

$$N_{iterations} = \frac{\log(1 - z)}{\log(1 - w^k)} \qquad (B.3)$$

where z is the desired probability of finding a good consensus set, k is the size of each sub-sample, and w is the probability of a point being in the consensus set (Hartley and Zisserman, 2004).
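The following sketch (Python/NumPy, a simplified illustration rather than the procedure adopted in this thesis) applies RANSAC with a planar model to a local neighbourhood and caps the number of iterations using Eq. B.3; the default values chosen for w, z and the distance threshold are arbitrary assumptions:

```python
import numpy as np

def ransac_plane(points, threshold=0.01, w=0.5, z=0.99, k=3, max_iter=1000):
    """Return the consensus set (inlier mask) of the best-fitting plane."""
    rng = np.random.default_rng()
    # Eq. B.3: iterations needed so a good consensus set is found with probability z.
    n_iter = min(max_iter, int(np.ceil(np.log(1 - z) / np.log(1 - w ** k))))
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(n_iter):
        sample = points[rng.choice(len(points), k, replace=False)]
        centroid = sample.mean(axis=0)
        # Plane normal from the minimal sample (three points span the plane).
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        if np.linalg.norm(normal) < 1e-12:
            continue                       # degenerate (collinear) sample
        normal /= np.linalg.norm(normal)
        dist = np.abs((points - centroid) @ normal)
        inliers = dist < threshold
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_inliers
```

For example, with w = 0.5, z = 0.99 and k = 3, Eq. B.3 gives roughly 35 iterations.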

The RANSAC method is usually applied to a first order planar surface fit when

the aim is to correct the estimated normal direction (Bauer et al., 2003). It has

also been used for the fitting of higher order surfaces and geometric primitives


(Schnabel et al., 2007). Because of its use of random sampling, if the RANSAC method is performed a sufficient number of times, then the final solution will not be unduly influenced by the local optimal solutions that can affect line search and least squares methods.

There are a few possible shortcomings when using RANSAC. The threshold value

for generating the consensus set is usually based on the assumed noise in the

surface direction. In TLS point clouds, this can be problematic since the noise is

based on the distance from an object to the laser scanner, incident angle, surface

texture, and for point clouds generated by multiple scan setups, the accuracy of

registration (Bae et al., 2005). Another shortcoming is the number of possibly

redundant sub-samples required to ensure that a good surface fit is found. Since

the surface points are assumed to contain only one surface feature in the local

neighbourhood, this disadvantage can be limited to only applying to edge points

which are determined to contain more than one probable surface feature.

The last possible problem is that the RANSAC method is aimed at finding the

dominant surface in the neighbourhood in terms of the number of points at-

tributed to its consensus set. This could mean that the detected surface may not

contain or reflect the surface that the point of interest is a member of, due to

differing spatial sampling densities. An example is shown in Figure B.2, where the dominant surface (in terms of points) does not contain the point of interest, even though that point belongs to the surface with the largest surface area in the

neighbourhood. A method to overcome this is to simply remove the consensus set

from the neighbourhood and then repeat the RANSAC procedure until another

consensus set is formed containing the point of interest.

This section has outlined the basic methodology of utilising the RANSAC pro-

cedure for neighbourhood refinement. Other variations of RANSAC include

GOODSAC (Michaelsen et al., 2006), MLESAC (Torr and Zisserman, 2000), LO-

RANSAC (Chum et al., 2003) and PROSAC (Chum and Matas, 2005). These

variants try to direct subsequent sampling to limit the search space and converge

to a solution faster than the original random sampling presented in Fischler and

Bolles (1981).


Figure B.2: The dominant surface based on area will not necessarily be the same as the dominant surface by number of points, although (due to the equal neighbourhood size in all directions) the point of interest will most likely belong to the dominant surface in terms of the area covered by points.

B.3 Anisotropic Filtering

While the RANSAC method relies on random sampling, anisotropic filtering

relies on systematic sampling. The method works by either systematically sub-

sampling the neighbourhood or applying a mask in the form of a wavelet to

the neighbourhood, as shown in Figure B.3. When performing sub-sampling,

the sub-samples are examined to determine which has the best surface fit, and that sub-sample defines an inlier set. If there is more than one sub-sample exhibiting either

similar properties or surface parameters, they can be combined into a larger inlier

set. In a similar manner, points can be added to the inlier set if they are within a

certain threshold, as with the RANSAC. A problem is that, unlike the RANSAC

where the sub-sample is assumed to be a set of inliers, the systematic method

may also include outliers to the sub-sample that need to be detected and removed

from the final solution.

The method of sub-sampling can be performed in various ways. A simple method

is shown in Figure B.3, where a circular sub-sample is taken at regular angular

intervals around either the point of interest or centroid of the neighbourhood.


Figure B.3: Example of systematic sampling at regular intervals around the point of interest.

Other sampling methods can include either taking points within an angular arc or

taking points within a window translating across the neighbourhood, as illustrated

in Figure B.4. The variations in these types of methods come primarily from how

the sampling mask and stepping function are defined within the neighbourhood.

This is beneficial when there is an assumed structure within the neighbourhood.

If this is the case, a sub-sampling method, e.g. wavelet or mask, can be specified

based on the structure to produce the best result. If a point is near an edge, a

mask can be created to split the neighbourhood along a line. The sub-sample is

then performed such that the best sub-sample is found when the extent of the

sub-sample corresponds to the defined line.
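As a rough sketch of this kind of systematic sub-sampling (Python/NumPy; for illustration it is assumed that the angular sectors are formed in the local x-y plane around the point of interest), each sector can be fitted separately and the sector with the smallest fit RMS retained as the inlier set:

```python
import numpy as np

def best_angular_sector(points, poi, n_sectors=8):
    """Split the neighbourhood into angular sectors around the point of interest
    and return the mask of the sector whose PCA plane fit has the smallest RMS
    (square root of the smallest covariance eigenvalue)."""
    offsets = points - poi
    # Angles in a local 2D frame; the x-y plane is used here for simplicity.
    angles = np.arctan2(offsets[:, 1], offsets[:, 0])
    sector_ids = ((angles + np.pi) / (2 * np.pi) * n_sectors).astype(int) % n_sectors
    best_rms, best_mask = np.inf, None
    for s in range(n_sectors):
        mask = sector_ids == s
        if mask.sum() < 3:
            continue                              # not enough points to fit a plane
        cov = np.cov(points[mask].T, bias=True)   # population covariance of the sector
        lambda0 = np.linalg.eigvalsh(cov)[0]      # smallest eigenvalue
        rms = np.sqrt(max(lambda0, 0.0))
        if rms < best_rms:
            best_rms, best_mask = rms, mask
    return best_mask
```

Sectors with similar fitted parameters could then be merged into a larger inlier set, as described above.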

The phase disk method (Clode et al., 2005) and neighbourhood re-sampling

method presented by Tang et al. (2007) share similarities to the described method.

The focus of these methods is on aligning the neighbourhood to the directions of

change to find line features. In these cases, a wavelet was applied as a mask to

find the feature of interest in the neighbourhood.


Figure B.4: (a) shows a simple mask based on an angular span from the point of interest, while (b) shows the sliding of the sampling window in a sampling direction.

B.4 Optimisation

The previously introduced methods employed either random or systematic sam-

pling of the solution space¹, instead of using a directed search method which

converges towards the optimal solution in the solution space through an iterative

refinement process (Burden and Faires, 2001). The problem of finding a set of

inliers can be formulated as a non-linear, mixed binary integer programming op-

timisation problem (Du and Pardalos, 1998). To do this, an objective function

must be specified with associated variables and constraints.

The first step is to identify the variables associated with the points in the neigh-

bourhood. In this case, a binary integer variable associated with the ith point

(xi) in the neighbourhood is used with the following definition:

$$p_i = \begin{cases} 1 & \text{if } x_i \in S \\ 0 & \text{if } x_i \notin S \end{cases} \qquad (B.4)$$

where S is a sub-sample of the neighbourhood of points comprised of inliers.

¹ The solution space is the set of all possible solutions to an optimisation problem.


From this, an objective function to be minimised can be defined as:

$$Z = \sum_{i=1}^{k} p_i \left( (x_i - c) \cdot n \right)^2 \qquad (B.5)$$

where c and n are defined as the centroid and the surface normal approximation

of the sub-sample S, respectively. These values need to be re-evaluated every

time a value for pi changes, and this procedure will be explained shortly. The

first necessary constraint for the defined objective function is:

$$p_i \in \{0, 1\} \quad \forall i \qquad (B.6)$$

which limits the variable to binary integer value with one denoting inclusion in

the sub-sample S and zero denoting exclusion. The second constraint is:

$$\sum_{i=1}^{k} p_i = m \qquad (B.7)$$

where m is the number of points that the sub-sample S contains.

When applying an optimisation technique, an initial solution of m points is cho-

sen. The process then proceeds iteratively by finding both the worst point xi in

the sub-sample (the one contributing the largest to the objective function) and

the best point xj not within the sub-sample. The change in the objective function

(denoted as γ_{i,j}) for swapping the values of p_i and p_j is then calculated for each

point pair xi and xj as:

$$\gamma_{i,j} = \left( (x_j - c) \cdot n \right)^2 - \left( (x_i - c) \cdot n \right)^2 \qquad (B.8)$$

The solution is then pivoted on the pair of points that has the smallest γi,j (or the

largest reduction to the value of the objective function) by swapping the values of

pi and pj. This process is then repeated until all the γi,j values are greater than

zero, which means that no more improvements can be made. At each stage in

the process, the points that have pi with a value of one (or are in the sub-sample)

will form a basic feasible solution (Du and Pardalos, 1998).
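A direct (and deliberately unoptimised) sketch of this pivoting procedure is given below in Python/NumPy; the centroid and normal are recomputed by a full PCA after every swap rather than by the perturbation approximation discussed next, and the choice of the first m points as the initial solution is an arbitrary assumption:

```python
import numpy as np

def refine_subsample(points, m, max_swaps=100):
    """Iteratively swap the worst in-sample point with the best out-of-sample
    point while the objective Z (Eq. B.5) keeps decreasing."""
    def plane(sub):
        # Centroid and smallest-eigenvalue eigenvector (surface normal) by PCA.
        c = sub.mean(axis=0)
        _, vecs = np.linalg.eigh(np.cov(sub.T, bias=True))
        return c, vecs[:, 0]

    in_set = np.zeros(len(points), dtype=bool)
    in_set[:m] = True                       # initial solution: the first m points
    for _ in range(max_swaps):
        c, n = plane(points[in_set])
        resid = ((points - c) @ n) ** 2
        worst_in = np.where(in_set)[0][np.argmax(resid[in_set])]
        best_out = np.where(~in_set)[0][np.argmin(resid[~in_set])]
        # gamma_{i,j}: change in the objective for swapping the pair (Eq. B.8).
        gamma = resid[best_out] - resid[worst_in]
        if gamma >= 0:
            break                           # no further improvement possible
        in_set[worst_in], in_set[best_out] = False, True
    return in_set
```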


There are some limitations to the method, as follows. The first is that the normal

and centroid should be calculated for the sub-sample after pivoting, not the cur-

rent neighbourhood solution. If it is just calculated on the current values, then

it only takes into account the change in the objective function for the current

model, not the new model based on the pivoted data set. This allows for the

possibility of the solution converging to the basic feasible solution for the current

model specification, not the basic feasible solution for the generalised model. To

overcome this, the PCA can be performed for every computation of γi,j to find

the new model; however, this becomes prohibitive due to its computational cost.

A way around this is to approximate the change in the normal direction and its

centroid by using the method described in Kanatani (1996). This defines the

perturbation of the original covariance as:

$$\Sigma' = \Sigma + \varepsilon D \qquad (B.9)$$

where Σ is the current covariance matrix, Σ′ is the perturbed matrix with D defin-

ing the change by swapping point xi with xj, and ε is the amount of change. D is

derived from observing the changes when swapping two points in the covariance

formulation:

$$\Sigma = \frac{1}{k} \sum_{i=1}^{k} (x_i - c)(x_i - c)^T \qquad (B.10)$$

If the centroid is fixed as the point of interest, then D is defined as:

$$D = \frac{x_j x_j^T - x_i x_i^T}{k} \qquad (B.11)$$

If the centroid is the mean of the sub-sample S, then D is defined as:

$$D = \frac{x_j x_j^T - x_i x_i^T - c (x_j - x_i)^T - (x_j - x_i) c^T}{k} - \frac{(x_j - x_i)(x_j - x_i)^T}{k^2} \qquad (B.12)$$

As defined in Kanatani (1996), the change that D has on the eigenvalues and


eigenvectors is approximated, respectively, by:

$$\lambda'_i = \lambda_i + \varepsilon \left( e_i \cdot D e_i \right) + O(\varepsilon^2) \qquad (B.13)$$

$$e'_i = e_i + \varepsilon \sum_{j \neq i} \frac{(e_j \cdot D e_i)}{\lambda_i - \lambda_j}\, e_j + O(\varepsilon^2) \qquad (B.14)$$

In this way the variance and the change in the normal direction can be approximated for the pivoted sub-sample by observing λ'_0 and e'_0. The PCA will still need to be performed once the final sub-sample is found, since this is only an approximate solution; otherwise the errors would accumulate at each iteration.
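The updates of Eq. B.13 and B.14 can be sketched as follows (Python/NumPy), where ε is absorbed into D so that D represents the full change in the covariance matrix caused by the swap; this is an illustrative reading of the approximation rather than the thesis implementation:

```python
import numpy as np

def perturb_eigen(eigvals, eigvecs, D):
    """First-order update of the eigenvalues and eigenvectors of a covariance
    matrix perturbed by D (Kanatani-style approximation, Eq. B.13 and B.14)."""
    new_vals = eigvals.astype(float).copy()
    new_vecs = eigvecs.astype(float).copy()
    for i in range(len(eigvals)):
        e_i = eigvecs[:, i]
        new_vals[i] = eigvals[i] + e_i @ D @ e_i            # Eq. B.13
        correction = np.zeros_like(e_i, dtype=float)
        for j in range(len(eigvals)):
            if j == i:
                continue                                     # skip the i = j term
            e_j = eigvecs[:, j]
            correction += (e_j @ D @ e_i) / (eigvals[i] - eigvals[j]) * e_j
        new_vecs[:, i] = e_i + correction                    # Eq. B.14
    return new_vals, new_vecs
```

The approximate λ'_0 and e'_0 returned here can then be used to evaluate candidate swaps cheaply, with a full PCA reserved for the final sub-sample.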

For optimisation using this technique, the objective function should be defined

to guarantee it has linear constraints and is concave over the feasible solution

region. Even if the problem was reformulated to ensure this, because c and n are

not constant, it cannot be ensured. Even though the changes are assumed to be small enough for c and n to be treated as constant between successive iterations, it still forces γ_{i,j} to be calculated explicitly at each step, instead of the

normal method of updating at each step without the need for re-calculating. The

binary integer variables also contribute to this problem of recalculating γi,j.

Due to the effect of noise, the values in the solution space will not produce a perfectly concave function. The process will still converge towards the optimal solution, but noise may introduce local optimal solutions near the global solution, preventing convergence to the exact global minimum. In most cases, because the effect of noise is assumed to be significantly less than the effect of the surface properties, the final solution may only be a local minimum but will vary only slightly from the true global minimum.

To allow for a change in the neighbourhood, the constraint in Eq. B.7 can be

redefined as:

$$\sum_{i=1}^{k} p_i \geq m \qquad (B.15)$$

with m being the minimum number of points to be within the sub-sample. Since


optimisation techniques require equality constraints, a slack variable s is required

(Du and Pardalos, 1998). The constraint is then reformulated as:

$$\sum_{i=1}^{k} p_i - s = m \qquad (B.16)$$

with the added constraint that s can only take on non-negative values. But, due to the definition of the objective cost function, it will find optimality when there are only m members in the sub-sample. To avoid this problem, the objective function can be divided by the number of points within the sub-sample ($\sum_{i=1}^{k} p_i$), or the objective function can be modified to:

$$Z = \sum_{i=1}^{k} p_i \left( (x_i - c) \cdot n \right)^2 + \nu \left( k - \sum_{i=1}^{k} p_i \right) \qquad (B.17)$$

where the second part of the function, $\nu \left( k - \sum_{i=1}^{k} p_i \right)$, encourages as many points as possible to be in the sub-sample and ν determines the importance factor.

B.5 Voting Methods

Voting methods are used in Hough transformations and similar methodologies.

They are performed by searching the parameter space for a surface model. For

each point in the neighbourhood, the coordinate values are substituted into the

surface model. This allows the possible parameter solution to be determined for

each point. An intersection will occur in the parameter space for values that

satisfy more than one point. A significant number of intersections at a point in

the parameter space will be caused by a surface. The coordinates of the point in

the parameter space will provide a solution to the surface parameters.

Calculating and defining all the intersections explicitly is computationally prohibitive. Instead, the parameter solution space can be divided into


bins. Each point then casts a vote for all bins that contain a possible solution for

the surface definition of that point. A surface is indicated by a bin with a high

number of votes, and the parameter values represented by that bin provide the solution to the surface parameters. Points that cast votes for that bin are the points that either belong to or are inliers to that surface. In this way, the surfaces present in a neighbourhood can be found, and the effects of multiple surfaces can be removed by identifying and discarding the points that do not cast votes for the dominant surface.
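A coarse sketch of such a voting scheme for planes is given below (Python/NumPy); the discretisation of the parameter space into (θ, φ, ρ) bins, and the bin resolutions chosen, are assumptions made purely for illustration:

```python
import numpy as np

def hough_plane_vote(points, n_theta=18, n_phi=36, n_rho=40, rho_max=5.0):
    """Accumulate votes in a (theta, phi, rho) parameter space for planes of the
    form n(theta, phi) . x = rho and return the parameters of the best bin."""
    accumulator = np.zeros((n_theta, n_phi, n_rho), dtype=int)
    thetas = np.linspace(0, np.pi, n_theta, endpoint=False)
    phis = np.linspace(0, 2 * np.pi, n_phi, endpoint=False)
    for it, theta in enumerate(thetas):
        for ip, phi in enumerate(phis):
            # Candidate normal direction for this (theta, phi) cell.
            n = np.array([np.sin(theta) * np.cos(phi),
                          np.sin(theta) * np.sin(phi),
                          np.cos(theta)])
            rho = points @ n                               # one rho value per point
            bins = np.clip(((rho + rho_max) / (2 * rho_max) * n_rho).astype(int),
                           0, n_rho - 1)
            # Each point votes for the bin containing its possible solution.
            np.add.at(accumulator[it, ip], bins, 1)
    it, ip, ir = np.unravel_index(accumulator.argmax(), accumulator.shape)
    best_rho = -rho_max + (ir + 0.5) * (2 * rho_max / n_rho)
    return thetas[it], phis[ip], best_rho
```

The points that voted for the winning bin form the inlier set of the dominant surface, and the remaining points can be treated as belonging to other surfaces or as outliers.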

Most commonly, a first order planar surface is used when forming the parameter

space. This is because the number of dimensions in the parameter space is equal

to the number of dimensions of the coordinate domain (Vosselman and Dijkman,

2001). Higher order surfaces and geometric structures have also been used (Vos-

selman et al., 2004; Rabbani and van den Heuvel, 2005; Ogundana et al., 2007).

However, the number of dimensions for their parameter space is often larger than

the coordinate domain, which leads to an increase in computational complexity.

Another method of voting is based on creating a line from the centroid to each

point in the surrounding neighbourhood. Every line created votes for the solution

which contains the normal direction that is perpendicular to that line. The

solution with the highest number of votes represents the best normal orientation.

Page et al. (2002) illustrates this method using quadratic curves between the

centroid and points in the neighbourhood.
