

Computers & Geosciences, Vol. 11, No. 6, pp. 667-673, 1985. 0098-3004/85 $3.00 + .00. Printed in the U.S.A. © 1985 Pergamon Press Ltd.

EFFICIENT SELECTION, STORAGE, AND RETRIEVAL OF IRREGULARLY DISTRIBUTED ELEVATION DATA

LEILA DE FLORIANI, BIANCA FALCIDIENO, AND CATERINA PIENOVI
Istituto per la Matematica Applicata del C.N.R., Genova, Italy

and

GEORGE NAGY
Electrical, Computer, and Systems Engineering Department, Rensselaer Polytechnic Institute,

Troy, NY 12180, U.S.A.

(Received 19 December 1983; revised 12 July 1985)

Abstract--The advantages and disadvantages of various digital terrain models are discussed briefly and a new model for triangulating a set of nonuniformly distributed three-dimensional surface observations is described. An algorithm for hierarchical subdivision of a set of data points into nested triangles is proposed. The algorithm selects a subset of the data points which reduces the maximum error between a piecewise linear approximation of the surface, using only the selected points, and the elevations of the points not selected. The data structure used is such that for any given degree of approximation (in the maximum-error sense) only the necessary points need to be stored. Furthermore, an efficient method is available to approximate the elevation of the surface at any point not included in the original data. The performance of the algorithm is demonstrated experimentally.

Key Words: Digital terrain models, Computational geometry, Data structures, Cartography

INTRODUCTION

Surface representation is a usual requirement in geographic data processing. The data structure used to accommodate the algorithms necessary to make use of surface information usually is termed a Digital Terrain Model (DTM) (Fowler and Little, 1979). The observations consist of a set of triplets (x, y, z). The dependent variable z is termed "elevation". The independent variables (x-y pairs) may form a uniformly spaced array or they may be irregularly distributed. Given a list of points (x, y) with elevation z, the principal alternatives for obtaining a data structure which provides a satisfactory approximation to the topography of the terrain are:

(1) computing or storing the z-values at the vertices of a uniform x-y grid, and

(2) associating the given data points with each other by a triangulation.

Gridding seems to be appropriate particularly when the original data are given at regularly spaced intervals and when the elevations of isolated locations are desired (Davis and McCullagh, 1975; Nagy and Wagle, 1979). It results, however, in inefficient storage utilization when the data contain large regions of uniform slope. Hierarchical grid-based representations, such as the quadtree or the k-d tree, are not adaptable directly to consistent linear interpolation of elevation data. Triangulation is well suited for irregularly distributed data points and surface approximation by linear interpolation (Lawson, 1977). Because it is possible to pass a plane through any three points in space, interpolating elevation values after triangulation does not give rise to ambiguities. Triangulation also renders it possible to work with nonconvex and multiply connected regions, such as elevations near bodies of water. A hierarchical approach can be applied readily because triangles can be subdivided into triangles. We shall show that hierarchical triangulation provides a way of representing the terrain topography with a restricted number of data points, selected from a larger data set.
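To make the linear interpolation step concrete, the following sketch (illustrative only, not code from the paper) evaluates the plane through three data points at a query location using barycentric coordinates:

def plane_interpolate(p1, p2, p3, x, y):
    """Interpolate z at (x, y) from the plane through three (x, y, z) points."""
    (x1, y1, z1), (x2, y2, z2), (x3, y3, z3) = p1, p2, p3
    # Denominator is twice the signed triangle area; nonzero for non-degenerate triangles.
    d = (y2 - y3) * (x1 - x3) + (x3 - x2) * (y1 - y3)
    w1 = ((y2 - y3) * (x - x3) + (x3 - x2) * (y - y3)) / d
    w2 = ((y3 - y1) * (x - x3) + (x1 - x3) * (y - y3)) / d
    w3 = 1.0 - w1 - w2
    return w1 * z1 + w2 * z2 + w3 * z3

Because the three weights sum to one, the interpolated surface is continuous across shared triangle edges.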

A third alternative is to store the contour lines, which can be extracted directly or obtained from a rectangular or triangular grid. Strip trees (Ballard, 1981) provide an efficient hierarchical representation for arbitrary curves, but the inevitable loss of accuracy resulting from not storing the actual observation points cannot be quantified easily.

Because of the advantages cited, in the remainder of this paper we shall be concerned only with triangulation. DTMs using a triangulated structure are termed Triangulated Irregular Networks (TINs). First, the requirements imposed by common applications of DTMs are reviewed briefly. Then the algorithms and data structures necessary for our method are described. Next, the performance of the method is demonstrated on diverse data. Finally, some conclusions are drawn regarding the domain of applicability of the method, and directions for further research are indicated.

REQUIREMENTS FOR A DIGITAL TERRAIN MODEL

The principal criterion for the acceptability of a DTM is the accuracy of the terrain approximation that can be obtained with a given allocation of computing resources.



Accuracy

We consider "accuracy" a property of both the subset of points selected to represent the surface and of the specific triangulation imposed on these points. A triangulation defined on a set of points constitutes an approximation of the surface by planar triangular patches. Consequently, accuracy may be considered as some measure (average, median, or maximum) of the distribution of the vertical deviations, from the underlying plane, of all the points not included in the triangulation. Such points may be of two types: either they were part of the original data set and rejected by the selection algorithm, or they constitute entirely new "test data" that were not available to the selection algorithm at all. Once we have agreed on the appropriate criterion (the one adopted in this paper is to minimize the maximum deviation), it is straightforward to compare alternative methods of point selection and triangulation on any specific data. If our primary objective is data compression, then points of the first type are appropriate for testing the algorithm. If, however, we wish to estimate how well the data structure represents the topography, then we also must consider points of the second type.
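For illustration, the maximum-deviation criterion can be written down in a few lines (a sketch; `approx` stands for any piecewise linear approximation built from the selected points):

def deviation_stats(points, approx):
    """Maximum and mean vertical deviation of (x, y, z) points from a surface
    approximation supplied as a callable approx(x, y) -> estimated z."""
    errors = [abs(z - approx(x, y)) for x, y, z in points]
    return max(errors), sum(errors) / len(errors)

Applied to rejected training points it measures compression quality; applied to withheld test points it measures how well the structure represents the topography.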

Initial preparation of the data structure

Given the initial data, the selection of the subset of points constituting the DTM is a relatively time-consuming operation. Indeed, if M points are to be selected from N points, then the number of possible candidate subsets is (N choose M), which is a large number indeed. It is shown that the proposed algorithm requires only between N log M and N × M operations. Our algorithm does not, however, guarantee determining the subset with the least number of points which satisfies the specified error criterion.

Ease of updating

When the values of existing points are changed, new points are introduced, or old points are deleted, the data structure must be changed to accommodate the new information. At best, every modification requires a small, fixed number of operations. In general, however, if the DTM was constructed according to a special criterion, such as uniform length of search or approximation accuracy, then even a change involving only a single point may require complete reorganization of the structure to retain the properties guaranteed by the initial model.

Retrieval efficiency

Here we are concerned about the computer time necessary to approximate the surface elevation at a specific point (which may or may not be included in the database). Both the "average" and the "worst case" situations are of interest.

Storage efficiency

In a similar vein, we wish to determine the storage capacity required for a given database. Because a subset of the points will be stored, at the minimum it is necessary to save the x, y, and z values of these points and the spatial relationship of the points to one another. It is shown that with the proposed structure, the total storage required is proportional to the number of selected points M, and the constant of proportionality is calculated.

METHOD

The preparation of our DTM consists of two steps:

(1) Initial subdivision of the data points into mutually exclusive and completely exhaustive triangular subsets.

(2) Subdivision of each subset into nested triangles by hierarchical triangulation.

Initial triangulation

The data are presented originally as a set of irregularly distributed data points S. We must represent the terrain topography in the region bounded by the convex hull of S, which is the minimum convex polygon enclosing S. The ideal procedure for subdivision determines the point P which results in the subdivision of the points into mutually exclusive triangular regions with the property that the maximum error within each triangle, in the sense described, is as small as possible. If, however, the total number of points is large, or if the region is not connected (for example, if the area of interest contains a lake), then we may have to accept a nonoptimal subdivision, which can be obtained more rapidly.
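As an aside, the bounding region can be obtained with standard computational-geometry tools; a sketch using SciPy (illustrative, not the authors' procedure):

import numpy as np
from scipy.spatial import ConvexHull

def hull_boundary(points_xyz):
    """Indices of the convex-hull points of the (x, y) projection, in boundary order."""
    xy = np.asarray(points_xyz, dtype=float)[:, :2]
    return ConvexHull(xy).vertices   # counterclockwise order for 2-D input

The returned boundary, joined to a chosen interior point P, yields the initial fan of triangular regions.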

Hierarchical subdivision

The result of the initial triangulation of a simply connected region is a two-level tree of as many triangles as there are edges in the convex hull. In each of these triangles the surface, as represented by the elevation values of the data points, now will be approximated by a set of nested triangles. These triangles are obtained hierarchically; at each step of the algorithm one of the existing triangles is selected for subdivision. The triangle selected for subdivision is the one which has the maximum error, where the error associated with a triangle is the vertical distance of its farthest point from the triangle's plane.

A subdivision consists of joining the data point with the maximum error within the triangle to each of its vertices. The subdivision proceeds until one of the following three conditions is met:

(1) All of the remaining data points are within a preset tolerance of the piecewise linear approximation represented by the existing triangulation.

(2) All of the data points have been used. If this stopping rule is used, then the order in which the points are selected is preserved in the data structure, so it is possible to truncate the resulting tree to the equivalent of one generated with rule (1).

(3) A preset number K of data points have been used. The number of points effectively used can be less than K when this smaller subset gives a better approximation.

Our procedure for triangulating a set of points bounded by a convex hull which is itself a triangle may be summarized formally as follows. For simplicity, we consider only stopping rule (1).


Procedure TRIERARCHY
while error associated with the triangulation T > tolerance do
    let t be the triangle with the maximum error;
    let P be the point in t with the greatest error;
    subdivide t into three subtriangles t1, t2, and t3 having P as a common vertex;
    for i ← 1 to 3 do
        add ti to the tree structure as a child of t;
        select the points in t which fall within ti;
        if ti is not empty, then compute the maximum error associated with ti
    end for
end while
end TRIERARCHY
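For concreteness, here is a minimal Python sketch of this subdivision loop (illustrative only; the authors' implementation was in PASCAL). It keeps each candidate triangle on a priority queue keyed by its maximum error, returns only the leaf triangles rather than the full ternary tree described below, and ignores the edge-point special case discussed later:

import heapq

def barycentric(tri, x, y):
    """Barycentric weights of (x, y) with respect to a triangle of (x, y, z) vertices."""
    (x1, y1, _), (x2, y2, _), (x3, y3, _) = tri
    d = (y2 - y3) * (x1 - x3) + (x3 - x2) * (y1 - y3)
    w1 = ((y2 - y3) * (x - x3) + (x3 - x2) * (y - y3)) / d
    w2 = ((y3 - y1) * (x - x3) + (x1 - x3) * (y - y3)) / d
    return w1, w2, 1.0 - w1 - w2

def vertical_error(tri, p):
    """Vertical distance of point p from the plane through the vertices of tri."""
    w1, w2, w3 = barycentric(tri, p[0], p[1])
    return abs(p[2] - (w1 * tri[0][2] + w2 * tri[1][2] + w3 * tri[2][2]))

def contains(tri, p, eps=1e-12):
    """True if p falls within tri (edges included)."""
    return all(w >= -eps for w in barycentric(tri, p[0], p[1]))

def worst(tri, pts):
    """(maximum error, farthest point) over pts; (0.0, None) if pts is empty."""
    return max(((vertical_error(tri, p), p) for p in pts), default=(0.0, None))

def trierarchy(tri, pts, tol):
    """Subdivide tri until every remaining point is within tol of its triangle's plane.
    tri is a tuple of three (x, y, z) vertices; pts are the data points inside it."""
    heap = [(-worst(tri, pts)[0], tri, pts)]       # triangle with the largest error pops first
    leaves = []
    while heap:
        neg_err, t, inside_pts = heapq.heappop(heap)
        if -neg_err <= tol:
            leaves.append(t)                       # stopping rule (1) satisfied for this triangle
            continue
        _, p = worst(t, inside_pts)                # point with the greatest error
        rest = [q for q in inside_pts if q != p]   # p becomes a shared vertex
        a, b, c = t
        for child in ((a, b, p), (b, c, p), (c, a, p)):
            child_pts = [q for q in rest if contains(child, q)]
            heapq.heappush(heap, (-worst(child, child_pts)[0], child, child_pts))
    return leaves

With the initial triangulation in place, trierarchy would be called once per initial triangle.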

The most important component of the data structure necessary for the efficient operation of the algorithm is a ternary tree. The root of this tree is one of the initial triangles. Each node represents a triangle, and each branch corresponds to a subdivision. A triangle becomes a leaf node if it cannot be divided further (either because it contains no data points other than its own vertices or because none of the data points it contains is farther from the plane passing through its vertices than the preset tolerance).

In order to avoid recomputing the distances of the points in triangles which have not been subdivided, the error associated with each triangle is preserved at the corresponding node of the tree. The error associated with a given triangulation is thus the maximum error of its leaf-node triangles.

The special situation of a point falling on the edge of a previously created triangle is handled by duplicating the point. Thus, one "copy" of the point is considered to be part of each of the triangles sharing the edge, and participates in further subdivisions of that triangle. Of course, a subdivision of a triangle using such an edge point results in only two triangles rather than three.

COMPUTATIONAL EVALUATION

Time complexity

The worst situation for hierarchical triangulation occurs when, at each subdivision of a triangle, two of the three subtriangles are empty, whereas the third contains all of the points. In this situation the number of steps performed is proportional to M × N, where M is the number of points necessary to meet the tolerance and N is the total number of data points in the initial triangle T. On the other hand, when each subtriangle contains about one-third of the points within the triangle being subdivided, the number of computations is proportional to M log N.

The time required to determine the smallest triangle which includes a given point (i.e., the retrieval time) depends directly on the depth of the tree. As we have seen, this may range from M in the worst situation to log M in the best situation.
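A sketch of that retrieval step (illustrative; the node fields assumed here, vertices and children, follow the tree layout described in the next subsection):

def same_side(p, q, a, b):
    """True if p and q lie on the same side of the line through a and b (cross-product sign test)."""
    cp = (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])
    cq = (b[0] - a[0]) * (q[1] - a[1]) - (b[1] - a[1]) * (q[0] - a[0])
    return cp * cq >= 0

def in_triangle(pt, a, b, c):
    """True if the 2-D point pt lies within triangle a, b, c (edges included)."""
    return same_side(pt, a, b, c) and same_side(pt, b, a, c) and same_side(pt, c, a, b)

def locate(node, x, y):
    """Descend the ternary tree from an initial triangle to the smallest triangle containing (x, y)."""
    for child in node.children:
        if in_triangle((x, y), *[v[:2] for v in child.vertices]):
            return locate(child, x, y)
    return node   # no child contains the point, so this node is the smallest enclosing triangle

The depth of this descent is exactly the quantity that ranges between log M and M.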

Space complexity

The primary data structure used in the hierarchical triangulation, in addition to a list of all the data-point coordinates, is a tree. Each node of the tree contains (a) pointers to the vertices of the corresponding triangle T, (b) pointers to the children of T, and (c) the error associated with T. A total of 7 fields are thus required for each node except the root, which has as many children as there are initial triangles.
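A compact rendering of that node layout (a sketch; the field names are chosen here and are not from the paper):

from dataclasses import dataclass, field
from typing import List

@dataclass
class TriangleNode:
    v1: int                       # pointer (index) to the first vertex in the point list
    v2: int                       # pointer to the second vertex
    v3: int                       # pointer to the third vertex
    children: List["TriangleNode"] = field(default_factory=list)  # at most three subtriangles
    error: float = 0.0            # maximum vertical error of the points inside this triangle

The three vertex pointers, up to three child pointers, and the cached error account for the 7 fields per nonroot node.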

A priority queue is used to store the triangles which can be subdivided further, and two chains of pointers indicate how the data points are split among the various triangles. These temporary structures are discarded when the triangulation is completed.

It can be shown in terms of the quantities defined that the storage requirements for the structures are:

List of data points: 3N fields
Tree: 3M - 2Nc - 2 nodes other than the root, that is, 21M - 12Nc - 20 fields (including the root)

Note that the maximum possible value of M is N, and the minimum possible value of Nc is 3. The overall storage requirement is dominated by the larger of the two quantities 3N and 21M.
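Taking the expressions above at face value (and assuming Nc denotes the number of points on the convex hull, with minimum value 3), a quick check of which term dominates for a HYDRO-sized data set:

N, M, Nc = 313, 100, 3                  # illustrative values only
point_list_fields = 3 * N               # 939 fields
tree_fields = 21 * M - 12 * Nc - 20     # 2044 fields
print(point_list_fields, tree_fields)   # here the tree term dominates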

If there are a large number of points, the permanent data structures (the list of data points and the tree) can be moved to secondary storage and organized by classical paging techniques. However, the computation time will be reduced greatly if at least the list of data points is kept in primary storage.

Implementation

The triangulation algorithm was implemented on a PDP 11/40 computer with 128K bytes of primary storage. The programs were written in PASCAL and executed under the RSX-11M operating system. The programs also were run subsequently on an IBM 370/158 under OS/MVS. With the current implementation on the PDP 11/40, which keeps all data structures resident in primary storage, the program can process up to 3000 points, and a complete hierarchical triangulation of 400 points is executed in approximately 11 seconds.

EXPERIMENTS

The most widely used model of triangulated irregular network is based on Delaunay triangulation. This triangulation, defined as the straight-line dual of a Voronoi diagram, maximizes the minimum interior angle (Lee and Schacter, 1980). Such a property, termed the equiangularity property, is fundamental for contour extraction, because it reduces the maximum distance of any interior point from a vertex of its triangle. In the experiments, therefore, we use Delaunay triangulation as a standard of comparison for hierarchical triangulation. Several different types of data were used for experimentation, as described next.
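For reference, a Delaunay triangulation of a selected point set can be produced today with standard tools; a sketch using SciPy (not the comparison code used in the paper):

import numpy as np
from scipy.spatial import Delaunay

def delaunay_triangles(points_xyz):
    """Delaunay-triangulate the (x, y) projection of a list of (x, y, z) points.
    The result's .simplices attribute holds vertex-index triples for the triangles."""
    xy = np.asarray(points_xyz, dtype=float)[:, :2]
    return Delaunay(xy)

Elevations can then be interpolated linearly within each returned triangle, exactly as with the hierarchical triangulation.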

Data set HYDRO

These data were derived from hydrographic data but were modified in order to test as many problem-prone situations as possible with a limited number (313) of data points. This data set has been used extensively in the development of other digital terrain models by the Genova Computational Geometry Group (De Floriani and others, 1981). Figure 1 shows a map of data set HYDRO.

The principal quantitative criterion for evaluation of the triangulation programs is based on a random division of the HYDRO data set into two parts, a "Training Set" of 213 points and a disjoint "Test Set" of 100 points. Subsets of M points, termed "Triangulated Sets", were selected by procedure TRIERARCHY from the Training Set, with M ranging from 50 to 213. The residue of the Training Set, containing (213 - M) points, is termed the "Approximated Set".
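The resulting set relationships can be sketched as follows (illustrative only; the `select` argument stands for any point-selection procedure, such as a TRIERARCHY-based one):

import random

def partition(points, select, n_test=100, m=50, seed=0):
    """Split points into a Test Set and a Training Set, then select M points from the
    Training Set; the unselected remainder of the Training Set is the Approximated Set."""
    rng = random.Random(seed)
    shuffled = list(points)
    rng.shuffle(shuffled)
    test_set, training_set = shuffled[:n_test], shuffled[n_test:]
    triangulated_set = select(training_set, m)
    approximated_set = [p for p in training_set if p not in triangulated_set]
    return test_set, triangulated_set, approximated_set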

Figure 2. Relationship of point sets used for testing on HYDRO data: A, Complete Set of 314 points; B, Test Set of 100 points; C, Triangulated Set of 50 points selected by procedure TRIERARCHY from A-B; D, Triangulated Sets of 100 points selected randomly from A-B; E, Triangulated Sets of 100 points selected under constrained randomization from A-B.

Three different random divisions of the original data set into a training set and a test set were tried to increase the reliability of our results. For comparison purposes, two other methods of point selection also were implemented.

The first method, RANDOM SELECTION, was simply a random choice of M points from the Training Set. The results from up to ten different random selections were averaged to obtain the comparison shown next.

The second method, CONSTRAINED RANDOM SELECTION, was based on random selection also, but attempted to obtain a more representative sample by eliminating from the list of available candidates any point within a given x, y, or z distance from the selected points. The threshold distance was adjusted so as to yield M points. The relationship of the various sets of data points is shown diagrammatically in Figure 2.
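A sketch of one way such a selection could work (illustrative; the exact elimination rule and the threshold-adjustment loop are not spelled out here, so a fixed threshold applied to each coordinate is assumed):

import random

def constrained_random_selection(points, m, threshold, seed=0):
    """Randomly pick up to m points, skipping any candidate that lies within
    `threshold` of an already selected point in x, in y, and in z."""
    rng = random.Random(seed)
    candidates = list(points)
    rng.shuffle(candidates)
    selected = []
    for x, y, z in candidates:
        too_close = any(abs(x - sx) < threshold and abs(y - sy) < threshold and abs(z - sz) < threshold
                        for sx, sy, sz in selected)
        if not too_close:
            selected.append((x, y, z))
        if len(selected) == m:
            break
    return selected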

Figure 1. Map of data set HYDRO.

Figure 3. Cumulative histogram of normalized error on Approximated Set, with hierarchical selection and triangulation.

Table 1. Error as a function of method of point selection and triangulation on data set HYDRO

                Point                 Triangu-       Test Set        Approximated Set
                selection             lation        max    mean        max    mean

Set 1           Hierarch.             Hierarch.     33.4    8.8        17.8    6.6
                Hierarch.             Delaunay      33.4    6.3        22.2    5.2
                Random
                (average of 10)       Delaunay      37.8    8.2        41.4    8.7
                Constrained Random
                (average of 10)       Delaunay      33.4    6.5        33.7    5.9

Set 2           Hierarch.             Hierarch.     37.3    8.7        13.5    5.5
                Hierarch.             Delaunay      25.4    5.6        33.7    6.3
                Random                Delaunay      39.9    7.7        37.3    8.2
                Constrained Random    Delaunay      28.7    6.2        36.0    5.8

Set 3           Hierarch.             Hierarch.     43.7    8.6        17.7    6.6
                Hierarch.             Delaunay      23.6    5.4        23.3    5.4

The Triangulated Set selected with hierarchical triangulation was retriangulated using Delaunay triangulation to determine the effect of retriangulation on the accuracy of approximation.

Figure 4. Decrease in average error on Test Set (solid line) and on Approximated Set (dashed line) as number of points in Triangulated Set is increased from 50 to 100 (TRIERARCHY).

The average and maximum errors were measured with all four methods on both the Approximated Sets and the Test Sets. The error was normalized with respect to the maximum difference between the z-values of the data points.

Figure 5. Test function EXP, showing 400 randomly selected points and contours of the analytical function.


Figure 6. Cumulative histogram of normalized error on EXP when 50 points are selected hierarchically and triangulated. Average error is 5% of the maximum difference of z-values.

Figure 8. Cumulative histogram of normalized error on CUB when 20 points are selected hierarchically and triangulated. Average error is 10%.

Table 1 shows the results obtained.

The distribution of the errors on the Approximated Set, using procedure TRIERARCHY for both point selection and triangulation, is shown in cumulative histogram form in Figure 3.

The decrease in the error in the Approximated Set as a function of the number of points selected is plotted for hierarchical triangulation in Figure 4. It must be mentioned that the maximum error on the Test Data, which is due to a point not well represented by the Training Data, does not decrease further after selection of 50 points by any method.

Data set FUNCTION

Two sets of 400 points, each selected randomly from z-values specified by analytical bivariate functions, also were used for testing how well hierarchical triangulation can approximate a complex surface.

Figure 7. Test function CUB, showing 400 randomly selected points and elevation contour lines.

The functions are:

EXP:  z = 100 { exp[-2((x - 0.3)^2 + (y - 0.3)^2)] + exp[-2((x - 0.7)^2 + (y - 0.7)^2)] },
      -2 < x < 2,  -2 < y < 2

and

CUB:  z = 100 { x^2 (x - 1) + y^2 (y - 1) + 10 },
      0.5 < x < 1.0,  0.5 < y < 1.0
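Written out directly (a sketch based on the formulas as printed above; the sampling of 400 random points per function mirrors the described setup):

import math
import random

def exp_surface(x, y):
    """Test function EXP: two Gaussian-like bumps."""
    return 100.0 * (math.exp(-2.0 * ((x - 0.3) ** 2 + (y - 0.3) ** 2))
                    + math.exp(-2.0 * ((x - 0.7) ** 2 + (y - 0.7) ** 2)))

def cub_surface(x, y):
    """Test function CUB: a cubic polynomial surface."""
    return 100.0 * (x ** 2 * (x - 1.0) + y ** 2 * (y - 1.0) + 10.0)

def sample(f, x_range, y_range, n=400, seed=0):
    """Draw n random (x, y, z) samples of f over the given ranges."""
    rng = random.Random(seed)
    return [(x, y, f(x, y))
            for x, y in ((rng.uniform(*x_range), rng.uniform(*y_range)) for _ in range(n))]

exp_points = sample(exp_surface, (-2.0, 2.0), (-2.0, 2.0))
cub_points = sample(cub_surface, (0.5, 1.0), (0.5, 1.0))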

Results on these analytical functions are shown in Figures 5-8.

CONCLUDING REMARKS

Algorithms were demonstrated for building a digital terrain model, or data structure, for observations of elevations taken at uniform or random intervals, and for evaluating the elevations at new points obtained by linear interpolation from the given data points.

Advantages of the proposed method are that the triangulation obtained depends on the elevation values as well as on the horizontal location of the data, and that the appropriate points may be selected to guarantee a given vertical accuracy.

Hierarchical triangulation is not satisfactory for plotting visually acceptable contours, because it is not constructed according to the equiangularity property. Consequently, if contours are required, it is necessary to retriangulate the selected subset of points according to the Delaunay criterion. However, the accuracy of the surface approximation is not necessarily preserved.

Acknowledgment--The authors gratefully acknowledge the support of the Italian National Research Council, and of the Conservation and Survey Division of the University of Nebraska through NASA Grant 28-004-020.

REFERENCES

Ballard, D. H., 1981, Strip trees: a hierarchical representation for curves: Comm. of the ACM, v. 24, no. 5, p. 310-321.

Davis, J. C., and McCullagh, M. J., eds., 1975, Display and analysis of spatial data: John Wiley & Sons, New York, 375 p.

De Floriani, L., Dettori, G., Falcidieno, B., Gianuzzi, V., and Pienovi, C., 1981, A graphical system for geographical data processing in a minicomputer environment: Signal Processing, no. 3, p. 253-257.

Fowler, R. F., and Little, J. J., 1979, Automatic extraction of digital terrain models: Computer Graphics, v. 13, no. 1, p. 199-207.

Lawson, C. L., 1977, Software for C1 surface interpolation, in Rice, J. R., ed., Mathematical software III: Academic Press Inc., New York, p. 92-112.

Lee, D. T., and Schacter, B. J., 1980, Two algorithms for constructing a Delaunay triangulation: Jour. Computer and Information Sciences, v. 9, no. 3, p. 219-242.

Nagy, G., and Wagle, S., 1979, Geographical data processing: Computing Surveys, v. 11, no. 2, p. 140-181.