
ADVANCED SURVEYING

FOR THE COURSE OF

CE 406: ADVANCED SURVEYING

WORLD UNIVERSITY OF BANGLADESH DEPARTMENT OF CIVIL ENGINEERING

September 2013


©

This copy is written for the students of the 5th semester of the Department of Civil Engineering, for the 3-credit course CE 406: ADVANCED SURVEYING. It is not for sale or any kind of financial profit making. All rights whatsoever in this book are strictly reserved, and no portion of it may be reproduced by any process for any purpose without the written permission of the owners.


AUTHOR

S M Tanvir Faysal Alam Chowdhoury, B.Sc. (Civil)

Lecturer, Dept. of Civil Engineering

WORLD UNIVERSITY OF BANGLADESH


1 TRIANGULATION AND TRILATERATION

1.1 GENERAL

A network of points of accurately known horizontal position is developed to provide control for topographic mapping; for charting lakes, rivers, and ocean coastlines; and for the surveys required for the design and construction of public and private works of large extent. The horizontal positions of the points can be obtained in a number of different ways in addition to traversing. These methods are triangulation, trilateration, intersection, resection, and satellite positioning.

The method of surveying called triangulation is based on the trigonometric proposition that if one side and two angles of a triangle are known, the remaining sides can be computed. Furthermore, if the direction of one side is known, the directions of the remaining sides can be determined. A triangulation system consists of a series of joined or overlapping triangles in which an occasional side is measured and the remaining sides are calculated from angles measured at the vertices of the triangles. The vertices of the triangles are known as triangulation stations. The side of the triangle whose length is predetermined is called the base line. The lines of a triangulation system form a network that ties together all the triangulation stations (Fig. 1.1).

Fig. 1.1 Triangulation network

A trilateration system also consists of a series of joined or overlapping triangles. However, for trilateration the lengths of all the sides of the triangles are measured, and only a few directions or angles are measured to establish azimuth. Trilateration has become feasible with the development of electronic distance measuring (EDM) equipment, which has made possible the measurement of all lengths with a high order of accuracy under almost all field conditions.

A combined triangulation and trilateration system consists of a network of triangles in which all the angles and all the lengths are measured. Such a combined system represents the strongest network for creating horizontal control.

Since a triangulation or trilateration system covers a very large area, the curvature of the earth has to be taken into account. These surveys are, therefore, invariably geodetic. Triangulation surveys were first carried out by Snell, a Dutchman, in 1615.

Field procedures for the establishment of trilateration stations are similar to the procedures used for triangulation; therefore, henceforth in this chapter only the term triangulation will be used.



1.2 PRINCIPLE OF TRIANGULATION

Fig. 1.2 shows two interconnected triangles ABC and BCD. All the angles in both triangles and the length L of the side AB have been measured. Also, the azimuth θ of AB has been measured at the triangulation station A, whose coordinates (X_A, Y_A) are known.

The objective is to determine the coordinates of the triangulation stations B, C, and D by the method of triangulation. Let us first calculate the lengths of all the lines. By the sine rule in ΔABC, we have

AB/sin ∠3 = BC/sin ∠1 = CA/sin ∠2

We have AB = L = l_AB

or BC = L sin ∠1/sin ∠3 = l_BC

and CA = L sin ∠2/sin ∠3 = l_CA

Now, the side BC being known, in ΔBCD by the sine rule we have

BC/sin ∠6 = CD/sin ∠4 = BD/sin ∠5

We have BC = L sin ∠1/sin ∠3

or CD = L (sin ∠1/sin ∠3)(sin ∠4/sin ∠6) = l_CD

and BD = L (sin ∠1/sin ∠3)(sin ∠5/sin ∠6) = l_BD

Let us now calculate the azimuths of all the lines.

Azimuth of AB = θ = θ_AB
Azimuth of AC = θ + ∠1 = θ_AC
Azimuth of BC = θ + 180° − ∠2 = θ_BC
Azimuth of BD = θ + 180° − (∠2 + ∠4) = θ_BD
Azimuth of CD = θ − ∠2 + ∠5 = θ_CD

From the known lengths of the sides and the azimuths, the consecutive coordinates can be computed as below.

Latitude of AB = l_AB cos θ_AB = L_AB
Departure of AB = l_AB sin θ_AB = D_AB
Latitude of AC = l_AC cos θ_AC = L_AC
Departure of AC = l_AC sin θ_AC = D_AC
Latitude of BD = l_BD cos θ_BD = L_BD
Departure of BD = l_BD sin θ_BD = D_BD

Fig. 1.2 Principle of triangulation



Latitude of CD = l_CD cos θ_CD = L_CD
Departure of CD = l_CD sin θ_CD = D_CD

The desired coordinates of the triangulation stations B, C, and D are as follows:

X-coordinate of B, X_B = X_A + D_AB
Y-coordinate of B, Y_B = Y_A + L_AB
X-coordinate of C, X_C = X_A + D_AC
Y-coordinate of C, Y_C = Y_A + L_AC
X-coordinate of D, X_D = X_B + D_BD
Y-coordinate of D, Y_D = Y_B + L_BD

It will be found that the length of a side can be computed more than once, following different routes; therefore, to achieve better accuracy, the mean of the computed lengths of a side is to be adopted.
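The computation of this article is entirely mechanical and is easily scripted. The following Python fragment illustrates the sequence (side lengths by the sine rule, azimuths, then latitudes and departures, then coordinates) for the two-triangle figure of Fig. 1.2; all station data and angle values in it are assumed for illustration only.

```python
import math

# Illustrative script for the Section 1.2 computation (Fig. 1.2).
# Angles in degrees, azimuths clockwise from north.  All inputs are assumed.

def forward(x, y, azimuth_deg, length):
    """Far end of a line: departure = l sin(az), latitude = l cos(az)."""
    az = math.radians(azimuth_deg)
    return x + length * math.sin(az), y + length * math.cos(az)

def sind(deg):
    return math.sin(math.radians(deg))

# Known data (assumed values)
XA, YA = 1000.0, 1000.0        # coordinates of station A
L = 1500.0                     # measured base line AB, metres
theta = 60.0                   # azimuth of AB, degrees

# Measured angles: each triangle must close to 180 degrees
a1, a2, a3 = 50.0, 60.0, 70.0  # angles 1, 2, 3 of triangle ABC
a4, a5, a6 = 55.0, 65.0, 60.0  # angles 4, 5, 6 of triangle BCD

# Side lengths by the sine rule
l_BC = L * sind(a1) / sind(a3)
l_CA = L * sind(a2) / sind(a3)
l_CD = L * (sind(a1) / sind(a3)) * (sind(a4) / sind(a6))
l_BD = L * (sind(a1) / sind(a3)) * (sind(a5) / sind(a6))

# Azimuths, as derived above
az_AB = theta
az_AC = theta + a1
az_BD = theta + 180.0 - (a2 + a4)

# Coordinates from latitudes and departures
XB, YB = forward(XA, YA, az_AB, L)
XC, YC = forward(XA, YA, az_AC, l_CA)
XD, YD = forward(XB, YB, az_BD, l_BD)

print(f"B ({XB:.2f}, {YB:.2f})  C ({XC:.2f}, {YC:.2f})  D ({XD:.2f}, {YD:.2f})")
```

As a check of the kind just described, C could also be computed via B and the side BC, and the mean of the two results adopted.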

1.3 OBJECTIVE OF TRIANGULATION SURVEYS

The main objective of triangulation or trilateration surveys is to provide a number of stations whose relative and absolute positions, horizontal as well as vertical, are accurately established. More detailed location or engineering surveys are then carried out from these stations.

Triangulation surveys are carried out

(i) to establish accurate control for plane and geodetic surveys of large areas by terrestrial methods,
(ii) to establish accurate control for photogrammetric surveys of large areas,
(iii) to assist in the determination of the size and shape of the earth by making observations for latitude, longitude, and gravity, and
(iv) to determine accurate locations of points in engineering works such as:
    (a) fixing the centre line and abutments of long bridges over large rivers,
    (b) fixing the centre line, terminal points, and shafts for long tunnels,
    (c) transferring control points across wide sea channels, large water bodies, etc.,
    (d) detection of crustal movements, etc., and
    (e) finding the direction of the movement of clouds.

1.4 CLASSIFICATION OF TRIANGULATION SYSTEM

Based on the extent and purpose of the survey, and consequently on the degree of accuracy desired, triangulation surveys are classified as first-order or primary, second-order or secondary, and third-order or tertiary. First-order triangulation is used to determine the shape and size of the earth, or to cover a vast area such as a whole country with control points to which a second-order triangulation system can be connected. A second-order triangulation system consists of a network within a first-order triangulation; it is used to cover areas of the order of a region, small country, or province. A third-order triangulation is a framework fixed within, and connected to, a second-order triangulation system. It serves the purpose of furnishing the immediate control for detailed engineering and location surveys.


Table 1.1 presents the general specifications for the three types of triangulation systems.

Table 1.1 Triangulation system

S.No. | Characteristics | First-order triangulation | Second-order triangulation | Third-order triangulation
1. | Length of base lines | 8 to 12 km | 2 to 5 km | 100 to 500 m
2. | Lengths of sides | 16 to 150 km | 10 to 25 km | 2 to 10 km
3. | Average triangular error (after correction for spherical excess) | less than 1″ | 3″ | 12″
4. | Maximum station closure | not more than 3″ | 8″ | 15″
5. | Actual error of base | 1 in 50,000 | 1 in 25,000 | 1 in 10,000
6. | Probable error of base | 1 in 1,000,000 | 1 in 500,000 | 1 in 250,000
7. | Discrepancy between two measures (k is distance in kilometres) | 5√k mm | 10√k mm | 25√k mm
8. | Probable error of the computed distances | 1 in 50,000 to 1 in 250,000 | 1 in 20,000 to 1 in 50,000 | 1 in 5,000 to 1 in 20,000
9. | Probable error in astronomical azimuth | 0.5″ | 5″ | 10″

1.5 TRIANGULATION FIGURES AND LAYOUTS

The basic figures used in triangulation networks are the triangle, the braced or geodetic quadrilateral, and the polygon with a central station (Fig. 1.3).

Fig. 1.3 Basic triangulation figures

The triangles in a triangulation system can be arranged in a number of ways. Some of the commonly used arrangements, also called layouts, are as follows:

1. Single chain of triangles
2. Double chain of triangles
3. Braced quadrilaterals
4. Centered triangles and polygons
5. A combination of the above systems.

1.5.1 Single chain of triangles

When the control points are required to be established in a narrow strip of terrain, such as a valley between ridges, a layout consisting of a single chain of triangles is generally used, as shown in Fig. 1.4. This system is rapid and economical due to its simplicity of sighting only four other stations, and it does not involve observations of long diagonals. On the other hand, the simple triangles of such a system provide only one route through which distances can be computed; hence this system does not provide any check on the accuracy of observations. Check base lines and astronomical observations for azimuth have to be provided at frequent intervals to avoid excessive accumulation of errors in this layout.



Fig. 1.4 Single chain of triangles

1.5.2 Double chain of triangles

A layout of a double chain of triangles is shown in Fig. 1.5. This arrangement is used for covering a belt of larger width. This system also has the disadvantages of the single chain of triangles.

Fig. 1.5 Double chain of triangles

1.5.3 Braced quadrilaterals

A triangulation system consisting of figures containing four corner stations and observed diagonals, as shown in Fig. 1.6, is known as a layout of braced quadrilaterals. In fact, a braced quadrilateral consists of overlapping triangles. This is treated as the strongest and best arrangement of triangles, since it provides a means of computing the lengths of the sides using different combinations of sides and angles. Most triangulation systems use this arrangement.

Fig. 1.6 Braced quadrilaterals

1.5.4 Centered triangles and polygons

A triangulation system which consists of figures containing interior stations in triangles and polygons, as shown in Fig. 1.7, is known as centered triangles and polygons.


Fig. 1.7 Centered triangles and polygons

This layout is generally used when a vast area is required to be covered in all directions. The centered figures are generally quadrilaterals, pentagons, or hexagons with central stations. Though this system provides checks on the accuracy of the work, it is generally not as strong as the braced quadrilateral arrangement. Moreover, the progress of work is quite slow, because more settings of the instrument are required.

1.5.5 A combination of all the above systems

Sometimes a combination of the above systems may be used, according to the shape of the area and the accuracy requirements.

1.6 LAYOUT OF PRIMARY TRIANGULATION FOR LARGE COUNTRIES

The following two types of frameworks of primary triangulation are provided for a large country to cover the entire area.

1. Grid iron system
2. Central system.

1.6.1 Grid iron system

In this system, the primary triangulation is laid in series of chains of triangles, which usually run roughly along meridians (north-south) and along perpendiculars to the meridians (east-west) throughout the country (Fig. 1.8). The distance between two such chains may vary from 150 to 250 km. The areas between the parallel and perpendicular series of primary triangulation are filled by the secondary and tertiary triangulation systems. The grid iron system has been adopted in India and in other countries such as Austria, Spain, and France.

Fig. 1.8 Grid iron system of triangulation


1.6.2 Central system

In this system, the whole area is covered by a network of primary triangulation extending in all directions from the initial triangulation figure ABC, which is generally laid at the centre of the country (Fig. 1.9). This system is generally used for the survey of an area of moderate extent. It has been adopted in the United Kingdom and various other countries.

Fig. 1.9 Central system of triangulation

1.7 CRITERIA FOR SELECTION OF THE LAYOUT OF TRIANGLES

The points mentioned below should be considered while deciding and selecting a suitable layout of triangles.

1. Simple triangles should preferably be equilateral.
2. Braced quadrilaterals should preferably be approximately square.
3. Centered polygons should be regular.
4. The arrangement should be such that the computations can be done through two or more independent routes.
5. The arrangement should be such that at least one route, and preferably two routes, form well-conditioned triangles.
6. No angle of the figure opposite a known side should be small, whichever end of the series is used for computation.
7. Angles of simple triangles should not be less than 45°; in the case of quadrilaterals, no angle should be less than 30°; and in the case of centered polygons, no angle should be less than 40°.
8. The sides of the figures should be of comparable lengths. Very long lines and very short lines should be avoided.
9. The layout should be such that it requires the least work to achieve maximum progress.
10. As far as possible, complex figures should not involve more than 12 conditions.

It may be noted that if a very small angle of a triangle does not fall opposite the known side, it does not affect the accuracy of the triangulation.


1.8 WELL-CONDITIONED TRIANGLES

The accuracy of a triangulation system is greatly affected by the arrangement of triangles in the layout and by the magnitude of the angles in the individual triangles. A triangle of such a shape that any error in angular measurement has a minimum effect upon the computed lengths is known as a well-conditioned triangle.

In any triangle of a triangulation system, the length of one side is generally obtained from computation of the adjacent triangle. Any error in the other two sides will affect the sides of the triangles whose computation is based upon their values; due to accumulated errors, the entire triangulation system is thus affected thereafter. To ensure that the two computed sides of any triangle are equally affected, they should be equal in length. This condition suggests that all the triangles must, therefore, be isosceles.

Let us consider an isosceles triangle ABC in which one side AB is of known length (Fig. 1.10). Let A, B, and C be the three angles of the triangle, and a, b, and c the three sides opposite to these angles, respectively. As the triangle is isosceles, let the sides a and b be equal.

Applying the sine rule to ΔABC, we have

a/sin A = c/sin C ... (1.1)

or a = c sin A/sin C ... (1.2)

If an error δA in the angle A and an error δC in the angle C introduce the errors δa₁ and δa₂, respectively, in the side a, then, differentiating Eq. (1.2) partially, we get

δa₁ = c cos A δA/sin C ... (1.3)

and δa₂ = − c sin A cos C δC/sin²C ... (1.4)

Dividing Eq. (1.3) by Eq. (1.2), we get

δa₁/a = cot A δA ... (1.5)

Dividing Eq. (1.4) by Eq. (1.2), we get

δa₂/a = − cot C δC ... (1.6)

If δA = δC = ±α is the probable error in the angles, then the probable error in the side a is

δa/a = ±α √(cot²A + cot²C)

But C = 180° − (A + B), or C = 180° − 2A, A being equal to B. Therefore

δa/a = ±α √(cot²A + cot²2A) ... (1.7)

From Eq. (1.7) we find that, if δa/a is to be a minimum, (cot²A + cot²2A) should be a minimum.


Fig. 1.10 Triangle in a triangulation system


Differentiating (cot²A + cot²2A) with respect to A and equating to zero, we have

4 cos⁴A + 2 cos²A − 1 = 0 ... (1.8)

Solving Eq. (1.8) for cos A, we get A = 56°14′ (approximately).

Hence, the best shape of an isosceles triangle is that in which the base angles are 56°14′ each. From practical considerations, however, an equilateral triangle may be treated as a well-conditioned triangle. In actual practice, triangles having an angle less than 30° or more than 120° should not be used.
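The root of Eq. (1.8) is easy to verify numerically. The short fragment below, given purely as a check, scans the base angle A and locates the minimum of (cot²A + cot²2A):

```python
import math

# Numerical check of Section 1.8: for an isosceles triangle with base angles
# A = B (so C = 180 deg - 2A), the error ratio delta_a / a is proportional to
# sqrt(cot^2 A + cot^2 2A).  Scan A between 45 and 90 degrees.

def error_measure(a_deg):
    a = math.radians(a_deg)
    return 1.0 / math.tan(a) ** 2 + 1.0 / math.tan(2.0 * a) ** 2

best = min((error_measure(a / 1000.0), a / 1000.0)
           for a in range(45_000, 90_000))
print(f"minimum near A = {best[1]:.3f} deg")  # ~56.234 deg, i.e. 56 deg 14 min
```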

1.9 STRENGTH OF FIGURE

The strength of figure is a factor to be considered in establishing a triangulation system, in order to keep the computations within a desired degree of precision. It also plays an important role in deciding the layout of a triangulation system.

The U.S. Coast and Geodetic Survey has developed a convenient method of evaluating the strength of a triangulation figure. It is based on the fact that computations in triangulation involve the use of the angles of a triangle and the length of one known side; the other two sides are computed by the sine rule. For a given change in the angles, the sines of small angles change more rapidly than those of large angles. This suggests that angles smaller than 30° should not be used in the computation of triangulation. If, due to unavoidable circumstances, angles less than 30° are used, it must be ensured that such an angle is not opposite the side whose length is required for carrying the triangulation series forward.

The expression given by the U.S. Coast and Geodetic Survey for evaluation of the strength of figure is for the square of the probable error (L²) that would occur in the sixth place of the logarithm of any side, if the computations are carried from a known side through a single chain of triangles after the net has been adjusted for the side and angle conditions. The expression for L² is

L² = (4/3) d² R ... (1.9)

where d is the probable error of an observed direction in seconds of arc, and R is a term which represents the shape of the figure. It is given by

R = ((D − C)/D) Σ(δ_A² + δ_A δ_B + δ_B²) ... (1.10)

where
D = the number of directions observed, excluding the known side of the figure,
δ_A, δ_B, δ_C = the difference per second in the sixth place of the logarithm of the sine of the distance angles A, B, and C, respectively (the distance angle is the angle in a triangle opposite a side), and
C = the number of geometric conditions for side and angle to be satisfied in each figure. It is given by

C = (n′ − S′ + 1) + (n − 2S + 3) ... (1.11)

where
n = the total number of lines, including the known side, in a figure,
n′ = the number of lines observed in both directions, including the known side,
S = the total number of stations, and
S′ = the number of stations occupied.

For the computation of the quantity Σ(δ_A² + δ_A δ_B + δ_B²) in Eq. (1.10), Table 1.2 may be used.

In any triangulation system, more than one route is possible between various stations. The strength of figure, decided by the factor R alone, determines the most appropriate route, i.e. the best-shaped triangulation net route. The smaller the computed value of R, the greater the strength of the figure, and vice versa.
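Eqs. (1.9) to (1.11) translate directly into code. In the illustrative fragment below, the δ values are generated from the rate of change of log sin (which is how the entries of Table 1.2 arise) rather than read from the table, and the route data correspond to Route-1 of Example 1.5 later in this chapter, so the printed R should come out near 14:

```python
import math

# Strength-of-figure computation, Section 1.9 (Eqs. 1.9 to 1.11).
# delta(A) = change, in units of the 6th decimal place, of log10(sin A)
# per one-second change in A; this reproduces the entries of Table 1.2.

SEC = math.pi / (180.0 * 3600.0)          # one second of arc, in radians

def delta(angle_deg):
    return 1e6 * SEC / math.tan(math.radians(angle_deg)) / math.log(10)

def term(A, B):
    dA, dB = delta(A), delta(B)
    return dA * dA + dA * dB + dB * dB    # tabulated quantity of Table 1.2

def figure_constants(n, n_both, S, S_occ):
    C = (n_both - S_occ + 1) + (n - 2 * S + 3)   # Eq. (1.11)
    D = 2 * (n_both - 1) + (n - n_both)          # directions, known side excluded
    return C, D

# Braced quadrilateral: all lines observed both ways, all stations occupied
C, D = figure_constants(n=6, n_both=6, S=4, S_occ=4)

# Distance-angle pairs of the two triangles of Route-1, Example 1.5
route = [(100.0, 26.0), (112.0, 38.0)]
R = (D - C) / D * sum(term(A, B) for A, B in route)   # Eq. (1.10)

d = 1.20                                              # probable error, seconds
L2 = (4.0 / 3.0) * d * d * R                          # Eq. (1.9)
print(f"C = {C}, D = {D}, R = {R:.1f}, L^2 = {L2:.1f}")
```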


Table 1.2 Values of δ_A² + δ_A δ_B + δ_B² (distance angles A and B in degrees)

For a pair of distance angles, enter the table with the larger angle as the row (A) and the smaller as the column (B); entries are given for A + B up to 180°.

A\B   10   12   14   16   18   20   22   24   26   28   30   35   40   45   50   55   60   65   70   75   80   85   90
 10  428  359
 12  359  295  253
 14  315  253  214  187
 16  284  225  187  162  143
 18  262  204  168  143  126  113
 20  245  189  153  130  113  100   91
 22  232  177  142  119  103   91   81   74
 24  221  167  134  111   95   83   74   67   61
 26  213  160  126  104   89   77   68   61   56   51
 28  206  153  120   99   83   72   63   57   51   47   43
 30  199  148  115   94   79   68   59   53   48   43   40   33
 35  188  137  106   85   71   60   52   46   41   37   33   27   23
 40  179  129   99   79   65   54   47   41   36   32   29   23   19   16
 45  172  124   93   74   60   50   43   37   32   28   25   20   16   13   11
 50  167  119   89   70   57   47   39   34   29   26   23   18   14   11    9    8
 55  162  115   86   67   54   44   37   32   27   24   21   16   12   10    8    7    5
 60  159  112   83   64   51   42   35   30   25   22   19   14   11    9    7    5    4    4
 65  155  109   80   62   49   40   33   28   24   21   18   13   10    7    6    5    4    3    2
 70  152  106   78   60   48   38   32   27   23   19   17   12    9    7    5    4    3    2    2    1
 75  150  104   76   58   46   37   30   25   21   18   16   11    8    6    4    3    2    2    1    1    1
 80  147  102   74   57   45   36   29   24   20   17   15   10    7    5    4    3    2    1    1    1    0    0
 85  145  100   73   55   43   34   28   23   19   16   14   10    7    5    3    2    2    1    1    0    0    0    0
 90  143   98   71   54   42   33   27   22   19   16   13    9    6    4    3    2    1    1    1    0    0    0    0
 95  140   96   70   53   41   32   26   22   18   15   13    9    6    4    3    2    1    1    0    0    0    0
100  138   95   68   51   40   31   25   21   17   14   12    8    6    4    3    2    1    1    0    0    0
105  136   93   67   50   39   30   25   20   17   14   12    8    5    4    2    2    1    1    0    0
110  134   91   65   49   38   30   24   19   16   13   11    7    5    3    2    2    1    1    1
115  132   89   64   48   37   29   23   19   15   13   11    7    5    3    2    2    1    1
120  129   88   62   46   36   28   22   18   15   12   10    7    5    3    2    2    1
125  127   86   61   45   35   27   22   18   14   12   10    7    5    4    3    2
130  125   84   59   44   34   26   21   17   14   12   10    7    5    4    3
135  122   82   58   43   33   26   21   17   14   12   10    7    5    4
140  119   80   56   42   32   26   20   17   14   12   10    8    6
145  116   77   55   41   32   25   21   17   15   13   11    9
150  112   75   54   40   32   26   21   18   16   15   13
152  111   75   53   40   32   26   22   19   17   16
154  110   74   53   41   33   27   23   21   19
156  108   74   54   42   34   28   25   22
158  107   74   54   43   35   30   27
160  107   74   56   45   38   33
162  107   76   59   48   42
164  109   79   63   54
166  113   86   71
168  122   98
170  143


1.10 ACCURACY OF TRIANGULATION

Errors are inevitable and, therefore, in spite of all precautions, errors accumulate. It is therefore essential to know the accuracy achieved in the triangulation network, so that no appreciable error is introduced in plotting. The following formula for the root mean square error may be used:

m = √(ΣE²/3n) ... (1.12)

where
m = the root mean square error of unadjusted horizontal angles, in seconds of arc, as obtained from the triangular errors,
ΣE² = the sum of the squares of all the triangular errors in the triangulation series, and
n = the total number of triangles in the series.

It may be noted that
(i) all the triangles have been included in the computations,
(ii) all four triangles of a braced quadrilateral have been included in the computations, and
(iii) if the average triangular error of the series is 8″, the probable error in latitudes and departures after a distance of 100 km is approximately 8 m.
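As a small illustration of Eq. (1.12), with assumed triangular closing errors:

```python
import math

# Eq. (1.12): root mean square error of unadjusted horizontal angles.
triangular_errors = [2.0, -3.0, 4.0, -1.0, 2.5]   # assumed closures, seconds
n = len(triangular_errors)                        # number of triangles
m = math.sqrt(sum(e * e for e in triangular_errors) / (3.0 * n))
print(f"m = {m:.2f} seconds")
```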

ILLUSTRATIVE EXAMPLES

Example 1.1 If the probable error of direction measurement is 1.20″, compute the maximum value of R for a desired maximum probable error of (i) 1 in 20,000 and (ii) 1 in 10,000.

Solution: (i) L being the probable error of a logarithm, it represents the logarithm of the ratio of the true value to a value containing the probable error.

In this case, L = the 6th place in log(1 ± 1/20000)

log (1 + 0.00005) = 0.0000217

The 6th place in the log value = 21. Hence L = ±21.

It is given that d = 1.20″. From Eq. (1.9), we have

L² = (4/3) d² R

R_max = 3L²/(4d²) = (3 × 21²)/(4 × 1.20²) = 230.

(ii) L = the 6th place in log(1 ± 1/10000)

log (1 + 0.0001) = 0.0000434

The 6th place in the log value = 43. Hence L = ±43.

R_max = (3 × 43²)/(4 × 1.20²) = 963.
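The arithmetic of Examples 1.1 and 1.2 can be checked in a few lines. Note that the worked solutions truncate the sixth place (21.7 is taken as 21 in case (i) above), whereas the fragment below rounds it, so its first printed value differs slightly from the 230 obtained above:

```python
import math

# R_max = 3 L^2 / (4 d^2), where L is the sixth-place change of log10 of the
# permissible length ratio and d is the probable error of a direction (sec).

def r_max(precision, d_seconds):
    L = round(1e6 * math.log10(1.0 + 1.0 / precision))
    return 3.0 * L * L / (4.0 * d_seconds ** 2)

for precision, d in [(20000, 1.20), (10000, 1.20), (25000, 1.0), (50000, 1.0)]:
    print(f"1 in {precision:>6}, d = {d} sec: R_max = {r_max(precision, d):.0f}")
```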


Example 1.2 The probable error of direction measurement is 1″. Compute the maximum value of R if the maximum probable error is (i) 1 in 25,000, (ii) 1 in 50,000.

Solution:
(i) log(1 + 1/25000) = 0.0000174

The 6th place in the log value = 17. Hence L = ±17.

From Eq. (1.9), we get R_max = 3L²/(4d²). The value of d is given as 1″.

R_max = (3 × 17²)/(4 × 1²) = 217.

(ii) log(1 + 1/50000) = 0.0000087

The 6th place in the log value = 9. Hence L = ±9.

R_max = (3 × 9²)/(4 × 1²) = 61.

Example 1.3 Compute the value of (D − C)/D for the following triangulation figures, if all the stations have been occupied and all the lines have been observed in both directions:

(i) A single triangle
(ii) A braced quadrilateral
(iii) A four-sided central-point figure without diagonals
(iv) A four-sided central-point figure with one diagonal.

Solution: (i) Single triangle (Fig. 1.11)
From Eq. (1.11), we have

C = (n′ − S′ + 1) + (n − 2S + 3)

with n′ = 3, n = 3, S = 3, S′ = 3:

C = (3 − 3 + 1) + (3 − 2 × 3 + 3) = 1

and D = the number of directions observed excluding the known side
= 2 × (total number of lines − 1) = 2 × (3 − 1) = 4

(D − C)/D = (4 − 1)/4 = 0.75.

Fig. 1.11


(ii) Braced quadrilateral (Fig. 1.12)
n = 6, n′ = 6, S = 4, S′ = 4
C = (6 − 4 + 1) + (6 − 2 × 4 + 3) = 4
D = 2 × (6 − 1) = 10
(D − C)/D = (10 − 4)/10 = 0.6.

(iii) Four-sided central-point figure without diagonals (Fig. 1.13)
n = 8, n′ = 8, S = 5, S′ = 5
C = (8 − 5 + 1) + (8 − 2 × 5 + 3) = 5
D = 2 × (8 − 1) = 14
Therefore (D − C)/D = (14 − 5)/14 = 0.64.

(iv) Four-sided central-point figure with one diagonal (Fig. 1.14)
n = 9, n′ = 9, S = 5, S′ = 5
C = (9 − 5 + 1) + (9 − 2 × 5 + 3) = 7
D = 2 × (9 − 1) = 16
Therefore (D − C)/D = (16 − 7)/16 = 0.56.

Example 1.4 Compute the value of (D − C)/D for the triangulation nets shown in Fig. 1.15 (a-d). The directions observed are shown by arrows.

Fig. 1.15



Solution: (i) Fig. 1.15a
From Eq. (1.11), we have

C = (n′ − S′ + 1) + (n − 2S + 3)

n = the total number of lines = 11
n′ = the number of lines observed in both directions = 9
S = the total number of stations = 7
S′ = the number of stations occupied = 6
C = (9 − 6 + 1) + (11 − 2 × 7 + 3) = 4

and D = the total number of directions observed, excluding the known side
= 2 × (n′ − 1) + number of lines observed in one direction
= 2 × (9 − 1) + 2 = 18

Therefore (D − C)/D = (18 − 4)/18 = 0.78.

(ii) Fig. 1.15b
n = 13, n′ = 11, S = 7, S′ = 7
C = (11 − 7 + 1) + (13 − 2 × 7 + 3) = 7
D = 2 × (11 − 1) + 2 = 22
Therefore (D − C)/D = (22 − 7)/22 = 0.68.

(iii) Fig. 1.15c
n = 13, n′ = 11, S = 7, S′ = 7
C = (11 − 7 + 1) + (13 − 2 × 7 + 3) = 7
D = 2 × (11 − 1) + 2 = 22
Therefore (D − C)/D = (22 − 7)/22 = 0.68.

(iv) Fig. 1.15d
n = 19, n′ = 19, S = 10, S′ = 10
C = (19 − 10 + 1) + (19 − 2 × 10 + 3) = 12
D = 2 × (19 − 1) + 0 = 36
Therefore (D − C)/D = (36 − 12)/36 = 0.67.


Example 1.5 Compute the strength of the figure ABCD for all the routes by which the length CD can be computed from the known side AB. Assume that all the stations were occupied.

Solution: From Eq. (1.10), we have

R = ((D − C)/D) Σ(δ_A² + δ_A δ_B + δ_B²)

For the given figure (Fig. 1.16), we have n = 6, n′ = 6, S = 4, S′ = 4, and

D = 2 × (n − 1) = 2 × (6 − 1) = 10

Hence C = (n′ − S′ + 1) + (n − 2S + 3) = (6 − 4 + 1) + (6 − 2 × 4 + 3) = 4

and (D − C)/D = (10 − 4)/10 = 0.60.

(a) Route-1, using Δs ABC and ADC with common side AC
For ΔABC, the distance angles of AB and AC are 26° and 100° (= 44° + 56°), respectively. From Table 1.2,

δ_100² + δ_100 δ_26 + δ_26² = 17

For ΔADC, the distance angles of AC and DC are 112° (= 44° + 68°) and 38°, respectively:

δ_112² + δ_112 δ_38 + δ_38² = 6

R₁ = 0.6 × (17 + 6) = 13.8 ≈ 14

(b) Route-2, using Δs ABC and BCD with common side BC
For ΔABC, the distance angles of AB and BC are 26° and 54°, respectively:

δ_54² + δ_54 δ_26 + δ_26² = 27

For ΔBCD, the distance angles of BC and CD are 68° and 56°, respectively:

δ_68² + δ_68 δ_56 + δ_56² = 4

R₂ = 0.6 × (27 + 4) = 18.6 ≈ 19

(c) Route-3, using Δs ABD and ACD with common side AD
For ΔABD, the distance angle of both the sides AB and AD is 44°:

δ_44² + δ_44 δ_44 + δ_44² = 13

For ΔACD, the distance angles of AD and CD are 30° and 38°, respectively:

δ_38² + δ_38 δ_30 + δ_30² = 31

R₃ = 0.6 × (13 + 31) = 26.4 ≈ 26

(d) Route-4, using Δs ABD and BCD with common side BD
For ΔABD, the distance angles of AB and DB are 44° and 92° (= 38° + 54°), respectively:

δ_92² + δ_92 δ_44 + δ_44² = 7

For ΔBCD, the distance angles of BD and CD are 56° (= 30° + 26°) and 56°, respectively:

δ_56² + δ_56 δ_56 + δ_56² = 7

Fig. 1.16


R₄ = 0.6 × (7 + 7) = 8.4 ≈ 8

Since the lowest value of R represents the highest strength, the best route for computing the length of CD is Route-4, having R₄ = 8.

1.11 ROUTINE OF TRIANGULATION SURVEY

The routine of a triangulation survey broadly consists of (a) field work, and (b) computations. The field work of triangulation is divided into the following operations:

(i) Reconnaissance
(ii) Erection of signals and towers
(iii) Measurement of base line
(iv) Measurement of horizontal angles
(v) Measurement of vertical angles
(vi) Astronomical observations to determine the azimuth of the lines.

1.12 RECONNAISSANCE

Reconnaissance is the preliminary field inspection of the entire area to be covered by the triangulation, together with the collection of relevant data. Since the basic principle of survey is working from the whole to the part, reconnaissance is very important in all types of surveys. It requires great skill, experience, and judgement. The accuracy and economy of triangulation depend greatly upon a proper reconnaissance survey. It includes the following operations:

1. Examination of the terrain to be surveyed.
2. Selection of suitable sites for the measurement of base lines.
3. Selection of suitable positions for triangulation stations.
4. Determination of intervisibility of triangulation stations.
5. Selection of conspicuous, well-defined natural points to be used as intersected points.
6. Collection of miscellaneous information regarding:
   (a) access to the various triangulation stations,
   (b) transport facilities,
   (c) availability of food, water, etc.,
   (d) availability of labour, and
   (e) camping ground.

Reconnaissance may be carried out effectively if accurate topographical maps of the area are available. Help of aerial photographs and mosaics, if available, is also taken. If maps and aerial photographs are not available, a rapid preliminary reconnaissance is undertaken to ascertain the general location of possible schemes of triangulation suitable for the topography. Later on, the main reconnaissance is done to examine these schemes. The main reconnaissance is a very rough triangulation; its plotting may be done by protracting the angles, and the essential features of the topography are also sketched in. The final scheme is selected by studying the relative strengths and costs of the various schemes.

For reconnaissance, the following instruments are generally employed:

1. Small theodolite and sextant for measurement of angles.
2. Prismatic compass for measurement of bearings.
3. Steel tape.
4. Aneroid barometer for ascertaining elevations.
5. Heliotropes for ascertaining intervisibility.
6. Binoculars.
7. Drawing instruments and material.
8. Guyed ladders, creepers, ropes, etc., for climbing trees.


1.12.1 Erection of signals and towers

A signal is a device erected to define the exact position of a triangulation station so that it can be observed from other stations, whereas a tower is a structure erected over a station to support the instrument and the observer; it is provided when the station, or the signal, or both are to be elevated.

Before deciding the type of signal to be used, the triangulation stations are selected. The selection of triangulation stations is based upon the following criteria.

Criteria for selection of triangulation stations

1. Triangulation stations should be intervisible. For this purpose the station points should be on the highest ground, such as hill tops, house tops, etc.
2. Stations should be easily accessible with instruments.
3. Stations should form well-conditioned triangles.
4. Stations should be so located that the lengths of sights are neither too short nor too long. Short sights cause errors of bisection and centering, while long sights cause direction error, as the signals become too indistinct for accurate bisection.
5. Stations should be at commanding positions, so as to serve as control for subsidiary triangulation and for possible extension of the main triangulation scheme.
6. Stations should be useful for providing intersected points and also for detail survey.
7. In wooded country, the stations should be selected such that the cost of clearing, cutting, and building towers is a minimum.
8. Grazing lines of sight should be avoided, and no line of sight should pass over industrial areas, to avoid irregular atmospheric refraction.

Determination of intervisibility of triangulation stations

As stated above, triangulation stations should be chosen on high ground, so that all relevant stations are intervisible. For small distances, intervisibility can be ascertained during reconnaissance by direct observation with the aid of binoculars, a contoured map of the area, or plane mirrors or heliotropes using reflected sun rays from either station.

However, if the distance between stations is large, the intervisibility is ascertained by knowing the horizontal distance between the stations, as under.

Case-I Intervisibility not obstructed by intervening ground

If the intervening ground does not obstruct the intervisibility, the distance of the visible horizon from a station of known elevation is calculated from the following formula:

h = D²(1 − 2m)/(2R) ... (1.13)

where
h = height of the station above datum,
D = distance of visible horizon,
R = earth's mean radius, and
m = mean coefficient of refraction, taken as 0.07 for sights over land and 0.08 for sights over sea.

Substituting the value of m as 0.071 and R as 6370 km in Eq. (1.13), the value of h in metres is given by

h = 0.06735 D² ... (1.14)

where D is in kilometres.


In Fig. 1.17, the distance between two stations A and B of heights h_A and h_B, respectively, is D. If D_A and D_B are the distances of the visible horizon from A and B, respectively, we have

D_A = √(h_A/0.06735) = 3.853 √h_A ... (1.15)

Fig. 1.17 Intervisibility not obstructed by intervening ground

We have D = D_A + D_B, or D_B = D − D_A.

For the known distance of the visible horizon D_B as above, the height of station B is computed. If the computed value is h′_B, then

h′_B = 0.06735 D_B² ... (1.16)

The computed value h′_B is compared with the known value h_B as below:

If h_B ≥ h′_B, the station B will be visible from A, and
if h_B < h′_B, the station B will not be visible from A.

If B is not visible from A, (h′_B − h_B) is the required height of signal to be erected at B. While deciding the intervisibility of the various stations, the line of sight should be kept at least 3 m above the point of tangency T of the earth's surface, to avoid grazing rays.
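The Case-I test of Eqs. (1.14) to (1.16) reduces to a few lines of code. The fragment below, with distances in kilometres and heights in metres as in Eq. (1.14), reproduces the data of Example 1.6 below:

```python
import math

# Case-I intervisibility check, Eqs. (1.14) to (1.16).
# h = 0.06735 D^2, with D in kilometres and h in metres.

def horizon_distance(h_m):
    return math.sqrt(h_m / 0.06735)          # Eq. (1.15): D = 3.853 sqrt(h)

def required_signal_height(h_a, h_b, dist_km):
    d_a = horizon_distance(h_a)              # horizon distance from A
    d_b = dist_km - d_a                      # remaining distance to B
    h_b_required = 0.06735 * d_b ** 2        # Eq. (1.16)
    return max(0.0, h_b_required - h_b)      # signal height needed at B

# Data of Example 1.6: h_A = 15 m, h_B = 270 m, D = 80 km
print(f"signal at B: {required_signal_height(15.0, 270.0, 80.0):.2f} m")
```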

Case-II Intervisibility obstructed by intervening ground

In Fig. 1.18, the intervening ground at C obstructs the intervisibility between the stations A and B. From Eq. (1.15), we have

D_A = 3.853 √h_A ... (1.17)

The distance D_T of the peak C from the point of tangency T is given by

D_T = D_A − D_C ... (1.18)

where D_C is the distance of C from station A.

Fig. 1.18 Intervisibility obstructed by intervening ground



and h′_C = 0.06735 D_T² ... (1.19)

h′_B = 0.06735 D_B² ... (1.20)

If h′_C > h_C, the line of sight is clear of the obstruction, and the problem becomes Case-I discussed above. If h′_C < h_C, then the signal at B is to be raised. The amount of raising required at B is computed as below.

From the similar Δs A′C′C″ and A′B′B″ in Fig. 1.19, we get

h″_C/D_C = h″_B/D

or h″_B = (D/D_C) h″_C ... (1.21)

where h″_C = h_C − h′_C.

The required height of the signal above station B₀ is

B₀B″ = (BB′ + B′B″) − BB₀ = (h′_B + h″_B) − h_B ... (1.22)

Alternate method (Captain G.T. McCaw's method)

A comparison of the elevations of the stations A and B (Fig. 1.20) decides whether the triangulation stations are intervisible or not. A direct solution suggested by Captain McCaw is known as Captain McCaw's method.

Fig. 1.20 Captain McCaw’s method of ascertaining intervisibility

Let
h_A = elevation of station A
h_B = elevation of station B
h_C = elevation of station C
2S = distance between A and B
(S + x) = distance between A and C
(S − x) = distance between C and B
h = elevation of the line of sight at C
ξ = zenith distance from A to B = (90° − vertical angle).

Fig. 1.19


From Captain McCaw's formula,

h = ½(h_B + h_A) + ½(h_B − h_A)(x/S) − (S² − x²) cosec²ξ (1 − 2m)/(2R) ... (1.23)

Practically, in most cases the zenith distance is very nearly equal to 90° and, therefore, the value of cosec²ξ may be taken as approximately equal to unity. For accurate calculations, however,

cosec²ξ = 1 + (h_B − h_A)²/(4S²) ... (1.24)

In Eq. (1.23), the value of (1 − 2m)/(2R) is usually taken as 0.06735, with S and x in kilometres and h in metres. Therefore

h = ½(h_B + h_A) + ½(h_B − h_A)(x/S) − (S² − x²) × 0.06735 ... (1.25)

If h > h_C, the line of sight is free of obstruction. In case h < h_C, the height of the tower required to raise the signal at B is computed from Eqs. (1.21) and (1.22).
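Captain McCaw's test, Eq. (1.25), is a one-line computation once the geometry is set up. The fragment below (S and x in kilometres, elevations in metres, cosec²ξ taken as unity as suggested above) reproduces Example 1.8:

```python
# Captain McCaw's intervisibility test, Eq. (1.25), with cosec^2(xi) = 1.
# S and x in kilometres, elevations in metres.

def mccaw_line_of_sight(h_a, h_b, two_s_km, dist_a_to_c_km):
    s = two_s_km / 2.0
    x = dist_a_to_c_km - s                       # (S + x) is the A-to-C distance
    return (0.5 * (h_b + h_a)
            + 0.5 * (h_b - h_a) * x / s
            - (s * s - x * x) * 0.06735)         # curvature-refraction term

# Example 1.8: P (200 m) to Q (995 m), 105 km apart; peak M 38 km from P.
h = mccaw_line_of_sight(200.0, 995.0, 105.0, 38.0)
print(f"line of sight at M: {h:.2f} m")          # ~316.2 m > 301 m, so clear
```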

ILLUSTRATIVE EXAMPLES

Example 1.6 Two stations A and B, 80 km apart, have elevations of 15 m and 270 m above mean sea level, respectively. Calculate the minimum height of the signal at B.

Solution: (Fig. 1.21)
It is given that h_A = 15 m, h_B = 270 m, D = 80 km.

Fig. 1.21

From Eq. (1.15), we get

D_A = 3.853 √h_A = 3.853 √15 = 14.92 km

We have D_B = D − D_A = 80 − 14.92 = 65.08 km

Therefore h′_B = 0.06735 D_B² = 0.06735 × 65.08² = 285.25 m

Hence, since the elevation of B is 270 m, the height of signal required at B is

285.25 − 270 = 15.25 ≈ 15.5 m.



Example 1.7 There are two stations P and Q at elevations of 200 m and 995 m, respectively. The distance of Q from P is 105 km. If the elevation of a peak M at a distance of 38 km from P is 301 m, determine whether Q is visible from P or not. If not, what would be the height of scaffolding required at Q so that Q becomes visible from P?

Solution: (Fig. 1.22)
From Eq. (1.15), we get

PT = 3.853 √200 = 54.45 km

Therefore MT = PT − PM = 54.45 − 38 = 16.45 km

Using Eq. (1.14) and the value of MT, we get

MM′ = 0.06735 × 16.45² = 18.23 m

The distance of Q from the point of tangency T is

QT = 105 − 54.45 = 50.55 km

Therefore QQ′ = 0.06735 × 50.55² = 172.10 m

Fig. 1.22

From the similar Δs P′M′M″ and P′Q′Q″, we have

M′M″/PM = Q′Q″/PQ

Q′Q″ = (PQ/PM) M′M″ = (PQ/PM)(MM″ − MM′) = (105/38) × (301 − 18.23) = 781.34 m

We have QQ″ = QQ′ + Q′Q″ = 172.10 + 781.34 = 953.44 m

As the elevation of Q, 995 m, is more than 953.44 m, the peak at M does not obstruct the line of sight.

Alternatively, from the similar Δs P′M′M₀ and P′Q′Q₀, we have

M′M₀/PM = Q′Q₀/PQ

or M′M₀ = (PM/PQ) Q′Q₀


= (PM/PQ)(QQ₀ − QQ′) = (38/105) × (995 − 172.10) = 297.81 m

The elevation of the line of sight P′Q₀ at M is

MM₀ = MM′ + M′M₀ = 18.23 + 297.81 = 316.04 m

Since the elevation of the peak at M is 301 m, the line of sight is not obstructed by the peak and, therefore, no scaffolding is required at Q.

Example 1.8 Solve the problem given in Example 1.7 by Capt. McCaw's method.

Solution: (Fig. 1.22)
From Eq. (1.25), the elevation of the line of sight at M joining the two stations is

h = ½(h_Q + h_P) + ½(h_Q − h_P)(x/S) − (S² − x²) × 0.06735

It is given that
h_P = 200 m, h_Q = 995 m, h_M = 301 m
2S = 105 km, or S = 52.5 km
S + x = 38 km, or x = −14.5 km

Therefore

h = ½ × (995 + 200) + ½ × (995 − 200) × (−14.5/52.5) − (52.5² − 14.5²) × 0.06735 = 316.24 m

The elevation of the line of sight P′Q₀ at M is 316.24 m, and the elevation of the peak is 301 m; therefore, the line of sight is clear of obstruction.

Example 1.9 In a triangulation survey, the altitudes of two proposed stations A and B, 100 km apart, are 425 m and 705 m, respectively. The intervening ground situated at C, 60 km from A, has an elevation of 435 m. Ascertain if A and B are intervisible, and if necessary find by how much B should be raised so that the line of sight is nowhere less than 3 m above the surface of the ground. Take R = 6400 km and m = 0.07.

Solution: (Fig. 1.20)
From the given data we have
h_A = 425 m, h_B = 705 m, h_C = 435 m, R = 6400 km, m = 0.07
2S = 100 km, or S = 50 km
S + x = 60 km, or x = 10 km

Eq. (1.23) gives

h′_C = ½(h_B + h_A) + ½(h_B − h_A)(x/S) − (S² − x²) cosec²ξ (1 − 2m)/(2R)

Taking cosec²ξ = 1 and substituting the given data in the above equation, we have

h′_C = ½ × (705 + 425) + ½ × (705 − 425) × (10/50) − (50² − 10²) × 1 × (1 − 2 × 0.07)/(2 × 6400) × 1000 = 431.75 m


As the elevation of the line of sight at C is less than the elevation of C, the line of sight fails to clear C by

435 − 431.75 = 3.25 m

To avoid grazing rays, the line of sight should be at least 3 m above the ground. Therefore, the line of sight should be raised by 3.25 + 3 = 6.25 m at C.

Hence, the minimum height of signal to be erected at B

= 6.25 × (100/60) = 10.42 m.

Station mark

The triangulation stations should be permanently marked on the ground, so that the theodolite and signal may be centered accurately over them. The following points should be considered while marking the exact position of a triangulation station:

(i) The station should be marked on a perfectly stable foundation or rock. A station mark on a large rock is generally preferred, so that the theodolite and observer can stand on it. Generally, a hole 10 to 15 cm deep is made in the rock, and a copper or iron bolt is fixed with cement.
(ii) If no rock is available, a large stone is embedded about 1 m deep into the ground, with a circle and dot cut on it. A second stone with a circle and dot is placed vertically above the first stone.
(iii) A G.I. pipe of about 25 cm diameter, driven vertically into the ground to a depth of one metre, also serves as a good station mark.
(iv) The mark may be set on a concrete monument. The station should be marked with a copper or bronze tablet. The name of the station and the date on which it was set should be stamped on the tablet.
(v) In earth, generally two marks are set, one about 75 cm below the surface of the ground, and the other extending a few centimetres above the surface of the ground. The underground mark may consist of a stone with a copper bolt in the centre, or a concrete monument with a tablet mark set on it (Fig. 1.23).
(vi) The station mark, with a vertical pole placed centrally, should be covered with a conical heap of stones placed symmetrically. This arrangement of marking the station is known as placing a cairn (Fig. 1.27).
(vii) Three reference marks at some distance, on fairly permanent features, should be established to relocate the station mark if it is disturbed or removed.
(viii) Surrounding the station mark, a platform 3 m × 3 m × 0.5 m should be built up of earth.

1.13 SIGNALS

Signals are centered vertically over the station mark, and the observations are made to these signals from other stations. The accuracy of triangulation is entirely dependent on the degree of accuracy of centering the signals. It is therefore essential that the signals are truly vertical and centered over the station mark. The greatest care in centering the transit over the station mark will be useless unless an equal degree of care is taken in centering the signal.

Fig. 1.23 Station mark



A signal should fulfil the following requirements:

(i) It should be conspicuous and clearly visible against any background. To make the signal conspicuous, it should be kept at least 75 cm above the station mark.
(ii) It should be capable of being accurately centered over the station mark.
(iii) It should be suitable for accurate bisection from other stations.
(iv) It should be free from phase, or should exhibit little phase (cf. Sec. 1.15).

1.13.1 Classification of signals

The signals may be classified as under:

(i) Non-luminous, opaque, or daylight signals
(ii) Luminous signals.

(i) Non-luminous signals

Non-luminous signals are used during day time and for short distances. They are of various types, and the most commonly used are the following.

(a) Pole signal (Fig. 1.24): It consists of a round pole painted black and white in alternate strips, and is supported vertically over the station mark, generally on a tripod. Pole signals are suitable up to a distance of about 6 km.

(b) Target signal (Fig. 1.25): It consists of a pole carrying two square or rectangular targets placed at right angles to each other. The targets are generally made of cloth stretched on wooden frames. Target signals are suitable up to a distance of 30 km.

Fig. 1.24 Pole signal Fig. 1.25 Target signal

(c) Pole and brush signal (Fig. 1.26): It consists of a straight pole about 2.5 m long, with a bunch of long grass tied symmetrically round the top, making a cross. The signal is erected vertically over the station mark by heaping a pile of stones up to 1.7 m round the pole. A rough coat of whitewash is given to make it more conspicuous against a black background. These signals are very useful, and must be erected over every station of observation during reconnaissance.

(d) Stone cairn (Fig. 1.27): A pile of stones heaped in a conical shape about 3 m high, with a cross-shaped signal erected over the heap, is a stone cairn. This whitewashed opaque signal is very useful if the background is dark.


Fig. 1.26 Pole and brush signal Fig. 1.27 Stone cairn

(e) Beacons (Fig. 1.28): A beacon consists of red and white cloth tied round three straight poles. The beacon can easily be centered over the station mark. It is very useful for making simultaneous observations.

(ii) Luminous signals

Luminous signals may be classified into two types:

(i) Sun signals
(ii) Night signals.

(a) Sun signals (Fig. 1.29): Sun signals reflect the rays of the sun towards the station of observation, and are also known as heliotropes. Such signals can be used only in the day time, in clear weather.

Heliotrope: It consists of a circular plane mirror with a small hole at its centre to reflect the sun rays, and a sight vane with an aperture carrying cross-hairs. The circular mirror can be rotated horizontally as well as vertically through 360°. The heliotrope is centered over the station mark, and the line of sight is directed towards the station of observation. The sight vane is adjusted, looking through the hole, till the flashes given from the station of observation fall at the centre of the cross of the sight vane. Once this is achieved, the heliotrope is not disturbed. Now the heliotrope frame carrying the mirror is rotated in such a way that the black shadow of the small central hole of the plane mirror falls exactly at the cross of the sight vane. By doing so, the reflected beam of rays will be seen at the station of observation. Due to the motion of the sun, this small shadow also moves, and it should be constantly ensured that the shadow always remains at the cross till the observations are over.

Fig. 1.28 Beacon

Fig. 1.29 Heliotrope


Heliotropes do not give results as good as those obtained with opaque signals. They are useful when the signal station is in a flat plain and the station of observation is on elevated ground. When the distance between the stations exceeds 30 km, heliotropes become very useful.

(b) Night signals: When the observations are required to be made at night, night signals of the following types may be used:

1. Various forms of oil lamps with parabolic reflectors, for sights less than 80 km.
2. Acetylene lamp designed by Capt. McCaw, for sights more than 80 km.
3. Magnesium lamp with parabolic reflectors, for long sights.
4. Drummond's light, consisting of a small ball of lime placed at the focus of a parabolic reflector and raised to a very high temperature by impinging on it a stream of oxygen.
5. Electric lamps.

1.14 TOWERS

A tower is erected at a triangulation station when the station, or the signal, or both are to be elevated to make the observations possible from other stations, in case of problems of intervisibility. The height of the tower depends upon the character of the terrain and the length of the sight.

The towers generally have two independent structures. The outer structure is for supporting the observer and the signal, whereas the inner one supports the instrument only. The two structures are made entirely independent of each other, so that the movement of the observer does not disturb the instrument setting. The towers may be made of masonry, timber, or steel. For small heights, masonry towers are most suitable. Timber scaffolds are most commonly used, and have been constructed to heights over 50 m. Steel towers made of light sections are very portable, and can be easily erected and dismantled. Bilby towers, patented by J.S. Bilby of the U.S. Coast and Geodetic Survey, are popular for heights ranging from 30 to 40 m. Such a tower, weighing about 3 tonnes, can be easily erected by five persons in just 5 hours. A schematic of such a tower is shown in Fig. 1.30.

Fig. 1.30 Bilby tower

1.15 PHASE OF A SIGNAL

When cylindrical opaque signals are used, they require a correction to the observed horizontal angles due to an error known as phase. The cylindrical signal is partly illuminated by the sun, while the other part remains in shadow and becomes invisible to the observer. While making the observations, the observer may bisect the bright portion or the bright line; the signal is then not bisected at its centre, and an error due to wrong bisection is introduced. Phase is, thus, the apparent displacement of the signal. The phase correction is necessary so that the observed horizontal angles may be reduced to those corresponding to the centre of the signal.

Depending upon the method of observation, the phase correction is computed under the following two conditions.


(i) Observation made on the bright portion

In Fig. 1.31, a cylindrical signal of radius r is centered over the station P. The illuminated portion of the signal which the observer at O is able to see is AB. The observer at station O makes the observation on the bright portion AB; let C be the midpoint of AB.

Let
θ = the angle between the sun and the line OP,
α₁ and α₂ = the angles BOP and AOP, respectively,
D = the horizontal distance OP,
α = half of the angle AOB = ½(α₂ − α₁), and
β = the phase correction = α₁ + α = α₁ + ½(α₂ − α₁)

or β = ½(α₁ + α₂) ... (1.26)

From ΔOAP we get

tan α₂ = r/D

α₂ being small, we can write

α₂ = r/D radians ... (1.27)

As the distance PF is very small compared to OP, OF may be taken as OP. Thus, from the right-angled ΔBFO, we get

tan α₁ = BF/OF = BF/OP = BF/D ... (1.28)

From ΔPFB, we get

BF = r sin (90° − θ) = r cos θ

Substituting the value of BF in Eq. (1.28), we get

tan α₁ = r cos θ/D

α₁ being small, we can write

α₁ = r cos θ/D radians ... (1.29)

Substituting the values of α₁ and α₂ in Eq. (1.26), we have

β = ½ (r cos θ/D + r/D) = (r/D)(1 + cos θ)/2 = (r/D) cos²(θ/2) radians ... (1.30)

= r cos²(θ/2)/(D sin 1″) seconds

β = 206265 (r/D) cos²(θ/2) seconds ... (1.31)

Fig. 1.31 Phase correction when observation is made on the bright portion


(ii) Observations made on the bright line

In this case, the bright line at C on the cylindrical signal of radius r is sighted from O (Fig. 1.32).

Let
CO = the reflected ray of the sun from the bright line at C,
β = the phase correction, and
θ = the angle between the sun and the line OP.

The rays of the sun are always parallel to each other; therefore, SC is parallel to OS₁.

∠SCO = 180° − (θ − β)

∠PCO = 180° − ½ ∠SCO = 180° − ½[180° − (θ − β)] = 90° + ½(θ − β) ... (1.32)

Therefore, ∠CPO = 180° − (β + ∠PCO) ... (1.33)

Substituting the value of ∠PCO from Eq. (1.32) in Eq. (1.33) and simplifying, we get

∠CPO = 90° − ½(θ + β)

As β is very small compared to θ, it can be ignored. Therefore

∠CPO = 90° − θ/2

From the right-angled ΔCFP, we have

CF/CP = sin ∠CPO = sin (90° − θ/2)

or CF = r sin (90° − θ/2) ... (1.34)

From ΔCFO, we get

tan β = CF/OF ... (1.35)

PF being very small compared to OP, OF may be taken as OP. Substituting the value of CF from Eq. (1.34) and taking OF equal to D, Eq. (1.35) becomes

tan β = r sin (90° − θ/2)/D

Fig. 1.32 Phase correction when observation is made on the bright line



or β = (r/D) cos (θ/2) radians

β = 206265 (r/D) cos (θ/2) seconds ... (1.36)

The phase correction β is applied to the observed horizontal angles in the following manner.

Let there be four stations S₁, S₂, P, and O, as shown in Fig. 1.33. The observer is at O, and the angles S₁OP and POS₂ have been measured from O as θ′₁ and θ′₂, respectively. If the required corrected angles are θ₁ and θ₂, then

θ₁ = θ′₁ + β
and θ₂ = θ′₂ − β

where β is the phase correction. While applying the corrections, the direction of the phase correction and the positions of the observed stations with respect to the line OP must be noted carefully.

Fig. 1.33 Applying the phase correction to the measured horizontal angles
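Both phase corrections, Eqs. (1.31) and (1.36), follow the same pattern and are easily evaluated; the illustrative fragment below reproduces Example 1.10. For θ greater than 180°, cos (θ/2) is negative, and its magnitude is used here, as in the worked example:

```python
import math

RHO = 206265.0   # seconds of arc per radian

def phase_bright_portion(r_m, dist_m, theta_deg):
    """Eq. (1.31): observation made on the bright portion."""
    return RHO * r_m * math.cos(math.radians(theta_deg) / 2.0) ** 2 / dist_m

def phase_bright_line(r_m, dist_m, theta_deg):
    """Eq. (1.36): observation made on the bright line."""
    return RHO * r_m * abs(math.cos(math.radians(theta_deg) / 2.0)) / dist_m

# Example 1.10: diameter 4 m (r = 2 m), D = 6950 m, theta = 315 - 35 = 280 deg
print(f"{phase_bright_portion(2.0, 6950.0, 280.0):.2f} sec")  # ~34.83
print(f"{phase_bright_line(2.0, 6950.0, 280.0):.2f} sec")     # ~45.47
```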

ILLUSTRATIVE EXAMPLES

Example 1.10 A cylindrical signal of diameter 4 m was erected at station B. Observations were made on the signal from station A. Calculate the phase corrections when the observations were made (i) on the bright portion, and (ii) on the bright line. Take the distance AB as 6950 m, and the bearings of the sun and the station B as 315° and 35°, respectively.

Solution: Given that

θ = bearing of the sun − bearing of B = 315° − 35° = 280°
r = diameter/2 = 4/2 = 2 m
D = 6950 m

(i) (Fig. 1.31) From Eq. (1.31), the phase correction is

β = 206265 (r/D) cos²(θ/2) = 206265 × (2/6950) × cos²(280°/2) = 34.83 seconds.

(ii) (Fig. 1.32) From Eq. (1.36), the phase correction is

β = 206265 (r/D) cos (θ/2) = 206265 × (2/6950) × cos (280°/2) = 45.47 seconds.



Example 1.11 The horizontal angle measured between two stations P and Q at station R was 38°29′30″. The station Q is situated to the right of the line RP. The diameter of the cylindrical signal erected at station P was 3 m, and the distance between P and R was 5180 m. The bearings of the sun and the station P were measured as 60° and 15°, respectively. If the observations were made on the bright line, compute the correct horizontal angle PRQ.

Solution: (Fig. 1.34)
From the given data,

θ = 60° − 15° = 45°
D = 5180 m
r = 1.5 m

From Eq. (1.36), we get

β = 206265 (r/D) cos (θ/2) = 206265 × (1.5/5180) × cos (45°/2) = 55.18 seconds

The correct horizontal angle PRQ = 38°29′30″ + β = 38°29′30″ + 55.18″ = 38°30′25.18″.

1.16 MEASUREMENT OF BASE LINE

The accuracy of an entire triangulation system depends on that attained in the measurement of the base line; therefore, the measurement of the base line forms the most important part of the triangulation operations. As the base line forms the basis for the computations of the triangulation system, it is laid down with great accuracy in its measurement and alignment. The length of the base line depends upon the grade of the triangulation. The length of the base is also governed by the desirability of securing strong figures in the base net; ordinarily, the longer the base, the easier it is to secure strong figures.

The base is connected to the triangulation system through a base net. This connection may be made through a simple figure, as shown in Fig. 1.35, or through much more complicated figures, as discussed under base line extension (Sec. 1.16.3).

Fig. 1.35 Base net

Apart from the main base line, several other check bases are also measured at suitable intervals. In India, ten bases were measured; the lengths of nine of them vary from 6.4 to 7.8 miles, and that of the tenth base is 1.7 miles.

Fig. 1.34


1.16.1 Selection of site for base line

Since the accuracy of the measurement of the base line depends upon the site conditions, the following points should be taken into consideration while selecting the site.

1. The site should be fairly level or gently undulating. If the ground is sloping, the slope should be uniform and gentle.
2. The site should be free from obstructions throughout the length of the base line.
3. The ground should be firm and smooth.
4. The two extremities of the base line should be intervisible.
5. The site should be such that well-conditioned triangles can be obtained while connecting its extremities to the main triangulation stations.
6. The site should be such that a minimum length of the base line, as specified, is available.

1.16.2 Equipment for base line measurement
Generally, the following types of base measuring equipment are used:
1. Standardised tapes: These are used for measuring short bases in plain grounds.
2. Hunter's short base: It is used for measuring an 80 m long base line, and its extension is made by the subtense method.
3. Tacheometric base measurements: Used in undulating ground for small bases (cf. Chapter 8 of Plane Surveying).
4. Electronic distance measurement: This is used for fairly long distances and has been discussed in Chapter 11.
Standardised tapes: For measuring short bases in plain areas, standardised tapes are generally used. After the length has been measured, the correct length of the base is calculated by applying the required corrections. For details of the corrections, refer to Chapter 3 of Plane Surveying. If the triangulation system is of an extensive nature, the corrected length of the base is reduced to mean sea level.

Hunter's short base: Dr. Hunter, a Director of the Survey of India, designed an equipment to measure the base line, which was named Hunter's short base. It consists of four chains, each of 22 yards (20.117 m), linked together. There are five stands: three intermediate two-legged stands, and two three-legged stands at the ends (Fig. 1.36). A 1 kg weight is suspended at the end of an arm so that the chains remain straight during observations. The correct length of the individual chains is supplied by the manufacturer or is determined in the laboratory. The lengths of the joints between two chains at the intermediate supports are measured directly with the help of a graduated scale. To obtain the correct length between the centres of the targets, the usual corrections such as temperature, sag, slope, etc., are applied.

To set up the Hunter's short base, the stand at the end A (marked in red colour) is centered on the ground mark and the target is fitted with a clip. The target A is made truly vertical so that the notch on its tip side is centered on the ground mark. The end of the base is hooked to the plate A and is spread carefully till its other end is reached. In between, at every joint of the chains, two-legged supports are fixed to carry the base. The end B (marked in green colour) is fixed to the B stand, and the 1 kg weight is attached at the end of the lever. While fixing the end supports A and B, it should be ensured that their third legs face each other under the base. Approximate alignment of the base is then done by eye judgement.

For final alignment, a theodolite is set up exactly over the notch of the target A, levelled and centered accurately. The target at B is then bisected. All intermediate supports are set in line with the vertical cross-hair of the theodolite. At the end, it is again ensured that all the intermediate supports and the target B are in one line.

In case the base is spread along undulating ground, a slope correction is applied. To measure the slope angles of the individual supports, a target is fixed to a long iron rod of such a length that it is as high above the tape at A as the trunnion axis of the theodolite. The rod is held vertically at each support, and the vertical angles for each support are read.


Fig. 1.36 Hunter’s short base

ILLUSTRATIVE EXAMPLES

Example 1.12 A tape of standard length 20 m at 85°F was used to measure a base line. The measured distance was 882.10 m, the slopes for the various segments of the line being as follows:

Segment    Slope
100 m      2°20′
150 m      4°12′
50 m       1°06′
200 m      7°45′
300 m      3°00′
82.10 m    5°10′

Find the true length of the base line if the mean temperature during measurement was 63°F. The coefficient of expansion of the tape material is 6.5 × 10⁻⁶ per °F.

Solution: (refer to Sec. 3.5 of Plane Surveying):
Correction for temperature
Ct = α (tm − t0) L = 6.5 × 10⁻⁶ × (63 − 85) × 882.10
   = 0.126 m (subtractive)
Correction for slope
Cs = Σ[(1 − cos α) L]
   = (1 − cos 2°20′) × 100 + (1 − cos 4°12′) × 150 + (1 − cos 1°06′) × 50 + (1 − cos 7°45′) × 200 + (1 − cos 3°00′) × 300 + (1 − cos 5°10′) × 82.10
   = 3.079 m (subtractive)
Total correction = Ct + Cs = 0.126 + 3.079 = 3.205 m (subtractive)
Corrected length = 882.10 − 3.205 = 878.895 m.
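The same computation in a few lines of Python (a sketch of Example 1.12 only; variable names are ours). The slope correction is summed segment by segment exactly as above; small rounding differences against the worked figures are to be expected.

import math

alpha = 6.5e-6                    # coefficient of expansion per deg F
L, t_m, t_0 = 882.10, 63.0, 85.0  # measured length (m), mean and standard temperatures (F)

C_t = alpha * (t_m - t_0) * L     # temperature correction (negative, i.e. subtractive)

segments = [(100.0, (2, 20)), (150.0, (4, 12)), (50.0, (1, 6)),
            (200.0, (7, 45)), (300.0, (3, 0)), (82.10, (5, 10))]
C_s = sum((1.0 - math.cos(math.radians(deg + minute / 60.0))) * length
          for length, (deg, minute) in segments)   # slope correction (subtractive)

print(L + C_t - C_s)   # ~878.9 m, cf. the worked answer of 878.895 m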

Example 1.13 A base line was measured between two points A and B at an average elevation of 224.35 m. The corrected length after applying all corrections was 149.3206 m. Reduce the length to mean sea level. Take the earth's mean radius as 6367 km.


Solution: (Refer Sec. 3.5 of Plane Surveying): The reduced length at mean sea level is
L′ = R L/(R + h)
   = 6367 × 149.3206/(6367 + 224.35/1000)
   = 149.3152 m.
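A one-line numerical check of the reduction, with R and h expressed in the same units (a sketch; the names are ours):

R = 6367.0 * 1000.0   # earth's mean radius (m)
h = 224.35            # average elevation of the base (m)
L = 149.3206          # corrected measured length (m)
print(R * L / (R + h))   # ~149.3153 m, cf. 149.3152 m above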

1.16.3 Extension of base line
Usually the length of the base line is much shorter than the average length of the sides of the triangles. This is mainly due to the following reasons:
(a) It is often not possible to get a suitable site for a longer base.
(b) Measurement of a long base line is difficult and expensive.
The extension of a short base is done by forming a base net consisting of well-conditioned triangles. There is a great variety of extension layouts, but the following important points should be kept in mind in selecting one.
(i) Small angles opposite the known sides must be avoided.
(ii) The length of the base line should be as long as possible.
(iii) The length of the base line should be comparable with the mean side length of the triangulation net.
(iv) The ratio of base length to mean side length should be at least 0.5 so as to form well-conditioned triangles.
(v) The net should have sufficient redundant lines to provide three or four side equations within the figure.
(vi) Subject to the above, it should provide the quickest extension with the fewest stations.
There are two ways of connecting the selected base to the triangulation stations:
(a) extension by prolongation, and
(b) extension by double sighting.
(a) Extension by prolongation
Let us suppose that AB is a short base line (Fig. 1.37) which is required to be extended four times. The following steps are involved to extend AB.

Fig. 1.37 Base extension by prolongation


(i) Select two points C and D on either side of AB such that the triangles BAC and BAD are well-conditioned.
(ii) Set up the theodolite over the station A, and prolong the line AB accurately to a point E which is visible from points C and D, ensuring that triangles AEC and AED are well-conditioned.
(iii) In triangle ABC the side AB is measured. The lengths of AC and AD are computed using the measured angles of the triangles ABC and ABD, respectively.
(iv) The length of AE is calculated using the measured angles of triangles ACE and ADE, taking the mean value.
(v) The length of BE is also computed in a similar manner using the measured angles of the triangles BEC and BDE. The sum of the lengths of AB and BE should agree with the length of AE obtained in step (iv).
(vi) If found necessary, the base can be extended to H in a similar way.
(b) Extension by double sighting
Let AB be the base line (Fig. 1.38). To extend the base to the length of the side EF, the following steps are involved.
(i) Choose intervisible points C, D, E, and F.
(ii) Measure all the angles marked in triangles ABC and ABD. The most probable values of these angles are found by the theory of least squares discussed in Chapter 2.
(iii) Calculate the length of CD from these angles and the measured length AB, by applying the sine law to triangles ACB and ADB first, and then to triangles ADC and BDC.

Fig. 1.38 Base extension by double sighting

(iv) The new base line CD can be further extended to the length EF following the same procedure as above. The line EF may form a side of the triangulation system.
If the base line AB is measured on a good site which is well located for extension and connection to the main triangulation system, the accuracy of the system is not much affected by the extension of the base line. In fact, in some cases, the accuracy may be higher than that of a longer base line measured over poor terrain.
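Step (iii) of the double-sighting procedure is just repeated application of the sine rule. The sketch below illustrates the chaining with purely hypothetical angle values (the text gives none); the helper computes a wanted side from a known side and the two opposite angles.

import math

def side_by_sine_rule(known_side, angle_opp_known_deg, angle_opp_wanted_deg):
    """Sine rule: wanted / sin(angle opposite wanted) = known / sin(angle opposite known)."""
    return (known_side * math.sin(math.radians(angle_opp_wanted_deg))
            / math.sin(math.radians(angle_opp_known_deg)))

# Hypothetical figures: in triangle ACB the measured base AB = 500.00 m lies
# opposite an angle of 38 deg and the wanted side CB opposite 72 deg; the
# result then carries the extension one stage further to CD in triangle BDC.
CB = side_by_sine_rule(500.00, 38.0, 72.0)
CD = side_by_sine_rule(CB, 44.0, 61.0)   # again purely hypothetical angles
print(CB, CD)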

1.17 MEASUREMENT OF HORIZONTAL ANGLES
The instruments used for triangulation surveys require a great degree of precision. Horizontal angles are generally measured with an optical or electronic theodolite in primary and secondary triangulation. For tertiary triangulation, a transit or Engineer's transit having a least count of 20″ is generally used.

Various types of theodolites have been discussed in Sec. 4.4.5 of Plane Surveying. The salient features of modern theodolites are as follows:
(i) They are small in dimension and light in weight.
(ii) The graduations are engraved on glass circles and are much finer.
(iii) The mean of two readings on opposite sides of the circle can be read directly through an eyepiece, saving observation time.
(iv) There is no necessity to adjust the micrometers.


(v) They are provided with an optical plummet which makes possible accurate centering of the instrument even in high winds.
(vi) They are waterproof and dustproof.
(vii) They are provided with an electrical arrangement for illumination during nights if necessary.
(viii) Electronic theodolites directly display the value of the angle on an LCD or LED.

1.17.1 Methods of observation of horizontal angles
The horizontal angles of a triangulation system can be observed by the following methods:
(i) Repetition method
(ii) Reiteration method.
The procedure of observation of the horizontal angles by the above methods has been discussed in Sec. 4.5 of Plane Surveying.
(i) Repetition method
For measuring an angle to the highest degree of precision, several sets of repetitions are usually taken.

There are the following two methods of taking a single set.
(a) In the first method, the angle is measured clockwise by 6 repetitions keeping the telescope normal. The first value of the angle is obtained by dividing the final reading by 6. The telescope is inverted, and the angle is measured again in the anticlockwise direction by 6 repetitions. The second value of the angle is obtained by dividing the final reading by 6. The mean of the first and second values of the angle is the average value of the angle for the first set. For first-order work, five or six sets are usually required. The final value of the angle is the mean of the values obtained by the different sets.
(b) In the second method, the angle is measured clockwise by six repetitions, the first three with the telescope normal and the last three with the telescope inverted. The first value of the angle is obtained by dividing the final reading by 6. Now, without altering the reading obtained in the sixth repetition, the explement angle (i.e., 360° minus the angle) is measured clockwise by six repetitions, the first three with the telescope inverted and the last three with the telescope normal. The final reading should theoretically be zero. If the final reading is not zero, the error is noted, and half of the error is distributed to the first value of the angle. The result is the corrected value of the angle by the first set. As many sets as desired are taken, and the mean of the values of the various sets is the average value of the angle. For more accurate work, and to eliminate errors due to inaccurate graduations of the horizontal circle, the initial reading at the beginning of each set may not be set to zero but to different values. If n sets are required, the initial setting should be successively increased by 180°/n. For example, for 6 sets the initial readings would be 0°, 30°, 60°, 90°, 120° and 150°, respectively.
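A small sketch of the bookkeeping in method (b), with hypothetical readings (the sign with which the half-error is applied follows the field record):

n_sets = 6
# Initial circle settings advance by 180/n per set: 0, 30, 60, 90, 120, 150 degrees
print([i * 180.0 / n_sets for i in range(n_sets)])

# One set, hypothetical readings: six repetitions of the angle accumulate to
# 231.0030 deg, so the first value is 231.0030/6; six repetitions of the
# explement then leave a closing reading of +0.0012 deg instead of zero,
# and half of this closing error is distributed to the first value.
first_value = 231.0030 / 6.0
closing_error = 0.0012
print(first_value + closing_error / 2.0)   # corrected value of the angle for this set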

(ii) Reiteration method or direction method
In the reiteration method, the triangulation signals are bisected successively, and a value is obtained for each direction in each of several rounds of observations. One of the triangulation stations which is likely to be always clearly visible may be selected as the initial or reference station. The theodolites used for the measurement of angles in triangulation surveys have more than one micrometer. One of the micrometers is set to 0° and, with the telescope normal, the initial station is bisected and all the micrometers are read. Each of the successive stations is then bisected, and all the micrometers are read. The stations are then again bisected in the reverse direction, and all the micrometers are read after each bisection. Thus, two values are obtained for each angle when the telescope is normal. The telescope is then inverted, and the observations are repeated. This constitutes one set, in which four values of each angle are obtained. The micrometer originally at 0° is now brought to a new reading equal to 360°/mn (where m is the number of micrometers and n is the number of sets), and a second set is observed in the same manner. The number of sets depends on the accuracy required. For first-order triangulation, sixteen such sets are required with a 1″ direction theodolite, while for second-order triangulation four, and for third-order triangulation two, are required. With a more refined instrument having finer graduations, however, six to eight sets are sufficient for geodetic work.


1.18 MEASUREMENT OF VERTICAL ANGLES
Measurement of vertical angles is required to compute the elevations of the triangulation stations. The method of measurement of vertical angles is discussed in Sec. 4.5.4 of Plane Surveying.

1.19 ASTRONOMICAL OBSERVATIONS
To determine the azimuths of the initial side, intermediate sides, and the last side of the triangulation net, astronomical observations are made. For the detailed procedure and methods of observation, refer to Chapter 7.

1.20 SOME EXTRA PRECAUTIONS IN TAKING OBSERVATIONS
To satisfy first-, second-, and third-order specifications as given in Table 1.1, care must be exercised. The observer must ensure the following:
1. The instrument and signals have been centered very carefully.
2. Phase in signals has been eliminated.
3. The instrument is protected from the heating effects of the sun and vibrations caused by wind.
4. The support for the instrument is adequately stable.
5. In case of adverse horizontal refraction, observations should be rescheduled to the time when the horizontal refraction is minimum.
Horizontal angles should be measured when the air is clearest and the lateral refraction is minimum. If the observations are planned for day hours, the best time in clear weather is from 6 AM to 9 AM and from 4 PM till sunset. In densely clouded weather, satisfactory work can be done all day. The best time for measuring vertical angles is from 10 AM to 2 PM, when the vertical refraction is the least variable.

First-order work is generally done at night, since observations at night using illuminated signals help in reducing bad atmospheric conditions, and optimum results can be obtained. Also, working at night doubles the hours of working available during a day. Night operations are confined to the period from sunset to midnight.

1.21 SATELLITE STATION AND REDUCTION TO CENTRE
To secure well-conditioned triangles or to have good visibility, objects such as chimneys, church spires, flag poles, towers, lighthouses, etc., are selected as triangulation stations. Such stations can be sighted from other stations, but it is not possible to occupy the station directly below such excellent targets for making the observations by setting up the instrument over the station point. Also, signals are frequently blown out of position, and angles read on them have to be corrected to the true position of the triangulation station. Thus, there are two types of problems:
1. When the instrument is not set up over the true station, and
2. When the target is out of position.
In Fig. 1.39, A, B, and C are the three triangulation stations. It is not possible to place the instrument at C. To solve this problem, another station S, in the vicinity of C, is selected where the instrument can be set up, and from where all three stations are visible for making the angle observations. Such a station is known as a satellite station. As observations from C are not possible, the observations from S are made on A, B, and C, and from A and B on C. From the observations made, the required angle ACB is calculated. This is known as reduction to centre.

Fig. 1.39 Reduction to centre


In the other case, S is treated as the true station point, and the signal is considered to be shifted to the position C. This case may also be looked upon as a case of eccentricity of the signal. Thus, the observations from S are made to the triangulation stations A and B, but from A and B the observations are made on the signal at the shifted position C. This causes errors in the measured values of the angles BAC and ABC.

Both the problems discussed above are solved by reduction to centre.
Let the measured angles be
∠BAC = θA
∠ABC = θB
∠ASB = θ
∠BSC = γ
Eccentric distance SC = d
The distance AB is known by computation from the preceding triangle of the triangulation net. Further, let
∠SAC = α
∠SBC = β
∠ACB = φ
AB = c, AC = b, BC = a

As a first approximation, in ∆ABC the ∠ACB may be taken as
∠ACB = 180° − (∠BAC + ∠ABC)
or φ = 180° − (θA + θB) ...(1.37)
In the triangle ABC, by the sine rule, we have
c/sin φ = a/sin θA = b/sin θB
a = c sin θA/sin φ ...(1.38)
and b = c sin θB/sin φ ...(1.39)
Compute the values of a and b by substituting the value of φ obtained from Eq. (1.37) in Eqs. (1.38) and (1.39), respectively.

Now, from ∆SAC and ∆SBC, we have
d/sin α = b/sin (θ + γ)
d/sin β = a/sin γ
sin α = d sin (θ + γ)/b
sin β = d sin γ/a
As the satellite station S is chosen very close to the main station C, the angles α and β are extremely small. Therefore, taking sin α = α and sin β = β in radians, we get
α = d sin (θ + γ)/(b sin 1″)
  = (d sin (θ + γ)/b) × 206265 seconds ...(1.40)
and β = (d sin γ/a) × 206265 seconds ...(1.41)

In Eqs. (1.40) and (1.41), θ, γ, d, a, and b are known quantities; therefore, the values of α and β can be computed. Now a more correct value of the angle ∠ACB can be found.

We have
∠AOB = θ + α = φ + β
or φ = θ + α − β ...(1.42)
Eq. (1.42) gives the value of φ when the satellite station S is to the left of the main station C. In general, the following four cases, as shown in Fig. 1.40, can occur depending on the field conditions.
Case I: S towards the left of C (Fig. 1.39):
φ = θ + α − β
Case II: S towards the right of C (Fig. 1.40b), the position S2:
φ = θ − α + β ...(1.43)
Case III: S inside the triangle ABC (Fig. 1.40c), the position S3:
φ = θ − α − β ...(1.44)
Case IV: S outside the triangle ABC (Fig. 1.40d), the position S4:
φ = θ + α + β ...(1.45)

Fig. 1.40 Locations of satellite station with reference to triangulation stations C
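The whole reduction chain, Eqs. (1.37) to (1.45), is mechanical, and is summarised in the sketch below (Python; the function and the sample figures are ours). The sign pattern for the four cases follows Fig. 1.40.

import math

ARCSEC = 206265.0  # seconds of arc per radian

def reduction_to_centre(c, theta_A, theta_B, theta, gamma, d, case="left"):
    """Angle ACB (phi, degrees) from satellite-station observations.

    c: known side AB (m); theta_A, theta_B: angles BAC, ABC (degrees);
    theta: angle ASB and gamma: angle BSC observed at S (degrees);
    d: eccentric distance SC (m); case: position of S per Fig. 1.40.
    """
    phi0 = 180.0 - (theta_A + theta_B)                                       # Eq. (1.37)
    a = c * math.sin(math.radians(theta_A)) / math.sin(math.radians(phi0))   # Eq. (1.38)
    b = c * math.sin(math.radians(theta_B)) / math.sin(math.radians(phi0))   # Eq. (1.39)
    alpha = ARCSEC * d * math.sin(math.radians(theta + gamma)) / b           # Eq. (1.40), seconds
    beta = ARCSEC * d * math.sin(math.radians(gamma)) / a                    # Eq. (1.41), seconds
    sign = {"left": (1, -1), "right": (-1, 1), "inside": (-1, -1), "outside": (1, 1)}[case]
    return theta + (sign[0] * alpha + sign[1] * beta) / 3600.0               # Eqs. (1.42)-(1.45)

# Hypothetical check: AB = 2000 m, theta_A = 60, theta_B = 58, theta = 62.0005,
# gamma = 35 degrees, d = 5 m, with S to the left of C as in Fig. 1.39.
print(reduction_to_centre(2000.0, 60.0, 58.0, 62.0005, 35.0, 5.0, "left"))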

1.22 ECCENTRICITY OF SIGNAL
When the signal is found shifted from its true position, the distance d between the shifted signal and the station point is measured. The corrections α and β to the observed angles BAC and ABC, respectively, are computed from Eqs. (1.40) and (1.41), and the corrected values of the angles are obtained as under (Fig. 1.39):
Correct ∠BAS = θA + α ...(1.46)
Correct ∠ABS = θB − β ...(1.47)

For other cases shown in Fig. 1.40, one can easily find out the correct angles.


AERIAL PHOTOGRAPHY
ABRAHAM THOMAS
University of the Western Cape
Bellville 7535, Cape Town, South Africa

Outline
Definition of aerial photography
Characteristics of aerial photographs
Aerial cameras and their types
Geometric properties of aerial photographs
Aerial Photo Interpretation Elements
An Introduction to photogrammetry
Photo interpretation and photogrammetric equipment
Taking measurements from aerial photographs
Mapping with aerial photographs

Defining Aerial Photography
The term "photography" is derived from two Greek words meaning "light" (phos) and "writing" (graphien). From Greek phōt-, the stem of phōs 'light', which is a unit of illumination.
Photography means the art, hobby, or profession of taking photographs, and developing and printing the film or processing the digitized array image.
Photography is the production of permanent images by means of the action of light on sensitized surfaces (film or array inside a camera), finally giving rise to a new form of visual art.
Aerial Photography means photography from the air.
The word 'aerial' originated in the early 17th century. [Formed from Latin aerius, from Greek aerios, from aēr 'air'.]

Aerial Photography: An Overview
Aerial Photography is one of the most common, versatile and economical forms of remote sensing.
It is a means of fixing time within the framework of space (de Latil, 1961).
Aerial photography was the first method of remote sensing and is still used today in the era of satellites and electronic scanners. Aerial photographs will remain the most widely used type of remote sensing data.
Aerial photographs were taken from balloons and kites as early as the mid-1800s.
1858 – Gaspard Felix Tournachon "Nadar" took the first aerial photograph from a captive balloon at an altitude of 1,200 feet over Paris.

Characteristics of Aerial Photography
Synoptic viewpoint: Aerial photographs give a bird's eye view of large areas, enabling us to see surface features in their spatial context. They enable the detection of small scale features and spatial relationships that would not be found on the ground.
Time freezing ability: They are virtually permanent records of the existing conditions on the Earth's surface at one point in time, and can be used as historical documents.
Capability to stop action: They provide a stop-action view of dynamic conditions and are useful in studying dynamic phenomena such as flooding, moving wildlife, traffic, oil spills and forest fires.
Three dimensional perspective: They provide a stereoscopic view of the Earth's surface and make it possible to take measurements horizontally and vertically – a characteristic that is lacking for the majority of remotely sensed data.

Characteristics of Aerial Photography (2)
Spectral and spatial resolution: Aerial photographs are sensitive to radiation in wavelengths that are outside of the spectral sensitivity of the human eye (0.3 µm to 0.9 µm versus 0.4 µm to 0.7 µm).
They are sensitive to objects outside the spatial resolving power of the human eye.
Availability: Aerial photographs are readily available at a range of scales for much of the world.
Economy: They are much cheaper than field surveys and are often cheaper and more accurate than maps.

Aerial Cameras
Aerial photographs can be made with any type of camera (e.g. 35 mm small amateur or 70 mm cameras, or special purpose-built cameras meant for mapping).
Many successful applications have employed aerial photography made from light aircraft with handheld 35 mm cameras.
For the aerial study of large areas, high geometric and radiometric accuracy are required, and these can only be obtained by using purpose-built cameras.
Aerial cameras are precision built and specifically designed to expose a large number of photographs in rapid succession with the ultimate in geometric fidelity and quality.
These cameras usually have a medium to large format, a high quality lens, a large film magazine, a mount to hold the camera in a vertical position and a motor drive.

Aerial Cameras
One of the smaller models of aerial camera, dated 1907, kept in the Deutsches Museum, Germany.
Source: Curran (1988).

Types of Aerial Cameras
There are many types of aerial cameras:
Aerial mapping camera (single lens),
Reconnaissance camera,
Strip camera,
Panoramic camera,
Multilens camera, the multi camera array (multiband aerial camera) and
Digital camera.

Aerial Mapping (Single Lens) Camera
• Aerial mapping cameras (also called metric or cartographic cameras) are single lens frame cameras designed to provide extremely high geometric image quality.
• They employ a low distortion lens system held in a fixed position relative to the plane of the film.
• The film format size is commonly a square of 230 mm on a side. The total width of the film used is 240 mm, and the film magazine capacity ranges up to film lengths of 120 metres.
• A frame of imagery is acquired with each opening of the camera shutter, which is tripped at a set frequency by an electronic device called an intervalometer.
• They are exclusively used in obtaining aerial photos for remote sensing in general and photogrammetric mapping purposes in particular.
• Single lens frame cameras are the most common cameras in use today.

Aerial Mapping Camera
An aerial mapping camera (Carl Zeiss RMK/A15/23) with automatic levelling and exposure control. It is mounted on a suspension mount, between the remote control unit (left) and its navigation telescope (right). Source: Curran (1988).

Single Lens Frame Camera
A typical aerial mapping camera and its associated gyro-stabilised suspension mount.
The principal components of a single lens frame mapping camera.
Source: Lillesand et al. (2005).

Panoramic Aerial Camera
In panoramic cameras the ground areas are covered by either rotating the camera lens or rotating a prism in front of the lens.
The terrain is scanned from side to side, transverse to the flight direction. The film is exposed along a curved surface located at the focal distance from the rotating lens assembly, and the angular coverage can extend from horizon to horizon.
Cameras with a rotating prism design contain a fixed lens and a flat film plane. Scanning is accomplished by rotating the prism in front of the lens.
The operating principle of a panoramic camera

Panoramic Photograph
Panoramic photograph with 180 degree scan angle. Note the image detail, large area of coverage and geometric distortion. Areas near the two ends of the photograph are compressed. Source: Lillesand et al. (2005).

Multiband Aerial Cameras
Multilens camera system
Multicamera array comprising four 70 mm cameras
Imaging digital camera comprising eight synchronously operating CCD-based digital cameras

Multiband Aerial Photo of Waterfront Area, Cape Town

Geometric Properties of AP
The most important geometric properties of an aerial photograph are those of angle and scale.
Angle of Aerial Photographs
The angle at which an aerial photograph is taken is used to classify the photograph into one of three types, viz. vertical, high oblique and low oblique.
Vertical photography taken with a single lens is the most common type of aerial photography used in remote sensing applications.
Vertical photography is taken with the camera axis pointing vertically downwards.
Oblique photography is taken with the camera axis pointing obliquely downwards (intentional inclination of the camera axis).
High oblique photography incorporates an image of the horizon into the photographs, while low oblique photographs do not.

Geometric Properties: Camera Angle
A 'truly' vertical aerial photograph is rarely obtainable because of unavoidable angular rotations or tilts, caused by the angular attitude of the aircraft at the instant of exposure.
These unavoidable tilts cause slight (1 to 3 degrees) unintentional inclination of the camera optical axis, resulting in the acquisition of tilted photographs.
Vertical photographs have properties similar to those of a map, with an approximately constant scale over the whole photograph, and therefore can be used for mapping and measurements.

Taking Vertical AP: Flying Pattern

Photographic Coverage Along A Flight Strip
a: conditions during exposure
b: resulting photograph

Flying Pattern

Basic Geometric Elements of Vertical Photograph

Vertical Aerial Photograph
Image ID
Clock
Fiducial mark defining the frame of reference for spatial measurements
Level bubble
Altimeter
Frame No.
Vertical photo taken with a 230 × 230 mm precision mapping film camera, showing Langenburg, Germany

Geometric Characteristics: Photo Scale
Scale of Aerial Photographs (Photographic Scale)
The scale of a photograph expresses the mathematical relationship between a distance measured on the photo and the corresponding distance measured on the ground.
A photograph scale is an expression stating that one unit of distance on a photograph represents a specific number of units of actual ground distance.
Scales may be expressed as unit equivalents (1 mm = 25 m), representative fractions (1/25,000) or ratios (1:25,000).
Unlike maps, which have a constant scale throughout, aerial photographs have a range of scales that vary in proportion to the elevation of the terrain involved.
The most straightforward method for determining photo scale is to measure the corresponding photo and ground distances between any two points. The scale S is then computed as the ratio of the photo distance d to the ground distance D.
S = photo scale = photo distance/ground distance = d/D

Aerial Photo Scale
The scale of a photograph is determined by the focal length of the camera and the vertical height of the lens above the ground.
The focal length (f) of the camera is the distance measured from the centre of the camera lens to the film.
The vertical height of the lens above the ground (H − h) is the height of the lens above sea level (H) minus the height of the ground above sea level (h), when the optical axis is vertical and the ground is flat.
These parameters are related by the formula
S = f / (H − h)
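For instance (a sketch with our own hypothetical numbers), a 152 mm lens flown 3200 m above sea level over ground at elevation 160 m gives:

# Photo scale from focal length and flying height: S = f / (H - h).
f = 0.152             # focal length (m), hypothetical
H, h = 3200.0, 160.0  # flying height and ground elevation above sea level (m)
S = f / (H - h)
print(f"1:{round(1.0 / S):,}")   # -> 1:20,000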


Geometric Characteristics: Scale
Photographic Scale Contd.
For instance, if the photo scale were 1:63,360, then 1 inch on the photo would represent 63,360 inches on the ground. The first number (map distance) is always 1. The second number (ground distance) is different for each scale; the larger the second number is, the smaller the scale of the map, i.e. Large is Small.
Quite often the terms large scale and small scale are confusing to those who are not working with scale expressions on a routine basis.
A convenient way to make scale comparisons is to remember that the same objects are smaller on a smaller scale photograph than on a larger scale photo.
A large scale photograph will provide a detailed and high resolution view of a small area.

Large Scale Vs. Small Scale
A map's scale determines how a feature will be represented. On a large-scale map, a river might be represented as a polygon rather than a line, or a city's extent may be so large that it can only be accurately represented as a polygon rather than a point.
Scale can be used as a measure of viewable detail; small scale implies less detail is visible, large scale implies more detail is visible. Thus, in GIS, scale can be used to control display; as scale increases (becomes larger and more "zoomed in") more detail can be displayed without overcrowding the screen display.

Comparative Geometry of a Map and a Vertical Photograph
• On a map we see a top view of objects in their true relative horizontal positions. On a photograph, areas of terrain at the higher elevations lie closer to the camera and therefore appear larger than the corresponding areas lying at lower elevations.
• The images of the tops of objects appearing in a photograph are displaced from the images of their bases. This distortion is known as relief displacement and causes any object standing above the terrain to lean away radially from the principal point of a photo.
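The slide states the effect without a formula; the standard photogrammetric relation (not given here, but widely used) is d = r h / H, where r is the radial distance of the image point from the principal point, h the object's height above the datum, and H the flying height above the datum. A quick check with hypothetical values:

# Relief displacement d = r * h / H (standard relation; values are hypothetical).
r_mm = 80.0      # radial distance of the image point from the principal point (mm)
h_m = 50.0       # height of the object above the datum (m)
H_m = 2000.0     # flying height above the datum (m)
print(r_mm * h_m / H_m)   # 2.0 mm of radial displacement on the photo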


Aerial Photo Interpretation
• When we look at a photo we see various objects of different sizes and shapes. Some of these objects may be readily identifiable while others may not be, depending on our individual perceptions and experience.
• When we can identify certain objects or areas and communicate the identified information to others, we are practicing image interpretation.
• Aerial photographic interpretation is defined as the act of examining photographic images for the purpose of identifying objects and judging their significance (Curran, 1988).
• During the process of interpretation, aerial photo interpreters usually make use of seven tasks, which form a chain of events. They are: 1) detection, 2) recognition and identification, 3) analysis, 4) deduction, 5) classification, 6) idealisation and 7) accuracy determination.

Aerial Photo Interpretation (2)
• Detection involves selectively picking out objects that are directly visible (e.g. water bodies, rivers, rock faces etc.) or areas that are indirectly visible (e.g. areas of wet soils or palaeochannels) on the photographs.
• Recognition and identification involve naming objects or areas (the most important task in this chain of events).
• Analysis involves trying to detect the spatial order of the objects or areas.
• Deduction is rather complex and involves the principle of convergence of evidence to predict the occurrence of certain relationships on the photo.
• Classification comes in to arrange the objects and elements identified into an orderly system, before the interpretation is idealised using guidelines/directions which are drawn to summarise the spatial distribution of objects (e.g. land use/land cover).
• During accuracy determination, random points are visited in the field to confirm or refute the interpretation.

Elements of Photo Interpretation
• An interpreter uses the following basic characteristics of a photograph: tone, texture, pattern, place, shape, shadow and size.
• Tone or hue refers to the relative brightness or colour of objects on an image. It is the most important characteristic of the photo. It represents a record of the radiation that has been reflected from the Earth's surface onto the film.
• Light tone represents areas with a high reflectance/radiance, and dark tone represents areas with low radiance. The nature of the materials on the Earth's surface affects the amount of light reflected.
• Texture is the frequency of tonal changes within an aerial photo that arises when a number of features are viewed together. Texture is produced by an aggregation of unit features that may be too small to be discerned individually on the image, such as tree leaves and leaf shadows. It determines the overall visual "smoothness" or "coarseness" of image features.
• Texture is dependent on the scale of the aerial photograph. As the scale is reduced, the texture progressively becomes finer and ultimately disappears.

Photo Elements

Elements of Photo Interpretation (2)
• Pattern is the spatial arrangement of objects. The repetition of certain general forms or relationships is characteristic of many objects, for example road patterns, drainage patterns, crop disease patterns and lithological patterns.
• Place/site is a statement of an object's position in relation to others in its vicinity and usually aids in its identification (e.g. certain vegetation or tree species are expected to occur on well drained uplands or in certain countries).
• Shape is a qualitative statement referring to the general form, configuration or outline of an object (e.g. 'V' shaped valleys indicative of a deeply incised river).
• Shadows of objects aid in their identification. Shadows are important in two opposing respects: (1) the shape or outline of a shadow affords an impression of the profile view of objects (which aids in interpretation) and (2) objects within shadows reflect little light and are difficult to discern on a photo.

Shadows in Photographs

Elements of Photo Interpretation (3)
• Size of an object is a function of photo scale. The sizes of objects can be estimated by comparing them with objects whose sizes are known.
• Sizes of objects must be considered while interpreting features, and some features may be misinterpreted if sizes are not considered (e.g., a small storage shed might be misinterpreted as a barn if size was not considered).
• Association refers to the occurrence of certain features in relation to others. For example, a merry-go-round wheel might be difficult to identify standing in a field near a barn, but would be easy to identify standing in an area identified as an amusement park.
• Success in interpretation varies with the training and experience of the interpreter, the nature of the objects/phenomena being interpreted, and the quality of the image/photo being utilised.

Stereoscopic View
One of the advantages of all aerial photographs is that when taken as overlapping pairs (called stereopairs) they can provide a 3D view of the terrain (also called a perspective view).
The 3D view is made possible by the effect of parallax. Parallax refers to the apparent change in relative positions of stationary objects caused by a change in viewing position.
Our left and right eyes record information from two slightly differing viewpoints; the brain uses the effect of parallax to give us the perception of depth.

Viewing Photos Stereoscopically
Paricutin volcano in Mexico. (Source: Curran, 1988)
Stereopairs: overlapping vertical photos

Stereoscopes
Pocket Stereoscope
Mirror Stereoscope
Scanning Stereoscope
'Interpreterscope' (Carl Zeiss)

Photogrammetry: An Introduction
• Photogrammetry is the science and technology of obtaining spatial measurements and other geometrically derived products from aerial photographs (Lillesand et al., 2005).
• Photogrammetric analysis procedures range from obtaining distances, areas and elevations using hardcopy (analog) photographic products, equipment and simple geometric concepts, to generating precise digital elevation models (DEMs), orthophotos, thematic data and other derived products/information through the use of digital images and analytical techniques.
• Digital or softcopy photogrammetry refers to any photogrammetric operation involving the use of digital raster photographic images.
• Historically, one of the most widespread uses of photogrammetry has been the preparation of topographic maps. Today, photogrammetric operations are extensively used to produce a range of GIS data products such as thematic data in 2D and 3D, raster image backdrops and DEMs.

Area Measurements on Photographs
Area measurements using a transparent dot grid overlay

Area Measurements on Photographs
A Summagraphics table digitiser being used to measure and record areas.

Measuring Heights from Photographs

Δh = Δp × (H − h) / (Pa + Δp)

where Δh = height of the object (tree) in metres
Δp = difference in distance between the top and bottom of the feature on the two photos in mm
Pa = distance between image centres minus the distance between the feature on the two photos in mm
(H − h) = aircraft flying height above the surface of the ground in metres
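Plugging hypothetical field numbers into the formula above (a sketch; the values are ours):

# Height from differential parallax: dh = dp * (H - h) / (Pa + dp).
dp = 1.3            # parallax difference, top minus bottom of the tree (mm)
Pa = 88.2           # absolute parallax at the base of the tree (mm)
H_minus_h = 1220.0  # flying height above the ground surface (m)
print(dp * H_minus_h / (Pa + dp))   # ~17.7 m tall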


Mapping With Aerial Photographs
Monoscopic zoom transferscope
Monoscopic transferscope
Stereoscopic radial line plotter
Stereosketch
Equipment used to transfer planimetric details from photos.

Accurate Plotting of Topography
The stereoplotter is the main piece of photogrammetric instrumentation used for the measurement of distance, area and height on aerial photographs and for the transfer of planimetric details. There are 4 types of stereoplotters: optical, mechanical, optical-mechanical and analytical.
Optical Stereoplotter
Analytical Stereoplotter

Photogrammetric Workstation
Photogrammetric workstations involve integrated hardware and software systems for spatial data capture, manipulation, analysis, storage, display, and output of softcopy images.
These systems incorporate the functionality of analytical stereoplotters, automated generation of DEMs, computation of digital orthophotos, preparation of perspective views, and capture of 2D and 3D data for use in a GIS.

THANK YOU VERY MUCH
&
ANY QUESTIONS?

CE 2204 : Surveying I

By

Dr. Srinath Rajagopalan

CE2204 SURVEYING I Syllabus

OBJECTIVE
At the end of the course the student will possess knowledge about Chain surveying, Compass surveying, Plane table surveying, Levelling, Theodolite surveying and Engineering surveys.

Unit I INTRODUCTION AND CHAIN SURVEYING 8 • Definition - Principles - Classification - Field and office work - Scales - Conventional signs

- Survey instruments, their care and adjustment - Ranging and chaining - Reciprocal ranging - Setting perpendiculars - well - conditioned triangles - Traversing - Plotting - Enlarging and reducing figures.

Unit II COMPASS SURVEYING AND PLANE TABLE SURVEYING 7 • Prismatic compass - Surveyor’s compass - Bearing - Systems and conversions - Local

attraction - Magnetic declination - Dip - Traversing - Plotting - Adjustment of errors - Plane table instruments and accessories - Merits and demerits - Methods - Radiation - Intersection - Resection – Traversing.

Unit III LEVELLING AND APPLICATIONS 12 • Level line - Horizontal line - Levels and Staves - Spirit level - Sensitiveness - Bench marks

- Temporary and permanent adjustments - Fly and check levelling - Booking - Reduction - Curvature and refraction - Reciprocal levelling - Longitudinal and cross sections - Plotting - Calculation of areas and volumes - Contouring - Methods - Characteristics and uses of contours - Plotting - Earth work volume - Capacity of reservoirs.


CE2204 SURVEYING I Syllabus

Unit IV. THEODOLITE SURVEYING 8 • Theodolite - Vernier and microptic - Description and uses - Temporary and

permanent adjustments of vernier transit - Horizontal angles - Vertical angles - Heights and distances - Traversing - Closing error and distribution - Gale’s tables - Omitted measurements.

Unit V. ENGINEERING SURVEYS 10 • Reconnaissance, preliminary and location surveys for engineering projects

- Lay out - Setting out works - Route Surveys for highways, railways and waterways - Curve ranging - Horizontal and vertical curves - Simple curves - Setting with chain and tapes, tangential angles by theodolite, double theodolite - Compound and reverse curves - Transition curves - Functions and requirements - Setting out by offsets and angles - Vertical curves - Sight distances - Mine Surveying - instruments - Tunnels - Correlation of under ground and surface surveys - Shafts - Adits.


Text books and references

TEXT BOOKS
1. Bannister A. and Raymond S., Surveying, ELBS, Sixth Edition, 1992.
2. Kanetkar T.P., Surveying and Levelling, Vols. I and II, United Book Corporation, Pune, 1994.
3. Punmia B.C., Surveying, Vols. I, II and III, Laxmi Publications, 1989.
REFERENCES
1. Clark D., Plane and Geodetic Surveying, Vols. I and II, C.B.S. Publishers and Distributors, Delhi, Sixth Edition, 1971.
2. James M. Anderson and Edward M. Mikhail, Introduction to Surveying, McGraw-Hill Book Company, 1985.
3. Heribert Kahmen and Wolfgang Faig, Surveying, Walter de Gruyter, 1995.
Refer to the Civil Intranet E-learning website for presentations, notes, and other information.


Surveying

• Defined as the art of determining relative positions of distinctive features on, above or below the surface of earth through measurement of distances, elevations, and directions

• The term Surveying is limited to representation of surface features on a horizontal plane

• The branch of surveying which deals with the measurement of the relative heights of features is known as leveling

Origin: 3000 B.C. (Egypt), where boundaries had to be re-established after the overflowing of the Nile River.


Importance:

The planning and design of all Civil Engineering projects such as construction of

highways, railways, bridges, tunnels, dams, all types of buildings etc are based

upon surveying measurements.

• Moreover, during execution, a project of any magnitude is constructed along the

lines and points established by surveying.

• Other principal works in which surveying is primarily utilized are

• to fix the national and state boundaries;

• to chart coastlines, navigable streams and lakes;

• to establish control points;(stations having known position)

• to execute hydrographic and oceanographic charting and mapping; and

• to prepare topographic map of land surface of the earth.


Objectives

• The objective of measurements is to show relative position of various objects on paper.

• Such representations on paper are called Plan and Map.

Plan and Map

It is the graphical representation of the features on, near or below the earth's surface, as projected on a horizontal plane to a suitable scale.

If the area surveyed is small and the scale to which its result is plotted is large, then it is called a Plan.

If the area surveyed is large and the scale to which its result is plotted is small, then it is called a Map.

There is no exact dividing line between a Plan and a Map.


Principles of Surveying 1. Working from the “ whole to the part” :

• Start the survey with a system of control points with high precision.(either by triangulation or by traversing)

• The lines joining these points form the boundary of the area (the main skeleton of the survey).

• Break this figure into smaller parts and measure them with less laborious methods.

Reasons:

• To avoid the accumulation of errors and to control any localized errors.


Check Line Tie Line

SoI – Principal mapping agency of the Country.

Topographical map: shows natural and man-made features, contours and positions of GTS benchmarks.

Page 97: CE 406 – Advanced Surveying

Types/Divisions: Plane Surveying and Geodetic Surveying

• The Earth's shape is an oblate spheroid (polar axis: 12,713.8 km; equatorial axis: 12,756.75 km), so

the line connecting any two points on its surface is not a straight line but a curve.

• For a large area, or where the required accuracy is high, the curvature of the earth has to be taken into account.

• For small distances, the difference between the curved arc and the subtended chord is negligible.

Page 99: CE 406 – Advanced Surveying

Types of Surveying

• Controlling Factor- Degree of Accuracy.

1. The length of an arc of 1.2 km on the earth's mean surface is only 1 mm more than the straight line (chord) connecting its two end points.

2. The sum of the interior angles of a geometrical figure laid on the surface of the earth differs from that of the corresponding plane figure only to the extent of one second for about every 200 sq. km.

Page 101: CE 406 – Advanced Surveying

Types of Surveying

Page 102: CE 406 – Advanced Surveying

Classification of Surveying

Based on: – Nature of field of Survey

– Object of Survey

– Instruments used

– The methods employed.

1. Nature of field of Survey:

(a) Land Survey i. Topographic Survey: Measurement of natural features (rivers, streams, lakes, hills and forests) and man-made features (roads, railways, towns, villages and canals)

ii. Cadastral Survey: Survey to mark properties of government and individuals

iii. City Surveys: Survey made in connection with the construction of streets, water supply and sewage lines etc.

Page 103: CE 406 – Advanced Surveying

2. Object of Surveying: ( on the basis of objectives)

i. Engineering Survey : to collect data for designing roads, railways, highways, irrigation, water supply and sewage disposal projects.

ii. Military Survey : with an objective to work out points of strategic importance.

iii. Mine Survey : to explore mineral wealth.

iv. Geological Survey : to find out different strata in the Earth’s crust.

v. Archeological Survey : Unearthing relics of antiquity.

3. Based on Instruments used:

i. Chain Survey

ii. Compass Survey

iii. Plane table Survey

iv. Theodolite Survey

v. Tacheometric Survey

vi. Modern Survey (using EDM distance meters and total stations)

vii. Photographic and Aerial Survey

Page 105: CE 406 – Advanced Surveying

Triangulation

Page 107: CE 406 – Advanced Surveying

Surveying Character of Work

• Four Distinct parts

– Planning: Involves selection of appropriate surveying method , instruments, and station points

– Field work: Measurement of angles and distances and keeping a record of what has been done in Field Notes

– Office work: Consists of drafting, computing, and designing

– Care and Adjustments of Instruments

Page 109: CE 406 – Advanced Surveying

Field Work

• Measuring distances and angles to:

– establish points and lines of reference for locating details such as boundary lines, roads, buildings, fences, rivers, bridges, and other existing features

– stake out or locate roads, buildings, utilities, and other construction projects

– establish lines parallel or at right angles to other lines, measure inaccessible distances such as across rivers, extend straight lines beyond obstacles such as buildings, and do any work that may require the use of geometric or trigonometric principles.

• Measuring differences in elevations and determining elevations to:

– establish permanent points of known elevation (bench marks)

– determine elevations of terrain along a selected line or area for plotting profiles and computing grade lines

– stake out grades, cuts, and fills for construction projects.

• Making topographic surveys wherein horizontal and vertical measurements are combined.

• Recording field notes to provide a permanent record of the field work.

Page 110: CE 406 – Advanced Surveying

Field Notes • The field notes of the surveyor must contain a complete record of all measurements made during the survey, with sketches and narration where necessary to clarify the notes.

• The best field survey is of little value if the notes are not complete and clear. They are the only record that is left after the field party leaves the survey site.

• Make notes for each day’s work on the survey complete with

– Title of survey

– Date

– Weather conditions

– List of equipments

– Personnel of the crew

– Sign the record at the end of the day

• All field notes should be lettered legibly. Use a sharp 2H or 3H pencil.

• Numerals and decimal points should be legible and permit only one interpretation.

• Notes must be kept in the regular field notebook and not on scraps of paper for later transcription.

• The field notebook is a permanently bound book (not loose-leaf) for recording measurements made in the field.

Page 111: CE 406 – Advanced Surveying

Field Notes

• Note: Erasures are not permitted in field notebooks. – Individual numbers or lines recorded incorrectly shall be lined out, the correct values added, and the correction initialed.

– Pages that are to be rejected are crossed out neatly and referenced to the substituted page.

– This procedure is mandatory since the field notebook is the book of record and it is often used as legal evidence.

• Field note recording takes three general forms: tabulations, sketches, and descriptions. Two, or even all three forms, are combined when necessary to make a complete record

Page 112: CE 406 – Advanced Surveying

Field Notes

• Tabulation – Measurements may be recorded manually in a field book or they may be recorded electronically through a data collector.

– Electronic data collection has the advantage of eliminating reading and recording errors.

• Sketches – Sketches add much to clarify electronic data collection files and should be used as a supplemental record of the survey.

– They may be drawn to an approximate scale, or important details may be exaggerated for clarity.

– Measurements may be placed directly onto the sketch or keyed in some way to the tabular data.

– A very important requirement of a sketch is legibility. It should be drawn clearly and large enough to be understandable.

• Descriptions – Tabulations, with or without added sketches, can also be supplemented with descriptions.

– The description may be only one or two words to clarify the recorded measurements, or it may be quite lengthy in order to cover and record pertinent details of the survey.

Page 113: CE 406 – Advanced Surveying

Scales

• Not always convenient to draw objects to their actual size – Building drawings

– Microchip circuit diagram

• A convenient scale is chosen to draw objects to a readable size on a sheet of paper

• Scale is defined as the ratio of the linear dimension of an element of an object as represented in the original drawing to the linear dimension of the same element of the object itself.

• The ratio of the drawing of an object to its actual size is called the representative fraction, usually referred to as R.F.

– R.F. = dimension on the drawing / actual dimension of the object (both in the same units)
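A minimal Python sketch of applying the R.F. definition in both directions (the road length and plan dimensions below are illustrative numbers, not from the text):

def representative_fraction(drawing_dim, actual_dim):
    # R.F. = dimension on drawing / actual dimension, both in the same units
    return drawing_dim / actual_dim

# a 500 m long road drawn 25 cm long on a plan:
rf = representative_fraction(0.25, 500.0)         # both in metres
print("R.F. = 1:%.0f" % (1 / rf))                 # R.F. = 1:2000

# converting a distance scaled off the plan back to a ground distance:
print("ground distance = %.1f m" % (0.112 / rf))  # 11.2 cm on plan -> 224.0 m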

Page 114: CE 406 – Advanced Surveying

Classification of Scales • Based on Representative Fraction (R.F)

– Full size scale : If we show the actual length of an object on a drawing, then the scale used is called full size scale. represented as 1:1, R.F. =1

– Enlarging scale: Drawings of smaller machine parts, mechanical instruments, watches, etc. are made larger than their real size. These are said to be drawn in an increasing or enlarging scale. represented as n:1 (n>1), R.F. > 1

– Reducing scale : If we reduce the actual length of an object so as to accommodate that object on drawing, then scale used is called reducing scale. Such scales are used for the preparation of drawings of large machine parts, buildings, bridges, survey maps, architectural drawings etc. represented as 1: n (n>1), RF < 1

Page 115: CE 406 – Advanced Surveying

Scales

Page 116: CE 406 – Advanced Surveying

Requirement of Good Scale

• It should have suitable length, preferably within 300 mm

• The scale should be accurately divided and numbered

• It should read to the required accuracy

• R.F. should be clearly written on scale

• The main divisions should be the units, one tenth of the units and one hundredth of the units

• The zero of the scale should be placed between the units and its subdivisions for easy measurement of distance

Page 117: CE 406 – Advanced Surveying

Classification of Scales

• Plain Scale

– A plain scale is simply a line which is divided into a suitable number of equal parts, the first of which is further sub-divided into small parts.

– It is used to represent either two units or a unit and its fraction such as km, m and dm, etc.

Page 118: CE 406 – Advanced Surveying

Classification of Scales • Diagonal Scale

– Diagonal scales are used to represent either three units of measurements such as meters, decimeters, centimeters or to read to the accuracy correct to two decimals.

– It consists of a line divided into required number of equal parts. The first part is sub-divided into smaller parts by diagonals

Page 119: CE 406 – Advanced Surveying

Vernier Scale

• Used to measure fractional part of the smallest division in main scale

• Invented in 1631 by Pierre Vernier

• Consists of small scale (Vernier) and long scale (main scale)

• The graduated edge of the vernier slides over the graduated edge of the main scale

• Used in many survey instruments, such as theodolites, for precise measurement

• Two Types: Direct Vernier and Retrograde vernier

Page 120: CE 406 – Advanced Surveying

Direct Vernier • Graduations on the vernier run in the same direction as the main scale

• n divisions on the vernier scale coincide with (n − 1) divisions on the main scale (vernier divisions smaller than main-scale divisions)

• Least count is d/n, where d is the value of the smallest division on the main scale

Page 121: CE 406 – Advanced Surveying

Retrograde Vernier

• Graduations on the vernier run in the opposite direction to the main scale

• n divisions on the vernier scale coincide with (n + 1) divisions on the main scale (vernier divisions larger than main-scale divisions)

• Least count is d/n, where d is the value of the smallest division on the main scale (see the check below)
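For both types the least count works out the same way. A small Python check, with assumed, illustrative instrument values:

def least_count(d, n):
    # d: value of the smallest main-scale division; n: number of vernier divisions
    return d / n

print(least_count(20.0, 40))  # theodolite circle: 20' divisions, 40-division vernier -> 0.5' = 30"
print(least_count(1.0, 10))   # 1 mm main-scale divisions, 10-division vernier -> 0.1 mm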

Page 122: CE 406 – Advanced Surveying

CONVENTIONAL SYMBOLS USED IN CHAIN SURVEYING

Page 123: CE 406 – Advanced Surveying

Types of Chains

1. Gunter's or Surveyor's Chain
• 66 ft long, divided into 100 links.
• 10 square chains are equal to 1 acre; 10 chain lengths equal 1 furlong (1/8 of a mile) (verified in the quick check after this table).
• Used for land measurement and for marking milestones along roads.

2. Revenue Chain
• 33 ft long, divided into 16 links.
• Used for cadastral surveys (surveys, maps or plans on a large scale, i.e. usually a topographical map, which exaggerates the dimensions of houses and the breadth of roads and streams for the sake of distinctness).

3. Engineer's Chain
• 100 ft long, divided into 100 links.
• Used for all engineering surveys in feet.

4. Metric Chain
• IS 1492-1970 specifies the requirements of metric chains.
• Commonly of 20 m or 30 m length, having 100 links with tallies at every 5 m (for quick and easy reading).
• The letter "M" is engraved on the tallies to distinguish a metric chain from a non-metric chain.
• Simple rings are provided at every metre.

Page 129: CE 406 – Advanced Surveying

4. Metric Chain (contd.)
• Links are formed of pieces of galvanized steel wire and connected together by means of three oval-shaped rings; the oval shape affords flexibility to the chain.
• A groove is cut on the outside of the brass handle for insertion of the arrow.
• The brass handle with swivel joint facilitates turning of the chain without twisting.
• The total length is marked on the brass handle.

5. Steel Band (also known as band chain)
• Consists of a ribbon of steel 12 to 16 mm wide and 0.3 to 0.6 mm thick.
• The steel ribbon is wound on an open steel cross or in a metal reel.
• Available in 20 m or 30 m lengths.
• The band is marked by one of the following methods:
  o Brass studs at every 0.2 m with numbering at every metre; the last links from either end are subdivided into cm and mm.
  o Graduations etched as m, dm and cm on one side of the band, with 0.2 m links on the other side.
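The unit relations quoted for Gunter's chain are easy to verify; a quick Python check using nothing beyond the figures in the table:

CHAIN_FT = 66.0                 # length of Gunter's chain in feet
print(10 * CHAIN_FT)            # 660 ft = 1 furlong, since 5280 ft / 8 = 660 ft
print(10 * CHAIN_FT ** 2)       # 43560 sq ft = 1 acre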

Page 130: CE 406 – Advanced Surveying

Advantages and Disadvantages of Chain over Steel Band.

• Advantages of Chain over Steel bands: o It can be read easily.

o It can be repaired easily.

o Being more flexible, ideal for surveying rough terrain.

o Does not require frequent cleaning as steel band requires.

• Disadvantages of Chain over Steel bands: o Due to opening of links, the actual chain length may become more than the marked length, and due to bending its length gets shortened, whereas the length of a steel band is practically unaltered.

o Chain is heavy and cumbersome.

o In chaining sloping ground, suspended chain has more sag and needs sag corrections for horizontally measured lengths.

Page 141: CE 406 – Advanced Surveying

Principle of Chain Surveying: • Divide the area into a number of triangles of suitable sides.

• The triangle is the simplest plane geometrical figure that can be plotted from the lengths of its sides alone – the reason a network of triangles is preferred.

• Chain Surveying – simple and no need for measuring angles.

Applications of Chain Surveying: • On level ground and open with simple details.

• Area to be surveyed preferably of small extent.

• For ordinary works only as its length alters due to continued use.

• Sagging of chain- reduces accuracy of measurement.

• Can be read and repaired in the field itself.

• Suitable for rough usage.

Limitations of Chain Surveying:

• Unsuitable for large area crowded with many details.

• Unsuitable for undulated and wooded areas.

Page 142: CE 406 – Advanced Surveying

Technical Terms in Chain Surveying: • Main Survey Station: The point where two sides of a main triangle meet.

• Tie Stations (subsidiary stations): Stations selected on the main survey lines for running auxiliary lines.

• Base Line: Longest of the main survey lines.

– Main reference line for fixing the positions of various stations and also to fix the direction of other lines.

– Accuracy of entire triangulation critically depends on this measurement.

• Check line: used in the field in order to check the accuracy of the measurements.

• Tie Line: is the line joining tie stations and subsidiary stations.

• Offset: Important details such as boundaries, fences, buildings and towers are located w.r.t main chain lines by means of lateral measurements. (Perpendicular and Oblique)

Page 143: CE 406 – Advanced Surveying

Ranging Out Survey Lines

• In measuring the length of a survey line (chain line), it is necessary that the chain should be laid out on the ground in a straight line between the end stations.

• When a survey line is longer than a chain length, it is necessary to align intermediate points on chain line so that the measurements are along the line.

• The process of establishing intermediate point on a straight line between two end points is known as ranging.

• Two methods of ranging – Direct ranging: when the stations are inter-visible

– Indirect ranging: when the stations are not inter-visible (also called reciprocal ranging)

Page 144: CE 406 – Advanced Surveying

Direct Ranging

Page 145: CE 406 – Advanced Surveying

Indirect Ranging

• Indirect ranging is used when the stations are not inter-visible due to high ground or a hill or if the ends are too far apart.

• Intermediate points can be fixed on the survey line by reciprocal ranging.

• This method may also be used in ranging a line across a valley or river.

Page 147: CE 406 – Advanced Surveying

Reciprocal Ranging • Let A and B be two stations with rising ground or a hill between them.

• Let two chainmen with ranging rods take up arbitrary positions M1 and N1, such that the chainman at M1 can see both rods at N1 and B, and the chainman at N1 can see the rods at M1 and A.

• The chainman at N1 directs the chainman at M1 to shift the ranging rod to M2, in line with N1 and A; the chainman at M2 then directs the chainman at N1 to shift the ranging rod to N2, in line with M2 and B.

• By successively directing each other to be in line with the end points, their positions are changed until finally both are exactly on line AB.

• Now the four ranging rods at A, M, N and B are on the same straight line. (A toy simulation of this convergence follows.)
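A toy Python simulation of this convergence (the coordinates and step logic are illustrative assumptions, not from the text): each chainman is repeatedly placed on the other's line of sight to the far end station, and both offsets from line AB shrink toward zero.

import numpy as np

A = np.array([0.0, 0.0])      # end stations; line AB is the x-axis here
B = np.array([100.0, 0.0])
M = np.array([30.0, 6.0])     # arbitrary starting positions off the line
N = np.array([70.0, 9.0])

def place_on_line(p, q, x):
    # move rod x onto the line of sight p -> q, keeping its along-line position
    u = (q - p) / np.linalg.norm(q - p)
    return p + np.dot(x - p, u) * u

for step in range(6):
    M = place_on_line(A, N, M)    # chainman at N lines M in with A
    N = place_on_line(B, M, N)    # chainman at M lines N in with B
    print(step + 1, round(M[1], 4), round(N[1], 4))   # offsets from AB decrease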

Page 148: CE 406 – Advanced Surveying

Offsets

• Perpendicular offsets

– Cross Staff (draw and explain how to set perpendiculars using cross staff)

– Optical Square (draw and explain how to set perpendiculars using Optical square)

• With reflecting mirror

• With prism

• Oblique Offsets

– Using two linear measurements

– Two angular measurements

– One linear and one angular measurements

Page 150: CE 406 – Advanced Surveying

Well Conditioned Triangle

• A triangle of such a shape that any error in angular measurement has a minimum effect upon the computed lengths is known as a well-conditioned triangle.

• The best shape of a well-conditioned triangle is an isosceles triangle whose base angles are 56°14' each.

• However, from practical considerations, an equilateral triangle may be treated as a well-conditioned triangle.

• In actual practice, the triangles having an angle less than 30° or more than 120° should not be considered.

Page 151: CE 406 – Advanced Surveying

Well conditioned triangle

• In any triangle of a triangulation system, the length of one side is generally obtained from computation of the adjacent triangle.

• The error in the other two sides if any, will affect the sides of the triangles whose computation is based upon their values.

• Due to accumulated errors, entire triangulation system is thus affected thereafter.

• To ensure that the two sides of any triangle are equally affected, they should therefore be nearly equal in length; that is, the triangle should be well-conditioned.

Page 152: CE 406 – Advanced Surveying

Topics Covered in Class • Obstacles in Chain Surveying

– Obstacles for chaining

– Obstacles for ranging

– Obstacles for both

– Numerical Problems

• Chaining Over Sloped ground (Explain the procedure with diagram) – Direct method

– Indirect method

• Measuring the vertical angle

– Theodolite

– Clinometer

• Measuring the level difference

– Theodolite

– Dumpy Level

• Hypotenusal Allowance

Page 153: CE 406 – Advanced Surveying

Topics Covered in Class • Chain Traversing

– Reconnaissance

– Marking and fixing Survey stations • Criteria for selecting survey stations and survey lines

• Arrangement of survey lines – Base Line

– Main Lines

– Tie Lines

– Check lines

• Locating ground features (offsets)

– Running Survey Lines

• Plotting of Chain Survey

– One line

– Double line

– Conventional symbols

Page 154: CE 406 – Advanced Surveying

Accuracy & Precision

• Accuracy : Degree of closeness of measured value to true value.

– The accuracy of any measurement is very hard to judge, as the true value is almost never known

• Precision: Degree of closeness of measured value to other measured values

Page 155: CE 406 – Advanced Surveying

Sources of Errors

• Instrumental

– Faulty and out of calibration instruments

• Personal

– Human error in adjustment and reading of results

• Natural

– Errors due to refraction, fog, temperature , humidity, wind etc

Page 156: CE 406 – Advanced Surveying

Types of Errors

• Mistakes – Errors due to inattention, carelessness or inexperience – Hard to detect and correct – Every value taken in the field must be independently verified by another person

• Systematic Error – Cumulative error – Under similar conditions the error has the same magnitude and sign – All equipment must be periodically calibrated and checked to minimize this error

• Accidental Error – Compensating error over a large set of readings – Random error over a small set of readings – Obeys the laws of probability – Can be corrected using probability curves and most-probable-value methods

Page 157: CE 406 – Advanced Surveying

Errors in Chain Survey

• Erroneous length of Chain or Tape

– Cumulative (+ or-)

– Error due to wrong length of chain

– Serious source of error

– If length of chain is more measured distance will be less and error is –ve

– If length of chain is less measured distance will be more and error is +ve

– The length of the tape or chain must be checked periodically
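The standard correction implied by the two sign rules above is: true distance = measured distance × (actual chain length / nominal chain length). A one-function Python sketch with illustrative numbers:

def true_distance(measured, actual_chain, nominal_chain):
    return measured * actual_chain / nominal_chain

# a nominal 20 m chain found to be 20.04 m long: measured distances come out
# too short, so the correction is positive
print(true_distance(452.0, 20.04, 20.0))   # 452.904 m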

Page 158: CE 406 – Advanced Surveying

Errors in Chain Survey

• Bad Ranging

– Cumulative +

– The shortest distance connecting two points is a straight line

– Any deviation from this is always longer

– Has great effect on offset measurements

• Careless holding and marking

– Compensating + or –

– Placement and holding of arrows

– Can be cumulative for short stretch of lengths

Page 159: CE 406 – Advanced Surveying

Errors in Chain Survey

• Bad Straightening and Non Horizontality – Cumulative +

– If the chain lies in an irregular horizontal curve, or deviates vertically from the straight line, the measured distance will always be greater than the actual distance

• Sag in Chain – Cumulative +

– When the chain is not stretched properly, it tends to sag in the centre, causing the measured distance to be greater than the actual distance

Page 160: CE 406 – Advanced Surveying

Errors in Chain Survey

• Variation in Temperature – Cumulative + or – – When a chain or tape is used at a temperature other than the calibration temperature, its length changes – Length increases with an increase in temperature and vice versa

• Variation in Pull – Cumulative + or –, compensating + or – – If the pull applied differs from the calibration pull, the length changes – If the pull applied is not measured and is variable, the errors tend to compensate – However, if a chainman always applies the same pull, different from the standard pull, the error is cumulative

Page 161: CE 406 – Advanced Surveying

Errors in Chain Survey

• Personal Mistakes – Displacement of arrows – Miscounting chain length – Misreading – Erroneous booking

• Relative importance of errors – Cumulative errors are more important than compensating errors – One cumulative error might compensate another cumulative error: a greater pull may offset sag, or a high temperature may offset a short chain length

– Over a short line, compensating errors do not compensate – The more times a line is measured, the more likely accidental errors are to disappear from the average value (see the small simulation below)
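A small Monte Carlo sketch of the last point (all numbers are illustrative): the spread of the mean of n measurements shrinks roughly as 1/sqrt(n) when the errors are accidental.

import random

random.seed(1)
TRUE_LENGTH = 250.0   # metres
SIGMA = 0.05          # std. deviation of a single accidental error, metres

for n in (2, 10, 50, 200):
    means = []
    for _ in range(2000):                       # repeat the experiment many times
        obs = [TRUE_LENGTH + random.gauss(0, SIGMA) for _ in range(n)]
        means.append(sum(obs) / n)
    rms = (sum((m - TRUE_LENGTH) ** 2 for m in means) / len(means)) ** 0.5
    print(n, round(rms, 4))                     # decreases roughly as 1/sqrt(n)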

Page 162: CE 406 – Advanced Surveying

Tape Corrections

• Correction for absolute length

• Correction for temperature

• Correction for pull

• Correction for sag

• Correction for slope

• Correction for horizontal alignment

• Reduction to mean sea level

• Correction to Measurement in vertical plane

Page 163: CE 406 – Advanced Surveying

Correction for absolute length

• Applied if the actual (absolute) length of the tape is not equal to its standardized (nominal) value

• If absolute length > standardized length – Measured distance is too short – Correction is +ve

• If absolute length < standardized length – Measured distance is too long – Correction is –ve

• Ca = L · c / l – where – Ca : correction for absolute length – L : measured length – c : absolute length − standardized length – l : standardized length of tape

Page 164: CE 406 – Advanced Surveying

Correction for Temperature

• The length of a tape or chain varies with temperature • As temperature increases, length increases, and vice versa • Ct = α · (Tm − T0) · L

– Ct : temperature correction (m) – α : coefficient of thermal expansion (°C⁻¹) – Tm : average temperature in the field during measurement (°C) – T0 : temperature during standardization of the tape (°C) – L : measured length (m) – If Tm > T0, Ct is +ve – If Tm < T0, Ct is −ve

Page 165: CE 406 – Advanced Surveying

Correction for Pull • Length of tape standardized at a particular force.

• If applied pull > standard pull the tape length increases, the correction is +ve

• If applied pull < standard pull the tape length decreases, the correction is –ve

• Cp = (P-P0)L/AE – Cp Correction for pull (m)

– P : Pull applied in field (kg or N)

– P0: Standard Pull (kg or N)

– L : Measured Length (m)

– A : Cross Sectional Area of chain or tape (mm2,cm2)

– E: Young’s Modulus of Elasticity (N/mm2, kg/cm2)
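The first three corrections can be collected into one small Python sketch (symbols as in the slides; the numeric values below are illustrative assumptions):

def c_absolute(L, c, l):
    return L * c / l                 # Ca = L*c/l

def c_temperature(alpha, Tm, T0, L):
    return alpha * (Tm - T0) * L     # Ct = alpha*(Tm - T0)*L

def c_pull(P, P0, L, A, E):
    return (P - P0) * L / (A * E)    # Cp = (P - P0)*L/(A*E)

L = 1200.0                                     # measured length, m
print(c_absolute(L, 0.003, 30.0))              # 30 m tape actually 30.003 m -> +0.12 m
print(c_temperature(1.15e-5, 32.0, 20.0, L))   # steel, field 32 degC, standard 20 degC -> +0.1656 m
print(c_pull(150.0, 100.0, L, 4.0, 2.1e5))     # N, mm^2, N/mm^2 -> +0.0714 m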

Page 166: CE 406 – Advanced Surveying

Correction for Sag

• When a tape is stretched between two supports, it takes the form of a catenary

• The chord (horizontal) distance is less than the curved length; therefore the sag correction is always −ve

• For the purpose of determining the correction the curve is assumed to be a parabola

Page 167: CE 406 – Advanced Surveying

Correction for Sag

• Cs1 = l1 · (w·l1)² / (24·P²)

– Cs1 : sag correction for a span of length l1 (m)

– l1 : length of tape suspended between two supports (m)

– w : weight of tape per unit length (N/m)

– P : pull applied in the field (N)

• Cs = n·Cs1 = l·W² / (24·n²·P²) – Cs : sag correction per tape length (m)

– n : number of equal spans

– W : total weight of tape (N)

– l : tape length (m)

Page 168: CE 406 – Advanced Surveying

Correction for Sag • Total sag correction = N·Cs + sag correction for any fractional tape length

– L : total measured length

– N : number of whole tape lengths

• Correction for Sag and Slope

– If the two supports are at different levels, this correction accounts for both sag and slope; no separate slope correction is required.

– Cs′ = Cs · cos²θ · (1 ± w·l·sinθ / P) • ± : + when the pull P is applied at the higher end, − when applied at the lower end

• θ : slope angle
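A Python sketch of the sag formulas above (illustrative tape data; inputs in newtons, metres and radians):

import math

def c_sag_span(l1, w, P):
    # Cs1 = l1*(w*l1)**2 / (24*P**2), always subtracted
    return l1 * (w * l1) ** 2 / (24.0 * P ** 2)

def c_sag_slope(cs, w, l, P, theta, pull_at_higher_end=True):
    # Cs' = Cs*cos(theta)**2 * (1 +/- w*l*sin(theta)/P)
    sign = 1.0 if pull_at_higher_end else -1.0
    return cs * math.cos(theta) ** 2 * (1.0 + sign * w * l * math.sin(theta) / P)

w = 0.31          # tape weight per metre, N/m
P = 100.0         # applied pull, N
l1 = 30.0         # span length, m
cs = c_sag_span(l1, w, P)
print(round(cs, 5))                                          # ~0.01081 m per 30 m span
print(round(c_sag_slope(cs, w, l1, P, math.radians(4)), 5))  # combined sag-and-slope value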

Page 169: CE 406 – Advanced Surveying

GS400.02

Introduction to Photogrammetry

T. [email protected]

Autumn Quarter 2005

Department of Civil and Environmental Engineering and Geodetic Science, The Ohio State University

2070 Neil Ave., Columbus, OH 43210

Page 170: CE 406 – Advanced Surveying

Contents

1 Introduction 1
  1.1 Preliminary Remarks 1
  1.2 Definitions, Processes and Products 3
    1.2.1 Data Acquisition 4
    1.2.2 Photogrammetric Products 5
      Photographic Products 5
      Computational Results 5
      Maps 6
    1.2.3 Photogrammetric Procedures and Instruments 6
  1.3 Historical Background 7

2 Film-based Cameras 11
  2.1 Photogrammetric Cameras 11
    2.1.1 Introduction 11
    2.1.2 Components of Aerial Cameras 12
      Lens Assembly 12
      Inner Cone and Focal Plane 13
      Outer Cone and Drive Mechanism 14
      Magazine 14
    2.1.3 Image Motion 14
    2.1.4 Camera Calibration 16
    2.1.5 Summary of Interior Orientation 19
  2.2 Photographic Processes 20
    2.2.1 Photographic Material 20
    2.2.2 Photographic Processes 21
      Exposure 21
      Sensitivity 22
      Colors and Filters 22
      Processing Color Film 23
    2.2.3 Sensitometry 23
    2.2.4 Speed 25
    2.2.5 Resolving Power 26

Page 171: CE 406 – Advanced Surveying


3 Digital Cameras 29
  3.1 Overview 29
    3.1.1 Camera Overview 30
    3.1.2 Multiple frame cameras 31
    3.1.3 Line cameras 31
    3.1.4 Camera Electronics 32
    3.1.5 Signal Transmission 34
    3.1.6 Frame Grabbers 34
  3.2 CCD Sensors: Working Principle and Properties 34
    3.2.1 Working Principle 35
    3.2.2 Charge Transfer 37
      Linear Array With Bilinear Readout 37
      Frame Transfer 37
      Interline Transfer 37
    3.2.3 Spectral Response 38

4 Properties of Aerial Photography 41
  4.1 Introduction 41
  4.2 Classification of aerial photographs 41
    4.2.1 Orientation of camera axis 42
    4.2.2 Angular coverage 42
    4.2.3 Emulsion type 43
  4.3 Geometric properties of aerial photographs 43
    4.3.1 Definitions 43
    4.3.2 Image and object space 45
    4.3.3 Photo scale 46
    4.3.4 Relief displacement 47

5 Elements of Analytical Photogrammetry 49
  5.1 Introduction, Concept of Image and Object Space 49
  5.2 Coordinate Systems 50
    5.2.1 Photo-Coordinate System 50
    5.2.2 Object Space Coordinate Systems 52
  5.3 Interior Orientation 52
    5.3.1 Similarity Transformation 52
    5.3.2 Affine Transformation 53
    5.3.3 Correction for Radial Distortion 54
    5.3.4 Correction for Refraction 55
    5.3.5 Correction for Earth Curvature 56
    5.3.6 Summary of Computing Photo-Coordinates 57
  5.4 Exterior Orientation 59
    5.4.1 Single Photo Resection 61
    5.4.2 Computing Photo Coordinates 61
  5.5 Orientation of a Stereopair 61
    5.5.1 Model Space, Model Coordinate System 61
    5.5.2 Dependent Relative Orientation 63

Page 172: CE 406 – Advanced Surveying


    5.5.3 Independent Relative Orientation 65
    5.5.4 Direct Orientation 66
    5.5.5 Absolute Orientation 67

6 Measuring Systems 71
  6.1 Analytical Plotters 71
    6.1.1 Background 71
    6.1.2 System Overview 71
      Stereo Viewer 72
      Translation System 72
      Measuring and Recording System 73
      User Interface 74
      Electronics and Real-Time Processor 75
      Host Computer 76
      Auxiliary Devices 76
    6.1.3 Basic Functionality 76
      Model Mode 76
      Comparator Mode 77
    6.1.4 Typical Workflow 77
      Definition of System Parameters 77
      Definition of Auxiliary Data 78
      Definition of Project Parameters 78
      Interior Orientation 78
      Relative Orientation 79
      Absolute Orientation 79
    6.1.5 Advantages of Analytical Plotters 79
  6.2 Digital Photogrammetric Workstations 79
    6.2.1 Background 81
      Digital Photogrammetric Workstation and Digital Photogrammetry Environment 81
    6.2.2 Basic System Components 82
    6.2.3 Basic System Functionality 84
      Storage System 85
      Viewing and Measuring System 86
      Stereoscopic Viewing 88
      Roaming 90
  6.3 Analytical Plotters vs. DPWs 94

Page 173: CE 406 – Advanced Surveying

Chapter 1

Introduction

1.1 Preliminary Remarks

This course provides a general overview of photogrammetry, its theory and general working principles, with an emphasis on concepts rather than detailed operational knowledge.

Photogrammetry is an engineering discipline and as such heavily influenced by developments in computer science and electronics. The ever increasing use of computers has had, and will continue to have, a great impact on photogrammetry. The discipline is, as many others, in a constant state of change. This becomes especially evident in the shift from analog to analytical and digital methods.

There has always been what we may call a technological gap: between the latest findings in research on the one hand and the implementation of these results in manufactured products; and secondly between the manufactured product and its general use in an industrial process. In that sense, photogrammetric practice is an industrial process. A number of organizations are involved in this process. Inventions are likely to be associated with research organizations, such as universities, research institutes and the research departments of industry. The development of a product based on such research results is a second phase and is carried out, for example, by companies manufacturing photogrammetric equipment. Between research and development there are many similarities, the major difference being the fact that the results of research activities are not known beforehand; development goals, on the other hand, are accurately defined in terms of product specifications, time and cost.

The third partner in the chain is the photogrammetrist: he daily uses the instruments and methods and gives valuable feedback to researchers and developers. Fig. 1.1 illustrates the relationship among the different organizations and the time elapsed from the moment of an invention until it becomes operational and available to photogrammetric practice.

Analytical plotters may serve as an example for the time gap discussed above. Invented in the late fifties, they were only manufactured in quantities nearly twenty years later; they have been in widespread use since the early eighties.

Page 175: CE 406 – Advanced Surveying

[Figure 1.1: Time gap between research, development and operational use of a new method or instrument.]

Another example is aerial triangulation. The mathematical foundation was laid in the fifties, the first programs became available in the late sixties, but it took another decade before they were widely used in photogrammetric practice.

There are only a few manufacturers of photogrammetric equipment. The two leading companies are Leica (a recent merger of the former Swiss companies Wild and Kern), and Carl Zeiss of Germany (before unification there were two separate companies: Zeiss Oberkochen and Zeiss Jena).

Photogrammetry and remote sensing are two related fields. This is also manifest in national and international organizations. The International Society of Photogrammetry and Remote Sensing (ISPRS) is a non-governmental organization devoted to the advancement of photogrammetry and remote sensing and their applications. It was founded in 1910. Members are national societies representing professionals and specialists of photogrammetry and remote sensing of a country. Such a national organization is the American Society of Photogrammetry and Remote Sensing (ASPRS).

The principal difference between photogrammetry and remote sensing is in the application: while photogrammetrists produce maps and precise three-dimensional positions of points, remote sensing specialists analyze and interpret images for deriving information about the earth's land and water areas. As depicted in Fig. 1.2, both disciplines are also related to Geographic Information Systems (GIS) in that they provide GIS with essential information. Quite often, the core of topographic information is produced by photogrammetrists in the form of a digital map.

ISPRS adopted the metric system and we will be using it in this course. Where appropriate, we will occasionally use feet, particularly in regard to focal lengths of cameras. Despite considerable effort there is, unfortunately, not a unified nomenclature. We follow as closely as possible the terms and definitions laid out in [1]. Students who are interested in a more thorough treatment of photogrammetry are referred to [2], [3], [4], [5]. Finally, some of the leading journals are mentioned. The official journal published by ISPRS is called Photogrammetry and Remote Sensing. ASPRS' journal, Photogrammetric Engineering and Remote Sensing (PERS), appears monthly, while Photogrammetric Record, published by the British Society of Photogrammetry and Remote Sensing, appears six times a year. Another renowned journal is Zeitschrift für Photogrammetrie und Fernerkundung (ZPF), published monthly by the German Society.

Page 176: CE 406 – Advanced Surveying

[Figure 1.2: Relationship of photogrammetry, remote sensing and GIS.]

1.2 Definitions, Processes and Products

There is no universally accepted definition of photogrammetry. The definition given below captures the most important notion of photogrammetry.

Photogrammetry is the science of obtaining reliable information about the properties of surfaces and objects without physical contact with the objects, and of measuring and interpreting this information.

The name "photogrammetry" is derived from the three Greek words phos or phot, meaning light; gramma, meaning letter or something drawn; and metrein, the noun of measure.

In order to simplify understanding an abstract definition and to get a quick grasp of the complex field of photogrammetry, we adopt a systems approach. Fig. 1.3 illustrates the idea. In the first place, photogrammetry is considered a black box. The input is characterized by obtaining reliable information through processes of recording patterns of electromagnetic radiant energy, predominantly in the form of photographic images. The output, on the other hand, comprises the photogrammetric products generated within the black box, whose functioning we will unravel during this course.

Page 177: CE 406 – Advanced Surveying

[Figure 1.3: Photogrammetry portrayed as a systems approach. The input is usually referred to as data acquisition (camera → photographs; sensor → digital imagery); the "black box" involves photogrammetric procedures and instruments (scanner, rectifier, orthophoto projector, comparator, stereoplotter, analytical plotter, softcopy workstation); the output comprises photogrammetric products (rectifications, enlargements/reductions, photographic products, orthophotos, points, DEMs/profiles/surfaces, topographic and special maps).]

1.2.1 Data Acquisition

Data acquisition in photogrammetry is concerned with obtaining reliable information about the properties of surfaces and objects. This is accomplished without physical contact with the objects, which is, in essence, the most obvious difference to surveying. The remotely received information can be grouped into four categories:

geometric information involves the spatial position and the shape of objects. It is the most important information source in photogrammetry.

physical information refers to properties of electromagnetic radiation, e.g., radiant energy, wavelength, and polarization.

semantic information is related to the meaning of an image. It is usually obtained by interpreting the recorded data.

temporal information is related to the change of an object in time, usually obtained by comparing several images which were recorded at different times.

As indicated in Table 1.1, the remotely sensed objects may range from planets to portions of the earth's surface, to industrial parts, historical buildings or human bodies. The generic name for data acquisition devices is sensor, consisting of an optical and a detector system. The sensor is mounted on a platform. The most typical sensors are cameras, where photographic material serves as the detector.

Page 178: CE 406 – Advanced Surveying

Table 1.1: Different areas of specialization of photogrammetry, their objects and sensor platforms.

object               sensor platform          specialization
planet               space vehicle            space photogrammetry
earth's surface      airplane, space vehicle  aerial photogrammetry
industrial part      tripod                   industrial photogrammetry
historical building  tripod                   architectural photogrammetry
human body           tripod                   biostereometrics

They are mounted on airplanes as the most common platforms. Table 1.1 summarizes the different objects and platforms and associates them with the different applications of photogrammetry.

1.2.2 Photogrammetric Products

The photogrammetric products fall into three categories: photographic products, computational results, and maps.

Photographic Products

Photographic products are derivatives of single photographs or composites of overlapping photographs. Fig. 1.4 depicts the typical case of photographs taken by an aerial camera. During the time of exposure, a latent image is formed, which is developed to a negative. At the same time diapositives and paper prints are produced. Enlargements may be quite useful for preliminary design or planning studies. A better approximation to a map are rectifications. A plane rectification involves just tipping and tilting the diapositive so that it will be parallel to the ground. If the ground has relief, then the rectified photograph still has errors. Only a differentially rectified photograph, better known as an orthophoto, is geometrically identical with a map.

Composites are frequently used as a first base for general planning studies. Photomosaics are best known, but composites with orthophotos, called orthophoto maps, are also used, especially now with the possibility to generate them with methods of digital photogrammetry.

Computational Results

Aerial triangulation is a very successful application of photogrammetry. It delivers 3-D positions of points, measured on photographs, in a ground control coordinate system, e.g., a state plane coordinate system.

Profiles and cross sections are typical products for highway design where earthwork quantities are computed.

Page 179: CE 406 – Advanced Surveying

[Figure 1.4: Negative, diapositive, enlargement, reduction and plane rectification.]

Inventory calculations of coal piles or mineral deposits are other examples which may require profile and cross-section data. The most popular form for representing portions of the earth's surface is the DEM (Digital Elevation Model). Here, elevations are measured at regularly spaced grid points.

Maps

Maps are the most prominent product of photogrammetry. They are produced at various scales and degrees of accuracy. Planimetric maps contain only the horizontal positions of ground features, while topographic maps include elevation data, usually in the form of contour lines and spot elevations. Thematic maps emphasize one particular feature, e.g., the transportation network.

1.2.3 Photogrammetric Procedures and Instruments

In our attempt to gain a general understanding of photogrammetry, we adopted a systems approach. So far we have addressed the input and output. Obviously, the task of photogrammetric procedures is to convert the input to the desired output. Let us take an aerial photograph as a typical input and a map as a typical output. Now, what are the main differences between the two? Table 1.2 lists three differences. First, the projection system is different, and one of the major tasks in photogrammetry is to establish the corresponding transformations. This is accomplished by mechanical/optical means in analog photogrammetry, or by computer programs in analytical photogrammetry.

Another obvious difference is the amount of data. To appreciate this comment, let us digress for a moment and find out how much data an aerial photograph contains. We can approach this problem by continuously dividing the photograph into four parts. After a while, the ever smaller quadrants reach a size where the information they contain is not different. Such a small area is called a pixel when the image is stored on a computer. A pixel then is the smallest unit of an image, and its value is the gray shade of that particular image location. Usually, the continuous range of gray values is divided into 256 discrete values, because 1 byte is sufficient to store a pixel. Experience tells us that the smallest pixel size is about 5 µm. Considering the size of a photograph (9 inches or 22.8 cm), we have approximately half a gigabyte (0.5 GB) of data for one photograph.

Page 180: CE 406 – Advanced Surveying

Table 1.2: Differences between photographs and maps.

             photograph   map          task
projection   central      orthogonal   transformations
data         ≈ 0.5 GB     few KB       data reduction
information  implicit     explicit     feature identification and feature extraction

A map depicting the same scene will only have a few thousand bytes of data. Consequently, another important task is data reduction.

The information we want to represent on a map is explicit. By that we mean that all data is labeled: a point or a line has an attribute associated with it which says something about the type and meaning of the point or line. This is not the case for an image; a pixel has no attribute associated with it which would tell us what feature it belongs to. Thus, the relevant information is only implicitly available. Making information explicit amounts to identifying and extracting those features which must be represented on the map.

Finally, we refer back to Fig. 1.3 and point out the various instruments that are used to perform the tasks described above. A rectifier is a kind of copy machine for making plane rectifications. In order to generate orthophotos, an orthophoto projector is required. A comparator is a precise measuring instrument which lets you measure points on a diapositive (photo coordinates). It is mainly used in aerial triangulation. In order to measure 3-D positions of points in a stereo model, a stereo plotting instrument, or stereo plotter for short, is used. It performs the transformation from central projection to orthogonal projection in an analog fashion. This is the reason why these instruments are sometimes less officially called analog plotters. An analytical plotter establishes the transformation computationally. Both types of plotters are mainly used to produce maps, DEMs and profiles.

A recent addition to photogrammetric instruments is the softcopy workstation. It is the first tangible product of digital photogrammetry. Consequently, it deals with digital imagery rather than photographs.

1.3 Historical Background

The development of photogrammetry clearly depends on the general development of science and technology. It is interesting to note that the four major phases of photogrammetry are directly related to the technological inventions of photography, airplanes, computers and electronics.

Fig. 1.5 depicts the four generations of photogrammetry. Photogrammetry had its beginning with the invention of photography by Daguerre and Niepce in 1839. The first generation, from the middle to the end of the last century, was very much a pioneering and experimental phase, with remarkable achievements in terrestrial and balloon photogrammetry.

[Figure 1.5: Major photogrammetric phases as a result of technological innovations. On a timeline from 1850 to 2000, the invention of photography marks the first generation, the invention of the airplane marks analog photogrammetry, and the invention of the computer marks analytical and then digital photogrammetry.]

The second generation, usually referred to as analog photogrammetry, is characterized by the invention of stereophotogrammetry by Pulfrich (1901). This paved the way for the construction of the first stereoplotter by Orel, in 1908. Airplanes and cameras became operational during the first world war. Between the two world wars, the main foundations of aerial survey techniques were built, and they stand until today. Analog rectification and stereoplotting instruments, based on mechanical and optical technology, became widely available. Photogrammetry established itself as an efficient surveying and mapping method. The basic mathematical theory was known, but the amount of computation was prohibitive for numerical solutions, and consequently all the efforts were aimed toward analog methods. Von Gruber is said to have called photogrammetry the art of avoiding computations.

With the advent of the computer, the third generation began, under the motto of analytical photogrammetry. Schmid was one of the first photogrammetrists who had access to a computer. He developed the basis of analytical photogrammetry in the fifties, using matrix algebra. For the first time a serious attempt was made to employ adjustment theory to photogrammetric measurements. It still took several years before the first operational computer programs became available. Brown developed the first block adjustment program based on bundles in the late sixties, shortly before Ackermann reported on a program with independent models as the underlying concept.

Page 182: CE 406 – Advanced Surveying


As a result, the accuracy performance of aerial triangulation improved by a factor of ten.

Apart from aerial triangulation, the analytical plotter is another major invention of the third generation. Again, we observe a time lag between invention and introduction to the photogrammetric practice. Helava invented the analytical plotter in the late fifties. However, the first instruments became available on a broad basis only in the seventies.

The fourth generation, digital photogrammetry, is rapidly emerging as a new discipline in photogrammetry. In contrast to all other phases, digital images are used instead of aerial photographs. With the availability of storage devices which permit rapid access to digital imagery, and special microprocessor chips, digital photogrammetry began in earnest only a few years ago. The field is still in its infancy and has not yet made its way into the photogrammetric practice.

References

[1] Multilingual Dictionary of Remote Sensing and Photogrammetry, ASPRS, 1983, p. 343.

[2] Manual of Photogrammetry, ASPRS, 4th Ed., 1980, p. 1056.

[3] Moffit, F.H. and E. Mikhail, 1980. Photogrammetry, 3rd Ed., Harper & Row Publishers, NY.

[4] Wolf, P., 1980. Elements of Photogrammetry, McGraw-Hill Book Co., NY.

[5] Kraus, K., 1994. Photogrammetry, Ferd. Dümmler Verlag, Bonn.

Page 183: CE 406 – Advanced Surveying


Page 184: CE 406 – Advanced Surveying

Chapter 2

Film-based Cameras

2.1 Photogrammetric Cameras

2.1.1 Introduction

In the beginning of this chapter we introduced the term sensing device as a generic name for sensing and recording radiometric energy (see also Fig. 2.1). Fig. 2.1 shows a classification of the different types of sensing devices.

An example of an active sensing device is radar. An operational system sometimes used for photogrammetric applications is the side looking airborne radar (SLAR). Its chief advantage is the fact that radar waves penetrate clouds and haze. An antenna, attached to the belly of an aircraft, directs microwave energy to the side, at right angles to the direction of flight. The incident energy on the ground is scattered and partially reflected. A portion of the reflected energy is received at the same antenna. The time elapsed between energy transmitted and received can be used to determine the distance between antenna and ground.

Passive systems fall into two categories: image forming systems and spectral data systems. We are mainly interested in image forming systems, which are further subdivided into framing systems and scanning systems. In a framing system, data are acquired all at one instant, whereas a scanning system obtains the same information sequentially, for example scanline by scanline. Image forming systems record radiant energy at different portions of the spectrum. The spatial position of recorded radiation refers to a specific location on the ground. The imaging process establishes a geometric and radiometric relationship between spatial positions of object and image space.

Of all the sensing devices used to record data for photogrammetric applications, the photographic systems with metric properties are the most frequently employed. They are grouped into aerial cameras and terrestrial cameras. Aerial cameras are also called cartographic cameras. In this section we are only concerned with aerial cameras. Panoramic cameras are examples of non-metric aerial cameras. Fig. 2.2(a) depicts an aerial camera.

Page 185: CE 406 – Advanced Surveying

[Figure 2.1: Classification of sensing devices. Sensing devices divide into active and passive systems; passive systems into image forming systems and spectral data systems; image forming systems into framing systems (photographic systems, i.e. aerial and terrestrial cameras, and CCD array systems) and scanning systems (multispectral scanners and electron imagers).]

2.1.2 Components of Aerial Cameras

A typical aerial camera consists of the lens assembly, inner cone, focal plane, outer cone, drive mechanism, and magazine. These principal parts are shown in the schematic diagram of Fig. 2.2(b).

Lens Assembly

The lens assembly, also called the lens cone, consists of the camera lens (objective), the diaphragm, the shutter and the filter. The diaphragm and the shutter control the exposure. The camera is focused for infinity; that is, the image is formed in the focal plane.

Fig. 2.3 shows cross sections of lens cones with different focal lengths. Super-wide-angle lens cones have a focal length of 88 mm (3.5 in). The other extreme are narrow-angle cones with a focal length of 610 mm (24 in). Between these two extremes are wide-angle, intermediate-angle, and normal-angle lens cones, with focal lengths of 153 mm (6 in), 213 mm (8.25 in), and 303 mm (12 in), respectively. Since the film format does not change, the angle of coverage, or field for short, changes, as well as the scale. The most relevant data are compiled in Table 2.1. Refer also to Fig. 2.4, which illustrates the different configurations.

Page 186: CE 406 – Advanced Surveying

2.1 Photogrammetric Cameras 13

Figure 2.2: (a) Aerial camera Aviophot RC20 from Leica; (b) schematic diagram ofaerial camera.

Table 2.1: Data of different lens assemblies.

                      super-wide   wide-angle   intermediate   normal-angle   narrow-angle
focal length [mm]         88           153           210            305            610
field [°]                119            82            64             46             24
photo scale               7.2           4.0           2.9            2.0            1.0
ground coverage          50.4          15.5           8.3            3.9            1.0

The most relevant data are compiled in Table 2.1. Refer also to Fig. 2.4, which illustrates the different configurations.

Super-wide-angle lens cones are suitable for medium to small scale applications because the flying height, H, is much lower compared to a normal-angle cone (same photo scale assumed). Thus, atmospheric effects, such as clouds and haze, are much less of a problem. Normal-angle cones are preferred for large-scale applications over urban areas. Here, a super-wide-angle cone would generate many more occluded areas, particularly in built-up areas with tall buildings.
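To illustrate the flying-height comparison, here is a minimal sketch, assuming the photo scale number m = H/f as in the image motion example below; the scale 1:10,000 is a made-up value:

    # Flying height needed for photo scale 1:10,000 (m = H/f).
    m = 10000.0                                   # photo scale number
    for name, f_mm in [("super-wide", 88.0), ("normal-angle", 305.0)]:
        H = f_mm / 1000.0 * m                     # flying height in meters
        print(name, H)                            # 880.0 m vs. 3050.0 m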

Inner Cone and Focal Plane

For metric cameras it is very important to keep the lens assembly fixed with respect to the focal plane. This is accomplished by the inner cone. It consists of a metal with a low coefficient of thermal expansion so that the lens and the focal plane do not change their relative position. The focal plane contains fiducial marks, which define the fiducial coordinate system that serves as a reference system for metric photographs. The fiducial marks are either located at the corners or in the middle of the four sides.

Usually, additional information is printed on one of the marginal strips during the time of exposure. Such information includes the date and time, altimeter data, photo number, and a level bubble.

Figure 2.3: Cross-sectional views of aerial camera lenses.

Outer Cone and Drive Mechanism

As shown in Fig. 2.2(b), the outer cone supports the inner cone and holds the drive mechanism. The function of the drive mechanism is to wind and trip the shutter, to operate the vacuum, and to advance the film between exposures. The vacuum assures that the film is firmly pressed against the image plane, where it remains flat during exposure. Non-flatness would not only decrease the image quality (blurring) but also displace points, particularly in the corners.

Magazine

Obviously, the magazine holds the film, both exposed and unexposed. A film roll is 120 m long and provides 475 exposures. The magazine is also called film cassette. It is detachable, allowing magazines to be interchanged during a flight mission.

2.1.3 Image Motion

During the instant of exposure, the aircraft moves and with it the camera, including the image plane. Thus, a stationary object is imaged at different image locations, and the image appears to move. Image motion results not only from the forward movement of the aircraft but also from vibrations. Fig. 2.5 depicts the situation for forward motion.

Figure 2.4: Angular coverage, photo scale and ground coverage of cameras with different focal lengths.

An airplane flying with velocity v advances by a distance D = v t during the exposure time t. Since the object on the ground is stationary, its image moves by a distance d = D/m, where m is the photo scale. We have

d = v t / m = v t f / H   (2.1)

with f the focal length and H the flying height.

Example:

exposure time t    1/300 sec
velocity v         300 km/h
focal length f     150 mm
flying height H    1500 m
image motion d     28 µm

Image motion caused by vibrations in the airplane can also be computed using Eq. 2.1. For that case, vibrations are expressed as a time rate of change of the camera axis (angle/sec). Suppose the camera axis vibrates by 2°/sec. This corresponds to a distance Dv = 2 H/ρ = 52.3 m, with ρ ≈ 57.3°. Since this “displacement” occurs in one second, it can be considered a velocity. In our example, this velocity is 188.4 km/h, corresponding to an image motion of 18 µm. Note that in this case, the direction of image motion is random.
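The following minimal sketch reproduces both numbers of the example (Eq. 2.1 for forward motion; the vibration case converts the 2°/sec rate into an equivalent velocity):

    def image_motion_um(v_ms, t_s, f_mm, H_m):
        # d = v t f / H (Eq. 2.1), returned in micrometers
        return v_ms * t_s * (f_mm / 1000.0) / H_m * 1e6

    # Forward motion: 300 km/h = 83.3 m/s -> ~28 micrometers
    print(image_motion_um(300.0 / 3.6, 1.0 / 300.0, 150.0, 1500.0))

    # Vibration of 2 deg/sec: equivalent velocity 2 H / rho
    rho = 57.29578                                 # degrees per radian
    v_vib = 2.0 * 1500.0 / rho                     # ~52.4 m/s
    print(image_motion_um(v_vib, 1.0 / 300.0, 150.0, 1500.0))   # ~17.5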

As the example demonstrates, image motion may considerably decrease the image quality. For this reason, modern aerial cameras try to eliminate image motion. There are different mechanical/optical solutions, known as image motion compensation. The forward image motion can be reduced by moving the film during exposure such that the


image of an object does not move with respect to the emulsion. Since the direction of image motion caused by vibration is random, it cannot be compensated by moving the film. The only measure is a shock-absorbing camera mount.

Figure 2.5: Forward image motion.

2.1.4 Camera Calibration

During the process of camera calibration, the interior orientation of the camera is determined. The interior orientation data describe the metric characteristics of the camera needed for photogrammetric processes. The elements of interior orientation are:

1. The position of the perspective center with respect to the fiducial marks.

2. The coordinates of the fiducial marks, or distances between them so that coordinates can be determined.

3. The calibrated focal length of the camera.

4. The radial and decentering distortion of the lens assembly, including the origin of radial distortion with respect to the fiducial system.

5. Image quality measures such as resolution.

There are several ways to calibrate the camera. After assembling the camera, the manufacturer performs the calibration under laboratory conditions. Cameras should be calibrated once in a while because stress, caused by temperature and pressure differences of an airborne camera, may change some of the interior orientation elements. Laboratory calibrations are also performed by specialized government agencies.


In in-flight calibration, a test field with targets of known positions is photographed. The photo coordinates of the targets are then precisely measured and compared with the control points. The interior orientation is found by a least-squares adjustment.

We will describe one laboratory method, known as goniometer calibration. This will further the understanding of the metric properties of an aerial camera.

Fig. 2.6 depicts a goniometer with a camera ready for calibration. The goniometer resembles a theodolite. In fact, the goniometer shown is a modified T4 high precision theodolite used for astronomical observations. To the far right of Fig. 2.6(a) is a collimator. If the movable telescope is aimed at the collimator, the line of sight represents the optical axis. The camera is placed into the goniometer such that its vertical axis passes through the entrance pupil. Additionally, the focal plane is aligned perpendicular to the line of sight. This is accomplished by autoreflection of the collimator. Fig. 2.6(b) depicts this situation; the fixed collimator points to the center of the grid plate which is placed in the camera’s focal plane. This center is referred to as the principal point of autocollimation, PPA.

Figure 2.6: Two views of a goniometer with installed camera, ready for calibration.

Now, the measurement part of the calibration procedure begins. The telescope is aimed at the grid intersections of the grid plate, viewing through the camera. The angles subtended at the rear nodal point between the camera axis and the grid intersections are obtained by subtracting from the circle readings the zero position (reading to the collimator before the camera is installed). This is repeated for all grid intersections along the four semi-diagonals.


Having determined the angles αi permits computing the distances di from the center of the grid plate (PPA) to the corresponding grid intersections i by Eq. 2.2:

di = f tan(αi)   (2.2)

dr″i = dgi − di   (2.3)

The computed distances di are compared with the known distances dgi of the grid plate. The differences dr″i result from the radial distortion of the lens assembly. Radial distortion arises from a change of lateral magnification as a function of the distance from the center.

The differences dr″i are plotted against the distances di. Fig. 2.7(a) shows the result.

The curves for the four semi-diagonals are quite different, and it is desirable to make them as symmetrical as possible to avoid working with four sets of distortion values. This is accomplished by changing the origin from the PPA to a different point, called the principal point of symmetry (PPS). The effect of this change of origin is shown in Fig. 2.7(b). The four curves are now similar enough that the average curve represents the direction-independent distortion. The distortion values for this average curve are denoted by dr′i.

Figure 2.7: Radial distortion curves for the four semi-diagonals (a). In (b) the curves are made symmetrical by shifting the origin to the PPS. The final radial distortion curve in (c) is obtained by changing the focal length from f to c.

The average curve is not yet well balanced with respect to the horizontal axis. The next step involves a rotation of the distortion curve such that |drmin| = |drmax|. A change of the focal length will rotate the average curve. The focal length with this desirable property is called the calibrated focal length, c. Through the remainder of the text, we will be using c instead of f; that is, we use the calibrated focal length and not the optical focal length.

After completion of all measurements, the grid plate is replaced by a photosensitive plate. The telescope is rotated to the zero position and the reticule is projected through


the lens onto the plate where it marks the PPA. At the same time the fiducial marks are exposed. The processed plate is measured and the position of the PPA is determined with respect to the fiducial marks.

2.1.5 Summary of Interior Orientation

We summarize the most important elements of the interior orientation of an aerial camera by referring to Fig. 2.8. The main purpose of interior orientation is to define the position of the perspective center and the radial distortion curve. A camera with known interior orientation is called metric if the orientation elements do not change. An amateur camera, for example, is non-metric because the interior orientation changes every time the camera is focused. Also, it lacks a reference system for determining the PPA.

Figure 2.8: Illustration of interior orientation. EP and AP are the entrance and exit pupils. They intersect the optical axis at the perspective centers O and Op. The mathematical perspective center Om is determined such that the angles at O and Om become as similar as possible. Point Ha, also known as the principal point of autocollimation, PPA, is the vertical drop of Om to the image plane B. The distance between Om and Ha is the calibrated focal length c.

1. The position of the perspective center is given by the PPA and the calibrated focal length c. The bundle of rays through the projection center and image points resembles most closely the bundle in object space, defined by the front nodal point and points on the ground.

2. The radial distortion curve contains the information necessary for correcting image points that are displaced by the lens due to differences in lateral magnification. The origin of the symmetrical distortion curve is at the principal point of symmetry, PPS. The distortion curve is closely related to the calibrated focal length.

3. The position of the PPA and PPS is fixed with reference to the fiducial system. The intersection of opposite fiducial marks indicates the fiducial center FC. The


three centers lie within a few microns. The fiducial marks are determined by distances measured along the sides and diagonally.

Modern aerial cameras are virtually distortion free. A good approximation for the interior orientation is to assume that the perspective center is at a distance c from the fiducial center.

2.2 Photographic Processes

The most widely used detector system for photogrammetric applications is based on photographic material. It is an analog system with some unique properties which make it superior to digital detectors such as CCD arrays. An aerial photograph contains on the order of one Gigabyte of data (see Chapter 1); the most advanced semiconductor chips have a resolution of 2K × 2K, or 4 MB of data.

In this section we provide an overview of photographic processes and properties of photographic material. The student should gain a basic understanding of exposure, sensitivity, speed and resolution of photographic emulsions.

Fig. 2.9 provides an overview of photographic processes and introduces the terms latent image, negative, (dia)positive and paper print.

Figure 2.9: Overview of photographic processes: exposing the object produces a latent image; processing (developing, fixing, washing, drying) yields the negative; copying produces the diapositive or paper print.

2.2.1 Photographic Material

Fig. 2.10 depicts a cross-sectional view of color photography. It consists of three sensitized emulsions which are coated on the base material. To prevent transmitted light from being reflected at the base, back to the emulsion, an antihalation layer is added between emulsion and base.

The light sensitive emulsion consists of three thin layers of gelatine in which are suspended crystals of silver halide. Silver halide is inherently sensitive to near ultraviolet and blue. In order for the silver halide to absorb energy at longer wavelengths, optical sensitizers, called dyes, are added. They have the property to transfer electromagnetic energy from yellow to near infrared to the silver halide.

A critical factor of photography is the geometrical stability of the base material. Today, most films used for photogrammetric applications (called aerial films) have a polyester base. It provides a stability over the entire frame of a few microns.


Figure 2.10: Cross section of film for color photography (blue sensitive layer, yellow filter, green sensitive layer, red sensitive layer, antihalation layer, base).

Most of the deformation occurs during the development process. It is caused by the development bath and mechanical stress to transport the film through the developer. The deformation is usually called film shrinkage. It consists of systematic deformations (e.g. scale factor) and random deformations (local inconsistencies, e.g. the scale varies from one location to another). Most of the systematic deformations can be determined during the interior orientation and subsequently be corrected.

2.2.2 Photographic Processes

Exposure

Exposure H is defined as the quantity of radiant energy collected by the emulsion.

H = E t (2.4)

where E is the irradiance and t the exposure time. H is determined by the exposure time and the aperture stop of the lens system (compare the vignetting diagrams in Fig. 2.16). For fast moving platforms (or objects), the exposure time should be kept short to prevent blurring. In that case, a small f-number must be chosen so that enough energy interacts with the emulsion. The disadvantage with this setting is an increased influence of aberrations.
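As a small sketch of this trade-off (modeling the irradiance as proportional to 1/N², with E0 a scene-dependent constant; all numbers are illustrative only):

    def exposure(E0, N, t):
        # H = E t (Eq. 2.4), with E modeled as E0 / N**2 (N = f-number)
        return (E0 / N**2) * t

    E0 = 100.0
    print(exposure(E0, 8.0, 1.0 / 250.0))            # baseline exposure
    print(exposure(E0, 8.0 / 2**0.5, 1.0 / 500.0))   # same H: one stop wider, half the time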

The sensitive elements of the photographic emulsion are microscopic crystals with diameters from 0.3 µm to 3.0 µm. One crystal is made up of 10^10 silver halide ions. When radiant energy is incident upon the emulsion it is either reflected, refracted or absorbed. If the energy of the photons is sufficient to liberate an electron from a bound state to a mobile state then it is absorbed, resulting in a free electron which combines quickly with a silver halide ion to form a silver atom.

The active product of exposure is a small aggregate of silver atoms on the surface or in the interior of the crystal. This silver speck acts as a catalyst for the development reaction, where the exposed crystals are completely reduced to silver whereas the unexposed crystals remain unchanged. The exposed but undeveloped film is called the latent image. In the most sensitive emulsions only a few photons are necessary for forming a


developable image. Therefore the amplifying factor is on the order of 10^9, one of the largest amplifications known.

Sensitivity

The sensitivity can be defined as the extent to which photographic material reacts to radiant energy. Since this is a function of wavelength, sensitivity is a spectral quantity. Fig. 2.11 provides an overview of emulsions with different sensitivity.

Figure 2.11: Overview of photographic material with different sensitivity ranges (color blind, orthochromatic, panchromatic, infrared), covering wavelengths from 0.3 to 0.9 µm.

Silver halide emulsions are inherently only sensitive to ultraviolet and blue. In order for the silver halide to absorb energy at longer wavelengths, dyes are added. The three color sensitive emulsion layers differ in the dyes that are added to the silver halide. If no dyes are added, the emulsion is said to be color blind. This may be desirable for paper prints because one can work in the dark room with red light without affecting the latent image. Of course, color blind emulsions are useless for aerial film because they would only react to blue light, which is scattered most, causing a diffuse image without contrast.

In orthochromatic emulsions the sensitivity is extended to include the green portion of the visible spectrum. Panchromatic emulsions are sensitive to the entire visible spectrum; infrared film includes the near infrared.

Colors and Filters

The visible spectrum is divided into three categories: 0.4 to 0.5 µm, 0.5 to 0.6 µm, and 0.6 to 0.7 µm. These three categories are associated with the primary colors of blue, green and red. All other colors, approximately 10 million, can be obtained by an additive mixture of the primary colors. For example, white is a mixture of equal portions of the primary colors. If two primary colors are mixed, the three additive colors cyan, yellow and magenta are obtained. As indicated in Table 2.2, these additive colors also result from subtracting the primary colors from white light.


Table 2.2: Primary colors and additive color primaries.

additive color primary   additive mixture of 2 primaries   subtraction from white light
cyan                     b + g                             w − r
yellow                   g + r                             w − b
magenta                  r + b                             w − g

Subtraction can be achieved by using filters. A filter with a subtractive color primary is transparent for the additive primary colors. For example, a yellow filter is transparent for green and red. Such a filter is also called a minus blue filter. A combination of filters is only transparent for the color the filters have in common. Cyan and magenta combined are transparent for blue, since this is their common primary color.

Filters play a very important role in obtaining aerial photography. A yellow filter, for example, prevents scattered light (blue) from interacting with the emulsion. Often, a combination of several filters is used to obtain photographs of high image quality. Since filters reduce the amount of incident radiant energy, the exposure must be increased by either decreasing the f-number or by increasing the exposure time.

Processing Color Film

Fig. 2.12 illustrates the concept of natural color and false color film material. A natural color film is sensitive to radiation of the visible spectrum. The layer that is struck first by radiation is sensitive to red, the middle layer is sensitive to green, and the third layer is sensitive to blue. During the development process the situation becomes reversed; that is, the red layer becomes transparent for red light. Wherever green was incident, the red layer becomes magenta (white minus green); likewise, blue changes to yellow. If this developed film is viewed under white light, the original colors are perceived.

A closer examination of the right side of Fig. 2.12 reveals that the sensitivity of the film is shifted towards longer wavelengths. A yellow filter prevents blue light from interacting with the emulsion. The topmost layer is now sensitive to near infrared, the middle layer to red, and the third layer is sensitive to green. After developing the film, red corresponds to infrared, green to red, and blue to green. This explains the name false color film: vegetation reflects infrared most. Hence, forests, trees and meadows appear red.

2.2.3 Sensitometry

Sensitometry deals with the measurement of sensitivity and other characteristics of photographic material. The density can be measured by a densitometer. The density D is defined as the degree of blackening of an exposed film.


Figure 2.12: Concept of processing natural color (left) and false color film (right). The developed layers are yellow, magenta and cyan; the sensitized layers are blue-, green- and red-sensitive for natural color, and green-, red- and infrared-sensitive for false color.

D = log(O)   (2.5)

O = Ei / Et   (2.6)

T = Et / Ei = 1/O   (2.7)

H = E t   (2.8)

where

O    opacity, degree of blackening
Ei   incident irradiance
Et   transmitted irradiance
T    transmittance
H    exposure

The density is a function of exposure H. It also depends on the development process. For example, the density increases with increasing development time. An underexposed latent image can be “corrected” to a certain degree by increasing the development time.
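A minimal numerical sketch of Eqs. 2.5 to 2.7: a film area transmitting 1% of the incident irradiance has opacity 100 and density 2.

    import math

    E_i = 1000.0              # incident irradiance (arbitrary units)
    E_t = 10.0                # transmitted irradiance

    O = E_i / E_t             # opacity, Eq. 2.6
    T = E_t / E_i             # transmittance, Eq. 2.7 (T = 1/O)
    D = math.log10(O)         # density, Eq. 2.5

    print(O, T, D)            # 100.0 0.01 2.0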

Fig. 2.13 illustrates the relationship between density and exposure. The characteristic curve is also called the D-log(H) curve.


Figure 2.13: Characteristic curve of a photographic emulsion (density plotted against log exposure, with fog level, toe region between points 1 and 2, straight-line portion between points 2 and 3, and shoulder region with solarization at point 4).

Increasing exposure results in more crystals with silver specks that are reduced to black silver: a bright spot in the scene appears dark in the negative.

The characteristic curve begins at a threshold value, called fog. An unexposed film should be totally transparent when reduced during the development process. This is not the case because the base of the film has a transmittance smaller than unity. Additionally, the transmittance of the emulsion with unexposed material is smaller than unity. Both factors contribute to fog. The lower part of the curve, between points 1 and 2, is called the toe region. Here, the exposure is not enough to cause a readable image. The next region, corresponding to correct exposure, is characterized by a straight line (between points 2 and 3). That is, the density increases linearly with the logarithm of exposure. The slope of the straight line is called gamma or contrast. A film with a slope of 45° is perceived as truly presenting the contrast in the scene. A film with a higher gamma exaggerates the scene contrast. The contrast is not only dependent on the emulsion but also on the development time. If the same latent image is kept longer in the development process, its characteristic curve becomes flatter.

The straight portion of the characteristic curve ends in the shoulder region, where the density no longer increases linearly. In fact, there is a turning point, solarization, where D decreases with increasing exposure (point 4 in Fig. 2.13). Clearly, this region is associated with overexposure.

2.2.4 Speed

The size and the density of the silver halide crystals suspended in the gelatine of the emulsion vary. The larger the crystal size, the higher the probability that it is struck by photons during the exposure time. Fewer photons are necessary to cause a latent image. Such a film would be called faster because the latent image is obtained in a shorter time period compared to an emulsion with smaller crystal size. In other words, a faster


film requires less exposure. Unfortunately, there is no universally accepted definition of speed. There is, however, a standard for determining the speed of aerial films, known as Aerial Film Speed (AFS).

The exposure used to determine AFS is the point on the characteristic curve at which the density is 0.3 units above fog (see Fig. 2.14). The exposure H needed to produce this density is used in the following definition:

AFS = 3 / (2 H)   (2.9)

Note that aerial film speed differs from speed as defined by ASA. There, the exposure is specified which is necessary to produce a density 0.1 units above fog. Fig. 2.14 shows two emulsions with different speed and different gamma. Since emulsion A requires less exposure to produce the required density at 0.3 above fog, it is faster than emulsion B (HA < HB).
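A minimal sketch of Eq. 2.9 (the two exposure values are made-up numbers standing in for HA and HB of Fig. 2.14):

    def aerial_film_speed(H):
        # AFS = 3 / (2 H), H = exposure producing density 0.3 above fog
        return 3.0 / (2.0 * H)

    H_A, H_B = 0.01, 0.05                 # hypothetical exposures
    print(aerial_film_speed(H_A))         # 150.0 -> emulsion A is faster
    print(aerial_film_speed(H_B))         # 30.0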

Figure 2.14: Concept of speed. Emulsions A and B reach a density 0.3 above fog at exposures HA and HB, respectively.

2.2.5 Resolving Power

The image quality is directly related to the size and distribution of the silver halide crystals and the dyes suspended in the emulsion. The crystals are also called grain, and the grain size corresponds to the diameter of the crystal. Granularity refers to the size and distribution, concentration to the amount of light-sensitive material per unit volume. Emulsions are usually classified as fine-, medium-, or coarse-grained.

The resolving power of an emulsion refers to the number of alternating bars and spaces of equal width which can be recorded as visually separate elements in the space of one millimeter. A bar and a space is called a line or line pair. A resolving power of 50 l/mm means that 50 bars, separated by 50 spaces, can be discerned per millimeter. Fig. 2.15 shows a typical test pattern used to determine the resolving power.
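The resolving power translates directly into a ground distance once the photo scale is known (a simple scale conversion, not from the text; the scale 1:10,000 is an assumed value):

    def ground_line_pair_m(rp_lp_mm, scale_number):
        # one line pair of width 1/RP millimeters on film covers
        # scale_number/RP millimeters on the ground
        return (1.0 / rp_lp_mm) * scale_number / 1000.0   # meters

    print(ground_line_pair_m(50, 10000))   # 0.2 m per line pair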


Figure 2.15: Typical test pattern (three-bar target) for determining resolving power.

The three-bar target shown in Fig. 2.15 is photographed under laboratory conditions using a diffraction-limited objective with large aperture (to reduce the effect of the optical system on the resolution). The resolving power is highly dependent on the target contrast. Therefore, targets with different contrast are used. High contrast targets have perfectly black bars, separated by white spaces, whereas lower contrast targets have bars and spaces with varying grey shades. Some aerial films and their resolving powers are listed in Table 2.3.

Note that there is an inverse relationship between speed and resolving power: coarse-grained films are fast but have a lower resolution than fine-grained aerial films.


Table 2.3: Films for aerial photography.

                                                   resolution [l/mm]
manufacturer  designation           speed (AFS)  contrast 1000:1  contrast 1.6:1  gamma
Agfa          Aviophot Pan          133          —                —               1.0–1.4
Kodak         Plus-X Aerographic    160          100              50              1.3
Kodak         High Definition       6.4          630              250             1.3
Kodak         Infrared Aerographic  320          80               40              2.3
Kodak         Aerial Color          6            200              100             —


Chapter 3

Digital Cameras

3.1 Overview

The popular term “digital camera” is rather informal and may even be misleading because the output is in many cases an analog signal. A more generic term is electronic camera.

Other frequently used terms include CCD camera and solid-state camera. Though these terms obviously refer to the type of sensing elements, they are often used in a more generic sense.

The chief advantage of digital cameras over the classical film-based cameras is the instant availability of images for further processing and analysis. This is essential in real-time applications (e.g. robotics, certain industrial applications, bio-mechanics, etc.).

Another advantage is the increased spectral flexibility of digital cameras. The major drawback is the limited resolution or limited field of view.

Digital cameras have been used for special photogrammetric applications since the early seventies. However, vidicon-tube cameras available at that time were not very accurate because the imaging tubes were not stable. This disadvantage was eliminated with the appearance of solid-state cameras in the early eighties. The charge-coupled device provides high stability and is therefore the preferred sensing device in today’s digital cameras.

The most distinct characteristic of a digital camera is the image sensing device. Because of its popularity we restrict the discussion to solid-state sensors, in particular to charge-coupled devices (CCD).

The sensor is glued to a ceramic substrate and covered by a glass. Typical chip sizes are 1/2 and 2/3 inches with as many as 2048 × 2048 sensing elements. However, sensors with fewer than 1K × 1K elements are more common. Fig. 3.1 depicts a line sensor (a) and a 2D sensor chip (b).

The dimension of a sensing element is smaller than 10 µm, with an insulation space of a few microns between them. This can easily be verified when considering the physical dimensions of the chip and the number of elements.


Figure 3.1: Example of 1D sensor element and 2D array sensor.

3.1.1 Camera Overview

Fig. 3.2 depicts a functional block diagram of the major components of a solid-state camera.

Figure 3.2: Functional block diagram of a solid-state camera: (a) image capture, A/D conversion, short term storage, signal processing, image transfer, image processing, archiving, networking; (b) electronic camera with frame grabber and host computer; (c) digital camera with frame grabber and host computer; (d) digital camera with imaging board and host computer; (e) camera on a chip with host computer. A real camera may not have all components. The diagram is simplified, e.g. external signals received by the camera are not shown.

The optics component includes lens assembly and filters, such as an infrared blocking


filter to limit the spectral response to the visible spectrum. Many cameras use a C-mount for the lens. Here, the distance between mount and image plane is 17.526 mm. As an option, the optics subsystem may comprise a shutter.

The most distinct characteristic of an electronic camera is the image sensing device. Section 3.2 provides an overview of charge-coupled devices.

The solid-state sensor, positioned in the image plane, is glued on a ceramic substrate. The sensing elements (pixels) are either arranged in a linear array or a frame array. Linear arrays are used for aerial cameras, while close range applications, including mobile mapping systems, employ frame array cameras.

The accuracy of a solid-state camera depends a great deal on the accuracy and stability of the sensing elements, for example on the uniformity of the sensor element spacing and the flatness of the array. From the manufacturing process we can expect an accuracy of 1/10th of a micron. Considering a sensor element size of 10 µm, the regularity amounts to 1/100. Camera calibration and measurements of the position and the spacing of sensor elements confirm that the regularity is between 1/50th and 1/100th of the spacing.

The voltage generated by the sensor’s read out mechanism must be amplified for further processing, which begins with converting the analog signal to a digital signal. This is not only necessary for producing a digital output, but also for signal and image processing. The functionality of these two components may range from rudimentary to very sophisticated in a real camera.

You may consider the first two components (optics and solid-state sensor) as image capture, the amplifiers and ADC as image digitization, and signal and image processing as image restoration. A few examples illustrate the importance of image restoration. The dark current can be measured and subtracted so that only its noise signal component remains; defective pixels can be determined and an interpolated signal can be output; the contrast can be changed (gamma correction); and image compression may be applied. The following example demonstrates the need for data compression.
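A back-of-the-envelope sketch along those lines (the sensor size and frame rate are assumed values, not from the text):

    # Data rate of a hypothetical 4K x 4K sensor, 1 byte per pixel, 2 frames/sec
    pixels = 4096 * 4096
    bytes_per_frame = pixels * 1              # 8-bit radiometric resolution
    rate = bytes_per_frame * 2                # bytes per second at 2 fps

    print(bytes_per_frame / 2**20, "MB per frame")   # 16.0 MB
    print(rate / 2**20, "MB/s sustained")            # 32.0 MB/s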

3.1.2 Multiple frame cameras

The classical film-based cameras used in photogrammetry are often divided into aerial and terrestrial (close-range) cameras. The same principle can be applied for digital cameras. A digital aerial camera with a resolution comparable to a classical frame camera must have on the order of 15,000 × 15,000 sensing elements. Such image sensors do not (yet) exist. Two solutions exist to overcome this problem: line cameras and multiple cameras, housed in one camera body.

Fig. 3.3 shows an example of a multi-camera system (UltraCam from Vexcel). It consists of 8 different cameras that are mounted in a common camera frame. The ground coverage of each of these frame cameras slightly overlaps, and the 8 different images are merged into one uniform frame image by way of image processing.

3.1.3 Line cameras

An alternative solution to frame cameras are the so-called line cameras, of which the 3-line camera is the most popular.


Figure 3.3: Example of a multi-camera system (Vexcel UltraCam), consisting of 8 different cameras that are mounted in a slightly convergent mode to assure overlap of the individual images.

The 3-line camera employs three linear arrays which are mounted in the image plane in fore, nadir and aft position (see Fig. 3.4(a)). With this configuration, triple coverage of the surface is obtained. Examples of 3-line cameras include Leica’s ADS40. It is also possible to implement the multiple line concept by having convergent lenses for every line, as depicted in Fig. 3.4(b).

A well-known example of a one-line camera is SPOT. The linear array consists of 7,000 sensing elements. Stereo is obtained by overlapping strips obtained from adjacent orbits.

Fig. 3.5 shows the overlap configuration obtained with a 3-Line camera.

3.1.4 Camera Electronics

The camera electronics contains the power supply, a video timing and a sensor clock generator. Additional components are dedicated to special signal processing tasks, such as noise reduction, high frequency cross-talk removal and black level stabilization. A “true” digital camera would have an analog-to-digital converter which samples the video signal with the frequency of the sensor element clock.

The camera electronics may have additional components which serve the purpose of increasing the camera’s functionality. An example is the acceptance of an external sync signal, which allows synchronizing the camera with other devices. This would allow for multiple camera setups with uniform sync.

Cameras with mechanical (or LCD) shutters need appropriate electronics to read external signals to trigger the shutter.


Figure 3.4: Schematic diagram of a 3-line camera. In (a), 3 sensor lines are mounted on the image plane in fore, nadir and aft locations. An alternative solution is using 3 convergent cameras, each with a single line mounted in the center (b).

Figure 3.5: Stereo obtained with a 3-Line camera.



3.1.5 Signal Transmission

The signal transmission follows the video standards. Unfortunately, there is no such thing as a uniform video standard used worldwide. The first standard dates back to 1941, when the National Television Systems Committee (NTSC) defined RS-170 for black-and-white television. This standard is used in North America, parts of South America, in Japan and the Philippines. European countries developed other standards, e.g. PAL (phase alternating line) and SECAM (sequential color and memory). Yet another standard for black-and-white television was defined by the CCIR (Comité Consultatif International des Radiocommunications). It differs only slightly from the NTSC standard, however.

Both the RS-170 and CCIR standards use the principle of interlacing. Here, the image, called a frame, consists of two fields. The odd field contains the odd line numbers, the even field the even line numbers. This technique is known from video monitors.

3.1.6 Frame Grabbers

Frame grabbers receive the video signal, convert it, buffer data and output it to the storage device of the digital image. The analog front end of a frame grabber preprocesses the video signal and passes it to the A/D converter. The analog front end must cope with different signals (e.g. different voltage level and impedance).

3.2 CCD Sensors: Working Principle and Properties

Figure 3.6: Development of CCD arrays over a period of 25 years (1975–2000): the pixel size decreased from about 30 µm to 6 µm, while the sensor size grew from about 10K to 100M pixels.

The charge-coupled device (CCD) was invented in 1970. The first CCD line sensor contained 96 pixels; today, chips with over 50 million pixels are commercially available.


Fig. 3.6 illustrates the astounding development of CCD sensors over a period of 25 years. The sensor size in pixels is usually loosely termed resolution, giving rise to confusion since this term has a different meaning in photogrammetry¹.

3.2.1 Working Principle

Fig. 3.7(a) is a schematic diagram of a semiconductor capacitor—the basic building block of a CCD. The semiconductor material is usually silicon and the insulator is an oxide (MOS capacitor). The metal electrodes are separated from the semiconductor by the insulator. Applying a positive voltage at the electrode forces the mobile holes to move toward the electric ground. In this fashion, a region (depletion region) with no positive charge forms below the electrode on the opposite side of the insulator.

Figure 3.7: Schematic diagram of a CCD detector. In (a) a photon with an energy greater than the bandgap of the semiconductor generates an electron-hole pair. The electron e is attracted by the positive voltage of the electrode while the mobile hole moves toward the ground. The collected electrons together with the electrode form a capacitor. In (b) this basic arrangement is repeated many times to form a linear array.

Suppose EMR is incident on the device. Photons with an energy greater than the band gap energy of the semiconductor may be absorbed in the depletion region, creating an electron-hole pair. The electron—referred to as a photon electron—is attracted by the positive charge of the metal electrode and remains in the depletion region while the mobile hole moves toward the electrical ground. As a result, a charge accumulates at opposite sides of the insulator. The maximum charge depends on the voltage applied to the electrode. Note that the actual charge is proportional to the number of absorbed photons under the electrode.

The band gap energy of silicon corresponds to the energy of a photon with a wavelength of 1.1 µm. Lower energy photons (but still exceeding the band gap) may penetrate the depletion region and be absorbed outside. In that case, the generated electron-hole pair may recombine before the electron reaches the depletion region. We realize that not every photon generates an electron that is accumulated at the capacitor site. Consequently, the quantum efficiency is less than unity.

¹ Resolution refers to the minimum distance between two adjacent features, or the minimum size of a feature, which can be detected by photogrammetric data acquisition systems. For photography, this distance is usually expressed in line pairs per millimeter (lp/mm).


An ever increasing number of capacitors are arranged into what is called a CCD array. Fig. 3.7(b) illustrates the concept of a one-dimensional array (called a linear array) that may consist of thousands of capacitors, each of which holds a charge proportional to the irradiance at each site. It is customary to refer to these capacitor sites as detector pixels, or pixels for short. Two-dimensional pixel arrangements in rows and columns are called full-frame or staring arrays.

Figure 3.8: Principle of charge transfer. The top row shows a linear array of accumulated charge packets. Applying a voltage greater than V1 of electrode 1 momentarily pulls charge over to the second electrode (middle row). Repeating this operation in a sequential fashion eventually moves all packets to the final electrode (drain) where the charge is measured.

The next step is concerned with transferring and measuring the accumulated charge. The principle is shown in Fig. 3.8. Suppose that the voltage of electrode i+1 is momentarily made larger than that of electrode i. In that case, the negative charge under electrode i is pulled over to site i+1, below electrode i+1, provided that adjacent depletion regions overlap. Now, a sequence of voltage pulses will cause a sequential movement of the charges across all pixels to the drain (last electrode), where each packet of charge can be measured. The original location of the pixel whose charge is being measured in the drain is directly related to the time when a voltage pulse was applied.
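A toy simulation of this shifting scheme (not a description of any real device driver; the charge values are arbitrary):

    # Packets are shifted one site per clock pulse and measured at the drain.
    packets = [5, 0, 12, 7]                   # accumulated charge per pixel
    readout = []

    for _ in range(len(packets)):
        readout.append(packets[-1])           # measure the packet at the drain
        packets = [0] + packets[:-1]          # one pulse shifts all packets right

    print(readout)    # [7, 12, 0, 5] -- readout time maps back to pixel position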

Several ingenious solutions for transferring the charge accurately and quickly have been developed. It is beyond the scope of this book to describe the transfer technology in any detail. The following is a brief summary of some of the methods.


3.2.2 Charge Transfer

Linear Array With Bilinear Readout

As sketched in Fig. 3.9, a linear array (CCD shift register) is placed on both sides of the single line of detectors. Since these two CCD arrays are also light sensitive, they must be shielded. After integration, the charge accumulated in the active detectors is transferred to the two shift registers during one clock period. The shift registers are read out in a serial fashion as described above. If the readout time is equal to the integration time, then this sensor may operate continuously without a shutter. This principle, known as push broom, is put to advantage in line cameras mounted on moving platforms to provide continuous coverage of the object space.

Figure 3.9: Principle of linear array with bilinear readout. The accumulated charge is transferred during one pixel clock from the active detectors to the adjacent shift registers, from where it is read out sequentially.

Frame Transfer

You can visualize a frame transfer imager as consisting of two identical arrays. The active array accumulates charges during integration time. This charge is then transferred to the storage array, which must be shielded since it is also light sensitive. During the transfer, charge is still accumulating in the active array, causing a slightly smeared image.

The storage array is read out serially, line by line. The time necessary to read out the storage array far exceeds the integration time. Therefore, this architecture requires a mechanical shutter. The shutter offers the advantage that the smearing effect is suppressed.

Interline Transfer

Fig. 3.10 illustrates the concept of interline transfer arrays. Here, the columns of active detectors (pixels) are separated by vertical transfer registers. The accumulated charge in the pixels is transferred at once and then read out serially. This again allows an open shutter operation, assuming that the read out time does not exceed the integration time.

Since the CCD detectors of the transfer register are also sensitive to irradiance, they must be shielded. This, in turn, reduces the effective irradiance over the chip area. The ratio of sensitive to total area is often called the fill factor.


Figure 3.10: Principle of interline transfer. The accumulated charge is transferred during one pixel clock from the active detectors to the adjacent vertical transfer registers (shielded), from where it is read out sequentially.

The interline transfer imager as described here has a fill factor of 50%. Consequently, longer integration times are required to capture an image. To increase the fill factor, microlenses may be used. In front of every pixel is a lens that directs the light incident on an area defined by adjacent active pixels to the (smaller) pixel.

3.2.3 Spectral Response

Silicon is the most frequently used semiconductor material. In an ideal silicon detector, every photon exceeding the band gap (λ < 1.1 µm) causes a photon electron that is collected and eventually measured. The quantum efficiency is unity and the spectral response is represented by a step function. As indicated in Fig. 3.11, the quantum efficiency of a real CCD sensor is less than unity for various reasons. For one, not all the incident flux interacts with the detector (e.g. it may be reflected by the electrode in front illuminated sensors). Additionally, some electron-hole pairs recombine. Photons with longer wavelengths penetrate the depletion region and cause electron-hole pairs deep inside the silicon. Here, the probability of recombination is greater and many fewer electrons are attracted by the capacitor. The drop in spectral response toward blue and UV is also related to the electrode material, which may become opaque for λ < 0.4 µm.

Sensors illuminated from the back avoid diffraction and reflection problems caused by the electrode. Therefore, they have a higher quantum efficiency than front illuminated sensors. However, the detector must be thinner, because high energy photons are absorbed near the surface—opposite the depletion region—and the chances of electron/hole recombination are lower with a shorter diffusion length.

In order to make the detector sensitive to other spectral bands (mainly IR), detector material with the corresponding bandgap energy must be selected. This leads to hybrid CCD arrays where the semiconductor and the CCD mechanism are two separate components.


Figure 3.11: Spectral response of CCD sensors (quantum efficiency vs. wavelength for ideal silicon, back illuminated and front illuminated sensors). In an ideal silicon detector all photons exceeding the band gap energy generate electrons. Front illuminated sensors have a lower quantum efficiency than back illuminated sensors because part of the incident flux may be absorbed or redirected by the electrodes (see text for details).



Chapter 4

Properties of Aerial Photography

4.1 Introduction

Aerial photography is the basic data source for making maps by photogrammetric means. The photograph is the end result of the data acquisition process discussed in the previous chapter. Actually, the net results of any photographic mission are the photographic negatives. Of prime importance for measuring and interpretation are the positive reproductions from the negatives, called diapositives.

Many factors determine the quality of aerial photography, such as

• design and quality of lens system

• manufacturing of the camera

• photographic material

• development process

• weather conditions and sun angle during photo flight

In this chapter we describe the types of aerial photographs, their geometrical properties and their relationship to object space.

4.2 Classification of aerial photographs

Aerial photographs are usually classified according to the orientation of the camera axis, the focal length of the camera, and the type of emulsion.


4.2.1 Orientation of camera axis

Here, we introduce the terminology used for classifying aerial photographs according to the orientation of the camera axis. Fig. 4.1 illustrates the different cases.

true vertical photograph A photograph with the camera axis perfectly vertical (identical to the plumb line through the exposure center). Such photographs hardly exist in reality.

near vertical photograph A photograph with the camera axis nearly vertical. The deviation from the vertical is called tilt. It must not exceed the mechanical limitations of the stereoplotter to accommodate it. Gyroscopically controlled mounts provide stability of the camera so that the tilt is usually less than two to three degrees.

oblique photograph A photograph with the camera axis intentionally tilted between the vertical and horizontal. A high oblique photograph, depicted in Fig. 4.1(c), is tilted so much that the horizon is visible on the photograph. A low oblique does not show the horizon (Fig. 4.1(b)).

The total area photographed with obliques is much larger than that of vertical photographs. The main application of oblique photographs is in reconnaissance.

Figure 4.1: Classification of photographs according to camera orientation. In (a) the schematic diagram of a true vertical photograph is shown; (b) shows a low oblique and (c) depicts a high oblique photograph.

4.2.2 Angular coverage

The angular coverage is a function of focal length and format size. Since the format size is almost exclusively 9″ × 9″, the angular coverage depends on the focal length of the camera only. Standard focal lengths and associated angular coverages are summarized in Table 4.1.


Table 4.1: Summary of photographs with different angular coverage.

                       super-wide   wide-angle   intermediate   normal-angle   narrow-angle
focal length [mm]          88           153           210            305            610
angular coverage [°]      119            82            64             46             24

4.2.3 Emulsion type

The sensitivity range of the emulsion is used to classify photography into

panchromatic black and white This is the most widely used type of emulsion for photogrammetric mapping.

color Color photography is mainly used for interpretation purposes. Recently, color is increasingly being used for mapping applications.

infrared black and white Since infrared is less affected by haze, it is used in applications where weather conditions may not be as favorable as for mapping missions (e.g. intelligence).

false color This is particularly useful for interpretation, mainly for analyzing vegetation (e.g. crop disease) and water pollution.

4.3 Geometric properties of aerial photographs

We restrict the discussion about geometric properties to frame photography, that is, photographs exposed in one instant. Furthermore, we assume central projection.

4.3.1 Definitions

Fig. 4.2 shows a diapositive in near vertical position. The following definitions apply:

perspective center C calibrated perspective center (see also camera calibration, interior orientation).

focal length c calibrated focal length (see also camera calibration, interior orientation).

principal point PP principal point of autocollimation (see also camera calibration, interior orientation).

camera axis C-PP axis defined by the projection center C and the principal point PP. The camera axis represents the optical axis. It is perpendicular to the image plane.


Figure 4.2: Tilted photograph in diapositive position and ground control coordinate system.

nadir point N′ also called photo nadir point, is the intersection of the vertical (plumb line) from the perspective center with the photograph.

ground nadir point N intersection of the vertical from the perspective center with the earth’s surface.

tilt angle t angle between vertical and camera axis.

swing angle s is the angle at the principal point measured from the +y-axis counterclockwise to the nadir N.

azimuth α is the angle at the ground nadir N measured from the +Y-axis in the ground system counterclockwise to the intersection O of the camera axis with the ground surface. It is the azimuth of the trace of the principal plane in the XY-plane of the ground system.

principal line pl intersection of the plane defined by the vertical through the perspective center and the camera axis with the photograph. Both the nadir N and the principal point


PP are on the principal line. The principal line is oriented in the direction of steepest inclination of the tilted photograph.

isocenter I is the intersection of the bisector of angle t with the photograph. It is on the principal line.

isometric parallel ip is in the plane of the photograph and is perpendicular to the principal line at the isocenter.

true horizon line intersection of a horizontal plane through the perspective center with the photograph or its extension. The horizon line falls within the extent of the photograph only for high oblique photographs.

horizon point intersection of principal line with true horizon line.

4.3.2 Image and object space

The photograph is a perspective (central) projection. During the image formation process, the physical projection center on the object side is the center of the entrance pupil, while the center of the exit pupil is the projection center on the image side (see also Fig. 4.3). The two projection centers are separated by the nodal separation. The two projection centers also separate the space into image space and object space, as indicated in Fig. 4.3.

Figure 4.3: The concept of image and object space.

During the camera calibration process the projection center in image space is changed to a new position, called the calibrated projection center. As discussed in the section on camera calibration, this is necessary to achieve close similarity between the image and object bundle.


4.3.3 Photo scale

We use the representative fraction for scale expressions, in the form of a ratio, e.g. 1 : 5,000. As illustrated in Fig. 4.4, the scale of a near vertical photograph can be approximated by

mb = c / H   (4.1)

where mb is the photograph scale number, c the calibrated focal length, and H the flight height above mean ground elevation. Note that the flight height H refers to the average ground elevation. If it is with respect to the datum, then it is called the flight altitude HA, with HA = H + h.

Figure 4.4: Flight height, flight altitude and scale of an aerial photograph.

The photograph scale varies from point to point. For example, the scale for point P can easily be determined as the ratio of the image distance CP′ to the object distance CP by

mP = CP′ / CP   (4.2)

CP′ = sqrt(xP² + yP² + c²)   (4.3)

CP = sqrt((XP − XC)² + (YP − YC)² + (ZP − ZC)²)   (4.4)


where xP, yP are the photo-coordinates, XP, YP, ZP the ground coordinates of point P, and XC, YC, ZC the coordinates of the projection center C in the ground coordinate system. Clearly, the above equations take into account any tilt and topographic variations of the surface (relief).
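A minimal sketch of Eqs. 4.2 to 4.4 (all coordinate values are made up; for a vertical photograph over flat ground the result reduces to Eq. 4.1):

    import math

    def point_scale(xp, yp, c, XP, YP, ZP, XC, YC, ZC):
        cp_img = math.sqrt(xp**2 + yp**2 + c**2)    # Eq. 4.3, in mm
        cp_obj = math.sqrt((XP - XC)**2 + (YP - YC)**2 + (ZP - ZC)**2)  # Eq. 4.4, in m
        return (cp_img / 1000.0) / cp_obj           # Eq. 4.2

    # Nadir point of a vertical photograph, c = 153 mm, H = 1530 m: scale 1:10,000
    print(point_scale(0.0, 0.0, 153.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1530.0))   # 1e-4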

4.3.4 Relief displacement

The effect of relief does not only cause a change in the scale but can also be considered as a component of image displacement. Fig. 4.5 illustrates this concept. Suppose point T is on top of a building and point B at the bottom. On a map, both points have identical X, Y coordinates; however, on the photograph they are imaged at different positions, namely in T′ and B′. The distance d between the two photo points is called relief displacement because it is caused by the elevation difference ∆h between T and B.

Figure 4.5: Relief displacement.

The magnitude of relief displacement for a true vertical photograph can be determined by the following equation:

d = r ∆h / H = r′ ∆h / (H − ∆h)   (4.5)

where r = sqrt(xT² + yT²), r′ = sqrt(xB² + yB²), and ∆h is the elevation difference of the two points on a vertical. Eq. 4.5 can be used to determine the elevation ∆h of a vertical object:

∆h = d H / r   (4.6)


The direction of relief displacement is radial with respect to the nadir point N′, independent of camera tilt.
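A minimal numerical sketch of Eqs. 4.5 and 4.6 (the building height, flying height and radial distance are made-up values):

    # 30 m building imaged 80 mm from the nadir point, flying height 1500 m
    H = 1500.0        # flying height above the base of the object [m]
    dh = 30.0         # elevation difference between top and bottom [m]
    r = 80.0          # radial distance of the image of the top point [mm]

    d = r * dh / H                  # relief displacement, Eq. 4.5 -> 1.6 mm
    print(d, d * H / r)             # 1.6 30.0 (Eq. 4.6 recovers the height)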


Chapter 5

Elements of Analytical Photogrammetry

5.1 Introduction, Concept of Image and Object Space

Photogrammetry is the science of obtaining reliable information about objects and of measuring and interpreting this information. The task of obtaining information is called data acquisition, a process we discussed at length in GS601, Chapter 2. Fig. 5.1(a) depicts the data acquisition process. Light rays reflected from points on the object, say from point A, form a divergent bundle which is transformed to a convergent bundle by the lens. The principal rays of each bundle of all object points pass through the center of the entrance and exit pupil, unchanged in direction. The front and rear nodal points are good approximations for the pupil centers.

Another major task of photogrammetry is concerned with reconstructing the object space from images. This entails two problems: geometric reconstruction (e.g. the position of objects) and radiometric reconstruction (e.g. the gray shades of a surface). The latter problem is relevant when photographic products are generated, such as orthophotos. Photogrammetry is mainly concerned with the geometric reconstruction. The object space is only partially reconstructed, however. With partial reconstruction we mean that only a fraction of the information recorded from the object space is used for its representation. Take a map, for example. It may only show the perimeter of buildings, not all the intricate details which make up real buildings.

Obviously, the success of reconstruction in terms of geometrical accuracy depends largely on the similarity of the image bundle compared to the bundle of principal rays that entered the lens during the instance of exposure. The purpose of camera calibration is to define an image space so that the similarity becomes as close as possible.

The geometrical relationship between image and object space can best be established by introducing suitable coordinate systems for referencing both spaces. We describe the coordinate systems in the next section. Various relationships exist between image and object space. In Table 5.1 the most common relationships are summarized, together with the associated photogrammetric procedures and the underlying mathematical models.



Figure 5.1: In (a) the data acquisition process is depicted. In (b) we illustrate the reconstruction process.

In this chapter we describe these procedures and the mathematical models, except aerotriangulation (block adjustment) which will be treated later. For one and the same procedure, several mathematical models may exist. They differ mainly in the degree of complexity, that is, how closely they describe physical processes. For example, a similarity transformation is a good approximation to describe the process of converting measured coordinates to photo-coordinates. This simple model can be extended to describe more closely the underlying measuring process. With a few exceptions, we will not address the refinement of the mathematical model.

5.2 Coordinate Systems

5.2.1 Photo-Coordinate System

The photo-coordinate system serves as the reference for expressing spatial positions and relations of the image space. It is a 3-D cartesian system with the origin at the perspective center. Fig. 5.2 depicts a diapositive with fiducial marks that define the fiducial center FC. During the calibration procedure, the offset between fiducial center and principal point of autocollimation, PP, is determined, as well as the origin of the radial distortion, PS. The x, y coordinate plane is parallel to the photograph and the positive x-axis points toward the flight direction.

Positions in the image space are expressed by point vectors. For example, point vector p defines the position of point P on the diapositive (see Fig. 5.2). Point vectors of positions on the diapositive (or negative) are also called image vectors. We have for point P


Table 5.1: Summary of the most important relationships between image and object space.

relationship between                       procedure                 mathematical model

measuring system and                       interior orientation      2-D transformation
  photo-coordinate system
photo-coordinate system and                exterior orientation      collinearity eq.
  object coordinate system
photo-coordinate systems                   relative orientation      collinearity eq.,
  of a stereopair                                                      coplanarity condition
model coordinate system and                absolute orientation      7-parameter
  object coordinate system                                             transformation
several photo-coordinate systems           bundle block              collinearity eq.
  and object coordinate system               adjustment
several model coordinate systems           independent model         7-parameter
  and object coordinate system               block adjustment          transformation

(Figure labels: FC = Fiducial Center; PP = Principal Point; PS = Point of Symmetry; c = calibrated focal length; p = image vector.)

Figure 5.2: Definition of the photo-coordinate system.

p = [xp, yp, −c]T    (5.1)

Note that for a diapositive the third component is negative. This changes to a positive value in the rare case a negative is used instead of a diapositive.

5.2.2 Object Space Coordinate Systems

In order to keep the mathematical development of relating image and object space simple, both spaces use 3-D cartesian coordinate systems. Positions of control points in object space are likely available in another coordinate system, e.g. State Plane coordinates. It is important to convert any given coordinate system to a cartesian system before photogrammetric procedures, such as orientations or aerotriangulation, are performed.

5.3 Interior Orientation

We have already introduced the term interior orientation in the discussion about camera calibration (see GS601, Chapter 2), to define the metric characteristics of aerial cameras. Here we use the same term for a slightly different purpose. From Table 5.1 we conclude that the purpose of interior orientation is to establish the relationship between a measuring system¹ and the photo-coordinate system. This is necessary because it is not possible to measure photo-coordinates directly. One reason is that the origin of the photo-coordinate system is only mathematically defined; since it is not visible it cannot coincide with the origin of the measuring system.

Fig. 5.3 illustrates the case where the diapositive to be measured is inserted in the measuring system whose coordinate axes are xm, ym. The task is to determine the transformation parameters so that measured points can be transformed into photo-coordinates.

5.3.1 Similarity Transformation

The simplest mathematical model for interior orientation is a similarity transformation with the four parameters: translation vector t, scale factor s, and rotation angle α.

xf = s(xm cos α − ym sin α) − xt    (5.2)
yf = s(xm sin α + ym cos α) − yt    (5.3)

These equations can also be written in the following form:

xf = a11 xm − a12 ym − xt    (5.4)
yf = a12 xm + a11 ym − yt    (5.5)

If we consider a11, a12, xt, yt as parameters, then the above equations are linear in the parameters. Consequently, they can be directly used as observation equations for a least-squares adjustment. Two observation equations are formed for every point known in

¹ Measuring systems are discussed in the next chapter.



Figure 5.3: Relationship between measuring system and photo-coordinate system.

both coordinate systems. Known points in the photo-coordinate system are the fiducial marks. Thus, computing the parameters of the interior orientation amounts to measuring the fiducial marks (in the measuring system).

Actually, the fiducial marks are known with respect to the fiducial center. Therefore, the process just described will determine parameters with respect to the fiducial coordinate system xf, yf. Since the origin of the photo-coordinate system is known in the fiducial system (x0, y0), the photo-coordinates are readily obtained by the translation

x = xf − x0    (5.6)
y = yf − y0    (5.7)
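Since Eqs. 5.4 and 5.5 are linear in a11, a12, xt, yt, the adjustment reduces to an ordinary linear least-squares problem. The following sketch is a minimal illustration using numpy; the data layout and all names are ours.

import numpy as np

def similarity_interior_orientation(machine, fiducial):
    # machine:  measured (xm, ym) of the fiducial marks
    # fiducial: calibrated (xf, yf) from the calibration protocol
    A, l = [], []
    for (xm, ym), (xf, yf) in zip(machine, fiducial):
        A.append([xm, -ym, -1.0, 0.0]); l.append(xf)    # Eq. 5.4
        A.append([ym,  xm, 0.0, -1.0]); l.append(yf)    # Eq. 5.5
    (a11, a12, xt, yt), *_ = np.linalg.lstsq(np.array(A), np.array(l), rcond=None)
    s, alpha = np.hypot(a11, a12), np.arctan2(a12, a11)  # recover s and alpha
    return a11, a12, xt, yt, s, alpha

With four fiducial marks this yields eight equations for four unknowns, i.e. a redundancy of four for the similarity transformation.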

5.3.2 Affine Transformation

The affine transformation is an improved mathematical model for the interior orientation because it more closely describes the physical reality of the measuring system. The parameters are two scale factors sx, sy, a rotation angle α, a skew angle ε, and a translation vector t = [xt, yt]T. The measuring system is a manufactured product and, as such, not perfect. For example, the two coordinate axes are not exactly rectangular,


as indicated in Fig. 5.3(b). The skew angle expresses the nonperpendicularity. Also, the scale is different between the two axes.

We have

xf = a11 xm + a12 ym − xt    (5.8)
yf = a21 xm + a22 ym − yt    (5.9)

where

a11 = sx (cos α − ε sin α)
a12 = −sy sin α
a21 = sx (sin α + ε cos α)
a22 = sy cos α

Eqs. 5.8 and 5.9 are also linear in the parameters. Like in the case of a similarity transformation, these equations can be directly used as observation equations. With four fiducial marks we obtain eight equations, leaving a redundancy of two.

5.3.3 Correction for Radial Distortion

As discussed in GS601 Chapter 2, radial distortion causes off-axial points to be radially displaced. A positive distortion increases the lateral magnification while a negative distortion reduces it.

Distortion values are determined during the process of camera calibration. They are usually listed in tabular form, either as a function of the radius or the angle at the perspective center. For aerial cameras the distortion values are very small. Hence, it suffices to linearly interpolate the distortion. Suppose we want to determine the distortion for image point xp, yp. The radius is rp = (xp² + yp²)^1/2. From the table we obtain the distortion dri for ri < rp and drj for rj > rp. The distortion for rp is interpolated as

drp = (drj − dri) rp / (rj − ri)    (5.10)

As indicated in Fig. 5.4 the corrections in x- and y-direction are

drx = (xp / rp) drp    (5.11)
dry = (yp / rp) drp    (5.12)

Finally, the photo-coordinates must be corrected as follows:

xp = xp − drx = xp (1 − drp / rp)    (5.13)
yp = yp − dry = yp (1 − drp / rp)    (5.14)
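The table lookup and the correction are easily scripted. In the sketch below (illustrative only; the calibration table values are invented) numpy.interp performs a standard linear interpolation between the tabulated values:

import numpy as np

def correct_radial_distortion(xp, yp, radii, distortions):
    # radii, distortions: calibration table, same units as xp, yp
    rp = np.hypot(xp, yp)
    if rp == 0.0:
        return xp, yp                            # principal point: no correction
    drp = np.interp(rp, radii, distortions)      # distortion at rp (cf. Eq. 5.10)
    f = 1.0 - drp / rp
    return xp * f, yp * f                        # Eqs. 5.13 and 5.14

# hypothetical calibration table, all values in mm:
radii       = [0.0, 30.0, 60.0, 90.0, 120.0, 150.0]
distortions = [0.0, 0.001, 0.003, 0.002, -0.001, -0.004]
print(correct_radial_distortion(70.0, 40.0, radii, distortions))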


The radial distortion can also be represented by an odd-power polynomial of the form

dr = p0 r + p1 r³ + p2 r⁵ + · · ·    (5.15)

The coefficients pi are found by fitting the polynomial curve to the distortion values. Eq. 5.15 is a linear observation equation. For every distortion value, an observation equation is obtained.


Figure 5.4: Correction for radial distortion.

In order to avoid numerical problems (an ill-conditioned normal equation system), the degree of the polynomial should not exceed nine.

5.3.4 Correction for Refraction

Fig. 5.5 shows how an oblique light ray is refracted by the atmosphere. According to Snell's law, a light ray is refracted at the interface of two different media. The density differences in the atmosphere are in fact different media. The refraction causes the image to be displaced outwardly, quite similar to a positive radial distortion.

The radial displacement caused by refraction can be computed by

dref = K (r + r³ / c²)    (5.16)

K = ( 2410 H / (H² − 6H + 250) − 2410 h² / ((h² − 6h + 250) H) ) 10⁻⁶    (5.17)

These equations are based on a model atmosphere defined by the US Air Force. The flying height H and the ground elevation h must be in units of kilometers.
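A direct implementation of Eqs. 5.16 and 5.17 looks as follows (function name ours; note the mixed units, H and h in kilometers but r and c in meters):

def refraction_displacement(r, c, H, h):
    # Eq. 5.17; H and h in kilometers
    K = (2410.0 * H / (H**2 - 6.0 * H + 250.0)
         - 2410.0 * h**2 / ((h**2 - 6.0 * h + 250.0) * H)) * 1.0e-6
    # Eq. 5.16; r and c in meters; the correction is applied with negative sign
    return K * (r + r**3 / c**2)

# H = 2 km, h = 0.5 km, c = 0.15 m, r = 0.130 m gives about 4e-6 m,
# i.e. the 4 micrometer wide-angle example quoted later in Sec. 5.3.6:
print(refraction_displacement(0.130, 0.15, 2.0, 0.5))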



Figure 5.5: Correction for refraction.

5.3.5 Correction for Earth Curvature

As mentioned in the beginning of this chapter, the mathematical derivation of the relationships between image and object space are based on the assumption that for both spaces, 3-D cartesian coordinate systems are employed. Since ground control points may not directly be available in such a system, they must first be transformed, say from a State Plane coordinate system to a cartesian system.

The X and Y coordinates of a State Plane system are cartesian, but not the elevations. Fig. 5.6 shows the relationship between elevations above a datum and elevations in the 3-D cartesian system. If we approximate the datum by a sphere, radius R = 6372.2 km, then the radial displacement can be computed by

dearth = r³ (H − ZP) / (2 c² R)    (5.18)

Like radial distortion and refraction, the corrections in x- and y-direction are readily determined by Eqs. 5.13 and 5.14. Strictly speaking, the correction of photo-coordinates due to earth curvature is not a refinement of the mathematical model. It is much better to eliminate the influence of earth curvature by transforming the object space into a 3-D cartesian system before establishing relationships with the ground system. This is always possible, except when compiling a map. A map, generated on an analytical plotter, for example, is most likely plotted in a State Plane coordinate system. That is,



Figure 5.6: Correction of photo-coordinates due to earth curvature.

the elevations refer to the datum and not to the XY plane of the cartesian coordinate system. It would be quite awkward to produce the map in the cartesian system and then transform it to the target system. Therefore, during map compilation, the photo-coordinates are “corrected” so that conjugate bundle rays intersect in object space at positions related to the reference sphere.
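Returning to Eq. 5.18, the corresponding helper is a one-liner (a sketch only; all lengths in meters, the example values are ours):

def earth_curvature_displacement(r, c, H, ZP, R=6372200.0):
    # Eq. 5.18; R is the sphere radius of 6372.2 km used above
    return r**3 * (H - ZP) / (2.0 * c**2 * R)

# e.g. r = 0.130 m, c = 0.15 m, H = 9000 m, ZP = 500 m -> about 6.5e-5 m
print(earth_curvature_displacement(0.130, 0.15, 9000.0, 500.0))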

5.3.6 Summary of Computing Photo-Coordinates

We summarize the main steps necessary to determine photo-coordinates. The process to correct them for systematic errors, such as radial distortion, refraction and earth curvature, is also known as image refinement. Fig. 5.7 depicts the coordinate systems involved, an imaged point P, and the correction vectors dr, dref, dearth.

1. Insert the diapositive into the measuring system (e.g. comparator, analytical plotter) and measure the fiducial marks in the machine coordinate system xm, ym. Compute the transformation parameters with a similarity or affine transformation. The transformation establishes a relationship between the measuring system and the fiducial coordinate system.

2. Translate the fiducial system to the photo-coordinate system (Eqs. 5.6 and 5.7).

3. Correct photo-coordinates for radial distortion. The radial distortion drp for point



Figure 5.7: Interior orientation and image refinement.

P is found by linearly interpolating the values given in the calibration protocol (Eq. 5.10).

4. Correct the photo-coordinates for refraction, according to Eqs. 5.16 and 5.17. This correction is negative. The displacement caused by refraction is a functional relationship dref = f(H, h, r, c). With a flying height H = 2,000 m and elevation above ground h = 500 m we obtain for a wide angle camera (c ≈ 0.15 m) a correction of −4 µm for r = 130 mm. An extreme example is a superwide angle camera, H = 9,000 m, h = 500 m, where dref = −34 µm for the same point.

5. Correct for earth curvature only if the control points (elevations) are not in a cartesian coordinate system or if a map is compiled. Using the extreme example as above, we obtain dearth = 65 µm. Since this correction has the opposite sign of the refraction, the combined correction for refraction and earth curvature would be dcomb = 31 µm. The correction due to earth curvature is larger than the correction for refraction.


5.4 Exterior Orientation

Exterior orientation is the relationship between image and object space. This is accomplished by determining the camera position in the object coordinate system. The camera position is determined by the location of its perspective center and by its attitude, expressed by three independent angles.


Figure 5.8: Exterior Orientation.

The problem of establishing the six orientation parameters of the camera can conveniently be solved by the collinearity model. This model expresses the condition that the perspective center C, the image point Pi, and the object point Po must lie on a straight line (see Fig. 5.8). If the exterior orientation is known, then the image vector pi and the vector q in object space are collinear:

pi = (1/λ) q    (5.19)

As depicted in Fig. 5.8, vector q is the difference between the two point vectors c and p. For satisfying the collinearity condition, we rotate and scale q from object to image space. We have

pi = (1/λ) R q = (1/λ) R (p − c)    (5.20)

with R an orthogonal rotation matrix with the three angles ω, φ and κ:


R = | cos φ cos κ                       −cos φ sin κ                       sin φ        |
    | cos ω sin κ + sin ω sin φ cos κ   cos ω cos κ − sin ω sin φ sin κ    −sin ω cos φ |
    | sin ω sin κ − cos ω sin φ cos κ   sin ω cos κ + cos ω sin φ sin κ    cos ω cos φ  |    (5.21)

Eq. 5.20 renders the following three coordinate equations.

x = (1/λ) [(XP − XC) r11 + (YP − YC) r12 + (ZP − ZC) r13]    (5.22)

y = (1/λ) [(XP − XC) r21 + (YP − YC) r22 + (ZP − ZC) r23]    (5.23)

−c = (1/λ) [(XP − XC) r31 + (YP − YC) r32 + (ZP − ZC) r33]    (5.24)

By dividing the first by the third and the second by the third equation, the scale factor 1/λ is eliminated, leading to the following two collinearity equations:

x = −c [(XP − XC) r11 + (YP − YC) r12 + (ZP − ZC) r13] / [(XP − XC) r31 + (YP − YC) r32 + (ZP − ZC) r33]    (5.25)

y = −c [(XP − XC) r21 + (YP − YC) r22 + (ZP − ZC) r23] / [(XP − XC) r31 + (YP − YC) r32 + (ZP − ZC) r33]    (5.26)

with:

pi = [x, y, −c]T,  p = [XP, YP, ZP]T,  c = [XC, YC, ZC]T

The six parameters XC, YC, ZC, ω, φ, κ are the unknown elements of exterior orientation. The image coordinates x, y are normally known (measured) and the calibrated focal length c is a constant. Every measured point leads to two equations, but also adds three other unknowns, namely the coordinates of the object point (XP, YP, ZP). Unless the object points are known (control points), the problem cannot be solved with only one photograph.

The collinearity model as presented here can be expanded to include parameters of the interior orientation. The number of unknowns will be increased by three². This combined approach lets us determine simultaneously the parameters of interior and exterior orientation of the cameras.

There are only limited applications for single photographs. We briefly discuss the computation of the exterior orientation parameters, also known as single photograph resection, and the computation of photo-coordinates with known orientation parameters. Single photographs cannot be used for the main task of photogrammetry, the reconstruction of object space. Suppose we know the exterior orientation of a photograph. Points in object space are not defined, unless we also know the scale factor 1/λ for every bundle ray.

² Parameters of interior orientation: position of principal point and calibrated focal length. Additionally, three parameters for radial distortion and three parameters for tangential distortion can be added.


5.4.1 Single Photo Resection

The position and attitude of the camera with respect to the object coordinate system (exterior orientation of the camera) can be determined with the help of the collinearity equations. Eqs. 5.25 and 5.26 express measured quantities³ as a function of the exterior orientation parameters. Thus, the collinearity equations can be directly used as observation equations, as the following functional representation illustrates.

x, y = f(XC, YC, ZC, ω, φ, κ, XP, YP, ZP)    (5.27)

where the first six arguments are the exterior orientation and XP, YP, ZP the object point.

For every measured point two equations are obtained. If three control points are measured, a total of 6 equations is formed to solve for the 6 parameters of exterior orientation.

The collinearity equations are not linear in the parameters. Therefore, Eqs. 5.25 and 5.26 must be linearized with respect to the parameters. This also requires approximate values with which the iterative process will start.

5.4.2 Computing Photo Coordinates

With known exterior orientation elements, photo-coordinates can easily be computed from Eqs. 5.25 and 5.26. This is useful for simulation studies where synthetic photo-coordinates are computed.

Another application for the direct use of the collinearity equations is the real-time loop of analytical plotters where photo-coordinates of ground points or model points are computed after relative or absolute orientation (see next chapter, analytical plotters).
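Both uses of the collinearity equations, computing synthetic photo-coordinates (this section) and the single photo resection of Sec. 5.4.1, can be sketched compactly. The sketch below builds R from Eq. 5.21, projects with Eqs. 5.25/5.26, and leaves the linearization and iteration to scipy.optimize.least_squares; all names are ours, and p0 must hold the approximate values mentioned above.

import numpy as np
from scipy.optimize import least_squares

def rotation_matrix(om, ph, ka):
    # orthogonal rotation matrix of Eq. 5.21
    co, so = np.cos(om), np.sin(om)
    cp, sp = np.cos(ph), np.sin(ph)
    ck, sk = np.cos(ka), np.sin(ka)
    return np.array([[cp * ck, -cp * sk, sp],
                     [co * sk + so * sp * ck, co * ck - so * sp * sk, -so * cp],
                     [so * sk - co * sp * ck, so * ck + co * sp * sk, co * cp]])

def photo_coordinates(points, C, R, c):
    # Eqs. 5.25 and 5.26 for an (n, 3) array of object points
    d = points - C
    den = d @ R[2]
    return -c * (d @ R[0]) / den, -c * (d @ R[1]) / den

def resection(photo_xy, control_XYZ, c, p0):
    # single photo resection: solve for XC, YC, ZC, omega, phi, kappa;
    # p0 holds the required approximate values for the iteration
    def residuals(p):
        x, y = photo_coordinates(control_XYZ, p[:3], rotation_matrix(*p[3:]), c)
        return np.concatenate([x - photo_xy[:, 0], y - photo_xy[:, 1]])
    return least_squares(residuals, p0).x

With three control points the six residual equations just determine the six unknowns; additional points provide redundancy.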

5.5 Orientation of a Stereopair

5.5.1 Model Space, Model Coordinate System

The application of single photographs in photogrammetry is limited because they cannot be used for reconstructing the object space. Even though the exterior orientation elements may be known it will not be possible to determine ground points unless the scale factor of every bundle ray is known. This problem is solved by exploiting stereopsis, that is by using a second photograph of the same scene, taken from a different position.

Two photographs with different camera positions that show the same area, at least in part, are called a stereopair. Suppose the two photographs are oriented such that conjugate points (corresponding points) intersect. We call this intersection space model space. In order to express relationships of this model space we introduce a reference system, the model coordinate system. This system is 3-D and cartesian. Fig. 5.9 illustrates the concept of model space and model coordinate system.

Introducing the model coordinate system requires the definition of its spatial position (origin, attitude), and its scale. These are the seven parameters we have encountered

³ We assume that the photo-coordinates are measured. In fact they are derived from measured machine coordinates. The correlation caused by the transformation is neglected.



Figure 5.9: The concept of model space (a) and model coordinate system (b).

in the transformation of 3-D cartesian systems. The decision on how to introduce the parameters depends on the application; one definition of the model coordinate system may be more suitable for a specific purpose than another. In the following subsections, different definitions will be discussed.

Now the orientation of a stereopair amounts to determining the exterior orientation parameters of both photographs with respect to the model coordinate system. From single photo resection, we recall that the collinearity equations form a suitable mathematical model to express the exterior orientation. We have the following functional relationship between observed photo-coordinates and orientation parameters:

x, y = f(X′C, Y′C, Z′C, ω′, φ′, κ′, X″C, Y″C, Z″C, ω″, φ″, κ″, X1, Y1, Z1, · · · , Xn, Yn, Zn)    (5.28)

where the first six arguments are the exterior orientation of the first photograph, the next six that of the second photograph, and X1, Y1, Z1, . . . , Xn, Yn, Zn the model points.

where f refers to Eqs. 5.25 and 5.26. Every point measured in one photo-coordinate system renders two equations. The same point must also be measured in the second photo-coordinate system. Thus, for one model point we obtain 4 equations, or 4n equations for n object points. On the other hand, n unknown model points lead to 3n parameters, for a total of 12 + 3n − 7 parameters. These are the exterior orientation elements of both photographs, minus the parameters we have eliminated by defining the model coordinate system. By equating the number of equations with the number of parameters we obtain the minimum number of points, nmin, which we need to measure for solving the orientation problem.

4 nmin = 12 − 7 + 3 nmin  ⟹  nmin = 5    (5.29)

The collinearity equations which are implicitly referred to in Eq. 5.28 are non-linear. By linearizing the functional form we obtain

x, y ≈ f0 + (∂f/∂X′C) ∆X′C + (∂f/∂Y′C) ∆Y′C + · · · + (∂f/∂Z″C) ∆Z″C    (5.30)


with f0 denoting the function with initial estimates for the parameters. For a point Pi, i = 1, · · · , n, we obtain the following four generic observation equations:

r′xi = (∂f/∂X′C) ∆X′C + (∂f/∂Y′C) ∆Y′C + · · · + (∂f/∂Z″C) ∆Z″C + f0 − x′i
r′yi = (∂f/∂X′C) ∆X′C + (∂f/∂Y′C) ∆Y′C + · · · + (∂f/∂Z″C) ∆Z″C + f0 − y′i
r″xi = (∂f/∂X′C) ∆X′C + (∂f/∂Y′C) ∆Y′C + · · · + (∂f/∂Z″C) ∆Z″C + f0 − x″i    (5.31)
r″yi = (∂f/∂X′C) ∆X′C + (∂f/∂Y′C) ∆Y′C + · · · + (∂f/∂Z″C) ∆Z″C + f0 − y″i

As mentioned earlier, the definition of the model coordinate system reduces the number of parameters by seven. Several techniques exist to consider this in the least-squares approach.

1. The simplest approach is to eliminate the parameters from the parameter list. We will use this approach for discussing the dependent and independent relative orientation.

2. The knowledge about the 7 parameters can be introduced in the mathematical model as seven independent pseudo observations (e.g. ∆XC = 0), or as condition equations which are added to the normal equations. This second technique is more flexible and it is particularly suited for computer implementation.

5.5.2 Dependent Relative Orientation

The definition of the model coordinate system in the case of a dependent relative orientation is depicted in Fig. 5.10. The position and the orientation is identical to one of the two photo-coordinate systems, say the primed system. This step amounts to introducing the exterior orientation of the photo-coordinate system as known. That is, we can eliminate it from the parameter list. Next, we define the scale of the model coordinate system. This is accomplished by defining the distance between the two perspective centers (base), or more precisely, by defining the X-component.

With this definition of the model coordinate system we are left with the following functional model:

x, y = f(ym″c, zm″c, ω″, φ″, κ″, xm1, ym1, zm1, · · · , xmn, ymn, zmn)    (5.32)

where the first five arguments are the exterior orientation of the second photograph and xmi, ymi, zmi the model points.

With 5 points we obtain 20 observation equations. On the other hand, there are 5 exterior orientation parameters and 5 × 3 model coordinates. Usually more than 5 points are measured. The redundancy is r = n − 5. The typical case of relative orientation


(Parameters: by, bz = y and z base components; ω″, φ″, κ″ = rotation angles about the x, y and z axes.)

Figure 5.10: Definition of the model coordinate system and orientation parameters in the dependent relative orientation.

on a stereoplotter with the 6 von Gruber points leads only to a redundancy of one. It is highly recommended to measure more, say 12 points, in which case we find r = 7.

With a nonlinear mathematical model we need to be concerned with suitable approximations to ensure that the iterative least-squares solution converges. In the case of the dependent relative orientation we have

f0 = f(ym0c, zm0c, ω0, φ0, κ0, xm01, ym01, zm01, · · · , xm0n, ym0n, zm0n)    (5.33)

The initial estimates for the five exterior orientation parameters are set to zero for aerial applications, because the orientation angles are smaller than five degrees, and xmc ≫ ymc, xmc ≫ zmc ⟹ ym0c = zm0c = 0. Initial positions for the model points can be estimated from the corresponding measured photo-coordinates. If the scale of the model coordinate system approximates the scale of the photo-coordinate system, we estimate initial model points by

xm0i ≈ x′i
ym0i ≈ y′i    (5.34)
zm0i ≈ z′i

The dependent relative orientation leaves one of the photographs unchanged; the other one is oriented with respect to the unchanged system. This is of advantage for the conjunction of successive photographs in a strip. In this fashion, all photographs of a strip can be joined into the coordinate system of the first photograph.
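As a numerical sketch of the dependent relative orientation (all data and names are ours; scipy performs the iterative least-squares solution), the parameter vector holds by, bz, ω″, φ″, κ″ plus the n model points, initialized according to Eqs. 5.33 and 5.34:

import numpy as np
from scipy.optimize import least_squares

def rotation_matrix(om, ph, ka):
    # Eq. 5.21, repeated so the sketch is self-contained
    co, so = np.cos(om), np.sin(om)
    cp, sp = np.cos(ph), np.sin(ph)
    ck, sk = np.cos(ka), np.sin(ka)
    return np.array([[cp * ck, -cp * sk, sp],
                     [co * sk + so * sp * ck, co * ck - so * sp * sk, -so * cp],
                     [so * sk - co * sp * ck, so * ck + co * sp * sk, co * cp]])

def project(P, C, R, c):
    # collinearity equations, Eqs. 5.25/5.26
    d = P - C
    return np.array([-c * (d @ R[0]) / (d @ R[2]), -c * (d @ R[1]) / (d @ R[2])])

def dependent_relative_orientation(left_xy, right_xy, bx, c):
    n = len(left_xy)
    p0 = np.zeros(5 + 3 * n)                # by, bz, omega", phi", kappa" = 0
    for i, (x, y) in enumerate(left_xy):    # initial model points by Eq. 5.34
        p0[5 + 3 * i:8 + 3 * i] = (x, y, -c)
    def residuals(p):
        (by, bz, om, ph, ka), pts = p[:5], p[5:].reshape(n, 3)
        Cl, Rl = np.zeros(3), np.eye(3)     # primed photo defines the system
        Cr, Rr = np.array([bx, by, bz]), rotation_matrix(om, ph, ka)
        res = []
        for pl, pr, P in zip(left_xy, right_xy, pts):
            res.extend(project(P, Cl, Rl, c) - pl)
            res.extend(project(P, Cr, Rr, c) - pr)
        return np.array(res)
    return least_squares(residuals, p0).x

With n = 5 points the 20 residual equations just determine the 20 parameters; with 12 points the redundancy is 7, as recommended above.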


5.5.3 Independent Relative Orientation

Fig. 5.11 illustrates the definition of the model coordinate system in the independent relative orientation.

(Parameters: φ′, κ′ = rotation angles about y′ and z′; ω″, φ″, κ″ = rotation angles about x″, y″ and z″.)

Figure 5.11: Definition of the model coordinate system and orientation parameters in the independent relative orientation.

The origin is identical to one of the photo-coordinate systems, e.g. in Fig. 5.11 it is the primed system. The orientation is chosen such that the positive xm-axis passes through the perspective center of the other photo-coordinate system. This requires determining two rotation angles in the primed photo-coordinate system. Moreover, it eliminates the base components by, bz. The rotation about the x-axis (ω) is set to zero. This means that the ym-axis is in the x-y plane of the photo-coordinate system. The scale is chosen by defining xm″c = bx.

With this definition of the model coordinate system we have eliminated the position of both perspective centers and one rotation angle. The following functional model applies:

x, y = f(φ′, κ′, ω″, φ″, κ″, xm1, ym1, zm1, · · · , xmn, ymn, zmn)    (5.35)

where φ′, κ′ belong to the exterior orientation of the first and ω″, φ″, κ″ to that of the second photograph.

The number of equations, the number of parameters and the redundancy are the same as in the dependent relative orientation. Also, the same considerations regarding initial estimates of parameters apply.

Note that the exterior orientation parameters of both types of relative orientation are related. For example, the rotation angles φ′, κ′ can be computed from the spatial direction of the base in the dependent relative orientation:


φ′ = arctan(zm″c / bx)    (5.36)

κ′ = arctan(ym″c / (bx² + zm″c²)^1/2)    (5.37)

5.5.4 Direct Orientation

In the direct orientation, the model coordinate system becomes identical with the ground system, for example, a State Plane coordinate system (see Fig. 5.12). Since such systems are already defined, we cannot introduce a priori information about exterior orientation parameters like in both cases of relative orientation. Instead we use information about some of the object points. Points with known coordinates are called control points. A point with all three coordinates known is called a full control point. If only X and Y are known then we have a planimetric control point. Obviously, with an elevation control point we know only the Z coordinate.

(Parameters: X′C, Y′C, Z′C = position of the left perspective center; ω′, φ′, κ′ = rotation angles left; X″C, Y″C, Z″C = position of the right perspective center; ω″, φ″, κ″ = rotation angles right.)

Figure 5.12: Direct orientation of a stereopair with respect to a ground control coordinate system.

The required information about 7 independent coordinates may come from different arrangements of control points. For example, 2 full control points and an elevation, or two planimetric control points and three elevations, will render the necessary information. The functional model describing the latter case is given below:

x, y = f(X′C, Y′C, Z′C, ω′, φ′, κ′, X″C, Y″C, Z″C, ω″, φ″, κ″, Z1, Z2, X3, Y3, X4, Y4, X5, Y5)    (5.38)

where the first twelve arguments are the exterior orientations of the two photographs and the remaining arguments the unknown coordinates of the control points. The Z-coordinates of the planimetric control points 1 and 2 are not known and thus remain in the parameter list. Likewise, the X- and Y-coordinates of the elevation control points 3, 4, 5 are parameters to be determined. Let us check the number of observation equations for this particular case. Since we measure the five partial control points on both


photographs, we obtain 20 observation equations. The number of parameters amounts to 12 exterior orientation elements and 8 coordinates. So we have just enough equations to solve the problem. For every additional point 4 more equations and 3 parameters are added. Thus, the redundancy increases linearly with the number of points measured. Additional control points increase the redundancy more, e.g. full control points by 4, an elevation by 2.

Like in the case of relative orientation, the mathematical model of the direct orientation is also based on the collinearity equations. Since it is non-linear in the parameters we need good approximations to assure convergence. The estimation of initial values for the exterior orientation parameters may be accomplished in different ways. To estimate X0C, Y0C, for example, one could perform a 2-D transformation of the photo-coordinates to planimetric control points. This would also result in a good estimation of κ0 and of the photo scale, which in turn can be used to estimate Z0C = scale · c. For aerial applications we set ω0 = φ0 = 0. With these initial values of the exterior orientation one can compute approximations X0i, Y0i of object points, where Z0i = haver.

Note that the minimum number of points to be measured in the relative orientation is 5. With the direct orientation, we need only three points, assuming that two are full control points. For orienting stereopairs with respect to a ground system, there is no need to first perform a relative orientation followed by an absolute orientation. This traditional approach stems from analog instruments where it is not possible to perform a direct orientation by mechanical means.

5.5.5 Absolute Orientation

With absolute orientation we refer to the process of orienting a stereomodel to the ground control system. Fig. 5.13 illustrates the concept. This is actually a very straightforward task which we discussed earlier under the 7-parameter transformation. Note that the 7-parameter transformation establishes the relationship between two 3-D cartesian coordinate systems. The model coordinate system is cartesian, but the ground control system is usually not cartesian because the elevations refer to a separate datum. In that case, the ground control system must first be transformed into an orthogonal system.

The transformation can only be solved if a priori information about some of the parameters is introduced. This is most likely done by control points. The same considerations apply as just discussed for the direct orientation.

From Fig. 5.13 we read the following vector equation which relates the model to the ground control coordinate system:

p = s R pm − t    (5.39)

where pm = [xm, ym, zm]T is the point vector in the model coordinate system, p = [X, Y, Z]T the vector in the ground control system pointing to the object point P, and t = [Xt, Yt, Zt]T the translation vector between the origins of the 2 coordinate systems. The rotation matrix R rotates vector pm into the ground control system and s, the scale factor, scales it accordingly. The 7 parameters to be determined comprise 3 rotation angles of the orthogonal rotation matrix R, 3 translation parameters and one scale factor.



Figure 5.13: Absolute orientation entails the computation of the transformation parameters between model and ground coordinate system.

The following functional model applies:

x, y, z = f(Xt, Yt, Zt, ω, φ, κ, s)    (5.40)

with Xt, Yt, Zt the translation, ω, φ, κ the orientation, and s the scale.

In order to solve for the 7 parameters at least seven equations must be available. For example, 2 full control points and one elevation control point would render a solution. If more equations (that is, more control points) are available then the problem of determining the parameters can be cast as a least-squares adjustment. Here, the idea is to minimize the discrepancies between the transformed and the available control points. An observation equation for control point Pi in vector form can be written as


ri = s R pmi − t − pi    (5.41)

with r the residual vector [rx, ry, rz]T. Obviously, the model is not linear in the parameters. As usual, linearized observation equations are obtained by taking the partial derivatives with respect to the parameters. The approximations for the parameters may be obtained by first performing a 2-D transformation with x, y-coordinates only.
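As a numerical sketch (hypothetical data; names ours), the 7 parameters can be estimated by driving the residuals of Eq. 5.41 to a minimum with scipy; the required approximations could come from the 2-D transformation just mentioned.

import numpy as np
from scipy.optimize import least_squares

def rotation_matrix(om, ph, ka):
    # same parameterization as Eq. 5.21
    co, so = np.cos(om), np.sin(om)
    cp, sp = np.cos(ph), np.sin(ph)
    ck, sk = np.cos(ka), np.sin(ka)
    return np.array([[cp * ck, -cp * sk, sp],
                     [co * sk + so * sp * ck, co * ck - so * sp * sk, -so * cp],
                     [so * sk - co * sp * ck, so * ck + co * sp * sk, co * cp]])

def absolute_orientation(model_pts, ground_pts, p0=None):
    # model_pts, ground_pts: (n, 3) arrays of corresponding points, n >= 3
    def residuals(p):
        s, om, ph, ka = p[:4]
        t = p[4:]
        predicted = s * model_pts @ rotation_matrix(om, ph, ka).T - t
        return (predicted - ground_pts).ravel()     # Eq. 5.41
    if p0 is None:
        p0 = np.array([1.0, 0, 0, 0, 0, 0, 0])      # s = 1, angles and t zero
    return least_squares(residuals, p0).x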


Chapter 6

Measuring Systems

Most analytical photogrammetric procedures require photo coordinates as measured quantities. This, in turn, requires accurate, reliable and efficient devices for measuring points on stereo images. The accuracy depends on the application. Typical accuracies range between three and ten micrometers. Consequently, the measuring devices must meet an absolute, repeatable accuracy of a few micrometers over the entire range of the photographs, that is over an area of 230 mm × 230 mm.

In this chapter we discuss the basic functionality and working principles of analytical plotters and digital photogrammetric workstations.

6.1 Analytical Plotters

6.1.1 Background

The analytical plotter was invented in 1957 by Helava. The innovative concept was met with reservation because computers at that time were not readily available, expensive, and not very reliable. It took nearly 20 years before the major manufacturers of photogrammetric equipment embarked on the idea and began to develop analytical plotters. At the occasion of the ISPRS congress in 1976, analytical plotters were displayed for the first time to photogrammetrists from all over the world. Fig. 6.1 shows a typical analytical plotter.

Slowly, analytical plotters were bought to replace analog stereoplotters. By 1980, approximately 5,500 stereoplotters were in use worldwide, but only a few hundred analytical plotters. Today, this number has increased to approximately 1,500. Leica and Zeiss are the main manufacturers with a variety of systems. However, production of instruments stopped in the early 1990s.

6.1.2 System Overview

Fig. 6.2 depicts the basic components of an analytical plotter. These components comprise the stereo viewer, the user interface, the electronics and real-time processor, and the host computer.


Figure 6.1: SD2000 analytical plotter from Leica.

Stereo Viewer

The viewing system closely resembles a stereo comparator, particularly the binocular system with high quality optics, zoom lenses, and image rotation. Also, the measuring mark and the illumination system are refined versions of stereocomparator components. Fig. 6.3 shows a typical viewer with the binocular system, the stages, and the knobs for adjusting the magnification, illumination and image rotation.

The size of the stages must allow for measuring aerial photographs. Some instruments offer larger stage sizes, for example 18 × 9 in., to accommodate panoramic imagery.

An important part of the stereo viewer is the measuring and recording system. As discussed in the previous section, the translation of the stages, the measuring and the recording are all combined by employing either linear encoders or spindles.

Translation System

In order to move the measuring mark from one point to another, either the viewing system must move with respect to a stationary measuring system, or the measuring system, including the photograph, moves against a fixed viewing system. Most x-y-comparators have a moving stage system. The carrier plate on which the diapositive is clamped moves against a pair of fixed glass scales and the fixed viewing system (compare also Fig. 6.5).

In most cases, the linear translation is accomplished by purely mechanical means. Fig. 6.4 depicts some typical translation guides. Various forms of bearings are used to



Figure 6.2: The main components of an analytical plotter.

Figure 6.3: Stereo viewer of the Planicomp P-3 analytical plotter from Zeiss.

reduce friction and wear and tear. An interesting solution is air bearings. The air is pumped through small orifices located on the facing side of one of two flat surfaces. This results in a thin uniform layer of air separating the two surfaces, providing smooth motion.

The force to produce motion is most often produced by threaded spindles or precision lead screws. Coarse positioning is most conveniently accomplished by a free moving cursor. After clamping the stages, a pair of handwheels allows for precise positioning.

Measuring and Recording System

If the translation system uses precision lead screws then the measuring is readily accomplished by counting the number of rotations of the screw. For example, a single rotation would produce a relative translation equal to the pitch of the screw. If the pitch is uniform, a fractional part of the rotation can be related to a fractional part of the


Figure 6.4: End view of typical translation way.

pitch. Full revolutions are counted on a coarse scale while the fractional part is usually interpreted on a separate, more accurate scale.

To record the measurements automatically, an analog-to-digital (A/D) conversion is necessary because the x-y-readings are analog in nature. Today, A/D converters are based on solid state electronics. They are very reliable, accurate and inexpensive.


Figure 6.5: Working principle of linear encoders.

Fig. 6.5 illustrates one of several concepts for the A/D conversion process, using linear encoders. The grating of the glass scales is 40 µm. Light from the source L transmits through the glass scale and is reflected at the lower surface of the plate carrier. A photo diode senses the reflected light by converting it into a current that can be measured. Depending on the relative position of plate carrier and scale, more or less light is reflected. As can be seen from Fig. 6.5 there are two extreme positions where either no light or all light is reflected. Between these two extreme positions the amount of reflected light depends linearly on the movement of the plate carrier. Thus, the precise position is found by linear interpolation.

User Interface

With user interface we refer to the communication devices an operator has available to work on an analytical plotter. These devices can be associated with the following


functional groups:

viewer control buttons permit changing the magnification, illumination and image rotation.

pointing devices are necessary to drive the measuring mark to specific locations, e.g. fiducial marks, control points or features to be digitized. Pointing devices include handwheels, footdisk, mouse, trackball, and cursor. A typical configuration consists of a special cursor with an additional button to simulate z-movement (see Fig. 6.6). Handwheels and footdisk are usually offered as an option to provide the familiar environment of a stereoplotter.

digitizing devices are used to record the measuring mark together with additional information such as identifiers, graphical attributes, and feature codes. For obvious reasons, digitizing devices are usually in close proximity to pointing devices. For example, the cursor is often equipped with additional recording buttons. Digitizing devices may also come in the form of foot pedals, a typical solution found with stereoplotters. A popular digitizing device is the digitizing tablet that is mainly used to enter graphical information. Another solution is the function keyboard. It provides less flexibility, however.

host computer communication involves graphical user interface and keyboard.

Electronics and Real-Time Processor

The electronic cabinet and the real-time processor are the interface between the host computer and the stereo viewer. The user does not directly communicate with this sub-system.

The motors that drive the stages receive analog signals, for example voltage. However, on the host computer only digital signals are available. Thus, the main function of the electronics is to accomplish A/D and D/A conversion.

Figure 6.6: Planicomp P-cursor as an example of a pointing and digitizing device.

The real-time processor is a natural consequence of the distributed computing concept. Its main task is to control the user interface and to perform the computing of


stage coordinates from model coordinates in real-time. This involves executing the collinearity equations and the inverse interior orientation at a rate of 50 to 100 times per second.

Host Computer

The separation of real-time computations from more general computational tasks makes the analytical plotter a device-independent peripheral with which the host communicates via a standard interface. The task of the host computer is to assist the operator in performing photogrammetric procedures such as the orientation of a stereomodel and its digitization.

The rapid performance increase of personal computers (PC) and their relatively low price makes them the natural choice for the host computer. Other hosts typically used are UNIX workstations.

Auxiliary Devices

Depending on the type of instrument, auxiliary devices may be optionally available to increase the functionality. One such device is the superpositioning system. Here, the current digitizing status is displayed on a small, high resolution monitor. The display is interjected into the optical path so that the operator sees the digitized map superimposed on the stereomodel. This is very helpful for quickly checking the completeness and the correctness of graphical information.

6.1.3 Basic Functionality

Analytical plotters work in two modes: stereocomparator mode and model mode. We first discuss the model mode because that is the standard operational mode.

Model Mode

Suppose we have set up a model. That is, the diapositives of a stereopair are placed on the stages and are oriented. The task is now to move the measuring mark to locations of interest, for example to features we need to digitize. How do the stages move to the conjugate location?

The measuring mark, together with the binoculars, remains fixed. As a consequence, the stages must move to go from one point to another. New positions are indicated by the pointing devices, for example by moving the cursor in the direction of the new point. The cursor position is constantly read by the real-time processor. The analog signal is converted to a 3-D location. One can think of moving the cursor in the 3-D model space. The 3-D model position is immediately converted to stage coordinates. This is accomplished by first computing photo-coordinates with the collinearity equations, followed by computing stage coordinates with the inverse interior orientation. We have symbolically

X, Y, Z = derived from movement of pointing device
x′, y′ = f(ext. or′, X, Y, Z, c′)
x″, y″ = f(ext. or″, X, Y, Z, c″)
xm′, ym′ = f(int. or′, x′, y′)
xm″, ym″ = f(int. or″, x″, y″)

These equations symbolize the classical real-time loop of analytical plotters. The real-time processor is constantly reading the user interface. Changes in the pointing devices are converted to model coordinates X, Y, Z which, in turn, are transformed to stage coordinates xm, ym that are then submitted to the stage motors. This loop is repeated at least 50 times per second to provide smooth motion. It is important to realize that the pointing devices do not directly move the stages. Alternatively, model coordinates can also be provided by the host computer.
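In pseudo-Python, one pass of the loop might look as follows. This is a conceptual sketch only: the exterior orientation (C, R, c) of each photograph and the parameters of the inverse interior orientation are assumed to be available from the preceding orientations, and all names are ours.

import numpy as np

def photo_from_model(P, C, R, c):
    # collinearity equations (Eqs. 5.25/5.26)
    d = P - C
    return -c * (d @ R[0]) / (d @ R[2]), -c * (d @ R[1]) / (d @ R[2])

def stage_from_photo(x, y, b11, b12, bx, by):
    # inverse interior orientation, modeled here as a similarity transformation
    return b11 * x - b12 * y - bx, b12 * x + b11 * y - by

def realtime_step(model_XYZ, photos):
    # photos: one dict per photograph with keys "C", "R", "c", "iio"
    stage = []
    for ph in photos:                  # primed and double-primed photograph
        x, y = photo_from_model(model_XYZ, ph["C"], ph["R"], ph["c"])
        stage.append(stage_from_photo(x, y, *ph["iio"]))
    return stage                       # stage coordinates sent to the motors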

Comparator Mode

Clearly, the model mode requires the parameters of both exterior and interior orientation. These parameters are only known after successful interior and relative orientation. Prior to this situation, the analytical plotter operates in the comparator mode. The same principle as explained above applies. The real-time processor still reads the position of the pointing devices. Instead of using the orientation parameters, approximations are used. For example, the 5 parameters of relative orientation are set to zero, and the same assumptions are made as discussed in Chapter 2, relative orientation. Since only rough estimates for the orientation parameters are used, conjugate locations are only approximate. The precise determination of conjugate points is obtained by clearing the parallaxes, exactly in the same way as with stereocomparators. Again, the pointing devices do not drive the stages directly.

6.1.4 Typical Workflow

In this section we describe a typical workflow, beginning with the definition of parameters, performing the orientations, and entering applications. Note that the communication is exclusively through the host computer, preferably by using a graphical user interface (GUI), such as Microsoft Windows.

Definition of System Parameters

After the installation of an analytical plotter certain system parameters must be defined. Some of these parameters are very much system dependent, particularly those related to the user interface. A good example is the sensitivity of pointing devices. One revolution of a handwheel corresponds to a linear movement in the model (actually to a translation of the stages). This value can be changed.

Other system parameters include the definition of units, such as angular units, or the definition of constants, such as the earth radius. Some of the parameters are used as default values, that is, they can be changed when performing procedures involving them.


Definition of Auxiliary Data

Here we include information that is necessary to conduct the orientation procedures. For the interior orientation, camera parameters are needed. This involves the calibrated focal length, the coordinates of the principal point, the coordinates of the fiducial marks, and the radial distortion. Different software varies in the degree of comfort and flexibility of entering data. For example, in most camera calibration protocols the coordinates of the fiducial marks are not explicitly available. They must be computed from distances measured between them. In that case, the host software should allow for entering distances; otherwise the user is required to compute coordinates.

For the absolute orientation, control points are necessary. It is preferable to enter the control points prior to performing the absolute orientation. Also, it should be possible to import a ground control file if it already exists, say from computing surveying measurements. Camera data and control points should be independent from project data because several projects may use the same information.

Definition of Project Parameters

Project related information usually includes the project name and other descriptive data. At this level it is also convenient to define the number of parallax points and the termination criteria for the orientation procedures, such as the maximum number of iterations, or the minimum changes of parameters between successive iterations.

More detailed information is required when defining the model parameters. The camera calibration data must be associated with the photography on the left and right stages. An option should exist to assign different camera names. Also, the ground control file name must be entered.

Interior Orientation

The interior orientation begins with placing the diapositives on the stages. Sometimes, the accessibility of the stages is limited, especially when they are parked at certain positions. In that case, the system should move the stages into a position of best accessibility. After having set all the necessary viewer control buttons, a few parameters and options must be defined. This includes entering the camera file names and the choice of transformation to be used for the interior orientation. The system is now ready for measuring the fiducial marks. Based on the information in the camera file, approximate stage coordinates are computed for the stages to drive to. The fine positioning is performed with one of the pointing devices.

With every measurement, improved positions of the next fiducial mark can be computed. For example, the first measurement allows the determination of a better translation vector. After the second measurement, an improved value for the rotation angle is computed. In that fashion, the stages drive closer to the true position of every new fiducial mark. After the set of fiducial marks as specified in the calibration protocol is measured, the transformation parameters are computed and displayed, together with statistical results, such as residuals and standard deviation. Needless to say, throughout the interior orientation the system is in comparator mode.


Upon acceptance, the interior orientation parameters are downloaded to the real-time processor.

Relative Orientation

The relative orientation first requires a successful interior orientation. Prior to the measuring phase, certain parameters must be defined, for example the number of parallax points and the type of orientation (e.g. independent or dependent relative orientation). The analytical plotter is still in comparator mode. The stages are now directed to approximate locations of conjugate points, which are regularly distributed across the model. The approximate positions are computed according to the considerations discussed in the previous section. Now, the operator selects a suitable point for clearing the parallaxes. This is accomplished by locking one stage and moving the other one only, until the point is parallax-free.

After six points are measured, the parameters of relative orientation are computed and the results are displayed. If the computation is successful, the parameters are downloaded to the RT processor and a model is established. At that time, the analytical plotter switches to the model mode. Now, the operator moves in an oriented model. To measure additional points, the system changes automatically to comparator mode to force the operator to clear the parallaxes.

It is good practice to include the control points in the measurements and computations of the relative orientation. Also, it is advisable to measure twelve or more points.

Absolute Orientation

The absolute orientation requires a successful interior and relative orientation. In case the control points are measured during the relative orientation, the system immediately computes the absolute orientation. As soon as the minimum control information is measured, the system computes approximate locations for additional control points and positions the stages accordingly.

6.1.5 Advantages of Analytical Plotters

The following table summarizes some of the advantages of analytical plotters over computer-assisted or standard stereoplotters. With computer-assisted plotters we mean a stereoplotter with encoders attached to the machine coordinate system so that model coordinates can be recorded automatically. A computer then processes the data and determines orientation parameters, for example. Those parameters must be turned in manually, however.

6.2 Digital Photogrammetric Workstations

Probably the single most significant product of digital photogrammetry is the digital photogrammetric workstation (DPW), also called a softcopy workstation. The role of DPWs in digital photogrammetry is equivalent to that of analytical plotters in analytical photogrammetry.


Table 6.1: Comparison of analytical plotters, computer-assisted stereoplotters and conventional stereoplotters.

Feature                      Analytical       Computer-assisted    Conventional
                             Plotter          Stereoplotter        Stereoplotter

accuracy
  instrument                 2 µm             ≥ 10 µm              ≥ 10 µm
  image refinement           yes              no                   no

drive to
  FM, control points         yes              no                   no
  profiles                   yes              yes                  yes
  DEM grid                   yes              no                   no

photography
  projection system          any              only central         only central
  size                       ≤ 18 × 9 in.     ≤ 9 × 9 in.          ≤ 9 × 9 in.

orientations
  computer assistance        high             medium               none
  time                       10 minutes       30 minutes           1 hour
  storing parameters         yes              yes                  no
  range of or. parameters    unlimited        ω, φ ≤ 5°            ω, φ ≤ 5°

map compilation
  CAD systems                many             few                  none
  time                       20 %             30 %                 100 %


The development of DPWs is greatly influenced by computer technology. Considering the dynamic nature of this field, it is not surprising that digital photogrammetric workstations undergo constant changes, particularly in terms of performance, comfort level, components, costs, and vendors. It would be nearly impossible to provide a comprehensive list of the current commercially available products, much less describe them in some detail. Rather, the common aspects, such as architecture and functionality, are emphasized.

The next section provides some background information, including a few historical remarks and an attempt to classify the systems. This is followed by a description of the basic system architecture and functionality. Finally, the most important applications are briefly discussed.

To build on common ground, I frequently compare the performance and functionality of DPWs with that of analytical plotters. Sec. 6.3 summarizes the advantages and the shortfalls of DPWs relative to analytical plotters.

6.2.1 Background

Great strides have been made in digital photogrammetry during the past few years due to the availability of new hardware and software, such as powerful image processing workstations and vastly increased storage capacity. Research and development efforts resulted in operational products that are increasingly being used by government organizations and private companies to solve practical photogrammetric problems. We are witnessing the transition from conventional to digital photogrammetry. DPWs play a key role in this transition.

Digital Photogrammetric Workstation and Digital Photogrammetry Environment

Fig. 6.7 depicts a schematic diagram of a digital photogrammetry environment. On the input side we have a digital camera, or a scanner with which existing aerial photographs are digitized. At the heart of the processing side is the DPW. The output side may comprise a film recorder to produce hardcopies in raster format and a plotter for providing hardcopies in vector format. Some authors include the scanner and film recorder as components of the softcopy workstation. The view presented here is that a DPW is a separate, unique part of a digital photogrammetric system.

As discussed in the previous chapters, digital images are obtained directly by using electronic cameras, or indirectly by scanning existing photographs. The accuracy of digital photogrammetry products depends largely on the accuracy of the electronic cameras or scanners, and on the algorithms used. In contrast to analytical plotters (and even more so to analog stereoplotters), the hardware of DPWs has no noticeable effect on the accuracy.

Figs. 6.9 and 6.8 show typical digital photogrammetric workstations. At first sight they look much like ordinary graphics workstations. The major differences are the stereo display, the 3-D measuring system, and the increased storage capacity to hold all digital images of an entire project. Sec. 6.2.2 elaborates further on these aspects.

The station shown in Fig. 6.8 features two separate monitors. In this fashion, the stereo monitor is entirely dedicated to displaying imagery; additional information, such as the graphical user interface, is displayed on the second monitor.


Figure 6.7: Schematic diagram of the digital photogrammetry environment with the digital photogrammetric workstation (softcopy workstation) as the major component. [The figure shows a digital camera, or a scanner digitizing photographs, feeding digital images to the DPW (display, computer, storage, user interface); the outputs go to a film recorder for orthophotos and to a plotter for maps.]

As an option to the 3-D pointing device (trackball), the system can be equipped with handwheels to more closely simulate the operation of a classical instrument.

The main characteristic of Intergraph's ImageStation Z is the 28-inch panoramic monitor that provides a large field of view for stereo display (see Fig. 6.9, label 1). Liquid crystal glasses (label 3) ensure high-quality stereo viewing. The infrared emitter on top of the monitor (label 4) provides synchronization of the glasses and allows group viewing. The 3-D pointing device (label 6) allows freehand digitizing, and its 10 buttons facilitate easy menu selection.

6.2.2 Basic System Components

Fig. 6.10 depicts the basic system components of a digital photogrammetric workstation.

CPU: the central processing unit should be reasonably fast, considering the amount of computations to be performed. Many processes lend themselves to parallel processing, and parallel processing machines are available at reasonable prices. However, programming that takes advantage of them is still a rare commodity and prevents a more widespread use of such workstations.


Figure 6.8: Typical digital photogrammetric workstation. The system shown here offers optional handwheels to emulate operation on classical photogrammetric plotters. Courtesy LH Systems, Inc., San Diego, CA.

OS: the operating system should be 32-bit based and suitable for real-time processing. UNIX satisfies these needs; in fact, UNIX-based workstations were the systems of choice for DPWs until the emergence of Windows 95 and NT, which make PCs a serious competitor of UNIX-based workstations.

main memory: due to the large amount of data to be processed, sufficient memory should be available. Typical DPW configurations have 64 MB, or more, of RAM.

storage system: must accommodate the efficient storage of several images. It usually consists of a fast-access storage device, e.g. hard disks, and mass storage media with slower access times. Sec. 6.2.3 discusses the storage system in more detail.

graphic system: the graphics display system is another crucial component of the DPW. The purpose of the display processor is to fetch data, such as raster (images) or vector data (GIS), process and store it in the display memory, and update the monitor. The display system also handles the mouse input and the cursor.

3-D viewing system: a distinct component of a DPW, usually not found in other workstations. It should allow viewing a photogrammetric model comfortably and possibly in color. For a human operator to see stereoscopically, the left and right image must be separated. Sec. 6.2.3 discusses the principles of stereo viewing.

3-D measuring device: used for stereo measurements by the operator. The solution may range from a combination of a 2-D mouse and trackball to an elaborate device with several programmable function buttons.


Figure 6.9: Digital photogrammetric workstation. Shown is Intergraph's ImageStation Z. Its main characteristic is the large stereo display of the 28-inch panoramic monitor. Courtesy Intergraph Corporation, Huntsville, AL.

network: a modern DPW hardly works in isolation. It is often connected to the scanning system and to other workstations, such as a geographic information system. The client/server concept provides an adequate solution in this scenario of multiple workstations and shared resources (e.g. printers, plotters).

user interface: may consist of hardware components such as a keyboard, mouse, and auxiliary devices like handwheels and footwheels (to emulate an analytical plotter environment). A crucial component is the graphical user interface (GUI).

6.2.3 Basic System Functionality

The basic system functionality can be divided into the following categories:

1. Archiving: store and access images, including image compression and decompression.


Figure 6.10: Basic system components of a digital photogrammetric workstation. [Shown: CPU/OS, memory, storage, graphic system, network, 3-D viewing, 3-D measuring, and periphery (printer, plotter), connected by the system bus.]

2. Processing: basic image processing tasks, such as enhancement and resampling.

3. Display and Roam: display images or sub-images, zoom in and out, roam within a model or an entire project.

4. 3-D Measurement: interactively measure points and features to sub-pixel accuracy.

5. Superpositioning: measured data or existing digital maps must be superimposed on the displayed images.

A detailed discussion of the entire system functionality is beyond the scope of this book. We will focus on the storage system, on the display and measuring system, and on roaming.

Storage System

A medium size project in photogrammetric mapping contains hundreds of photographs. It is not uncommon to deal with thousands of photographs in large projects. Assuming digital images with 16K × 16K resolution (pixel size approx. 13 µm), a storage capacity of 256 MB per uncompressed black and white image is required. Consider a compression rate of three, and we arrive at the typical number of 80 MB per image. Storing a medium size project on-line therefore places heavy demands on storage.
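The arithmetic behind these figures is easy to verify; the following minimal MATLAB sketch uses only the numbers quoted above (the 500-photograph project size is an assumed example):

pixelsPerSide   = 16*1024;                        % 16K x 16K scan, approx. 13 micron pixels
MBuncompressed  = pixelsPerSide^2 / 2^20          % 1 byte/pixel b/w image -> 256 MB
compressionRate = 3;
MBcompressed    = MBuncompressed/compressionRate  % -> approx. 85 MB (the "typical" 80 MB)
projectGB = 500*MBcompressed/1024                 % assumed 500-photo project -> approx. 42 GB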

Photogrammetry is not the only imaging application with high demands on storage, however. In medical imaging, for example, image libraries in the terabyte range are typical. Other examples of high storage demand applications include weather tracking and monitoring, compound document management, and interactive video.


These applications have a much higher market volume than photogrammetry; therefore, it is appealing for companies to further develop storage technologies.

The storage requirements in digital photogrammetry can be met through a carefully selected combination of available storage technologies. The options include:

hard disks: an obvious choice, because of fast access and high performance capabilities. However, the high cost of disk space¹ would make it economically infeasible to store entire projects on disk drives. Therefore, hard disk drives are typically used for interactive and real-time applications, such as roaming or displaying spatially related images.

optical disks: have slower access times and lower data transfer rates, but at lower cost (e.g. $10 to $15 per GB, depending on technology). The classical CD-ROM and CD-R (writable), with a capacity of approximately 0.65 GB, can hold only one stereomodel. A major effort is being devoted to increasing this capacity by an order of magnitude and making the medium rewritable. Until such systems become commercially available (including accepted standards), CDs are used mostly as a distribution medium.

magnetic tape: offers the lowest media cost per GB (up to two orders of magnitude less than hard disk drives). Because of its slow performance (due to the sequential access), magnetic tape is primarily used as a backup device. Recent advances in tape technology, however, make it a viable option for on-line imaging applications. Juke boxes with Exabyte or DLT (digital linear tape) cartridges (capacity of 20 to 40 GB per medium) lend themselves to on-line image libraries with capacities of hundreds of gigabytes.

When designing a hierarchical storage system, factors such as storage capacity, access time, and transfer rates must be considered. Moreover, the way data is accessed, for example randomly or sequentially, is important. Imagery inherently requires random access: think of roaming within a stereomodel. This seems to preclude the use of magnetic tapes for on-line applications. Clearly, one would not want to roam within a model stored on tape. However, if entire models are loaded from tape to hard disk, the access mode is not important, only the sustained transfer rate.

Viewing and Measuring System

An important aspect of any photogrammetric measuring system, be it analog or digital, is the viewing component. Viewing and measuring are typically performed stereoscopically, although certain operations do not require stereo capability.

As discussed earlier, humans can discern 7 to 8 lp/mm at a normal viewing distance of 25 cm. To exploit the resolution of aerial film, say 70 lp/mm, it must be viewed under magnification. The oculars of analytical plotters have zoom optics that allow viewing the model at different magnifications². Obviously, the larger the magnification, the smaller the field of view.

¹ Every 18 months the storage capacity doubles, while the price per bit halves. As this book is written, hard disk drives sold for less than $100 per GB.

² Typical magnification values range from 5 to 20 times.


Table 6.2: Magnification and size of field of view of analytical plotters.

                field of view [mm]
magnification   BC 1   C120   P 1

 5 ×             29     40
 6 ×                            32
10 ×             21
15 ×             14
20 ×              9     10     10

Table 6.2 lists zoom values and the size of the corresponding film area that appears in the oculars. Feature extraction (compilation) is usually performed with a magnification of 8 to 10 times. With higher magnification, the graininess of the film reduces the quality of stereoviewing. It is also worth pointing out that stereoscopic viewing requires a minimum field of view.

Let us now compare the viewing capabilities of analytical plotters with those of DPWs. First, we realize that this function is performed by the graphics subsystem, that is, by the monitor(s). To continue with the previous example of a film with 70 lp/mm resolution, viewed 10 × magnified, we read from Table 6.2 that the corresponding area on the film has a diameter of 20 mm. To preserve the high film resolution, it ought to be digitized with a pixel size of approximately 7 µm (1000/(2 × 70)). It follows that the monitor should display roughly 3K × 3K pixels. Monitors with this sort of resolution do not exist or are prohibitively expensive, particularly when considering color imagery and true color rendition (24+ bit planes).
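The same chain of reasoning can be reproduced numerically; a short MATLAB sketch, assuming only the values quoted in the text:

filmRes   = 70;                 % film resolution [lp/mm]
pixel_mm  = 1/(2*filmRes)       % sampling theorem -> approx. 0.007 mm (7 um)
fov_mm    = 20;                 % film diameter seen at 10x (Table 6.2)
monitorPx = fov_mm/pixel_mm     % -> approx. 2800, i.e. a 3K x 3K display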

If we relax the high resolution requirements and assume that images are digitized with a pixel size of 15 µm, then a monitor with the popular resolution of 1280 × 1024 would display an area that is quite comparable to that of analytical plotters.

Magnification, known under the more popular terms zooming in/out, is achieved by changing the ratio of the number of image pixels displayed to the number of monitor pixels. To zoom in, more monitor pixels are used than image pixels. As a consequence, the size of the area viewed decreases, and stereoscopic viewing may be affected.

The analogy to the floating point mark of analytical plotters is the three-dimensional cursor, which is created by using a pattern of pixels, such as a cross or a circle. The cursor must be generated by bitplane(s) that are not used for displaying the image. The cursor moves in increments of pixels, which may appear jerky compared to the smooth motion of analytical plotters. One advantage of cursors, however, is that they can be rendered in any desirable shape and color.

The accuracy of interactive measurements depends on how well you can identify a feature, on the resolution, and on the cursor size. Ultimately, the pixel size sets the lower limit. Assuming that the maximum error is 2 pixels, the standard deviation is approximately 0.5 pixel. Better sub-pixel accuracy can be obtained in two ways.


A straightforward solution is to use more monitor pixels than image pixels. Fig. 6.11(a) exemplifies the situation. Suppose we use 3 × 3 monitor pixels to display one image pixel. The standard deviation of a measurement is now 0.15 image pixels³. As pointed out earlier, using more monitor pixels for displaying an image pixel reduces the size of the field of view. In the example above, only an area of about 6 mm would be seen, hardly enough to support stereopsis.

Figure 6.11: Two solutions to sub-pixel accuracy measurements. In (a), an image pixel is displayed on m monitor pixels, m > 1. The cursor moves in increments of monitor pixels, corresponding to 1/m image pixels. In (b), the image is moved under the fixed cursor position in increments smaller than an image pixel. This requires resampling the image at sub-pixel locations.

To circumvent the problem of a reduced field of view, an alternative approach to sub-pixel measuring accuracy is often preferred. Here, the cursor is fixed in the monitor's center and the image is moved instead. Now the image does not need to move in increments of whole pixels: resampling at sub-pixel locations allows smaller movements. This solution requires resampling in real time to assure smooth movement.
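Both routes can be expressed in a few lines of MATLAB. The numbers follow the text; the bilinear resampling at the end is one common way (an assumption, since the text does not prescribe an interpolation method) to evaluate the image at a sub-pixel location:

% Route (a): one image pixel shown on m x m monitor pixels.
m = 3;
sigmaImage = 0.5/m                 % pointing precision of 0.5 monitor pixel -> ~0.17 image px
fovImage   = 20/m                  % field of view shrinks from 20 mm to approx. 6.7 mm

% Route (b): fixed cursor, image resampled at a sub-pixel position.
g = magic(5);                      % any small test image
r = 2; c = 3; dr = 0.25; dc = 0.6; % integer pixel plus fractional offset
gSub = (1-dr)*(1-dc)*g(r,c)   + (1-dr)*dc*g(r,c+1) ...
     +    dr *(1-dc)*g(r+1,c) +    dr *dc*g(r+1,c+1)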

Yet another aspect is the illumination of the viewing system, so crucial when it comes to interpreting imagery. The brightness of the screen drops to 25% when polarization techniques are used⁴. Moreover, the phosphor latency causes ghost images. All these factors reduce the image quality.

In conclusion, we realize that viewing on a DPW is hampered in several ways and is far inferior to viewing the same scene on an analytical plotter. To alleviate the problems, high resolution monitors should be used.

Stereoscopic Viewing

An essential component of a DPW is the stereoscopic viewing system (even though a number of photogrammetric operations can be performed monoscopically). For a human operator to see stereoscopically, the left and right image must be separated.

³ As before, the standard deviation is assumed to be 0.5 monitor pixel. We then obtain in the image domain an accuracy of 0.5 × 1/3 image pixel.

⁴ Polarization absorbs half of the light. Another half is lost because each image is only viewed during half of the time usually available when viewing in monoscopic mode.


Table 6.3: Separation of images for stereoscopic viewing.

separation   implementation

spatial      2 monitors + stereoscope
             1 monitor + stereoscope (split screen)
             2 monitors + polarization

spectral     anaglyphic
             polarization

temporal     alternate display of left and right image,
             synchronized by polarization

This separation is accomplished in different ways, for example spatially, spectrally, or temporally (Table 6.3).

One may argue that the simplest way to achieve stereoscopic viewing is by displaying the two images of a stereopair on two separate monitors. Viewing is achieved by means of optical trains, e.g. a stereoscope, or by polarization. Matra adopted this principle by arranging the two monitors at right angles, with horizontal and vertical polarization sheets in front of them.

An example of the split-screen solution is shown in Fig. 6.12. Here, the left and right images are displayed on the left and right half of the monitor, respectively. A stereoscope, mounted in front of the monitor, provides viewing. Obviously, this solution permits only one person to view the model. A possible disadvantage is the resolution, because only half of the screen resolution⁵ is available for displaying the model.

The most popular realization of spectral separation is by anaglyphs. The restriction to monochromatic imagery and the reduced resolution outweigh the advantages of simplicity and low cost. Most systems today use temporal separation in conjunction with polarized light. The left and right image is displayed in quick succession on the same screen. In order to achieve a flicker-free display, the images must be refreshed at a rate of 60 Hz per image, requiring a 120 Hz monitor.

Two solutions are available for viewing the stereo model. As illustrated in Fig. 6.13(a), a polarization screen is mounted in front of the display unit. It polarizes the light emitted from the display in synchronization with the monitor. An operator wearing polarized glasses will only see the left image with the left eye, as the polarization blocks any visual input to the right eye. During the next display cycle, the situation is reversed and the left eye is prevented from seeing the right image. The system depicted in Fig. 6.8 employs the polarization solution.

The second solution, depicted in Fig. 6.13(b), is more popular and less expensive to realize. It is based on active eyewear containing alternating shutters, realized, for example, by liquid crystal displays (LCD). The synchronization with the screen is achieved by an infrared emitter, usually mounted on top of the monitor (Fig. 6.9 shows an example).

⁵ Actually, only the horizontal resolution is halved, while the vertical resolution remains the same as in dual monitor systems.


Figure 6.12: Example of a split-screen viewing system. Shown is the DVP digital photogrammetric workstation. Courtesy of DVP Geomatics, Inc., Quebec.

Understandably, the goggles are heavier and more expensive compared to the simple polarizing glasses of the first solution. On the other hand, the polarizing screen and the monitor form a tightly coupled unit, offering less flexibility in the selection of monitors.

Roaming

Roaming refers to moving the 3-D pointing device. This can be accomplished in two ways. In the simpler solution, the cursor moves on the screen according to the movements of the pointing device (e.g. mouse) by the operator. The preferred solution, however, is to keep the cursor locked in the screen center, which requires redisplaying the images. This is similar to the operation of analytical plotters, where the floating point mark is always in the center of the field of view.

The following discussion refers to the second solution. Suppose we have a stereo DPW with a 1280 × 1024 resolution, true color monitor, and imagery digitized to 15 µm pixel size (or approximately 16K × 16K pixels). Let us now freely roam within a stereomodel, much as we would on an analytical plotter, and analyze the consequences in terms of transfer rates and memory size.

Fig. 6.14 schematically depicts the storage and graphic systems. The essential components of the graphic system include the graphics processor, the display memory, the digital-to-analog converter (DAC), and the display device (a CRT monitor in our case).


Figure 6.13: Schematic diagram of the temporal separation of the left and right image of a stereopair for stereoscopic viewing. In (a), a polarizing screen is mounted in front of the display and viewed through polarizing glasses. In (b), the screen is viewed through synchronized eyewear with alternating shutters. See text for detailed explanations.

The display memory contains the portion of the image that is displayed on the monitor. Usually, the display memory is larger than the screen resolution to allow roaming in real time. As soon as we roam out of the display memory, new image data must be fetched from disk and transmitted to the graphics system.

Graphic systems come in the form of high-performance graphics boards, such as RealiZm or Vitec boards. These state-of-the-art graphics systems are as complex as the system CPU. The interaction of the graphics system with the entire DPW, e.g. requesting new image data, is a critical measure of system performance.

Factors such as storage organization, bandwidths, and additional processing cause delays in the stereo display. Let us further reflect on these issues.

With an image compression rate of three, approximately 240 MB are required to store one color image. Consequently, a 24 GB mass storage system could store 100 images on-line. By the same token, a hard disk with 2.4 GB capacity could hold 10 compressed color images.

Since we request a true color display, approximately 2 × 4 MB are required to hold the two images of the stereomodel⁶.


Figure 6.14: Schematic diagram of storage system, graphic system and display. [Storage system: mass storage, hard disk, system memory. Graphics system: interface, graphics processor, program memory, display memory, LUT, DAC, and display device, connected to the storage system by the system bus.]

As discussed in the previous section, the left and right image must be displayed alternately at a frequency of 120 Hz to obtain an acceptable model⁷. The bandwidth of the display memory amounts to 1280 × 1024 × 3 × 120 = 472 MB/sec. Only high speed, dual port memory, such as VRAM (video RAM), satisfies such high transfer rates. For less demanding operations, such as storing programs or fonts, less expensive memory is used in high performance graphic workstations.
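The bandwidth figure is a direct product of the display parameters; a quick check in MATLAB (the text's 472 MB/sec uses 1 MB = 10^6 bytes):

cols = 1280; rows = 1024; bytesPerPx = 3; refresh = 120;
bytesPerSec = cols*rows*bytesPerPx*refresh   % = 471,859,200 bytes/sec
MBperSec    = bytesPerSec/1e6                % -> approx. 472 MB/sec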

At what rate should one be able to roam? Skilled operators can trace contour lines at a speed of 20 mm/sec. A reasonable request is that the display on the monitor should be "crossed" within 2 seconds, in any direction. This translates to 1280 × 0.015/2 ≈ 10 mm/sec in our example. A maximum roam rate of 200 pixels/sec is quoted for Intergraph's ImageStation Z softcopy workstation. As soon as we begin to move the pointing device, new portions of the model must be displayed. To avoid immediate disk transfers, the display memory is larger than the monitor, usually four times. Thus, we can roam without problems within a distance twice as long as the screen window, at the cost of increased display memory size (32 MB of VRAM in our example).

Suppose we move the cursor with a speed of 10 mm/sec toward one edge. When will we hit the edge of the display memory? Assuming we begin at the center, after one second the edge is reached and the display memory must be updated with new data. To assure continuous roaming, at least within one stereomodel, the display memory must be updated before the screen window reaches the limit. The new position of the window is predicted by analyzing the roaming trajectory. A look-ahead algorithm determines the most likely positions and triggers the loading of image data through the hierarchy of the storage system.

⁶ 1280 × 1024 × 3 Bytes = 3,932,160 Bytes.
⁷ Screen flicker is most noticeable far out in one's vision periphery. Therefore, large screen sizes require higher refresh rates. Studies indicate that for 17-inch screens refresh rates of 75 Hz are acceptable. For DPWs larger monitors are required; therefore, with a refresh rate of 60 Hz per image we still experience annoying flicker at the edges.


Referring again to our example, we have one second to completely update the display memory. Given its size of 32 MB, data must be transferred at a rate of 32 MB/sec from hard disk via the system bus to the display memory. The bottlenecks are the interfaces, particularly the hard disk interface. Today's systems do not offer such bandwidths, except perhaps SCSI-2 devices⁸. A PCI (peripheral component interconnect) interface on the graphics system will easily accommodate the required bandwidth.

A possible solution around the hard disk bottleneck is to dedicate system memory to storing an even larger portion of the stereomodel, serving as a sort of relay station between hard disk and display memory. This caching technique, widely used by operating systems to increase the efficiency of data transfer from disk to memory, offers additional flexibility to the roaming prediction scheme. It is quite unlikely that we will move the pointing device with a constant velocity across the entire model (features to be digitized are usually confined to rather small areas). That is, the content of the system memory does not change rapidly.
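The text does not specify the prediction algorithm; as a purely hypothetical illustration, a look-ahead scheme could be as simple as linear extrapolation of the recent cursor trajectory:

% Hypothetical look-ahead sketch: extrapolate the last two cursor samples
% one update cycle ahead and prefetch when the prediction leaves display memory.
p1 = [400 300]; p2 = [410 305];   % two most recent cursor positions [pixels]
dt = 0.05;                        % sampling interval [s]
v  = (p2 - p1)/dt;                % velocity estimate [pixels/s]
pPred = p2 + v*1.0;               % predicted position 1 s ahead
memCenter = [1280 1024];          % display memory center (2560 x 2048 pixels)
memHalf   = [1280 1024];          % half-extent: 4x the 1280 x 1024 monitor
if any(abs(pPred - memCenter) > memHalf)
    disp('prefetch: reload display memory around predicted position')
end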

Fig. 6.15 depicts the different windows related to the size of a digital image. In our example, the size of the display window is 19.2 mm × 15.4 mm, the display memory is 4× larger, and the dedicated system memory could again be 4× larger. Finally, the hard disk holds more than one stereopair.

Figure 6.15: Schematic diagram of the different windows related to the size of an image (monitor window, display memory, system memory, and predicted trajectory). Real-time roaming is possible within the display memory. System memory holds a larger portion of the image. The location is predicted by analyzing the trajectory of recent cursor movements.

⁸ Fast wide SCSI-2 devices, available as options, sustain transfer rates of 20 MB/sec. This would be sufficient for roaming within a b/w stereo model.


6.3 Analytical Plotters vs. DPWs

During the discussion of the basic system functionality and the orientation procedure, the advantages and disadvantages of DPWs became apparent. Probably the most severe shortcoming of today's DPWs is the viewing and roaming quality, which is far inferior to that of analytical plotters. Consider moving the floating point mark in an oriented model on the analytical plotter. Regardless of how quickly you move, the model is always there, viewed at superb quality. In contrast, DPWs behave poorly when an attempt is made to move the cursor from one model boundary to another.

Several factors influence the viewing quality. For one, the monitor resolution sets a limit to the size of the field of view. Small fields of view reduce the capability to stereoscopically view a model. This makes image interpretation more difficult. Flickering, particularly noticeable on large monitors, is still a nuisance, despite 120 Hz refresh rates. A third factor is the reduction in illumination due to polarizing and alternating image displays. Finally, the ease and simplicity of optical image manipulation, such as rotation, cannot be matched on DPWs. Resampling is a time consuming process and may even reduce the image quality.

The advantages of DPWs outweigh these shortcomings by far, however. The benefits are well documented. The following are a few noteworthy factors:

• image processing capabilities are available at the operator's fingertips. Enlargements, reductions, contrast enhancements, and dodging no longer require a photo lab; DPWs have a built-in photo lab.

• traditional photogrammetric equipment, such as point transfer devices and comparators, is no longer required; its functionality is assumed by DPWs. Digital photogrammetric workstations are much more universal than analytical plotters.

• the absence of any moving mechanical-optical parts makes DPWs more reliable and potentially more accurate, since no calibration procedures are necessary.

• DPWs offer more flexibility in viewing and measuring several images simultaneously. This is a great advantage in identifying and measuring control points and tie points.

• several persons can stereoscopically view a model. This is interesting for applications where design data is superimposed on a model. Free stereo viewing is also considered an advantage by many operators.

• DPWs are more user-friendly than analytical plotters. As more photogrammetric procedures are automated, the operation of a DPW will require less specialized operators.

Among the many potentials of DPWs is the possibility of increasing the user base. To illustrate this point, compare the skill level of an operator working on a stereoplotter, an analytical plotter, and a digital photogrammetric workstation. There is clearly a trend away from very specialized photogrammetric know-how to the more generally available knowledge of how to use a computer.


The fact that stereo models can be viewed without optical-mechanical devices, and the possibility of embedding photogrammetric processes in user-friendly graphical user interfaces, raise the chances that non-photogrammetrists can successfully use photogrammetric techniques.

It is possible to extend the roaming capabilities of DPWs beyond the stereomodel. Roaming should be performed across the entire project, and it should include vector data, DEMs, databases, and design data. Such a generalized roaming scheme would further increase the efficiency and user friendliness of DPWs.

One of the biggest advantages, however, lies in the potential to automate photogrammetric applications, such as aerial triangulation, DEM generation, and orthophoto production.


Lab 4: Image Processing and Photographic Mosaic

EE299 Winter 2008

Due: Exercises 1-2 are due on or before 8 February, and the remainder of the lab is due on or before 15 February 2008.

Objective

The purpose of this lab is to learn about digital images and perform some basic image processing in MATLAB. You have already done some work with images in MATLAB in Lab 1. In this lab, you will learn a few more tools for handling images and develop a better understanding of the tools that you have used. More specifically, this lab covers:

1: representing and displaying images in MATLAB;

2: simple techniques for image filtering and transformation;

3: how to create synthetic color images; and

4: creating a photographic mosaic (an image created from many much smaller images).

There are four exercises total in this lab: exercises 1-3 allow you to practice loading, displaying, and changing images in MATLAB, and in exercise 4 you will make your photo-mosaic.

Prelab:

To ensure that you can complete everything within the allotted time, before your lab section you should:

• Read the lab before coming to lab.

• Try to finish parts 1-2 (ideally 1-3) during the first week of lab.

• Before the second week of lab, find a digital picture that you would like to turn into a photo-mosaic, and make sure that it is available to download to a lab computer. Ideally, you should pick something that is fairly high resolution to start with. Teammates can have the same picture or different ones.

Optional for more ambitious students: If you want to use your own image collection to build the mosaic, you should have it collected in one directory that you can access from the Sieg 232 lab, or burn a CD ahead of time.

1 Representing Images in MATLAB

In this section we will talk about how to read, display and write images in MATLAB, as well as how to manipulate subimages. As a preliminary, it is important to understand that a digital image (color or grayscale) is just a bunch of numbers, but the numbers themselves can be characterized using different formats. Digital systems typically represent two types of numbers: integer and floating point. There are many types of each in MATLAB, but the ones we will focus on are double (floating point) and uint8 (8-bit unsigned integer).


Table 1: Differences between double and uint8 in MATLAB

            double                           uint8
type        floating point                   integer
bits        64                               8
range       roughly -1.8e+308 to 1.8e+308    0 to 255

Almost all variables in MATLAB are double by default, and most functions expect double and will issue a warning if given something else. In image storage and display, however, uint8 is usually used because it is more memory efficient. In image processing (where numbers are being multiplied and added), the double representation is often preferred. The differences between double and uint8 in MATLAB are shown in Table 1. You can figure out the type of representation associated with a particular variable or matrix by using the MATLAB class function. You can also specify the type as an argument when building matrices of ones and zeros, as we will see in the first exercise.
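For example, a quick command-line session illustrates the default class, explicit typing, and one classic uint8 surprise (saturating arithmetic):

>> x = ones(2,2);            % double by default
>> class(x)                  % returns 'double'
>> y = ones(2,2,'uint8');    % specify the type when building the matrix
>> class(y)                  % returns 'uint8'
>> uint8(200) + uint8(100)   % uint8 arithmetic saturates: the result is 255
>> 0.5 * double(y)           % convert to double before fractional arithmetic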

Recall from Lab 1 that there are three general methods used for representing images: intensity, indexed, and "truecolor." For images that have only one color plane (i.e. an NxMx1 matrix) and have elements of the uint8 type, MATLAB will interpret the matrix to be an indexed image, and will use each element value as an index into a colormap. If the elements are of the double type, with values between 0 and 1, MATLAB will interpret this as an intensity image. If this is a grayscale image, 0 represents black and 1 represents white. If it is a truecolor image, meaning there are three color planes (i.e. an NxMx3 matrix), the [0,1] range represents the intensity of the three colors red, green and blue; black corresponds to (0,0,0) and white to (1,1,1).

The "colormap" is a K-length vector (where K = 2^N given N bits/pixel) for grayscale images, and a Kx3 matrix for color images, with the columns corresponding to red, green and blue intensities. The values in the colormap should also be in the [0,1] range. The colormap is essentially a lookup table.

1.1 Reading, Displaying and Writing Images

As for sound files, there are three ways for getting images into MATLAB:

• Convert an external image file into a MATLAB matrix with the imread command, using either:
>> myImage = imread('someimage.ext');
>> [myImage map] = imread('someimage.ext');
Images can be stored in different formats on your computer, and the filename extension ext specifies the format. MATLAB can handle a large number of formats, including jpg (or jpeg), gif, tif (or tiff), pgm, bmp and more, as documented in the MATLAB help documentation. The variable myImage is an MxN array if the file is a grayscale image, or MxNx3 for most color formats. The map variable specifies the colormap and is only needed for indexed images.

• Load an image that already exists as a MATLAB matrix into your workspace, using the load command, as you did in Lab 1 with the "durer" image. The syntax is either:
>> load durer;
>> load('durer');
The signal is put into a previously-defined variable, in this case X, and it has unnormalized double format. (For some tools, you will need to normalize it to the [0,1] range.)

• Create a matrix from scratch in MATLAB, as in the example script makeGrayImage.m, which you will use in Exercise 1.

Once you have generated a matrix, the original format (jpg, gif, etc.) doesn't play a role, though the indexed/intensity/color difference will matter.


There are three ways to display an image: image, imagesc and imshow. For photos, the imshow command is nicer, since it doesn't give the dimension tick marks on the edges of the image that the other commands provide (which can be useful for scientific graphics), but it requires that the intensities be in the range [0,1]. If the values of the matrix are not in the [0,1] range, then you should use the imagesc command. Now, let's say you type:

>> imshow(myImage);
then MATLAB will assume that a 2-dimensional (MxN) myImage is grayscale and a 3-dimensional (MxNx3) myImage is color. You can also use the indexed representation of a color image using

>> imshow(myImage,map);
in which case myImage is 2-dimensional with integer uint8 (or uint16) values and map is a Kx3 (RGB) matrix as described above. You can change the colormap by using the colormap() function. The different colormaps in MATLAB can be viewed in the MATLAB help function or by typing "doc colormap" at the command line. You can also use non-standard colormaps (try >> [mon map] = imread('mondrian.gif'); and look at the resulting map) or make up your own.

Once you have created or modified an image, you can save the image using either
>> imwrite(myImage,'filename.ext',fmt);
>> imwrite(myImage,map,'filename.ext',fmt);

where myImage is the matrix your image is stored in, "filename.ext" is the name you want the file to have, and fmt is any format that MATLAB supports (e.g. 'jpg', 'gif', etc.). By convention, it is nice if 'fmt' and 'ext' are the same. (Note that the output format need not be the same as the input format.) The map variable is only needed if you are using indexed color images.

1.2 Working with SubImages

In Lab 1, we learned how to access (and change) a part of an image. We can use the same technique to generate a new image that is a part of the original image. For example, download the Paolina image from the course web page (make sure to get the full image, not the thumbnail) and try the following:

>> pao = imread('Paolina.gif');
>> A = pao(1:240,1:256);
>> imshow(A);

You should see the upper left quadrant of the Paolina image (since it is 480x512, which you can determine using the size command). You can use ":" alone to indicate that you want everything in a particular dimension, so pao(:,1:256) corresponds to the left half of the Paolina image.

We can alter parts of an image by doing mathematical operations on that part. The script changePaolina.m on the class webpage shows how to lighten/darken different parts of the image and add lines in specified places.

We can build bigger images out of smaller images (which will be important for the photomosaic) by concatenating images. For example, you can put images side-by-side in MATLAB by concatenating them in a row, as in

>> group = [A1 A2 A3];
or you can stack them as in

>> group = [A1; A2; A3];
Using these together, you can create a bigger image out of subimages, which you will do in the next exercise.

Download the makeGrayImage M-file from the course web page. This is a function that will create an image object with concentric squares. Try:

>> y = makeGrayImage;
>> imshow(y);
>> figure;
>> z = [y y y];
>> mapc = colormap('cool');


>> imshow(z,mapc);
This should give you two figures, one grayscale with one block and the other in color with 3 blocks.

For another example, look at the jumble.m script on the course webpage, which scrambles the Paolina image.

Exercise 1: Write one or more new functions, similar to makeGrayImage, that generate a different pattern, and use these in a script to make a quilt image. You can also mix in parts of photos (such as Paolina), but you must have some squares with geometric patterns that you create. Show the image to your TA using a different color map in the display. Write out the image to a file and upload it to the CollectIt EE299 Lab 4 space during the first week of the lab. You should verify that the file you created looks the way you intended before you submit, by clicking on it to display it in Windows.

2 Image Processing

Modifying images using signal processing is similar to modifying sounds, except that you need to keep track of two or three dimensions instead of one. As with digital sounds, there are many things that you can do with digital images. For example, you can:

• operate on one pixel at a time, as in the changePaolina.m script,

• create an "echo" effect, as with the lamp image we'll be working with (which is 256x256):
>> lampE = lamp;
>> lampE(:,8:256) = 0.7*lampE(:,8:256) + 0.3*lamp(:,1:249);

• or change a pixel based on a region of its neighbors as in the filtering that we will explore next.

Download the lamp image and the filter_image.m function from the course webpage. Read the image into MATLAB (don't forget to use the correct extension) and run the function on it. The function returns two filtered images and displays the original with them. Note that you can run the function with or without assigning the returned images to a variable, so either

>> [lampL lampH] = filter_image(lamp);
>> filter_image(lamp);

will work (assuming the image is assigned to variable lamp). In this function, the low pass filter is a simple smoothing function over a 10x10 region, and the high pass filter is designed using the MATLAB filter design function. Both filters are designed using the same 1-dimensional techniques you used for sound files, and then turned into a 2-dimensional filter by taking a vector outer product. It is also possible to design 2-dimensional filters directly, but this is beyond the scope of the class.
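As a rough sketch of that outer-product construction (an illustration only, not the actual contents of filter_image.m; the lamp filename and extension are assumed):

h1 = ones(1,10)/10;                 % 1-D 10-tap moving-average filter
h2 = h1' * h1;                      % vector outer product -> 10x10 2-D filter
lamp  = double(imread('lamp.gif')); % assumed filename/extension
lampL = filter2(h2, lamp);          % low pass (smoothed) version of the image
imagesc(lampL); colormap(gray); axis image;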

Exercise 2: Create a new script that modifies the lamp image and displays a sequence of the original plus two modified versions of the picture, using different processing techniques than those in the filter_image.m function. You can use filters with different design criteria, or warp the intensities, or add an echo, or any other variation that is of interest to you. You may copy code from any of the m-files provided with this lab, but you must change some part of it. Explain the techniques that you used to your TA.

3 Color Images

Accessing parts of RGB images is a little trickier because of the third dimension. Now we also have a color channel: myColorImage(row, column, channel). For an RGB image there are only three color channels: 1 = Red, 2 = Green, and 3 = Blue. If we want to cut out or operate on a sub-image of all the channels, we can simply use the ':' to tell MATLAB that we want everything associated with a dimension of the array. For example,

>> mySubImage=myColorImage(1:50, 100:120, :);


will select a sub-image from (1,100) to (50,120) with all three color planes. If all we wanted was the green channel of the full image, then we can use myColorImage(:, :, 2). You can select a single channel and then display it as a grayscale image using

>> Gpart = myColorImage(:, :, 2);
>> imshow(Gpart);

or you can remove the green from the image by using:
>> myColorImage(:, :, 2) = 0;

Exercise 3: Download the mfile makecolorimage.m from the class webpage. The purpose of this script is to demonstrate constructing an RGB image in MATLAB. There are 400 pixels in the image; each pixel's color is determined by three values (between 0 and 1) representing the red, green, and blue intensities that make up the color. When all three values are 1, the pixel appears as white. Setting the red value of a pixel to zero causes it to appear as a combination of green and blue.

a) Examine the mfile and the resulting images so you understand how to change the color values for regions of pixels in the image. Then change the image matrix A to add a red region (it doesn't have to be a square) somewhere in the image, and display the result. (A minimal sketch of this kind of edit follows part b.)

b) Download the color image parrot.jpg and change the colors in two regions of the image. You can change colors by modifying the intensity (using scaling, adding or filtering) or simply zeroing (removing) them.
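For part a), setting a rectangular region to pure red might look like the following sketch (the variable name A and the region bounds are assumptions; red means R = 1, G = B = 0):

A(5:10, 8:15, 1) = 1;   % red channel to full intensity in rows 5-10, cols 8-15
A(5:10, 8:15, 2) = 0;   % no green in that region
A(5:10, 8:15, 3) = 0;   % no blue in that region
imshow(A);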

4 Photographic Mosaic

A photographic mosaic is an image that is created from many smaller images. The effect is to recreate some picture (e.g. a face) by replacing small portions of that picture with another image (which we'll call a tile) that has the same average color. At a distance, the mosaic will look like the original picture, while up close, the individual tiles can be seen. An example of a photographic mosaic can be seen in the lab and on the class web site under Lab 4. To learn more about the history of photographic mosaics, see Wikipedia's entry at http://en.wikipedia.org/wiki/Photographic_mosaic.

Using the notion of sub-images discussed earlier, we can split a picture into sub-images and compare each one's average color value with a tile-set. The original picture's sub-image will be replaced with a tile from the tile-set that has the closest average color value. The tile-set you will be using contains over 5,000 images of various subject matter. While it should give you a good result, typically mosaics that use a tile-set with a theme have a more profound impact artistically. For example, a mosaic of an animal might be made of tiles from nature photos.

It is not difficult to make a photographic mosaic in MATLAB, and the functions to make the mosaic are given to you, so you have more time to be creative with the project. Some of the parameters are left up to you to choose, such as the number of sub-sub-blocks ("nBlocks") to split the tiles into. In addition, you are strongly encouraged to look at the code, to make sure you understand it and because you may want to enhance or modify the functions.

Exercise 4: Using the procedure outlined next, make two versions of a photographic mosaic from an image of your choosing. There are 5 tile-sets to choose from: 25x25, 50x50, 75x75, 100x100, and 150x150 pixels. The different tile-sets all characterize the same set of images, but different sizes may be better for different pictures or final image sizes.

a) Poster printout: Choose a poster size of either 11"x14", 13"x19", 16"x20", or 18"x24". Calculate the image dimensions (in pixels) needed to create a print at 300 DPI with the chosen poster size. (For example, an 11"x14" poster requires a 3300x4200 pixel image.) Resize your image so that it is this size or larger. The image does not have to have the exact dimensions; it can be cropped to fit the poster dimensions. If cropping is needed, use the idea of sub-images to cut out the part of the picture that


you want. Choose how many tiles per inch you want to have in the mosaic and use the tile-set that comes closest. (As an example, the 100x100 tile-set is used in the poster hanging in the lab. A short sizing sketch follows part b.)

b) Web version: The mosaic that you will turn in via CollectIt should be suitable for a web site, so choose the display size in inches (roughly) that would be reasonable on a computer screen. Follow the same procedure as for the poster, except that you will resize the original image and choose the tile-set based on a different resolution: monitors typically have 96 to 120 DPI. Explain your tile-set choice to your TA. (Note: If you create multiple poster mosaics in your team, you need only pick one to be used for the web page version.)
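The sizing arithmetic in parts a) and b) can be checked in a couple of MATLAB lines (the 11"x14" poster and the tile sizes are the examples from the text):

posterIn = [11 14]; dpi = 300;
pixelsNeeded = posterIn*dpi        % -> [3300 4200] for an 11"x14" poster at 300 DPI
tileSizes    = [25 50 75 100 150];
tilesPerInch = dpi./tileSizes      % -> [12 6 4 3 2]; pick the set closest to your target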

Getting Set Up. From the class web page, download the two functions needed to create the mosaic:

• getAverages.m, which precomputes a simple representation of the images in the tile-set for search, and

• mosaic.m, which creates the mosaic.

Next, download the image you want to base the mosaic on. Finally, download your desired tile-set from sccserv.ee.washington.edu (in the /v0/data/images/mosaictiles/ directory). The tile-sets are stored in a mat file as a 4-dimensional matrix, shown in Figure 1, as in tilesetN(rows, columns, rgb, image number). Load the tile-set into MATLAB, e.g. using:

>> load tileset25
and the tile-set will be loaded as a variable with the same name as the filename. Try displaying a couple of images from it with the imshow command (hint: use the ':' to specify you want everything, such as all rows or columns).
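For instance, assuming the 25x25 tile-set, the first tile and the matrix dimensions can be inspected with:

>> imshow(tileset25(:,:,:,1));   % all rows, columns and color planes of tile 1
>> size(tileset25)               % 25 x 25 x 3 x (number of tiles)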

Figure 1: The tile-set layout.

Formatting the Original Image. In order to make the mosaic the correct size, your image may have to be adjusted to match the mosaic dimensions. If your image is smaller than the size you computed for the poster, then you will have to enlarge the image using the imresize command. Once it is the same size or larger than the required size, check to see if the image size corresponds to an integer multiple of the tile size in both dimensions. (For example, a 50x50 tile would perfectly cover a 1200x1000 image, but not a 1220x980 image.) If not, you need to crop your image, as shown in Figures 2(a) or 2(b). The figure displays an image of birds with a grid representing the size of the tile-set. Each one of these squares in the grid will be replaced with a tile from the tile-set. The image does not fit into the squares at the edges, so the image must be cropped (or cut) to fit into the grid. In MATLAB, cropping simply involves taking a sub-image from the picture.
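A minimal cropping sketch (the variable names img and tile are assumptions; this version crops from the top-left):

tile = 50;                        % chosen tile size in pixels
[r, c, d] = size(img);            % img: your resized picture (d = 3 color planes)
rCrop = tile*floor(r/tile);       % largest multiples of the tile size
cCrop = tile*floor(c/tile);
imgCropped = img(1:rCrop, 1:cCrop, :);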


(a) Image cropped from the bottom-right.

(b) Image cropped from the middle.

Figure 2: Examples of cropping the image to be divisible by the tile-set size.

Formatting the Tile-Set. With the image ready and the tile-set chosen, use getAverages.m to precompute the averages for the tile-set, as in:

>> tileAve = getAverages(tileData,nBlocks);
The function takes two arguments, "tileData" and "nBlocks". The first is the name of the tile-set that you have loaded into MATLAB, and the second defines an NxN grid of sub-blocks to split each tile into, shown in Figure 3. The sub-blocks are used in computing the distance between a block in the image and each different tile. For the example in Figure 3, "nBlocks" was set to three to produce a 3x3 set of sub-blocks for that tile. The getAverages.m function will get the average of each of these sub-blocks for each tile. You can think of them as sub-sub-images. If you set "nBlocks" to one, then only a single average (for each color plane) will be recorded for that tile. The "nBlocks" variable provides a way to control how closely the sub-images in the picture match the tiles in the tile-set. Try different values to see how it affects your mosaic, but avoid big numbers since that makes the function much slower. Of course, "nBlocks" must be smaller than the tile size, but it need not divide evenly into the tile size. Note: You will need to rerun getAverages.m when a new tile-set is loaded or if you want to try a different value for "nBlocks."
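The idea behind the precomputation can be sketched in a few lines (an illustration of the approach, not the provided getAverages.m; the tileData layout follows Figure 1):

nBlocks = 3;
t = double(tileData(:,:,:,1));       % first tile of the loaded tile-set
w = floor(size(t,1)/nBlocks);        % sub-block width in pixels
ave = zeros(nBlocks, nBlocks, 3);
for i = 1:nBlocks
  for j = 1:nBlocks
    blk = t((i-1)*w+1:i*w, (j-1)*w+1:j*w, :);
    ave(i,j,:) = mean(mean(blk,1),2);   % average per color plane
  end
end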

Making the Mosaic. Finally, to make the mosaic, use the function mosaic.m. It takes three arguments: "img", "tileData", and "tileAve". The first is the image from which you will create the mosaic, the second is the tile-set, and the third argument represents the precomputed averages returned from getAverages.m.


Figure 3: "nBlocks" determines the number of sub-sub-blocks that a sub-image is split into. In this case, "nBlocks" = 3, so the sub-image is split into 3x3 sub-sub-blocks for purposes of finding the distance of that block to images in the tile-set.

This function splits the image up into sub-images that are the same size as your tile-set, and then iterates through each one. For each sub-image, the function computes the averages in the same way as getAverages.m, and for each sub-sub-image the distance between that average and the precomputed averages of all the tiles in the tile-set is calculated. The distances for each sub-sub-image are summed for that sub-image, and the tile in the tile-set with the minimum cumulative distance is inserted in the sub-image.
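The matching step itself reduces to a nearest-neighbor search over those averages. A sketch for one sub-image (again an illustration, not the provided mosaic.m; subAve is assumed to hold this sub-image's nBlocks x nBlocks x 3 averages, computed as in the previous sketch, and tileAve is assumed to be nBlocks x nBlocks x 3 x nTiles):

nTiles = size(tileAve, 4);
d = zeros(nTiles, 1);
for k = 1:nTiles
  df = subAve - tileAve(:,:,:,k);
  d(k) = sum(df(:).^2);           % distance summed over all sub-blocks and planes
end
[dmin, best] = min(d);            % index of the best-matching tile
mosaicBlock = tileData(:,:,:,best);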

Optional: If you want to explore some variations that will require more code changes, some possibilitiesinclude:

• modify the functions to operate on grayscale images;

• prevent the same tile from being repeated immediately after itself; and/or

• create your own tile set (contact the TA for an additional script that can help with this).

Upload your web-sized mosaic to CollectIt EE299 Lab 4.

IMPORTANT: You only get 2 free poster-size printouts per person, so make sure you are happy with your final result before you print it out on the big color printer. The printer driver is installed only on one machine in the lab. See your TA for help with printing.


Remote Sensing of the Environment

FW4540

Advanced Terrestrial Remote Sensing

FW5540

Lecture 5: Aerial Photography and Photogrammetry


Airphoto Geometry

Basic Airphoto Measurements
Air Photo Interpretation
Photogrammetry: "The art & science of making accurate measurements from aerial photographs"
Scale, object height & length, area & perimeter: single photographs
With successive, overlapping aerial photos, very precise measurements can be made


Successive photos on a flight line have ~60-65% overlap (endlap).
Sidelap between adjacent flight lines is between 30 and 35%.
The same portion of the ground may be imaged on 3 photographs along the same flight line.
Portions of the photo within the overlap zone may be viewed in stereo.

How do you determine the area of overlap?
Fiducial marks
Principal point
Conjugate principal point

Fiducial Marks: type and number vary amongst cameras, 4-8 marks. Draw a line between opposite marks to locate the Principal Point.

Principal Point: the geometric center at which the camera was aimed when the photo was acquired.


Scale: ratio of photo distance to ground distance
  Unitless ratio: 1:24,000
  Representative fraction: 1/24,000
  Equivalent units: 1" = 2,000 ft.

Which is the larger scale: 1:15,840 or 1:40,000?

Scale and area are inverse: a small scale shows a larger area, and a large scale shows a smaller area.


Some airphotos may have a scale printed on them: the nominal scale.
Photo scale is a function of the focal length of the lens used and the flying height above the ground.
If the ground isn't flat, there are various scales across the photo:
  Features that are closer to the camera (higher elevation) will be larger.
  Features that are farther from the camera (lower elevation) will be smaller.


Vertical Photo Scale over Flat Terrain

S = ab/AB = f/H'

where ab = photo distance
      AB = ground distance
      f  = focal length of lens used
      H' = flying height above terrain

Maximum & Minimum Photo Scale

Smax = f / (H - h1)
Smin = f / (H - h2)

where f  = focal length of lens used
      H  = flying height above datum (sea level)
      h1 = maximum elevation of terrain
      h2 = minimum elevation of terrain
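A worked example with assumed values: a 152.4 mm (6 in.) lens flown at 2,500 m above sea level over terrain lying between 200 m and 500 m elevation:

f  = 0.1524;           % focal length [m]
H  = 2500;             % flying height above datum [m]
h1 = 500; h2 = 200;    % maximum and minimum terrain elevation [m]
Smax = f/(H - h1)      % approx. 1/13,120 (over the highest ground)
Smin = f/(H - h2)      % approx. 1/15,090 (over the lowest ground)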


INTRODUCTION TO REMOTE SENSING

Dr Robert Sanderson New Mexico State University

Satellite picture of Las Cruces, NM


Table of Contents

Introduction
  Electromagnetic energy
  Reflection and absorption
Sensors and platforms
  Passive sensors
  Active sensors
  Orbits and swaths
  Sensor characteristics
  Spatial resolution
  Temporal resolution
  Spectral resolution
  Platforms
  Common satellites
Spectral signatures of natural and human-made materials
  Spectral reflectance signature
Geodesy, geodetic datum and map projections
  Flat earth vs curved earth
  Sea level and the composition of the earth's interior
  Types of geodetic datum
  Datum and GIS
  Map projection coordinates
  Universal Transverse Mercator
Global Positioning Systems
Geographic Information Systems
Pixel, images and colors
  Color composite images
  False color composite
  Natural color composite
  Image processing and analysis
Digitizing of images
  Image enhancement
  Image classification
  Image interpretation
  Vegetation indices
Current applications of remote sensing
  Forestry
  Greenhouse gases
  Vegetation health
  Biodiversity
  Change detection
  Geology
  Land degradation
  Oceanography
  Meteorology
On-line tutorials
Glossary


Introduction

Remote sensing can be broadly defined as the collection and interpretation of information about an object, area, or event without being in physical contact with the object. Aircraft and satellites are the common platforms for remote sensing of the earth and its natural resources. Aerial photography in the visible portion of the electromagnetic spectrum was the original form of remote sensing, but technological developments have enabled the acquisition of information at other wavelengths, including near infrared, thermal infrared and microwave. Collection of information over a large number of wavelength bands is referred to as multispectral or hyperspectral data. The development and deployment of manned and unmanned satellites has enhanced the collection of remotely sensed data and offers an inexpensive way to obtain information over large areas. The capacity of remote sensing to identify and monitor land surfaces and environmental conditions has expanded greatly over the last few years, and remotely sensed data will be an essential tool in natural resource management.

Electromagnetic energy

The electromagnetic (EM) spectrum is the continuous range of electromagnetic radiation,

extending from gamma rays (highest frequency & shortest wavelength) to radio waves (lowest

frequency & longest wavelength) and including visible light.

The EM spectrum can be divided into seven different regions: gamma rays, X-rays,

ultraviolet, visible light, infrared, microwaves and radio waves.


Remote sensing involves the measurement of energy in many parts of the electromagnetic (EM)

spectrum. The major regions of interest in satellite sensing are visible light, reflected and emitted

infrared, and the microwave regions. The measurement of this radiation takes place in what are

known as spectral bands. A spectral band is defined as a discrete interval of the EM spectrum.

For example, the wavelength range of 0.4 µm to 0.5 µm (µm = micrometer, or 10⁻⁶ m) is one

spectral band. Satellite sensors have been designed to measure responses within particular

spectral bands to enable the discrimination of the major Earth surface materials. Scientists will

choose a particular spectral band for data collection depending on what they wish to examine.

The design of satellite sensors is based on the absorption characteristics of Earth surface

materials across all the measurable parts in the EM spectrum.

Reflection and absorption

When radiation from the Sun reaches the surface of the Earth, some of the energy at specific

wavelengths is absorbed and the rest of the energy is reflected by the surface material. The only

two exceptions to this situation are if the surface of a body is a perfect reflector or a true black

body. The occurrence of these surfaces in the natural world is very rare. In the visible region of

the EM spectrum, the feature we describe as the color of the object is the visible light that is not

absorbed by that object. In the case of a green leaf, for example, the blue and red wavelengths

are absorbed by the leaf, while the green wavelength is reflected and detected by our eyes.

In remote sensing, a detector measures the electromagnetic (EM) radiation that is reflected

back from the Earth’s surface materials. These measurements can help to distinguish the type of

land cover. Soil, water and vegetation have clearly different patterns of reflectance and

absorption over different wavelengths.

The reflectance of radiation from one type of surface material, such as soil, varies over the range

of wavelengths in the EM spectrum. This is known as the spectral signature of the material. All

Earth surface features, including minerals, vegetation, dry soil, water, and snow, have unique

spectral reflectance signatures, as discussed later.


Sensors and platforms

A sensor is a device that measures and records electromagnetic energy. Sensors can be divided

into two groups. Passive sensors depend on an external source of energy, usually the sun. The

most common passive sensor is the photographic camera. Active sensors have their own source

of energy, an example would be a radar gun. These sensors send out a signal and measure the

amount reflected back. Active sensors are more controlled because they do not depend upon

varying illumination conditions.

[Images: examples of passive sensors (left) and active sensors (right)]

Orbits and swaths

The path followed by a satellite is referred to as its orbit. Satellites which view the same portion of the earth's surface at all times have Geostationary orbits. Weather and communication satellites commonly have these types of orbits. Many satellites are designed to follow a north-south orbit which, in conjunction with the earth's rotation (west to east), allows them to cover most of the earth's surface over a period of time. These are Near-polar orbits. Many of these satellite orbits are also Sun-synchronous, such that they cover each area of the world at a constant local time of day. A near-polar orbit also means that the satellite travels northward on one side of the earth and southward on the other half of its orbit; these are called Ascending and Descending passes. As a satellite revolves around the earth, the sensor sees a certain portion of the earth's surface. The area imaged is referred to as the Swath. The surface directly below the satellite is called the Nadir point. Steerable sensors on satellites can view an area (off nadir) before and after the orbit passes over a target.

Satellite sensor characteristics

The basic function of most satellite sensors is to collect information about the reflected radiation

along a pathway, also known as the field of view (FOV), as the satellite orbits the Earth. The

smallest area of ground that is sampled is called the instantaneous field of view (IFOV). The

IFOV is also described as the pixel size of the sensor. This sampling or measurement occurs in

one or many spectral bands of the EM spectrum.

The data collected by each satellite sensor can be described in terms of spatial, spectral and

temporal resolution.

Spatial resolution

The spatial resolution (also known as ground resolution) is the ground area imaged for the

instantaneous field of view (IFOV) of the sensing device. Spatial resolution may also be

described as the ground surface area that forms one pixel in the satellite image. The IFOV or

ground resolution of the Landsat Thematic Mapper (TM) sensor, for example, is 30 m. The

ground resolution of weather satellite sensors is often larger than a square kilometer. There are

satellites that collect data at less than one meter ground resolution but these are classified

military satellites or very expensive commercial systems.
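As a quick illustrative check (not from the original text) of what a 30 m pixel implies on the ground, the short sketch below counts how many Landsat TM pixels cover a field of known area; the field size is a made-up example:

```python
# Illustrative sketch: how many 30 m sensor pixels cover a 10-hectare field.
# The 30 m pixel size is the Landsat TM figure quoted above; the field size
# is a hypothetical example.

field_area_m2 = 10 * 10_000        # 10 ha expressed in square meters
pixel_size_m = 30.0                # Landsat TM ground resolution (IFOV)

pixel_area_m2 = pixel_size_m ** 2  # ground area represented by one pixel
pixels_covering_field = field_area_m2 / pixel_area_m2

print(f"One pixel covers {pixel_area_m2:.0f} m^2")                      # 900 m^2
print(f"A 10 ha field spans about {pixels_covering_field:.0f} pixels")  # ~111
```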

Temporal resolution

Temporal resolution is a measure of the repeat cycle or frequency with which a sensor revisits

the same part of the Earth’s surface. The frequency will vary from several times per day, for a

typical weather satellite, to 8 to 20 times a year for a moderate ground resolution satellite, such as

Landsat TM. The frequency characteristics will be determined by the design of the satellite

sensor and its orbit pattern.


Spectral resolution

The spectral resolution of a sensor system is the number and width of spectral bands in the

sensing device. The simplest form of spectral resolution is a sensor with one band only, which

senses visible light. An image from this sensor would be similar in appearance to a black and

white photograph from an aircraft. A sensor with three spectral bands in the visible region of the

EM spectrum would collect similar information to that of the human vision system. The Landsat

TM sensor has seven spectral bands located in the visible and near to mid infrared parts of the

spectrum.

A panchromatic image consists of only one band. It is usually displayed as a grey scale image,

i.e. the displayed brightness of a particular pixel is proportional to the pixel digital number which

is related to the intensity of solar radiation reflected by the targets in the pixel and detected by

the detector. Thus, a panchromatic image may be similarly interpreted as a black-and-white

aerial photograph of the area, though at a lower resolution.

Multispectral and hyperspectral images consist of several bands of data. For visual display,

each band of the image may be displayed one band at a time as a grey scale image, or in

combination of three bands at a time as a color composite image. Interpretation of a

multispectral color composite image will require the knowledge of the spectral reflectance

signature of the targets in the scene.


Platforms

Aerial photography has been used in agricultural and natural resource management for many

years. These photographs can be black and white, color, or color infrared. Depending on the

camera, lens, and flying height these images can have a variety of scales. Photographs can be

used to determine spatial arrangement of fields, irrigation ditches, roads, and other features

(Figure 3), or they can be used to view individual features within a field (Figure 4).

Infrared images can detect stress in crops before it is visible with the naked eye. Healthy

canopies reflect strongly in the infrared spectral range, whereas plants that are stressed will

reflect a dull color (Figure 5). These images can tell a farmer that there is a problem but do not tell him what is causing it. The stress might be from lack of water, insect

damage, improper nutrition or soil problems, such as compaction, salinity or inefficient drainage.

The farmer must assess the cause of the stress from other information. If the dull areas disappear

on subsequent pictures, the stress could have been lack of water that was eased with irrigation. If

the stress continues it could be a sign of insect infestation. The farmer still has to conduct in-field


assessment to identify the causes of the problem. The development of cameras that measure

reflectance in a wider range of wavelengths may make it possible to quantify plant stress more precisely. The use of these multispectral cameras is increasing, and they will become an important tool in precision agriculture.

Satellite remote sensing is becoming more readily available for use in precision agriculture. The

Landsat and the NOAA polar-orbiting satellites carry instruments that can be used to determine

crop types and conditions, and to measure crop acreage. The Advanced Very High Resolution

Radiometer (AVHRR) carried onboard NOAA polar-orbiting satellites measures reflectance

from the earth’s surface in the visible, near infrared, and thermal infrared portions of the

electromagnetic spectrum. Figure 6 shows a typical image obtained from this satellite.

This spectral sensitivity makes it suitable for measuring vegetative condition and because the

satellite passes overhead twice a day, it can be used to detect rapidly changing conditions.

Unfortunately, its use as a precision agriculture tool is limited because the spatial resolution of

the sensor is nominally 1.1km. A possible application of this scanner would be to use the thermal

infrared sensor to estimate daily maximum and minimum temperatures. These temperature

estimates could then be used to determine degree-days that will drive pest development models.

Degree-day models are an essential part of IPM programs and the enhanced spatial coverage

provided by satellites would allow for assessment of spatial variability in predicted events that is

not possible with data from sparsely spaced weather stations currently used for these models.

Remotely sensed data can also be used to determine irrigation scheduling and adequacy of

irrigation systems for uniformly wetting an entire field.
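To make the degree-day idea concrete, here is a minimal sketch of the standard averaging method; the base temperature and the daily maximum/minimum values are hypothetical, and in practice the temperatures would come from the satellite estimates described above:

```python
# Degree-day accumulation by the simple averaging method:
#   DD = max(0, (Tmax + Tmin) / 2 - Tbase), summed over days.
# Tbase and the daily temperatures below are hypothetical examples.

t_base = 10.0                                              # base temperature (deg C)
daily_max_min = [(28.0, 12.0), (31.0, 15.0), (25.0, 9.0)]  # (Tmax, Tmin) per day

degree_days = sum(max(0.0, (t_max + t_min) / 2 - t_base)
                  for t_max, t_min in daily_max_min)
print(f"Accumulated degree-days: {degree_days:.1f}")  # 10 + 13 + 7 = 30.0
```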

The sensors aboard the Landsat satellite measure reflected radiation in seven spectral bands from the visible through the thermal infrared. The sensors' high spatial resolution (approximately 30 m) makes them useful in precision agriculture. Figure 7 shows a typical image

obtained from this satellite. The spectral response and higher spatial resolution make it suitable

for assessing vegetative condition for individual fields but the overpass frequency is only once

every 16 days. The less frequent overpass makes it difficult to use these data for assessing

rapidly changing events such as insect outbreaks or water stress. New satellites with enhanced


capabilities are planned and remotely sensed data will become more widely used in management

support systems.

Figure 3. A black and white aerial photograph showing fields, roads, and irrigation ditches.

Figure 4. A high resolution aerial photograph showing individual trees within an orchard.

[Figure annotations: Pecan Orchard, Roads, Fields, River]


Figure 5. A color infrared photograph of a pecan orchard. Darker areas show stressed plants.

Figure 6. Advanced Very High Resolution Radiometer (AVHRR) image of the southwest United States. The image is centered on Las Cruces, New Mexico.


Figure 7. A Landsat satellite image of farm land south of Las Cruces, New Mexico.

Common Satellites

Satellite      Spectral bands    Spatial resolution             Repeat cycle
GOES           5                 1-4 km                         Geostationary
NOAA AVHRR     5                 1.1 km                         1 day
Landsat TM     7                 30 m                           16 days
MODIS          multiple          250-1000 m (band dependent)    1 day
IKONOS         4                 4 m                            5 days


Spectral signatures of natural and human-made materials

Remote sensing makes use of visible, near infrared and short-wave infrared sensors to form

images of the earth's surface by detecting the solar radiation reflected from targets on the ground.

Different materials reflect and absorb differently at different wavelengths. Thus, the targets can

be differentiated by their spectral reflectance signatures in the remotely sensed images.

Spectral Reflectance Signature

When solar radiation hits a target surface, it may be transmitted, absorbed or reflected.

Different materials reflect and absorb differently at different wavelengths. The reflectance

spectrum of a material is a plot of the fraction of radiation reflected as a function of the incident

wavelength and serves as a unique signature for the material. In principle, a material can be

identified from its spectral reflectance signature if the sensing system has sufficient spectral

resolution to distinguish its spectrum from those of other materials. This premise provides the

basis for multispectral remote sensing.
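A minimal sketch of that premise, using hypothetical mean reflectances in four bands, classifies a measured spectrum by finding the nearest reference signature; real signatures would come from field spectrometry or training pixels:

```python
import numpy as np

# Hypothetical mean reflectances in four bands (blue, green, red, NIR).
signatures = {
    "water":      np.array([0.08, 0.06, 0.04, 0.02]),
    "bare soil":  np.array([0.10, 0.15, 0.20, 0.30]),
    "vegetation": np.array([0.05, 0.10, 0.06, 0.50]),
}

def classify(pixel_spectrum):
    """Return the material whose reference signature is nearest (Euclidean)."""
    return min(signatures, key=lambda m: np.linalg.norm(signatures[m] - pixel_spectrum))

measured = np.array([0.06, 0.11, 0.07, 0.45])  # high NIR, low visible
print(classify(measured))                      # -> "vegetation"
```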


The following graph shows the typical reflectance spectra of water, bare soil and two types of vegetation.

The reflectance of clear water is generally low. However, the reflectance is maximum at the

blue end of the spectrum and decreases as wavelength increases. Hence, water appears dark-

bluish to the eye. Turbid water contains suspended sediment that increases the

reflectance in the red end of the spectrum and would be brownish in appearance. The reflectance

of bare soil generally depends on its composition. In the example shown, the reflectance

increases monotonically with increasing wavelength. Hence, it should appear yellowish-red to

the eye.

Vegetation has a unique spectral signature that enables it to be distinguished readily from other

types of land cover in an optical/near-infrared image. The reflectance is low in both the blue and

red regions of the spectrum, due to absorption by chlorophyll for photosynthesis. It has a peak at

the green region. In the near infrared (NIR) region, the reflectance is much higher than that in

the visible band due to the cellular structure in the leaves. Hence, vegetation can be identified by

the high NIR but generally low visible reflectance. This property has been used in early

wartime reconnaissance missions for "camouflage detection".


The shape of the reflectance spectrum can be used for identification of vegetation type. For

example, the reflectance spectra of dry grass and green grass in the previous figures can be

distinguished although they exhibit the general characteristics of high NIR but low visible

reflectance. Dry grass has higher reflectance in the visible region but lower reflectance in the

NIR region. For the same vegetation type, the reflectance spectrum also depends on other factors

such as the leaf moisture content and health of the plants. These properties enable vegetation

condition to be monitored using remotely sensed images.

Geodesy, Geodetic Datums and Map Projections

Geodesy is the branch of science concerned with the determination of the size and shape of the

Earth. Geodesy involves the processing of survey measurements on the curved surface of the

Earth, as well as the analysis of gravity measurements. Knowing the exact location of a pixel on

the Earth’s surface (its spatial location) is an essential component of remote sensing. It requires a

detailed knowledge of the size and the shape of the Earth.


The Earth is not a simple sphere. Topographic features such as mountain ranges and deep oceans

disturb the surface of the Earth. The ideal reference model for the Earth’s shape is one that can

represent these irregularities and identify the position of features through a co-ordinate system. It

should also be easy to use.

Flat Earth vs curved Earth

The “flat Earth” model is not appropriate when mapping larger areas. It does not take into

account the curvature of the Earth.

A “curved Earth” model more closely represents the shape of the Earth. A spheroid best

represents the shape of the Earth because it is significantly wider at the equator than around the

poles (unlike a simple sphere). A spheroid (also known as an ellipsoid) represents a meridian cross-section of the Earth as an ellipse rather than a circle. Surveying and navigation calculations can be

performed over a large area when a spheroid is used as a curved Earth reference model.

Sea level and the composition of the Earth’s interior

The surface of the sea is not uniform. The Earth’s gravitational field shapes it. The rocks that

make up the Earth’s interior vary in density and distribution, causing anomalies in the

gravitational field. These, in turn, cause irregularities in the sea surface. A mathematical model

of the sea surface can be formulated; however, it is very complex and not useful for finding

geographic positions on a spheroid reference model.

Types of geodetic datum

Based on these ideas, models can be established from which spatial position can be calculated.

These models are known as geodetic datums and are normally classified into two types: geocentric datums and local geodetic datums.

A geocentric datum is one which best approximates the size and shape of the Earth as a whole.

The center of its spheroid coincides with the Earth’s center of mass. A geocentric datum does not

seek to be a good approximation to any particular part of the Earth.


A local geodetic datum is used to approximate the size and shape of the Earth’s sea surface in a

smaller area.

Datums and GIS

Having a standard accurate datum set becomes increasingly important as multiple layers of

information about the same area are collected and analyzed. The layers are developed into

geographic information systems (GIS), which enable the relationships between layers of data to

be examined. In order to function effectively, a GIS must possess one essential attribute. It must

have the ability to geographically relate data within and across layers. For example, if a dataset

about vegetation is being examined against the data sets for topography and soils, the accurate

spatial compatibility of these datasets is critical.

Map projection coordinates

A map projection is a systematic representation of all or part of the Earth on a two-

dimensional surface, such as a flat sheet of paper. During this process some distortion of

distances, directions, scale, and area is inevitable. There are several different types of map

projections. No projection is free from all distortions, but each minimizes distortions in some of

the above properties, at the expense of leaving errors in others. For example, the commonly used

Mercator projection represents direction accurately, but distorts distance and area, especially in regions farthest from the equator. Greenland, for example, appears to be much larger than it really is. The Mercator projection is useful for navigation charts.

Universal Transverse Mercator (UTM)

Universal Transverse Mercator (UTM) is a global spatial system based on the Transverse

Mercator projection. UTM divides the Earth into 60 equal zones, each being 6 degrees wide.

Each zone is bounded by lines of longitude extending from the North Pole to the South Pole.

Imagine an orange consisting of 60 segments. Each segment would be equivalent to a UTM

zone.


A rectangular grid coordinate system is used in most map projections. These coordinates are

referred to as Eastings and Northings, being distances East and North of an origin. They are

usually expressed in metres.

Under the UTM system, each East and North coordinate pair could refer to one of sixty points on

Earth — one point in each of the sixty zones. Because of this, the zone number needs to be

quoted to ensure the correct point on Earth is being identified.
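Because each zone spans exactly 6 degrees of longitude, the zone number follows directly from a point's longitude. A small sketch (standard numbering, with zone 1 starting at 180° W):

```python
def utm_zone(longitude_deg):
    """UTM zone number (1-60) for a longitude given in degrees (-180 to 180).
    Zone 1 starts at 180 degrees W; each zone spans 6 degrees of longitude."""
    return int((longitude_deg + 180) // 6) + 1

print(utm_zone(-106.8))  # Las Cruces, New Mexico -> zone 13
print(utm_zone(90.4))    # Dhaka, Bangladesh -> zone 46
```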

Global Positioning System

The Global Positioning System (GPS) is a satellite based system that gives real time three

dimensional (3D) latitude, longitude, and height information at sub-meter accuracy. The system

was developed by the United States military in the late 1970s to give troops accurate position

and navigational information. A GPS receiver calculates its position on earth from radio signals

broadcast by satellites orbiting the earth. There are currently twenty-four GPS satellites in this

system. GPS equipment is capable of measuring a position to within centimeters but the accuracy

suffers due to errors in the satellite signals. Errors in the signal can be caused by atmospheric

interference, proximity of mountains, trees, or tall buildings. The government can also introduce

errors in the signal for security purposes. This intentional degradation of the satellite signals is

known as selective availability. The accuracy of the position information can be improved by

using differential GPS. In differential GPS, one receiver is mounted in a stationary position,

usually at the farm office, while the other is on the tractor or harvesting equipment. The

stationary receiver calculates the error and transmits the necessary correction to the mobile

receiver. GPS equipment suitable for precision agriculture costs several thousand dollars. Less expensive equipment is becoming available, but its accuracy and capability are reduced.
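The differential correction described above amounts to simple vector arithmetic on positions. The sketch below uses hypothetical easting/northing values; operational DGPS actually corrects the raw satellite range measurements rather than final coordinates, so this shows only the concept:

```python
# Differential GPS concept with hypothetical numbers. The base station knows
# its true position, so the offset between its measured and true positions
# estimates the shared signal error, which is removed from the rover's fix.

base_true     = (331_250.0, 3_575_100.0)   # surveyed easting/northing (m)
base_measured = (331_252.1, 3_575_097.4)   # base receiver's GPS fix (m)

error = (base_measured[0] - base_true[0], base_measured[1] - base_true[1])

rover_measured  = (331_900.6, 3_575_410.2)
rover_corrected = (rover_measured[0] - error[0], rover_measured[1] - error[1])
print(rover_corrected)  # approximately (331898.5, 3575412.8)
```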


Examples of GPS equipment

Geographic Information System (GIS)

A Geographic Information System (GIS) is a computer-assisted system for handling spatial

information. GIS software can be considered as a collection of software programs to acquire,

store, analyze, and display information. The input data can be maps, charts, spreadsheets, or

pictures. The GIS software can analyze these data using image processing and statistical

procedures. Data can be grouped together and displayed as overlays. Overlays could be

information such as soil type, topography, crop type, crop yield, pest levels, irrigation, and

management information as shown.


The figure below shows a categorized aerial photograph overlaid with soil information using GIS

software.

Relationships can be examined and new data sets produced by combining a number of overlays.

These data sets can be combined with models and decision support systems to construct a

powerful management tool. For example, we could assess how far a field was from roads or non-

agricultural crops. This information could be important in pest infestation or in planning

chemical application. We could also examine the relationship of crop yield to soil type or other factors, as shown in the following figure. A number of GIS software packages are now commercially

available. Spatial data for the GIS is often collected using GPS equipment but another source of

spatial information is aerial and satellite imagery.


Pixels, Images and colors

Color Composite Images

In displaying a color composite image, three primary colors (red, green and blue) are used. When

these three colors are combined in various proportions, they produce different colors in the

visible spectrum. Associating each spectral band (not necessarily a visible band) to a separate

primary color results in a color composite image.

Many colors can be formed by combining the three primary colors (Red, Green, Blue) in

various proportions.

False Color Composite

The display color assignment for any band of a multispectral image can be done in an entirely

arbitrary manner. In this case, the color of a target in the displayed image does not have any

resemblance to its actual color. The resulting product is known as a false color composite

image. There are many possible schemes of producing false color composite images. However,

some schemes may be more suitable than others for detecting certain objects in the image.


Natural Color Composite

When displaying a natural color composite image, the spectral bands (some of which may not be

in the visible region) are combined in such a way that the appearance of the displayed image

resembles a visible color photograph, i.e. vegetation in green, water in blue, soil in brown or

grey, etc. Many people refer to this composite as a "true color" composite. However, this term

may be misleading since in many instances the colors are only simulated to look similar to the

"true" colors of the targets.

For example, bands 3 (red), 2 (green) and 1 (blue) of a Landsat TM image can

be assigned respectively to the R, G, and B colors for display. In this way, the color of the

resulting color composite image resembles closely what the human eyes would observe.
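As a sketch of the mechanics, the snippet below stacks three single-band arrays into an RGB composite; the tiny arrays stand in for real band data, and assigning non-visible bands to the display primaries would give a false color composite instead:

```python
import numpy as np

# Toy 2x2 single-band images standing in for red, green and blue band data.
band_red   = np.array([[0.80, 0.20], [0.50, 0.10]])
band_green = np.array([[0.40, 0.70], [0.30, 0.20]])
band_blue  = np.array([[0.10, 0.60], [0.20, 0.90]])

# Stack the bands into a (rows, cols, 3) array and scale to 8-bit display values.
composite = np.dstack([band_red, band_green, band_blue])
composite_8bit = (composite * 255).astype(np.uint8)
print(composite_8bit.shape)  # (2, 2, 3)
```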


Image processing and analysis

Many image processing and analysis techniques have been developed to aid the interpretation of

remote sensing images and to extract as much information as possible from the images. The

choice of specific techniques or algorithms to use depends on the goals of each individual

project. The key steps in processing remotely sensed data are Digitizing of Images, Image

Calibration, Geo-Registration, and Spectral Analysis. Prior to data analysis, initial processing

on the raw data is usually carried out to correct for any distortion due to the characteristics of the

imaging system and imaging conditions. Depending on the user's requirement, some standard

correction procedures may be carried out by the ground station operators before the data is

delivered to the end-user. These procedures include radiometric correction to correct for

uneven sensor response over the whole image and geometric correction to correct for geometric

distortion due to Earth's rotation and other imaging conditions (such as oblique viewing). The

image may also be transformed to conform to a specific map projection system. Furthermore, if

accurate geographical location of an area on the image needs to be known, ground control

points (GCPs) are used to register the image to a precise map (geo-referencing).

Digitizing of Images


Image digitization is the conversion of an analogue image, such as a photograph, into a series of

grid cells. The value of each cell is related to the brightness, color or reflectance at that point. A

scanner is a simple way to digitize images. Many modern sensors now produce raw data in

digital format.
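Digitization can be sketched as sampling plus quantization: continuous brightness values become integer digital numbers. A minimal example with made-up brightness samples, quantized to the 0-255 range mentioned elsewhere in this text:

```python
import numpy as np

# Continuous brightness samples (fractions of full scale) from an analogue image.
analog_brightness = np.array([0.00, 0.13, 0.52, 0.97])

# Quantize to 8-bit digital numbers (0-255).
digital_numbers = np.round(analog_brightness * 255).astype(np.uint8)
print(digital_numbers)  # [  0  33 133 247]
```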

Image Enhancement

In order to aid visual interpretation, visual appearance of the objects in the image can be

improved by image enhancement techniques such as grey level stretching to improve the

contrast and spatial filtering for enhancing the edges. An example of an enhancement procedure

is shown here.
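Since the referenced example figure is not reproduced here, a minimal sketch of grey level (linear contrast) stretching on an 8-bit array may help:

```python
import numpy as np

def linear_stretch(image, out_min=0, out_max=255):
    """Linearly rescale pixel values so the darkest input maps to out_min and
    the brightest to out_max, improving the displayed contrast."""
    in_min, in_max = image.min(), image.max()
    stretched = (image - in_min) / (in_max - in_min) * (out_max - out_min) + out_min
    return np.round(stretched).astype(np.uint8)

low_contrast = np.array([[90, 100], [110, 120]], dtype=np.uint8)
print(linear_stretch(low_contrast))
# [[  0  85]
#  [170 255]]
```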

Image Classification

Different landcover types in an image can be discriminated by image classification algorithms that use spectral features, i.e. the brightness and "color" information contained in each

pixel. The classification procedures can be "supervised" or "unsupervised".

In supervised classification, the spectral features of some areas of known landcover types are

extracted from the image. These areas are known as the "training areas". Every pixel in the

whole image is then classified as belonging to one of the classes depending on how close its

spectral features are to the spectral features of the training areas.

In unsupervised classification, the computer program automatically groups the pixels in the

image into separate clusters, depending on their spectral features. Each cluster will then be

assigned a landcover type by the analyst.

Each class of landcover is referred to as a "theme" and the product of classification is known as

a "thematic map".

The information derived from remote sensing images is often combined with other auxiliary

data to form the basis for a Geographic Information System (GIS). A GIS is a database of


different layers, where each layer contains information about a specific aspect of the same area

which is used for analysis by the resource scientists.

Image Interpretation

Vegetation Indices

Different bands of a multispectral image may be combined to accentuate the vegetated areas.

One such combination is the ratio of the near-infrared band to the red band. This ratio is known

as the Ratio Vegetation Index (RVI):

RVI = NIR/Red

Since vegetation has high NIR reflectance but low red reflectance, vegetated areas will have

higher RVI values compared to non-vegetated areas. Another commonly used vegetation index is

the Normalised Difference Vegetation Index (NDVI) computed by

NDVI = (NIR - Red)/(NIR + Red)
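Both indices are simple band arithmetic. A sketch with made-up reflectance arrays (the small constant guards against division by zero over water or shadow):

```python
import numpy as np

# Made-up NIR and red reflectances for a 2x2 image.
nir = np.array([[0.50, 0.40], [0.10, 0.60]])
red = np.array([[0.08, 0.10], [0.09, 0.07]])

eps  = 1e-10                            # guard against division by zero
rvi  = nir / (red + eps)                # Ratio Vegetation Index
ndvi = (nir - red) / (nir + red + eps)  # Normalised Difference Vegetation Index

print(np.round(ndvi, 2))
# [[0.72 0.6 ]
#  [0.05 0.79]]
```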

Table 1 shows equations and references for several indices that can be used in vegetation

monitoring.

Table 1.

PARAMETER                                       EQUATION              REFERENCE
Normalized Difference Vegetation Index (NDVI)   (NIR-Red)/(NIR+Red)   Rouse et al. (1974)
Water Band Index (WBI)                          900/970 nm            Peñuelas et al. (1997)
Water Moisture Index (WMI)                      1600/820 nm           Hunt and Rock (1989)
Photosynthesis Index                            (531-570)/(531+570)   Gamon et al. (1990)
Nitrogen Index (RN)                             (550-600)/(800-900)   Blackmer et al. (1996)
Chlorophyll-based Difference Index (CI)         (850-710)/(850-680)   Datt (1999)


Example of image processing of aerial infrared photographs to produce a vegetation map for a

chile field.

Vegetation maps are produced by generating a normalized difference vegetation index from an infrared image and then performing a vegetation classification. Color infrared photographs collect information in the green, red and near infrared reflectance spectrum. Green vegetation reflects very strongly in the near infrared range, and therefore infrared images can detect stress in many crops before it is visible to the naked eye. The Normalized Difference Vegetation Index (NDVI) is used to separate green vegetation from the background soil brightness. It is the difference between the near infrared and red reflectance, normalized by the sum of these bands: NDVI = (NIR-Red)/(NIR+Red). These NDVI maps can then be classified into vegetation categories and displayed as vegetation maps, with different colors representing different levels of vegetation. In the map on the left, browns and yellows represent bare soil and shades of green represent vegetation, with darker greens indicating stronger vegetation.
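The classification step described above can be approximated by simple thresholding of the NDVI values; the class boundaries below are illustrative only, not the ones used for the map in the figure:

```python
import numpy as np

ndvi = np.array([[0.05, 0.35], [0.62, 0.78]])  # NDVI values from a previous step

# Illustrative boundaries separating bare soil and three vegetation levels.
classes = np.digitize(ndvi, bins=[0.2, 0.5, 0.7])
labels = np.array(["bare soil", "sparse", "moderate", "dense"])
print(labels[classes])
# [['bare soil' 'sparse']
#  ['moderate' 'dense']]
```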


Current applications of remote sensing

Forestry applications

Satellite imagery is used to identify and map:

• The species of native and exotic forest trees.

• The effects of major diseases or adverse change in environmental conditions.

• The geographic extent of forests.

This application of satellite imagery has led to the extensive use of imagery by organizations that

have an interest in a range of environmental management responsibilities at a state and national

level.

Greenhouse gases — sinks and sources

Forests are often referred to as carbon sinks. This description is used because during

photosynthesis, carbon dioxide, the major greenhouse gas, is taken from the atmosphere and

converted into plant matter and oxygen.

Climate change has serious implications for Australia and overseas countries alike. Sustainable

land management is essential for effective greenhouse gas management; hence, it is important to

acquire data on land cover in Australia. Remotely sensed land cover changes are used in

calculations of our national emission levels, and data collected on a national scale will enable

governments to develop responses to land clearing.

Vegetation health

Vegetation can become stressed or less healthy because of a change in a range of environmental

factors. These factors include lack of water, concentration of toxic elements/herbicides and

infestation by insects/viruses. The spectral reflectance of vegetation changes according to the

structure and health of a plant. In particular, the influence of chlorophyll in the leaf pigments

controls the response of vegetation to radiation in the visible wavelength. As a plant becomes


diseased, the cell structure of a plant alters and the spectral signature of a plant or plant

community will change.

The maximum reflection of electromagnetic radiation from vegetation occurs in the near infrared

wavelengths. Vegetation has characteristically high near-infrared reflectance and low red

reflectance. Airborne scanners using narrow spectral bands between 0.4 µm and 0.9 µm can

indicate deteriorating plant health before a change in condition is visible in the plant itself.

Biodiversity

Vegetation type and extent derived from satellite imagery can be combined with biological and

topographic information to provide information about biodiversity. Typically, this analysis is

done with a geographic information system.

Change detection

Satellite imagery is not always able to provide exact details about the species or age of

vegetation. However, the imagery provides a very good means of measuring significant change

in vegetation cover, whether it is through clearing, wildfire damage or environmental stress. The

most common form of environmental stress is water deficiency.

Geology

Remote sensing is useful for providing information relevant to the geosciences. For example,

remote sensing data are used in:

• Mineral and petroleum exploration,

• Mapping geomorphology, and

• Monitoring volcanoes.

Land degradation

Imagery can be used to map areas of poor or no vegetation cover. A range of factors, including

saline or sodic soils, and overgrazing, can cause degraded landscapes.


Oceanography

Remote sensing is applied to oceanography studies. Remote sensing is used, for example, to

measure sea surface temperature and to monitor marine habitats.

Meteorology

Remote sensing is an effective method for mapping cloud type and extent, and cloud top

temperature.

In many of the applications identified above remotely sensed data are used with a range of other

Earth science data to provide information about the natural environment. This analysis of Earth

science data from a range of sources is usually done in a geographic information system (GIS).

On-line Tutorials

http://www.ccrs.nrcan.gc.ca

http://rst.gsfc.nasa.gov


Introduction to Remote Sensing - Glossary of Terms

Absorption, reflection and transmission - Absorption is the property of an Earth substance or atmospheric gas which absorbs the Sun's radiation. Reflection is when certain materials or gases contain properties which reflect the Sun's radiation, and transmission is the ability of a substance or gas to pass the Sun's radiation through it. Most materials and gases possess some of each of these qualities. A healthy green leaf, for example, will absorb the blue and red wavelengths of the Sun's radiation, while reflecting the green wavelengths which are detected by our eyes.

Active remote sensing - Remote sensing methods that provide their own source of electromagnetic radiation to illuminate the target. Radar is an example of active remote sensing; a flash camera is another. (See Passive remote sensing.)

Albedo - The percentage of incoming radiation that is reflected by a natural surface such as the ground, ice, snow, water, clouds, or particles in the atmosphere.

Analog display - A form of data display in which values are shown in graphic form, such as curves. This differs from digital displays, in which values are shown as arrays of numbers.

Anomaly - An area on an image that differs from the surrounding, normal area. For example, a concentration of vegetation within a desert scene constitutes an anomaly.

Azimuth - Geographic orientation of a line given as an angle measured in degrees clockwise from North.

Azimuth resolution - In radar images, the spatial resolution in the azimuth direction.

Band - A wavelength interval in the electromagnetic spectrum. For example, in Landsat images the bands designate specific wavelength intervals at which images are acquired.

Biome - A community of living organisms in a single major ecological region.

Carbon cycle - The natural cycle of carbon dioxide to carbohydrates by photosynthesis and its return to the atmosphere by animal metabolism and decomposition.

Chlorosis - The yellowing of plant leaves resulting from an imbalance in the iron metabolism caused by excess concentrations of copper, zinc, manganese, or other elements in the plants. Chlorosis can be detected by infrared sensing.

Climatology - The science and study of climates and their phenomena.


Crime mapping - The technology used by the full range of law enforcement agencies to replace the old pins-in-the-map-on-the-wall technique of following crime in an area. Crime mapping utilizes remote sensing and computer technology to create multi-dimensional computer displays of the full range of criminal activity, from car theft, vandalism, domestic violence, child abuse, murders and drugs to prostitution and pickpocketing. The technology of crime mapping allows law enforcement personnel to archive, follow and even predict criminal activity.

Cryosphere - The part of the Earth's surface that is perennially frozen; the zone of the Earth where ice and frozen ground are formed.

Digital display - A form of data display in which values are shown as an array of numbers.

Digital image - An image where the property being measured has been converted from a continuous range of analogue values to a range expressed by a finite number of integers, usually recorded as binary codes from 0 to 255, or as one byte.

Digital image processing - Computer manipulation of the digital-number values of an image.

Digitization - The process of converting an analog display into a digital display.

Diurnal - Daily.

Doppler principle - Describes the change in observed frequency that electromagnetic, or other, waves undergo as a result of the movement of the source of waves relative to the observer.

Ecosystem - An ecological system composed of interacting organisms and their environments; the result of interaction between biological, geochemical and geophysical systems.

Electromagnetic radiation - Energy propagated in the form of an advancing interaction between electric and magnetic fields. All electromagnetic radiation travels at the speed of light, and its measurement takes place in what are known as spectral bands.

Electromagnetic spectrum (EM) - The continuous sequence or range of electromagnetic energy arranged according to wavelength or frequency. The EM spectrum extends from gamma rays (highest frequency and shortest wavelength) to radio waves (lowest frequency and longest wavelength), and includes light rays visible to the human eye. The regions of the EM spectrum include gamma rays, X-rays, ultraviolet, visible light, infrared, microwaves, and radio waves.

Electromagnetic wavelength - The distance or time between the alternating cycles of electromagnetic energy.

Ephemeris - A table of predicted satellite orbital locations for specific time intervals. The ephemeris data help to characterize the conditions under which remotely sensed data are collected and are commonly used to correct the sensor data prior to analysis.


False color composite - The display colors for any band of a multispectral image can be assigned in an entirely arbitrary manner, resulting in the color of a target in the displayed image not resembling its "true color". This is opposed to the natural color composite, or "true color", where the spectral bands are combined or simulated to resemble the true colors of the target as seen by the human eye. (See True color composite.)

Field of View (FOV) - The pathway along which most satellite sensors collect reflected radiation.

Fluorimetry - The non-destructive analytical technique used to determine concentrations of specific chemical elements. The procedure is based on the artificially induced absorption, atomic excitation, and emission of electromagnetic radiation of characteristic wavelengths.

Geodesy - The branch of science concerned with the determination of the size and shape of the Earth, its surface, and the analysis of gravity measurements.

Geodetic - Knowing or determining the exact location of a pixel (its spatial location) on the Earth's surface.

Geodetic accuracy - The accuracy with which geographic position and elevation of features on the Earth's surface are mapped. This accuracy incorporates information in which the size and shape of the Earth have been taken into account.

Geographic Information System (GIS) - The computer-assisted system developed for handling spatial information. GIS software is considered as a collection of software programs which acquire, store, analyze, and display geospatial information.

Geometric correction - Correction for geometric distortion due to the Earth's rotation and other imaging conditions, such as oblique viewing.

Geostationary - Refers to satellites traveling at the angular velocity at which the Earth rotates; as a result, the satellites remain above the same point on Earth at all times.

Geostationary orbit - An orbit at approximately 36,000 km altitude in the direction of the Earth's rotation, which matches speed so that a satellite remains over a fixed point on the Earth's surface.

Geothermal - Refers to heat from sources within the Earth.

Global Positioning System (GPS) - The satellite-based location system developed by the United States military in the 1970s to give troops accurate position and navigational information. GPS gives real time three-dimensional latitude, longitude, and height information at sub-meter accuracy. Currently there are 24 GPS satellites in this system.


GMT - Greenwich mean time. The international 24-hour system used as the prime basis of time throughout the world and to designate the time at which Landsat images are acquired.

GOES - Geostationary Operational Environmental Satellite.

Ground resolution (spatial resolution) - The ground area imaged for the instantaneous field of view of the sensing device. May also be described as the ground surface area that forms one pixel in the satellite image.

Ground swath - The width of the strip of the Earth's surface that is imaged by a scanner system.

Hue - In the Intensity, Hue, and Saturation (IHS) system, hue represents the dominant wavelength of a color.

Hydrology - The scientific study of the waters of the Earth, especially with relation to the effects of precipitation and evaporation upon the occurrence and character of ground water.

Hyperspectral data - Data gathered from hyperspectral systems, systems capable of taking measurements in many spectral bands, as in the case of the hyperspectral satellite ARIES which takes measurements in over 126 spectral bands.

Hyperspectral image - An image consisting of many more spectral bands of data than multispectral systems.

Instantaneous Field of View (IFOV) - The smallest area of ground that is sampled, also described as the pixel size of the sensor.

Image enhancement - In order to aid visual interpretation, the visual appearance of the objects in the image can be improved by techniques such as grey level stretching to improve the contrast and spatial filtering for enhancing the edges.

LIDAR - Light detection and ranging, which uses lasers to stimulate fluorescence in various compounds and to measure distances to reflecting surfaces.

Luminance - The quantitative measure of the intensity of light from a source.

Mode - The value that occurs most frequently within the data sample being taken. In a histogram, it is the data value at which the peak of the distribution curve occurs.

Mosaic - A composite image or photograph made by piecing together individual images or photographs covering adjacent areas.

Multispectral data - Data collected from several spectral bands.


Multispectral image - An image consisting of several bands of data, which requires knowledge of the spectral reflectance signature to interpret.

Multispectral scanner - A scanner system that simultaneously acquires images of the same scene at different wavelengths.

Nadir - The point on the Earth's surface directly below the center of the remote sensing platform.

Orbit - The path of a satellite around a body such as the Earth, under the influence of gravity.

Panchromatic image - An image consisting of only one band, usually displayed as a grey scale image, similar in appearance to a low resolution black and white aerial photograph.

Parse - To break down a sequence of numbers or letters into meaningful parts based on their location in the character sequence. For example, the first three numbers in a phone number are the area code that identifies the location of the phone number.

Passive remote sensing - Remote sensing of energy naturally reflected or radiated from the target. (See Active remote sensing.)

Phenology or phenological - Refers to the rate and timing of natural events, such as the growth cycle of vegetation over a growing period. Land cover and vegetation types may often be distinguished from each other by their characteristic spectral/temporal signature, as illustrated by a graph plotting values against time through a growing season for several agricultural categories. The shape and position of each curve defines that category's phenological characteristic.

Pixel - An abbreviation of picture element. The minimum size area on the ground detectable by a particular remote sensing device. The size varies depending on the type of sensor.

Planimetric - Two-dimensional; the measurement of plane surfaces. A map representing only horizontal features, i.e. the parts of a map that represent everything except relief.

Platforms - The vehicles on which remote sensors are mounted, usually satellites and aircraft. Unmanned Air Vehicles (UAVs) are used more and more frequently because they are cheaper than a full-sized aircraft and a pilot. In addition, remote sensors can be mounted on structures such as bridges (to sense water flow or levels, or traffic patterns over bridges) and buildings (to monitor air quality and pollution in urban areas). Police radar guns and simple cameras are examples of portable remote sensing devices.

POES - Polar orbiting environmental satellite.


Primary colors - A set of three colors that in various combinations will produce the full range of colors in the visible spectrum. There are two sets of primary colors, additive and subtractive.

Quantum - The elementary quantity of electromagnetic (EM) energy that is transmitted by a particular wavelength. According to the quantum theory, EM radiation is emitted, transmitted, and absorbed as numbers of quanta, the energy of each quantum being a simple function of the frequency of the radiation.

Radar - The acronym for Radio Detection and Ranging. Radar is an active form of remote sensing that operates in the microwave and radio wavelength regions.

Radiation - The propagation of energy in the form of electromagnetic waves.

Radiometric correction - Correction for uneven sensor response over the entire image.

Reflection, absorption and transmission - See Absorption, reflection and transmission.

Remote link - The direct connection to a computer-based system located at another data center. Links are established via wide area networks and are initiated by the GLIS software. Once connection is established, the control of the user's session is passed to that system.

Remote sensing - The art and science of detecting, measuring and analyzing a substance or object from a distance.

Scanner - An imaging system in which the Instantaneous Field of View (IFOV) of one or more detectors is swept across the terrain.

Soil classifications - The systematic arrangement of soils into groups or categories based on their characteristics. Broad groupings are made on the basis of general characteristics, and subdivisions on the premise of more detailed differences in specific properties.

Spatial resolution (ground resolution) - The ground area imaged for the instantaneous field of view of the sensing device. May also be described as the ground surface area that forms one pixel in the satellite image. Spatial resolution can be from 1 meter to several kilometers, depending on the precision and scope of the sensing device.


Spectral bands - The discrete intervals of the electromagnetic (EM) spectrum in which radiation is measured.

Spectral reflectance - Reflectance of electromagnetic energy at specified wavelength intervals.

Spectral resolution - The number and width of the spectral bands in a sensing device. The simplest form is a sensor with one band only, which senses visible light. An image from this sensor would be similar in appearance to a black and white photograph from an aircraft.

Spectral signature of a material - The reflectance of radiation from a certain type of the Earth's surface material or other materials in the atmosphere. Minerals, vegetation, soil, water and snow have unique spectral reflectance signatures, as do clouds, fog and smoke.

Swath - A swath of data is all data received from a spacecraft on a single pass, from acquisition of signal (AOS) to loss of signal (LOS).

Synthetic Aperture Radar (SAR) - The most common active remote sensing system, which emits radar pulses from under an aircraft or satellite onto a given area. The reflected or back-scattered radar signals form an image.

Target - The specific object of interest in a remote sensing investigation.

Temporal resolution - The measure of the repeat cycle or frequency with which a sensor revisits the same part of the Earth's surface. This will vary from several times per day, for a typical weather satellite, to 20 times per year for a moderate ground resolution satellite such as Landsat TM (Thematic Mapper).

Thematic mapping - Each type of land cover is classified into a "theme" and produced on a thematic map.

Transmission, absorption, and reflection - See Absorption, reflection and transmission.

Transpiration - The expulsion of water vapor and oxygen by vegetation.


True color composite - When displaying a natural color composite image, the spectral bands (some of which may not be visible) are combined in such a way that the appearance of the displayed image resembles a visible color photograph, i.e. vegetation in green, water in blue, soil in brown or grey, etc. This may be misleading as, in many instances, the colors are only simulated to look similar to the "true" colors of the targets. (See False color composite.)

Universal Transverse Mercator (UTM) - The global spatial system based on the Transverse Mercator projection. This system divides the Earth into 60 equal zones, each 6 degrees wide, bounded by lines of longitude extending from the North Pole to the South Pole; each segment is equivalent to a UTM zone.

UV - The ultraviolet region of the electromagnetic spectrum, ranging in wavelength from 0.01 to 0.4 µm.

Virtual fencing - The system by which free-ranging animals are controlled, by remote sensing signals and receptors (commonly in the form of ear tags), in their range, or directed to more appropriate terrain for their health, safety and effective use of natural resources.

Zenith - The point on the celestial sphere vertically above a given position or observer.

Page 322: CE 406 – Advanced Surveying

Satellite Remote Sensing

GE 4150 - Natural Hazards

Some slides taken from Ann Maclean: Introduction to Digital Image Processing

Page 323: CE 406 – Advanced Surveying

Remote Sensing

"the art, science, and technology of obtaining reliable information about physical objects and the environment, through the process of recording, measuring and interpreting imagery and digital representations of energy patterns derived from noncontact sensor systems". (Colwell 1997)

Taken from: Introductory Digital Image Processing, 3rd edition. Jensen, 2004

Page 324: CE 406 – Advanced Surveying

Remote Sensing

A remote sensing instrument collects information about an object or phenomenon within the instantaneous-field-of-view (IFOV) of the sensor system without being in direct physical contact with it. The sensor is located on a suborbital or satellite platform.

Introductory Digital Image Processing. 3rd edition. Jensen, 2004
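At nadir, the ground-projected size of the IFOV is simply the platform altitude multiplied by the angular IFOV in radians (the small-angle approximation). A minimal Python sketch, with illustrative numbers of our own choosing:

def ground_ifov(altitude_m, ifov_rad):
    # Diameter of the ground resolution cell at nadir: D = H * beta.
    return altitude_m * ifov_rad

# e.g., a 0.1 milliradian IFOV viewed from a 705 km orbit:
print(ground_ifov(705e3, 0.1e-3))  # -> 70.5 m ground cell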

Page 325: CE 406 – Advanced Surveying

Remote Sensing

Remote sensing is a tool or technique similar to mathematics. Using sensors to measure the amount of electromagnetic radiation (EMR) exiting an object or geographic area from a distance and then extracting valuable information from the data using mathematically and statistically based algorithms is a scientific activity. It functions in harmony with other spatial data-collection techniques or tools of the mapping sciences, including cartography and geographic information systems (GIS) (Clarke, 2001).

Introductory Digital Image Processing. 3rd edition. Jensen, 2004Introductory Digital Image Processing. 3rd edition. Jensen, 2004

Page 326: CE 406 – Advanced Surveying

Remote Sensing

Information about an Object or Area

Sensors can be used to obtain specific information about an object (e.g., the diameter of a cottonwood tree crown) or the geographic extent of a phenomenon (e.g., the boundary of a cottonwood stand). The EMR reflected, emitted, or back-scattered from an object or geographic area is used as a surrogate for the actual property under investigation. The electromagnetic energy measurements must be calibrated and turned into information using visual and/or digital image processing techniques.

Introductory Digital Image Processing. 3rd edition. Jensen, 2004

Page 327: CE 406 – Advanced Surveying

Introductory Digital Image Processing. 3rd edition. Jensen, 2004

Page 328: CE 406 – Advanced Surveying

Electromagnetic Energy

Thermonuclear fusion on the surface of the Sun yields a continuous spectrum of electromagnetic energy. The 6,000 K temperature of this process produces a large amount of short wavelength energy (from 0.4 - 0.7 µm; blue, green, and red light) that travels through the vacuum of space at the speed of light. Some energy is intercepted by the Earth, where it interacts with the atmosphere and surface materials. The Earth may reflect some of the energy directly back out to space, or it may absorb the short wavelength energy and then re-emit it at a longer wavelength.

Introductory Digital Image Processing. 3rd edition. Jensen, 2004
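The link between the Sun's roughly 6,000 K surface temperature and the visible-light peak quoted above is Wien's displacement law, a standard physics relation rather than something specific to these slides:

\lambda_{max} = \frac{b}{T}, \qquad b \approx 2898\ \mu\mathrm{m\,K}, \qquad \lambda_{max} \approx \frac{2898\ \mu\mathrm{m\,K}}{6000\ \mathrm{K}} \approx 0.48\ \mu\mathrm{m}

i.e. the solar emission peaks near 0.48 µm, in the middle of the 0.4 - 0.7 µm visible band.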

Page 329: CE 406 – Advanced Surveying

[Figure: Electromagnetic Spectrum and the Photon Energy of Visible Light. Axes: photon energy of visible light in electron volts (eV); photon wavelength in nanometers (nm); wavelength in meters (m). Regions shown: gamma and x-ray, ultraviolet, visible (violet limit ~400 nm, blue ~450 nm, green ~550 nm, yellow ~580 nm, orange ~600 nm, red ~650 nm, red limit ~700 nm), near- and far-infrared, microwave and radio waves; the emission peaks of the Sun and the Earth are marked on the wavelength axis.]

Introductory Digital Image Processing. 3rd edition. Jensen, 2004

Page 330: CE 406 – Advanced Surveying

Sensors

• Passive - the Sun's energy, either reflected (visible) or absorbed and re-emitted at thermal infrared wavelengths. Examples: ASTER, Landsat, AVHRR.
• Active - emit their own radiation; the radiation reflected back is detected and measured. Examples: LIDAR, RADAR, and SONAR.

http://ccrs.nrcan.gc.ca/resource/tutor/fundam/chapter1/06_e.php

Page 331: CE 406 – Advanced Surveying

Spectral Resolution

Introductory Digital Image Processing. 3rd edition. Jensen, 2004

Page 332: CE 406 – Advanced Surveying

Spatial Resolution

Introductory Digital Image Processing. 3rd edition. Jensen, 2004

Page 333: CE 406 – Advanced Surveying

Temporal Resolution

[Figure: Remote sensor data acquisition of the same scene at a 16-day repeat interval - June 1, 2004; June 17, 2004; July 3, 2004.]
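A repeat cycle like this is easy to tabulate; the 16-day sequence above can be generated with Python's standard datetime module (a minimal sketch):

from datetime import date, timedelta

def acquisition_dates(start, repeat_days, count):
    # Successive acquisitions of the same scene, one repeat cycle apart.
    return [start + timedelta(days=repeat_days * i) for i in range(count)]

for d in acquisition_dates(date(2004, 6, 1), 16, 3):
    print(d.isoformat())  # 2004-06-01, 2004-06-17, 2004-07-03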

Page 334: CE 406 – Advanced Surveying

Radiometric Resolution

[Figure: Radiometric resolution - brightness value ranges for 7-bit (0-127), 8-bit (0-255), 9-bit (0-511) and 10-bit (0-1023) quantization.]
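The brightness ranges in the figure follow from the definition of radiometric resolution: an n-bit sensor records 2^n discrete levels, coded 0 to 2^n - 1. A minimal sketch:

def grey_levels(bits):
    # An n-bit sensor quantizes radiance into 2**n discrete values.
    return 2 ** bits

for bits in (7, 8, 9, 10):
    print(f"{bits}-bit: 0 - {grey_levels(bits) - 1}")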

Page 335: CE 406 – Advanced Surveying

Volcanology

• Map lava flows and eruptive deposits (lahars)
• Analyze SO2 in volcanic plumes
• Thermal monitoring
• Digital Elevation Models
• Volcanic ash analysis

Page 336: CE 406 – Advanced Surveying

MODIS

• Moderate Resolution Imaging Spectroradiometer
• Launched in 1999 on NASA's Earth Observing System (EOS) platform
• 36 spectral bands
• http://terra.nasa.gov/About/MODIS/modis_swath.html

Page 337: CE 406 – Advanced Surveying

MODVOLC

• Algorithm created by the University of Hawaii
• "The MODVOLC algorithm automatically scans each 1 kilometer pixel within it to check for the presence of high-temperature hot-spots."
• Used not only for volcanic eruptions, but for wildfires as well.

http://modis.higp.hawaii.edu/
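Published descriptions of MODVOLC are based on a normalized thermal index (NTI) computed from the MODIS ~4 µm and ~12 µm radiances and compared with a fixed threshold. The sketch below follows that published idea only in outline; the function names, example radiances, and the exact threshold should be treated as our illustrative assumptions rather than the operational code:

def nti(radiance_4um, radiance_12um):
    # A hot subpixel source raises the 4 um radiance much faster than
    # the 12 um radiance, pushing the index toward zero and above.
    return (radiance_4um - radiance_12um) / (radiance_4um + radiance_12um)

def is_hotspot(radiance_4um, radiance_12um, threshold=-0.8):
    # Threshold of -0.8 follows published MODVOLC descriptions
    # (illustrative here, not the operational value for all cases).
    return nti(radiance_4um, radiance_12um) > threshold

print(is_hotspot(0.5, 9.0))  # ordinary pixel    -> False
print(is_hotspot(6.0, 9.0))  # lava-warmed pixel -> True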

Page 338: CE 406 – Advanced Surveying

Mount Belinda

• South Sandwich Islands
• Eruption first recorded using MODVOLC
• Used MODIS, Landsat 7 ETM+, ASTER, and RADARSAT-1
• Identified the first recorded eruption ever

http://www.intute.ac.uk/sciences/worldguide/satellite/2374.jpg

Page 339: CE 406 – Advanced Surveying

Selected high spatial-resolution images of Montagu Island. North is up. A) Landsat 7 ETM+ Band 8 image from 4 Jan 2002 showing a diffuse plume (P) emanating from Mount Belinda's summit (MB) and tephra deposits on the north flank. Scale bar applies to A, B and C. B) ASTER visible band composite image (Bands 3-2-1) from 7 Dec 2003, showing tephra deposits and the 2003 lava flow (L2). C) RADARSAT-1 image from 30 Oct 2003 showing recent morphology, with inset (D). Arrows point to the approximate summit of Mount Belinda and vent location (MB), ash plumes (P), the 600 m long lava flow first observed in Jan 2002 (L1), the entrenched 2 km long lava flow first observed in Aug 2003 (L2), and arcuate fractures unrelated to this eruption (F). RADARSAT image was provided by the Alaska Satellite Facility, and is copyright 2003 CSA.

Page 340: CE 406 – Advanced Surveying

ASTER

• Advanced Spaceborne Thermal Emission and Reflection Radiometer
• Launched in 1999, part of NASA's EOS
• Spatial resolution: 15 m (VNIR), 30 m (SWIR), 90 m (TIR); 16 day temporal resolution possible
• Per request basis

Page 341: CE 406 – Advanced Surveying

ASTER Uses

• Volcanological Studies
• Mineralogical Studies
• Hydrothermal Studies
• Forest Fires
• Glacier Studies
• Limnological Studies
• Climatology Studies
• Digital Elevation Models
• http://asterweb.jpl.nasa.gov/gallerymap.asp

Page 342: CE 406 – Advanced Surveying

http://asterweb.jpl.nasa.gov/content/03_data/05_Application_Examples/volcanology/default.htm

Page 343: CE 406 – Advanced Surveying
Page 344: CE 406 – Advanced Surveying

ASTER image: North Shore, Oahu, HI. 15 x 15 m (RGB = 1,4,3)

Page 345: CE 406 – Advanced Surveying

LANDSAT

• Launched in 1972; managed by NASA and USGS
• Landsat 7 ETM+ has 7 bands (30 and 60 m) and a panchromatic band (15 m)
• Collected every 16 days
• Mapping lava flows, thermal monitoring, extrusion rates

Introductory Digital Image Processing. 3rd edition. Jensen, 2004

Landsat 7 Image of Palm Springs, CA
30 x 30 m (bands 4,3,2 = RGB)

Page 346: CE 406 – Advanced Surveying

AVHRR

• Advanced Very High Resolution Radiometer
• First launched in 1978 by NOAA
• Global coverage 4.4 km, U.S. 1 km (low spatial resolution)
• Collected twice a day (high temporal resolution)

Page 347: CE 406 – Advanced Surveying

http://www.geo.mtu.edu/volcanoes/research/avhrr/images/spurr/

Page 348: CE 406 – Advanced Surveying

IKONOS Panchromatic Images of Washington, DC

Jensen, 2004

1 x 1 m spatial resolution

First satellite launched by a private company (1999)

1 meter panchromatic, 4 m visible and near infrared

Page 349: CE 406 – Advanced Surveying

Active Sensors

• Emits an energy pulse, measures the backscatter, records it as a digital number
• Long wavelength - microwave
• Penetrates clouds and vegetation
• RADAR imagery is always black and white, with a speckled texture

Page 350: CE 406 – Advanced Surveying

RADAR

http://www.jpl.nasa.gov/images/earth/california/sar-la-browse.jpg

European Remote Sensing 1 satellite radar image of stormwater runoff plumes from Los Angeles and San Gabriel Rivers into the Los Angeles and Long Beach Harbors. Dec. 28, 1992. Image credit: ESA

Page 351: CE 406 – Advanced Surveying

LIDAR

• Light Detection and Ranging
• Transmits a laser light to the target
• 15 cm accuracy
• http://vulcan.wr.usgs.gov/Volcanoes/MSH/Eruption04/LIDAR/

Page 352: CE 406 – Advanced Surveying


TERRESTRIAL & NUMERICAL PHOTOGRAMMETRY

Assoc. Prof. Dr. Dursun Z. SEKER, Res. Assist. Zaide DURAN


PHOTOGRAMMETRY

Photogrammetry is a system in which an object or an event in time and space is recorded onto a sensitized film or plate by means of an appropriate camera or other imaging system, and in which the subsequent image is measured in order to define, portray, digitize or in some way classify the object or event.

Some of the nonmapping applications of photogrammetry are made in the areas of medicine, dentistry, architecture, archaeology, experimental analysis of structures, hydraulics, ship building, animal husbandry, deformation of dams, glacier and earth slide movements, vehicle motion, missile tracking, accident reconstruction and underwater events.

Page 353: CE 406 – Advanced Surveying


The Principle of Photogrammetry

• Assumes the camera produces a perfect central projection;
• There must be no deviation of light rays passing through the lens of the camera;
• The image medium at the focal plane of the camera must be a rigid, planar surface;
• The mathematical relationship between the object and the image is known as the principle of collinearity;
• The principle of collinearity embraces the six degrees of freedom of the camera: three translations and three rotations;
• Departures from the central projection can be modelled as systematic errors in the collinearity condition.
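Written out in one standard notation (a textbook formulation, not copied from these slides), the collinearity condition for an image point (x, y) of an object point (X, Y, Z) is

x = x_0 - c \, \frac{r_{11}(X - X_0) + r_{21}(Y - Y_0) + r_{31}(Z - Z_0)}{r_{13}(X - X_0) + r_{23}(Y - Y_0) + r_{33}(Z - Z_0)}

y = y_0 - c \, \frac{r_{12}(X - X_0) + r_{22}(Y - Y_0) + r_{32}(Z - Z_0)}{r_{13}(X - X_0) + r_{23}(Y - Y_0) + r_{33}(Z - Z_0)}

where (x_0, y_0) is the principal point, c the principal distance, (X_0, Y_0, Z_0) the perspective centre, and r_{ij} the elements of the rotation matrix. The six degrees of freedom mentioned above appear explicitly: three translations (X_0, Y_0, Z_0) and three rotations contained in the r_{ij}.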


Its most important feature is the fact that the objects are measured without being touched. Therefore, the term "remote sensing" is used by some authors instead of "photogrammetry". "Remote sensing" is a rather young term, which was originally confined to working with aerial photographs and satellite images. Today, it also includes photogrammetry, although it is still associated rather with "image interpretation".

Principally, photogrammetry can be divided into:

1. Depending on the lens setting:

• Far range photogrammetry (with camera distance setting to infinity),

•Close range photogrammetry (with camera distance settings to finite values).

2. Another grouping can be:

•Aerial photogrammetry (which is mostly far range photogrammetry),

•Terrestrial Photogrammetry (mostly close range photogrammetry).

Page 354: CE 406 – Advanced Surveying


The applications of photogrammetry are widely spread. Principally, it is utilized for object interpretation (What is it? Type? Quality? Quantity?) and object measurement (Where is it? Form? Size?). Aerial photogrammetry is mainly used to produce topographical or thematic maps and digital terrain models. Among the users of close-range photogrammetry are architects and civil engineers (to supervise buildings, document their current state, deformations or damages), archaeologists, surgeons (plastic surgery) or police departments (documentation of traffic accidents and crime scenes), just to mention a few.


Photogrammetric Techniques

Single Camera
• 2D information only
• application limited to planar objects
• precision dependent on image scale
• no reliability

Close-range Camera Stereopair
• minimum configuration for 3D information
• widely used for aerial and close range work
• precision dependent on image scale and base-to-height ratio
• minimal reliability

Page 355: CE 406 – Advanced Surveying


Page 356: CE 406 – Advanced Surveying


Page 357: CE 406 – Advanced Surveying


Photogrammetry is a technique that uses photographs for mapmaking and surveying. As early as 1851 the French inventor Aimé Laussedat perceived the possibilities of the application of the newly invented camera to mapping, but it was not until 50 years later that the technique was successfully employed.

In the decade before World War I, terrestrial photogrammetry, as it came to be known later, was widely used; during the war the much more effective technique of aerial photogrammetry was introduced. Although aerial photogrammetry was used primarily for military purposes until the end of World War II, thereafter peacetime uses expanded enormously. Photography is today the principal method of making maps, especially of inaccessible areas, and is also heavily used in ecological studies and in forestry, among other uses.

Page 358: CE 406 – Advanced Surveying


From the air, large areas can be photographed quickly using special cameras, and blind areas, hidden from terrestrial cameras, are minimized. Each photograph is scaled using marked and known ground reference points; thus, a mosaic can be constructed that may include thousands of photographs. Plotting machines and computers are used to overcome complications.

Instruments used in photogrammetry have become very sophisticated. Developments in the second half of the 20th century include satellite photography, very large scale photographs, automatic visual scanning, high-quality colour photographs, use of films sensitive to radiations beyond the visible spectrum, and numerical photogrammetry.


Page 359: CE 406 – Advanced Surveying


Photogrammetric Processing Techniques

Analogue: A pair of photographs are placed in a mechanical/optical device called a stereoplotter. An operator physically adjusts the orientations of the photographs to match the exposure situation. Detail and heights are traced on a plotting table by a direct mechanical linkage.

Page 360: CE 406 – Advanced Surveying


Analytical: Single or pairs of photographs are placed in an X-Y measuring stage which digitally records image coordinates. Mono or stereo comparators are manually driven, whilst analytical plotters are semi-automated. Recorded measurements are computer processed and the information registered in a CAD database.


Digital Image: Single or pairs of digital images are loaded into a computer with image processing capabilities. The images may be from satellite or airborne scanners, CCD cameras, or conventional photographs captured by a line scanner. The images are either displayed on the screen for operator interpretation, enhanced by image processing, or subjected to image correlation in order to form a digital elevation model or extract detail.

Page 361: CE 406 – Advanced Surveying


Page 362: CE 406 – Advanced Surveying


Fiducial Marks

Fiducial marks are small targets on the body of metric cameras. Their positions relative to the camera body are calibrated. Thus, they define the image co-ordinate system; in that system, the position of the projection centre is known. Form as well as distribution of fiducial marks depend on the manufacturer. If amateur cameras are used, the images of the corners of the camera frame on the negatives can be used instead of fiducial marks.
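In practice, "defining the image coordinate system" from fiducials means fitting a simple plane transformation between the marks as measured (comparator or scanner coordinates) and their calibrated positions. The NumPy sketch below fits an affine model by least squares; the fiducial layout and all numeric values are invented for illustration:

import numpy as np

# Calibrated fiducial coordinates (mm) and the same marks measured on
# a scanned image (pixels); all numbers are hypothetical.
calibrated = np.array([[-110.0, -110.0], [ 110.0, -110.0],
                       [ 110.0,  110.0], [-110.0,  110.0]])
measured   = np.array([[  102.3, 9853.1], [9851.7, 9850.2],
                       [ 9849.9,  101.8], [ 104.6,   99.5]])

# Affine model: [x_mm, y_mm] = [col, row, 1] @ params, solved by
# least squares for the 3x2 parameter matrix.
design = np.hstack([measured, np.ones((4, 1))])
params, *_ = np.linalg.lstsq(design, calibrated, rcond=None)

def pixel_to_image(col, row):
    # Map a measured pixel position into the calibrated image system.
    return np.array([col, row, 1.0]) @ params

print(pixel_to_image(*measured[0]))  # approx. (-110, -110)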

Page 363: CE 406 – Advanced Surveying


Page 364: CE 406 – Advanced Surveying


Page 365: CE 406 – Advanced Surveying


Page 366: CE 406 – Advanced Surveying


TERRESTRIAL PHOTOGRAMMETRY

When ground-based cameras are employed, the term terrestrial photogrammetry is used. This term has historically been applied to the system of surveying and mapping from photographs taken at ground stations. Terrestrial photogrammetry can be further classified:

• as close-range photogrammetry if the camera-object distance is somewhere between 0.10 m and 100 m,

• as macrophotogrammetry if the camera-object distance is in the 0.10 to 0.01 m range,

• as microphotogrammetry when the photos are exposed through a microscope.

Page 367: CE 406 – Advanced Surveying


A photographic image is a "central perspective". This implies that every light ray which reached the film surface during exposure passed through the camera lens (which is mathematically considered as a single point, the so-called "perspective center"). In order to take measurements of objects from photographs, the ray bundle must be reconstructed. Therefore, the internal geometry of the camera used (which is defined by the focal length, the position of the principal point and the lens distortion) has to be precisely known. The focal length is called the "principal distance", which is the distance of the projection center from the image plane's principal point.

PHOTOGRAPHING DEVICES: CAMERAS

Depending on the availability of this knowledge, the photogrammetrist divides photographing devices into three categories:


Metric Cameras

They have stable and precisely known internal geometries and very low lens distortions. Therefore, they are very expensive devices. The principal distance is constant, which means that the lens cannot be re-focused when taking photographs. As a result, metric cameras are only usable within a limited range of distances towards the object. The image coordinate system is defined by (mostly) four fiducial marks, which are mounted on the frame of the camera. Terrestrial cameras can be combined with tripods and theodolites. Aerial metric cameras are built into aeroplanes, mostly looking straight downwards. Today, all of them have an image format of 23 by 23 centimeters.

Page 368: CE 406 – Advanced Surveying


Stereometric Camera

If an object is photographed from two different positions, the line between the two projection centers is called the "base". If both photographs have viewing directions which are parallel to each other and at a right angle to the base (the so-called "normal case"), then they have similar properties to the two images of our retinas. Therefore, the overlapping area of these two photographs (which are called a "stereopair") can be seen in 3D, simulating man's stereoscopic vision.

In practice, a stereopair can be produced with a single camera from two positions or using a stereometric camera. A stereometric camera in principle consists of two metric cameras mounted at both ends of a bar, which has a precisely measured length (mostly 40 or 120 cm). This bar functions as the base. Both cameras have the same geometric properties. Since they are adjusted to the normal case, stereopairs are created easily.
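For the normal case just described, the object distance follows directly from the x-parallax between the two photographs. In the usual notation (a standard relation, not specific to these slides),

Z = \frac{c \cdot B}{p_x}, \qquad p_x = x' - x''

where c is the principal distance, B the base, and p_x the difference of the x-coordinates of the same point on the left and right images. With the 120 cm base mentioned above, c = 60 mm and a measured parallax of 10 mm, for example, Z = (0.06 x 1.20) / 0.010 = 7.2 m.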


Non-metric (Amateur) Cameras

The photogrammetrist speaks of an "amateur camera" when the internal geometry is not stable and unknown, as is the case with any "normal" commercially available camera. However, these too can be very expensive and technically highly developed professional photographic devices. By photographing a test field with many control points at a repeatably fixed distance setting (for example at infinity), a "calibration" of the camera can be calculated. In this case, the four corners of the camera frame function as fiducials. However, the precision will never reach that of metric cameras. Therefore, they can only be used for purposes where no high accuracy is demanded. But in many practical cases such photography is better than nothing, and very useful in cases of emergency.

Page 369: CE 406 – Advanced Surveying


Digital Cameras

Photography can be taken with a variety of cameras; however, the result must be digital image files. Digital cameras work best for schedule and efficiency, with no loss of accuracy. The resolution of the cameras defines the field procedures to be used, not the final accuracy. Generally, lower cost, lower resolution cameras take more labor to get the same accuracy as higher resolution cameras. Vexcel can assist in determining the best camera for your particular needs.


CAMERAS IN TERRESTRIAL PHOTOGRAMMETRY

Two basic camera types are employed in terrestrial photogrammetry. These are metric cameras and non-metric cameras.

Metric cameras are designed and calibrated specifically for photogrammetric measurement. They have a known and stable interior orientation and are usually fixed-focus cameras. They also contain fiducial marks with which to recover the interior orientation.

Non-metric cameras are represented by a variety of fairly high-quality hand-held cameras used by amateur and professional photographers to take pictures of good pictorial quality.

Page 370: CE 406 – Advanced Surveying


TERRESTRIAL METRIC CAMERAS

The photographs for terrestrial photogrammetry are usually taken with the cameras in fixed positions, the elements of outer orientation being frequently determined by field survey. Photographs at large camera-to-object distances are only used in special cases, for example for topographic surveys by expeditions and for glaciological research. Detail photographs in hilly areas, e.g. for the construction of hydroelectric power stations or for quarry surveys, border on close-range photogrammetry, in which the camera is focused on finite distances and the depth of field has to be considered.


GENERAL DESIGN OF TERRESTRIAL METRIC CAMERAS: Stereometric Camera

Stereometric cameras consist of two cameras fixed relative to each other in the normal case with, usually, a fixed base. The most common base is 120 cm, for object distances from 5 to 25 m. They are designed for those cases where a simple photogrammetric arrangement is suitable, for example traffic accidents or simple surveys of building facades. Fixed-base cameras with base lengths of 40 cm and 200 cm also exist.

Schematic diagram of a stereometric camera.

Page 371: CE 406 – Advanced Surveying


GENERAL DESIGN OF TERRESTRIAL METRIC CAMERAS: Independent Metric Camera

These cameras are used whenever maximum accuracy is required and the base/distance ratio must be carefully considered.

Schematic diagram of an independent metric camera.


[Photographs: stereometric cameras and an independent metric camera.]

Page 372: CE 406 – Advanced Surveying


EXAMPLES OF STEREOMETRIC CAMERAS

• Wild C120
• Wild C40
• Zeiss Oberkochen SMK 120
• Zeiss Oberkochen SMK 40

EXAMPLES OF INDEPENDENT METRIC CAMERAS

• Wild P31
• Wild P32
• Zeiss Jena UMK


STEPS OF A TERRESTRIAL PHOTOGRAMMETRIC APPLICATION

Page 373: CE 406 – Advanced Surveying


Page 374: CE 406 – Advanced Surveying


FLOWCHART OF PHOTOGRAMMETRIC MAP PRODUCTION

[Flowchart nodes: Reconnaissance (trying for discovery); Base Map; Marking on the Ground; Establishment of Ground Control Points; Photogrammetric Triangulation; Auxiliary Data; Image Definition; Stereo Evaluation; Single Image Evaluation.]

Page 375: CE 406 – Advanced Surveying


[Flowchart nodes, continued: Stereo Evaluation; Single Image Evaluation; Numerical (Digital) Data Process; Stereo Plotting; Cartographic Process; Rectification; Mosaic; Orthophoto; Photo Map; Analogue Map; Digital Map; Printing.]


TERRESTRIAL AND NUMERICAL PHOTOGRAMMETRY

Dursun Z. Şeker

Page 376: CE 406 – Advanced Surveying


AREAS OF APPLICATIONS OF CLOSE-RANGE PHOTOGRAMMETRY

The ever-expanding areas of application of close-range photogrammetry can be grouped into three major areas: architectural photogrammetry, biomedical and bioengineering photogrammetry (biostereometrics) and industrial photogrammetry.


ARCHITECTURE

It is noteworthy that the very first measurements ever made by photogrammetry (in the middle of the 19th century) had to do with monuments. It is also a fact that the term "photogrammetry" was introduced by an architect, Albrecht Meydenbauer, who made his first photogrammetric surveys in 1867. For over a century, photogrammetric methods and equipment have continued to evolve. More recently, the field of architectural application of photogrammetry has undergone considerable expansion both in scope and diversity.

Page 377: CE 406 – Advanced Surveying


SURVEYS OF HISTORICAL MONUMENTS

Photogrammetric surveys of historic monuments can be grouped in three major categories:

• Rapid and relatively simple surveys
• Accurate and complete surveys
• Very accurate surveys


Operational Procedures

Procedures for all of the above-discussed types of photogrammetric surveys are well established and documented. Independent stereopairs of photographs are taken either horizontally, vertically or at some inclination, using the camera(s) most suitable for the individual project. The base-to-distance ratio is kept rather small (1/5 to 1/15). External controls are kept as simple as possible (such as a number of distances and checks on the levelling bubbles of the camera). In the case of complex objects, however, a network of reference points is necessary. Camera stations are normally located on the ground, on scaffoldings, on nearby buildings, on a hydraulic lift truck or even in helicopters, which are sometimes used to take horizontal photographs of the upper portions of tall buildings.
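The advice to watch the base-to-distance ratio can be made quantitative. Differentiating the normal-case relation Z = cB/p_x gives the standard depth-precision estimate (a textbook relation; the numbers in the example are our own):

\sigma_Z = \frac{Z^2}{c \cdot B}\,\sigma_{p_x}

so precision in depth degrades with the square of the object distance and improves with a longer base. For example, at Z = 10 m with B = 1 m (a 1/10 ratio), c = 60 mm and an image measuring accuracy of 0.01 mm, \sigma_Z \approx (10^2 / (0.06 \times 1)) \times 0.00001 \approx 0.017 m, i.e. about 17 mm.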

Page 378: CE 406 – Advanced Surveying


BIOSTEREOMETRICS (BIOMEDICAL AND BIOENGINEERING APPLICATIONS OF PHOTOGRAMMETRY)

The study of biological form is one of the most engaging subjects in the history of human thought, which is hardly surprising considering the immense variety of living things. As new measurement techniques and experimental strategies have appeared, new fields of inquiry have been launched and more minds have become absorbed with the riddle of biological form. Discovery of the microscope and X-rays prompted the development of microbiology and radiology, respectively. More recently advances in electronics, photo optics, computers and related technologies have helped to expand the frontiers of morphological research. Growing interest in the stereometric analysis of biological form typifies this trend.


INDUSTRIAL PHOTOGRAMMETRY

Photogrammetry has been applied in numerous industrial fields, and the potential for further expansion and growth is seemingly limitless. Industrial photogrammetry has been described as the "application of photogrammetry in building construction, civil engineering, mining, vehicle and machine construction, metallurgy, ship building and traffic, with their fundamentals and border subjects, including the phases of research, planning, production engineering, manufacture, testing, monitoring, repair and reconstruction. Objects measured by photogrammetric techniques may be solid, liquid or gaseous bodies or physical phenomena, whether stationary or moving, that allow being photographed" (Meyer, 1973).

Page 379: CE 406 – Advanced Surveying


Economic benefits of photogrammetric approach

• Measurement time on the object is reduced by 90-95%
• Saving in manpower
• Reduced machine time for blade machining through optimisation of the metal removal rate
• Reduced material expenditure in propeller casting manufacture through optimised molds
• A cut in recycling time for non-ferrous metals
• Shorter production time for propeller manufacture


Examples of Industrial Applications

• Automobile construction
• Mining engineering
• Machine construction
• Objects in motion
• Shipbuilding
• Structures and buildings
• Traffic engineering

Page 380: CE 406 – Advanced Surveying


Architectural Close-Range Applications Done by Our Department

• Hagia Sophia - Photogrammetric Record of a World Cultural Heritage
• Soğukçeşme Sokağı (Cold Fountain Street)
• Obtaining a Facade Plan of Dolmabahce Palace by Digital Photogrammetric Techniques
• Küçüksu Pavilion
• Seniye Sultan Mansion
• Amcazade Hüseyin Pasha Mansion
• Historical Galatasaray Post Office


OLD CITY SILHOUETTE OF ISTANBUL

In this study, it was intended to obtain a 1:500 scale silhouette of old İstanbul in order to protect the historical structure. For this purpose, the photographs were taken from arbitrary points on board a sea craft. The control points were marked along the shore. A UMK 10/1318 photogrammetric camera was used to take the photographs, and the digital photogrammetric system PICTRAN was used for evaluation.

Page 381: CE 406 – Advanced Surveying


An Example Drawing of the Old City Silhouette of Istanbul


Hagia Sophia - Photogrammetric Record of a World Cultural Heritage

The Hagia Sophia in Istanbul belongs, with its unique dome construction, to the outstanding and extraordinary architectural structures of the whole world. Built between 532 and 537 under the Byzantine Emperor Justinian (527-565), it reflects the sum of all the experience and knowledge of classical antiquity, and it is one of the important monuments of the world heritage. Hagia Sophia, considered a unique achievement in terms of its architecture, magnificence and functionality, has been an inspiration for Ottoman mosques and is a product of the synthesis of west and east. It remains one of the wonders of the world to this day.

Page 382: CE 406 – Advanced Surveying


The Conservation and Restoration Branch of Historical Buildings asked the Photogrammetry Division of Istanbul Technical University to prepare orthophotos of the Hagia Sophia. Together with the Institute of Photogrammetry and Remote Sensing of the Vienna University of Technology, it was decided to create a high quality 3D model of the dome, so that the obtained results could later also be used in a "Hagia Sophia Information System". This information system is intended to collect all the information about the building and will be a useful guide for everyone. As one result, a 3D photo-model was generated and stored using the data format VRML (Virtual Reality Modeling Language). This paper describes the measurement process, the generation of the 3D model, the production of the terrestrial orthophotos and the setup of the information system.


VRML model of Hagia Sophia

Page 383: CE 406 – Advanced Surveying


SOĞUKÇEŞME SOKAĞI (COLD FOUNTAIN STREET)

On approaching the Imperial Gate leading into the outer courtyard of Topkapi Palace, one's attention is immediately attracted by the row of old Istanbul houses in the street running off to the left. This narrow street between the palace walls and Ayasofya is known as the "Street of the Cool Fountain".


Photogrammetric Evaluation of Soğukçeşme Street

Page 384: CE 406 – Advanced Surveying


The houses built against the palace walls form part of a complex that includes the fountain dated 1810 that gives the street its name, and a cistern forming part of the chain of great water depots from the Roman period, the whole reflecting the character of a city that has served as the capital of three great empires.

SOĞUKÇEŞME SOKAĞI


SOĞUKÇEŞME SOKAĞI

Page 385: CE 406 – Advanced Surveying


Obtaining a Facade Plan of Dolmabahçe Palace by Digital Photogrammetric Techniques

The aim of the project was to make facade plans of Dolmabahçe Palace at scales of 1/100, 1/50, 1/20 and 1/10. A preliminary study was done and control points were signalized. With the help of surveying methods, ground control point coordinates were measured. Photographs were taken according to a study plan, and were scanned. Evaluation was done using the digital photogrammetric software PICTRAN. After interior and exterior orientation, points were measured on the oriented photographs, and a bundle adjustment was used. Information produced in Pictran was transferred into the AUTOCAD system. Cross-section plans were obtained by conventional methods.


Obtaining a Facade Plan of Dolmabahce Palace

Page 386: CE 406 – Advanced Surveying


Window Details from the Facade Plan of Dolmabahce Palace


KÜÇÜKSU PAVILION

This attractive part of the Bosphorus on the Asian shore is mentioned by Byzantine historians, and in Ottoman times became one of the imperial parks known as Kandil Bahçesi (Lantern Garden). Sultan Murad IV (1623-1640) was particularly fond of Küçüksu and gave it the name Gümüş Selvi (Silver Cypress), and in several sources from the 17th century onwards the name Bağçe-i Göksu is used.

Page 387: CE 406 – Advanced Surveying


Seniye Sultan Mansion


Amcazade Hüseyin Paşa Mansion

The only survivor of the old, wood-built waterside residences is the Amcazade Hüseyin Paşa Mansion on the coast at Kanlıca. In fact, only a part of this great mansion, the T-shaped reception room with its great windows overlooking the Bosphorus, remains.

Page 388: CE 406 – Advanced Surveying


Unfortunately its walls, which are embellished with painted and gold leaf designs, have deteriorated rapidly during the last 50 years because of neglect. In the middle of this room is a marble pool and, over it, a domed roof bearing traces of its former magnificence.

Page 389: CE 406 – Advanced Surveying


Photogrammetric and Geodetic Map Revision for the Boğazkale Archaeological Excavation Field

The aim of the project is to revise the map of the Boğazkale Archaeological Excavation Field by means of geodetic and photogrammetric methods. According to the plan which was prepared for taking photographs, a preliminary study was done at the Boğazkale Archaeological Field. Control points were painted on the rocks. Photographs were taken with an SMK 120 stereo photogrammetric camera and evaluated at the İ.T.Ü. Engineering Faculty Photogrammetry Laboratory by means of a B8S analytical photogrammetric instrument. A PC-based digital photogrammetric software, PICTRAN, was used for evaluation.

Page 390: CE 406 – Advanced Surveying


Camera calibration is made according to a bundle adjustment, and the photographs, which were taken without approximate values of the orientation parameters, are scanned and oriented. Points are measured on the oriented photographs, and point coordinates are determined by means of intersected homologous rays. As appropriate for the project, a Digital Terrain Model (DTM) of the Boğazkale Archaeological Field was obtained with the help of software developed by the Photogrammetry Division. Information produced in the Pictran software is afterwards transferred into the AUTOCAD system.


In this study, data for architectural CAD drawings at 1/20 and 1/50 scale were obtained by means of digital close range photogrammetric techniques at the historical Post Office of Galatasaray building. Pictran D-B software was used for the digital photogrammetric evaluation. A Rollei 6008 metric camera with a 40 mm lens was used for taking the photos.

Architectural Photogrammetric Work At Historical Galatasaray Post Office

Page 391: CE 406 – Advanced Surveying


The control points were marked on the building side with silicone. AutoCAD R14 was used for drawing the plans for the architectural work. These products will be used for the restoration and reconstruction of the historical Galatasaray Post Office building in Istanbul.