
Mobile Mapping: An Emerging Technology for Spatial Data Acquisition

Rongxing Li

Abstract
Mobile mapping has been the subject of significant research and development by several research teams over the past decade. A mobile mapping system consists mainly of a moving platform, navigation sensors, and mapping sensors. The mobile platform may be a land vehicle, a vessel, or an aircraft. Generally, navigation sensors, such as Global Positioning System (GPS) receivers, vehicle wheel sensors, and inertial navigation systems (INS), provide both the track of the vehicle and position and orientation information of the mapping sensors. Objects to be surveyed are sensed directly by mapping sensors, for instance, charge coupled device (CCD) cameras, laser rangers, and radar sensors. Because the orientation parameters of the mapping sensors are estimated directly by the navigation sensors, complicated computations such as photogrammetric triangulation are greatly simplified or avoided. Spatial information of the objects is extracted directly from the georeferenced mapping sensor data by integrating navigation sensor data. Mobile mapping technology has evolved to a stage which allows mapping and GIS industries to apply it in order to obtain high flexibility in data acquisition, more information with less time and effort, and high productivity. In addition, a successful extension of this technology to helicopter-borne and airborne systems will provide a powerful tool for large-scale and medium-scale spatial data acquisition and database updating.

This paper provides a systematic introduction to the use of mobile mapping technology for spatial data acquisition. Issues related to the basic principle, data processing, automation, achievable accuracies, and a breakdown of errors are given. Application considerations and application examples of the technology in highway and utility mapping are described. Finally, the perspective of the mobile mapping technology is discussed.

Introduction
Large-scale spatial data and associated attributes in geographical information systems (GIS) are in high demand in order to generate new databases and to update existing databases in applications such as transportation, utility management, and city planning. The acquisition of such information is mostly realized by aerial photogrammetry or terrestrial surveying using, for example, total stations. In the former case, aerial photographs may not provide sufficiently detailed information regarding planimetric and horizontal object features, for example, manholes, positions of road curbs and center lines, building facades, etc., because of the photo scale and perspective projection geometry.

Department of Geomatics Engineering, The University of Calgary, 2500 University Drive N.W., Calgary, Alberta T2N 1N4, Canada.

The author is presently with the Department of Civil and Environmental Engineering and Geodetic Science, The Ohio State University, Columbus, OH 43210.

In the latter case, object features can be measured very accurately. However, the operational time of the field survey and associated costs are major concerns and sometimes render a project impossible. Furthermore, a road or utility survey along a highway may be costly or impractical if the traffic has to be stopped or detoured for a long period. However, a mobile mapping system, equipped with a mobile platform, navigation sensors, and mapping sensors, is capable of solving the above problems. The mobile platform may be a land vehicle, a vessel, or an aircraft. Generally, navigation sensors, such as Global Positioning System (GPS) receivers, vehicle wheel sensors, and inertial navigation systems (INS), provide both the track of the vehicle and position and orientation information of the mapping sensors. Objects to be surveyed are sensed directly by mapping sensors, for instance, charge coupled device (CCD) cameras, laser rangers, and radar sensors. Because the orientation parameters of the mapping sensors are directly supplied by the navigation sensors, complicated computations such as photogrammetric triangulation are reduced or avoided. Spatial information regarding the objects is extracted directly from the georeferenced mapping sensor data by integrating navigation sensor data. Advantages of such a system include

- Increased coverage capability, rapid turnaround time, and, thus, improved efficiency of the field data acquisition;
- Integration of various sensors so that quality spatial and attribute data can be acquired and associated efficiently;
- Simplified geometry for object measurements supported by direct control data from navigation sensors;
- Flexible data processing scheme with original data stored as archive data and specific objects measured at any time; and
- Strongly georeferenced image sequences which provide an opportunity for automatic object recognition and efficient thematic GIS database generation.

Mobile mapping technology has been researched and developed since the late 1980s. It was inspired by the availability of GPS technology for civilian uses. Experiments have been performed to strengthen the geometry and to reduce the ground control in photogrammetric aerotriangulation by employing GPS observations (van der Vegt, 1988; Colomina, 1989; Jacobsen, 1993; Ackermann and Schade, 1993; Merchant, 1994). The mobility of the mapping systems has always been a concern. It requires high resolution mapping sensors covering large areas, and high accuracy navigation sensors determining the positions and orientations of the vehicle. The integration of the various sensors involved, such as GPS, INS, laser scanners, CCD cameras, and others, in a mapping system has been investigated and demonstrated by Hein (1989), Krabill (1989), Schwarz et al. (1993b), Scherzinger et al. (1995), Da and Dedes (1995), and Schwarz and El-Sheimy (1996).

Photogrammetric Engineering & Remote Sensing, Vol. 63, No. 9, September 1997, pp. 1085-1092.

0099-1112/97/6309-1085$3.00/0 © 1997 American Society for Photogrammetry and Remote Sensing



[Figure 1. Relationships between objects, the platform, and the world.]

In some applications, only GPS is used to provide the dynamic positions of the vehicle and the locations of the mapped objects. For example, an all-terrain vehicle equipped with a GPS receiver gives the shoreline position when it moves along the water mark line (Shaw and Allen, 1995). Video images or other GIS attributes can be provided with associated locations surveyed by GPS receivers in a static or kinematic mode. There are a number of alternatives for determining the attitude of a vehicle, including dead reckoning systems (Novak, 1995; Da and Dedes, 1995), gyroscopes (Cosandier et al., 1993), INS (Wei and Schwarz, 1990), and the attitude information derived from multiple GPS antennas (Lu, 1995; Sun, 1996). These techniques offer different accuracies with varying costs. More sophisticated applications of integrated sensors have been researched recently. For instance, positions determined by a GPS receiver on a car are integrated with a road network database to support car navigation (Krakiwsky and French, 1995). A GPS assisted Autonomous Precision Approach and Landing System (APALS) uses calibrated Synthetic Aperture Radar (SAR) maps to support aircraft landing (Cavanaugh, 1995). Position and orientation parameters are derived from GPS and INS data to control aerial cameras and sonar beams on a vessel (Schwarz et al., 1993b; Scherzinger et al., 1995).

Land-vehicle-based mobile mapping systems result in, among others, (1) close distances between the systems and the objects to be surveyed, (2) no ground control and no triangulations across images exposed at different times, and (3) completely digital processing. These systems are designed mainly for mapping purposes. Different approaches to the system design and implementation have been used. To give a few examples, TruckMap™ (Pottle, 1995) uses DGPS, a reflectorless laser rangefinder, and a high accuracy azimuth engine to establish object positions. Video images are acquired, each image being associated with a digitized position and time reference (Pottle, 1995; Chapman and Baker, 1996). GPSVan™ (Bossler et al., 1991) started with DGPS, CCD cameras, an odometer for measuring the distance driven, a wheel sensor for deriving vehicle speed, and a gyro system for obtaining orientation parameters. The system was improved and successfully employed, for instance, in railway-related mapping (Novak, 1995; Bossler and Toth, 1996). VISAT-Van (Schwarz et al., 1993a; Li et al., 1994a) uses a high accuracy strapdown inertial navigation system to provide high quality angular orientation parameters of the CCD cameras, along with positional orientation parameters from DGPS. The new configuration of the system contains eight CCD cameras covering a view field of 180°. Similar configurations were implemented and reported in KiSS (Hock et al., 1995) and GPSVision (He, 1996).

Processing of the vast amount of mobile mapping data is subsequently a very important task. So far, there is no common commercial software capable of handling the data from different mobile mapping systems. Automation of the procedures of mobile mapping data processing has not been extensively researched because most efforts seem to have been made in the development of the data acquisition systems. However, the automation of processing such large observation databases is of great importance. Automatic matching of corresponding image points in an image sequence was reported by Li et al. (1994b) and Xin (1995). Extraction of road center lines and curb lines from mobile mapping image sequences was researched by He and Novak (1992), Tao et al. (1996), and Li et al. (1996a). Three-dimensional coordinates in the object space calculated from the mobile mapping image sequences can be optimized by considering both precision and reliability (Li et al., 1996b). A discussion of building object-oriented 3D databases from mobile mapping data can be found in Qian (1995). Several land-vehicle-based systems have been developed. Efforts in realization of the concept in the airborne environment have been made by researchers (Lapine, 1991; Merchant, 1994; Bossler, 1996).

This paper introduces the mobile mapping technology for spatial data acquisition systematically, including its principle, data processing, and automation. Achievable accuracies of the current mobile mapping systems and a breakdown of errors are given. Examples and considerations relating to the use of the technology in highway and utility applications are described. The perspective of mobile mapping technology is also discussed.

Mobile Mapping Technology

Spatial and Time Referencing in Mobile Mapping Systems
The position of the moving platform changes dynamically along a predefined track to acquire mapping data of objects within its field of view. Two critical issues are (1) how to determine the dynamic positions of the platform itself at any time, and (2) how to further derive the spatial information of the objects of interest in the field of view from the platform. Without appropriately defined spatial and time reference systems, it is difficult to describe relationships between the objects, the sensors, the platform, and the world. The track of the vehicle is usually defined in a global coordinate system (Figure 1). Depending on the geographical extent of a project and the application requirements, this global coordinate system may be, for example, the Universal Transverse Mercator (UTM) projection system, a state plane coordinate system, or a 3D Cartesian coordinate system. The platform coordinate system $(X_P, Y_P, Z_P)$ is defined on the vehicle and used to integrate various sensors such as the GPS receiver, INS unit, laser device, and cameras. An individual sensor, say the ith sensor, has its own local coordinate system $(x_i, y_i, z_i)$, which is related to the platform coordinate system. The platform coordinate system is further referenced to the global coordinate system. For example, the object to be surveyed is a traffic sign which is within the field of view of the ith sensor (a camera in Figure 1). The objective is to derive the coordinates of the traffic sign in the global coordinate system $(X_G, Y_G, Z_G)$. This can be realized by detecting the object using a single or multiple sensors and calculating its coordinates in a local coordinate system. By means of the platform coordinate system, the coordinates of the object in the local sensor coordinate system are transformed to the global coordinate system.

The mathematical models for calculating the object coordinates in the local coordinate system depend on the types of sensors. This will be discussed in a later section. The determination of transformation parameters between the local sensor coordinate systems and the platform coordinate system is usually carried out in a system calibration procedure. This procedure requires a special setting of precisely surveyed and well distributed control points, and has to be performed periodically, or whenever there is a change in the relationship between the sensors (Moffitt and Mikhail, 1980; El-Sheimy and Schwarz, 1993).



A time reference system is used to track the kinematic positions of the platform. It is also used in the synchronization of the sensors to incorporate signals from various sources taken at different epochs. The signals are both spatially and temporally referenced to derive the global spatial information of the object detected by the sensors. The time reference system used in mobile mapping is dependent on the sensors used and the accuracy required. GPS time signals, related to Universal Coordinated Time (UTC), are available in received GPS data (Leick, 1995). An INS, however, provides more frequent time updates (El-Sheimy and Schwarz, 1993). Once a time system is chosen as the time reference system, signals from different sources can be integrated and the kinematic platform locations can be determined.

Kinematic Mapping Principle
GPS technology is commonly employed in the precise determination of the dynamic positions of the platform. The positions of the objects to be measured are then acquired using the mapping sensors. Mobile mapping systems are categorized into direct measurement systems and indirect measurement systems according to the methods in which the objects are measured.

[Figure 2. Mapping by direct measurements.]

A direct measurement system collects object positions directly using the GPS/DGPS measurements. The simplest system of this kind uses a GPS receiver moved from one object point to the other. The coordinates of the point J ($r_J^G$ in the global coordinate system) are measured and the attributes are associated in real time. To achieve a reasonable accuracy for mapping (Figure 2), DGPS is applied by adding an additional GPS receiver at a master station with known coordinates in the global coordinate system ($r_{GPS}^G$). The object position $r_J^G$ is computed by

$r_J^G = r_{GPS}^G + \Delta r_J^{GPS}$,   (1)

where $\Delta r_J^{GPS}$ is provided by the DGPS result. A sophisticated direct measurement system determines point positions based on "continuous" observations along the vehicle track. More accurate geometric information can be achieved by post-processing algorithms. For example, the OTF (On-The-Fly) algorithm for kinematic positioning allows the vehicle to travel at a speed of 60 km/h and to reach an accuracy of centimetres provided that the baseline between the rover and the master receiver is less than about 50 km (Schwarz et al., 1993a; Cannon, 1994; Leick, 1995). Similar results in airborne applications at a speed in excess of 360 km/h were demonstrated (Lapine, 1991). The direct measurement system can be applied to many situations. However, if a digital terrain model is to be generated, there must be a very dense network consisting of a large number of vehicle tracks to cover the area. This may not be a practical and efficient method for this purpose.

[Figure 3. Mapping by indirect measurements.]

In an indirect measurement system, GPS is employed to control other mapping sensors which acquire data of objects to be surveyed. A typical example of this kind of system is a van equipped with a GPS receiver, a pair of CCD cameras, and orientation sensors (Figure 3). These on-board sensors are mounted on a "rigid" body. They are associated with the platform coordinate system $(X_P, Y_P, Z_P)$. In some systems, the platform coordinate system is, for example, defined by the INS frame (El-Sheimy and Schwarz, 1993). The orientation of the platform in the global coordinate system is represented by a rotation matrix $M_P^G$ which is a function of heading, pitch, and roll. These three angles can be measured dynamically by the orientation sensors, for example, by means of the INS, gyroscopes, or multiple GPS antennas. The system calibration supplies the offset of the GPS antenna $\Delta r_{GPS}^P$ and those of the camera exposure centers $\Delta r_{C1}^P$ and $\Delta r_{C2}^P$ in the platform coordinate system. The objective is to compute the position of point J ($r_J^G$) in the global coordinate system. At any time, DGPS provides the position of the receiver along the track, $r_{GPS}^G$, in the global coordinate system. Thus, the origin of the platform coordinate system is

$r_P^G = r_{GPS}^G + M_P^G \Delta r_{GPS}^P$.   (2)

The positions of the two exposure stations are

$r_{C1}^G = r_P^G + M_P^G \Delta r_{C1}^P$, $\quad r_{C2}^G = r_P^G + M_P^G \Delta r_{C2}^P$.   (3)

A stereo image pair from the two cameras forms a stereo model in the local sensor coordinate system defined by the relative positions and orientations of the two cameras. Any object J in the overlapping area of the stereo images can be measured in the image space, and its position in the local sensor coordinate system can be computed as $\Delta r_J^{C1}$ and $\Delta r_J^{C2}$. Finally, the position of point J in the global coordinate system is computed as

$r_J^G = r_{C1}^G + M_P^G M_{C1}^P \Delta r_J^{C1}$ or $r_J^G = r_{C2}^G + M_P^G M_{C2}^P \Delta r_J^{C2}$,   (4)

where $M_{C1}^P$ and $M_{C2}^P$ are rotation matrices between the two local sensor coordinate systems (Camera I and Camera II) and the platform coordinate system.
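As a minimal numerical sketch of Equations 2 through 4, the following Python fragment chains the DGPS position, the platform rotation matrix, and the calibrated offsets to georeference a point measured in the camera frame. All numerical values, the rotation convention, and the variable names are placeholder assumptions for illustration; they are not taken from any of the systems described in this paper.

```python
import numpy as np

def rotation_from_hpr(heading, pitch, roll):
    """Build M_P^G from heading, pitch, and roll (radians).
    The rotation order and axis convention used here are assumptions;
    an operational system follows its own sensor definitions."""
    ch, sh = np.cos(heading), np.sin(heading)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Rz = np.array([[ch, -sh, 0.0], [sh, ch, 0.0], [0.0, 0.0, 1.0]])
    Ry = np.array([[cp, 0.0, sp], [0.0, 1.0, 0.0], [-sp, 0.0, cp]])
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, cr, -sr], [0.0, sr, cr]])
    return Rz @ Ry @ Rx

# Placeholder observations and calibration values (not from the paper).
r_GPS_G  = np.array([500123.40, 5662345.10, 1048.70])  # DGPS position along the track
M_P_G    = rotation_from_hpr(np.radians(92.0), np.radians(1.2), np.radians(-0.5))
dr_GPS_P = np.array([0.00, -0.80, 1.90])   # calibrated GPS antenna offset, platform frame
dr_C1_P  = np.array([0.60,  1.20, 1.60])   # calibrated offset of camera 1 exposure center
M_C1_P   = np.eye(3)                       # calibrated camera-1-to-platform rotation
dr_J_C1  = np.array([4.30, 21.50, -1.10])  # vector to point J from the stereo measurement

r_P_G  = r_GPS_G + M_P_G @ dr_GPS_P             # Equation 2: platform origin
r_C1_G = r_P_G + M_P_G @ dr_C1_P                # Equation 3: exposure station of camera 1
r_J_G  = r_C1_G + M_P_G @ M_C1_P @ dr_J_C1      # Equation 4: object point, global frame
print(r_J_G)
```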


The solutions of Equations 1, 2, and 3 establish direct estimates of the exterior orientation parameters of the cameras based on GPS, INS, and calibration measurements. The solution of Equation 4 requires calculation of either $\Delta r_J^{C1}$ or $\Delta r_J^{C2}$. These vectors are the photogrammetric measurements which will be discussed in Equation 5. Additional mapping sensors may be integrated into the system. For instance, laser devices can be used to scan road surfaces and to acquire road surface data. Video cameras can be employed to record analog images and provide continuous video images.

In Equations 1 to 4, observations of DGPS and orientation sensors provide the dynamic position and orientation link between the platform and the global coordinate system, $r_{GPS}^G$ and $M_P^G$. In addition, with the help of the calibration parameters, positions of the objects are measured in the image space and transformed to the global coordinate system. In this way, no ground control points and terrestrial photogrammetric (strip) triangulations are required. Because the view fields of mapping sensors, cameras in this case, cover a much wider area, especially in the cross-track direction, the indirect measurement system usually has a higher efficiency of data collection for the same track line in comparison to a direct measurement system.

Mobile Mapping versus Real-Time Mapping
The objective of mobile mapping is to acquire data for deriving spatial and attribute information digitally and dynamically during the course of surveying. Data processing and production of GIS databases can be carried out in a post-processing procedure. On the other hand, real-time mapping requires that the products, such as maps or GIS databases, be delivered during the course of surveying. In many applications, the post-processing does not cause any problem. For example, it is important to reduce the field survey time when a utility survey along a highway is conducted. However, in some other applications, post-processing becomes an obstacle, when, for example, there is no previously acquired spatial data available and the mobile mapping system is employed to produce the spatial information used immediately. Such applications can be found in military situations and in emergency response systems. In the latter cases, real-time mobile mapping is unavoidable.

Currently, there are three major factors affecting real-time mobile mapping: (1) Differential GPS corrections are not available until observations at both the master station and rover stations are post-processed, so that the determination of the vehicle positions by DGPS cannot be performed. Transmission of the differential GPS corrections between the master station and rovers by radiobeacons has been experimented with by the U.S. Coast Guard (Leick, 1995). This makes real-time kinematic positioning possible. If high quality corrections for OTF (On-The-Fly) are transmitted, real-time mobile mapping will have one less obstacle. (2) Integration of data sets from different sensors sometimes requires accumulative data acquired over a period of time instead of in a moment. An extreme example is when INS data captured within a tunnel have to be integrated with GPS data at the two ends of the tunnel. (3) Although some simple features, such as the track of the vehicle, some marked targets, and road centerlines, can be extracted and their positions in the object space can be calculated in a relatively short time (He and Novak, 1992; Li, 1993; Li et al., 1996a), measurements of most features require either an interactive or semiautomatic procedure which cannot usually be performed in real time. Mobile mapping systems may provide some simplified real-time functions, for instance, for checking completeness of the data acquired. Data processing and measurements are conducted in post-processing sessions.

Extraction of Spatial Information

Derivation of Spatial Information
Spatial information from a direct measurement system is primarily obtained from the positions of the receiver. In this paper, methods for deriving spatial information from indirect measurement systems are discussed. If CCD cameras are employed (Figure 3), objects to be measured are identified in a pair of stereo images acquired by the cameras. The objective is to measure the objects in the images and to derive their positions in the global coordinate system. Suppose that a point J appears in the left and right images with its image points as j and j', respectively. Their coordinates in the images are measured as $(x_j, y_j)$ and $(x_j', y_j')$. According to photogrammetric principles (Moffitt and Mikhail, 1980), the three vectors $\Delta r_J^{C1}$, $\Delta r_J^{C2}$, and $B$ must lie on the same plane. This coplanarity condition can be expressed as a scalar triple product equation

$(\Delta r_J^{C1} \times \Delta r_J^{C2}) \cdot B = 0$.   (5)

This equation can be elaborated to a function of the observations $(x_j, y_j)$ and $(x_j', y_j')$, the calibration parameters, and the unknown coordinates of point J in the platform coordinate system. The calibration parameters include focal lengths of the cameras, the rotation matrices $M_{C1}^P$ and $M_{C2}^P$ from the local camera coordinate systems to the platform coordinate system, and the coordinates of the camera exposure centers in the platform coordinate system. As a result of Equation 5, $\Delta r_J^{C1}$ and $\Delta r_J^{C2}$ are computed and used in Equation 4 to calculate the position of point J in the global coordinate system. It should be noted that the parameters in and the result of Equation 5 are all relative to the platform. Only if the dynamic position and attitude of the platform in the global coordinate system are measured by DGPS and by orientation sensors can the relative position of point J be transformed to the global coordinate system using Equation 4.

Because the images are taken in sequence, one object point is usually covered by multiple images which may form stereo image pairs with combinations of images taken at different epochs. A simple way to improve the accuracy is to measure the same point in all possible images for calculating the coordinates of the point. This simultaneous approach was proven inefficient in comparison to a sequential estimation algorithm based on Givens transformations (Gruen and Kersten, 1995). Further research by Li et al. (1996a) led to an algorithm for selecting an optimal image pair from an image sequence for photogrammetric point intersection using Kalman filtering. This algorithm optimizes both precision and reliability. Image matching techniques automate the procedure for measuring point and line features. Although not all features can be measured automatically at the moment, productivity can be improved by the matching techniques, considering the sizes of the large databases to be generated. Points, arcs, and polygons are three primary spatial components of a GIS database. With points derived from the image sequences as discussed above, the other two components can easily be generated by an extended point measurement procedure (Li et al., 1994a; Li et al., 1994b).

Derivation of Attribute Information
Attribute information is another important data category in a GIS database. It is associated with spatial features to describe additional non-spatial characteristics of objects. Some systems collect attribute data, such as recorded voices indicating the wildlife species found in a location, or video images of the road surfaces, together with the spatial information acquired, either as points for the species or as "continuous" lines of the road surfaces in video images.


Table 1. Contributions of Individual Errors and the Overall Accuracy of the Systems

| Error Type | Error | Conditions |
| --- | --- | --- |
| Control data: platform positions from DGPS (dynamic) | 6 to 20 cm | Speed of the vehicle < 70 km/h |
| Control data: angular parameters from INS | 1 to 6 arc minutes | Strapdown INS |
| Control data: angular parameters from gyroscopes | 0.75 to 2 degrees | Vertical, heading, and rate gyroscopes |
| Control data: angular parameters from multiple GPS antennas | 0.3 to 0.5 degrees | Average baseline between antennas of 1.7 to 1.0 m |
| Control data: synchronization | 1 to 2 msec | 1.9 to 3.9 cm at a vehicle speed of 70 km/h |
| Mapping sensor data: calibration | 2 to 5 mm | Well-established control field |
| Mapping sensor data: target positions in images | 0.05 pixel | Image metrology techniques |
| Mapping sensor data: general objects in images | 0.5 pixel | Zoom function |
| Mapping sensor data: lens distortion | 1.5 pixels | If not corrected |
| Overall system: points in the object space | 30 to 50 cm | DGPS, INS, and medium level cameras |

In an indirect measurement system, interactive interpretation of the image sequences is mostly used to derive the attributes. This requires that the mobile mapping software be designed and implemented in such a way that the attributes can be associated with the spatial features during the measurement procedure (Li et al., 1994b).

An ideal way to generate the attribute information is to extract the information from the image sequences automatically (He and Novak, 1992; Li et al., 1996b; Tao et al., 1996). Currently, there are some very limited cases where this automation may be performed successfully. First, text information in images, for example, street names on road signs and the text of traffic signs, can be extracted by pattern recognition methods. Depending on the orientation of the sign and of the cameras, the projection of the text in the image may be distorted or even invisible. Robust text recognition algorithms which consider the field conditions should be developed. Second, some specific types of objects, such as manholes and fire hydrants, have a symmetric geometry in the horizontal plane and their projections in the images are less distorted. Because the geometry of these objects is known, artificial images can be generated. These images can then be matched with the object features in the real image sequences in order to automatically extract the attribute information. Finally, a general classification scheme, such as that used in the classification of satellite remote sensing images, is not appropriate for the mobile mapping images, because of the rapidly changing scales of the objects in the images and because of the extremely detailed and diverse information involved. The number of object classes in such an image is much higher than that of satellite imagery.

A semiautomatic procedure is practical for GIS database production. For each spatial feature, its geometry can be measured either interactively or automatically using image matching techniques. The attribute is then determined automatically if possible. Otherwise, an interpretation by the operator is necessary. At the current stage, this is still the most effective and reliable method for attribute derivation from the mobile mapping data.

Accuracy
The accuracy of the coordinates of a point in the global coordinate system derived from the mobile mapping data is one of the important quality measures. The errors involved are in turn contributed by those from the system components. The following discussion focuses on three error categories in mobile mapping systems (Table 1): control data errors, mapping sensor errors, and the overall system error, based on a typical configuration with a land vehicle, DGPS, INS, and CCD cameras.

Control data refer to observations from the navigation sensors used to derive the position and orientation parameters of other mapping sensors such as cameras. Thus, the accuracy of these observations directly affects the estimated positions and attitudes of the mapping sensors. Furthermore, it influences the point positions derived from the mapping sensor data. According to recent reports by Pottle (1995), El-Sheimy (1996), and Bossler and Toth (1996), if DGPS and post-processing techniques are employed, the accuracy of the dynamic platform positions can be determined in the range of 6 to 20 cm. The speed of the vehicle is controlled under 70 km/h. Angular parameters derived from an affordable strapdown INS are accurate to within 1 to 6 arc minutes if the time-dependent drift is corrected (Schwarz and El-Sheimy, 1996; Hock et al., 1995). This angular error would result in a linear error of 1.5 to 8.7 cm at a distance of 50 m from the system. Inexpensive approaches using gyroscopes provide the angular parameters with an accuracy of 0.75 to 2.0 degrees (Cosandier et al., 1993). Experiments calculating the attitude information from data acquired by multiple GPS receivers on a vehicle gave an accuracy of 0.3 to 0.5 degrees with an average baseline between the receivers of 1.7 to 1.0 m (Lu, 1995; Sun, 1996). It is obvious that, at the present time, the angular parameters calculated from the observations of gyroscopes and multiple GPS receivers are not satisfactory for high accuracy control of the mapping sensors. The synchronization of the sensors gives an error of about 1 to 2 msec, which translates to a linear error of 1.9 to 3.9 cm at a vehicle speed of 70 km/h (Schwarz and El-Sheimy, 1996).
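The linear effects quoted above follow from simple geometry: an angular error δ (in radians) maps to roughly D·δ of positional error at a distance D, and a timing error δt maps to v·δt of along-track error at vehicle speed v. The short sketch below reproduces the figures given in the text; combining contributions by root-sum-square at the end is an added assumption for illustration, not a procedure taken from the paper.

```python
import math

D = 50.0                   # object distance from the system (m)
v = 70.0 * 1000 / 3600     # vehicle speed, 70 km/h in m/s

# Angular error of the strapdown INS: 1 to 6 arc minutes.
for arcmin in (1.0, 6.0):
    delta = math.radians(arcmin / 60.0)
    print(f"INS error {arcmin} arcmin -> {D * delta * 100:.1f} cm at {D:.0f} m")
# -> about 1.5 cm and 8.7 cm, matching the values quoted above.

# Synchronization error of 1 to 2 ms at 70 km/h.
for dt in (0.001, 0.002):
    print(f"sync error {dt * 1000:.0f} ms -> {v * dt * 100:.1f} cm along track")
# -> about 1.9 cm and 3.9 cm.

# A rough overall budget could combine independent contributions by
# root-sum-square (an assumption for illustration only): DGPS, INS angle,
# synchronization, and uncorrected lens distortion at 50 m.
contributions_cm = [20.0, D * math.radians(6 / 60) * 100, v * 0.002 * 100, 12.7]
print(f"RSS of example contributions: {math.hypot(*contributions_cm):.1f} cm")
```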

Errors contained in the mapping sensor data include system calibration errors, image measurement errors, and lens distortion errors (if not corrected) in the photogrammetric processing (Table 1). After a system calibration with a well-established control field and an appropriate procedure (El-Sheimy and Schwarz, 1993), the calibration errors can be as small as 2 to 5 mm. In the image space, locations of objects with clear image features and a symmetric geometry, such as marked targets used in system calibrations, can be determined at a subpixel level, for example 0.05 pixel, using image metrology techniques (Cosandier and Chapman, 1992; Li, 1993). Measurements of general objects can be performed with an accuracy of 0.5 pixel if a zoom function is employed. The lens distortion is usually modeled and corrected in the photogrammetric processing. If not corrected, this error may reach 1.5 pixels in the image space.

The errors caused by imprecise image measurements were reported to be 4 cm for objects within 50 m of the cameras, while the effect of the lens distortion reaches 12.7 cm if not corrected (Schwarz and El-Sheimy, 1996). An estimate of the overall system accuracy depends on the accuracy of the individual components of the system. Taking the positional accuracy of an object point in the object space as the measure of the overall system accuracy, some systems have reached 30 to 50 cm (both horizontal and vertical) if the objects are 30 to 60 m from the system (Schwarz and El-Sheimy, 1996; Bossler and Toth, 1996; Hock et al., 1995). This overall system accuracy is important for defining application areas of the system, and also for evaluating the cost-effectiveness of the system configuration.


In practice, the images may have a sufficient resolution for identifying relatively small objects such as fire hydrants. The positions can be determined to an accuracy of 30 to 50 cm, for example. However, the positional errors may exceed the diameter of the fire hydrants. The important difference between resolution and accuracy should therefore be kept in mind. A relatively expensive, hence important, component in the system is the INS. Systems developed for applications not requiring highly accurate spatial data may be able to use alternatives such as gyroscopes or multiple GPS receivers to reduce the system costs.

Application Considerations

Highway Application
Because it employs dynamic data acquisition, mobile mapping technology can be directly used in highway related applications, such as traffic sign inventory, monitoring of speed and parking violations, generation of road network databases, and road surface condition inspection when laser technology is jointly applied. There are several advantages to using mobile mapping technology in highway applications. The data acquisition is performed without blocking traffic, assuming that traffic velocity is less than, for example, 70 km/h. The information obtained is diverse: a single collection can be used for multiple purposes. Moreover, because data can be both collected and processed in a short period of time, frequent and repetitive road surveys and database updating are both possible and affordable.

Objects along roads and highways, for example, traffic signs, light poles, bridges, road centerlines, etc., are usually represented as clear image features in the image sequences. Therefore, they can be identified easily and measured interactively to build a spatial database. In order to extract a road centerline, an image sequence of the road is needed. Each image pair supplies a segment of the centerline. The successive road segments from the sequence are measured continuously and combined to produce the entire road centerline.
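Each stereo pair contributes only a short piece of the centerline, so the pieces have to be chained into one polyline. The following is a minimal sketch of that chaining; the overlap rule (skip a new vertex that lies closer than a threshold to the last accepted vertex) is an assumed simplification, not the procedure used by any specific system.

```python
import numpy as np

def chain_centerline(segments, min_spacing=0.2):
    """Combine successive per-stereo-pair centerline segments into one polyline.

    segments    : list of (Ni, 3) arrays of 3D points, in driving order
    min_spacing : vertices closer than this (metres) to the previously accepted
                  vertex are treated as overlap and skipped (assumed rule)
    """
    centerline = []
    for seg in segments:
        for p in np.asarray(seg, dtype=float):
            if centerline and np.linalg.norm(p - centerline[-1]) < min_spacing:
                continue            # overlap with the previous segment
            centerline.append(p)
    return np.array(centerline)

# Two hypothetical overlapping segments from consecutive image pairs.
seg_k  = np.array([[0.0, 0.0, 0.0], [5.0, 0.1, 0.0], [10.0, 0.2, 0.0]])
seg_k1 = np.array([[10.0, 0.2, 0.0], [15.0, 0.3, 0.0], [20.0, 0.5, 0.0]])
print(chain_centerline([seg_k, seg_k1]))
```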

Road centerlines and separating lines between lanes are painted in white or yellow. They have solid or dashed line patterns. Depending on road surface and weather conditions, the quality of the lines in the image sequence varies. Manual extraction of the lines by the operator is relatively time-consuming, although it is superior to other traditional surveying methods. Automation of this procedure has been researched. From the image sequence, centerline features are enhanced and extracted automatically. The corresponding 3D centerline segments are then generated in the object space (He and Novak, 1992). Another approach defines a 3D centerline model in the object space as a physical Snake model (Tao et al., 1996). The Snake model is optimized to adjust the centerline shape using image features of the centerline as internal constraints, and using geometric conditions derived from other sensors of the system (GPS and INS) as external constraints. This method for automatic centerline extraction and reconstruction is reliable for different road conditions and line patterns. On the other hand, road curb lines, as opposed to painted centerlines, are projected onto the images based on their geometric shapes and material types. Therefore, curb lines can be more difficult to extract and identify automatically. Currently, curb line databases are built using semiautomatic approaches. The system provides the user with projected curb lines in a stereo image pair. The user is asked to confirm whether the line pair suggested by the system is correct. Sometimes, because of image quality, the system cannot provide the data. In this case, the operator is asked to digitize the curb segments manually.

It is often required, in road surface condition inspection, that road surface cracks be located and measured. If the system is equipped with laser sensors, the depths between the sensors and the road surface are available as relative measurements. If the control data from the GPS and INS are available, the depth data derived from the laser sensors can be integrated into the global reference system and used to generate a digital road surface model (DRSM). The DRSM provides a geometric description of the road surface. This can be used to detect the locations, sizes, and shapes of the road surface cracks. If the digital image sequence of the same road is processed and georeferenced, each grid point of the DRSM can be assigned a gray scale from the corresponding pixel of the image sequence. In this way, the images appear to be draped onto the DRSM, and a 3D road surface image is generated. By observing this 3D image using 3D visualization tools, road surface conditions, including cracks, can be illustrated more efficiently. On the other hand, the DRSM can also be built from the stereo image sequence along the road. Corresponding road surface points appearing in a stereo pair can be measured by digital image matching techniques. Because one point may appear in more than one successive stereo pair, a point covered by an obstacle, such as a vehicle, in one stereo pair may be visible in the preceding or subsequent pairs. A gridding procedure is applied to calculate the grid points of the DRSM. In comparison to the digital matching method using stereo image sequences, laser sensors generate more reliable surface models, because the depths are always measured directly. However, in the model built by the matching-based method, there may be areas with points that are not measured, but interpolated, because the areas are invisible or do not have sufficient image texture for image matching.
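The gridding step mentioned above can be sketched as binning georeferenced surface points into a regular grid and averaging per cell. This is a hypothetical illustration only; the cell size and the treatment of empty cells are assumptions, not specifications from the paper.

```python
import numpy as np

def grid_road_surface(points, cell=0.10):
    """Bin georeferenced (x, y, z) road-surface points into a regular grid.

    points : (N, 3) array of x, y, z in a common mapping frame
    cell   : grid spacing in metres (assumed value)
    Returns the grid of mean heights; cells without observations are NaN and
    would be filled by interpolation in a complete DRSM workflow.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    ix = ((x - x.min()) / cell).astype(int)
    iy = ((y - y.min()) / cell).astype(int)
    nx, ny = ix.max() + 1, iy.max() + 1

    height_sum = np.zeros((nx, ny))
    count = np.zeros((nx, ny))
    np.add.at(height_sum, (ix, iy), z)   # accumulate heights per cell
    np.add.at(count, (ix, iy), 1)        # count observations per cell

    with np.errstate(invalid="ignore"):
        drsm = height_sum / count        # mean height per cell
    drsm[count == 0] = np.nan            # empty cells left for interpolation
    return drsm

# Example with synthetic laser returns (placeholder data).
rng = np.random.default_rng(0)
pts = np.column_stack([rng.uniform(0, 5, 10000),
                       rng.uniform(0, 50, 10000),
                       rng.normal(0.0, 0.01, 10000)])
print(grid_road_surface(pts).shape)
```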

Application to Facility Mapping
Another useful application of mobile mapping technology is in the area of facility mapping. High voltage power transmission lines can be photographed by the mobile mapping system, and their positions can be measured from the image sequences. There are a number of important parameters which can be calculated from the mobile mapping data, for instance, the positions of the poles and/or towers, the positions of the insulators on each line which support the suspending transmission line segments, and the lowest points of the suspending line segments. In order to capture these desired transmission line features, the cameras have to be oriented somewhat upwards to aim at the towers and line segments. Consequently, a large portion of the resulting images contains the sky as background. This makes it easy to distinguish the targets from the background in the images because of the high contrast between them. For the same reason, automatic extraction of the transmission line segments in the image space is also possible. However, if epipolar geometry (Moffitt and Mikhail, 1980; Li, 1996) is used to determine the line segments in the object space, the 3D points along the line segments to be measured are dependent on the intersections between the epipolar lines and the line segments in the images (Figure 4). In case I, the two cameras are forward looking, with an upper-left angle. For the kth image pair, the epipolar lines formed by the left and right images of the pair are almost parallel to the transmission lines. The intersections thus obtained are of low accuracy, considering the effect of both the intersection accuracy and the errors of the epipolar lines caused by imperfect orientation parameters. This will consequently affect the accuracy of the 3D line segments in the object space derived thereby. There are four options for solving this problem.



[Figure 4. Camera orientation and epipolar geometry for determination of transmission lines.]

(1) In case I of Figure 4, if an appropriate overlapping area is available, subsequent images, for example the left images of epochs k and k+1, are used to form a stereo pair so that the epipolar line in the image of epoch k+1 has a better intersecting angle with the transmission line. (2) In case II of Figure 4, the cameras are oriented toward the left side. This will also result in effective intersecting angles. (3) A hardware-based stereo viewing system can be used, for example, a polarized system with stereo glasses and a special monitor. In this case, a 3D stereo model can be reconstructed. The operator is then able to view the transmission line in 3D and measure points along the line three-dimensionally. (4) Better results can also be achieved by combining data acquired by laser ranging from a helicopter, if available (Krabill, 1989).
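The geometric weakness described for case I can be checked numerically: when an epipolar line in the image is nearly parallel to the projected transmission line, their intersection is poorly conditioned. The sketch below, with hypothetical 2D line directions and an assumed angle threshold, simply computes the intersection angle and flags weak configurations; it is not code from any of the cited systems.

```python
import numpy as np

def intersection_angle(dir_epipolar, dir_line):
    """Angle in degrees between an epipolar line and an image line feature."""
    e = np.asarray(dir_epipolar, float)
    l = np.asarray(dir_line, float)
    cosang = abs(e @ l) / (np.linalg.norm(e) * np.linalg.norm(l))
    return float(np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))))

def is_well_conditioned(dir_epipolar, dir_line, min_angle_deg=15.0):
    """Reject near-parallel geometry; the 15 degree threshold is an assumption."""
    return intersection_angle(dir_epipolar, dir_line) >= min_angle_deg

# Case I: forward-looking cameras, epipolar line almost parallel to the wire.
print(intersection_angle([1.0, 0.02], [1.0, 0.10]))   # a few degrees -> weak
# Case II: cameras turned toward the side, better intersection geometry.
print(intersection_angle([1.0, 0.02], [0.2, 1.00]))   # large angle -> usable
```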

Some objects, such as fire hydrants and manholes, have symmetric geometric shapes and are less dependent on the camera positions and orientations. Thus, they appear similar in images. Based on the geometric shapes of the objects, simulated lighting sources, and material characteristics, artificial images can be generated. These artificial images can then be compared with the image features in the sequences. A matching procedure between the artificial images and real images is performed. In this way, both the geometric information and attributes of fire hydrants and manholes appearing in the image sequences can be extracted efficiently.
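One possible form of such a matching procedure is normalized cross-correlation between a rendered (artificial) template and image windows. The rendering step is omitted in the sketch below; the template, the synthetic search image, and the correlation threshold are placeholder assumptions, and the exhaustive search is only meant to illustrate the idea.

```python
import numpy as np

def normalized_cross_correlation(window, template):
    """Zero-mean normalized cross-correlation between two equal-sized patches."""
    w = window - window.mean()
    t = template - template.mean()
    denom = np.sqrt((w * w).sum() * (t * t).sum())
    return float((w * t).sum() / denom) if denom > 0 else 0.0

def match_template(image, template, threshold=0.8):
    """Exhaustively search for the best match of an artificial template.
    Returns (row, col, score) of the best window, or None below the threshold."""
    th, tw = template.shape
    best = (None, None, -1.0)
    for r in range(image.shape[0] - th + 1):
        for c in range(image.shape[1] - tw + 1):
            score = normalized_cross_correlation(image[r:r + th, c:c + tw], template)
            if score > best[2]:
                best = (r, c, score)
    return best if best[2] >= threshold else None

# Placeholder data: a synthetic "manhole" disc embedded in a noisy image.
template = np.zeros((15, 15))
yy, xx = np.mgrid[:15, :15]
template[(yy - 7) ** 2 + (xx - 7) ** 2 <= 36] = 1.0
image = np.random.default_rng(1).normal(0, 0.1, (60, 80))
image[20:35, 40:55] += template
print(match_template(image, template))
```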

A Perspective on Mobile Mapping
One of the criteria used to measure the quality of the system is the accuracy of the location of objects measured using the acquired data. This accuracy is strongly influenced by the control data collected by the navigation sensors and the quality of the mapping sensors. The components of mobile mapping systems function efficiently and reliably as individual and independent systems in various applications. A mobile mapping system requires that these components work cooperatively.

Another factor to consider is cost. A highly accurate strapdown INS is currently the most costly component in the system. Low level INS and gyroscopes may be used, but they do not supply the same quality angular parameters which can be employed, for example, in camera orientation. The high cost of the INS makes the entire system relatively expensive. If high accuracy is not essential, low cost gyroscopes can be used in order to make the system more affordable. The image resolution of the cameras has a great influence on the accuracy of the photogrammetric intersections of object points. This is especially critical because the physical baseline of the cameras is limited by the dimension of the land-based vehicle. For an object far from the cameras, the intersecting angle is small, and a one-pixel error in the image will result in a large along-track error.
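The along-track sensitivity follows from standard normal-case stereo geometry: with focal length f, baseline B, and parallax p, the distance is Z = f·B/p, so a parallax error of dp propagates to roughly dZ ≈ Z²/(f·B)·dp. This is the textbook relation, not a formula quoted in the paper, and the camera values below are assumed for illustration.

```python
# Depth-error sensitivity for a normal-case stereo pair (illustrative values only).
focal_px  = 1500.0   # focal length expressed in pixels (assumed)
baseline  = 2.0      # camera baseline limited by the vehicle width, metres (assumed)
pixel_err = 1.0      # one-pixel measurement error

for Z in (10.0, 30.0, 60.0):                      # object distance in metres
    dZ = Z * Z / (focal_px * baseline) * pixel_err
    print(f"Z = {Z:4.0f} m  ->  along-track error ~ {dZ * 100:5.1f} cm per pixel")
# The error grows with the square of the distance, which is why larger image
# formats (more pixels over the same field of view) and longer baselines help.
```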

High-resolution cameras up to 4096 pixels by 4096 pixels will be available but will be rather expensive. In addition, the images acquired by high-resolution cameras occupy a large amount of memory and storage space. If multiple high-resolution color cameras are employed, the problems of efficient image data transmission and storage during the data acquisition, efficient object measurement and attribute extraction, and data archiving will have to be addressed.

GPS should provide "continuous" and consistent data for absolute positional control. However, there are cases where GPS signals are blocked by high-rise buildings, tunnels, and other objects. An integration of the GPS signals at the last point before the signal blockage and the first point of signal recovery with INS trajectories bridges the gap in the control data. On the other hand, there are situations in which GPS signals are blocked for only a very short period, affecting one or two exposure stations. If not detected and corrected, these errors will distort both the positions of the camera exposure centers and the object points derived from the image sequences. Therefore, an automated systematic quality checking procedure should be implemented to examine the GPS and INS data. If such an inconsistency exists, positions of a point derived from different image pairs of different exposure stations will show a large difference. Otherwise, they should have the same position within a certain tolerance. This quality checking procedure guarantees the quality of the control data used for camera orientation.
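Such a quality check can be sketched as a simple consistency test: the same object point intersected from image pairs at different exposure stations should agree within a tolerance, and a large discrepancy flags a possible undetected GPS/INS problem at one of the stations. The tolerance value and the data structures below are assumptions for illustration.

```python
import numpy as np

def check_point_consistency(solutions, tolerance=0.5):
    """Compare positions of one object point derived from different image pairs.

    solutions : list of (pair_id, xyz) where xyz is the intersected global position
    tolerance : maximum accepted deviation in metres (assumed value)
    Returns (ok, max_deviation, suspects); suspects lists the pairs whose
    solutions deviate from the median position by more than the tolerance.
    """
    ids = [pid for pid, _ in solutions]
    xyz = np.array([p for _, p in solutions], dtype=float)
    median = np.median(xyz, axis=0)                  # robust reference position
    dev = np.linalg.norm(xyz - median, axis=1)
    suspects = [ids[i] for i in np.argsort(dev)[::-1] if dev[i] > tolerance]
    return bool(dev.max() <= tolerance), float(dev.max()), suspects

# Hypothetical intersections of the same traffic sign from three image pairs;
# the third pair simulates a short, uncorrected GPS signal blockage.
solutions = [("pair_12", [500131.2, 5662350.4, 1049.1]),
             ("pair_23", [500131.3, 5662350.3, 1049.2]),
             ("pair_34", [500132.9, 5662351.8, 1049.0])]
print(check_point_consistency(solutions))
```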

To overcome the difficulties and to improve the mobile mapping technology, the following further research and development should be conducted:

- If a secondary local navigation network were available in areas where GPS signals are blocked, vehicle positioning and sensor control would be more reliable anytime and anywhere.
- Further improvement in downloading the mapping data acquired from the vehicle by using wireless communication technology should be explored.
- Larger format CCD chips should be employed to increase the resolution of the images and, consequently, the accuracy of the photogrammetrically intersected object points. This is especially important if a similar configuration is used on helicopters or aircraft where one pixel represents a much larger area on the ground.
- More efficient image processing and sequential estimation algorithms should be researched and developed in order to make good use of the large amount of high-resolution data and the characteristics of sequential images.
- Enhancement of the automation of object recognition and attribute extraction would improve the efficiency of GIS database generation from the georeferenced image sequences. This would also contribute to the reduction of the significant difference between the speed of mobile mapping data acquisition and that of the subsequent data processing.
- An alternative to bridging a period of GPS outage using INS is to perform a terrestrial photogrammetric triangulation. A strip of overlapping photos is relatively oriented to form a strip model covering the GPS gap. The GPS data available at the two ends of the strip can be used to orient the strip model absolutely. Automation of this labor-intensive and time-consuming procedure is deemed necessary. Important issues involved in the automation include automatic tie point selection, conjugate tie point searching, and absolute strip orientation using GPS data. Ideally, depending on the degree of the automation, the terrestrial photogrammetric triangulation may be performed using all images.

Efforts in mobile mapping research and development have been made by researchers and engineers over the last decade. The technology has evolved to such a degree as to allow mapping and GIS industries to use it for high flexibility in data acquisition. More information can be gained in less time, and with less effort, while achieving high productivity. In addition, a successful extension of this technology to helicopter-borne and airborne (Bossler, 1996) systems will provide us with a powerful tool for small- and medium-scale spatial data acquisition and database updating.


Acknowledgment
Research support from the Natural Sciences and Engineering Research Council of Canada (NSERC) is acknowledged. Reviewers' and editor's comments are appreciated.

References

Ackermann, F., and H. Schade, 1993. Application of GPS for Aerial Triangulation, Photogrammetric Engineering & Remote Sensing, 59(11):1625-1632.

Bossler, J.D., 1996. Airborne Integrated Mapping System, GIM, July, pp. 32-35.

Bossler, J.D., C.C. Goad, P.C. Johnson, and K. Novak, 1991. GPS and GIS Map the Nation's Highways, GeoInfo Systems, March, pp. 27-37.

Bossler, J.D., and C.K. Toth, 1996. Feature Positioning Accuracy in Mobile Mapping: Results Obtained by the GPSVan™, Int. Archives of Photogrammetry and Remote Sensing, 31(Part B2):139-142.

Cannon, M.E., 1994. The Use of GPS for GIS Georeferencing: Status and Applications, Proceedings of the Canadian Conference on GIS, pp. 654-663.

Cavanaugh, D.B., 1995. Calibration of the APALS Database, Proceedings of 1995 Mobile Mapping Symposium, ASPRS, pp. 1-8.

Chapman, M., and L. Baker, 1996. High-Speed Video Road Survey in Singapore, GIS Asia Pacific, April, pp. 30-32.

Colomina, I., 1989. Combined Adjustment of Photogrammetric and GPS Data, 42nd Photogrammetric Week, Stuttgart, Germany, pp. 313-325.

Cosandier, D., and M.A. Chapman, 1992. High Precision Target Location for Industrial Metrology, Videometrics, SPIE, 1820:111-122.

Cosandier, D., M.A. Chapman, and T. Ivanco, 1993. Low Cost Attitude Systems for Airborne Remote Sensing and Photogrammetry, The Canadian Conference on GIS, Ottawa, March.

Da, R., and G. Dedes, 1995. Nonlinear Smoothing of Dead Reckoning Data with GPS Measurements, Proceedings of 1995 Mobile Mapping Symposium, ASPRS, pp. 173-182.

El-Sheimy, N., 1996. A Mobile Multi-Sensor System for GIS Applica- tions in Urban Centers, Int. Archives of Photogrammetry and Re- mote Sensing, 31(Part B2):95-100.

El-Sheimy, N., and K.-P. Schwarz, 1993. Kinematic Positioning in Three Dimension Using CCD Technology, Proceedings of IEEE Vehicle Navigation & Information Systems (VNIS), pp. 472-475.

Gruen, A., and T.P. Kersten, 1995. Sequential Estimation in Robot Vision, Photogrammetric Engineering & Remote Sensing, 61(1):75-82.

He, G., 1996. Design of a Mobile Mapping System for GIS Data Collection, Int. Archives of Photogrammetry and Remote Sensing, 31(Part B2):154-159.

He, G., and K. Novak, 1992. Automatic Analysis of Highway Features from Digital Stereo-Images, Int. Archives of Photogrammetry and Remote Sensing, 29(Part B3):119-124.

Hein, G.W., 1989. Precise Kinematic GPS/INS Positioning: A Discussion on the Applications in Aerophotogrammetry, 42nd Photogrammetric Week, Stuttgart, Germany, pp. 261-282.

Hock, C., W. Caspary, H. Heister, J. Klemm, and H. Sternberg, 1995. Ar- chitecture and Design of the Kinematic Survey System KiSS, Pro- ceedings of the 3rd Int. Workshop on High Precision Navigation, Stuttgart, April, pp. 569-576.

Jacobsen, K., 1993. Correction of GPS Antenna Position for Combined Block Adjustment, Proceedings of ASPRS/ACSM, pp. 152-158.

Kimura S., H. Kano, and T. Kanade, 1995. CMU Video - Rate Stereo Machine, Proceedings of 1995 Mobile Mapping Symposium, ASPRS, pp. 9-18.

Krabill, W.B., 1989. GPS Applications to Laser Profiling and Laser Scan- ning for Digital Terrain Models, 42nd Photogrammetric Week, Stutt- gart, Germany, pp. 329-340.

Krakiwsky, E.J., and R.L. French, 1995. Japan in the Driver's Seat, GPS World, October, pp. 53-60.

Lapine, L.A., 1991. Analytical Calibration of the Airborne Photogrammetric System Using A Priori Knowledge of the Exposure Station Obtained by Kinematic GPS Techniques, Report No. 411, Department of Geodetic Science and Surveying, The Ohio State University.

Leick, A., 1995. GPS Satellite Surveying, Second Edition, John Wiley & Sons, Inc., New York.

Li, R., 1993. Building Octree Representations of 3D Objects in CAD/CAM by Digital Image Matching Techniques, Photogrammetric Engineering & Remote Sensing, 58(12):1685-1691.

Li, R., 1996. Design and Implementation of a Photogrammetric Geo-Calculator in a Windows Environment, Photogrammetric Engineering & Remote Sensing, 62(1):85-88.

Li, R., K.-P. Schwarz, M.A. Chapman, and M. Gravel, 1994a. Integrated GPS and Related Technologies for Rapid Data Acquisition, GIS World, April, pp. 41-43.

Li, R., M.A. Chapman, L. Qian, Y. Xin, and K.-P. Schwarz, 1994b. Rapid GIS Database Generation Using GPS/INS Controlled CCD Cameras, Proceedings of the Canadian Conference on GIS, pp. 465-477.

Li, R., M.A. Chapman, L. Qian, Y. Xin, and C. Tao, 1996a. Mobile Map- ping for 3D GIS Data Acquisition, Int. Archives of Photogrammetry and Remote Sensing, 31(Part B2):232-237.

Li, R., M.A. Chapman, and W. Zou, 1996b. Optimal Acquisition of 3D Object Coordinates from Stereoscopic Image Sequences, Int. Archives of Photogrammetry and Remote Sensing, 31(Part B3):44%52.

Lu, G., 1995. Development of a GPS Multi-Antenna System for Attitude Determination, Ph.D. dissertation, Department of Geomatics Engi- neering, The University of Calgary, 185 p.

Merchant, D.C., 1994. Airborne GPS-Photogrammetry for Transportation Systems, Proceedings of ASPRS/ACSM, pp. 392-395.

Moffitt, F.H., and E.M. Mikhail, 1980. Photogrammetry, Harper & Row, Publishers, Inc., New York.

Novak, K., 1995. Mobile Mapping Technology for GIS Data Collection, Photogrammetric Engineering & Remote Sensing, 61(5):493-501.

Pottle, D., 1995. New Mobile Mapping System Speeds Data Acquisition, GIM, 9(9):51-53.

Qian, L., 1996. Building 3D GIS by Object Orientation, Master's thesis, The University of Calgary, Canada, 84 p.

Scherzinger, B.M., J.J. Hutton, and D.B. Reid, 1995. The Position and Orientation System (POS) for Land, Marine and Airborne Mapping, Proceedings of 1995 Mobile Mapping Symposium, ASPRS, pp. 75- 84.

Schwarz, K.-P., H.E. Martell, N. El-Sheimy, R. Li, M.A. Chapman, and D. Cosandier, 1993a. VISAT - A Mobile Highway Survey System of High Accuracy, Proceedings of the IEEE Vehicle Navigation and Information Systems Conference, 12-15 October, Ottawa, pp. 476-481.

Schwarz, K.-P., M.A. Chapman, M.E. Cannon, and P. Gong, 1993b. An Integrated INS/GPS Approach to the Georeferencing of Remotely Sensed Data, Photogrammetric Engineering & Remote Sensing, 59(11):1667-1674.

Schwarz, K.-P., and N. El-Sheimy, 1996. Kinematic Multi-Sensor Sys- tems for Close Range Digital Imaging, Int. Archives of Photogram- metry and Remote Sensing, 31(Part B3):774-785.

Shaw, B., and J.R. Allen, 1995. Analysis of a Dynamic Shoreline at Sandy Hook, New Jersey Using a Geographic Information System, Proceedings of ASPRS/ACSM, pp. 382-391.

Sun, H., 1996. Integration of Multi-Antenna GPS System with CCD Camera for Rapid GIS Data Acquisition, ENGO 661 course report, Department of Geomatics Engineering, The University of Calgary, 14 p.

Tao, C., R. Li, and M.A. Chapman, 1996. A Model Driven Approach for Extraction of Road Line Features Using Stereo Image Sequences from a Mobile Mapping System, ASPRS/ACSM Proceedings, April, Baltimore, Maryland.

van der Vegt, H.J.W., D. Boswinkel, and R. Witmer, 1988. Utilization of GPS in Large Scale Photogrammetry, International Archives of Photogrammetry, 27-B11(Comm. III):413-429.

Wei, M., and K.-P. Schwarz, 1990. A Strapdown Inertial Algorithm Using an Earth-Fixed Cartesian Frame, Navigation, 37(2):153-167.

Xin, Y., 1995. Automating Geostation: A Softcopy System, Master's the- sis, Department of Geomatics Engineering, The University of Cal- gary, Canada.

(Received 20 August 1996; accepted 05 November 1996; revised 09 December 1996)
