DATABASES, DESIGN, AND ORGANISATION

Databases
GIS databases
Database design
Database management systems

Databases

A database is a collection of information that's related to a particular subject or purpose, such as tracking residential population or maintaining a music collection. If your database isn't stored on a computer, or only parts of it are, you may be tracking information from a variety of sources that you're having to coordinate and organize yourself.

Within a database, divide your data into separate storage containers called tables; view, add, and update table data by using online forms; find and retrieve just the data you want by using queries; and analyse or print data in a specific layout by using reports. Allow users to view, update, or analyse the database's data from the Internet or an intranet by creating data access pages.

To store your data, create one table for each type of information that you track. To bring the data from multiple tables together in a query, form, report, or data access page, define relationships between the tables.

To find and retrieve just the data that meets conditions that you specify, including data from multiple tables, create a query. A query can also update or delete multiple records at the same time, and perform predefined or custom calculations on your data. To view, enter, and change data directly in a table easily, create a form.
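The query behaviour described above can be sketched with a small example. The following Python snippet uses the standard sqlite3 module; the table and field names (village, population) are illustrative assumptions, not part of the original material.

import sqlite3

# A minimal sketch of "find just the data you want" and "update multiple
# records at once" using queries; table and column names are hypothetical.
con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE village (code INTEGER PRIMARY KEY, name TEXT, population INTEGER)")
cur.executemany(
    "INSERT INTO village VALUES (?, ?, ?)",
    [(1, "Alpha", 1200), (2, "Beta", 5400), (3, "Gamma", 300)],
)

# Retrieve only the records that meet a condition.
cur.execute("SELECT name, population FROM village WHERE population > ?", (1000,))
print(cur.fetchall())            # [('Alpha', 1200), ('Beta', 5400)]

# Update many records in a single query and perform a calculation on them.
cur.execute("UPDATE village SET population = population + 100 WHERE population < ?", (2000,))
con.commit()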

GIS databases

The issue of designing and organising a GIS database has to be considered in its entirety and needs a conceptual understanding of different disciplines - cartography and map-making, geography, GIS, databases and so on. Here, an overview of the design procedure that could be adopted is given and the organisational issues are addressed. The issues of updating the database and of linking the GIS database to other databases are also addressed.

The Geographical Information System (GIS) has two distinct utilisation capabilities - the first pertaining to querying and obtaining information and the second pertaining to integrated analytical modelling. The importance of the GIS database stems from the fact that the data elements of the database are closely interrelated and thus need to be structured for easy integration and retrieval. The GIS database also has to cater to the different needs of applications. In general, a proper database organisation needs to ensure the following [Healey, 1991; NCGIA, 1990]:

a)      Flexibility in the design to adapt to the needs of different users.

b)      A controlled and standardised approach to data input and updation.

c)      A system of validation checks to maintain the integrity and consistency of the data elements.

d)      A level of security for minimising damage to the data.

e)      Minimising redundancy in data storage.

THE DATA IN GIS

Broadly categorised, the basic data for the GIS database has two components:

a) Spatial data - consisting of maps which have been prepared either by field surveys or by the interpretation of Remotely Sensed (RS) data. Some examples of these maps are the soil survey map, geological map, landuse map from RS data, village map etc. Most of these maps are available in analog form and it is only of late that some map information is available directly in digital format. Thus, the incorporation of these maps into a GIS depends upon whether the map is in analog or digital format - each of which has to be handled differently.

b) Non-spatial data - attributes that complement the spatial data and describe what is at a point, along a line or in a polygon, as well as socio-economic characteristics from the census and other sources. The attributes of a soil category could be the depth of soil, texture, erosion, drainage etc, and for a geological category could be the rock type, its age, major composition etc. The socio-economic characteristics could be demographic data, occupation data for a village or traffic volume data for roads in a city etc. The non-spatial data is mainly available as tabular records in analog form and needs to be converted into digital format for incorporation in GIS. However, the 1991 census data is now available in digital mode and thus direct incorporation into the GIS database is possible.
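As a rough illustration (not from the source) of how these two components sit side by side, the following Python sketch pairs a digitised spatial feature with its non-spatial attribute record; the field names and values are assumed for illustration.

# Hypothetical sketch: one spatial feature (a digitised soil polygon) and its
# non-spatial attribute record, linked by a shared feature code.
spatial_feature = {
    "feature_code": "SOIL_07",
    "geometry_type": "polygon",
    "vertices": [(512300.0, 3021450.0), (512600.0, 3021450.0),
                 (512600.0, 3021800.0), (512300.0, 3021800.0)],
}

attribute_record = {
    "feature_code": "SOIL_07",   # link back to the spatial feature
    "depth_cm": 90,
    "texture": "loam",
    "erosion": "moderate",
    "drainage": "well drained",
}

# The attribute record describes *what* is inside the polygon; the vertices
# describe *where* it is.
assert spatial_feature["feature_code"] == attribute_record["feature_code"]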

2.1 MEASUREMENT OF GEOGRAPHICAL DATA

The data in a GIS generally has a geographical connotation and thus carries the normal characteristics of geographical data. The measurement of the data pertains to the description of what the data represents - a naming, legending or classification function - and the calculation of its quantity - a counting, scaling or measurement function. Thus, scaling of the data is important while organising a GIS database. There are four scales by which data is represented [Brien, 1992]:

a) nominal, where the data is principally classified into mutually exclusive sets or levels based on relevant characteristics. The landuse information on a map representing the different categories of landuses is a nominal representation of data. The nominal scale is the commonly used measure for spatial data.

b) ordinal, which is a more sophisticated measurement as the classes are placed into some form of rank order based on a logical property of magnitude. A ground water prospect map showing different classes of prospects, categorised from "high prospect" to "low prospect", is an ordinal scale measurement.

c) interval, which is a continuous scale of measurement and is a crude representation of numeric data on a scale. Here, the class definition is a rank order where the differences between the ranks are quantified. The representation of population density in rank order is an example of interval data.

d) ratio, which is also a continuous scale but where the origin of the scale is real and not arbitrary. Further, the ratio scale represents the scaling between individual observations in the dataset and not just between datasets. An example of the ratio scale is when each value is normalised against a reference - generally an average, maximum or minimum.

The above four scales form a hierarchy: the ratio scale exhibits all the defining operations while those further down the hierarchy possess fewer. Thus, ratio data may be re-expressed as interval, ordinal or nominal data, but nominal data cannot be expressed as ratios. Further, the nominal and ordinal scales are used to define categorical data - which is the method of representing maps or spatial data - and the interval and ratio scales are used to define continuous data. TABLE 1 shows the characteristics of the scales.
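The hierarchy of scales can be made concrete with a short sketch (the class boundaries below are invented for illustration): ratio values can always be collapsed into ordinal classes and then into nominal labels, but the reverse is not possible.

# Ratio data: population density values (persons per sq km) - invented numbers.
density = {"village_a": 820.0, "village_b": 145.0, "village_c": 40.0}

def to_ordinal(value):
    # Collapse a ratio value into a ranked (ordinal) class; thresholds are assumed.
    if value >= 500:
        return 3          # high
    elif value >= 100:
        return 2          # medium
    return 1              # low

def to_nominal(rank):
    # Drop the ordering entirely: only category membership (nominal) remains.
    return {1: "low", 2: "medium", 3: "high"}[rank]

ordinal = {k: to_ordinal(v) for k, v in density.items()}
nominal = {k: to_nominal(r) for k, r in ordinal.items()}
print(ordinal)   # {'village_a': 3, 'village_b': 2, 'village_c': 1}
print(nominal)   # {'village_a': 'high', 'village_b': 'medium', 'village_c': 'low'}
# Going the other way (from nominal labels back to exact densities) is impossible,
# which is what the hierarchy of scales expresses.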

 

DATABASE DESIGN

GIS database design

Just as in any normal database activity, the GIS database also needs to be designed so as to cater to the needs of the application that proposes to utilise it. Apart from this the design would also:

a) provide a comprehensive framework of the database.

b) allow the database to be viewed in its entirety so that interaction and linkages between elements can be defined and evaluated.

c) permit identification of potential bottlenecks and problem areas so that design alternatives can be considered.

d) identify the essential and correct data and filter out irrelevant data.

e) define updation procedures so that newer data can be incorporated in future.

The design of the GIS database will include three major elements [NCGIA, 1990]:

a) Conceptual design, basically laying down the application requirements and specifying the end-utilisation of the database. The conceptual design is independent of hardware and software and could be a wish-list of utilisation goals.

b) Logical design, which is the specification of the database vis-a-vis a particular GIS package. This design sets out the logical structure of the database elements determined by the GIS package.

c) Physical design, which pertains to the hardware and software characteristics and requires consideration of file structure, memory and disk space, access and speed etc.

Each stage is interrelated with the next stage of the design and impacts the organisation in a major way. For example, if the concepts are clearly defined, the logical design is more easily done, and if the logical design is clear, the physical design is also easy. FIGURE 1 shows a framework of the design elements and their relationship. The success or failure of a GIS project is determined by the strength of the design, and a good deal of time must be allocated to the design activity. SAC has evolved a set of design guidelines for GIS database creation [Rao et al, 1990] which has been adopted for the implementation of GIS projects for the Bombay Metropolitan Region (BMR) [SAC and BMRDA, 1992]; regional planning at district level for Bharatpur [SAC and TCPO, 1992]; and wasteland development for Dungarpur [SAC, 1993]. Much of what is discussed here is based on the design guidelines evolved and the experience gained in the execution of these GIS projects. To illustrate the design aspects of a GIS database, examples from the design of the Bharatpur district database will be explained and referred to.

Designing a database

Good database design is the keystone to creating a database that does what you want it to do effectively, accurately, and efficiently.

Steps in designing a database

- Determine the purpose of your database

- Determine the tables you need

- Determine the fields you need

- Identify the field or fields with unique values in each record

- Determine the relationships between tables

3.1 GIS - CORE OF THE DATABASE

The Geographical Information System (GIS) package is the core of the GIS database as both spatial and non-spatial databases have to be handled. The GIS package offers efficient utilities for handling both these datasets and also allows for: the spatial database organisation; the non-spatial dataset organisation - mainly as attributes of the spatial elements; analysis and transformation for obtaining the required information; obtaining information in specific formats (cartographic quality outputs and reports); and the organisation of a user-friendly query system. Different types of GIS packages are available and the GIS database organisation depends on the GIS package that is to be utilised. Apart from the basic functionality of a GIS package, some of the crucial aspects that impact the GIS database organisation are as follows:

a) data structure of the GIS package. Most GIS packages adopt either a raster or vector structure, or their variants, internally to organise spatial data and represent real-world features.

b) attribute data management. Most of the GIS packages have embedded linkage to a Data Base Management System (DBMS) to manage the attribute data as tables.

c) a tiled concept of spatial data handling, which is fundamental to the way maps are represented in the real world. For example, sixteen SOI 1:50,000 map sheets make up one 1:250,000 sheet and sixteen 1:250,000 sheets make up one 1:1,000,000 sheet. This map tile graticule could also be represented in a GIS and some GIS packages allow tile-based data handling.

4.0 GIS DATABASE - CONCEPTUAL DESIGN

The Conceptual Design (CD) of a GIS database defines the application needs and the end objective of the database. Generally, this is a statement of end needs and is defined only fuzzily. It crystallises and evolves as the GIS database progresses, but within the framework of the broad statement of intentions. The clearer and better defined the CD, the easier the logical designing of the GIS database becomes. Some of the key issues that merit consideration for the CD are:

a) Specifying the ultimate use of the GIS database as a single statement. Some examples could be GIS DATABASE FOR URBAN PLANNING AT MICRO-LEVEL; GIS DATABASE FOR WATER SUPPLY MANAGEMENT; GIS DATABASE FOR WILDLIFE HABITAT MANAGEMENT. The important aspect here is the management of a particular resource, facility etc and thus the statement would generally include the management activity.

b) Level or detail of GIS database, which indicates the scale or level of the data contents of the database. A database designed for MICRO-LEVEL applications would require far more detail than one designed for MACRO-LEVEL applications. TABLE 1 illustrates the relationship between level and applications, which could be used as a guideline. In most cases the level or detail is implicit in the statement of end use.

c) Spatial elements of GIS database, which depend upon the end use and define the spatial datasets that will populate the database. The spatial elements are application specific and mainly consist of maps obtained from different sources.

The spatial elements could be categorised into primary elements, which are the ones that are digitised or entered into the database, and derived elements, which are derived from the primary elements by a GIS operation. For example, contours/elevation points could be primary elements but the slope that is derived from the contours/elevation points is a derived element. This distinction between primary and derived elements is useful in estimating the database creation load and also in scheduling GIS operations. TABLE 2 illustrates some of the primary elements and derived elements of a GIS database for district level planning applications.

d) Non-spatial elements of GIS database, which are the non-spatial datasets that would populate the GIS database. The actual definition of the non-spatial elements depends upon the end use and is application specific. For example, non-spatial data for forest applications would include data on tree species, age, production etc, and non-spatial data for urban applications would include ward-wise population, services and facilities data and so on. TABLE 3 shows some of the typical non-spatial data elements for a district planning application. Much of the non-spatial data comes from sources like the Census department, municipalities, resource survey agencies etc.

e) Source of spatial and non-spatial data is an important design issue as it brings about the details of the data collection activity and also helps identify the need for data generation. Most of the spatial data or thematic maps are available from the central and state survey agencies and non-spatial data is available as Census records or from the survey departments.

f) Age of data is an important design issue as it, in turn, defines the age of the database - making it either useful or useless for a particular end application. For example, if the application is to study the impact of pollution in an urban area then the pollution data needs to be current and the use of past data would render the impact analysis ineffective.

g) Spatial data domain, pertaining to the basic framework of the spatial datasets. Most of the spatial data sets follow the Survey of India (SOI) latitude-longitude coordinate system (as is given in the SOI maps) and thus, the spatial data base needs to follow the standards of the SOI mapsheets.

h) Impact of study area extent, defining the actual geographical area for which the GIS database is to be organised. Mostly, if the SOI framework is adopted, the coverage will be in non-overlapping SOI map sheets - the extent in some mapsheets being partial as against the full extent in others. The extent definition lays down the limits of the database and also helps in the logical design of the spatial elements.

i) Spatial registration framework - it is essential to adopt a standard registration procedure for the database. This is generally done by the use of registration points - also called TIC points in GIS. These registration points could be the corners of the graticule of the spatial domain - say the four corners of the SOI mapsheet at 1:50,000 scale - or control points that can be discerned in each spatial element that is to populate the database, such as road intersections, railline-road intersections, bridges etc. Unique identifiers for each registration point help in locating and registering the database. FIGURE 2 shows the scheme of registration points used for the Bharatpur project. This scheme is a "shared" method of points where each registration point is part of more than one mapsheet. This helps in the map joining/mosaicking and sheet-by-sheet data digitisation process; a sketch of how registration points can drive coordinate registration is given below.
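The forward-referenced sketch (not from the source) shows one common way registration points are used: a least-squares affine transformation is fitted between digitiser coordinates and the known ground coordinates of the TIC points, and then applied to every digitised vertex. The coordinate values below are invented.

import numpy as np

# Hypothetical TIC points: digitiser table coordinates and the corresponding
# known ground coordinates (e.g. the four corners of an SOI 1:50,000 sheet).
digitiser_xy = np.array([[1.2, 1.1], [28.7, 1.3], [28.9, 42.6], [1.0, 42.4]])
ground_xy    = np.array([[500000.0, 3000000.0], [527500.0, 3000000.0],
                         [527500.0, 3041500.0], [500000.0, 3041500.0]])

# Fit a 6-parameter affine transform by least squares: (x', y') = [x, y, 1] @ params.
ones = np.ones((len(digitiser_xy), 1))
design = np.hstack([digitiser_xy, ones])                      # shape (4, 3)
params, *_ = np.linalg.lstsq(design, ground_xy, rcond=None)   # shape (3, 2)

def register(points):
    """Map digitised vertices into the ground coordinate system."""
    pts = np.asarray(points, dtype=float)
    return np.hstack([pts, np.ones((len(pts), 1))]) @ params

print(register([[15.0, 20.0]]))   # a digitised vertex expressed in ground units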

j) Non-spatial data domain, specifying the levels of non-spatial data. The non-spatial datasets are available at different levels and it is essential to organise the non-spatial data at the lowest unit. The higher levels could then be abstracted from the lowest unit whenever required. For example, for the Bharatpur database, non-spatial data was available at different levels of administrative units - district, taluk and village. The village was the lowest unit at which the non-spatial data was available and thus the non-spatial data domain was considered at the village level.

5.0 GIS DATABASE - LOGICAL DESIGN

The Logical Design of the GIS database pertains to the logical definition of the database and is a more detailed organisation activity in a GIS. Most of the design issues are specific to GIS and thus the scope varies with the type and kind of GIS package to be utilised. However, in an overall manner most of these issues are common across the different GIS packages. SAC has evolved a set of guidelines for the logical designing of the GIS database which have been adopted in the organisation of GIS databases for BMR, Bharatpur, Dungarpur etc. TABLE 4 shows some of the critical design guidelines that could be adopted for GIS database organisation. Some of the key issues are:

a) Coordinate system for the database, which determines the way coordinates are to be stored in the GIS package. Most GIS packages offer a range of coordinate systems depending on what projection systems are employed. The coordinate system for the GIS database needs to be in appropriate units that represent the geographic features in their true shapes and sizes. The coordinate system would generally be defined by the spatial domain of the GIS database. For example, if the SOI 1:50,000 graticule has been adopted for the database, it is essential to have the same coordinate/projection system that SOI adopts. All SOI toposheets on 1:50,000 scale adopt the polyconic projection system. Further, the units of the polyconic projection are actual ground distances - metres. As a result, all spatial elements of the GIS database are referenced in a uniform coordinate system. This allows for easy integration of spatial datasets as part of the analysis and also maintains homogeneity in the GIS database.
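As a hedged sketch of what a uniform polyconic coordinate system means in practice, the snippet below uses the third-party pyproj library (an assumption; it is not mentioned in the source) to convert a latitude/longitude pair into polyconic metres. The central meridian and origin latitude are placeholders; the real SOI parameters would be substituted.

from pyproj import Transformer

# Assumed projection definition: PROJ's polyconic ("poly") with placeholder
# origin parameters, units in metres.
polyconic = "+proj=poly +lat_0=0 +lon_0=77 +x_0=0 +y_0=0 +ellps=WGS84 +units=m"
to_poly = Transformer.from_crs("EPSG:4326", polyconic, always_xy=True)

lon, lat = 77.45, 27.2           # an arbitrary point near Bharatpur
x, y = to_poly.transform(lon, lat)
print(round(x, 1), round(y, 1))  # ground coordinates in metres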

b) Spatial tile design pertains to the concept of a set of map tiles composing the total extent. For example, the district of Bharatpur is organised in 19 map tiles of SOI sheets at 1:50,000 scale. Certain GIS packages allow for the organisation of tiles, which facilitates systematic data entry on a tile-by-tile basis and also the horizontal organisation of spatial data.

FIGURE 3 shows the concept of horizontal and vertical organisation of the spatial data in the database.

c) Defining the attribute data dictionary: The data dictionary is an organised collection of attribute data records containing information on the feature attribute codes and names used for the spatial database. The dictionary consists of descriptions of the attribute codes for each spatial data element. TABLE 5 shows a partial listing of the attribute data dictionary adopted for the Bharatpur database.
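A tiny sketch (codes and ranges invented, not taken from TABLE 5) of what an attribute data dictionary amounts to in practice: a lookup of legal feature codes, their names, and any value constraints, used to validate attribute entry.

# Hypothetical attribute data dictionary for a landuse layer.
data_dictionary = {
    "LU01": {"name": "agriculture",   "valid_slope_pct": (0, 15)},
    "LU02": {"name": "forest",        "valid_slope_pct": (0, 60)},
    "LU03": {"name": "built-up area", "valid_slope_pct": (0, 10)},
}

def validate(feature_code, slope_pct):
    """Check a feature's code and attribute value against the dictionary."""
    entry = data_dictionary.get(feature_code)
    if entry is None:
        return False, "unknown feature code"
    low, high = entry["valid_slope_pct"]
    if not (low <= slope_pct <= high):
        return False, "slope outside legal range for " + entry["name"]
    return True, entry["name"]

print(validate("LU02", 35))   # (True, 'forest')
print(validate("LU03", 25))   # (False, 'slope outside legal range for built-up area')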

d) Spatial data normalisation is akin to the normalisation of relations and pertains to finding the simplest structure of the spatial data and identifying the dependencies between spatial elements. Normalisation avoids duplication of general information and also reduces redundancy. A process of normalisation of the spatial data is also essential to identify master templates and component templates. This normalisation process ensures that the coincident component features of the various elements are coordinate-coincident, thus limiting overlay sliver problems. It also reduces redundancy in the digitisation process, as master templates are digitised only once and form a part of all elements. For example, in the Bharatpur database, the following features have been identified as master templates:

- district/taluka boundary
- rivers/streams
- water bodies

These elements need to occur in each spatial element and also need to be coordinate-coincident.

e) Tolerance definitions are an important aspect of the GIS database design. The tolerances specify the error level associated with each spatial element. The different tolerances that need to be considered are:

- Coordinate Movement Tolerance (CMT), which specifies the limit up to which coordinates could move as part of a GIS operation. If the tolerance is not stringent, then repeated GIS operations could move the coordinates significantly enough to distort the size and shape of the features.

- Weed Tolerance (WT), which pertains to the minimum separation between coordinates while digitising. For example, a straight line could be represented by two vertices, and intermediate vertices are redundant. A proper weed tolerance would not create the intermediate vertices at all and thus not populate the database unnecessarily (a small weeding sketch is given after this list).

- Minimum Spatial Unit (MSU), which indicates the smallest representable area in the database. Any polygon feature with an area smaller than the MSU would be aggregated. The MSU is an indication of the resolution of the database. The concept of MSU is pertinent for vector GIS databases and is not applicable for raster GIS databases, since in a raster GIS the raster/grid cell itself becomes the MSU and no features below it can be resolved. The tolerances are all dependent on the scale or level of the database. Some of the general guidelines suggested for different scales are listed in TABLE 7.
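The weed tolerance mentioned in the list above can be sketched as a simple vertex filter (assumed logic, not any particular GIS package's algorithm): a vertex is kept only if it is at least the tolerance distance away from the last kept vertex.

import math

def weed(vertices, tolerance):
    """Drop vertices closer than `tolerance` to the previously kept vertex."""
    if len(vertices) < 3:
        return list(vertices)
    kept = [vertices[0]]
    for x, y in vertices[1:-1]:
        px, py = kept[-1]
        if math.hypot(x - px, y - py) >= tolerance:
            kept.append((x, y))
    kept.append(vertices[-1])      # always keep the end point
    return kept

# A nearly straight digitised line with many redundant intermediate vertices.
line = [(0.0, 0.0), (0.4, 0.01), (0.9, 0.02), (5.0, 0.1), (10.0, 0.0)]
print(weed(line, tolerance=2.0))   # [(0.0, 0.0), (5.0, 0.1), (10.0, 0.0)]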

f) Spatial and non-spatial data linkage, where the interlinkages of the spatial and non-spatial data are defined. These linkages and interrelationships are an important element of the GIS database organisation as they define the user relations or user views that can be created. There are two major linkage aspects involved:

- for all spatial data sets representing resources information or thematic information and those other than administrative maps, the linkage is achieved through the data dictionary feature code at the time of creation/digitisation itself.

- for administrative maps - village and taluk maps - the linkage is achieved as a one-to-one relation based on a unique code for each village or taluk. For example, in the Bharatpur database this code is the census code for the 1463 villages/settlements in the district. Thus the internal organisation of the spatial village/taluk boundaries is flexible enough to relate to the village-wise non-spatial database on a one-to-one basis. FIGURE 4 shows the type of relation that was adopted for the Bharatpur database.
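A minimal sketch (with invented codes and values) of the one-to-one linkage described above: village boundary features carry the census code, and the non-spatial table is looked up on that same code.

# Spatial side: village boundary features keyed by census code (geometries omitted).
village_boundaries = {
    "08150201": {"area_ha": 412.5},
    "08150202": {"area_ha": 188.0},
}

# Non-spatial side: census attributes for the same villages (invented values).
census = {
    "08150201": {"name": "Village A", "population": 2310, "households": 402},
    "08150202": {"name": "Village B", "population": 951,  "households": 163},
}

# One-to-one join on the census code - the "link key" of the GIS relation.
joined = {
    code: {**village_boundaries[code], **census[code]}
    for code in village_boundaries
    if code in census
}
print(joined["08150201"]["population"])   # 2310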

6.0 GIS DATABASE - PHYSICAL DESIGN

The Physical Design (PD) pertains to the assessment of the load, disk space requirement, memory requirement, access and speed requirements etc for the GIS. Much of these pertain to the hardware platform on which the GIS will operate. There are no standards on PD aspects available and much of the design has to be based on experience. However, some of the key issues are as follows:

a) Disk space requirement is a major concern for GIS database designers. The paradigm THERE IS NO END TO A GIS DATABASE sums it all up, as most GIS database projects have realised how fast their disk space estimates have gone awry. As an illustration, the Bharatpur database takes up about 54 MB of space for the actual data. Any further integrated analysis which creates intermediate outputs would take anywhere between 3-4 times the normal space (a rough worked estimate is sketched below). Thus, experience shows that for a district database a 300 MB disk is only just sufficient and a higher disk space would be appropriate. The differences in space utilisation of different GIS packages are reflected in a benchmark application run on the PC-ARC/INFO and ISROGIS packages: PC-ARC/INFO utilised 26.84 MB of space while for the same dataset ISROGIS utilised 35.22 MB [Rao et al, 1993]. This illustrates the range of variation in space utilisation.
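The forward-referenced estimate, using only the figures quoted in the text (54 MB of base data and 3-4 times that for intermediate outputs), works out as follows; the calculation itself is simple arithmetic, not a sizing rule from the source.

base_mb = 54.0                            # Bharatpur base data, as quoted above
low, high = 3 * base_mb, 4 * base_mb      # space for intermediate analysis outputs
total_low, total_high = base_mb + low, base_mb + high
print(total_low, total_high)              # 216.0 270.0 MB, before any other overhead
# Hence a 300 MB disk is only just sufficient for a district database.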

b) Load of the database is also difficult to determine as there is no way of estimating the number of points, lines and polygons in each spatial element. However, broad guidelines could be evolved and estimates made. For example, the National Capital Region Planning Board (NCRPB) has adopted a two-way categorisation of spatial elements - a three-level qualitative density-based categorisation of maps and a two-level full or partial coverage-based categorisation - for the 67 maps covering the NCR. Based upon this, the total number of spatial map sheets to be organised is estimated as 657.

c) Access and speed requirements are more oriented towards the ability to handle large and dense maps rather than the time involved in processing. GIS applications are not real-time applications and thus the access time or speed becomes a secondary aspect. A benchmark study on PC-ARC/INFO and ISROGIS has shown that the time taken for an overall application - consisting of various steps - is 11.5 hours and 7.5 hours respectively [Rao et al, 1993]. The point to be noted is that even though there is a 4-hour difference, the implication for the application is not driven by the difference, as it is not real-time.

d) File and data organisation in GIS is an activity which is taken care of by the GIS package itself and no design aspects need be considered for the physical organisation of files. Each GIS package has its own file system organisation, which could be either a single file or a set of files, and these are transparent to the user.

7.0 GIS DATABASE CREATION

7.1 Spatial database creation

Based on the design, the steps of database creation are worked out and a procedure laid down. The procedure for the spatial database creation is described below:

a) Master template creation: As discussed earlier, a master template is created as a reference layer consisting of the district boundary, rivers etc. This template is then used for the component theme digitisation.

b) Thematic map manuscript preparation: Based on the spatial domain (in the Bharatpur database it was the SOI graticule at 1:50,000 scale), the different theme-oriented information is transferred from the base map to a mylar/transparent sheet. Spatial data manuscripts are mylars consisting of the features that are to be digitised. These manuscripts are prepared on a sheet-by-sheet basis for digitisation. The manuscripts contain "instructions" for digitisation or scanning, which include:

- registration point locations and identifiers
- feature codes as per the dictionary defined earlier
- feature boundaries
- tolerance specifications
- any other digitisation/scanning instructions

c) Digitisation of features: The theme features of the spatial dataset are then digitised/scanned using the GIS package. The digitisation is done for each mapsheet of the spatial reference. The master registration-point references are used for the digitisation. The theme digitisation is done as a component into a copy of the master template layer.

d) Coverage editing: The digitised coverage is processed for digitisation errors such as dangles (constituting overshoots or undershoots) and labels for polygons. This involves obtaining a report of these errors and then manually editing the features. Finally, the coverage is processed for topology creation. As in the case of digitisation, the editing also has to be done on a mapsheet basis.

In the case of raster GIS packages, the topology construction may not be relevant. However, a clumping process to identify the clump of rasters having similar characteristics is essential.

e) Appending of mapsheet thematic features: The next step in the procedure is the appending or mosaicking of the different mapsheets into a single theme map for the whole extent. The graticule of registration points is used for this purpose.

f) Attribute coding verification: The attribute codes for the different categories then need to be verified, and additional attributes - feature name, description etc - are added into the feature database. It is only after this procedure that the theme coverage is ready for GIS analysis. FIGURE 5 shows the procedure for spatial database creation.

7.2 Non-spatial database organisation

Non-spatial data elements are listed in TABLE 3 and most of these are available in analog mode - specifically the census data of 1981 and earlier. To convert them into digital mode, a suitable application package could be used to configure a data entry system. At SAC, a dBASE interface module has been developed for census data capture. It is a user-friendly module for easy entry and editing of census data and its organisation into a database, and is based on the taluk-village hierarchy of the district. To this end it makes use of a primary file containing taluka-wise village names and their census codes as listed in the census abstract. The module can be directly used for entering the census data into sector-wise databases which are created as secondary files. These secondary files are related to the primary file of villages using the census village code as the key item [Rangwala et al, 1988]. Using this module, the census data for Bharatpur district has been organised into different sectoral databases. The census data of 1991 is available in digital format as a set of database files. These database files could be structured into a sectoral organisation so that incorporation into GIS is easier.
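The primary/secondary file arrangement described above can be sketched with sqlite3 (an illustrative stand-in, not the actual SAC dBASE implementation); the village census code acts as the key item relating sector tables to the village master. Table and column names are assumed.

import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()

# Primary file: taluka-wise village names with their census codes.
cur.execute("""CREATE TABLE village_master (
                   census_code  TEXT PRIMARY KEY,
                   taluka       TEXT,
                   name         TEXT)""")

# A sector-wise secondary file, related to the primary file by census_code.
cur.execute("""CREATE TABLE sector_education (
                   census_code  TEXT REFERENCES village_master(census_code),
                   schools      INTEGER,
                   literacy_pct REAL)""")

cur.execute("INSERT INTO village_master VALUES ('08150201', 'Bharatpur', 'Village A')")
cur.execute("INSERT INTO sector_education VALUES ('08150201', 3, 47.5)")

# Relate the secondary file back to the primary file through the key item.
cur.execute("""SELECT m.taluka, m.name, e.schools, e.literacy_pct
               FROM village_master m JOIN sector_education e
               ON m.census_code = e.census_code""")
print(cur.fetchall())   # [('Bharatpur', 'Village A', 3, 47.5)]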

7.3 Defining relations between spatial and non-spatial data

The GIS allows the spatial data and the non-spatial features to be related or linked based upon a defined relationship. The relation in the GIS is a method of relating the same spatial entity to different non-spatial entities based on a link key.

The linkages are more pertinent for the village-wise data where village-boundary theme or settlement theme represents the spatial distribution of villages or the settlements and a one-to-one relationship can be defined for each of the village/settlement entity and the non-spatial data for the village/settlement. Apart from this, the village-taluk hierarchy can also be "forced" into all spatial datasets so as to be able to extract taluk-wise spatial feature information - either in spatial format or as non-spatial tabular output.

7.4 Integration of village boundaries - Issues

One of the important aspects of a GIS database for districts/regions is the combined analysis of the tabular socio-economic data and the thematic natural resources data. These two discrete datasets have different characteristics. The socio-economic and developmental data is mainly the data collected by the Census, which is on a village-wise basis. This dataset is based on a village-taluk-district hierarchy and is mainly tabular. As against this, the thematic data on natural resources is based on a spatial framework. These datasets follow the SOI toposheet graticule and thus are based on the polyconic projection system. An integrated planning exercise would require that these two datasets be combined/analysed together to derive meaningful plan inputs. The integration would be to:

a) merge the attributes of the villages and the natural resources for generating plan scenarios.

b) represent the non-spatial tabular attributes of the villages spatially.

c) allow aggregation and abstraction of the village attributes and the natural resources to the village-taluk-district hierarchy and the SOI graticule (for example 1:50,000 and 1:250,000 scale).

d) generate the village/taluk-wise information on natural resources for tabular updation.

A methodology for integrating the village boundary to a SOI mapbase has been developed at SAC and is based on projection of census village boundaries from a transparency to a standard SOI map base and transfer of village boundaries to the base [SAC and TCPO, 1992].

8.0 DATABASE UPDATION AND LINKAGES

Both the spatial and non-spatial databases will have to be updated frequently so as to have the latest data for further analysis/modelling. Some of the data elements could be relatively static and thus could be created once and updated only when there are changes. Such elements are mainly administrative boundaries, elevation points, drainage maps etc. However, the data elements that have to be updated more frequently are as follows:

a) Spatial database: The updation of the spatial database will have to be based mainly on inputs from RS data as well as from the periodic surveys carried out by different agencies. Updation can be categorised as follows:

- RS data based updation - mainly landuse/cover (every year); forest type/density maps (once in two years); urban landuse maps (once a year for major cities and once in 3 years for towns/small cities); geological maps (once in 10 years); geomorphological/hydrogeomorphological maps (once in 3 years); GW potential maps (once in 2 years or whenever drought occurs); flood maps (pre- and post-flood season every year) etc.

- Updation based on survey agency maps - mainly soil maps; forest maps; detailed geological and mineral data; road maps etc. These maps could be acquired from the respective agency and digitised into the database. These could be taken up whenever available - ideally once in 10 years.

b) Non-spatial data: Much of the non-spatial data is based on the census records and thus would be updated once every 10 years. However, it would be better if some of the non-spatial data were available more frequently - say, once every five years - so as to be optimal for the planning process. A ten-year schedule is not commensurate with ongoing development, as the database needs to be updated for intermediate developments more frequently. Otherwise, decade-old data would be used for a planning process, suggesting developmental plans which would have already taken place. Exchange of data from the GIS database to other computerised databases at district level can be done so as to provide data for further use. This exchange would mean:

a) a non-spatial data exchange, as the district does not have the capability to handle data in spatial format. In case the capability to handle spatial data is available, then spatial data exchange can also be visualised.

b) the non-spatial representation of all datasets in the GIS database. This non-spatial representation of data could be on a taluk-basis or village-basis.

 

Database management systems (DBMS)

The origins of DBMS data models are in computer science (Clarke, 1997). A DBMS contains:

- a data definition language
- a data dictionary
- a data entry module
- a data update module
- a report generator
- a query language

Data definition language (DDL): The DDL is the language used to describe the contents of the database (Modarres, 1998). The DDL is the part of the DBMS that allows the user to set up a new database, to specify how many attributes there will be, what the types and lengths or numerical ranges of each attribute will be, and how much the user is allowed to do (Clarke, 1997). It is used to describe, for example, attribute names (field names), data types, location in the database, etc. This establishes the data dictionary, a catalog of all of the attributes with their legal values and ranges. The most basic management function is data entry, and since most entry of attribute data is monotonous and may be by transcription from paper records, the DBMS's data-entry system should be able to enforce the ranges and limits entered into the data dictionary by the definition language.

All data entry is subject to error, so the first step after entry should be verification, after which the data is updated to reflect changes. The DBMS can then be used to perform functions such as sorting, reordering, subsetting, and searching; to do so requires the use of a query language, the part of the DBMS that allows the user to interact with the data to perform those tasks (Clarke, 1997).

Data manipulation and query language: Normally a fourth-generation language (4GL) is supported by a DBMS to form commands for input, edit, analysis, output, reformatting, etc. Some degree of standardisation has been achieved with SQL (Structured Query Language) (Modarres, 1998). Typical DBMS queries are sorting, renumbering, subsetting, and searching. The query language is the user interface for searching.
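A compact sketch of the DDL, data dictionary, and query language roles, using Python's sqlite3 for illustration; the table definition and the range check are assumptions, not from the source.

import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()

# DDL: describe the contents of the database, including a legal range that the
# data-entry stage must respect (a CHECK constraint stands in for the dictionary rule).
cur.execute("""CREATE TABLE well (
                   well_id   INTEGER PRIMARY KEY,
                   depth_m   REAL CHECK (depth_m BETWEEN 0 AND 500),
                   yield_lps REAL)""")

# Data dictionary: the catalog of attributes can be read back from the schema.
print(cur.execute("PRAGMA table_info(well)").fetchall())

# Data entry that violates the declared range is rejected.
try:
    cur.execute("INSERT INTO well VALUES (1, 900.0, 2.5)")
except sqlite3.IntegrityError as err:
    print("rejected:", err)

# Query language: subsetting and sorting the stored records.
cur.execute("INSERT INTO well VALUES (2, 120.0, 4.1)")
print(cur.execute("SELECT well_id, depth_m FROM well ORDER BY depth_m").fetchall())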

GIS DATABASE INTRODUCTION

The real world is too complex for our immediate and direct understanding. We create "models" of reality that are intended to have some similarity with selected aspects of the real world. Databases are created from these "models" as a fundamental step in coming to know the nature and status of that reality (Modarres, 1998). The Geographical Information System (GIS) has two distinct utilisation capabilities - the first pertaining to querying and obtaining information and the second pertaining to integrated analytical modelling. However, both these capabilities depend upon the core of the GIS - the database that has been organised. Many GIS utilisations have been limited because of improper database organisation. The importance of the GIS database stems from the fact that the data elements of the database are closely interrelated and thus need to be structured for easy integration and retrieval. The GIS database also has to cater to the different needs of applications. In general, a proper database organisation needs to ensure the following [Healey, 1991; NCGIA, 1990]:

- flexibility in the design to adapt to the needs of different users.
- a controlled and standardised approach to data input and updation.
- a system of validation checks to maintain the integrity and consistency of the data elements.
- a level of security for minimising damage to the data.
- minimising redundancy in data storage.

While the above are general considerations for database organisation, in a GIS domain the considerations are pertinent to the different types and nature of data that need to be organised and stored.

What is a database?

Database: a large collection of data in a computer system, organised so that it can be expanded, updated, and retrieved rapidly for various uses. It could be a file or a set of files (Ronli, 1999).

File: a collection of organised records of information. A record usually has a record number and record content. The file has a name given by the system or the user (Ronli, 1999).

A database is a collection of information related to a particular subject or purpose, such as tracking customer orders or maintaining a music collection (Microsoft, 1997).

A database is a self-describing collection of integrated records (Kroenke, 1995).

A database is self-describing: it contains, in addition to the user's source data, a description of its own structure. This description is called a data dictionary (or data directory, or metadata). It is the data dictionary that makes program/data independence possible.

A database is a collection of integrated records: bits are aggregated into bytes or characters; characters are aggregated into fields; fields are aggregated into records; and records into files (bits - characters - fields - records - files). In addition there are metadata, and indexes that are used to represent relationships among the data and also to improve the performance of database applications. The database often contains data about the applications that use the database; the structure of a data entry form, or a report, is sometimes part of the database, and is called application metadata. Thus, a database contains four types of data: files of the user's data, metadata, indexes, and application metadata (files + metadata + indexes + application metadata = database).

A spatial database is a collection of spatially referenced data that acts as a model of reality. A spatial database stores GEOREFERENCED data - for example, wells with their locations, bank account holders with addresses, and property taxes with boundaries.

INTRODUCTION TO DATABASE PROCESSING

A successful GIS begins with a database, so it is important to first take a look at databases. In GIS, the database is important as its creation will often account for up to three-quarters of the time and effort involved in developing a geographic information system (Kenneth et al, 1996). It is important, however, to view these GIS databases as more than simple stores of information. The database is used to abstract very specific sorts of information about reality and organise it in a way that will prove useful. The database should be viewed as a representation or model of the world developed for a very specific application (Kenneth et al, 1996). There are very many things involved in the design of a database. GIS has become more powerful as database products have become more powerful and database technology more accessible (Kroenke, 1995). This has been so because:

- Personal computer DBMS have become more powerful and easier to use, and their price has decreased substantially. Products such as Microsoft's Access not only provide the power of a true relational DBMS on a PC, but also include facilities for developing GUI-based forms, reports, and menus.

- New modelling methodologies and tools, especially those based on object-oriented thinking, have become available. Studies show semantic object modelling (say with SALSA) to be far superior to older techniques such as the entity-relationship modelling approach (say with IEF): able to create better models, faster, and with greater satisfaction.

- There has been the emergence of client-server processing in general, and client-server database processing in particular. This enables companies to download mainframe data to a server database on a PC, making it easier for personnel to access the database.

THE DATA IN DATABASE

Broadly categorised, the basic data for the GIS database has two components, as described in the earlier section THE DATA IN GIS: spatial data, consisting of maps prepared either by field surveys or by the interpretation of Remotely Sensed (RS) data, and non-spatial data, consisting of attributes that complement the spatial data together with socio-economic characteristics from the census and other sources.

DATA INPUT IN DATABASE

There are very many methods of data entry into a database; these include the use of advancing technologies such as scanning, feature recognition, raster-to-vector conversion, and image processing, along with traditional digitising and key entry methods (Kroenke, 1995). Digitising on a tablet captures map data by tracing lines from a map by hand, using a cursor and an electronically sensitive tablet; the result is a string of points with (x, y) values. Scanning places a map on a glass plate and passes a light beam over it, measuring the reflected light intensity; the result is a grid of pixels. Image size and resolution are important to scanning: small features on the map can drop out if the pixels are too big. Attribute data can be thought of as being contained in a flat file - a table of attributes by records, with entries called values.

How data is represented in the database (Kenneth et al, 1996)

It is important to realise that this non-spatial data can be filed away in several different forms depending on how it needs to be used and accessed. Perhaps the simplest method is the flat file or spreadsheet, where each geographic feature is matched to one row of data.

Flat Files and Spreadsheets

A flat file or spreadsheet is a simple method for storing data. All records in this database have the same number of "fields". Individual records have different data in each field, with one field serving as a key to locate a particular record (Kenneth et al, 1996). For a person, or a tract of land, there could be hundreds of fields associated with the record. When the number of fields becomes large, a flat file is cumbersome to search. Also, the key field is usually determined by the programmer, and searching by other determinants may be difficult for the user. Although this type of database is simple in its structure, expanding the number of fields usually entails reprogramming. Additionally, adding new records is time consuming, particularly when there are numerous fields. Other methods offer more flexibility and responsiveness in GIS.

Hierarchical Files

Hierarchical files store data in more than one type of record. This method is usually described as a "parent-child, one-to-many" relationship (Kenneth et al, 1996). One field is key to all records, but data in one record does not have to be repeated in another. This system allows records with similar attributes to be associated together. The records are linked to each other by a key field in a hierarchy of files. Each record, except for the master record, has a higher-level record file linked by a key field "pointer". In other words, one record may lead to another and so on in a relatively descending pattern. An advantage is that when the relationship is clearly defined and queries follow a standard routine, a very efficient data structure results. The database is arranged according to its use and needs. Access to different records is readily available, or easy to deny to a user by not furnishing that particular file of the database. One of the disadvantages is that one must access the master record, with the key field determinant, in order to link "downward" to other records.

Relational Files

Relational files connect different files or tables (relations) without using internal pointers or keys. Instead, a common item of data is used to join or associate records. The link is not hierarchical (Kenneth et al, 1996). A matrix of tables is used to store the information. As long as the tables have a common link they may be combined by the user to form new inquiries and data output. This is the most flexible system and is particularly suited to SQL (Structured Query Language). Queries are not limited by a hierarchy of files, but instead are based on relationships from one type of record to another that the user establishes. Because of its flexibility this system is the most popular database model for GIS. Relational systems remain the dominant form of DBMS today (Clarke, 1997). They are simple, and from the user's standpoint they are an extension of the flat file model. The major difference is that a database can consist of several flat files, each of which can contain different attributes associated with a record.
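The "common link" idea can be sketched with two tables joined at query time on a shared field, again using sqlite3; the parcel/owner tables and values are invented for illustration.

import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE parcel (parcel_id TEXT, owner_id INTEGER, area_ha REAL)")
cur.execute("CREATE TABLE owner  (owner_id INTEGER, name TEXT)")
cur.executemany("INSERT INTO parcel VALUES (?, ?, ?)",
                [("P-1", 10, 2.4), ("P-2", 10, 1.1), ("P-3", 11, 5.0)])
cur.executemany("INSERT INTO owner VALUES (?, ?)", [(10, "Rao"), (11, "Mehta")])

# No pointers or fixed hierarchy: the relationship is expressed in the query itself,
# here an ad-hoc inquiry totalling parcel area per owner.
cur.execute("""SELECT o.name, SUM(p.area_ha)
               FROM parcel p JOIN owner o ON p.owner_id = o.owner_id
               GROUP BY o.name ORDER BY o.name""")
print(cur.fetchall())   # [('Mehta', 5.0), ('Rao', 3.5)]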

Flat, Hierarchical, and Relational Files Compared

Flat Files
Advantages:
- Fast data retrieval
- Simple structure and easy to program
Disadvantages:
- Difficult to process multiple values of a data item
- Adding new data categories requires reprogramming
- Slow data retrieval without the key

Hierarchical Files
Advantages:
- Adding and deleting records is easy
- Fast data retrieval through higher-level records
- Multiple associations with like records in different files
Disadvantages:
- Pointer path restricts access
- Each association requires repetitive data in other records
- Pointers require a large amount of computer storage

Relational Files
Advantages:
- Easy access and minimal technical training for users, as data is kept in different files
- Flexibility for unforeseen inquiries, as any combination of attributes and records can be assembled as long as they are linked by a key attribute
- Easy modification and addition of new relationships, data, and records
- Physical storage of data can change without affecting relationships between records
Disadvantages:
- New relations can require considerable processing
- Sequential access is slow
- Method of storage on disks impacts processing time
- Easy to make logical mistakes due to the flexibility of relationships between records

Now, let us consider a couple of examples of matching applications to database structures:

- Exploratory research - flat files are easy to organise, and space is not a particular problem.
- Government agencies - hierarchical systems are particularly attractive.
- Planning and development - a relational system might be justified for its flexibility.

Why use a database? (Problems which database-processing systems have solved)

1. Data is separated and isolated. If related data is needed, the system manager must determine which parts of which files are needed, decide how the files are related, and co-ordinate the processing of the files so that the correct data is extracted. In a database-processing system the data is stored in one place and the database management system accesses the stored data. So, the general structure of all database applications is: users - database application - DBMS - database (Kroenke, 1995).

2. Data is often duplicated. This wastes file space and creates problems of data integrity. If copies of a data item differ, they will produce inconsistent results, making it difficult to determine which is correct and thus reducing the credibility of the data (Kroenke, 1995).

3. Application programs are dependent on the file format. If changes are made to the file format, the application programs must also be changed.

4. Files are often incompatible with one another. The file format depends on the programming language or the product used to generate it, e.g. a COBOL program versus a C program (Kroenke, 1995).

5. It is difficult to present the data the way the users view it.

Benefits of the relational model

The data is stored, at least conceptually, in a way the user can readily understand (Kroenke, 1995). Data is stored in tables, and the relationship between the rows of the tables is visible in the data. Unlike the earlier database models, where the DBMS stored the relationship in system data such as indexes, which would hide the relationship, an RDBMS enables the user to obtain information from the database without the assistance of a professional, as the relationship is always stored in user-visible data. RDBMS are particularly useful in decision-support systems (DSS).

Microcomputer DBMS products

There has been a lot of development in microcomputer DBMS: dBase II was not a DBMS, dBase III was a DBMS, and dBase IV was an RDBMS. Today DBMS products provide rich and robust user interfaces using graphical user interfaces such as Microsoft Windows. Two such products are Microsoft's Access and Borland's Paradox for Windows (Kroenke, 1995).

Client-server database applications

The development of local area networks (LANs), which linked microcomputer CPUs so that they could work simultaneously - which was both advantageous (greater performance) and more problematic - led to a new style of database processing called the client-server database architecture (Kroenke, 1995).

Distributed database processing

Organisational database applications address the problems of file processing and allow more integrated processing of organisational data. Personal and work-group database systems bring database technology even closer to the user by allowing access to locally managed databases. Distributed databases combine these types of database processing by allowing personal, work-group, and organisational databases to be combined into integrated but distributed systems (Kroenke, 1995).

Object-oriented DBMS (OODBMS)

These are DBMS which have come about as a result of the development of a new style of programming called object-oriented programming. It is more difficult and involves different types of data structures.

Data modelling

Data modelling is the process of creating a representation of the user's view of the data. There are two data modelling tools: the entity-relationship approach and the semantic object approach.

DEVELOPMENT OF DATABASE

Components of database systems

  

[FIGURE: Features and functions of a DBMS - the DBMS comprises a design tools subsystem (table creation, form creation, query creation and report creation tools, and a procedural language compiler), a run-time subsystem (form processor, query processor, report writer, and procedural language run-time), and the DBMS engine, which connects developers' and users' application programs to the database containing the user's data, metadata, overhead data (indexes, linked lists), and application metadata. Figure from (Kroenke, 1995).]

General strategies

To develop a database, we build a data model that identifies the things to be stored in the database and defines their structure and the relationships among them. This familiarity must be obtained early in the development process by interviewing the users and building up the requirements. There are two general strategies for developing a database: top-down development and bottom-up development (Kroenke, 1995).

Top-down development proceeds from the general to the specific. It begins with the goals of the organisation, the means by which those goals can be accomplished, and the information requirements that must be satisfied to reach those goals, from which an abstract data model is constructed. Using this high-level model, the development team progressively works downwards towards more and more detailed descriptions and models. Intermediate-level models are expanded with more detail until particular databases and related applications can be identified. One or more applications are then selected for development. Over time, the entire high-level data model is transformed into lower-level models, and the indicated systems, databases, and applications are created. Bottom-up development is the reverse. The entity-relationship approach is more effective with top-down development, and the semantic object approach is more effective with bottom-up development.

FUNDAMENTAL DATABASE ELEMENTS

Elements of reality modelled in a GIS database have two identities: entity and object.

Entity

An entity is the element in reality. An entity is something that can be identified in the user's work environment, something important to the user of the system, e.g. someone's name (Kroenke, 1995). An entity is "a phenomenon of interest in reality that is not further subdivided into phenomena of the same kind". For example, a city could be considered an entity: it can be subdivided into component parts, but these parts would not be called cities (they could be districts, neighbourhoods, etc.), therefore the city is an entity. A forest, on the other hand, could be subdivided into smaller forests, therefore it would not be an entity. Similar phenomena to be stored in a database are identified as entity types. An entity type is any grouping of similar phenomena that should eventually get represented and stored in a uniform way, e.g. roads, rivers, elevations and vegetation. Entities are grouped into entity classes, or collections of entities of the same type (Kroenke, 1995). An entity class is the general form or description of a thing, whereas an instance of an entity class is the representation of a particular entity, e.g. car 123.

Attributes

Entities have attributes or, as they are sometimes called, properties, which describe the entity's characteristics. An attribute is a characteristic of an entity selected for representation. It is usually non-spatial, though some attributes may be related to the spatial character of the phenomena under study (e.g. area and perimeter of a region). The actual value of the attribute that has been measured (sampled) and stored in the database is called the attribute value. An entity type is almost always labelled and known by attributes (e.g. a road usually has a name and is identified according to its class such as freeway, state road, etc.). Attribute values are often conceptually organised in attribute tables, which list individual entities in the rows and attributes in the columns. The entry in each cell of the table represents the attribute value of a specific attribute for a specific entity.

Identifiers

Entity instances have names that identify them. The identifier of an instance is one or more of its attributes (Kroenke, 1995). An identifier may be either unique or non-unique. If it is unique, its value will identify a single entity instance.

Relationships

Entities can be associated with one another in relationships. The E-R model contains both relationship classes and relationship instances. Relationship classes are associations among entity classes, and relationship instances are associations among entity instances. Relationships can have attributes (Kroenke, 1995). A relationship can include many entities; the number of entities in a relationship is the degree of the relationship. Although the E-R model allows relationships of any degree, most applications of the model involve only relationships of degree two.

A relationship is an association established between common fields (columns) in two or more tables. Such relationships are sometimes called binary relationships. A relationship can be one-to-one, one-to-many, or many-to-many (1:1, 1:N, N:M).
In a one-to-one relationship, each record in Table A can have only one matching record in Table B, and each record in Table B can have only one matching record in Table A. This type of relationship is not common, because most information related in this way would be in one table. You might use a one-to-one relationship to divide a table with many fields, to isolate part of a table for security reasons, or to store information that applies only to a subset of the main table. For example, you might want to create a table to track employees participating in a fundraising soccer game.
A one-to-many relationship is the most common type of relationship. In a one-to-many relationship, a record in Table A can have many matching records in Table B, but a record in Table B has only one matching record in Table A.
In a many-to-many relationship between two tables, one record in either table can relate to many records in the other table: a record in Table A can have many matching records in Table B, and a record in Table B can have many matching records in Table A. This type of relationship is only possible by defining a third table (called a junction table) whose primary key consists of two fields - the foreign keys from both Tables A and B. A many-to-many relationship is really two one-to-many relationships with a third table (an SQL sketch of a junction table appears after the spatial dimension definitions below).

Weak entities
These are entities whose presence in the database depends on the presence of another entity, e.g. an apartment depends on its building. The E-R model includes a special type of weak entity called an ID-dependent entity.

Object
An object is the element as it is represented in the data base. An object is "a digital representation of all or part of an entity". The method of digital representation of a phenomenon varies according to scale, purpose and other factors. E.g. a city could be represented geographically as a point if the area under consideration were continental in scale, while the same city could be represented as an area in a geographical data base for a state or a country. The digital representation of entity types in a spatial data base requires the selection of appropriate spatial object types. The object types are listed in Table 3-1 (and illustrated in Figure 3-1) based on the following definition of spatial dimensions.
0-D: an object having a position in space, but no length (e.g. a point).
1-D: an object having a length, composed of two or more 0-D objects (e.g. a line).
2-D: an object having a length and width, bounded by at least three 1-D line segment objects (e.g. an area).
3-D: an object having length, width and height/depth, bounded by at least four 2-D objects (e.g. a volume).
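Returning to the relationship types above, here is a minimal SQL sketch of the junction-table idea. The table and column names (STUDENTS, COURSES, ENROLMENTS) are hypothetical and chosen only to show how a many-to-many relationship is implemented as two one-to-many relationships; data types and exact syntax vary by DBMS.

CREATE TABLE STUDENTS (
    STUDENT_ID INTEGER PRIMARY KEY,
    NAME       VARCHAR(50)
);

CREATE TABLE COURSES (
    COURSE_ID INTEGER PRIMARY KEY,
    TITLE     VARCHAR(50)
);

-- Junction table: its primary key is composed of the two foreign keys,
-- so each student/course pairing is recorded only once.
CREATE TABLE ENROLMENTS (
    STUDENT_ID INTEGER REFERENCES STUDENTS (STUDENT_ID),
    COURSE_ID  INTEGER REFERENCES COURSES (COURSE_ID),
    PRIMARY KEY (STUDENT_ID, COURSE_ID)
);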

An object class is the set of objects which represent the set of entities, e.g. the set of points representing the set of wells.

0-dimensional object types
point - specifies geometric location
node - a topological junction or end point; may specify location

1-dimensional object types
line - a one-dimensional object
line segment - a direct line between two points
string - a sequence of line segments
arc - a locus of points that forms a curve defined by a mathematical function
link - a connection between two nodes
directed link - a link with one direction specified
chain - a directed sequence of nonintersecting line segments and/or arcs with nodes at each end
ring - a sequence of non-intersecting chains, strings, links or arcs with closure

2-dimensional object types
area - a bounded continuous object which may or may not include its boundary
interior area - an area not including its boundary
polygon - an area consisting of an interior area, one outer ring and zero or more non-intersecting, nonnested inner rings
pixel - a picture element that is the smallest nondivisible element of an image

Figure 3-1. Spatial object types. 

Data base model
A data base model is a conceptual description of a database defining the entity types and associated attributes. Each entity type is represented by specific spatial objects. After the database is constructed, the data base model is a view of the database which the system can present to the user. Examples of data base models can be grouped by application areas; e.g. transportation applications require different data base models than do natural resource applications.

Layers
Spatial objects can be grouped into layers, also called overlays, coverages or themes. One layer may represent a single entity type or a group of conceptually related entity types. E.g. a layer may have only stream segments, or may have streams, lakes, coastline and swamps. What kind of objects do you use to represent these entity types?

SPATIAL OBJECTS AND DATA BASE MODELS
The objects in a spatial database are representations of real-world entities with associated attributes. The power of a GIS comes from its ability to look at entities in their geographical context and examine relationships between entities. Thus a GIS data base is much more than a collection of objects and attributes.
How are lines linked together to form complex hydrologic or transportation networks?
How can points, lines, or areas be used to represent more complex entities like surfaces?

Point Data
Points represent the simplest type of spatial object. The choice of entities which will be represented as points depends on the scale of the map or application: e.g. on a large scale map, building structures are encoded as point locations; on a small scale map, cities are encoded as point locations.
The coordinates of each point can be stored as two additional attributes. Information on a set of points can be viewed as an extended attribute table: each row (or record) represents a point, recording all information about the point; each column is an attribute (or field), two of which are the x, y coordinates. Each point is independent of the others and is represented as a separate row (Figure 3-2).
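A minimal sketch of such an extended attribute table in SQL. The WELLS table and its columns are hypothetical examples rather than anything from the source, and coordinate data types vary by DBMS.

-- One row per point; the coordinates are stored as two ordinary attributes
CREATE TABLE WELLS (
    WELL_ID INTEGER PRIMARY KEY,
    X       DOUBLE PRECISION,   -- x coordinate
    Y       DOUBLE PRECISION,   -- y coordinate
    DEPTH_M REAL                -- a non-spatial attribute
);

-- Retrieve the deeper wells together with their coordinates
SELECT WELL_ID, X, Y FROM WELLS WHERE DEPTH_M > 100;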

Figure 3-2. Point data attribute table.

Line Data
Lines represent network entities such as:
infrastructure networks
transportation networks - highway and railway
utility networks - gas, electricity, telephone and water pipes
airline networks - hubs and routes
natural networks
river channels

Network characteristics (Figure 3-3):
A network is composed of nodes and links.
The valency of a node is the number of links at the node, e.g.
ends of dangling lines are "1-valent"
4-valent nodes are most common in street networks
3-valent nodes are most common in hydrology
A tree network has only one path between any pair of nodes; no loops or circuits are possible. Most river networks are trees.
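One way such a network could be held in relational tables - a sketch under assumed names (NODES, LINKS), not a prescribed structure. The query shows how the valency of each node can be computed by counting the links that meet it.

CREATE TABLE NODES (
    NODE_ID INTEGER PRIMARY KEY,
    X       DOUBLE PRECISION,
    Y       DOUBLE PRECISION
);

CREATE TABLE LINKS (
    LINK_ID   INTEGER PRIMARY KEY,
    FROM_NODE INTEGER REFERENCES NODES (NODE_ID),
    TO_NODE   INTEGER REFERENCES NODES (NODE_ID),
    LENGTH_M  REAL               -- an example link attribute
);

-- Valency of each node = number of links meeting it
SELECT N.NODE_ID, COUNT(L.LINK_ID) AS VALENCY
FROM NODES N
JOIN LINKS L ON L.FROM_NODE = N.NODE_ID OR L.TO_NODE = N.NODE_ID
GROUP BY N.NODE_ID;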

 

Figure 3-3. Nodes and links in network entities.

Attributes of network entities:
Link attributes
transportation: direction of traffic, length, number of lanes, time to pass
pipelines: diameter of pipe, direction of gas flow
electricity: voltage of electrical transmission line, height of towers
Node attributes
transportation: presence of traffic lights and overpass, names of intersecting streets
electricity: presence of shutoff valves, transformers

Area Data
Area data are represented on area class maps, or choropleth maps. Boundaries may be defined by natural phenomena, such as lakes, or by man, such as forest stands or census zones. Types of areas that can be represented include:
Environmental/natural resource zones: land cover, forest, soil, water bodies
Socio-economic zones: census tracts, postcodes
Land records: land parcel boundaries, land ownership

Area coverage (Figure 3-4):
Type 1: Entities are isolated areas, possibly overlapping.
Any place can be within any number of entities, or none.
Areas do not exhaust the space.
Type 2: Any place is within exactly one entity.
Areas exhaust the space.
Every boundary line separates two areas, except for the outer boundary.
Areas may not overlap.
Any layer of the first type can be converted to one of the second type.

Holes and islands (Figure 3-5):
Areas often have "holes", or areas of different attributes wholly enclosed within them.
More than one primitive single-boundary area (islands) can be grouped into an area object.

Figure 3-4. Area coverage: (a) Entities are separate; (b) Entities fill the space; (c) First type represented as second type.  

Figure 3-5. Holes and islands.

Representation of Continuous Surfaces
Examples of continuous surfaces:
elevation (as part of topographic data)
rainfall, pressure, temperature
population density

General nature of surfaces
Critical points:
peaks and pits - highest and lowest points
ridge lines, valley bottoms - lines across which slope reverses suddenly
passes - convergence of 2 ridges and 2 valleys
faults - sharp discontinuity of elevation - cliffs
fronts - sharp discontinuity of slope
Slopes and aspects can be derived from elevations.

Data structures for representing surfaces
Traditional data models do not have a method for representing surfaces, therefore surfaces are represented by the use of points, lines or areas:
Points - grid of elevations
Lines - digitised contours
Areas - TIN (triangulated irregular network)

THE VECTOR GIS
What is a vector data model?
based on vectors (as opposed to space-occupancy raster structures) (Figure 3-6)
the fundamental primitive is a point
objects are created by connecting points with straight lines
areas are defined by sets of lines (polygons)

Figure 3-6. Example of vector GIS data.

Arcs
When planar enforcement is used, area objects in one class or layer cannot overlap and must exhaust the space of the layer.
Every piece of boundary line is a common boundary between two areas.
The stretch of common boundary between two junctions (nodes) may be called an edge, chain, or arc.
Arcs have attributes which identify the polygons on either side (e.g. the "left" and "right" polygons).
In which direction do we traverse the arc in order to define "left" and "right"?
Arcs (chains/edges) are fundamental in vector GIS (Figure 3-7).

 

 

Figure 3-7. An arc in vector GIS.

Node
The beginning or end point of an arc, the location of a point feature, or the intersection of two arcs.

Polygon
A closed chain of arcs that represents an area feature.

Data Base Creation
Data base creation involves several stages:
input of the spatial data
input of attribute data
linking spatial and attribute data
Once points are entered and geometric lines are created, the topology of the spatial objects must be "built" (Figure 3-8). Building topology involves calculating and encoding the relationships between the points, lines and areas. This information may be automatically coded into tables of information in the data base. Topology is recorded in three data tables, one for each type of spatial element - arc (arc topology table), node (node topology table) and polygon (polygon topology table) - and a fourth table is used to store the coordinates and vertices.
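A minimal relational sketch of how such topology tables might look, following the arc attributes described above (from/to nodes and left/right polygons). Table and column names are illustrative assumptions, not a particular vendor's format.

CREATE TABLE TOPO_NODES (
    NODE_ID INTEGER PRIMARY KEY,
    X       DOUBLE PRECISION,
    Y       DOUBLE PRECISION
);

CREATE TABLE TOPO_POLYGONS (
    POLY_ID  INTEGER PRIMARY KEY,
    LAND_USE VARCHAR(30)          -- example polygon attribute
);

CREATE TABLE TOPO_ARCS (
    ARC_ID     INTEGER PRIMARY KEY,
    FROM_NODE  INTEGER REFERENCES TOPO_NODES (NODE_ID),
    TO_NODE    INTEGER REFERENCES TOPO_NODES (NODE_ID),
    LEFT_POLY  INTEGER REFERENCES TOPO_POLYGONS (POLY_ID),
    RIGHT_POLY INTEGER REFERENCES TOPO_POLYGONS (POLY_ID)
);

-- Fourth table: the intermediate vertices (shape points) of each arc
CREATE TABLE ARC_VERTICES (
    ARC_ID INTEGER REFERENCES TOPO_ARCS (ARC_ID),
    SEQ_NO INTEGER,               -- order of the vertex along the arc
    X      DOUBLE PRECISION,
    Y      DOUBLE PRECISION,
    PRIMARY KEY (ARC_ID, SEQ_NO)
);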

Figure 3-8. Example of "built" topology.

Editing
During the topology generation process, problems such as overshoots, undershoots and spikes are either flagged for editing by the user or corrected automatically. Automatic editing involves the use of a tolerance value which defines the width of a buffer zone around objects within which adjacent objects should be joined. The tolerance value is related to the precision with which locations can be digitised.

Adding Attributes
Once the objects have been formed by building topology, attributes can be keyed in or imported from other digital data bases. Once added to the data base, attributes must be linked to the different objects. Attribute data is stored and manipulated in entirely separate ways from the locational data. Usually a Relational Data Base Management System (RDBMS) is used to store and manage attribute data and their links to the corresponding spatial objects.

Things to consider in a database

Essential differences between raster and vector (Hazelton, 1999)
The essential difference between the two approaches is how they deal with representing things in space.
Raster systems are built on the premise that there is something at all points of interest, so we will record something for every location. The space under consideration is exhausted by a regular data structure. The concern is not with the boundaries between objects as much as with the objects themselves, their interiors, or the phenomena they are representing. Spatial resolution is not as important as complete coverage. The presence of something is more important than its exact extent.
Vector systems are built on the premise that we only need to record and deal with the essential points. If there isn't something of significance at a location, don't record anything. Not all locations in space are referenced, and many are referenced only indirectly, as being inside a polygon. The data structure supports irregular objects and very high resolution. We are interested in the boundaries between objects at least as much as in the objects themselves, often more so. We need precise representation of linear objects, and this need overrides other needs for surface and area modelling of all but the simplest kind. Precision is the watchword in vector GIS, together with making spatial relationships explicit.
The explicit nature of the relationships in vector GIS requires 'topology'. It also allows much easier analysis of these kinds of relationships, especially connectivity between locations (points), which is done with lines. In raster GIS, we can figure out which cells are the eight surrounding the one we are currently in, so connectivity is implicit in the data structure, and we don't need all this extra stuff.

Spaghetti (Hazelton, 1999)
We drew a polygon by just making lines; we never explicitly say in the database 'this is a polygon'. We often call this kind of representation the spaghetti data model, after the way that a plate of spaghetti looks (and is structured).
While the spaghetti looks fine, it doesn't really satisfy our needs. As long as we just need to see a picture, it's fine. As soon as we need to do anything beyond looking, when we need to get the machine to do some analysis rather than the user, we run into problems.
A simple question might be: "If I follow this road (a line), what other roads join it?" This is rather an important question if you want the machine to tell you how to get from A to B. If we have a spaghetti model for our data storage, we can find out the answer to the question; however, we need to look at every single line and co-ordinate pair in the database and compare it to every line segment in the line we are starting from.
Why is this? At any moment, another line might cross ours, and we won't know it unless we test every single segment of that line against all the segments of our line, to see if they cross or meet. If there is a crossing or meeting, we can tag that line and keep going, but every such query requires a complete search and test of the entire database. As most GIS databases are fairly large, this is horribly inefficient. We can only consider it in a small system, such as MapInfo, and even there we do give the system a few hints to make things simpler. For big GIS, we use a thing called topology.

Topology (Hazelton, 1999)
When we look at the display of the spaghetti, we quickly see the polygons, the intersections and the like. This is because our brains are very powerful parallel-processing systems adapted to make sense of visual data very rapidly. Our lives depend on this capability, today and since the dawn of the species. But computers are painfully slow and awkward at this operation. They are good at crunching numbers, so we have to make the structure in the mess of spaghetti obvious to the machine, in a numerical form.

We call this 'topology'. Topology is a branch of mathematics that deals with very basic geometric concepts. Way before we think about angle and distance and size, there are more fundamental properties of objects, properties which don't change when we do a wide range of things to the object. For example, no matter how we manipulate an object, provided we don't tear it, we don't change the number of pieces there are of it, or the number of complete holes there are in it. If two objects are connected, the connectedness remains constant no matter how much we rotate, scale, move or otherwise manipulate them. One object is inside another until we tear open the outer one to remove the inner. So you can see that there are some very basic properties of objects that remain constant (or invariant) under a range of things that can be done to them (operations).
Extending this to vector GIS databases, we find that an object is a polygon no matter how many sides it has (beyond a basic minimum); that the holes in an object remain the same no matter how it is transformed into different map projections; and that a line goes from one end to the other at all times (i.e., there is a direction associated with it). We can build these things into the GIS by making them explicit. To do this requires a more developed data structure than just the spaghetti.

Topological Relationships (Hazelton, 1999)

1) Ownership and Component-ness
The most fundamental topological relationship is 'owns' or 'is a component of'. (You will note that these two relationships are actually two sides of the one relationship.) This allows us to build a definite structure into all the objects in the vector GIS database. It works like this.
We still have the great line strings of points, with the nodes at the ends. But we now give the line strings a definite identifier, a unique number (to keep the computer happy). The nodes we also keep a special note of in a separate area, linked back to the original data (which includes the co-ordinates). As we build up things like polygon boundaries, we link the line strings into chains, of which we also keep a special note. So we have a series of two-way relationships being built here. The nodes are a component of the lines, which are a component of the chains. The chains own the lines, which in turn own the nodes. We make this explicit in our database, so that we can find where everything is.
One of the beauties of this system is that ownership and component-ship are not exclusive. A node can be the end-point of several lines, for example, and a line obviously forms a boundary between (and is owned by) two polygons. This means that we only need to store things once in the database, reducing redundancy.
At the next level, we take collections of chains that form a closed loop, and reference these as the boundary of a polygon. The polygon then explicitly owns this collection of chains as its boundary. A polygon may have an outer boundary and several inner ones, such as islands in a lake.
We need to make sure that the relationships are two-way, so we need to have the chains refer to the polygons that own them. Each chain can only be owned by a maximum of two polygons in a 2-D representation, so we make explicit which two these are.
You will notice that in all the work to this point, co-ordinates have not been involved. This is an essential part of topology, and it means that the relationships hold true for any map projection of the data.

2) Direction
Another important topological relationship is the direction of a line. We take this to be from its 'from-node' to its 'to-node', naturally. In most cases, the from-node is just where the line started to be entered. The user never sees this direction, as it doesn't affect anything outside the topological structure. In many GIS, you can add a direction to a line, making it a directed line, where the direction has a special meaning, such as a one-way street. The GIS will store this information, but the direction that the user wants can be either direction of the line.
With direction, an interesting property occurs in 2-D (but not in 3-D). As we move along the line in its proper direction, we find that the two polygons that the line bounds are on either side, to the left and to the right. So we record the left and right polygon identifiers with the line data, and we use this to provide a link from the line to each polygon.
In fact, because of this role of the line being linked to all the other components, the line assumes major importance in 2-D GIS. In addition, there is a topological property called 'duality', which means that there are strong relationships between the components in both directions which must be made explicit for the data structure to work properly. By working with the lines as the basis of the data structure, we have a single step to get to all the other components.

3) Connectivity
The topological relationships we now have as explicit in the database enable us to tell very quickly what lines meet at which nodes. We can choose a single node and find all the lines to which it belongs, and so all the polygons in whose boundary it is a component. This makes it very quick to determine how pipeline networks inter-connect. It simplifies a lot of the kinds of queries that are involved in network analysis.

4) Adjacency
The other aspect of connectivity is the relationship between polygons. If two polygons share a boundary, they are adjacent. If they share just a common point, their adjacency is of a lower order. But by making the fundamental relationships explicit in the database, it is quick and easy to determine this adjacency and its degree. This helps in a number of different applications of spatial analysis.

5) Nestedness
Another topological relationship is that of having things inside other things. The database handles this by referring to closed loops of chains as boundaries, and noting which of them are internal boundaries, i.e. inside another polygon, and which are external. It is then very simple to search for common boundaries and seek nested objects.

6) How Many More?
In a 2-D GIS, there are quite a few different topological relationships. There are those between polygons, between polygons and lines, between lines and lines, between lines and points, between polygons and points, and between points and points. Some are quite simple; others are more complex.
As far as polygons and lines are concerned, a topologically sound database can handle only two kinds of relationships (fundamentally): either the two objects touch along a common boundary (polygons meet each other at lines and points, for example) or they do not touch at all. This is the state in a database when we have 'built' the topology.

Improper Relationships (Topological Division) (Hazelton, 1999)
When we enter data into a GIS, we can have 'improper' topological relationships occur. For instance, we may digitize two polygons in a single layer that overlap, so that at one point we will have two values for a single attribute. This presents us with problems in analysis, so we don't want this to happen. With spaghetti we can't control this, but if we build the topology correctly, we can ensure that for each attribute layer we have single values at any point.
(Remember that part of the basic idea of a vector GIS is that within a polygon, the attribute value is constant, changing sharply at the boundary. This is exactly the same as with a raster GIS: sharp changes at boundaries, no change within. We have not yet got to the point of a GIS that allows continuous variability.)
A similar circumstance arises when we undertake topological overlay. Here we have two layers that, while having 'good' topology, will naturally have overlapping polygons. It is perfectly reasonable to have a map where you have overlapping polygons representing different attributes. However, when we want to build the topology in the newly created layer (created by the overlay operation), we need to start breaking the large polygons down into small polygons, such that any one attribute has just one value within each polygon.
So in all cases of importance to us in this course, when we have the data structure in the state where all polygons are in one of the two basic topological states, and the same for lines, etc., we have eliminated all the improper relationships and can now proceed with analysis. We can be confident that we don't have any ambiguities in the spatial relationships expressed in the database, so that analysis will work properly.

Scale, Accuracy, Precision, Resolution (Hazelton, 1999)
An interesting myth that has grown around vector GIS is that of the scale-less database. The argument runs that since we can represent locations to fractions of a millimeter, we can work at a 1:1 scale, and so avoid the problems of scale in maps and the like. While this is a nice idea, even with GPS it is still a long way off.
The question is, how good is the input data? If I digitize locations from a 1:24,000 map, a location is good to about 12 meters. If I digitize a 1:500 map, the locations are good to about 0.25 meter. Note that for maps, this data quality is for 'well-defined points' only. Points that aren't well-defined, lines, polygons and the like don't count! So only a very small part of the map will actually be to that precision. How good is the rest? There are no standards for that part of the map.
I can measure objects on the Earth with GPS and get precision to 0.1 meter. With good surveying gear, I may even be able to get to 0.01 meter. We are still a long way from a millimeter, let alone a fraction of a millimeter. Yet it is easy to pull up co-ordinates to whatever number of decimal places one wishes.
The quality of the data, its accuracy if you like, is based very much on the precision of the measurements used in the database. But there is nothing in a GIS, in almost every case, to let the user know how good the data actually is while it is being used. You pick a point, and read out the co-ordinates to the fraction of a millimeter, and nothing springs up to say "Well actually, that's only good to ± 50 meters, you know." It is very misleading.
As GIS users, you need to be very aware of this issue. It is so easy to be led astray here, and many of your less well-educated users may fall into these pitfalls. Remember the Garbage In, Gospel Out situation; this is a very easy place to see it happen.
The resolution of the computer hardware is also an issue here. ARC/INFO works with real numbers (floating point) for co-ordinates, and these can be either single precision or double precision. Single precision is good to 6 or 7 significant figures, while double precision is good to 14 to 16 significant figures. If you are recording locations using UTM co-ordinates, you will only get meter resolution if you use single precision and the full co-ordinates. MGE, on the other hand, uses integers, so that every location is ultimately expressed as a number between 0 and 4.2 billion. There can be questions about the fine-grained nature of this, but if you are aware of the differences and what is happening, things will be OK.

Problems with Vector GIS (Hazelton, 1999)
A full-blown vector GIS, especially with an associated raster component, is an awesome system. ARC/INFO sports over 3,000 commands, while MGE is not far behind it (although it has a much more mouse-windows oriented interface). It is very easy to get lost in the complexity and intricacies of these products.
We have already looked at questions of precision, accuracy, resolution and scale. Vector systems have no built-in 'check' of the raster cell size as a give-away about their resolution. In many cases there is no metadata or data quality information to let you know about the data in the database. You may never know that part of your database was digitized from a 1:500,000 map, while all the rest is 1:24,000, and yet that difference could play havoc with analyses performed on the data.
Another issue we haven't touched on is the question of data conversion from raster to vector. We often need to do this to help a vector analysis. When the vectors are produced, there may be nothing (in the lineage part of data quality) to let you know that these were converted from a raster dataset of some resolution. When the vectors are smoothed and the data is included, how will we know there is anything different about those lines and polygons?
Similar problems going from vector to raster are less of an issue, as the vector looks like it should be of a higher resolution and converts easily. But was the vector as good as the raster resolution? How can you tell? It is surprising how many raster GIS have the same resolution as a vector system.
As with all science, you can avoid fooling other people if you first don't fool yourself. If you know about the system, its capabilities, the data and what it should be able to achieve, you can do well with vector GIS.

Conclusions (Hazelton, 1999)
Vector GIS is a powerful tool for spatial representation and analysis. Yet it is open to misuse and abuse, like any other information system. Some of the potential traps have been pointed out, and you must be aware of them. If you can focus on the application rather than the hardware and software, you will do a good job with GIS in general.

GIS DATABASE DESIGN
Before actually building the tables, forms, and other objects that will make up your database, it is important to take time to design your database. A good database design is the keystone to creating a database that does what you want it to do effectively, accurately, and efficiently.

Database design stages
1. Conceptual database design (focus on the content of the database, i.e. the user GIS data needs, to establish the GIS functional and data requirements; it is done by listing the database elements)
2. Physical database design (the actual structure of the database is developed and documented based on the content - features and attributes - identified above)
3. Database implementation (the actual coding of the physical database)

These are the basic steps in designing a database:
1. Determine the purpose of your database.
2. Determine the tables you need in the database.
3. Determine the fields you need in the tables.
4. Identify fields with unique values.
5. Determine the relationships between tables.
6. Refine your design.
7. Add data and create other database objects (tables, queries, forms, reports, macros, and modules).
8. Use Microsoft Access analysis tools.

Determine the purpose of your database
The first step in designing a database is to determine the purpose of the database and how it's to be used. You need to know what information you want from the database. From that, you can determine what subjects you need to store facts about (the tables) and what facts you need to store about each subject (the fields in the tables). Talk to people who will use the database. Brainstorm about the questions you'd like the database to answer. Sketch out the reports you'd like it to produce. Gather the forms you currently use to record your data. Examine well-designed databases similar to the one you are designing.

Determine the tables you need
Determining the tables can be the trickiest step in the database design process. That's because the results you want from your database - the reports you want to print, the forms you want to use, the questions you want answered - don't necessarily provide clues about the structure of the tables that produce them. A table is the fundamental structure of a relational database management system. A table is an object that stores data in records (rows) and fields (columns). The data is usually about a particular category of things, such as employees or orders. A table should not contain duplicate information, and information should not be duplicated between tables. When each piece of information is stored in only one table, you update it in one place. This is more efficient, and also eliminates the possibility of duplicate entries that contain different information. For example, you would want to store each customer address and phone number once, in one table. Each table should contain information about one subject. When each table contains facts about only one subject, you can maintain information about each subject independently from other subjects. For example, you would store customer addresses in a different table from the customers' orders, so that you could delete one order and still maintain the customer information.
In table Datasheet view, you can add, edit, or view the data in a table. You can also check the spelling and print your table's data, filter or sort records, change the datasheet's appearance, or change the table's structure by adding or deleting columns. You can sort, filter, or find records in the rows of your datasheet by the data in one or more adjacent columns.

You use a unique tag called a primary key to identify each record in your table. Just as a license plate number identifies a car, the primary key uniquely identifies a record. A table's primary key is used to refer to the table's records in other tables.

Determine the fields you need
Each table contains information about the same subject, and each field in a table contains individual facts about the table's subject. For example, a customer table may include company name, address, city, state, and phone number fields. A field is an element of a table that contains a specific item of information, such as last name. A field is represented by a column or cell in a datasheet. When sketching out the fields for each table, keep these tips in mind:
Relate each field directly to the subject of the table.
Don't include derived or calculated data (data that is the result of an expression).
Include all the information you need.
Store information in its smallest logical parts (for example, First Name and Last Name, rather than Name).

Identify fields with unique values
In order for the DBMS to connect information stored in separate tables - for example, to connect a customer with all the customer's orders - each table in your database must include a field or set of fields that uniquely identifies each individual record in the table. Such a field or set of fields is called a primary key. The power of a relational database system such as Microsoft Access comes from its ability to quickly find and bring together information stored in separate tables using queries, forms, and reports. Once you designate a primary key for a table, to ensure uniqueness, the DBMS will prevent any duplicate or Null values from being entered in the primary key fields.
A query is a question about the data stored in your tables, or a request to perform an action on the data. A query can bring together data from multiple tables to use as the source of data for a form or report. A form is a database object on which you place controls for taking actions or for entering, displaying, and editing data in fields. A report is a database object that presents information formatted and organized according to your specifications. Examples of reports are sales summaries, phone lists, and mailing labels. There are three kinds of primary keys that can be defined in Microsoft Access: AutoNumber, single-field, and multiple-field.

Determine the relationships between tables
Now that you've divided your information into tables and identified primary key fields, you need a way to tell the DBMS how to bring related information back together again in meaningful ways. To do this, you define relationships between tables. A foreign key is one or more table fields that refer to the primary key field or fields in another table. A foreign key indicates how the tables are related - the data in the foreign key and primary key fields must match.

Refine the design
After you have designed the tables, fields, and relationships you need, it's time to study the design and detect any flaws that might remain. It is easier to change your database design now, rather than after you have filled the tables with data. Use Microsoft Access to create your tables, specify relationships between the tables, and enter a few records of data in each table. See if you can use the database to get the answers you want. Create rough drafts of your forms and reports and see if they show the data you expect. Look for unnecessary duplications of data and eliminate them.

Enter data and create other database objects
When you are satisfied that the table structures meet the design goals described here, then it's time to go ahead and add all your existing data to the tables. You can then create any queries, forms, reports, macros, and modules that you may want.

Use Microsoft Access analysis tools
Microsoft Access includes two tools that can help you to refine your database design. The Table Analyzer Wizard can analyze the design of one table at a time, can propose new table structures and relationships if appropriate, and can restructure a table into new related tables if that makes sense.
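To make the primary key / foreign key relationship concrete, here is a minimal SQL sketch using a hypothetical customers-and-orders pair of tables (the table and column names are illustrative, not from the source); the foreign key in ORDERS must match a primary key value in CUSTOMERS.

CREATE TABLE CUSTOMERS (
    CUSTOMER_ID INTEGER PRIMARY KEY,   -- primary key: unique, no Nulls
    COMPANY     VARCHAR(60),
    PHONE       VARCHAR(20)
);

CREATE TABLE ORDERS (
    ORDER_ID    INTEGER PRIMARY KEY,
    CUSTOMER_ID INTEGER REFERENCES CUSTOMERS (CUSTOMER_ID),  -- foreign key
    ORDER_DATE  DATE
);

-- Bring a customer together with all of that customer's orders
SELECT C.COMPANY, O.ORDER_ID, O.ORDER_DATE
FROM CUSTOMERS C
JOIN ORDERS O ON O.CUSTOMER_ID = C.CUSTOMER_ID;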

The Performance Analyzer can analyze your entire database and make recommendations and suggestions for improving it, and it can also implement those recommendations and suggestions. For additional ideas on designing a database, you may want to look at the Northwind sample database and the database schemas for one or more of the databases that you can create with the Database Wizard.

What is involved in the design of a database
the logic elements (they provide the positional (x,y) reference structure that holds graphic information, and are designated as nodes, links, chains, and areas)
the graphic elements, which are assigned to logic elements (i.e. the design of the graphic elements for the features that have to be represented graphically, maintained, and accessed in graphic fashion)
the attributes (alphanumeric data), which are assigned/linked to the features; the display rules for the attributes are also included
the GIS data relationships (i.e., relations between feature classes and their attribute types, relations among attribute types, and relationships among features)
the digital data to be included in the database, e.g. raster images, satellite imagery, existing digital land-based data (maps) and facilities/assets data
the database has to be logically structured to relate similar data types to each other, either through layering or object-based approaches

          Should I use a macro or Visual Basic? In Microsoft Access, you can accomplish many tasks with macros or through the user interface. In many other database programs, the same tasks require programming. Whether to use a macro or Visual Basic for Applications often depends on what you want to do.When should I use a macro?Macros are an easy way to take care of simple details such as opening and closing forms, showing and hiding toolbars, and running reports. You can quickly and easily tie together the database objects you've created because there's little syntax to remember; the arguments for each action are displayed in the lower part of the Macro window. In addition to the ease of use macros provide, you must use macros to:

 · Make global key assignments.· Carry out an action or series of actions when a database first opens. However, you can use the Startup dialog box to cause certain things to occur when a database opens, such as open a form. When should I use Visual Basic?You should use Visual Basic instead of macros if you want to: · Make your database easier to maintain. Because macros are separate objects from the forms and reports that use them, a database containing many macros that respond to events on forms and reports can be difficult to maintain. In contrast, Visual Basic event procedures are built into the form's or report's definition. If you move a form or report from one database to another, the event procedures built into the form or report move with it.· Create your own functions. Microsoft Access includes many built-in functions, such as the IPmt function, which calculates an interest payment. You can use these functions to perform calculations without having to create complicated expressions. Using Visual Basic, you can also create your own functions either to perform calculations that exceed the capability of an expression or to replace complex expressions. In addition, you can use the functions you create in expressions to apply a common operation to more than one object. · Mask error messages. When something unexpected happens while a user is working with your database, and Microsoft Access displays an error message, the message can be quite mysterious to the user, especially if the user isn't familiar with Microsoft Access. Using Visual Basic, you can detect the error when it occurs and either display your own message or take some action.· Create or manipulate objects. In most cases, you'll find that it's easiest to create and modify an object in that object's Design view. In some situations, however, you may want to manipulate the definition of an object in code. Using Visual Basic, you can manipulate all the objects in a database, as well as the database itself. · Perform system-level actions. You can carry out the RunApp action in a macro to run another Windows-based or MS-DOS–based application from your application, but you can't use a macro to do much else outside Microsoft Access. Using Visual Basic, you can check to see if a file exists on the system, use Automation or dynamic data exchange (DDE) to communicate with other Windows-based applications such as Microsoft Excel, and call functions in Windows dynamic-link libraries (DLLs). · Manipulate records one at a time. You can use Visual Basic to step through a set of records one record at a time and perform an operation on each record. In contrast, macros work with entire sets of records at once.· Pass arguments to your Visual Basic procedures. You can set arguments for macro actions in the lower part of the Macro window when you create the macro, but you can't change them when the macro is running. With Visual Basic, however, you can pass

arguments to your code at the time it is run, or you can use variables for arguments - something you can't do in macros. This gives you a great deal of flexibility in how your Visual Basic procedures run.

Structured Query Language (SQL)
A language used in querying, updating, and managing relational databases. SQL can be used to retrieve, sort, and filter specific data to be extracted from the database.
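For instance, a retrieve-sort-filter query might look like the following; the CITIES table and its columns are hypothetical, and the exact dialect varies between DBMSs such as Microsoft Access and Oracle.

SELECT NAME, POPULATION        -- retrieve specific columns
FROM CITIES                    -- hypothetical table
WHERE POPULATION > 100000      -- filter the rows
ORDER BY POPULATION DESC;      -- sort the result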

The design of the database had to take care of both the basic data elements (micro data) and aggregated data (macro data), as data was obtained in both formats. Here, records of individual persons and households collected in the survey, in their raw form as well as in their final corrected form, together with the results of processing in the form of aggregations, are stored with a view to preserving them for the future and to making access as easy as possible at all times. Some of the main advantages of a micro-database are the possibility of retrieving data at, in principle, any level of detail, and of building sampling frames. Since micro data could be used illegally in efforts to disclose sensitive information, privacy concerns must always be taken into consideration; here, names were restricted and removed from the general display. Aggregated census data were stored in that format to preserve earlier aggregations and to provide readily usable information. Micro data were saved to allow aggregations to be made that were not programmed initially.

 

Demographic Geocoding/Georeferencing

Georeferencing is the process of assigning a geographic location (e.g. latitude and longitude) to a geographic feature based on its address; this was carried out to be able to convert existing addresses automatically into a GIS database. For this to be possible, the digital record for the feature must have a field which can be linked to a geographic base file with known geographic coordinates.

Considering the way population data is collected using field surveys, which are the main source of data, this data has to be georeferenced (geocoded) before it is analyzed in GIS. Demographic data is usually referenced by point and by area; the integration of the two, as highlighted by Bracken (1994), can be done in three ways. First, point address locations may be allocated to census zones, so that in effect the address data becomes another aggregate field of the zonal record. Second, each address location can be assigned data from its enveloping zone, so that the point takes on the attributes of its surrounding area. Third, both types of data can be re-represented geographically onto a neutral base in the form of a georeferenced grid. It is for this third alternative that Bracken (1994) developed a surface model which generates a spatial distribution of population as a fine and variable resolution geographical grid, advocated and developed further by Bracken and Martin (1995). It is this technique that is partly employed here to derive surfaces from the points representing polygons, but this time using buildings as the georeferencing spatial units, since among the information recorded in population data collection is the place of residence, usually the building number. This is done to provide a way to disaggregate demographic analysis so that it can easily be combined with other spatial analysis. It is accomplished by transferring the attributes of the larger feature (e.g. a road) on to the smaller-dimensional features (buildings) so that individuals are geocoded on the correct road; this is explained further below.

There are many techniques of georeferencing (Cowen, 1997); this thesis employed three: 1) assigning a completely new unique field; 2) database queries that read fields from tables and join them to other tables; 3) georeferencing any set of addresses by joining it to the geographic base file on the basis of common fields.
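A sketch of technique 3 as a relational join, under assumed table and column names (ADDRESSES, GEOCODE_BASE, STREET_NAME, BUILDING_NO): address records pick up coordinates from a geographic base file through their common fields.

SELECT A.PERSON_ID, G.X, G.Y
FROM ADDRESSES A
JOIN GEOCODE_BASE G
  ON A.STREET_NAME = G.STREET_NAME
 AND A.BUILDING_NO = G.BUILDING_NO;   -- common fields link the two files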

With that we are in a position to:
o carry out direct queries of the spatial database;
o determine the coordinates of a single address with a direct query of the GIS database by entering the address in a dialog box.
Once the particular address is located on a map, the coordinates can usually be read directly from the screen. For the buildings, the road number was added to each building record. This can simply be a relational data base join in which the geographic coordinates of the basemap are linked to the address records and made spatial.

Computer-Based Analysis for Public Management
Relational Database Design
Thomas H. Grayson 2000

Relational Database Design
In this study a relational database is used and a relational model is designed: all data have been represented as tables. Tables are comprised of rows and columns; rows and columns are unordered (i.e., the order in which rows and columns are referenced does not matter). Each table has a primary key, a unique identifier constructed from one or more columns. A table is linked to another by including the other table's primary key; such an included column is called a foreign key. As for how the primary keys were created: for the individual data, each record was given a unique identifier number, which is the first column of its table, and likewise for the buildings.

Qualities of a Good Database Design
Reflects the real-world structure of the problem
Can represent all expected data over time
Avoids redundant storage of data items
Provides efficient access to data
Supports the maintenance of data integrity over time
Clean, consistent, and easy to understand

Introduction to Entity-Relationship Modeling
Entity-Relationship (E-R) modeling: a method for designing databases (a simplified version is presented here)
Represents the data by entities that have attributes; an entity is a class of distinct identifiable objects or concepts
Entities have relationships with one another
The result of the process is a normalized database that facilitates access and avoids duplicate data

E-R Modeling Process
Identify the entities that your database must represent
Determine the cardinality relationships among the entities and classify them as one of:
o One-to-one (e.g., a parcel has one address)
o One-to-many (e.g., a parcel may be involved in many fires)
o Many-to-many (e.g., parcel sales: a parcel may be sold by many owners, and an individual owner may sell many parcels)
Draw the entity-relationship diagram
Determine the attributes of each entity
Define the (unique) primary key of each entity

From E-R Model to Database Design
Entities with one-to-one relationships should be merged into a single entity
Each remaining entity is modeled by a table with a primary key and attributes, some of which may be foreign keys
One-to-many relationships are modeled by a foreign key attribute in the table representing the entity on the "many" side of the relationship (e.g., the FIRES table has a foreign key that refers to the PARCELS table)
Many-to-many relationships among two entities are modeled by a third table that has foreign keys that refer to the entities. These foreign keys should be included in the relationship table's primary key, if appropriate
Commercially available tools can automate the process of converting an E-R model to a database schema

Database Design Rules of Thumb
Keep data items atomic (e.g., first and last names are separate). Concatenating columns together later on-the-fly is generally easy, but separating them is not.
o What is an example of where parsing subfields from a column may go awry?
o When might you want to include the combined fields in a column anyway?
Define the primary key first. Use a descriptive name (PARCELID, not ID)
In fact, use descriptive names that give a new user a decent chance of guessing what they mean for all your columns! (E.g., use PARCEL_COUNT rather than PACT)
Use a single column for the primary key whenever possible; multi-column primary keys are appropriate for many-to-many relationships
Use lookup tables rather than storing long values
Use numeric keys whenever possible (What about ZIP codes?)
Avoid intelligent keys (exception: lookup tables)
Avoid using multiple columns to represent a one-to-many relationship (e.g., columns such as CHILD1, CHILD2 in a table called PARENT, rather than putting the children in a separate table); see the sketch after this list.

For readability, use the primary key name for foreign keys unless the same foreign key is used multiple times in the same table (e.g., state of work and state of residence for a person might both be foreign keys that reference a table of states)

Do not include two columns whose values are linked together (e.g., county name and county ID) unless one of the columns is the primary key of the table

Avoid allowing NULL values in columns that have a discrete range of possible values (e.g., integers between 1 and 10, inclusive)

(not applicable to DBF files, which do not support NULLs)
Avoid using multiple tables with similar structures that represent minor variants on the same entity (e.g., putting Boston parcels and Cambridge parcels in separate tables).
Why is this rule often hard to practice with GIS?
Plan ahead for transferring data to a different database. For example, you may want to move data from Oracle to DBF, or Microsoft Access to Oracle.
o Avoid column names with characters other than upper-case letters (A-Z), digits (0-9), and the underscore (_). Other characters may not be accepted by a database. Some database systems may be case sensitive with regard to column names, while others are not.
o Keep your column names relatively short. Different databases support different numbers of characters in column names (e.g., 30 for Oracle, 64 for Microsoft Access, 10 for DBF). Try to make column names differ in the first few characters rather than at the end, to avoid column name duplication if the names are truncated during the conversion process (e.g., use COL1 and COL2, not LONG_COLUMN_NAME_1 and LONG_COLUMN_NAME_2).

Note that keeping column names short may be at odds with keeping your column names meaningful for neophytes. Be aware that you are making a tradeoff!

Remember that these are rules of thumb, not absolute laws! Bend the rules if you must but have a justification for your decision. The limitations of a GIS software package often provide a good reason.
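As promised in the rules above, here is a minimal sketch of the one-to-many rule: instead of CHILD1, CHILD2, ... columns in a PARENT table, the children go into their own table with a foreign key back to PARENT. Column names beyond PARENT and the CHILD1/CHILD2 example are assumptions for illustration only.

-- Avoid: PARENT(PARENT_ID, CHILD1, CHILD2, ...)
-- Prefer a separate table, one row per child:
CREATE TABLE PARENT (
    PARENT_ID INTEGER PRIMARY KEY,
    NAME      VARCHAR(60)
);

CREATE TABLE CHILDREN (
    CHILD_ID  INTEGER PRIMARY KEY,
    PARENT_ID INTEGER REFERENCES PARENT (PARENT_ID),  -- one parent, many children
    NAME      VARCHAR(60)
);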

Example: The Parcels Database

Tables and Primary Keys

Table     Primary Key
PARCEL    PID, WPB
OWNERS    OWNERNUM
FIRES     PID, WPB, FDATE
TAX       PID, WPB

Cardinality Relationships

Primary Table Columns     Foreign Table Columns    Cardinality
OWNERS.OWNERNUM           PARCEL.ONUM              One-to-many
PARCEL.PID, PARCEL.WPB    FIRES.PID, FIRES.WPB     One-to-many
PARCEL.PID, PARCEL.WPB    TAX.PID, TAX.WPB         One-to-one

Parcels Database Enhancements
Eliminate the intelligent key for the parcel ID
Make the primary key of PARCEL a single column (e.g., PARCELID)
Merge the TAX and PARCEL tables, or add the year to the TAX table to keep track of taxes over time (changes the relationship to one-to-many)
Rename the PARCEL table to PARCELS for consistency with the other tables
Rename the PARCEL foreign key ONUM to be consistent with the OWNERS table
Improve column names
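A hedged SQL sketch of what the enhanced parcels schema might look like after applying the changes above. Column names other than those appearing in the notes (PARCELID, OWNERNUM, FDATE) are assumptions added for illustration.

CREATE TABLE OWNERS (
    OWNERNUM INTEGER PRIMARY KEY,
    NAME     VARCHAR(60)
);

CREATE TABLE PARCELS (
    PARCELID INTEGER PRIMARY KEY,                     -- single-column, non-intelligent key
    OWNERNUM INTEGER REFERENCES OWNERS (OWNERNUM),    -- renamed from ONUM
    ADDRESS  VARCHAR(80)
);

CREATE TABLE FIRES (
    PARCELID INTEGER REFERENCES PARCELS (PARCELID),
    FDATE    DATE,
    PRIMARY KEY (PARCELID, FDATE)                     -- a parcel may have many fires
);

CREATE TABLE TAX (
    PARCELID INTEGER REFERENCES PARCELS (PARCELID),
    TAX_YEAR INTEGER,                                 -- year added: now one-to-many
    AMOUNT   DECIMAL(10,2),
    PRIMARY KEY (PARCELID, TAX_YEAR)
);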

 

Last modified 29 October 2000 by Wadembere, M. I.   

UNIT 67 - IMPLEMENTATION ISSUES

Compiled with assistance from Ken Dueker, Portland State University A. INTRODUCTION B. STAGE THEORIES OF COMPUTING GROWTH

o Nolan model of computing growth o Incremental model o Radical model

C. RESISTANCE TO CHANGE D. IMPLEMENTATION PROBLEMS

o Overemphasis on technology o Rigid work patterns o Organizational inflexibility o Decision-making procedures o Assignment of responsibilities o System support staffing o Integration of information requirements

E. STRATEGIES TO FACILITATE SUCCESS o Management involvement o Training and education o Continued promotion o Responsiveness o Implementation and follow-up plans

REFERENCES EXAM AND DISCUSSION QUESTIONS NOTES

UNIT 67 - IMPLEMENTATION ISSUES

Compiled with assistance from Ken Dueker, Portland State University A. INTRODUCTION

most organizations acquiring GIS technology are relatively sophisticated o some level of investment already exists in electronic data processing

(EDP) o they have experience with database management and mapping systems

and some combination of mainframes, minis and micros

GIS technology will be moving into an environment with its own institutional structures - departments, areas of responsibility

o as an integrating technology, GIS is more likely to require organizational changes than other innovations

o the need for changes - cooperation, breaking down of barriers etc. - may have been used as arguments for GIS

o existing structures are already changing - centralized computing services with large staffs are disappearing because new distributed workstation hardware requires less support

organizational change is often difficult to achieve and can lead to failure of the GIS project

o organizational and institutional issues are more often reasons for failure of GIS projects than technical issues

B. STAGE THEORIES OF COMPUTING GROWTH several models have been proposed for the growth of computing within

organizations o growth is divided into a number of stages

Nolan model of computing growth the Nolan (1973) model has 4 stages:

Stage 1: Initiation

o computer acquisition o use for low profile tasks within a major user department o early problems appear

Stage 2: Contagion

o efforts to increase use of computing o desire to use inactive resources completely o supportive top management o fast rise in costs

Stage 3: Control

o efforts to control computing expenditures
     policy and management board created

o efforts to centralize computing and control o formal systems development policies are introduced o rate of increase in cost slows

charge-back policies introduced

Stage 4: Integration

o refinement of controls greater maturity in management of computing computing is seen as an organization-wide resource

o application development continues in a controlled way o costs rise slowly and smoothly

charge-back policy might be modified or abandoned how does this model fit GIS experience?

o two versions - incremental and radical

Incremental model

GIS is a limited expansion of existing EDP facilities, no major organizational changes required
  o GIS will be managed by the EDP department as a service
  o probably run on EDP's mainframe
  o this model fits AM/FM and LIS applications best - adding geographical access to an existing administrative database

GIS acquisition will likely be initiated by one or two departments, other departments encouraged to support by management
  o thus it begins at stage 2 of Nolan's model
  o if acquisition is successful, use and costs will grow rapidly, leading to control in stage 3

Radical model

GIS is independent of existing EDP facilities, e.g. uses micros instead of the EDP mainframe, may be promoted by staff with little or no history of EDP use
  o EDP department may resist acquisition, or attempt to persuade management to adopt an incremental-type strategy instead
  o may be strong pressure to make GIS hardware compatible with the main EDP facility to minimize training/maintenance costs

this model is more likely in GIS applications with a strong analytical component, e.g. resource management, planning

model assumes that GIS will not require a large supporting infrastructure - unlike a central EDP facility with staff of operators, programmers, analysts, consultants

unlike the incremental model, this begins at stage 1 of Nolan's model
  o few systems have progressed beyond stage 2 - the process of contagion is still under way in most organizations - GIS is still new
  o stage 2 is slow in GIS because of the need to educate/train users in the new approach - GIS does not replace existing manual procedures in many applications (unlike many EDP applications, e.g. payroll)
  o support by management may evaporate before the contagion period is over - never get to stages 3 and 4

we have little experience of well-controlled (stage 3), well-integrated (stage 4) systems at this point in time

C. RESISTANCE TO CHANGE

all organizations are conservative

resistance to change has always been a problem in technological innovation
  o e.g. early years of the industrial revolution

change requires leadership
  o stage 1 requires a "missionary" within an existing department
  o stage 2 requires commitment of top management, similar commitment of individuals within departments
  o despite the economic, operational, political advantages of GIS, the technology is new and outside many senior managers' experience

leaders take great personal risk
  o ample evidence of past failure of GIS projects
  o initial "missionary" is an obvious scapegoat for failure
  o Rhind (1988), Chrisman (1988) document the role of various leaders in the early technical development of GIS - similar roles within organizations will likely never be documented

GIS innovation is a sufficiently radical change within an organization to be called a "paradigm shift"
  o a paradigm is a set of rules or concepts that provide a framework for conducting an organization's business
  o the role of paradigms in science is discussed by Kuhn (1970)
  o use of GIS to support various scientific disciplines (biology, archaeology, health science) may require a paradigm shift

D. IMPLEMENTATION PROBLEMS

Foley (1988) reviews the problems commonly encountered in GIS implementation, and common reasons for failure
  o reasons are predominantly non-technical

Overemphasis on technology

planning teams are made up of technical staff, emphasize technical issues in planning and ignore managerial issues

planning teams are forced to deal with short-term issues, have no time to address longer-term management issues

Rigid work patterns

it is difficult for the planning team to foresee necessary changes in work patterns

a formerly stable workforce will be disrupted
  o some jobs will disappear
  o jobs will be redefined, e.g. drafting staff reassigned to digitizing

some staff may find their new jobs too demanding
  o former keyboard operators may now need to do query operations
  o drafting staff now need computing skills

people comfortable in their roles will not seek change
  o people must be persuaded of the benefits of change through education, training programs

productivity will suffer unless the staff can be persuaded that the new job is more challenging, better paid etc.

Organizational inflexibility

planning team must foresee necessary changes in reporting structure, the organization's "wiring diagram"

departments which are expected to interact and exchange data must be willing to do so

Decision-making procedures

many GIS projects are initiated by an advisory group drawn from different departments
  o this structure is adequate for early phases of acquisition but must be replaced with an organization with well-defined decision-making responsibility for the project to be successful
  o it is usually painful to give a single department authority (funds must often be reassigned to that department), but the rate of success has been higher where this has been done
  o e.g. many states have assigned responsibility for GIS operation to a department of natural resources, with mandated consultation with other user departments through committees

project may be derailed if any important or influential individuals are left out of the planning process

Assignment of responsibilities

assignment is a subtle mixture of technical, political and organizational issues
  o typically, assignment will be made on technical grounds, then modified to meet pressing political, organizational issues

System support staffing

a multi-user GIS requires at minimum:
  o a system manager responsible for day-to-day operation, staffing, financing, meeting user requirements
  o a database manager responsible for database design, planning data input, security, database integrity

planning team may not recognize the necessity of these positions

in addition, the system will require
  o staff for data input, report production
  o applications programming staff for initial development, although these may be supplied by the vendor

management may be tempted to fill these positions from existing staff without adequate attention to qualifications

personnel departments will be unfamiliar with the nature of the positions, qualifications required and salaries

Integration of information requirements

management may see integration as a technical data issue, not recognize the organizational responses which may be needed to make integration work at an institutional level

E. STRATEGIES TO FACILITATE SUCCESS

Management involvement

management must take a more active role than just providing money and other resources

must become actively involved by supporting:
  o implementation of multi-disciplinary GIS teams
  o development of organizational strategies for crossing internal political boundaries
  o interagency agreements to assist in data sharing and acquisition

must be aware that most GIS applications development is a long-term commitment

Training and education

staff and management must be kept current in the technology and applications

Continued promotion

the project staff must continue to promote the benefits of the GIS after it has been adopted to ensure continued financial and political support

projects should be of high quality and value

a high profile project will gain public support
  o an example is the Newport Beach, CA tracking of the 1990 oil spill (see Johansen, 1990)

Responsiveness

the project must be seen to be responsive to users' needs

Implementation and follow-up plans

carefully developed implementation plans and plans for checking on progress are necessary to ensure controlled management and continued support

follow-up plans must include assessment of progress, including:
  o check points for assessing project progress
  o audits of productivity, costs and benefits

REFERENCES

Chrisman, N.R., 1988. "The risks of software innovation: a case study of the Harvard lab," The American Cartographer 15:291-300.

Foley, M.E., 1988. "Beyond the bits, bytes and black boxes: institutional issues in successful LIS/GIS management," Proceedings, GIS/LIS 88, ASPRS/ACSM, Falls Church, VA, pp. 608-617.

Forrest, E., G.E. Montgomery, G.M. Juhl, 1990. Intelligent Infrastructure Workbook: A Management-Level Primer on GIS, A-E-C Automation Newsletter, PO Box 18418, Fountain Hills, AZ 85269-8418. Describes issues in developing management support during project planning and suggests strategies for successful adoption of a project.

Johansen, E., 1990. "City's GIS tracks the California oil spill," GIS World 3(2):34-7.

King, J.L. and K.L. Kraemer, 1985. The Dynamics of Computing, Columbia University Press, New York. Presents a model of adoption of computing within urban governments, and results of testing the model on two samples of cities. Includes discussion of adoption factors and the Nolan stage model.

Kuhn, T.S., 1970. The Structure of Scientific Revolutions, University of Chicago Press, Chicago.

Nolan, R.L., 1973. "Managing the computer resource: a stage hypothesis," Communications of the ACM 16:399-405.

Rhind, D.W., 1988. "Personality as a factor in the development of a discipline: the example of computer-assisted cartography," The American Cartographer 15:277-290.

EXAM AND DISCUSSION QUESTIONS

1. Summarize the Nolan model of staged development of a computing environment, and discuss its validity for a GIS project.

2. Hay (Hay, A.M., 1989. "Commentary," Environment and Planning A 21:709) argues that GIS is a technical shift rather than a paradigm shift. Do you agree with his arguments?

3. The Nolan model does not appear to allow for project failure, which has been a consistent problem in the history of GIS. How could the model be elaborated to include the possibility of failure?

4. "Effective leadership in technological innovation requires both tenacious vision and the capacity to survive a long time". Discuss this comment in the context of GIS.


HISTORY OF GIS http://www.geog.ubc.ca/courses/klink/gis.notes/ncgia/u23.html

This unit provides a very brief review of some important milestones in the development of GIS. Of course, it is likely there are some important stages we have omitted. It is perhaps a little too early yet to get a good perspective on the history of GIS.

A. INTRODUCTION

development of GIS was influenced by:
  o key groups, companies and individuals
  o timely development of key concepts

content of this unit is concerned with North America

outside North America, significant developments occurred at the Experimental Cartography Unit in the UK
  o history of this group has been documented by Rhind (1988)

this unit draws on a preliminary "genealogy of GIS" assembled in 1989 by Donald Cooke of Geographic Data Technologies Inc.

B. HISTORIC USE OF MULTIPLE THEME MAPS

idea of portraying different layers of data on a series of base maps, and relating things geographically, has been around much longer than computers
  o maps of the Battle of Yorktown (American Revolution) drawn by the French cartographer Louis-Alexandre Berthier contained hinged overlays to show troop movements
  o the mid-19th century "Atlas to Accompany the Second Report of the Irish Railway Commissioners" showed population, traffic flow, geology and topography superimposed on the same base map
  o Dr. John Snow used a map showing the locations of death by cholera in central London in September 1854 to track the source of the outbreak to a contaminated well - an early example of geographical analysis

C. EARLY COMPUTER ERA

several factors caused a change in cartographic analysis:
  o computer technology - improvements in hardware, esp. graphics
  o development of theories of spatial processes in economic and social geography, anthropology, regional science
  o increasing social awareness, education levels and mobility, awareness of environmental problems

integrated transportation plans of the 1950s and 60s in Detroit, Chicago
  o required integration of transportation information - routes, destinations, origins, time
  o produced maps of traffic flow and volume

University of Washington, Department of Geography, research on advanced statistical methods, rudimentary computer programming, computer cartography, most active 1958-61 (note 1):
  o Nystuen - fundamental spatial concepts - distance, orientation, connectivity
  o Tobler - computer algorithms for map projections, computer cartography
  o Bunge - theoretical geography - geometric basis for geography - points, lines and areas
  o Berry's Geographical Matrix of places by characteristics (attributes) - regional studies by overlaying maps of different themes - systematic studies by detailed evaluation of a single layer

Note 1: see pages 62-66 in Johnston, R.J., 1983. Geography and Geographers: Anglo-American Human Geography since 1945, 2nd edition, Edward Arnold (Publishers), London.

D. CANADA GEOGRAPHIC INFORMATION SYSTEM (CGIS)

Canada Geographic Information System is an example of one of the earliest GISs developed, started in the mid 1960s

is a large scale system still operating today

its development provided many conceptual and technical contributions

Purpose

to analyze the data collected by the Canada Land Inventory (CLI) and to produce statistics to be used in developing land management plans for large areas of rural Canada

the CLI created maps which:
  o classify land using various themes: soil capability for agriculture, recreation capability, capability for wildlife (ungulates), capability for wildlife (waterfowl), forestry capability, present land use, shoreline
  o were developed at map scales of 1:50,000
  o use a simple rating scheme, 1 (best) to 7 (poorest), with detailed qualification codes, e.g. on the soils map these may indicate bedrock, shallow soil, alkaline conditions

product of CLI was 7 primary map layers, each showing area objects with homogeneous attributes
  o other map layers were developed subsequently, e.g. census reporting zones

perception was that computers could perform analyses once the data had been input

Technological innovations

CGIS required the development of new technology
  o no previous experience in how to structure data internally
  o no precedent for GIS operations of overlay, area measurement
  o experimental scanner had to be built for map input

very high costs of technical development
  o cost-benefit studies done to justify the project were initially convincing
  o major cost over-runs
  o analysis behind schedule

by 1970 the project was in trouble
  o failure to deliver promised tabulations, capabilities

completion of database, product generation under way by mid 1970s
  o main product was statistical summaries of the area with various combinations of themes
  o later enhancement allowed output of simple maps

CGIS still highly regarded in late 1970s, early 1980s as a center of technological excellence despite aging of database
  o attempts were made to adapt the system to new data
  o new functionality added, especially networking capability and remote access
  o however, this was too late to compete with the new vendor products of the 1980s

Key innovative ideas in CGIS

use of scanning for input of high density area objects
  o maps had to be redrafted (scribed) for scanning
  o note: scribing is as labor intensive as digitizing

vectorization of scanned images

geographical partitioning of data into "map sheets" or "tiles" but with edgematching across tile boundaries

partitioning of data into themes or layers

use of an absolute system of coordinates for the entire country with precision adjustable to resolution of data
  o number of digits of precision can be set by the system manager and changed from layer to layer

internal representation of line objects as chains of incremental moves in 8 compass directions rather than straight lines between points (Freeman chain code; a sketch follows this list)

coding of area object boundaries by arc, with pointers to left and right area objects
  o first "topological" system with planar enforcement in each layer, relationships between arcs and areas coded in the database

separation of data into attribute and locational files
  o "descriptor dataset" (DDS) and "image dataset" (IDS)
  o concept of an attribute table

implementation of functions for polygon overlay, measurement of area, user-defined circles and polygons for query
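A minimal sketch of the 8-direction (Freeman) chain-code idea behind CGIS line storage; the direction numbering and function names here are illustrative, not the actual CGIS encoding.

# Encode a line as 8-direction chain codes, in the spirit of the CGIS idea above.
# Directions 0-7, counted counter-clockwise from east, as unit grid moves.
DIRECTIONS = {(1, 0): 0, (1, 1): 1, (0, 1): 2, (-1, 1): 3,
              (-1, 0): 4, (-1, -1): 5, (0, -1): 6, (1, -1): 7}

def chain_code(cells):
    """Encode a path of adjacent grid cells [(x, y), ...] as direction codes."""
    codes = []
    for (x0, y0), (x1, y1) in zip(cells, cells[1:]):
        codes.append(DIRECTIONS[(x1 - x0, y1 - y0)])   # each step moves to an 8-neighbour cell
    return codes

def decode(start, codes):
    """Rebuild the cell path from a start cell and its chain codes."""
    moves = {v: k for k, v in DIRECTIONS.items()}
    path = [start]
    for c in codes:
        dx, dy = moves[c]
        x, y = path[-1]
        path.append((x + dx, y + dy))
    return path

# example: a short line digitised on a grid
line = [(0, 0), (1, 0), (2, 1), (2, 2)]
print(chain_code(line))                            # [0, 1, 2]
print(decode((0, 0), chain_code(line)) == line)    # True

Storing one code per step instead of full coordinate pairs is what made this representation compact on 1960s hardware.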

Key individual Roger Tomlinson, now with Tomlinson Associates, Ottawa

E. HARVARD LABORATORY

full name - Harvard Laboratory for Computer Graphics and Spatial Analysis

Howard Fisher moved from Chicago to establish a lab at Harvard, initially to develop general-purpose mapping software - mid 1960s

Harvard Lab for Computer Graphics and Spatial Analysis had a major influence on the development of GIS until the early 1980s, still continues at a smaller scale

Harvard software was widely distributed and helped to build the application base for GIS

many pioneers of newer GIS "grew up" at the Harvard lab

The Harvard packages

SYMAP
  o developed as general-purpose mapping package beginning in 1964
  o output exclusively on line printer - poor resolution, low quality
  o limited functionality but simple to use - a way for the non-cartographer to make maps
  o first real demonstration of ability of computers to make maps
  o sparked enormous interest in a previously unheard-of technology

CALFORM (late 1960s)
  o SYMAP on a plotter
  o user avoided double-coding of internal boundaries by inputting a table of point locations, plus a set of polygons defined by sequences of point IDs
  o more cosmetic than SYMAP - North arrows, better legends

SYMVU (late 1960s)
  o 3D perspective views of SYMAP output
  o first new form of display of spatial data to come out of a computer

GRID (late 1960s)
  o raster cells could be displayed using the same output techniques as SYMAP
  o later developed to allow multiple input layers of raster cells, beginnings of raster GIS
  o used to implement the ideas of overlay from landscape architecture and McHarg

POLYVRT (early 1970s)
  o converted between various alternative ways of forming area objects: SYMAP - every polygon separately, internal boundaries twice; CALFORM - table of point locations plus lists of IDs; DIME - see below
  o motivated by need of computer mapping packages for flexible input, transfer of boundary files between systems, growing supply of data in digital form, e.g. from Bureau of the Census

ODYSSEY (mid 1970s)
  o extended POLYVRT idea beyond format conversion to a comprehensive analysis package based on vector data
  o first robust, efficient algorithm for polygon overlay - included sliver removal

Key individuals

Howard Fisher - initiated Lab, development of SYMAP

William Warntz - succeeded Fisher as Director until 1971, developed techniques, theories of spatial analysis based on computer handling of spatial data

Scott Morehouse - move to ESRI was key link between ODYSSEY and the development of ARC/INFO

see Chrisman (1988) for additional information on the Lab and its key personnel

F. BUREAU OF THE CENSUS

need for a method of assigning census returns to correct geographical location
  o address matching to convert street addresses to geographic coordinates and census reporting zones
  o with geographic coordinates, data could be aggregated to user-specified custom reporting zones

need for a comprehensive approach to census geography
  o reporting zones are hierarchically related
  o e.g. enumeration districts nest within census tracts

1970 was the first geocoded census

DIME files were the major component of the geocoding approach

DIME files

precursor to TIGER, urban areas only

coded street segments between intersections using
  o IDs of right and left blocks
  o IDs of from and to nodes (intersections)
  o x,y coordinates
  o address ranges on each side

this is essentially the arc structure of CGIS and the internal structure (common denominator format) of POLYVRT
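A minimal sketch of a DIME-style street segment and of address-range geocoding, assuming simplified, illustrative field names; real DIME/TIGER records carry many more attributes and handle address parity and range matching far more carefully.

# A DIME-style segment record and a simple address-range interpolation,
# with illustrative field names only.
from dataclasses import dataclass

@dataclass
class DimeSegment:
    from_node: int          # intersection ID at the "from" end
    to_node: int            # intersection ID at the "to" end
    left_block: str         # census block ID on the left side
    right_block: str        # census block ID on the right side
    from_xy: tuple          # (x, y) of the from node
    to_xy: tuple            # (x, y) of the to node
    left_range: tuple       # (low, high) addresses on the left side
    right_range: tuple      # (low, high) addresses on the right side

def geocode(segment, house_number, side="left"):
    """Interpolate an (x, y) position for a house number along the segment."""
    low, high = segment.left_range if side == "left" else segment.right_range
    if not (low <= house_number <= high):
        return None                                   # address not on this segment
    t = (house_number - low) / (high - low) if high != low else 0.5
    (x0, y0), (x1, y1) = segment.from_xy, segment.to_xy
    return (x0 + t * (x1 - x0), y0 + t * (y1 - y0))

# example: locate house 125 on the left side of a segment from (0, 0) to (100, 0)
seg = DimeSegment(1, 2, "B101", "B102", (0, 0), (100, 0), (101, 199), (100, 198))
print(geocode(seg, 125))    # roughly a quarter of the way along the segment

The left/right block IDs on each segment are what give the structure its topology: an address match returns not just a coordinate but also the census block, and hence every higher reporting zone in the hierarchy.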

DIME files were very widely distributed and used as the basis for numerous applications

topological ideas of DIME were refined into TIGER model
  o planar enforcement
  o 0-, 1- and 2-cell terminology

DIME, TIGER were influential in stimulating development work on products which rely on street network databases
  o automobile navigation systems
  o driver guides to generate text driving instructions (e.g. auto rental agencies)
  o garbage truck routing
  o emergency vehicle dispatching

Urban atlases

beginning with the 1970 census, production of "atlases" of computer-generated maps for selected census variables for selected cities

demonstrated the value of simple computer maps for marketing, retailing applications
  o stimulated development of current range of PC-based statistical mapping packages based on use of digital boundary files produced by the Bureau

G. ESRI

Jack Dangermond founded Environmental Systems Research Institute in 1969, based on techniques and ideas being developed at the Harvard Lab and elsewhere

1970s - period of slow growth based on various raster and vector systems

early 1980s - release of ARC/INFO
  o successful implementation of CGIS idea of separate attribute and locational information
  o successful marriage of standard relational database management system (INFO) to handle attribute tables with specialized software to handle objects stored as arcs (ARC) - a basic design which has been copied in many other systems
  o "toolbox", command-driven, product-oriented user interface
  o modular design allowed elaborate applications to be built on top of toolbox

ARC/INFO was the first GIS to take advantage of new super-mini hardware
  o GIS could now be supported by a platform which was affordable to many resource management agencies
  o emphasis on independence from specific platforms, operating systems

initial successes in forestry applications, later diversification to many GIS markets
  o expansion to $40 million company by 1988

REFERENCES

Special issue of The American Cartographer Vol 15(3), 1988, on the digital revolution in cartography - contains articles on the Harvard Lab, UK Experimental Cartography Unit, and the history of GIS.

Tomlinson, R.F., 1987. "Current and potential uses of geographical information systems: the North American experience," International Journal of Geographical Information Systems 1:203-218. Reviews GIS from beginnings to 1987, and summarizes lessons learned.

EXAM AND DISCUSSION QUESTIONS

1. Compare the Chrisman and Rhind articles in the special issue of The American Cartographer cited above. What roles did personalities play in the contributions of the Harvard Lab and the ECU?

2. What factors contributed to the unique development of CGIS in a department of the Canadian federal government in the mid 1960s?

3. In what ways has the concept of a geographic information system changed since the design of CGIS?

4. "The pattern of GIS development since 1965 has been largely attributable to the changing balance between the costs of hardware, communications and software development". Discuss.

Mapping the Maps : 200-1500 AD

Time Line

1351  The Medici sea atlas is published, containing a ‘world’ map.
1375  The Catalan atlas is prepared by Catalan cartographers, who made a great contribution to the reformation of the world map.
1436  Bianco’s world map is published, where the continental mass is placed eccentrically to the embracing ocean and eastern Asia breaks through the framework in order to leave more space in the west for the insertion of Antillia.
1447  The world map is prepared by Fra Mauro, a monk of Murano, near Venice; it is often regarded as the culmination of medieval cartography, but in some respects it is transitional between medieval and renaissance cartography.
1448  The Benedictine Andreas Walsperger at Constance draws a world map.
1477  The first printed edition of the ‘Geography’ is published at Bologna on the basis of manuscript atlases produced by Dominus Nicholaus Germanus.
1482  The first edition of the work of the Florentine Francesco Berlinghieri is published, a rhyming version of the ‘Geography’ accompanied by an important set of maps, including a number of modern maps related to the Massajo and Laurenziana types.
1487  The rounding of the southern promontory of Africa by Bartholomew Diaz.
1492  Columbus discovers the West Indies.
1498  The discovery of India by Vasco da Gama.
1500  The discovery of Brazil by Cabral.

Claudius Ptolemy

Claudius Ptolemy was an astronomer and mathematician of the 2nd century A.D., who must apparently have worked in Alexandria between 127 and 148 A.D., since some of his astronomical observations are consistent with those dates. Ptolemy’s most famous works are the Almagest, a textbook of astronomy in which, among many other things, he laid the foundations of modern trigonometry; the Tetrabiblos, a compendium of astrology; and the Geography. The importance of his manuscripts was that they transmitted a vast amount of topographical detail to Renaissance scholars, which profoundly influenced their conception of the world. The manuscript maps fall into two classes: one consisting of the world map and 26 regional maps, and the second containing 67 maps of smaller areas. From the 2nd until the early 15th century, they were almost entirely without influence on western cartography. The Arab geographers, however, possessed translations of his works and, through them, seem to have had some influence on 14th-century cartographers such as Marino Sanudo. With the translation of the text into Latin in the early 15th century, Ptolemy dominated European cartography for a century, and his influence promoted cartographical progress. Ptolemy’s Geography was what we would now call an atlas, the core of which was necessarily the maps. Ptolemy suggested that people replot his data, and a good section of Book I of the Geography offers advice on how to draw maps. Various people at various times have redrawn the maps from the coordinates given in the work: the map appended to Prof. Stevenson’s edition, for example, is a medieval version or copy of just such a replot, but both Planudes and Karl Müller have done it as well.

In South Asia, the themes obtained from the contributions of occasional visitors like Fa-hsien, Hsuan-tsang etc. helped in the compilation of historical atlases in a later period. Similar information about South Asia in the age of the Gurjara-Pratiharas, Palas and Rastrakutas was supplied by the contributions of al-Biruni, Kalhana, al-Masudi, al-Idrisi, and the literature of Sandhyakar Nandi, Somadeva Bhatta, Bilhana etc. The ‘CANTINO PLANISPHERE’ of c. 1502 is possibly the oldest extant European map to show an approximation of India’s true shape, executed on parchment by an anonymous Portuguese. On the basis of such historical records, ‘A Historical Atlas of South Asia’ was published by the University of Chicago Press in 1978, which correlates the literary themes with modern cartographic techniques.

The most interesting example of the circular world map is the ‘mappa mundi’ of Hereford, dating from as late as c. A.D. 1300. One of its antecedents is the Hieronymus map of about A.D. 1150.

Mapping the Maps : 1500-1600 AD

Time Line

1541  Mercator presents a globe.
1569  Mercator prepares his large world map.
1570  Abraham Ortelius prepares an atlas, 'Theatrum orbis terrarum'.
1571  Raja Todarmal, one of the nine jewels of Akbar, introduces a rational revenue assessment system based on properly surveyed holdings.
1578  Mercator prepares an edition of Ptolemy's world atlas.

Mercator

Gerhard Mercator, a maker of mathematical and astronomical instruments, owed much to his relations with Gemma Phrysius, the cosmographer and editor of Peter Apian. He acquired a profound knowledge of cosmography and of topographical progress in Europe and beyond, and won general recognition as the most learned geographer of his day. On his globe of 1541 were laid down, for the first time, loxodromes (lines of constant bearing). Before the appearance of his famous world map in 1569, Mercator achieved an international reputation as a cartographer, principally through his map of Europe of 1554, which displayed critical ability of a high order. His posthumous fame rests upon his world map published at Duisburg in 1569. Mercator, using a conformal projection in his chart showing ‘waxing latitudes’, made an effort to represent the land surfaces as accurately as possible, and to show how much of the earth’s surface was known to the ancients. But the theoretical construction of the projection was not clearly set out until Edward Wright published his ‘Certaine Errors in Navigation’ in 1599. In his outlines of the continents, Mercator broke away completely from the conceptions of Ptolemy, though the latter’s influence in the interior of the Old World can still be traced. He recognised three great landmasses: the Old World (Eurasia and Africa), the New Indies (North and South America) and a great southern continent, ‘Continens Australis’. His concept of the existence of a continent in the southern part as a counterpart of the ‘inhabited world’ was derived ultimately from the Greeks. In 1595, a year after his death, his heirs published the complete work with a general title page, Atlas sive cosmographicae meditationes de fabrica mundi et fabricati figura. This was the first time the term atlas was applied to a collection of maps.
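A small illustration of the ‘waxing latitudes’ idea: on the Mercator projection the spacing between parallels grows towards the poles. The formula below is the modern spherical form of the projection; Mercator himself constructed the spacing graphically, and Wright later tabulated it. The function name and radius parameter are illustrative only.

# The spacing between parallels 'waxes' with latitude on the Mercator projection:
# y grows as ln(tan(pi/4 + lat/2)) on a sphere of radius R.
import math

def mercator_xy(lon_deg, lat_deg, R=1.0):
    """Project (longitude, latitude) in degrees to Mercator (x, y)."""
    lon, lat = math.radians(lon_deg), math.radians(lat_deg)
    x = R * lon
    y = R * math.log(math.tan(math.pi / 4 + lat / 2))
    return x, y

# equal 20-degree steps of latitude map to ever larger steps of y
for lat in (0, 20, 40, 60, 80):
    print(lat, round(mercator_xy(0, lat)[1], 3))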

Land surveying and map-making, an integral part of any government, was pioneered in India by Todarmal, who was employed in military operations in Akbar’s empire. In 1567, Todarmal, along with Muzaffar Khan, effected a major change in the revenue collection procedure. A new procedure was implemented for collecting information about the area of land, cultivated and uncultivated, the produce of the land, and the land revenue figures or statistics. In popular memory, the ‘Dahsala’ or ten-yearly revenue system is associated with Todarmal who, along with ‘Diwan’ Shah Mansur, divided the empire into 12 provinces, each with a governor and a ‘Diwan’. Sher Shah Suri’s revenue maps, based on a regular land surveying system, also attest to the development of mapping techniques in the medieval period.

Mapping the Maps : 1600-1700 AD

Time Line

1609  Galileo discovers satellites orbiting Jupiter, challenging the concept of an Earth-centred universe with all objects revolving around the Earth. He makes careful observations and measurements and records them in detailed descriptions and drawings.
1613  The Indian Marine is formed by the British.
1669  Jean Picard measures a degree of latitude on the meridian of Paris; later in the century Giovanni Cassini and his son Jacques continue to measure this meridian within France, and find that the length of a degree of latitude appears to diminish northward, suggesting that the earth is prolate, or flattened at the equator.
1675  The Royal Observatory at Greenwich is established as a centre for astronomical research to improve navigation techniques.
1686  On its change of base from Surat to Bombay (present Mumbai), the Indian Marine takes the name ‘The Bombay Marine’.
1687  Sir Isaac Newton, in his ‘Principia Mathematica’, demonstrates as a corollary to his theory of gravitation that the earth is in fact flattened at the poles.

Theatrum of Ortelius

In the 17th century, of the 41 editions of the atlas ‘Theatrum’ of Ortelius, the last was published in 1612. In addition to the 21 Latin editions, there were two Dutch editions, five German, six French, four Spanish, two Italian and one English (1606). Gerritsz, the draughtsman of a magnificent manuscript map of the Pacific Ocean, engraved and published several charts; one of them, ‘Caert van’t landt Eendracht’, published in 1627, portrayed the coast of western Australia discovered by the Dutch vessel Eendracht. At the end of the 17th century, the reformation of cartography began with the longitude measurements of the French Academy. There were successive stages in the making of the new map of France, such as the measurement of an arc of the meridian of Paris by the Abbe Picard in 1669-’70 by means of a chain of triangles, the first attempt to produce a new map of France by adjusting existing surveys, supplemented by observations for latitude and longitude, to the Paris meridian, and a planned survey of the whole country. Following the establishment of the Royal Observatory at Greenwich as a centre for astronomical research to improve navigation techniques, English maps began to use Greenwich as their prime meridian - a fixed point from which longitude was measured. In this period of time, the gap between popular and scientific geography was narrowed. New surveys were made and new maps drawn by cartographers who were also accomplished astronomers like the Cassini family in France, or scholars like d’Anville, or marine hydrographers like J. N. Bellin, Captain Cook and Murdoch Mackenzie.

In India, mapping during the early 17th century focused much on the polity of the Mughal Empire. Those maps emphasised the seat of Mughal power in the northern plains and showed the Mughal territories west of the Indus, especially Punjab, the Hindu Kush, and occasionally Afghanistan, but the peninsula was omitted. The southward expansion of Mughal power under Aurangzeb (reigned 1658-1707) in the late 17th century led to the merging of the two regional framings in the early 18th century; the European cartographers extended their maps of the empire to incorporate the peninsula.

Mapping the Maps : 1700-1800 AD

Time Line

1717  Hermann Moll's "The West Part of India, or the Empire of the Great Mogul" is published.
1752  French geographer Jean Baptiste Bourguignon d'Anville publishes a map of India, placing Indian geographical knowledge on a scientific footing. The 'Atlas Universal' of Gilles and Didier Robert de Vaugondy is first published, with maps of the whole Indies.
1767  The East India Company establishes the Survey of India (SOI) for mapping the territories it acquired and developing them for commercial exploitation. To this day it is responsible for topographic survey related cartography and for the preparation of up-to-date maps, covering the whole country with an area of 32,87,263 km2, on standard scales (1:25,000, 1:50,000 and 1:250,000).
1770  The professional approach in hydrography starts from the days of the "Atlases of parts of India" and "Directory", based on systematic surveys using traditional methods by Rennell, Ritcher, Dalrymple, Horsburgh, Ross, Walker, Lloyd etc.
1781  Surveyor General James Rennell's Bengal Atlas and Bihar Atlas are published.
1784  Lt. Col. Mark Wood prepares a plan of Calcutta.
      ----  Cary's 'Map of the Great Post-Roads between London and Falmouth' is prepared.
1785  The first map of 'Hindoostan' is prepared by the then Surveyor General of India.
1788  Rennell's famous Map of India is published and this becomes the starting point in map-making by the Indian Government.
1791  Cassini and William Roy officially establish the Ordnance Survey (first known as the Trigonometrical Survey) as the outcome of survey operations for the connection of England and France in 1787.
1792  Alexander Read uses plane tabling, one of the topographical survey techniques, for the first time to make a sketch of Salem and Bara Mahal.
      ----  A. Upjohn prepares a map of Calcutta and its environs in 1792-'93.
1799  Col. W. Lambton initiates a trigonometrical and general survey of the South Peninsula at the end of the year.

Bourguignon d'Anville

The contributions of the renowned French cartographer J. B. Bourguignon d’Anville (1697-1782) to cartography cannot be disputed. Some of his notable maps include the continents of North America (1746), South America (1748), Africa (1749), Asia (1751), the ‘Indies’ (1752), Europe presented in three sheets (1754-’60) and a general map of the world in two hemispheres (1761). The first map of the ‘Indies’, ‘Carte de l’Inde’, was published in 1752 at the scale of about 1:3,000,000 in four sheets. The map was prepared on paper of almost one square metre in area and extended from the Indus to the China Sea, with the subcontinent on the left and Indochina on the right. In 1753, he prepared large-scale maps of ‘Carnate’ and ‘Coromandel’ - the southeast coast of the country. In these large-scale maps, he put down detailed information like "This is the part of India where the settlements that support the trade of the Europeans--." However, the data for most of the areas was so sparse that a major portion of the map was shown as white space. Soon after the Battle of Wandiwash (1760), the purpose and use of maps underwent major changes and they began to be prepared on more scientific grounds.

James Rennell

The "Father of Indian Geography", James Rennell, as Surveyor General of Bengal, collected together the geographical data acquired by the British Army columns on their campaign and began to map all of India in 1765, subdividing India according to the Mughal provinces (subas) as defined under Emperor Akbar (reigned 1555-1605). The translation of an Islamic geography of the empire, the Ain-e-Akbari (1598) helped him to acquire the knowledge about the old divisions. His ‘Bengal Atlas’ (1781) was the earliest known reference of an atlas which was followed by the ‘Atlas of India’, showing the provinces in the Bengal Presidency and western provinces, published by the then British Government of India in Calcutta. The title of Rennell’s memoirs explicitly equated India with the Mughal Empire - Memoir of a Map of Hindoostan; or the Mogul Empire, although the maps presented the entire subcontinent, usually referred by him as ‘India’.

Mapping the Maps : 1800-1900 AD

Time Line

1802  On April 10, the actual work of the Great Trigonometrical Survey of India (GTS) commences with the measurement of a baseline near Madras.
1812  The development of German geography begins with the publication of the ‘Atlas geographique et physique’, as a result of the travels and studies in New Spain of Alexander von Humboldt.
1815  The first Surveyor General of India is appointed, having authority over all the surveyors in the three Presidencies.
1817  The first fascicule of the famous ‘Hand-Atlas’ is issued under the direction of Justus Perthes’ son Wilhelm.
1820  A project is established to compile an Atlas of India at the medium scale of four miles to an inch (1:253,440). The GTS is tied to the production of the atlas.
1823  Colonel Sir George Everest becomes the Superintendent of the Great Trigonometrical Survey of India (GTS).
1830  Colonel Sir George Everest becomes the Surveyor General of India and retains the post till 1843.
1841  Magnetic observatories are set up at Simla, Madras and Singapore and observations are taken from 1841-45; after that, these are closed down temporarily.
1851  The arrival of Thomas Oldham in Calcutta marks the establishment of the Geological Survey of India.
1852  P. W. Simmis prepares the maps of the City and Environs of Calcutta.
1859  The ‘Royal Atlas’, the German sheets, is published, prepared by Alexander Keith Johnston. A French photographer and balloonist, Gaspard Felix Tournachon, also known as Nadar, who carried his bulky cameras aloft, starts aerial photography and remote sensing.
1861  The Archaeological Survey of India (ASI) is established.
1871  The Great Trigonometrical Survey (GTS) on 16 inches to a mile and the Land-use Maps form the basis of cadastral (settlement) surveys.
1872  The first census of India is carried out by the Registrar General of Census of India.
1874  The Marine Survey of India is established, which carries out systematic hydrographic surveying.
1875  The Indian Meteorological Department (IMD) is established.
1878  The International Federation of Surveyors is established in Paris.
1891  At the International Geographical Congress, Berne, Professor Albrecht Penck advances the idea of an International Map of the World on a scale of 1:1 million.
1895  J. G. Bartholomew publishes the Royal Scottish Geographical Society’s ‘Atlas of Scotland’.
1899  The first edition of the ‘Suomen Kartasto: Atlas of Finland’ is published.

Aerial Photography

Gaspard Felix Tournachon (1859), also known as Nadar, was a famous French photographer and balloonist who carried big cameras aloft. His goal was to make land surveys from aerial photographs. Although not fully successful in his attempt, he set the stage for the future of remote sensing. His photographic observation did, however, catch the attention of the military. In April 1861, Professor Thaddeus Lowe went up in a balloon near Cincinnati, Ohio, to make a weather observation. Later on he was appointed as the in-charge of the US Army Balloon Corps. In 1903, realising the danger involved in the use of balloons, a very light camera was attached to a carrier pigeon. These cameras took a picture every 30 seconds as the pigeon winged its way along a straight course to its home shelter. Pigeons were certainly faster than balloons, but their flight paths were unpredictable. Wilbur Wright was the first pilot in remote sensing history to take photographs from an aeroplane. The first photograph from an aircraft was taken by Wilbur’s passenger, L. P. Bonvillain, on a demonstration flight in France in 1908. In 1909, the first aerial motion pictures were taken by him in Italy.

The GTS

Colonel William Lambton in December 1799 put forward the proposal of the Great Trigonometric Survey (GTS) of India. On February 6, 1800 formal orders were issued for the commencement of the survey. The actual work of the GTS of India was started on April 10, 1802 by the measurement of a base line near Madras. From this base line, a series of triangles were carried up to the Mysore plateau and a second base was measured near Bangalore in 1804. The series was taken across the peninsula from this place. Having connected the two sides of the peninsula, Major Lambton measured an arc of the meridian, and the series of triangles that were measured for this purpose were known as the "Great Arc Series". In 1818, Sir George Everest joined with Lambton. Both Lambton and Everest considered their work to be of global importance -- the measurement of the arc had contributed to British, French and Swedish attempts to mathematically compute the exact shape of the earth.

Mapping the Maps : 1900-1950 AD

Time Line

1903  Smart maps of Calcutta are published in 856 sheets.
1905  The Survey of India gets engaged in survey beyond the Indian territory. Revenue Survey is transferred from the Survey of India to the state authorities.
1906  The Imperial Forest Research Institute is established, which evolved as the Forest Research Institute, Dehradun.
1908  Wilbur Wright’s passenger, L. P. Bonvillain, takes the first photographs from an aircraft on a demonstration flight in France.
1909  The first aerial motion pictures are taken in Italy.
1910  The International Society of Photogrammetry (ISP) is founded under the leadership of its first President, Eduard Dolezal from Austria.
1914  During the First World War (1914-1918), the value of complete aerial photographic reconnaissance is recognised by both sides: Germany acquires nearly 4000 photos a day as part of the planning for its last great offensive (1918), and the US Army makes over one million prints during the last four months of the war.
1916  The Zoological Survey of India (ZSI) is established.
1921  The International Hydrographic Bureau (IHB) is formed in Monaco and starts working to bring uniformity and standardisation to hydrographic database creation.
1922  The International Geographical Union is established in Brussels.
      ----  The Royal Geographical Society publishes ‘The Times Atlas of the World’, with the characteristic layer-coloured maps.
      ----  O. G. S. Crawford from England invents scientific aerial archaeology with his photograph of the "Celtic fields", old soil-marked field boundaries at Windmill Hill.
1923  The Official Secrets Act is enacted in India.
1930  The British Survey of India maps on 1:63,360 scale are published.
1931  Maps of the Environs of Calcutta in the Imperial Gazetteer of India are published.
1936  The ‘Atlas of American Agriculture’ prepared by O. E. Baker is published.
1937  The first volume of the ‘Atlas Bolshoi’, one of the earliest atlases made by the Soviet cartographers, is published between 1937 and 1939.
1939  Military Survey is set up during the Second World War (1939-45) by the SOI.
      ----  The American Geographical Society, on the initiative of its former Director, Isaiah Bowman, produces the ‘Million Map of Hispanic America’.
1943  The last ordnance survey of India is carried out. The geodetic and research branch is split by the formation of the War Survey Research Institute.

Wars and Maps

In the early 20th century, the half-inch and ten-mile maps followed the method of preparing Quarter Inch Maps. In the third edition of the One-Inch map, completed in 1912, known as the ‘fully coloured’, relief was shown in brown and contours in red. In the years before 1914, experiments in the best methods of showing relief were carried on vigorously. Before 1917, cartography was well advanced in the former Soviet Union and the daunting task of mapping its Asian territories had been tackled. By the outbreak of the Second World War in 1939, of the approximate total of 975 sheets required to cover the land surface, 405 were published, but of these, only 232 conformed to the international pattern. Partly because of the insufficiency of material available for mapping, the Geographical section, General Staff under the British Government, between 1919 and 1939, produced a series of large scale maps such as of Africa, 1:2 million and Asia at 1:4 million. In the years since 1945, the science and art of cartography has advanced to an extent comparable to that achieved in any other great period of the past.

Swami Pranavananda

The exploration work in the Himalayas and Tibet by Swami Pranavananda, a Telugu Sanyasi explorer, of Holy Kailas and Manasarovar was a great task in unfolding the mystery and beauty of the Himalayas. He discovered the true sources of four great Himalayan rivers, the Indus, the Sutlej, the Brahmaputra and the Karnali. The Royal Geographical Society of London and the Survey of India accepted his findings; the SOI has incorporated them in the maps published since 1941. His thrilling book on ‘Exploration in Tibet’ was published by the University of Calcutta in 1950.

During this period, attempts were made by some private publishers and the Survey of India to produce atlases. The most significant contributions in this direction were those of Prof. S. P. Chatterjee and the Economic Atlas of Andhra Desa (now Andhra Pradesh) prepared by V. L. S. Prakasa Rao and V. V. Ramanadhan.

Mapping GIS Milestones : 1950-1960

Time Line

1950  On 13th November, the policy of restriction on maps is first enunciated vide Ministry of Defence, Government of India, letter No. F.119/49/D-1.
      ----  The National Sample Survey (NSS) sets up a programme of conducting large scale surveys to provide data for estimation of national income and related aggregates, especially related to unorganised sectors of the economy, and for planning and policy formulation.
      ----  The Soviet cartographers publish two volumes and an index of the ‘Morskoi Atlas’.
      ----  The Meteorological Office of Britain publishes the ‘Climatological Atlas of the British Isles’.
1951  Planning in India starts with the First Five Year Plan (1951-56).
1954  The Naval Hydrographic Office is established, which is responsible for hydrographic surveying and charting of the Indian waters.
      ----  Development of photography is introduced and the first stereo plotting machine, a Wild Autograph A-7, is brought to the Survey of India.
      ----  The Soviet cartographers prepare the ‘Atlas Mira’.
      ----  From 1954 to 1960, the ‘Atlas of Australian Resources’, with maps of Australia generally on the scale of 1:6 million, is published.
1956  At the instance of Pandit Jawaharlal Nehru, the first Prime Minister of India, the National Atlas and Thematic Mapping Organisation (NATMO) is founded.
      ----  SOI assumes its new dimensions with the switch-over to the metric system.
      ----  The French Institute of Pondicherry, ‘Institut Francais de Pondichery’, is established through the Treaty of Cession of the French Establishments in India.
      ----  The Institut Geographique National, Paris, publishes the ‘Relief Form Atlas’ in French and English editions.
1957  With the launch of Sputnik, mounting of cameras on orbiting spacecraft becomes possible.
      ----  The first ‘National Atlas of India’ is produced in Hindi.
      ----  An elaborate successor of the ‘Canadian Atlas’ is published.
1958  The National Aeronautics and Space Administration (NASA) is established.
      ----  The Defence Research and Development Organisation, India (DRDO) is established by amalgamating the Defence Science Organisation and some of the Technical Development Establishments.
1959  The US AMS series of maps covering the Himalaya Range from Bhutan to Pakistan on 1:250,000 scale is published.
      ----  13 founding members in Bern form the International Cartographic Association.

NASA

The National Aeronautics and Space Administration (NASA) has a rich history of unique scientific and technological achievements in human space flight, aeronautics, space science and space applications. NASA, formed on October 1, 1958 as a result of the Sputnik crisis of confidence, inherited the earlier National Advisory Committee for Aeronautics (NACA) and other US government organisations and almost immediately began working on options for human space flight. NASA’s first high-profile programmes were Project Mercury and Project Gemini, the latter using spacecraft built for two astronauts. NASA’s human space flight efforts then extended to the Moon with Project Apollo, culminating in 1969 when the Apollo 11 mission first put humans on the lunar surface. After the Skylab and Apollo-Soyuz Test Projects of the early and mid 1970s, NASA’s human space flight efforts resumed in 1981 with the Space Shuttle programme, which continues today to help build the International Space Station.

Building on its NACA roots, NASA has continued to conduct many types of cutting-edge aeronautics research, on such topics as "lifting bodies" (wingless airplanes) and "supercritical wings" to dampen the effect of shock waves on transonic aircraft. In addition, NASA has launched a number of significant scientific probes such as the Pioneer and Voyager spacecraft that have explored the Moon, the planets and other areas of our solar system. NASA has sent several spacecraft to investigate Mars, including the Viking and Mars Pathfinder spacecraft. The Hubble Space Telescope and other space science spacecraft have enabled scientists to make a number of significant astronomical discoveries about our universe. NASA has helped bring about new generations of communication satellites such as the Echo, Telstar and Syncom satellites. NASA’s efforts have literally changed the way we view our home planet; the Landsat and Earth Observing System spacecraft have contributed many important scientific findings.

NATMO

Established in 1956, National Atlas and Thematic Mapping Organisation (NATMO) is the premier organisation in India in the field of preparation of thematic maps. The functions of the organisation are compilation of the National Atlas of India in English and Hindi, preparation of National Atlas Maps in regional languages, preparation of thematic maps based on research studies on environmental and associated aspects and their impact on social and economic development, installation of Automated Mapping System for increasing efficiency in mapping and Geographical / Cartographical research and training. In the year of its establishment, NATMO produced the first National Atlas of India combining statistical thematic mapping to cartographic knowledge.

Mapping GIS Milestones : 1960-1970

Time Line

1960  The first meteorological satellite (TIROS-1) is launched.
1962  USSR’s first Cosmos satellite is launched.
      ----  Britain’s first satellite, Ariel, is launched.
      ----  In the wake of the Chinese aggression, maps on ½ inch scale and larger for the whole of India are restricted by the Ministry of Defence.
1963  Development of the Canada Geographic Information System (CGIS) commences, led by Roger Tomlinson, to analyse Canada’s national land inventory.
1964  The Harvard Lab for Computer Graphics and Spatial Analysis, Harvard University, US, is established by Howard Fisher.
1965  Inception of the Forest Survey of India (FSI). The Space Science & Technology Centre (SSTC) is established at Thumba, India.
1966  The Indian Photo-Interpretation Institute (IPI), now known as the Indian Institute of Remote Sensing (IIRS), is established.
      ----  Howard Fisher develops SYMAP (Synagraphic Mapping System), a pioneering automated computer mapping application, at the Northwestern Technological Institute and completes it at the Harvard Lab for Computer Graphics and Spatial Analysis.
      ----  The second edition of the best known atlas of the United States, the ‘National Geographic Atlas of the World’, is published.
1967  Aerial photographs in India are graded secret unless advised to be graded top secret vide Air Headquarters No. Air HQ / S_20173 / Air Int., dated 11.04.67.
1968  The Apollo 8 space programme returns the first pictures of the Earth from deep space.
      ----  The first Geostationary Operational Environmental Satellite (GOES) is developed and launched by NASA; later on it is transferred to NOAA for day-to-day activities.
1969  The Indian Space Research Organisation (ISRO) is established.
      ----  Environmental Systems Research Institute (ESRI) is founded by Jack and Laura Dangermond as a privately held consulting group.
      ----  Jim Meadlock establishes Intergraph Corporation (originally called M & S Computing Inc).
      ----  NASA succeeds with a great start on the Moon and in orbit around the Earth; with the link-up between Apollo and Soyuz, Skylab later establishes a lasting presence in space.

Harvard Laboratory

The Harvard Laboratory for Computer Graphics and Spatial Analysis laid its foundation with the development of general purpose mapping software in the mid-1960s by Howard Fisher. A GIS-type course was taught in 1966 as a collaborative regional-scale studio and used SYMAP in a landscape-planning study of the peninsula. SYMAP was invented in Chicago, and Fisher then moved to Harvard, where SYMAP and GIS evolved into many other things. The educational and research programme grew through the 60s, 70s and 80s with the GIS approach and automated mapping systems with development of databases. Apart from SYMAP, other Harvard packages which were equally important in the developing field of GIS and spatial data analysis were CALFORM (late 1960s), SYMVU (late 1960s), GRID (late 1960s), POLYVRT (early 1970s) and ODYSSEY (mid 1970s).

ESRI

Jack and Laura Dangermond founded ESRI in 1969 as a privately held consulting group. The business began with $1100 from their personal savings and operated out of an historic home located in Redlands, California.

ESRI’s early research and development in cartographic data structure, specialised GIS software tools and creative applications set the stage for today’s revolution in digital mapping. Today ESRI continues to set standards in the GIS industry. Its software is installed at more than 100,000 client sites worldwide, of which about 13000 are in Asia and the Pacific regions. Worldwide, ESRI has over 91 distributors, 16 of which are located in Asia.

ISRO

The Indian space programme was driven by the vision of Dr. Vikram Sarabhai, considered the father of the Indian space programme. In June 1972 the Government of India set up the Space Commission and the Department of Space (DOS). The Indian Space Research Organisation (ISRO), under DOS, executes the space programme through its establishments located in different parts of India.

The main objectives of the space programme include the development of satellites, launch vehicles, sounding rockets and supporting ground systems.

Mapping GIS Milestones : 1970-1980

Time Line

1970  The French Institute of Pondicherry launches a cartographic programme for the Western Ghat area.
      ----  The National Sample Survey (NSS) is reorganised by bringing together all aspects of survey work into a single unified agency, known as the National Sample Survey Organisation (NSSO), under the Department of Statistics.
      ----  Operational systems for collecting information about the earth on a repetitive schedule start with the help of instruments like Skylab (later, the Space Shuttle).
      ----  The National Oceanic and Atmospheric Administration (NOAA) is established.
1971  The Department of Science and Technology (DST) is established with the objective of promoting new areas of science and technology.
      ----  The Survey of India is transferred to DST after a trail of supervisions by the Defence Ministry, the Agricultural Department and Education.
      ----  Mekaster International Pvt. Ltd is established, which later started marketing Trimble GPS and ground penetration radars for sub-surface mapping.
      ----  The Canada Geographic Information System (CGIS) becomes fully operational.
1972  ISRO establishes various centres like the Vikram Sarabhai Space Centre (VSSC), the SHAR Centre, the ISRO Satellite Centre and the Space Applications Centre (SAC); ISRO is brought under the Department of Space.
      ----  The first Landsat satellite (originally known as ERTS-1) is launched by NASA, dedicated to mapping natural and cultural resources on land and ocean surfaces.
      ----  The General Information System for Planning (GISP) is developed by the UK Department of the Environment.
1973  The Society of Photogrammetry is established and formally registered, with the ideas sown in 1969; it is renamed the Indian Society of Remote Sensing by the 1980s. The first issue of Photonirvachak (later known as the Indian Journal of Remote Sensing) is published.
      ----  Maryland Automatic Geographic Information (MAGI), one of the first statewide GIS projects, begins in the US.
1974  The first AUTOCARTO conference is held in September in Reston, Virginia (although the AUTOCARTO series really started the year before as the International Symposium on Computer Assisted Cartography).
      ----  The first Synchronous Meteorological Satellite, the SMS-1 operational prototype, is launched.
1975  India's first indigenous scientific satellite, Aryabhata, is launched by a Soviet launch vehicle.
      ----  The National Remote Sensing Agency (NRSA) is established at Hyderabad for acquisition and distribution of data from various satellites.
      ----  The European Space Agency is formed.
1976  The Indian Photogrammetric Institute (presently known as the Indian Institute of Remote Sensing) comes under NRSA.
      ----  The National Informatics Centre (NIC) is established.
      ----  The Minnesota Land Management Information System (MLMIS), another significant state-wide GIS, begins as a research project at the Centre for Urban and Regional Analysis, University of Minnesota.
1977  The USGS develops the Digital Line Graph (DLG) spatial data format.
1978  ERDAS is founded.
      ----  A radar imaging system - the main sensor on Seasat, US - is launched.
      ----  The Coastal Zone Color Scanner (CZCS) instrument is flown on board the Nimbus 7 platform, collecting ocean colour data from November 1978 to June 1986.
1979  The Indian National Cartographic Association (INCA) is established.
      ----  Bhaskara I, an indigenous earth observation satellite, is launched by a Soviet vehicle.
      ----  For the first time, an Indian Navy ship takes part in the international venture "Monsoon Experiments (MONEX 79)" to study the onset, break-up and advance of the monsoon.
      ----  The Government of India sets up the National Institute of Hydrology (NIH) with headquarters at Roorkee.
      ----  The Australian Centre for Remote Sensing (ACRES), Australia's major remote sensing organisation, is established as the Australian Landsat station.

NOAA

The National Oceanic and Atmospheric Administration (NOAA) is a multifaceted American environmental scientific agency composed of the National Ocean Service, the National Marine Fisheries Service, the National Environmental Satellite, Data, and Information Service and the Office of Oceanic and Atmospheric Research of the US government. The creation of NOAA on October 3, 1970 was the result of a series of decisions which recognised the importance of the oceans and atmosphere to the nation's welfare and economy. Three of the principal NOAA offices (maritime charting, weather and fisheries) were all created in the 19th century. Maritime charting began with the Survey of the Coast in 1807 and the US Army Lake Survey in 1841. The first national weather warning service was created in the Department of the Army in 1870, and the US Fish Commission, the precursor of the National Marine Fisheries Service, was created in 1871. The beginnings of the National Environmental Satellite, Data, and Information Service (NESDIS) can be traced back to the Coast & Geodetic Survey magnetic investigations of the 19th century. NESDIS's National Climatic Data Center can be traced to the passage of the Federal Records Act of 1950 and the establishment of the National Weather Records Centre in Asheville, North Carolina.

Aryabhata to IRS-P4

During the 1970s, India undertook demonstrations of space applications for communication, broadcasting and remote sensing by designing and building experimental satellites. Aryabhata, Bhaskara, APPLE and Rohini were the experimental satellites launched. Aryabhata, the first experimental satellite, was launched on 19 April 1975 from Kapustin Yar on a Kosmos (11K65M) launch vehicle.

The Indian Remote Sensing Satellite (IRS) system, commissioned in 1988, has the world's largest constellation of remote sensing satellites in orbit, with five spacecraft: IRS-1B, IRS-1C, IRS-1D, IRS-P3 and IRS-P4. The first in the IRS series was IRS-1A, launched on 17 March 1988. Until the launch of IKONOS, IRS-1C and IRS-1D were the highest-resolution civilian satellites.

Landsat

For the past two and a half decades, Landsat satellites have provided repetitive acquisition of high-resolution multispectral data on a global basis. A unique 25-year data record of the Earth's land surface now exists. This retrospective portrait of the Earth's surface has been used across disciplines to achieve an improved understanding of the land surface and the impact of humans on the environment. The development of the Landsat programme was in a sense a spin-off from the US lunar programme, when certain NASA scientists realised the significance of viewing the Earth from space. Thus in 1967 the National Aeronautics and Space Administration (NASA), encouraged by the US Department of the Interior, initiated the Earth Resources Technology Satellite (ERTS) programme, which was renamed the Landsat programme at the time of the second launch in the satellite series. The programme has resulted in the deployment of six satellites and one launch failure.

NIC

National Informatics Centre (NIC) was set up in 1976 with a long term objective of setting up a computer-based informatics network for decision support to Governments/Ministries/Departments, development of databases relating to India’s socio-economic development and monitoring planned programmes. Since 1977, NIC has been playing a catalytic role in creating informatics awareness. NIC operates a nation-wide satellite (INSAT) based computer communication network called NICNET to facilitate information flow from the points of origin to the place of decision making. NICNET is one of the largest VSAT-based networks of its kind in the world. Operating on C-band and high speed Ku-band, NICNET can be accessed through terrestrial linkages also, at any time of the day. Around 700 locations in India, including all state capitals, district headquarters and selected commercial centres can be directly accessed through NICNET. It is connected to over 200 International Networks in 160 countries.

Mapping GIS Milestones : 1980-1990

Time Line

1980: The first national-level symposium is organised by the Indian Society of Remote Sensing at Dehra Dun; it has been conducted regularly since then. ---- Rohini-1, an indigenous technology satellite, is launched by the Indian SLV-3. ---- The Botanical Survey of India (BSI) is established. ---- The International Society for Photogrammetry changes its name to the International Society for Photogrammetry and Remote Sensing (ISPRS).

1981: The Census of India evolves a 10,000 sq. km grid as a new approach to studying urbanisation, placing urban settlements of all classes on such grids and measuring urbanisation with reference to them for the entire country at 1:4.5 million scale, as an experiment for preparing the Census Atlas of India. ---- Rolta India is established. ---- APPLE, an experimental geostationary communication satellite, is launched on a European Ariane vehicle. ---- Bhaskara-II is launched. ---- The use of automation and digital cartography starts in the Naval Hydrographic Office (now the National Hydrographic Office) with the acquisition of new ships, modern automated equipment, automated data logging and plotting systems, and automated cartographic and printing systems. ---- ESRI launches ARC/INFO. ---- JPL's Shuttle Imaging Radar (SIR-A), the first synthetic aperture imaging radar carried on NASA's Space Shuttle Orbiter, is flown.

1982: The Natural Resource Data Management System (NRDMS), a multi-disciplinary programme of the DST under the Government of India, is launched to initiate and promote research, development and demonstration in the field of Geographical Information Systems (GIS) technology and its applications. ---- The Survey of India (SOI) adopts automated cartography. ---- The INSAT-1A multipurpose satellite is launched. ---- An Environmental Information System (ENVIS) is set up by the Ministry of Environment and Forests as a decentralised information network for the collection, storage, retrieval and dissemination of environmental information. ---- System Research Institute (SRI) starts GIS activities.

1983: The Indian National Satellite System is established with the commissioning of INSAT-1B. ---- The government, with the Department of Space, sets up the National Natural Resource Management System (NNRMS) as a nodal agency for the optimal utilisation of natural resources, using space-based remote sensing data in conjunction with conventional techniques. ---- Speck Systems Limited is established. ---- The digital mapping company EKTAK is formed.

1984: The Prime Minister of India, Rajiv Gandhi, promotes the use of Information Technology (IT) in the country. ---- A geological information system is prepared using a training package called MAPS from Yale University, US.

1985: The Survey of India initiates the Digital Mapping project to convert 1:50,000 toposheets into digital format for public use. ---- The Department of Space initiates two projects, the National Agricultural Drought Assessment and Management System (NADAMS) and Crop Acreage and Production Estimation (CAPE), under the Remote Sensing Applications for Agriculture programme for the Department of Agriculture and Co-operation, for monitoring vegetation status using NOAA AVHRR data. ---- The GPS (Global Positioning System) becomes operational. ---- Development of GRASS (Geographic Resources Analysis Support System), a raster-based GIS programme, starts at the US Army Construction Engineering Research Laboratories. ---- Remote Sensing Instruments Pvt. Ltd., a GIS company, is formed in Hyderabad.

1986: The Department of Space develops ISRO-GIS. ---- 'Mapping Awareness', the business-to-business magazine for geographic technology users and managers in the United Kingdom and Ireland, is founded. ---- The first SPOT Earth observation satellite, designed by the Centre National d'Etudes Spatiales (CNES) in France and developed with the participation of Sweden and Belgium, is launched. ---- MapInfo is founded.

1987: The Government of India announces a 'Software Policy', which gives the framework for certain industries to import software from abroad. ---- The International Journal of Geographical Information Systems begins publication. ---- Tydac releases SPANS GIS. ---- Ron Eastman starts the IDRISI Project at Clark University.

1988: The Indian Remote Sensing Satellite (IRS) system is commissioned with the launch of IRS-1A. ---- The National Centre for Geographic Information and Analysis (NCGIA) is established in the USA. ---- Smallworld is established. ---- Ezra Zubrow of the State University of New York at Buffalo starts the GIS-L Internet list-server. ---- The US Bureau of the Census makes the first public release of its TIGER (Topologically Integrated Geographic Encoding and Referencing) digital data products. ---- Founded as GIS World, the monthly magazine 'GEO World', the world's first magazine for geographic technology, begins publication.

1989: The National Remote Sensing Agency prepares the first Wasteland Atlas. ---- The Association for Geographic Information (AGI) is formed in the UK. ---- Intergraph launches MGE. ---- The desktop image processing software ER Mapper is launched.

SPOT

SPOT data has become essential to a wide spectrum of users who look to the continuation of the SPOT family as proof of how reliable and operational the system has become. The SPOT satellite Earth observation system was designed by CNES (Centre National d'Etudes Spatiales), France, and developed with the participation of Sweden and Belgium. The system comprises a series of spacecraft plus ground facilities for satellite control and programming, image production and distribution. To meet the increasing demand for SPOT imagery, SPOT 1, 2 and 4 are still operational. SPOT's unique features (high resolution, stereo imaging and revisit capability) enable it to collect data over areas of special interest for various applications. Since 1986 more than 5.5 million images have been archived, providing an unparalleled record of our planet.

MapInfo

MapInfo was founded in 1986 by four students from Rensselaer Polytechnic Institute, the oldest engineering school in the United States. They pioneered the concept of using GIS for making business decisions and created the business mapping market in the early 1990s. MapInfo is a publicly held company on Nasdaq (MAPS), with its software and data solutions available in 20 languages and distributed through a worldwide channel in 58 countries. MapInfo, "the Information Discovery Company", grew out of the database market, literally creating the software to visualise data in an easy-to-use PC-based Windows application for business decision makers. MapInfo continues to develop new products for multi-user, multi-platform deployment in both client/server and Internet applications.

GPS

GPS, a space-based positioning, navigation and timing system, was developed by the U.S. Department of Defense (DoD) and emerged in the late 1960s and early 1970s. GPS can be thought of as a satellite navigation and positioning system, providing signals for geolocation and for the safe and efficient movement, measurement and tracking of people, vehicles and other objects anywhere in the world. It is very reliable, being affected neither by atmospheric conditions and ground topography nor by most radioelectric interference. Global Navigation Satellite Systems (GNSS) are extended GPS-type systems providing accurate information for critical navigation applications. The NAVSTAR system, operated by the U.S. DoD, was the first GPS system to be widely available for civilian use. The Russian system, GLONASS, is similar in operation and is proving complementary to NAVSTAR. The European Space Agency (ESA) is now funding GalileoSat, a parallel GPS-type set-up known as the Global Navigation Satellite System (GNSS), intended to be operational by 2008.

Since the early years of its inception, the GPS architecture was designed primarily for military advantage, providing an upper hand to the U.S. and its military allies. In 1978, the DoD and the DoT co-operated in publishing the biennial Federal Radionavigation Plan (FRP), which became the principal vehicle for setting forth official government GPS policies. The FRPs published in 1980 and 1982 reflected 500-metre accuracy for civil use and contained provisions for user registration and charges. The tragic incident in 1983, in which a Soviet pilot shot down a Korean civilian airliner, led the Reagan Administration to offer GPS services to the world for the benefit of commercial aviation. Thus the policy of free access to GPS signals was first established in the FRP and later in federal law.

During its implementation stages GPS was not a widely known phenomenon, and this perhaps contributed to its success. However, with its rapid adoption in various spheres, GPS has gained popularity in the scientific community.

Mapping GIS Milestones : 1990-1995

Time Line

1990: The Education and Research Network (ERNET) starts in India.

1991: NATMO brings out India's first natural hazard map, "India - Natural Hazard Map", at 1:6 million scale. ---- The Prime Minister of India, Mr. P.V. Narasimha Rao, starts the process of economic reforms. ---- Tata Consultancy Services (TCS), established in India in 1968, starts the TCS GIS group to provide services in the areas of GIS, digital image processing, automated mapping and facility management. ---- The Mountain Natural Resource Information Systems (MENRIS) division of ICIMOD, Nepal is established to facilitate the application of GIS and remote sensing. ---- MapInfo Professional is launched. ---- The first European Remote Sensing satellite (ERS-1), carrying a radar altimeter, is launched. ---- India launches its second remote sensing satellite, IRS-1B.

1992: Integrated Digital Systems, a GIS company, is established in Calcutta. ---- RMSI is launched in India with the support of RMS Inc. of the US. ---- The Natural Resource Data Management System (NRDMS), Department of Science and Technology, designs GRAM-GIS (GeoReferenced Area Management) for the entry, storage, manipulation, analysis and display of spatial data on a low-cost computer configuration. ---- The first issue of 'GIS Today', now known as 'GIS India', is published by the Geomap Society of India. ---- The Bihar Geographic Information System, made by Manosi Lahiri and completed at the end of 1991, becomes operational. ---- Indian Resources Information & Management Technologies (IN-RIMT) is established at Hyderabad. ---- RAMTECH Corporation, established in 1965, starts GIS activities, with prime emphasis on AM/FM/GIS and CAD/CAE solutions. ---- The Integrated Mission for Sustainable Development (IMSD), an important user of IRS data, is initiated. ---- The Department of Space establishes Antrix Corporation for overseas marketing of ISRO capabilities. ---- CADD Centre, an imaging solutions company, starts its operations. ---- The National Space Development Agency (NASDA), Japan launches the JERS-1 satellite. ---- In Lebanon, the Electricite du Liban (EDL) decides to rebuild the entire nation's electricity network in a GIS environment. ---- The European magazine 'GIS Europe' starts.

1993: The Election Commission of India creates Pollmap, a digital cartographic database, during the Assembly Elections. ---- The European Umbrella Organisation for Geographic Information (EUROGI) is established in Europe. ---- The monthly magazine 'Business Geographics' is founded to meet the information needs of business people seeking to use geographic technology. ---- Computer Eyes is formed. ---- The Indian Society of Geomatics is formed at Ahmedabad.

1994: The Geomatics Laboratory is created by the French Research Institute of Pondicherry. ---- IRS-P2 is launched by India. ---- The National Spatial Data Infrastructure (NSDI) is formed in the US by an executive order of President Bill Clinton. ---- The International Steering Committee for Global Mapping (ISCGM) is established with members and advisors from NGOs, NMOs and academia. ---- PCI Geomatics, a geomatics solutions company, is formed.

1995: Integrated Digital Systems (IDS), a Calcutta-based GIS company, surveys all the Calcutta retail outlets with its own team to provide retail data combined with map data in a GIS. ---- The Centre for Space Science and Technology Education in Asia and the Pacific (CSSTE-AP) is established with initiatives taken by the United Nations Office for Outer Space Affairs. ---- The RADARSAT SAR satellite is launched. ---- The third operational Indian remote sensing satellite, IRS-1C, is launched. ---- Infodesk Technologies Pvt. Ltd. starts GIS activities. ---- The National Geographic Data Framework (NGDF) is established in the UK. ---- The bi-monthly magazine 'GEOAsia Pacific' starts.

RADARSAT

RADARSAT is an advanced Earth observation satellite project developed by Canada to monitor environmental change and to support resource sustainability. RADARSAT was launched on November 4, 1995 by the Canadian Space Agency and the Canada Centre for Remote Sensing (CCRS). RADARSAT, planned for a lifetime of five years, is equipped with a Synthetic Aperture Radar (SAR). The SAR is a powerful microwave instrument that transmits and receives signals to "see" through clouds, haze, smoke and darkness and obtain high-quality images of the earth in all weather at any time.

Mapping GIS Milestones : 1995-2000

Time Line

1996: ESRI India is formed. ---- Japan's Advanced Earth Observing Satellite is launched. ---- IRS-P3 is launched by India. ---- NASA and JPL begin America's study of Mars by launching the Mars Global Surveyor (MGS) spacecraft. ---- Tej Technologies, authorised dealer for Ashtech precision products, is established. ---- Riding Consulting Engineers India Pvt. Ltd. (RCE) is formed. ---- The Chief Minister of Andhra Pradesh, N. Chandrababu Naidu, positions the state as one of the most happening places in IT in India. ---- Leica India Geosystems Ltd. is established.

1998: Tata Infotech is appointed as exclusive distributor of MapInfo products in the SAARC region. ---- Bentley India is established. ---- Kampsax India Ltd., a photogrammetry company, is formed. ---- Tele Atlas starts its operations in India. ---- CSDMS organises India's first and largest conference and exhibition on GIS/GPS/remote sensing, "Map India '98". ---- Eicher releases user-friendly 21" x 35" zonal maps of Delhi. ---- The National Task Force on Information Technology and Software Development is constituted; the Government of India accepts all its recommendations. ---- For sustainable industrial development of the state, the Government of Uttar Pradesh, through the state Industries Policy 1998, proposes a zoning atlas for the entire state so that industry can easily take decisions on the location of a unit. ---- India hosts the 12th plenary meeting of the international Committee on Earth Observation Satellites. ---- ISPRS Commission IV holds its first symposium in Stuttgart (Germany) since it became a commission on GIS and mapping in 1996.

1999: The first "GIS Forum South Asia '99" is organised jointly by CSDMS and ICIMOD in Nepal. ---- Autodesk India Ltd. is formed. ---- Sokkia India, a subsidiary of Sokkia Singapore that markets GPS receivers, electronic total stations, optical theodolites, levels and laser products, is incorporated. ---- Landsat 7, carrying the Enhanced Thematic Mapper Plus (ETM+), is launched. ---- IKONOS is launched. ---- The second Ministerial Conference on Space Applications for Sustainable Development is organised by UN ESCAP in New Delhi.

2000: GIS@development becomes monthly.

IKONOS

IKONOS, the first commercial high-resolution imaging satellite, weighing 720 kilograms, was launched into a sun-synchronous, near-polar, circular, low earth orbit on September 24, 1999 from Vandenberg Air Force Base. It simultaneously collects one-metre resolution panchromatic and four-metre resolution multispectral images and is designed to take digital images of the earth from 680 km up. Moving at a speed of about seven kilometres per second, the satellite can distinguish objects on the earth's surface as small as one square metre. The satellite circles the globe 14 times per day, or once every 98 minutes.

http://rst.gsfc.nasa.gov/Intro/Part2_7.html

History of Remote Sensing: In the Beginning; Launch Vehicles

Remote sensing as a technology started with the first photographs in the early nineteenth century. Many significant events led to the launch of the Landsat satellites.

The photographic camera has served as a prime remote sensor for more than 150 years. It captures an image of targets exterior to it by concentrating electromagnetic (EM) radiation (normally visible light) through a lens onto a recording medium. A key advance in photography occurred in 1871, when Dr. Richard Leach Maddox, a British physician, announced the development of a photographic negative made by enclosing silver halide suspended in an emulsion mounted on a glass plate. Silver halide film remains the prime recording medium today. The film displays the target objects in their relative positions by variations in their brightness, as gray levels (black and white) or colour tones.

Although the first, rather primitive photographs were taken as "stills" on the ground, the idea of photographing the Earth's surface from above, yielding the so-called aerial photo, emerged in the 1860s with pictures taken from balloons. The first success, now lost, was a photo of a French village made by Felix Tournachon.

Several famous kite photos were taken of the devastation in San Francisco, California right after the 1906 earthquake that, together with fire, destroyed most of the city.

It appears that Wilbur Wright - the co-developer of the first aeroplane to leave the ground in free flight - himself was the first to take pictures from an airplane, in France in 1908 and Italy in 1909.

By the First World War, cameras mounted on airplanes, or more commonly held by aviators, provided aerial views of fairly large surface areas that were invaluable for military reconnaissance.

From then until the early 1960s, the aerial photograph remained the single standard tool for depicting the surface from a vertical or oblique perspective.

Historically, the first photos taken from a rocket were imaged from a height of about 100 meters by a small rocket designed by Alfred Nobel (of Prize fame) and launched in 1897 over a Swedish landscape.

A camera carried by Alfred Maul's rocket during a 1904 launch succeeded in photographing the landscape from a height of 600 meters (2000 ft).

Remote sensing above the atmosphere originated at the dawn of the Space Age. The power and capability of launch vehicles was a big factor in determining what remote sensors could be placed as part (or all) of the payload. The first such imagery came from V-2 rockets acquired from Germany after World War II and launched by the U.S. Army beginning in April 1946.

Smaller sounding rockets, such as the Wac Corporal, and the Viking and Aerobee series, were developed and launched by the military in the late '40s and '50s. These rockets, while not attaining orbit, contained automated still or movie cameras that took pictures as the vehicle ascended.

The other frequently used launch vehicle is the Space Transportation System (STS), more commonly known as the Space Shuttle. It uses two solid rocket boosters and an external fuel tank in addition to its own engines.

Remote Sensing Satellites

The first non-photographic sensors were television cameras mounted on unmanned spacecraft and were devoted mainly to looking at clouds. The first U.S. meteorological satellite, TIROS-1, was launched by a Thor-Able rocket into orbit on April 1, 1960.

TIROS, for Television Infrared Observation Satellite, used vidicon cameras to scan wide areas at a time.

Then, in the 1960s, as man entered space, astronauts in space capsules took photos out the window. In time, the space photographers had specific targets and a schedule, although they also had some freedom to snap pictures of targets of opportunity.

During the '60s, the first sophisticated imaging sensors were incorporated in orbiting satellites. At first, these sensors were basic TV cameras that imaged crude, low resolution (little detail) black and white pictures of clouds and the Earth's surface where skies were clear. Early on, other types of sensors were developed that took images using the EM spectrum beyond the visible, into the near and thermal infrared regions. The field of view (FOV) was broad, usually hundreds of kilometres on a side. Such synoptic areas of regional coverage were of great value to the meteorological community, so many of these early satellites were metsats, dedicated to gathering information on clouds, air temperatures, wind patterns, etc.

Image Interpretation and Analysis

Introduction

In order to take advantage of and make good use of remote sensing data, we must be able to extract meaningful information from the imagery. This brings us to the topic of discussion in this chapter - interpretation and analysis - the sixth element of the remote sensing process which we defined in Chapter 1. Interpretation and analysis of remote sensing imagery involves the identification and/or measurement of various targets in an image in order to extract useful information about them. Targets in remote sensing images may be any feature or object which can be observed in an image, and have the following characteristics:

Targets may be a point, line, or area feature. This means that they can have any form, from a bus in a parking lot or plane on a runway, to a bridge or roadway, to a large expanse of water or a field.

The target must be distinguishable; it must contrast with other features around it in the image.


Much interpretation and identification of targets in remote sensing imagery is performed manually or visually, i.e. by a human interpreter. In many cases this is done using imagery displayed in a pictorial or photograph-type format, independent of what type of sensor was used to collect the data and how the data were collected. In this case we refer to the data as being in analog format. As we discussed in Chapter 1, remote sensing images can also be represented in a computer as arrays of pixels, with each pixel corresponding to a digital number representing the brightness level of that pixel in the image. In this case, the data are in a digital format. Visual interpretation may also be performed by examining digital imagery displayed on a computer screen. Both analogue and digital imagery can be displayed as black and white (also called monochrome) images, or as colour images (refer back to Chapter 1, Section 1.7) by combining different channels or bands representing different wavelengths.

When remote sensing data are available in digital format, digital processing and analysis may be performed using a computer. Digital processing may be used to enhance data as a prelude to visual interpretation. Digital processing and analysis may also be carried out to automatically identify targets and extract information completely without manual intervention by a human interpreter. However, rarely is digital processing and analysis carried out as a complete replacement for manual interpretation. Often, it is done to supplement and assist the human analyst.

Manual interpretation and analysis dates back to the early beginnings of remote sensing for air photo interpretation. Digital processing and analysis is more recent, with the advent of digital recording of remote sensing data and the development of computers. Both manual and digital techniques for interpretation of remote sensing data have their respective advantages and disadvantages. Generally, manual interpretation requires little, if any, specialized equipment, while digital analysis requires specialized, and often expensive, equipment. Manual interpretation is often limited to analyzing only a single channel of data or a single image at a time due to the difficulty in performing visual interpretation with multiple images. The computer environment is more amenable to handling complex images of several or many channels or from several dates. In this sense, digital analysis is useful for simultaneous analysis of many spectral bands and can process large data sets much faster than a human interpreter. Manual interpretation is a subjective process, meaning that the results will vary with different interpreters. Digital analysis is based on the manipulation of digital numbers in a computer and is thus more objective, generally resulting in more consistent results. However, determining the validity and accuracy of the results from digital processing can be difficult.

It is important to reiterate that visual and digital analyses of remote sensing imagery are not mutually exclusive. Both methods have their merits. In most cases, a mix of both methods is usually employed when analyzing imagery. In fact, the ultimate decision of the utility and relevance of the information extracted at the end of the analysis process, still must be made by humans.

Elements of Visual Interpretation

As we noted in the previous section, analysis of remote sensing imagery involves the identification of various targets in an image, and those targets may be environmental or artificial features which consist of points, lines, or areas. Targets may be defined in terms of the way they reflect or emit radiation. This radiation is measured and recorded by a sensor, and ultimately is depicted as an image product such as an air photo or a satellite image.

What makes interpretation of imagery more difficult than the everyday visual interpretation of our surroundings? For one, we lose our sense of depth when viewing a two-dimensional image, unless we can view it stereoscopically so as to simulate the third dimension of height. Indeed, interpretation benefits greatly in many applications when images are viewed in stereo, as visualization (and therefore, recognition) of targets is enhanced dramatically. Viewing objects from directly above also provides a very different perspective than what we are familiar with. Combining an unfamiliar perspective with a very different scale and lack of recognizable detail can make even the most familiar object unrecognizable in an image. Finally, we are used to seeing only the visible wavelengths, and the imaging of wavelengths outside of this window is more difficult for us to comprehend.

Recognizing targets is the key to interpretation and information extraction. Observing the differences between targets and their backgrounds involves comparing different targets based on any, or all, of the visual elements of tone, shape, size, pattern, texture, shadow, and association. Visual interpretation using these elements is often a part of our daily lives, whether we are conscious of it or not. Examining satellite images on the weather report, or following high speed chases by views from a helicopter, are familiar examples of visual image interpretation. Identifying targets in remotely sensed images based on these visual elements allows us to further interpret and analyze. The nature of each of these interpretation elements is described below.

Tone refers to the relative brightness or colour of objects in an image. Generally, tone is the fundamental element for distinguishing between different targets or features. Variations in tone also allow the elements of shape, texture, and pattern of objects to be distinguished.

Shape refers to the general form, structure, or outline of individual objects. Shape can be a very distinctive clue for interpretation. Straight edge shapes typically represent urban or agricultural (field) targets, while natural features, such as forest edges, are generally more irregular in shape, except where man has created a road or clear cuts. Farm or crop land irrigated by rotating sprinkler systems would appear as circular shapes.

Size of objects in an image is a function of scale. It is important to assess the size of a target relative to other objects in a scene, as well as the absolute size, to aid in the interpretation of that target. A quick approximation of target size can direct interpretation to an appropriate result more quickly. For example, if an interpreter had to distinguish zones of land use, and had identified an area with a number of buildings in it, large buildings such as factories or warehouses would suggest commercial property, whereas small buildings would indicate residential use.

Pattern refers to the spatial arrangement of visibly discernible objects. Typically an orderly repetition of similar tones and textures will produce a distinctive and ultimately recognizable pattern. Orchards with evenly spaced trees, and urban streets with regularly spaced houses are good examples of pattern.

Texture refers to the arrangement and frequency of tonal variation in particular areas of an image. Rough textures would consist of a mottled tone where the grey levels change abruptly in a small area, whereas smooth textures would have very little tonal variation. Smooth textures are most often the result of uniform, even surfaces, such as fields, asphalt, or grasslands. A target with a rough surface and irregular structure, such as a forest canopy, results in a rough textured appearance. Texture is one of the most important elements for distinguishing features in radar imagery.

Shadow is also helpful in interpretation as it may provide an idea of the profile and relative height of a target or targets which may make identification easier. However, shadows can also reduce or eliminate interpretation in their area of influence, since targets within shadows are much less (or not at all) discernible from their surroundings. Shadow is also useful for enhancing or identifying topography and landforms, particularly in radar imagery.

Association takes into account the relationship between other recognizable objects or features in proximity to the target of interest. The identification of features that one would expect to associate with other features may provide information to facilitate identification. In the example given above, commercial properties may be associated with proximity to major transportation routes, whereas residential areas would be associated with schools, playgrounds, and sports fields. A lake, for example, may be associated with boats, a marina, and adjacent recreational land.

Digital Image Processing

In today's world of advanced technology where most remote sensing data are recorded in digital format, virtually all image interpretation and analysis involves some element of digital processing. Digital image processing may involve numerous procedures including formatting and correcting of the data, digital enhancement to facilitate better visual interpretation, or even automated classification of targets and features entirely by computer. In order to process remote sensing imagery digitally, the data must be recorded and available in a digital form suitable for storage on a computer tape or disk. Obviously, the other requirement for digital image processing is a computer system, sometimes referred to as an image analysis system, with the appropriate hardware and software to process the data. Several commercially available software systems have been developed specifically for remote sensing image processing and analysis.

For discussion purposes, most of the common image processing functions available in image analysis systems can be categorized into the following four categories:

Preprocessing
Image Enhancement
Image Transformation
Image Classification and Analysis

Preprocessing functions involve those operations that are normally required prior to the main data analysis and extraction of information, and are generally grouped as radiometric or geometric corrections. Radiometric corrections include correcting the data for sensor irregularities and unwanted sensor or atmospheric noise, and converting the data so they accurately represent the reflected or emitted radiation measured by the sensor. Geometric corrections include correcting for geometric distortions due to sensor-Earth geometry variations, and conversion of the data to real world coordinates (e.g. latitude and longitude) on the Earth's surface.


The objective of the second group of image processing functions grouped under the term of image enhancement, is solely to improve the appearance of the imagery to assist in visual interpretation and analysis. Examples of enhancement functions include contrast stretching to increase the tonal distinction between various features in a scene, and spatial filtering to enhance (or suppress) specific spatial patterns in an image.

Image transformations are operations similar in concept to those for image enhancement. However, unlike image enhancement operations which are normally applied only to a single channel of data at a time, image transformations usually involve combined processing of data from multiple spectral bands. Arithmetic operations (i.e. subtraction, addition, multiplication, division) are performed to combine and transform the original bands into "new" images which better display or highlight certain features in the scene. We will look at some of these operations including various methods of spectral or band ratioing, and a procedure called principal components analysis which is used to more efficiently represent the information in multichannel imagery.

Image classification and analysis operations are used to digitally identify and classify pixels in the data. Classification is usually performed on multi-channel data sets, and this process assigns each pixel in an image to a particular class or theme based on the statistical characteristics of the pixel brightness values. There are a variety of approaches taken to perform digital classification. We will briefly describe the two generic approaches which are used most often, namely supervised and unsupervised classification.

In the following sections we will describe each of these four categories of digital image processing functions in more detail.

Pre-processing

Pre-processing operations, sometimes referred to as image restoration and rectification, are intended to correct for sensor- and platform-specific radiometric and geometric distortions of data. Radiometric corrections may be necessary due to variations in scene illumination and viewing geometry, atmospheric conditions, and sensor noise and response. Each of these will vary depending on the specific sensor and platform used to acquire the data and the conditions during data acquisition. Also, it may be desirable to convert and/or calibrate the data to known (absolute) radiation or reflectance units to facilitate comparison between data.

Variations in illumination and viewing geometry between images (for optical sensors) can be corrected by modeling the geometric relationship and distance between the area of the Earth's surface imaged, the sun, and the sensor. This is often required so as to be able to more readily compare images collected by different sensors at different dates or times, or to mosaic multiple images from a single sensor while maintaining uniform illumination conditions from scene to scene.

As we learned in Chapter 1, scattering of radiation occurs as it passes through and interacts with the atmosphere. This scattering may reduce, or attenuate, some of the energy illuminating the surface. In addition, the atmosphere will further attenuate the signal propagating from the target to the sensor. Various methods of atmospheric correction can be applied, ranging from detailed modeling of the atmospheric conditions during data acquisition to simple calculations based solely on the image data. An example of the latter method is to examine the observed brightness values (digital numbers) in an area of shadow or for a very dark object (such as a large clear lake) and determine the minimum value. The correction is applied by subtracting the minimum observed value, determined for each specific band, from all pixel values in each respective band. Since scattering is wavelength dependent (Chapter 1), the minimum values will vary from band to band. This method is based on the assumption that the reflectance from these features, if the atmosphere is clear, should be very small, if not zero. If we observe values much greater than zero, then they are considered to have resulted from atmospheric scattering.
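The dark-object subtraction just described can be sketched in a few lines of code. The following Python/NumPy snippet is only a minimal illustration, assuming the image is already loaded as an array of digital numbers with shape (bands, rows, cols); the function and variable names are hypothetical.

import numpy as np

def dark_object_subtraction(image):
    """Apply a per-band dark-object (haze) correction.

    image: integer array of digital numbers, shape (bands, rows, cols).
    Returns the corrected array, where each band has its minimum
    observed value (assumed to be atmospheric scattering) removed.
    """
    corrected = np.empty_like(image)
    for b in range(image.shape[0]):
        band = image[b]
        dark_value = band.min()           # minimum DN, e.g. over a clear lake or shadow
        corrected[b] = band - dark_value  # subtract the band-specific haze estimate
    return corrected

# Example with synthetic data: 4 bands of 100 x 100 pixels
dns = np.random.randint(30, 200, size=(4, 100, 100))
print(dark_object_subtraction(dns).min(axis=(1, 2)))  # each band now starts at 0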

Noise in an image may be due to irregularities or errors that occur in the sensor response and/or data recording and transmission. Common forms of noise include systematic striping or banding and dropped lines. Both of these effects should be corrected before further enhancement or classification is performed. Striping was common in early Landsat MSS data due to variations and drift over time in the response of the six MSS detectors. The "drift" was different for each of the six detectors, causing the same brightness to be represented differently by each detector. The overall appearance was thus a 'striped' effect. The corrective process applies a relative correction among the six detectors to bring their apparent values in line with each other. Dropped lines occur when there are systems errors which result in missing or defective data along a scan line. Dropped lines are normally 'corrected' by replacing the line with the pixel values in the line above or below, or with the average of the two.
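As a rough sketch of the dropped-line repair just described, the hedged example below replaces each flagged scan line with the average of its neighbours; the flagging criterion (an all-zero line) is an assumption made purely for illustration.

import numpy as np

def repair_dropped_lines(band):
    """Replace dropped scan lines with the average of the lines above and below.

    band: 2-D array of digital numbers (rows = scan lines).
    A line is treated as 'dropped' here if every pixel in it is zero
    (an illustrative assumption; real systems flag dropped lines explicitly).
    """
    fixed = band.astype(float)
    for row in range(1, band.shape[0] - 1):
        if np.all(band[row] == 0):
            fixed[row] = (fixed[row - 1] + fixed[row + 1]) / 2.0
    return fixed

# Synthetic example: zero out one scan line and repair it
scene = np.random.randint(50, 150, size=(10, 10))
scene[4] = 0
print(repair_dropped_lines(scene)[4])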

For many quantitative applications of remote sensing data, it is necessary to convert the digital numbers to measurements in units which represent the actual reflectance or emittance from the surface. This is done based on detailed knowledge of the sensor response and the way in which the analog signal (i.e. the reflected or emitted radiation) is converted to a digital number, called analog-to-digital (A-to-D) conversion. By solving this relationship in the reverse direction, the absolute radiance can be calculated for each pixel, so that comparisons can be accurately made over time and between different sensors.
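A common form of this inverse relationship is a linear calibration, radiance = gain x DN + offset, with the gain and offset taken from the sensor's calibration metadata. The snippet below is a minimal sketch under that linear assumption; the gain and offset values shown are placeholders, not real sensor constants.

import numpy as np

def dn_to_radiance(dn, gain, offset):
    """Convert digital numbers to at-sensor spectral radiance.

    Assumes a linear sensor response: L = gain * DN + offset.
    gain and offset must come from the sensor's calibration metadata.
    """
    return gain * dn.astype(float) + offset

# Placeholder calibration constants for illustration only
band_dn = np.random.randint(0, 256, size=(100, 100))
radiance = dn_to_radiance(band_dn, gain=0.75, offset=-1.5)
print(radiance.min(), radiance.max())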

In section 2.10 in Chapter 2, we learned that all remote sensing imagery are inherently subject to geometric distortions. These distortions may be due to several factors, including: the perspective of the sensor optics; the motion of the scanning system; the motion of the platform; the platform altitude, attitude, and velocity; the terrain relief; and, the curvature and rotation of the Earth. Geometric corrections are intended to compensate for these distortions so that the geometric representation of the imagery will be as close as possible to the real world. Many of these variations are systematic, or predictable in nature and can be accounted for by accurate modeling of the sensor and platform motion and the geometric relationship of the platform with the Earth. Other unsystematic, or random, errors cannot be modeled and corrected in this way. Therefore, geometric registration of the imagery to a known ground coordinate system must be performed.

The geometric registration process involves identifying the image coordinates (i.e. row, column) of several clearly discernible points, called ground control points (or GCPs), in the distorted image, and matching them to their true positions in ground coordinates (e.g. latitude, longitude). The true ground coordinates are typically measured from a map, either in paper or digital format; this is image-to-map registration. Once several well-distributed GCP pairs have been identified, the coordinate information is processed by the computer to determine the proper transformation equations to apply to the original (row and column) image coordinates to map them into their new ground coordinates. Geometric registration may also be performed by registering one (or more) images to another image, instead of to geographic coordinates. This is called image-to-image registration and is often done prior to performing various image transformation procedures, which will be discussed in section 4.6, or for multitemporal image comparison.
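One simple choice of transformation equation is a first-order (affine) polynomial fitted to the GCP pairs by least squares. The sketch below assumes that choice purely for illustration; higher-order polynomials or other models are equally common, and the GCP values are invented.

import numpy as np

def fit_affine(image_xy, ground_xy):
    """Fit x' = a0 + a1*col + a2*row and y' = b0 + b1*col + b2*row by least squares.

    image_xy:  (n, 2) array of (col, row) GCP coordinates in the image.
    ground_xy: (n, 2) array of matching (easting, northing) ground coordinates.
    Returns the two coefficient vectors (a, b)."""
    n = image_xy.shape[0]
    design = np.column_stack([np.ones(n), image_xy[:, 0], image_xy[:, 1]])
    a, *_ = np.linalg.lstsq(design, ground_xy[:, 0], rcond=None)
    b, *_ = np.linalg.lstsq(design, ground_xy[:, 1], rcond=None)
    return a, b

# Four hypothetical GCP pairs (image col/row versus map easting/northing)
img = np.array([[10, 20], [200, 30], [15, 180], [210, 190]], dtype=float)
gnd = np.array([[500100, 4200050], [500480, 4200040],
                [500110, 4199730], [500500, 4199720]], dtype=float)
a, b = fit_affine(img, gnd)
print(a, b)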

In order to actually geometrically correct the original distorted image, a procedure called resampling is used to determine the digital values to place in the new pixel locations of the corrected output image. The resampling process calculates the new pixel values from the original digital pixel values in the uncorrected image. There are three common methods for resampling: nearest neighbour, bilinear interpolation, and cubic convolution. Nearest neighbour resampling uses the digital value from the pixel in the original image which is nearest to the new pixel location in the corrected image. This is the simplest method and does not alter the original values, but may result in some pixel values being duplicated while others are lost. This method also tends to result in a disjointed or blocky image appearance.

Bilinear interpolation resampling takes a weighted average of four pixels in the original image nearest to the new pixel location. The averaging process alters the original pixel values and creates entirely new digital values in the output image. This may be undesirable if further processing and analysis, such as classification based on spectral response, is to be done. If this is the case, resampling may best be done after the classification process. Cubic convolution resampling goes even further to calculate a distance weighted average of a block of sixteen pixels from the original image which surround the new output pixel location. As with bilinear interpolation, this method results in completely new pixel values. However, these two methods both produce images which have a much sharper appearance and avoid the blocky appearance of the nearest neighbour method.
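To make the difference between the resampling methods concrete, here is a hedged sketch of nearest-neighbour and bilinear sampling at a single non-integer pixel location; the tiny array and coordinates are made up for illustration, and cubic convolution is omitted for brevity.

import numpy as np

def nearest_neighbour(band, row, col):
    """Take the value of the original pixel closest to the requested location."""
    return band[int(round(row)), int(round(col))]

def bilinear(band, row, col):
    """Distance-weighted average of the four surrounding pixels."""
    r0, c0 = int(np.floor(row)), int(np.floor(col))
    dr, dc = row - r0, col - c0
    top = (1 - dc) * band[r0, c0] + dc * band[r0, c0 + 1]
    bottom = (1 - dc) * band[r0 + 1, c0] + dc * band[r0 + 1, c0 + 1]
    return (1 - dr) * top + dr * bottom

band = np.array([[10, 20], [30, 40]], dtype=float)
print(nearest_neighbour(band, 0.4, 0.6))  # 20.0: keeps an original value
print(bilinear(band, 0.4, 0.6))           # 24.0: a new, interpolated value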

Image Enhancement

Enhancements are used to make visual interpretation and understanding of imagery easier. The advantage of digital imagery is that it allows us to manipulate the digital pixel values in an image. Although radiometric corrections for illumination, atmospheric influences, and sensor characteristics may be done prior to distribution of data to the user, the image may still not be optimized for visual interpretation. Remote sensing devices, particularly those operated from satellite platforms, must be designed to cope with levels of target/background energy which are typical of all conditions likely to be encountered in routine use. With large variations in spectral response from a diverse range of targets (e.g. forests, deserts, snowfields, water, etc.), no generic radiometric correction could optimally account for and display the optimum brightness range and contrast for all targets. Thus, for each application and each image, a custom adjustment of the range and distribution of brightness values is usually necessary.

In raw imagery, the useful data often populates only a small portion of the available range of digital values (commonly 8 bits or 256 levels). Contrast enhancement involves changing the original values so that more of the available range is used, thereby increasing the contrast between targets and their backgrounds. The key to understanding contrast enhancements is to understand the concept of an image histogram. A histogram is a graphical representation of the brightness values that comprise an image. The brightness values (i.e. 0-255) are displayed along the x-axis of the graph. The frequency of occurrence of each of these values in the image is shown on the y-axis.

By manipulating the range of digital values in an image, graphically represented by its histogram, we can apply various enhancements to the data. There are many different techniques and methods of enhancing contrast and detail in an image; we will cover only a few common ones here. The simplest type of enhancement is a linear contrast stretch. This involves identifying lower and upper bounds from the histogram (usually the minimum and maximum brightness values in the image) and applying a transformation to stretch this range to fill the full range. In our example, the minimum value (occupied by actual data) in the histogram is 84 and the maximum value is 153. These 70 levels occupy less than one-third of the full 256 levels available. A linear stretch uniformly expands this small range to cover the full range of values from 0 to 255. This enhances the contrast in the image, with light-toned areas appearing lighter and dark areas appearing darker, making visual interpretation much easier.
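Using the same numbers as the example above (data occupying 84 to 153, stretched to 0 to 255), a minimal linear-stretch sketch might look like this; the input array is synthetic and the function name is hypothetical.

import numpy as np

def linear_stretch(band, out_min=0, out_max=255):
    """Linearly map the band's observed minimum/maximum onto the full output range."""
    lo, hi = band.min(), band.max()
    scaled = (band.astype(float) - lo) / (hi - lo)   # 0.0 to 1.0
    return np.round(scaled * (out_max - out_min) + out_min).astype(np.uint8)

# Synthetic band whose values fall between 84 and 153, as in the text
band = np.random.randint(84, 154, size=(50, 50))
stretched = linear_stretch(band)
print(band.min(), band.max(), "->", stretched.min(), stretched.max())  # 84 153 -> 0 255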


A uniform distribution of the input range of values across the full range may not always be an appropriate enhancement, particularly if the input range is not uniformly distributed. In this case, a histogram-equalized stretch may be better. This stretch assigns more display values (range) to the frequently occurring portions of the histogram. In this way, the detail in these areas will be better enhanced relative to those areas of the original histogram where values occur less frequently. In other cases, it may be desirable to enhance the contrast in only a specific portion of the histogram. For example, suppose we have an image of the mouth of a river, and the water portions of the image occupy the digital values from 40 to 76 out of the entire image histogram. If we wished to enhance the detail in the water, perhaps to see variations in sediment load, we could stretch only that small portion of the histogram represented by the water (40 to 76) to the full grey level range (0 to 255). All pixels below or above these values would be assigned to 0 and 255, respectively, and the detail in these areas would be lost. However, the detail in the water would be greatly enhanced.
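The saturating, piecewise stretch of the 40-to-76 water range described above could be sketched as follows; the cut-off values are those from the text, the image is synthetic, and the function name is hypothetical.

import numpy as np

def piecewise_stretch(band, in_min=40, in_max=76):
    """Stretch [in_min, in_max] to [0, 255]; values outside that range saturate."""
    clipped = np.clip(band.astype(float), in_min, in_max)
    return np.round((clipped - in_min) / (in_max - in_min) * 255).astype(np.uint8)

scene = np.random.randint(0, 256, size=(100, 100))
water_enhanced = piecewise_stretch(scene)          # detail outside 40-76 is lost
print(np.unique(water_enhanced[scene <= 40]))      # all mapped to 0
print(np.unique(water_enhanced[scene >= 76]))      # all mapped to 255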

Spatial filtering encompasses another set of digital processing functions which are used to enhance the appearance of an image. Spatial filters are designed to highlight or suppress specific features in an image based on their spatial frequency. Spatial frequency is related to the concept of image texture, which we discussed in section 4.2. It refers to the frequency of the variations in tone that appear in an image. "Rough" textured areas of an image, where the changes in tone are abrupt over a small area, have high spatial frequencies, while "smooth" areas with little variation in tone over several pixels, have low spatial frequencies. A common filtering procedure involves moving a 'window' of a few pixels in dimension (e.g. 3x3, 5x5, etc.) over each pixel in the image, applying a mathematical calculation using the pixel values under that window, and replacing the central pixel with the new value. The window is moved along in both the row and column dimensions one pixel at a time and the calculation is repeated until the entire image has been filtered and a "new" image has been generated. By varying the calculation performed and the weightings of the individual pixels in the filter window, filters can be designed to enhance or suppress different types of features.
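As a concrete sketch of the moving-window idea, here is a 3x3 mean (low-pass) filter written with plain NumPy loops for clarity; edge pixels are simply left unchanged in this illustration, which is a simplification rather than standard practice.

import numpy as np

def mean_filter_3x3(band):
    """Slide a 3x3 window over the image and replace each central pixel
    with the average of the nine pixels under the window."""
    out = band.astype(float)
    rows, cols = band.shape
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            window = band[r - 1:r + 2, c - 1:c + 2]
            out[r, c] = window.mean()
    return out

noisy = np.random.randint(0, 256, size=(20, 20)).astype(float)
smoothed = mean_filter_3x3(noisy)
print(noisy.std(), ">", smoothed.std())  # smoothing reduces local variation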


A low-pass filter is designed to emphasize larger, homogeneous areas of similar tone and reduce the smaller detail in an image. Thus, low-pass filters generally serve to smooth the appearance of an image. Average and median filters, often used for radar imagery (and described in Chapter 3), are examples of low-pass filters. High-pass filters do the opposite and serve to sharpen the appearance of fine detail in an image. One implementation of a high-pass filter first applies a low-pass filter to an image and then subtracts the result from the original, leaving behind only the high spatial frequency information. Directional, or edge detection filters are designed to highlight linear features, such as roads or field boundaries. These filters can also be designed to enhance features which are oriented in specific directions. These filters are useful in applications such as geology, for the detection of linear geologic structures.
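The high-pass implementation mentioned above (subtracting a low-pass result from the original) can be sketched directly; this version uses SciPy's uniform (mean) filter as the low-pass step, which is an assumption of convenience rather than a specific product's algorithm.

import numpy as np
from scipy import ndimage

def high_pass(band, size=3):
    """High-pass filter: subtract a low-pass (mean-filtered) image from the original,
    leaving only the high spatial frequency detail."""
    low = ndimage.uniform_filter(band.astype(float), size=size)
    return band.astype(float) - low

band = np.random.randint(0, 256, size=(50, 50))
detail = high_pass(band)
print(detail.mean())  # roughly zero: only local deviations from the neighbourhood mean remain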


Image Transformations

Image transformations typically involve the manipulation of multiple bands of data, whether from a single multispectral image or from two or more images of the same area acquired at different times (i.e. multitemporal image data). Either way, image transformations generate "new" images from two or more sources which highlight particular features or properties of interest, better than the original input images.

Basic image transformations apply simple arithmetic operations to the image data. Image subtraction is often used to identify changes that have occurred between images collected on different dates. Typically, two images which have been geometrically registered (see section 4.4) are used, with the pixel (brightness) values in one image being subtracted from the pixel values in the other. Scaling the resultant image by adding a constant (127 in this case) to the output values will result in a suitable 'difference' image. In such an image, areas where there has been little or no change between the original images will have resultant brightness values around 127 (mid-grey tones), while areas where significant change has occurred will have values higher or lower than 127, brighter or darker depending on the 'direction' of the change in reflectance between the two images. This type of image transform can be useful for mapping changes in urban development around cities and for identifying areas where deforestation is occurring.
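A minimal sketch of the scaled difference image, using the 127 offset quoted in the text and two synthetic, co-registered acquisition dates:

import numpy as np

def difference_image(date1, date2, offset=127):
    """Subtract two co-registered images and add an offset so that
    'no change' sits at mid-grey (127) and changes appear brighter or darker."""
    diff = date2.astype(int) - date1.astype(int) + offset
    return np.clip(diff, 0, 255).astype(np.uint8)

before = np.random.randint(0, 200, size=(50, 50))
after = before.copy()
after[10:20, 10:20] += 50                     # simulate a patch of change
change = difference_image(before, after)
print(change[0, 0], change[15, 15])           # 127 where unchanged, 177 in the changed patch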

Image division or spectral ratioing is one of the most common transforms applied to image data. Image ratioing serves to highlight subtle variations in the spectral responses of various surface covers. By ratioing the data from two different spectral bands, the resultant image enhances variations in the slopes of the spectral reflectance curves between the two different spectral ranges that may otherwise be masked by the pixel brightness variations in each of the bands. The following example illustrates the concept of spectral ratioing. Healthy vegetation reflects strongly in the near-infrared portion of the spectrum while absorbing strongly in the visible red. Other surface types, such as soil and water, show near equal reflectances in both the near-infrared and red portions. Thus, a ratio image of Landsat MSS Band 7 (near-infrared, 0.8 to 1.1 µm) divided by Band 5 (red, 0.6 to 0.7 µm) would result in ratios much greater than 1.0 for vegetation, and ratios around 1.0 for soil and water. Thus the discrimination of vegetation from other surface cover types is significantly enhanced. We may also be better able to identify areas of unhealthy or stressed vegetation, which show low near-infrared reflectance, as the ratios would be lower than for healthy green vegetation.

Another benefit of spectral ratioing is that, because we are looking at relative values (i.e. ratios) instead of absolute brightness values, variations in scene illumination as a result of topographic effects are reduced. Thus, although the absolute reflectances for forest covered slopes may vary depending on their orientation relative to the sun's illumination, the ratio of their reflectances between the two bands should always be very similar. More complex ratios involving the sums of and differences between spectral bands for various sensors have been developed for monitoring vegetation conditions. One widely used image transform is the Normalized Difference Vegetation Index (NDVI), which has been used to monitor vegetation conditions on continental and global scales using the Advanced Very High Resolution Radiometer (AVHRR) sensor onboard the NOAA series of satellites (see Chapter 2, section 2.11).
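NDVI is computed as (NIR - Red) / (NIR + Red). The short sketch below applies that formula to two synthetic reflectance values, one vegetation-like and one soil-like; the numbers are illustrative only.

import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red).

    Values approach +1 for dense healthy vegetation and hover near 0
    (or below) for soil, water and other non-vegetated surfaces."""
    nir = nir.astype(float)
    red = red.astype(float)
    return (nir - red) / (nir + red + 1e-10)   # small constant avoids division by zero

# Synthetic reflectances: a vegetation pixel versus a bare-soil pixel
nir = np.array([[0.50, 0.30]])
red = np.array([[0.08, 0.28]])
print(ndvi(nir, red))  # roughly [[0.72, 0.03]]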

Different bands of multispectral data are often highly correlated and thus contain similar information. For example, Landsat MSS Bands 4 and 5 (green and red, respectively) typically have similar visual appearances since reflectances for the same surface cover types are almost equal. Image transformation techniques based on complex processing of the statistical characteristics of multi-band data sets can be used to reduce this data redundancy and correlation between bands. One such transform is called principal components analysis. The objective of this transformation is to reduce the dimensionality (i.e. the number of bands) in the data, and to compress as much of the information in the original bands as possible into fewer bands. The "new" bands that result from this statistical procedure are called components. This process attempts to maximize (statistically) the amount of information (or variance) from the original data in the least number of new components. As an example of the use of principal components analysis, a seven-band Thematic Mapper (TM) data set may be transformed such that the first three principal components contain over 90 percent of the information in the original seven bands. Interpretation and analysis of these three bands of data, combining them either visually or digitally, is simpler and more efficient than trying to use all of the original seven bands. Principal components analysis, and other complex transforms, can be used either as an enhancement technique to improve visual interpretation or to reduce the number of bands to be used as input to digital classification procedures, discussed in the next section.
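The statistics behind a principal components transform can be sketched with standard linear algebra. The fragment below is a minimal illustration (the band-stack shape, variable names and synthetic data are assumptions): it removes the band means, computes the band-to-band covariance matrix and projects every pixel onto the eigenvectors carrying the largest variance.

import numpy as np

def principal_components(bands, n_components=3):
    # bands: array of shape (n_bands, rows, cols).
    n_bands, rows, cols = bands.shape
    pixels = bands.reshape(n_bands, -1).astype(np.float64)   # bands x pixels
    pixels -= pixels.mean(axis=1, keepdims=True)             # remove band means
    cov = np.cov(pixels)                                      # band-to-band covariance
    eigvals, eigvecs = np.linalg.eigh(cov)                    # ascending eigenvalues
    order = np.argsort(eigvals)[::-1]                         # largest variance first
    components = eigvecs[:, order[:n_components]].T @ pixels
    explained = eigvals[order] / eigvals.sum()
    return components.reshape(n_components, rows, cols), explained

# Synthetic seven-band stack standing in for a TM scene.
stack = np.random.default_rng(1).random((7, 50, 50))
pcs, explained = principal_components(stack)
print(explained[:3].sum())   # fraction of total variance held by the first three components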

Image Classification and Analysis

A human analyst attempting to classify features in an image uses the elements of visual interpretation (discussed in section 4.2) to identify homogeneous groups of pixels which represent various features or land cover classes of interest. Digital image classification uses the spectral information represented by the digital numbers in one or more spectral bands, and attempts to classify each individual pixel based on this spectral information. This type of classification is termed spectral pattern recognition. In either case, the objective is to assign all pixels in the image to particular classes or themes (e.g. water, coniferous forest, deciduous forest, corn, wheat, etc.). The resulting classified image is comprised of a mosaic of pixels, each of which belong to a particular theme, and is essentially a thematic "map" of the original image.

When talking about classes, we need to distinguish between information classes and spectral classes. Information classes are those categories of interest that the analyst is actually trying to identify in the imagery, such as different kinds of crops, different forest types or tree species, different geologic units or rock types, etc. Spectral classes are groups of pixels that are uniform (or near-similar) with respect to their brightness values in the different spectral channels of the data. The objective is to match the spectral classes in the data to the information classes of interest. Rarely is there a simple one-to-one match between these two types of classes. Rather, unique spectral classes may appear which do not necessarily correspond to any information class of particular use or interest to the analyst. Alternatively, a broad information class (e.g. forest) may contain a number of spectral sub-classes with unique spectral variations. Using the forest example, spectral sub-classes may be due to variations in age, species, and density, or perhaps as a result of shadowing or variations in scene illumination. It is the analyst's job to decide on the utility of the different spectral classes and their correspondence to useful information classes.

Common classification procedures can be broken down into two broad subdivisions based on the method used: supervised classification and unsupervised classification. In a supervised classification, the analyst identifies in the imagery homogeneous, representative samples of the different surface cover types (information classes) of interest. These samples are referred to as training areas. The selection of appropriate training areas is based on the analyst's familiarity with the geographical area and their knowledge of the actual surface cover types present in the image. Thus, the analyst is "supervising" the categorization of a set of specific classes. The numerical information in all spectral bands for the pixels comprising these areas is used to "train" the computer to recognize spectrally similar areas for each class. The computer uses a special program or algorithm (of which there are several variations) to determine the numerical "signatures" for each training class. Once the computer has determined the signatures for each class, each pixel in the image is compared to these signatures and labeled as the class it most closely "resembles" digitally. Thus, in a supervised classification we are first identifying the information classes, which are then used to determine the spectral classes which represent them.
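One of the simplest of the "several variations" mentioned above is a minimum-distance-to-means classifier, sketched below in NumPy. The training masks, class names and the use of Euclidean distance are assumptions of this illustration, not the algorithm of any particular image analysis system.

import numpy as np

def train_signatures(bands, training_masks):
    # Mean spectral "signature" per class from analyst-defined training areas.
    # bands: (n_bands, rows, cols); training_masks: {class_name: boolean mask}.
    return {name: bands[:, mask].mean(axis=1) for name, mask in training_masks.items()}

def classify_min_distance(bands, signatures):
    # Label every pixel with the class whose mean signature is closest.
    n_bands, rows, cols = bands.shape
    pixels = bands.reshape(n_bands, -1).astype(np.float64)
    names = list(signatures)
    means = np.stack([signatures[n] for n in names])                 # classes x bands
    dist = np.linalg.norm(pixels[None, :, :] - means[:, :, None], axis=1)
    labels = dist.argmin(axis=0)
    return np.array(names)[labels].reshape(rows, cols)

A maximum likelihood classifier would replace the Euclidean distance with a probability computed from each class's covariance, but the train-then-label structure is the same.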

Unsupervised classification in essence reverses the supervised classification process. Spectral classes are grouped first, based solely on the numerical information in the data, and are then matched by the analyst to information classes (if possible). Programs, called clustering algorithms, are used to determine the natural (statistical) groupings or structures in the data. Usually, the analyst specifies how many groups or clusters are to be looked for in the data. In addition to specifying the desired number of classes, the analyst may also specify parameters related to the separation distance among the clusters and the variation within each cluster. This iterative clustering process may produce some clusters that the analyst will want to subsequently combine, or clusters that should be broken down further - each of these requiring a further application of the clustering algorithm. Thus, unsupervised classification is not completely without human intervention. However, it does not start with a pre-determined set of classes as in a supervised classification.
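A very small k-means-style clustering loop gives the flavour of the iterative grouping described above; the number of clusters, iteration count and random seed below are analyst choices assumed for this illustration.

import numpy as np

def cluster_pixels(bands, n_clusters=5, n_iter=20, seed=0):
    # Group pixels into spectral clusters with a basic k-means loop.
    n_bands, rows, cols = bands.shape
    pixels = bands.reshape(n_bands, -1).astype(np.float64).T         # pixels x bands
    rng = np.random.default_rng(seed)
    centres = pixels[rng.choice(len(pixels), n_clusters, replace=False)]
    for _ in range(n_iter):
        # assign each pixel to its nearest cluster centre
        dist = np.linalg.norm(pixels[:, None, :] - centres[None, :, :], axis=2)
        labels = dist.argmin(axis=1)
        # move each centre to the mean of the pixels assigned to it
        for k in range(n_clusters):
            if np.any(labels == k):
                centres[k] = pixels[labels == k].mean(axis=0)
    return labels.reshape(rows, cols), centres

The analyst would then inspect the resulting spectral clusters and merge, split or relabel them into useful information classes.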

 Did you know?

"...this image has such lovely texture, don't you think?..."

...texture was identified as one of the key elements of visual interpretation (section 4.2), particularly for radar image interpretation. Digital texture classifiers are also available and can be an alternative (or assistance) to spectral classifiers. They typically perform a "moving window" type of calculation, similar to those for spatial filtering, to estimate the "texture" based on the variability of the pixel values under the window. Various textural measures can be calculated to attempt to discriminate between and characterize the textural properties of different features.
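Local variance under a moving window is one of the simplest such textural measures; the sketch below is a purely illustrative NumPy version, with the window size and reflection padding chosen as assumptions.

import numpy as np

def local_variance(image, window=3):
    # Moving-window texture measure: variance of the pixel values under the window.
    pad = window // 2
    padded = np.pad(image.astype(np.float64), pad, mode='reflect')
    rows, cols = image.shape
    texture = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            texture[i, j] = padded[i:i + window, j:j + window].var()
    return texture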

Applications

Introduction

As we learned in the section on sensors, each one was designed with a specific purpose. With optical sensors, the design focuses on the spectral bands to be collected. With radar imaging, the incidence angle and microwave band used play an important role in defining which applications the sensor is best suited for.

Each application itself has specific demands, for spectral resolution, spatial resolution, and temporal resolution.


To review, spectral resolution refers to the width or range of each spectral band being recorded. As an example, panchromatic imagery (sensing a broad range of all visible wavelengths) will not be as sensitive to vegetation stress as a narrow band in the red wavelengths, where chlorophyll strongly absorbs electromagnetic energy.

Spatial resolution refers to the discernible detail in the image. Detailed mapping of wetlands requires far finer spatial resolution than does the regional mapping of physiographic areas.

Temporal resolution refers to the time interval between images. There are applications requiring data repeatedly and often, such as oil spill, forest fire, and sea ice motion monitoring. Some applications only require seasonal imaging (crop identification, forest insect infestation, and wetland monitoring), and some need imaging only once (geology structural mapping). Obviously, the most time-critical applications also demand fast turnaround for image processing and delivery - getting useful imagery quickly into the user's hands.

In a case where repeated imaging is required, the revisit frequency of a sensor is important (how long before it can image the same spot on the Earth again) and the reliability of successful data acquisition. Optical sensors have limitations in cloudy environments, where the targets may be obscured from view. In some areas of the world, particularly the tropics, this is virtually a permanent condition. Polar areas also suffer from inadequate solar illumination, for months at a time. Radar provides reliable data, because the sensor provides its own illumination, and has long wavelengths to penetrate cloud, smoke, and fog, ensuring that the target won't be obscured by weather conditions, or poorly illuminated.

Often it takes more than a single sensor to adequately address all of the requirements for a given application. The combined use of multiple sources of information is called integration. Additional data that can aid in the analysis or interpretation of the data is termed "ancillary" data.

The applications of remote sensing described in this chapter are representative, but not exhaustive. We do not touch, for instance, on the wide area of research and practical application in weather and climate analysis, but focus on applications tied to the surface of the Earth. The reader should also note that there are a number of other applications that are practiced but are very specialized in nature, and not covered here (e.g. terrain trafficability analysis, archeological investigations, route and utility corridor planning, etc.).

Multiple sources of information

Each band of information collected from a sensor contains important and unique data. We know that different wavelengths of incident energy are affected differently by each target - they are absorbed, reflected or transmitted in different proportions. The appearance of targets can easily change over time, sometimes within seconds. In many applications, using information from several different sources ensures that target identification or information extraction is as accurate as possible. The following describe ways of obtaining far more information about a target or area than with one band from a sensor.

Multispectral

The use of multiple bands of spectral information attempts to exploit different and independent "views" of the targets so as to make their identification as confident as possible. Studies have been conducted to determine the optimum spectral bands for analyzing specific targets, such as insect damaged trees.

Multisensor

Different sensors often provide complementary information, and when integrated together, can facilitate interpretation and classification of imagery. Examples include combining high resolution panchromatic imagery with coarse resolution multispectral imagery, or merging actively and passively sensed data. A specific example is the integration of SAR imagery with multispectral imagery. SAR data adds the expression of surficial topography and relief to an otherwise flat image. The multispectral image contributes meaningful colour information about the composition or cover of the land surface. This type of image is often used in geology, where lithology or mineral composition is represented by the spectral component, and the structure is represented by the radar component.
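A simple way to sketch the panchromatic/multispectral merge mentioned above is a Brovey-style ratio: each multispectral band (assumed here to be already resampled to the panchromatic pixel size) is rescaled so that the overall intensity follows the sharper pan band. The array names and the epsilon guarding against division by zero are assumptions of this illustration, not a prescription for any particular software package.

import numpy as np

def brovey_merge(ms_bands, pan, eps=1e-6):
    # ms_bands: (n_bands, rows, cols) multispectral stack resampled to the pan grid;
    # pan: (rows, cols) panchromatic band at full resolution.
    ms = ms_bands.astype(np.float64)
    intensity = ms.mean(axis=0) + eps
    return ms * (pan.astype(np.float64) / intensity)   # spectral colour, pan detail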

Multitemporal

Information from multiple images taken over a period of time is referred to as multitemporal information. Multitemporal may refer to images taken days, weeks, or even years apart. Monitoring land cover change or growth in urban areas requires images from different time periods. Calibrated data, with careful controls on the quantitative aspect of the spectral or backscatter response, is required for proper monitoring activities. With uncalibrated data, a classification of the older image is compared to a classification from the recent image, and changes in the class boundaries are delineated. Another valuable multitemporal tool is the observation of vegetation phenology (how the vegetation changes throughout the growing season), which requires data at frequent intervals throughout the growing season.

"Multitemporal information" is acquired from the interpretation of images taken over the same area, but at different times. The time difference between the images is chosen so as to be able to monitor some dynamic event. Some catastrophic events (landslides, floods, fires, etc.) would need a time difference counted in days, while much slower-paced events (glacier melt, forest regrowth, etc.) would require years. This type of application also requires consistency in illumination conditions (solar angle or radar imaging geometry) to provide consistent and comparable classification results.

The ultimate in critical (and quantitative) multitemporal analysis depends on calibrated data. Only by relating the brightnesses seen in the image to physical units can the images be precisely compared, and thus the nature and magnitude of the observed changes be determined.
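For the uncalibrated, post-classification approach described above, change detection reduces to comparing two classified maps pixel by pixel. The sketch below (class labels and array names are assumptions) flags changed pixels and tabulates the from/to transitions; with calibrated data, the comparison could instead be made directly on the physical reflectance or backscatter values.

import numpy as np

def post_classification_change(labels_t1, labels_t2):
    # labels_t1, labels_t2: classified maps of the same area at two dates.
    changed = labels_t1 != labels_t2
    classes = np.union1d(labels_t1, labels_t2)
    transitions = np.zeros((classes.size, classes.size), dtype=int)
    for i, c_from in enumerate(classes):
        for j, c_to in enumerate(classes):
            transitions[i, j] = np.sum((labels_t1 == c_from) & (labels_t2 == c_to))
    return changed, classes, transitions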

http://rst.gsfc.nasa.gov/Intro/Part2_7.html

The chronological development or history of the use of remote sensing from platforms that fly or orbit above the Earth's surface is introduced on this page. Brief mention is given to the history of aerial photography. Examples of image products photographed from sounding rockets, from one of the first satellite-mounted remote sensors, and taken by astronauts are presented. Links are provided to three Tables, each outlining some aspect of the history of remote sensing. Although not themselves directly associated with remote sensing, launch vehicles are needed to get the sensors into position. We mention and describe some of the best known: the V-2 modified from World War II weaponry; the Viking and Aerobee; the Delta, Atlas, and Titan rockets; the Apollo program's Saturn V; Russia's Proton and Energia; France's Ariane; the Space Shuttle and its Russian counterpart, Buran.

History of Remote Sensing: In the Beginning; Launch Vehicles

Having now covered some of the principles behind the nature and use of remote sensing data and methodologies, including sensors and image processing, we switch to a survey of the era of satellite remote sensing (with some mention of aircraft remote sensing and space photography by astronauts and cosmonauts), presented in an historical framework. Special topics near the end will be multiplatform systems, military surveillance, and remote sensing as it applies to medical imaging systems.

Remote sensing as a technology started with the first photographs in the early nineteenth century. Many significant events led to the launch of the Landsat satellites, which are the main focus of this tutorial. To learn about the milestones in remote sensing prior to the first Landsat, you can view a timeline of remote sensing in one of three areas - Photographic Methods, Non-Photographic Sensor Systems, Space Imaging Systems (taken from a table that appeared in the writer's [NMS] NASA Reference Publication 1078 [now out of print] on The Landsat Tutorial Workbook). That review ends with events in 1979. You can also find more on the general history of U.S. and foreign space programs in Appendix A and at this online Web site: review-3. NASA's Earth Sciences Enterprise program has prepared a brief but informative summary of its first 40 years of Earth Observations, accessed at its site.

We present major highlights subsequent to 1979 both within this Introduction and throughout the Tutorial. Some of these highlights include short summaries of major space-based programs: the launch of several other satellite/sensor systems similar to Landsat; the insertion of radar systems into space; the proliferation of weather satellites; the launch of a series of specialized satellites to monitor the environment using, among others, thermal and passive microwave sensors; the development of sophisticated hyperspectral sensors; and the deployment of a variety of sensors to gather imagery and other data on the planets and astronomical bodies.

The photographic camera has served as a prime remote sensor for more than 150 years. It captures an image of targets exterior to it by concentrating electromagnetic (EM) radiation (normally, visible light) through a lens onto a recording medium. The Daguerreotype plate was the first of this kind. A key advance in photography occurred in 1871 when Dr. Henry Maddox, an Englishman, announced the development of a photographic negative made by enclosing silver halide suspended in an emulsion mounted on a glass plate (later supplanted by flexible film that is advanced to allow many exposures). Silver halide film remains the prime recording medium today. The film displays the target objects in their relative positions by variations in brightness (gray levels in black and white) or in color tones (using dyes, as discussed in Section 10).

Although the first, rather primitive photographs were taken as "stills" on the ground, the idea of photographing the Earth's surface from above, yielding the so-called aerial photo, emerged in the 1860s with pictures from balloons. The first success - now lost - is a photo of a French valley made by Felix Tournachon. One of the oldest such photos, of Boston, appears on the first page of the Overview. The first free-flight photo mission was carried out by Monsieur Triboulet in 1879. Meanwhile, an alternate approach, mounting cameras on kites, became popular in the last two decades of the 19th Century. E. Archibald of England began this method in 1882.

G.R. Lawrence took several famous kite photos of the devastation in San Francisco, California right after the infamous 1906 earthquake that, together with fire, destroyed most of the city.

It appears that Wilbur Wright - the co-developer of the first aeroplane to leave the ground in free flight - himself was the first to take pictures from an airplane, in France (LeMans) in 1908 and Italy (Centocelli) in 1909.

By the first World War, cameras mounted on airplanes, or more commonly held by aviators, provided aerial views of fairly large surface areas that were invaluable for military reconnaissance. This is documented in these two photos:

From then until the early 1960s, the aerial photograph remained the single standard tool for depicting the surface from a vertical or oblique perspective. More on aerial photography is reviewed on page 10-1.

Historically, the first photos taken from a small rocket, from a height of about 100 meters, were imaged from a rocket designed by Alfred Nobel (of Prize fame) and launched in 1897 over a Swedish landscape; this photo also appears in the Overview.

A camera succeeded in photographing the landscape at a height of 600 meters (2000 ft) reached by Alfred Maul's rocket during a 1904 launch:

Remote sensing above the atmosphere originated at the dawn of the Space Age (both Russian and American programs). The power and capability of launch vehicles was a big factor in determining what remote sensors could be placed as part (or all) of the payload. At first, by 1946, some V-2 rockets, acquired from Germany after World War II, were launched by the U.S. Army from White Sands Proving Grounds, New Mexico, to high altitudes (70 to 100 miles). The first V-2 launch in the U.S. took place on April 17, 1946. Launch 13 in the V-2 series (which the writer witnessed from a distance while stationed at Fort Bliss, El Paso, TX to the south), on October 24, 1946, contained a motion picture camera in its nose cone, which acquired a series of views of the Earth's surface as it proceeded to a 134 km (83 miles) altitude. The writer later was assigned as a Post newspaper reporter privileged in Spring 1947 to attend a V-2 launch at White Sands and to interview Wernher von Braun, the father of the German V-2 program and a prime mover of the Apollo program; little did I know then that I would be heavily involved in America's space program in my career years. V-2 pictures are included in an October 1950 article "Seeing the Earth from Space" in the National Geographic. We show here a photo of one of the White Sands launches of the V-2 - a precursor to the Saturn rocket which propelled astronauts to the Moon.

Here is a picture from a later V-2 launch that shows the quality of detail in a scene in nearby New Mexico just north of the White Sands launching site:

Smaller sounding rockets, such as the Wac Corporal, and the Viking and Aerobee series, were developed and launched by the military in the late '40s and '50s. These rockets, while not attaining orbit, contained automated still or movie cameras that took pictures as the vehicle ascended. In these early days there were many variants of sounding rockets, along with those being groomed for eventual insertion of objects into orbit. An outdoor display of these at the White Sands Museum is impressive:

Here is an example of a typical oblique picture made during a Viking Flight in 1950, looking across Arizona and the Gulf of California to the curving Earth horizon (this photo is shown again in Section 12).

Having shown several of the early rockets in the above images, we will allow a brief sidetrack to show six more modern launch vehicles, since there is no other page in this Tutorial that focuses primarily on this topic. The first is the mightiest rocket of them all - the Saturn V - which remains Wernher von Braun's greatest triumph. The first scene shows this rocket on its conveyor vehicle en route to a launch pad at Cape Canaveral. The second is a "surplus" (never used after the Apollo program ended) vehicle on display at the Johnson Space Center in Houston. The bottom image captures the famous moment when Apollo 11 lifted off for its historic journey to the Moon and back. Saturn V was 33 stories tall (113 m or 363 ft) and could deliver a thrust of 7.5 million pounds.

The two workhorse launch vehicles in the U.S. space program have been the Atlas and Delta rockets, each having been upgraded over the last 20 years. Both satellites orbiting the Earth and spacecraft leaving for outer space have been launched on these rockets.

The U.S. Air Force's primary launch vehicle is the Titan. Other nations have built their own vehicles for varied purposes. The French Ariane rocket, operating out of French Guiana in northern South America, is frequently chosen by companies launching commercial satellites.

The other frequently used launch vehicle is the Space Transportation System (STS), more commonly known as the Space Shuttle. It uses two recoverable solid rocket boosters and an expendable external fuel tank in addition to its own engines. The Soviets copied this vehicle (calling it Buran) but used an expendable Energia rocket to launch it; very few flights have occurred, especially since the program passed to Russia. They also use a Proton rocket.

Now, let us return to satellites that do remote sensing. The first non-photo sensors were television cameras mounted on unmanned spacecraft and were devoted mainly to looking at clouds. The first U.S. meteorological satellite, TIROS-1, launched by an Atlas rocket into orbit on April 1, 1960, looked similar to this later TIROS vehicle.

TIROS, for Television Infrared Observation Satellite, used vidicon cameras to scan wide areas at a time. The image below is one of the first (May 9, 1960) returned by TIROS-1 (10 satellites in this series were flown, followed by the TOS and ITOS spacecraft, along with Nimbus, NOAA, GOES and others [see Section 14]). Superimposed on the cloud patterns is a generalized weather map for the region.

Then, in the 1960s as man entered space, cosmonauts and astronauts in space capsules took photos out the window. In time, the space photographers had specific targets and a schedule, although they also had some freedom to snap pictures of targets of opportunity.

During the '60s, the first sophisticated imaging sensors were incorporated in orbiting satellites. At first, these sensors were basic TV cameras that imaged crude, low resolution (little detail) black and white pictures of clouds and Earth's surface, where clear. Resolution is the size of the smallest contrasting object pairs that can be sharply distinguished. Below, we show three examples from the Nimbus satellite's sensors to give an idea of how good the early photos were.

Early on, other types of sensors were developed that took images using the EM spectrum beyond the visible, into the near and thermal infrared regions. The field of view (FOV) was broad, usually 100s of kilometers on a side. Such synoptic areas of regional coverage were of great value to the meteorological community, so that many of these early satellites were metsats, dedicated to gathering information on clouds, air temperatures, wind patterns, etc.

For those who may want to know more about space travel in general and launching in particular, we recommend this website: Rocket and Space Technology, which also includes other aspects of space history.

Remote sensing then and now: Early Beginnings 1960-66

http://ccrs.nrcan.gc.ca/org/history/history1_e.php


Canada has a notable history in the early development and application of aerial photography, photogrammetry and airborne geophysics to the mapping, resource development and environmental monitoring of its very large and remote territory. Topographers, foresters and geologists used aerial photography extensively, especially in the years immediately following WW II. Their combined experience in this art was heavily relied upon to develop the Canadian Remote Sensing Program. The term 'remote sensing' was first used by the U.S. military to describe a type of aerial surveillance that went beyond the use of photography into the use of parts of the electromagnetic spectrum other than the visible, such as the infrared and the microwave parts.

It was my privilege to have an appropriate scientific and technical background in order to have been in a central position as far as the launching of the Canadian Remote Sensing Program was concerned. Prior to 1963, remote sensing was militarily classified by the U.S. Department of Defence. U-2 aircraft reconnaissance flights, spy satellites and airborne infrared line-scanners were being used to great strategic advantage in Vietnam and in the Cold War in general. As a civilian with no access to classified technology, serving as a geophysicist in the Geological Survey of Canada (GSC) and active in all aspects of airborne geophysical methods, I was naturally interested in these things.

By chance, in 1962, I happened to serve on a United Nations Mission on 'photogeology and airborne geophysics' at the Geological Survey of Japan with William Fischer, chief photogeologist with the U.S. Geological Survey. He told me that the U.S. Department of Defence would shortly be de-classifying a lot of remote sensing technology and that he was hoping to get the USGS involved. So I was, from the beginning, on the lookout for information. In 1963, the Environmental Research Institute in Ann Arbor held the first unclassified international symposium on remote sensing. Steve Washkurak from the GSC attended and brought back much valuable information.

As soon as Bill Fischer returned from this mission, he set to work on the idea of using satellite imagery for photogeology and later became a force for leading the U.S. into its National Remote Sensing Program.

The GSC allowed a lot of freedom to its scientists in choosing what kind of R & D they undertook. Thus the Geophysics Division was able to pursue R & D in remote sensing, even though aerial methods were considered quite "flakey" at the time for geological applications. We set up a Remote Sensing Section with Alan Gregory as its head. He had been conducting the original experiments in the development of the airborne gamma ray spectrometer, which we considered to be a type of remote sensing. He went on a fact-finding mission to several laboratories in the U.S. which we knew were doing development work on remote sensing and returned with much valuable information and contacts which were used to point the direction in which we should be moving in Canada.

During this period, there were two persons in Canada who had been working with remote sensing within the classified area. One was Trevor Harwood, a geophysicist with DRB, and the other a geologist with Mt. Allison University, the late 'Harky' Cameron. Cameron was an ex-RCAF navigator who had maintained his connections with the RAF and was able to get access to PPI radar images taken over Nova Scotia by RAF Vulcan aircraft based at Goose Bay, Labrador. They were taken at altitudes of 30,000 feet and each one covered nearly half of Nova Scotia in one image. These images showed dramatic 'linears' which Cameron interpreted as major geological faults. The geological 'establishment' did not think much of his interpretation and nicknamed him "Faulty Cameron", which discouraged him from publishing. It is interesting to note, however, that his interpretation now stands as 'self evident', confirming Schopenhauer's words.

Trevor Harwood and Moira Dunbar, both of DRB, studied floating ice in the Canadian Arctic using aerial photography and remote sensing. This was, and still is, a subject of great concern to Canada, both from the point of view of unauthorized passage of foreign vessels through the N.W. Passage and of providing information for safe surface navigation. Harwood organized the first infrared line-scanning survey of a test area in the Arctic, which demonstrated the possibility of ice reconnaissance during the Arctic night. The Ice Branch had been conducting ice reconnaissance flights for several years during the summer, but had no information on winter coverage as they were using visual observation methods.

Both Harky Cameron and Trevor Harwood found interested listeners in the Geophysics Division of the GSC to any information on remote sensing which they felt could be revealed at that time. I vividly remember Trevor telling me in 1962 "the Americans have a dedicated C 130 and are 'shovelling' millions of dollars into it in the form of all kinds of remote sensing devices". I was very impressed with this because of my obsession with airborne methods of data gathering. It was also frustrating because of not having access to this information.

The break came in 1963 when the Environmental Research Institute of Michigan obtained permission from the U.S. Department of Defence to hold an open conference on remote sensing. A wide variety of both operational and experimental sensors, ranging from infrared and multispectral scanners to side-looking radar, passive microwave imaging devices, scatterometers and laser sensors, were discussed. Our only problem was to get our hands on some of these devices: whereas the data from these sensors were de-classified, there was still a restriction on the sale of the instruments. Steve Washkurak, a technologist with the Geophysics Division, attended this conference and returned with much valuable information which he passed on to the division scientists in our regular Friday afternoon informal seminars.

Help came in 1964 from Dr. Robert Uffen, the newly-appointed chairman of the Defence Research Board. He was able to arrange the purchase by DRB of a state-of-the-art infrared line scanner from HRB Singer, which was making these instruments for night-time aerial surveillance in Vietnam. This scanner was then installed in the NRC/Flight Research North Star aircraft by Lee Godby's Magnetic Airborne Detector (MAD) group. They had been doing development work for DRB on aircraft demagnetization systems for use with the rubidium vapor magnetometer, a supersensitive magnetometer used for airborne submarine detection. The Geophysics Division had been collaborating with the MAD group on the development of an airborne magnetic gradiometer for mineral exploration and continued this collaboration with the infrared line-scanner. This scanner provided the first opportunity in Canada to carry out de-classified remote sensing experiments. It was used later by the Canada Centre for Remote Sensing for many years. It was even loaned back to DND at the time of the Pierre Laporte kidnapping to search for cottages in the Laurentians where he might have been taken for hiding. It was November, when most cottages would have been closed up for the winter. Any cottage showing a heat target was therefore a suspected hideout.

This collaboration with the MAD group turned out to be very valuable to the future remote sensing program as their magnetic work for DRB was completed and five bright young engineers, including Lee Godby, later transferred over to the newly-formed Canada Centre for Remote Sensing (CCRS), providing much-needed electronic and computer expertise.

At the beginning of 1964 it became quite obvious to the few of us in the government who were interested in airborne remote sensing (before we envisioned doing it from space) that it could become a big and important activity for Canada for obtaining information for managing our resources and environment, especially because of our vast and relatively inaccessible land and continental shelf territories. How could we organize and promote this? As the mandate for airborne remote sensing would obviously cut across several government departments (which later turned out also to be a major problem for the universities), the 'knee-jerk' response of us civil servants was to set up an interdepartmental committee. This had to be done at the working level, again because of the 'flakey' nature of the subject. After a year or so of deliberations, we finally decided to recommend to the government the establishment of an aerogeophysical and remote sensing institute which would have a government-wide mandate. This was recommended in a bottom-up fashion to our various departmental senior officials. Several thought it to be a good idea and were prepared to push it up further, but unfortunately the idea became entwined in interdepartmental wrangles, such as which agencies had the mandate to operate aircraft, and the proposal was stillborn.

The committee languished for a couple of years, becoming only a forum for the exchange of scientific and technical ideas which began to seem more and more like pipe dreams when a dramatic thing happened.

Remote sensing then and now: Leap Into Space

Civilian remote sensing in Canada would have developed incrementally had it not been for the proposal brought forward to the U.S. Government by Bill Fischer of the U.S. Geological Survey. By 1966, he had managed to convince the Department of the Interior and its then secretary, Bill Pecora, to sponsor an Earth Resources Orbiting Satellite (EROS). The satellite bus would be the same one that RCA had made for NASA's experimental meteorological NIMBUS program, and the payload would consist of three bore-sighted RCA very high-resolution return-beam vidicons. Each vidicon would have its own optical filter, one for the near-infrared band, one for the red and the other for yellow, corresponding to the three emulsion layers on camouflage film used during the war and found to be especially useful in airborne experiments to map vegetation. Bill Pecora managed to get approval for funding the satellite as an Interior initiative.

I invited Bill Fischer to come to Ottawa and meet privately with senior EMR officials and John Chapman, ADM of the Satellite Communication Branch of Eric Kierans' newly-formed Department of Communications. After the success of the Alouette and ISIS programs, Chapman had also become interested in satellite remote sensing. The meeting was held at the Royal Ottawa Golf Club. Jim Harrison, ADM/EMR, John Chapman, Yves Fortier, director of the GSC, and Sam Gamble, director of the Surveys and Mapping Branch, were present.

Fischer, after briefing us on the EROS program, showed some of the pictures of the Earth taken by the astronauts with hand-held cameras. At the time they seemed very dramatic indeed and impressed all of us. Chapman then offered the Prince Albert Radar Laboratory as a readout station for EROS. It contained an 84-foot diameter paraboloid tracking dish which was no longer needed as the ionosphere experiments, for which it had been erected, were complete. Fischer was pleased with the idea, as this station would be able to read out data for the whole of North America. At that time, no plans had been made for the readout of EROS.

Alas, the U.S. Bureau of the Budget had EROS cancelled. NASA had complained that EROS would be an experimental space project, and as such was within the mandate of NASA. Mandates are very important in the government service. Nevertheless, NASA was honour-bound to come up with an alternative program, which it finally did in 1969. It was to be called the Earth Resources Technology Satellite (ERTS). It was to retain the three RCA return-beam vidicons, but was also to have a new sensor, the multispectral scanner, designed and made by Hughes of Santa Barbara. A vibrating mirror focussed the earth scene onto an array of solid state detectors arranged in five clusters, each cluster being sensitive to a different wave band, making five channels ranging from blue to the far infrared. The sensor was to have a ground resolution of 80 metres.

Our problem now was how to get the same deal with NASA on ERTS as we had with Interior on EROS, particularly as NASA originally had no plans to share their program internationally. John Chapman, who had had successful cooperation with NASA on the Alouette and ISIS programs, and I visited their Assistant Administrator to propose that Canada read out ERTS at Prince Albert. We were politely told "it would be foolish to invest in a readout and ground data handling facility as it would be too large a risk for Canada. It was costing NASA $40 million and besides, they could cover most of the Canadian landmass from their three readout stations in Alaska, Goldstone, Arizona and Wallops Island, New Jersey. Furthermore ERTS would carry an on-board tape recorder which could record any data beyond the range of the three readout stations". It was clear they did not want to have a foreign country reading out this satellite. Chapman appealed to President Nixon's Scientific Advisor, but to no avail.

Canada had a strong tradition, as did most states, of keeping the control of aerial photography and mapping within the country. We did not like the idea of having to purchase imagery of Canadian territory from a foreign country nor of having a foreign country acquire imagery of Canadian territory without advance permission. Such a concept was in violation of the Chicago International Convention which required any state wishing to acquire air photography of another country to first get permission. NASA took the position that in this case, the U.N. Treaty on the 'Peaceful Uses of Outer Space' applied - in which any state was free to conduct any activity in space provided it did no harm to other states (particularly in the case of falling debris).

The Deputy Minister of Communications, Alan Gottlieb and his legal advisor, Charles Dalfen, at first wished to make diplomatic objection on the grounds that the U.S. would be able to obtain exclusive information on the location of potential mineral and petroleum deposits in Canada by means of this satellite and might give advance information to U.S. exploration companies. By orbiting a resource satellite and taking imagery of Canadian territory, the U.S. would be invading our sovereign rights. In EMR, we took the view that it would be preferable for Canada, and indeed the international community, to gain knowledge of the technology which would allow us to better control the use of the data. To that end, Harrison, Gregory and I presented a paper at the International Astronautics Federation held in Brussels in 1971, in which we recommended that an international legal regime for the operation of remote sensing satellites, including the transfer of technology, be established. We suggested the establishment of an international network of readout and ground data handling centres and even supplied a map showing the possible locations of such a network. This aroused a lot of hostility from one of NASA's assistant administrators. Its publication was delayed for four years because the editor misplaced the manuscript and failed to notify me of this.

A break in the question of Canada's reading out the ERTS satellite occurred in March, 1969. NASA's Chief Administrator was on a tour of the countries active in space to seek technical contributions towards their "Post Apollo Program", which was the development of the Space Shuttle. When he came to Canada, he addressed a meeting of about 50 senior scientific administrators and politicians held in the Centre Block of the Parliament Buildings. After delivering a briefing on the re-usable shuttle concept, he asked Canada, as he had asked several other nations, to consider how it might contribute. (Canada eventually responded to this request by suggesting we contribute the remote manipulator system, later to be known as the CANADARM.) During the question period I had the temerity to ask if NASA would allow Canada to read out their proposed ERTS satellite at Prince Albert. To my surprise he immediately replied "yes". While it did take another two years to reach a written agreement, we were definitely on our way. Whatever happened within NASA, I do not know, but within a few months it became their policy to encourage other states to read out ERTS and to pay a fee of $200K per year per station for the privilege of doing so. After a year of negotiations with NASA and the U.S. State Department, 'an exchange of notes' on the Canada/U.S. Earth Resources Agreement was finally signed.

Remote sensing then and now: The Program Planning Office (1969-71)

The next thing was to prepare for the reception of ERTS at Prince Albert. But before this, we had to reach an agreement in Canada on the organization and funding for an agency to implement the remote sensing program.

Whereas it is a relatively easy thing for the ruling party in the government to create a new government agency, it is nearly impossible to promote such a thing from the position of a division chief in a branch of one department because of innate interdepartmental rivalries and questions of departmental mandates (witness the lack of success of the previous interdepartmental committee on remote sensing). The solution was found in the suggestion by a consultant, Mr. E.J. Robb of TRW, the U.S. aerospace giant. He suggested going to Cabinet to ask for the establishment of an Interdepartmental Planning Office for Remote Sensing. This we did, and at the same time asked for an initial budget of $550,000 to do advance planning for the readout of ERTS and for the establishment of a Remote Sensing Centre. This was approved; the Planning Office was to have a life of two years, at the end of which an organizational and operating plan was to be submitted to Cabinet.

In early 1969, a document was sent to Cabinet by Energy Mines and Resources recommending the establishment of an Interagency Committee on Resource Satellites and Remote Airborne Sensing, supported by a Program Planning Office to set up technical working groups, prepare program forecasts for resource satellite and remote sensing programs and to plan and recommend an organization to carry out these programs. The governance, management and committee structure of the Planning Office was based on the model of the NRC Associate Committee on Geodesy and Geophysics, on which I had served in the fifties and sixties. This committee, among other things, had spawned Canada's very successful participation in the 1957 International Geophysical Year which, in many ways, set the stage for Canada's venture into space. It was this organization that led to the program becoming a truly national one, with most of the appropriate technical and scientific organizations and individuals in Canada being consulted and involved.

Analogous to the NRC Associate Committee on Geodesy and Geophysics, there were 14 sub-committees called 'working groups', each one of which had between 10 and 15 representative members from both federal and provincial governments, from industry and universities. Generally, working group leaders were selected from the various federal government departments where the required disciplinary expertise existed. In all, there were more than 140 scientists and engineers involved in these working groups, each of which, in the first year, met four or five times. This may have seemed like committee 'overkill', but we desperately needed lots of help to meet the ERTS launch date of July 1972. They were indeed 'working groups' and they had two jobs: firstly to educate themselves in this new technology, and secondly to do their part in preparing for the onslaught of data from ERTS - to reproduce, distribute and interpret the useable images of the 1,500 or so scenes of Canada which the satellite was to record every day, seven days a week!

In retrospect, these working groups performed beautifully. Each did its job with the result that Canada was fully ready for the launch of ERTS - probably even more so than the U.S. itself as we were a smaller, more tightly-knit organization.

Remote sensing then and now: The Prince Albert Satellite Station

The satellite and ground station engineering working group had the most urgent job to get ready. Ron Barrington from John Chapman's Communications/CRC group supervised the refurbishing and conversion of the Prince Albert Radar Laboratory (PARL) to an ERTS receiving station (PASS), through a contract to the Department of Physics of the University of Saskatchewan, headed by Alex Kavadas.

When it came to designing the ground data handling centre, there was nobody in the Ottawa area who had any relevant experience. NASA had contracted with Bendix for $30 million to design and build their system. We talked to Bendix but considered their costs too high. Murray Strome, the computer expert from the NRC/NAE/MAD group, had resigned from NRC and was hired as a consultant to the Program Planning Office. He managed to put the necessary technical specifications together and a contract was let to Computing Devices of Ottawa. They assembled a team of six engineers under the leadership of Ed Shaw which worked in-house, on contract, as part of the Program Planning Office. They managed to complete and install the system, at the new headquarters of the Canada Centre for Remote Sensing on Sheffield Rd. in Ottawa, on time and within the budget of $4 million (compared to NASA's cost of $30M).

During the design of our system, NASA called an open seminar on the design of ground data handling and interpretation centres for resource satellites, which was held at the Annapolis Naval Academy. All the big U.S. aerospace companies presented papers, as did several specialized consultants. Al Gregory and I attended because our design team was too busy to go. As a finale to the meeting, a panel consisting of several senior NASA people and chaired by the senior science officer for the State Department was convened to answer questions from the audience about ERTS. They obviously must have expected questions from the international members of the audience because of the presence of the State Department official. Towards the end of the session, which had been quite unexciting and after most of the audience had left, someone at the back asked if it would be permitted for a private company to put up its own ground station, read out and market the ERTS data. It was as if someone had thrown a brick through the window. The audience was stunned, as was the chairman. After he regained composure, he asked, "What is the name of your company?" The reply was, "Wright Engineering of Vancouver," which surprised the audience even more, especially Gregory and me. The chairman responded, "We would have to consider the merits of the case," and the session ended.

Gregory and I ran to the back of the room to meet this man and arranged to have dinner with him. His name was David Sloan. At the time, he was a recent Ph.D. from the U.B.C. Physics Department. He had read about ERTS and had ideas as to how to build a low-cost ground station and data handling centre. He had joined Wright Engineering, a large mine and mill design engineering company, and was in the process of convincing them to build an ERTS ground station in Vancouver. He was unsuccessful in getting Wright Engineering to finance this venture, so he went to John MacDonald, President of a new start-up firm in Vancouver named MDA. As John MacDonald had recently come from the staff of the U.B.C. Physics Department, he knew David Sloan and hired him immediately. We could not contract with MDA for the ground data handling centre because we had already let the contract to CDC, but Ron Barrington, who was handling the conversion of the Prince Albert Radar Station, let a small contract to MDA to make and install what was called a QUICK LOOK system at PARL (Prince Albert Radar Laboratory). The QUICK LOOK was to provide a 60 mm photo-transparency of the imagery almost as quickly as it was received from the satellite - admittedly a somewhat decimated version of the image - but fast. This system enabled Canada to see the first image produced by ERTS, which covered the twin cities of Dallas and Fort Worth. NASA took four days to produce their first image.

I took the space to mention this story because MDA's success in the QUICK LOOK project was given world-wide publicity. I presented a paper at the U.N. Peaceful Uses of Outer Space Committee showing some of these images. The QUICK LOOK was the prelude to the development of a full, low-cost ground data handling centre. Within two years, MDA submitted an unsolicited proposal to DSS to provide a complete ERTS ground station for our new site at Shoe Cove, Nfld. This facility was to be built in anticipation of NASA's SEASAT, which was to carry the first spaceborne synthetic aperture radar. The unsolicited proposal was for $1.4 million and put the whole data handling system in one trailer measuring 12' x 25' - some accomplishment, considering the previous data handling centres occupied major computer rooms with false floors measuring at least 30' x 30'!

The international response to this was astounding and a path was beaten to our door by scientists from all over the world wanting to get more information on this system. This was the start of MDA's growth. They now employ more than 500 people and are the world's leading remote sensing technology supplier.


Remote sensing then and now: The DSS Unsolicited Proposal Fund

I digress to give credit to Peter Meyboom for initiating the first fully funded unsolicited proposal program for the Science Procurement Division of DSS. Meyboom was a Dutch-trained hydrogeologist at the Geological Survey of Canada, where I knew him in the early 60's. He applied for a senior administrative training course in the Federal Government and was assigned a post in the Dept. of Supply and Services in charge of Science Procurement. It was at a time when a new government policy was being implemented throughout the Government Service - the "Make or Buy" policy - which meant that all departments had to justify their reasons for conducting their work 'in house' as opposed to contracting it out to industry. This was a good policy, except that departments were not allotted the extra operating funds to do this, as they were not allowed to lay off employees who would be made redundant by the contracting-out policy. Meyboom argued that almost every department underspent its allotted operational funds every year by an almost predictable percentage. Why not create a fund from these monies, to be known as the "Unsolicited Proposal Fund"? Companies were asked to submit 'unsolicited proposals' to the government for R&D projects which might be in line with certain departments' objectives or missions. A copy of each proposal was sent to every department and, after due time for their consideration, a meeting was held at DSS to which all interested departments were invited. If one or more departments officially registered a genuine interest in the proposal and were prepared to act as sponsors and assign "technical contract monitors" to the project, it would be immediately funded from the Unsolicited Proposal Fund. This policy was 'heaven-sent' to the Remote Sensing Program, as most of our work at the beginning was contracted out anyway. It allowed us, in the first few years, to nearly double our budget and to accelerate our productivity at twice the speed we had planned. It is well known that in most "green field" starts the funding required is always underestimated. The chances of getting more funding for the remote sensing program than was originally asked for would have been very remote indeed. For example, the Shoe Cove Ground Station was completely paid for out of this fund; it would not have happened otherwise. It was this project which allowed MDA to become the nearly exclusive supplier of ground stations for ERTS and, later, for most other earth observation satellites.

The U.P. Fund had several advantages over normal funding methods. Firstly, the funds were already in place and did not have to be requested a year in advance, only to be told "find it out of your existing budget". Secondly, the U.P. Fund accomplished government-wide dissemination immediately, which was important to avoid duplication of research by different government departments. And thirdly, the decision on acceptance or rejection was fast, which is very important from the proposer's point of view.


Remote sensing then and now: Sensor Development

We had included in the budget for the Planning Office an amount of $250,000 for the development of sensors. This had arisen because SPAR Aerospace had proposed building a Canadian resource satellite in which 'a suitable remote sensing device' could be installed. This made us realize that Canada had absolutely no 'candidate' sensors to develop for such a satellite. We went scouring the universities and private companies for ideas on novel sensors and then put out a 'Request for Proposals on Novel Remote Sensing Devices'. The response was quite amazing: we received 40 very interesting proposals. This large response posed a question. As we could not afford to fund them all, which ones should be selected? To answer this question, we set up the Sensor Working Group as part of CACRS (the Canadian Advisory Committee on Remote Sensing). Highly qualified scientists and engineers from government, universities and industry were asked to serve on this working group, which was chaired by Philip A. Lapp, who had recently left SPAR to set up his own engineering consulting firm. Dr. Lapp also later served as a very valuable management consultant to CCRS in its formative years and made great contributions to the early CACRS meetings. In recent years he was instrumental, from the industry side, in helping to save the RADARSAT program from extinction as periodic budget cuts impinged.

The funding of sensor development was later picked up by the DSS Unsolicited Proposal Fund, as the pattern had been originally established by the Sensor Working Group. As a result of these two programs, Canada has become a world leader in the supply of remote sensing devices for use both in aircraft and on satellites. Lidar altimeters and bathymetric devices, laser fluorosensors for oil-slick detection, pushbroom multispectral scanners, correlation spectrometers for sulphur dioxide and greenhouse gas detection, probing lidars for detection of atmospheric constituents such as ozone and aerosols, and the WINDII upper atmospheric research sensor - all got their start through these programs. Several of these Canadian developments have found their way onto NASA satellites, past, present and future.

Photo Reproduction and Data Dissemination

The job of the Data Handling Working Group in the Planning Office also included photo reproduction and distribution. This was no mean task, as the mass development and reproduction of 10-inch-wide by 100-foot rolls of black and white, colour and false colour photo images was a highly specialized and difficult process. The only group in Canada capable of doing this work was the National Air Photo Library and Reproduction Centre. The Reproduction Centre, which was run by both the Air Force and the Surveys and Mapping Branch of EMR, was located at the Rockcliffe Air Base. As this base was slated to be dismantled, the building and equipment had been allowed to run down, in spite of the increased work load being heaped on it. With the prospect of ERTS and airborne remote sensing data to be processed, their work would approximately double.

As the about-to-be-proposed new agency for remote sensing would obviously require a new building, EMR decided that the Photo Reproduction Centre should be co-located with the remote sensing headquarters. As the plans called for 12 dark rooms, all of which had to have special water supplies and drains, including a silver recovery system, this posed a major plumbing problem. Fortunately, we located a large brassiere factory for sale that had a very large open area, like a warehouse, where dozens of sewing machines had been lined up on a cement floor. This meant that the floor could be easily dug up for installing the extensive plumbing, and the partitions could be installed later. With the funding available, they were also able to buy state-of-the-art photo processing equipment that made them one of the largest and most up-to-date colour air photo processing facilities in the world.

We were very fortunate in being able to inherit that wonderful satellite receiving station from DRB and the Dept. of Communications. It not only had an excellent 84 ft. tracking dish, but it was located in a rural setting with adequate property around it to obviate any possible interference and had more than ample laboratory and maintenance facilities. It saved us at least $20 M.

Alteration of the Prince Albert Radar Laboratory (PARL) was the job of Ron Barrington of the Communications Research Centre (CRC), Dept. of Communications. He was familiar with the idiosyncrasies of the tracking dish which, among other things, was supposed to have bad bearings and was predicted to last only a year or so. That was twenty-three years ago, and the dish has been operated every day, seven days a week, recording at least three satellite passes a day ever since. It is still going strong. The man in charge of the operation since the beginning is Roy Irwin. He deserves a medal for the wonderful service he has performed over all these years in supervising this operation.

The QUICKLOOK facility made by MDA for this station made it famous the world over. The quicklook data, in microfiche form, was also used as an index to the available imagery so that users could evaluate the imagery for clarity and cloud cover before committing to an order. Later a contract was let to a private company managed by Don Fisher to reproduce and distribute data directly from the station, but it failed because of a shortage of orders and its inability, due to lack of funds, to give prompt enough service. In early experiments, QUICKLOOK data was faxed directly to ships operating in the Arctic to assist them in navigating through the ice.

[Photo: CCRS Headquarters (1970's and 1980's)]

The Strange Beginnings of RESORS

RESORS, standing for the Remote Sensing On-Line Retrieval System, is an integrated indexing and computer-based retrieval system concerned with the instrumentation, techniques and applications of remote sensing, photogrammetry, image analysis and G.I.S. It contains titles, authors, publishers and keywords for most of the world's literature published in English and French on these subjects since 1969. It also contains many unpublished, unclassified documents which CCRS and RESORS were able to get their hands on. It is unique and is subscribed to internationally. It was initially managed in house by the CCRS head librarian, Brian McGurrin. Later it was contracted to Gregory Geoscience Ltd., and later still to Horler Information Inc. of Ottawa. (Editor's note: The archival database is now available through the Earth Sciences Information Centre (ESIC), Natural Resources Canada.)

It was started by Len Pomerleau, a systems science graduate from Ottawa University, hired by the Program Planning Office in 1970. Among other more scientific duties, he was asked to make a systematic manual file of several hundred brochures and 'separates' of papers on remote sensing which had been collected by various members of the staff. In the confusion of those days, they had been thrown on the floor in a heap in a spare room that was to become the boardroom. After putting off this horrible task for several weeks, he came to me with the recommendation that the only way to retrieve the specific information required was with a computer system. In 1969, however, there were very few such systems in existence - certainly none in Ottawa - and those that did exist were experimental and costly. Pomerleau tracked down a Master's student at Carleton University by the name of Andy Smith who was interested in designing such a system to serve as his Master's thesis. A contract was given and he designed the RESORS system that, with some updates, is practically the same as the present system. There was, however, a catch: knowledgeable people had to be employed to acquire, read, keyword and enter the data from the documents into the computer. Four people were hired for this job, some of whom, twenty or more years later, are still doing this painstaking and important job. This system has enabled the remote sensing population in Canada to remain current and competitive in this fast-moving technology over the years.
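
To make the keywording-and-retrieval idea concrete, the short sketch below shows, in present-day Python, the general shape of such a system: an inverted index from keywords to documents, and a query that returns the documents matching every requested keyword. It is only an illustration under assumed data; it is not the actual RESORS design, and the sample titles and keywords are invented.

# A minimal sketch (not the actual RESORS design) of keyword indexing and
# retrieval: documents are keyworded by hand, an inverted index maps each
# keyword to the documents carrying it, and a query returns the documents
# matching all requested keywords.
from collections import defaultdict

documents = {
    1: {"title": "Lidar bathymetry trials", "keywords": {"lidar", "bathymetry", "coastal"}},
    2: {"title": "SAR sea-ice mapping", "keywords": {"sar", "sea-ice", "radar"}},
    3: {"title": "Airborne lidar for forestry", "keywords": {"lidar", "forestry"}},
}

# Build the inverted index: keyword -> set of document ids.
index = defaultdict(set)
for doc_id, record in documents.items():
    for kw in record["keywords"]:
        index[kw].add(doc_id)

def search(*keywords):
    """Return titles of documents tagged with all of the given keywords."""
    hits = set(documents)
    for kw in keywords:
        hits &= index.get(kw.lower(), set())
    return [documents[d]["title"] for d in sorted(hits)]

print(search("lidar"))               # both lidar documents
print(search("lidar", "forestry"))   # only the forestry one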

[Photo: Brian McGurrin]

Final Cabinet Approval for Establishing the Canada Centre for Remote Sensing

On October 16, 1970, the Planning Office completed its report entitled 'Organization for a National Program on Remote Sensing of the Environment'. This document was never published but is available through RESORS. It proposed the following objective and sub-objectives for the National Program of Remote Sensing.

"To produce in a timely and effective manner, remotely sensed data and derived information needed for the management of Canada's natural resources and environment, and to perform and support research and development on the collection, processing and interpretation of such data"

Sub-objectives:

1. To plan, on a continuing basis, experimental and operational remote sensing programs pertinent to the management of Canada's resources and environment.
2. To acquire relevant data from sensors located on spacecraft, aircraft, balloons and other platforms.
3. To process remotely-sensed data, and assemble them in formats appropriate for interpretation.
4. To market processed data to meet the requirements of governments, industries, universities and individuals.
5. To interpret data and to foster interpretation by governments, industry, universities and individuals.
6. To improve the scope and effectiveness of the data and derived information through research and development on sensing systems, data processing and interpretation.
7. To promote and coordinate international cooperation and information exchange in designated areas of remote sensing.
8. To foster the development of expertise in Canadian industry and technologies related to remote sensing and its applications.

Looked at with the hindsight of 24 years, these objectives still seem comprehensive. The present objectives of CCRS and the Sector are quite similar, except that more priority is now given to sub-objective no. 8. However, at the time these objectives were written there were only four companies engaged in remote sensing, as compared to approximately 162 Canadian companies today.

It was still an open question as to which department was to be the lead agency. Jim Harrison, ADM/EMR, instructed me to prepare a memorandum to Cabinet from EMR. When it was completed he told me to show it to John Chapman, ADM Communications, which I did. Chapman reached into his desk drawer and pulled out a draft document to Cabinet proposing that Communications be the lead agency. My reply was that he had better have lunch with Jim Harrison. After the lunch, Jim Harrison told me that they had decided to recommend that the two departments manage the proposed centre jointly. When the document reached Treasury Board for vetting before going to Cabinet, Sid Wagner, the Board's scientific advisor (as he told me in 1992), questioned whether two departments could be jointly responsible for the same program. The answer was "no". Treasury Board decided that EMR should be the lead agency. If, at the time, there had been a Canadian Space Agency, as had been recommended in the 'Chapman Report' in 1967, it would undoubtedly have been given the mandate.

Finally the big day came on Feb. 11, 1971 for approval by the Cabinet Committee on Science, of which Bud Drury was the chairman and Bob Uffen the secretary. For any large proposal there is usually a series of approval levels to go through. The higher the decision-making level, the greater the jeopardy for project approval. Cabinet was the highest level.

On the first presentation, it was obvious the ministers had no idea what we were talking about. 'Remote sensing' at that time was not a term in everyday use and was a difficult concept for the lay person to understand. I remember particularly Eugene Whelan, the Minister of Agriculture, who kept asking "yes, but what does it do?" Afraid that ten years of work was about to go 'down the tube', I frantically suggested that I show some of the astronauts' pictures of the earth taken during NASA's MERCURY project, which I had brought with me, along with a miniature Japanese projector and screen. Bob Uffen wagged his head, so I proceeded to show pictures for the next half hour. The ministers were so intrigued they were unaware of the passage of time until a bell rang which signified they were required in the House for a vote. They all disappeared, leaving the bureaucrats sitting there. I asked Bob Uffen whether or not the proposal was approved. He replied "no problem". The Canada Centre for Remote Sensing came into being. After eleven years of working on the project, it was one of the high points of my professional life.

The Provincial Remote Sensing Centres

The original plan for the National Remote Sensing Program was for the Provinces to undertake the operational interpretation of the remote sensing data. After all, the provinces carried the mandate for managing their own resources and environment within their own borders. The Canada Centre for Remote Sensing was to look after supplying the data from both aircraft and satellites as well as conducting research on new ways of applying the data. Agreement was reached with EMR and Treasury Board that the Federal Government would match provincial contributions 50:50 on the cost of setting up and operating provincial remote sensing centres. The Resource Ministers of all the provinces were officially informed of the federal government program on remote sensing and it was suggested that they set up interdepartmental committees on remote sensing to deal with the proposal. Joe Green, Minister of EMR, made a public announcement to this effect at the annual meeting of the Canadian Remote Sensing Association held at the Constellation Hotel in Toronto in Nov. 1971.

Jean Thie, of the Manitoba Ministry of Natural Resources, actually prepared a proposal for Manitoba and obtained approval for it from his ministry. Ontario and Alberta were in the process of preparing proposals when, unfortunately, Joe Green had a heart attack and had to resign. In the meantime federal-provincial relationships began to deteriorate and the provinces decided they did not want to be involved in any new federal/provincial cost-shared projects, as these distorted provincial priorities. The federal government therefore cancelled all newly-proposed cost-sharing arrangements with the provinces. At that time, none of the provinces decided to go ahead on their own with remote sensing centres.

[Photo: Bill Best]

A few years later, due to the efforts of Victor Zsilinszky in Ontario, Jean Thie and Bill Best in Manitoba, Cal Bricker in Alberta and Herve Audet in Quebec, remote sensing centres were set up in these provinces, but they were all underfunded and had a very difficult time getting started. I had imagined originally that the provincial centres would become involved in the operational interpretation of the data--assisting in forest inventory mapping, conducting provincial crop inventories, making detailed land use maps and monitoring the hydrological cycle. How wrong I was. The provincial centres literally had to start from scratch--promoting the use of remote sensing, conducting demonstration projects and trying to get the resource and environmental managers to use the data. With a few notable exceptions, the provinces have only recently (twenty-two years later) begun to use the data in an operational manner.

Remote sensing was a technology that was developed outside the user disciplines. What was required was discipline-oriented scientists, working within their own organizations, willing not only to change the focus of their future careers, but also to convince their superiors that they should be allowed to do this without having to change their employer. It amounted to a major mid-career change for them. We called the scientists who did this 'change agents'. Certainly, most of the initial directors of the provincial remote sensing centres were such people. Victor Zsilinszky was a senior forester with MNR in Ontario when he changed his career. Cal Bricker was a senior engineer with the Surveys and Mapping Branch of Alberta. In Manitoba, the first Chief of the Centre was Jean Thie, originally with the Canada Land Inventory; he was succeeded by Bill Best, a forester. In the federal service, several foresters - Leo Sayn-Wittgenstein and Peter Kourtz - and, amongst the topographers, Betty Fleming demonstrated how ERTS imagery could be used to update topographical maps at great savings to the government. Jim Gower of the Institute of Ocean Sciences in B.C., Roy Slaney in the Geological Survey, and Alex Mack in the Department of Agriculture stood practically alone in their own departments trying to demonstrate the value of remote sensing.

I should not forget to mention again Lee Godby, Ralph Baker, Murray Strome, Neil DeVilliers, and Neville Davis from NAE, all of whom were so inspired by the idea of remote sensing that they literally put their careers on the line, not only banking on Cabinet approving the establishment of CCRS, but trusting that they would be hired when it finally was approved. The same applied to Ernie Gardiner of the Canadian Forces. In this respect, it was no different in the private sector. Al Gregory gave up an important career in mineral exploration to become involved in remote sensing by starting a new company, "Gregory Geosciences", which became practically the only company in Canada knowledgeable in the process of updating topographical maps.

[Photo: Victor Zsilinszky]

[Photo: Jim Gower]

[Photo: Neville Davis]

A bottom-up effort to introduce a major new organization within the government is quite similar to starting a new company in the private sector or a new department in a University. A total commitment is required and if it doesn't work out, you have lost your status in the organization with which you work and have to start all over again. When a government, University or well-established private company decides to create a new organization from the top down, your position is more or less guaranteed and you don't lose status within your old organization when you are moved to the new position--in fact, you gain status by the mere fact that you were 'selected' for the new position--you are not regarded as a 'deserter'.

Tape Format and Software Development

When the ERTS system was developed, a very important part was the design of the high density tape format at the ground station and also the format of the final digital product, the computer compatible tape - one for each satellite scene - which was required by users to do digital image analysis. A NASA/CCRS working group was set up which, among other things, was to try to standardize tape formats. When Brazil and Europe, followed by Australia and Japan, also decided that they wanted to read out ERTS, the committee was expanded and became known as the 'Landsat Station Operators' Working Group' (LSOWG). By this time, NASA had decided that ERTS was no longer a technology satellite and, since a follow-on was planned, the name was changed to 'Landsat' to reflect the fact that another planned satellite, whose main application would be oceanography, was to be called 'Seasat'. In actual fact, many applications of Seasat data were for land, and a lot of Landsat data had many land applications.

CCRS, in the early 70's, was fortunate to obtain the services of Florian Guertin and Jenny Murphy, both scientists with expertise in creating computer software. Over the course of the years they have contributed greatly to the practical use of earth observation satellite data by designing standardized tape formats and software which were adopted by the LSOWG committee. They have become international experts in this field.

Applications

[Photo: Florian Guertin]

In the setting up of CCRS, not nearly enough attention had been paid to applications, because it was assumed that user departments and the provinces would almost automatically pick this up once the data became available. How wrong we were. The reason for the Schopenhauer quote at the beginning of the paper relates to the initial user scepticism and reluctance to invest the time of their engineers and scientists in the technology, even though very little money would be required. Some of the provinces were willing, but when the Federal Government was forced to pull out of sharing the cost of provincial centres, they also backed off. Oceanographers, with the exception of Jim Gower and the U.B.C. Institute of Oceanography, generally considered that remote sensing had little to contribute to the science. At the senior level in the Dept. of Agriculture, they felt that there was nothing that remote sensing could tell that the farmer and the earth-bound scientist didn't already know. They did finally contribute the services of Alex Mack, who devoted the rest of his career to agricultural applications and was a great support. At the senior level of the Surveys and Mapping group, they found it difficult to believe that the ERTS imagery could assist in any way, because of the distortion of the image and the lack of ground resolution compared to aerial photography. However Betty Fleming, a senior photogrammetrist with that group, quickly determined that the imagery could be used to update the planimetry of 1:250,000 and later 1:50,000 maps at very low cost. She worked with Al Gregory, the president of Gregory Geosciences; using their development of the PROCOM projector, this company was able to save that branch $10 M over the course of ten years (Gauthier, 1991). Betty Fleming was also the person who devised the global system for indexing ERTS 'scenes', which became internationally accepted for remote sensing satellite imagery.

Although the geological exploration people in Canada were the biggest users of ERTS data, the Geological Survey of Canada was reluctant to use the data, just as it had been reluctant in the 1950's to place credibility in photogeology, even though it used air photos for maps and navigation through the bush. Roy Slaney of the GSC/Geophysics division laboured for ten years to promote its wider use in the GSC, and in the end did succeed.

The water resources people in Environment Canada were more positive and were very quick to use ERTS's facility for relaying data from ground data platforms containing stream gauges to a central location. They now have nearly 1000 of these platforms in place throughout Canada.

The sea-ice reconnaissance group at the Atmospheric Environment Centre/Ice Central were very resistant at first to using ERTS data. They considered the data useless unless it could be used to determine the type of ice being sensed, which, they said, could only be determined by a trained ice observer. Over the years, however, they changed their minds and now possess what is probably one of the most up-to-date near-realtime operational remote sensing systems in the world. This consists of two long range aircraft with double-sided SAR radar, the use of fast-track ERS-1 radar data, and a modern central data processing and interpretation centre in Ottawa capable of transmitting the interpreted information in near-realtime directly to the user ships in the Arctic and off the East Coast. One of these aircraft is operated by Intera of Calgary on contract to AES. It does seem a pity that all near-realtime ocean information and remote sensing data, which could be used not only for ice reconnaissance but also for search and rescue, fisheries control, environmental monitoring, vessel traffic management and sovereignty control, cannot be integrated in operations centres on the East and West coasts and in the Canadian Arctic. Problems of departmental mandates and interdepartmental rivalries are difficulties akin to attempting to untie the Gordian knot.

The foresters were different. The Forest Management Institute, and later the Petawawa Forest Research Institute (after the FMI had been politically axed), were extremely supportive of the remote sensing program. In fact, without their support, CCRS might never have come into being. Several of their scientists were keenly interested in this technology and have devoted their careers towards it. The present director general of CCRS, Leo Sayn-Wittgenstein, was head of the Forest Management Institute at the time. When the FMI was closed down, he set up a successful private company devoted to the application of remote sensing to forestry - Dendron Resources of Ottawa - which is still operating. I have already mentioned the contribution of Victor Zsilinszky, a forester with the Ontario department, who set up the Ontario Centre for Remote Sensing in 1973. Peter Murtha, a forester with UBC, became involved in remote sensing early and was one of the first to set up a remote sensing section in a university. As this paper only covers the early history, I have not mentioned the many younger foresters who are still great contributors to the work.

Reception of Automatic Picture Transmission (APT) cloud data from satellites was already taking place at the Atmospheric Environment Centre in Toronto when CCRS was started. They continually kept in close touch with activities in the National Remote Sensing Program: their ADMs served on the Interagency Committee on Remote Sensing and their scientists served for many years on the Atmospheric Science Working Group of CACRS, which Graham Morrisey chaired enthusiastically. This cooperation turned out to be very important, as is explained later in connection with the recent splitting of Global Scientific Earth Observation and Operational Remote Sensing into separate disciplines.

Some of the above remarks may seem to have been critical of previous science administrations. They were not made for the purpose of denigrating organizations. The point is that the concept of remote sensing was so revolutionary that there was a natural human resistance to the concept in some quarters. It is interesting to note that one of Phil Lapp's recommendations in his report on 'Observables and Parameters in Remote Sensing' was that "CCRS maintain a social scientist on staff to assist principally with marketing activities and benefit analyses, and to provide a knowledgeable coupling with potential contributions from the soft sciences". Unfortunately, this recommendation was not implemented.

[Photo: Leo Sayn-Wittgenstein]

International Relations

Within a couple of years of its establishment, CCRS was criticized by some of the provincial representatives on the Advisory Committee for spending too much time and money on the international aspects of remote sensing. Murray Strome and I devoted a lot of effort to assisting External Affairs and the U.N. in trying to develop an international legal regime for satellite remote sensing. Lee Godby, and later Jean Claude Henein, spent a lot of effort in Africa trying to get ground stations approved there. CCRS scientists frequently attended international meetings, giving papers, promoting Canadian technology and bringing back important intelligence that was used to guide our future directions.

This was a policy I had followed in my previous existence as an exploration geophysicist with the Geological Survey: 'showing the flag' had been helpful to the Canadian geophysical companies in marketing abroad. In retrospect, this was not a waste of time--quite the opposite! In later years, Canadian companies consistently exported more remote sensing technology than the government spent on remote sensing at home. This was, of course, due to the great international marketing effort put forward by the Canadian remote sensing companies, but they all acknowledge the help originally provided by CCRS. Suffice it to say that Canada is now well known as an important international contributor in all aspects of remote sensing.

I alluded previously to CCRS involvement in the U.N. 'hassles' about the international legal aspects of remote sensing and the rights of 'sensed nations' to have some control over the data taken by foreign space powers. Murray Strome played a big role at the U.N.'s Peaceful Uses of Outer Space Committee by serving on an international drafting committee which, after many years of effort, produced a document acceptable to most nations. I shall not go into this further, as it is very complicated and should be the subject of a separate paper.

The Airborne Operation

Cabinet did not give the go-ahead on the airborne program at the same time as the satellite data reception program, because the logic of having the two programs together was, at that time, not too obvious to people who were not actually involved in remote sensing. However, the fact that the Air Force was so keen on it, and that they had offered aircraft and personnel, gave greater credence to the proposal. The Forestry Branch was also very supportive because they were big users of aerial photography and were anxious to branch out into colour and colour IR photography, as opposed to having to use the standard black and white photography specified for purposes of topographical mapping. The airborne program would undoubtedly not have been approved if the Air Force had not proposed becoming involved, because the cost would have been too high. As it was, the Department of Trade and Commerce vigorously opposed the proposal on behalf of private industry, quoting the "make or buy" policy. The only way it passed was with the proviso that the operation would be turned over to private industry within two years. The clinching argument by the Air Force was that there were no civilian operating companies that had the expertise to pilot and maintain the CF 100s.

[Photo: Jean Claude Henein]

[Photo: CF 100]

Over the next two years, attempts were made to interest the air survey companies in remote sensing so they would be in a position to take over from the Air Force. At that time, however, the industry was experiencing hard times and the established companies were unwilling to send their technical personnel to working group meetings to learn the technology. There was, however, a start-up company by the name of Intera (Editor's note: in 2003 this company is called Intermap Technologies), managed by Bob Holmes of Calgary, that owned one Cessna aircraft. They were interested in remote sensing and were marketing IR scanning surveys for experimental purposes. Bob Holmes was unfortunately killed in an accident with his own aircraft and the company was taken over by a small group of employees led by Brian Bullock, the current president of Intera. When the time came for CCRS to call for tenders on the operation of its aircraft, Intera teamed up with Innotech Aviation of Montreal and underbid a consortium of the air survey companies to win the contract.

The airborne operation proved to have a wider scope, and to be just as difficult and complicated to set up, as the satellite operation. In the long run it also proved, in many ways, to be just as important. Aircraft served several purposes. They were able to act as testbeds on which to test and validate new sensors at short notice. They were able to provide special air photographic coverage, using colour and false colour imagery, to assist in the validation of satellite imagery interpretation. And they were able to acquire high resolution remote sensing data of any kind, over any area, at short notice. Without the aircraft program, we never would have been able to get into SAR radar technology as quickly as we did. Indeed RADARSAT, the first Canadian remote sensing satellite, to be launched early in 1994, would not have been possible without CCRS having been able to 'jump on the SAR radar technology learning curve' so soon, by means of the "SAR 580" program which I shall discuss later.

From the beginning, in the planning office, we had insisted that it would be folly to have a satellite data reception facility without a parallel airborne program. The difficulty was in getting approval. There was a mandate problem with both the National Aeronautical Establishment and the Department of Transport, the only civilian federal government agencies mandated to operate aircraft. The NAE would be prepared to do so, but wanted full scientific control of the missions. The DOT would also do so, but their prices were too high and they were unprepared to make holes in their aircraft and jerry-rig sensor installations for testing. Help came from the Air Force. They had recently disbanded their photo reconnaissance squadron and had surplus aircraft and personnel. Through the efforts of Major E. Gardiner, they offered to transfer two CF-100s for high altitude remote sensing and two DC-3s for low altitude operation and testing of sensors, as well as the use of hangarage for these aircraft. In addition, they offered to provide a commanding officer, four pilots, navigators, sensor operators and maintenance mechanics, numbering about 35 in all. They were to remain on strength as Air Force personnel, were called the "Airborne Sensing Unit", and DND would be reimbursed from the Remote Sensing Program budget. Sensor engineering and installation designs were to be the responsibility of the future Remote Sensing Centre.

[Photo: DC-3]

Approval for this plan was given by Cabinet on the understanding that the operation of these aircraft would be handled by a contract to a civilian operator within two years. This did in fact happen, and the civilian contract was managed by a partnership arrangement between Innotech Aviation of Montreal and Intera of Calgary. The joint operating company was called 'Intertech'. A hangar was rented from Spartan Air Services and the airborne remote sensing division moved in with the aircraft. Major Gardiner, along with several of his pilots and operators, was hired by Innotech on retiring from the Canadian Forces to form its remote sensing division based in Ottawa.

The Airborne Remote Sensing Division of CCRS, under the direction of Ralph Baker, modified the two DC-3s by virtually removing the floor of each aircraft and replacing it with reinforced beams, leaving individual 'holes' for up to six downward-looking remote sensing devices. When equipped with tracking cameras and special navigation equipment, they made excellent, cost-effective sensor testing beds. The two CF-100s, although they had a 30,000 ft. operating capability, required flight personnel to wear oxygen masks and, besides, their range was very limited. They were soon to be retired.

By chance, a Falcon Fanjet owned by an American company based in Houston, and equipped with several state-of-the-art remote sensing devices, turned up at Uplands Airport. Obtaining this aircraft for the Remote Sensing Program seemed like a pipe dream until an unprecedented incident took place. Our Treasury Board analyst phoned me and said "Do you want that aircraft?", to which I replied "sure, but I could never raise the money because of all the supplementary budgets we are getting." He said he knew a way to get it and to leave it to him. Within days the aircraft was purchased, delivered and paid for! I never really did know how he accomplished this. I think the money came from the government's contingency fund, but it did get the aircraft program off to a flying start!

Multiple users from government, universities and industry were allowed to use these aircraft for experimental flights if they were able to justify the scientific merit of their proposed experiment. Scheduling was managed by retired Major Ernie Maclaren, a top airborne surveillance officer from the No. 1 Air Division Headquarters in Lahr, Germany.

[Photo: Falcon Fanjet]

Supply could not meet the demand and this program not only accelerated the innovation of many new sensors, but also allowed users from all disciplines to acquire detailed airborne data over their test sites--an absolute necessity in validating experimental remote sensing interpretations from satellite data.

The SAR 580 Remote Sensing Aircraft

The Falcon Fanjet had a reasonable range, but its payload was limited and its fuel consumption was high. While the DC-3s had a reasonable payload, their range was not long enough for operations in the Arctic or overseas. The airborne section settled on a used Convair 580, which had a large payload and long range and was more economical to operate and maintain, as it was not a jet aircraft. The aircraft was acquired at considerable sacrifice to the other divisions, which had to postpone some of their acquisitions to enable the Centre to pay for it.

As it turned out, there was an even more important reason to acquire a bigger aircraft. Ralph Baker and Leon Bronstein, in talks in 1974 with the Environmental Research Institute of Michigan (ERIM), had become interested in the state-of-the-art airborne synthetic aperture radar (SAR) system which ERIM had developed. There was only one other civilian system in the world, operated by Aero Service of Philadelphia. ERIM did not have a suitable aircraft in which to operate its system, and there was not much call for it. A deal was made for CCRS to lease the ERIM SAR for installation in the Convair, on the understanding that ERIM would be able to lease the SAR-equipped aircraft for any contracts it might get. ERIM provided technical back-up and training for CCRS engineers and technicians. As a serendipitous spin-off, Keith Raney, a radar scientist with ERIM, decided to transfer to CCRS. His transfer resulted in CCRS and MDA being able to develop a digital SAR image processor. Previously, all SAR processing at ERIM and at Aero Service had been done using optical laser methods, which gave inferior resolution. Thus CCRS and its major contractor, MDA, were able to acquire SAR technology at an early date, which made it possible for them to be technically involved in the first satellite SAR project, NASA's SEASAT (1978). It was their involvement in SEASAT SAR processing that later provided the know-how and confidence for Canada to proceed with its own remote sensing satellite, RADARSAT.

[Photo: Convair]

[Photo: Keith Raney]

Eventually CCRS bought the ERIM SAR outright and contracted test surveys with the European and Japanese Space Agencies. A major operation named SURSAT was planned to take place after the launch of SEASAT in which the SAR 580 was to underfly SEASAT over various test areas in Canada in order to confirm and validate the satellite imagery. Unfortunately SEASAT failed after three months, but it was decided to acquire the airborne imagery on its own anyway in order to give the users data to interpret. Over the years, Canadian and other users have been able to acquire expertise in interpreting and using SAR 580 data - a major requirement in preparation for RADARSAT which is to be launched in early 1995.

Early Developments in Commercial Remote Sensing

In 1969, when the Planning Office was set up, there were only four Canadian companies interested in remote sensing: MDA in Vancouver and CDC of Ottawa, involved in satellite ground readout stations; Intera of Calgary, in airborne remote sensing; and RCA Montreal, in sensor development. Under the 'Make-or-Buy' policy CCRS contracted out as much work as possible. Later, other companies came on stream in remote sensing, but the original ones got a head start.

However it was the policy of all these companies not to become too dependent on government contracts. They actively looked, on their own, to private industry and foreign markets. MDA captured a major share of the world market in ground readout stations, not only for Landsat but also for many other remote sensing satellites. From there they went on to sensor development and eventually into the military market. Intera got its major boost later when it obtained funding for its STAR-1 airborne radar, manufactured by ERIM and MDA, which, because of its superior imagery, captured the world market for commercial SAR surveys for geological and ice reconnaissance purposes. Intera is still enjoying this world monopoly. CCRS can take some credit for this, as it assisted Intera by transferring technology and by loaning it a highly-qualified radar scientist, Ray Lowry. Ray later joined their permanent staff and contributed greatly to their success with the STAR-1.

The CCRS Applications Division

[Photo: Leon Bronstein]

[Photo: Ray Lowry]

In the original plan, applications did not have a high priority because, as mentioned, it was planned that this work would be taken on by the Provincial Remote Sensing Centres, supported financially by the Federal Government. Because of the general breakdown of federal/provincial relations, this plan did not happen, and there was a major delay of 5 to 10 years before the provinces managed to fund their own remote sensing offices. It was therefore recommended by nearly all the applications working groups that CCRS reinforce the more practical side of applications development work in-house, which up to that time had been staffed by only three scientists: Bob Ryerson, Tom Alföldi and Jean Thie, who left the Manitoba Centre to head up the CCRS Applications Development Section. David Goodenough, with three mathematically-trained staff, had been working on the theoretical methodology side of digital image processing. Both these sections were managed by Joe Mac Dowell, who left about this time to become Science Counsellor for the Canadian Embassy in Washington.

Murray Strome turned over his Systems group to Ed Shaw and became the new head of an enlarged Applications Division. Three well-qualified scientists, Ron Brown, Frank Ahern and, later, Joseph Cihlar, were recruited to bolster the Interpretation Section. To bolster the provincial efforts, the Applications Division set up a technology transfer section which took on demonstration projects with the provinces. It became a stop-gap method of assisting the provinces, which did not have adequate interpretation centres at the beginning. I estimate that the failure to properly set up full provincial remote sensing centres, supported 50:50 by the federal and provincial governments, set back the development of operational interpretation in Canada by about 10 years.

An important measure of the success of the Landsat Program was the sales of the data: presumably, if users found the data useful, they would be willing to pay a good price for it. Even though the reproduction and selling of data had been transferred to a private company to be operated on a strictly commercial basis, it was decided to hire an experienced marketing person. In 1979, Paul Hession, a former computer marketer, was employed as a CCRS staff member to boost data sales. At that time it was most unusual for a scientific government agency to actually have a marketer on staff. This was an important step which changed the mindset of CCRS and did, in fact, improve sales. Later he was one of the first to promote the use of PCs for digital image analysis, which was beneficial to both DIPIX and OVAAC-8, who were marketing software systems for this purpose.

Remote Sensing in the Colleges and Universities

Some universities became involved in remote sensing from the beginning through the involvement of some of their professors on the various CACRS Working Groups. However, some of them had difficulty, as did the governments, because of the interdisciplinary nature of remote sensing. In the beginning, they could not afford the large budgets needed to fund the expensive computers and digital image analysis systems. Physics and electrical engineering departments were interested in sensor development. Applied mathematics, computer science and electrical engineering departments were interested in digital image analysis. Civil engineers were interested in the surveying and mapping aspect of remote sensing. Foresters and agricultural scientists were interested in applications to their sciences. Physical geographers were perhaps the best equipped, because of the interdisciplinary nature of their science. It is extremely difficult in the universities to organize along interdisciplinary lines.

[Photo: Paul Hession]

In 1973, the University of Waterloo under Dieter Steiner set up an informal working group in the geography department which had representation from civil engineering, earth sciences, computer sciences and other university departments. A working liaison was established with three other universities: Guelph University, in the fields of soil science (Richard Protz) and photogrammetry (Stan Collins); McMaster (Phil Howarth); and the University of Toronto Forestry Department (Jerry Vlcek). This arrangement ended after a few years.

Later, Phil Howarth joined the University of Waterloo and, with Ellsworth LeDrew, expanded the group. York University's Centre for Research in Experimental Space Science (CRESS) joined with the Waterloo group, the University of Toronto Institute of Aerospace Studies, Waterloo's Faculty of Environmental Studies and Western's Department of Physics to form the university/industry consortium named the Institute for Space and Terrestrial Science in 1986, of which I was the founding Executive Director. Along with fourteen member companies, it became one of the Centres of Excellence supported by the Ontario Government. The remote sensing and Global Change groups in this centre number about 50 permanent staff plus graduate students.

A remote sensing group under Prof. Bonn of the Geography Department of the University of Sherbrooke got an early start. This centre, under the name of CARTEL, has grown to a permanent staff of about 20 people and has spun off a number of private companies.

The College of Geographic Sciences in Lawrencetown, Nova Scotia, under John Wightman and Ernie McLaren, started in 1977 to offer a multidisciplinary training course in remote sensing to university and college graduates in geology, civil engineering, physics, mathematics, geography, forestry and agriculture. This worked out very well, as remote sensing technology and digital image analysis could be added onto the basic training of these graduates from various disciplines. Many foreign students were also trained here. The College has managed to get accreditation in many universities for its courses.

[Photo: Richard Protz]

[Photo: Ellsworth LeDrew]

[Photo: John Wightman]

Individual professors in Memorial, Dalhousie, U.N.B., Laval, Univ. of Montreal, Ottawa U., Ryerson, Windsor, Univ. of Manitoba, Saskatchewan, Calgary, Edmonton, U.B.C. and Victoria all teach remote sensing from the point-of-view of an individual discipline.

Cost/Benefit Analysis, Systems Studies and Technology Assessments

In its formative years, CCRS was fortunate in being able to obtain the services of Phillip A. Lapp Ltd. and the late Donald J. Clough of the Systems Engineering Department, University of Waterloo. Working as a team, they provided invaluable advice on CCRS management and organization. They also served on working groups and attended the annual meetings of the Canadian Advisory Committee on Remote Sensing, giving papers and providing advice to the working groups.

Phil Lapp authored a very important document, published by CCRS in 1971, entitled 'Observables and Parameters of Remote Sensing', which is available through RESORS. His name appears nowhere in the document. This report discusses the problems in every-day decision making faced by environmental and resource managers. It then examines which of these decisions could be helped by information derived from remote sensing. However, it goes one step further by translating the environmental or resource parameter to be measured into 'remote sensing observables'. This holistic concept was of great help to all the user working groups by leading them to look at the whole problem of environmental and resource management and then investigating which of these problems could be assisted by remote sensing, rather than vaguely considering "how can remote sensing help 'water resources'", for example. This philosophy guided the whole process of innovating remote sensing over the years.

In 1975, the Government was faced with the decision on whether or not to fund the very expensive purchase of a fleet of new long range patrol aircraft for the Canadian Armed Forces. I was asked the question "could not some of this surveillance be provided by satellites?" I replied that ERTS would be of little or no use: firstly, because the satellite only imaged the same area on the ground once every 16 days, and secondly, because the chances of there being cloud cover at the time of the pass, particularly off the Atlantic Coast, were about 90 percent. However, I mentioned that NASA was planning to launch an experimental radar imaging satellite named 'Seasat' which could 'see through clouds' and could image ship and possibly submarine wakes. It was to have a repeat cycle of only two days in northern latitudes. While this could in no way match the surveillance capabilities of a fleet of radar and submarine detection equipment, it could provide some useful reconnaissance information which might complement that of the Canadian Forces Long Range Patrol Aircraft (LRPA). This led to the writing of a very comprehensive report by Don Clough and Archie McQuillan of CCRS entitled 'Surveillance Satellites and Complementary Airborne and Seaborne Surveillance Systems for Canada'. This report was commissioned by the Interdepartmental Task Force on Surveillance Satellites, the Interagency Committee on Remote Sensing, the Interdepartmental Committee on Space and the Oceans Panel (MOSST). It was completed on Sept. 30, 1976 and contained 285 pages. The purpose of this report was to provide background against which the Government could decide whether or not to invest in reading out the Seasat radar. A second report, authored by Phil Lapp for MOSST and entitled "Satellites and Sovereignty", served the same purpose. This report covers the visit to Halifax by a MOSST committee, chaired by Lapp and of which I was a member, to investigate the existing ocean services provided by several government departments, including weather services, search and rescue, naval operations, DOT vessel traffic management, ice reconnaissance and the fisheries surveillance system using the Tracker aircraft operated by the Canadian Armed Forces. Both these reports provided more than ample reasons why Canada should become involved in Seasat. As a result, there was a positive decision on Seasat, which also strengthened our reasons for going ahead on the airborne SAR project and the procurement of a digital SAR processor from MDA.

[Photo: Archie McQuillan]

These reports prompted Don Clough to obtain funding from NATO to conduct an international symposium on 'Earth Observation and Environmental Control', to which 26 of the world's leading experts in this field were invited in Nov. 1976. Plenum Press published the proceedings, edited by Clough and Morley, in 1977. Philip Lapp in the same year promoted another NATO symposium, dealing with Arctic systems, also published by Plenum Press in 1977. This report was more general in scope discipline-wise, but dealt solely with Arctic matters, which at that time were of prime interest to both National Defence and the petroleum industry. As recently as 1990, I had a request from Dr. Solandt to send him a copy of a paper published by Morley and Clough, on a proposal for a multidisciplinary Arctic operations centre, given at that symposium. One of the obvious features of both the Arctic and East Coast near-realtime surveillance operations is that surveillance data acquired by the various responsible agencies is not shared in a timely manner. Obviously this calls for joint operations centres, but so far they have not happened.

Archie McQuillan, the in-house CCRS expert on systems design, cost/benefit analysis and operations research, prepared several reports that were extremely valuable to CCRS management from a strategic planning point-of-view. They are:

Dec. '74: Benefits of Remote Sensing in Canadian Northern Resource Development (all aspects)

Oct. '75: The Value of Remote Sensing in Canadian Frontier Petroleum Operations

'78: Applications and Potential Benefits of Landsat-D

The latter report was done in justification of CCRS investing in considerable hardware and software upgrades in anticipation of NASA orbiting a new and improved version of Landsat.

In retrospect, I believe that these reports should have received wider distribution, as they were useful for more than just program justification - there are the makings of a very useful textbook on remote sensing applications in these reports.

SEASAT and SURSAT (1978)

Having served in WW II as a radar specialist and having seen some of Harky Cameron's 'secret' SLAR data from the RAF in the early '60s, I was naturally interested, even in the planning office stages, in the possibility of radar imaging from space. I consulted Chapman's spacecraft technology experts and was told that SLAR would not be feasible from space, because it would require too much power and too big an antenna to achieve any decent ground resolution, so I gave the idea up until 1972, when I attended an international conference on space (in, of all places, Azerbaijan, U.S.S.R.) sponsored by the International Astronautical Federation. To my amazement, a NASA scientist gave a paper on a new satellite they were planning (I had thought that such a paper would have been classified by the U.S. military). It was to be called SEASAT and was to contain three sensors. The first was a synthetic aperture radar with a ground resolution of 20 metres (he didn't explain the concept of 'synthetic aperture', which did not require nearly as much power as a real aperture radar and mysteriously had a ground resolution that was independent of range!). Another sensor was a radar scatterometer, which operated at a frequency selectively responsive to the wavelength of wavelets on the surface of seawater, allowing the measurement of surface winds in strength and direction. The third was a radar altimeter, capable of determining the distance between the satellite and the sea surface to within an accuracy of five centimetres; it was to be used for measuring sea state and geodetic bulges and indentations in the Earth's crust.
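
The range independence that seemed so mysterious has a standard textbook explanation (not given in the NASA paper): a target at slant range $R$ stays within the real antenna beam over an along-track distance of roughly $L_{syn} \approx \lambda R / D$, where $\lambda$ is the radar wavelength and $D$ the physical antenna length, and processing the echoes over that whole synthetic aperture yields an azimuth resolution of approximately

\[
\delta_{az} \;\approx\; \frac{\lambda R}{2\,L_{syn}} \;\approx\; \frac{D}{2},
\]

so the range dependence cancels and the achievable azimuth resolution depends, to first order, only on the antenna length.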

Upon returning home, I immediately asked NASA if we could also read out SEASAT at Prince Albert. They said that Canada could not expect to read out every remote sensing satellite that NASA put up unless we contributed technically to the program, and asked if we could contribute the radar antenna. I replied yes, I thought so, but would have to check. I enquired of a NASA scientist at the working level and he said NASA had already let the contract to make the antenna. Again, it seems, they were not too anxious for a foreign country to be reading out one of their experimental satellites which, of course, is understandable. On the other hand, we wanted to ensure that we had direct access to data of Canada. I delivered a paper in Strasbourg, sponsored by the European Space Agency, on the importance of satellite radar and some of its applications. There didn't seem to be too much interest in the paper except from the U.K. However, soon after, ESA requested NASA to allow them to read out SEASAT, and eight years later, in 1991, ESA launched its own experimental radar satellite, ERS-1, using Canadian technology to digitally process its data. After ESA requested readout privileges for SEASAT, NASA relented and allowed both Canada and ESA to read out.

Preparations began in 1974 for Canada to read out SEASAT. Both the Shoe Cove, Nfld. and Prince Albert satellite stations had to be modified to read out X-band, the downlink frequency. Canada also had to decide on what kind of SAR data processor to use. JPL (Jet Propulsion Lab), the NASA SEASAT manager, had decided on an optical laser processor using the ERIM technology. Keith Raney and Ed Shaw felt that Canada could develop a digital SAR processor, and a contract was let to MDA to develop it. MDA was successful in producing the world's first digital SAR processor for both aircraft and satellite use. This processor was eventually used by JPL for SEASAT SAR and by Intera for their STAR-1 airborne radar.

On the user side, it was decided to run a SAR validation program that we called SURSAT (surveillance satellite). Thirty-five test areas were selected over various types of terrain and ocean, and the plan was to underfly SEASAT with our airborne radar at the same time as the satellite was passing overhead. Plans were made to take ground truth data at the same time.

SEASAT was launched in August 1978 but, due to a massive short-circuit in one of the slip ring assemblies used to connect the rotating solar arrays into the electrical system, the satellite failed on Oct. 10th, '78, after satisfactory operation in orbit for 105 days. There were about 126 orbits in which SAR data was partially recorded over Canada, but unfortunately few of these were over our prescribed test areas. Fortunately for Canada, we still had our airborne SURSAT program, which we decided to go ahead with without the satellite. A whole new program was designed under contract to Intera.

RADARSAT

In 1979, when it became evident that NASA was not going to launch SEASAT-2 (they said they did not know enough about how to interpret the data and wanted to do more research before launching the next radar satellite), the Working Group on Satellites and Ground Station Engineering concluded that Canada now had enough knowledge about SAR radar to be able to make its own radar satellite, and recommended that we should do so. This recommendation was taken up to the Interagency Committee on Remote Sensing which, to my surprise, approved it and sent it on to Cabinet. That was 16 years ago!

A special RADARSAT Project Office was set up in 1980. Ed Shaw took over as manager, Keith Raney as chief scientist and Bob Warren, from DOC, as head of spacecraft engineering. The project office was supported by secondees from DOC, Environment Canada and several other agencies. SPAR Aerospace was selected as prime contractor, supported by MDA and COM DEV, who are responsible for the SAR sensor. Several other companies are involved as sub-contractors.

These scientists and engineers have devoted a major part of their careers to this project. Most of the set-backs were not technical but were political and financial. The U.K. had promised to supply the spacecraft bus and later withdrew. After arrangements were made for another bus, the Government decided the whole project was too expensive. Phil Lapp led a group from the contractors who made a proposal for a 'stripped-down' version that finally got approval. Present plans are for a launch by early 1995. (Editor's note: RADARSAT was successfully launched in November 1995.) This satellite is now at the top of the priority list for the Canadian Space Agency. The Canadian Space Agency is responsible for the space segment and CCRS for the ground segment.

RADARSAT International, a consortium of SPAR Aerospace, MDA and COM DEV, won the bid to handle the marketing and sales of the RADARSAT data. Later they also took over marketing for all remote sensing satellite data in Canada and have made agreements with the two other international distributors of remote sensing data, SPOTIMAGE of France and EOSAT of the U.S., to market their data in Canada.

To promote the use of the RADARSAT data, CCRS has run a program entitled 'The Radar Data Development Program' (RDDP). Begun in 1987, it consists of letting competitive contracts totalling $ 5 M per year for up to 15 years. These projects have been mostly in the applications area. RADARSAT International is also running a vigorous international promotional campaign to market the data. The commercial success of RADARSAT will depend upon these two programs.

Remote sensing then and now: The Development of Digital Image Analysis in Canada

When Canada proposed reading out NASA's ERTS satellite in 1968, very little was known about digital image analysis. Techniques for 'stretching' the data to obtain enhancements within limited spectral ranges were known - for example, stretching the green-blue part of the spectrum to enhance water features (leaving the land part of the image white and featureless). It was also known how to geometrically correct an image, first by using information on the attitude of the spacecraft at the time the image was acquired (system correction), and then by using photo 'chips' of easily identified points on the ground (ground control points), whose geodetic positions were accurately known, to warp the raw image by digital manipulation of the pixel positions to fit these points (precision correction). Crude methods were also known for making atmospheric corrections. All these corrections were supposed to be done automatically before the hardcopy was made for distribution. It was assumed that all data interpretation would be done from colour prints or transparencies by 'eyeball' using standard airphoto interpretation techniques.
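As an illustration of the 'stretching' enhancement described above, the following sketch applies a simple linear stretch to one band of an image held as a NumPy array. It is only a minimal sketch: the percentile cut-offs and the synthetic band are assumptions made for the example, not part of the original account.

import numpy as np

def linear_stretch(band, low_pct=2, high_pct=98):
    # Rescale the chosen percentile range of a single band to fill 0-255.
    lo, hi = np.percentile(band, [low_pct, high_pct])
    stretched = (band.astype(float) - lo) / (hi - lo)
    return np.clip(stretched * 255, 0, 255).astype(np.uint8)

# A flat, low-contrast band (random values stand in for real imagery).
band = np.random.randint(40, 70, size=(100, 100))
enhanced = linear_stretch(band)
print(band.min(), band.max(), "->", enhanced.min(), enhanced.max())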

However, more complex methods of digital image enhancement and classification procedures began to appear in the literature just before the launch of ERTS. CCRS acquired a multispectral analysis display (MAD) which was capable of controlling the relative intensities of the red, green and blue 'guns' of the colour display monitor. However, carrying out one of these classification procedures with this equipment, such as the 'maximum likelihood classifier', would require about 15 hours of computer time. David Goodenough of the CCRS Applications Division recommended the purchase of an image analysis machine developed by Richard Economy of G.E. in the U.S., a hard-wired computer system designed especially for image analysis called the IMAGE 100. It was more than 50 times as fast as the MAD equipment. CCRS was the first organization to acquire this equipment.

Shortly after it had been delivered, Goodenough and his staff decided that it needed to be modified and let a contract to CDC of Ottawa to do so. The conversion team led by Paul Pearl of CDC became knowledgeable in image processing machines during the course of this contract, and when CDC decided to close down their remote sensing program, the group left and set up their own company called DIPIX. Like MDA, DIPIX got its initial boost from a DSS Unsolicited Proposal directed to Leo Sayn Wittgenstein and Fred Peet of the Forest Management Institute. DIPIX produced an image analysis computer called ARIES-I which cost approximately $ 50,000 and which had more capability than G.E.'s IMAGE 100, which had cost CCRS $ 1,000,000. Led by a very aggressive marketing team, DIPIX, over the course of the next ten years, proceeded to corner the world market with their ARIES system.

Dick Economy did not take this lying down. In 1976, he left G.E. and set up a company with Willoughby in Toronto by the name of OVAAC-8. They proceeded to develop image analysis software that would work on a standard VAX computer manufactured by Digital Equipment. When the IBM PC came out they converted the software to run on a PC. As the software had virtually the same capabilities as the hard-wired ARIES equipment, it eventually spelled the demise of DIPIX. OVAAC-8 was sold to PCI in 1985, the presidency of which was taken over by Murray Strome from CCRS. In 1990 PCI under the new presidency of Dr. Bob Moses took over DIPIX. PCI is now the number two world supplier of remote sensing image analysis software, led only by ERDAS of Atlanta. For a time MDA had entered the field, but did not find it as profitable as their other business, so gave it up.

(Editor's note: Since the time of this publication by Dr. Morley, the ownership, position and partnerships of the companies mentioned, may have changed appreciably.)

Paul Pearl

GIS and Decision-making

GIS as a decision-making tool: Decision Strategies in GIS

By: Dr. Ronald Eastman (Dec 13, 2000)

Ron Eastman is Director and lead software engineer of Clark Labs. This is his inaugural column at Directions.

We commonly use GIS to assist in the process of decision making. However, few of us realize how our software systems may be giving our decisions an unexpected character. For example, consider the simple Boolean intersection operator - a mainstay of multi-criteria decision making. Perhaps we are seeking land development opportunities and have established desirable criteria as being near to main roads, on low slopes, and unforested. Assuming these have been developed as Boolean layers (simple true/false binary layers), the logical AND of the intersection operator would then produce a layer that showed all areas that met these three conditions - a straightforward, but very risk averse solution.

Risk averse? Yes. The intersection operator is a very hard decision strategy. If a location fails any one condition, it is immediately removed from consideration, no matter how stellar its other attributes might be. While this may be appropriate in some instances, it may also be more limiting than we might wish. Further, the result may be very different from one achieved through a different approach. For example, Weighted Linear Combination is another common strategy for evaluating multi-criteria decision problems. In this case, we rescale attributes to a common evaluation scale (e.g., a scale from 0 to 1, or perhaps 0 to 100), and then average the scores (often after applying an importance weight). This is considerably less risk averse. For example, having a very low slope might compensate for a location somewhat far from a road.
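The contrast between the two strategies can be made concrete with a small sketch. The criteria, scores and weights below are invented purely for illustration; the point is simply that the Boolean AND keeps only cells that pass every test, while the weighted average lets a strong factor compensate for a weak one.

import numpy as np

# Boolean criteria for a tiny 3 x 3 study area (values are hypothetical).
near_road  = np.array([[1, 1, 0], [1, 0, 0], [1, 1, 1]], dtype=bool)
low_slope  = np.array([[1, 0, 1], [1, 1, 0], [0, 1, 1]], dtype=bool)
unforested = np.array([[1, 1, 1], [0, 1, 0], [1, 1, 1]], dtype=bool)

# Boolean intersection (logical AND): a cell qualifies only if it meets
# every condition - the risk-averse, no-tradeoff strategy.
boolean_suitability = near_road & low_slope & unforested

# Weighted Linear Combination: factors rescaled to 0..1 and averaged with
# importance weights, so good scores can compensate for poor ones.
road_score  = np.array([[0.9, 0.8, 0.2], [0.7, 0.3, 0.1], [0.9, 0.6, 0.8]])
slope_score = np.array([[0.8, 0.4, 0.9], [0.6, 0.7, 0.2], [0.3, 0.9, 0.7]])
cover_score = np.array([[1.0, 0.9, 0.8], [0.2, 0.8, 0.3], [0.9, 0.7, 1.0]])
w = [0.5, 0.3, 0.2]   # importance weights summing to 1
wlc_suitability = w[0] * road_score + w[1] * slope_score + w[2] * cover_score

print(boolean_suitability)
print(wlc_suitability.round(2))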

These two decision strategies dominate our use of GIS. However, recent developments in Decision Science suggest that a much wider range of strategies can be deployed. Perhaps the most flexible is a procedure known as the Ordered Weighted Average (OWA), recently introduced to GIS. This is a procedure that is somewhat related to Weighted Linear Combination, but which is capable of producing a virtually infinite variety of strategies as illustrated below.

The OWA procedure results in decision strategies that vary along two dimensions: risk and tradeoff. At one extreme, we have a solution which assumes the least risk possible and consequently allows no tradeoff (the lower left corner of this triangle). This corresponds most closely with the Boolean intersection operator and is, in fact, the same as the most commonly used fuzzy set intersection operator (the minimum operator). This result is illustrated by the upper-leftmost solution in Figure 2 (an evaluation of suitability for development based on proximity to roads and the town center, slope and distance from a protected nature reserve - green and yellow areas are best; red and blue are worst).


This is the most conservative solution (corresponding to the AND logical operation), since it characterizes locations by their worst qualities. To be suitable, all qualities must be good - no tradeoff of qualities is allowed. Thus this strategy represents a very hard AND operation.

At the other extreme is the solution at the lower-right of the triangle (illustrated by the right-most solution in Figure 2). This corresponds to the logical OR operation and is the most optimistic solution. In this case, locations are characterized by their best qualities, clearly with a necessary assumption of risk by the analyst (i.e., the risk that the poorer qualities that are ignored will adversely affect its actual performance as a solution). Note that this solution exactly corresponds with the Fuzzy Set union (maximum) operator.

The remaining corner of the triangle in Figure 1 (the apex) represents the standard Weighted Linear Combination solution. Here we have a case of full tradeoff, and consequently intermediate risk. Here poorer qualities are not ignored, but they can be compensated for. This is illustrated in the middle of the cascade of solutions in Figure 2.

So far, these illustrations are not unfamiliar. However, a glance at Figure 2 shows that many other solutions are possible. In fact, the cascade of solutions illustrated shows the effects of systematically varying the degree of risk and tradeoff in the solution. The progression from the left-most to the right-most solution in Figure 2 corresponds with a trajectory from the lower-left corner of the triangle in Figure 1, to the top of the triangle, and then back down to the lower-right. Thus we can see that it is possible to produce solutions that are strongly conservative (risk averse) but which allow some flexibility in trading off small imperfections against strong qualities in other factors (such as the second solution from the top in the cascade). Indeed, the OWA operator can produce any possibility within this triangle. For example, the solution in the lower-left corner of Figure 2 illustrates a case of intermediate risk (like the standard Weighted Linear Combination), but with no tradeoff. This characterizes features by their middle-most quality. It is similar to some scoring procedures in gymnastics competitions where the best and worst scores are thrown away.
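A minimal sketch of the OWA calculation for a single location follows. The factor scores are assumed to be already rescaled to a 0..1 suitability scale, and importance weights are omitted to keep the example short; the particular order-weight vectors are chosen only to reproduce the minimum (AND), maximum (OR), full-tradeoff average, and median-like strategies discussed above.

import numpy as np

def owa(factor_scores, order_weights):
    # Sort the factor scores from worst to best, then apply the order
    # weights to the ranked values and sum.
    ranked = np.sort(np.asarray(factor_scores, dtype=float))
    return float(np.dot(ranked, order_weights))

scores = [0.2, 0.6, 0.9, 0.7]   # four rescaled factors for one cell

print(owa(scores, [1, 0, 0, 0]))              # all weight on the worst score: AND / minimum
print(owa(scores, [0, 0, 0, 1]))              # all weight on the best score: OR / maximum
print(owa(scores, [0.25, 0.25, 0.25, 0.25]))  # equal weights: full tradeoff, like WLC
print(owa(scores, [0, 0.5, 0.5, 0]))          # weight on the middle scores: intermediate risk, no extremes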

In many respects, this triangular strategy space is similar to the concept of an investment portfolio. A portfolio of blue chip stocks would be found in the lower-left corner: a strategy that produces small but safe returns. The lower-right corner would represent a portfolio of high tech stocks - potentially high performers, but with considerable risk. Finally, a portfolio at the apex of the triangle would be like a mutual fund - a mixed portfolio intended to provide higher returns with some absorption of risk.

This is a new feature in GIS, and is thus not found in many systems. However, it offers a clear advance in our ability to make effective decisions in the allocation of precious resources, and (as recent interest in the professional literature would suggest) it seems virtually certain that it will soon be found in other systems.

Technology and change management: Management of technological change

Prof Prabhakar Misra, Director, GIS [email protected]

Change is inevitable, whether it concerns individuals, organisations or societies. What we are concerned with is whether we can manage this change. This paper talks about management of the transfer of technology, which involves the active participation of user organisations and the technologists who make things happen (that are in the best interest of both the users and the technologists).

Management Model for Technology Transfer

We will concentrate here on technological change and its effects. The management model presented here is heuristic and attempts to deal with technological change in a diagnostic manner. The proposed model is shown in Figure 1. In this model, five subsystems are enumerated:

defining needs/problems and their priorities
technology: the solution package
people: their culture, technical profile, level of competence and motivation
organisation: the structure, charter of duties, traditions, professional domain/competition/cooperation with other organisations
government policies affecting technology, eg, restrictive policies, national priorities, major political events.

Identifying Technological needs of the user organisations

It is essential to complete this phase of activity before we specify or select the technology to be introduced, and we need to concentrate on real needs, not demands. It is the common experience of service organisations, such as the Forest Survey of India and Survey of India, etc, that user organisations formulate rather unrealistic objectives. In the Survey, a good example is the demand for very large-scale maps. What the user really needs is only a large working space and not a large-scale map. It should be realized that the effort to produce a large-scale map is many times greater than that of producing a simple enlargement. Another example is the demand for too frequent monitoring of physical phenomena. Forest monitoring every two years falls into this category. Requests for too many features or colours on the map are similar examples. Many more examples can be cited which are contrary to the surveying principle that “needless refinement need not be resorted to.”

Since the resources in government departments for surveys in India are limited and the private sector remains in its infancy, the “demand” must be properly “massaged”, so to say, to arrive at the real needs. This is done by establishing a meaningful dialogue, preferably with a group of three or four persons at different hierarchical levels. Only thus can we identify the potential needs of organisations in a methodical manner.

Specifications of Products/Services

Products and services should primarily suit the “user” and not the “customer”. There is a lot of confusion about these two terms. The customer is that person who pays for the products and may not (generally does not) use the product himself. The “user”, on the other hand, really makes use of the products.

For example, the Wasteland Board of India ordered satellite images to be used for determining the wasteland around villages. The ultimate user was supposed to be the village official, but turned out to be other organisations. The objective of extending satellite technology to the “user” was thus only partially achieved.

Fig. 1: Management model for technology transfer

Further Refinement of user needs

This work is the equivalent of the market research done by business organisations, which even spend substantial amounts of money by employing outside agencies. De Man (2) offered the following suggestions in this regard:

In order to identify users, an inventory should be made of existing flows and utilisation of data and information.

pilot surveys should be conducted in collaboration with the users to facilitate identifying the needs of the users.

the type of utilisation of products/services has a bearing on user needs; is the product needed for research, inventory, monitoring or evaluation.

the required degree of accuracy, precision and resolution in data should be identified.
support systems for the users should be identified, ie, logistics, availability of finance, training, etc.

Programmes and Plans – Deeper study

The five-year plans and annual plans of user departments in India are the best sources of information on the long-range activities of these organisations. In addition, annual reports provide useful information on the levels of technology and productivity of the organisation. In fact, an annual report is the best source of information about the strength of the organisation and, most importantly, whether the available infrastructure can absorb the technological change.

Priority of problems

It is most vital to know the priorities of the user’s problems for which the technology package is to be designed. Priority can be generated by internal factors of the organisation, or it can be generated by external ones. For example, aid agencies such as the World Bank, FAO, etc, insist on certain types of maps for “assessments”. The demand of the World Bank then becomes a “priority” problem to be solved by the technology.

Technology – finding solutions for users’ problems

One of the keywords here is appropriateness. Introducing new technology is justified only on the basis of increasing productivity, the smoothness of operations, support to decision-making and to some extent, enhancement of the organisation’s image. Productivity can be measured according to the following four attributes: quality, quantity, cost and reliability.

Many conventional practices, including cost/benefit ratios can also be used for determining increases in productivity. The upshot of all this is that the “betterness” of a new technology is to be proved before it is adopted.

Acceptability, it seems, is the least concern of engineers and scientists. They feel that if a technology is right for society or an organisation, it should also be acceptable to them. In my experience, the “very right” may not be acceptable to the receiving organisation. It is possible (albeit not easy) to develop a more acceptable package/mix of technology if the subject is discussed thoroughly by the “donors of technology” and its users. Other considerations (mentioned elsewhere) affecting individuals in the change process have to be taken into account before a technology package is recommended. The element of acceptability is a complex one and does not often respond to very structured thought processes. There are many instances where acceptance or non-acceptance of technology has played a major role (if not wreaked havoc) in the introduction of new technology. It may be more prudent to transfer technology in small doses rather than to pass on the latest know-how in one step.

The documentation of technology enables the reader to distinguish whether the technology is at production, operational, quasi-operational or at R & D stage. There are many instances where a technology while still at R & D stage is transferred as a production-level package.

A joint R & D programme between the donor and receiver is another way to transfer technology. In such a case the technology, in original or more often in modified form, is tested in actual conditions. The donor gains a better insight into the problems of the organisation. R & D projects between organisations of developing countries are another effective model.

People – Attitudes towards change

Values, norms, behaviour and attitudes of people have an impact on the transfer of technology. In many societies, the change process should be made deliberately slow because culturally the people are not used to rapid change. The rate of change must therefore be considered beforehand and has to be regulated at the most appropriate level. Slowness of change increases the acceptability of change. If the change is too fast, reactions can also be as fast and drastic. The Directorate of Land Records provides an encouraging example. The technology of rectifying aerial photographs was introduced successfully because of a well managed transfer by the Indian Institute of Remote Sensing. The changes even percolated to “patwari” (the lowest government functionary) level.

Research studies on the subject of change have defined various components of the manager’s role in organisation change. These include:

what the job involves
what the manager can do
what the manager achieves
what the manager knows

Stated rather simplistically, for practical decision making a “people profile” should be carefully made for each organisation. The important characteristics here are:

Individual traits (their values, norms, behaviour, compulsions and conflicts)
The ‘technologic health’ of the organisation (educational levels, technical and knowledge renewal policies and library habits)
Available equipment (computers, etc.)
Cooperation and collaboration with other organisations.

These are then ranked from highly desirable through favourable, neutral and not favourable to indifferent.

Factors belonging to the realm of organisation structure play an important role in the management of technological change. Technology transfer is affected by existing organisational structure and any infusion of new technology affects the organisation. The Survey of India (SOI), a traditional department (more than 235 years old) provides some interesting examples.

The introduction of photogrammetry in the early ‘50s increased productivity by 2.5 to 3 times in terms of manuscript maps. An office of the SOI, known as “party”, became capable of producing 22 to 24 map sheets per year, compared with a previous average of eight sheets. These eight sheets produced from conventional field methods used to be “fair drawn” (cartographic completion) in the summer and rainy seasons of about six months (April to September). Thus a party was balanced. The increased production of manuscript maps created a backlog in cartography. This situation could be solved only by SOI opening more drawing offices.

The introduction of GIS entails close cooperation between data-generating agencies. Since the data are multidisciplinary (and therefore multi-organisational), the decentralized structure of parties in Survey of India is not suitable for absorbing the latest computer information systems. Many new offices have been opened for the new technology of digital mapping, but a major chunk of SOI is untouched by this development.

Most traditional large organisations are governed by a set of well established but traditional charters of duties or objectives. The Survey of India and the Geological Survey of India have become almost synonymous with the profession of surveying, and change is difficult to bring about unless the top leadership at SOI and the government decide to change the objectives.

The transfer of new technology has to go through a large number of layers of decision makers. For example, there are about 600 important towns in India which need base maps for urban planners, almost “yesterday”. But there are only about 200 formally produced base maps and guide maps. This shortfall, which has existed in India for the last four or five decades, has not been catered for by any department of the government of India. The reason: the task does not fall within the charter of any existing organisation. The Commission on Urbanisation therefore recommended in 1988 that a new organisation, the “Settlement Survey of India”, be created to fill this gap.

The existence of various professional entities scattered all over the nation is always conducive to better absorption of technology. For example, geology and geomorphology are well represented in the universities, the government sector, the Geological Survey of India, Central Ground Water Board, the public sector such as the Mineral Exploration Corporation, and in various organisations at state level. This has resulted in good professional standards in almost all organisations. Additionally, the existence of a central programming board facilitates exchange of views and helps in desired changes of technology.

In contrast, there is a vast gap in the level of technology in surveying at central (SOI) and state cadastral/land records offices. The technology available at state level, with the exception of three or four states, is at an archaic level. There is no visible formal collaboration between state cadastral offices and SOI.

Policy Environment

There is no denying that the policy and legal environment has a profound bearing on the success or failure of the process of introducing a new technology. The absence of the right policy will create impediments in the implementation of change.

For example, aerial photography in India is governed by a policy of restriction: all aerial photographs are classified as secret. Permission from the Ministry of Defense has to be obtained at the time of flying and again after the photographs are completed, and then before release of maps derived from the photographs. The steps in filling out forms and their cumbersome follow-up have made many organisations and individuals give up this technology altogether.

The result is that orthophotomapping, which was introduced in the early ‘70s, died a quick death, and there is hardly any production of orthophotomaps. All this despite the need for base maps for some 600 important towns—and nothing could have solved this problem more elegantly than making use of aerial photography and its products.

Corporate Strategy for Change

The strategy or approach for managing technological change at the organisation level is dictated by the various factors mentioned above. In fact, all factors impinge on the strategy, so we must decide on their order of importance. Some relevant literature has appeared on this strategy, including the Mayo model (3). This model deals with “pull” and “push” factors. Pull factors include the common public good, public receptivity, a clear mandate (legal) and timeliness. Push factors include the potential of the technology, the embedded base of technology, natural sequencing and standards.

There is also a case for using social marketing strategies. These include the four Ps of standard marketing strategy: product (technology), price (project cost), place (availability) and promotion (advertising and promoting). In our case, we can add: preparatory surveys of needs and problems (the right technology for each problem) and the actual performance of technology in a real organisation. This marketing model shows a lot of promise.

Timing is a very important and complex factor and can make or mar the smooth introduction of new technology. Many examples can be cited where wrong timing spoiled the chances of a new technology. In fact, a bad experiment acts as an “immunisation” against any future attempts.

For example, the absorption of the technology of aerial photography for producing cadastral maps was most timely in Madhya Pradesh because a large number of villages (about 1500) did not have maps at all. The political leadership supported the change whole-heartedly. The result is that a full-fledged production capacity has been generated, which uses the latest technology of aerial photography and computers. The organisation has become a pace-setter for India.

The literature on organisational development amply states the importance of the involvement of the highest echelon of the corporate body in the change process. Any attempt to introduce change at lower levels without involving top persons will entail more effort. Second, the change may not attract the right resources or priorities in the total working environment of the organisation. We also have to be aware of the customer/user relationship. Thus the optimal climate is one in which change is desired at all three levels: decision makers, professionals and technologists. A training programme for the introduction of a new technology must therefore ensure that training/education is done simultaneously for all three levels. Further, the role of outsiders (interventionist/expert/change agent) cannot be underestimated in managing the change of technology.

Faith (or the image of the change agent)

Management literature is almost bereft of the word “faith”. We, however, have observed that when a change is introduced by a person or persons in whom the organisation has faith, it is done smoothly. The converse is also true. It is therefore an important consideration that any new technology should be introduced by that person or group of persons in whom the people have faith. The change agents have to be conscious of their faith image in their organisation.

Abridged and updated from the article ‘Transfer of remote sensing to users: an analysis of factors for the management of change’ published in ITC Journal 1993-3.

GIS Project Implementation Issues and Strategies

This paper presents an overview of GIS project implementation issues and requirements. The focus is on identifying implementation planning issues and strategies that must be addressed for a successful GIS implementation in organisations. The paper will be of most interest to institutional managers and focuses on three key areas:

1. Current Options and Software Assessment
2. Justification and Expectations
3. Implementation Issues

Current Options and Software Assessment

Perhaps the first question asked by anyone discovering GIS is: what are the current options available? This question is often asked as directly as: what is the best GIS? Quite simply, there is no best GIS. A wide variety of GIS software offerings exist in the commercial market place. Commercial surveys often are a good starting point in the assessment of GIS software. The number of GIS software offerings is approximately 10 if one eliminates the following:

the university based research software, which tends to lack full integration and usually has narrow channels of functionality;

the CAD vendors, who like to use GIS jargon but often cannot provide full featured functionality; and

the consulting firms, that will provide or customize selected modules for a GIS but lack a complete product.

One of the problems in evaluating the functionality of GIS software is the bias one gets from using one system or another. Comparing similar functions between systems is often confusing. Like any software, ultimately some do particular tasks better than others, and also some lack functionality compared to others.

Due mostly to this diverse range of architectures and the complex nature of spatial analysis, no standard evaluation technique or method has been established to date.

Any GIS should be evaluated strictly in terms of the potential user's needs and requirements, in consideration of their work procedures, production requirements, and organizational context! The experienced GIS consultant can play a large and valuable role in the assessment process.

A current accepted approach to selecting the appropriate GIS involves establishing a benchmark utilizing real data that best represents the normal workflow and processes employed in your organization.

The identification of potential needs and requirements is essential in developing a proper benchmark with which to evaluate GIS software packages. A formalized user need analysis is absolutely critical to the successful implementation of GIS technology.

Development of the benchmark should include a consideration of other roles within your organization that may require integration with the GIS technology. A logical and systematic approach as such is consistent with existing information systems (IS) planning methodologies and will ultimately provide a mechanism for a successful evaluation process.

Justification and Expectations

GIS is a long term investment that matures over time. The turnaround for results may be longer term than initially expected. Quite simply, GIS has a steep learning curve. The realization of positive results and benefits will not be achieved overnight.

Both initial investment funding and continued financial support are major determinants in the success or failure of a GIS.

Most often the justification and acquisition of a GIS centers on technical issues of computer hardware and software, functional requirements, and performance standards. But experience has shown that, as important as these issues may be, they are not the ones that in the end determine whether a GIS implementation will succeed or not.

Even though the proper assessment of an appropriate GIS product requires a good understanding of user's needs, most often systems are acquired based on less than complete and biased evaluations. Nonetheless, even with the GIS in hand a properly structured and systematic implementation plan is required for a successful operation. Generally, a GIS implementation plan must address the following technical, financial, and institutional considerations :

system acquisition tactics and costs;
data requirements and costs;
database design;
initial data loading requirements and costs;
system installation tactics, timetable, and costs;
system life cycle and replacement costs;
day-to-day operating procedures and costs;
staffing requirements and costs;
user training and costs; and
application development and costs.

Potential GIS buyers should be aware of the necessary investment required in hardware, software, training, supplies, and staffing. The cost of establishing a successful GIS operation is substantial. However, with realistic expectations and support the development of GIS within an organization that manipulates geographic data will almost certainly prove beneficial.

Certain considerations of data longevity, data capture, personnel hiring, etc. are the practical concerns of GIS implementation. The longer term implications, such as hardware/software maintenance and replacement, should also be considered. The acquisition of GIS technology should not be done without seriously considering the way in which GIS will interact with the rest of the organization.

It is simply not enough to purchase a computer, a plotter, a display device, and some software and to put it into a corner with some enthusiastic persons and then expect immediate returns. A serious commitment to GIS implies a major impact on the whole organization.

Implementation Issues

The mere presence of an implementation plan does not guarantee success. Most organizations do not have sufficient staff to cope with the commitment and extra work required when introducing a GIS to existing operations. GIS implementation must also consider all technology transfer processes.

Common Pitfalls

Several pitfalls exist that most often contribute to the failure of a GIS implementation strategy. These are identified below:

1. Failure to identify and involve all users

Users in an operational GIS environment consist of operations, management, and policy levels of the organization. All three levels should be considered when identifying the needs of your users.

2. Failure to match GIS capability and needs.

A wide spectrum of GIS hardware and software choices currently exists. The buyer is presented with a significant challenge in making the right choice. Remember, the right choice will be the GIS that provides the needed performance - no more, no less - for the minimum investment. The success of a GIS implementation is particularly sensitive to the right hardware and software choices!

3. Failure to identify total costs.

The GIS acquisition cost is relatively easy to identify. However, it will represent a very small fraction of the total cost of implementing a GIS. Ongoing costs are substantial and include hardware and software maintenance, staffing, system administration, initial data loading, data updating, custom programming, and consulting fees.
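To see how quickly ongoing costs can dwarf the purchase price, the back-of-envelope sketch below sums some of the cost items named above over a five-year horizon. Every figure is a hypothetical assumption used only to illustrate the arithmetic.

acquisition = 75_000          # one-time hardware/software purchase (hypothetical)

annual_costs = {              # recurring items from the list above; amounts are invented
    "hardware/software maintenance": 12_000,
    "staffing":                      60_000,
    "system administration":         10_000,
    "data loading and updating":     40_000,
    "custom programming":            15_000,
    "consulting fees":               10_000,
}

years = 5
total = acquisition + years * sum(annual_costs.values())
print(f"Acquisition share of the 5-year total cost: {acquisition / total:.1%}")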

4. Failure to conduct a pilot study

The GIS implementation plan concerns itself with the many technical and administrative issues and their related cost impacts. Three of the most crucial issues are database design, data loading and maintenance, and day-to-day operations. A properly designed pilot study will allow you to gather the detailed observations needed to effectively estimate the operational requirements.

5. Giving the GIS implementation responsibility to the EDP Department.

Because of the distinct differences of the GIS from conventional EDP systems, the GIS implementation team is best staffed by non-data processing types. The specialized skills of the 'GIS analyst' are required at this stage. Reliance on conventional EDP personnel who lack these skills will ensure failure.

6. Failure to consider technology transfer.

Training and support for on-going learning, for in-house staff as well as new personnel, is essential for a successful implementation. Staff at all three levels should be educated with respect to the role of the GIS in the organization. Education and knowledge of the GIS can only be obtained through on-going learning exercises. Nothing can replace the investment of hands-on time with a GIS!

The Learning Curve

Contrary to information provided by commercial vendors of GIS software, there is a substantial learning curve associated with GIS. It is normally not a technology that one becomes proficient in overnight. It requires an understanding of geographical relationships accompanied by committed hands-on time to fully apply the technology in a responsible and cost effective manner. Proficiency and productivity are only obtained through applied hands-on time with the system! GIS is an applied science. Nothing can replace the investment of hands-on time with GIS. The following figure presents the typical learning curve for GIS installations.

The learning curve is dependent on a variety of factors including:

the amount of time spent by the individual with hands-on access;
the skills, aptitude and motivation of the individual;
the commitment and priority attached to GIS technology dictated by the organization and management;
the availability of data; and
the choice of software and hardware platforms.

A critical requirement for all GIS implementations is that adequate education and training be provided for operational staff, and that realistic priorities be defined for learning and applying the technology. This is where a formal training curriculum is required to ensure that time is dedicated to learning the technology properly. Adding GIS activities to a staff member's responsibilities without establishing well defined milestones and providing adequate time and training mechanisms is prone to failure. A focused and properly trained operations staff will greatly reduce turnaround times for operations and ensure consistency in the quality of the product.

The threshold point of the learning curve is typically around the two year time frame. However, this is dependent on the ability of the organization to establish a well defined and structured implementation plan that affords appropriate training and resources for technical staff. The flat part of the learning curve can be shortened if proper training is provided, data is available for use, and the right software and hardware are acquired.

The typical learning curve reflects a long initial period for understanding spatial data compilation requirements and database architecture. However, after data models are well understood and sufficient data compilation has been completed the learning curve accelerates. Once a formal application development environment is established and user needs are well defined an infrastructure exists for effective application of the technology. Building operational applications based on formal functional specifications will result in continued accelerated learning. The data hurdle is often a stumbling block for many GIS users.

The Productivity Curve

GIS is a long term investment that matures over time. The turnaround for results may be longer than initially expected. The establishment of a formal implementation strategy will help to ensure that realistic expectations are met. Data is the framework for successful application of GIS technology. In this respect, the investment in establishing a solid data platform will reap rewards in the short term by supporting a cost-effective and productive GIS operation. The availability of quality data, supplemented by a planned implementation strategy, are the cornerstones of a productive and successful GIS operation. A robust database should be considered an asset!

However, even with a well defined and systematic implementation strategy, GIS technology will not provide immediate benefits. Benefits and increased productivity are not achieved overnight. GIS technology is complex in nature, has a generally steep learning curve, and requires a complement of skills for it to be applied successfully. In fact, most organizations realize a loss in overall operational productivity over the short term while the GIS platforms are being installed, staff is trained, the learning curve is initiated, and data is being captured. This is common to all office automation activities. The following figure presents the typical situation that occurs when comparing long term productivity with, and without, GIS technology.

Depending on the unique circumstances of the implementation process, the status of data compilation, and the organizational climate, increased productivity is normally reached between the second and fifth year of implementation. This is identified by the threshold point. Again, this is dependent on a variety of factors including:

the skills and experience of the staff involved;
the priority and commitment by the organization;
the implementation strategy; and
the status of data compilation.

The primary issue with implementing GIS is to achieve the threshold point of increased productivity in the shortest possible time frame. In other words, minimize the time in which a decrease in productivity occurs. Of course, the issue of productivity is typically of greatest concern to private industry, e.g. forestry companies. Nonetheless, the significant investment in hardware/software, data, and training necessitates that a structured approach be utilized to achieve the threshold point in the shortest possible time frame.

A GIS acquisition based on well defined user needs and priorities is more likely to succeed than one without. A major pitfall of most installations of GIS technology, particularly in forestry companies and government agencies, is the lack of well defined user needs on which to base the GIS acquisition and implementation.

The Implementation Plan

Implementation can be seen as a six phase process. The phases are :

PHASE I: Creating an awareness

GIS needs to be sold within an organization. The education of staff is very important. Depending on the way in which GIS technology is being introduced to the organization, the process for creating an awareness may differ. Technical workshops are often appropriate when a top-down approach exists, while management workshops are often more relevant when a bottom-up approach exists. Education about the new technology should focus on identifying existing problems within the organization. These often help justify a GIS acquisition and include:

spatial information is poorly maintained or out of date;
spatial data is not recorded or stored in a standard way;
spatial data may not be defined in a consistent manner, e.g. different classifications for timber information;
data is not shared between departments within an organization;
data retrieval and manipulation capabilities are inadequate to meet existing needs; and
new demands are made on the organization that cannot be met with existing information systems.

PHASE II: Identifying System Requirements

The definition of system requirements is usually done in a user needs analysis. A user needs analysis identifies users of a system and all information products required by those users. Often a prioritization of the information products and the data requirements of those products is also undertaken. A proper user needs analysis is crucial to the successful evaluation of GIS software alternatives.

After user needs have been identified and prioritized they must be translated into functional requirements. Ideally, the functional requirements definition will result in a set of processing functions, system capabilities, and hardware requirements, e.g. data storage, performance. Experienced GIS consultants often play a major role in this phase.

PHASE III: System Evaluations

Evaluating alternative hardware and software solutions is normally conducted in several stages. Initially a number of candidate systems are identified. Information to support this process is acquired through demonstrations, vendor literature, etc. A short listing of candidates normally occurs based on a low level assessment. This is followed by a high level assessment based on the functional requirements identified in the previous phase. This often results in a rating matrix or template. The assessment should take into account production priorities and their appropriate functional translation. After systems have been evaluated based on functional requirements, a short list is prepared of those vendors deemed suitable. A standard benchmark, as discussed earlier, is then used to determine the system of choice.
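A rating matrix of this kind can be as simple as a weighted score per candidate. The sketch below is a generic illustration; the candidate names, functional requirements, weights and scores are all invented for the example.

# Functional requirements with importance weights (summing to 1) and 0-10
# scores for each shortlisted candidate - all values are hypothetical.
weights = {"overlay analysis": 0.30, "database links": 0.25,
           "cartographic output": 0.25, "ease of customization": 0.20}

candidates = {
    "System A": {"overlay analysis": 8, "database links": 6,
                 "cartographic output": 9, "ease of customization": 5},
    "System B": {"overlay analysis": 7, "database links": 9,
                 "cartographic output": 6, "ease of customization": 8},
}

for name, scores in candidates.items():
    total = sum(weights[req] * scores[req] for req in weights)
    print(f"{name}: weighted score = {total:.2f}")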

PHASE IV: Justifying the System Acquisition

The proper justification of the chosen system requires consideration of several factors. Typically a cost-benefit analysis is undertaken to analyze the expected costs and benefits of acquiring a system. To proceed further with acquisition, the GIS should provide considerable benefits over expected costs. It is important that intangible benefits also be identified and considered.
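A cost-benefit comparison of this kind is often expressed as a benefit-cost ratio of discounted cash flows. The sketch below is a minimal illustration; the discount rate and the yearly cost and benefit streams are assumptions, not figures from the text.

def npv(rate, cash_flows):
    # Net present value of yearly cash flows, with year 0 first.
    return sum(cf / (1 + rate) ** year for year, cf in enumerate(cash_flows))

rate = 0.08                                              # assumed discount rate
costs    = [250_000, 90_000, 90_000, 90_000, 90_000]     # acquisition, then annual operating costs
benefits = [0, 100_000, 180_000, 240_000, 280_000]       # benefits grow as productivity rises

ratio = npv(rate, benefits) / npv(rate, costs)
print(f"Benefit-cost ratio over five years: {ratio:.2f}")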

The justification process should also include an evaluation of other requirements. These include database development requirements, e.g. existing data versus new data needs and associated costs; technological needs, e.g. maintenance and training; and organizational requirements, e.g. new staff and the reclassification of existing job descriptions for those staff who will use the GIS.

PHASE V: System Acquisition and Start Up

After the system, e.g. hardware, software, and data, is acquired the start up phase begins. This phase should include pilot projects. Pilot projects are a valuable means of assessing progress and identifying problems early, before significant resources have been wasted. Also, because of the costs associated with implementing a GIS it is often appropriate to generate some results quickly to appease management. First impressions are often long remembered.

PHASE VI: System Operations

The operational phase of a GIS implementation involves the on-going maintenance, application, and development of the GIS. The issue of responsibility for the system and liability is critical. It is important that appropriate security and transaction control mechanisms be in place to support the system. A systematic approach to system management, e.g. hardware, software, and data, is essential.

Implementation Issues

Introduction

Most organizations acquiring GIS technology are relatively sophisticated:
some level of investment already exists in electronic data processing
some experience w/ database management & mapping systems . . .
some combination of mainframes, workstations, PCs
GIS technology is moving into an environment w/ its own institutional structures:
departments, areas of responsibility
as an integrating technology, more organizational changes required
cooperation, breaking down of barriers, etc. may have been arguments FOR GIS in the first place
existing structures may be changing - e.g., centralized computing services disappearing
organizational change is often DIFFICULT to achieve and can lead to FAILURE of a GIS project
organizational & institutional issues are more often reasons for failure than technical issues

Stage Theories of Computer Growth

several models proposed for the growth of computing within organizations
growth divided into a number of stages
most prominent model proposed by R. L. Nolan in 1973

Stage 1: Initiation
computer acquisition
use for low profile tasks within a major user department
early problems appear!

Stage 2: Contagion
efforts to increase use of computing
desire to use inactive resources completely
top management are supportive
fast rise in costs!

Stage 3: Control
efforts to control computing expenditures
policy & mgmt board created
efforts to centralize computing & control
formal systems development policies introduced
rate of increase in cost slows

Stage 4: Integration
refinement of controls
greater maturity in mgmt. of computing
computing seen as an organization-wide resource
application development continues in a controlled way
costs rise slowly & smoothly

How Does This Model Fit the GIS Experience?
2 versions: incremental and radical

Incremental Model
GIS is a limited expansion of existing data processing facilities; no major changes required
GIS will be managed by the data processing dept. as a service, probably run on that dept.'s server or mainframe
this model fits AM/FM & Land Information System applications best - the notion of adding geographical access to an existing administrative database
GIS acquisition is an expansion of existing facilities, thus implementation really begins at Stage 2 of Nolan's model (contagion)
if acquisition is successful, use and costs will grow rapidly, leading to control in Stage 3

Radical Model
GIS is independent of existing data processing facilities, e.g., GIS installed on PCs instead of the server or mainframe, & promoted by staff w/ little or no history of data processing use
data processing dept. may resist acquisition, or attempt to persuade mgmt. to adopt an incremental strategy instead
may be strong pressure to make GIS hardware compatible w/ main data processing facility to minimize training/maintenance costs
this model more likely in GIS applications w/ a strong analytical component (resource mgmt., planning, etc.)
model assumes GIS will NOT require a large supporting infrastructure, unlike a central data processing facility w/ staff of operators, programmers, consultants, etc.
unlike the incremental model, implementation begins at Stage 1 of Nolan's model
few systems progress beyond Stage 2 - the process of contagion is still underway, GIS is still new
stage 2 is slow with GIS b/c of the need to educate/train users in the new approach
GIS does NOT replace existing manual procedures in many applications (unlike many data processing applications)
support by mgmt. may evaporate before the honeymoon is over!
No Stage 3 or 4: currently little documentation of well-controlled (stage 3), well-integrated (stage 4) systems, but . . . this will change rapidly over the next few years

Resistance to Change
many organizations are conservative
resistance to change has always been a problem in technological innovation
change requires leadership
stage 1 requires a missionary within an existing department
stage 2 requires commitment of top mgmt., similar commitment of individuals w/in departments
despite economic, operational, even political advantages of GIS, the technology is new and outside the experience of many senior managers
leaders take great personal risk
ample evidence of past failure of GIS projects
initial missionary is an obvious scapegoat for failure
Chrisman (1988) documents the role of various leaders in the early technical development of GIS
similar roles within organizations will likely never be documented!

Implementation Problems:
1. Over-Emphasis on Technology
planning teams made up of technical staff will emphasize technical issues in planning
perhaps they will ignore managerial issues
planning teams often forced to deal with short-term issues
perhaps no time to address longer-term management issues

2. Rigid Work Patterns
it may be difficult for the planning team to foresee necessary changes in work patterns
a formerly stable workforce may be disrupted
e.g., some jobs may disappear!
or some jobs may be redefined, e.g., drafting staff reassigned to digitizing
some staff may find their new jobs too demanding
e.g., former keyboard operators may now need to do database query operations
drafting staff may need computing skills
people comfortable in their roles will not seek change
e.g., people must be persuaded of benefits of change through education/training

3. Organizational Inflexibility
planning team must foresee necessary changes in the organization hierarchy, the organization's wiring diagram
departments that are expected to interact and exchange data must be willing to do so!

4. Decision-Making Procedures
many GIS projects are initiated by an advisory group drawn from different depts.
adequate for early phases of acquisition but must be replaced by a group with a more well-defined decision-making responsibility
usually painful to give a single dept. authority (funds must be reassigned to that dept.) but this usually assures a higher rate of success
e.g., many states have assigned responsibility for GIS operation to a dept. of natural resources
consulting is then mandated from related user departments through committees
project may be derailed if any important or influential individuals are left out of the planning process!

5. Assignment of Responsibilities
subtle mixture of technical, political, and organizational issues
typically made on technical grounds
then modified to meet pressing political, organizational issues

6. System Support Staffing
at a MINIMUM, a multi-user GIS requires:
a system manager responsible for day-to-day operation, staffing, financing, meeting of user requests
a database manager responsible for database design, planning data input, data security, database integrity
planning team may NOT recognize necessity of these positions
in ADDITION, the system will require:
staff for data input, report production
applications programming staff for initial development, although these may be supplied by the GIS vendor
management may be tempted to fill these positions from existing staff without adequate attention to qualifications
personnel dept. will be unfamiliar w/ nature of positions, qualifications, SALARIES

Strategies to Facilitate SUCCESS
1. INVOLVE the MANAGEMENT
management must take a more active role than just providing money & resources
support implementation of multi-disciplinary GIS teams
help to develop organizational strategies for crossing internal political boundaries
support interagency agreements to assist in data sharing & data acquisition
2. TRAINING & EDUCATION
staff and management must be kept current in the technology and applications
short courses, conferences, trade & academic journals
3. CONTINUED PROMOTION
project staff must continue to promote the benefits of GIS, even after it has been adopted
ensures continued financial & political support
projects should be of high quality and value
high profile projects often gain public support
4. RESPONSIVENESS
project must be seen to be responsive to users' needs
continue to explore ways to make GIS quick and efficient to use: user interfaces, task automation
5. IMPLEMENTATION & FOLLOW-UP PLANS
carefully developed implementation plans
plans for checking on progress
both necessary to ensure controlled management and continued support
follow-up plans must assess progress
need check points for assessing this . . . audits of productivity, perhaps a study of costs and benefits

Developments and Trends

This chapter reviews the latest trends in GIS technology.

New Data Sources
Hardware Developments
Software Developments

The development and application of geographic information systems is vibrant and exciting. The term GIS remains one of the most popular buzz words in the computer industry today. GIS is perceived as one of the emerging technologies in the computer marketplace. The involvement of major computer vendors is an illustration of this fact. Everybody wants a GIS. This popularity is not without its validity however. GIS is very much a multi-disciplinary tool for the management of spatial data. It is inherently complex because of the need to integrate data from a variety of sources. Functions must accommodate several application areas in a detailed and efficient manner. A variety of important developments are occurring which will have profound effects on the use of GIS. They are identified in the following sections.

New Data Sources

The generation of data from new sources is an ongoing development. Application specialists have traditionally attempted to research and incorporate new data sources into their work. Most of these new data sources are based strictly on scientific and technological developments.

Remote sensing will become, if it is not already, the primary source for new data. Due to recent technological developments in hardware most GIS software can now accommodate remotely sensed imagery at high resolutions, and in varying formats. Remote sensing data can include aerial photographs, satellite imagery, radar imagery, etc. Some of the past problems with using remotely sensed imagery have been the inability to integrate it with other data layers, particularly vector encoded data. Remote sensing specialists stress that their data is of most value when combined with, and substantiated by, other data sources. Several commercial GIS products are now offering their software bundled with an image processing software package. Many of these packages allow you to interactively view data from both systems simultaneously, and also afford the conversion of data between systems. The integration of GIS and image processing capabilities offers a great potential for resource specialists.

Another data source that has generated much interest is the Digital Elevation Model (DEM). Elevation data has traditionally been generated from the interpolation of contour information. However, recent technological developments and the establishment of several digital mapping projects by government agencies have propagated the use of, and interest in, elevation modelling. Several different sources of DEM data exist within Canada. The most common and readily available DEM data can be acquired from either the federal government, e.g. at the 1:250,000 map scale, or from selected provincial government agencies. For example, DEM data commensurate with a 1:20,000 map scale is distributed by the Alberta Government under the 1:20,000 Provincial Digital Mapping project. In British Columbia, DEM data is available with the 1:20,000 TRIM project. In both these cases DEM data is captured photogrammetrically during the stereo-compilation phase of the topographic data capture process. Each DEM comprises X, Y, and Z coordinates at regular intervals across a map sheet. This regular grid is supplemented by spot height data points and breakline information (irregular points). In the United States, DEM data is available from a variety of sources; the most common is the USGS (United States Geological Survey) 1:24,000 QUAD sheets.

DEM data can be used in the generation of a variety of data derivatives. The most common are slope and aspect. The ability to integrate DEM data is a common function within most GIS packages. However, it is typically offered as a separate module that must be purchased individually.
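As an illustration of such derivatives, the following is a minimal sketch (independent of any particular GIS package) of deriving slope and aspect from a regular elevation grid with NumPy. The dem array and cell size are assumed inputs, and the aspect convention shown is only one of several in use.

    import numpy as np

    def slope_aspect(dem, cell_size):
        """Approximate slope (degrees) and aspect (degrees, measured from north)
        from a regular-grid DEM using central differences."""
        # Elevation gradients along rows (y) and columns (x), per map unit
        dz_dy, dz_dx = np.gradient(dem.astype(float), cell_size)
        slope = np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))
        # One common aspect convention; GIS packages differ in the exact formula
        aspect = np.degrees(np.arctan2(dz_dx, -dz_dy)) % 360.0
        return slope, aspect

    # Example: a small synthetic DEM with 20 m grid spacing
    dem = np.array([[100., 101., 103.],
                    [ 99., 100., 102.],
                    [ 98.,  99., 101.]])
    slope, aspect = slope_aspect(dem, cell_size=20.0)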

Hardware Developments

The technological advancements made in hardware and software development over the past few years have been phenomenal. The distinction between personal computer and workstation, a mainstay during the 1980s, has become very fuzzy. Recent developments within the micro-chip industry, e.g. the Pentium chip, have made the micro-computer a viable and promising tool for the processing of spatial data. Most notable of these is the emergence of 32-bit Pentium-based micro-computers and the use of the Windows NT operating environment.

Several trends in hardware and software development for GIS technology stand out. These are reviewed below:

The dominant hardware system architecture for GIS during the 1980s was the centralized multi-user host network. The distributed network architecture, utilizing UNIX-based servers and desktop workstations, has been the norm over the past five years;

The trend in disk storage is towards greatly increased storage sizes for micro-computers, e.g. PCs and workstations, at lower cost;

The emergence of relatively low-cost, reliable raster output devices, in particular inexpensive ink jet plotters, has displaced the more expensive colour electrostatic plotter as the de facto standard plotting device for GIS;

The emergence of fast, relatively inexpensive micro-computers with competitive CPU power, e.g. the 32-bit Pentium, has challenged the traditional UNIX stronghold of GIS;

While the de facto operating system standard has been UNIX, the Windows NT operating system is emerging as a serious and robust alternative. This is especially prevalent among organizations wishing to integrate their office computing environment with their GIS environment. This trend is closely associated with the development of 32-bit micro-computers;

SQL (Structured Query Language) has become the standard interface for relational DBMSs (a brief example query is sketched after this list);

The ability to customize user interfaces and functionality through Application Programming Interfaces (APIs) and macro languages has become widespread. This has been the major development in GIS technology over the past five years: the ability to customize the GIS for specific needs. Application development is a mandatory requirement for all GIS sites and should be weighted accordingly when considering a GIS acquisition.
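To illustrate the SQL interface mentioned in the list above, the following is a minimal sketch using Python's built-in sqlite3 module. The parcels table and its columns are hypothetical and stand in for whatever attribute tables a particular GIS stores in its relational DBMS.

    import sqlite3

    # Hypothetical attribute table, for illustration only
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE parcels (parcel_id TEXT, land_use TEXT, area_ha REAL)")
    conn.executemany("INSERT INTO parcels VALUES (?, ?, ?)",
                     [("P001", "residential", 0.12),
                      ("P002", "commercial", 1.50),
                      ("P003", "residential", 0.09)])

    # A typical attribute query a GIS front end might pass to the DBMS via SQL
    rows = conn.execute(
        "SELECT parcel_id, area_ha FROM parcels "
        "WHERE land_use = ? AND area_ha > ?",
        ("residential", 0.1)).fetchall()
    print(rows)  # [('P001', 0.12)]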

Software Developments

Digital Image Processing

4.1 Introduction

In order to take advantage of and make good use of remote sensing data, we must be able to extract meaningful information from the imagery. This brings us to the topic of discussion in this chapter - interpretation and analysis - the sixth element of the remote sensing process which we defined in Chapter 1. Interpretation and analysis of remote sensing imagery involves the identification and/or measurement of various targets in an image in order to extract useful information about them. Targets in remote sensing images may be any feature or object which can be observed in an image, and have the following characteristics:

Targets may be a point, line, or area feature. This means that they can have any form, from a bus in a parking lot or plane on a runway, to a bridge or roadway, to a large expanse of water or a field.

The target must be distinguishable; it must contrast with other features around it in the image.

Much interpretation and identification of targets in remote sensing imagery is performed manually or visually, i.e. by a human interpreter. In many cases this is done using imagery displayed in a pictorial or photograph-type format, independent of what type of sensor was used to collect the data and how the data were collected. In this case we refer to the data as being in analog format. As we discussed in Chapter 1, remote sensing images can also be represented in a computer as arrays of pixels, with each pixel corresponding to a digital number representing the brightness level of that pixel in the image. In this case, the data are in a digital format. Visual interpretation may also be performed by examining digital imagery displayed on a computer screen. Both analog and digital imagery can be displayed as black and white (also called monochrome) images, or as colour images (refer back to Chapter 1, Section 1.7) by combining different channels or bands representing different wavelengths.

When remote sensing data are available in digital format, digital processing and analysis may be performed using a computer. Digital processing may be used to enhance data as a prelude to visual interpretation. Digital processing and analysis may also be carried out to automatically identify targets and extract information completely without manual intervention by a human interpreter. However, rarely is digital processing and analysis carried out as a complete replacement for manual interpretation. Often, it is done to supplement and assist the human analyst. 

Manual interpretation and analysis dates back to the early beginnings of remote sensing for air photo interpretation. Digital processing and analysis is more recent with the advent of digital recording of remote sensing data and the development of computers. Both manual and digital techniques for interpretation of remote sensing data have their respective advantages and disadvantages. Generally, manual interpretation requires little, if any, specialized equipment, while digital analysis requires specialized, and often expensive, equipment. Manual interpretation is often limited to analyzing only a single channel of data or a single image at a time due to the difficulty in performing visual interpretation with multiple images. The computer environment is more amenable to handling complex images of several or many channels or from several dates. In this sense, digital analysis is useful for simultaneous analysis of many spectral bands and can process large data sets much faster than a human interpreter. Manual interpretation is a subjective process, meaning that the results will vary with different interpreters. Digital analysis is based on the manipulation of digital numbers in a computer and is thus more objective, generally resulting in more consistent results. However, determining the validity and accuracy of the results from digital processing can be difficult.

It is important to reiterate that visual and digital analyses of remote sensing imagery are not mutually exclusive. Both methods have their merits. In most cases, a mix of both methods is usually employed when analyzing imagery. In fact, the ultimate decision of the utility and relevance of the information extracted at the end of the analysis process, still must be made by humans.

4.2 Elements of Visual Interpretation

As we noted in the previous section, analysis of remote sensing imagery involves the identification of various targets in an image, and those targets may be environmental or artificial features which consist of points, lines, or areas. Targets may be defined in terms of the way they reflect or emit radiation. This radiation is measured and recorded by a sensor, and ultimately is depicted as an image product such as an air photo or a satellite image.

What makes interpretation of imagery more difficult than the everyday visual interpretation of our surroundings? For one, we lose our sense of depth when viewing a two-dimensional image, unless we can view it stereoscopically so as to simulate the third dimension of height. Indeed, interpretation benefits greatly in many applications when images are viewed in stereo, as visualization (and therefore, recognition) of targets is enhanced dramatically. Viewing objects from directly above also provides a very different perspective than what we are familiar with. Combining an unfamiliar perspective with a very different scale and lack of recognizable detail can make even the most familiar object unrecognizable in an image. Finally, we are used to seeing only the visible wavelengths, and the imaging of wavelengths outside of this window is more difficult for us to comprehend.

Recognizing targets is the key to interpretation and information extraction. Observing the differences between targets and their backgrounds involves comparing different targets based on any, or all, of the visual elements of tone, shape, size, pattern, texture, shadow, and association. Visual interpretation using these elements is often a part of our daily lives, whether we are conscious of it or not. Examining satellite images on the weather report, or following high speed chases by views from a helicopter are all familiar examples of visual image interpretation. Identifying targets in remotely sensed images based on these visual elements allows us to further interpret and analyze. The nature of each of these interpretation elements is described below, along with an image example of each.

Tone refers to the relative brightness or colour of objects in an image. Generally, tone is the fundamental element for distinguishing between different targets or features. Variations in tone also allow the elements of shape, texture, and pattern of objects to be distinguished.

Shape refers to the general form, structure, or outline of individual objects. Shape can be a very distinctive clue for interpretation. Straight edge shapes typically represent urban or agricultural (field) targets, while natural features, such as forest edges, are generally more irregular in shape, except where man has created a road or clear cuts. Farm or crop land irrigated by rotating sprinkler systems would appear as circular shapes.

Size of objects in an image is a function of scale. It is important to assess the size of a target relative to other objects in a scene, as well as the absolute size, to aid in the interpretation of that target. A quick approximation of target size can direct interpretation to an appropriate result more quickly. For example, if an interpreter had to distinguish zones of land use, and had identified an area with a number of buildings in it, large buildings such as factories or warehouses would suggest commercial property, whereas small buildings would indicate residential use.

Pattern refers to the spatial arrangement of visibly discernible objects. Typically an orderly repetition of similar tones and textures will produce a distinctive and ultimately recognizable pattern. Orchards with evenly spaced trees, and urban streets with regularly spaced houses are good examples of pattern.

Texture refers to the arrangement and frequency of tonal variation in particular areas of an image. Rough textures would consist of a mottled tone where the grey levels change abruptly in a small area, whereas smooth textures would have very little tonal variation. Smooth textures are most often the result of uniform, even surfaces, such as fields, asphalt, or grasslands. A target with a rough surface and irregular structure, such as a forest canopy, results in a rough textured appearance. Texture is one of the most important elements for distinguishing features in radar imagery.

Shadow is also helpful in interpretation as it may provide an idea of the profile and relative height of a target or targets which may make identification easier. However, shadows can also reduce or eliminate interpretation in their area of influence, since targets within shadows are much less (or not at all) discernible from their surroundings. Shadow is also useful for enhancing or identifying topography and landforms, particularly in radar imagery.

Association takes into account the relationship between other recognizable objects or features in proximity to the target of interest. The identification of features that one would expect to associate with other features may provide information to facilitate identification. In the example given above, commercial properties may be associated with proximity to major transportation routes, whereas residential areas would be associated with schools, playgrounds, and sports fields. In our example, a lake is associated with boats, a marina, and adjacent recreational land.

4.3 Digital Image Processing

In today's world of advanced technology where most remote sensing data are recorded in digital format, virtually all image interpretation and analysis involves some element of digital processing. Digital image processing may involve numerous procedures including formatting and correcting of the data, digital enhancement to facilitate better visual interpretation, or even automated classification of targets and features entirely by computer. In order to process remote sensing imagery digitally, the data must be recorded and available in a digital form suitable for storage on a computer tape or disk. Obviously, the other requirement for digital image processing is a computer system, sometimes referred to as an image analysis system, with the appropriate hardware and software to process the data. Several commercially available software systems have been developed specifically for remote sensing image processing and analysis.

For discussion purposes, most of the common image processing functions available in image analysis systems can be categorized into the following four categories:

Preprocessing
Image Enhancement
Image Transformation
Image Classification and Analysis

Preprocessing functions involve those operations that are normally required prior to the main data analysis and extraction of information, and are generally grouped as radiometric or geometric corrections. Radiometric corrections include correcting the data for sensor irregularities and unwanted sensor or atmospheric noise, and converting the data so they accurately represent the reflected or emitted radiation measured by the sensor. Geometric corrections include correcting for geometric distortions due to sensor-Earth geometry variations, and conversion of the data to real world coordinates (e.g. latitude and longitude) on the Earth's surface.

 

The objective of the second group of image processing functions grouped under the term of image enhancement, is solely to improve the appearance of the imagery to assist in visual interpretation and analysis. Examples of enhancement functions include contrast stretching to increase the tonal distinction between various features in a scene, and spatial filtering to enhance (or suppress) specific spatial patterns in an image.

Image transformations are operations similar in concept to those for image enhancement. However, unlike image enhancement operations which are normally applied only to a single channel of data at a time, image transformations usually involve combined processing of data from multiple spectral bands. Arithmetic operations (i.e. subtraction, addition, multiplication, division) are performed to combine and transform the original bands into "new" images which better display or highlight certain features in the scene. We will look at some of these operations including various methods of spectral or band ratioing, and a procedure called principal components analysis which is used to more efficiently represent the information in multichannel imagery.

Image classification and analysis operations are used to digitally identify and classify pixels in the data. Classification is usually performed on multi-channel data sets, and this process assigns each pixel in an image to a particular class or theme based on the statistical characteristics of the pixel brightness values. There are a variety of approaches taken to perform digital classification. We will briefly describe the two generic approaches which are used most often, namely supervised and unsupervised classification.

In the following sections we will describe each of these four categories of digital image processing functions in more detail.

4.4 Pre-processing

Pre-processing operations, sometimes referred to as image restoration and rectification, are intended to correct for sensor- and platform-specific radiometric and geometric distortions of data. Radiometric corrections may be necessary due to variations in scene illumination and viewing geometry, atmospheric conditions, and sensor noise and response. Each of these will vary depending on the specific sensor and platform used to acquire the data and the conditions during data acquisition. Also, it may be desirable to convert and/or calibrate the data to known (absolute) radiation or reflectance units to facilitate comparison between data.

Variations in illumination and viewing geometry between images (for optical sensors) can be corrected by modeling the geometric relationship and distance between the area of the Earth's surface imaged, the sun, and the sensor. This is often required so as to be able to more readily compare images collected by different sensors at different dates or times, or to mosaic multiple images from a single sensor while maintaining uniform illumination conditions from scene to scene.

As we learned in Chapter 1, scattering of radiation occurs as it passes through and interacts with the atmosphere. This scattering may reduce, or attenuate, some of the energy illuminating the surface. In addition, the atmosphere will further attenuate the signal propagating from the target to the sensor. Various methods of atmospheric correction can be applied, ranging from detailed modeling of the atmospheric conditions during data acquisition to simple calculations based solely on the image data. An example of the latter method is to examine the observed brightness values (digital numbers) in an area of shadow or for a very dark object (such as a large clear lake) and determine the minimum value. The correction is applied by subtracting the minimum observed value, determined for each specific band, from all pixel values in each respective band. Since scattering is wavelength dependent (Chapter 1), the minimum values will vary from band to band. This method is based on the assumption that the reflectance from these features, if the atmosphere is clear, should be very small, if not zero. If we observe values much greater than zero, then they are considered to have resulted from atmospheric scattering.
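A minimal NumPy sketch of this dark-object subtraction, assuming the image is an array of digital numbers with shape (bands, rows, cols); in practice the minimum would usually be taken from a known dark target rather than the whole scene.

    import numpy as np

    def dark_object_subtraction(image):
        """Subtract each band's minimum observed DN (a simple haze estimate)
        from every pixel in that band; results are clipped at zero as a safeguard."""
        image = image.astype(np.int32)
        # Minimum digital number per band, taken over all pixels in that band
        band_minima = image.min(axis=(1, 2), keepdims=True)
        return np.clip(image - band_minima, 0, None)

    # Example with two tiny 3x3 bands of digital numbers
    dns = np.array([[[12, 15, 40], [13, 60, 80], [12, 20, 25]],
                    [[ 8, 10, 30], [ 9, 55, 70], [ 8, 18, 22]]])
    corrected = dark_object_subtraction(dns)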

Noise in an image may be due to irregularities or errors that occur in the sensor response and/or data recording and transmission. Common forms of noise include systematic striping or banding and dropped lines. Both of these effects should be corrected before further enhancement or classification is performed. Striping was common in early Landsat MSS data due to variations and drift in the response over time of the six MSS detectors. The "drift" was different for each of the six detectors, causing the same brightness to be represented differently by each detector. The overall appearance was thus a 'striped' effect. The corrective process made a relative correction among the six sensors to bring their apparent values in line with each other. Dropped lines occur when there are system errors which result in missing or defective data along a scan line. Dropped lines are normally 'corrected' by replacing the line with the pixel values in the line above or below, or with the average of the two.
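A minimal sketch of the dropped-line repair described above, using NumPy; the band array and the list of defective row indices are assumed inputs.

    import numpy as np

    def fix_dropped_lines(band, bad_rows):
        """Replace dropped (defective) scan lines with the average of the
        lines immediately above and below, as described in the text."""
        fixed = band.astype(float).copy()
        last_row = band.shape[0] - 1
        for r in bad_rows:
            above = fixed[r - 1] if r > 0 else fixed[r + 1]
            below = fixed[r + 1] if r < last_row else fixed[r - 1]
            fixed[r] = (above + below) / 2.0
        return fixed

    # Example: row 2 of a small band is defective (all zeros)
    band = np.array([[10, 11, 12], [11, 12, 13], [0, 0, 0], [13, 14, 15]])
    repaired = fix_dropped_lines(band, bad_rows=[2])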

For many quantitative applications of remote sensing data, it is necessary to convert the digital numbers to measurements in units which represent the actual reflectance or emittance from the surface. This is done based on detailed knowledge of the sensor response and the way in which the analog signal (i.e. the reflected or emitted radiation) is converted to a digital number, called analog-to-digital (A-to-D) conversion. By solving this relationship in the reverse direction, the absolute radiance can be calculated for each pixel, so that comparisons can be accurately made over time and between different sensors.
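The conversion is typically a simple linear relationship; the following is a sketch under that assumption, with hypothetical gain and offset values standing in for a sensor's actual calibration coefficients.

    import numpy as np

    def dn_to_radiance(dn, gain, offset):
        """Linear DN-to-radiance conversion, L = gain * DN + offset, i.e. the
        analog-to-digital relationship solved in the reverse direction."""
        return gain * dn.astype(float) + offset

    # Hypothetical calibration coefficients for a single band
    radiance = dn_to_radiance(np.array([[84, 120], [153, 200]]), gain=0.055, offset=1.2)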

In section 2.10 in Chapter 2, we learned that all remote sensing imagery are inherently subject to geometric distortions. These distortions may be due to several factors, including: the perspective of the sensor optics; the motion of the scanning system; the motion of the platform; the platform altitude, attitude, and velocity; the terrain relief; and, the curvature and rotation of the Earth. Geometric corrections are intended to compensate for these distortions so that the geometric representation of the imagery will be as close as possible to the real world. Many of these variations are systematic, or predictable in nature and can be accounted for by accurate modeling of the sensor and platform motion and the geometric relationship of the platform with the Earth. Other unsystematic, or random, errors cannot be modeled and corrected in this way. Therefore, geometric registration of the imagery to a known ground coordinate system must be performed.

 

The geometric registration process involves identifying the image coordinates (i.e. row, column) of several clearly discernible points, called ground control points (or GCPs), in the distorted image, and matching them to their true positions in ground coordinates (e.g. latitude, longitude). The true ground coordinates are typically measured from a map, either in paper or digital format. This is image-to-map registration. Once several well-distributed GCP pairs have been identified, the coordinate information is processed by the computer to determine the proper transformation equations to apply to the original (row and column) image coordinates to map them into their new ground coordinates. Geometric registration may also be performed by registering one (or more) images to another image, instead of to geographic coordinates. This is called image-to-image registration and is often done prior to performing various image transformation procedures, which will be discussed in section 4.6, or for multitemporal image comparison.
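As a sketch of how such transformation equations might be derived, the following fits a first-order (affine) transformation to GCP pairs by least squares. The GCP values are hypothetical, and operational systems may fit higher-order polynomials instead.

    import numpy as np

    def fit_affine(image_rc, ground_xy):
        """Least-squares fit of x = a0 + a1*col + a2*row and y = b0 + b1*col + b2*row
        from matched (col, row) image coordinates and (x, y) ground coordinates."""
        image_rc = np.asarray(image_rc, float)
        ground_xy = np.asarray(ground_xy, float)
        A = np.column_stack([np.ones(len(image_rc)), image_rc])   # columns: 1, col, row
        coef_x, *_ = np.linalg.lstsq(A, ground_xy[:, 0], rcond=None)
        coef_y, *_ = np.linalg.lstsq(A, ground_xy[:, 1], rcond=None)
        return coef_x, coef_y

    # Four hypothetical GCPs: (col, row) in the image vs. (easting, northing) on the map
    img = [(10, 12), (500, 15), (495, 480), (8, 470)]
    gnd = [(350200, 5461900), (355100, 5461870), (355050, 5457200), (350150, 5457300)]
    coef_x, coef_y = fit_affine(img, gnd)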

In order to actually geometrically correct the original distorted image, a procedure called resampling is used to determine the digital values to place in the new pixel locations of the corrected output image. The resampling process calculates the new pixel values from the original digital pixel values in the uncorrected image. There are three common methods for resampling: nearest neighbour, bilinear interpolation, and cubic convolution. Nearest neighbour resampling uses the digital value from the pixel in the original image which is nearest to the new pixel location in the corrected image. This is the simplest method and does not alter the original values, but may result in some pixel values being duplicated while others are lost. This method also tends to result in a disjointed or blocky image appearance.

Bilinear interpolation resampling takes a weighted average of four pixels in the original image nearest to the new pixel location. The averaging process alters the original pixel values and creates entirely new digital values in the output image. This may be undesirable if further processing and analysis, such as classification based on spectral response, is to be done. If this is the case, resampling may best be done after the classification process. Cubic convolution resampling goes even further to calculate a distance weighted average of a block of sixteen pixels from the original image which surround the new output pixel location. As with bilinear interpolation, this method results in completely new pixel values. However, these two methods both produce images which have a much sharper appearance and avoid the blocky appearance of the nearest neighbour method.
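A minimal sketch of nearest-neighbour resampling, assuming an inverse mapping from each output (corrected) pixel location back to input image coordinates; the inverse transform used in the example is an arbitrary scale-and-shift.

    import numpy as np

    def resample_nearest(src, inverse_map, out_shape):
        """Nearest-neighbour resampling: for each output pixel, look up the nearest
        source pixel via the supplied inverse mapping and copy its value unchanged."""
        rows, cols = np.indices(out_shape)
        src_r, src_c = inverse_map(rows, cols)                 # fractional source coordinates
        src_r = np.clip(np.rint(src_r).astype(int), 0, src.shape[0] - 1)
        src_c = np.clip(np.rint(src_c).astype(int), 0, src.shape[1] - 1)
        return src[src_r, src_c]

    # Example: resample a 10x10 test image through a slight scale-and-shift mapping
    src = np.arange(100).reshape(10, 10)
    out = resample_nearest(src, lambda r, c: (0.9 * r + 0.5, 0.9 * c + 0.5), out_shape=(10, 10))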

4.5 Image Enhancement

Enhancements are used to make visual interpretation and understanding of imagery easier. The advantage of digital imagery is that it allows us to manipulate the digital pixel values in an image. Although radiometric corrections for illumination, atmospheric influences, and sensor characteristics may be done prior to distribution of data to the user, the image may still not be optimized for visual interpretation. Remote sensing devices, particularly those operated from satellite platforms, must be designed to cope with levels of target/background energy which are typical of all conditions likely to be encountered in routine use. With large variations in spectral response from a diverse range of targets (e.g. forest, deserts, snowfields, water, etc.), no generic radiometric correction could optimally account for and display the optimum brightness range and contrast for all targets. Thus, for each application and each image, a custom adjustment of the range and distribution of brightness values is usually necessary.

In raw imagery, the useful data often populates only a small portion of the available range of digital values (commonly 8 bits or 256 levels). Contrast enhancement involves changing the original values so that more of the available range is used, thereby increasing the contrast between targets and their backgrounds. The key to understanding contrast enhancements is to understand the concept of an image histogram. A histogram is a graphical representation of the brightness values that comprise an image. The brightness values (i.e. 0-255) are displayed along the x-axis of the graph. The frequency of occurrence of each of these values in the image is shown on the y-axis.

By manipulating the range of digital values in an image, graphically represented by its histogram, we can apply various enhancements to the data. There are many different techniques and methods of enhancing contrast and detail in an image; we will cover only a few common ones here. The simplest type of enhancement is a linear contrast stretch. This involves identifying lower and upper bounds from the histogram (usually the minimum and maximum brightness values in the image) and applying a transformation to stretch this range to fill the full range. In our example, the minimum value (occupied by actual data) in the histogram is 84 and the maximum value is 153. These 70 levels occupy less than one-third of the full 256 levels available. A linear stretch uniformly expands this small range to cover the full range of values from 0 to 255. This enhances the contrast in the image with light toned areas appearing lighter and dark areas appearing darker, making visual interpretation much easier. This graphic illustrates the increase in contrast in an image before (left) and after (right) a linear contrast stretch.
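A minimal NumPy sketch of the linear contrast stretch just described; by default it uses the band's own minimum and maximum (84 and 153 in the text's example) as the stretch limits.

    import numpy as np

    def linear_stretch(band, low=None, high=None):
        """Linearly stretch digital numbers from [low, high] to the full 0-255
        display range; values outside the limits are clipped."""
        band = band.astype(float)
        low = band.min() if low is None else low
        high = band.max() if high is None else high
        stretched = (band - low) / (high - low) * 255.0
        return np.clip(stretched, 0, 255).astype(np.uint8)

    # Example: stretch an 8-bit band whose useful data lie between 84 and 153
    band = np.random.randint(84, 154, size=(4, 4))
    display = linear_stretch(band)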

 

A uniform distribution of the input range of values across the full range may not always be an appropriate enhancement, particularly if the input range is not uniformly distributed. In this case, a histogram-equalized stretch may be better. This stretch assigns more display values (range) to the frequently occurring portions of the histogram. In this way, the detail in these areas will be better enhanced relative to those areas of the original histogram where values occur less frequently. In other cases, it may be desirable to enhance the contrast in only a specific portion of the histogram. For example, suppose we have an image of the mouth of a river, and the water portions of the image occupy the digital values from 40 to 76 out of the entire image histogram. If we wished to enhance the detail in the water, perhaps to see variations in sediment load, we could stretch only that small portion of the histogram represented by the water (40 to 76) to the full grey level range (0 to 255). All pixels below or above these values would be assigned to 0 and 255, respectively, and the detail in these areas would be lost. However, the detail in the water would be greatly enhanced.
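A minimal sketch of a histogram-equalized stretch for an 8-bit band, using a lookup table built from the cumulative histogram; this is one common formulation, not the only one.

    import numpy as np

    def histogram_equalize(band, levels=256):
        """Histogram-equalization stretch for an 8-bit band: display values are
        allocated in proportion to how frequently each digital number occurs."""
        hist, _ = np.histogram(band, bins=levels, range=(0, levels))
        cdf = hist.cumsum().astype(float)
        cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())     # normalise to 0..1
        lut = np.rint(cdf * (levels - 1)).astype(np.uint8)    # lookup table: DN -> display value
        return lut[band]

    # Example: equalize a synthetic 8-bit band
    band = np.random.randint(40, 77, size=(8, 8), dtype=np.uint8)
    equalized = histogram_equalize(band)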

Spatial filtering encompasses another set of digital processing functions which are used to enhance the appearance of an image. Spatial filters are designed to highlight or suppress specific features in an image based on their spatial frequency. Spatial frequency is related to the concept of image texture, which we discussed in section 4.2. It refers to the frequency of the variations in tone that appear in an image. "Rough" textured areas of an image, where the changes in tone are abrupt over a small area, have high spatial frequencies, while "smooth" areas with little variation in tone over several pixels, have low spatial frequencies. A common filtering procedure involves moving a 'window' of a few pixels in dimension (e.g. 3x3, 5x5, etc.) over each pixel in the image, applying a mathematical calculation using the pixel values under that window, and replacing the central pixel with the new value. The window is moved along in both the row and column dimensions one pixel at a time and the calculation is repeated until the entire image has been filtered and a "new" image has been generated. By varying the calculation performed and the weightings of the individual pixels in the filter window, filters can be designed to enhance or suppress different types of features.

A low-pass filter is designed to emphasize larger, homogeneous areas of similar tone and reduce the smaller detail in an image. Thus, low-pass filters generally serve to smooth the appearance of an image. Average and median filters, often used for radar imagery (and described in Chapter 3), are examples of low-pass filters. High-pass filters do the opposite and serve to sharpen the appearance of fine detail in an image. One implementation of a high-pass filter first applies a low-pass filter to an image and then subtracts the result from the original, leaving behind only the high spatial frequency information. Directional, or edge detection filters are designed to highlight linear features, such as roads or field boundaries. These filters can also be designed to enhance features which are oriented in specific directions. These filters are useful in applications such as geology, for the detection of linear geologic structures.
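A minimal sketch of the moving-window idea: a simple 3x3 mean (low-pass) filter written explicitly with NumPy, plus a high-pass result obtained by subtracting the low-pass image from the original, as described above. Edge pixels are left untouched here for brevity.

    import numpy as np

    def moving_window_mean(band, size=3):
        """Low-pass (mean) filter: slide a size x size window across the image and
        replace each central pixel with the window average; edges are left as-is."""
        pad = size // 2
        band = band.astype(float)
        out = band.copy()
        for r in range(pad, band.shape[0] - pad):
            for c in range(pad, band.shape[1] - pad):
                out[r, c] = band[r - pad:r + pad + 1, c - pad:c + pad + 1].mean()
        return out

    def high_pass(band, size=3):
        """High-pass filter formed by subtracting the low-pass result from the
        original image, leaving the high spatial frequency detail."""
        return band.astype(float) - moving_window_mean(band, size)

    # Example on a small synthetic band
    band = np.random.randint(0, 256, size=(6, 6))
    smoothed = moving_window_mean(band)
    edges = high_pass(band)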

4.6 Image Transformations

Image transformations typically involve the manipulation of multiple bands of data, whether from a single multispectral image or from two or more images of the same area acquired at different times (i.e. multitemporal image data). Either way, image transformations generate "new" images from two or more sources which highlight particular features or properties of interest, better than the original input images.

Basic image transformations apply simple arithmetic operations to the image data. Image subtraction is often used to identify changes that have occurred between images collected on different dates. Typically, two images which have been geometrically registered (see section 4.4) are used, with the pixel (brightness) values in one image being subtracted from the pixel values in the other. Scaling the resultant image by adding a constant (127 in this case) to the output values will result in a suitable 'difference' image. In such an image, areas where there has been little or no change between the original images will have resultant brightness values around 127 (mid-grey tones), while those areas where significant change has occurred will have values higher or lower than 127, brighter or darker depending on the 'direction' of change in reflectance between the two images. This type of image transform can be useful for mapping changes in urban development around cities and for identifying areas where deforestation is occurring.
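A minimal NumPy sketch of this differencing, assuming two co-registered 8-bit bands of the same area from different dates.

    import numpy as np

    def change_image(date1, date2, offset=127):
        """Difference image for change detection: subtract the registered images
        and add a constant so that 'no change' sits near mid-grey (127)."""
        diff = date2.astype(int) - date1.astype(int) + offset
        return np.clip(diff, 0, 255).astype(np.uint8)

    # Example with two tiny co-registered bands
    before = np.array([[100, 100], [150, 150]], dtype=np.uint8)
    after = np.array([[102, 60], [150, 210]], dtype=np.uint8)
    diff = change_image(before, after)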

Image division or spectral ratioing is one of the most common transforms applied to image data. Image ratioing serves to highlight subtle variations in the spectral responses of various surface covers. By ratioing the data from two different spectral bands, the resultant image enhances variations in the slopes of the spectral reflectance curves between the two different spectral ranges that may otherwise be masked by the pixel brightness variations in each of the bands. The following example illustrates the concept of spectral ratioing. Healthy vegetation reflects strongly in the near-infrared portion of the spectrum while absorbing strongly in the visible red. Other surface types, such as soil and water, show near equal reflectances in both the near-infrared and red portions. Thus, a ratio image of Landsat MSS Band 7 (Near-Infrared, 0.8 to 1.1 μm) divided by Band 5 (Red, 0.6 to 0.7 μm) would result in ratios much greater than 1.0 for vegetation, and ratios around 1.0 for soil and water. Thus the discrimination of vegetation from other surface cover types is significantly enhanced. Also, we may be better able to identify areas of unhealthy or stressed vegetation, which show low near-infrared reflectance, as the ratios would be lower than for healthy green vegetation.

Another benefit of spectral ratioing is that, because we are looking at relative values (i.e. ratios) instead of absolute brightness values, variations in scene illumination as a result of topographic effects are reduced. Thus, although the absolute reflectances for forest covered slopes may vary depending on their orientation relative to the sun's illumination, the ratio of their reflectances between the two bands should always be very similar. More complex ratios involving the sums of and differences between spectral bands for various sensors, have been developed for monitoring vegetation conditions. One widely used image transform is the Normalized Difference Vegetation Index (NDVI) which has been used to monitor vegetation conditions on continental and global scales using the Advanced Very High Resolution Radiometer (AVHRR) sensor onboard the NOAA series of satellites (see Chapter 2, section 2.11).
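A minimal sketch of such a band ratio, the NDVI, computed from co-registered near-infrared and red bands; the small constant in the denominator is simply a guard against division by zero.

    import numpy as np

    def ndvi(nir, red):
        """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red).
        Values near +1 suggest dense healthy vegetation; soil and water fall near or below 0."""
        nir = nir.astype(float)
        red = red.astype(float)
        return (nir - red) / np.maximum(nir + red, 1e-6)

    # Example with small co-registered NIR and red bands
    nir = np.array([[180, 40], [200, 30]])
    red = np.array([[60, 42], [55, 28]])
    index = ndvi(nir, red)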

Different bands of multispectral data are often highly correlated and thus contain similar information. For example, Landsat MSS Bands 4 and 5 (green and red, respectively) typically have similar visual appearances since reflectances for the same surface cover types are almost equal. Image transformation techniques based on complex processing of the statistical characteristics of multi-band data sets can be used to reduce this data redundancy and correlation between bands. One such transform is called principal components analysis. The objective of this transformation is to reduce the dimensionality (i.e. the number of bands) in the data, and compress as much of the information in the original bands into fewer bands. The "new" bands that result from this statistical procedure are called components. This process attempts to maximize (statistically) the amount of information (or variance) from the original data into the least number of new components. As an example of the use of principal components analysis, a seven band Thematic Mapper (TM) data set may be transformed such that the first three principal components contain over 90 percent of the information in the original seven bands. Interpretation and analysis of these three bands of data, combining them either visually or digitally, is simpler and more efficient than trying to use all of the original seven bands. Principal components analysis, and other complex transforms, can be used either as an enhancement technique to improve visual interpretation or to reduce the number of bands to be used as input to digital classification procedures, discussed in the next section.
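A minimal sketch of a principal components transform for a multi-band image stored as a (bands, rows, cols) array, using an eigen-decomposition of the band covariance matrix; production software typically adds scaling and noise-handling steps beyond this.

    import numpy as np

    def principal_components(image):
        """Principal components transform: rotate the band space so that most of
        the variance is concentrated in the first few components."""
        bands, rows, cols = image.shape
        pixels = image.reshape(bands, -1).astype(float)
        pixels -= pixels.mean(axis=1, keepdims=True)        # centre each band
        cov = np.cov(pixels)                                 # bands x bands covariance matrix
        eigvals, eigvecs = np.linalg.eigh(cov)
        order = np.argsort(eigvals)[::-1]                    # largest variance first
        components = eigvecs[:, order].T @ pixels
        explained = eigvals[order] / eigvals.sum()           # fraction of variance per component
        return components.reshape(bands, rows, cols), explained

    # Example on a random 4-band image
    image = np.random.rand(4, 32, 32)
    pcs, explained = principal_components(image)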

4.7 Image Classification and Analysis

A human analyst attempting to classify features in an image uses the elements of visual interpretation (discussed in section 4.2) to identify homogeneous groups of pixels which represent various features or land cover classes of interest. Digital image classification uses the spectral information represented by the digital numbers in one or more spectral bands, and attempts to classify each individual pixel based on this spectral information.

This type of classification is termed spectral pattern recognition. In either case, the objective is to assign all pixels in the image to particular classes or themes (e.g. water, coniferous forest, deciduous forest, corn, wheat, etc.). The resulting classified image is comprised of a mosaic of pixels, each of which belong to a particular theme, and is essentially a thematic "map" of the original image.

When talking about classes, we need to distinguish between information classes and spectral classes. Information classes are those categories of interest that the analyst is actually trying to identify in the imagery, such as different kinds of crops, different forest types or tree species, different geologic units or rock types, etc. Spectral classes are groups of pixels that are uniform (or near-similar) with respect to their brightness values in the different spectral channels of the data. The objective is to match the spectral classes in the data to the information classes of interest. Rarely is there a simple one-to-one match between these two types of classes. Rather, unique spectral classes may appear which do not necessarily correspond to any information class of particular use or interest to the analyst. Alternatively, a broad information class (e.g. forest) may contain a number of spectral sub-classes with unique spectral variations. Using the forest example, spectral sub-classes may be due to variations in age, species, and density, or perhaps as a result of shadowing or variations in scene illumination. It is the analyst's job to decide on the utility of the different spectral classes and their correspondence to useful information classes.

Common classification procedures can be broken down into two broad subdivisions based on the method used: supervised classification and unsupervised classification. In a supervised classification, the analyst identifies in the imagery homogeneous representative samples of the different surface cover types (information classes) of interest. These samples are referred to as training areas. The selection of appropriate training areas is based on the analyst's familiarity with the geographical area and their knowledge of the actual surface cover types present in the image. Thus, the analyst is "supervising" the categorization of a set of specific classes. The numerical information in all spectral bands for the pixels comprising these areas are used to "train" the computer to recognize spectrally similar areas for each class. The computer uses a special program or algorithm (of which there are several variations), to determine the numerical "signatures" for each training class. Once the computer has determined the signatures for each class, each pixel in the image is compared to these signatures and labeled as the class it most closely "resembles" digitally. Thus, in a supervised classification we are first identifying the information classes which are then used to determine the spectral classes which represent them.
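The text notes that several algorithms exist for matching pixels to training signatures; the following is a sketch of one of the simplest, a minimum-distance-to-means classifier, with hypothetical class names and a (bands, rows, cols) image.

    import numpy as np

    def minimum_distance_classify(image, training):
        """Supervised classification by minimum distance to class means.
        `training` maps class name -> array of training pixels, shape (n_samples, bands)."""
        bands, rows, cols = image.shape
        pixels = image.reshape(bands, -1).T.astype(float)          # (n_pixels, bands)
        names = list(training)
        means = np.array([np.asarray(training[n], float).mean(axis=0) for n in names])
        # Euclidean distance from every pixel to every class mean "signature"
        dist = np.linalg.norm(pixels[:, None, :] - means[None, :, :], axis=2)
        labels = dist.argmin(axis=1).reshape(rows, cols)
        return labels, names

    # Hypothetical two-band image and training samples for two classes
    image = np.random.rand(2, 20, 20)
    training = {"water": np.array([[0.1, 0.2], [0.15, 0.22]]),
                "forest": np.array([[0.6, 0.7], [0.55, 0.75]])}
    labels, names = minimum_distance_classify(image, training)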

Unsupervised classification in essence reverses the supervised classification process. Spectral classes are grouped first, based solely on the numerical information in the data, and are then matched by the analyst to information classes (if possible). Programs, called clustering algorithms, are used to determine the natural (statistical) groupings or structures in the data. Usually, the analyst specifies how many groups or clusters are to be looked for in the data. In addition to specifying the desired number of classes, the analyst may also specify parameters related to the separation distance among the clusters and the variation within each cluster. The final result of this iterative clustering process may result in some clusters that the analyst will want to subsequently combine, or clusters that should be broken down further, each of these requiring a further application of the clustering algorithm. Thus, unsupervised classification is not completely without human intervention. However, it does not start with a pre-determined set of classes as in a supervised classification.
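A minimal sketch of such a clustering algorithm: a basic k-means applied to pixel spectra from a (bands, rows, cols) image. The number of clusters, iteration count, and random seed are assumed parameters chosen by the analyst; operational packages (e.g. ISODATA variants) add cluster splitting and merging rules on top of this.

    import numpy as np

    def kmeans_cluster(image, n_clusters=5, iterations=10, seed=0):
        """Unsupervised classification sketch: basic k-means clustering of pixel
        spectra. The analyst chooses the number of clusters and later decides
        which information class (if any) each spectral cluster corresponds to."""
        bands = image.shape[0]
        pixels = image.reshape(bands, -1).T.astype(float)        # (n_pixels, bands)
        rng = np.random.default_rng(seed)
        centres = pixels[rng.choice(len(pixels), n_clusters, replace=False)].copy()
        for _ in range(iterations):
            dist = np.linalg.norm(pixels[:, None, :] - centres[None, :, :], axis=2)
            labels = dist.argmin(axis=1)
            for k in range(n_clusters):
                if np.any(labels == k):
                    centres[k] = pixels[labels == k].mean(axis=0)
        return labels.reshape(image.shape[1:]), centres

    # Example: cluster a random 3-band image into 4 spectral classes
    image = np.random.rand(3, 30, 30)
    labels, centres = kmeans_cluster(image, n_clusters=4)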