
Environmental Vulnerability Indicators for Environmental Planning and Decision-Making: Guidelines and Applications

FERDINANDO VILLA*
Institute for Ecological Economics
University of Maryland
Box 38, Solomons, Maryland 20688, USA

HELENA MCLEOD1

South Pacific Applied Geoscience Commission (SOPAC)
Suva, Fiji

ABSTRACT / Environmental decision-making and policy-making at all levels necessarily rely on synthetic, approximate quantification of environmental properties such as vulnerability, conservation status, and ability to recover after perturbation. Knowledge of such properties is essential to informed decision-making, but their definition is controversial and their precise characterization requires investments in research, modeling, and data collection that are only possible in the most developed countries. Environmental agencies and governments worldwide have increasingly requested numerical quantification or semiquantitative ranking of such attributes at the ecosystem, landscape, and country level. We do not have a theory to guide their calculation, in general or specific contexts, particularly with the amount of resources usually available in such cases. As a result, these measures are often calculated with little scientific justification and high subjectivity, and such doubtful approximations are used for critical decision-making. This problem applies particularly to countries with weak economies, such as small island states, where the most precious environmental resources are often concentrated.

This paper discusses frameworks for a "least disappointing," approximate quantification of environmental vulnerability. After a review of recent research and attempts to quantify environmental vulnerability, we discuss models and theoretical frameworks for obtaining an approximate, standardizable vulnerability indicator of minimal subjectivity and maximum generality. We also discuss issues of empirical testing and comparability between indicators developed for different environments. To assess the state of the art, we describe an independent ongoing project developed in the South Pacific area and aimed at the comparative evaluation of the vulnerability of arbitrary countries.

The issue of ecosystem fragility or vulnerability to exogenous and endogenous stress factors has been the subject of long and intense debate. Along with the related discussion around the determinants of community and ecosystem stability, this debate has deeply influenced the development of modern ecology and produced enormous insight into ecosystem structure and function. Nevertheless, the debate has not led to agreement on the definition of these properties and has not produced general and practical conceptual models to calculate corresponding indicators (e.g., De Leo and Levin, 1997). Instead, a synthesis of the current knowledge could lead us to conclude that, due to the complexity, nonlinearity, and multiplicity of temporal and spatial scales typical of natural systems, a sufficiently general conceptual model of this kind will probably never be developed.

Nevertheless, environmental decision-making and policy-making are based on the quantification of environmental properties such as vulnerability, status of conservation, and ability to recover. In recent years, explicit demand has been put on the scientific community to produce such indicators to direct conservation investments. The need for answers in short time frames has led scientists to attempt surrogate measures calculated on the basis of available or easily measurable indicators, which have been and are being developed to serve as a basis for critical decision-making, often involving some of the most important ecosystems on Earth.

KEY WORDS: Environmental vulnerability; Indicators; Ecosystem health; Small island states

1 Present address: Department for International Development (DFID), 94 Victoria Street, London SW1E 5JL, UK.

* Author to whom correspondence should be addressed.

DOI: 10.1007/s00267-001-0030-2

Environmental Management Vol. 29, No. 3, pp. 335–348. © 2002 Springer-Verlag New York Inc.

Ecological risk assessment initiatives are being funded at various scales by environmental agencies in many developed countries. Most notably, the United States Environmental Protection Agency (EPA) has started a comprehensive risk assessment project that has produced important results (US EPA 1998, Williams and Kaputska 2000). Most attempts to quantify environmental vulnerability to date refer to specific systems and particular stressors or classes of stressors. Examples include vulnerability to sea-level rise and climate change (e.g., IPCC 1992, Yamada and others 1995, Sem and others 1996, Pernetta 1990), oil spills in tidal zones (Weslawski and others 1997), groundwater contamination by pesticides at the regional scale (Loague 1994), and sea-level rise at the national scale (Hughes and Brundrit 1992). Ehrlich and Ehrlich (1991) deal with anthropogenic risks to ecosystems, estimating the impact of human intrusion within a given ecosystem. These studies make no attempt to aggregate the scores into a synthetic indicator. Pantin (1997) developed a vulnerability indicator incorporating economic vulnerability to natural disasters. Environmental indicators such as these treat the human system rather than the ecological system as the responder.

System vulnerability and its measurement have long been a concern in economics, particularly in reference to small island developing states (SIDS) (Crowards 1999). To date, economic vulnerability indicators tend to use a small number of indicators and simple aggregation models. Methods of ranking have been developed using statistical regression techniques, as in Atkins and others (1998), or simple weighted averaging of country-level indicators to achieve a composite vulnerability score (Briguglio 1995, 1997, Wells 1996, 1997, Pantin 1997).

The US EPA is producing the most comprehensive risk assessment framework to date. EPA's program, as described in Linthurst and others (2000), involves four subprograms dedicated to: (1) ecological monitoring research, including development of indicators and monitoring design; (2) processes and modeling research, dedicated to producing the understanding of system dynamics necessary to assess responses to stressors; (3) risk assessment research, dedicated to producing methods, guidelines, and pilot studies to perform ecological assessments; and (4) risk management and restoration research, involving prevention of pollution, control technologies, remediation, and restoration. Most of the research promoted or stimulated by the EPA's effort has involved the regional scale, particularly watersheds. EPA's contributions have led to the publication of risk assessment guidelines that could provide practical standards (US EPA 1996, 1998).

All these studies are good starting points for identifying a general framework for a vulnerability indicator. However, some of them (notably the ones grounded in economics) employ frameworks that are too simple to deal with the complex interactive nature of ecosystems. Others (notably the ones based on EPA's risk assessment framework) involve huge research investments and a data resolution that is only possible for wealthy states and for specific small-scale systems. This article deals with a necessary compromise approach: producing the least disappointing quantification of ecosystem vulnerability with the resources available to most countries and in the time frames typical of environmental policy-making. This problem is too complex to be solved in general with simple indicator aggregation, and at the same time it cannot be approached by following guidelines such as the EPA's, due to the amount and cost of the research involved and the wide range of scales and situations that the nations' demand encompasses.

The number of recent international programs that have requested similar quantifications proves the interest in this topic. For example, the Nature 2000 project of the European Community (EEC 1992) requested a numerical scoring of the status of conservation, vulnerability, and ability to recover relative to a high number of ecologically relevant sites, identified by all member nations. How to reach such quantifications was never clearly explained. Nevertheless, many countries produced indicators that were used as an important component of a knowledge base on ecotypes and sites located across much of Europe. In 1994 the United Nations expressed the need for country-level environmental and economic vulnerability indicators in the Barbados Plan of Action (UN 1994). As a result, various small island nations started programs to produce such indicators, usually characterized by limited budgets and data availability, but dealing with some of the most precious ecosystems on Earth. More recently, the European Union Statistics Department has started producing a comprehensive listing of indicators of environmental pressures to apply in different European countries for comparison purposes (Eurostat 1998).

Governments and policy-making institutions tend to use estimates provided by local scientific communities with limited independent review. Such indicators often consist of linear aggregations of indicators, chosen by local experts or committees, then ranked and weighted according to ad hoc schemata. As discussed later, no rigorous experimental testing of vulnerability estimates is possible given our current state of knowledge of the structure and functions of the environment. Lack of good quality historical data, time constraints, and funding availability would probably challenge even a much more advanced ecological theory.

These considerations call for a theoretical framework ensuring at least the completeness of the information collected and that the inevitable error and subjectivity can only cause more conservative, rather than optimistic, estimates. The environmental responsibility associated with the application of such indicators is even greater when properties are compared across different systems in order to decide where to direct conservation effort. In this paper we attempt to indicate a path towards a measure that accepts the inevitable oversimplification, but can nevertheless serve a better-informed environmental decision-making process. We will refer to vulnerability in general, with reference to the whole system of factors that can affect a system, and try to develop a general indicator framework that can be applied to a wide range of situations and system conceptualizations. We will discuss aggregation methods that allow keeping the unavoidable error and approximation on the safe side. We will pay attention to the important issue of comparability between indicators developed for different environments and briefly outline our very limited options to validate such indicators. Finally, we will discuss where and how the unavoidable subjectivity can be encapsulated and how repeatability and comparability can be achieved, to the extent possible, through standardization of procedure.

Theoretical Frameworks for a Generalized Vulnerability Indicator

A theoretical framework capable of producing a general vulnerability indicator (VI) needs to include three components. The first is a model of vulnerability, identifying its components and their mutual dependencies in terms of properties that can be associated with indicators. The second is a model of the system, defining a way to decompose the target system that makes it practical to relate the view of the system to the definition of vulnerability and ensures that different systems interpreted according to a common system model are comparable. The third component is a mathematical model, used to aggregate the information defined by the system model into a hierarchically organized set of indicators, whose highest-level aggregation is the VI. In order for different VIs to be comparable across different environments, all three components must be compatible, i.e., adopt the same model of vulnerability, the same system model, and the same mathematical model. Each component should lend itself to being published as a set of guidelines for data collection and elaboration. The following three sections outline the aspects that generic guidelines should address in each component and identify practical decompositions and algorithms to guide the development of a vulnerability indicator.

The Vulnerability Model

Williams and Kaputska (2000), reporting the conclusions of a symposium dedicated specifically to ecosystem vulnerability, define the latter as ". . . the potential of an ecosystem to modulate its response to stressors over time and space, where that potential is determined by characteristics of an ecosystem that include many levels of organisation, such as a soil, a bioregion, a tissue, a species, an organism, a stream reach. It is an estimate of the inability of an ecosystem to tolerate stressors over time and space." It is clear from the generality of this definition that finding a decomposition of the concept of vulnerability that can effectively guide the development and measurement of indicators is not easy. It is productive to start with a brief analysis of the concepts of value, resilience, risk, and scale, which are central to most definitions.

Value. When planning conservation efforts, the notion of what is valuable in the environment is always central, whether explicitly defined or not. Value can come from functional knowledge of the role of a process in determining environmental dynamics, but also from the appeal of the element to decision-makers and stakeholders or from the popularity of a single threatened species that makes the entire environment valuable for the simple fact of sustaining it. It is important here to enforce a holistic vision, so that the set of indicators includes, but is not excessively biased by, elements that have a higher value in nonscientific terms.

Resilience. Holling (1973), in an attempt to overcome the limitation of the common equilibrium-centered definitions of resilience, defines it as the ability of a system to maintain its structure and pattern of behavior in the presence of stress. As mentioned, no easy and rigorous indicators of resilience can be described in general, but it is often possible to identify proxy indicators whose characterization depends on the definition of the system.

Risk. The US EPA (1998) defined risk as being dually composed of hazard and exposure. This distinction reflects a fundamental difference in treatment and measure between actual and potential pressures on the environment. To develop a simple but generic model of vulnerability that fits the purposes of this paper, we believe that more articulation is needed. In the following we describe a generalized reinterpretation of these components.

Scale. Even in semiquantitative studies it is risky and unwise to mix differently scaled information. Our understanding of hierarchically organized complex systems has greatly improved, and scale is now almost universally considered in ecological theory (Allen and Starr 1982, O'Neill and others 1986). Yet we do not know enough to develop theoretical models of how properties such as vulnerability vary across scales. For this reason it is wise to develop VIs that are specific to particular organisational levels and adapt the mathematical model (see below) to deal with different subindices calculated at different scales if this is necessary.

Figure 1 shows how the concepts described can be mapped onto a general model of vulnerability to use as a basis for the development of indicators. It is reasonable to assume the existence of two main components in environmental vulnerability. Intrinsic vulnerability is the resultant of the internal dynamics and structure of the undisturbed system. Extrinsic vulnerability incorporates the two components of risk, exposure and hazard, acting on it. Both components can be said to have potential and expressed subcomponents.

Intrinsic expressed subcomponent. This relates directly to the well-debated concepts of ecosystem health and ecosystem integrity. As discussed in the next section, these concepts, despite their appeal, are hard to quantify practically by means of indicators. A further decomposition of this component discloses conservation status and the intrinsic criticality of the dynamic processes going on in the system as subcomponents. As an example, indicators for this latter property could include the genetic resilience of a population as measured by its size and genetic diversity. Both components are difficult to quantify exactly, but they can be ranked by experts using a ranking scheme that they agree upon. Villa (1995) discusses a framework for evaluation of similar properties through organized survey forms to be used by local experts.

Intrinsic potential subcomponent. This relates to the ability of the undisturbed system to recover after stress has been applied. Quantification of this aspect is probably the most problematic and should refer to published perturbation studies, which are likely to be highly specific.

Extrinsic expressed subcomponent. This results from the system's measurable exposure to stress factors. The resultant of these factors is obviously not just the sum of the effects of each, since each stress will most likely make the system more vulnerable to another. Qualitative definitions of this aspect are reasonably easy, while quantification can be more difficult. For the purposes of developing an indicator, semiquantitative rankings of exposure to stress are easier to produce: the published EPA guidelines (US EPA 1986, 1996, 1998) provide useful indications for a variety of stressors and systems.

Extrinsic potential subcomponent. This results from hazard, which for our purpose we can define as the likelihood that stress not currently applied to the system will be applied in the future, or as the likelihood of change in the level of stress currently applied. Quantification in approximate rankings would need probability estimates for stress situations occurring and appropriately chosen thresholds. Probabilities can be assigned on the basis of historical indicators for factors that have a recurring nature, or careful extrapolation of trends such as development and pollution.

All of these properties, whether potential or expressed, are more easily and justifiably evaluated relative to the moment in time when the estimates are made, whether or not they use historical information. They should be regarded as indicators of a current tension of the system towards a more degraded state, not as a state itself, and should be interpreted as rates rather than measures of system state variables. This should be made clear in the guidelines to avoid misinterpretation of the indicators.

Figure 1. A possible operational conceptual model of environmental vulnerability.

It is also not necessary to consider all the components above in an indicator of vulnerability. As an example, we might want to characterize only the expressed components for a given system, based on considerations of precision or on the difficulty of supporting the theoretical arguments upon which other components should be measured. This would yield legitimate VIs as long as the vulnerability definition is explicit and we only compare VIs based on the same model.

The System Model

The system model defines a way to map arbitrary system characteristics onto the identified components of vulnerability. A system model defines a hierarchical decomposition of the notion of a system into which particular system definitions (e.g., based on structure and/or function or on ecosystem services provided) can be fitted in order to allow consistent calculation and aggregation of indicators. Many existing studies have devoted most of their energy to developing appropriate system models. An explicit system model ensures completeness, coherence, generality, and scale compatibility. A specific system conceptualization is applied to the model to define notions and specifics of "responders," "stressors," and "indicators."

A suitable system model should: (1) be able to accommodate different system concepts under a common vision; (2) provide a generic layout and a coherent system of constraints to develop guidelines to serve as a base for a VI; and (3) be suitable to the calculation of a VI according to a model of vulnerability like the one outlined above. To reach these goals, it is productive to use a hierarchical conceptualization, where the highest-level decomposition of the system identifies categories corresponding to high-level interacting compartments. These need to be general enough to be applicable to widely different systems. As an example, a functional system conceptualization could define these categories as secondary production, primary production, decomposition, and so on. In the definition of such categories, it is critical that generality is maintained, redundancy is minimized, and the set of categories defines a complete general system rather than just particular aspects of it.

These categories can be further subdivided into responders, which are the entities subject to vulnerability. In the definition of responders, redundancy is a desirable property, as it should ensure that at least one responder is defined for any category no matter which system it is applied to. When the decomposition in categories is defined, responders can be added to accommodate specific environments with no loss of generality. In defining responders within a category, relative independence is also important and should be kept in mind as a goal. The same responder can appear in different categories without affecting the generality of the model.

For each responder, indicators should be defined to quantify properties of responders that are relevant to the calculation of vulnerability. Redundancy is desirable at this level also, since some indicators may be applicable to the measurement of a property in a certain environmental context and not in another. Indicators should be characterized according to their role in quantifying the components of vulnerability. In other words, we should know whether an indicator applies to any of the system properties causing each of the components of vulnerability outlined in a vulnerability model, by expressing, e.g., how much a property is inherently at risk of losing its role in the maintenance of the responder's function or how well it is performing its role.

The advantage of such a hierarchical breakdown is that the high-level divisions provide a common ground that ensures the comparability of systems, while the more specific, lower-level responders provide the grounds for application to particular environmental realities. Using a hierarchical decomposition, we make sure that different systems, which are not comparable at the responder level, can be compared when the response indicators are aggregated at the category level, since the categories are chosen with sufficient generality to make this possible. No matter which particular system view is adopted, the decomposition must have the properties of completeness and generality. By completeness we mean that an ecological system must be completely characterized by describing elements in each category. By generality we mean that the decomposition should be general enough to make any local specificity disappear, i.e., any system can be defined according to the decomposition, having at least one element of description for each category. The mathematical model, as discussed below, can recognize the different roles of redundancy and use the appropriate aggregation algorithms at different levels.
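To make the decomposition concrete, the following sketch (in Python) shows one possible way to encode the category/responder/indicator hierarchy so that the aggregation steps discussed later can traverse it. All category, responder, and indicator names are hypothetical placeholders invented for this example; they are not part of any published system model.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Indicator:
    """A measurable property of a responder, tagged with the
    vulnerability component it informs (e.g., intrinsic-expressed)."""
    name: str
    component: str          # e.g. "intrinsic-expressed", "extrinsic-potential"
    raw_value: float = 0.0  # value before standardization

@dataclass
class Responder:
    """An entity subject to vulnerability (population, habitat, process)."""
    name: str
    indicators: List[Indicator] = field(default_factory=list)

@dataclass
class Category:
    """A high-level compartment general enough to apply to any system."""
    name: str
    responders: List[Responder] = field(default_factory=list)

# A toy decomposition of a coastal system (hypothetical names throughout).
system_model = [
    Category("primary production", [
        Responder("seagrass beds", [
            Indicator("areal extent trend", "intrinsic-expressed"),
        ]),
    ]),
    Category("secondary production", [
        Responder("reef fish community", [
            Indicator("catch per unit effort", "extrinsic-expressed"),
            Indicator("species richness", "intrinsic-expressed"),
        ]),
    ]),
]
```

Only the categories need to be shared across applications; responders and indicators can be added freely at the lower levels without breaking comparability at the category level.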

Very sophisticated system models are sometimes built as a phase of integrated risk assessment. The papers by Cormier and others (2000) and Gordon and Majudmer (2000) provide examples of physical and probabilistic system models for assessing vulnerability in specific environments. When the system is well known, extensive, spatially explicit simulation models such as that presented in Voinov and others (1999) can be used to illuminate the details of how it responds to change. Even in low-resource studies, adopting a sufficiently general system representation is key to the reuse and comparability of indicators calculated for different systems. Nevertheless, many existing studies aggregate available information with little consideration of the fact that the set of indicators should give a complete picture of a system that functions as an interconnected whole (e.g., Eurostat 1998). Below, we identify some possible system conceptualizations that have inspired or can inspire the choice of indicators.

Ecological structure and function. Perhaps the most obvious system conceptualization starts with an inventory of ecological structural attributes at different levels (such as species, communities, ecotypes) and key processes of generally accepted relevance (such as primary and secondary production, decomposition, or nutrient cycling). Stress factors acting on such responders could be categorized, and factors such as structural redundancy in each component or rates of processes that counter loss of function could be used as indicators of resilience. A hybrid structural/functional conceptualization is preferable to a simply structural one for a number of reasons, most notably because a purely structural vision does not explicitly include time. This reduces its relevance, since a measure of vulnerability should be sensitive to the time scale chosen and to the dynamics of natural change. Functional characterizations relate directly to the provision of ecosystem services that humans value (see below). The purely structural conceptualization is nevertheless relevant, since most attempts to estimate the value and the vulnerability of natural systems have concentrated on system components (particularly key species) rather than processes.

Ecosystem health and integrity. The concepts of ecosystem health and integrity, at the center of the major conclusions of the Earth Summit (Rio Declaration on Environment and Development, 1992), combine structural and functional aspects of ecosystems with human needs and expectations in a higher-level synthesis. Ecosystem integrity can be defined as "the maintenance of the community structure and function characteristic of a particular locale or deemed satisfactory to society" (Cairns 1977). It can be said to reflect the capability of the system to support services that humans value (De Leo and Levin 1997). The decomposition of the ecosystem health concept operated by Costanza and others (1992) can also serve as a basis for the definition of indicator categories. The health of an environmental system is defined as composed of three properties: (1) vigor, a measure of activity such as metabolism or primary productivity; (2) organization, referring to the structure of interaction among system components; and (3) resilience, as defined above. The definition of such properties is closely related to vulnerability. It must be noted, however, that many of these properties have no easily measured corresponding indicators. Furthermore, the debate on such definitions is intense, and the meaning of concepts such as health and sustainability is far from stabilized (Gatto 1995).

Ecosystem services. Costanza and others (1997) have developed a categorization of the services provided by the world's ecosystems to human society as a whole, with the purpose of identifying an approximate global monetary value of these services. Their categorization can be a base for a system conceptualization on which to base a choice of indicators, one that concentrates explicitly on the perception and exploitation of services provided by the Earth's natural processes in sustaining life across the globe. The advantages of adopting such a framework, which is based on a broad functional system characterization, lie in its easily understandable conceptual appeal and in the explicit consideration of human-specific values such as recreation and culture, which have an obvious importance in decision- and policy-making.

The system model dictates the list of indicators used. As mentioned, adopting an explicit system model helps to ensure that all components of a generic system are characterized, a necessary condition for enabling comparison between synthetic indicators calculated for different environments. Using at least a two-level system model ensures that all categories of indicators defined are represented; within each category, only the indicators that apply to the specific reality under study can be chosen without losing generality. In the next section we will discuss how a mathematical model can be used with a hierarchical system definition to maintain compatibility between VIs calculated for different systems and to account for the environment as a system of interacting components.

The Mathematical Model

The mathematical model is a strategy to standardize and aggregate indicators through the levels identified in the system model, yielding estimates of system vulnerability as defined in the vulnerability model. Different aggregation methods reflect different assumptions on the nature of the aggregated entities and their mutual interactions. In general, linear aggregation is appropriate for noninteracting entities, and multiplicative aggregation methods are best when it is known that the aggregated entities influence each other as parts of an interactive system. The following issues are relevant to the definition of a suitable mathematical model.

Standardization. Indicators express properties of responders that relate to their intrinsic or extrinsic vulnerability. The process of standardization must remap, when necessary, the measurable (raw) values of the indicators onto values that can be correlated with the chosen meaning of vulnerability for each property of each responder. It is common to employ an integer ranking scale with enough levels to be expressive of a fairly wide range of values, but not so many that it becomes impossible to attach a verbal description to each level, similar to what is done in commonly adopted earthquake rating scales. The explanatory power of such a description is particularly important for purposes of raising awareness and enabling the use of indicators by policy-makers.

Response scaling. In order to be aggregated successfully, all indicators should have a similar response over their allowed range. As an example, a linear response implies that a doubling of the indicator value corresponds to approximately twice the effect it estimates. For environmental properties it is very common to encounter nonlinear responses or threshold effects; indicators exhibiting such behaviors must be made compatible by adopting a common response model (linear is best for easier interpretation by nontechnical audiences) and defining appropriate transformations to map the raw indicator values onto the chosen scale and the chosen response.
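As a minimal illustration of such a standardization step, the sketch below maps two hypothetical raw indicators onto a 1-7 integer scale using a linear and a threshold response class. The scale, the class boundaries, and the indicator names are assumptions of the example, not values taken from any published guideline.

```python
def scale_linear(raw, raw_min, raw_max, k=7):
    """Map a raw value linearly onto the 1..k integer scale."""
    frac = (raw - raw_min) / (raw_max - raw_min)
    frac = min(max(frac, 0.0), 1.0)          # clamp out-of-range values
    return 1 + round(frac * (k - 1))

def scale_threshold(raw, threshold, k=7):
    """Threshold response: negligible below the threshold, maximal above it."""
    return 1 if raw < threshold else k

# Hypothetical raw indicators for one responder.
protected_area_pct = 12.0      # percent of land protected (linear class)
population_density = 450.0     # people per km^2 (threshold class)

scores = [
    scale_linear(100 - protected_area_pct, 0, 100),   # less protection -> higher score
    scale_threshold(population_density, threshold=300),
]
print(scores)   # [6, 7]
```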

Weighting. Implicit equal weighting is applied every time no weighting scheme is employed, so it is important that a weighting scheme be built into the mathematical model to force consideration of the issue and enable incorporation of priorities. Care must be exerted in doing so, as different weighting schemes at certain organizational levels can make different VIs incompatible for purposes of comparison.

Two approaches to weighting are commonly used: (1) the direct attribution of weights, and (2) their calculation on the basis of a pairwise comparison matrix. In the first case, the experimenter specifies a numeric weight for each entity, describing its relative importance in the context of the system, or assigns each entity to a predefined importance category. In the second case, the weights are mathematically extracted from a matrix specifying relative importances for each possible combination of two entities. The latter approach, described by Saaty (1980) as a phase of the analytic hierarchy process, eliminates the difficulty of comparing a potentially large number of entities at the same time. It allows exact relative weighting and provides a measure of coherence in the priority structure that constitutes important feedback in a group decision process. Villa and others (1996) describe the application of this technique jointly with multivariate statistical analysis to analyze and resolve conflicts of opinion among different decision-makers. Weights can be assigned individually by experts in a team and processed statistically to reflect overall tendencies and identify controversial issues.
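The pairwise-comparison approach can be sketched as follows, assuming NumPy is available; the 3 x 3 comparison matrix is invented for illustration. The weights are the normalized principal eigenvector of the matrix, and the deviation of the principal eigenvalue from the matrix order gives Saaty's consistency index, the coherence measure mentioned above.

```python
import numpy as np

# Hypothetical pairwise comparison matrix for three indicators A, B, C:
# entry [i, j] states how much more important indicator i is than indicator j.
M = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

eigenvalues, eigenvectors = np.linalg.eig(M)
principal = np.argmax(eigenvalues.real)

# Weights are the normalized principal eigenvector.
weights = np.abs(eigenvectors[:, principal].real)
weights /= weights.sum()

# Saaty's consistency index flags incoherent priority structures.
n = M.shape[0]
consistency_index = (eigenvalues.real[principal] - n) / (n - 1)

print(weights)            # roughly [0.65, 0.23, 0.12]
print(consistency_index)  # close to 0 for a coherent matrix
```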

Aggregation. Once the indicators have been chosen and the weighting scheme established, system vulnerability can be calculated by aggregating indicator values at progressively higher levels as mandated by the system model. Each aggregation step can employ different methods. Simple system models reduce the number of aggregation steps; nevertheless, in order to maintain the comparability of VIs it is crucial to use at least two levels and to incorporate any system specificity only in the lower levels.

In general, the aggregation algorithm should reflect the dynamic view of the system at each level of aggregation. Linear aggregation (e.g., weighted averaging) reflects a view of the system as noninteractive: it is in fact appropriate for nonsystems, as adopting it is equivalent to assuming that the whole equals the sum of the parts. Nonlinear aggregation reflects functional relationships between the system's components, and as such it is usually safer than linear aggregation when "cost" criteria are considered, as in the case of vulnerability. For example, it would be inappropriate to use averages to evaluate the health of an organism on the basis of functionality indicators for its vital organs. Even (and particularly) if the exact dynamics of the interaction are not known, a conservative indicator should be drastically influenced by the fact that even just one organ (no matter how many organs are in the system) is very dysfunctional or subjected to high risk.

Multiplicative algorithms are often used for nonlinear aggregation. Their high sensitivity to extreme values makes their application difficult in the presence of missing data or high uncertainty. For this reason the system model should allow for at least two hierarchical levels, and only the properties calculated for the highest level should be aggregated with nonlinear algorithms. By doing so, the system specificities and the effects of data uncertainty are absorbed by the linear aggregation at the lower levels and optimal criticality can be obtained.

Simple mathematical aggregation is not the only option available. Statistical techniques such as multiple-criteria analysis (Voogd 1983) can be used as a base to devise sophisticated aggregation schemes involving not only indicator values, but also their concordance or discordance with explicitly stated goals. Economic studies have used regression techniques instead of simple aggregation of indicators (Atkins 1998). As the complexity of the objectives increases, however, the applicability and clarity of the overall indicators obtained decrease. Simplicity and ease of understanding are important goals that are best met by simple aggregation schemata.

Using a maximal articulation of the system model as discussed previously, a four-step aggregation scheme is required. No matter how many steps are involved, it is possible to identify general guidelines only for the first and last steps.

The first step is the aggregation of indicator values into the vulnerability or criticality of the responder's property. In this phase a weighted average scheme is probably most appropriate. It is one of the phases where the weighting is most important, and the use of weights does not hamper comparability at this stage since it refers to the lowest level of aggregation. A linear aggregation reflects the assumption that indicators contribute independently to the calculation of the property's criticality, which should be enforced by the system model.

The last step is the aggregation of per-category vulnerabilities into the final VI (system-level vulnerability). At this stage it is critical to employ a nonlinear aggregation scheme, since by definition a system is composed of interacting entities, at least at the highest level of aggregation. It is also important that the weighting be equal at this stage unless there are important arguments for an unequal weighting; if so, this weighting should be stated in the system model and not be modified in specific applications, in order to preserve the high-level comparability of the VIs calculated according to this scheme.

The intermediate steps (if using a maximally articulated system model) are the aggregation of property values into the responder's vulnerability and the aggregation of the responders' vulnerabilities into category vulnerabilities. The aggregation model at these stages depends on the vulnerability model and the system model, and little can be said in general. It could be argued that potential and expressed vulnerabilities are not independent, since a compromised system is more at risk from subsequent stress than a pristine one, and as such they should multiply each other in order to describe the whole responder-level vulnerability. In this case it might be useful to add an intermediate organizational level to account for these different components.

Aggregation algorithms. For linear systems, the simple weighted average is an obvious choice, as is often used in economic vulnerability studies. The choice of nonlinear aggregation algorithms is less obvious and should be evaluated with care. The literature on evaluation of land resources is rich in examples of linear and nonlinear aggregation algorithms. Among these, the most widely used is the Storie (1976) indicator, originally developed for equitable assessments of land value for taxation purposes and used since then in various environmental applications (Leamy 1974, Lal 1989). The general formula is

\Phi(k;\, A_1, A_2, \ldots, A_n) = \left( \prod_{i=1}^{n} A_i \right) \frac{1}{k^{\,n-1}} \qquad (1)

where k is the highest score obtainable on a scale from 1 to k, A_i is the individual score, and n is the number of scores taken into consideration. The indicator in this form lends itself to the evaluation of benefit situations: high values will contribute to determining the overall value more than low values, and the presence of many low values will influence the overall indicator, but never to the point of completely compensating the effect of a high one. In the evaluation of environmental vulnerability, it is more common to need a cost formulation that gives higher importance to low values. Its formulation in this case can be modified as follows:

\Phi(k;\, A_1, A_2, \ldots, A_n) = k - \left( \prod_{i=1}^{n} (k - A_i + 1) \right) \frac{1}{k^{\,n-1}} \qquad (2)

The indicator in this form has the same behavior discussed above but reverses the dominance structure: low values will dominate the overall value. It is thus safe to use when it must be ensured that the effects of a dysfunctional system component are not entirely compensated by the number and relative health of the others. Koreleski (1988) discusses modifications to the Storie indicator that reduce its very high sensitivity to extreme values. Villa (1995) originally proposed the cost formulation of the Storie indicator for the evaluation of conservation status, vulnerability, and potential of recovery for the BioItaly/Nature 2000 program of the EEC (1992). Case studies and comparisons with other indicators are described in Villa and Antonietti (1996).
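To make the two formulations concrete, the sketch below implements equations 1 and 2 as written above and then chains them into the two-step scheme recommended earlier (weighted averages within categories, cost aggregation across categories). All scores, weights, and category names are invented, and the example assumes a 1-7 scale on which higher values denote more critical conditions, as in the EVI described later; that scale direction is an assumption of the example, not part of the formulas themselves.

```python
from math import prod

def storie_benefit(scores, k=7):
    """Equation 1: product of the scores rescaled back to the 1..k range."""
    n = len(scores)
    return prod(scores) / k ** (n - 1)

def storie_cost(scores, k=7):
    """Equation 2: cost formulation; on a scale where high = critical,
    a single critical score keeps the aggregate high instead of being
    averaged away by healthier components."""
    n = len(scores)
    return k - prod(k - a + 1 for a in scores) / k ** (n - 1)

def weighted_average(scores, weights):
    """First-step (lowest-level) linear aggregation."""
    return sum(a * w for a, w in zip(scores, weights)) / sum(weights)

# Hypothetical two-level aggregation: indicator scores on a 1..7 scale
# (7 = most critical), grouped by category. Weights are illustrative only.
categories = {
    "primary production":   ([3, 2, 4], [2, 1, 1]),
    "secondary production": ([2, 2, 2], [1, 1, 1]),
    "decomposition":        ([6, 5, 6], [1, 1, 2]),
}

# Step 1: linear aggregation within each category.
category_scores = [weighted_average(s, w) for s, w in categories.values()]

# Last step: nonlinear (cost) aggregation across categories, so that the
# one critical category is not masked by the healthier ones.
vi = storie_cost(category_scores)

print([round(c, 2) for c in category_scores])   # [3.0, 2.0, 5.75]
print(round(vi, 2))                             # 5.62, versus a plain mean of 3.58
```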

In general, applying nonlinear aggregation requires particular care, since such algorithms tend to have unwieldy statistical behavior. Algorithms should be carefully evaluated through bootstrap studies and their distributions and properties compared before adoption, particularly when good statistical behavior is important (e.g., when the indicator must be postprocessed or compared statistically).

Validation

Validation of approximate indicators of environmental properties with controversial definitions is obviously impossible in exact scientific terms. The hypotheses leading to the development of such aggregated indicators are normally not falsifiable. The problem of validating vulnerability indicators is at least twofold: on the one hand, validation methods need to be assessed, and on the other, appropriate time scales need to be identified. Large-scale systems will need to be observed at proportionally large time scales, which limits the ability to invalidate the vulnerability estimates through observation and measurement.

A further problem is the inherently stochastic nature of the very concept of vulnerability, and the lack of replication at the larger scales. Together, these properties make the whole issue of validation almost entirely inapplicable in conventional terms. It can be argued that peer review (possibly aided by conflict-resolution techniques and statistical analysis) is the only way to obtain a reasonable level of confidence in such indicators. Of course, the results of peer review cannot be conclusive. In particular, cross-system comparisons are the most difficult aspect, since experts with the breadth of knowledge necessary to compare widely different environments are rare.

In the example discussed in the next section, a procedure was suggested as an "acid test" for determining when the VI could be considered globally operational. This would involve detailed country vulnerability assessments being carried out on a number of countries across the world by consultants not involved in the work on the VI. These assessments would be ranked and then compared with the vulnerability indicator values to ascertain their correlation. This is, of course, a costly process, but small compared to the cost of conducting detailed vulnerability assessments of individual countries by on-site visits, as is the procedure at present. Furthermore, the cost of testing an indicator should be measured against the possible costs of using an untested indicator that is faulty in its results.

Given the difficulties in validating high-level synthetic indicators, it is very important that a peer review process be built into their development from the initial phases and that general guidelines be prepared by teams of experts using appropriate conflict analysis techniques and statistical characterization of priorities and agreement for each subgoal. The decomposition of the problem into the different, relatively independent models operated above can be a base for the development of such guidelines. In the following we briefly describe the assumptions and preliminary conclusions characterizing the first of such attempts, aimed at the definition of a generic framework to evaluate vulnerability at the country level, in response to internationally identified, long-term goals.

An Example: SOPAC's Environmental Vulnerability Index (EVI)

No case studies inspired by the theoretical principles outlined above are currently available. However, the EVI project, led by the South Pacific Applied Geoscience Commission (SOPAC), a regional organization based in Fiji, currently leads the way towards a generally applicable environmental vulnerability indicator, which will get as close as possible to a validation process by being applied and compared in widely different countries and biotas. Both authors of this paper have been involved, either as an independent scientific advisor or as part of the core research team, with the development of the EVI, which is funded by the New Zealand Overseas Development Assistance (NZODA). The EVI is a work in progress, and current details are published in reports that are available on the Internet (Kaly and others 1999a,b, Kaly and Pratt 2000, http://www.sopac.org.fj/Projects/Evi). While not all of the principles outlined above are incorporated in the current EVI formulation, we consider it productive to briefly describe the published features of what is currently the most ambitious VI framework, with the aim of: (1) showing how the theoretical framework we outlined can be used as a context to develop and discuss a VI, and (2) illustrating an important ongoing development in the field.

Preliminary EVI Model

In this section we briefly describe the salient aspects of the published EVI formulation with reference to the framework outlined above. We discuss separately the EVI interpretation of vulnerability, the model of the system, and the preliminary aggregation strategy as published in the literature cited.

Vulnerability model. Environmental vulnerability in the EVI formulation is seen as resulting from the combination of exposure to stress and system resilience. Resilience is, in turn, divided into an intrinsic component, viewed as functioning as an immune system for the environment, and a component directly related to the state of environmental degradation. A risk exposure component reflects the historical occurrence of natural impacts, such as droughts and cyclones, and anthropogenic impacts, including resource exploitation and pollution. Vulnerability is assumed to increase with the intensity and frequency of impact. Intrinsic resilience can be viewed as similar to a human being's genetic immune system. Some systems are interpreted as being more capable of bouncing back due to faster recovery, and the EVI formulation identifies a number of environmental properties that can correlate with this ability. As an example, one indicator used for relative comparisons of resilience in systems where coral reefs are present was the rate of coral reef accretion, for which data are easily available and show wide differences in the test environments.

Lastly, extrinsic resilience was interpreted as the health of a system and assumed to be inversely proportional to degradation by outside impact. As an example, indicators of susceptibility to coastal erosion were devised by considering removal of the natural mangrove barriers.

System model. Like other examples of existing aggregated indexes of environmental properties, the EVI puts a strong emphasis on the system model, defined through a highly articulated choice of environmental indicators. However, the choice of indicators identified for the EVI is directly related to the assumptions set forth in the vulnerability model and does not attempt, at the current stage, a comprehensive description of a general environmental system. Rather, a full description of what makes a system vulnerable is attempted, and indicators are classified a priori according to their contribution to each of the different components of vulnerability.

The EVI focuses on the vulnerability of the natural environment and excludes human systems as responders. The justification for this was that human welfare is dependent on environmental systems, and degradation of these systems leads to a reduction in human welfare. Many people also value the intrinsic benefit of knowing that the environment exists in an undegraded form. Lastly, as a number of vulnerability indicators were being developed based on human systems, an indicator such as the EVI that excludes these systems will not overlap with those in existence, allowing the possibility of aggregation into a composite indicator. The responders include ecosystems, habitats, populations, and communities of organisms, as well as physical and biological processes. In this sense, the EVI system view is a "pure" ecological one, with structural and functional components and no concern for human influence as part of the system.

The system was broken down into a number of categories based on structural and functional factors or responders. The responders considered in the current list of indicators can be broadly classified into several categories: (1) ecological entities at the landscape and ecosystem level; (2) populations and communities of organisms (identifiable groupings of organisms and their habitats); (3) physical and biological processes (beach building, reproduction, recruitment); (4) energy flows (nutrient cycling and import/export); (5) synthetic attributes (such as diversity) at different levels (geographic, ecosystem, community, population, species, and genetic diversity); and (6) functional redundancy of ecological system components (species which carry out similar functions in an ecosystem). This subdivision reflects conceptual categories rather than ontological differences attributable to different responders or categories thereof. Although the available information is arranged in a way that could constitute a hierarchical system model and enable a multistep aggregation as we have described, no such attempt is made in the published EVI formulation. As we indicate in more detail below, exploring more articulated aggregation strategies is one of the goals for the EVI's future, and the system model may be revised to allow this.

To reach a satisfactory set of indicators, the SOPAC research team used a multistep process that engaged the help of a wide group of national and international experts. An initial formulation was developed (Kaly and others 1999a) by a multidisciplinary team that included regional experts. In a later phase, a conference process was used to correct the focus and provide abundant peer review of the developing formulation. An international forum was held (Kaly and others 1999b) to overhaul the indicator for international applicability and robustness. Many of the ideas expressed earlier in this paper were incorporated into the EVI as a result of this process.

A total of 47 indicators were selected through successive refinements of an initially very large list. The current list of indicators and their values for five test countries are given in Kaly and Pratt (2000). Table 1 gives examples of proposed indicators categorized according to their relationship with the vulnerability model and the system components or processes to which they refer.

The demand that drove the development of the EVI made it necessary for it to be focused at the country level. This requirement, rather typical of government-funded programmes, introduces a peculiar and somewhat ambiguous role for the variable "area," which becomes not only a proxy indicator for many factors connected to resilience, but also an important scaling factor for cross-country comparison. Land area at this time is a cross-cutting variable that is considered a major contributor to intrinsic resilience.

Scaling and standardization of indicators. Indicators were mapped onto a scale of 1 to 7, where 7 reflects maximum incidence or effect. In order to obtain approximate linearity of response for each indicator, different response classes were defined, corresponding to different mappings of the indicators' raw values onto the 1-7 scale. The indicators were individually assigned to a class with the help of the consulting team of experts, and a mapping of the raw value onto the 1-7 scale was defined accordingly. The response classes used are described in Table 2. Other response classes were devised for possible future use.

Weighting. Two weighting systems were evaluated to express the relative importance of each indicator within the overall EVI calculation. In all cases the weights are and will be assigned by a team of international experts. An initial set of weights for the 47 indicators was assigned during the 1999 Think Tank meeting as the average of importance rankings (1-4) provided by each of the 30 experts at the end of the four-day working group (Kaly and others 1999b). Standard deviations of the rankings for each indicator were used to highlight the most controversial issues and suggest areas for further discussion. This weighting system is the basis for the provisional calculation of vulnerability done for test countries in the South Pacific. The EVI team is also planning to use pairwise comparisons, a longer and somewhat more involved process, to obtain more precise weights, evaluate the overall coherence of the suggested priorities, and identify the main "lines of thought" in the consulting group, as done in Villa and others (1996), through the use of multivariate statistical techniques (Kaly and Pratt 2000).
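The first weighting scheme reduces to a mean and a standard deviation taken over the experts' rankings for each indicator. A minimal sketch follows, using made-up rankings for two hypothetical indicators; the actual EVI rankings are reported in Kaly and others (1999b).

```python
from statistics import mean, stdev

# Hypothetical importance rankings (1-4) given by a panel of experts to two
# indicators; the real EVI exercise involved 30 experts and 47 indicators.
rankings = {
    "sea surface temperature anomaly": [4, 4, 3, 4, 3, 4],
    "percentage of protected areas":   [1, 4, 2, 4, 1, 3],
}

for name, r in rankings.items():
    # The mean becomes the indicator's weight; a large standard deviation
    # flags the indicator as controversial and worth further discussion.
    print(f"{name}: weight={mean(r):.2f}, spread={stdev(r):.2f}")
```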

Aggregation method. The overall vulnerability is calculated as an aggregation of the appropriately scaled subindicators relative to the three components of vulnerability. The partial scores for risk exposure, intrinsic resilience, and extrinsic resilience are simply averaged to obtain the final EVI. As the EVI is intended for intercountry comparison, the aggregation scheme is of paramount importance. For this reason, the current aggregation scheme is considered limited, and different aggregation algorithms are being evaluated. Teams of local and international experts will compare the EVI values calculated with different aggregation algorithms for a number of test countries to identify the most satisfactory ones. The preliminary results have been obtained with linear aggregation for South Pacific small island states (see below), where direct comparison of indicator values is justifiable because the countries share most ecotypes and problems and are of relatively similar size. As the EVI expands towards more diverse environmental systems, nonlinear aggregation will also be used to aggregate indicator values between categories, while indicator values within categories will be aggregated linearly using weighted averages with weights obtained as explained above. This will require restructuring the system model in order to account for the specificities of different environmental systems within broader, general categories of responders. For nonlinear aggregation, the EVI team is planning to test the modification of the Storie indicator illustrated in equation 2 along with other formulations to be identified (Kaly and Pratt 2000).

Table 1. Some examples of factors related to indicators used in the current EVI formulation^a

Vulnerability component   Anthropogenic                      Biological                Geological            Meteorological
Exposure                  Touristic pressure;                Pathogen outbreaks;       Earthquakes;          Sea surface temperature
                          percentage of protected areas      species introductions     volcanic eruptions
Degradation               War or civil strife;               Allochthonous species
                          mining activity
Resilience                Land area; fragmentation indexes; isolation

^a Indicator categories are on the columns. The rows specify the component of vulnerability the indicators refer to. Resilience indicators are not categorised at this stage.

Table 2. Response classes for the indicators considered in the EVI

Response class   Description                                                                      Example indicators
Linear           The raw magnitude of the indicator is proportional to its value on the          Percent protected land area
                 importance scale
Marginal         The effect of the indicator varies more or less than linearly with its raw      Surface sea temperature change
                 magnitude
Threshold        The effect of the indicator becomes negligible or catastrophic below or         Human population density
                 above a given threshold

Results and Future Developments

Preliminary EVI values for four test countries in the South Pacific area have been calculated using simple weighting and linear aggregation (Kaly and Pratt 2000). Vulnerability estimates as of late 2000 are shown in Table 3 and, despite the preliminary nature of the results, are considered to satisfactorily express the nations' comparative vulnerabilities as the common knowledge of the respective ecosystem dynamics suggests. Different weighting schemes are being evaluated which produce different results, and some estimates (most notably the ones for Vanuatu) are considered less reliable because of difficulties obtaining data for a number of indicators. At this time SOPAC's arbitrary criterion for validity is the availability of at least 80% of the indicators, which was met for all countries except Vanuatu. A much more detailed account of these preliminary results and future priorities is given in Kaly and Pratt (2000).
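As a small arithmetic check on the figures reported in Table 3, the published EVI for each country can be reproduced as the arithmetic mean of the three subindices rounded to one decimal place; the snippet below does this using the subindex scores from the table.

```python
# Subindex scores from Table 3 (Kaly and Pratt 2000): exposure (REI),
# lack of intrinsic resilience (IRI), and degradation (EDI).
scores = {
    "Fiji":    (3.0, 4.2, 3.7),
    "Samoa":   (2.9, 5.1, 3.3),
    "Vanuatu": (3.0, 3.4, 3.3),
    "Tuvalu":  (3.9, 6.6, 4.3),
}

for country, (rei, iri, edi) in scores.items():
    evi = round((rei + iri + edi) / 3, 1)   # rounded arithmetic mean
    print(country, evi)   # Fiji 3.6, Samoa 3.8, Vanuatu 3.2, Tuvalu 4.9
```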

At the present stage, the mathematical model is undergoing the most active conceptual development. The efforts planned for phase 3 of the project concentrate on statistical determination of redundant indicators, scoring of individual indicators, weighting of indicators, mathematical testing, and validation. The latter will involve a collaborative review and evaluation of vulnerability indicators calculated with different weighting schemes and aggregation algorithms as indicated above, performed by an extended team of local and international experts.

A useful by-product of the EVI is that country-level profiles remain informative in their disaggregated form, showing, for instance, the absolute level of degradation and the relative level of risk. The various levels of reporting, from the single EVI score to the individual indicators, can serve differently focused policy-making, from the international to the interagency national level. Further breakdown of risk into anthropogenic and natural impacts allows further clarification of the causes of vulnerability in a particular country.

The major challenge and most interesting aspect of future EVI development is the globalization of the model. The minimum number of countries set as a goal for evaluation after phase 3 is complete is 15, spread around the globe and not restricted to the Pacific area. Reaching this goal will require that all objectives of phase 2 be met and that major advances in the system conceptualization be made, possibly following the guidelines indicated in this paper. Countries as different as Fiji, Italy, Ireland, and New Zealand have agreed to use the EVI in pilot evaluations. For this reason, the EVI project provides a unique occasion for determining not only the scientific value of such a measure, but also its potential to become a tool for raising awareness about the vulnerability of the environment. The latter is one of the stated goals of the EVI (Kaly and others 1999a).

Other priorities of the EVI project include the provision of a permanent data collection mechanism, the development of a friendly user interface for data organization and calculation of the indicators, and the formalisation of a feedback strategy from local and international experts. The strengths and weaknesses of the EVI were discussed within the same conference process that defined the list of indicators. As the panel of experts pointed out, strengths include: (1) being the first comprehensive and convenient measure of environmental vulnerability; (2) allowing, in principle, comparison between countries; and (3) being easily explained to and understood by nontechnical decision makers. Weaknesses include: (1) the subjectivity implicit in the weighting and response scaling processes; (2) the fact that complex environmental factors are often represented by proxy indicators; and (3) the fact that the EVI's interpretation might not be obvious or unambiguous without specific instruction. Many of the obvious weaknesses of the current model (such as the assumptions of linearity implicit in the choice of a linear aggregation model, or the lack of consideration for the amount of controversy in the choice and weighting of indicators) will be overcome by development planned for the near future, as outlined above.

Conclusions

The need for synthetic indicators of critical ecosystem properties, such as vulnerability, is clear and strong, as witnessed by the number of worldwide initiatives that have requested them.

Table 3. Preliminary EVI results for four South Pacific countries(a)

Index | Fiji | Samoa | Vanuatu | Tuvalu
REI | 3.0 | 2.9 | 3.0 | 3.9
IRI | 4.2 | 5.1 | 3.4 | 6.6
EDI | 3.7 | 3.3 | 3.3 | 4.3
EVI | 3.6 | 3.8 | 3.2 | 4.9

(a) Data from Kaly and Pratt (2000). EVI = Environmental Vulnerability Index; REI = score from indicators of exposure to stressors; IRI = score expressing factors causing lack of intrinsic resilience; EDI = score expressing the current state of environmental degradation. EVI is the rounded arithmetic mean of the other indexes.
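As the table footnote states, each country's EVI is simply the arithmetic mean of REI, IRI, and EDI, rounded to one decimal place; the short snippet below reproduces the EVI row of Table 3 from the three sub-indices.

```python
# Reproduce the EVI row of Table 3 as the rounded arithmetic mean of the
# REI, IRI, and EDI sub-indices (data from Kaly and Pratt 2000).
sub_indices = {
    "Fiji":    (3.0, 4.2, 3.7),
    "Samoa":   (2.9, 5.1, 3.3),
    "Vanuatu": (3.0, 3.4, 3.3),
    "Tuvalu":  (3.9, 6.6, 4.3),
}
for country, (rei, iri, edi) in sub_indices.items():
    print(country, round((rei + iri + edi) / 3, 1))  # 3.6, 3.8, 3.2, 4.9
```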


We have indicated general guidelines and priorities that can help in developing such indicators, based on sets of easily measurable properties, while maintaining realistic requirements concerning time scales, data, research funding, and conceptual accessibility for decision-makers. As the EVI example indicates, we are nearing a phase when such indicators will be available for comparison, providing a tool for policy-making that is likely to become widespread and important. It is our belief that the theoretical points we discuss need to be accounted for in any indicator framework that aspires to generality. The way we “take the pulse” of the environment, particularly under the conditions of limited resources and understanding that most nations face, can have a high impact on the world's ecosystems and on our own well-being. We hope that this contribution can help a discussion that, while recognizing that the scientific community cannot currently handle the issue rigorously, nevertheless brings as much rigor as possible and makes sure that the inevitable mistakes are made on the safe side.

Acknowledgments

We are grateful to the EVI team at SOPAC for providing inspiration and motivation for the further development of the theoretical concepts discussed. In particular we want to thank Ursula Kaly, Craig Pratt, and SOPAC's director Alfred Simpson. The EVI project is funded by the New Zealand Overseas Development Assistance (NZODA) program. Funding for Ferdinando Villa is provided by the US National Science Foundation (NSF) grants 784AT-31057 (subaward #784), 9714835 and 9982938, and by the US Environmental Protection Agency grant R87169.

Literature Cited

Allen, T. F. H., and T. B. Starr. 1982. Hierarchy: perspectives for ecological complexity. University of Chicago Press, Chicago.

Atkins, J., S. Mazzi, and C. Ramlogan. 1998. A composite index of vulnerability. Commonwealth Secretariat, London.

Briguglio, L. 1995. Small island states and their economic vulnerabilities. World Development 23:1615–1632.

Briguglio, L. 1997. Alternative economic vulnerability indicators for developing countries with special reference to SIDS. Report prepared for the Expert Group on Vulnerability Indices, UN-DESA, 17–19 December 1997.

Cairns, J. 1977. Quantification of biological integrity. Pages 171–187 in R. K. Ballentine and L. J. Guarraia (eds.), The integrity of water. US Environmental Protection Agency, Office of Water and Hazardous Materials, Washington, DC.

Cormier, S. M., M. Smith, S. Norton, and T. Neiheisel. 2000. Assessing ecological risk in watersheds: a case study of problem formulation in the Big Darby Creek watershed, Ohio, USA. Environmental Toxicology and Chemistry 19:1082–1096.

Costanza, R., B. G. Norton, and B. D. Haskell. 1992. Ecosystem health: new goals for environmental management. Island Press, Washington, DC.

Costanza, R., R. D'Arge, R. DeGroot, S. Farber, M. Grasso, B. Hannon, K. Limburg, S. Naeem, R. V. O'Neill, J. Paruelo, R. G. Raskin, P. Sutton, and M. Van den Belt. 1997. The value of the world's ecosystem services and natural capital. Nature 387:253–260.

Crowards, T. M. 1999. An economic vulnerability index for developing countries, with special reference to the Caribbean: alternative methodologies and provisional results. Draft. Caribbean Development Bank, Barbados.

De Leo, G. A., and S. Levin. 1997. The multifaceted aspects of ecosystem integrity. Conservation Ecology [online] 1(1):3. URL: http://www.consecol.org/vol1/iss1/art3

EEC. 1992. On the conservation of natural habitats and of wild fauna and flora. Council Directive 92/43/EEC, 21 May 1992.

Ehrlich, P. R., and A. H. Ehrlich. 1991. Healing the planet. Addison-Wesley, Menlo Park, California.

Eurostat. 1998. Towards environmental pressure indicators for the EU. European Commission.

Gatto, M. 1995. Sustainability: is it a well defined concept? Ecological Applications 5:1181–1184.

Gordon, S. I., and S. Majumder. 2000. Empirical stressor-response relationship for prospective risk analysis. Environmental Toxicology and Chemistry 19:1106–1112.

Holling, C. S. 1973. Resilience and stability of ecological systems. Annual Review of Ecology and Systematics 4:1–23.

Hughes, P., and G. B. Brundrit. 1992. An index to assess South Africa's vulnerability to sea-level rise. South African Journal of Science 88:308–311.

IPCC. 1992. Global climate change and the rising challenge of the sea. RSWG report, IPCC.

Kaly, U., and C. Pratt. 2000. Environmental vulnerability index: development and provisional indices and profiles for Fiji, Samoa, Tuvalu and Vanuatu. Phase II report. Technical Report 306, SOPAC, Suva, Fiji.

Kaly, U., L. Briguglio, H. McLeod, S. Schmall, C. Pratt, and R. Pal. 1999a. Environmental vulnerability index (EVI) to summarise national environmental vulnerability profiles. Technical Report 275, SOPAC, Suva, Fiji.

Kaly, U., L. Briguglio, H. McLeod, S. Schmall, C. Pratt, and R. Pal. 1999b. Report on the Environmental vulnerability index (EVI) think tank, 7–10 September 1999, Pacific Harbour, Fiji. Technical Report 299, SOPAC, Suva, Fiji.

Koreleski, K. 1988. Adaptations of the Storie index for land evaluation in Poland. Soil Survey and Land Evaluation 8:23–29.

Lal, S. 1989. Productivity evaluation of some benchmark soils in India. Journal of the Indian Society of Soil Science 37:78–86.

Leamy, M. L. 1974. An improved method of assessing the soil factor in land valuation. New Zealand Soil Bureau, Scientific Report LR 506.


Loague, K. 1994. Regional-scale groundwater vulnerability estimates: impact of reducing data uncertainties for assessments in Hawaii. Ground Water 32:605–616.

O'Neill, R. V., D. L. DeAngelis, J. B. Waide, and T. F. H. Allen. 1986. A hierarchical concept of ecosystems. Princeton University Press, Princeton, New Jersey.

Pantin, D. 1997. Alternative ecological vulnerability indicators for developing countries with special reference to small island developing states (SIDS). Report to UN Department of Economic and Social Affairs, 22 pp.

Pernetta, J. C. 1990. Projected climate change and sea-level rise: a relative impact rating for the countries of the Pacific Basin. Pages 14–23 in J. C. Pernetta and P. J. Hughes (eds.), Implications of the expected climate changes in the South Pacific region: an overview. UNEP Regional Seas Report 1990.

Saaty, T. L. 1980. The analytical hierarchy process. McGraw-Hill, New York.

Sem, G., J. R. Campbell, J. E. Hay, N. Mimura, E. Ohno, K. Yamada, M. Serizawa, and S. Nishioka. 1996. Coastal vulnerability and resilience in Tuvalu: assessment of climate change impacts and adaptation. Integrated Coastal Zone Management Programme for Fiji and Tuvalu, Phase 4. SPREP, EAJ, OECC report.

Storie, R. E. 1976. Storie index soil rating (revised). Special Publication, Division of Agricultural Science, University of California, Berkeley.

UN (United Nations). 1994. Report of the global conference on the sustainable development of small island developing states, Bridgetown, Barbados, 25 April–6 May 1994. Sales No. E.94.I.18. United Nations.

US EPA (Environmental Protection Agency). 1986. Hazard Evaluation Division standard evaluation procedures: ecological risk assessment. Office of Pesticide Programs, EPA-540/9-85-001, Washington, DC.

US EPA (Environmental Protection Agency). 1996. Proposed guidelines for ecological risk assessment. Federal Register 61:47552–47631.

US EPA (Environmental Protection Agency). 1998. Guidelines for ecological risk assessment. EPA/630/R-95/002F, Washington, DC.

Villa, F. 1995. Guidelines for the collection and evaluation of environmental parameters required by the "Natura 2000" initiative (in Italian). S.It.E. Notizie 24:67–76.

Villa, F., and R. Antonietti. 1996. Empirical methods for evaluating synthetic ecological parameters: first results in the context of the "Rete Natura 2000" project (in Italian, English abstract). Proceedings of the Italian Society of Ecology (S.It.E.) 17:675–677.

Villa, F., M. Ceroni, and A. Mazza. 1996. A GIS-based method for multi-objective evaluation of park vegetation. Landscape and Urban Planning 35:203–212.

Voinov, A. A., R. Costanza, L. A. Wainger, R. M. J. Boumans, F. Villa, T. Maxwell, and H. Voinov. 1999. Integrated ecological economic modelling of watersheds. Environmental Modelling and Software 14:473–491.

Voogd, H. 1983. Multicriteria evaluation for urban and regional planning. Pion Ltd., Amsterdam, The Netherlands.

Wells, J. 1996. Composite vulnerability index: a preliminary report. Commonwealth Secretariat, London.

Wells, J. 1997. Composite vulnerability index: a revised report. Commonwealth Secretariat, London.

Weslawski, J. M., J. Wiktor, M. Zajaczkowski, G. Futsaeter, and K. A. Moe. 1997. Vulnerability assessment of Svalbard intertidal zone for oil spills. Estuarine, Coastal and Shelf Science 44:33–41.

Williams, L. R. R., and L. A. Kapustka. 2000. Ecosystem vulnerability: a complex interface with technical components. Environmental Toxicology and Chemistry 19:1055–1058.

Yamada, K., P. D. Nunn, N. Mimura, S. Machida, and M. Yamamoto. 1995. Methodology for the assessment of vulnerability of South Pacific island countries to sea-level rise and climate change. Journal of Global Environmental Engineering 1:101–125.
