
Risk Technology Insights

Willis Limited

Ten Trinity Square, London EC3P 3AX
Telephone: +44 (0)20 7488 8111

Website: www.willis.com

WRE/0857/03/01 Member of the General Insurance Standards Council

Contents

Editor’s preface 1

Practical problems with integrated risk and return management 2

Catastrophe risk management for UK insurers: the River Thames 8

Using Synthetic Aperture Radar imagery for flood modelling 14

Business applications of earthquake modelling in Japan using ArcView GIS 22

Catastrophe risk management for Japanese insurers: Japanese typhoon modelling 28

Error estimation in EML models 34

Contact details

For further information please contact your account executive. For additional copies please contact the Reinsurance Publications Department:

Tel: +44 (0)20 7488 8093
Fax: +44 (0)20 7481 7193
E-mail: [email protected]

Willis Limited
Ten Trinity Square
London EC3P 3AX
United Kingdom

© Copyright 2003 Willis Limited. All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, or otherwise without the permission of Willis Limited. The information contained in this report is compiled from sources we consider to be reliable; however, we do not guarantee and are not responsible for its accuracy. This report is for general guidance only and action should not be taken without obtaining specific advice.


Editor’s preface

Risk Technology Insights presents new research by Willis' insurance consulting practice into financial and catastrophe risk management. These papers have been presented at a number of conferences and, taken together, describe important advances in the modelling of different types of risk facing insurance risk carriers. In the past few years, considerable advances have been made in modelling the risks facing insurers, which have culminated in a generation of total risk or integrated risk models.

The first paper, by Peterken, reviews the use of these models in practice, concluding that they are not as comprehensive as their proponents suggest and tend to exclude rather than empower their users. The author goes on to set out principles which should guide the development of high quality models and comments on where future technological advances are likely to be found.

In 1999, Willis launched an innovative flood modelling technique using airborne synthetic aperture radar and an advanced geographic information system. The second paper, by Foote and Galy, provides an overview of this project, describing for the lay reader the techniques and modelling processes used to apply this complex technology to the River Thames. They go on to discuss the business applications, including the numbers of houses affected by a potential Thames flood.

In the third paper, presented by Galy and Sanders, the software tools used to produce the Thames flood model are described. Further details of the synthetic aperture radar product are reviewed, together with the geographic information system (GIS) tools used to create the model. The paper concludes with a consideration of the suitability of the software tools used and discusses areas for improvement.

In addition to flood, insurers face earthquake risks in many parts of the world. In the fourth paper, Jones looks at the development of an earthquake model for Japanese insurers using GIS techniques, which offer significant development advantages over conventional programming approaches. The paper first considers the needs of insurance sector users, before going on to examine the development processes for meeting these needs using GIS. Finally, data issues are explored and examples of business applications for insurers are provided.

The fifth paper, by Tabuchi and Sanders, describes the development of a hurricane model based on numerical weather prediction (NWP) techniques. These advanced computer models of hurricanes offer significant accuracy and resolution gains over the statistical models they are supplanting. The paper describes how an NWP model was incorporated into an applied risk model for Japanese insurers.

Whenever an aspect of the real world is modelled, estimation error is introduced into the results. Models of natural catastrophes suffer particularly acutely from this problem and, in particular, it can prove difficult to accurately measure the estimation error in them. The final paper, by Bains and Ness, presents a solution to this problem using a statistical technique called "bootstrapping". The paper considers the components of estimation error and the application of a non-parametric method based on the repeated sampling of model output. An example is provided of the application of bootstrapping to estimates of insurance losses from a windstorm model.
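As a simple illustration of the bootstrapping idea summarised above (a generic sketch for the reader, not the implementation used by Bains and Ness), the following Python fragment resamples a small set of simulated windstorm losses with replacement and reads a confidence interval for the mean annual loss off the resampled distribution. The loss figures are invented for the example.

    import random

    # Hypothetical simulated annual windstorm losses from a catastrophe model (GBP millions).
    model_losses = [12.0, 35.5, 0.0, 8.2, 150.0, 4.7, 22.1, 0.0, 63.9, 9.3]

    def bootstrap_means(losses, n_resamples=10_000, seed=42):
        """Resample the model output with replacement and return the
        distribution of the resampled mean annual loss."""
        rng = random.Random(seed)
        means = []
        for _ in range(n_resamples):
            sample = [rng.choice(losses) for _ in losses]
            means.append(sum(sample) / len(sample))
        return sorted(means)

    means = bootstrap_means(model_losses)
    # A crude 90% interval for the mean annual loss, taken from the 5th and
    # 95th percentiles of the bootstrap distribution.
    low, high = means[int(0.05 * len(means))], means[int(0.95 * len(means))]
    print(f"point estimate: {sum(model_losses) / len(model_losses):.1f}")
    print(f"bootstrap 90% interval: ({low:.1f}, {high:.1f})")

The width of the interval is itself an estimate of the estimation error in the modelled figure, which is the quantity the final paper sets out to measure.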

These papers cover different aspects of one of the central problems facing insurers, reinsurers and other institutions carrying insurance risks: the quantification of risk. The mission of Willis is to further our clients' understanding of risk so that it can be better managed.

Oliver Peterken
Managing Director, Consulting


Practical problems with integrated risk and return management

Conference on Integrated Risk and Return Management for Insurance Companies, New York University Salomon Center, 20-21 May 1999

Oliver L.E. Peterken

Abstract

The author reviews the use of integrated risk and return models by European P&C insurers and reinsurers. Standards for the development of such models are described and the performance of current models against these benchmarks is evaluated. IRRM models are found not to be as comprehensive as is often thought and to suffer a serious weakness in their exclusion of users from participating in the development of business microworlds. Suggestions for the future development of IRRM models are made and new technologies identified.

1. Introduction

Integrated risk and return management (IRRM) or total integrated risk management is frequently seen as the pinnacle of risk analysis and management decision support for risk carrying entities. Positioned above dynamic asset and liability modelling and statistical techniques, IRRM is portrayed by its proponents as the ultimate evolution of modelling for insurance and reinsurance businesses. Since 1996, there has been a rapid growth in the number of publications and real-life implementations of IRRM models (Coutts 1997, Lowe & Stanard 1996, Mulvey et al 1997). Developments have reached the point where most firms of consulting actuaries, large reinsurers and reinsurance brokers, as well as various consultancies, now offer risk-return modelling services.

This growth in the availability of IRRM models is not unconnected with the increasing pressures under which insurers find themselves. Stock market analysts have led the criticism of insurers' financial performance and have demanded increased earnings accompanied by reduced volatility, together with the minimization of capital retained by insurers. Whilst these three demands might be considered contradictory (an efficient market rewards increased risk taking with higher returns, but that in itself requires increased capital to underwrite the higher level of risk and increased volatility in the first place), such sentiments are not helpful to the chief executive of an insurance company who finds himself under such pressure.

At the same time as increased demands from shareholders, insurers have faced growing calls from credit rating agencies for increased levels of capital to be held. This has, in general, led to greater levels of capital being held than would otherwise be the case on a strict risk capital basis. That the European regulatory authorities and credit agencies use different methods for evaluating solvency is a cause for concern, leading to contradictory and conflicting pressures on insurers, especially as neither uses a risk-based approach to solvency. A further ground for concern is that this contradiction is not present for the insurers' growing competitors in the banking sector. It remains to be seen whether the value at risk techniques accepted by bank regulators will in time be accepted by insurance regulators.

The pressure from shareholders has coincided with developments in the banking sector where risk modelling techniques (aligned closely with return-on-risk-adjusted-capital performance measures) have quickly become the norm and integrated into everyday trading and risk management activities (Goldman Sachs 1998). At the same time, banks have increasingly been seen by insurers as a threat following the move of banks into the securitisation of insurance risk and their rapid penetration of the credit risk market. It was, therefore, not surprising that embattled chief executives and chief finance officers of insurance companies should turn to similar risk-return techniques as their salvation, following the timeless adage "if you can't beat them, join them". It is as a result of such pressures that the demand for risk-return models has increased over the past 5 years.

However, caution is needed in applying banking risk management techniques to insurers. Banks generally operate in an environment where time frames are much shorter than for insurers and where price discovery is possible. It should also be noted that the main banking risks of credit risk and market risk are simpler risks to model than insurance risk. These differences have significant implications for risk modelling of insurers, and a number of the problems being faced by insurers, as they try to implement risk adjusted capital regimes, arise out of a lack of appreciation of the complexity of insurance risk and the imperfect markets it is traded in.

Most of the comments in this paper are based on the author's experience as a consultant to P&C insurance and reinsurance companies in Europe. Whilst some will apply also to the US market, the reader should use their judgement as to how far these comparisons can be taken.

"Insurers" should be taken to include reinsurers.



During the past two years there has been a distinct change in the tone of the proponents of IRRM models, which can best be summarized as implying that the IRRM paradigm and its modelling incarnations have reached a stage of maturity: "The technology is mature both from the standpoint of theory and computations. It is now a matter of implementation" (Mulvey et al., 1997: 18). No longer a specialist tool for tackling unusual and isolated problems, IRRM is held up as a methodology for tackling the strategic issues facing an insurer: "Total integrated risk management provides a unified approach for linking all major corporate strategic decisions… The goal is to maximize shareholder surplus over time subject to a set of constraints on risk and other factors" (Mulvey et al., 1997: 1). The current stage of model development is seen as the ultimate development. Thus Mulvey talks of "the integrated system technology represents a natural evolution of methods for analysing investment risks" (Mulvey et al 1997: 3), positioning it at the top of a "risk analysis ladder" which leads from lowly forms of single security pricing, through dynamic asset portfolios, to the promised land of "Total Integrated Risk Management".

It is not unreasonable in the light of these claims to ask some fundamental questions about the actual progress of integrated risk and return management techniques in insurance companies:

How widespread is the use of such techniques, how are they used and what types of decision are they supporting?

How "totally integrated" are these models in terms of the risks facing an insurer they encompass?

How good are these models when compared to accepted standards for model development?

How well do such models tackle the important issues facing CEOs of insurance companies?

What is needed for further development into the next generation of models?

2. The use of IRRM models

The published information on the actual use of IRRM models in P&C insurers is largely confined to the marketing literature of the sellers of such modelling services and papers on specific implementations (Mulvey 1997, Lowe 1996). The application of a model in a business context needs to be considered in terms of the range of decisions supported, the frequency of such decisions and how the models themselves are used.

The use of IRRM models in systems terms falls under the classification of "decision support", i.e. providing information to management to improve the effectiveness of their resource allocation decisions. The types of decisions IRRM models have assisted include asset mix strategies, capital allocation, cost of risk capital, insurance risk control, acquisition/divestment, price setting and risk transfer. Probably the most important decision for the management of an insurance enterprise is the "allocation" of capital between competing divisions, products and distribution channels. Similarly, given the importance of investment returns to the profits of most insurers, asset mix decisions which allocate capital between different asset classes are of strategic importance. It can, therefore, be said that IRRM models have the potential to support the most important decisions made by insurance company senior executives.

In some US applications, IRRM models have been shown to be used on a daily basis to support underwriting and risk control. Renaissance Re is the best known example of this (Lowe & Stanard 1996), but Falcon Asset Management's FIRM model was also used regularly to support USF&G's investment decisions. Such applications are untypical in Europe, and most IRRM models are operated by consultants in support of ad hoc assignments and, to a lesser extent, by in-house technical support teams.


There are 3 possible implementations of IRRM models: by external consultants through stand-alone exercises, by in-house staff on an ad hoc basis, or integrated into the main enterprise systems. Currently, most IRRM models are run by consultants. The Renaissance Re case referred to above is one of the few examples of where an IRRM model has been integrated into the insurer's enterprise systems, although some of the major reinsurers have similar approaches, albeit not on a real-time basis as is possible with a monoline reinsurer such as Renaissance Re. IRRM models are generally used on a stand-alone, ad hoc basis to solve particular issues. The lack of integration of such models into enterprise systems constitutes a fundamental limitation in their impact and role in decision support, positioning them as tools operated by experts rather than instruments used by management itself.

Given the claims, or at least the intention, to be total integrated risk or enterprise risk models, it is reasonable to examine how many of the risks facing an insurer these models actually encompass. The risks facing an insurer can be considered in terms of threats to liabilities, assets and the business (Lowe 1996:4-8). These three sources of threat are themselves made up of underlying risks (a simple illustrative encoding of this taxonomy is sketched after the list):

Assets:

Capital market risk (price falls and liquidity problems)

Credit risk (premiums and reinsurance recoverables)

Reinsurance risk (reinsurance programmes do not perform as expected)

Liabilities:

Rating risk (losses greater than expected)

Coverage risk (scope of risks greater than expected)

Business:

Operational risks (originating from human error, misfortune and crime)

Regulatory risk (credit agencies, regulators, tax authorities, courts)

Competitor risk (price and customer impacts)
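As a purely illustrative aside (not part of any IRRM product discussed in this paper), the taxonomy above can be encoded in a few lines so that a model's coverage can be checked mechanically against it; the Python sketch below shows one possible encoding.

    # Illustrative encoding of the asset/liability/business risk taxonomy listed above.
    RISK_TAXONOMY = {
        "assets": ["capital market", "credit", "reinsurance"],
        "liabilities": ["rating", "coverage"],
        "business": ["operational", "regulatory", "competitor"],
    }

    def coverage_gaps(modelled_risks):
        """Return the risks in the taxonomy that a given model does not quantify."""
        return sorted(
            f"{group}: {risk}"
            for group, risks in RISK_TAXONOMY.items()
            for risk in risks
            if (group, risk) not in modelled_risks
        )

    # A typical IRRM model as assessed in this paper: strong on asset risk and on
    # attritional/catastrophe liability risk, silent on most business risk.
    typical_model = {("assets", "capital market"), ("assets", "credit"), ("liabilities", "rating")}
    print(coverage_gaps(typical_model))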

From numerous conversations with senior insurance executives, the author has found that there is a common perception of the hierarchy of risks facing an insurer. Most senior managers feel the biggest risk is from them making a poor strategic decision, such as an acquisition which does not yield the assumed synergies or the expansion of a product line as the underwriting results plummet. Such risks fall under operational risks and are rarely modelled. The next biggest threat is felt to be from a sustained stock market collapse, which by its severity and duration cannot be mitigated through option-based portfolio insurance. This risk from capital markets is well covered by IRRM models. The third source of risk is considered to be from an unforeseen exposure (coverage extension risk) not priced for at the time the policy was underwritten. Past examples include environmental liabilities created by government action; future risks could come from new consumer goods such as mobile phones. These risks fall under coverage risks and, whereas attritional and catastrophe losses are included in IRRM models, these unforeseen losses are not.

In addition, IRRM models rarely tackle operational, reinsurance and regulatory risks, although it is interesting to note that in the US both the Renaissance Re and Falcon models do allude to including such risks on an unquantified basis (Lowe 1996:5, Mulvey 1997:7). When these omissions are combined with the exclusion of two out of the three main risks perceived by users as their major threats, it can be seen that "total integrated risk" or "enterprise risk" models fall short of encompassing the totality of insurance enterprises' risk. What IRRM models do, and generally do well, is model the risks to assets and liabilities which form the basis of everyday risk to an insurance entity, and this is where their present use lies, not in attempting to model every type of risk facing an insurer, many of which are unsuitable for modelling in a quantified, probabilistic manner. These risks are best left to the less sophisticated but effective techniques of (deterministic) scenario planning.

3. Characteristics of "good" models

A model is a theoretical system of relationships which encapsulates part of the real world. To do so requires a simplification and abstraction of "reality", where in the socioeconomic field reality cannot be defined without applying a set of theoretical constructs. This process has been referred to as "the social construction of reality" (Berger 1991). It has profound implications for the testing of economic models as it implies that an independent test (as in the natural sciences) of the veracity of an economic model is exceedingly difficult and that the best which can be achieved is a test of a model against its predictive powers and its principles of construction.

Tests against predictive powers have been examined at length in various standard works on the development of statistical models. However, tests against the principles of what makes a good model have not always been applied as rigorously as is found in other fields, such as operational research. The principles of what makes a good model are very similar to the principles of what makes a good theory, which should be expected given that a model is itself a theoretical construct.

These principles, if followed, should also lead to the development of models which are more useful to their end users. This quality is essential if IRRM models are to continue to enjoy the support of the end-user community in insurance companies. From a user's perspective, a good model needs to be flexible to adapt rapidly to changing requirements as a business' situation changes. A model also needs to be believable, by which is meant that the model can be independently audited to verify that it functions as specified and that a user can understand the workings and is prepared to use the model's output to support critical business decisions.

Two models can be compared and evaluated as to which is the better model in terms of transparency, evolvability, parsimony, consistency and scalability. Transparency is essential because it enables users to see how their business is being modelled and thus gains their acceptance of the model's results. Evolvability is a quality which enables a model to grow in complexity and is required because an effective development strategy is to build a simpler version to which more complexity is added over time. Parsimony prevents theories becoming too complex and, all things being equal, values fewer assumptions over more. Consistency means that the model explains logically the relationships between variables. Whereas evolvability refers to the expansion of the logical structure of the model, scalability applies to the model's capacity to capture more scope and more depth of the "real world". For example, a scalable model can be readily extended to include more product lines, more sources of risk, more business units etc.

A noticeable feature of many IRRM models is the lack of transparency, in that little is published on their workings and assumptions, mainly on commercial grounds. It is therefore up to each model developer to assess their model in the light of the above criteria. In the author's experience of having spoken with a number of model developers and users, parsimony and consistency are generally considered to be present. But there are frequent criticisms from users as to the lack of transparency, evolvability and scalability present in many models.

Whilst parsimony and consistency are desirable from a modelling efficiency standpoint, transparency, evolvability and scalability affect the user and go some way towards explaining the narrow use of IRRM models in practice. The conundrum for IRRM modellers at present is that, despite the compelling arguments in favour of using IRRM models on a regular basis to inform the most critical decisions facing managers, they are more often than not on the margins of managers' activities, used to assist infrequently on technical problems.

From a systems perspective, a manager's challenge is to collect information about his world, process it and make optimum decisions on the allocation of scarce resources. A key tool in doing this is the development of models which explain the past and enable the manager to predict the future. Thus, models from a user's perspective can be viewed on three levels: knowledge maps, frameworks that organize knowledge, or "microworlds" for experimenting and learning (Morecroft 1992: 9). An IRRM model which incorporates business as well as asset and liability risks is a microworld and potentially a powerful instrument to support strategic thinking and decisions. As case studies have shown, to be accepted a model must make the user feel that their view of the world has been modelled and that the model is flexible enough to adjust to changes in data and interpretation (Morecroft and Sterman 1992). Without full transparency, evolvability and scalability this essential requirement of users cannot be met.


A further criticism which can be made of IRRM models is that they are often developed and implemented in isolation from the user. The user's requirements are noted and the modeller goes away, to return in due course with the largely finished model. This process excludes the user from influencing the development of the model, and they are forced to either accept or reject the result. Whilst such an approach might be acceptable for a narrow, technical problem, it is a fundamentally flawed approach when dealing with a strategic issue. Why should a senior executive base a key decision (and one with potentially considerable risk to his future employment) on a model whose development he has not been involved in?

If IRRM models are to gain wider acceptance amongst the senior executives of insurers and to be used as instruments supporting key strategic decisions, they must become more transparent, with greater evolvability and scalability. At the same time, the process by which IRRM models are implemented needs to move from one where the modeller acts on the client to a situation where the modeller acts with the client in a partnership to develop a microworld for the user. Both these issues are a result of the attitude of modellers but also of the modelling tools at their disposal. Many of the present generation of IRRM models are monolithic systems running on computer languages such as Fortran, Visual Basic or C which lack transparency. The models' structures are not similar to the business world of the ultimate users and they require the modeller to intermediate, as well as to interpret, between their model and the user. This creates further distance between modeller and user, as well as increasing the user's potential alienation from the model, with a concomitant reduction in its acceptability and limitations on its use.

The tools for more transparent, evolvable and scalable models already exist in component-based architecture, objects and graphical user interfaces. Some of the rapid development techniques being used in system development offer exciting possibilities for IRRM modellers to really engage with their users.
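As a purely illustrative sketch of what a component-based, user-facing architecture might look like (this is not a description of any existing IRRM system, and all names and figures below are invented), each risk module could sit behind a small common interface so that a front end can assemble, inspect and swap components without touching the underlying numerics:

    import random
    from typing import Protocol

    class RiskComponent(Protocol):
        """Minimal interface an IRRM building block might expose to a front end."""
        name: str
        def simulate(self, rng: random.Random) -> float:
            """Return one simulated annual profit / (loss) contribution."""
            ...

    class EquityPortfolio:
        name = "equity portfolio"
        def __init__(self, value: float, mean: float, vol: float):
            self.value, self.mean, self.vol = value, mean, vol
        def simulate(self, rng: random.Random) -> float:
            return self.value * rng.gauss(self.mean, self.vol)

    class CatastropheBook:
        name = "property catastrophe book"
        def __init__(self, premium: float, event_prob: float, event_loss: float):
            self.premium, self.event_prob, self.event_loss = premium, event_prob, event_loss
        def simulate(self, rng: random.Random) -> float:
            return self.premium - (self.event_loss if rng.random() < self.event_prob else 0.0)

    def run_model(components, n_sims=10_000, seed=1):
        """Aggregate the components transparently: the user sees each contribution."""
        rng = random.Random(seed)
        totals = {c.name: 0.0 for c in components}
        for _ in range(n_sims):
            for c in components:
                totals[c.name] += c.simulate(rng)
        return {name: total / n_sims for name, total in totals.items()}

    print(run_model([EquityPortfolio(100.0, 0.06, 0.18),
                     CatastropheBook(premium=12.0, event_prob=0.05, event_loss=80.0)]))

Because each component is small and self-describing, a user could in principle add, remove or re-parameterise components through a graphical front end, which is the kind of engagement argued for above.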

4. Conclusion

IRRM models are typically confined to being technical tools used on an ad hoc basis on the fringes of key management decisions. Whilst having the features of internal consistency and parsimony, IRRM models often lack sufficient transparency, evolvability and scalability to persuade managers to base strategic decisions on them. It has been shown how the process of modelling plays an important part in the degree of acceptance by the user and that the monolithic and excluding nature of present IRRM modelling processes limits the use to which they are put.

If IRRM models are to achieve the status of being instruments (as opposed to technical tools) for insurance company executives, they must become more acceptable to the users. This can only be done by making models more transparent, with enhanced features of evolvability and scalability. At their core must be a process of model development which includes (rather than excludes) the user, capturing their perceptions of their business world. The new generation of component-based architecture using object orientated programming offers exciting possibilities for moving IRRM models to the centre stage of insurance company management.

Acknowledgements

The author wishes to thank John Hibbert (Barrie & Hibbert Limited) for first pointing out the 3 principles of a good model, and his colleagues, in particular Alan Bulbeck, Sophie O'Sullivan, Clare Richmond and Tim Thomas. Naturally, any errors, omissions or oversights are entirely the responsibility of the author.

References

Berger, P., and T. Luckmann, (1991), The Social Construction of Reality.

Coutts, S.M., and T.R.H. Thomas, (1997), "Modelling the impact of reinsurance on financial strength", British Actuarial Journal, Vol 3, Part III.

Goldman Sachs & Co., and Swiss Bank Corporation, (1998), The Practice of Risk Management, Euromoney Publications.

Lowe, S.P., and J. Stanard, (1996), "An Integrated Dynamic Financial Analysis and Decision Support System for a Property Catastrophe Reinsurer", CAS Seminar on Dynamic Financial Analysis, CAS Forum 1996.

Morecroft J.D.W., (1992), "Executive knowledge, models and learning", in Morecroft J.D.W., and J.D. Sterman (eds), (1992): 9-27.

Morecroft J.D.W., and J.D. Sterman (eds), (1992), "Modelling for learning", European Journal of Operational Research 59.

Mulvey, J.M., S. Correnti and J. Lummis, (1997), Total Integrated Risk Management: Insurance Elements, Technical report SOR-97-2, Falcon Asset Management.

Wacker, J.G., (1998), "A definition of theory: research guidelines for different theory-building research methods in operations management", Journal of Operations Management 16, 361-385.


Catastrophe risk management for UK insurers: the River Thames

Willis Insurance Consulting Solutions - 3rd Annual Conference, 6 September 1999

Matthew Foote & Hélène Galy

Abstract

This paper describes the development of insurance and reinsurance business applications for flood risk analysis in the Thames area. It will provide background to the project, introduce the innovative technologies that were used, the processes involved in developing the model, and the application of the model to business needs.

1 Introduction

This paper describes the development of an innovative flood risk assessment system for the River Thames, the most important river in the UK. The model was built by the Willis hazard team using ArcInfo GIS and ERDAS image processing tools and is based upon specially commissioned airborne Synthetic Aperture Radar (SAR) data. The model was initially aimed at the insurance and reinsurance market in the UK, but the techniques are applicable to any organisation exposed to flood risk.

1.1 River Thames

The River Thames is the UK's most important river. The river catchment covers an area of approximately 14,000 square kilometres. There are 1.4 million residential and 100,000 commercial properties in the immediate vicinity of the river, with a population of approximately 3 million people.

For most of its length (350 km) the river can be described as semi-managed. It changes from tidal to non-tidal at Teddington lock and effectively ends at the Thames barrier at Woolwich. This structural defence was completed in 1982 to prevent flooding from coastal surge, with a design criterion of the 1000-year event. Although the non-tidal section does not feature banks or levees, management is carried out using alleviation techniques such as diversion channels and ponding reservoirs.

1.2 Development partners

Willis uses a blend of hazard and GIS expertise allied to actuarial and financial engineering to provide risk consultancy to the insurance, reinsurance and financial markets worldwide. The SAR data was supplied by Willis's partner, Intermap Technologies. Intermap provides intelligent solutions to major companies, governments and development agencies around the world in the extraction of information from images acquired from different airborne and satellite sources.

2 Development

The background to the development can be split into three areas: the physical and insurance conditions that drove the project commercially; the data innovations that drove the project technically; and the tools and processes that determined the methodology.

2.1 Physical and insurance conditions

The River Thames is a flood prone river. It has suffered serious flooding 20 times over the last 200 years. Major events occurred in 1894 and 1947, both of which caused severe flooding, particularly in London. Minor floods occur more frequently, flooding smaller areas and areas of lower economic value. In the recent past, there has been considerable property development on the flood plains, particularly 'out-of-town' commercial development, and new residential development in response to the UK government's call for 4 million new homes to be built during the next 10 years. Property values have also increased greatly during the past 15 years, by 1.5 times the annual inflation rate, adding to the value of property at risk.


Flood insurance is normally provided as part of buildings and contents insurance in the UK. For residential property, approximately 70% of property has buildings cover, and 85% has contents cover. Commercial flood insurance is similarly provided as part of normal cover. Flood insurance in the UK covers all types of flood.

The combination of the increasing value of property at risk and the insurance conditions has resulted in very large exposures to potential flood risk for insurance companies. This exposure is also of interest to reinsurance companies and other organisations interested in risk. Recent estimates of insurance loss from a 100-year return period event for the Thames have ranged from £5 billion to £20 billion. Willis has been asked to study the feasibility of providing Thames flood loss estimates for insurance companies in the past but, due to data limitations, had not been able to identify a suitable methodology which combined regional coverage, detailed property location and a feasible flood risk area delineation technique. The technologies and methodologies that made this project possible were developed recently.

2.2 Innovative technology

The main data set used in flood risk assessment is a Digital Elevation Model (DEM). There have been a number of elevation models available in the UK, principally from the national mapping organisation, the Ordnance Survey. These have been used in the past for coastal flood models, but neither their accuracy nor their resolution were suitable for riverine models. Other data sets are becoming available, including LIDAR (Light Detection And Ranging) and SAR. Satellite SAR does not have sufficient accuracy or resolution and requires data from multiple passes, leading to problems of coherence. LIDAR has very high collection and processing costs, and is normally collected at a much higher resolution than is required for regional models. Airborne SAR has proved to be the ideal source for regional flood modelling. The 5 to 10 metre resolution DEM with an accuracy of up to 2 metres*, combined with the 2.5 metre resolution SAR image, provides a data set which can be handled with reasonable ease, and can provide sufficient vertical and horizontal detail for most flood modelling requirements. The use of airborne SAR for a commercial, large area flood risk assessment has not previously been carried out in the UK.

2.2.1 Intermap STAR-3i

The STAR-3i system is an interferometric radar system mounted on a Learjet 36. The system generates DEMs and Ortho Rectified Images (ORIs) simultaneously. The system consists of two X-band radar antennae mounted on the Learjet. Data is collected from the two antennae simultaneously. Terrain height data is extracted and is used to geometrically correct the radar image. Precise terrain height and highly accurate positioning are ensured by the use of on-board laser-based inertial measurement, careful calibration of the baseline (the distance between the two antennae) and post-process Differential Global Positioning Systems (DGPS). The achieved level of accuracy means that no in-scene control points are required. In the typical collection mode, the system is flown at a height of 12,000m and acquires a 10km wide swath of 2.5m resolution radar data. The system has been designed to collect <2m* vertical accuracy DEMs at a rate of 100 km²/min.

ORIs are radar images that have had all distortions caused by platform instability, radial distortion (airphoto) and terrain displacement removed. A DEM is used to remove the distortions due to terrain and elevation. Correcting for these distortions results in the imagery becoming a true scale representation of the ground.


The specifications of the Airborne Derived Global Terrain™ data which was used on this project - Intermap's GT2 product - are summarized in the table below:

DEM                                               ORI
vertical RMS 2m*                                  image pixel size 2.5m
posting every 5m                                  horizontal RMS 2.5m
available in 7.5' tiles                           available in 7.5' tiles
first surface elevation measured
DEM data in ASCII, BIL or GEOTIFF, 32 bit format  image data in BIL, GEOTIFF or TIFF format (8 bit)

* The accuracy of the data, which has been used on this project, is likely to have been better than that quoted. SAR gives higher accuracy on flat or homogenous surfaces, such as flood plains, compared to mountains. The University of Stuttgart has produced an independent report on the accuracy of STAR-3i.

2.2.2 Flood data

The other essential data for flood risk assessment is information regarding the flood itself. Conventional hydrological modelling is normally both data intensive and processing intensive. The complex multiple inputs interact in a way which provides a suitable result for local areas. However, application to areas larger than a single reach is problematic. The process that has been developed for regional flood risk assessment is simpler than a full hydrological model, but is suitable for the purpose. Hydrological modelling is useful for local studies with good quality, site specific hydrological data. The adopted flood risk assessment process is suitable for regional areas with minimal data. HR Wallingford, a leading hydraulic engineering consultant specialising in river and coastal flooding in the UK and overseas, has been Willis's technical partner on river and coastal issues for a number of years. They provided flood information for events with various return periods at measurement sites along the Thames.

2.3 Tools and processes

2.3.1 Tools

The primary data processing and editing tools for this project were ArcView, ArcInfo and ERDAS Imagine. The processes used on this project were closely linked with the tools selected.

2.3.2 Processes

The Thames survey flown by Intermap in August 1998 produced 120 tiles of DEM data. Each 7.5' x 7.5' tile, 120 square kilometres in area, consists of approximately 600Mb of data in an ASCII format. 34 tiles were used in the flood risk assessment project.

In order to reduce data size, an approximate flood plain was defined using the river elevation as a baseline and an arbitrary 10m contour. The initial processing carried out on the DEM data involved producing a physically limited and horizontally corrected DEM - 'ground level' elevation - which could be used for the flood processes. "Speckle" noise reduction and image enhancement of the ORIs were required before carrying out an unsupervised classification on a multispectral image to provide a landuse/roughness map.

Matthew Foote & Hélène Galy: Catastrophe Risk Management for UK insurers: the River Thames

Willis Risk Technology Insights 11

The next stage in the process was to take the raw flood levels and produce a flood surface. The 55 points were located upstream of weirs and locks, to produce a representation of the flood that was relatively unaffected by the structures. The derived flood surface was used as an estimate of the slope of the river and the flood. Two distinct processes were used in the creation of the flood surfaces.

a) Intersecting the flood surface with the DEM was the first step towards producing flood maps. Polygons non-adjacent to the river, i.e. areas which the terrain model indicates are not logically flooded at a given water depth, were removed. This method is unique. It results in a logically correct flood risk envelope, taking physical barriers into account.

b) A basic propagation model was designed to propagate the water across the DEM surface and approximate the actual progress of water. The flood was then propagated out from the river in a series of iterations until the limit of the surface likely to be flooded was reached. As with the previous method (a), the resolution of the data allows embankments and solid features to be taken into account (trees and other 'non-solid' features were previously removed by filter). A visual comparison of techniques a) and b) showed that both methods provided similar flood extents.
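The following NumPy sketch illustrates the logic of the two methods on a toy grid; it is a simplified reconstruction of the approach described in a) and b) above, not the ArcInfo GRID implementation actually used, and the elevation values are invented.

    import numpy as np
    from scipy import ndimage

    # Toy DEM (metres): a channel at column 3, a 6m embankment at column 5 and
    # low ground behind it that is below the flood level but not connected to the river.
    dem = np.array([
        [5.0, 4.0, 3.0, 2.0, 3.0, 6.0, 2.5, 2.5],
        [5.0, 4.0, 3.0, 2.0, 3.0, 6.0, 2.5, 2.5],
        [5.0, 4.0, 3.0, 2.0, 3.0, 6.0, 2.5, 2.5],
    ])
    river = np.zeros_like(dem, dtype=bool)
    river[:, 3] = True                      # channel cells
    flood_surface = np.full_like(dem, 4.5)  # interpolated flood level for one return period

    # Method (a): intersect the flood surface with the DEM, then discard "wet" areas
    # that are not connected to the river (the low ground behind the embankment).
    wet = flood_surface > dem
    labels, _ = ndimage.label(wet)
    river_labels = set(labels[river]) - {0}
    extent_a = np.isin(labels, list(river_labels))

    # Method (b): propagate the water outwards from the river cell by cell, iterating
    # until no further cell below the flood surface can be reached.
    extent_b = river.copy()
    while True:
        grown = ndimage.binary_dilation(extent_b) & wet
        if np.array_equal(grown, extent_b):
            break
        extent_b = grown

    depth = np.where(extent_a, flood_surface - dem, 0.0)
    print("methods agree:", np.array_equal(extent_a, extent_b))
    print("maximum flood depth (m):", depth.max())

On this toy example both methods exclude the low ground behind the embankment, mirroring the visual agreement between the two techniques reported above.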

2.4 Suitability of tools and data

2.4.1 Suitability of tools

The geoidal correction of the DEM was found to be beyond the capabilities of the GIS software used in-house. This process was outsourced to University College London in order for specialist software and expertise to be applied.

Should we have decided to carry out detailed hydrodynamic investigations, specialist tools would have been required, such as Mike11 (from the Danish Hydraulics Institute) or ISIS (from HR Wallingford). The ESRI and ERDAS tools chosen were very suitable for the methodology which has been detailed.

2.4.2 Suitability of SAR imagery

The airborne SAR data proved ideal for the task of regional flood risk assessment. The combination of resolution and accuracy delivered a final product which is far superior to anything else currently available. Initial problems with data sizes, particularly with the DEM data, were overcome by 'trimming' the area of interest to the flood plain only and careful disk space management. The SAR data provides a unique combination of elevation models and radar images, which allows land cover to be included in an estimation of flood propagation. This is a major advantage over other data sets and allows a great refinement of the flood estimation process, with little additional processing.

3. Business applications

The flood risk analysis system is designed to give insurance companies and other potential users competitive advantage by improving the estimation of exposure to Thames flood. The system may be used through a graphical user interface for visualisation and analysis, or for loss exposure analysis.

3.1. Visualisation and analysis: user interface

When the risk zonations from the flood model are combined with geo-demographic information, informed marketing decisions can be made. Strategies to avoid or target higher risk areas can be designed using GIS techniques.

Willis are currently developing customized GIS packages which will allow brokers to easily demonstrate the capabilities of the model to clients.


3.2. Loss exposure

Flood modelling allows the delineation of areas at risk, as well as determination of indicative flood depths to be expected at any location within the area affected. This data is then used with relevant locations in order to assess risk to property. Property locations for the UK, at a unit postcode level (see table below), were intersected with the flood depth data using a suitable interpolation algorithm, which resulted in a measure of flood intensity (depth) for each location.

Postcode                    Number in the UK   Approx. number of properties   Approx. number of postcodes affected by a 200 year event

Outcode
Area (e.g. KT)              123                200,000                        12
District (e.g. KT4)         2,807              9,000                          99

Incode
Sector (e.g. KT4 8)         9,114              2,500                          246
Sub Sector (e.g. KT4 8X)    106,000            220                            1,581
Unit (e.g. KT4 8XW)         1,600,000          15                             10,052

The object of the project is to provide an estimate of the loss to an insurance company portfolio of property from flood events of various return periods. The process is carried out using a proprietary data warehouse system, built using an Oracle RDBMS, called the Integrated Catastrophe Modelling Platform™ (ICMP™). This process takes the inputs of flood intensity, client portfolios and suitable loss curves, and outputs an estimated loss for each location and/or for the whole portfolio. The initial use of the outputs from the system is to allow insurance companies to accurately calculate their reinsurance purchase requirements. The data can also be used by insurers as a rating tool for properties and to provide hazard intensity for potential development sites.
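A minimal sketch of this loss-estimation step is given below. It assumes a flood depth has already been attached to each unit postcode and applies a simple depth-damage (vulnerability) curve; the postcodes, sums insured and curve are invented for illustration, and the real ICMP process runs in an Oracle data warehouse rather than the toy functions shown here.

    # Illustrative only: toy flood depths by unit postcode (metres) and a toy portfolio.
    flood_depth_by_postcode = {"KT4 8XW": 0.8, "KT4 8XA": 0.2, "KT4 8YZ": 0.0}

    portfolio = [
        {"postcode": "KT4 8XW", "sum_insured": 250_000},
        {"postcode": "KT4 8XA", "sum_insured": 180_000},
        {"postcode": "KT4 8YZ", "sum_insured": 300_000},
    ]

    def damage_ratio(depth_m):
        """A hypothetical piecewise-linear depth-damage curve (proportion of value lost)."""
        points = [(0.0, 0.0), (0.5, 0.15), (1.0, 0.30), (2.0, 0.55), (3.0, 0.70)]
        if depth_m <= 0.0:
            return 0.0
        for (d0, r0), (d1, r1) in zip(points, points[1:]):
            if depth_m <= d1:
                return r0 + (r1 - r0) * (depth_m - d0) / (d1 - d0)
        return points[-1][1]

    def portfolio_loss(portfolio, depths):
        """Estimated ground-up loss per location and for the whole portfolio."""
        per_location = {
            risk["postcode"]: risk["sum_insured"] * damage_ratio(depths.get(risk["postcode"], 0.0))
            for risk in portfolio
        }
        return per_location, sum(per_location.values())

    per_location, total = portfolio_loss(portfolio, flood_depth_by_postcode)
    print(per_location)
    print(f"portfolio loss for this return period: {total:,.0f}")

Repeating the calculation for each return period gives the loss exceedance figures an insurer would use to size its reinsurance purchase.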

4 Conclusion

The Thames flood risk assessment is now available to UK insurance companies, and enables them to estimate their exposure to flood risk on the Thames from a number of different return period events, both at a portfolio level and at a geocode level.

There are a number of potential refinements which will further increase the value of the flood risk assessment project. By the year 2000, Intermap are planning to improve the vertical accuracy of the GT2 product to 0.3 metres, which would greatly enhance the flood depth analysis. The propagation of the flood can be enhanced by the addition of improved volumetrics for historic floods. This may prove problematic for the Thames, but will prove more successful for other rivers which have defined defences (levees) and for which a realistic limit can be set for available floodwater volume. The existing methodology is however applicable to other catchments.

The project has been extremely successful; the methodology is currently being assessed by local and national authorities in the UK and elsewhere, and is being proposed for use by the insurance industry in Europe and the Far East.

"ICMP" and “Integrated Catastrophe Modelling Platform" are registered trademarks of Willis Limited.

Matthew Foote & Hélène Galy: Catastrophe Risk Management for UK insurers: the River Thames

Willis Risk Technology Insights 13

14 Willis Risk Technology Insights

Using Synthetic Aperture Radar imagery for flood modelling

RGS-IBG Annual Conference, University of Sussex, 4-7 January 2000

Hélène M. Galy, Richard A. Sanders

Abstract

Airborne radar technology has long been exclusively devoted to military applications. In recent years, applications in telecommunications, oil exploration and agriculture have proved that radar technology can also be used commercially. This paper focuses on an application in the insurance industry and describes the development of a large-scale flood risk assessment model for the River Thames. The model is based upon airborne Synthetic Aperture Radar (SAR) data and was built using commonly used Geographic Information Systems (GIS) and image processing tools. From the Ortho-rectified Images (ORIs) a land cover map was produced, from which surface roughness could be derived. A Digital Elevation Model (DEM) had to be processed to remove trees and other soft barriers and obtain the effective ground level. This was achieved by using the land cover information and remote sensing processes. This methodology is applicable to any organisation exposed to flood risk.

1. Introduction

As highlighted by recent events in the UK (e.g. Easter 1998) and elsewhere (e.g. Eastern Europe), riverine flood is a major natural hazard in Europe. Flood is the costliest natural hazard in the world and accounts for 31% of the economic losses resulting from natural catastrophes (Munich Re, 1997).

The insurance industry is becoming increasingly concerned that the financial cost of flooding is quantified accurately. This concern is further aggravated by the projected increase in UK households of 4 million by 2010. The Environment Agency (EA, 1999) itself has requirements for flood risk assessment. Determination of at risk areas is needed not only for flood warning, but also for flood plain development control (Parker, 1998).

A lot of research has been done to determine loss estimates by water depth (Black & Evans, 1999). However, adequate flood depth estimates have been difficult to acquire for large areas such as the Thames flood plain. The insurance industry, for example, has typically relied on coarse resolution flood risk assessments. It can be argued that this is due in part to the lack of terrain and land cover information with an adequate spatial and temporal resolution. New techniques for terrain collection have become available in the past two or three years (e.g. SAR and laser scanner), which have the potential to provide very high resolution data. In conjunction with other remote sensing data sources such as CASI or airborne photography, these can be used for state-of-the-art flood modelling.

1.1 Development partners

The project was conducted by the consultancy division of Willis, a worldwide risk practice. Willis uses a blend of hazard and GIS expertise allied to actuarial and financial engineering to provide risk consultancy to the insurance, reinsurance and financial markets worldwide.

The Synthetic Aperture Radar (SAR) data was supplied by Intermap Technologies, a multi-national digital mapping company. Intermap focuses on providing image data from radar satellites (e.g. Radarsat), interferometric airborne radar (STAR-3i) and aerial photography. Digital Elevation Models (DEMs), Ortho-rectified Images (ORIs) and Thematic Maps are the primary outputs.

Hydrological data was supplied by HR Wallingford, a leading hydraulic engineering consultant specialising in river and coastal flooding in the UK and overseas.


1.2 The River Thames

The 340 kilometre long River Thames is the UK's most important river. At Teddington lock, its regime changes from tidal (susceptible to storm surge) to non-tidal (susceptible only to rainfall induced flooding). Flood modelling has to take this dual regime into account. For the purpose of this study, the river effectively ends at the Thames barrier (Woolwich), which was completed in 1982 to prevent flooding from coastal surge with a design criterion of the 1000-year event. The river has a discharge rate of 680 cumecs for a 50 year flood event at Teddington lock.

The catchment covers an area of approximately 14,000 square kilometres. While the upper reaches are steeper, the majority of the flood plain is relatively flat. For most of its length the river can be described as semi-managed. Although the non-tidal section frequently does not feature banks or levees, management is carried out using alleviation techniques such as diversion channels and ponding reservoirs. There are 1.4 million residential and 100,000 commercial properties in the immediate vicinity of the river, with a population of approximately 3 million people.

2 Development

Three factors underpinned this study: the physical and insurance conditions that drove the project commercially, the data innovations that drove the project technically, and the tools and processes that determined the methodology.

2.1 Physical and insurance conditions

The River Thames is a flood prone river. It has suffered serious flooding at least 20 times during the last 200 years. Major events occurred in 1894 and 1947, both of which caused severe flooding, particularly in London. Minor floods occur more frequently, flooding smaller areas and areas of lower economic value. In the recent past, there has been considerable property development on the flood plains, particularly 'out-of-town' commercial development and new residential development in response to the UK government's call for 4 million new homes to be built during the next 10 years. Property values have also increased greatly during the past 15 years, by 1.5 times the annual inflation rate, thereby adding to the value of property at risk.

In the UK, flood insurance is normally provided as part of buildings and contents insurance. For residential property, approximately 70% of property has buildings cover, and 85% has contents cover. Commercial flood insurance is similarly provided as part of normal cover. Flood insurance in the UK covers all types of flood.

The consequence of the increasing value of property at risk and the insurance conditions (nearly universal cover) is a very large exposure to potential flood risk for insurance companies. This exposure is also of interest to reinsurance companies and other organisations interested in risk (e.g. emergency services, planning authorities). Willis had been asked to study the feasibility of providing loss estimates for Thames flood for insurance companies in the past but, due to data limitations, had not been able to identify a suitable methodology which combined regional coverage, detailed property location and a feasible flood risk area delineation technique.


2.2 Innovative technologies

2.2.1 Digital Elevation Model and Surface Roughness information

The STAR-3i system is an interferometric radar system mounted on a Learjet 36. The system generates DEMs and ORIs simultaneously. The system consists of two X-band radar antennae mounted on the Learjet. Data collection from the two antennae occurs simultaneously. Only the first surface elevation is measured. The acquired data sets are interfered by a digital correlation process to extract terrain height data, which is used to geometrically correct the radar image. STAR-3i uses post-processed Differential Global Positioning Systems (DGPS) data, together with on-board laser-based inertial measurement data, to attain highly accurate horizontal and vertical positioning control. Precise terrain height and positioning data are enhanced by careful calibration of the baseline (the distance between the two antennae). Due to the accuracy of the positioning information and the careful baseline calibration, no in-scene control points are required. The only restriction is that a ground-based GPS receiver must be located within 200km of the data collection site for DGPS processing. Adding ground-truthing allows the data resolution to be increased further. In the typical collection mode, the system is flown at a height of 12,000m and acquires a 10km wide swath of 2.5m resolution radar data. The system has been designed to collect <2m vertical accuracy DEMs at a rate of up to 100 km²/min.

ORIs are images that have had all distortions caused by platform instability, radial distortion (airphoto) and terrain displacement removed. A DEM is used to remove the distortions due to terrain and elevation. Correcting for these distortions results in the imagery becoming a true scale representation of the ground, which can be used in a GIS for measurements of length, area and azimuth.

The main data input used in flood risk assessment is the DEM. There have been a number of elevation models available in the UK, principally from the national mapping organisation, the Ordnance Survey. These have been used in the past for coastal flood models, but neither their accuracy nor their resolution were considered suitable for the planned new riverine models. Other data sets are becoming available, including Laser Scanner and SAR. Satellite SAR from ERS1 & 2 and Radarsat does not have sufficient accuracy or resolution and requires data from multiple passes, leading to problems with coherence. Laser Scanner has very high collection and processing costs, has longer collection timescales and is normally collected at a much higher resolution than is required for regional models.

In planning this study, airborne SAR has proved to be the ideal source for regional flood modelling. The DEM has a resolution of 5 or 10 metres, with a vertical accuracy from 3m to 0.5m. The ORIs have a 2.5 metre resolution. The combination of DEMs and ORIs provides a data set that can be handled with reasonable ease (due to its reasonable file size), and can provide sufficient vertical and horizontal detail for most regional risk assessment requirements. Further advantages include cloud penetration and day/night acquisition. The use of airborne SAR for a commercial, large area flood risk assessment has not previously been carried out in the UK. This project is based completely on commercial requirements and was funded by Willis.

The specifications of the SAR products are detailed below:

Specifications of airborne derived SAR products

Data                     Digital Elevation Model (DEM)   Orthorectified Image (ORI)
Horizontal resolution    posting every 5m or 10m         pixel size 2.5m, horizontal RMS 2.5m
Vertical resolution      vertical RMS 0.5 to 3m          n/a
Mosaic coverage          7.5' tiles                      7.5' tiles
Format                   ASCII, BIL or GEOTIFF           BIL, GEOTIFF or TIFF


The accuracy of the data which has been used on this project, delivered with a vertical RMS of 2m, is likely to have been better than that quoted. SAR gives higher accuracy on flat or homogenous surfaces, such as flood plains. The Institute of Navigation, University of Stuttgart (Kleusberg, A. and Klaedtke, H.G., 1998) has produced an independent report on the accuracy of STAR-3i.

2.2.2 Hydrological data

The other essential data for flood risk assessment is hydrological data. Conventional hydrological modelling is both data intensive and processing intensive. The complex multiple inputs interact in a way which provides a suitable result for local area studies. However, application of full hydrological models to areas larger than a single reach is problematic. Hydrological modelling is useful for local studies with good quality, site specific data. The new flood risk assessment process outlined in this paper is suitable for regional areas with minimal data. The hydrological data used in this project included raw flood levels for events with various return periods, produced by HR Wallingford (HR Wallingford, 1999; HR Wallingford, 1997).

2.3 Tools and processes

The processes used on this study were closely linked with the tools selected. If advanced hydrological modelling had been required, tools such as Mike11 (from the Danish Hydraulics Institute) or ISIS (from HR Wallingford) would have been used in addition to the GIS and image processing tools. Instead, standard GIS tools were used.

2.3.1 GIS tools

The primary data processing and editing tools for this project were ArcInfo and ERDAS Imagine. ArcInfo 7.2.1 runs on a Sun Enterprise 450 and accesses a DEC Raid5 Storage Works for data storage. The GRID and TIN modules were used extensively in this project.

ERDAS Imagine version 8.3.1 runs on a Compaq SP700 PC, which is networked to the UNIX equipment. The Essentials and Advantage modules were used. ArcView 3.1 was also used in the project, running alongside ERDAS. The extensions Spatial Analyst, CAD Reader, 3D Analyst and Geoprocessing were particularly useful.

2.3.2 Processes

The Thames survey flown by Intermap in August 1998 produced 120 tiles of data, each covering 120 square kilometres and consisting of approximately 600Mb of data. Only 34 tiles were needed for this Thames study.

The project was carried out over 12 months elapsed time at the Willis offices in London. The initial processing carried out on the DEM data involved producing a physically limited and horizontally corrected DEM, which could be used for the flood processes. Image enhancement tools were used to produce a land cover map from the ORIs, from which surface roughness coefficients, or Manning's n values (Dingman, 1994), could then be derived for flood propagation modelling.


The generalised algorithms are detailed below:

Generalised process for producing the DEM

– Reprojection from UTM to UK National Grid

– Geoidal correction from WGS84 to OSGB36 (see note 1)

– Conversion of ASCII DEM to GRID lattice (ARC/INFO)

– Data size management: an approximate flood plain was defined using the river elevation as a baseline and an arbitrary 10m contour

– Use of image processing tools to provide 'ground level' elevation (ERDAS Imagine)

Generalised process for producing the land use map from the ORIs

– Registration of ORIs (ArcInfo)

– Conversion to GRID (ARC/INFO)

– Reprojection from UTM to UK National Grid

– Data size management: DEM grid used to mask ORI grid

– Noise reduction and image enhancement of ORIs

– Unsupervised classification of multispectral images to provide land use/roughness map (ERDAS Imagine) (Figure 5)
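To make the roughness step concrete, the sketch below shows one way a classified land-cover grid could be converted into a grid of Manning's n values. It is illustrative only, assuming Python/NumPy rasters: the class codes and n values are hypothetical and are not the coefficients or the ARC/INFO GRID implementation used in the study.

```python
# Illustrative only: hypothetical land-cover codes and Manning's n values,
# not the coefficients derived in the study from the ORI classification.
import numpy as np

MANNING_N = {
    1: 0.035,  # grassland / pasture (hypothetical value)
    2: 0.10,   # woodland (hypothetical value)
    3: 0.015,  # urban / paved (hypothetical value)
    4: 0.04,   # arable (hypothetical value)
}

def roughness_grid(landcover: np.ndarray) -> np.ndarray:
    """Convert a grid of land-cover class codes into a grid of Manning's n values."""
    n_grid = np.full(landcover.shape, 0.05, dtype=float)  # default for unclassified cells
    for code, n_value in MANNING_N.items():
        n_grid[landcover == code] = n_value
    return n_grid

landcover = np.array([[1, 1, 3],
                      [2, 4, 3]])
print(roughness_grid(landcover))
```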

The next stage in the process was to take the raw flood levels provided by HR Wallingford and produce a flood surface for a number of return periods corresponding to the 50, 100, 200, 500 and 1000 year events. The points were located upstream of weirs and locks, to produce a representation of the flood that was relatively unaffected by these structures. There were 55 points used in the tidal and non-tidal Thames. The flood height at each point was extended along a line perpendicular to the river, which takes into account the orientation of the flood plain at each cross-section. These points were used as inputs to build a TIN: the resulting water surface (Figure 6) was used as an estimate of the slope of the stream channel and the flood.

Two processes were compared for the creation of flood maps (Figure 7):

a) Intersecting the flood surface with the DEM:

A key feature in the algorithm removed polygons non-adjacent to the river, i.e. areas which the terrain model indicates are not logically flooded at a given water depth. This process produces a logically correct flood risk envelope.

b) Using a basic propagation model:

Existing propagation models were examined (Consuguera et al., 1993; Giammarco et al., 1994a, 1994b) and an appropriate methodology was developed, whereby the water is propagated out from the river across the DEM surface in a series of iterations until the limit of the surface likely to be flooded is reached. As with method (a), this takes into account embankments and solid features (trees and other 'non-solid' features having previously been removed with a filter). Both processes were carried out using ArcInfo GRID tools and processes. A visual comparison of techniques (a) and (b) showed that both methods produced similar flood extents.
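As a rough illustration of method (a), the sketch below intersects a flood water surface with the DEM and keeps only the wet cells connected to the river, mirroring the removal of non-adjacent polygons described above. It is a minimal sketch assuming NumPy/SciPy raster arrays; the study itself used ArcInfo GRID tools, not this code.

```python
# Minimal sketch, assuming the DEM, flood surface and river mask are NumPy arrays
# on the same grid. Not the ArcInfo GRID implementation used in the study.
import numpy as np
from scipy import ndimage

def flood_extent(dem, flood_surface, river_mask):
    """Boolean grid of flooded cells that are hydraulically connected to the river.

    dem           -- 2-D array of ground elevations (m)
    flood_surface -- 2-D array of interpolated flood water levels (m)
    river_mask    -- 2-D boolean array marking river channel cells
    """
    wet = flood_surface > dem                       # cells lying below the water surface
    labels, _ = ndimage.label(wet)                  # group wet cells into contiguous patches
    river_labels = np.unique(labels[wet & river_mask])
    river_labels = river_labels[river_labels != 0]  # drop the dry/background label
    return np.isin(labels, river_labels)            # keep only patches touching the river

# Indicative flood depth inside the envelope:
# depth = np.where(flood_extent(dem, fs, river), fs - dem, 0.0)
```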

Note 1: The geoidal correction of the DEM was found to be beyond the capabilities of the GIS software used in-house. This process was outsourced to the Department of Geomatic Engineering, University College London, in order for specialist software and expertise to be applied.


3. Application: risk analysis and loss estimation

Flood modelling allows the delineation of areas at risk, as well as the estimation of indicative flood depths expected at any location within the area affected. This data is then used to assess risk to property. Using a suitable interpolation algorithm, property locations at unit postcode level are intersected with the flood depth data. This results in a measure of flood intensity (water depth) for each location.

Potential loss to an insurance company's portfolio of properties can then be estimated for flood events of various return periods. The process is carried out using a proprietary data warehouse system, built using an Oracle Relational Database Management System (RDBMS) and called the Integrated Catastrophe Modelling Platform (ICMPTM). This process takes the inputs of flood intensity, client portfolios and suitable loss curves, and outputs an estimated loss for each location and/or for the whole portfolio. The initial use of the outputs from the system is to allow insurance companies to calculate their reinsurance purchase requirements accurately. The data can also be used by insurers as a rating tool for properties and to provide hazard intensity for potential development sites. A user interface was also developed (Figure 8), which can be used to interrogate the data and visualise the areas at risk.
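The sketch below illustrates this loss-estimation step in miniature: flood depths sampled at property locations are passed through a depth-damage curve to give a loss per location and a portfolio total. The curve values and function names are hypothetical; the production process runs within ICMPTM on an Oracle RDBMS, as described above.

```python
# Hypothetical depth-damage curve and portfolio; illustrative of the calculation
# only, not of the loss curves held in ICMP.
import numpy as np

curve_depth  = np.array([0.0, 0.25, 0.5, 1.0, 2.0, 3.0])      # flood depth (m)
curve_damage = np.array([0.0, 0.05, 0.12, 0.25, 0.45, 0.60])  # proportion of sum insured lost

def portfolio_loss(depths, sums_insured):
    """Estimate the loss at each location and for the whole portfolio."""
    damage_ratio = np.interp(depths, curve_depth, curve_damage)
    location_losses = damage_ratio * sums_insured
    return location_losses, location_losses.sum()

depths = np.array([0.0, 0.4, 1.5])              # sampled from the hazard grid at each postcode
sums_insured = np.array([200e3, 150e3, 300e3])  # invented exposures
losses, total = portfolio_loss(depths, sums_insured)
```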

4. Discussion and conclusion

The airborne SAR data proved ideal for regional flood risk assessment. The combination of resolution and accuracy delivered a final product that is far superior to anything else currently available. Initial problems with data sizes, particularly with the DEM data, were overcome by 'trimming' the area of interest to the floodplain only and by careful disk space management. The SAR data provides a unique combination of elevation models and ORI imagery, which allows land use to be included in the estimation of flood propagation. This is a major advantage over other data sets and allows a great refinement of the flood estimation process with little additional processing. GIS tools also proved adequate for regional risk assessment, as opposed to single-site analysis.

This approach gives planners as well as insurers a valuable tool for assessing flood risk. The flood risk assessment system developed by Willis is now available to UK insurance companies, and enables them to estimate their exposure to flood risk on the Thames for a number of different return period events, both at a portfolio level and at a geocode level. The methodology developed for this study is currently being assessed by local and national authorities in the UK and elsewhere, and is being proposed for use by the insurance industry in Europe and the Far East.

The Bye report (Bye, 1998), for example, uses 100 year (river) and 200 year (coastal) events as planning return periods. However, it is believed that both planners and insurers will need an assessment of risk for more than one scenario; a multi-scenario approach was therefore adopted.

There are a number of potential refinements which will further increase the value of the flood risk assessment model. The propagation of the flood can be enhanced by the addition of improved volumetrics for historic floods. This may prove problematic for the Thames, but will prove more successful for other rivers which have defined defences (levees) and for which a realistic limit can be set for the available floodwater volume.


Acknowledgements

This work was carried out in partnership with Intermap Technologies Inc and HR Wallingford. The authors would particularly like to acknowledge the assistance provided by David Ramsbottom of HRW and Hugh Mackay of Intermap, and of colleagues at Willis.

References

Black, A. R. & Evans, S. A. (1999) A New Insurance Perspective on British Flood Losses, Proceedings of the 34th MAFF Conference of River and Coastal Engineers, Keele University.

Bye, P. & Horner, M. (1998) Easter 1998 Floods, Volume 1.

Consuguera, D., Joerin, F. & Vitalini, F. (1993) Flood Delineation and Impact Assessment in Agricultural Land using GIS Technology, 177-198 in Carrara, A. & Guzzetti, F. (eds) (1993) Geographical Information Systems in Assessing Natural Hazards, Kluwer Academic Publishers.

Dingman, S. L. (1994) Physical Hydrology, Prentice Hall.

Environment Agency (1999) Action Plan for Flood Forecasting, Warning and Response, Progress Report - June 1999.

Giammarco, P. & Todini, E. (1994a) A Control Volume Finite Element Method for the Solution of 2-D Overland Flow Problems, 82-101 in Giammarco, P., Todini, E., Molinaro, P. & Natale, L. (eds) (1994) Modelling of Flood Propagation over Initially Dry Areas, Proceedings of the Speciality Conference, ENEL-DSR-CRIS, Milan, Italy.

Giammarco, P. & Todini, E. (1994b) Combining a 2-D Flood Plain Model with GIS for Flood Delineation and Damage Assessment, 171-185 in Giammarco, P., Todini, E., Molinaro, P. & Natale, L. (eds) (1994) Modelling of Flood Propagation over Initially Dry Areas, Proceedings of the Speciality Conference, ENEL-DSR-CRIS, Milan, Italy.

HR Wallingford, (1997) Hydraulic Factors in Flood Risk Mapping, Report EX 3574.

HR Wallingford, (1999) Thames Flood Levels, Report EX 4030.

Kleusberg, A. & Klaedtke, H. G. (1998) Accuracy Assessment for the STAR-3i derived DHM in Baden-Württemberg, Universität Stuttgart, Institut für Navigation.

Munich Re (1997) Flooding and insurance.

Parker, D. J. (1998) Appendix C: Flood Forecasting, Warning and Response System, in Bye, P. & Horner, M. (1998) Easter 1998 Floods, Volume 1.



"ICMP" and "Integrated Catastrophe Modelling Platform" are registered trademarks of Willis Limited.


Business applications of earthquake modelling in Japan using ArcView GIS

Nineteenth Annual ESRI International User Conference, San Diego Convention Centre, San Diego, California, 26-30 July 1999

Karl A.H. Jones

Abstract

This paper describes the development of insurance and reinsurance business applications for earthquake risk analysis in Japan. It provides a background to the particular insurance requirements, the processes involved in developing the models, and the application of the model to business needs. The paper deals with geodemographic analysis for insurance rating and for market analysis, and considers the availability and suitability of various data sets for the analysis of insurance needs. Examples of business applications in ArcView GIS are included.

1. Introduction

Insurance and reinsurance are not industries which are immediately associated with the application of GIS. There are, however, three key areas where these industries could use GIS extensively.

a) Visualisation and analysis of a property portfolio

The ease of data interpretation from mapping instead of endless spreadsheets is widely acknowledged. GIS is a suitable tool for the analysis of data with a spatial reference such as policy numbers, sums insured and claims histories. This is a straightforward task to perform with simple tools in ArcView.

b) Risk assessment through hazard modelling

A major source of claims to any insurance or reinsurance company is natural hazard events such as earthquakes. The ability to evaluate which portfolio elements are at risk, and the severity of that threat, enables an insurance company to assess the level of risk. This can assist in decisions on pricing individual policies, both for direct insurance and for large single reinsurance risks. Modelling can also assist with reinsurance pricing for a whole portfolio of business and with subsequent risk transfer decisions (including reinsurance purchase). Such assessments of risk can be achieved through hazard modelling.

c) Marketing and customer analysis

The use of geodemographic data - such as census information or customer profiling on propensity to purchase insurance products - allows an insurance company to target prospective clients and plan marketing campaigns. When combined with other information with a spatial element, such as crime rates or earthquake hazard, a powerful tool for sales targeting and risk management is created.

In Japan we see an insurance market which is changing and has the potential to embrace the benefits of GIS. Japan is located in one of the world's most active earthquake zones. As a result, potential demand for insurance is high, but the costs of insurance and reinsurance are also high, especially in the context of the current state of the Japanese economy. The Kobe event in 1995 reminded everyone of the threat, and there is always talk of when 'the big one' will strike Tokyo.

Until recently, earthquake insurance policies in Japan were subject to rigid tariffs; however, the liberalisation of the markets will allow many changes to take place. GIS and earthquake modelling have a leading role to play in this environment. Through hazard modelling, risk assessments can be undertaken to provide a solid basis for the development of earthquake pricing away from the tariff system. Insurers and reinsurers can gain a better understanding of the risk they transfer or accept by modelling a full range of possible earthquake events. Japan is one of the largest markets for insurance in the world, and the ability to use geodemographic information in targeting further sales (not least in combination with hazard modelling) will provide insurers with new avenues for increasing sales or targeting desired segments.




2. Development

Over the last two years Willis have developed earthquake modelling systems built in ArcInfo and ArcView. The primary models are built in ArcInfo, with considerable spatial modelling using the GRID and TIN modules and programming in AML. Some work was also performed with ERDAS Imagine version 8.3.1 and the Essentials module.

ArcInfo 7.2.1 runs on a Sun Enterprise 450 and accesses a DEC RAID5 StorageWorks array for data storage. ArcView 3.1 runs on PCs networked to the UNIX equipment and will perform adequately on a Pentium 166 with 64Mb of RAM. ERDAS Imagine version 8.3.1 runs on a Compaq SP700 PC, which is also networked to the UNIX equipment.

Using this system it is possible to query a library of over 2,000 earthquakes and perform all the subsequent analysis required through ArcView. The modelling is available at a number of scales, allowing the analysis of earthquake risk across Japan for anything from a single-location risk assessment right up to a country-wide portfolio with millions of properties.

An important aim of this project was to provide the results of the models to clients in a user-friendly form. A customised ArcView project was therefore developed, allowing easy visualisation of the modelling in conjunction with associated demographic and financial information. A start-up script loads the relevant shapefiles and grids and assigns legends to them, based on a pathname defined by an environment variable. A main view window and a 'control' view are opened automatically and resized according to the screen resolution. Mapping of the relevant themes is done via a modeless dialog. This allows the client to toggle map themes on/off, search by geographic locators or grid co-ordinate, and interrogate the earthquake hazard and geodemographic information. A toggle to full-functionality ArcView is also provided. The data can be accessed from CD-ROM with the apr file loaded onto the computer, or all earthquake scenarios can be loaded to the hard disk. The project is designed to operate on a machine with either 800 by 600 or 1024 by 768 screen resolution, with at least 16 million colours.

The ability to link the processing power and tools of ArcInfo directly to a visual output in ArcView is a major advantage over other desktop GIS packages used within the insurance and reinsurance industries. A further advantage has been the ability to customise ArcView through the use of Avenue.

ArcView is easy to use and very user-friendly, which is of immense help when introducing advanced technologies to an industry such as insurance, which has not always been at the forefront of computing and IT.

Applications can also be made "user-proof" so that functionality is limited or extended as required. This means that new users' utilisation of the system may be restricted, ensuring that they cannot access areas which they would not understand, thereby avoiding any confusion.

Their user-friendly and 'user-proof' nature makes ESRI GIS tools extremely effective in enabling access to valuable information.

3. Data issues

3.1 Hazard data

This paper focuses on the applicability of these tools to the situation in Japan rather than on how to recreate an earthquake model. Whilst access to data sources in the USA is relatively easy, in other nations obtaining and processing data is probably the most time-consuming element of the whole modelling exercise.

Seismic and tectonic information, geology - especially soils - geocoding and demographic data came from varied and disparate sources, including Japan, the UK and the USA. Identifying the data to construct the hazard models was a challenging task and required extensive Japanese language skills and academic and business contacts within Japan. There was often little consistency between the formats and scales of the data incorporated into the models, which therefore required manipulation in ArcInfo before use could be made of it, often requiring complete reconstruction of datasets from raw ASCII files.



3.2 Client data

The data that will be input to the models from insurance or reinsurance companies will vary in detail and scale. Ideally, the input data can provide detailed information by small unit areas (such as postcodes) and is split between property types (such as residential or commercial). The more detailed the information, the more meaningful and accurate the resulting risk assessment studies can be.

Sometimes information is only available by prefecture - a prefecture could have a population of several million, whilst earthquake effects can vary over small distances of a couple of kilometres - whilst at the other end of the spectrum highly accurate data is available, with street mapping used at scales of 1:10,000. The Willis model has been built at a resolution that allows the most detailed studies to be carried out, but the capability must also be available to incorporate coarser data.

Example of data interpretation:

This example of data refinement performed by the model examines coarse portfolio data.

When a client asks for an assessment of earthquake exposure, the model could be provided with client data containing little or no indication of the type of property at risk. Location may not be detailed, and a considerable number of policies may be grouped into a very large area. So, if we are given information that a client has 5,000 policies within an area, with a distribution representative of the insurance cover given in this area, how can we locate those policies and therefore assess the earthquake risk for an area the size of Tokyo Bay? Residential, commercial and industrial risks respond very differently to earthquake motions.

Within the model we have undertaken remote sensing analysis of Landsat images using ERDAS software. The imagery analysis has been used to ascertain the land use type for a given area, such as Tokyo Bay. Insured property will be located in urban areas, so we can separate urban from other land uses, such as parkland, forest or agricultural use. The urban areas can then be divided further into urban land use classifications by the density of the urban development. This enables an identification of the areas most likely to be residential (low or high rise), commercial and light or heavy industrial. This is verified by the use of aerial photography and census information from the area under study. Both of these datasets are incorporated in the model.

From previously knowing just a number of properties in a wide area, we can narrow down the likely location of the insured risks. We can then assess the distribution of soil types and other factors influencing earthquake motion and its effects in those areas. This enables a more accurate determination of the exposure to earthquake risk.

Of course, this is no substitute for perfect data, but it demonstrates the direct applicability of GIS in assisting a client with answers to questions that are otherwise extremely difficult to answer without a fundamental reassessment of their data collection.
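A highly simplified sketch of this idea is shown below: a coarsely located policy count is spread across land-use classes in proportion to their share of the classified urban area. The class names and shares are invented for illustration; in practice the weights come from the Landsat classification, aerial photography and census data described above.

```python
# Hypothetical land-use shares for the area of interest; in the real model these
# would be derived from the classified imagery and census data.
land_use_share = {
    "residential_low_rise": 0.45,
    "residential_high_rise": 0.20,
    "commercial": 0.25,
    "light_industry": 0.10,
}

def allocate_policies(n_policies: int, shares: dict) -> dict:
    """Distribute a policy count across land-use classes in proportion to their share."""
    return {cls: round(n_policies * share) for cls, share in shares.items()}

print(allocate_policies(5000, land_use_share))
# Each class can then be matched to the soil and ground-motion data for its cells.
```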

4. Application

The application of earthquake modelling has assisted clients in a variety of ways. Examples of this can be categorised into the three areas noted in the Introduction.

Visualisation and analysis

– Clients have been able to view their earthquake exposure risk by using a spatial identifier - a 'geocode' such as a postcode. This enables an understanding of the concentrations and accumulations of risk.

– The ability to assist with coarse data resolution (as noted above) enables a more accurate understanding of property location than would previously have been available.

– All outputs from the complex earthquake modelling in ArcInfo are fully accessible and are easily viewed and manipulated in ArcView


Risk assessment through hazard modelling

– At the site-specific level, earthquake risk can be assessed both for direct underwriting by insurers and for single, large (facultative) reinsurance placements.

– A whole portfolio of property data can be assessed for earthquake risk and the level of risk quantified.

– Hazard risk zonations and risk mapping can be produced to assist with insurance pricing and risk management

– The additional information provided by this modelling enables informed choices to be made without recourse, or in addition, to external sources of hazard zonations and risk assessment

– Differentiation between the risks to different buildings and insured property types can be understood. Not all properties behave in the same way, and not all structures are equally vulnerable to the same earthquake.

All our hazard modelling is able to utilise our proprietary Integrated Catastrophe Modelling Platform (ICMPTM). ICMPTM is a proven risk assessment system for business and provides a cohesive framework, including a series of generic tools to allow the entry and cleaning of data sets and the modelling of multiple perils on a single client data set. It is capable of multiple exposure modelling (e.g. property loss, business interruption) and of dealing with varied resolutions of location data. The product was developed as a series of interchangeable modules to allow updating of individual parts.

Marketing and customer analysis

– When the hazard zonations from the model outputs are combined with the geodemographic information, informed marketing decisions can be made. Strategies to avoid or target higher risk areas can be developed.

– Earthquake events have ramifications beyond property portfolios. Other insurance product lines, such as motor or life, can also be significantly impacted by earthquakes.

Example of business application:

We were asked to assess a nationwide client portfolio and how it could be affected by five historical earthquake events (including Kobe and the 1923 Great Kanto) 'as if' they had happened today.

Each insurance company has a unique portfolio, and the effects that a particular earthquake could have on it are therefore unique and could be very different from the effects experienced by a competitor. The results of the analysis in this exercise showed that what was historically one of the most significant events of this century would have caused a minimal loss to our client. This was because of the nature of their portfolio type and the location of policies. Conversely, another of the events would potentially have resulted in an unacceptable level of losses, beyond the limits of reserves and reinsurance policies.

The significance of this in terms of a business application was outlined and summarised in a series of suggested action plans. For example:

– Where the earthquake risk to the portfolio is greatest, the organisation can transfer more of the risk, i.e. through reinsurance arrangements or alternative risk transfer mechanisms such as securitisation of risk. Risk can also be avoided or reduced by writing less business in the area at risk, thereby reducing the exposure to earthquake hazard.

– Where there is less risk of loss from earthquake events, the company could build its market share and diversify away from higher risk areas. In conjunction with geodemographic information, this knowledge of areas of lower risk can assist with marketing and selling operations.


5. Conclusions

Some of the many advantages of using GIS to examine earthquake risk in Japan for the insurance and reinsurance industries have been shown. The benefits of improved analysis and accurate risk assessment will be reflected in improved performance for both the insurance/reinsurance organisation and its policyholders. The use of GIS and risk assessment can help a company to achieve competitive advantage in the marketplace and to provide an improved service to clients. The benefits noted in this paper present a real advantage to the industry, corporations and the individual. How this advantage is measured in terms of improvements in financial results and improved shareholder value is difficult to quantify; however, this should not be seen as an excuse to ignore the benefits of GIS or a reason not to apply these technologies in the search for a solution.

Acknowledgements

The author wishes to thank a number of academics from both the UK and Japan for their valuable inputs to the hazard modelling.



"ICMP" and "Integrated Catastrophe Modelling Platform" are registered trademarks of Willis Limited.


Catastrophe risk management for Japanese insurers: Japanese typhoon modelling

Willis Insurance Consulting Solutions - 3rd Annual Conference, 6 September 1999

Shigeko Tabuchi, Richard Sanders

Abstract

This paper describes the application of advanced numerical modelling, GIS tools and Willis' Integrated Catastrophe Modelling PlatformTM (ICMPTM) to typhoon modelling for Japanese insurance companies, and how this has led to numerous opportunities for business applications. The paper introduces typhoons in Japan and the insurance issues involved. It describes the modelling techniques, the business applications and the potential future developments which can be carried out in order to bring advantage both to insurance companies and to Willis.

1. Introduction

Typhoon Mireille in 1991 resulted in higher insurance losses than any other single event in Japan. Japan is frequently hit by typhoons and is at great risk from related hazards such as wind and flood. Insurance companies are increasingly becoming aware of risks from such events, and various models have been developed to assist with risk management. The Japan Team of Willis Reinsurance requested a typhoon model which would allow their clients to assess their risk from typhoons.

Willis has worked on Japanese catastrophe projects for the past three years. Typhoon is the second natural hazard to be focused on; the first was earthquake, prompted both by the devastating Hanshin earthquake in Kobe in 1995 and by the liberalisation of the Japanese non-life insurance market.

For this project two types of typhoon model were built: a probabilistic/actuarial model and a state-of-the-art deterministic model which was developed in conjunction with two external contractors. In this paper typhoon modelling and its business applications, primarily for the insurance and reinsurance markets, are discussed, with some background to typhoons and typhoon hazards in Japan. Use of the ArcView GIS user interface system for Japanese insurance businesses is discussed further in Jones (1999).

2. Typhoons in Japan

2.1 Insurance issues

Most global catastrophic insurance losses are caused by wind-related hazards (Table 1). Typhoon 19 of 1991, or "Mireille", is the third most costly of these events in the period between 1970 and 1998 (SIGMA, 1999). It caused more losses to the insurance market than the Kobe event. This is probably because earthquake insurance penetration for residential property in Japan is only 14.2%, even after the Kobe event (Non-Life Insurance in Japan 1997-1998, March 1998), compared to 57.0% for fire insurance, which covers losses from water and wind related perils. In general, a typhoon event is likely to cover a wider area, hence affecting a greater number of properties than an earthquake event; Typhoon Mireille covered an area from Kyushu in south Japan to Hokkaido in north Japan. Typhoon is also a more frequent event than earthquake: on average, typhoons make landfall four times a year.




Table 1

The 10 most costly insurance losses 1970 - 1998

Insured loss (in USD m at 1998 prices)#   Date      Event                                         Country

18,600                                    08.1992   Hurricane Andrew                              USA
13,762                                    01.1994   Northridge Earthquake, Southern California    USA
6,654                                     09.1991   Typhoon Mireille                              Japan
5,733                                     08.1992   Winterstorm Daria (Hurricane)                 Europe
5,520                                     09.1989   Hurricane Hugo                                Puerto Rico
4,302                                     10.1987   Autumn Storm                                  Europe
3,984                                     02.1990   Winter Storm Vivian (Hurricane)               Europe
3,530                                     08.1998   Hurricane Georges, Flooding                   USA
2,759                                     07.1988   Explosion on Piper Alpha Offshore Oil Rig     Great Britain
2,647                                     01.1995   Great Hanshin Earthquake in Kobe              Japan
2,249                                     10.1995   Hurricane Opal                                USA

# Excluding liability

2.2 What is a typhoon?

A typhoon is a severe form of tropical cyclone: a non-frontal, warm-centred low-pressure system developing over tropical and sub-tropical oceans. It generally occurs between May and November in the northwest Pacific region. Similar weather systems occur in other parts of the world: hurricanes in the Atlantic and east Pacific affecting the USA and the Caribbean, and cyclones in Australia and the Indian Ocean. Typhoon intensity is classified either by wind speed or by the barometric pressure at the centre of the typhoon.

There are four stages to a typhoon's life: birth, developing, maturing and weakening (Appendix A). A typhoon is driven by westerly winds and normally approaches Japan at the maturing or weakening stage. During these stages the central pressure increases and the wind speed decreases, so the storm weakens; however, it can still result in extreme damage from a wide extent of relatively strong winds or heavy rainfall.

2.3 Typhoon hazards

There are four main hazards caused by a typhoon event: high wind, heavy rain, storm surge and high waves (Appendix B). Any of these hazards can result in large losses; however, Japan is at particularly high risk from flood. According to the Japanese Ministry of Construction, 75% of its GDP is located in flood-prone areas. Typhoon Mireille is known as a "dry windstorm", and the Willis model concentrated on wind and on flood from storm surge risks. The probabilistic/actuarial model looks at wind speed only.

3. Probabilistic/actuarial model

In order to build this model, 200 historical typhoons from the past 50 years were analysed. Wind speeds for different return periods were obtained by running each typhoon across Japan to reconstruct the actual event using recorded wind speed data. This is based on the principle that no return period can be given for a particular typhoon track, which is unique; return periods can, however, be given for wind speeds being exceeded at specific locations. This is currently done at prefecture level. The output of this analysis can be used to calculate the percentage loss in each prefecture for a given return period event (25, 50 and 100 years) and to construct loss exceedance curves for a given portfolio.
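The sketch below illustrates the underlying idea with a simple empirical plotting-position approach: given a series of annual maximum wind speeds at a location (the values here are invented), the wind speed corresponding to a chosen return period is read from the ranked series. This is a simplified stand-in for, not a description of, the reconstruction of 200 historical typhoons used in the actual model.

```python
# Illustrative only: invented annual-maximum wind speeds and a Weibull plotting
# position, standing in for the prefecture-level analysis described in the text.
import numpy as np

def return_period_wind(annual_max_speeds, return_period_years):
    """Empirical wind speed exceeded, on average, once every `return_period_years`."""
    speeds = np.sort(np.asarray(annual_max_speeds))[::-1]   # rank speeds, largest first
    n = len(speeds)
    exceed_prob = np.arange(1, n + 1) / (n + 1.0)           # exceedance probability per rank
    return float(np.interp(1.0 / return_period_years, exceed_prob, speeds))

annual_maxima = np.random.default_rng(0).gumbel(30, 6, size=50)  # m/s, invented data
print(return_period_wind(annual_maxima, 25))                     # 25 year return period
```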

4. Integrated hazard model - deterministic

Two hazard components for Typhoon Mireille were modelled: wind speed and storm surge.

4.1 Typhoon Mireille

Typhoon Mireille occurred in September 1991. It is known to be the most costly typhoon event in Japan in recent history, causing damage to many properties, utilities and agricultural products. Its meteorologically defined intensity (barometric pressure) was not the lowest ever recorded; however, extremely high wind speeds were recorded due to its high tracking speed across Japan. Many insurance companies use Mireille for their benchmarking exercises to assess their risks from strong wind. Willis chose Mireille as the first typhoon for which to analyse damage from both wind speed and storm surge. The inclusion of storm surge is unique to this model. Storm surge occurred locally in the southern part of Japan, in particular on a coastal stretch of the Seto Inland Sea.

There has been intensive work carried out by academics and other organisations on Mireille, and meteorological and loss data for strong winds were readily available from various bodies in Japan. Information on flood from the storm surge, however, was not collected by any single organisation and was not obtainable within the timescales required.

4.2 AEF wind model

The project partner on this part of the model was Accurate Environmental Forecasting, Inc (AEF). It was founded by scientists at the University of Rhode Island, Graduate School of Oceanography, USA, which is internationally recognised for oceanographic numerical modelling. AEF build sophisticated numerical tropical cyclone models. Their models provide a major part of the basis of hurricane forecasting for the US National Weather Service. The models are developed from the Geophysical Fluid Dynamics Laboratory (GFDL) / National Oceanic and Atmospheric Administration (NOAA) hurricane prediction system. The main feature of the AEF model is that it uses ocean-atmosphere interaction, which allows the following:

– Improved intensity calculation

– Enhanced calculation of storms at sea

– Improved intensity calculations at landfall where properties are at risk

To adapt the model for Typhoon Mireille in Japan, three-dimensional (3-D) meteorological fields were prepared from the available observations for the typhoon itself and for the north-east Pacific so that 'hindcasting' could take place. The model output includes the 3-D wind structure of the storm, atmospheric temperature and moisture, surface pressure and precipitation information.

4.3 FSI storm surge model

The storm surge model for southern Japan was built by Flow Solutions International, Inc (FSI), which was founded at the University of South Florida, Department of Marine Science. The model is fully three-dimensional and time-dependent. It utilises tidal forcing at the shelf boundary to drive tides and AEF wind fields to drive storm surge.

4.4 Loss calculations

The output grids from the hazard models are then intersected with the chosen geocode level (shi-ku-gun, prefecture, etc.) to carry out loss analysis. The resultant tables are then related to client portfolios using ICMPTM.



5. Business applications

5.1 Underwriting Information System

The Underwriting Information System (UIS) is an application which has been built using the data provided by the actuarial/probabilistic model. The application allows users to obtain probable maximum wind speeds and percentage damage for specific locations. This can be used for rating purposes by underwriters, for portfolio management, and also to analyse potential marketing opportunities. The UIS is also a very valuable demonstration tool for both Willis and clients: it shows the scope and the capabilities of the model in easily understandable, visual terms, and it provides a powerful image which portrays technical and competitive advantage.

5.2 Portfolio analysis for wind and flood

Portfolio analysis is primarily carried out in order to assess reinsurance requirements. Use of the deterministic windstorm and storm surge model allows a client to assess its reinsurance requirements based upon the benchmark event for Japanese typhoon, Mireille. Accurate assessment of exposure will allow the insurer to purchase reinsurance which reflects its attitude to risk. Portfolio analysis can also be broken down into accurate geocode-level results, which can allow a client company to manage its portfolio to increase profit.

5.3 Short-term forecasting

The two models used in the deterministic approach are based upon developments made in the field of short-term forecasting of tropical cyclones. The models can be initialised into a 'real-time' mode which will provide a series of 12-hourly forecasts for an actual event, within a short time of the meteorological recordings being taken. This forecast data can then be run through the hazard and loss modules to provide a forecast of loss to a client portfolio. The results can then be used for claims management and to identify erroneous claims.

5.4 Non-insurance applications

The model also has applications outside the insurance/reinsurance field, in general risk management, emergency planning and regional development.

6. Future development and improvement

Clients will benefit from additional typhoon tracks being added to the deterministic model. In particular, clients' highest perceived risk is in the Kanto region, which was largely unaffected by Mireille. The addition of typhoons which affected the Kanto region will increase the applicability of the model.

Increasing the resolution of the probabilistic model from 50 to 2,500 geocodes covering Japan will allow a more accurate estimate of potential losses to be made, particularly inland.

Client feedback has suggested that loss from flood resulting from heavy rainfall is particularly important in Japan. The model already includes rainfall as a hazard; it is proposed to use the resultant data to model flood and the consequent losses.

7. Conclusion

This project was technically very successful, and leading Japanese insurance companies perceived the model to be leading edge. The model includes the most advanced typhoon modelling methodology currently available globally, and Willis can apply it to clients' portfolios to provide highly accurate loss exposure analysis. Other applications will allow clients to effectively rate portfolios or individual risks.

The techniques which were developed for this project are applicable worldwide, wherever there is a tropical cyclone risk.



Appendix A: The four stages of a typhoon's life

Stage 1 Birth

An ascending air current over the warm sea develops and cumulonimbus clouds form. These cloud formations gather to form a vortex, the central pressure starts to decrease and a typhoon develops.

Stage 2 Developing

Using water vapour from the warm sea as an energy source, the typhoon intensifies. The central pressure deepens to its lowest and the wind speed intensifies to its highest.

Stage 3 Maturing

Wind speed near the centre decreases but the area of strong winds tends to spread out further.

Stage 4 Weakening

A typhoon generally weakens in one of two ways:

1. As a typhoon moves closer to Japan, the sea temperature decreases dramatically and the typhoon loses its energy source; it additionally loses energy due to land friction and decreases in strength to become a tropical cyclone.

2. Due to cold air from the north, a typhoon loses its characteristics. This may result in the formation of an extratropical storm or a frontal system.

Whichever way the typhoon weakens, the central pressure increases and the wind speed decreases. However, the area of strong wind may increase, and it is also possible for the typhoon to redevelop, particularly after crossing from land onto sea. Even if the wind speed decreases to below typhoon level, strong rain may occur, particularly when a frontal system forms.



Appendix B: Typhoon hazards

Wind hazard

– Wind speed is influenced by topography, and strong wind is expected in particular locations such as inlets, channels and valley lines, and at structures such as tunnel entrances, on bridges and between buildings.

– High wind speed on the right of the typhoon track, due to the combination of the rotation of the storm and the tracking speed

Rain hazard

– Heavy rainfall from a typhoon system itself or a frontal system

– Extreme rainfall occurs for a short period of time.

– It may cause flooding, landslides and mudflows


Storm surge hazard

– Astronomical tide cycle due to sun/moon gravitation: 1 or 2 per day


– Maximum sea level occurs twice a year, in February and in September. In September this coincides with the typhoon season

– High risk at south-facing bays: when a typhoon moves north, southerly wind blows over the bay causing high surge and high waves

– Past 50 years: sea level differences of over one metre occurred mainly at south-facing bays, for example Tokyo, Ise, Osaka, Seto Naikai and the Ariake Sea

– Sea level differences due to typhoon surge can be estimated from pressure and wind speed

High wave hazard

Three conditions are required for a high wave, and all are satisfied by a typhoon:

1. Wind strength : high

2. Duration of high wind : several days

3. Area covered by high wind : large

– High waves with very unstable wave patterns are formed by a typhoon

– Combination of surge and wave could be very hazardous to coastal strips

[Diagram: the typhoon eye and the two types of surge - pressure surge and wind-driven surge.]

"ICMP" and "Integrated Catastrophe Modelling Platform" are registered trademarks of Willis Limited.

Error estimation in EML models

Willis Insurance Consulting Solutions - 3rd Annual Conference, 6 September 1999

Talbir Singh Bains, Jason G Ness

'Some billion years ago an anonymous speck of protoplasm protruded the first primitive pseudopodium into the primeval slime, and perhaps the first state of uncertainty occurred'

I. J. Good, Science, Feb. 20, 1959

Abstract

The evident increase in the frequency of natural catastrophes in recent years has accelerated the development of loss analysis techniques. Both insurers and reinsurers strive to buy and sell the right amount of protection, maximising profit whilst minimising risk and variability. Thus, deriving only a central estimate for a one hundred year loss does not suffice - the end user is keen to quantify the uncertainty surrounding this estimate. This paper aims to provide a measurement of the uncertainty in the estimation of return period losses for windstorms with the application of the most powerful statistical technique developed in recent years - the bootstrap. The creation of loss exceedance curves, based on frequency and severity, plays a central role in the estimation of these return periods. But to determine a return period with any statistical validity requires multiple observations of that period, and in the reinsurance world this is not practical: we would need to determine the losses of each entity for windstorms over the past two thousand years. The body of data detailing windstorm losses for a particular company is usually insufficient, which gives rise to estimation error. In this paper we discuss the components of estimation error and construct, with the aid of the powerful statistical package S-PLUS, confidence intervals which give an indication of the uncertainty in our return period estimates.

Statement of the problem

It is rare that researchers can gather information from an entire population - if they could, then statistics would be unnecessary. This is particularly true of the reinsurance industry, where data sets are usually limited. Error is involved whenever an experiment is performed, and becomes more significant when data is limited. It is imperative that the vocabulary of error analysis is clearly understood. An error can be defined as "the difference between the result of a measurement and the true value of what we are trying to determine". Errors can be expressed in two different ways - absolute error and relative error. The first is expressed in physical units, the latter as a fraction of the value measured.

An uncertainty, or a confidence interval, is essentially a range, or some 'wiggle room', estimated by the analyst, that is likely to contain the true value of our unknown parameter. These uncertainties can also be expressed in relative or absolute terms. Confidence intervals are required because we know that when we calculate a central estimate of a 100 year return period loss by taking a sample, we could take another sample and arrive at a different central estimate. To improve the validity of our central estimate, we state what the estimate is and construct an interval around it which takes into account the standard error. This allows us to draw inferences on the amount of error involved in our data analysis, and informs us of the precision of statistics such as the mean and standard deviation.

Power is a concept which captures most people's imagination, and confidence intervals are related to this concept. The larger the confidence interval, the less power our modelling has to detect differences in factors which may influence the overall result. The confidence interval is based on three elements, namely the value of a statistic, the standard error of the measure, and the desired width of the confidence interval.

Confidence intervals are usually quoted in research literature as '99% confidence intervals' or, more commonly, '95% confidence intervals'. The 95% confidence interval is calculated in a way such that under repeated sampling it will contain the true population parameter 95% of the time. The figure of 95% is, therefore, a measure of the confidence one has that the interval contains the true population parameter. Strictly speaking, 95% of the intervals will contain the true parameter.

For the estimation of return periods in the probabilistic windstorm process, loss exceedance curves are created. The parameters of these curves are unknown, and are estimated through techniques known as maximum likelihood estimation and minimisation of variance. These parameters fully define the loss exceedance curves, which can then be extrapolated to obtain point estimates for larger losses and hence higher return periods than are available from the observed data set alone.



The aim of this paper is to put forward a reliable solution for providing interval estimates for the predicted losses and return periods. Using our data set of windstorm losses for a particular entity, which is our random sample, we find upper and lower limits for the central estimate, thus creating an interval which is very likely to contain the central estimate. Because we can say just how likely our interval is to contain the central estimate, this type of interval estimate gives more information than the previous point estimate. We also aim to quantify one component of the overall estimation error - the parameter error. The findings in the error estimation problem aim to provide the end user with increased confidence in the reasonableness of the return periods for a benchmark loss.

Error and its components

The estimation error can be broken down into the following three components: stochastic error, model error and parameter error.

Stochastic error

This defines the possibility that the outcome is not that which was expected, given that both the model and the parameters are correct. For example, rolling a dice six times may result in two rolls of a "1", but this does not mean that we should expect more than one roll of a "1", on average, for every six rolls. Similarly, the occurrence of storms is not predictable, and actual experience may differ from our return period estimate. This error is not the most significant, but it can be estimated via simulation or by calculating the relevant mathematical distribution.

Model error

This is the possibility that the analysis technique used is deficient, and can arise from the following:

(a) the true model could be one of a number of models, and so the error is that the best fit model may not be the true model

(b) the model may be from a family that approximates the true model, and so the error is the difference between the approximation and the true model

(c) the model may by coincidence fit the historical data, and so it may not be the true model

In the probabilistic windstorm process, the methodology involves running a company's portfolio through ICMPTM* on storms for which market losses are available, to produce estimates of losses for the major storms. The loss curve used is the "industry" loss curve, and hence the validity of the losses is highly dependent on the appropriateness of the industry curve for the company's portfolio. This is one major source of model error, and can only be minimised by obtaining the company's own details of losses from past events.


Model error can be partially estimated by:

(1) using a set of possible models

(2) using a parameterised super-family of models

The other source of model error arises from the fitting of distributions to the loss estimates, but this is reduced by using a set of distributions.

Parameter error

This defines the possibility that the parameters used to define a model are incorrect, given that the model is correct. This can be caused by using limited data in estimating the parameters. Secondly, parameters evolve through time, so those applicable for future events are unknown at present. In the previous example of the rolling dice, our estimate of the chance of rolling a "1" would be 2 in 6. If the dice were fair then, with more rolls, this estimate of our parameter would get closer to the true value of 1 in 6. Thus, parameter error is perhaps the most significant, taking into account our limited sample size. Parameter error can be estimated by the following approaches:

a) confidence intervals for parameters

b) bootstrap technique for parameter estimate

c) Bayesian estimation

The most effective technique currently used for quantifying parameter error is bootstrapping, which has the advantage of not assuming a distribution, thus making it more flexible.

Error Propagation

This is a method which can be applied in situations where it is not feasible to obtain a number of measurements of the same quantity and hence an estimate of the random error. It is the process whereby two or more random errors are combined to give a third. Error propagation assumes that the errors are Gaussian in nature. It can also be used to combine several sources of random error on the same measurement.
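As a brief worked addition (this relation is not stated in the original text, but it is the standard result for independent Gaussian errors combined additively), the combined random error is obtained in quadrature:

σcombined = √(σ1² + σ2²)

so the combined error is dominated by the larger of the two contributions.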

Derivation of the confidence intervals through classical statistics

In classical statistics we have at our disposal not only point estimates, but confidence intervals within which the true value is expected to lie. As discussed above, parameter error is significant, and in classical statistics these errors can be quantified if the errors of the underlying data are known. If we make the common assumption that the errors for a particular data set are normally distributed, with known variance, then we can estimate the error of the parameters.

As mentioned earlier, standard confidence intervals are made up of three elements, one of which is the standard error. This is defined as the standard deviation of a large number of samples from the population. But in our case data is limited, and the standard error is therefore estimated by the following formula:

Ste = σ / √n

where σ is the standard deviation of our sample of size n. As one would intuitively guess, the size of the standard error decreases as the sample size increases. The standard error is an indicator of the 'goodness' of our point estimator.

Both bootstrap and parametric inferences have the same target: using limited data to estimate the sampling distribution of the chosen estimator. This estimator is used to make inferences about the population parameter. The difference in approach lies in the method used to obtain the sampling distribution. Traditional parametric inference uses assumptions about the shape of the estimator's distribution, whereas the non-parametric technique is distribution free - it is not dependent on a class of distributions. The bootstrap relies on the fact that the sample distribution is a good estimate of the population distribution.


* Integrated Catastrophe Modelling PlatformTM , Willis proprietary customer loss forecasting system



For example, suppose the sample mean = 4.52, the standard deviation = 6.28 and the number of cases = 15. Then the standard error is 1.62. If we now require the 95% confidence interval, we apply the following formula:

95% CI = µ ± (1.96 × Ste)

The 1.96 is the z-value related to the Normal distribution; these z-values are also known as standard deviates. Substituting the numbers in our example gives a 95% confidence interval that ranges from approximately 1.34 to 7.70 around the mean. An approximate confidence interval is usually given by the 68-95-99 rule, which states that the probability that the population mean falls within two standard errors of the sample mean is roughly 95%, i.e.

95% CI ≈ µ ± (2 × Ste)
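A minimal check of the worked example above, assuming only the classical formulas just quoted (the sample values 4.52, 6.28 and n = 15 are taken from the example):

```python
# Reproduces the worked example using the classical standard-error formula.
import math

mean, sd, n = 4.52, 6.28, 15
ste = sd / math.sqrt(n)        # standard error, approximately 1.62
lower = mean - 1.96 * ste      # approximately 1.34
upper = mean + 1.96 * ste      # approximately 7.70
print(round(ste, 2), round(lower, 2), round(upper, 2))
```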

However, for historical windstorm data sets it is often impossible to know the distribution and variance of the errors in the data, and thus the error in the fit cannot be quantified. Here, a relatively new statistical technique allows estimation of the parameter error, using a Monte-Carlo algorithm and the computational resources provided by modern computers.

The 1-α confidence interval takes the form:

( x̄ − σx z(α/2), x̄ + σx z(α/2) )

[Figure: the interval estimate, showing how far the population mean may lie from the sample mean x̄; the lower and upper limits µlow and µhigh sit a distance σx zc either side of x̄, with probability α/2 in each tail.]

The proposed solution - the bootstrap method

The bootstrap is a relatively recently developed (1970s) general technique for estimating sampling distributions. It is a resampling method which can be used to estimate confidence intervals, and is a powerful technique for drawing inferences about a population's characteristics. It does this by using a random sample drawn from the underlying population; in our case the sample is the entity's estimates of windstorm losses in the UK.

The non-parametric bootstrap method does not assume that the sample is selected from a particular probability distribution, and no assumptions have to be made about the population. This is done by selecting randomly, with replacement, a large number of 'resamples' of the same size as the original sample. For each resample the statistic of interest, e.g. the mean or variance, is calculated. The method relies on the fact that the empirical distribution based on the sample is an estimate of the distribution of the population. This estimate is used to calculate a confidence interval that contains, with a given level of confidence, the true parameter value.



Bootstrapping uses Monte-Carlo sampling to generate an empirical estimate of the estimator's sampling distribution. Monte-Carlo sampling builds an estimate of the sampling distribution by randomly drawing a large number of samples of size n from a population and calculating for each one the associated statistic of interest. In the case of bootstrapping, therefore, the sample is treated as the population and Monte-Carlo sampling is conducted on it. This is done by randomly drawing a large number of 'resamples' of size n from the original sample, with replacement. So although each resample will have the same number of elements as the original sample, it will include some of the original data points more than once and omit others.

Derivation of CIs through the bootstrap technique

Consider a random sample of size n, drawn from an unspecified probability distribution. The bootstrap algorithm can be defined as follows:

(1) Place a probability of 1/n at each point x1, x2, …, xn of the sample, to construct the empirical distribution of the sample.

(2) From this empirical distribution function, draw a random sample of size n, with replacement. This is a 'resample'.

(3) Calculate the statistic of interest, say the mean µ, for this resample, yielding µ*.

(4) Repeat steps (2) and (3) Y times, where Y is a large number. This will create Y resamples. To construct a confidence interval around the mean, Y will typically be at least 1000.

(5) From the Y values of µ*, a relative frequency histogram can be constructed by placing a probability of 1/Y at each point µ*1, µ*2, …, µ*Y. The distribution obtained is the bootstrapped estimate of the sampling distribution of the mean (or other statistic of interest), and can be used to make inferences about the actual population parameter.

The method is extremely powerful and works well, since at root is the idea that if the sample is a good approximation of the population, the bootstrap will provide a good approximation of the sampling distribution of the statistic of interest; the method relies on the fact that the empirical distribution function based on the sample is an estimate of the distribution of the population.

This estimate is used to construct a (1 − α)-level confidence interval which, as implemented in the S-PLUS functions, includes all the values of µ* between the α/2 and (1 − α/2) quantiles of the bootstrap sampling distribution. Hence, with α = 0.05, the end points of the confidence interval for the mean are the values of the mean estimate at the 2.5th and 97.5th percentiles of the bootstrap sampling distribution.
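The algorithm is compact enough to sketch in full. The listing below is an illustrative Python version of the basic percentile bootstrap for a statistic of interest; the results reported later in this paper were produced with the S-PLUS bootstrap functions, which also apply the bias-corrected and accelerated (BCa) adjustment that this sketch omits.

```python
import numpy as np

def bootstrap_ci(sample, stat=np.mean, n_resamples=1000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for a statistic of interest."""
    rng = np.random.default_rng(seed)
    sample = np.asarray(sample, dtype=float)
    n = len(sample)
    # Steps (1)-(4): draw Y resamples of size n with replacement and
    # evaluate the statistic on each one.
    replicates = np.array([stat(rng.choice(sample, size=n, replace=True))
                           for _ in range(n_resamples)])
    # Step (5): the replicates estimate the sampling distribution; the
    # alpha/2 and 1 - alpha/2 percentiles give the confidence interval.
    lower, upper = np.percentile(replicates, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return replicates, (lower, upper)
```

With alpha = 0.05 the end points are the 2.5th and 97.5th percentiles of the bootstrap distribution, exactly as described above.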

Example

We now demonstrate the power of the bootstrap by showing how parameter error can be quantified, and how this translates into the variability of a standard measure of risk. In this particular case we will look at the expected 100 year loss from a catastrophe event in the UK.

All the necessary data and assumptions have been used previously in the UKCATNAP model. Put simply, this model predicts the future occurrence of a catastrophe event and its severity. The parameters the model uses are calculated from past observed data and, as has already been mentioned, carry error due to the limited sample size. Models of this kind are usually made up of many variables which are then combined into a final figure. The UKCATNAP model is the same in that the frequency of a catastrophe event is modelled first and then, given that a catastrophe event occurs, its size (or severity) is modelled.
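In outline, a frequency-then-severity simulation of this kind can be sketched as follows. This is a generic illustration, not the UKCATNAP implementation itself; the severity sampler is left as a parameter because the severity model is discussed later in the paper.

```python
import numpy as np

def simulate_annual_event_losses(lam, severity_sampler, n_years=100_000, seed=0):
    """Generic frequency/severity simulation: a Poisson number of events per
    simulated year, with each event's loss drawn from the supplied severity sampler."""
    rng = np.random.default_rng(seed)
    counts = rng.poisson(lam, size=n_years)            # events in each simulated year
    return [severity_sampler(rng, n) for n in counts]  # one array of event losses per year
```

Ranking the simulated event losses (or the largest loss in each simulated year) then allows return-period losses such as the 100 year event to be read off.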

In the model being used here an event is defined as a storm or several storms that cause significant damage and therefore in terms of the reinsurance treaty wording will be considered as one event. This is as opposed to treating each storm as a separate event from a meteorological point of view. The distinction is important when dealing with storms that occurred within hours of each other and which are dealt with as one event for the purposes of reinsurance recoveries.


Parameter error in the frequency distribution

The frequency of windstorm losses is modelled using a Poisson distribution, which has one parameter, usually called lambda (λ). The central estimate of λ is calculated by taking the average number of events over the observed time frame.

In our case we are looking at 13 events over the previous 25 years, so our calculated λ is 0.52, which means we expect an event about once every two years. The important question, though, is how certain we are of this estimate: what is the probability that it could be as high as, say, 1.00, in other words one storm every year?

We carry out a bootstrap analysis where the statistic of interest is the average number of events over time. Using S-PLUS, we obtain 1000 resamples, or bootstrap replicates, of the observed data. The results across these resamples are summarised below:

*** Bootstrap Results ***

Number of Replications: 1000

Summary Statistics:

Observed Bias Mean SE

Param 0.5200 0.0052 0.5252 0.1337

BCa Percentiles (bias-corrected and accelerated):

2.5% 5% 95% 97.5%

Param 0.28 0.28 0.72 0.76

The results show the observed λ, as well as the λ* calculated as the mean of all the replications (the 'Mean' figure above). The chart below shows the empirical distribution of the bootstrapped λ.

[Figure: Empirical distribution of the lambda parameter from a bootstrap of 1000 replications (density plotted against lambda values of roughly 0.2 to 1.0)]
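The same exercise can be sketched with the bootstrap_ci helper above. The year-by-year event counts below are hypothetical, since the paper only reports the 13-in-25-years total, so the resulting interval is illustrative and will not match the BCa figures exactly:

```python
import numpy as np

# Hypothetical annual event counts for the 25-year window: 13 events in total,
# giving an observed lambda of 13 / 25 = 0.52.
annual_counts = np.array([1, 0, 0, 1, 2, 0, 1, 0, 0, 1,
                          0, 1, 0, 0, 2, 0, 1, 0, 0, 1,
                          0, 0, 1, 0, 1])

replicates, (lo, hi) = bootstrap_ci(annual_counts, stat=np.mean, n_resamples=1000)
print(f"observed lambda = {annual_counts.mean():.2f}, "
      f"bootstrap 95% CI approximately ({lo:.2f}, {hi:.2f})")
```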


Now that the distribution is known, it is possible to look at how the 100 year return period loss varies as λ varies. The results are shown below:

[Figure: non-exceedance probability plotted against expected 100 year loss (£m, logarithmic scale), comparing the original estimate with the estimate when the Poisson parameter varies]

The straight vertical line indicates the modelled loss from a 100 year event if Poisson parameter error is not taken into account. The curved line shows how this modelled loss varies if we want to incorporate the uncertainty surrounding our estimation of the λ parameter. The 90% confidence interval is:

[£2.4bn ← → £3.9bn]

Note that at the 50% position the lines intersect, indicating that the λ parameter error is symmetrical and so our expected 100 year loss is still the same whether we account for parameter error or not. If this were the end of the story then maybe just neglecting the Poisson parameter error wouldn't be such a bad thing.

Parameter error in the severity distribution

The above analysis only took into account the parameter error in the frequency distribution - it made no measurement of parameter errors in the severity distribution (the size of a loss from an event), which in this case was a Pareto distribution. This distribution has two parameters - a lower bound which is usually equal to the minimum claim, and a scale parameter. In our example the lower bound is assumed to be fixed at £200m, which leaves us with finding the error in estimating the scale parameter.

A similar bootstrapping analysis to that above was conducted for the Pareto scale parameter, except in this instance the statistic of interest within each replication was an estimate of the scale parameter.
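The paper does not state which estimator of the scale parameter was used within each replication. One natural choice, assumed purely for illustration here, is the maximum-likelihood estimator for a Pareto distribution with a known lower bound:

```python
import numpy as np

LOWER_BOUND = 200.0   # fixed Pareto lower bound, in GBP millions

def pareto_scale_mle(losses, xm=LOWER_BOUND):
    """Maximum-likelihood estimate of the Pareto scale (shape) parameter
    for event losses bounded below by xm."""
    losses = np.asarray(losses, dtype=float)
    return len(losses) / np.log(losses / xm).sum()

# Within each bootstrap replication the statistic of interest is this estimate;
# event_losses is a hypothetical sample of observed event losses in GBP millions:
# replicates, ci = bootstrap_ci(event_losses, stat=pareto_scale_mle)
```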

Below are the results from the bootstrap replications simulated in S-PLUS:

*** Bootstrap Results ***

Number of Replications: 1000

Summary Statistics:

Observed Bias Mean SE

Param 1.326 0.08657 1.412 0.3617

BCa Confidence Limits:

2.5% 5% 95% 97.5%

Param 0.8435 0.8975 1.897 2.016


This time the mean scale parameter is not as close to the observed parameter. The empirical distribution of the scale parameter is portrayed below, and following this is a graph displaying the new distribution of the 100 year losses, allowing for both errors in the frequency parameter λ and the Pareto scale parameter:

[Figure: empirical distribution of the Pareto scale parameter from a bootstrap of 1000 replications (density plotted against parameter values of roughly 1.0 to 3.5)]

[Figure: non-exceedance probability plotted against expected 100 year loss (£m, logarithmic scale), comparing the original estimate, the estimate when the Poisson parameter varies and the estimate when both parameters vary]

The extra variation is evident. The 90% confidence interval based upon the above distribution of 100 year losses, taking into account parameter errors in both distributions, is:

[£1.2bn ← → £11bn]


This means that 90% of the time, the expected 100 year loss is within this band. The figure of 90% is a measure of the confidence we have that the central estimate of the 100 year loss lies between the high and low limits. The median loss is different to the previous two estimates and is £2.9bn, as opposed to the original estimate of £3.3bn.
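The mechanics of combining the two bootstrapped parameter distributions can be illustrated with a simplified return-period calculation. The sketch below assumes the annual rate of exceeding a loss x is λ(xm/x)^α, so that the T-year loss is xm(λT)^(1/α). This is a deliberately stripped-down stand-in for the full model and will not reproduce the figures quoted above, but the way parameter error is propagated is the same. The names lambda_reps and alpha_reps are hypothetical, standing for the arrays of bootstrap replicates produced by the earlier sketches.

```python
import numpy as np

XM = 200.0   # Pareto lower bound (GBP millions)

def return_period_loss(lam, alpha, T=100, xm=XM):
    """T-year loss under the simplified Poisson/Pareto parametrisation."""
    return xm * (lam * T) ** (1.0 / alpha)

# lambda_reps and alpha_reps: independent bootstrap replicates of the Poisson
# and Pareto parameters (1000 values each).  Pairing them up gives an empirical
# distribution of the 100 year loss:
# losses_100 = return_period_loss(lambda_reps, alpha_reps)
# lo, hi = np.percentile(losses_100, [5, 95])   # 90% confidence interval
```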

A final illustration shows that the process of finding the loss from a 100 year event can be reversed to find the return period of a given event's loss. The example below attempts to put a confidence interval around our expected return period for the 90A windstorm. The analysis assumes the 90A loss was £2.0bn.

The distributions of this return period are shown below; they take account of parameter error for both the Poisson parameter and the Pareto parameter:

[Figure: non-exceedance probability plotted against expected 90A return period (years, logarithmic scale), comparing the original estimate, the estimate when the Poisson parameter varies and the estimate when both parameters vary]

The 90% confidence intervals are shown below. The numbers in the middle show the median value of the distribution (the value lying at the 50% probability).

Poisson parameter only varying,

[41.3 years ← 52.8 years → 80.5 years]

Both parameters varying,

[20.7 years ← 60.8 years → 291.4 years]
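Under the same simplified parametrisation, the inversion is direct: the return period of a loss x is (x/xm)^α / λ, and pushing the bootstrap replicates through this function gives a distribution of return periods for the £2.0bn 90A loss. As before, this only sketches the mechanics and will not reproduce the published intervals; lambda_reps and alpha_reps are the hypothetical replicate arrays from the earlier sketches.

```python
import numpy as np

def return_period(loss, lam, alpha, xm=200.0):
    """Return period (years) of a given loss under the simplified model."""
    return (loss / xm) ** alpha / lam

# rp_90a = return_period(2000.0, lambda_reps, alpha_reps)   # loss in GBP millions
# np.percentile(rp_90a, [5, 50, 95])                        # lower bound, median, upper bound
```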


Conclusion

In this paper, a powerful technique named the bootstrap was presented to estimate confidence intervals containing the central estimate and its probable values. Two different studies were carried out. The first examined in detail the sources of error in the probabilistic windstorm loss analysis process. We discussed three sources of error: stochastic error, model error and parameter error. The first two of these were minimised in the modelling process, but the third and most significant, parameter error, had not previously been quantified.

In the last part of the study, we demonstrated, through the use of the bootstrap, how to construct an interval estimate for the return period loss, providing both the point estimate and an interval that contains plausible values. It was shown that the samples' distributions depart from a Gaussian distribution, ruling out the use of the classical statistical method to calculate confidence intervals. The non-parametric bootstrap was then introduced to estimate the error in the scale parameter of the Pareto distribution, which was chosen to represent the distribution of windstorm losses. The 90% confidence interval estimated in this case has a very large range.

It is important to consider a few points of caution:

– This is a study of the parameter errors, showing how uncertainty in the parameter estimates can affect the measures of risk that we calculate. It is highly dependent upon the quantity of data used in the analysis, because this reflects directly upon the efficiency of the parameter estimates.

– The analysis focused on the 100 year return period loss. This is at an extreme of the overall loss distribution and as such is exposed to considerable uncertainty even before factoring in parameter error.

– The bootstrap interval is sensitive to outliers, since the sample mean is sensitive to outliers. S-PLUS has a built-in function to identify these and indicate them as 'influence points'.

– The purpose of the confidence interval is to give a range of plausible values for the central estimate, i.e. the population mean, based on a sample taken from the population. It does NOT provide a range for individual values in the population. The width of the confidence interval can only indicate the uncertainty surrounding the estimate.

– The confidence interval has two parts - the actual range and the confidence level. As you increase the confidence level, the interval gets wider. As the sample size increases, the interval becomes narrower.

'Bootstrapping will one day be as necessary for efficient citizenship as the ability to read and write'


"ICMP" and "Integrated Catastrophe Modelling Platform" are registered trademarks of Willis Limited.

References

Mukhopadhyay N, Sequential estimation problems for the scale parameter of a Pareto distribution, Scandinavian Actuarial Journal, 1987: pp 83-103

Bailey W A, A method for determining confidence intervals for trend, Transactions of the Society of Actuaries, 1992, 44: pp 11-53

Doray L G, Constrained forecasting of the number of IBNR claims, Journal of Actuarial Practice, 1996: pp 287-305

De Pril N, Error bounds for compound approximations of the individual risk model, ASTIN Bulletin, 1992, 22: pp 135-148

Tu S T, Stochastic modelling and error correlation in dynamic financial analysis, Casualty Actuarial Society Forum, Summer 1998: pp 207-219

Walker R S, Stochastic error, parameter error and model error, General Insurance Convention, 1996, 2: pp 461-469

MathSoft, S-PLUS Guide to Statistics, Version 2000, August 1999

Efron B and Tibshirani R J, An Introduction to the Bootstrap (Monographs on Statistics and Applied Probability, No 57)
