Timothy Aman, FCAS MAAA
Managing Director, Guy Carpenter Miami
Statistical Limitations of Catastrophe Models
CAS Limited Attendance Seminar
New York, NY
18 September 2006
2
Introduction
Given the limited Atlantic hurricane sample size, speakers discuss the limitations of predictive modeling from three perspectives:
– A frequentist (broker) approach using bootstrapping techniques
– A Bayesian (modeler) approach incorporating new events into a prior assumption framework
– A practical (insurer) approach reconciling the politics of actual claims experience with model-based expectations
3
Introduction
When cat models first came out, output summaries regularly showed both loss estimates at various return periods and upper confidence bounds around those estimates
Over time, fewer and fewer output summaries have focused on confidence bounds and uncertainty
This panel attempts to remind us of the magnitude of that uncertainty, from various perspectives
4
Outline
Definitions
A frequentist approach
An update
Statistical limitations of cat models
5
Definitions
6
Definitions
Frequentist: One who believes that the probability of an event should be defined as the limit of its relative frequency in a large number of trials
– Probabilities can be assigned only to events
– Need well-defined random experiment and sample space
Bayesian: Probability can be defined as the degree to which a person believes a proposition
– Probabilities can be applied to statements
– Need a prior opinion (ideally, based on relevant knowledge)
7
Definitions
A bootstrap sample is obtained by randomly sampling n times, with replacement, from the original data points [Efron]
Bootstrap methods are computer-intensive methods of statistical analysis that use simulation to calculate standard errors, confidence intervals, and significance tests [Davison and Hinkley]
8
Definitions
In statistics, bootstrapping is a method for estimating the sampling distribution of an estimator by resampling, with replacement, from the original sample
– Most often with the purpose of deriving robust estimates of standard errors and confidence intervals of a population parameter
The bootstrap technique assumes that the observed dataset is a representative subset of potential outcomes from some underlying distribution
– Random subsamples from the observed dataset are themselves representative subsets of potential outcomes
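As an illustration of the resampling idea (not part of the original presentation), here is a minimal bootstrap sketch in Python; the data values, the choice of the sample mean as the estimator, and the 90% level are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical observed sample (e.g., annual losses)
data = np.array([1.2, 0.4, 3.1, 0.9, 2.2, 0.7, 1.8, 5.0, 0.3, 1.1])

# Draw B bootstrap samples of size n = len(data), with replacement,
# and recompute the estimator (here, the sample mean) on each
B = 10_000
boot_means = np.array([rng.choice(data, size=data.size, replace=True).mean()
                       for _ in range(B)])

# The spread of the bootstrap replications approximates the sampling
# distribution of the estimator: a standard error and a 90% interval
se = boot_means.std(ddof=1)
lo, hi = np.percentile(boot_means, [5, 95])
print(f"mean={data.mean():.2f}  SE={se:.2f}  90% CI=({lo:.2f}, {hi:.2f})")
```

The same loop works for any statistic: replacing .mean() with a percentile or a fitted parameter changes nothing else.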
9
A frequentist approach
10
A frequentist approach
David Miller: “Uncertainty in Hurricane Risk Modeling and Implications for Securitization” (Guy Carpenter, 1998)
– CAS Forum 1999, Securitization of Risk
David Miller’s “thought experiment”
– Create multiple catastrophe simulation models, each based on a simulated historical event set
11
A frequentist approach
Miller’s approach
– Frequency is the historical number of hurricanes over the time period; assume it is Poisson distributed
– Conditional severity is based on the bootstrap technique, assuming a stationary climate
– Each bootstrap replication represents an equivalent realization of the historical record, and consists of a random draw, with replacement, of N hurricanes from the observed record
Confidence intervals can then be determined from the bootstrap replications (a minimal sketch of one replication follows)
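A minimal sketch of one such replication under Miller’s stated assumptions (Poisson frequency, stationary climate, resampling with replacement); the historical losses, record length, and simulation sizes below are hypothetical placeholders, not Miller’s data:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical historical record: losses from N hurricanes observed
# over T years (placeholder values, not actual data)
hist_losses = np.array([0.2, 1.5, 0.8, 4.0, 0.1, 2.7, 0.6, 9.5, 0.3, 1.1])
N, T = hist_losses.size, 50.0
lam = N / T                        # Poisson annual frequency

def bootstrap_replication():
    """One replicated 'historical record': N storms drawn with replacement."""
    return rng.choice(hist_losses, size=hist_losses.size, replace=True)

def loss_at_return_period(losses, rp=100, n_sim=5_000):
    """Simulate annual losses (Poisson count, resampled severities) and
    read off the 1-in-rp annual loss from the simulated distribution."""
    annual = np.array([rng.choice(losses, size=rng.poisson(lam)).sum()
                       for _ in range(n_sim)])
    return np.percentile(annual, 100 * (1 - 1 / rp))

# Each replication acts like a separate catastrophe model built on an
# alternative realization of history; the spread of their estimates
# is what the confidence intervals summarize
estimates = [loss_at_return_period(bootstrap_replication())
             for _ in range(200)]
print("90% CI for the 100-year loss:", np.percentile(estimates, [5, 95]))
```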
12
A frequentist approach
Miller’s approach
– Essentially, each bootstrap replication represents a new catastrophe simulation model, created as if the observed historical event set had been the replicated rather than the actual event set
– “Blended” approach
  Severity distribution is calculated using a given catastrophe model
  This severity distribution is fit to a parametric model (Beta distribution)
  A new parametric severity distribution is fit for each bootstrap replication
  The fitted parametric distribution is used for severity
13
A frequentist approach
Miller’s conclusions for hurricane-loss 90% confidence intervals, for three US nationwide portfolios (personal, commercial, and specialty):
– Low return periods (<10 years): lower bound is 0; upper bound diverges (as a multiple of the mean)
– Remote return periods (>80 years): lower bound is 0.5 times the mean estimate; upper bound is 2.5 times the mean estimate
14
A frequentist approach
[Chart: 90% confidence bounds, L(.05)/L̂ and L(.95)/L̂, as multiples of the mean estimate, by return period (years)]
15
An update
16
An update
With the addition of more years of hurricane data, how have relative confidence intervals changed?
17
An update
Suppose we want to estimate “100-year loss” to a portfolio
Suppose we have a reliable sample of 100 years of data
– We might have seen a 100-year loss in the sample (63% of samples, assuming Poisson frequency)
– We might not (37% of samples)
Now suppose we have a reliable sample of 110 years of data
– The above probabilities are revised to 67% and 33%
…and so on…
With a sample of 300 years, the probabilities are 95% and 5%
With a sample of 450 years, the probabilities are 99% and 1%
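These percentages follow from the Poisson assumption: with an annual frequency of 1/100 for the “100-year” event, the probability of observing at least one in an n-year sample is 1 − e^(−n/100). A quick check in Python:

```python
import math

# P(at least one "100-year" event in n years), Poisson frequency 1/100
for n in (100, 110, 300, 450):
    p = 1 - math.exp(-n / 100)
    print(f"{n} years: seen {p:.0%}, not seen {1 - p:.0%}")
# 100 years: seen 63%, not seen 37%
# 110 years: seen 67%, not seen 33%
# 300 years: seen 95%, not seen 5%
# 450 years: seen 99%, not seen 1%
```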
18
An update
Bootstrap from cat model output
– Simulate datasets using cat model event sets
– “Direct” approach: eliminates the need to specify, fit, and re-fit conditional severity distributions
– Determine relative confidence intervals at various return periods
19
An update
For a given return period n…
Mean
– Generate many samples of n years each
– Identify the largest annual loss within each n-year sample
– Take the average of those largest observations across all samples
Confidence intervals
– Capture, through repeated experiments, the distribution of the sample maxima underlying the above mean
– Take the 5th and 95th percentiles of the maximum value across all samples
– This yields a 90% confidence interval around the mean estimate (see the sketch below)
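A minimal sketch of this direct procedure in Python; the simulated event-loss table and annual frequency are hypothetical stand-ins for actual cat model output:

```python
import numpy as np

rng = np.random.default_rng(7)

# Stand-in for cat model output: per-event losses plus an annual
# event frequency (hypothetical values, not real model output)
event_losses = rng.lognormal(mean=0.0, sigma=1.5, size=10_000)
lam = 0.5                                   # expected events per year

def sample_max(n_years):
    """Largest annual loss observed in one simulated n-year sample."""
    counts = rng.poisson(lam, size=n_years)
    annual = np.array([rng.choice(event_losses, size=k).sum() for k in counts])
    return annual.max()

# Repeat the n-year experiment many times: the average of the sample
# maxima estimates the n-year loss, and their 5th/95th percentiles
# give a 90% confidence interval around that estimate
n_years, reps = 100, 2_000
maxima = np.array([sample_max(n_years) for _ in range(reps)])
mean_est = maxima.mean()
lo, hi = np.percentile(maxima, [5, 95])
print(f"100-year loss ~ {mean_est:.2f}, 90% CI ({lo:.2f}, {hi:.2f})")
print(f"relative bounds: {lo / mean_est:.2f}x and {hi / mean_est:.2f}x of mean")
```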
20
An update
90% confidence intervals around 100-year loss
[Chart: 5th and 95th percentile bounds, as multiples of the mean estimate (scale 0.00–3.00), for Miller and Models A, B, and C]
24
An update
Now a look at the 250-year level…
25
An update
90% confidence intervals around 250-year loss
[Chart: 5th and 95th percentile bounds, as multiples of the mean estimate (scale 0.00–3.00), for Miller and Models A, B, and C]
29
Statistical Limitations of Cat Models
30
Statistical Limitations of Cat Models
John Major: “Uncertainty in Catastrophe Models: Part I: What is it and where does it come from?” and “Part II: How bad is it?” (Guy Carpenter, 1999)
31
Statistical Limitations of Cat Models
Sources of uncertainty in catastrophe modeling
1. Limited data sample: for example, estimating 250-year EQ losses with only 100 years of detailed data
2. Model specification error: for example, Poisson frequency (iid assumption)
3. Nonsampling error: identification of all relevant factors; for example, global climate change
4. Approximation error: for example, limited simulations and discrete event sets
32
Statistical Limitations of Cat Models
Cat models are collections of event scenarios
– Discrete approximations, with probabilities attached to each scenario
– Not exhaustive
– Limited perils
– Calibrated using historical experience; recalibrated as required, based on research and actual event experience
33
Worldwide Property Catastrophe Insured Losses
[Chart: annual worldwide property catastrophe insured losses, USA vs. Non-US, 1985–2005*, scale $0–$90,000 (USD millions)]
* Preliminary estimate. Source: Swiss Re Sigma
34
Statistical Limitations of Cat Models
Uncertainty factors due to limited sample size are substantial
Data quality can add significantly to uncertainty
Are we capturing all material factors?
Scientific input can be used to reduce uncertainty
– Hazard sciences (meteorology, seismology, vulcanology)
– Engineering studies
35
Statistical Limitations of Cat Models
Factors potentially influencing relative confidence interval widths
– Larger data sample / destabilizing recent experience
– Improvements in science / weakening of stationary climate assumption
– Improvements in technology
– Differences in modeled portfolios
– Negative Binomial frequency
– Increased awareness of factors contributing to uncertainty
Further exploration of the general factors influencing relative confidence interval widths is material for another presentation
36
Statistical Limitations of Cat Models
Relative widths of individual company confidence intervals will depend on specifics
– Geographical scope e.g., US hurricane, Peru earthquake, UK flood
– Insured portfolio e.g., Dwellings, Petrochemical facilities, Hotels
– Financial variables e.g., Excess policies, EQ sublimits, Business interruption
Further exploration of the portfolio-specific factors influencing relative confidence interval widths is material for another presentation
37
Statistical Limitations of Cat Models
“Don’t believe the cat model point estimates too much, but don’t believe them too little.”