
WILLIS RE ANALYTICS REWIRE
2015 Issue 1

In this issue:

Editor’s note

Emerging ERM risk of 2015: outsourcing

Risk appetite and tolerance

ORSA summary report “Top 10” checklist

ERM: Discussing fatness of tails in risk models

Are we safe from tsunamis?

A strong El Niño on the way?

10 years on: RMS, AIR and Willis Re on the evolution of modeling since Katrina

30 years later: the Ontario tornadoes of May 1985

After Tohoku: Re-evaluating Japanese earthquake hazard

Solvency II equivalence

Mutual insurers and non-traditional capital: time for a change of perspective

Ghosts in the (driving) machine – and on the witness stand

About our bloggers


Editor’s note

The analytics of managing extremes: our latest insight
Willis Re is committed to bringing you the very latest insight and practical applications in the analytics of managing extremes.

This year, in place of our Analytics ReView newsletter, the Willis Re Analytics team and global subject matter experts have been publishing articles on the Willis Group blog, Willis Wire. Willis ReWire is a compendium of these articles.

We are very pleased to introduce our first issue of Willis ReWire and we hope you find this new format an informative read.

We will continue to post blogs throughout the year. If you’d like to be notified of these before the next issue of Willis ReWire, you can:

• Follow Willis Re on LinkedIn

• Follow @Willis_Re on Twitter

• Sign up for emails about new reinsurance-related content on Willis Wire

You can also find a variety of interesting reinsurance-related articles and blogs posted on the Willis Group website as it places a spotlight on different topics.

If you have any questions or would like to discuss the topics covered in this issue, please contact me, our authors or a member of your Willis Re broking team.

We look forward to hearing from you!

Best regards,

Alice Underwood
Executive Vice President
Head of Analytics for Willis Re North America
[email protected]



Emerging ERM risk of 2015: outsourcing
February 2, 2015

This post is a follow-up to Dave’s entry in our 2015 Emerging Risks special feature on Willis Wire, where Willis experts discuss what they are keeping their eyes on in other sectors.

If an outsourced process is not only out of sight but also out of mind, this emerging risk may become a current problem.


Outsourcing might just be the most common business management earnings booster of the past ten years. This means that it is also a top candidate for becoming a major emerging risk in the near future.

The idea of outsourcing is an extension of the fundamental logic of capitalism: specialization. Processes are good candidates for outsourcing when there are other firms who can perform the same service at a significantly lower cost. Outsourced activities need to be significantly less expensive than insourcing because of several cost redundancies that are inherent in outsourcing.

Cost advantages
When you start looking at a potential outsourcing situation, you need to understand the source of the cost advantage. There are several possible drivers of a cost advantage:

• Higher efficiency

• Lower wages paid to the people performing the outsourced work

• Lower overhead for the outsourcing partner

But there are other ways that a cost advantage might come about that are not as desirable:

• Lower safety and health standards

• Lower spending on quality control

• Less slack in resources available when a machine breaks or a key person gets sick

• Lower quality source materials

How to control the risks of outsourcing
There are two basic ways of controlling the risks of outsourcing:

1. by specifying standards at the outset of the arrangement, and

2. by inspection of the process and output on an ongoing basis.

But with the explosion of outsourcing over the past ten years, even firms that set down extensive and clear standards at the time of the original agreement, and that have allocated the resources needed to inspect processes and outputs, are at risk from the complacency that comes with the passage of time without serious incident, the changing individuals on both sides of the agreement, and the changing pressures on both organizations.

An outsourced process is out of sight. If it also becomes out of mind, then it will likely move out of the emerging risk category into the current problem category.

Dave Ingram
Executive Vice President
New York
[email protected]


Risk appetite and tolerance
April 27, 2015

For some of us, exam days at school are the worst memories. Multiple choice, matching and even short answer questions were not so bad. They were over quickly and even if you didn’t know the answer, it was often easy to guess or at least to create a reasonable bluff.

But the worst, for me at least, were those full of open-ended essay questions... Insurers have been told for years that they need a clear statement of their risk tolerance to operate an enterprise risk management (ERM) program. And now, the A.M. Best Supplemental Rating Questionnaire (SRQ) opens with a question followed by almost a whole page of white space. That question is:

Please state any overall risk appetite and risk tolerance statement(s) that have been established or approved by a Board or senior management that apply to the rating unit and provide guidance in providing policyholder security and creating stakeholder value. The risk appetite and risk tolerance statements may be a mix of qualitative and quantitative statements. If no such statements have been formally approved by a Board or senior management, please answer “None”.

In 2012, when A.M. Best first asked a similar question, they reported that over 80% of the answers that they received were inadequate. From that, one may infer that “None” is probably not the right answer.

Risk appetite is the same as risk strategy
Although most insurers operate with a sound risk appetite, in most cases it is not articulated in a way that managers can connect to ERM terminology.

The National Association of Insurance Commissioners (NAIC) has provided some helpful definitions of risk appetite and tolerance that we can use to bridge the gap between company practice and ERM terminology.

In its ORSA Guidance Manual, the NAIC defines risk appetite to:

Document the overall principles that a company follows with respect to risk taking, given its business strategy, financial soundness objectives and capital resources.

In other words, risk appetite is the risk-taking strategy statement.

Here are four suggestions for risk-taking strategy statements:

• Grow Risk – increase risks faster than capital

• Manage – balance risk growth and surplus growth

• Grow Capacity – increase capital faster than risk

• Diversify – any growth will come from new types of risk

Many insurers can identify with one of those four strategies. If not, it is likely that their reasoning would itself constitute a statement of their strategy.

Risk tolerance is risk budget
The NAIC definition of risk tolerance is also helpful. They define risk tolerance as:

The company’s qualitative and quantitative boundaries around risk taking, consistent with its risk appetite.

In other words, it is a risk budget – a statement of what risks the company will and will not take, along with an expression of how much risk the company will take.

The NAIC says that risk tolerance should be consistent with risk appetite. To me, this means that if your strategy is to grow risk, then your risk tolerance should be significantly higher than your current situation. If your strategy is to grow capacity, then your risk tolerance should be pretty restrictive.
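To make the four risk-taking strategy statements listed above concrete, here is a minimal sketch (in Python) of how an insurer’s de facto strategy could be inferred from observed growth rates. The function name, thresholds and the “share of growth from new risk types” input are illustrative assumptions, not NAIC terminology or Willis Re methodology.

```python
# Toy sketch (not NAIC or Willis Re methodology): map observed growth rates
# to the four risk-taking strategy labels discussed above. The thresholds
# and the "new risk type" share are illustrative assumptions.

def classify_strategy(risk_growth: float,
                      capital_growth: float,
                      new_risk_type_share: float = 0.0,
                      band: float = 0.02) -> str:
    """Classify a de facto risk-taking strategy.

    risk_growth          annual growth rate of total risk taken (0.08 = 8%)
    capital_growth       annual growth rate of capital / surplus
    new_risk_type_share  fraction of risk growth coming from new types of risk
    band                 tolerance within which growth rates count as balanced
    """
    if new_risk_type_share > 0.5:
        return "Diversify"       # growth driven mainly by new types of risk
    if abs(risk_growth - capital_growth) <= band:
        return "Manage"          # risk and surplus growth roughly balanced
    if risk_growth > capital_growth:
        return "Grow Risk"       # risk increasing faster than capital
    return "Grow Capacity"       # capital increasing faster than risk


if __name__ == "__main__":
    print(classify_strategy(risk_growth=0.10, capital_growth=0.04))  # Grow Risk
    print(classify_strategy(risk_growth=0.03, capital_growth=0.08))  # Grow Capacity
```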


Different ways to develop the first risk tolerance statement
The first part of risk tolerance should already be a part of your company strategy document – it is the list of the insurance businesses in which you will participate.

For the second part, we have four different suggestions for how to proceed to develop that first written risk tolerance statement.

1. Based upon what peers are doing
When a primary consideration is the appearance of security to customers and distributors, an insurer’s risk appetite needs to be set with consideration of the levels of security of peer competitors. This requires some careful analysis of the risk levels of those firms. Usually an insurer will perform their risk analysis using non-public information. To perform the needed peer analysis with public information requires judgment informed by experience working with many insurers. That then needs to be coupled with a target for standing within the peer group to get to a risk tolerance statement.

2. Based upon the rating target
Many insurers have a clear target for capital in relation to risk based upon a rating agency capital standard (such as the A.M. Best BCAR). The risk tolerance statement can then be communicated in terms of a target BCAR score along with a minimum acceptable BCAR score. Since BCAR is actually a risk-adjusted view of required capital, these target and minimum acceptable BCAR scores are actually a clear risk tolerance statement. It is widely known that rating agencies do not favor the use of their calculation as a risk tolerance. But an insurer will come to better understand the concept of risk tolerance if they can actually use their de facto risk tolerance with a plan to modify it as their view of their risk matures. What this means is that eventually, an insurer will notice that BCAR is not the most accurate representation of their risk and will want to develop their own modified risk capital adequacy ratio as their risk tolerance.

3. Based upon reinsurance purchasing
The decisions that an insurer makes about reinsurance retention are an expression of a risk tolerance. Based upon our analysis of your reinsurance purchase, we can tell you the likely loss that you have retained at any return period. For risk tolerance, you might consider an earnings-based risk tolerance along with a capital-based risk tolerance, linking the earnings risk tolerance to your retained potential loss at a 1 in 10 or 1 in 20 return period and your capital risk tolerance to your 1 in 100 or 1 in 200 return period loss. A risk tolerance statement based on reinsurance purchasing can be used in conjunction with or instead of a ratings-based risk tolerance statement. There is enough room on the page. Otherwise, if you want to keep it simple, just a statement of your retention can be added to the capital target from the ratings-based risk tolerance.

4. Based upon recent experience
We sometimes call this the Empirical Risk Tolerance. In general, insurers operate at one of four broad levels of capital:

1. Robust – enough capital to maintain a secure level of capital after a major loss.

2. Secure – enough capital to satisfy sophisticated commercial buyers that you will pay claims in most situations, by providing for a viable level of capital after a major loss event.

3. Viable – enough capital to provide for a single major loss event and to avoid reaching the minimal level with “normal” volatility. These companies generally operate comfortably in markets where customers are not focused on assessing their insurer’s financial strength, such as personal auto and health insurance.

4. Minimal – enough capital to survive under normal volatility. A major loss event would render these insurers insolvent. These insurers effectively use the regulator’s risk-based capital authorized control level as their risk capital standard.

These capital levels are generally maintained for many years and are thought of as fundamental to the self-definition of the insurer. They are often closely linked to rating targets and reinsurance purchasing. These four statements could be used or modified to state an insurer’s risk tolerance.

The main point here is that risk tolerance does not need to be a difficult puzzle that takes years to solve. Forming a risk tolerance takes a clear understanding of what is meant by this new terminology and a recognition that most insurers already have a risk tolerance that has driven prior actions and decisions – they just need to learn how to turn it into a formal risk tolerance statement.
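To illustrate the reinsurance-based approach (suggestion 3 above), here is a minimal sketch that reads retained losses at the 1 in 10, 1 in 20, 1 in 100 and 1 in 200 return periods off a simulated annual loss distribution. The loss distribution, retention and limit are hypothetical; this is not Willis Re’s analysis of any actual purchase.

```python
# Minimal sketch of a reinsurance-based risk tolerance check, assuming a
# simulated annual aggregate loss distribution and a single excess-of-loss
# layer. All parameters (loss distribution, retention, limit) are hypothetical.
import numpy as np

rng = np.random.default_rng(seed=1)

# Hypothetical gross annual losses (in $ millions), lognormal for illustration.
gross = rng.lognormal(mean=3.0, sigma=0.8, size=100_000)

retention, limit = 40.0, 100.0  # hypothetical excess-of-loss layer: 100 xs 40

# Retained loss = losses below the retention plus anything above the layer.
ceded = np.clip(gross - retention, 0.0, limit)
retained = gross - ceded

# Retained loss at selected return periods (1-in-N year = (1 - 1/N) quantile).
for n in (10, 20, 100, 200):
    q = np.quantile(retained, 1.0 - 1.0 / n)
    print(f"1-in-{n:<3} retained loss: {q:8.1f}m")
```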


Dave Ingram
Executive Vice President
New York
[email protected]


ORSA summary report “Top 10” checklist
June 24, 2015

After reviewing preliminary ORSA summary reports, insurance regulators are beginning to form some ideas about what they’d like insurers to say.

Insurance regulators are expecting an avalanche of new information regarding insurers’ enterprise risk management (ERM) programs—including their risk models, stress testing processes, and the complex assessments needed to produce an opinion about capital adequacy. This avalanche is called the Own Risk and Solvency Assessment (ORSA), and regulators have no one else to blame if they’re buried in information, because they asked for it.

In the United States, some states are requiring the first ORSA Summary Reports in 2015; most of the rest will be starting in 2016.

What regulators want to see
Over the past three summers, a score of US insurers have voluntarily submitted preliminary ORSA summary reports for unofficial reviews by the regulators.

At a recent conference, Danny Saenz, Assistant Commissioner of the Texas Department of Insurance, discussed the perspectives that regulators have gained through these pilot reviews. He presented a list of over 75 items that regulators have seen or would like to see.

In honor of David Letterman’s retirement from late-night TV in May, we have selected our Top Ten items from Saenz’s longer list and added our commentary. You can use this as a checklist as you do your final review of your ORSA Summary Report before sending it along to your insurance department.

1. Provide clear definition of who is doing what in the ERM process

It helps make the ERM process real to have actual people’s names for each reported process.

2. Discuss the status of development of the ORSA process

Be candid: it’s okay to admit that ERM processes are not yet perfect and the ORSA isn’t either.

3. Identify model for ERM program

All ERM programs should be customized, but what was your starting point, a general ERM standard like COSO, or an insurance-specific standard?

4. Discuss linkage of overall risk appetite to preferences, tolerances and limits

How are they linked, or are they all really independently determined?

5. Describe the processes in place to manage key and non-key risks

Along with the assurance processes and roles.

6. Explain the escalation process in event of a breach

Degree of planned escalation should be consistent with the size of the breach; give evidence of actual breaches and reactions.

7. Assess all key risks under current and stressed conditions

i.e., losses expected under “normal volatility” and “realistic disaster” in an accessible tabular form.

8. Describe changes to risk profile over time

The CEO’s ability to tell this story is in our opinion the best “use test” for the risk measurement part of ERM.

9. Explain fitness for purpose of risk capital metrics

e.g., why a one-year 99 percentile VaR on a statutory basis (or other selected metric) makes sense for your risks and your firm.

10. Discuss use of risk management to support business decisions

Show how risk tolerance, preferences and limits are consistent with business plans, and how risk acceptance standards and mitigation support financial and other objectives.

Finally, remember that the length and format of the ORSA Summary Report can vary based on insurer size and complexity. The largest insurers are talking about a maximum page count of 100 for the summary report. If your firm is much smaller and less complex, it’s sensible to target a much shorter summary.


Dave Ingram
Executive Vice President
New York
[email protected]


ERM: Discussing fatness of tails in risk models


August 13, 2015

Most decision makers are familiar with the statistical average and standard deviation measures. But risk management typically focuses on unlikely “tail” events. The financial crisis helped popularize the term “fat tails” to represent the idea that these extreme events are more likely than we might have believed. To move beyond “thin tailed” models, we need a way to describe the fatness of the tail.

Extrapolating the tail of the risk model
The statistical approach to building a model of risk involves collecting observations and then using the data—along with a general understanding of the underlying phenomena—to choose a probability distribution function (PDF).

This process is often explained in terms of “fitting” one of several common PDFs to the data. But an alternate view of the process would be to think of it as an extrapolation, because most observed values are near the mean. Under the so-called Normal PDF, we expect observations to fall within one standard deviation of the mean about two-thirds of the time, and within two standard deviations about 95% of the time. When modeling annual phenomena, it is unlikely that we will have even one observation to guide the fit at the 99th percentile[i].

So, in most cases, we really are using the shape of the PDF to extrapolate into the tail. But we often gloss over that fact. Model documentation sometimes states the PDF used for extrapolation, but rarely discusses why that PDF was chosen and almost never mentions the importance of the modeler’s judgment in selecting the parameters that determine extreme values via extrapolation.

A new measure: coefficient of riskiness
During the financial crisis David Viniar, CFO of Goldman Sachs, famously observed, “We are seeing things that were 25 standard deviation moves, several days in a row.”

That might have suggested he was using the wrong model. But our own work with insurance risk models shows that “tail” events can be many multiples of standard deviation away from the mean. This is the idea of the “coefficient of riskiness”: it’s a new way to describe fatness of tails.

We define the coefficient of riskiness (CoR) as the number of standard deviations that the 99.9th percentile value is from the mean[ii]:

CoR = (V0.999 – µ) / σ

where V0.999 is the 99.9th percentile value, µ the mean and σ the standard deviation.

We use mean and standard deviation in defining the CoR not because they are the mathematically optimal way to measure extreme value tendencies, but because they are the two risk modeling terms most widely known to business leaders.

These three metrics—mean, standard deviation, and CoR—let us describe a PDF’s average value, typical level of fluctuation, and potential for producing extreme results.
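As a concrete illustration of the definition, the sketch below computes CoR from a sample of simulated outcomes. The sample and its parameters are hypothetical; any set of outcomes produced by a risk model could be substituted.

```python
# Minimal sketch: compute the Coefficient of Riskiness (CoR) from a sample of
# simulated outcomes, i.e. (99.9th percentile - mean) / standard deviation.
import numpy as np


def coefficient_of_riskiness(samples, q: float = 0.999) -> float:
    """CoR = (q-th quantile - mean) / standard deviation of the sample."""
    samples = np.asarray(samples, dtype=float)
    return (np.quantile(samples, q) - samples.mean()) / samples.std()


if __name__ == "__main__":
    rng = np.random.default_rng(seed=0)
    normal_losses = rng.normal(loc=100.0, scale=15.0, size=1_000_000)
    # For any normal distribution the theoretical CoR is about 3.09.
    print(f"CoR of a normal sample: {coefficient_of_riskiness(normal_losses):.2f}")
```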


Communicating riskiness with CoR
The CoR measure offers a way to explain fatness of tails to business leaders without getting into complicated mathematics. If adopted widely, CoR could come to be used like the Richter Scale for earthquakes or the Saffir-Simpson Hurricane Scale. If you were presenting a model of hurricanes or earthquakes and mentioned that you had modeled a “4” as the most severe event, property underwriters would have some sense of what that meant, even if they don’t know anything about the details of catastrophe modeling. They can form an opinion about whether 4 is reasonable for the most severe event produced by the model, and participate in a discussion on that basis.

Similarly, CoR could facilitate discussion of model severity. If you believe that Viniar’s comment about 25 standard deviations was based on sound data (rather than an exaggeration to make a point), you would doubtless reject the validity of the Normal PDF, which has CoR = 3. Were non-technical users of risk models to gain an appreciation of which risks have CoR = 3 and which have CoR = 12, that could be a large advance in understanding a very important risk characteristic.
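To illustrate that contrast, the short sketch below compares the CoR of a thin-tailed normal distribution with that of a fat-tailed lognormal, using SciPy’s distribution objects. The lognormal parameter is arbitrary and chosen only to produce a markedly higher CoR.

```python
# Illustrative contrast (hypothetical parameters): CoR of a thin-tailed normal
# versus a fat-tailed lognormal, computed from each distribution's own
# percentile, mean and standard deviation.
from scipy.stats import lognorm, norm


def cor(dist, q: float = 0.999) -> float:
    return (dist.ppf(q) - dist.mean()) / dist.std()


print(f"Normal CoR:    {cor(norm(loc=0, scale=1)):.2f}")  # about 3.09 for any normal
print(f"Lognormal CoR: {cor(lognorm(s=1.2)):.2f}")        # substantially higher
```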

Few people understand the science or math behind the Richter Scale, but everyone living in an earthquake zone can experience a shake and come pretty close to nailing the Richter Score of that event without any fancy equipment – and they know how to prepare for a quake of magnitude 4, 5, or 6. By bringing the Coefficient of Riskiness into our business conversations, we can help business leaders develop intuition about what the risk models imply about preparing for extreme events of all kinds.

Examples from Insurance Risk Models
Examining a large number of models Willis Re constructed for insurance company clients (covering all perils and lines of business), we see quite a wide range of CoR values.

Chart 1. 3,400 Insurance Risk Models. The subset of models representing stand-alone Natural Catastrophe risk shows clustering at higher CoR values. (CoR need not be a whole number. For this chart and the following, a CoR of 4, for example, indicates a value between 3 and 4.)

Chart 2. 400 Natural Catastrophe Models. None of these models showed a 99.9th percentile result that was 25 standard deviations from the mean. But, as you see, the Natural Catastrophe models did produce CoR values as high as 18.

[Charts 1 and 2 are histograms of the number of models (y-axis) by Coefficient of Riskiness (x-axis) for the 3,400 insurance risk models and the 400 natural catastrophe models, respectively.]

[i] Some one-year loss calculations are performed by calculating a value for a much shorter period and extending that calculation to the full year by making a heroic assumption about the relationship between that short period and the full year. That substitutes a problem from that time period assumption for the lack of actual data about full year risk. And whether practitioners realize it or not, that process is an extrapolation into the unknown.

[ii] The 99.9th percentile is chosen to be beyond the values most often used from the model. All of the ideas presented here about CoR would apply with a different chosen reference point.


Dave Ingram
Executive Vice President
New York
[email protected]


Are we safe from tsunamis?
April 7, 2015

While the devastating Asian Tsunamis of 2004 and 2011 left us in no doubt of the threat in that region, people in Europe and the Caribbean generally consider themselves safe. However, according to a new Global Tsunami risk study published by RMS on the fourth anniversary of the Tohoku quake, they shouldn’t: many people are completely unaware they live in direct range of a potentially catastrophic tsunami.

A quick review of history can help put this into perspective.

European Tsunamis
There have been a number of notably large European tsunamis caused by earthquakes:

• Alexandria was devastated by a tsunami in 365 AD

• The coastline of Lebanon was struck in 551 AD

• Most of Lisbon was destroyed in 1755. The Lisbon tsunami affected the Caribbean and even hit the UK; it was one of the deadliest in history.

There is a regular record of smaller tsunamis in tectonically active regions such as Italy, Greece, Turkey and many Caribbean islands. The Jamaica earthquake and tsunami in 1692 killed thousands.

Volcanic Tsunamis
Volcanic tsunamis – which can be much larger – are fortunately rarer. One of the most notable is the tsunami of approximately 1600 BC, which was generated by a major eruption of the Santorini volcano. That tsunami has been associated with the downfall of the Minoan civilization in Crete, and there is clear archaeological evidence of widespread destruction of coastal settlements.

In summary, large tsunamis strike Europe and the Caribbean rarely, but they have the potential to be just as devastating as the Tohoku event. Since such events happen infrequently, with none in living memory, there is little public awareness of the hazard – and it has little influence on the pricing of catastrophe insurance and reinsurance.

However, the rarity of these tsunamis also means that we haven’t been recording history long enough to have seen most of the possible events – hence the value of the RMS scenarios for extreme tail event analysis.


Rick Thomas, PhD
Head of Strategy, Willis Research Network
[email protected]


Strong El Niño on the way?
May 29, 2015


* The top image represents the “Probabilistic sea surface temperature forecast from the North American Multi-Model Ensemble, made early May 2015, for the October-November-December 2015 average. Forecast is expressed as percent likelihood for each of three categories: Above normal, below normal, and near normal. Figure from CPC.” Credit: Climate.gov ENSO Blog.

Source: US National Weather Service: IRI Compilation of SST Forecasts for the Nino 3.4 Region

The Australian Bureau of Meteorology has predicted a “substantial” El Niño ahead, but about a year ago we also had predictions of a strong El Niño that never arrived. Will it be different this time?

Superficial analysis of the most recent consensus forecasts suggests not. The consensus value for the key indicator, called the sea surface temperature (SST) anomalies,* isn’t high enough. However, this isn’t the whole story.

The case for a strong El NiñoThe most recent measured value of the index (the black square in the graphic at right) lies at the extreme high end of previous forecasts, indicating many of the lower forecasts are already off track.

As well, since it is after April, we have now crossed the “spring predictability barrier,” meaning that the most recent forecasts are significantly more accurate than what’s predicted in the graphic at right. In fact, two major agencies recently issued forecasts indicating there will be very strong to record-breaking levels, suggesting we really are heading for a big El Niño.

Modoki El Niño instead?
On the other hand, a strong El Niño failed to materialize last year, and we have had weak El Niño conditions since then. And while El Niño is typically associated with increased precipitation in Southern California and Latin America, drought persists in California.

Recent conditions have been more typical of a Modoki (a Japanese term for “same but different”) El Niño. The key technical difference between the two sorts of El Niño is the extent of the warm water anomaly in the equatorial Pacific. Standard El Niños are associated with extensive warming off the western coast of South America, but for Modoki El Niños the warming is restricted to the central Pacific.
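As a toy illustration of that distinction (not an official classification such as NOAA’s Oceanic Niño Index or the El Niño Modoki Index), the sketch below labels conditions from assumed SST anomalies in an eastern-Pacific region and a central-Pacific region; the 0.5 °C threshold and the simple comparison of regions are illustrative assumptions.

```python
# Toy illustration only (not an official index): classify ENSO-like conditions
# from sea surface temperature anomalies in an eastern-Pacific region (e.g. a
# Nino 3 style box) and a central-Pacific region (e.g. a Nino 4 style box).
# The 0.5 C threshold and the region comparison are illustrative assumptions.

def classify_enso(east_anomaly_c: float, central_anomaly_c: float,
                  threshold_c: float = 0.5) -> str:
    if max(east_anomaly_c, central_anomaly_c) < threshold_c:
        return "Neutral (or La Nina if anomalies are strongly negative)"
    if east_anomaly_c >= central_anomaly_c:
        return "Conventional (eastern Pacific) El Nino"
    return "Modoki-like (central Pacific) El Nino"


print(classify_enso(east_anomaly_c=1.8, central_anomaly_c=0.9))  # conventional
print(classify_enso(east_anomaly_c=0.3, central_anomaly_c=0.9))  # Modoki-like
```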

More importantly for California, the Modoki El Niño is not associated with increased winter rainfall in the Western US. In fact, Modoki El Niño is associated with increased hurricane activity in the Atlantic Ocean. This is the opposite of what you would generally expect from an El Niño, and has a potentially more significant impact for the insurance industry.

So, are we headed for an El Niño or not?
Although current conditions are more reminiscent of a Modoki El Niño, the most up-to-date forecasts and measurements suggest something else. Near-term evolution of sea surface temperatures is likely to produce significant warming in the Eastern equatorial Pacific Ocean. This suggests more traditional El Niño behavior, with on average lower hurricane activity. It also increases the potential for a break in the California drought.


Rick Thomas, PhD
Head of Strategy, Willis Research Network
[email protected]


10 years on: RMS, AIR and Willis Re on the evolution of catastrophe models since Hurricane Katrina
August 27, 2015

Ten years on from Hurricane Katrina, Willis Re’s Prasad Gunturi speaks with Dr. Jayanta Guin of AIR Worldwide and Dr. Robert Muir-Wood of Risk Management Solutions (RMS) to discuss how catastrophe models have evolved since Katrina, and what influence this major loss event has had on the development of hurricane risk models.

The re/insurance industry used catastrophe risk models long before Hurricane Katrina, but Katrina challenged the standards of these models. It called into question the quality of exposure data, how the models were used and their suitability for various business applications.

Below is a summary of the podcast and a snapshot of answers to some of the questions put to the panelists.

Prasad Gunturi: We can confidently say that the quality of exposure data has improved significantly since Hurricane Katrina. Companies are taking a more rational approach to model-based business decisions, and many multinational re/insurance companies and intermediaries, including Willis, have since invested heavily in model research and evaluation. There is also closer scrutiny of the assumptions and science behind the models, resulting in better informed decisions.

Can you briefly describe the state of hurricane models before and after Hurricane Katrina?
Jayanta Guin: The risk associated with U.S. hurricanes is probably the best understood of all the natural hazard perils, thanks to the long historical record and the wealth of available claims data. Katrina did not fundamentally change our approach to modeling the hurricane peril. Still, there were lessons to be learned. The main focus fell on the problem of exposure data quality, which has led to significant improvements. Katrina also paved the way for enhancements in the way storm surge—and flooding in general—is modeled.

Robert Muir-Wood: Before Katrina, hurricane modeling remained strongly influenced by Hurricane Andrew, which was at the far end of the spectrum for having such a small proportion of storm surge losses. Katrina was the opposite of Andrew, creating more loss from flood than from wind.

The flooding of New Orleans was itself a secondary consequence of the hurricane and became a catastrophe in its own right – what we now call a ‘Super-Cat’ when the secondary consequences become larger than the original catastrophe.


What impact did Katrina have on how you develop hurricane risk models, and how has the modeled risk to the Gulf coast changed since Katrina?
Jayanta Guin: The understanding at the time was that intense storms at low latitudes were relatively small. Katrina, however, was enormous. That led us to make revisions in some of our assumptions.

Katrina also revealed insights into the vulnerability of commercial structures. A good example is the large number of casinos built on barges along the Mississippi coast. Today, there is much better recognition of the wide array of buildings that companies are insuring and our view of the vulnerability of commercial assets has increased as a result. In fact, I would say that overall our view of hurricane risk along the Gulf coast has increased.

Robert Muir-Wood: The biggest change in the modeling agenda after Katrina related to the recognition that storm surge was not just some add-on to a hurricane loss model, which might generate an additional marginal 5% of the loss, but that in terms of ground up losses storm surge could be just as important as the wind.

The storm surge losses are also far more concentrated than the wind losses, which gives much more opportunity to employ modeling. This approach has been well validated in recent events such as Hurricane Ike and Superstorm Sandy, which further refined elements of our storm surge flood modeling capability, in particular around underground space.

Katrina is one of the key benchmark events for the quantification of storm surge risk to coastal properties. How have storm surge models improved since Katrina?
Jayanta Guin: Storm surge modeling has improved very significantly. It is true that prior to Katrina, it did not get the attention it deserved because storm surge risk was not thought to be a major driver of overall hurricane losses. We’ve since learned otherwise, not only from Katrina, but from storms like Ike and Sandy.

So at AIR we’ve brought to bear new science in terms of numerically-based hydrodynamic modeling, the computer power necessary to handle high-resolution elevation data, and exhaustive analysis of detailed claims data to ensure that the model, the localized nature of the hazard, and improved exposure data combine in such a way to validate well with datasets from multiple storms—not just one or two. We, as developers of models, need to be cautious and avoid over-calibrating to a single headline event; doing so will result in a model that will not validate well across an entire 10,000-year (or larger) catalog of events.

Robert Muir-Wood: The old ways of modeling storm surges simply did not work. In the Gulf of Mexico storm surges at landfall are commonly much higher than you would find by using the near-shore SLOSH model, because far more storms lose intensity in the two days leading up to landfall. To capture the storm surge at landfall one has to model the wind field and the surface currents and waves generated by the wind, over far more time in the life of the storm than just for the period before landfall. FEMA has identified that there are only two coupled ocean-atmosphere hydrodynamic models up to the task of being good enough for generating storm surge hazard information along the US coastline: the ADCIRC model and MIKE 21, developed by DHI.


Dr. Jayanta Guin, AIR Worldwide

Jayanta Guin is AIR’s Executive Vice President, responsible for strategic management of the AIR Research and Modeling group. Under his leadership, the group has developed a global suite of catastrophe models and continues to enhance modeling techniques. Jayanta also provides strategic input into AIR’s product development and consulting work for insurance-linked securities. With more than 17 years of experience in probabilistic risk analysis for natural catastrophes worldwide, he is well recognized in the insurance industry for his deep understanding of the financial risk posed by natural perils. His expertise includes a wide range of natural and man-made phenomena that drive tail risk.

Jayanta is currently a member of the governing board for the Global Earthquake Model (GEM) initiative. He also contributes to the Research Advisory Council of the Insurance Institute for Business & Home Safety (IBHS).

With thanks to Dr. Jayanta Guin, and Dr. Robert Muir-Wood

Dr. Robert Muir-Wood, Risk Management Solutions

Robert Muir-Wood has been head of research at RMS since 2003 with a mission to explore enhanced methodologies for natural catastrophe modelling and develop models for new areas of risk. He has been technical lead on a number of catastrophe risk securitizations, was lead author on Insurance, Finance and Climate Change for the 2007 4th IPCC Assessment Report and lead author for the 2011 IPCC ‘Special Report on Managing the Risk of Extreme Events and Disasters to Advance Climate Change Adaptation’.

He is Vice-Chair of the OECD High Level Advisory Board of the International Network on Financial Management of Large Catastrophes and is a visiting professor at the Institute for Risk and Disaster Reduction at University College, London. He has published six books, written scientific papers on earthquake, flood and windstorm perils and published more than 200 articles.

Katrina underscored the issue of certain components of non-modeled losses, such as damage due to polluted storm surge water, mold, tree fall, riots, etc. How are your models accounting for the amplified impact on claims from indirect effects of hurricanes?
Jayanta Guin: Mold and the toppling of trees during hurricanes are, of course, nothing new. The model cannot be expected to resolve whether a particular tree topples at a particular location. As the losses that arise from these events are present in the claims data used to calibrate the model’s wind and storm surge damage functions, it is reasonable to say that such sources of loss are captured implicitly.

However, we make no attempt to model other secondary sources of loss, such as rioting or pollution clean-up. The ability to model these sources of loss explicitly is highly questionable because of the inability to distinguish them in claims data.

Robert Muir-Wood: The experience of Katrina triggered a revolution in our thinking about additional factors that drive up loss, from which emerged the structure of post-event loss amplification or PLA. In this structure we can identify four factors that tend to push up loss beyond the simple hazard exposure loss equation of Cat modeling.

First there is ‘economic demand surge’ – when excess demand leads to price increases in materials and labor.

Second there is ‘deterioration vulnerability’ – as seen widely in houses abandoned in New Orleans after Katrina. Even where a property was not flooded, if it had a hole in the roof, after a few weeks the whole interior was contaminated with mold.

Third there is ‘claims inflation’ when insurers are so overwhelmed with claims that they let through claims below some threshold without checking.

Fourth there is ‘coverage expansion’, when typically under political pressure insurers pay beyond the terms of their policies – waiving deductibles, ignoring limits, and covering perils like flood. When the level of disruption is so high that urban areas are evacuated, so that business interruption (BI) losses simply run and run, as seen in the Christchurch 2010 and 2011 earthquakes, we call this “Super-Cat.”

In terms of our broader modeling agenda we focus on trying to capture economic demand surge and claims inflation and recommend stress tests or add defaults around coverage expansion. We also apply super-Cat factors to the largest loss events affecting cities that could be prone to local evacuations.
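To make that structure concrete, here is a toy sketch of how such post-event loss amplification factors might be layered onto a modeled ground-up loss. Every threshold and factor value below is invented for illustration; this is not RMS’s actual PLA methodology.

```python
# Toy illustration (not RMS's actual PLA methodology): layer simple post-event
# loss amplification factors onto a modeled ground-up loss. All thresholds and
# factor sizes below are invented for illustration.

def amplified_loss(ground_up: float,
                   industry_loss: float,
                   super_cat: bool = False) -> float:
    """Apply illustrative PLA factors to a modeled ground-up loss (in $bn)."""
    factor = 1.0

    # Economic demand surge: grows with the size of the industry loss, capped.
    if industry_loss > 10.0:
        factor += min(0.03 * (industry_loss - 10.0) / 10.0, 0.20)

    # Claims inflation: overwhelmed insurers wave through smaller claims.
    if industry_loss > 20.0:
        factor += 0.05

    # Super-Cat: prolonged evacuation drives open-ended BI losses.
    if super_cat:
        factor += 0.25

    return ground_up * factor


print(amplified_loss(ground_up=2.0, industry_loss=45.0))                  # surge + claims inflation
print(amplified_loss(ground_up=2.0, industry_loss=45.0, super_cat=True))  # plus Super-Cat loading
```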


Prasad Gunturi
Senior Vice President
[email protected]


30 years later: the Ontario tornadoes of May 1985
June 8, 2015

On May 31, 1985, a deadly tornado outbreak struck the Northeastern United States and Canada, causing widespread property damage and loss of life.

Thirteen tornadoes struck Central Ontario and impacted communities such as Barrie, Grand Valley, Orangeville and Tottenham.

Source: http://www.erh.noaa.gov/cle/office/localinterest/storm_data.pdf

Two of these 13 tornadoes were rated F4, indicating extremely powerful wind speeds:

• The Barrie tornado (#6 in the map), an F4, was the most destructive of all in terms of loss of life and property damage. With a track 300 to 450 meters wide and 10 to 15 kilometers long, it killed eight people and caused approximately $100 million CAD in property damage.

• At the same time, the Grand Valley tornado (#7 in the map) began near Arthur and moved east to Campbellford. Traveling over 115 kilometers, it was considered one of the longest-tracked tornadoes in Canada.


Tornado monitoring, warning systems and building construction standards in Canada have improved over the past 30 years. But tornado risk to Canadian property still exists. Based on Environment Canada’s 1980-2009 tornado dataset, Ontario experienced an average of 13 tornadoes (confirmed and probable) per year, and significant tornado risk also exists in other areas including Edmonton, Winnipeg, and Montreal.

Increases in population and property exposure since 1985 would mean greater property loss if a similar event were to occur today.


So how can insurance companies manage their Canadian tornado risk?
Deterministic methods and “what-if” scenario loss modeling can help quantify loss potential. Studies performed by David A. Etkin et al. in 2002 and Patrick McCarthy et al. in 2006 demonstrated the value of using “what-if” deterministic loss analysis for hypothetical tornado events in Barrie, ON and Winnipeg, MB, respectively. Insurance companies writing property exposure in tornado risk areas may find these and similar scenarios useful as they seek to effectively manage severe thunderstorm risk.

A systematic way of developing numerous “what-if” scenario events across several communities, specific to an insured property portfolio, can be a valuable tool for companies writing severe thunderstorm risks. A tool like this can give users the flexibility to define custom-built hypothetical tornado events capable of causing large losses to insurance portfolios.
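To make the “what-if” idea concrete, here is a minimal sketch of a footprint-based scenario loss estimate: overlay a hypothetical straight-line track of a given width on a property portfolio and apply a flat mean damage ratio to the exposed values. The track geometry, damage ratio and portfolio are hypothetical, and this is not the methodology of any Willis Re tool.

```python
# Minimal sketch of a "what-if" tornado scenario loss estimate. Coordinates
# are kilometres on a local plane; all values are hypothetical.
import math

def point_to_segment_km(px, py, ax, ay, bx, by):
    """Shortest distance (km) from point P to segment AB."""
    abx, aby = bx - ax, by - ay
    ab2 = abx * abx + aby * aby
    t = 0.0 if ab2 == 0 else max(0.0, min(1.0, ((px - ax) * abx + (py - ay) * aby) / ab2))
    cx, cy = ax + t * abx, ay + t * aby
    return math.hypot(px - cx, py - cy)

def scenario_loss(portfolio, track_start, track_end, width_km, damage_ratio):
    """Sum damage_ratio * value for risks within half the track width."""
    ax, ay = track_start
    bx, by = track_end
    loss = 0.0
    for x, y, value in portfolio:
        if point_to_segment_km(x, y, ax, ay, bx, by) <= width_km / 2.0:
            loss += damage_ratio * value
    return loss

# Hypothetical portfolio: (x_km, y_km, total insured value in CAD).
portfolio = [(0.5, 0.2, 4e6), (3.0, 0.1, 6e6), (7.5, -0.3, 2e6), (9.0, 2.0, 5e6)]

# Hypothetical 12 km track, 400 m wide, 60% mean damage ratio inside the footprint.
print(f"Scenario loss: CAD {scenario_loss(portfolio, (0, 0), (12, 0), 0.4, 0.6):,.0f}")
```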

Willis Re’s eXTREME™ Tornado tool is one such tool, offering strong geospatial visualization capabilities via the SpatialKey platform.

For example, “what-if” analysis estimates that a storm like the 1985 Barrie F4 tornado could cause $400 to $600 million CAD in insurance industry loss today. More dramatically, a similar F4 tornado could produce $1.0 to $2.5 billion CAD in industry loss if it were to occur in Vaughan or Richmond Hill.

Knowing the possibilities makes it easier to price, to plan, and to prepare.


Prasad Gunturi
Senior Vice President
[email protected]


After Tohoku: Re-evaluating Japanese earthquake hazard
April 1, 2015

The magnitude of the 2011 Tohoku earthquake came largely as a surprise to the seismological community and revealed certain shortcomings in previous hazard studies. Helping to address this, in 2012 and 2013 the Headquarters for Earthquake Research Promotion (HERP) in Japan released provisional studies incorporating new research findings, and in December 2014 it issued the new National Seismic Hazard Maps. These maps and the accompanying report, currently only available in Japanese, furnish important context for the offering that commercial modeling companies (such as RMS and AIR) provide with regard to Japanese earthquake risk.

Key updates
What are the main changes in the new HERP hazard maps and report?

• Re-evaluation of the magnitude and long-term probabilities of several earthquakes (for example, those triggered along the Sagami Trough) on the basis of new data and considering potentially larger uncertainties

• Increases to potential magnitude of earthquakes triggered by unknown source faults (such as the 1968 Hyuga-nada Plate Earthquake and the Yonaguni Island Earthquake of 1998)

• Expansion and clarification of explanations regarding the fundamental principles, methods of probability evaluation, earthquake category, etc.

In addition, special attention has been paid to the Tohoku-Oki earthquake. At this point, HERP estimates a maximum magnitude of 9.0 with an average recurrence interval of 600 years; they see negligible probability of recurrence within the next 50 years.
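As a rough back-of-the-envelope contrast: under a memoryless (Poisson) assumption, a 600-year mean recurrence interval would imply roughly an 8% chance of occurrence in any 50-year window, so the “negligible” figure presumably reflects a time-dependent view so soon after the 2011 rupture. The snippet below only illustrates the memoryless arithmetic and is not HERP’s methodology.

```python
# Rough illustration only (not HERP's methodology): a memoryless (Poisson)
# occurrence model with a 600-year mean recurrence interval implies roughly an
# 8% chance of occurrence in any 50-year window. A time-dependent (renewal)
# view so soon after the 2011 rupture gives a far smaller probability.
import math

mean_recurrence_years = 600.0
window_years = 50.0

p_poisson = 1.0 - math.exp(-window_years / mean_recurrence_years)
print(f"Memoryless 50-year occurrence probability: {p_poisson:.1%}")  # ~8.0%
```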

How do the vendor models compare?
Both AIR and RMS rely heavily on HERP analysis to delineate and characterize their seismic source models. But the two models were last updated in 2013 and 2012, respectively. Given the increased magnitudes of HERP’s 2014 report, it is reasonable to ask whether the current vendor models sufficiently contemplate the risk.

Willis Re’s Model Research and Evaluation team investigated this question and determined that for the seismic sources of most interest, i.e. Eastern Japan, the model vendors have incorporated sufficient uncertainty and generally include events in their catalogues with magnitude potential equal to or larger than what is assumed in the 2014 HERP report. This is not the case for the Sagami Trough, where the vendors appear to be using lower magnitudes. However, the probability that HERP assigns to this event is very low.

Business implications
Overall, we find that the most important changes in the 2014 HERP hazard maps are the further uncertainties considered and the increased maximum magnitude of earthquakes triggered by subduction zones and unknown source faults. For the most part, existing models already include events of equal or greater magnitude and we see no need for vendor model adjustments at this time.


John E. Alarcon, PhD
Executive Director
[email protected]

Myrto Papaspiliou
Senior Catastrophe Research Analyst
[email protected]

Lin Ke
Catastrophe Risk Analyst
[email protected]

References
• 全国地震動予測地図2014年版, 地震調査研究推進本部 (National Seismic Hazard Maps for Japan, 2014 edition; Headquarters for Earthquake Research Promotion)
• 三陸沖から房総沖にかけての地震活動の長期評価（第二版）について (Long-term evaluation of seismic activity from off Sanriku through off Boso, second edition)


Solvency II equivalence

As Solvency II draws closer to taking effect in Europe, insurance industry participants are learning more about how it may affect parties outside Europe. A big question has been which other regulatory frameworks will be deemed “equivalent” to Solvency II.

On Friday June 5, the European Commission announced the first set of equivalence decisions. These grant provisional equivalence to the group capital regulatory regimes in six non-E.U. countries:

• Australia

• Bermuda

• Brazil

• Canada

• Mexico

• United States

Separately, Switzerland was granted full equivalence for all three areas considered by Solvency II: reinsurance, group capital, and group supervision.

To understand these announcements, let’s recap the three separate areas that Solvency II considers for equivalence:

Reinsurance (Article 172)
If the foreign regulatory system is deemed to be equivalent with regard to reinsurance, reinsurance contracts between E.U. insurers and foreign reinsurers will receive the same treatment under Solvency II as contracts between E.U. insurers and E.U. reinsurers. A major consideration here is whether reinsurers are required to collateralize unearned premiums and unpaid claims.


Calculation of group capital (Article 227)
Is a foreign company owned by an E.U. insurer subject to a solvency regime equivalent to Solvency II? If so, the E.U. owner may take into account the Solvency Capital Requirement (SCR) and own funds of the foreign subsidiary when calculating group capital.

Supervision of groups (Article 260)
Does the foreign regulator supervise groups in a manner equivalent to Solvency II? If so, E.U. insurers with a foreign parent will rely on the group supervision exercised by the parent’s regulator.

For E.U. companies with foreign parents or subsidiaries, failure to establish equivalence could have created substantial headaches. Effectively, such companies might have had to separately establish compliance with both regimes. And lack of equivalence could have severely hampered the market for reinsurance transactions between E.U. insurers and foreign reinsurers. The U.S. has been a particular concern because its risk-based solvency standard – while long established – differs fundamentally in approach from Solvency II.

With the June 5 announcements, the European Commission has made the following determinations of equivalence:


                 Reinsurance         Calculation of group      Supervision of groups
                 (Article 172)       capital (Article 227)     (Article 260)
Australia        -                   Provisional               -
Bermuda          -                   Provisional               -
                                     (excluding captives)
Brazil           -                   Provisional               -
Canada           -                   Provisional               -
Mexico           -                   Provisional               -
Switzerland      FULL                FULL                      FULL
US               -                   Provisional               -

Note: Provisional equivalence under Article 172 lasts 5 years and is non-renewable; under Article 227 it lasts 10 years and is renewable; under Article 260 it lasts 5 years and is non-renewable.

The next step is for these decisions to be scrutinized by the European Parliament and the Council. Successful completion of this scrutiny process will lead to publication in the E.U. Official Journal, with equivalence taking effect as of January 1, 2016.

The European Commission’s press release states that “Further Solvency II equivalence decisions are envisaged by the Commission in future.”

The U.S. did not enter a formal equivalence application, which had prompted a great deal of concern that an agreement might not be reached. Perhaps the most notable example was the announcement by Prudential in 2012 that it might consider exiting the UK if an equivalence agreement were not reached, given the potential additional capital requirements that could have resulted. The decision to grant the US provisional equivalence is clearly important and will be well received.

In its June 5 press release, the European Commission noted that an important part of its decision to grant the US provisional equivalence was the EU-US Dialogue project, which aims to achieve ‘improved mutual understanding of the respective insurance regulatory and supervisory regimes’. The provisional equivalence lasts for 10 years for the solvency calculation (Article 227 above) and is renewable at that stage. However, provisional equivalence for group supervision and reinsurance (Articles 260 and 172) lasts for 5 years and is non-renewable. The continuation of this dialogue will therefore be important.

While U.S. regulators are unlikely to change the basic underpinnings of their risk-based capital standard, they are implementing a Solvency Modernization Initiative (SMI) that began in 2008. One of the outcomes of the SMI is the requirement for insurers over a certain size to provide an Own Risk and Solvency Assessment (ORSA). It remains to be seen exactly how the SMI may affect group supervision by U.S. regulators.


Stephen Mullan
Rating Advisory & Regulatory
[email protected]


Mutual insurers and non-traditional capital: time for a change of perspective
August 24, 2015

Only a few years ago, mutual insurers could understandably have viewed the influx of non-traditional capital trickling into the reinsurance market as irrelevant.

Mutual insurers rightly pride themselves on the unrivalled service they provide to their members, offering products tailored to meet their specific needs and developing intimate and long-term relationships with their constituency.

This would have seemed very much at odds with the approach of the new capital, which was focused on only a limited amount of “tail” property catastrophe exposures, offering coverage that was largely parametric or index-based (and hence a step more remote from risks on the ground) and came with high frictional deal costs.

Thinking about the new products on offer may have presented an interesting intellectual exercise to the reinsurance buyer of a mutual insurer, but such buyers are likely to have drawn the conclusion that this was something best left for the ceded reinsurance departments of the largest stock companies.

Broader acceptance of non-traditional capital
Step forward a few years and the signs of convergence between traditional and non-traditional capital were becoming much clearer. The coverages granted by new capital providers were more likely to be indemnity-based, and a broader range of perils and territories were available.

The number of reinsurance buyers making use of these products was also increasing, breeding greater experience and increased confidence.

However, a significant disconnect remained between the traditional ways of transacting reinsurance and the trading mentality new capital providers brought with them from other financial markets. Coverage was still limited to property catastrophe risks, and deals offered significant execution hurdles in terms of costs, compliance and complexity of documentation.

These challenges would have looked very much at odds with the reinsurance buying philosophies of most mutual insurers. Core to the thinking of any mutual is the pledge it makes to its policyholders to support them in their time of greatest need. This commitment is clearly expressed in the ability to maintain availability of limits and stability of pricing across multiple market cycles, and whether or not equivalent coverage is made available by commercial insurers.

The reinsurance buying of mutual insurers reflects these intentions, and the increasing number of alternative products and reinsurance providers would still seem to have little relevance to all but a few mutuals, despite a broader acceptance in the wider re/insurance community.


Inflection point
As it stands in the second quarter of 2015, things have moved on significantly. The recently launched Willis Reinsurance Index highlights that dedicated global reinsurance capital at $425B represents an all-time high, with an ever-increasing contribution from non-traditional providers.

The changes driven by this influx of capital have moved the market far beyond an inflection point and can no longer be seen as short-term, cyclical change. We are witnessing a persistent and secular change in the structure and composition of the reinsurance marketplace. The lines between traditional and non-traditional reinsurers are blurring, with many reinsurers, new and established, straddling both approaches and seeking to maintain their relevance by offering capacity through managed special-purpose vehicles, sidecars, and collateralized retrocession agreements.

The choice of reinsurance products available to a buyer, in terms of line of business, loss trigger and financial backing, has never been wider. The way in which these products are executed continues to evolve, but innovations (such as the Willis Resilience Re catastrophe bond platform) make them ever easier to access for buyers of all sizes and territories.

Mutuals must change their perspective
For a mutual insurer, it is time for a change in perspective on the reinsurance market. Even if the increasing range of alternative reinsurance products does not directly appeal, it is now impossible to ignore the impact non-traditional capital is having on the market.

A shortage of opportunity and declining prices are pushing reinsurers to deploy capacity outside of their traditional areas of comfort. As a result we are seeing more capital being deployed in longer-tail lines of business, significant overcapacity becoming visible in specialist lines of business, and capital providers beginning to become active in areas that are traditionally seen as insurance rather than reinsurance.

For mutuals, the threat from this is twofold:

1. From competitors and peers accessing cheaper reinsurance capital and passing on the benefits of this to their policyholders in the form of cheaper premiums

2. From new entrants to their markets who will seek to disrupt traditional distribution models to access new clients.

Mutual insurers offer an extraordinary proposition for their policyholders: acute focus and expertise brought to bear on a particular market segment or territory, a commitment to policyholders that extends beyond just the insurance product, consistency over time and unrivalled levels of policyholder service. Ultimately, much of this is only possible because of the unique structure of mutuals, whereby shareholder equity belongs to the policyholders and there is complete alignment of interest between policyholder and shareholder, who are one and the same. Yet at the same time this structure acts as a constraint: a mutual’s ability to raise capital is more limited than that of the commercial market. For this reason, reinsurance often represents not just the most efficient but also the most flexible form of capital, putting the onus on a mutual to review its reinsurance purchasing in light of broader market developments to ensure it is maximizing all the potential benefits. Those mutuals that will best serve their members will be those that embrace the changes that have occurred in the reinsurance market, whether directly through the purchase of new products and relationships or indirectly through leveraging available excess capital to benefit their members.

To do so is not without challenges. The heightened importance reinsurance plays in a mutual’s financial stability – when compared to commercial insurers – dictates that any reinsurance purchased must recognise the unique features, history and philosophy of a mutual.

The good news is that many mutuals are now starting to embrace this. The very nature of the mutual movement has spurred the organization of forums such as the International Co-operative and Mutual Insurance Federation, the Association of Mutual Insurers and Insurance Co-operatives in Europe and the National Association of Mutual Insurance Companies in the US, which provide arenas not only to share ideas but also to seek the support of peers.

This article was first published in May 2015 as part of Insurance Day’s special report on Mutuals.


Robin Swindell
Executive Vice President
[email protected]


Ghosts in the (driving) machine – and on the witness stand
August 18, 2015

The promise of autonomous vehicles

In the 21st century, vehicles need to be safer, more convenient, more efficient and more socially responsible. Chains of networked, supercomputer-managed autonomous cars have the potential to reduce both transportation time and traffic congestion. In this utopian vision, car accidents are things of the past, commuting times are shorter and the environment is cleaner.

Self-driving cars could also provide the elderly, infirm and disabled with daily and emergency access to safe transportation. In an emergency, such networked vehicles could save lives by moving people out of harm’s way.

Autonomous vehicles have the potential to dramatically reduce accidents as people who are prone to distractions “give up the steering wheel,” saving millions of lives and preventing tens of millions of injuries. Across the globe, approximately 1.3 million people die in road crashes every year, an average of 3,287 deaths a day. Another 20 million to 50 million people are injured each year – that’s the equivalent of the entire population of South Africa or the State of Illinois.

Arguably, if these road crashes were any other man-made or natural disaster, rather than a commonly accepted cost of a personal convenience, every government in the world would be under great pressure to reduce the risk.

Downside: cyber risk

But there are downsides to the technology: a car can be hacked. It’s not just the idea of terrorists turning the self-driving cars of the future into automated cruise missiles that worries security professionals. Today’s cars are riddled with antiquated and defenseless computers running outdated code that is easily manipulated: a modern car has over 100 million lines of code, and autonomous cars will have hundreds of millions.

For autonomous vehicles to be accepted, car manufacturers are going to have to get serious about computer security. Major manufacturers’ recent problems with control software offer a preview of the hurdles driverless cars must overcome: the average person must believe the software is safe and close to infallible.

Responsibility

In the United States, driverless cars may provide significant safety advantages, but they will also change how the criminal and civil systems view the driver, the vehicle owner, the vehicle manufacturer, vendors, software engineers, computer engineers and those who maintain the vehicle. Self-driving cars will make it increasingly challenging to decide who is liable for an accident and what kind of liability is appropriate.

Can a balance be created between life-saving autonomous car technology and tort law, or will the fear of large verdicts stop manufacturers from deploying the new technology?


Vehicular crimes

If drivers are no longer in control of their vehicles, how will speeding, reckless driving, running red lights and parking infractions be handled? Will drunk or underage driving still be a crime? What about driving without a license? Will vehicle hacking be a crime? What about criminal vehicular homicide – will that still exist?

The FBI is taking such questions very seriously. Its Strategic Issues Group recently issued a report suggesting that “game changing” vehicles could revolutionize high-speed car chases within a matter of years, and warning that autonomous cars may be used as lethal weapons.

Liability

And manufacturers could be held liable. Recent court cases suggest that even when there is no proof whatsoever that software is at fault, juries can still be convinced that accidents are the result of undiscoverable computer malfunctions. How can any vehicle manufacturer or software programmer prove to the satisfaction of juries or investors that its software is perfect?

Somehow society must ensure the accountability that is a hallmark of a good civil (and criminal) justice system, compensate the injured and remediate damage – but still entice investors and manufacturers to make the necessary investments in the technology. One has to wonder if “ghosts in the machine” will permanently park the autonomous car.

Today’s tort law could kill the successful deployment of autonomous cars. As the autonomous car industry matures, liability will shift from a mélange of driver and manufacturer accountability to predominantly manufacturer accountability. The “ghost in the machine” will become a very real and very frequent defendant in an endless stream of expensive personal tort and products liability lawsuits.

Without an evolution in tort law or the adoption of a different type of deterrence and injury compensation regime, the autonomous vehicle will simply be an interesting technological oddity.


Pete Thomas
Chief Risk Officer
McLeansville, NC
[email protected]


About our bloggers

By Dave Ingram

1. Emerging ERM risk of 2015: outsourcing
2. Risk appetite and tolerance
3. ORSA summary report “Top 10” checklist
4. ERM: Discussing fatness of tails in risk models

Dave is an Executive Vice President of Willis Re, specializing in theory and practice of ERM for insurers. Based in New York, Dave has more than 30 years of actuarial and general management experience in the insurance industry and has published and spoken about ERM all over the world. In 2012 he was named by @TreasuryandRisk as one of the 100 Most Influential People in Finance. You can follow Dave on Twitter at @dingramerm (views his own – not Willis’). [email protected]

By Rick Thomas

1. Are we safe from tsunamis?

2. A strong El Niño on the way?

Rick is an Executive Director at Willis Re International with 20 years of experience in reinsurance underwriting, cat model building, risk management and reinsurance buying. He also has detailed knowledge of the capital markets space. Within Willis Re, Rick oversees both the model research and evaluation team and the analytics and model development team. He also leads the Willis Re International ILS practice group and is Head of Strategy for the Willis Research Network. [email protected]

By Prasad Gunturi

1. 10 years on: RMS, AIR and Willis Re on the evolution of catastrophe models since Hurricane Katrina

2. 30 years later: the Ontario tornadoes of May 1985

Prasad Gunturi is Senior Vice President of Willis Re Analytics, where he leads the North American catastrophe modeling research and evaluation team. Prasad manages and leads specialized technical projects, including understanding changes in the catastrophe models, technical evaluation of commercial catastrophe models, developing portfolio-specific alternative views of risk and proprietary model development projects. [email protected]



By Myrto Papaspiliou, Lin Ke and John Alarcon

After Tohoku: Re-evaluating Japanese earthquake hazard

Myrto Papaspiliou, Lin Ke and John Alarcon work within Willis Re’s Model Research and Evaluation (MR&E) team, which is responsible for assessing, comparing, validating and adjusting vendor catastrophe models (AIR, EQE, ERN, RMS) for all perils and territories for Willis Re International and Specialties clients. Dr. Myrto Papaspiliou joined Willis in October 2012 and is a Senior Earthquake Research Analyst. Dr. Lin Ke joined Willis Re Japan K.K. in January 2015 as a Catastrophe Risk Analyst. Dr. John E. Alarcon is an Executive Director and leads the MR&E team.

[email protected]

[email protected]

[email protected]

By Pete Thomas

Ghosts in the (driving) machine – and on the witness stand

Pete is Willis Re’s Global Chief Risk Officer. He has 39 years of insurance and reinsurance underwriting, broking and management experience. [email protected]

By Stephen Mullan

Stephen Mullan is a Divisional Director within Willis Re, focusing on the alignment of reinsurance solutions to client risk appetite; this includes regulatory solvency targets under Solvency II. [email protected]

By Robin Swindell

Mutual insurers and non-traditional capital: time for a change of perspective

Robin is Executive Vice President & Regional Director of Willis Re. He works in the London office as part of the Willis Re North America team. Since joining Willis in 1989, Robin has been continuously involved in facultative and treaty reinsurance placements of property, casualty and specialty lines for mutual insurers, P&C companies and market pools. [email protected]


Global and local reinsurance

Willis Re employs reinsurance experts worldwide. Drawing on this highly professional resource, and backed by all the expertise of the wider Willis Group, we offer you every solution you look for in a top-tier reinsurance advisor – one that has comprehensive capabilities, with on-the-ground presence and local understanding.

Whether your operations are global, national or local, Willis Re can help you make better reinsurance decisions – access worldwide markets – negotiate optimum terms – and boost your business performance.

How can we help?

To find out how we can offer you an extra depth of service combined with extra flexibility, simply contact us.

Begin by visiting our website at www.willisre.com or calling your local office.

© Copyright 2015 Willis Limited / Willis Re Inc. All rights reserved: No part of this publication may be reproduced, disseminated, distributed, stored in a retrieval system, transmitted or otherwise transferred in any form or by any means, whether electronic, mechanical, photocopying, recording, or otherwise, without the permission of Willis Limited / Willis Re Inc. Some information contained in this document may be compiled from third party sources and we do not guarantee and are not responsible for the accuracy of such. This document is for general information only and is not intended to be relied upon. Any action based on or in connection with anything contained herein should be taken only after obtaining specific advice from independent professional advisors of your choice. The views expressed in this document are not necessarily those of Willis Limited / Willis Re Inc., its parent companies, sister companies, subsidiaries or affiliates (hereinafter “Willis”). Willis is not responsible for the accuracy or completeness of the contents herein and expressly disclaims any responsibility or liability for the reader’s application of any of the contents herein to any analysis or other matter, or for any results or conclusions based upon, arising from or in connection with the contents herein, nor do the contents herein guarantee, and should not be construed to guarantee, any particular result or outcome. Willis accepts no responsibility for the content or quality of any third party websites to which we refer. The contents herein are provided for informational purposes only and do not constitute and should not be construed as professional advice. Any and all examples used herein are for illustrative purposes only, are purely hypothetical in nature, and offered merely to describe concepts or ideas. They are not offered as solutions to produce specific results and are not to be relied upon. The reader is cautioned to consult independent professional advisors of his/her choice and formulate independent conclusions and opinions regarding the subject matter discussed herein. Willis is not responsible for the accuracy or completeness of the contents herein and expressly disclaims any responsibility or liability for the reader’s application of any of the contents herein to any analysis or other matter, nor do the contents herein guarantee, and should not be construed to guarantee, any particular result or outcome.

Willis Re Inc.
Brookfield Place
200 Liberty Street
3rd Floor
New York, NY 10281
Tel: +1 212 915 7600

Willis Limited
The Willis Building
51 Lime Street
London EC3M 7DQ
Tel: +44 (0)20 3124 6000
Fax: +44 (0)20 3124 8223