But take-up was limited by hardware and software constraints
Similarly, the theory of catastrophe modelling was becoming formalised
eg Natural Hazard Risk Assessment for an Insurance Program, Don Friedman, 1984
In 1987 Karen Clark, later to found AIR, built the first hurricane model, funded by reinsurance broker Blanch (but smartly she kept the IPR)
1991: the key year
I built my first stochastic reinsurance model in Excel with Excel macros
My colleague Andrew Mitchell (now also at Willis) built the first stochastic UK windstorm model
These early models were, in retrospect, poor, and ran incredibly slowly, but the die was cast
The fundamentals were there
Hardware and software improving
Theory in place
Market was receptive
Working models had been created
Our new toys were creating a stir in a market that was stressed by a chain of losses
Eg European storms 87J and 90A/Daria, Piper Alpha, Hurricane Andrew
Adverse comparisons were made between the London market and “professional reinsurers” in Europe
This led to a flight to modelling in the London market
Stochastic Modelling
Guy Carpenter’s Instrat led the way, with thought leadership from Rodney Kreps and Gary Venter
In 1990, Kreps published Reinsurer Risk Loads from Marginal Surplus Requirements, laying out the basis for the standard deviation load XL pricing methodology still widely used (and abused) today
@Risk and Crystal Ball emerged, stochastic add-ins for Excel
MetaRisk from Guy Carpenter was the first dedicated broker tool for reinsurance optimisation, followed by ReMetrica from Greig Fester
Catastrophe Modelling
At first catastrophe modelling was pioneered by reinsurance brokers, but as the 1990s continued we saw the emergence of the three now-dominant modelling companies: RMS, EQECAT and AIR
Storm models were followed first by flood models (eg UK storm surge in 1992) and then by earthquake models
By the end of the decade most major perils for major markets were modelled
Scientific/industry engagement increased with the launch of the Greig Fester Hazard Research Centre at UCL and the Risk Prediction Initiative in Bermuda
The new models seemed to offer the panacea to the industry’s problems
A technical basis to replace the now-discredited “underwriters are born, not made” attitude and the naïve subscription market that arguably prevailed before
For me this was the answer at last: now there was a story, risk vs return
At last “cost” could be compared to a consistent measure of benefit
Typically the mean reduction in underwriting profit was compared to a percentile value as a proxy for capital, or to the probability of missing a particular target
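As a minimal sketch, with entirely hypothetical loss and premium figures, the comparison might look like this: the “cost” of a cover is the mean reduction in underwriting profit, and the “benefit” is the reduction in a percentile loss used as a capital proxy (or the change in the probability of missing a target).

```python
# Minimal sketch of the risk-vs-return comparison (all figures hypothetical, in £m):
# cost = mean profit given up by buying the cover,
# benefit = reduction in the 1-in-100 loss, used as a proxy for capital.
import numpy as np

rng = np.random.default_rng(seed=1)
n_sims = 100_000

# Hypothetical gross annual catastrophe losses (lognormal, fat-ish tail)
gross = rng.lognormal(mean=2.0, sigma=1.2, size=n_sims)

# Hypothetical XL cover: 30 xs 20, priced at 4.5
attachment, limit, premium = 20.0, 30.0, 4.5
recoveries = np.clip(gross - attachment, 0.0, limit)
net = gross - recoveries + premium            # net loss plus reinsurance spend

cost = net.mean() - gross.mean()              # mean reduction in underwriting profit
benefit = np.percentile(gross, 99) - np.percentile(net, 99)   # 1-in-100 reduction

prob_miss_gross = (gross > 40.0).mean()       # probability of a worse-than-£40m year
prob_miss_net = (net > 40.0).mean()

print(f"Cost (mean profit given up): {cost:.2f}")
print(f"Benefit (1-in-100 reduction): {benefit:.2f}")
print(f"P(loss > 40) gross/net: {prob_miss_gross:.3f} / {prob_miss_net:.3f}")
```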
Reinsurance pricing seemed to have more logic; Kreps formula separated
Expected Loss cost
Expenses
Cost of Capital
BUT we had that perfect storm
Demand from people who wanted a certainty that didn’t exist
Lack of transparency, and indeed understanding, around models and techniques
There were a few people who did understand, but they were learning on the job
The days of “the computer says no” (or, perhaps more dangerously, yes)
Some common problems in the early days of modelling:
Modelling too simplistically applied
Confusion of single-portfolio VaR points with real capital requirements
Over-simplistic adoption of Kreps pricing formula for XL business
Lack of understanding of inherent complexity of models
Management demanded a number to manage to, not a probability range
Peril models in particular were oversold and over-bought
Vendors focussed on model sophistication and robustness (what buyers wanted to hear), not uncertainty (definitely what they didn’t want to hear)
Difficult for market to absorb new thinking
Early science/industry collaborations largely failed
People too busy with the day job to try/absorb new ideas
Silo mentality
Models were single purpose, no real joined up thinking
Often there was a conflict between the underwriter (salesman) and modeller (policeman)
All of these problems persist today
For practical reasons, only one portfolio or element of a portfolio was modelled
eg only storm element of property catastrophe
“Capital benefit” was modelled by the change in the 1 in 200 of that portfolio/peril
Often acknowledged to be a proxy, but then confused by spurious “economic value” calculations
Example: UK client in 1999 with an early Internal Capital Model
Broker: This deal saves you £200m of capital (meaning the 1 in 100 for UK storm is reduced by £200m)
Client: But if I add in the rest of my cat risks, the 1 in 100 only drops by £150m after this cover
Client again: Indeed, if I add in the rest of my property book risks, the 1 in 100 is only £120m off
And again: If I add in the rest of my active underwriting risk, the benefit is only £80m
And more: If I also add in reserving risk, the benefit is only £50m
And the coup de grâce: If we add in asset risk, then my company 1 in 100 is only down £20m
Suddenly the “Economic Benefit” is not looking so rosy; exit broker, wounded
An example of naivety/loose language, not an attempt to mislead (I was that broker)
But an example of interpretation running ahead of model ability
The broker did not have the data, knowledge or time to build a full ECM and so assess the capital benefit
To be honest, neither did the client; they later acknowledged that their model was useless
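A toy simulation, with made-up figures, illustrates the effect in the dialogue above: the same cover’s 1 in 100 benefit shrinks as more imperfectly correlated risk classes are brought into the modelled portfolio.

```python
# Toy illustration (entirely hypothetical figures, in £m) of the example above:
# the 1-in-100 benefit of a cover on one peril shrinks as more risk classes
# are added to the modelled portfolio.
import numpy as np

rng = np.random.default_rng(seed=7)
n = 200_000

uk_storm = rng.lognormal(4.0, 1.0, n)        # the modelled peril
other_cat = rng.lognormal(3.8, 1.0, n)       # other catastrophe risks
attritional = rng.normal(400.0, 120.0, n)    # rest of property book
reserves = rng.normal(0.0, 150.0, n)         # reserving risk
assets = rng.normal(0.0, 250.0, n)           # asset risk

# Hypothetical cover on UK storm only: 200 xs 100 (premium ignored for simplicity)
recovery = np.clip(uk_storm - 100.0, 0.0, 200.0)

def benefit(*components):
    """Reduction in the 1-in-100 loss when the cover is applied to this scope."""
    total = sum(components)
    return np.percentile(total, 99) - np.percentile(total - recovery, 99)

for label, comps in [
    ("UK storm only", [uk_storm]),
    ("+ other cat risks", [uk_storm, other_cat]),
    ("+ rest of property book", [uk_storm, other_cat, attritional]),
    ("+ reserving risk", [uk_storm, other_cat, attritional, reserves]),
    ("+ asset risk", [uk_storm, other_cat, attritional, reserves, assets]),
]:
    print(f"{label:25s} benefit ~ £{benefit(*comps):.0f}m")
```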
The Kreps formula is still widely used for XL reinsurance pricing, but how many have read the theory?
Price is a function of expected loss cost plus expenses plus capital charge
Capital charge is “reluctance factor” times standard deviation of result
The reluctance factor, per the paper, depends upon the risk aversion and return expectation of the reinsurer, coupled with the correlation of the new risk with the rest of their book
But how many people using the formula are aware of the theory? 5%?
Assumptions behind Kreps formula
XL losses are normally distributed - clearly wrong, tail is far fatter
In the paper’s examples a 1 in 1,000 capital measure is assumed. Given the distribution is assumed normal, this is arguably consistent with a lower real return period, but it looks a conservative assumption
The paper gives “reluctance factors”, ie percentages applied to the standard deviation, of:
33% if 12% desired return on capital and risk is 100% correlated with rest of book
52% if 20% desired return on capital and risk is 100% correlated with rest of book
Arguably (perhaps indisputably) current factors are way outside of these guidelines
In theory, if correlation with the rest of the book is low, the reluctance factor should be near zero
But in practice the factor is often higher, confused by non-explicit cat model uncertainty loads and expenses
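The quoted factors can be reproduced under the paper’s stated assumptions (normally distributed results, a 1 in 1,000 capital standard). The sketch below uses one common reading of the marginal surplus result for a fully correlated risk, reluctance factor = y·z/(1+y), where y is the target return on marginal surplus and z is the normal quantile at the chosen security level; it is an illustration, not a substitute for reading the paper.

```python
# A small check (not the paper itself) of the quoted reluctance factors under
# the assumptions listed above: normally distributed results and a 1-in-1,000
# capital standard. One common reading of the marginal-surplus argument gives
#   reluctance factor = y * z / (1 + y)
# for a fully correlated risk, where y is the target return on marginal surplus
# and z is the normal quantile at the chosen security level.
from scipy.stats import norm

z = norm.ppf(1 - 1 / 1000)   # ~3.09 for a 1-in-1,000 standard

for y in (0.12, 0.20):
    factor = y * z / (1 + y)
    print(f"target return {y:.0%}: reluctance factor ~ {factor:.0%}")

# Prints roughly 33% and 52%, matching the figures quoted above. With low
# correlation the marginal standard deviation, and hence the factor, falls
# towards zero, as noted.
```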
Rather than use a crude approximation for capital usage as per Kreps formula, why not compute actual marginal capital requirements for each deal?
Because it’s hard, but peril models seemed to open the door to this approach for catastrophe risks
Seminal paper published by Lowe and Stanard in 2003
“An Integrated Dynamic Financial Analysis and Decision Support System for a Property Catastrophe Reinsurer”
The process was copied by many but, as always, the devil is in the detail
But Model dependent
What if the chosen model is wrong: frequency, severity, regional correlations?
How can different models be melded (eg some US hurricane on RMS, some on AIR)?
What if no model exists for a particular territory/peril?
And Portfolio dependent
Capital is marginal to what? Renewed book, held book, budgeted book, forecast book?
How to react when the portfolio develops, or opportunities arise, differently from plan?
Models are typically order dependent, ie the first risks have a lower charge than later ones; is this good, encouraging early responses, or poor, as it randomly skews the optimal portfolio? (see the sketch after this list)
What about non-cat risks (as per example 1)?
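A toy sketch, with hypothetical deals, of the order-dependence point: the marginal 1 in 200 capital charged to each deal depends on the sequence in which deals are added.

```python
# Toy sketch (hypothetical deals) of the order-dependence problem: the marginal
# 1-in-200 capital charged to each deal depends on the order in which deals are
# added to the portfolio.
import numpy as np

rng = np.random.default_rng(seed=3)
n = 200_000

# Three hypothetical deals with correlated losses (a shared catastrophe driver)
driver = rng.lognormal(3.0, 1.0, n)
deals = {
    "A": 0.6 * driver + rng.lognormal(2.0, 0.8, n),
    "B": 0.8 * driver + rng.lognormal(1.5, 0.7, n),
    "C": rng.lognormal(2.5, 0.9, n),             # largely independent of the driver
}

def capital(losses):
    """1-in-200 loss as the capital proxy."""
    return np.percentile(losses, 99.5)

def marginal_charges(order):
    charges, book = {}, np.zeros(n)
    for name in order:
        before = capital(book)
        book = book + deals[name]
        charges[name] = capital(book) - before   # marginal capital for this deal
    return charges

for order in (["A", "B", "C"], ["C", "B", "A"]):
    charges = marginal_charges(order)
    print(" -> ".join(f"{name}: {charges[name]:.0f}" for name in order))
# The same deal attracts a different marginal charge depending on its position
# in the sequence, which is the skew referred to above.
```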
Demand coming from 3 places:
Regulation: insurance regulators reflecting Basle II/III risk adjusted capital regimes
UK: ICAS regime in 2005 encouraged capital model development: 60+ companies in pre-approval
Ireland: Home to many (re)insurance groups: 40+ companies in pre-approval
Germany: A handful of companies in pre-approval
Most of the rest of Europe is similar to Germany
Core to many/most internal models is cat risk
Insurers encouraged to develop “own view of risk”
Cannot hide behind 3rd party models or 3rd party advice
As with ICAS, logical end point of Solvency II is a transparent model
Identify risk
Manage risk
Quantify risk?
I would argue no
There are a number of software choices available in the market
Processing power is cheap
But it is true that many models are hampered by poor performance
In one example, a reinsurance department is only allowed three runs a year because the model is so slow!
Why? Poor and perhaps over ambitious design?
Better a simple model with known flaws than a complex one nobody understands
But do regulators buy that message?
The drag will be people and money
Not enough actuaries in the world
Not enough money within insurers
Certainly not enough expertise within most regulators
But biggest issue is the cultural shift that full adoption of a capital model entails
But I fully expect that, as in the UK after ICAS, within 5 years it will be the rare exception for all but the smallest companies not to have an internal capital model, at least partial, in place
“Chief Scientific Officers” are emerging in the London Insurance Market
Most have catastrophe modelling backgrounds
Clearly influenced by need to deliver “own view of risk”
Open access model platforms are being postulated
Eg OASIS backed with UK research council funding
Circa 20 companies have now signed up, including all three big brokers, the more progressive Lloyd’s syndicates (eg Hiscox, Catlin) and big reinsurers like Renaissance Re and SCOR
RMS has had a “mea culpa” moment
Acknowledges that uncertainty in models was underplayed in the past
Developing a new open platform (though on their terms)
All three modellers embrace need for own view of risk (liability reduction?)
But companies still need some certainty
Where do companies “hang their hat”?
Regulators and shareholders won’t want to hear huge uncertainty margins
Knowledge may not bring power but paralysis
There is much more good news than bad
Catastrophe modelling has immeasurably improved how insurers view catastrophe risk
The importance of science, often seen as peripheral, is now fundamental
The concept of “own view of risk” is a grown-up response to the real world
There are many more catastrophe science aware people in the industry now than even 5 years ago
Greater interaction between science and the insurance industry (eg Willis Research Network)
But there are many problems
Most insurers still do not have the wherewithal to form their own view of risk
Regulators don’t really know what they want and expect