Cat Modeling – A Relative Version of the Truth?

As global warming and other external factors shift the geographical “norm” within which historically driven models have delivered accurate results, Risk Managers need to look at other third-party options in order to build accurate predictive models. The good news is that more historical data exists, and is more accessible, than ever before. The bad news is that most of the commonly available tools have not conceptually evolved since the 1950s.

While modeling is without a doubt the most valuable way for carriers to ascertain the risks involved in writing business in specific geographies and ZIP codes, over-reliance on historical data in modeling could have catastrophic effects on larger companies with larger books. Smaller companies can benefit directly from modeling because, in most cases, their book is small enough to model fairly accurately. Larger carriers writing across multiple geographical locations can benefit as well, but a model built on large quantities of data is only as good as the historical data for each region.

Risk Managers have to deal not only with the “known” but also with what is not known. While historical data provides valuable insight into what has happened in the past, it offers no guarantee of what could happen in the future. Risk modelers should make it a practice to look at other measures and tools, such as the following (a brief sketch of the first and third items appears after the list):

  • Total Sum Insured (TSI)
  • Validation Models
  • Experience and Stress Testing
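
As a rough illustration of the first and third items, here is a minimal sketch assuming a hypothetical list of exposure records and placeholder loss-ratio assumptions: TSI is simply the aggregate insured value by region, and a crude stress test scales the assumed loss ratios upward to see how modeled losses respond.

```python
# Hedged sketch: hypothetical exposure records and loss-ratio assumptions,
# not output from any real catastrophe model.
from collections import defaultdict

exposures = [
    {"region": "FL-Miami-Dade", "tsi": 450_000},
    {"region": "FL-Miami-Dade", "tsi": 300_000},
    {"region": "LA-Orleans",    "tsi": 275_000},
]

# Total Sum Insured (TSI) aggregated by region.
tsi_by_region = defaultdict(float)
for exp in exposures:
    tsi_by_region[exp["region"]] += exp["tsi"]

# Crude stress test: scale an assumed base loss ratio per region upward
# and watch how the modeled loss responds. The ratios are placeholders.
base_loss_ratio = {"FL-Miami-Dade": 0.08, "LA-Orleans": 0.05}
for stress in (1.0, 1.5, 2.0):  # 0%, 50%, 100% uplift
    stressed_loss = sum(
        tsi * base_loss_ratio.get(region, 0.0) * stress
        for region, tsi in tsi_by_region.items()
    )
    print(f"stress x{stress}: modeled loss = {stressed_loss:,.0f}")
```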

Scenario modeling is also very important, although it is not an exact science. Scenarios provide a sound methodology for estimating potential losses from specific perils, but they describe possible future events rather than tried, proven historical ones. Historical events still have their place: knowing the measurable damage a Category 3 hurricane caused when it passed through a given geographical area is valuable when evaluating how much damage a similar storm could cause there again.

Currently, the input to a Cat model most often falls into three categories (a sketch of how these might be captured as a record follows the list):

  • information on the site locations, referred to as geocoding data (street address, postal code, county/CRESTA zone, et cetera)
  • information on the physical characteristics of the exposures (construction, occupation/occupancy, year built, number of stories, number of employees, et cetera)
  • information on the financial terms of the insurance coverage (coverage value, limit, deductible, et cetera)
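
To make the three categories concrete, here is one possible shape for an exposure record, written as a hedged sketch; the field names are illustrative and do not reflect any particular vendor's schema.

```python
# Hedged sketch: one possible layout for an exposure record covering the
# three input categories above. Field names are illustrative only.
from dataclasses import dataclass

@dataclass
class Exposure:
    # Location / geocoding data
    street_address: str
    postal_code: str
    cresta_zone: str
    # Physical characteristics
    construction: str        # e.g. "masonry", "wood frame"
    occupancy: str           # e.g. "single-family residential"
    year_built: int
    num_stories: int
    # Financial terms of coverage
    coverage_value: float
    limit: float
    deductible: float

example = Exposure(
    street_address="123 Example Ave", postal_code="33101", cresta_zone="US-FL-41",
    construction="masonry", occupancy="single-family residential",
    year_built=1998, num_stories=1,
    coverage_value=400_000, limit=400_000, deductible=5_000,
)
```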

The output is estimates of the losses that the model predicts would be associated with a particular event or set of events. When running a probabilistic model, the output is either a probabilistic loss distribution or a set of events that could be used to create a loss distribution; probable maximum losses (PMLs) and average annual losses (AALs) are calculated from the loss distribution. When running a deterministic model, losses caused by a specific event are calculated; for example, “a magnitude 8.0 earthquake in downtown San Francisco” could be analyzed against the portfolio of exposures.
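
As a rough sketch of how those probabilistic outputs relate, assuming a simulated annual loss distribution stands in for real model output: AAL is the mean of that distribution, and a PML at a given return period is a high quantile of it.

```python
# Hedged sketch: deriving AAL and PMLs from a simulated annual loss
# distribution. The simulation is a placeholder, not a real cat model.
import numpy as np

rng = np.random.default_rng(0)
n_years = 100_000

# Placeholder annual losses: most years modest, occasional severe years.
annual_losses = rng.lognormal(mean=15.0, sigma=1.5, size=n_years)

# Average Annual Loss (AAL): the mean of the annual loss distribution.
aal = annual_losses.mean()

# Probable Maximum Loss (PML) at a return period is a high quantile of the
# same distribution, e.g. the 1-in-100 and 1-in-250 year losses.
pml_100 = np.quantile(annual_losses, 1 - 1 / 100)
pml_250 = np.quantile(annual_losses, 1 - 1 / 250)

print(f"AAL: {aal:,.0f}  PML 1-in-100: {pml_100:,.0f}  PML 1-in-250: {pml_250:,.0f}")
```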

In the years preceding 1992, the largest possible insured loss from a single hurricane was pegged at around 7 billion dollars, an estimate predicated largely on damage experience from prior storms. Hurricane Andrew would show how far short that figure fell.

But in 1987, AIR Worldwide, a creator of some of the world’s most advanced modeling software, estimated that losses could run upwards of 20-30 billion dollars. Hurricane Andrew’s losses were initially placed in the range of 13 billion dollars and finally settled at close to 15 billion dollars in total.

From 1990 to 1999, losses in the United States exceeded 87 billion dollars.

One thing that failed to materialize in most Cat models was the storm surge and flooding that devastated Louisiana and hurt a lot of Gulf Coast states.

This is a classic example of Cat modeling missing the mark, but let’s not throw the baby out with the bathwater and dismiss Cat modeling as a gamble or “flawed science.” If anything, Katrina brought about scenarios that have raised questions about how the models and data can be better used to more accurately predict losses.

Jayanta Guin, AIR’s vice president for research and modeling, said initial estimates did not account for the extensive flood damage in New Orleans. While AIR’s model does take storm surge into account, it does not include data on inland flooding, since most insurers do not cover the risk.

“There are a set of issues that are rather complicated once a major city becomes flooded,” Guin said. Although hurricane coverage is usually limited to wind damage, flooding losses may end up being insured or may be contested in court.

Interestingly enough, after Sandy much of the eastern coastline had to be rebuilt, and many local governments quickly amended their building codes to ensure that new coastal construction would meet a higher standard of storm safety than before. Florida did much the same after Andrew, requiring new safeguards in homes and other structures so that losses seen historically could be better mitigated in the future. Unfortunately, these improvements don’t always translate into premium savings, especially when they are mandated, but they do allow modelers to create more accurate models as we learn more about ever-changing weather patterns and as the historical data grows larger and more precise.

With advances in processor technology, insurers are able to feed large quantities of data into their models and churn out results at a much faster rate than previously thought possible.

Dropping technology costs have also allowed small to mid-size insurers to purchase more sophisticated modeling software and hardware that can deliver models quickly and cost-effectively.

“The question they will ask is how closely the model losses predicted on the front end match up against actual losses,” Berg said.

Swiss Re’s Castaldi said that, based on last year’s hurricane season, actual losses to a company’s portfolio that were significantly higher or lower than those predicted by the catastrophe models were typically the result of poor data or non-modeled aspects.

“In many instances, local [insurers] using the models were off by a factor of two or three,” he said. “In the proper hands, cat models can come up with good estimates, but if the data is not correct, it can cause significant problems. There is a lot of human error involved.”

Insurers need to remember that modeling, while not an exact science, is a very valuable tool for companies looking to expand into regions where catastrophes may or may not be a way of life.

For companies that are not expanding, the best historical data for modeling may come from their own claims history.
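
One lightweight way to start down that road, assuming nothing more than a list of historical claims with a year and a paid amount, is to derive empirical frequency and severity directly from the book. The figures below are placeholder data and a first-pass calculation, not a substitute for a full model.

```python
# Hedged sketch: empirical frequency and severity from a carrier's own
# claims history. The claims list is illustrative placeholder data.
claims = [
    {"year": 2019, "paid": 12_500},
    {"year": 2019, "paid": 48_000},
    {"year": 2020, "paid": 7_200},
    {"year": 2022, "paid": 310_000},
]

years = {c["year"] for c in claims}
n_years_observed = max(years) - min(years) + 1            # 2019..2022 -> 4 years

frequency = len(claims) / n_years_observed                # claims per year
severity = sum(c["paid"] for c in claims) / len(claims)   # average paid claim

# A first-pass expected annual loss from the book's own experience.
expected_annual_loss = frequency * severity
print(f"freq {frequency:.2f}/yr, avg severity {severity:,.0f}, "
      f"expected annual loss {expected_annual_loss:,.0f}")
```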

That being said, the future for catastrophe modeling looks bright as companies like AIR and others add data from scientific hazard models and engineering perspectives on damage. But one area companies need to look at is interlacing Cat models with their own current risk assessment data.

Each company must look at how models directly affect its own risk assessments.

One area where many companies have failed to stretch the limits of their modeling is in incorporating a data warehousing solution into their models.

Warehouses designed specifically for big data can help insurance companies accelerate the speed and increase the precision of catastrophe risk modeling.

Predictive modeling solutions will allow companies to analyze losses before a disaster occurs or while one is unfolding. With data warehousing, insurers gain the ability to model losses at the policy level, calculating the effect of a new policy on the portfolio while it is being quoted. In most cases this cannot be done with a modeling tool alone.
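
A hedged sketch of what that policy-level calculation might look like, assuming per-event losses for the current portfolio and for the quoted policy are already available as arrays from the warehouse: the data here is simulated and the function name is hypothetical, not a real product API.

```python
# Hedged sketch: marginal impact of a quoted policy on portfolio PML,
# assuming per-event losses are stored in a data warehouse. The arrays
# below are simulated stand-ins for warehouse-resident event loss tables.
import numpy as np

def marginal_pml_impact(portfolio_event_losses: np.ndarray,
                        policy_event_losses: np.ndarray,
                        return_period: int = 100) -> float:
    """Change in the 1-in-`return_period` loss if the quoted policy is added."""
    q = 1 - 1 / return_period
    before = np.quantile(portfolio_event_losses, q)
    after = np.quantile(portfolio_event_losses + policy_event_losses, q)
    return after - before

portfolio_event_losses = np.random.default_rng(1).lognormal(14, 1.2, size=50_000)
policy_event_losses = np.random.default_rng(2).lognormal(8, 1.0, size=50_000)

impact = marginal_pml_impact(portfolio_event_losses, policy_event_losses)
print(f"Marginal 1-in-100 impact of the quoted policy: {impact:,.0f}")
```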

Rather than running historical models every quarter, companies could theoretically run Cat models every month against the data warehouse and include other risk metrics that may not be included in, or ingestible by, the current modeling software.

A great case in point is the burgeoning exodus of residents from the northeastern states to the South. States such as Florida have now surpassed New York in population, with more people arriving every day. This has led to a surge in new construction, and the landscape of the state is changing daily. Modeling around areas where new construction and new building codes are being implemented is no longer a quarterly task. If insurance companies want to mitigate their own risk, their models have to take a lot of this into consideration, along with new data sources and “big data” as additional inputs.

While hazards do not evolve rapidly, the science of understanding them changes at a dizzying pace.

Research by Lloyd’s and catastrophe modelling firms shows that models incorporate changing conditions implicitly but not explicitly.

It is important to realize that catastrophe models are not forecasts. They do not predict which events will occur. They show a range of possible events that might occur and their likely costs, because insurers deal in probabilities.

Taking a different approach to modeling by incorporating a data warehouse will lend itself to better and more frequent models and better accuracy, and, best of all, it will allow companies to take action sooner, which will mean more accurate loss reserving and an optimized business portfolio.
