Evaluation of Natural Disaster Models

Lixin Zeng Stephen Kurtz

Arkwright Mutual Insurance Company

Waltham, MA, May 1997

Presented at the CAS Seminar on Reinsurance June 1 - 3, 1997, Bermuda

Author information: [email protected], [email protected]

1. Introduction

The use of probabilistic modeling software to evaluate exposures to natural disasters such as earthquakes, wind storms and floods has increased dramatically in recent years. The primary reason is that economic losses caused by natural disasters during the most recent 10 years have grown by a factor of six compared with those of the 1960s, after adjusting for inflation (Bertz, 1996). Driven by this trend, natural disaster modeling tools have been developed and greatly improved. These models calculate damage and losses on both scenario and probabilistic bases. For example, they estimate annual expected damage/losses as well as non-exceedance probabilities at levels corresponding to various return periods. For a more detailed introduction to the models, the reader is referred to Kozlowski and Mathewson (1997).
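The relationship between return periods and the annual probabilities they imply is simple enough to state explicitly. A minimal sketch in Python, assuming a Poisson event process (the function name is ours, not a model's):

    # Convert a return period (in years) to annual exceedance and
    # non-exceedance probabilities, assuming events arrive as a Poisson
    # process with an average of one event per return period.
    import math

    def annual_probabilities(return_period_years):
        exceedance = 1.0 - math.exp(-1.0 / return_period_years)
        return exceedance, 1.0 - exceedance

    # A "100-year" loss level is exceeded in any given year with a
    # probability of roughly 1%.
    print(annual_probabilities(100.0))  # (~0.00995, ~0.99005)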

While these natural disaster models have provided insurers with unprecedented information regarding the exposures and risks of their portfolios, they have also posed major challenges to the users. First, they require the users to supply detailed and accurate data about the insured portfolio, such as the street addresses of the sites and the structure and occupancy types that differentiate the sites' resistance to natural hazards. This challenge can be met only by effective and consistent collection and management of the massive data that insurance companies encounter daily. The focus of this paper is on another, yet greater, challenge. It concerns the results that the models generate: to what extent can they be trusted, and how can the validity of the models be assessed?

In this paper, we share the experiences that we have gained from our efforts to understand, validate and use natural disaster models. We start with a concise description of how the models operate in general (Section 2). Approaches to assess/validate the major components of the models are discussed in Section 3. A brief summary follows in the final section.

2. Natural disaster modeling tools

These programs usually consist of three main components: (1) natural hazard simulations, (2) calculations of the fragility of the insured sites subject to natural hazards and (3) financial analyses, including property and business interruption losses before and after deductibles, limits and/or reinsurance programs. The models can be used to perform (i) scenario calculations, which describe the damages/losses sustained by a portfolio during an event, and (ii) probabilistic analyses, which estimate the risks associated with natural disasters for a portfolio in a probabilistic sense.

Natural hazards, by definition, are forces of nature. A single historical event, such as the Northridge earthquake or Hurricane Andrew, can be used by the models to perform scenario calculations. Probabilistic analyses require the probabilistic characteristics of the natural hazards. They are estimated from the frequency and severity of historical events. If the record of historical events is not long enough to provide a reliable statistical inference, simulation techniques are usually employed to create a large number of simulated events. These probabilistic characteristics are built into functions commonly known as “hazard curves”.
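As a rough illustration of how a hazard curve can be derived from a simulated event set, the sketch below draws annual event counts from a Poisson distribution and intensities from a lognormal distribution. Both choices are assumptions made purely for illustration; the vendors' actual event-generation methods are proprietary and considerably more elaborate.

    # Build an empirical hazard curve (annual exceedance probability as a
    # function of hazard intensity) from a crude simulated event set.
    # Poisson frequency and lognormal intensity are illustrative assumptions only.
    import numpy as np

    rng = np.random.default_rng(seed=1)

    def simulate_hazard_curve(n_years=10_000, annual_rate=0.2,
                              log_mean=0.0, log_sigma=0.5):
        # Largest intensity experienced in each simulated year (0 if none).
        yearly_max = np.zeros(n_years)
        n_events = rng.poisson(annual_rate, size=n_years)
        for year, k in enumerate(n_events):
            if k > 0:
                yearly_max[year] = rng.lognormal(log_mean, log_sigma, size=k).max()

        # Empirical exceedance probability at a grid of intensity levels.
        levels = np.linspace(0.5, 5.0, 10)
        exceedance = [(yearly_max > x).mean() for x in levels]
        return levels, exceedance

    for level, prob in zip(*simulate_hazard_curve()):
        print(f"intensity {level:4.1f}: annual exceedance probability {prob:.4f}")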

The fragility (or damageability) of a site is a measure of its response to hazardous events. Examples include the percentage of structural damage to a building in the event of an earthquake or the damage to contents inside a flooded building. The relationship between the degree of damage and the hazard level is called a "damage curve", and is usually derived from historical loss data and/or expert opinion for various structural and occupancy types.
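In practice a damage curve is just a tabulated mapping from hazard level to mean damage ratio for a structure/occupancy class. A minimal sketch with invented curve points:

    # Look up a mean damage ratio from a tabulated damage curve by linear
    # interpolation. The curve points are invented for illustration; real
    # curves come from loss experience and engineering judgment.
    import numpy as np

    # Hazard level (e.g., peak ground acceleration in g) vs. mean damage ratio.
    DAMAGE_CURVES = {
        "unreinforced_masonry": ([0.1, 0.2, 0.4, 0.6], [0.02, 0.10, 0.40, 0.75]),
        "steel_frame":          ([0.1, 0.2, 0.4, 0.6], [0.00, 0.02, 0.10, 0.25]),
    }

    def mean_damage_ratio(structure_type, hazard_level):
        hazard_pts, damage_pts = DAMAGE_CURVES[structure_type]
        return float(np.interp(hazard_level, hazard_pts, damage_pts))

    print(mean_damage_ratio("unreinforced_masonry", 0.3))  # 0.25
    print(mean_damage_ratio("steel_frame", 0.3))           # 0.06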

The damages ultimately need to be expressed as monetary losses for insurers. This is accomplished by the financial component of the models. Commonly calculated quantities are gross loss after deductibles/limits and net loss after facultative and/or treaty reinsurance programs. Some natural disaster models also attempt to estimate business interruption, or time element, losses. Ideally, these numbers will provide insurers with a good approximation of the degree of risk associated with their portfolios.
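The core financial calculations amount to simple arithmetic on the ground-up loss. Below is a minimal sketch applying a site deductible, a policy limit and a per-risk excess-of-loss treaty; all terms and figures are hypothetical.

    # Apply hypothetical policy and reinsurance terms to a ground-up loss.
    def gross_loss(ground_up, deductible, limit):
        # Loss to the insurer after the site deductible and policy limit.
        return min(max(ground_up - deductible, 0.0), limit)

    def net_loss(gross, xl_attachment, xl_limit):
        # Loss retained after a per-risk excess-of-loss treaty.
        ceded = min(max(gross - xl_attachment, 0.0), xl_limit)
        return gross - ceded

    g = gross_loss(ground_up=5_000_000, deductible=250_000, limit=10_000_000)
    n = net_loss(g, xl_attachment=1_000_000, xl_limit=3_000_000)
    print(g, n)  # 4,750,000 gross; 1,750,000 net after 3,000,000 is ceded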

3. Model assessment and validation

In order to base important financial decisions on these models, insurers have a strong incentive to find out how accurate and reliable they are. Some insurers buy models that perform the same task from different vendors and compare their calculations. If the models produce similar results, the user's faith in the models is reinforced. However, these models frequently share the same underlying scientific assumptions, databases and engineering methodologies. As a result, such comparisons do not actually provide any independent validation of the models. While some insurers do take the time to calibrate these systems by comparing the model damage estimates with actual loss data for a historical event, few attempt to review the probabilistic and scientific content of these systems in light of recent research and advances in the scientific community. This is unfortunate because, based on our experience, the error that the damage calculations contribute to portfolio results is generally well under an order of magnitude, whereas errors due to the probabilistic and scientific components of the system can reach several orders of magnitude.

In this section, we describe a systematic approach to evaluate and validate natural disaster modeling software. This approach was developed at Arkwright’s Applied Underwriting Research. It consists of a series of simulation experiments and tests. Application of the approach to evaluating a wind storm model is presented in Kelly and Zeng (1996).

3.1 Evaluating underlying natural hazards

Starting from the natural hazard component, we first survey the state-of-the-art scientific research and the most recent databases related to the specific type of natural hazard addressed by the model. We then check whether they have been incorporated and reflected in the model. However, this is not an easy task, because most of the models provide users with only the financial results, which combine the effects of the three model components described in the last section [1]. As a result, it is impossible to directly compare the model's underlying natural hazard mechanisms to those established and well recognized in the scientific community. For example, how would one compare the seismic hazard map produced by the US Geological Survey to the underlying hazard simulations within an earthquake model?

Our approach to tackling this problem is to feed a generic portfolio to the model. The generic portfolio contains only uniform and simple buildings whose damageability is well understood. The insurance policy is assumed to be the simplest: no deductibles, limits or reinsurance. We then let the model simulate losses at different sites. Within such a uniform portfolio, the only factor contributing to the differentiation among the simulated losses at different sites is the geographical variability of the underlying natural hazards simulated by the model. Meanwhile, we compute losses based on a well-established understanding of the hazard (for example, the USGS seismic hazard curves can be used to calculate earthquake damage) [2]. Comparing our loss calculation to that from the natural disaster model reveals whether proper hazard information is reflected in the model. A systematic difference between the two sets of data is examined by testing the equality of means using the familiar "t-test." The root mean square difference is also computed to assess the overall accuracy of the model.

[1] In fact, the hazard simulation and damage calculation components are proprietary and are generally not available to the users (see Musulin, 1997).

[2] This can be easily accomplished because the structure type of the buildings in the uniform portfolio is chosen such that their fragility to natural hazards is relatively well understood.
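A minimal sketch of the comparison described in this section, assuming one loss figure per site from the vendor model and one from the independent hazard-based calculation (the numbers are placeholders):

    # Compare model-simulated losses against independently computed losses
    # for the same uniform-portfolio sites: a paired t-test for a
    # systematic difference, plus the root mean square difference.
    import numpy as np
    from scipy import stats

    # Placeholder data: one loss estimate per site, in $ millions.
    model_losses = np.array([1.20, 0.85, 2.10, 0.40, 1.65, 0.95])
    independent_losses = np.array([1.05, 0.90, 1.80, 0.55, 1.50, 1.10])

    t_stat, p_value = stats.ttest_rel(model_losses, independent_losses)
    rms_diff = np.sqrt(np.mean((model_losses - independent_losses) ** 2))

    print(f"paired t = {t_stat:.2f}, p = {p_value:.3f}, RMS difference = {rms_diff:.3f}")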

3.2 Assessing the damage calculation

We use two methods to assess a model's damage calculations for a given natural hazard. The first is to feed the model the same simple and uniform portfolio described in Section 3.1, so that the resulting gross loss is equal to the damage. We then compute the damage based on an independently developed approach. For example, Arkwright retained Risk Engineering Inc. to develop earthquake damage curves for our sites in the Los Angeles, California area. These curves can be compared to the underlying damage curves used by the natural catastrophe models. If they agree, our confidence in the model increases. If they do not, we investigate the situations in which they differ and ask our engineers to look for the reason and to assess the accuracy and reliability of the results.
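Once both sets of curves are in hand, the comparison itself is mechanical: evaluate them on a common grid of hazard levels and locate the largest disagreement. A sketch with invented curve points:

    # Compare an independently developed damage curve with the curve implied
    # by the model on a common grid of hazard levels. The curve points are
    # placeholders, not the Risk Engineering Inc. curves.
    import numpy as np

    hazard_grid = np.linspace(0.1, 0.6, 11)  # e.g., peak ground acceleration in g
    independent_curve = np.interp(hazard_grid, [0.1, 0.3, 0.6], [0.02, 0.18, 0.70])
    model_curve = np.interp(hazard_grid, [0.1, 0.3, 0.6], [0.03, 0.25, 0.65])

    relative_gap = (model_curve - independent_curve) / independent_curve
    worst = np.argmax(np.abs(relative_gap))
    print(f"largest disagreement at {hazard_grid[worst]:.2f} g: "
          f"{relative_gap[worst]:+.0%} (model relative to independent)")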

If the necessary data are available, a more direct and effective approach to assessing damage calculations is to compare the actual losses from a historical event with the model's scenario calculation. For example, to assess the damage calculation of an earthquake model, we would feed the model our exposure in the Los Angeles area at the time of the Northridge Earthquake and let the model run this scenario to calculate gross property loss before deductibles/limits and reinsurance. The comparison of the model results with our loss experience is a direct reflection of the accuracy of the model's damage calculations. Note that most insurance companies' loss data do not contain small losses. As a result, the comparison only reveals the model's accuracy within a certain range of damage.
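Because the claims record omits small losses, it is sensible to restrict the comparison to sites whose recorded loss exceeds a reporting threshold. A minimal sketch, with a hypothetical threshold and placeholder data:

    # Compare modeled scenario losses to recorded claims, restricted to
    # sites whose recorded loss exceeds a reporting threshold.
    import numpy as np

    REPORTING_THRESHOLD = 25_000  # hypothetical smallest claim in the records

    recorded = np.array([0, 40_000, 310_000, 0, 95_000, 12_000])
    modeled = np.array([8_000, 55_000, 260_000, 3_000, 80_000, 30_000])

    mask = recorded >= REPORTING_THRESHOLD
    residuals = modeled[mask] - recorded[mask]
    print("sites compared:", int(mask.sum()))
    print("mean residual :", residuals.mean())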


3.3 Evaluating the financial analyses

Most financial calculations simply involve net losses after deductibles/limits/reinsurance based on gross damage. They are usually straightforward and easy to verify. An issue that we watch carefully, on the other hand, is the estimate of business interruption (BI), or time element (TE), losses by some natural disaster models. Ideally, to accurately assess the BI exposure of a company, a system-wide approach must be taken. Complex interdependencies within a company create a "ripple effect" when a loss occurs to one part of the system. A company may shift its product mix to optimize profits in the event of a loss, or may buy materials on the market in order to honor sales agreements. Seasonal fluctuations, make-up capacity and contingency plans are a few other variables which can affect a BI loss. The probabilities of frequency and severity, as well as the length of downtimes and recovery periods, must also be taken into account. However, this level of analysis is not available in any model, because of the difficulty for the user of supplying the huge amount of information needed and the lack of an effective BI model. To our knowledge, BI losses in the currently available natural disaster models are estimated based on content value and type. A fundamental inadequacy of this approach is that BI losses are related more to the revenue-generating operations than to the value and type of the contents themselves.

Arkwright has developed a technique which uses Systems Dynamics and Monte Carlo simulation to determine BI risk. While we believe this is the best approach available, implementing such a model is not a feasible option in the near future, since it requires a fundamental change in thinking on the part of many of our customers in order to supply us with the data it requires. We are currently developing a simpler BI model which will incorporate some of the important variables and relationships we discovered while creating the full model. Evaluating the natural disaster models' BI estimates will be one of the first tasks upon its completion. Looking to the future, we plan to incorporate our BI estimate as a component of the natural disaster models used at Arkwright.
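The following sketch is not the Systems Dynamics model described above; it only illustrates, with invented distributions and figures, how a Monte Carlo simulation of a time-element loss might sample a downtime and convert it to a monetary loss.

    # Generic Monte Carlo sketch of a business interruption loss: sample a
    # downtime for the damaged facility and convert it to a monetary loss,
    # with a make-up capacity offset. Distributions and figures are
    # invented; this is not the Arkwright Systems Dynamics model.
    import numpy as np

    rng = np.random.default_rng(seed=7)

    def simulate_bi_loss(n_trials=100_000, median_downtime_days=60,
                         downtime_sigma=0.6, daily_margin=150_000,
                         makeup_fraction=0.3):
        downtime = rng.lognormal(np.log(median_downtime_days), downtime_sigma, n_trials)
        return downtime * daily_margin * (1.0 - makeup_fraction)

    losses = simulate_bi_loss()
    print("expected BI loss       :", round(losses.mean()))
    print("95th percentile BI loss:", round(np.quantile(losses, 0.95)))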

3.4 Model uncertainty and bias

Most of the evaluation/validation processes discussed above require a considerable amount of data comparison, which illustrates the uncertainties and biases in the models. The difference between the model's loss estimate and the actual loss is called the "residual." Because of the uncertainties inherent in natural disasters, non-zero residuals indicating discrepancies between the model and actual data are normal and acceptable, as long as their standard deviation is within a reasonable range.

The average of the residuals, on the other hand, measures the overall bias in the model. A good model would produce an average residual close to zero. The most conservative statistical approach to testing whether the average residual is zero is to perform a nonparametric hypothesis test that the mean of the residuals is zero. A nonparametric approach is preferable because the assumptions required for more traditional approaches (such as the "t-test") may be violated. Specifically, the assumption that the data come from the same probability distribution is violated. With the uniform portfolio (Section 3.1), this assumption was met by design. However, in the cases of Sections 3.2 and 3.3, where we compare model results to actual data, this assumption no longer holds: it is hard to argue that damage variability would be the same across structures of different sizes. Nonparametric tests are therefore preferred.
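The paper does not name a specific nonparametric test; the Wilcoxon signed-rank test is one common choice and is used in the sketch below, alongside a one-sample t-test for comparison. The residuals shown are placeholders.

    # Test whether residuals are centered on zero without relying on the
    # t-test's assumptions. The Wilcoxon signed-rank test is one common
    # nonparametric choice (an assumption on our part); the one-sample
    # t-test is shown only as a parametric reference point.
    import numpy as np
    from scipy import stats

    residuals = np.array([0.15, -0.40, 0.05, 0.60, -0.10, 0.25, -0.05, 0.30])

    w_stat, p_wilcoxon = stats.wilcoxon(residuals)       # H0: symmetric about zero
    t_stat, p_ttest = stats.ttest_1samp(residuals, 0.0)  # parametric comparison

    print(f"Wilcoxon p = {p_wilcoxon:.3f}, one-sample t-test p = {p_ttest:.3f}")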

Understanding the bias in the models is extremely important. A model with large uncertainties but zero bias may not produce accurate calculations for individual sites, but it can produce a good overall estimate for a large portfolio. However, if a model is biased, its calculations will be in error no matter how large the portfolio is.
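The contrast between uncertainty and bias can be made concrete with a small simulation: unbiased site-level noise largely cancels across a large portfolio, while a systematic bias carries straight through to the total. All figures below are invented.

    # Illustrate why zero-bias, high-variance errors largely cancel over a
    # large portfolio while a systematic bias does not.
    import numpy as np

    rng = np.random.default_rng(seed=3)
    n_sites = 5_000
    true_losses = rng.gamma(shape=2.0, scale=50_000, size=n_sites)  # placeholder "truth"

    # Model A: unbiased but very noisy at the site level.
    model_a = true_losses * rng.lognormal(-0.5**2 / 2, 0.5, n_sites)
    # Model B: modest noise but a systematic 30% understatement.
    model_b = 0.7 * true_losses * rng.lognormal(-0.1**2 / 2, 0.1, n_sites)

    for name, estimate in [("A (noisy, unbiased)", model_a), ("B (biased)", model_b)]:
        error = estimate.sum() / true_losses.sum() - 1.0
        print(f"Model {name}: portfolio total error = {error:+.1%}")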

4. Summary

Currently available natural disaster models provide valuable information for insurers with regard to the exposures and risks associated with natural hazards. However, a careful examination of the models themselves, as well as of the data they generate, is an integral part of using them properly. A systematic approach to evaluating and validating the models was developed at Arkwright. Applying this approach to a wind storm model (Kelly and Zeng, 1996) has greatly enhanced our understanding and application of the model. It is our plan to apply this approach to all purchased or internally developed natural disaster models.

References:


Bertz, G., The worldwide increase of natural disaster losses: engineering and insurance aspects, Proceedings of IABSE Congress, 1996.

Kelly, P. J. and L. Zeng, The engineering, statistical, and scientific validity of EQECAT USWIND modeling software, presented November 7, 1996 at the ACI Conference for Catastrophe Reinsurance, New York, 1996.

Musulin, R.T., Issues in the regulatory acceptance of computer modeling for property insurance ratemaking, Journal of Insurance Regulation, Vol. 15, No. 3, 1997.

Kozlowski, R.T. and S.B. Mathewson, A primer on catastrophe modeling, Journal of Insurance Regulation, Vol. 15, No. 3, 1997.
