“Relative Importance of Risk Sources in Insurance Systems”, Edward W. Frees, April 1998





general space. Despite its dimension, it is interpreted to be a single source of uncertainty.

Proposition A shows that if we wish to assess the effect of X on Y, then T(X) = E(Y | X) is the appropriate function, at least in terms of the WLR ordering. All decision-makers whose preferences can be represented by u(·) ∈ U will prefer T(X) to Y. Moreover, T(·) is essentially unique, at least if it has a unique inverse.
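As a numerical illustration of Proposition A (a sketch of ours, not from the paper; the uniform/normal model below is a hypothetical choice), a short simulation shows that T(X) = E(Y | X) preserves the mean of Y while carrying only the uncertainty attributable to X, consistent with the law of total variance:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Hypothetical model: Y = X + e with E[e | X] = 0, so T(X) = E[Y | X] = X.
x = rng.uniform(0.0, 1.0, n)   # source of uncertainty X
e = rng.normal(0.0, 0.5, n)    # residual uncertainty, independent of X
y = x + e                      # total outcome Y

t = x                          # T(X) = E[Y | X] in this model

print(y.mean(), t.mean())      # same expected value
print(y.var(), t.var())        # Var T(X) <= Var Y
# Law of total variance: Var Y = Var E[Y|X] + E Var(Y|X) = Var X + 0.25
print(t.var() + 0.5 ** 2)      # matches Var Y up to simulation noise
```

Any risk-averse decision-maker in the WLR sense would thus prefer the smoother T(X) to Y itself, since only the X-driven part of the variance remains.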

DISCUSSIONS

EMILIA DI LORENZO*

Dr. Frees presents a very interesting paper concerning a methodology for decomposing the uncertainty in insurance systems into several sources. This kind of problem is important for actuaries who are interested in identifying the various factors that affect the realization of the risk (typically economic and demographic factors) to control the total riskiness of insurance systems.

The methodology developed in the paper allows us to estimate the contribution of the various risk factors; in this framework, in order to quantify the relative importance of a source of uncertainty, the author uses a statistical measure, namely the coefficient of determination.

I think this result can surely be of practical interest, since it furnishes a concrete methodology for identifying the different components that contribute to the risk; by evaluating their relative importance, it becomes possible to apply suitable techniques for controlling them.

In addition to the interpretations given in the paper, I think it is also important for actuaries to consider the level of risk determined by the various components and to estimate the probability that such levels exceed allowable values. In particular, as a further application of the results presented in the paper, taking into account the single risk components and their relative importance, it would be interesting to study ways of reducing this probability to a given level.

*Emilia Di Lorenzo is Associate Professor of Financial Mathematics at the University of Naples “Federico II,” Dipartimento di Matematica e Statistica, Complesso Monte S. Angelo, via Cintia, 80126 Napoli, Italy; e-mail, [email protected].

GRISELDA DEELSTRA*

Dr. Frees has written an interesting paper on the quantitative measurement of the relative importance of different sources of risk in insurance systems. As he stresses, this is very important for establishing premiums and reserves (necessary for the company as well as for regulatory bodies) and for determining management techniques to reduce the risk.

Dr. Frees studies the use of the coefficient of determination in analyzing basic risk management techniques. I congratulate Dr. Frees on his clear examples of implementation of such basic risk management techniques as risk exchange, financial risk management, and the pooling of risk. His arguments clearly confirm the common actuarial knowledge that investment risk and a common disaster cannot be pooled away. Moreover, by using the coefficient of determination, Dr. Frees provides quantifications of the different risks, which are very useful. An analysis of the effect of several risk factors, as in Section 5.2, should be recommended in practice for a better understanding of the relative importance of the different risk factors.
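The pooling argument can be checked with a small simulation (a sketch of ours, not from the paper; the normal claim model and the values of sigma_idio and sigma_common are illustrative assumptions). As the pool grows, the variance of the average claim falls only to the variance of the common-disaster component, which cannot be diversified away:

```python
import numpy as np

rng = np.random.default_rng(1)
n_sims, pool_sizes = 20_000, [1, 10, 100, 1000]

# Hypothetical claim model: claim_i = idiosyncratic_i + common disaster shock.
sigma_idio, sigma_common = 1.0, 0.5   # illustrative values only

for n in pool_sizes:
    common = rng.normal(0.0, sigma_common, n_sims)    # shared by the whole pool
    idio = rng.normal(0.0, sigma_idio, (n_sims, n))   # diversifiable component
    avg_claim = idio.mean(axis=1) + common            # average claim per policy
    print(n, avg_claim.var())

# The variance of the average tends to sigma_common**2 = 0.25, not to 0:
# the common-disaster component cannot be pooled away.
```

The idiosyncratic term behaves like sigma_idio**2 / n and vanishes, while the common term is a floor on the achievable risk reduction, exactly the point the coefficient of determination quantifies.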

As the author suggests, it would be interesting for future research to study the usefulness of other measures for summarizing uncertainty. I therefore think it would be worthwhile to follow the progress of statistical research, as well as to examine in detail the tools already used in factor analysis, regression analysis, and analysis of variance. For pricing premiums in nonlife sectors, the most important risk sources are sometimes determined by these techniques, so nonlife insurance companies sometimes already implicitly use the coefficient of determination, which is, as the author confirms, a well-established statistical measure. There might well be other measures waiting to be studied in detail.

Another approach is mentioned by Dr. Frees as he develops the motivation for the coefficient of determination in an interesting way by using arguments of utility theory. He suggests the use of the ratio (4.2) in cases in which the variance is not a suitable measure for capturing uncertainty. Indeed, Levy (1992) summarizes that in a utility context, the variance can be used as a measure of risk coherent with second-degree stochastic dominance only in the case of a quadratic utility function or if distributions are lognormal or normal. The normality of variables is also an underlying hypothesis of most statistical methods, which might influence the analysis if it is not fulfilled. Therefore, a further study of the implementation of the ratio (4.2) seems an interesting subject for further research.

*Griselda Deelstra, Ph.D., is Assistant Professor of Finance and Actuarial Science at the Ecole Nationale de la Statistique et de l’Administration Economique (and CREST), 3, Avenue Pierre-Larousse, 92245 Malakoff Cedex, France; e-mail, [email protected].

North American Actuarial Journal, Volume 2, Number 2

REFERENCE

LEVY, H. 1992. “Stochastic Dominance and Expected Utility: Survey and Analysis,” Management Science 38: 555–93.

LEDA MINKOVA* and NIKOLAI KOLEV†

Dr. Frees has written a very important paper about a number of models that quantify the relative importance of sources of uncertainty by using the coefficient of determination. The paper provides a powerful demonstration of the effectiveness of that statistical measure in the pursuit of a more realistic description of basic risk management techniques.

We find it unnecessary to summarize here the various techniques that Dr. Frees has presented in an excellent exposition, with a clarity that will satisfy the readers of this article.

It suffices to say that the author suggests a methodology for measuring the risk in the situation in which several factors affect the risk simultaneously. According to the basic relation (2.1), the riskiness as measured by the variance can be divided into two parts. Parker (1997) applies this relation to the present value of the benefits of an insurance portfolio, and the two parts are interpreted as insurance and investment risks, respectively. Dr. Frees develops a decomposition of uncertainty into several sources. The analysis of risk sources is motivated and interpreted through the coefficient of determination used in regression analysis. This opens the possibility of applying the present results not only to insurance but also to finance.
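For readers without the paper at hand, relation (2.1) is presumably the usual law-of-total-variance decomposition (the notation below is ours, matching the E(Y | X) of Proposition A):

```latex
\operatorname{Var}(Y) \;=\; \operatorname{Var}\bigl(\mathrm{E}[\,Y \mid X\,]\bigr) \;+\; \mathrm{E}\bigl[\operatorname{Var}(Y \mid X)\bigr]
```

In Parker’s reading, with Y the present value of the portfolio benefits and X the investment factor, the first term is the investment risk (variation induced by the investment scenario) and the second the insurance risk (residual variation given that scenario).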

*Dr. Leda Minkova is Associate Professor, Department of Stochastics and Optimization, Institute of Applied Mathematics and Informatics, Technical University of Sofia, 1000 Sofia, P.O. Box 384, Bulgaria; e-mail, [email protected].

†Dr. Nikolai Kolev is Assistant Professor, Department of Statistics, Sao Paulo University, C.P. 66281, 05315-970 Sao Paulo, Brazil (on leave from the Institute of Applied Mathematics and Informatics, Bulgaria); e-mail, [email protected].

Let us underline that the proposed methodology, based on relation (2.1), is in fact an analogue of the so-called phenomenon of overdispersion, used for modeling heterogeneous data; see, for instance, Winkelmann (1997). In general, we say that data are overdispersed when the variance-to-mean ratio is greater than that expected under the assumed model. Such data are usually observed in insurance and finance and are clustered, dependent, or sparse, with outliers present. The representation (2.1) shows that unobserved heterogeneity causes overdispersion; therefore, the uncertainty discussed by Dr. Frees can also be interpreted as unobserved heterogeneity. The need to deal with overdispersion was explained by McCullagh and Nelder (1989, p. 124): “. . . overdispersion is the norm in practice and nominal dispersion the exception.”
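The mechanism can be seen in a small mixed-Poisson simulation (our sketch; the gamma parameters are purely illustrative): each risk carries its own unobserved gamma-distributed rate, and by relation (2.1) the heterogeneity term is exactly the excess of the variance over the mean.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200_000

# Illustrative mixed-Poisson model: each risk has its own unobserved
# gamma-distributed claim rate (unobserved heterogeneity).
shape, scale = 2.0, 1.5                  # hypothetical gamma parameters
lam = rng.gamma(shape, scale, n)         # latent individual rates
counts = rng.poisson(lam)                # observed claim counts

ratio = counts.var() / counts.mean()     # variance-to-mean ratio
print(ratio)                             # > 1: overdispersion

# Relation (2.1): Var N = E[Var(N | lam)] + Var(E[N | lam])
#               = E[lam] + Var(lam),
# so the heterogeneity term Var(lam) is the excess over the Poisson mean.
```

Here E[lam] = 3 and Var(lam) = 4.5, so the variance-to-mean ratio settles near 2.5 rather than the Poisson value of 1.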

Cox (1983) pointed out that overdispersion in general has two effects. One is that the summary statistics have a larger variance than anticipated under a simple model; in this case the standard errors of the parameter estimates are underestimated, sometimes considerably. The second effect is a possible loss of efficiency in using statistics appropriate for a single-parameter family. For instance, if the considered model is the Poisson regression model, the consequences of overdispersion are similar to those of heteroskedasticity in a normal regression model.

Note that the phenomenon of overdispersion is closely related to Cox processes, in terms of stochastic processes [see Grandell (1997)]; to increasing failure rate, in terms of reliability [see Barlow and Proschan (1965)]; to heteroskedasticity models in time series [see Firth (1991)]; and so on. These links were presented by Kolev and Minkova (1996) at the workshop “Mathematical Theory of Ruin Probabilities” (February 5–9, 1996, Bankya, Bulgaria).

For the analysis of heterogeneous data in different fields, new methods have recently been proposed for explaining the overdispersed nature of the data. Important references, including some new techniques and tests for detecting overdispersion in different contexts, are Aitkin (1994), Gurmu and Trivedi (1992), and Ugarte and Kolev (1996).

In a regression context, apparent overdispersion can be due to the failure of model assumptions: the predictor may be nonlinear (in the sense that the explanatory variables and the parameters interact in a nonlinear way), the link function may be misspecified, or there may be measurement errors. The omission of some important explanatory variables introduces unobserved heterogeneity (that is, it arises