Model Risk and the Great Recession

Tim K. Keyes (2011), "Model Risk and the Great Recession," CHANCE, 24:4, 6–16. Published online 02 Aug 2013 by Taylor & Francis. DOI: http://dx.doi.org/10.1080/09332480.2011.10739881





Recent events in the financial services industry showcase key concepts about risk models, their assumptions, usefulness, and limitations. Statistical models for risk management are becoming increasingly commonplace as lending tools and achieving a level of influence requiring a commensurate level of oversight and suspicion. In fact, these models can contribute significantly to operational risk in financial services: Never before has reliance on models—for credit approval, risk ratings, capitalization, etc.—been so fundamental to the operating health of an industry.

Almost every reader has been, or knows someone who has been, affected by the Great Recession. Whether it is the denial of credit for a credit card or auto or student loan, the erosion of an investment or retirement account, or the loss of employment, the global recession has touched our lives and livelihoods without discrimination. The press and blogs seem to suggest that statistics, statistical modeling, and statisticians (or their ilk) may have played a role in 'greasing the skids' of the downturn, as observed by Michael Lewis in his 2008 article, "The End of Wall Street's Boom," and Felix Salmon in his 2009 article, "Recipe for Disaster: The Formula That Killed Wall Street."




It is against this backdrop that a better understanding of the situation is attempted here. What was the role of statistics in these events? Where did models go wrong? What could have been done to prevent this situation and all the malaise it has generated? Could it happen again? All are important questions that should be included in the modern dialog about the influence of statistics and consequences of improper use or misunderstanding of models. As the world continues to get smaller relative to the global free flow of information, capital, people, disease, etc., these types of events may manifest themselves more readily.

Lending Backdrop

The lending market is colossal. According to the Federal Reserve (Federal Reserve Economic Data, FRED II, research.stlouisfed.org/fred2, series TOTBCKR), there was $9.2 trillion in U.S. commercial bank credit outstanding in June of 2011. This number by itself is staggering and conveys the vastness of the financial services sector (note the U.S. national debt is more than $14 trillion as of this writing). Clearly, any use of statistical modeling in this sector could have a proportionately huge impact, positive and negative. The nature of statistical modeling discussed here is meant to be more generic and illustrative, but nevertheless representative for the purposes of the discussion.

Some other key observations about financial services should be made. Unlike manufacturing or engineering areas, the financial services market tends to be more behavioral in nature. At times, what we think about the market and exhibit in our investment or purchase decisions can affect the market, making it more nonlinear and elusive, as well as short-cycle (measured in months or quarters, not in years). It is also free and evasive, with capital in the global markets more or less capable of coming and going according to where its owners find the best reward.

An illustration of this over the last decade or so is the series of market "bubbles" (dramatic market expansions followed by contractions), starting with the Asian financial crisis in the late 1990s. Much speculative commercial development in Asia—built on a foundation of unsustainable and opaque lending practices—came to an abrupt halt, leaving debtors unwilling or unable to repay their lenders, ultimately collapsing scores of financial institutions. Capital, seeking safer ground, left Asia behind in a currency crisis, and with devaluation of the Russian ruble and subsequent bond default, soon thereafter helped usher in a global crisis driven by institutions with excessive leverage, including hedge funds. Long-Term Capital Management is a notable casualty in this timeframe, as described in the 1999 report from the President's Working Group on Financial Markets.

The next bubble was in technology/e-commerce, which had its own downfall, followed by—as a result of low interest rates and relaxed underwriting standards—the real estate bubble, which we are all familiar with by now. We've already seen commodities (e.g., oil) rise and fall, helping transition us into the most recent recession.

The thrust is that financial services is a highly dynamic area that does not lend itself well to highly structured and precise modeling over any length of time. Indeed, the usefulness of statistical models is ephemeral at best, but the hunger for perceived advantage by virtue of statistical models is insatiable given the size of the market and its accompanying competition. Perhaps among the missing ingredients is the 'closed loop' between the theory of modeling and the practice of modeling ("mathematistry vs. cookbookery," or theory vs. practice, science vs. technology, etc., in the parlance of George Box in his 1976 Journal of the American Statistical Association article, "Science and Statistics"), whereby modelers become their own worst critics once models are deployed, actively searching for data to refute their current model 'hypothesis' in exchange for what the market will have taught them, garnered through new data generated as the market changes. Recognition of and open dialog about this aspect of financial services as it relates to modeling effectiveness and risk have been underemphasized between modelers and model users.

Risk-Based Capital

Just like we, as consumers, do when we purchase a car or home, financial institutions may choose to use debt to do business. This means they will take capital from their own pocket (equity), but also use debt (e.g., bond issuance) to finance the transactions they're interested in. The relative mix of equity and debt is determined from the level of riskiness in the transaction, assuming it's measured accurately.

Customers of the lender ("obligors") have varied risks individually and collectively and are charged interest according to their riskiness. Investors, who purchase bonds or debt from the lender, want a reliable investment, with returns greater than U.S. Treasury notes whenever the level of risk is greater than that of treasuries. By way of example, let's assume the lender has assessed the level of risk in the underlying obligors and has determined that the mix of debt (bonds) to equity is 9:1. This ratio is called "leverage." Figure 1 provides an illustration.

Why use leverage? In simple terms, the reason is related to an implied amplification of return-on-equity (ROE), typically used by lenders as a measure of performance, along with return-on-investment (ROI):

ROE = Net Income / Equity

ROI = Net Income / (Debt + Equity)

ROE = ROI × (Debt + Equity) / Equity = ROI × (Leverage + 1)

In other words, the more leverage the lender uses, the better the ROE metric gets—assuming leverage is correct, for there is a cost of leverage: Correct or not, leverage results in interest payments for the debt it implies. If the leverage is incorrect, either more capital is employed than is needed, resulting in an opportunity cost since the capital 'overage' cannot be put to a more profitable use, or less capital is employed than is needed, implying that circumstances could arise in which losses mount and erode the lender's credit profile and dry up sources of investment funds.

Back to our example and Figure 1: the lender issues nine $1 bonds with a 5% per year interest rate payable to investors. Five $2 loans are provided to customers with terms including a 10% per year interest rate. In the first year, all customers perform according to plan. The lender can therefore make bond interest payments to investors and thus achieve a 20% ROE. In year two, a customer defaults, resulting in a complete write-off of the customer's $2 loan by the lender, and only 80¢ is obtained from customer interest payments, but with the same expense obligations owing to leverage, resulting in a $1.9 loss after tax. Note that only $1 of capital is in the lender's pocket to cover this loss. Clearly, there was inadequate risk-based capital to cover this situation. The lender may find itself in default of its bond obligations should this shortfall continue and thereby lose its credit rating and capital-generating wherewithal.
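The ROE identity can be illustrated with a short sketch. The 2% ROI figure here is a hypothetical after-tax value, chosen only because it reproduces the 20% first-year ROE of the example; it is not stated in the article.

```python
def roe_from_roi(roi: float, leverage: float) -> float:
    """ROE = ROI x (Leverage + 1), where leverage = debt / equity."""
    return roi * (leverage + 1)

# The 9:1 debt-to-equity mix from Figure 1, alongside lower-leverage mixes,
# shows how leverage amplifies a fixed 2% return on total invested funds.
for lev in (0, 1, 9):
    print(f"leverage {lev}:1 -> ROE = {roe_from_roi(0.02, lev):.0%}")
```

With no leverage, ROE equals ROI; at 9:1, the same asset return is amplified tenfold.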

Deriving Capital

While there is a potential benefit to the use of leverage in terms of amplified ROE, the above example shows how important it can be to use an accurate assessment of leverage, commensurate with the level of risk inherent in the assets (e.g., loans) being financed. Generally speaking, the more risky (in terms of higher mean or variance of losses) the asset class and specific assets being financed, the less the leverage should be, or the more the lender needs to pony up its own capital in reserve. The actual amount of capital reserves also is a function of what credit-worthiness rating the lender desires to achieve, which, in turn, affects how much interest the lender will have to pay to 'lever up.' If the lender uses bond issuance to finance its loans, a lower rate of interest will be possible with better ratings, provided the market and investors believe these ratings.

Statistical concepts enter the picture here. Lenders need to establish reserves for both expected (i.e., mean) levels of loss for a given year and unexpected levels of loss, as typically measured through a business cycle (Through-The-Cycle). A useful analogy is the planning for hurricane activity in, say, Florida. In any given year, no one should be surprised at Florida experiencing a single hurricane during the Atlantic season. However, few expected four hurricanes to hit Florida as they did during the 2004 season, apparently the only time in recorded history that this happened (http://en.wikipedia.org/wiki/2004_Atlantic_hurricane_season). Could this happen again? It happened before, and setting aside reserves for the occasion would be wise.

Figure 1. Illustration of leverage

We all can read about the less deadly, but perhaps more costly, damage caused by the economic bubble and the resulting wipeout of an enormous amount of wealth and, perhaps more importantly, confidence in the global financial system. Setting aside reserves or economic/regulatory capital to buffer against these human-caused conditions also would be wise. How might this be done?

First and foremost, these assessments should be made on good data, gathered over a reasonably long period—typically more than one business or economic cycle—and representative of the conditions the lender is likely to face in the future. The lender should attempt to measure its annual lending losses at the transaction level, or otherwise have benchmark information useful for this purpose. This is a key point that will resurface later; this aspect of capital evaluation may have been short-circuited in some critical ways. Figure 2 displays a typical histogram corresponding to lending losses over time.

Expected losses are associated with the mean estimate, while unexpected losses are associated with the standard deviation. The term "value-at-risk" (VaR) is a measure of tail risk and incorporates correlation between obligors or loans—an element that is difficult to assess accurately. VaR attempts to measure, for example, the level of risk associated with a 1:10,000 probability or an AAA rating for the lender. That is, to target such a rating, the lender would have to pony up enough capital to buffer against all loss probability out to the 99.99th quantile of the appropriate loss distribution. This can be an enormous amount of capital, and all attempts to manage it efficiently can be of paramount importance to stakeholders.
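The quantile arithmetic behind expected loss, VaR, and the unexpected-loss buffer can be sketched as follows; the lognormal shape and its parameters are illustrative assumptions (chosen for the right-skewed shape of a lending loss histogram), not values from the article.

```python
import random

random.seed(0)
# Hypothetical annual portfolio losses; a lognormal gives the right-skewed,
# heavy-tailed shape typical of lending loss histograms such as Figure 2.
losses = sorted(random.lognormvariate(0, 1) for _ in range(100_000))

expected_loss = sum(losses) / len(losses)      # mean: expected-loss reserve
var_999 = losses[int(0.999 * len(losses))]     # 99.9th-percentile value-at-risk
unexpected_capital = var_999 - expected_loss   # buffer held above the mean

print(f"expected loss {expected_loss:.2f}, 99.9% VaR {var_999:.2f}")
```

A AAA-style 99.99th-percentile target works the same way, but a stable estimate that far into the tail requires far more simulated scenarios.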

The Basel Initiative (www.BIS.org) offers a relatively new but evolving set of standards for establishing minimum capital requirements for banking organizations. It was established by the Basel Committee on Banking Supervision, a group of central banks and bank supervisory authorities in the G10 countries that developed the first standard in 1988. The following discussion is in part based on Basel considerations.

Clearly, not all products a lender offers, or all loans within a product type, are the same in terms of loss potential. Lenders typically attempt to decompose the historic loss data, using regression analyses, into key factors that aid in the forecast of obligor- and loan-specific losses for the lender's "portfolio" (i.e., complete set) of loans. Table 1 displays these factors and how they're used in the decomposition of the loss data and the build-up of forecasting models that can be used for default and loss prediction.

For a loss to materialize for a lender, at least one obligor has to default on at least one of its credit obligations. Analysis of losses for the purpose of predicting future portfolio performance typically begins with forecasting defaults via probability of default (PD) models. The output of these models typically requires calibration to be put on a probability scale. For example, the well-known FICO score (http://en.wikipedia.org/wiki/Fair_Isaac) is often used to forecast consumer likelihood of default. The model, however, has a score output with a numeric score range of 180–850, which does not directly relate to probabilities of default. It also should be noted that PD models tend to perform poorly in eras of high securitization, as reported by Rajan et al. in their 2009 work, "The Failure of Models That Predict Failure: Distance, Incentives, and Defaults." Securitization refers to when loans issued by banks are sold off by those originating banks in structured bundles ("securitizations," or collateralized debt obligations, "CDOs"), as was the case leading up to the recent financial meltdown. The assumptions behind the models (e.g., that proper underwriting and background checks were performed under reasonable supervision or control) can be violated, resulting in poor performance of the models. A key takeaway from this is a point made in The Role of Statistics in Business and Industry, by Gerald Hahn and Necip Doganaksoy: The choice of the statistical technique is much less important than data and process considerations.
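A score-to-probability calibration of the kind described might be sketched with a logistic curve. The coefficients below are purely hypothetical stand-ins for values that would be fit to historical default data; they are not FICO's.

```python
import math

def score_to_pd(score: float, a: float = 8.0, b: float = -0.015) -> float:
    """Logistic calibration: higher scores map to lower default probabilities.
    a (intercept) and b (slope) are hypothetical; in practice they would be
    estimated by regressing observed defaults on score (logistic regression)."""
    return 1.0 / (1.0 + math.exp(-(a + b * score)))

for s in (500, 650, 800):
    print(f"score {s}: PD = {score_to_pd(s):.1%}")
```

The calibration, not the score itself, is what carries the model onto a probability scale, and it is exactly this mapping that drifts when the through-the-door population changes.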

There are two points here. First, modelers may tend to fall in love with their models once developed and may fail to sufficiently monitor them and their underlying assumptions. To paraphrase Nobel laureate economist Paul Krugman describing the economic meltdown, "… (modelers) went astray because (modelers), as a group, mistook beauty, clad in impressive-looking mathematics, for truth." Models are, at best, ephemeral hypotheses about truth; therefore, sound scientific methods should continually be used to test whether hypotheses (models) still reflect the truth, as previously ascribed to Box.

Figure 2. Lending loss distribution

More recently, Douglas Hubbard, in his 2009 book, The Failure of Risk Management: Why It's Broken and How to Fix It, refers to the automatic assumption that sophistication implies correctness as 'crackpot rigor' and suggests that risk managers should always monitor such tendencies. He offers interesting evidence that those financial institutions that stressed their modeling assumptions fared better than those that did not.

The second point is a key element of model risk in risk modeling. Assumptions are often subject to violation owing to changes in underwriting policies and practices producing changes in the 'through-the-door' population, relative to the population used to build the risk model. In addition, the macroeconomic conditions that incubated the historical data might not hold up in future periods. The default correlation, for example, between obligors is frequently assumed to be small or low, a presumption that should be stress-tested if the macroeconomic conditions suggest a regional or even global recession.

Transparency and monitoring of the validity of assumptions should be reinforced across the profession and industry. In the case of the recent downturn, pervasive use of false or inadequate assumptions and/or practices may have contributed to the outcome. As Darrell Duffie et al. suggest in their 2008 Journal of Finance paper, "Frailty Correlated Default," mortgage portfolio default modelers failed to assess, for example, the influence of improper 'self-documentation' of credit-worthiness by the borrowers themselves or their mortgage brokers (i.e., a disconnect between actual and reported credit quality of the mortgage applicant). Similarly, there is some evidence (and debate) that models, including those used by rating agencies, may not have incorporated a sufficient stress scenario allowing for rising defaults in the event of widespread home price decline, a point that Lewis picks up on in his 2010 book, The Big Short: Inside the Doomsday Machine.

Nevertheless, home prices did decline and, moreover, new features were introduced such as negative-amortizing mortgages (those for which the outstanding loan balance increases when no principal or interest payments are made). Together, these injected structural disconnects between model assumptions and actual use. Banks and investors that stayed clear of these subsurface currents—either because they could not see them or because they saw them and prepared—fared much better through the tempest.

A noteworthy example of a financial institution that failed due to incorrectly evaluating risks (and over-leveraging itself) is Merrill Lynch, which, after tens of billions of dollars of write-offs arising from investments in subprime mortgages (to obligors with low FICO scores, many with new features as described above), was sold to Bank of America. Goldman Sachs was also a highly leveraged financial institution, but hedged its losses with default insurance contracts bought from AIG. While these examples are significant, it should also be noted that the situation was in all likelihood pandemic in that any originator of or investor in these questionable assets was exposed to potential loss, so long as the evaluation of risk depended on models and ratings with inappropriate assumptions on defaults.

The next modeling step is to estimate loss severity for each loan in the event of an obligor default. This loss given default, or LGD, forecast reflects the potential net present value of loan economic losses. This assessment is often more difficult to make than a PD forecast, owing to the 'bathtub' shape of many LGD historical distributions: In many cases, no or few losses are experienced by the lender in the event of default, particularly on loans that are fully supported by collateral pledged by the obligor to the lender and which can be liquidated to extinguish the loan. On the other hand, the collateral loses value in the event of default in many cases, such as in the case of inflated home prices and residential mortgages, and the recovery amount could be considerably less than 100%. Should fraudulent activity be involved, one could expect a full loss, or 100% LGD.

Table 1—Building Blocks of Credit Risk Capital

Default: Bankruptcy, missed payment, or distressed restructuring for any obligation. Loss only arises if default occurs.

Loss: Economic impact of a default. Based on recoveries, less expenses, discounted at the loan default interest rate to the time of default.

EAD: Exposure at default; a forecast of the amount the obligor will owe should default occur. When exposure can vary, a statistical model is used to forecast the amount that will be owed.

PD: Probability of default. Statistical model(s), usually logistic regression, predict obligor default likelihood (marginal) over a one-year horizon.

LGD: Loss given default. The percentage of EAD forecast not to be collected (see Loss).

Correlation: Correlation in default between obligors. Useful in assessing the joint probability-of-default distribution.

A historical database of loan LGDs accumulated over time—along with obligor and loan attributes or predictive variables at and prior to the point of default—is needed to build an LGD model. Due to the nature of the response variable distribution, a two-step method is sometimes employed: first, a model to predict whether a loss will be incurred given a default (probability of loss); second, a model to predict how severe the loss will be, given that a loss is expected (loss given loss). That is,

LGD(X) = Loss Given Default = Pr(Loss > 0 | X) × Loss(X | Loss > 0),

where X = a vector of obligor and loan attributes measured prior to and at the point of default.

It is also important to mention that there is evidence suggesting default rates and loss-given-default or recovery rates are correlated, as observed in 2006 papers by Edward Altman and colleagues. This point is essential to synthesize in the context of attempting to decompose the loan portfolio loss distribution depicted in Figure 2 into the building blocks of obligor PD and loan LGD. Clearly, there can be default correlation among obligors (e.g., homeowners in the same city, working for the same company) and recovery correlations among loans (e.g., home sale prices in the same city or neighborhood). Further, as default rates increase, it has been observed that recovery rates decrease, perhaps owing to a scarcity of capital or liquidity during periods of high defaults and an abundance of loans being sold for recovery. In any event, one should be aware of and test any assumption that suggests no correlation between default and recovery rates.

Assessing variation given a loan portfolio becomes the next vital point in deriving capital needs. In simplistic terms, assuming PD and LGD are uncorrelated, it is known that

Var(Expected Loss) = Var(PD × LGD) = μ_LGD² × Var(PD) + μ_PD² × Var(LGD) + Var(PD) × Var(LGD)     (1)

when Var(PD × LGD) exists. As noted above, there is some debate—and evidence—suggesting that PD and LGD are correlated, in which case (1) is not applicable. It should be noted that there also might be variation arising from exposure at default (EAD), owing to, for example, revolver credit line usage prior to a default event (i.e., that obligors will use all funds available to them as they approach default).

A better, but slightly more complicated, variation expression could be derived for Var(PD × LGD × EAD), with assumptions made for the correlation of the factors where appropriate. The expression PD × LGD × EAD is, after all, an estimate of the expected loss for a loan, borrower, or portfolio, depending on the level of measurement used. Often, portfolio-level parameter estimates are substituted for the parameters in (1) (or its applicable replacement) to estimate variation and therefore tail-risk for a portfolio in practice.

Glossary of Terms

Asymmetry of Information: Imbalance in information between a bank and a buyer of, for example, a mortgage the bank originated

Capital: Funds used as a reserve in case of losses

Collateralized Debt Obligation (CDO): A limited-purpose financial entity used by financial institutions to sell groups of loans

Equity: Generally, a financial institution's portion of direct ownership in an investment, such as equity in your home; usually in a "first loss" position when it comes to losses

Hedge Fund: A fund that makes investments in a diverse range of assets with strategies to "hedge" against market downturns and capture the benefit from market upturns

Leverage: An amount of debt supporting an investment

Model Risk: Risk arising solely from the use of models to measure things like credit or market risk

PD, LGD, EAD: Probability of Default, Loss Given Default, Exposure at Default; key metrics in the management of credit risk by risk models

ROI: Return on investment; generally, net income divided by amount of investment

ROE: Return on equity; generally, net income divided by capital (see ROI)

Securitization: The process used by financial and other institutions to sell off future cash flows from contractual obligations, such as mortgages, in return for investor cash today

Tranche: Layers of bonds differentiated by levels of riskiness, used in securitizations by CDOs

VaR: Value-at-Risk; a measure of less probable, or "tail," risk associated with a tail of a loss distribution

There are missing ingredients needed to make a probability statement (e.g., tail-risk) about a portfolio, in particular when data are unavailable (as with new products) or scant: a) an assumption on the correlation between obligors in the portfolio and b) the form of the distribution of losses or an estimate of how many standard deviations would be required to achieve a desired credit rating, as noted above. Data for either of these items are limited in many cases, and portfolio analysts are forced to make assumptions that may or may not prove to be accurate, as in the situation concerning the correlation of PD and LGD.
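The independence formula in (1) can be spot-checked by simulation. This sketch uses arbitrary stand-in distributions (a uniform for PD, a bathtub-shaped Beta for LGD) purely to verify the identity Var(XY) = μ_Y² Var(X) + μ_X² Var(Y) + Var(X) Var(Y) for independent X and Y.

```python
import random

random.seed(42)
n = 200_000
x = [random.uniform(0.0, 0.1) for _ in range(n)]      # stand-in for PD
y = [random.betavariate(0.5, 0.5) for _ in range(n)]  # stand-in for LGD

def mean(v):
    return sum(v) / len(v)

def var(v):
    m = mean(v)
    return sum((t - m) ** 2 for t in v) / len(v)

lhs = var([a * b for a, b in zip(x, y)])              # Var(PD x LGD) directly
rhs = (mean(y) ** 2 * var(x) + mean(x) ** 2 * var(y)
       + var(x) * var(y))                             # right side of (1)
print(f"direct {lhs:.6f} vs formula {rhs:.6f}")
```

The two numbers agree up to Monte Carlo error; once x and y are made dependent, they diverge, which is the practical warning in the text.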

Simulations using these assumptions can be calculated to establish conditional tail-risk estimates. Salmon provides a summary of a novel methodology proposed by David Li in his 2000 Journal of Fixed Income article, "On Default Correlation: A Copula Function Approach," and the resulting pervasive use of credit default swap (CDS, or default insurance contract) prices and Gaussian copulas for modeling the default correlation for potential defaulters, despite awareness that doing so may produce unstable results. Moreover, as Salmon points out, the CDS market exploded from $92 billion in 2001 to $62 trillion at the end of 2007, or almost 700-fold.

To illustrate the risks inherent in faulty assumptions about a) and b) above more clearly, suppose a lender has two accounts in a portfolio. Further assume an estimated probability of default, PD_i, for the ith obligor (2), perhaps arising from a statistical model as outlined, and an estimated loss given default, LGD_i, for the ith account or loan (3), where (α, β)_i are parameters of an appropriate Beta distribution for which α_i + β_i ≤ 1, not an unreasonable assumption. Then, from Equations (1)–(3), one can estimate the standard deviation of loss, σ, assuming a correlation between obligors, ρ. To assess capital needs, the lender would need to target a desired credit rating or equivalent tail-risk. Clearly, the shape of the loss distribution is important, but for simplicity's sake, let's assume the lender believes that capital should be reserved to cover T = K × σ losses over the mean level, μ, where it is believed that K ≈ 7. (Note from Tchebyshev's Inequality, Pr(|Loss − μ| ≥ Kσ) ≤ 1/K² ≈ 2% when K = 7, so that without some further knowledge about the loss distribution, the lender may be no more than about 2% sure of needing more capital than what has been planned over the mean level, μ.)

Suppose in this two-account portfolio, we have the data as in Table 2.

An admittedly extreme, yet simple, simulation of 1,000 trials reveals sensitivities for the required lender’s capital as a percentage of the total portfolio exposure, $200 million, essentially assuming total uncertainty regarding correlations and loss distribution.
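A simulation in this spirit can be sketched as follows. This is a reconstruction under stated assumptions, not the author's original code: LGD_i is treated as Beta(α_i, β_i) with α_i + β_i = 1 (so E[LGD_i] = α_i), and each trial's drawn correlation ρ is applied analytically to the two account-loss variances rather than by simulating joint defaults.

```python
import numpy as np

rng = np.random.default_rng(0)

# Account parameters from Table 2 (EAD in $ millions).
PD   = np.array([0.03, 0.07])
ELGD = np.array([0.30, 0.25])    # mean loss given default
EAD  = np.array([100.0, 100.0])

# Assumed Beta parameters with alpha_i + beta_i = 1, so E[LGD_i] = alpha_i.
alpha, beta = ELGD, 1.0 - ELGD
var_lgd = alpha * beta / ((alpha + beta) ** 2 * (alpha + beta + 1.0))

# Moments of the account loss L_i = EAD_i * D_i * LGD_i, D_i ~ Bernoulli(PD_i).
mean_i = EAD * PD * ELGD
var_i = EAD**2 * (PD * (var_lgd + ELGD**2) - (PD * ELGD) ** 2)
sd_i = np.sqrt(var_i)

# 1,000 trials under total uncertainty about correlation and distribution shape.
n = 1_000
rho = rng.uniform(-1.0, 1.0, n)   # correlation between the two account losses
K   = rng.uniform(6.0, 9.0, n)    # tail multiplier for T = mu + K * sigma

mu = mean_i.sum()
sigma = np.sqrt(var_i.sum() + 2.0 * rho * sd_i[0] * sd_i[1])
capital = (mu + K * sigma) / EAD.sum()    # required capital / total exposure

print(f"expected loss: ${mu:.2f}M")
print(f"95th-percentile capital requirement: {np.quantile(capital, 0.95):.0%}")
```

Under these assumptions, the expected loss works out to $2.65 million, matching the "$2.7 million" cited in the text, and the 95th percentile of the required capital ratio lands near 70% of the $200 million exposure.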

In this example, to cover capital requirements for all but 5% of potential risk, the lender could require up to ~70% capital, or $140 million for a $200 million portfolio. This implies a leverage of less than 1:1 (i.e., more lender equity than debt), while the 'expected losses' are only $2.7 million across all trials, given the (PD, LGD)_i estimates. With more negatively correlated assets, the leverage could be up to 10:1 in this example.

Unless return (ROI) is quite high, this lender will likely not have an attractive ROE and should probably find alternative investments for which their expertise can earn them better returns. Clearly, the lender and its shareholders and investors would be better served if sharper estimates of asset correlation could be obtained (most lenders strive for this and create larger, less 'concentrated' portfolios, or more negatively correlated assets, as a means to that end).

By comparison, with the most recent economic meltdown, preconceptions were in place that suggested correlations

Table 2—Account and Portfolio Parameters

Item     Account #1      Account #2      Portfolio
PD_i     3%              7%
LGD_i    30%             25%
EAD_i    $100 million    $100 million    $200 million
ρ        Uniform(−1, 1)
K        Uniform(6, 9)



Page 10: Model Risk and the Great Recession

14 VOL. 24, NO. 4, 2011

implied capital ratios in the leftmost part of Figure 3, while capital ratios in the rightmost part were required in some geographic areas and for some products (e.g., subprime) in actuality. All financial institutions attempt to manage their capital reserves using approaches akin to what is outlined and illustrated above. The thrust is that capital requirements are highly sensitive to correlation assumptions, as is easily observed from the above example. The lender in our example could easily be wiped out (insufficient capital to cover losses) if the correlation assumption suggested low or negatively correlated assets when this was not actually true. Layering in the influence of a relatively nascent CDS market for calculating correlation serves to make the risk assessment more sophisticated, perhaps, and more risky as a result. The 'frothy' market rewarded lending institutions for underwriting (and selling) more toxic assets, so long as models and ratings could not be refuted with actual defaults and losses, notwithstanding that little or no data existed that matched the assumptions being used. Ultimately, correlation assumptions were proven incorrect in some cases, and losses mounted.

Figure 3. Illustrated capital requirements (%) under correlation and distribution uncertainty

Figure 4. Graphical depiction of a collateralized debt obligation (CDO)

Unfortunately, the losses generated were not



limited to lenders, owing to the global distribution of risk resulting from collateralized debt obligations.

Collateralized Debt Obligations

Collateralized debt obligations (CDOs) are vehicles that allow investors to place financial bets on pools of loans. CDOs are segmented into tranches, or packages, that vary in size/amount and credit rating, if measured correctly. CDO tranche evaluations are dependent on cash flow forecasts generated from an analysis of the underlying asset pool, including PD, LGD, EAD, correlation, and, potentially, distribution assumptions as discussed above. With an ordinary portfolio of loans (or other debt instruments), the originating lender and its investors bear the burden of all losses arising from defaulted assets. With a CDO, the assets are compartmentalized into packages that interested parties can invest in directly, according to their risk appetite. In CDOs, an originating bank will make or underwrite a loan, but instead of assuming all risk of loss, some risk is transferred to investors in the "bonds" issued for each tranche.

Should the loans in the asset pool generate losses, they are distributed among the tranches on the liability side from bottom to top (Figure 4, right side). The ability to properly rate each tranche is critical to attracting investors. Naturally, the top-most tranches would be rated higher than the bottom-most, but would also earn a smaller return. The theory is that with sufficient diversity of assets on the left-hand side of the 'ledger,' there will always be enough cash flow for the better-rated tranches (on top) of the right-hand side. However, when the assets or their correlations are different than predicted, the lender has inappropriately transferred risk to investors, who are generally less capable of determining the level of risk they're taking, primarily as a result of the asymmetry of information: The closer the investor is to the underwriting of the original loan, the more acute would be their understanding of risks, and they would certainly be more aware of process and policy changes (e.g., with regard to lending standards and requirements) and be able to see inherent risks earlier and proactively make investment decisions. CDO investors are further removed from this information and therefore reliant upon the lender, rating agencies (e.g., Moody's, S&P, and Fitch), risk analytics teams, and the market for providing appropriate risk-rating guidance.

Unfortunately, some investors took it for granted that the loss potential embedded in their tranche investment was more limited than it really was. For example, in graphical terms, their view might have been associated with the narrower loss distribution (see Figure 4, to the left of the liability stack), with losses largely limited to the 'equity' layer retained by the CDO, as opposed to the wider one that implies higher loss rates for their investments than ratings would have suggested. This is chiefly because of increased positive correlations entering into the mix as lenders expanded their lending practices to meet insatiable demand for new mortgages, loans, etc., but included fewer credit-worthy debtors, fewer well-structured loans, or new loan features for which little or no performance data existed. The effect of relaxed or altered assumptions and lack of transparency in loan origination practices was magnified across the globe as institutions, municipalities, and individuals invested in these structures.

As if this were not complicated enough, CDO tranches, themselves, could be pooled into other CDOs, sometimes called "CDO-squared," and thereby propagate and accelerate the contagion even further. Certainly, if it is difficult for lenders to adequately understand inherent risks when they



presumably have the greatest access to data—key assumptions on PD, LGD, EAD, correlation, etc.—then it is nearly impossible for investors to fathom their risks when they're removed from the underlying loans by the layering created by these abstruse structures. Moreover, most investors didn't think they needed to be so intimate with the assumptions, so long as rating agencies and CDO models kept rating CDO tranches as they did. At the root of this mechanism are, in part, the fairly straightforward concepts illustrated previously and their potentially improper use, which ultimately contributed to a widened gulf that separated modelers and theory from practitioners and investors. In short, what we experienced in the Great Recession was that practice in the use of models outpaced the science in some cases.

Summary and Recommendations

We have attempted to briefly summarize and illustrate, in as simple a manner as possible, how data, process considerations, assumptions, and statistical models are used, wittingly or unwittingly, in the assessment of financial risk at lending institutions. The statistical concepts discussed are not deep per se, which makes the need for a discussion based on them, or rather the context of their use, more puzzling. Some fundamental take-aways are the following:

1) Data and process considerations swamp specific modeling techniques in priority of importance; an intimate understanding of the environment that generated the data is key to model, analytics, and inference validity.

2) Checking and testing model assumptions—or at least understanding what they are and the risks of violating them—should get more emphasis from educators, practitioners, lenders, investors, and even reformers. As models and modeling are hypotheses of reality, the scientific method as a learning paradigm should be reinforced. Models of reality need to be continually tested for predictive validity where possible and stress tested and audited for process control at all times, with the results readily available to all constituents.

3) Modelers should become their own worst critics once their models have been validated for implementation. They also should be held accountable for discovering where the models break down and providing proactive remediation. Statisticians and other practitioners need to think more holistically about their role in financial services, for their job is decision support for the betterment of the institution and not strictly modeling. Everyone involved should be suspicious of models and their assumptions and not take for granted that another party has acted on their behalf to validate that a model is working.

Finally, there needs to be more discussion between the statistical community and users or consumers of statistical modeling. That such fundamental lessons as the items above could be forgotten or drowned out in the euphoria of 'up-markets' can be fairly easily remedied with more awareness. We all need to recognize that the use of statistical concepts has become democratized and subject to potential improper use. Premier statistical societies such as the American Statistical Association may be able to play a more active, facilitative, and 'watchdog' role along these lines.

Editor's Note: The views expressed in this article are those of the author, and not necessarily GE’s or GE Capital’s.

Further Reading

Altman, E. I. 2006. Default recovery rates and LGD in credit risk modeling and practice: An updated review of the literature and empirical evidence. www.defaultrisk.com/pp_recov_53.htm.

Altman, E. I., A. Resti, and A. Sironi. 2006. Default recovery rates: A review of the literature and recent empirical evidence. Journal of Finance Literature Winter:21–45.

Box, G. E. P. 1976. Science and statistics. JASA 71(356):791–799.

Duffie, D., A. Eckner, G. Horel, and L. Saita. 2009. Frailty correlated default. The Journal of Finance LXIV(5).

Hahn, G., and N. Doganaksoy. 2008. Financial services. In The role of statistics in business and industry. 235–264. Hoboken, NJ: John Wiley & Sons.

Hubbard, D. 2009. The failure of risk management: Why it’s broken and how to fix it. Hoboken, NJ: John Wiley & Sons.

Lewis, M. 2008. The end of Wall Street's boom. Portfolio.com, www.portfolio.com/news-markets/national-news/portfolio/2008/11/11/The-End-of-Wall-Streets-Boom.

Lewis, M. 2010. The big short: Inside the doomsday machine. New York, New York: W. W. Norton & Co.

Li, D. X. 2000. On default correlation: A copula function approach. The Journal of Fixed Income 6:43–54.

(The) President’s Working Group on Financial Markets. 1999. Hedge funds, leverage, and the lessons of long-term capital management. www.treasury.gov/resource-center/fin-mkts/Documents/hedgfund.pdf.

Rajan, U., A. Seru, and V. Vig. 2009. The failure of models that predict failure: Distance, incentives, and defaults. Chicago GSB Research Paper No. 08-19, Ross School of Business Paper No. 1122, March, 1–45.

Salmon, F. 2009. Recipe for disaster: The formula that killed Wall Street. Wired February 23.

