Liam Mescall VaR Project

    Introduction

The need for a co-ordinated approach to banking regulation was finalised in 1988 and enacted in law by the then G-10 countries. This was a long process following instances such as the collapse of Bank Herstatt in Cologne in 1974. At the time Herstatt was the 35th largest credit institution in Germany and had speculated heavily on the US dollar following the collapse of the Bretton Woods system of fixed exchange rates in 1973. An unexpected appreciation in the dollar exposed the bank to a foreign exchange position three times the size of its capital, which ultimately gave rise to losses four times its then capital. Upon noting these positions in a special audit, German regulators withdrew its banking license and liquidated the bank [1]. The credit-risk-focused Basel I was followed by a stronger focus on capital adequacy in Basel II, which identified VaR as its preferred method of assessing market risk.

Regulation requires that all banks using internally developed risk management models implement a backtesting procedure. The results of this backtesting procedure determine the capital requirement that banks have to hold for market risk, i.e.:

\text{Required}_t = \max\left( \mathrm{VaR}_{t-1}(0.01),\; k \cdot \frac{1}{60}\sum_{i=1}^{60} \mathrm{VaR}_{t-i}(0.01) \right) \qquad (1)

Equation (1) uses the 10-day 99% confidence level VaR: the capital requirement is the higher of the VaR of the day before and the average of the VaR values of the previous sixty days multiplied by a factor k. The factor has a minimum value of 3 and a maximum value of 4, depending on the performance of the backtest results, i.e. whether the losses projected by the bank's model have correctly captured observed losses. Finding a balance between the measuring power of the model and its imperfections remains the ultimate goal. Regulation requires this testing to be performed quarterly using at least one year's trading data. As no uniform method exists for VaR calculation, a degree of subjectivity is introduced as to how it is undertaken, with the burden of proof resting on the individual banks [2]. This report focuses on historical and Monte Carlo (MC) VaR.
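
To make equation (1) concrete, the following sketch computes the requirement from a series of daily 99% VaR figures. It is an illustration only: the function name, the example figures and the choice of k = 3.85 are assumptions, not values taken from the attached workbooks.

```python
import numpy as np

def basel_market_risk_charge(var_series, k=3.0):
    """Capital requirement per equation (1): the larger of yesterday's 99% VaR
    and k times the average of the previous sixty days' VaR figures.

    var_series : daily VaR estimates (positive numbers), most recent last.
    k          : multiplication factor, between 3 and 4 depending on backtest results.
    """
    var_series = np.asarray(var_series, dtype=float)
    if var_series.size < 60:
        raise ValueError("need at least 60 daily VaR figures")
    previous_day = var_series[-1]            # VaR_{t-1}(0.01)
    sixty_day_avg = var_series[-60:].mean()  # average of the last 60 VaR figures
    return max(previous_day, k * sixty_day_avg)

# Illustrative usage with made-up VaR numbers (in portfolio currency units).
rng = np.random.default_rng(0)
fake_var = rng.uniform(0.8e6, 1.2e6, size=250)   # hypothetical daily 99% VaR history
print(basel_market_risk_charge(fake_var, k=3.85))
```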

    VaR Calculation

Historical and MC VaR adopt sharply contrasting methods and assumptions in arriving at potential loss figures. Historical VaR makes no assumptions regarding the shape of the returns distribution, as it uses market-observed inputs, but it does assume that the shape of future returns will replicate that of the past. Its use requires a significant amount of historical data illustrating all possible states of the portfolio, thus ensuring statistical significance. MC VaR generates many thousands of simulated returns, drawn either from a parametric assumption about the shape of the distribution or, preferably, by re-sampling the empirical history, generating enough data to be statistically significant and then ordering the results before reading off the desired percentile as in the historical method [3]. From this we can tell both what could happen and what is likely to happen.
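
As a minimal sketch of the common "order the returns and read off the percentile" step described above, the snippet below computes a 1-day 99% VaR from a vector of log returns; the simulated return series is only a stand-in for real index data.

```python
import numpy as np

def historical_var(log_returns, confidence=0.99):
    """1-day historical VaR: the loss at the lower (1 - confidence) percentile
    of the observed return distribution, reported as a positive number."""
    log_returns = np.asarray(log_returns, dtype=float)
    cutoff = np.quantile(log_returns, 1.0 - confidence)  # boundary of the worst 1% of days
    return -cutoff

# Illustrative usage on simulated data standing in for an index return history.
rng = np.random.default_rng(1)
returns = rng.normal(0.0003, 0.011, size=5000)    # hypothetical daily log returns
print(f"1-day 99% VaR: {historical_var(returns):.4f}")  # roughly 2.3 standard deviations below the mean
```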

There are two components to the initial historical VaR calculation. First, the log returns were calculated from each index's twenty-year time series. Log returns are used because they are time-additive (a two-period log return is the sum of the two single-period log returns), easy to interpret and manipulate, and approximately equal to simple returns for small moves. We calculate the mean of the log returns followed by the squared deviation of each observation from the mean; these error terms are used to explain the variance in the time series. Secondly, a GARCH volatility adjustment is applied: after running the garchfit function in MatLab, the parameters omega, alpha and beta are used to calculate the sigmas, which are a conditional standard deviation measure. Scaling these sigmas by 250 trading days and taking the square root gives an annualised GARCH volatility, which is then applied to the log returns previously calculated. The 1% 1-day VaR estimate is then the lowest one percentile of returns in this series.
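
The sketch below illustrates the GARCH step under stated assumptions: the parameters omega, alpha and beta are taken as already estimated (the values garchfit would provide), and each historical return is rescaled by the ratio of the latest conditional volatility to the volatility on its own date, which is one common way of applying such an adjustment and not necessarily the exact formula used in the attached workbook.

```python
import numpy as np

def garch_sigmas(returns, omega, alpha, beta):
    """Conditional standard deviations from a GARCH(1,1) recursion
    sigma_t^2 = omega + alpha * e_{t-1}^2 + beta * sigma_{t-1}^2,
    where e_t is the demeaned return. Parameters are assumed already
    estimated elsewhere (e.g. by garchfit); they are NOT fitted here."""
    returns = np.asarray(returns, dtype=float)
    e = returns - returns.mean()          # demeaned returns (error terms)
    sigma2 = np.empty_like(e)
    sigma2[0] = e.var()                   # start the recursion at the sample variance
    for t in range(1, len(e)):
        sigma2[t] = omega + alpha * e[t - 1] ** 2 + beta * sigma2[t - 1]
    return np.sqrt(sigma2)                # annualised figure would be sqrt(250) * this

def vol_adjusted_var(returns, omega, alpha, beta, confidence=0.99):
    """Volatility-adjusted historical VaR: rescale each past return by the ratio of
    the latest conditional volatility to the volatility on its own date, then read
    off the lower percentile (one common form of GARCH adjustment, assumed here)."""
    returns = np.asarray(returns, dtype=float)
    sig = garch_sigmas(returns, omega, alpha, beta)
    adjusted = returns * (sig[-1] / sig)
    return -np.quantile(adjusted, 1.0 - confidence)

# Illustrative usage with made-up parameters in the usual ranges.
rng = np.random.default_rng(2)
r = rng.normal(0.0, 0.01, size=2000)      # stand-in for index log returns
print(vol_adjusted_var(r, omega=1e-6, alpha=0.08, beta=0.90))
```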

MC VaR takes the simulation route, specifying probability distributions for each of the market risk factors, with each simulation calculated using a different set of random values. The advantages and popularity of MC come from the flexibility to model returns on more exotic distributions and with varying inputs. Once completed, a distribution of values exists from which we calculate the lower 1 percentile of returns when estimating VaR. The model constructed for the purposes of this paper introduces a jump diffusion process, which allows the stochastic element of jumps to be accounted for.
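
A minimal sketch of the simulation step is given below, assuming a Merton-style jump diffusion for the daily log return: a Gaussian diffusion term plus a Poisson-driven number of normally distributed jumps. The parameter values are purely illustrative and are not the calibrated inputs of the attached MatLab model.

```python
import numpy as np

def mc_var_jump_diffusion(mu, sigma, lam, mu_j, sigma_j,
                          n_sims=100_000, confidence=0.99, dt=1 / 250, seed=0):
    """1-day Monte Carlo VaR from a jump-diffusion: a Gaussian diffusion term plus a
    Poisson-driven number of jumps per day, each jump normally distributed in
    log-return terms. All parameter values passed in are illustrative assumptions."""
    rng = np.random.default_rng(seed)
    # Diffusion part of the simulated daily log return.
    diffusion = (mu - 0.5 * sigma ** 2) * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_sims)
    # Jump part: Poisson count of jumps within the day; sum of N normal jumps
    # has mean N * mu_j and standard deviation sqrt(N) * sigma_j.
    n_jumps = rng.poisson(lam * dt, size=n_sims)
    jumps = n_jumps * mu_j + np.sqrt(n_jumps) * sigma_j * rng.standard_normal(n_sims)
    simulated_returns = diffusion + jumps
    # Order the simulated returns and read off the lower percentile, as in the historical method.
    return -np.quantile(simulated_returns, 1.0 - confidence)

# Illustrative usage: 5% annual drift, 17.5% annual vol, about 3 jumps a year averaging -2%.
print(mc_var_jump_diffusion(mu=0.05, sigma=0.175, lam=3.0, mu_j=-0.02, sigma_j=0.03))
```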

    Backtesting

Ensuring Basel compliance requires retrospective testing of the 1% 1-day model over at least one year, with projected losses compared to observed portfolio movements. The calculations in the historical VaR show how a rolling mean, based on the previous 250 days of log returns, was calculated and fed through the process previously described before a rolling VaR figure was calculated from the adjusted returns. The MC process similarly employs a rolling mean and volatility as inputs into the model, computing a distribution of returns for each day. For both processes, an exceedence was noted where the negative log return was greater than the rolling VaR prediction. Historical VaR is calculated in the Excel workbook attached, with MC VaR calculated in the MatLab code attached.
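
The backtest loop can be sketched as follows, with a plain rolling historical quantile standing in for either model's forecast; the window length and the data are illustrative assumptions rather than the report's actual inputs.

```python
import numpy as np

def backtest_exceedences(log_returns, window=250, confidence=0.99):
    """Count the days on which the realised loss exceeded the VaR forecast made from
    the previous `window` returns (a plain rolling historical quantile is used here
    as a stand-in for either the GARCH-adjusted or the MC forecast)."""
    log_returns = np.asarray(log_returns, dtype=float)
    exceedences = []
    for t in range(window, len(log_returns)):
        history = log_returns[t - window:t]
        var_forecast = -np.quantile(history, 1.0 - confidence)  # rolling 1-day 99% VaR
        if -log_returns[t] > var_forecast:                      # realised loss beats the forecast
            exceedences.append(t)
    return exceedences

# Illustrative usage: over roughly 250 out-of-sample days an accurate 99% model should
# produce about 2.5 exceedences; 10 or more falls in the Basel red zone.
rng = np.random.default_rng(3)
r = rng.normal(0.0, 0.01, size=500)
print(len(backtest_exceedences(r)))
```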

    Results

The DJIA time series data being tested exhibits the distribution of log returns shown below, alongside the corresponding FTSE distribution.

With 80% of returns falling between -1% and +1%, the DJIA returns are close to normally distributed, with a slight imbalance towards the positive side. The FTSE returns are quite similar, with 74% falling within the same range. Limited negative fat-tailed behaviour is noted in both distributions.

We can draw from this that projected VaR estimates, both historical and MC, will be influenced by the normality of the observed distribution. As we are testing 1% 1-day VaR, an accurate model should produce an exception no more than 1% of the time, i.e. roughly 2.5 exceptions over a 250-day year.

[Figure: DJIA Distribution of Log Returns]

[Figure: FTSE Distribution of Log Returns]

A score of 10 or more exceptions is rejected by Basel II. The exceptions observed are as follows:

Data Set    Historical    Monte Carlo
FTSE            15              3
DJIA             9              6
Total           24              9

Excluding the MC FTSE testing, the model results are largely inaccurate. Noting that 25% of FTSE observations are negative compared with 20% of DJIA observations, and that FTSE volatility is higher (averaging 17.5% over the past two years), it appears the MC model better captures negative returns under high volatility. The less volatile DJIA index (averaging 16% over the past two years) returns consistent results in both the historical and MC models. We also note the dates of the exceedences as follows:

Data Set             Q2 2010    Q3 2010    Q4 2010    Q1 2011
FTSE (Historical)        6          4          3          2
DJIA (Historical)        4          3          0          2
FTSE (MC)                1          1          0          1
DJIA (MC)                3          1          0          2

Of these exceptions, 42% are noted in the first quarter of the backtesting window (Q2 2010), suggesting the volatility observed at these points is subsequently used to capture future losses. Price movements and exceedences are graphed below:

[Figure: FTSE Historical Exceedences — FTSE log returns vs. rolling VaR]

The FTSE noted sporadic returns in April and June 2010, which account for the bulk of the exceedences.

Exceedences in the DJIA, while occurring within the same timeframe, are more staggered, with a greater reduction in volatility over the second half of the year.

Volatility persistence in the DJIA returns does not seem to be taken into account by the MC simulation, which appears to revert quite quickly and, upon doing so, notes exceedences, in particular between April and August 2010.

[Figure: DJIA Historical Exceedences — DJIA log returns vs. rolling VaR]

[Figure: DJIA MC Exceedences — DJIA log returns vs. rolling VaR]

[Figure: FTSE MC Exceedences — FTSE log returns vs. rolling VaR]

The FTSE MC model, graphed above, is the best-performing VaR backtesting model, with three exceedences noted. It can be clearly seen how VaR estimates are reduced following periods of large volatility.

Projecting losses by means of a GARCH adjustment has a significant effect on the process. The adjustment itself assumes that the randomness of the variance process varies with the variance itself [4], reflecting the observed random nature of index price evolution. Historical VaR does not capture the volatility clustering in May in either the DJIA or FTSE returns, while the MC model goes much further towards achieving this, as illustrated by the variation in its VaR figures. The jump diffusion introduced to capture this volatility in the MC model may be playing a part in this.

Assuming a portfolio with holdings in each index, the observed results will have a significant impact on the capital position of a bank. A general risk charge (GRC) is calculated, consistent with equation (1), by:

\mathrm{GRC}_t = \max\left( \mathrm{VaR}_{t-1},\; \frac{1}{60}\sum_{i=1}^{60} \mathrm{VaR}_{t-i} \right) \qquad (2)

This is subject to multiplication by a factor of 10 in the case of the historical model being adopted and 3.85 in the case of the MC model.

    Problems with Methodology

    These models have a number of limitations:

- The backtesting methodology requires the comparison of an actual trading result with a VaR estimate, assuming that the changes taking place are due only to price and rate movements. Actual portfolio trading results will incorporate fee income as well as trading gains and losses, which will contaminate the results. The larger the VaR period (e.g. 10-day), the greater the contamination [2].

- The prediction of market crashes still escapes VaR. The introduction of jump diffusion into the model goes some way to addressing this.

- For longer periods, the scaling of VaR calculations by the square-root-of-time rule can lead to serious over- or under-statement of actual VaR. Studies have shown that for periods of 30 days or more, errors of 13% can be noted [3].

- When a normal distribution is not used, the subadditivity requirement for mathematical coherence is not met, i.e. the sum of the component VaRs may be greater or less than the VaR of the whole. Use of conditional VaR can address this (a short sketch follows this list).

- The presence of overlapping observations in the testing reduces the power of the test (the probability of the null being rejected when it is false).

- Empirical tests have shown that historical simulation does not capture volatility dynamics, causing clustering in VaR violations [5]. GARCH modelling goes some way to reducing this.
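
The conditional VaR mentioned in the subadditivity point above can be sketched as the average loss beyond the VaR cutoff; the return series used here is simulated for illustration only.

```python
import numpy as np

def conditional_var(log_returns, confidence=0.99):
    """Conditional VaR (expected shortfall): the average loss on the days whose loss
    is at least as large as the VaR cutoff, reported as a positive number."""
    losses = -np.asarray(log_returns, dtype=float)
    var_cutoff = np.quantile(losses, confidence)   # ordinary VaR at this confidence level
    tail = losses[losses >= var_cutoff]            # the worst (1 - confidence) share of days
    return tail.mean()

# Illustrative usage: conditional VaR is always at least as large as VaR itself.
rng = np.random.default_rng(4)
r = rng.normal(0.0, 0.01, size=10_000)
print(conditional_var(r), np.quantile(-r, 0.99))
```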

    Alternative Backtesting Approach

Due to the range of results noted in testing, I propose the use of a variety of VaR models to improve accuracy, as per the Basel Committee recommendations of 2006, even though only one is used for reporting. These can take a number of forms:

- The proportion of failures (POF) test is based on the assumption that VaR exceptions can be modelled as independent draws from the binomial distribution (and that the model is accurate). This test produces a likelihood ratio statistic from the observed VaR exceedences and the expected number of violations based on the coverage ratio p, i.e.:

LR_{POF} = -2\ln\left[\frac{(1-p)^{T-x}\,p^{x}}{(1-\hat{\pi})^{T-x}\,\hat{\pi}^{\,x}}\right] \qquad (3)

where \hat{\pi} = x/T is the unconditional coverage in a sample of T observations with x exceedences; the statistic increases when \hat{\pi} differs from p, indicating an overstatement or understatement of portfolio risk. Lopez (1999) analysed this testing and found only slight power properties in this backtest procedure, leading to the development of the tests below (a sketch of the first two statistics follows this list).

- The likelihood ratio test was developed further into the Markov test, where the test of independence is formulated through a first-order Markov chain (a random process whereby the next state depends only on the current state). Introducing the maximum likelihood estimates, the independence statistic takes the form:

LR_{ind} = -2\ln\left[\frac{(1-\hat{\pi})^{n_{00}+n_{10}}\,\hat{\pi}^{\,n_{01}+n_{11}}}{(1-\hat{\pi}_{01})^{n_{00}}\,\hat{\pi}_{01}^{\,n_{01}}\,(1-\hat{\pi}_{11})^{n_{10}}\,\hat{\pi}_{11}^{\,n_{11}}}\right] \qquad (4)

where n_{ij} counts the days on which state j follows state i (state 1 being an exceedence), \hat{\pi}_{01} and \hat{\pi}_{11} are the estimated probabilities of an exceedence given no exceedence and given an exceedence on the previous day, and \hat{\pi} is the unconditional exceedence probability. Combining the test statistic of independence with the test statistic of unconditional coverage gives the conditional coverage likelihood ratio statistic, making the test an updated proportion of failures test. This broader scope loses some statistical power.

- The duration-based approach was developed to address the lack of completeness in the independence testing. Here, an alternative independence hypothesis is introduced, stating that VaR exceedences are independent of the time since the last VaR exceedence. The durations, defined as the elapsed time between successive VaR exceedences, are used as test statistics:

D_i = t_i - t_{i-1} \qquad (5)

where t_i is the day of the i-th violation. The likelihood ratio test is also used here. The hazard function is then calculated using a Weibull distribution to determine the number of exceedences.

- The use of a martingale difference test stems from the observation that the unconditional coverage and independence tests are implications of the martingale difference hypothesis. By testing a series for martingale behaviour you can identify potential exceedences. By using the de-meaned elements of the indicator function (I_t - p), a martingale difference sequence can be formed. If the martingale difference hypothesis holds for the de-meaned violation series, the VaR exceedence sequence is uncorrelated at all lags and leads. While theoretically incomplete, the approach has proven to be quite successful [7].
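
As referenced in the proportion-of-failures item, the sketch below implements the statistics of equations (3) and (4) from a 0/1 exceedence series; the example hit series is randomly generated for illustration.

```python
import numpy as np

def kupiec_pof(hits, p=0.01):
    """Equation (3): likelihood-ratio test of unconditional coverage.
    hits : 0/1 array, 1 on days where the loss exceeded VaR.  p : target coverage."""
    hits = np.asarray(hits, dtype=int)
    T, x = hits.size, hits.sum()
    pi_hat = x / T                                    # observed exceedence rate
    pi_hat = min(max(pi_hat, 1e-12), 1 - 1e-12)       # guard the logs at the boundaries
    log_l0 = (T - x) * np.log(1 - p) + x * np.log(p)
    log_l1 = (T - x) * np.log(1 - pi_hat) + x * np.log(pi_hat)
    return -2.0 * (log_l0 - log_l1)                   # ~ chi-square(1); 3.84 is the 5% cutoff

def christoffersen_independence(hits):
    """Equation (4): likelihood-ratio test that an exceedence today does not make an
    exceedence tomorrow more likely, via a first-order Markov chain of the hit series."""
    hits = np.asarray(hits, dtype=int)
    pairs = list(zip(hits[:-1], hits[1:]))
    n = {(i, j): sum(1 for a, b in pairs if (a, b) == (i, j)) for i in (0, 1) for j in (0, 1)}
    pi01 = n[(0, 1)] / max(n[(0, 0)] + n[(0, 1)], 1)  # P(hit | no hit yesterday)
    pi11 = n[(1, 1)] / max(n[(1, 0)] + n[(1, 1)], 1)  # P(hit | hit yesterday)
    pi = (n[(0, 1)] + n[(1, 1)]) / len(pairs)         # unconditional hit probability
    eps = 1e-12                                       # avoids log(0) in degenerate samples
    log_l0 = ((n[(0, 0)] + n[(1, 0)]) * np.log(1 - pi + eps)
              + (n[(0, 1)] + n[(1, 1)]) * np.log(pi + eps))
    log_l1 = (n[(0, 0)] * np.log(1 - pi01 + eps) + n[(0, 1)] * np.log(pi01 + eps)
              + n[(1, 0)] * np.log(1 - pi11 + eps) + n[(1, 1)] * np.log(pi11 + eps))
    return -2.0 * (log_l0 - log_l1)                   # ~ chi-square(1)

# Illustrative usage: a made-up hit series averaging 9 exceedences over 250 days.
rng = np.random.default_rng(5)
hits = (rng.random(250) < 9 / 250).astype(int)
print(kupiec_pof(hits), christoffersen_independence(hits))
```

The conditional coverage statistic mentioned in the Markov-test item is simply the sum of these two statistics, compared against a chi-square distribution with two degrees of freedom.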

    Conclusion

From the testing undertaken and the alternatives reviewed, it is clear that best practice requires the use of a number of models, refined to the portfolio's specifications. While these models provide useful information, there remains an inability to predict market crashes arising from exceptional events, e.g. the Lehman collapse or 9/11. The limitations of each model must be understood and decisions taken in light of them.

    Appendix

[1] Basel Committee on Banking Supervision, 2004. Banking Failures in Mature Economies. [online] Basel. Available at: http://www.bis.org/publ/bcbs_wp13.pdf. Accessed on 22-3-2011.

[2] van Roekel, G., 2008. Extended Analysis of Back Testing Framework for Value at Risk. [online] Twente. Available at: http://essay.utwente.nl/59173/1/scriptie_G_van_Roekel.pdf. Accessed on 22-3-2011.

[3] Urbani, P., 2008. All About Value at Risk (VaR). [online] Available at: http://www.edge-fund.com/Urbani.pdf. Accessed on 23-3-2011.

[4] Wikipedia, 2009. Stochastic volatility. [online] Available at: http://en.wikipedia.org/wiki/Stochastic_volatility.

[5] Berkowitz, J., Christoffersen, P. and Pelletier, D., 2007. Evaluating Value-at-Risk Models with Desk-Level Data. CREATES Research Paper Series 2009. Available at SSRN: http://ssrn.com/abstract=1127226.

[6] Campbell, D., 2005. A Review of Backtesting and Backtesting Procedures. [online] Available at: citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.142.4084.

[7] Lehikoinen, K., 2007. Development of Systematic Backtesting Processes of Value-at-Risk. [online] Available at: http://www.sal.tkk.fi/publications/pdf-files/tleh07.pdf.
