A Comparison Of Value-At-Risk Methods For Portfolios Consisting Of Interest Rate Swaps And FRAs

Robyn Engelbrecht
Supervisor: Dr. Graeme West

December 9, 2003


Acknowledgements

I would like to thank Graeme West and Hardy Hulley for all their help, and especially Dr. West for the many consultations, as well as for providing me with his bootstrapping code. Thanks also to Shaun Barnarde of SCMB for providing the historical data.


Contents

1 Introduction
  1.1 Introduction to Value at Risk
  1.2 Problem Description

2 Assumptions and Background
  2.1 Choice of Return
  2.2 The Holding Period
  2.3 Choice of Risk Factors when Dealing with the Yield Curve
  2.4 Modelling Risk Factors
  2.5 The Historical Data

3 The Portfolios
  3.1 Decomposing Instruments into their Building Blocks
  3.2 Mapping Cashflows onto the set of Standard Maturities

4 The Delta-Normal Method
  4.1 Background
  4.2 Description of Method

5 Historical Value at Risk
  5.1 Classical Historical Simulation
    5.1.1 Background
    5.1.2 Description of Method
  5.2 Historical Simulation with Volatility Updating (Hull-White Historical Simulation)
    5.2.1 Background
    5.2.2 Description of Method

6 Monte Carlo Simulation
  6.1 Background
  6.2 Monte Carlo Simulation using Cholesky decomposition
  6.3 Monte Carlo Simulation using Principal Components Analysis

7 Results

8 Conclusions

A Appendix
  A.1 The EWMA Model
  A.2 Cash Flow Mapping
  A.3 The Variance of the Return on a Portfolio

B Appendix
  B.1 VBA Code: Bootstrapping Module
  B.2 VBA Code: VaR Module
  B.3 VBA Code: Portfolio Valuation Module


Chapter 1

Introduction

1.1 Introduction to Value at Risk

Value at Risk, or VaR, is a widely used measure of financial risk, which provides a way of quantifying and managing the risk of a portfolio. It has, in the past few years, become a key component of the management of market risk for many financial institutions¹. It is used as an internal risk management tool, and has also been chosen by the Basel Committee on Banking Supervision as the international standard for external regulatory purposes².

Definition 1.1 The VaR of a portfolio is a function of 2 parameters, a time period and a confidence level. It equals the loss on a portfolio that will not be exceeded by the end of the time period with the specified confidence level [2].

Estimating the VaR of a portfolio thus involves determining a probability distribution for the change in the value of the portfolio over the time period (known as the holding period). Consider a portfolio of financial instruments $i = 1, \dots, n$, whose values at time $t$ depend on the $k$ market variables (risk factors)

$$x_t := \left(x_t^{(1)}, \dots, x_t^{(k)}\right)$$

¹Market risk is the risk associated with uncertainty about future earnings relating to changes in asset prices and market rates.

²The capital that a bank is required to hold against its market risk is based on VaR with a 10-day holding period at a 99% confidence level. Specifically, the regulatory capital requirement for market risk is defined as $\max(\mathrm{VaR}_{t-1},\, k \times \mathrm{Avg}\{\mathrm{VaR}_{t-i} \mid i = 1, \dots, 60\})$. Here $k$ is the multiplication factor, which is set to between 3 and 4 depending on previous backtest results, and $\mathrm{VaR}_t$ refers to a VaR estimate for day $t$ based on a 10-day holding period [1].


These market variables could be exchange rates, stock prices, interest rates, etc. Denote the monetary positions in these instruments by

$$W := (W_1, \dots, W_n)$$

(A fundamental assumption of VaR is that the positions in each instrument remain static over the holding period, so the subscript $t$ above has been omitted.) Let the values of the instruments at time $t$ be given by

$$P_t := \left(P_t^{(1)}(x_t), \dots, P_t^{(n)}(x_t)\right) \qquad (1.1.1)$$

where $P_t^{(i)}(x_t)$, $i \in \{1, \dots, n\}$, are pricing formulae, such as the Black-Scholes pricing formula. The value of the portfolio at time $t$ is given by

$$V_t(P_t, W) = \sum_{i=1}^{n} W_i P_t^{(i)}(x_t) \qquad (1.1.2)$$

Let $\delta t$ be the length of the holding period, and $\alpha \in (0, 1)$ be the level of significance³. VaR can be defined in terms of either the distribution of the portfolio value $V_{t+\delta t}$, or in terms of the distribution of the arithmetic return of the portfolio $R_{t+\delta t} = (V_{t+\delta t} - V_t)/V_t$. Consider the $(100 \times \alpha)$th percentile of the distribution of values of the portfolio at $t + \delta t$. This is given by the value $v_\alpha$ in the expression⁴

$$P[V_{t+\delta t} \geq v_\alpha] = \alpha$$

Let $R_\alpha$ be the arithmetic return corresponding to $v_\alpha$; in other words $v_\alpha = V_t(1 + R_\alpha)$. The relative VaR of the portfolio is defined by

$$\mathrm{VaR}(\text{relative}) := E_t[V_{t+\delta t}] - v_\alpha = -V_t(R_\alpha - E_t[R_{t+\delta t}]) \qquad (1.1.3)$$

(1.1.3) gives the monetary loss of the portfolio over the holding period relative to the mean of the distribution. Sometimes the value of the portfolio at time $t$, $V_t$, is used instead of $E_t[V_{t+\delta t}]$. This is known as the absolute VaR:

$$\mathrm{VaR}(\text{absolute}) := V_t - v_\alpha = -V_t R_\alpha \qquad (1.1.4)$$

(1.1.4) gives the monetary loss relative to zero, i.e. without reference to the expected value. The equations above follow from the fact that $E_t[V_{t+\delta t}] - v_\alpha = V_t(1 + E_t[R_{t+\delta t}]) - V_t(1 + R_\alpha)$ and $V_t - v_\alpha = V_t - V_t(1 + R_\alpha)$. When the time horizon is small, the expected return can be quite small, and so the results of (1.1.3) and (1.1.4) can be similar, but (1.1.3) is a better estimate in general, since it accounts for the time value of money, and pull-to-par effects which may become significant towards the end of the life of a portfolio [3].

³Typically $\alpha = 0.05$ or $\alpha = 0.01$, which correspond to confidence levels of 95% and 99% respectively.

⁴This percentile always corresponds to a negative value, but VaR is given by a positive amount, hence the greater-than-or-equals sign here.
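To make the two definitions concrete, the following minimal Python sketch reads both VaR numbers off a sample of end-of-period portfolio values. The sample here is randomly generated purely for illustration, and the function and variable names are mine; the thesis's actual implementation is the VBA code of Appendix B.

```python
import numpy as np

def var_from_values(v_t, v_future, alpha=0.05):
    """Relative VaR (1.1.3) and absolute VaR (1.1.4) from a sample of
    end-of-holding-period portfolio values."""
    v_alpha = np.percentile(v_future, 100 * alpha)  # (100*alpha)th percentile
    var_relative = np.mean(v_future) - v_alpha      # loss relative to the mean
    var_absolute = v_t - v_alpha                    # loss relative to V_t
    return var_relative, var_absolute

# Illustrative: 10000 hypothetical end-of-day values around V_t = 100.
rng = np.random.default_rng(0)
values = 100.0 * np.exp(rng.normal(0.0, 0.01, size=10_000))
print(var_from_values(100.0, values, alpha=0.05))
```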

Thus the problem of estimating VaR is reduced to the problem of esti-mating the distribution of the portfolio value, or that of the portfolio return,at the end of the holding period. Typically this is done via the estimationof the distribution of the underlying risk factors. However, there is no sin-gle industry standard used to do this. The general techniques commonlyused include analytic techniques (the Delta-Normal method and the Delta-Gamma method), Historical simulation, and Monte Carlo simulation. Asdescribed in [4], the techniques differ along two lines:

Local/Full valuation This refers to whether the distribution is estimatedusing a Taylor series approximation (local valuation), or whether themethod generates a number of scenarios and estimates the distributionby revaluing a portfolio under these scenarios (full valuation).

Parametric/Nonparametric Parametric techniques involve the selectionof a distribution for the returns of the market variables, and the esti-mation of the statistical parameters of these returns. Nonparametrictechniques do not rely on these estimations, since it is assumed thatthe sample distribution is an adequate proxy for the population dis-tribution.

For large portfolios, the decision of which method is chosen presents a trade-off of speed against accuracy, with the fast analytic methods relying on roughapproximations; and the most realistic approach, Monte Carlo simulation,often considered too slow to be practical.

1.2 Problem Description

The aim of this project is to implement various VaR methods, and to compare these methods on portfolios consisting of the linear interest rate derivatives: forward rate agreements (FRAs) and interest rate swaps. The performance of a VaR method depends very much on the type of portfolio being considered, and the aim of this project is therefore to determine which of the methods is best for this type of portfolio. These derivatives represent an important share of the interest rate component of a bank's trading portfolios, since the more complicated interest rate derivatives do not trade much in South Africa. The methods to be considered are:

1. The Delta-Normal method (classic RiskMetrics approach)

7

Page 9: Good Masters Thesis on Var With Pca 10.1.1.197.557

2. Historical Simulation

3. Historical Simulation with Volatility updating

4. Monte Carlo simulation using Cholesky decomposition

5. Monte Carlo Simulation using Principal components analysis

Although the Delta-Normal method and Monte Carlo simulation are parametric, and Historical simulation is nonparametric, direct comparison is possible since the distributional parameters will be estimated using the same historical data as will be used to generate scenarios for the Historical simulation methods. The methods will be tested on various hypothetical portfolios, by estimating VaR for these portfolios over the historical period, and comparing the estimates to the actual losses that occurred. VaR will be estimated for these portfolios at both the 95% and 99% confidence levels. Implementation of the methods is done in VBA. This was decided on mostly for convenience, due to the large amount of data that needs to be read from and written to the spreadsheet.

Historical swap curve interest rate data for a period of 2 years will be used, from 2 July 2001 to 30 June 2003. For the historical simulation techniques, a 250-day window of data, moved forward one day at a time, will be used when estimating VaR. This is in accordance with the Basel Committee requirements for the length of the historical observation period. VaR can therefore be determined from day 251 onwards, that is, from 2 July 2002 to 30 June 2003. For the Delta-Normal method and the Monte Carlo simulation methods, VaR can be determined for the entire historical period, since all that is required is a set of volatility and correlation estimates, which can be obtained from day one. All volatility and correlation estimates are performed using the Exponentially Weighted Moving Average technique, with a decay factor of $\lambda = 0.94$, as described in Appendix A.1.


Chapter 2

Assumptions and Background

2.1 Choice of Return

The arithmetic return on an instrument at time $t + \delta t$ is given by

$$R^a\left(P^{(i)}_{t+\delta t}\right) = \left(P^{(i)}_{t+\delta t} - P^{(i)}_t\right)/P^{(i)}_t$$

The equivalent geometric return is given by

$$R^g\left(P^{(i)}_{t+\delta t}\right) = \ln\left(P^{(i)}_{t+\delta t}/P^{(i)}_t\right)$$

We define the arithmetic and geometric returns of the risk factors, $R^a(x^{(i)}_{t+\delta t})$ and $R^g(x^{(i)}_{t+\delta t})$, in exactly the same way. Both formulations have their own advantages and disadvantages. The arithmetic return of an instrument is needed when aggregating assets across portfolios, since the arithmetic return of a portfolio is a weighted linear sum of the arithmetic returns of the individual assets (the corresponding formula for the return of a portfolio in terms of the geometric returns of the assets is not linear). However, the geometric return aggregates better across time (the n-day return is equal to a linear sum of the 1-day returns, whereas the corresponding formula for the n-day arithmetic return is not so simple). See [4] for more detail on this.

Also, the geometric return is more meaningful economically since, for example, if the geometric returns of an instrument, or of a market variable, are normally distributed, then the corresponding instrument prices, or the market variables themselves, are lognormally distributed, and so will never be negative. However, if arithmetic returns are normally distributed, then, looking at the left tail of the distribution of these returns, $R^a(P^{(i)}_{t+\delta t}) \to -\infty$ implies that $P^{(i)}_{t+\delta t} < 0$.

If returns are small, then the difference between $R^a(P^{(i)}_{t+\delta t})$ and $R^g(P^{(i)}_{t+\delta t})$ is small, since we can use a Taylor series expansion to get

$$\begin{aligned}
R^g\left(P^{(i)}_{t+\delta t}\right) &= \ln\left(P^{(i)}_{t+\delta t}/P^{(i)}_t\right) \\
&= \ln\left(\left(P^{(i)}_{t+\delta t} - P^{(i)}_t\right)/P^{(i)}_t + 1\right) \\
&= \left(P^{(i)}_{t+\delta t} - P^{(i)}_t\right)/P^{(i)}_t - \frac{1}{2}\left(\left(P^{(i)}_{t+\delta t} - P^{(i)}_t\right)/P^{(i)}_t\right)^2 + \dots \\
&\approx \left(P^{(i)}_{t+\delta t} - P^{(i)}_t\right)/P^{(i)}_t \qquad (2.1.1)
\end{aligned}$$

to first order. (Equivalently for $R^g(x^{(i)}_{t+\delta t})$.) In practice, when modelling returns, for the above reasons, geometric returns are preferred, but the assumption is always made, for convenience, that the geometric return of a portfolio is the linear weighted sum of the geometric returns of the individual instruments.

Also, the definitions in (1.1.3) and (1.1.4) pertain to the arithmetic returns of a portfolio. In practice, regarding these definitions, the distribution is again modelled according to geometric returns, and the approximation in (2.1.1) is used.

2.2 The Holding Period

Although the holding period is typically a day, 10 days (for regulatory purposes), or a month, VaR calculations are always initially done on a holding period of 1 day, since this provides the maximum amount of historical information with which to estimate parameters. Time aggregation techniques are then used to transform the distribution for daily VaR into a distribution for the longer holding period (this is described in [3]). In the case where portfolio returns can be assumed to be i.i.d., it is possible to simply scale the VaR number obtained by multiplying it by the square root of the required time horizon. So from now on we assume that $\delta t = 1$ day.¹

¹Note that the short term market risk measurement which VaR concerns itself with is in contrast with credit risk measurement, where it is necessary to model changes in market variables over much longer periods of time (as well as to model the changing structure of the portfolio over time, whilst VaR, as mentioned previously, assumes a static portfolio).
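As a concrete instance of the square-root-of-time rule just mentioned, scaling a 1-day VaR to the 10-day regulatory horizon looks like the sketch below; it is valid only under the stated i.i.d. assumption, and the figures are hypothetical.

```python
import math

def scale_var(var_1day, horizon_days):
    # Square-root-of-time scaling, valid when daily returns are i.i.d.
    return var_1day * math.sqrt(horizon_days)

print(scale_var(1_000_000, 10))  # 10-day VaR from a 1-day VaR of R1m
```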


2.3 Choice of Risk Factors when Dealing with the Yield Curve

The distribution used to estimate the VaR of a portfolio is determined from the distribution of the risk factors to which the portfolio is exposed. The first step to measuring the risk of a portfolio is to determine what these risk factors are. When dealing with the yield curve, the question which arises, due to the nontraded nature of interest rates, is whether the underlying risk factors are in fact the rates themselves, or the corresponding zero coupon bond prices. In this project, they were taken to be the zero coupon bond prices². However, both methods are used in practice.

²This is the method used in [4].

2.4 Modelling Risk Factors

The easiest and most common model for risk factors is the geometric Brownian motion model; in other words, the risk factors follow the process

$$dx^{(i)}_t = \mu_i x^{(i)}_t\,dt + \sigma_i x^{(i)}_t\,dW^{(i)}_t$$

for $1 \le i \le k$, where $\mu_i$ and $\sigma_i$ represent the instantaneous drift and volatility, respectively, of the process for the $i$th risk factor, and $(W^{(i)}_t)_{t \ge 0}$ are Brownian motions, which will typically be correlated. To determine an expression for $x^{(j)}_t$, let $g(x_j) = \ln x_j$. Then $\frac{\partial g}{\partial x_i} = \delta_{ij}\frac{1}{x_i}$ and $\frac{\partial^2 g}{\partial x_i \partial x_j} = -\delta_{ij}\frac{1}{x_i^2}$, where $\delta_{ij}$ is the Kronecker delta. By the multidimensional Ito formula we get

$$\begin{aligned}
dg(x^{(j)}_t) &= \sum_{i=1}^{k} \frac{\partial g}{\partial x_i}\,dx^{(i)}_t + \frac{1}{2}\sum_{i=1}^{k}\sum_{j=1}^{k} \frac{\partial^2 g}{\partial x_i \partial x_j}\,dx^{(i)}_t\,dx^{(j)}_t \qquad (2.4.2)\\
&= \sum_{i=1}^{k} \frac{\partial g}{\partial x_i}\,dx^{(i)}_t + \frac{1}{2}\sum_{i=1}^{k} \frac{\partial^2 g}{\partial x_i^2}\left(dx^{(i)}_t\right)^2 \\
&= \frac{1}{x^{(j)}_t}\,dx^{(j)}_t + \frac{1}{2}\left(\frac{-1}{(x^{(j)}_t)^2}\right)\left(dx^{(j)}_t\right)^2 \\
&= \left(\mu_j - \frac{1}{2}\sigma_j^2\right)dt + \sigma_j\,dW^{(j)}_t
\end{aligned}$$

Thus, discretizing this over $\delta t$,

$$\begin{aligned}
x^{(j)}_{t+\delta t} &= x^{(j)}_t \exp\left(\left(\mu_j - \frac{1}{2}\sigma_j^2\right)\delta t + \sigma_j\,\delta W^{(j)}_t\right) \\
&= x^{(j)}_t \exp\left(\left(\mu_j - \frac{1}{2}\sigma_j^2\right)\delta t + \sigma_j\sqrt{\delta t}\,Z_j\right) \\
&\approx x^{(j)}_t \exp\left(\sigma_j\sqrt{\delta t}\,Z_j\right) \qquad (2.4.3)
\end{aligned}$$

where $Z \sim \phi_k(0, \Sigma)$ and $\Sigma$ is the variance-covariance matrix which will be defined in (4.2.2). (2.4.3) follows since the holding period $\delta t$ is only one day.

The major problem with the normality assumption is that, as described in [5], the distribution of daily returns of any risk factor would in reality typically show significant amounts of positive kurtosis. This leads to fatter tails, with extreme outcomes occurring much more frequently than would be predicted by the normal distribution assumption, which would lead to an underestimation of VaR (since VaR is concerned with the tails of the distribution). But the model is generally considered to be an adequate description of the process followed by risk factors such as equities and exchange rates.

However, when dealing with the yield curve, the assumption of normality of returns clearly does not give an adequate description of reality, since interest rates are known to be mean reverting, and zero coupon bonds are subject to pull-to-par effects towards the end of their life. Although a simple interest rate model, such as a short rate model, may price an instrument more accurately (since it would capture the mean reverting property of interest rates), short rate models assume that all rates along the curve are perfectly correlated with the short rate, and thus will all move in the same direction. Since this is of course not the case in reality, it doesn't give much of an indication of the risk of a portfolio to moves in the yield curve. A multi-factor model increases complexity dramatically, and still does not give us what we want. To illustrate this, suppose that rates have been on the increase for a while. An interest rate model would model a decrease in rates due to the mean reverting effect which would come into play. However, what a risk measurement technique is in fact interested in is a continued upward trend. So although the normality assumption would lead to unrealistic term structures over longer time horizons, since we are dealing with a holding period of a day, the assumption seems to be appropriate for risk measurement. To quote [3], "for risk management purposes, what matters is to capture the richness in movements of the term structure, not necessarily to price today's instruments to the last decimal point."
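The discretization (2.4.3) is simple to simulate. Below is a minimal single-factor Python sketch with hypothetical parameter values (Section 6.2 shows how the correlated multi-factor case is handled); the names are illustrative, not the thesis's VBA implementation.

```python
import numpy as np

def simulate_one_day(x_t, sigma_annual, n_scenarios, dt=1/250, seed=0):
    """One-day scenarios for a single risk factor under (2.4.3); the drift
    term is dropped since (mu - sigma^2/2)*dt is negligible over one day."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n_scenarios)
    return x_t * np.exp(sigma_annual * np.sqrt(dt) * z)

# e.g. a zero coupon bond price of 0.95 with 2% annualized volatility
print(simulate_one_day(0.95, 0.02, n_scenarios=5))
```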

However, when dealing with the yield curve, the assumption of normalityof returns clearly does not give an adequate description of reality, since inter-est rates are known to be mean reverting, and zero coupon bonds are subjectto pull-to-par effects towards the end of their life. Although a simple inter-est rate model, such as a short rate model, may price an instrument moreaccurately (since it would capture the mean reverting property of interestrates), short rate models assume that all rates along the curve are perfectlycorrelated with the short rate, and thus will all move in the same direction.Since this is of course not the case in reality, it doesn’t give much of an in-dication of the risk of a portfolio to moves in the yield curve. A multi-factormodel increases complexity dramatically, and still does not give us what wewant. To illustrate this, suppose that rates have been on the increase for awhile. An interest rate model would model a decrease in rates due to themean reverting effect which would come into play. However, what a riskmeasurement technique is in fact interested in, is a continued upward trend.So although the normality assumption would lead to unrealistic term struc-tures over longer time horizons, since we are dealing with a holding period ofa day, the assumption seems to be appropriate for risk measurement, sinceto quote [3], ”for risk management purposes, what matters is to capture therichness in movements of the term structure, not necessarily to price today’sinstruments to the last decimal point.”

2.5 The Historical Data

The historical data provided consists of the following array of swap curve market rates:


• overnight, 1 month, and 3 month rates (simple yields)

• 3v6, 6v9, 9v12, 12v15, 15v18, 18v21 and 21v24 FRA rates (simple yields), and

• 3, 4, 5, . . . , 10, 12, 15, 20, 23, 25, 30 year swap rates (NACQ),

for the period from 2 July 2001 to 30 June 2003. Each array of rates was bootstrapped to determine a daily NACC swap curve. This was done using the monotone piecewise cubic (MPC) interpolation method³. Given that a large portfolio could depend on any number of the rates on the curve, and that measuring the risk of a portfolio typically involves the estimation of a variance-covariance matrix of the underlying risk factors, it can become infeasible, if not impossible, to estimate the risk of a portfolio to each of these rates individually. A standard set of maturities needs to be defined in order to implement a risk measurement technique with a fixed set of data requirements. The rates at these standard maturities are then assumed to describe the interest rate risk of the entire curve, and all calculations (of volatilities, correlations, etc.) are done based on this standard set of maturities. The standard maturities selected consist of the following 24 maturities, in days (these correspond as closely as possible to the actual/365 daycount used in South Africa):

1, 30, 91, 182, 273, 365, 456, 547, 638, 730, 1095, 1460, 1825, 2190, 2555,2920, 3285, 3650, 4380, 5475, 6205, 7300, 9125 and 10950 days

Figure 2.1 shows the daily zero coupon yield data over the 2 year period for the standard set of maturities. From this graph it is evident that interest rates were quite volatile during this period. In particular, note the 'spike' in almost all the rates which occurs in December 2001. (This event significantly affects the results seen.) Also, note that in June 2002 the entire yield curve inverts, resulting in the term structure changing from being approximately increasing to approximately decreasing.

³For an explanation of this interpolation method, see [6]. The bootstrapping code was provided by Graeme West.


Figure 2.1: Daily zero coupon yields (NACC)


Chapter 3

The Portfolios

3.1 Decomposing Instruments into their Building Blocks

Since it is impossible to measure the risk of each of the enormous number of risk factors which a portfolio could depend on, simplifications are required. The first step to measuring the risk of a portfolio is to decompose the instruments making up the portfolio into simpler components. Both FRAs and swaps can be valued by decomposing them into a series of zero-coupon instruments¹. The next step is to map these positions onto positions for which we have prices of the components. Finally, we estimate the VaR of the mapped portfolio.

¹Although FRAs and swaps can both be represented in terms of implied forward rates, decomposing them into zero coupon bonds makes them linear in the underlying risk factors.

A forward rate agreement (FRA) is an agreement that a certain interest rate will apply to a certain notional principal during a specific future period of time [7]. If the contract, with a notional principal of $L$, is entered into at time $t$, and applies for the period $T_1$ to $T_2$ ($T_2 - T_1$ is always 3 months), then the party on the long side of the contract agrees to pay a fixed rate $r$ (the FRA rate) and receive a floating rate $r_f$ (3 month JIBAR, a simple yield rate) over $T_1$ to $T_2$. This is equivalent to the following cashflows:

time $T_1$: $-L$
time $T_2$: $+L(1 + r(T_2 - T_1))$

So at any time $s$, where $t \le s \le T_1$, a FRA can be valued by discounting these cashflows. In other words

$$V = L\,B(s, T_2)(1 + r(T_2 - T_1)) - L\,B(s, T_1)$$

where $B(s, T)$ is the discount factor at $s$ for the period $s$ to $T$. The value to the short side is just the negative of this amount.

A swap is an agreement between two counterparties to exchange a series of future cashflows. The party on the long side of the swap agrees to make a series of fixed payments and to receive a series of floating payments, typically every 3 months, based on the notional principal $L$. The floating payments are based on the floating rate (typically 3 month JIBAR) observed at the beginning of that 3 month period. Payments always occur in arrears, i.e. they follow the natural time lag which many interest rate derivatives follow, allowing us to value a swap in terms of bonds. The value of the swap to the long side can be determined as the exchange of a fixed rate bond for a floating rate bond:

$$V = B_{\text{float}} - B_{\text{fix}}$$

The value to the short side is just the negative of this. Let $L$ be the notional principal and $R$ the effective swap rate per period, let there be $n$ payments remaining in the life of the swap, and let the time till the $i$th payment be $t_i$, $1 \le i \le n$. Then the value of the fixed leg at time $t$ is

$$B_{\text{fix}} = RL\sum_{i=1}^{n} B(t, t_i)$$

If $t$ is a cashflow date, then the value of the floating leg is just

$$B_{\text{float}} = L(1 - B(t, t_n))$$

If $t$ is not a cashflow date, then the value of the fixed leg is unchanged, but the value of the floating leg is determined using the prespecified JIBAR rate for the next payment. Suppose $t_{i-1} < t < t_i$. Let the JIBAR rate for $(t_{i-1}, t_i]$ be $r_f$ (a simple yield rate). Then the value of the floating leg is

$$B_{\text{float}} = L\left(1 + r_f\,\frac{t_i - t_{i-1}}{365}\right)B(t, t_i) - L\,B(t, t_n)$$

The formula above is derived in [8]. This method of analysing risk by means of discounted cashflows enables us to get an idea of the sensitivity of the market value of an instrument to changes in the term structure of interest rates.
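A minimal Python sketch of these two valuation formulae follows. The discount curve, rates and function names are hypothetical illustrations of the formulas above, not the thesis's own portfolio valuation module (which is VBA, Appendix B.3); times are taken in years for simplicity.

```python
import numpy as np

def fra_value(L, r, T1, T2, B):
    """Value the FRA cashflows above: -L at T1 and +L(1 + r*(T2 - T1)) at T2.
    B(T) is the discount factor for maturity T."""
    return L * B(T2) * (1.0 + r * (T2 - T1)) - L * B(T1)

def swap_value_long(L, R, pay_times, B, r_f=None, full_period=0.25):
    """Value a swap to the long side as B_float - B_fix. pay_times are the
    remaining payment times t_1 < ... < t_n; R is the fixed rate per period.
    Off cashflow dates, r_f is the JIBAR rate fixed for the current period
    and full_period is its year fraction (t_i - t_{i-1})."""
    b_fix = R * L * sum(B(t) for t in pay_times)
    if r_f is None:  # valuing on a cashflow date
        b_float = L * (1.0 - B(pay_times[-1]))
    else:            # between cashflow dates
        b_float = L * (1.0 + r_f * full_period) * B(pay_times[0]) - L * B(pay_times[-1])
    return b_float - b_fix

# Illustrative flat curve at 10% NACC.
B = lambda T: np.exp(-0.10 * T)
print(fra_value(10e6, 0.10, 0.25, 0.50, B))
print(swap_value_long(1e6, 0.025, [0.25 * k for k in range(1, 13)], B))
```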


3.2 Mapping Cashflows onto the set of Standard Maturities

In general, we have a cashflow occurring at time $t_2$, and two standard maturities $t_1$ and $t_3$ which bracket $t_2$. We need a way of determining the value of the discount factor at $t_2$, which will necessarily involve some type of interpolation. There are various procedures for mapping the cashflows of interest rate positions. The technique of cash flow mapping used here, again consistent with [4, Ch 6], splits each cashflow into two cashflows occurring at its closest standard vertices, and has the property that it assigns weights to these two vertices in such a way that

• Market value is preserved, i.e. the total market value of the two standardized cash flows is equal to the market value of the original cash flow.

• Variance is preserved, i.e. the variance of the portfolio of the two standardized cash flows is equal to the variance of the original cash flow.

• Sign is preserved, i.e. the standardized cash flows have the same sign as the original cash flow.

The method is described in Appendix A.2. This mapping technique is applied to all cashflows in a portfolio, except of course in the case where the cashflow corresponds exactly to one of the standard vertices, where mapping becomes redundant. Using the mapped cashflows, we are able to determine VaR for the portfolio, since we have the required volatilities and correlations of the bond prices for these maturities.

In the (nonparametric) Historical Simulation techniques, rather than mapping the position, a cashflow occurring at a non-standard maturity $t_2$ is valued by a simple linear interpolation between the zero coupon yields to maturity which occur at the standard maturities $t_1$ and $t_3$, and, in the case of Hull-White Historical simulation, also between the volatilities which occur at the standard maturities.


Chapter 4

The Delta-Normal Method

4.1 Background

The Delta-Normal method (classic RiskMetrics approach) is a parametric, analytic technique where the distributional assumption made is that the daily geometric returns of the market variables are multivariate normally distributed with mean return zero. The major advantage of this is that we can use the normal probability density function to estimate VaR analytically, using just a local valuation.

The normality assumption for returns of the underlying risk factors was discussed in Section 2.4. The more problematic assumption in general is the linearity assumption, since a first order approximation won't be able to adequately describe the risk of portfolios containing optionality. However, since this project is only concerned with linear instruments, the assumption is not problematic here.

The advantages of this method include its speed (since it is a local valuation technique) and simplicity, and the fact that the distribution of returns need not be assumed to be stationary through time, since volatility updating is incorporated into the parameter estimation.

4.2 Description of Method

VaR is determined as the absolute VaR based on the distribution of portfolio returns, defined in (1.1.4). This in effect means that Delta-Normal VaR is a direct application of Markowitz portfolio theory, which measures the risk of a portfolio as the standard deviation of returns, based on the variances and covariances of the returns of the underlying instruments. The only real difference here is that in Markowitz portfolio theory, the underlying variables are taken to be the instruments making up a portfolio, whereas in Delta-Normal VaR, because the distributional assumptions are based on the returns of the risk factors and not on the instruments, the underlying variables are taken to be the risk factors themselves. As a consequence of the linearity assumption, we can in fact consider a portfolio of holdings in $n$ instruments as a portfolio of holdings, $W = (W_1, \dots, W_k)$, in the $k$ underlying risk factors, where $k$ is not necessarily equal to $n$. Scaling these to determine the relative holdings $\omega_i = W_i/V_t$, $i = 1, \dots, k$, so that the latter sum to unity, we get the vector $\omega = (\omega_1, \dots, \omega_k)$.

Let $R^{(i)}_t$ denote the arithmetic return of the $i$th risk factor at time $t$. The (arithmetic) return of the portfolio at time $t$ is

$$R_t = \sum_{i=1}^{k} \omega_i R^{(i)}_t$$

The expected return of the portfolio at time $t$ is

$$E[R_t] = E\left[\sum_{i=1}^{k} \omega_i R^{(i)}_t\right] = \sum_{i=1}^{k} \omega_i E[R^{(i)}_t] = 0$$

using the approximation in (2.1.1) and the assumption that the geometric returns have mean zero. The variance of the portfolio return at time $t$ is given by

$$\mathrm{Var}[R_t] = \omega^T \Sigma\,\omega \qquad (4.2.1)$$

where $\Sigma$ is the variance-covariance matrix given by

$$\Sigma = \begin{pmatrix}
\sigma_1^2 & \sigma_1\sigma_2\rho_{1,2} & \dots & \sigma_1\sigma_k\rho_{1,k} \\
\sigma_2\sigma_1\rho_{1,2} & \sigma_2^2 & \dots & \sigma_2\sigma_k\rho_{2,k} \\
\vdots & \vdots & \ddots & \vdots \\
\sigma_k\sigma_1\rho_{1,k} & \sigma_k\sigma_2\rho_{2,k} & \dots & \sigma_k^2
\end{pmatrix} \qquad (4.2.2)$$

Here $\sigma_i$ is the standard deviation (volatility) of the $i$th risk factor and $\rho_{i,j}$ is the correlation coefficient of $R^{(i)}_t$ and $R^{(j)}_t$, defined by $\rho_{i,j} := \mathrm{Cov}[R^{(i)}_t, R^{(j)}_t]/(\sigma_i\sigma_j)$. See Appendix A.3 for the derivation of this. Now, under the assumption of normality of the distribution of $R_{t+\delta t}$, given the value of the standard deviation of $R_{t+\delta t}$, $\mathrm{SDev}[R_{t+\delta t}] = \sqrt{\mathrm{Var}[R_{t+\delta t}]}$, the VaR can be determined analytically via the $(100 \times \alpha)$-th percentile of $R_{t+\delta t}$. It should first be noted that volatility is always an annual measure, so to measure daily VaR we need to include a scaling factor of $\sqrt{250}$ (where 250 is the approximate number of business days in a year). VaR is determined as

$$\begin{aligned}
\mathrm{VaR}(\text{absolute}) &= -|V_t| \times z_\alpha \times \mathrm{SDev}[R_{t+\delta t}] \times \frac{1}{\sqrt{250}} \\
&= -|V_t| \times z_\alpha \times \sqrt{\frac{\omega^T \Sigma\,\omega}{250}} \\
&= -z_\alpha \times \sqrt{\frac{W^T \Sigma\,W}{250}} \qquad (4.2.3)
\end{aligned}$$

where $z_\alpha$ is the inverse of the cumulative normal distribution function evaluated at $\alpha$. The variance-covariance matrix is estimated using the Exponentially Weighted Moving Average scheme.
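Equation (4.2.3) is a one-line computation once the mapped positions and the EWMA covariance matrix are available. A minimal Python sketch follows; the position vector and covariance matrix are hypothetical, and the sign convention (z_alpha negative, so a positive loss figure is returned) is one of several in use.

```python
import numpy as np
from scipy.stats import norm

def delta_normal_var(W, Sigma, alpha=0.05):
    """Absolute VaR per (4.2.3). W is the vector of mapped monetary positions
    in the k risk factors; Sigma is the annualized covariance matrix of their
    returns. z_alpha = Phi^{-1}(alpha) is negative for small alpha."""
    z_alpha = norm.ppf(alpha)
    return -z_alpha * np.sqrt(W @ Sigma @ W / 250.0)

# Two risk factors: 15% and 20% annualized vol, correlation 0.8.
sig = np.array([0.15, 0.20])
rho = np.array([[1.0, 0.8], [0.8, 1.0]])
Sigma = np.outer(sig, sig) * rho
print(delta_normal_var(np.array([5e6, -2e6]), Sigma, alpha=0.01))
```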


Chapter 5

Historical Value at Risk

5.1 Classical Historical Simulation

5.1.1 Background

Here the distribution of the returns of the risk factors is determined by drawing samples from the time series of historical returns. This is a nonparametric technique, since no distributional assumptions are made, other than assuming stationarity of the distribution of returns of the market variables (in particular their volatility) through time. If this assumption did not hold, the returns would not be i.i.d., and the drawings would originate from different underlying distributions. The assumption does not hold in reality, since, as described in [4], volatility changes through time, and periods of high volatility and low volatility tend to cluster together. Historical simulation is a full revaluation technique. A possible problem with the implementation of this method could be a lack of sufficient historical data. Also, comparing this method to Monte Carlo simulation: whereas Monte Carlo will typically simulate about 10000 sample paths for each risk factor, Historical simulation considers only one sample path for each risk factor (the one that actually happened).

The advantage of Historical simulation is that since it is nonparametric, all aspects of the actual distribution are captured, including kurtosis and fat tails. Also, the full revaluation means that all nonlinearities are accounted for, although it does make this technique more computationally intensive than the Delta-Normal method.


5.1.2 Description of Method

The approach is straightforward. Let the window be of size $N \ge 250$, to be in accordance with the regulatory requirements for the minimum length of the observation window. Scenarios are generated by determining a time series of historical day-to-day returns of the market variables over the last $N$ business days, and then assuming that the return of each market variable from $t$ to $t+1$ is the same as the return was over each day in the time series.

In other words, as described in [9], if $x^{(j)}_t \in x_t$ is the value of a particular market variable at time $t$, then a set of hypothetical values $x^{(j,i)}_{t+1}$ for all $i \in \{t-N, \dots, t-1\}$ is determined by the relationship

$$\ln\left[\frac{x^{(j,i)}_{t+1}}{x^{(j)}_t}\right] = \ln\left[\frac{x^{(j)}_{i+1}}{x^{(j)}_i}\right] \qquad (5.1.1)$$

$$\Rightarrow\; x^{(j,i)}_{t+1} = x^{(j)}_t \cdot \frac{x^{(j)}_{i+1}}{x^{(j)}_i}$$

This is done for all $j \in \{1, \dots, k\}$, to determine the matrix $x^{(j,i)}_{t+1}$, $1 \le j \le k$, $1 \le i \le N-1$, and a full revaluation of each instrument in the portfolio is done for each column using (1.1.1), to determine $P^i_{t+1}$, $i \in \{t-N, \dots, t-1\}$. From these, we determine $V^i_{t+1}$, $i \in \{t-N, \dots, t-1\}$, and sort the results to generate a histogram describing the pmf for the future portfolio value. VaR is then determined using definition (1.1.3), by determining the mean and the appropriate percentile of this distribution.

Since interest rates are not taken to be the risk factors, and zero coupon bond prices are, to relate these to the above, let $B(t, \tau)$ denote the price of a $\tau$-period zero coupon bond at time $t$, which corresponds to the continuously compounded annual yield to maturity $r_t(\tau)$, so $B(t, \tau) = \exp(-r_t(\tau)\tau)$. Then (5.1.1) becomes

$$\ln\left[\frac{B^{(i)}(t+1, \tau)}{B(t, \tau)}\right] = \ln\left[\frac{B(i+1, \tau)}{B(i, \tau)}\right]$$

$$\Rightarrow\; -\left[r^{(i)}_{t+1}(\tau) - r_t(\tau)\right]\tau = -\left[r_{i+1}(\tau) - r_i(\tau)\right]\tau$$

$$\Rightarrow\; r^{(i)}_{t+1}(\tau) = r_t(\tau) + r_{i+1}(\tau) - r_i(\tau)$$
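In terms of the zero rates, the scenario construction therefore amounts to adding each historical one-day rate change to today's curve. A minimal Python sketch follows; the names are illustrative, and `reprice` stands in for the full revaluation of (1.1.1).

```python
import numpy as np

def historical_scenarios(rates, window=250):
    """(5.1.1) in rate form: each of the last `window` one-day changes in the
    (T x k) array of NACC zero rates is added to today's curve. Returns a
    (window x k) array of hypothetical curves for t+1."""
    changes = np.diff(rates[-(window + 1):], axis=0)  # r_{i+1}(tau) - r_i(tau)
    return rates[-1] + changes                        # r_t(tau) + change

def historical_var(reprice, rates, window=250, alpha=0.05):
    """Classical historical simulation VaR by full revaluation; `reprice`
    maps a rate curve to a portfolio value. Relative VaR, per (1.1.3)."""
    values = np.array([reprice(c) for c in historical_scenarios(rates, window)])
    return values.mean() - np.percentile(values, 100 * alpha)
```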


5.2 Historical Simulation with Volatility Updating (Hull-White Historical Simulation)

5.2.1 Background

This method is proposed in [2]. It is an extension of traditional Historical Simulation which does away with the drawback of assuming constant volatility. [2] mentions that the distribution of a market variable, when scaled by an estimate of its volatility, is often found to be approximately stationary. If the volatility of the return of a market variable at the current time $t$ is high, then, due to the tendency of volatility to cluster, one would expect a high return at time $t+1$. If historical volatility was in reality low relative to the value at $t$, however, the historical returns would underestimate the returns one would expect to occur from $t$ to $t+1$. The reverse is true if the volatility is low at time $t$ and the historical volatility is high relative to this. This approach incorporates the exponentially weighted moving average volatility updating scheme, used in the Delta-Normal method, into classical Historical Simulation.

5.2.2 Description of Method

Rather than assuming that the return of each market variable from $t$ to $t+1$ is the same as it was over each day in the time series, it is now the returns scaled by their volatility which we assume will reoccur. This is described in [9]. If $x^{(j)}_t \in x_t$ is the value of a particular market variable at time $t$, and $\sigma_{t,j}$ is the volatility at time $t$ of this market variable, then a set of hypothetical values $x^{(j,i)}_{t+1}$ for all $i \in \{t-N, \dots, t-1\}$ is determined by the relationship

$$\frac{1}{\sigma_{t,j}}\ln\left[\frac{x^{(j,i)}_{t+1}}{x^{(j)}_t}\right] = \frac{1}{\sigma_{i,j}}\ln\left[\frac{x^{(j)}_{i+1}}{x^{(j)}_i}\right] \qquad (5.2.2)$$

$$\Rightarrow\; x^{(j,i)}_{t+1} = x^{(j)}_t\left(\frac{x^{(j)}_{i+1}}{x^{(j)}_i}\right)^{\sigma_{t,j}/\sigma_{i,j}}$$

and we proceed as in Section 5.1.2. Relating this to interest rates, using the same notation as above, (5.2.2) becomes

$$\frac{1}{\sigma_{t,j}}\ln\left[\frac{B^{(i)}(t+1, \tau)}{B(t, \tau)}\right] = \frac{1}{\sigma_{i,j}}\ln\left[\frac{B(i+1, \tau)}{B(i, \tau)}\right]$$

$$\Rightarrow\; -\frac{1}{\sigma_{t,j}}\left[r^{(i)}_{t+1}(\tau) - r_t(\tau)\right]\tau = -\frac{1}{\sigma_{i,j}}\left[r_{i+1}(\tau) - r_i(\tau)\right]\tau$$

$$\Rightarrow\; r^{(i)}_{t+1}(\tau) = r_t(\tau) + \frac{\sigma_{t,j}}{\sigma_{i,j}}\left[r_{i+1}(\tau) - r_i(\tau)\right]$$
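The only change relative to the classical scheme is the rescaling of each historical rate change by $\sigma_{t,j}/\sigma_{i,j}$. A minimal Python sketch, in the same shape as the one in Section 5.1.2 (names illustrative; the volatility array would come from the EWMA recursion of Appendix A.1):

```python
import numpy as np

def hull_white_scenarios(rates, vols, window=250):
    """Volatility-updated scenarios per (5.2.2): each historical one-day rate
    change is multiplied by sigma_{t,j}/sigma_{i,j} before being added to
    today's curve. `rates` and `vols` are (T x k) arrays, with vols[i] the
    EWMA volatility per maturity at historical day i."""
    changes = np.diff(rates[-(window + 1):], axis=0)   # r_{i+1} - r_i
    sigma_i = vols[-(window + 1):-1]                   # sigma_{i,j}
    return rates[-1] + changes * (vols[-1] / sigma_i)  # scale by sigma_t/sigma_i
```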


Chapter 6

Monte Carlo Simulation

6.1 Background

Monte Carlo simulation techniques are by far the most flexible and powerful, since they are able to take into account all non-linearities of the portfolio value with respect to its underlying risk factors, and to incorporate all desirable distributional properties, such as fat tails and time varying volatilities. Also, Monte Carlo simulations can be extended to apply over longer holding periods, making it possible to use these techniques for measuring credit risk. However, these techniques are also by far the most expensive computationally. Typically the number of simulations of each random variable needed, in order to get a sample which reasonably approximates the actual distribution, is around 10000.

Like Historical simulation, this is a full revaluation approach. Here, however, rather than drawing samples from the distribution of historical returns, a stochastic process is selected for each of the risk factors, the parameters of the returns process are estimated (again using exponentially weighted moving averages here), and scenarios are determined by simulating price paths for each of these risk factors over the holding period. The portfolio is revalued under each of these scenarios to determine a pmf of portfolio values, given by a histogram. VaR is determined as in historical simulation, using definition (1.1.3).

Apart from the computational time needed, another potential drawback of this method is that since specific stochastic processes need to be selected for each market variable, the method is very exposed to model risk. Also, sampling variation, due to only being able to perform a limited number of simulations, can be a problem.


6.2 Monte Carlo Simulation using Cholesky decomposition

Each bond price corresponding to the set of standard maturities is simulated according to its variance-covariance matrix, using (2.4.3). Returns are generated by performing a Cholesky decomposition of the matrix to determine the lower triangular matrix $L$ such that $\Sigma = LL^T$. (A derivation of the Cholesky decomposition is contained in [10].) Then we can simulate normal random numbers according to this distribution by simulating the independent normal random vector $Z \sim \phi_k(0, I)$ and calculating the matrix product $X = LZ$, since we have that $E[X] = L\,E[Z] = 0$, and

$$\begin{aligned}
\mathrm{Var}[X] &= E[(LZ)(LZ)^T] \\
&= E[LZZ^T L^T] \\
&= L\,E[ZZ^T]\,L^T \\
&= LL^T \\
&= \Sigma
\end{aligned}$$

An issue to note here is that Cholesky decomposition can only be performed on matrices which are positive semi-definite¹. Theoretically, this will be the case, since the formula for the variance of a portfolio, given by (4.2.1), where $\omega$ is a (nonzero) vector of portfolio weights, can never be negative. However, in practice, when dealing with large covariance matrices with highly correlated components, this theoretical property can break down. Thus covariance matrices should always be checked for positive semi-definiteness before trying to apply Cholesky decomposition (or principal components analysis, which is discussed below). A solution to the problem is described in [11].

¹A symmetric matrix $A$ is positive semidefinite if and only if $b^T A b \ge 0$ for all nonzero $b$.
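A minimal Python sketch of the scheme follows (hypothetical inputs and names). Note that numpy's `cholesky` raises an error on a matrix that is not positive definite, which is exactly the practical issue discussed above.

```python
import numpy as np

def mc_scenarios_cholesky(prices, Sigma, n_sims=10_000, dt=1/250, seed=0):
    """Correlated one-day bond price scenarios via (2.4.3). Rows of Z @ L.T
    are distributed N(0, Sigma) because Sigma = L L^T; Sigma is the
    annualized covariance matrix of the k bond price returns."""
    L = np.linalg.cholesky(Sigma)  # fails if Sigma is not positive definite
    rng = np.random.default_rng(seed)
    Z = rng.standard_normal((n_sims, len(prices)))
    return prices * np.exp(np.sqrt(dt) * (Z @ L.T))
```

Each row of the result is a scenario curve of bond prices under which the portfolio is revalued.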

6.3 Monte Carlo Simulation using Principal Components Analysis

Principal Component Analysis (PCA) provides a way of decreasing the dimensionality of a set of random variables which are highly correlated. It is in fact often possible to describe a very large proportion of the movements in these random variables using a relatively small number of principal components, which can very effectively decrease the computational time needed for simulation. As could be seen in Figure 2.1, the movements of rates along the yield curve are highly interdependent, thus PCA is often applied to the yield curve. The random variables are now taken to be the returns in zero coupon yields at all 24 standard maturities, as opposed to the bond price returns used in the previous methods considered.

Principal components are hypothetical random variables that are constructed to be uncorrelated with one another. Suppose we have a vector of returns in zero coupon yields (dropping the subscript $t$ for now), $x = (x_1, \dots, x_k)^T$. The first step is to determine the normalized $k \times 1$ vector of returns $y = (y_1, \dots, y_k)^T$, where $y_i = x_i/\sigma_i$². $y$ is transformed into a vector of principal components, $d = (d_1, \dots, d_k)$, where each principal component is a simple linear combination of the original random vector. At the same time, it is determined how much of the total variation of the original vector is described by each of the principal components.

Suppose the $k \times k$ correlation matrix of $y$ is given by $\rho_y = [\rho_{ij}]$, $1 \le i, j \le k$. It is determined from the covariance matrix (4.2.2) of $x$, found by exponentially weighted moving averages. To define the principal components of $y$, as described in [12], define the $k \times 1$ vector $\lambda := (\lambda_1, \dots, \lambda_k)^T$ to be the vector of eigenvalues of $\rho_y$, ordered in decreasing order of magnitude. Also define the $k \times k$ matrix $\nu := [\nu^{(1)}, \dots, \nu^{(k)}]$, where the columns $\nu^{(i)}$, $1 \le i \le k$, are the orthonormal³ eigenvectors of $\rho_y$ corresponding to $\lambda_i$. The principal components are defined by

$$d := \nu^T y \qquad (6.3.1)$$

Since $\nu$ is orthogonal, premultiplying by $\nu$ gives

$$\nu\nu^T y = y = \nu d \qquad (6.3.2)$$

$$\Rightarrow\; y_i = d_1\nu^{(1)}_i + d_2\nu^{(2)}_i + \dots + d_k\nu^{(k)}_i \qquad (6.3.3)$$

In other words, $y$ can be determined as a linear combination of the components. Now, since the new random variables $d_i$ are ordered by the amount of variation they explain⁴, considering the $i$th entry of $y$, we get

$$y_i = d_1\nu^{(1)}_i + d_2\nu^{(2)}_i + \dots + d_m\nu^{(m)}_i + \varepsilon_i \approx d_1\nu^{(1)}_i + d_2\nu^{(2)}_i + \dots + d_m\nu^{(m)}_i \qquad (6.3.4)$$

²The normalized random variables are defined as $y_i = (x_i - \mu_i)/\sigma_i$, where $\mu_i$ and $\sigma_i$ are the mean and standard deviation of $x_i$, but as before, we assume a mean return of 0. The analysis is done on these normalized variables, and once the components have been determined, we simply transform back to the original form of the data.

³A $k \times k$ matrix $U$ is orthonormal iff $U^T U = I = UU^T$.

⁴The total variation described by the $i$th eigenvector is measured as $\lambda_i / \sum_{j=1}^{k} \lambda_j$.

where $m < k$ and the error term $\varepsilon_i$ is introduced since we are only using the first $m$ components. These $m$ components are then considered to be the key risk factors, with the rest of the variation in $y$ being considered as noise.

As shown in [13], the principal components $d_i$ are also orthogonal, in other words they are uncorrelated, so their covariance matrix is just the diagonal matrix of their variances. Also, their variances are equal to their corresponding eigenvalues $\lambda_i$. This means they can easily be simulated independently, without the need for Cholesky decomposition, which cuts down the computational requirements even further. Now, using the approximation (6.3.4), we can rewrite (6.3.2) as

$$y \approx \hat{\nu}\hat{d} \qquad (6.3.5)$$

where $\hat{\nu}$ is the $k \times m$ matrix of the first $m$ eigenvectors, and $\hat{d} = (d_1, \dots, d_m)^T$ is an $m \times 1$ vector. Simulating $\hat{d}$ enables us to obtain an approximation for $y$. The more components that are taken, the better the approximation. Here the assumption was made that the yields are lognormally distributed, so by simulating the components according to the normal distribution, we obtain an approximation for the vector of normalized returns $y$ as a linear combination of these (this is also normally distributed). This is transformed back into the form of the returns $x$, from which we can determine a scenario vector for the yields themselves.

When considering a vector $x$ of yield curve returns, typically $m$ is taken to be 3 (the first 3 components are assumed to describe most of the movement in the yield curve). The eigenvectors of $\rho_y$ describe the modes of fluctuation of $y$ [12]. Specifically, the first eigenvector describes a parallel shift of all rates on the curve, the second eigenvector describes a twist or steepening of the yield curve, and the third describes a butterfly move, i.e. short rates and long rates going in one direction, and intermediate rates going in the opposite direction. The remaining eigenvectors describe other modes of fluctuation. To illustrate this, Figure 6.1 shows the correlation matrix for 30 June 2003, and the decomposition of this matrix into its eigenvectors. Note how the signs of the entries of the first eigenvector are all the same: this corresponds to a parallel shift in the yield curve. Likewise, the second component corresponds to half the rates going in one direction and the other half going in the other direction (a twist in the yield curve), and the third corresponds to a butterfly move. Also, the total variation described by the first 3 eigenvectors is seen here to be approximately 90%.
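A minimal Python sketch of the whole scheme follows (hypothetical inputs and names; the thesis's own PCA module is in VBA). Only $m$ independent standard normals per scenario are needed, which is where the computational saving over the Cholesky approach comes from.

```python
import numpy as np

def mc_scenarios_pca(yields, Sigma, m=3, n_sims=10_000, dt=1/250, seed=0):
    """One-day yield scenarios using the first m principal components of the
    correlation matrix of yield returns, per (6.3.1)-(6.3.5). The explained
    variation of component i is lam[i] / lam.sum()."""
    sig = np.sqrt(np.diag(Sigma))            # per-maturity volatilities
    rho = Sigma / np.outer(sig, sig)         # correlation matrix rho_y
    lam, nu = np.linalg.eigh(rho)            # eigenvalues in ascending order
    lam, nu = lam[::-1], nu[:, ::-1]         # reorder to decreasing magnitude
    rng = np.random.default_rng(seed)
    # The components are uncorrelated with Var[d_i] = lambda_i,
    # so they are simulated directly, with no Cholesky decomposition.
    d = rng.standard_normal((n_sims, m)) * np.sqrt(lam[:m])
    y = d @ nu[:, :m].T                      # (6.3.5): y approx. nu_hat d_hat
    x = y * sig                              # de-normalize back to returns
    return yields * np.exp(np.sqrt(dt) * x)  # lognormal yields, as in (2.4.3)
```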


Figure 6.1: Correlation matrix of yield returns, and the corresponding eigenvalues and eigenvectors (ordered in decreasing magnitude of eigenvalue). It is easily seen that the first three eigenvectors correspond to a parallel shift, a twist, and a butterfly move of the yield curve.


Chapter 7

Results

The way in which a VaR model is assessed statistically is by performing a backtest, which determines whether the number of times VaR is exceeded is consistent with what is expected for the model. The Basel Committee regulatory requirement for the backtesting window for Historical simulation methods is 250 trading days. This is over and above the 250 trading day observation window needed to estimate VaR itself, which means that backtesting can only begin 500 days into the data. There is therefore insufficient historical data to perform a backtest, and so a qualitative assessment is done instead. The methods will be tested on various hypothetical portfolios, by estimating VaR for these portfolios over the historical period, and comparing the estimates to the actual losses that occurred. Test portfolios consist of individual instruments (a swap or a FRA). A variety of maturities was considered, so that some portfolios were only exposed to short term interest rate risk, and others to more medium term interest rate risk. Both long and short positions were considered.

Note that we are interested in the ability of a method to estimate VaR for a specific portfolio, so, for example, if we are estimating the VaR of a portfolio consisting of a 3v6 FRA at time $t$, we are estimating the value of that portfolio one business day into its life, at time $t+1$. When we are at time $t+1$, we are interested in estimating the VaR of a new 3v6 FRA, one business day into its life. In all graphs, the histogram shows the daily profit-and-loss (P&L), and the line graphs show the negative daily VaR of the various methods (negative in order to correspond with the losses of the P&L). From here on, the methods are abbreviated as follows.

DN Delta-Normal method

HS Classical Historical simulation


HW Hull-White Historical simulation

MC Monte Carlo simulation using Cholesky decomposition

My code for Monte Carlo simulation using PCA unfortunately has a bug in it which, due to time constraints, I was unable to fix, so the results for this method have not been graphed. Looking at the trends of the results, though, the method seems to correspond fairly well to MC, but the results are on a much smaller scale.

Figure 7.1 and Figure 7.2 give results for a long position in a 3v6 FRA with a notional principal of R10m, using all methods, at the 95% and 99% confidence levels respectively. Notice that the period of high volatility towards the end of 2001, which we saw in Figure 2.1, is very evident here, and in all the results. In both graphs, apart from very slight sampling errors in MC (which was done using 8000 simulations), DN and MC track one another exactly. This is because the FRA is completely linear in the underlying zero coupon bond prices; both methods assumed that the daily bond price returns were multivariate normally distributed; and both methods use the same volatility and correlation estimates. In this case, therefore, the use of MC is not justified at all, since the computational time required was enormous in comparison to that of DN. This was the case for all portfolios considered, since swaps are completely linear in the underlying bond prices as well.

Now, to consider the performance of the methods, notice how quickly MC and DN react to increases in volatility. This is due to the exponentially weighted moving average technique, which places a very high weighting on recent events. Often these methods can overreact to a sudden increase in volatility because of this. Considering Figure 7.1 and Figure 7.2, DN and MC do seem to be overreacting to large losses, relative to the other methods. DN and MC seem to be performing similarly at both the 95% and 99% confidence levels (although of course 99% VaR is necessarily higher than 95% VaR).

Consider the performance of HS. At the 95% confidence level, although VaR is not underestimated or overestimated very badly, the method just doesn't react to changes in the P&L. At the 99% confidence level, the method reacts slightly more, but overestimates VaR badly. The reason it reacts more is that the 99% confidence level takes us further into the tails of the distribution. Essentially, what HS does is take an unweighted average of the 1st or the 5th percentile of the estimated value distribution, based on a 250 day window. Considering Figure 7.2, the volatile period towards the end of 2001 will therefore contribute significantly, up until the day it drops out of the observation window, at which point the VaR drops dramatically.

The drop in VaR in December 2002 appears to correspond to the


drop in VaR of the other methods. However, the real reason for the drop is that this is exactly the point where the very volatile period from a year earlier begins to drop out of the window. By the time we get to February 2003, HS is at its lowest, whilst the other methods are at their highest. At this point, however, it begins to increase again, since there are now enough days in the new window which had relatively high returns. HS then remains high, even though the last four months were very quiet.

Finally, considering HW: at the 95% confidence level, we see that this method is a major improvement on HS. In this particular graph, it performs better than all the other methods. It reacts well to changes in volatility, yet does not overreact, which is the case with DN and MC.

At the 99% confidence level, however, HW isn't performing well at all, and is overreacting to changes in volatility even more than DN and HS. There is no obvious explanation for this, and it perhaps merits further investigation.

Figure 7.3 and Figure 7.4 show results for a short position in a 3 year interest rate swap, with a notional principal of R1m and quarterly payments, using the 4 methods, at the 95% and 99% confidence levels respectively. Again we see that December 2001 is prominent. Considering the 95% confidence level, we see that DN and MC perform very well over most of the period, except for a few months in early 2002 where they overestimate VaR. The performance of HS in this case is almost identical to DN and MC, and, in this graph, even HS seems to perform reasonably, although it overestimates VaR until the volatile period drops out of the window. The 99% confidence level shows up bigger differences in the methods, with DN and MC definitely performing better than HS and HW. HS is again completely overestimating VaR, for the reasons explained above. The deeper into the tails we go, the greater the effect of December 2001 on HS, since the 1st percentile corresponds to a worse loss than the 5th percentile, which means VaR is overestimated even more.

Figure 7.5 shows results for a long position in a 5 year interest rate swap, with a notional principal of R1m and quarterly payments, at the 95% confidence level. (MC is not included since it is again equivalent to DN.) This longer dated swap is exposed to bond prices at three-monthly intervals out to five years. We see a similar pattern to before, with HW reacting best to changes in the P&L at the 95% confidence level, but the performance of all the methods seems satisfactory.

All in all, for the portfolios which were considered, it seems as though the methods commonly overestimate VaR. Although a VaR method is assessed by the number of times VaR was exceeded, too few exceedances is not a good thing either, since an overestimation of VaR would lead to a capital requirement for market risk which is higher than it should be (the bank should, in this case, rather be spending some of its capital on risk taking).


Figure 7.1: Long 3v6 FRA at 95% confidence level

Figure 7.2: Long 3v6 FRA at 99% confidence level


Figure 7.3: Short 3 year swap at 95% confidence level

Figure 7.4: Short 3 year swap at 99% confidence level


Figure 7.5: Long 5 year swap at 95% confidence level


Chapter 8

Conclusions

The objective of this project was to implement the various VaR methods, and then to compare the performance of the methods on linear interest rate derivatives. Once the methods had been implemented, it proved difficult to draw conclusive results as to which method is best using only two years' worth of historical data (which contains one incredibly volatile period). However, a lot of the typical characteristics of these methods were evident in the results obtained. One conclusion that can be drawn is that HS performs the worst. This is despite the fact that the vast majority of banks worldwide use this approach. HW seems to be, in general, an improvement on HS. Also, it is probably more feasible than MC (based on Cholesky decomposition) when dealing with large portfolios, since the computational time required for the latter method is incredibly high. Monte Carlo simulation using PCA seems like a promising alternative to Cholesky decomposition, and it would have been very interesting to be able to compare the performance of these two methods, since the PCA code runs significantly faster. To expand on the work done in this project, further research into the performance of these VaR methods for nonlinear interest rate derivatives could be done. However, a longer period of historical data should definitely be considered in order to be able to draw more conclusive results.


Bibliography

[1] Basel Committee on Banking Supervision. Amendment to the capi-tal accord to incorporate market risks. www.bis.org/publ/bcbs24.htm,January 1996.

[2] John Hull and Alan White. Incorporating volatility up-dating into thehistorical simulation method for V@R. Journal of Risk, Fall 1998.

[3] Philippe Jorion. Value at Risk: the new benchmark for controllingmarket risk. McGraw-Hill, second edition, 2001.

[4] J.P.Morgan and Reuters. RiskMetrics - Technical Document.J.P.Morgan and Reuters, New York, fourth edition, December 18, 1996.www.riskmetrics.com.

[5] John Hull and Alan White. V@R when daily changes are not normal.Journal of Derivatives, Spring 1998.

[6] James M. Hyman. Accurate monotonicity preserving cubic interpola-tion. SIAM Journal of Statistics and Computation, 4(4):645–654, 1983.

[7] John Hull. Options, Futures, and Other Derivatives. Prentice Hall, fifthedition, 2002.

[8] Robert Jarrow and Stuart Turnbull. Derivative Securities. Secondedition, 2000.

[9] Graeme West. Risk Measurement. Financial Modelling Agency, [email protected].

[10] Gene Golub and Charles Van Loan. Matrix Computations. Third edi-tion, 1996.

[11] Nicholas J. Higham. Computing the nearest correlation matrix - aproblem from finance. IMA Journal of Numerical Analysis, 22:329–343, 2002.

37

Page 39: Good Masters Thesis on Var With Pca 10.1.1.197.557

[12] Glyn A. Holton. Value at Risk: Theory and Practice. Academic Press, 2003.

[13] Carol Alexander. Orthogonal methods for generating large positive semi-definite covariance matrices. Discussion Papers in Finance, 2000-06, 2000.


Appendix A

Appendix

A.1 The EWMA Model

The exponentially weighted moving average (EWMA) model is the model used by [4] to determine historical volatility and correlation estimates. A moving average of historical observations is used, where the latest observations carry the highest weight in the estimates. This is taken from [9]. If we have historical data for market variables $x_0, x_1, \ldots, x_n$, first determine the geometric returns of these variables

$$u_i(x) = \ln\frac{x_i}{x_{i-1}}, \qquad 1 \le i \le n$$

For time 0, define

$$\sigma_0(x)^2 = 10\sum_{i=1}^{n} u_i(x)^2, \qquad \mathrm{Cov}_0(x,y) = 10\sum_{i=1}^{n} u_i(x)\,u_i(y)$$

For $1 \le i \le n$, the volatilities and covariances are updated recursively according to the decay factor $\lambda$, which, in this case, is defined to be $\lambda = 0.94$. The updating equations are

$$\sigma_i(x)^2 = \lambda\,\sigma_{i-1}(x)^2 + (1-\lambda)\,u_i(x)^2 \cdot 250$$

$$\mathrm{Cov}_i(x,y) = \lambda\,\mathrm{Cov}_{i-1}(x,y) + (1-\lambda)\,u_i(x)\,u_i(y) \cdot 250$$

These equations give an annualized measure of the volatility and covariance. To determine correlations, for $0 \le i \le n$, set

$$\rho_i(x,y) = \frac{\mathrm{Cov}_i(x,y)}{\sigma_i(x)\,\sigma_i(y)}$$
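As a quick numerical illustration of one updating step (the inputs are assumed, purely for concreteness): suppose yesterday's annualized volatility estimate is $\sigma_{i-1}(x) = 15\%$ and today's geometric return is $u_i(x) = 1\%$. With $\lambda = 0.94$,

$$\sigma_i(x)^2 = 0.94 \times 0.15^2 + 0.06 \times 0.01^2 \times 250 = 0.02115 + 0.00150 = 0.02265,$$

so $\sigma_i(x) \approx 15.05\%$: a single 1% daily move (roughly a 16% annualized move) nudges the estimate up only slightly, and its influence decays geometrically thereafter.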


A.2 Cash Flow Mapping

This is adapted from the description in [4]. Suppose a cashflow occurs at the non-standard maturity $t_2$, where $t_2$ is bracketed by the standard maturities $t_1$ and $t_3$. The cashflow occurring at time $t_2$ is mapped onto two cashflows occurring at $t_1$ and $t_3$ as follows. Firstly, since bond prices don't interpolate without further assumptions, the interpolation is done on yields. Let $y_1$, $y_2$ and $y_3$ be the continuously compounded yields corresponding to maturities $t_1$, $t_2$ and $t_3$ respectively, so $B(t_1) = e^{-y_1 t_1}$, $B(t_2) = e^{-y_2 t_2}$, $B(t_3) = e^{-y_3 t_3}$. Let $\sigma_1$, $\sigma_2$ and $\sigma_3$ be the volatilities of these bond prices. The procedure is to firstly linearly interpolate between $y_1$ and $y_3$ to determine $y_2$, and between $\sigma_1$ and $\sigma_3$ to determine $\sigma_2$. We want a portfolio consisting of the two assets $B(t_1)$ and $B(t_3)$, with relative weights $\omega$ and $(1-\omega)$ invested in each, such that the volatility of this portfolio equals $\sigma_2$. The variance of the portfolio is given by (4.2.1), and so we have

$$\sigma_2^2 = \begin{bmatrix} \omega & (1-\omega) \end{bmatrix} \begin{bmatrix} \sigma_1^2 & \sigma_1\sigma_3\rho_{13} \\ \sigma_3\sigma_1\rho_{13} & \sigma_3^2 \end{bmatrix} \begin{bmatrix} \omega \\ (1-\omega) \end{bmatrix} = \omega^2\sigma_1^2 + 2\omega(1-\omega)\rho_{13}\sigma_1\sigma_3 + (1-\omega)^2\sigma_3^2$$

where $\rho_{13}$ is the correlation coefficient of $B(t_1)$ and $B(t_3)$. Rearranging the above equation gives a quadratic equation in $\omega$:

$$\omega^2\sigma_1^2 + 2\omega(1-\omega)\rho_{13}\sigma_1\sigma_3 + (1-\omega)^2\sigma_3^2 - \sigma_2^2 = 0$$
$$\Rightarrow \;(\sigma_1^2 - 2\rho_{13}\sigma_1\sigma_3 + \sigma_3^2)\,\omega^2 + (2\rho_{13}\sigma_1\sigma_3 - 2\sigma_3^2)\,\omega + (\sigma_3^2 - \sigma_2^2) = 0$$
$$\Rightarrow \;a\omega^2 + b\omega + c = 0$$

where

$$a = \sigma_1^2 - 2\rho_{13}\sigma_1\sigma_3 + \sigma_3^2, \qquad b = 2\rho_{13}\sigma_1\sigma_3 - 2\sigma_3^2, \qquad c = \sigma_3^2 - \sigma_2^2$$

which is then solved for $\omega$. $\omega$ is taken to be the smaller of the two roots of this equation, in order to satisfy the third condition of the map, i.e. that the standardized cash flows have the same sign as the original cash flow.
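A small numerical illustration (all inputs assumed): take $\sigma_1 = 0.10$, $\sigma_3 = 0.20$, $\rho_{13} = 0.9$, and suppose linear interpolation gives $\sigma_2 = 0.14$. Then

$$a = 0.01 - 0.036 + 0.04 = 0.014, \qquad b = 0.036 - 0.08 = -0.044, \qquad c = 0.04 - 0.0196 = 0.0204,$$

and the roots of $a\omega^2 + b\omega + c = 0$ are $\omega \approx 2.578$ and $\omega \approx 0.565$. Taking the smaller root, $\omega \approx 0.565$, both mapped cashflows ($\omega$ and $1-\omega \approx 0.435$ of the original) carry the same sign as the original cashflow, as required.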


A.3 The Variance of the Return on a Portfolio

The variance of the return on a portfolio of assets, based on Markowitz portfolio theory, is given by the following, which is derived in [9].

$$\begin{aligned}
\mathrm{Var}[R_t] &= E\left[(R_t - E[R_t])^2\right] \\
&= E\left[\left(\sum_{i=1}^{k} w_i\,(R_{t,i} - E[R_{t,i}])\right)^2\right] \\
&= E\left[\sum_{i=1}^{k}\sum_{j=1}^{k} w_i w_j\,(R_{t,i} - E[R_{t,i}])(R_{t,j} - E[R_{t,j}])\right] \\
&= \sum_{i=1}^{k}\sum_{j=1}^{k} w_i w_j\,E\left[(R_{t,i} - E[R_{t,i}])(R_{t,j} - E[R_{t,j}])\right] \\
&= \sum_{i=1}^{k}\sum_{j=1}^{k} w_i w_j\,\mathrm{Cov}[R_{t,i}, R_{t,j}] \\
&= \sum_{i=1}^{k}\sum_{j=1}^{k} w_i w_j\,\rho_{i,j}\,\sigma_i\sigma_j
\end{aligned} \tag{A.3.1}$$

where $\sigma_i$ is the standard deviation (volatility) of the $i$th risk factor and $\rho_{i,j}$ is the correlation coefficient of $R_{t,i}$ and $R_{t,j}$, defined by $\rho_{i,j} := \frac{\mathrm{Cov}[R_{t,i},R_{t,j}]}{\sigma_i\sigma_j}$. (A.3.1) can be written in matrix form as

$$\mathrm{Var}[R_t] = \omega^T \Sigma\, \omega$$

where $\Sigma$ is the variance-covariance matrix given by

$$\Sigma = \begin{bmatrix}
\sigma_1^2 & \sigma_1\sigma_2\rho_{1,2} & \ldots & \sigma_1\sigma_k\rho_{1,k} \\
\sigma_2\sigma_1\rho_{1,2} & \sigma_2^2 & \ldots & \sigma_2\sigma_k\rho_{2,k} \\
\vdots & \vdots & \ddots & \vdots \\
\sigma_k\sigma_1\rho_{1,k} & \sigma_k\sigma_2\rho_{2,k} & \ldots & \sigma_k^2
\end{bmatrix}$$
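A two-asset sanity check (assumed numbers): with weights $w = (0.6, 0.4)$, volatilities $\sigma_1 = 10\%$, $\sigma_2 = 20\%$ and correlation $\rho_{1,2} = 0.5$,

$$\mathrm{Var}[R_t] = 0.6^2(0.1)^2 + 2(0.6)(0.4)(0.5)(0.1)(0.2) + 0.4^2(0.2)^2 = 0.0036 + 0.0048 + 0.0064 = 0.0148,$$

so the portfolio volatility is $\sqrt{0.0148} \approx 12.2\%$, below the weighted average of the individual volatilities (14%) because the assets are imperfectly correlated.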


Appendix B

Appendix

B.1 VBA Code: Bootstrapping Module

Option Explicit

Public Sub generateCurves()

Dim numRates As Integer 'number of rates included in the array of market yields
Dim i As Integer, j As Integer, col As Integer
Dim writerow As Integer, writecol As Integer

Dim d As Date

Dim curveCount As Integer, readrow As Integer

Dim SCB() As Object

Dim curve() As Double

'declare an array of standard days (out to 30 years) for which NACC rates will be written
'to the spreadsheet (these correspond approximately to the 365 day count used in SA)

Dim stdVerts As Variant

stdVerts = Array(1, 30, 91, 182, 273, 365, 456, 547, 638, 730, 1095, 1460, 1825, _

2190, 2555, 2920, 3285, 3650, 4380, 5475, 6205, 7300, 9125, 10950)

Const stdVertCount As Integer = 24

Application.ScreenUpdating = False

numRates = 24

'determine the number of curves in the historical data

With Sheets("Hist Data")

readrow = 2

curveCount = 0

Do Until .Cells(readrow + 1, 1) = ""

readrow = readrow + 1

curveCount = curveCount + 1

Loop


ReDim SCB(1 To curveCount)

For i = 1 To curveCount

Set SCB(i) = CreateObject("YieldCurve.SwapCurveBootstrap")

Next i

'read data into the SCB objects

For i = 1 To curveCount

Application.StatusBar = "Processing data for curve " & i

col = 1

SCB(i).Effective_Date = .Cells(2 + i, 1)

SCB(i).IntMethod = "MPC"

For j = 1 To numRates

'avoid leaving gaps in the arrays in the case where some rates are missing

If .Cells(2 + i, 1 + j) <> "" Then

SCB(i).Values(col) = .Cells(2 + i, 1 + j)

SCB(i).Rate_Type(col) = .Cells(2, 1 + j)

SCB(i).Rate_Term_Description(col) = .Cells(1, 1 + j)

col = col + 1

End If

Next j

SCB(i).Top_Index = col - 1

Next i

End With 'Sheets("Hist Data")

'generate the NACC spot and forward curves and output

With Sheets("NACC Swap Curves")

.Range(.Cells(1, 1), .Cells(11000, 256)).ClearContents

For writecol = 1 To stdVertCount

.Cells(1, writecol + 1) = stdVerts(writecol - 1)

Next writecol

writerow = 2

For i = 1 To curveCount

Application.StatusBar = "Generating curve " & i

ReDim curve(1 To (SCB(i).Curve_Termination_Date - SCB(i).Effective_Date + 1))

For d = SCB(i).Effective_Date + 1 To SCB(i).Curve_Termination_Date

curve(d - SCB(i).Effective_Date) = SCB(i).Output_Rate(d)

Next d

.Cells(writerow, 1) = SCB(i).Effective_Date

.Cells(writerow, 1).NumberFormat = "d-mmm-yy"

For writecol = 1 To stdVertCount

.Cells(writerow, writecol + 1) = curve(stdVerts(writecol - 1))

Next writecol

writerow = writerow + 1

Next i

End With 'Sheets("NACC Swap Curves")

For i = 1 To curveCount

Set SCB(i) = Nothing

Next i

Application.StatusBar = "Done!"

Beep

End Sub


B.2 VBA Code: VaR Module

Option Explicit

Private P() As Double
Private vol() As Double, covol() As Double
Private dates() As Date
Private n As Integer '# rows
Private stdVerts() As Variant
Const m As Integer = 24 '# cols
Const window As Integer = 250

Public Sub main()

Const lambda As Double = 0.94, confidenceLevel As Double = 0.95

Dim i As Integer, j As Integer, k As Integer, readrow As Integer, readcol As Integer

Dim data1 As Range, data2 As Range, data3 As Range

'data1 is the column of dates, data2 is the matrix of market variables through time.

stdVerts = Array(1, 30, 91, 182, 273, 365, 456, 547, 638, 730, 1095, 1460, 1825, _

2190, 2555, 2920, 3285, 3650, 4380, 5475, 6205, 7300, 9125, 10950)

With Sheets("NACC swap curves")

readrow = 2

n = 0

Do Until .Cells(readrow + 1, 1) = ""

readrow = readrow + 1

n = n + 1

Loop

n = n + 1

Set data1 = .Range(.Cells(2, 1), .Cells(n, 1))

Set data2 = .Range(.Cells(2, 2), .Cells(n, m))

ReDim dates(0 To n)

For i = 0 To n - 1

dates(i) = data1.Cells(i + 1, 1)

Next i

dates(n) = dates(n - 1) + 1

End With

Application.StatusBar = "Estimating Parameters..."

Call emwa(data2, lambda)

With Sheets("Disc factors")

Set data3 = .Range(.Cells(3, 2), .Cells(n + 1, m))

End With

Call profitAndLoss(data2)

Application.StatusBar = "Calculating Historical VaR..."

Call histVar(data2, confidenceLevel)

Call deltaNormal(data2, confidenceLevel)

Call monteCarlo(data2, 0.99)

Call doPCA(data2, confidenceLevel)

Application.StatusBar = "Done!"

End Sub

Private Sub emwa(data As Range, lambda As Double)

Dim i As Integer, j As Integer, k As Integer

Dim sum() As Double, sum2() As Double


ReDim P(1 To n - 1, 1 To m)

ReDim vol(0 To n - 1, 1 To m)

ReDim covol(0 To n - 1, 1 To m, 1 To m)

For i = 1 To n - 1

For j = 1 To m

P(i, j) = (data.Cells(i, j) - _
data.Cells(i + 1, j)) * stdVerts(j - 1) / 365 'bond price log returns

Next j

Next i

'get initial vols and covols from the first 25 returns (the factor 10 = 250/25 annualizes)

ReDim sum(1 To m)

ReDim sum2(1 To m, 1 To m)

For k = 1 To 25

For j = 1 To m

sum(j) = sum(j) + P(k, j) ^ 2

For i = 1 To m

sum2(j, i) = sum2(j, i) + P(k, j) * P(k, i)

Next i

Next j

Next k

For j = 1 To m

vol(0, j) = (10 * sum(j)) ^ 0.5

For k = 1 To m

covol(0, j, k) = sum2(j, k) * 10

Next k

Next j

'rolling calculator for vols and covols

For i = 1 To n - 1

For j = 1 To m

vol(i, j) = (lambda * vol(i - 1, j) ^ 2 + (1 - lambda) * P(i, j) ^ 2 * 250) ^ 0.5

For k = 1 To m

covol(i, j, k) = lambda * covol(i - 1, j, k) + _

(1 - lambda) * P(i, j) * P(i, k) * 250

Next k

Next j

Next i

Call displayVols("vols", vol, dates, n, m)

Call displayCovars("rate covar", covol, n, m)

End Sub

Public Sub histVar(data As Range, confidenceLevel As Double)

Dim i As Integer, j As Integer, today As Integer, vposn As Integer, k As Integer

Dim hs_v(1 To window) As Double 'portfolio values for each scenario

Dim hw_v(1 To window) As Double


Dim hsRateScenario(0 To m - 1) As Double 'Historical simulation
Dim hwRateScenario(0 To m - 1) As Double 'Hull-White historical simulation

Dim hsVar() As Double, hwVar() As Double

Dim hsSum As Double, hwSum As Double, percentile As Double

ReDim hsVar(1 To n - window)

ReDim hwVar(1 To n - window)

'don't need to value the new instrument each day, only 1 day later under each scenario

For today = window + 1 To n


Application.StatusBar = "Busy with historical VaR for " & dates(today - 1)

vposn = 1

hsSum = 0

hwSum = 0

For i = today - window To today - 1
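'HS scenario: apply the historical one-day rate change directly to today's curve;
'HW scenario: rescale that change by the ratio of the current vol estimate to the vol at the time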

For j = 1 To m

hsRateScenario(j - 1) = data.Cells(today, j) + _

data.Cells(i + 1, j) - data.Cells(i, j)

hwRateScenario(j - 1) = data.Cells(today, j) + _

vol(today - 1, j) / vol(i - 1, j) * (data.Cells(i + 1, j) - data.Cells(i, j))

Next j

'contract entered into today; we are interested in how much we could lose on it tomorrow:
'hs_v(vposn) = valueFRA(hsRateScenario, stdVerts, today, dates(today - 1), dates(today))
'hs_v(vposn) = valueSwap1(hsRateScenario, stdVerts, today, dates(today - 1), dates(today))
hs_v(vposn) = valueSwap2(hsRateScenario, stdVerts, today, dates(today - 1), dates(today))
'hw_v(vposn) = valueFRA(hwRateScenario, stdVerts, today, dates(today - 1), dates(today))
'hw_v(vposn) = valueSwap1(hwRateScenario, stdVerts, today, dates(today - 1), dates(today))
hw_v(vposn) = valueSwap2(hwRateScenario, stdVerts, today, dates(today - 1), dates(today))

hsSum = hsSum + hs_v(vposn)

hwSum = hwSum + hw_v(vposn)

vposn = vposn + 1

Next i

quickSort hs_v

quickSort hw_v

percentile = window * (1 - confidenceLevel)

If Round(percentile) = percentile Then
hsVar(today - window) = hsSum / window - hs_v(percentile)
hwVar(today - window) = hwSum / window - hw_v(percentile)
Else 'interpolate
hsVar(today - window) = hsSum / window - 0.5 * (hs_v(Round(percentile)) _
+ hs_v(Round(percentile) - 1))
hwVar(today - window) = hwSum / window - 0.5 * (hw_v(Round(percentile)) _
+ hw_v(Round(percentile) - 1))
End If

Next today

With Sheets("long swap")

For i = 1 To n - window

.Cells(window + 3 + i, 2) = hsVar(i)

.Cells(window + 3 + i, 4) = hwVar(i)

Next i

End With

End Sub

Public Sub doPCA(data As Range, confidenceLevel As Double)

Dim i As Integer, readrow As Integer, corr() As Double

Dim today As Integer, pcaCorr() As Double, pcaCov() As Double, k As Integer

Dim v() As Double 'the eigenvectors corresponding to the 3 largest eigenvalues

Dim sqrtLambda() As Double, lambdaTruncated(1 To 3, 1 To 3) As Double

Dim rates() As Double, j As Integer, sum As Double, percentile As Double

Dim rateScenario() As Double, vols() As Double

Dim u(1 To 3) As Double, z(1 To 3, 1 To 1) As Double

Dim vals() As Double, mcVar() As Double, result As Variant

ReDim pcaCorr(1 To m, 1 To m), pcaCov(1 To m, 1 To m)


ReDim sqrtLambda(1 To m, 1 To m)

ReDim v(1 To m, 1 To 3), rates(0 To m - 1)

ReDim rateScenario(0 To m - 1), vals(1 To 8000)

ReDim mcVar(1 To n), vols(0 To m - 1)

Dim A As Double

Randomize

readrow = 2

For today = 1 To n

sum = 0

Application.StatusBar = "Busy with PCA for " & dates(today - 1)

With Sheets("rate covar")

Call covarToCorr(corr, .Range(.Cells(readrow, 1), .Cells(readrow + m, m)), m)

For j = 1 To m

rates(j - 1) = data.Cells(today, j)

vols(j - 1) = vol(today - 1, j)

Next j
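'A is the root-mean-square daily vol across the 24 vertices; it is used below to
'scale the simulated principal components into rate shocks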

A = 0

For j = 1 To 24

A = A + vols(j - 1) ^ 2

Next j

A = Sqr(A / 24 / 250)

Call findComponents(sqrtLambda, lambdaTruncated, pcaCorr, v, m, corr)

'need to convert the corr matrix into a cov matrix

For i = 1 To m

For j = 1 To m

pcaCov(i, j) = pcaCorr(i, j) * vols(i - 1) * vols(j - 1)

Next j

Next i

'take the square roots of the 3 retained eigenvalues once, before simulating
For i = 1 To 3
lambdaTruncated(i, i) = lambdaTruncated(i, i) ^ 0.5
Next i
'now we can simulate the components (which are independent)
For k = 1 To 8000
For i = 1 To 3
u(i) = Rnd()
If u(i) > 0.999999 Then u(i) = 0.999999
If u(i) < 0.000001 Then u(i) = 0.000001
z(i, 1) = Application.WorksheetFunction.NormSInv(u(i)) 'std normal random numbers
Next i

result = Application.WorksheetFunction.MMult(v, lambdaTruncated)

result = Application.WorksheetFunction.MMult(result, z)

For j = 1 To m
rateScenario(j - 1) = rates(j - 1) + A * result(j, 1)
Next j

'vals(k) = valueFRA(rateScenario, stdVerts, today, dates(today - 1), dates(today))
'vals(k) = valueSwap1(rateScenario, stdVerts, today, dates(today - 1), dates(today))
vals(k) = valueSwap2(rateScenario, stdVerts, today, dates(today - 1), dates(today))

sum = sum + vals(k)

Next k


readrow = readrow + m

End With

quickSort vals

percentile = 8000 * (1 - confidenceLevel)

If Round(percentile) = percentile Then
mcVar(today) = sum / 8000 - vals(percentile)
Else 'interpolate
mcVar(today) = sum / 8000 - 0.5 * (vals(Round(percentile)) _
+ vals(Round(percentile) - 1))
End If

Next today

With Sheets("long swap")

For i = 1 To n

.Cells(3 + i, 15) = mcVar(i)

Next i

End With

End Sub

Public Sub findComponents(ByRef sqrtLambda() As Double, ByRef lambdaTruncated() _

As Double, ByRef pcaCorr() As Double, ByRef v() As Double, size As Integer, _

corr() As Double)

Dim evals As Variant, evecs As Variant

Dim i As Integer, j As Integer

Dim result As Variant, inv As Variant

Dim eigval() As Double, count As Integer

ReDim eigval(1 To size)

result = Application.Run("NTpca", corr) 'get eigenvalues and eigenvectors

For i = 1 To size

eigval(i) = result(1, i)

For j = 1 To size

If i = j Then

sqrtLambda(i, i) = result(1, i) ^ 0.5

End If

Next j

Next i

For count = 1 To 3

For i = 2 To size + 1

v(i - 1, count) = result(i, count)

Next i

Next count

Call getCovPC(lambdaTruncated, eigval)
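'reconstruct an approximate correlation matrix from the 3 retained components:
'corr ~ v * lambdaTruncated * Transpose(v)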

With Application.WorksheetFunction

result = .MMult(v, lambdaTruncated)

inv = .Transpose(v)

result = .MMult(result, inv)

End With

'return the pca correlation matrix

For i = 1 To size

For j = 1 To size

pcaCorr(i, j) = result(i, j)

Next j

Next i

End Sub


Private Sub getCovPC(newCov() As Double, eigval() As Double)

'create a new covariance matrix for the principal components

Dim i As Integer, j As Integer

For i = 1 To 3

For j = 1 To 3

If i = j Then

newCov(i, j) = eigval(i)

Else: newCov(i, j) = 0

End If

Next j

Next i

End Sub

Public Sub monteCarlo(data As Range, confidenceLevel As Double)

Dim z(1 To 24, 1 To 8000) As Double, covar(1 To m, 1 To m) As Double

Dim currRates(0 To m - 1) As Double, v(0 To m - 1) As Double

Dim i As Integer, j As Integer, k As Integer

Dim rateScenario(0 To m - 1) As Double, vals(1 To 8000) As Double

Dim sum As Double, mcVar() As Double, today As Integer
Dim percentile As Double

ReDim mcVar(1 To n)

Randomize

For today = 1 To n

sum = 0

Application.StatusBar = "Busy with Monte Carlo VaR for " & dates(today - 1)

'get the rates and bond price vols for today

For j = 1 To m

currRates(j - 1) = data.Cells(today, j)

v(j - 1) = vol(today - 1, j) / Sqr(250) 'daily vols

Next j

'determine the covariance matrix for today

For i = 1 To m

For j = 1 To m

covar(i, j) = covol(today - 1, i, j) / 250

Next j

Next i

'generate 8000 rate scenarios for today and revalue the portfolio under each

Call generate(z, 24, 8000)

Call getCorrNums(z, covar, v) 'converts the N(0,I) numbers to correlated numbers (scaled by the vols below)

For k = 1 To 8000

For j = 1 To m
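'convert the simulated bond price return shock v*z into a rate shock:
'B = Exp(-r * t / 365) gives d(lnB) = -(t / 365) * dr, so dr = -(365 / t) * d(lnB)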

rateScenario(j - 1) = currRates(j - 1) - 365 / stdVerts(j - 1) * v(j - 1) * z(j, k)

Next j

'vals(k) = valueFRA(rateScenario, stdVerts, today, dates(today - 1), dates(today))
'vals(k) = valueSwap1(rateScenario, stdVerts, today, dates(today - 1), dates(today))
vals(k) = valueSwap2(rateScenario, stdVerts, today, dates(today - 1), dates(today))

sum = sum + vals(k)

Next k

quickSort vals

percentile = 8000 * (1 - confidenceLevel)

If Round(percentile) = percentile Then


mcVar(today) = sum / 8000 - vals(percentile)

Else 'interpolate

mcVar(today) = sum / 8000 - 0.5 * (vals(Round(percentile)) _

+ vals(Round(percentile) - 1))

End If

Sheets("long swap").Cells(3 + today, 9) = mcVar(today)

Next today

End Sub

Public Sub generate(ByRef nrmlNums() As Double, rows As Integer, cols As Integer)

'nrmlNums has dimensions 24x8000

Dim i As Integer, j As Integer, u As Double
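'inverse transform sampling: clamp each uniform draw away from 0 and 1, then map it
'through the inverse standard normal cumulative distribution function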

For i = 1 To rows

For j = 1 To cols

u = Rnd()

If u > 0.999999 Then u = 0.999999
If u < 0.000001 Then u = 0.000001

nrmlNums(i, j) = Application.WorksheetFunction.NormSInv(u)

Next j

Next i

End Sub

Public Sub getCorrNums(ByRef nrmlNums() As Double, cov() As Double, vol() As Double)
'converts N(0,I) numbers to correlated numbers using Cholesky decomposition
Dim i As Integer, j As Integer, cols As Integer
Dim chol As Variant
Dim X() As Double
Dim corr() As Double
cols = UBound(nrmlNums, 2)
ReDim corr(1 To m, 1 To m)
'convert the covariance matrix to the corresponding correlation matrix
For i = 1 To m
For j = 1 To m
corr(i, j) = cov(i, j) / (vol(i - 1) * vol(j - 1))
Next j
Next i
chol = cholesky(corr)
Call matProd(X, chol, nrmlNums)
For i = 1 To m
For j = 1 To cols
nrmlNums(i, j) = X(i, j) 'these are distributed N(0,corr); the caller scales them by the vols
Next j
Next i
'write the correlation matrix and its Cholesky factor to the "test" sheet for inspection
For i = 1 To m
For j = 1 To m
With Sheets("test")
.Cells(i, j) = corr(i, j)
.Cells(i, j + 26) = chol(i, j)
End With
Next j
Next i
End Sub

Function cholesky(Mat)

'performs the Cholesky decomposition A = L*L^T

Dim A, l() As Double, S As Double

Dim rows As Integer, cols As Integer, i As Integer, j As Integer, k As Integer

A = Mat

rows = UBound(A, 1)

cols = UBound(A, 2)

'begin Cholesky decomposition

ReDim l(1 To rows, 1 To rows)

For j = 1 To rows

S = 0

For k = 1 To j - 1

S = S + l(j, k) ^ 2

Next k

l(j, j) = A(j, j) - S

If l(j, j) <= 0 Then Exit For 'the matrix is not positive definite and cannot be decomposed (cf. [11])

l(j, j) = Sqr(l(j, j))

For i = j + 1 To rows

S = 0

For k = 1 To j - 1

S = S + l(i, k) * l(j, k)

Next k

l(i, j) = (A(i, j) - S) / l(j, j)

Next i

Next j

cholesky = l

End Function

Private Sub matProd(A() As Double, B, C)

'multiplies 2 matrices: A(n x r) <-- B(n x m) x C(m x r)

Dim n As Integer, m As Integer, R As Integer

Dim i As Integer, j As Integer, k As Integer

n = UBound(B, 1)

m = UBound(B, 2)

R = UBound(C, 2)

ReDim A(1 To n, 1 To R)

For i = 1 To n

For j = 1 To R

For k = 1 To m

A(i, j) = A(i, j) + B(i, k) * C(k, j)

Next k

Next j

Next i

End Sub

Public Sub deltaNormal(data As Range, confidenceLevel As Double)

Dim rates(0 To m - 1) As Double, v(0 To m - 1) As Double

Dim i As Integer, j As Integer, today As Integer, vposn As Integer


Dim covar(1 To m, 1 To m) As Double

Dim dnVar() As Double, stddev As Double

ReDim dnVar(1 To n)

vposn = 1

For today = 1 To n 'window + 1 To n - 1

Application.StatusBar = "Busy with Delta-Normal VaR for " & dates(today - 1)

For j = 1 To m

rates(j - 1) = data.Cells(today, j)

v(j - 1) = vol(today - 1, j) / Sqr(250)

Next j

’determine the covariance matrix for today

For i = 1 To m

For j = 1 To m

covar(i, j) = covol(today - 1, i, j) / 250

Next j

Next i

'stddev = valueFRA(rates, stdVerts, today, dates(today - 1), dates(today), v, covar)
'stddev = valueSwap1(rates, stdVerts, today, dates(today - 1), dates(today), v, covar)
stddev = valueSwap2(rates, stdVerts, today, dates(today - 1), dates(today), v, covar)

dnVar(vposn) = 2.326 * stddev '2.326 is the one-sided 99% normal quantile (the confidenceLevel argument is not used)

vposn = vposn + 1

Next today

With Sheets("long swap")

For i = 1 To n '1 To n - window

.Cells(3 + i, 6) = dnVar(i)

Next i

End With

End Sub

Private Sub profitAndLoss(data As Range)

Dim rates(0 To m - 1) As Double, vols(0 To m - 1) As Double

Dim j As Integer, k As Integer, today As Integer, vposn As Integer

Dim val1 As Double, val2 As Double

Dim valNewInstr() As Double, valOldInstr() As Double

'valOldInstr values the instrument initiated 1 day previously

ReDim valNewInstr(1 To n)

ReDim valOldInstr(1 To n)
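'the realized 1-day P&L written out below (used for backtesting) is the value today of
'the instrument entered into yesterday, less its value at initiation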

vposn = 1

Application.StatusBar = "Determining P&L..."

For today = 1 To n 'window + 1 To n - 1

For j = 1 To m

rates(j - 1) = data.Cells(today, j)

Next j

If Not (today = n) Then

'valNewInstr(vposn) = valueFRA(rates, stdVerts, today, dates(today - 1), dates(today - 1))
'valNewInstr(vposn) = valueSwap1(rates, stdVerts, today, dates(today - 1), dates(today - 1))
valNewInstr(vposn) = valueSwap2(rates, stdVerts, today, dates(today - 1), dates(today - 1))

End If

If Not (today = 1) Then '(today = window + 1) Then
'valOldInstr(vposn) = valueFRA(rates, stdVerts, today - 1, dates(today - 2), dates(today - 1))
'valOldInstr(vposn) = valueSwap1(rates, stdVerts, today - 1, dates(today - 2), dates(today - 1))
valOldInstr(vposn) = valueSwap2(rates, stdVerts, today - 1, dates(today - 2), dates(today - 1))


End If

vposn = vposn + 1

Next today

With Sheets("long swap")

For j = 1 To n - 1 '1 To n - window - 1

.Cells(4 + j, 8) = valOldInstr(j + 1) - valNewInstr(j)

Next j

End With

End Sub

Public Sub quickSort(ByRef v() As Double, _

Optional ByVal left As Long = -2, Optional ByVal right As Long = -2)

'quicksort is good for arrays consisting of several hundred elements
Dim i As Long, j As Long, mid As Long

Dim testVal As Double

If left = -2 Then left = LBound(v)

If right = -2 Then right = UBound(v)

If left < right Then

mid = (left + right) \ 2

testVal = v(mid)

i = left

j = right

Do

Do While v(i) < testVal

i = i + 1

Loop

Do While v(j) > testVal

j = j - 1

Loop

If i <= j Then

Call SwapElements(v, i, j)

i = i + 1

j = j - 1

End If

Loop Until i > j

'sort smaller segment first

If j <= mid Then

Call quickSort(v, left, j)

Call quickSort(v, i, right)

Else

Call quickSort(v, i, right)

Call quickSort(v, left, j)

End If

End If

End Sub

'used in quickSort

Private Sub SwapElements(ByRef v() As Double, ByVal item1 As Long, ByVal item2 As Long)

Dim temp As Double

temp = v(item2)

v(item2) = v(item1)


v(item1) = temp

End Sub

Public Sub displayVols(sheetname As String, ByRef vols() As Double, ByRef dates() As Date, _

rows As Integer, cols As Integer)

Dim i As Integer, j As Integer

With Sheets(sheetname)

For i = 1 To rows

.Cells(i + 1, 1) = dates(i - 1)

.Cells(i + 1, 1).NumberFormat = "d-mmm-yy"

For j = 1 To cols

.Cells(i + 1, j + 1) = vols(i - 1, j)

Next j

Next i

End With

End Sub

Public Sub displayCovars(sheetname As String, ByRef covol() As Double, _

rows As Integer, m As Integer)

'writes the covariances to the spreadsheet
Dim i As Integer, j As Integer, k As Integer, writerow As Integer, writerow2 As Integer
writerow = 1
writerow2 = 1
For i = 1 To rows
Sheets("dates").Cells(writerow2, 1) = dates(i - 1)
For j = 1 To m
For k = 1 To m
'convert from annual to daily measure of covariance
Sheets(sheetname).Cells(writerow + j, k) = covol(i - 1, j, k) / 250
Next k
Next j
writerow = writerow + m
writerow2 = writerow2 + 1
Next i

End Sub

B.3 VBA Code: Portfolio Valuation Module

Option Explicit

Private Function valueSwap(ByRef rateScenario() As Double, ByRef stdVerts() As Variant, _

startDate As Date, valDate As Date, deltat As Integer, n As Integer, NP As Double, _

effSwapRate As Double, jibar As Double, longOrShort As String) As Double

Dim T As Integer, daysTillNextFlow As Integer, i As Integer

Dim rate As Double, B() As Double, vFix As Double, vFloat As Double, sum As Double

Dim nextFlow As Date, Bfloat As Double, R As Double


nextFlow = DateSerial(Year(startDate), Month(startDate) + deltat, day(startDate))

ReDim B(1 To n)

For i = 1 To n

daysTillNextFlow = nextFlow - valDate

R = interpRates(rateScenario, stdVerts, daysTillNextFlow)

B(i) = Exp(-R * daysTillNextFlow / 365)

nextFlow = DateSerial(Year(nextFlow), Month(nextFlow) + deltat, day(nextFlow))

Next i
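'at the start date the floating leg is worth NP * (1 - B(n)); after the start date the
'next floating payment is already fixed at the JIBAR rate, so it is valued as a known cashflow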

If valDate = startDate Then

vFloat = NP * (1 - B(n))

Else

nextFlow = DateSerial(Year(startDate), Month(startDate) + deltat, day(startDate))

daysTillNextFlow = nextFlow - valDate

R = interpRates(rateScenario, stdVerts, daysTillNextFlow)

Bfloat = Exp(-R * daysTillNextFlow / 365)

vFloat = NP * (1 + jibar * daysTillNextFlow / 365) * Bfloat - NP * B(n)

End If

sum = 0

For i = 1 To n

sum = sum + B(i)

Next i

vFix = effSwapRate * NP * sum

valueSwap = vFloat - vFix

If longOrShort = "short" Then

valueSwap = -valueSwap

End If

End Function

Private Function interpRates(ByRef rateScenario() As Double, ByRef stdVerts() As Variant, _

daysTillNextFlow As Integer) As Double

'given a set of rates at the set of stdVerts, determines the rate at the specified # days
'by interpolating between the 2 closest nodes in stdVerts
Dim R As Double, i As Integer, curr As Integer 'current position in stdVerts

i = 0

curr = stdVerts(i)

While daysTillNextFlow > curr 'find the closest standard vertex on or after daysTillNextFlow

i = i + 1

curr = stdVerts(i)

Wend

If daysTillNextFlow = curr Then

R = rateScenario(i) 'no interpolation needed

Else

R = interp((stdVerts(i - 1)), (stdVerts(i)), rateScenario(i - 1), _

rateScenario(i), (daysTillNextFlow))

End If

interpRates = R

End Function

Private Function splitOntoStdVerts(daysTillNextFlow As Integer, _

ByRef stdVerts() As Variant, ByRef splitVals() As Double, _

val As Double, Optional ByRef vols, Optional ByRef cov) As Integer

'given the number of days till a cashflow and its value, and vols for the std vertices,
'split the cashflow onto 2 nodes and determine the 2 values at each node (splitVals array)
'return the index in stdVerts of the first of the 2 nodes onto which it was split


Dim v1 As Double, v2 As Double, v3 As Double

Dim i As Integer, curr As Integer 'current position in stdVerts

Dim W As Double, sigma As Double

Dim posn As Integer

i = 0

curr = stdVerts(i)

While daysTillNextFlow > curr 'find the closest standard vertex on or after daysTillNextFlow

i = i + 1

curr = stdVerts(i)

Wend

If daysTillNextFlow = curr Then

v2 = vols(i) 'no interpolation needed

W = 1

splitVals(0) = val

splitVals(1) = 0

posn = i - 1

Else

v1 = vols(i - 1)

v3 = vols(i)

v2 = interp((stdVerts(i - 1)), (stdVerts(i)), v1, v3, (daysTillNextFlow))

W = quadratic(v1, v2, v3, (cov(i, i + 1)))

splitVals(0) = val * W

splitVals(1) = val * (1 - W)

posn = i - 1

End If
splitOntoStdVerts = posn 'return the index of the first of the 2 nodes
End Function

Public Function interp(C As Double, d As Double, f_c As Double, f_d As Double, _

X As Double) As Double

interp = (X - C) / (d - C) * f_d + (d - X) / (d - C) * f_c

End Function

Public Function quadratic(sig_a As Double, sig_b As Double, sig_c As Double, sig_ac As Double) _

As Double

Dim alpha As Double, beta As Double, gamma As Double

Dim root1 As Double, root2 As Double, k As Double

alpha = sig_a ^ 2 + sig_c ^ 2 - 2 * sig_ac

beta = 2 * sig_ac - 2 * sig_c ^ 2

gamma = sig_c ^ 2 - sig_b ^ 2

k = Sqr(beta ^ 2 - 4 * alpha * gamma)

root1 = (-beta + k) / (2 * alpha)

root2 = (-beta - k) / (2 * alpha)

If (root1 < root2) Then

quadratic = root1

Else

quadratic = root2

End If

End Function

Public Function valueSwap1(ByRef rateScenario() As Double, ByRef stdVerts() As Variant, _

startDatePosn As Integer, startDate As Date, valDate As Date, _

Optional ByRef vols, Optional ByRef cov) As Double


'short position in a 3 yr vanilla interest rate swap, quarterly payments
'startDatePosn determines the rate to be used, and valDate is the valuation date (this
'will either correspond exactly to startDatePosn, or to 1 day (the holding period) later
'if vols are provided, i.e. for delta-normal, returns the std dev, else returns the price

Const deltat As Integer = 3

Const n As Integer = 12

Const NP = 10000000 'notional principal

Dim swapRate As Double, effSwapRate As Double, jibar As Double

swapRate = Sheets("Hist Data").Cells(startDatePosn + 2, 12) 'NACQ swap rate
effSwapRate = swapRate / 4 'effective rate per quarter
jibar = Sheets("Hist Data").Cells(startDatePosn + 2, 4) '3 month JIBAR for the first period
'can use this because we are only ever valuing 1 day into the contract in this case

If Not (IsMissing(vols)) Then

valueSwap1 = valueThreeYrSwapDN(rateScenario, stdVerts, startDate, valDate, deltat, n, NP, _

effSwapRate, jibar, "short", vols, cov) ’delta normal - returns the std dev

Else

valueSwap1 = valueSwap(rateScenario, stdVerts, startDate, valDate, deltat, n, NP, _

effSwapRate, jibar, "short") ’general swap valuation

End If

End Function

Public Function valueSwap2(ByRef rateScenario() As Double, ByRef stdVerts() As Variant, _

startDatePosn As Integer, startDate As Date, valDate As Date, _

Optional ByRef vols, Optional ByRef cov) As Double

'long position in a 5 yr vanilla interest rate swap
'quarterly payments, there are 20 payments remaining.
'startDatePosn determines the floating rate to be used, and valDate is the valuation
'date (this will either correspond exactly to startDate, or to 1 day (the holding
'period) later.

Const deltat As Integer = 3

Const n As Integer = 20 'number of payments remaining
Const NP = 10000000 'notional principal
Dim swapRate As Double, effSwapRate As Double, jibar As Double
swapRate = Sheets("Hist Data").Cells(startDatePosn + 2, 14) '5 yr NACQ swap rate
effSwapRate = swapRate / 4 'effective rate per quarter
jibar = Sheets("Hist Data").Cells(startDatePosn + 2, 4) '3 month JIBAR for this period

If Not (IsMissing(vols)) Then

valueSwap2 = valueFiveYrSwapDN(rateScenario, stdVerts, startDate, valDate, deltat, n, NP, _

effSwapRate, jibar, "short", vols, cov) ’delta normal - returns the std dev

Else

valueSwap2 = valueSwap(rateScenario, stdVerts, startDate, valDate, deltat, n, NP, _

effSwapRate, jibar, "long")

End If

End Function

Private Function valueThreeYrSwapDN(ByRef rateScenario() As Double, ByRef stdVerts() As Variant, _

startDate As Date, valDate As Date, deltat As Integer, n As Integer, NP As Double, _

effSwapRate As Double, jibar As Double, longOrShort As String, Optional ByRef vols, _

Optional ByRef cov) As Double


'simplifying assumptions: no mapping is done for the 1st 8 payments since we have the rate
'stored (to within 1 or 2 days). The 9th, 10th and 11th payments are mapped onto the
'2 yr and 3 yr nodes. The 12th payment is allocated entirely to the 3 yr node.
'This means that we have a portfolio of 15 cashflows based on 9 of the std vertices.
'fixed leg and floating leg are considered separately and combined at the end

Dim T As Integer, daysTillNextFlow As Integer, i As Integer

Dim rate As Double, B() As Double, vFix As Double, vFloat As Double, sum As Double

Dim nextFlow As Date, Bfloat As Double, R As Double

Dim W(1 To 10, 1 To 1) As Double 'array of value weights

Dim splitVals(1) As Double, index As Integer

Dim weightToBeSplit As Double, newCov() As Double, variance

nextFlow = DateSerial(Year(startDate), Month(startDate) + deltat, day(startDate))

ReDim B(1 To n)

For i = 1 To 8

daysTillNextFlow = nextFlow - valDate

R = interpRates(rateScenario, stdVerts, daysTillNextFlow)

B(i) = Exp(-R * daysTillNextFlow / 365)

'PV of the flow gives the value weight of that cashflow
W(i + 1, 1) = effSwapRate * NP * B(i) 'positive since we are short!

nextFlow = DateSerial(Year(nextFlow), Month(nextFlow) + deltat, day(nextFlow))

Next i

W(9, 1) = 0

W(10, 1) = 0

For i = 9 To 11

daysTillNextFlow = nextFlow - valDate

R = interpRates(rateScenario, stdVerts, daysTillNextFlow)

B(i) = Exp(-R * daysTillNextFlow / 365)

weightToBeSplit = effSwapRate * NP * B(i)

index = splitOntoStdVerts(daysTillNextFlow, stdVerts, splitVals, weightToBeSplit, vols, cov)

W(9, 1) = W(9, 1) + splitVals(0)

W(10, 1) = W(10, 1) + splitVals(1)

nextFlow = DateSerial(Year(nextFlow), Month(nextFlow) + deltat, day(nextFlow))

Next i

daysTillNextFlow = nextFlow - valDate

R = interpRates(rateScenario, stdVerts, daysTillNextFlow)

B(12) = Exp(-R * daysTillNextFlow / 365)

W(10, 1) = W(10, 1) + effSwapRate * NP * B(12)

'now the floating leg:
W(10, 1) = W(10, 1) + NP * B(12) 'since we are short!

If valDate = startDate Then

W(1, 1) = -NP

Else

nextFlow = DateSerial(Year(startDate), Month(startDate) + deltat, day(startDate))

daysTillNextFlow = nextFlow - valDate

R = interpRates(rateScenario, stdVerts, daysTillNextFlow)

Bfloat = Exp(-R * daysTillNextFlow / 365)

'vFloat = NP * (1 + jibar * daysTillNextFlow / 365) * Bfloat - NP * B(n)

W(2, 1) = W(2, 1) - NP * (1 + jibar * daysTillNextFlow / 365) * Bfloat

End If

ReDim newCov(1 To 10, 1 To 10)

Call getNewCov(newCov, 2, 11, cov) 'covariance submatrix for std vertices 2 to 11


With Application.WorksheetFunction

variance = .MMult(.Transpose(W), .MMult(newCov, W))

End With

valueThreeYrSwapDN = variance(1) ^ 0.5

End Function

Private Sub getNewCov(ByRef newCov() As Double, ind1 As Integer, ind2 As Integer, Optional ByRef cov)

'returns a submatrix of the original covar matrix, from ind1 to ind2

Dim i As Integer, j As Integer, corresp_i As Integer, corresp_j As Integer

corresp_i = ind1

corresp_j = ind1

For i = 1 To ind2 - ind1 + 1

corresp_j = ind1

For j = 1 To ind2 - ind1 + 1

newCov(i, j) = cov(corresp_i, corresp_j)

corresp_j = corresp_j + 1

Next j

corresp_i = corresp_i + 1

Next i

End Sub

Public Function valueFRA(ByRef rateScenario() As Double, ByRef stdVerts() As Variant, _

startDatePosn As Integer, startDate As Date, valDate As Date, _

Optional ByRef vols, Optional ByRef cov) As Double

'long position in a 3v6 FRA
'if vols and correlations are provided, i.e. for the delta-normal method, then return the
'std deviation of the FRA, otherwise return the value of the FRA

Const NP = 10000000

Dim fraRate As Double, date1 As Date, date2 As Date, r1 As Double, r2 As Double

Dim t1 As Integer, t2 As Integer, B_t1 As Double, B_t2 As Double

Dim W1 As Double, W2 As Double 'value weights, as given by the PV of the 2 payments

Dim covFRA(1 To 2, 1 To 2) As Double

Dim W(1 To 2, 1 To 1) As Double

Dim variance

fraRate = Sheets("Hist Data").Cells(startDatePosn + 2, 5) '3v6 FRA rate NACQ

B_t1 = Exp(-rateScenario(2) * stdVerts(2) / 365)

B_t2 = Exp(-rateScenario(3) * stdVerts(3) / 365)

W1 = -NP * B_t1

W2 = NP * (1 + fraRate * 91 / 365) * B_t2

If Not (IsMissing(vols)) Then 'return std dev of the FRA (applies to delta-normal)

covFRA(1, 1) = cov(3, 3)

covFRA(1, 2) = cov(3, 4)

covFRA(2, 1) = cov(4, 3)

covFRA(2, 2) = cov(4, 4)

W(1, 1) = W1

W(2, 1) = W2

variance = W1 ^ 2 * vols(2) ^ 2 + 2 * W1 * W2 * cov(3, 4) + W2 ^ 2 * vols(3) ^ 2

valueFRA = variance ^ 0.5

Else

valueFRA = W1 + W2 'this applies to historical and HW historical simulation


End If

End Function
