# CDS and Loss function: CDO Tranches


Advanced tools for Risk Management and Asset Pricing
Group assignment - a.a. 2011-2012

Giulio Laudani (1256809), Alberto Manassero (1248376), Marco Micieli (1255470)

June 8, 2012
Comments

Exercise 1. The file Data.xls contains data on the CDS term structure as of 1 March 2012 for Danone, Carrefour and Tesco, as well as the risk-free zero curve on the Euro at the same date.

1.a. Assumptions:

1. Intensities are piecewise flat and increasing over time;
2. Default can occur once a month (we choose a discretization of m = 12 to approximate the continuous case);
3. S(t_0, t_N) is assumed to be the contractual annualized fair CDS spread;
4. The CDSs and the risk-free zero curve are assumed to be denominated in the same currency;
5. The premium is paid at the end of the quarter.

For each company, we derive the term structure of hazard rates λ_{6,12}, λ_{12,24}, ..., λ_{84,120} (subscripts stand for maturities expressed in months), defined through the probability of a credit event occurring during the time interval dt conditional on survival up to time t:

Pr(θ < t + dt | θ > t) = λ(t) dt

where θ is the default time. The default intensity can be interpreted as the rate of default per unit of time.

To construct the hazard term structure, we use an iterative process commonly known as bootstrap. It starts from the shortest-maturity contract, which is used to calculate the first survival probability.
In equilibrium, for a contract starting in t_0 with quarterly premium payments, the following must hold:

S(t_0, t_0 + k) Σ_{n=3,6,9,12} Δ(t_{n−3}, t_n) Z(0, t_n) e^{−λ_{0,k} t_n} = (1 − RR) Σ_{m=1}^{12} Z(t_0, t_m) (e^{−λ_{0,k} t_{m−1}} − e^{−λ_{0,k} t_m})

i.e. PV(Premium leg) = PV(Protection leg), where e^{−λ_{0,k} t_n} is the survival probability up to t_n and λ_{0,k} the constant instantaneous default intensity between 0 and k. λ for the shortest maturity is backed out from the above equation, and the process is iterated for all maturities: 12, 24, 36 up to 120 months. We define the customized function termstructuredefint on the basis of the standard Matlab solve function.
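The bootstrap step for the shortest maturity can be sketched in code. The following Python fragment is an illustration, not the group's Matlab function termstructuredefint: the flat zero curve, the spread and the recovery rate are assumed values, and a simple bisection replaces Matlab's solve. It equates the two legs of a 1-year CDS with quarterly premiums and monthly default discretization, exactly as in the equilibrium condition above.

```python
import math

def cds_legs(lam, spread, rr, zero_rate=0.01):
    """PV of premium and protection legs for a 1y CDS: quarterly premiums,
    monthly default discretization, flat hazard rate `lam` (assumed flat
    zero curve at `zero_rate`, an illustrative assumption)."""
    Z = lambda t: math.exp(-zero_rate * t)          # risk-free discount factor
    surv = lambda t: math.exp(-lam * t)             # survival probability
    premium = spread * sum(0.25 * Z(n / 12) * surv(n / 12) for n in (3, 6, 9, 12))
    protection = (1 - rr) * sum(
        Z(m / 12) * (surv((m - 1) / 12) - surv(m / 12)) for m in range(1, 13))
    return premium, protection

def bootstrap_first_lambda(spread, rr, lo=1e-8, hi=5.0, tol=1e-12):
    """Back out the flat hazard rate that equates the two legs (bisection)."""
    def f(lam):
        prem, prot = cds_legs(lam, spread, rr)
        return prem - prot
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

lam = bootstrap_first_lambda(spread=0.0100, rr=0.40)   # 100 bps, RR = 40%
print(round(lam, 4))  # close to the credit-triangle value S/(1-RR) ≈ 0.0167
```

For longer maturities the same root search is repeated, holding the already-bootstrapped intensities fixed on the earlier intervals.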
It should be noted that, for time horizons shorter than a year, default intensities for the three reference entities are flat and close to zero, while their patterns differ when a longer time horizon is considered. Such an upward trend must be kept in mind when analysing the differences between piecewise and constant lambdas, as clear evidence in favour of the former.

From the CDS term structure we know that Carrefour is the riskiest reference entity, followed by Tesco, while Danone is the least risky one. This finding is confirmed by the default intensity term structure: markets assign the highest default intensities to Carrefour and the lowest to Danone.

1.b. We performed a sensitivity analysis to understand how the hazard rate curve reacts to changes in default-frequency discretization, recovery rate and maturity.
Default intensities are a positive, increasing function of the frequency at which default can occur. The default-frequency value ranges from 0.1 (default can occur once every 10 months) to 4 (4 times per month). The idea is straightforward: if we let the company be at risk of default each week instead of each month, we improve the quality of our approximation of the continuous PV(Protection leg). O'Kane and Turnbull found that this discretization error, in the presence of a flat hazard structure (i.e. keeping λ constant), leads to a difference between the corresponding computed spreads proportional to 1/m, where m is the number of intervals into which we divide the year. Specifically, the difference between the continuous and discrete cases is given by r/2m. In this case, having fixed the spread and the recovery rate by hypothesis, the consequent upward bias of PV(Protection leg) can be justified only by an increase in default intensities.

The sensitivity of λ w.r.t. RR was assessed by changing RR from 10% to 80%, with a 1% increment.
As can be observed in the above graphs, given the same maturity, an increase in the recovery rate results in an increase of the default intensities. The recovery rate is defined as the expected price of the CTD obligation at the time of the credit event, expressed as a percentage of its face value. The rationale is the following: suppose you enter a CDS contract whose price is 100 bps with a recovery rate of 40%. Suppose further that the price is (unrealistically) kept constant while the recovery rate changes: if RR rises to 60%, you are buying protection "for a lower loss", so you should pay a lower premium to insure against default. But if the spread is somehow kept constant, you will still be willing to pay 100 bps only if the probability of default has increased. The relationship we find can be described by the formula:

λ ≈ S / (1 − RR)

Moreover, the sensitivity of λ to the recovery rate grows, in turn, with maturity: as maturity increases, RR becomes more relevant in setting the CDS price, since a credit event becomes more likely. Following the same line of reasoning as before, keeping the price constant, the reduction in expected loss must be offset by a sharper increase in λ the higher RR is.
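The credit-triangle relation λ ≈ S/(1 − RR) makes the thought experiment above concrete. A minimal Python check (the 100 bps spread is the illustrative value used in the text):

```python
def implied_lambda(spread, rr):
    """Credit-triangle approximation: lambda ≈ S / (1 - RR)."""
    return spread / (1.0 - rr)

# Holding the spread fixed at 100 bps, a higher assumed recovery
# must be compensated by a higher default intensity:
for rr in (0.40, 0.60, 0.80):
    print(f"RR = {rr:.0%}: lambda ≈ {implied_lambda(0.0100, rr):.4f}")
```

As RR goes from 40% to 60% the implied intensity rises from roughly 1.67% to 2.50%, which is exactly the direction of the sensitivity observed in the graphs.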
1.c. The cumulative default probabilities for the three companies were derived through the Matlab function Cumprob, which replicates the formula below:

F(0, t) = 1 − p(0, t) = 1 − e^{−∫_0^t λ(s) ds}

Cumulative default probabilities, in the case of piecewise constant λ:

| Years | Danone  | Tesco   | Carrefour |
|-------|---------|---------|-----------|
| 1     | 0.00320 | 0.00602 | 0.00764   |
| 5     | 0.06890 | 0.09532 | 0.16159   |
| 10    | 0.21414 | 0.26655 | 0.39214   |

1.d. To carry out this point the above hypotheses were maintained, except for the pattern of the intensity process, which was assumed to be constant over time. Default intensities derived under such an assumption are contained in the Matlab matrix def_int. It is worth stressing that the term structure of default intensities derived under a constant intensity process is constant through time, as opposed to that derived under the piecewise constant intensity process, which is a piecewise flat function of time.

Cumulative default probabilities, in the case of constant λ:

| Years | Danone  | Tesco   | Carrefour |
|-------|---------|---------|-----------|
| 1     | 0.01216 | 0.01704 | 0.02765   |
| 5     | 0.05849 | 0.08117 | 0.12900   |
| 10    | 0.11316 | 0.15523 | 0.24060   |

By using a constant default intensity, the default event is modelled as an exponential distribution with parameter λ or, equivalently, as the first arrival of a Poisson process with intensity λ. That
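The computation F(0, t) = 1 − exp(−∫_0^t λ(s) ds) for a piecewise-flat intensity reduces to summing λ times the length of each sub-interval. A Python sketch of what a function like Cumprob does (the curve values below are hypothetical knots for illustration, not the bootstrapped intensities from the data):

```python
import math

def cumulative_default(lambdas, t):
    """F(0,t) = 1 - exp(-integral of a piecewise-flat lambda from 0 to t).
    `lambdas` is a list of (end_year, lambda) knots in increasing order."""
    integral, prev = 0.0, 0.0
    for end, lam in lambdas:
        if t <= prev:
            break
        integral += lam * (min(t, end) - prev)   # lambda * covered interval length
        prev = end
    return 1.0 - math.exp(-integral)

# Hypothetical piecewise-flat curve (illustrative values only)
curve = [(1, 0.003), (5, 0.017), (10, 0.035)]
print(round(cumulative_default(curve, 1), 5))
print(round(cumulative_default(curve, 10), 5))
```

Because each sub-interval contributes its own λ, the 10-year probability reflects the steeper long end of the curve, which is precisely why the piecewise figures in the first table grow faster than the constant-λ figures at long horizons.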
is why increments are not 1:1 functions of time. Compared to the previous assumption (piecewise constant intensities), a constant λ leads to an overestimation of cumulative default probabilities for short maturities and to an underestimation for longer maturities.

The justification of this behaviour lies in the nature of the constant-λ hypothesis. In this model, the hazard rate is a weighted average of the short- and long-term rates, hence all the information given by the CDS market structure about the time dependence of credit events is embedded into a unique value, which will be greater than the short-term piecewise intensities and smaller than the longer-term ones. The graph gives a suitable representation of this general idea: the piecewise structure should be preferred, as it ensures a better use of market information (as provided by the CDS structure) and a handier compromise with respect to the continuous case.
Exercise 2.

2.a-b. We derive the loss function of the portfolio at 1, 5 and 10 years, under the assumption of the Li model with Gaussian copula. According to the Li model we:

• Generate (through the function mvnrnd) random variates (X1, X2, X3) following a multivariate normal distribution in which the dependence structure is captured by different correlation parameters r = [0.3 0.5 0.8 0.95];
• Derive the survival times (T1, T2, T3) by applying the inverse of each name's survival probability function to the cumulative Gaussian distribution evaluated at Xi (as computed above). The matrix tao will contain in each column the survival times arising from the different simulations (rows) for the three assets contained in the portfolio;
• Compare the default times to the particular time frames established: time = [0.5 1 3 10]. In other words, we compute how many assets have a survival time less than or equal to 6 months, ..., 10 years. We then sum all the assets with a survival time lower than the variable time to obtain the number of assets defaulting in our portfolio (n_def);
• The loss function in our Matlab code will be equal to Loss(j,:,o) = n_def.*(notional*(1-RR)), and the portfolio will be priced accordingly;
• The above steps are repeated 1,000,000 times.
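The steps above can be sketched in Python. This is an illustration of the Li/Gaussian-copula scheme, not the group's Matlab code: it uses a one-factor equicorrelation construction instead of mvnrnd (equivalent for an equicorrelated matrix), a single flat hazard rate for all three names, fewer simulations, and illustrative notional and recovery values.

```python
import math, random

def simulate_losses(lam, rho, horizon, n_sims=20000, n_assets=3,
                    notional=100.0, rr=0.40, seed=0):
    """Li-model sketch: Gaussian copula with equicorrelation `rho` and a
    flat hazard rate `lam` for every name (illustrative assumptions).
    Returns the list of simulated portfolio losses at `horizon` years."""
    rng = random.Random(seed)
    norm_cdf = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    losses = []
    for _ in range(n_sims):
        z = rng.gauss(0, 1)                      # common factor
        n_def = 0
        for _ in range(n_assets):
            x = math.sqrt(rho) * z + math.sqrt(1 - rho) * rng.gauss(0, 1)
            u = min(norm_cdf(x), 1.0 - 1e-16)    # guard against log(0)
            tau = -math.log(1.0 - u) / lam       # F(tau) = u, exponential default time
            n_def += tau <= horizon
        losses.append(n_def * notional * (1 - rr))
    return losses

losses = simulate_losses(lam=0.02, rho=0.5, horizon=10)
print(sum(l == 0 for l in losses) / len(losses))   # frequency of zero loss
```

Each simulated loss is a multiple of notional × (1 − RR), mirroring the Loss(j,:,o) formula above; collecting the relative frequencies of these multiples reproduces the discrete loss distributions plotted below.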
The graph above shows different percentage losses (on the x-axis) and their experiment-driven frequencies (the y-axis simply displays the relative frequency of observations over 1 million runs), according to different values of rho and different times to default: loss distributions tend to become U-shaped as rho increases. Higher values of correlation imply an increase in joint event probability, i.e. the probability that either all underlying assets default or all survive becomes larger relative to the likelihood of "central" events. Note that this "shape effect" becomes meaningful only for long times to default: intuitively, in 1 month the probability of joint default is very low, regardless of the correlation (hazard rates were kept constant; we acknowledge that increasing the hazard rates would have made the U-shaped pattern more pronounced).

An important limitation of this approach is that the dependence among defaults is entirely driven by the correlation matrix, making joint extreme events very rare. This limitation is partially overcome
by assuming a t-copula for the dependence structure: in this way tail dependence is introduced and becomes sizable. In the graph below we plot the loss distribution derived under the t-copula assumption.

The following three-dimensional continuous versions of the above figures illustrate the loss occurrence probability (relative frequency) as a function of rho, under both the Gaussian and t-student models, with time to default equal to 10 years (to make the results more appreciable). The Gaussian distribution shows that for correlation values higher than 0.3 a change in the sensitivity trend is noticeable (in the t-student this effect is less pronounced). Rising correlation levels are associated with an increase in the probability of a zero loss, as the chances of a cross-default are higher and those of a single default diminish. Two other effects are noteworthy: as rho increases, central values of losses become less frequent (bear in mind that a 20% loss implies that only one reference entity in the portfolio defaulted), while
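The tail-dependence contrast between the two copulas can be checked numerically. The Python sketch below (illustrative parameters, not the group's Matlab routines) builds a correlated bivariate pair and, for the t case, divides both components by a shared chi-square mixing variable, the standard construction of a bivariate t. It then estimates the empirical probability that both components fall in their own upper 5% tail: under the t-copula this joint exceedance is noticeably more frequent.

```python
import math, random

def joint_tail_freq(rho, n=100000, q=0.95, df=None, seed=1):
    """Empirical P(both components above their own q-quantile) for a
    bivariate Gaussian (df=None) or t (df degrees of freedom) sample
    with equicorrelation rho."""
    rng = random.Random(seed)
    xs, ys = [], []
    for _ in range(n):
        z = rng.gauss(0, 1)                      # common factor
        x = math.sqrt(rho) * z + math.sqrt(1 - rho) * rng.gauss(0, 1)
        y = math.sqrt(rho) * z + math.sqrt(1 - rho) * rng.gauss(0, 1)
        if df is not None:                       # shared chi-square -> bivariate t
            w = sum(rng.gauss(0, 1) ** 2 for _ in range(df))
            s = math.sqrt(df / w)
            x, y = x * s, y * s
        xs.append(x)
        ys.append(y)
    kx = sorted(xs)[int(q * n)]                  # empirical 95% quantiles
    ky = sorted(ys)[int(q * n)]
    return sum(a > kx and b > ky for a, b in zip(xs, ys)) / n

g = joint_tail_freq(0.5)
t = joint_tail_freq(0.5, df=4)
print(g, t)   # the t-copula produces more joint tail events
```

This is the mechanism behind the heavier extreme-loss frequencies of the t-student plots: the shared mixing variable makes large moves happen together.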
extreme percentages display growing frequency, confirming the tendency outlined before.
Moreover, the following diagram shows the difference (for correlation levels of 0.3 and 0.8 only) between losses computed according to the Gaussian and t-student dependence structures (please refer to figures 11 and 12 of the Matlab code for the losses computed under each). Whenever the difference is positive the Gaussian prevails over the t-student, and vice versa. As we know, the t-student distribution displays fatter tails: for a loss level of 0 the Gaussian prevails (due to its larger relative frequency around the mean), while for losses higher than 0 (i.e. joint extreme losses) the t-student prevails.
As a further step, we finally perform a time sensitivity analysis by examining how the loss frequency changes with the time to default, for a given level of rho and constant default intensities λ. The graph was built by allowing the loss occurrence probability (relative frequency) to vary as a function of time (ranging from 2 to 10 years). Under both the Gaussian and t-student dependence assumptions, the frequency of zero losses decreases over time, while non-zero losses become more likely. Differences between the Gaussian and the t-student, in favour of the latter, are relevant only in the case of small losses. Around 20% losses, the t-student displays a higher probability only for short times: the two distributions tend to converge as time increases.
The following graphs refer to a piecewise λ structure.
As we have already said, the constant-λ hypothesis entails a weighted average of the instantaneous default probability, thus producing a smoother structure for λ and, by the same token, a less dispersed loss distribution. On the contrary, a piecewise constant λ can better approximate credit event behaviour: the time sensitivity trends feature sharper hikes in occurrence probability for longer times and lower values for shorter ones. Specific differences due to the two dependence models are driven by the same factors as before (graphs of the differences are generated by the code run; we omit them here for the sake of brevity).
2.c. The loss distribution of a second-to-default swap under the assumption of the Li model is derived. The code introduces a variable controllo which triggers the repayment of the recovery value once the inequality n_def >= num2def is verified. In our case num2def was set to 2, to account for the second-to-default feature of the swap. The recovery value accounts for the loss on the unique defaulted entity, which in the code is expressed as Loss(j,:,o) = n_def.*(notional*(1-RR)), where n_def = (n_def.*controllo + (1-num2def)*controllo). The following graph plots the loss function for two values of rho, under both model assumptions:
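The controllo logic described above can be sketched for a single scenario. The Python fragment below mirrors the Matlab expression (with num2def = 2, the covered count becomes n_def − 1 when the trigger fires and zero otherwise); the notional and recovery rate are illustrative values.

```python
def std_loss(n_def, notional=100.0, rr=0.40, num2def=2):
    """Second-to-default payoff sketch: nothing is paid unless at least
    `num2def` names default; then only defaults beyond the first are covered,
    mirroring n_def = (n_def.*controllo + (1-num2def)*controllo)."""
    controllo = 1 if n_def >= num2def else 0        # trigger flag
    covered = (n_def + (1 - num2def)) * controllo   # n_def - 1 when triggered
    return covered * notional * (1 - rr)

for n in range(4):
    print(n, std_loss(n))
```

With three names, at most two defaults beyond the first can be covered, which is why the maximum loss fraction observed in the graphs is 2 × (1 − RR) / 3 = 0.4 of the total notional.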
As we would expect, the frequency of a second-to-default is even lower than that of a first-to-default, and is close to zero up to 5 years. In this case the maximum loss threshold is 0.4. The t-student copula allows for a higher probability of joint events (due to its fatter tails) and a lower one for central values: the tendency towards a U-shaped distribution is preserved, as a positive increasing function of rho. The following figure shows the difference in default frequency according to the two dependence structures:
The peak in frequency for around-zero losses at 5 years could seem counterintuitive; a possible explanation for such a pattern lies in liquidity issues (five-year contracts are the most traded).