Selection of Research Material relating to 
RiskMetrics Group CDO Manager 
E: europe@riskmetrics.com 
W: www.riskmetrics.com 
T: 020 7842 0260 
F: 020 7842 0269
CONTENTS 
1. Introductory Technical Note on the CDO Manager Software. 
2. A comparison of stochastic default rate models, Christopher C. 
Finger. RiskMetrics Group Working Paper Number 00-02 
3. On Default Correlation: A Copula Function Approach, David X. 
Li. RiskMetrics Group Working Paper Number 99-07 
4. The Valuation of the ith-to-Default Basket Credit Derivatives, 
David X. Li. RiskMetrics Group Working Paper. 
5. Worst Loss Analysis of BISTRO Reference Portfolio, Toru 
Tanaka, Sheikh Pancham, Tamunoye Alazigha, Fuji Bank. 
RiskMetrics Group CreditMetrics Monitor April 1999. 
6. The Valuation of Basket Credit Derivatives, David X. Li. 
RiskMetrics Group CreditMetrics Monitor April 1999. 
7. Conditional Approaches for CreditMetrics Portfolio Distributions, 
Christopher C. Finger. RiskMetrics Group CreditMetrics Monitor 
April 1999.
Product Technical Note 
CDO Model - Key Features 
 
I. Introduction
II. Issues in modelling CDO structures
III. Limitations of Present Approaches
IV. Enhanced CreditMetrics™-based Methodology
V. CDO Model Flowchart
VI. Sample Results
 
I. Introduction

The CDO model allows you to analyse cash flow CDOs. A comprehensive Monte Carlo framework differs from the existing cash flow models used by many structurers and rating agencies in that we generate a more complete set of scenarios instead of just the limited stress-testing scenarios set by users. The model is multiperiod: the dynamics of collateralisation and asset tests are captured within this framework. Instead of approximating a correlated, heterogeneous portfolio by an independent, homogeneous portfolio, as some agencies do, the model takes all collateral assets into consideration.
 
II. Issues in modelling CDO structures

·  Structures vary; it is difficult to model all variations
·  No liquid secondary market
·  Current evaluation is based on agency ratings
   o  at origination, plus
   o  rating downgrade surprises
·  The few available tools lack a consistent, portfolio-based credit methodology
·  There is a need for an independent risk assessment tool, with a well-defined credit methodology, which incorporates a stochastic process
 
III. Limitations of Present Approaches

·  Most assume a constant global annual default rate for assets in the collateral pool, e.g. a fixed percentage of assets defaults in the 1st year, the 2nd year, and so on
   o  A simple base case, but not realistic, as the timing of losses in a CDO is extremely important
   o  Treats the collateral as a homogeneous pool of assets. In reality we may have issues with varying features and complex correlation structures which are strongly non-homogeneous
·  Limited to front-loaded default rate analysis, i.e. most defaults occur in the first few years
·  Most simulate default rates only
   o  Does not capture asset-specific defaults
   o  Deficient in analysing Mezzanine notes and Equity investments
 
IV. Enhanced CreditMetrics™-based Methodology

·  Treat default risk at the Obligor or Asset level. This takes into account obligor-specific risk, industry or sector risk, and correlations between assets, using the CreditMetrics methodology
·  Build a scenario of "Default time" for each asset, thereby affecting the timing of cash flows received and delays in recovery
·  Generate scenarios of default times for each asset in the collateral pool using a Monte Carlo simulation process. Front-loaded default analysis would be accounted for in some of the scenarios
·  Results analysis can focus on adverse scenarios with the worst simulated risk/return
·  Aggregate the resulting cash flows and allocate them over the distribution structure, computing results at each coupon period and over the life of the deal. Performance is thus path-dependent on the timing and severity of losses
·  Can model and analyse cash flows under the assumption of no manager intervention. This provides an expected and worst case against which to assess managers' performance
·  Combining the CreditMetrics approach with copula functions allows us to model default over multiple periods, extending the one-period framework used in CreditMetrics (a sketch of this default-time generation step follows below)
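As an illustration of that default-time generation step, here is a minimal sketch assuming a one-factor Gaussian copula with a single correlation parameter and a flat annual default probability for every asset; the function name and all parameter values are illustrative and are not taken from the CDO Manager implementation.

```python
import numpy as np
from scipy.stats import norm

def simulate_default_years(hazard, rho, n_assets, n_scenarios, horizon, seed=0):
    """Illustrative normal-copula generator for correlated default years.

    hazard : assumed flat annual default intensity for each asset
    rho    : assumed common asset correlation
    Returns an (n_scenarios, n_assets) array holding the year of default
    (1-based), or 0 if the asset survives the deal's horizon.
    """
    rng = np.random.default_rng(seed)
    # one-factor Gaussian structure: Z_i = sqrt(rho)*M + sqrt(1-rho)*e_i
    m = rng.standard_normal((n_scenarios, 1))
    e = rng.standard_normal((n_scenarios, n_assets))
    z = np.sqrt(rho) * m + np.sqrt(1 - rho) * e
    # map each Z_i to a default time through F(t) = 1 - exp(-hazard*t)
    tau = -np.log(1 - norm.cdf(z)) / hazard
    years = np.ceil(tau).astype(int)
    years[years > horizon] = 0          # asset survives the structure's life
    return years

scenarios = simulate_default_years(hazard=0.03, rho=0.3, n_assets=100,
                                   n_scenarios=10_000, horizon=6)
```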
 
V. CDO Model Flowchart

[Flowchart omitted.]

VI. Sample Results

Sample results for worst-case scenarios from simulations on the senior tranche of a generic structure:
[Four charts: Yield vs Collateral Loss (yields of roughly 4.73%–4.74% against collateral losses of $0–3 thousand), Duration vs Collateral Loss (durations of roughly 2.86–2.96), Yield vs Average Life (average lives of roughly 3.08–3.20 years), and Average Life vs Collateral Loss.]
The RiskMetrics Group 
Working Paper Number 00-02 
A comparison of stochastic default rate models 
Christopher C. Finger 
This draft: August 2000 
First draft: July 2000 
44 Wall St., New York, NY 10005
chris.finger@riskmetrics.com
www.riskmetrics.com
A comparison of stochastic default rate models 
Christopher C. Finger 
August 2000 
Abstract 
For single horizon models of defaults in a portfolio, the effect of model and distribution choice on the 
model results is well understood. Collateralized Debt Obligations in particular have sparked interest in 
default models over multiple horizons. For these, however, there has been little research, and there is little 
understanding of the impact of various model assumptions. In this article, we investigate four approaches 
to multiple horizon modeling of defaults in a portfolio. We calibrate the four models to the same set of 
input data (average defaults and a single period correlation parameter), and examine the resulting default 
distributions. The differences we observe can be attributed to the model structures, and to some extent, 
to the choice of distributions that drive the models. Our results show a significant disparity. In the single 
period case, studies have concluded that when calibrated to the same first and second order information, 
the various models do not produce vastly different conclusions. Here, the issue of model choice is much 
more important, and any analysis of structures over multiple horizons should bear this in mind. 
Keywords: Credit risk, default rate, collateralized debt obligations
1 Introduction 
In recent years, models of defaults in a portfolio context have been well studied. Three separate 
approaches (CreditMetrics, CreditRisk+, and CreditPortfolioView1) were made public in 1997. 
Subsequently, researchers2 have examined the mathematical structure of the various models. Each 
of these studies has revealed that it is possible to calibrate the models to each other and that the 
differences between the models lie in subtle choices of the driving distributions and in the data 
sources one would naturally use to feed the models. 
Common to all of these models, and to the subsequent examinations thereof, is the fact that the 
models describe only a single period. In other words, the models describe, for a specific risk horizon, 
whether each asset of interest defaults within the horizon. The timing of defaults within the risk 
horizon is not considered, nor is the possibility of defaults beyond the horizon. This is not a flaw 
of the current models, but rather an indication of their genesis as approaches to risk management 
and capital allocation for a fixed portfolio. 
Not entirely by chance, the development of portfolio models for credit risk management has coincided 
with an explosion in issuance of Collateralized Debt Obligations (CDOs). The performance 
of a CDO structure depends on the default behavior of a pool of assets. Significantly, the dependence 
is not just on whether the assets default over the life of the structure, but also on when the 
defaults occur. Thus, while an application of the existing models can give a cursory view of the 
structure (by describing, for instance, the distribution of the number of assets that will default over 
the structure’s life), a more rigorous analysis requires a model of the timing of defaults. 
In this paper, we will survey a number of extensions of the standard single-period models that allow 
for a treatment of default timing over longer horizons. We will examine two extensions of the CreditMetrics 
approach, one that models only defaults over time and a second that effectively accounts 
1 See Wilson (1997). 
2 See Finger (1998), Gordy (2000), and Kolyoglu and Hickman (1998). 
for rating migrations. In addition, we will examine the copula function approach introduced by Li 
(1999 and 2000), as well as a simple version of the stochastic intensity model applied by Duffie 
and Garleanu (1998). 
We will seek to investigate the differences in the four approaches that arise from model – rather than 
data – differences. Thus, we will suppose that we begin with satisfactory estimates of expected 
default rates over time, and of the correlation of default events over one period. Higher order 
information, such as the correlation of defaults in subsequent periods or the joint behavior of three 
or more assets, will be driven by the structure of the models. The analysis of the models will 
then illuminate the range of results that can arise given the same initial data. Nagpal and Bahar 
(1999) adopt a similar approach in the single horizon context, investigating the range of possible 
full distributions that can be calibrated to first and second order default statistics. 
In the following section, we present terminology and notation to be used throughout. We proceed to 
detail the four models. Finally, we present two comparison exercises: in the first, we use closed form 
results to analyze default rate volatilities and conditional default probabilities, while in the second, 
we implement Monte Carlo simulations in order to investigate the full distribution of realized default 
rates. 
2 Notation and terminology 
In order to compare the properties of the four models, we will consider a large homogeneous pool 
of assets. By homogeneous, we mean that each asset has the same probability of default (first order 
statistics) at every time we consider; further, each pair of assets has the same joint probability of 
default (second order statistics) at every time. 
To describe the first order statistics of the pool, we specify the cumulative default probability $q_k$ – the probability that a given asset defaults in the next $k$ years – for $k = 1, 2, \ldots, T$, where $T$ is the maximum horizon we consider. Equivalently, we may specify the marginal default probability $p_k$ – the probability that a given asset defaults in year $k$. Clearly, cumulative and marginal default probabilities are related through

$$q_k = q_{k-1} + p_k, \qquad k = 2, \ldots, T. \tag{1}$$
It is important to distinguish a third equivalent specification, that of conditional default probabilities. 
The conditional default probability in year k is defined as the conditional probability that an asset 
defaults in year k, given that the asset has survived (that is, has not defaulted) in the first k−1 years. 
This probability is given by $p_k/(1 - q_{k-1})$.
Finally, to describe the second order statistics of the pool, we specify the joint cumulative default probability $q_{j,k}$ – the probability that, for a given pair of assets, the first asset defaults sometime in the first $j$ years and the second defaults sometime in the first $k$ years – or equivalently, the joint marginal default probability $p_{j,k}$ – the probability that the first asset defaults in year $j$ and the second defaults in year $k$. These two notions are related through

$$q_{j,k} = q_{j-1,k-1} + \sum_{i=1}^{j-1} p_{i,k} + \sum_{i=1}^{k-1} p_{j,i} + p_{j,k}, \qquad j, k = 2, \ldots, T. \tag{2}$$
In practice, it is possible to obtain first order statistics for relatively long horizons, either by observing 
market prices of risky debt and calibrating cumulative default probabilities as in Duffie and Singleton 
(1999), or by taking historical cumulative default experience from a study such as Keenan et al (2000) 
or Standard & Poor's (2000). Less information is available for second order statistics, however, and 
therefore we will assume that we can obtain the joint default probability for the first year ($p_{1,1}$),3 
but not any of the joint default probabilities for subsequent years. Thus, our exercise will be to 
calibrate each of the four models to fixed values of $q_1, q_2, \ldots, q_T$ and $p_{1,1}$, and then to compare the 
higher order statistics implied by the models. 

3 This is a reasonable supposition, since all of the single period models mentioned previously essentially require $p_{1,1}$ as an input. 

The model comparison can be a simple task of comparing values of $p_{1,2}$, $p_{2,2}$, $q_{2,2}$, and so on. 
However, to make the comparisons a bit more tangible, we will consider the distributions of realized 
default rates. The term default rate is often used loosely in the literature, without a clear notion 
of whether default rate is synonymous with default probability, or rather is itself a random variable. 
To be clear, in this article, default rate is a random variable equal to the proportion of assets in 
a portfolio that default. For instance, if the random variable $X_i^{(k)}$ is equal to one if the $i$th asset defaults in year $k$, then the year $k$ default rate is equal to

$$\frac{1}{n}\sum_{i=1}^{n} X_i^{(k)}. \tag{3}$$
For our homogeneous portfolio, the mean year $k$ default rate is simply $p_k$, the marginal default probability for year $k$. Furthermore, the standard deviation of the year $k$ default rate (which we will refer to as the year $k$ default rate volatility) is

$$\sqrt{p_{k,k} - p_k^2 + (p_k - p_{k,k})/n}. \tag{4}$$

Of interest to us is the large portfolio limit (that is, $n \to \infty$) of this quantity, normalized by the default probability. We will refer to this as the normalized year $k$ default volatility, which is given by

$$\frac{\sqrt{p_{k,k} - p_k^2}}{p_k}. \tag{5}$$

Additionally, we will examine the normalized cumulative year $k$ default volatility, which is defined similarly to the above, with the exception that the default rate is computed over the first $k$ years rather than year $k$ only. The normalized cumulative default volatility is given by

$$\frac{\sqrt{q_{k,k} - q_k^2}}{q_k}. \tag{6}$$
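As a quick numerical illustration of (4) and (5), the sketch below evaluates the finite-portfolio and normalized default volatilities from assumed values of $p_k$ and $p_{k,k}$; the inputs are placeholders, not calibrated figures.

```python
import math

def default_volatility(p_k, p_kk, n=None):
    """Year-k default rate volatility per (4); large-portfolio limit if n is None."""
    var = p_kk - p_k**2
    if n is not None:
        var += (p_k - p_kk) / n
    return math.sqrt(var)

p_k, p_kk = 0.0335, 0.00519                      # placeholder inputs
vol_100 = default_volatility(p_k, p_kk, n=100)   # portfolio of 100 assets
vol_norm = default_volatility(p_k, p_kk) / p_k   # normalized, as in (5)
```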
Finally, we will use $\Phi$ to denote the standard normal cumulative distribution function. In the bivariate setting, we will use $\Phi_2(z_1, z_2; \rho)$ to indicate the probability that $Z_1 \le z_1$ and $Z_2 \le z_2$, where $Z_1$ and $Z_2$ are standard normal random variables with correlation $\rho$.
In the following four sections, we describe the models to be considered, and discuss in detail the 
calibration to our initial data. 
3 Discrete CreditMetrics extension 
In its simplest form, the single period CreditMetrics model, calibrated for our homogeneous portfolio, can be stated as follows:

(i) Define a default threshold $\alpha$ such that $\Phi(\alpha) = p_1$.

(ii) To each asset $i$, assign a standard normal random variable $Z^{(i)}$, where the correlation between distinct $Z^{(i)}$ and $Z^{(j)}$ is equal to $\rho$, such that

$$\Phi_2(\alpha, \alpha; \rho) = p_{1,1}. \tag{7}$$

(iii) Asset $i$ defaults in year 1 if $Z^{(i)} \le \alpha$.
The simplest extension of this model to multiple horizons is to simply repeat the one period model. We then have default thresholds $\alpha_1, \alpha_2, \ldots, \alpha_T$ corresponding to each period. For the first period, we assign standard normal random variables $Z_1^{(i)}$ to each asset as above, and asset $i$ defaults in the first period if $Z_1^{(i)} \le \alpha_1$. For assets that survive the first period, we assign a second set of standard normal random variables $Z_2^{(i)}$, such that the correlation between distinct $Z_2^{(i)}$ and $Z_2^{(j)}$ is $\rho$ but the variables from one period to the next are independent. Asset $i$ then defaults in the second period if $Z_1^{(i)} > \alpha_1$ (it survives the first period) and $Z_2^{(i)} \le \alpha_2$. The extension to subsequent periods should be clear. In the end, the model is specified by the default thresholds $\alpha_1, \alpha_2, \ldots, \alpha_T$ and the correlation parameter $\rho$.

To calibrate this model to our cumulative default probabilities $q_1, q_2, \ldots, q_T$ and joint default probability, we begin by setting the first period default threshold:

$$\alpha_1 = \Phi^{-1}(q_1). \tag{8}$$

For subsequent periods, we set $\alpha_k$ such that the probability that $Z_k^{(i)} \le \alpha_k$ is equal to the conditional default probability for period $k$:

$$\alpha_k = \Phi^{-1}\left(\frac{q_k - q_{k-1}}{1 - q_{k-1}}\right). \tag{9}$$
We complete the calibration by choosing $\rho$ to satisfy (7), with $\alpha$ replaced by $\alpha_1$.

The joint default probabilities and default volatilities are easily obtained in this context. For instance, the marginal year two joint default probability is given by (for distinct $i$ and $j$):

$$p_{2,2} = P\left\{Z_1^{(i)} > \alpha_1,\; Z_1^{(j)} > \alpha_1,\; Z_2^{(i)} \le \alpha_2,\; Z_2^{(j)} \le \alpha_2\right\}
= P\left\{Z_1^{(i)} > \alpha_1,\; Z_1^{(j)} > \alpha_1\right\} \cdot P\left\{Z_2^{(i)} \le \alpha_2,\; Z_2^{(j)} \le \alpha_2\right\}
= (1 - 2p_1 + p_{1,1}) \cdot \Phi_2(\alpha_2, \alpha_2; \rho). \tag{10}$$

Similarly, the probability that asset $i$ defaults in the first period, and asset $j$ in the second period is

$$p_{1,2} = P\left\{Z_1^{(i)} \le \alpha_1,\; Z_1^{(j)} > \alpha_1,\; Z_2^{(j)} \le \alpha_2\right\}
= (p_1 - p_{1,1}) \cdot \frac{q_2 - p_1}{1 - p_1}. \tag{11}$$

It is then possible to obtain $q_{2,2}$ using (2) and the default volatilities using (5) and (6).
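To make the calibration concrete, here is a minimal sketch of (8)–(11), using scipy for $\Phi$ and $\Phi_2$; the inputs are the speculative grade, high correlation figures from Tables 1 and 2, and the helper name phi2 is ours, not the paper's.

```python
import numpy as np
from scipy.stats import norm, multivariate_normal

q1, q2 = 0.0335, 0.0676      # cumulative default probabilities (Table 2)
rho = 0.40                   # high correlation setting

alpha1 = norm.ppf(q1)                        # (8): about -1.83
alpha2 = norm.ppf((q2 - q1) / (1 - q1))      # (9): about -1.81

def phi2(x, y, r):
    """Bivariate standard normal CDF with correlation r."""
    return multivariate_normal(mean=[0.0, 0.0],
                               cov=[[1.0, r], [r, 1.0]]).cdf([x, y])

p1 = q1
p11 = phi2(alpha1, alpha1, rho)                        # (7)
p22 = (1 - 2*p1 + p11) * phi2(alpha2, alpha2, rho)     # (10)
p12 = (p1 - p11) * (q2 - p1) / (1 - p1)                # (11)
```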
4 Diffusion-driven CreditMetrics extension 
By construction, the discrete CreditMetrics extension above does not allow for any correlation of 
default rates through time. For instance, if a high default rate is realized in the first period, this has 
no bearing on the default rate in the second period, since the default drivers for the second period 
(the $Z_2^{(i)}$ above) are independent of the default drivers for the first. Intuitively, we would not expect 
this behavior from the market. If a high default rate occurs in one period, then it is likely that those 
obligors that did not default would have generally decreased in credit quality. The impact would 
then be that the default rate for the second period would also have a tendency to be high. 
In order to capture this behavior, we introduce a CreditMetrics extension where defaults in consecutive periods are not driven by independent random variables, but rather by a single diffusion process. Our diffusion-driven CreditMetrics extension is described by:

(i) Define default thresholds $\alpha_1, \alpha_2, \ldots, \alpha_T$ for each period.
(ii) To each obligor, assign a standard Wiener process $W^{(i)}$, with $W_0^{(i)} = 0$, where the instantaneous correlation between distinct $W^{(i)}$ and $W^{(j)}$ is $\rho$.4

(iii) Obligor $i$ defaults in the first year if $W_1^{(i)} \le \alpha_1$.

(iv) For $k > 1$, obligor $i$ defaults in year $k$ if it survives the first $k-1$ years (that is, $W_1^{(i)} > \alpha_1, \ldots, W_{k-1}^{(i)} > \alpha_{k-1}$) and $W_k^{(i)} \le \alpha_k$.
Note that this approach allows for the behavior mentioned above. If the default rate is high in the first year, this is because many of the Wiener processes have fallen below the threshold $\alpha_1$. The 
Wiener processes for non-defaulting obligors will have generally trended downward as well, since 
all of the Wiener processes are correlated. This implies a greater likelihood of a high number of 
defaults in the second year. In effect, then, this approach introduces a notion of credit migration. 
Cases where the Wiener process trends downward but does not cross the default threshold can be 
thought of as downgrades, while cases where the process trends upward are essentially upgrades. 
To calibrate the first threshold $\alpha_1$, we observe that

$$P\left\{W_1^{(i)} \le \alpha_1\right\} = \Phi(\alpha_1), \tag{12}$$

and thus that $\alpha_1$ is given by (8). For the second threshold, we require that the probability that an obligor defaults in year two is equal to $p_2$:

$$P\left\{W_1^{(i)} > \alpha_1,\; W_2^{(i)} \le \alpha_2\right\} = p_2. \tag{13}$$

Since $W^{(i)}$ is a Wiener process, we know that the standard deviation of $W_t^{(i)}$ is $\sqrt{t}$ and that for $s < t$, the correlation between $W_s^{(i)}$ and $W_t^{(i)}$ is $\sqrt{s/t}$. Thus, given $\alpha_1$, we find the value of $\alpha_2$ that satisfies

$$\Phi\left(\alpha_2/\sqrt{2}\right) - \Phi_2\left(\alpha_1, \alpha_2/\sqrt{2};\; \sqrt{1/2}\right) = p_2. \tag{14}$$

4 Technically, the cross variation process for $W^{(i)}$ and $W^{(j)}$ is $\rho\,dt$.
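Equation (14) has no closed-form solution for $\alpha_2$, but it reduces to a one-dimensional root search; a sketch using scipy's brentq with the same speculative grade inputs (phi2 is the illustrative helper defined in the previous sketch, repeated here for completeness):

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm, multivariate_normal

q1, q2 = 0.0335, 0.0676
p2 = q2 - q1
alpha1 = norm.ppf(q1)

def phi2(x, y, r):
    return multivariate_normal(mean=[0.0, 0.0],
                               cov=[[1.0, r], [r, 1.0]]).cdf([x, y])

def eq14(alpha2):
    # left side of (14) minus the target marginal probability p2
    a = alpha2 / np.sqrt(2.0)
    return norm.cdf(a) - phi2(alpha1, a, np.sqrt(0.5)) - p2

alpha2 = brentq(eq14, -10.0, 0.0)   # about -2.34 for these inputs (Table 1)
```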
For the $k$th period, given $\alpha_1, \ldots, \alpha_{k-1}$, we calibrate $\alpha_k$ by solving

$$P\left\{W_1^{(i)} > \alpha_1,\; \ldots,\; W_{k-1}^{(i)} > \alpha_{k-1},\; W_k^{(i)} \le \alpha_k\right\} = p_k, \tag{15}$$

again utilizing the properties of the Wiener process $W^{(i)}$ to compute the probability on the left hand side.

We complete the calibration by finding $\rho$ such that the year one joint default probability is $p_{1,1}$:

$$P\left\{W_1^{(i)} \le \alpha_1,\; W_1^{(j)} \le \alpha_1\right\} = p_{1,1}. \tag{16}$$

Since $W_1^{(i)}$ and $W_1^{(j)}$ each follow a standard normal distribution, and have a correlation of $\rho$, the solution for $\rho$ here is identical to that of the previous section.
With the calibration complete, it is a simple task to compute the joint default probabilities. For instance, the joint year two default probability is given by

$$p_{2,2} = P\left\{W_1^{(i)} > \alpha_1,\; W_1^{(j)} > \alpha_1,\; W_2^{(i)} \le \alpha_2,\; W_2^{(j)} \le \alpha_2\right\}, \tag{17}$$

where we use the fact that $\{W_1^{(i)}, W_1^{(j)}, W_2^{(i)}, W_2^{(j)}\}$ follow a multivariate normal distribution with covariance

$$\mathrm{Cov}\left\{W_1^{(i)}, W_1^{(j)}, W_2^{(i)}, W_2^{(j)}\right\} = \begin{pmatrix} 1 & \rho & 1 & \rho \\ \rho & 1 & \rho & 1 \\ 1 & \rho & 2 & 2\rho \\ \rho & 1 & 2\rho & 2 \end{pmatrix}. \tag{18}$$
5 Copula functions 
A drawback of both the CreditMetrics extensions above is that in a Monte Carlo setting, they require 
a stepwise simulation approach. In other words, we must simulate the pool of assets over the first 
year, tabulate the ones that default, then simulate the remaining assets over the second year, and so 
on. Li (1999 and 2000) introduces an approach wherein it is possible to simulate the default times 
directly, thus avoiding the need to simulate each period individually. 
The normal copula function approach is as follows: 
(i) Specify the cumulative default time distribution $F$, such that $F(t)$ gives the probability that a given asset defaults prior to time $t$.

(ii) Assign a standard normal random variable $Z^{(i)}$ to each asset, where the correlation between distinct $Z^{(i)}$ and $Z^{(j)}$ is $\rho$.

(iii) Obtain the default time $\tau_i$ for asset $i$ through

$$\tau_i = F^{-1}\left(\Phi(Z^{(i)})\right). \tag{19}$$

Since we are concerned here only with the year in which an asset defaults, and not the precise timing within the year, we will consider a discrete version of the copula approach:

(i) Specify the cumulative default probabilities $q_1, q_2, \ldots, q_T$ as in Section 2.

(ii) For $k = 1, \ldots, T$ compute the threshold $\alpha_k = \Phi^{-1}(q_k)$. Clearly, $\alpha_1 \le \alpha_2 \le \ldots \le \alpha_T$. Define $\alpha_0 = -\infty$.

(iii) Assign $Z^{(i)}$ to each asset as above.

(iv) Asset $i$ defaults in year $k$ if $\alpha_{k-1} < Z^{(i)} \le \alpha_k$.

The calibration to the cumulative default probabilities is already given. Further, it is easy to observe5 that the correlation parameter $\rho$ is calibrated exactly as in the previous two sections.

The joint default probabilities are perhaps simplest to obtain for this approach. For example, the joint cumulative default probability $q_{k,l}$ is given by

$$q_{k,l} = P\left\{Z^{(i)} \le \alpha_k,\; Z^{(j)} \le \alpha_l\right\} = \Phi_2(\alpha_k, \alpha_l; \rho). \tag{20}$$

5 Details are presented in Li (1999) and Li (2000).
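A sketch of the discrete copula simulation: a single vector of correlated normals maps directly to default years, with no stepwise simulation. A one-factor structure is assumed here purely as a convenient way of generating a constant pairwise correlation $\rho$.

```python
import numpy as np
from scipy.stats import norm

q = np.array([0.0335, 0.0676, 0.0998, 0.1289, 0.1557, 0.1791])  # Table 2
rho = 0.40
alphas = norm.ppf(q)                 # thresholds alpha_1 <= ... <= alpha_T

rng = np.random.default_rng(1)
n_scen, n_assets = 1000, 100
m = rng.standard_normal((n_scen, 1))                  # common factor
e = rng.standard_normal((n_scen, n_assets))           # idiosyncratic parts
z = np.sqrt(rho) * m + np.sqrt(1 - rho) * e

# default year k satisfies alpha_{k-1} < Z <= alpha_k; 0 denotes survival
idx = np.searchsorted(alphas, z)     # first k (0-based) with alpha_k >= Z
year = np.where(idx < len(q), idx + 1, 0)
```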
6 Stochastic default intensity 
6.1 Description of the model 
The approaches of the three previous sections can all be thought of as extensions of the single 
period CreditMetrics framework. Each approach relies on standard normal random variables to 
drive defaults, and calibrates thresholds for these variables. Furthermore, it is easy to see that over 
the first period, the three approaches are identical; they only differ in their behavior over multiple 
periods. 
Our fourth model takes a different approach to the construction of correlated defaults over time, and 
can be thought of as an extension of the single period CreditRisk+ framework. In the CreditRisk+ 
model, correlations between default events are constructed through the assets’ dependence on a 
common default probability, which itself is a random variable.6 Importantly, given the realization 
of the default probability, defaults are conditionally independent. The volatility of the common 
default probability is in effect the correlation parameter for this model; a higher default volatility 
induces stronger correlations, while a zero volatility produces independent defaults.7 
The natural extension of the CreditRisk+ framework to continuous time is the stochastic intensity 
approach presented in Duffie and Garleanu (1998) and Duffie and Singleton (1999). Intuitively, the 
stochastic intensity model stipulates that in a given small time interval, assets default independently, 
with probability proportional to a common default intensity.8 In the next time interval, the intensity 
changes, and defaults are once again independent, but with the default probability proportional to 
the new intensity level. The evolution of the intensity is described through a stochastic process. In 
practice, since the intensity must remain positive, it is common to apply similar stochastic processes 
as are utilized in models of interest rates. 
6More precisely, assets may depend on different default probabilities, each of which are correlated. 
7See Finger (1998), Gordy (2000), and Kolyoglu and Hickman (1998) for further discussion. 
8As with our description of the CreditRisk+ model, this is a simplification. The Duffie-Garleanu framework provides for an 
intensity process for each asset, with the processes being correlated. 
For our purposes, we will model a single intensity process $h$. Conditional on $h$, the default time for each asset is then the first arrival of a Poisson process with arrival rate given by $h$. The Poisson processes driving the defaults for distinct assets are independent, meaning that given a realization of the intensity process $h$, defaults are independent. The Poisson process framework implies that given $h$, the probability that a given asset survives until time $t$ is

$$\exp\left(-\int_0^t h_u\,du\right). \tag{21}$$
Further, because defaults are conditionally independent, the conditional probability, given $h$, that two assets both survive until time $t$ is

$$\exp\left(-2\int_0^t h_u\,du\right). \tag{22}$$
The unconditional survival probabilities are given by expectations over the process $h$, so that in particular, the survival probability for a single asset is given by

$$1 - q_t = E\exp\left(-\int_0^t h_u\,du\right). \tag{23}$$
For the intensity process, we assume that $h$ evolves according to the stochastic differential equation

$$dh_t = -\kappa(h_t - \bar h_k)\,dt + \sigma\sqrt{h_t}\,dW_t, \tag{24}$$

where $W$ is a Wiener process and $\bar h_k$ is the level to which the process trends during year $k$. (That is, the mean reversion is toward $\bar h_1$ for $t \le 1$, toward $\bar h_2$ for $1 < t \le 2$, etc.) Let $h_0 = \bar h_1$. Note that this is essentially the model for the instantaneous discount rate used in the Cox-Ingersoll-Ross interest rate model. Note also that in Duffie-Garleanu, there is a jump component to the evolution of $h$, while the level of mean reversion is constant.

In order to express the default probabilities implied by the stochastic intensity model in closed form, we will rely on the following result from Duffie-Garleanu.9 For a process $h$ with $h_0 = \bar h$ and evolving according to (24) with $\bar h_k = \bar h$ for all $k$, we have

$$E_t\left[\exp\left(-\int_t^{t+s} h_u\,du\right)\exp\left(x + y\,h_{t+s}\right)\right] = \exp\left(x + \alpha_s(y)\bar h + \beta_s(y)h_t\right), \tag{25}$$

where $E_t$ denotes conditional expectation given information available at time $t$. The functions $\alpha_s$ and $\beta_s$ are given by

$$\alpha_s(y) = \kappa\left[\frac{s}{c} + \frac{a(y)c - d(y)}{bcd(y)}\,\log\left(\frac{c + d(y)e^{bs}}{c + d(y)}\right)\right], \quad\text{and} \tag{26}$$

$$\beta_s(y) = \frac{1 + a(y)e^{bs}}{c + d(y)e^{bs}}, \tag{27}$$

where

$$c = -\,\frac{\kappa + \sqrt{\kappa^2 + 2\sigma^2}}{2}, \tag{28}$$

$$d(y) = (1 - cy)\,\frac{\sigma^2 y - \kappa + \sqrt{(\sigma^2 y - \kappa)^2 - \sigma^2(\sigma^2 y^2 - 2\kappa y - 2)}}{\sigma^2 y^2 - 2\kappa y - 2}, \tag{29}$$

$$a(y) = \left(d(y) + c\right)y - 1, \tag{30}$$

$$b = \frac{-d(y)(\kappa + 2c) + a(y)(\sigma^2 - \kappa c)}{a(y)c - d(y)}. \tag{31}$$

9 We have changed the notation slightly from the Duffie-Garleanu result, in order to make more explicit the dependence on $\bar h$.
6.2 Calibration 
Our calibration approach for this model will be to fix the mean reversion speed $\kappa$, solve for $\bar h_1$ and $\sigma$ to match $p_1$ and $p_{1,1}$, and then to solve in turn for $\bar h_2, \ldots, \bar h_T$ to match $p_2, \ldots, p_T$. To begin, we apply (23) and (25) to obtain

$$p_1 = 1 - \exp\left(\alpha_1(0)\bar h_1 + \beta_1(0)h_0\right) = 1 - \exp\left(\left[\alpha_1(0) + \beta_1(0)\right]\bar h_1\right). \tag{32}$$

To compute the joint probability that two obligors each survive the first year, we must take the expectation of (22), which is essentially the same computation as above, but with the process $h$ replaced by $2h$. We observe that the process $2h$ also evolves according to (24) with the same mean reversion speed $\kappa$, and with $\bar h_k$ replaced by $2\bar h_k$ and $\sigma$ replaced by $\sigma\sqrt{2}$. Thus, we define the functions $\hat\alpha_s$ and $\hat\beta_s$ in the same way as $\alpha_s$ and $\beta_s$, with $\sigma$ replaced by $\sigma\sqrt{2}$. We can then compute the joint one year survival probability:

$$E\exp\left(-2\int_0^1 h_u\,du\right) = \exp\left(2\left[\hat\alpha_1(0) + \hat\beta_1(0)\right]\bar h_1\right). \tag{33}$$

Finally, since the joint survival probability is equal to $1 - 2p_1 + p_{1,1}$, we have

$$p_{1,1} = 2p_1 - 1 + \exp\left(2\left[\hat\alpha_1(0) + \hat\beta_1(0)\right]\bar h_1\right). \tag{34}$$

To calibrate $\sigma$ and $\bar h_1$ to (32) and (34), we first find the value of $\sigma$ such that

$$\frac{2\left[\hat\alpha_1(0) + \hat\beta_1(0)\right]}{\alpha_1(0) + \beta_1(0)} = \frac{\log[1 - 2p_1 + p_{1,1}]}{\log[1 - p_1]}, \tag{35}$$

and then set

$$\bar h_1 = \frac{\log[1 - p_1]}{\alpha_1(0) + \beta_1(0)}. \tag{36}$$
Note that though the equations are lengthy, the calibration is actually quite straightforward, in that we are only ever required to fit one parameter at a time.
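To illustrate, the sketch below automates the one-parameter-at-a-time fit of (35) and (36). Rather than the general expressions (26)–(31), it uses the standard Cox-Ingersoll-Ross closed form for the $y = 0$ case, which is all that (35)–(36) require; the inputs are the speculative grade, low correlation figures, for which Table 1 reports $\sigma = 0.28$ and $\bar h_1 = 3.44\%$.

```python
import numpy as np
from scipy.optimize import brentq

p1, p11 = 0.0335, 0.001776   # speculative grade, low correlation inputs
kappa = 0.29                 # slow mean reversion setting

def alpha_beta(s, kappa, sigma):
    """alpha_s(0) and beta_s(0) via the standard CIR closed form."""
    g = np.sqrt(kappa**2 + 2.0 * sigma**2)
    denom = (g + kappa) * (np.exp(g * s) - 1.0) + 2.0 * g
    alpha = (2.0 * kappa / sigma**2) * np.log(
        2.0 * g * np.exp((g + kappa) * s / 2.0) / denom)
    beta = -2.0 * (np.exp(g * s) - 1.0) / denom
    return alpha, beta

def eq35(sigma):
    a, b = alpha_beta(1.0, kappa, sigma)
    ah, bh = alpha_beta(1.0, kappa, sigma * np.sqrt(2.0))  # sigma -> sigma*sqrt(2)
    return 2.0 * (ah + bh) / (a + b) - np.log(1 - 2*p1 + p11) / np.log(1 - p1)

sigma = brentq(eq35, 1e-3, 5.0)       # (35): about 0.28 here
a, b = alpha_beta(1.0, kappa, sigma)
hbar1 = np.log(1 - p1) / (a + b)      # (36): about 3.44% here
```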
In order to calibrate $\bar h_2$, we need to obtain an expression for the two year cumulative default probability $q_2$. To this end, we must compute the two year survival probability

$$1 - q_2 = E\exp\left(-\int_0^2 h_u\,du\right). \tag{37}$$

Since the process $h$ does not have a constant level of mean reversion over the first two years, we cannot apply (25) directly here. However (25) can be applied once we express the two year survival probability as

$$1 - q_2 = E\left[\exp\left(-\int_0^1 h_u\,du\right) E_1\exp\left(-\int_1^2 h_u\,du\right)\right]. \tag{38}$$

Now given $h_1$, the process $h$ evolves according to (24) from $t = 1$ to $t = 2$ with a constant mean reversion level $\bar h_2$, meaning we can apply (25) to the conditional expectation in (38), yielding

$$1 - q_2 = E\left[\exp\left(-\int_0^1 h_u\,du\right)\exp\left(\alpha_1(0)\bar h_2 + \beta_1(0)h_1\right)\right]. \tag{39}$$

The same argument allows us to apply (25) again to (39), giving

$$1 - q_2 = \exp\left(\alpha_1(0)\bar h_2 + \left[\alpha_1(\beta_1(0)) + \beta_1(\beta_1(0))\right]\bar h_1\right). \tag{40}$$

Thus, our calibration for the second year requires setting

$$\bar h_2 = \frac{1}{\alpha_1(0)}\left\{\log[1 - q_2] - \left[\alpha_1(\beta_1(0)) + \beta_1(\beta_1(0))\right]\bar h_1\right\}. \tag{41}$$

The remaining mean reversion levels $\bar h_3, \ldots, \bar h_T$ are calibrated similarly.
6.3 Joint default probabilities 
The computation of joint probabilities for longer horizons is similar to (34). The joint probability that two obligors each survive the first two years is given by

$$E\exp\left(-2\int_0^2 h_u\,du\right). \tag{42}$$

Here, we apply the same arguments as in (38) through (40) to derive

$$E\exp\left(-2\int_0^2 h_u\,du\right) = \exp\left(2\hat\alpha_1(0)\bar h_2 + 2\left[\hat\alpha_1(\hat\beta_1(0)) + \hat\beta_1(\hat\beta_1(0))\right]\bar h_1\right). \tag{43}$$

For the joint probability that the first obligor survives the first year and the second survives the first two years, we must compute

$$E\left[\exp\left(-\int_0^1 h_u\,du\right)\exp\left(-\int_0^2 h_u\,du\right)\right] = E\left[\exp\left(-2\int_0^1 h_u\,du\right)\exp\left(-\int_1^2 h_u\,du\right)\right]. \tag{44}$$

The same reasoning yields

$$E\left[\exp\left(-\int_0^1 h_u\,du\right)\exp\left(-\int_0^2 h_u\,du\right)\right] = \exp\left(\alpha_1(0)\bar h_2 + 2\left[\hat\alpha_1(\hat\beta_1(0)/2) + \hat\beta_1(\hat\beta_1(0)/2)\right]\bar h_1\right). \tag{45}$$

The joint default probabilities $p_{2,2}$ and $p_{1,2}$ then follow from (43) and (45).
7 Model comparisons – closed form results 
Our first set of model comparisons will utilize the closed form results described in the previous 
sections. We will restrict the comparisons here to the two period setting, and to second order results 
(that is, default volatilities and joint probabilities for two assets); results for multiple periods and 
actual distributions of default rates will be analyzed through Monte Carlo in the next section. 
For our two period comparisons, we will analyze four sets of parameters: investment and speculative grade default probabilities10, each with two correlation values. The low and high correlation settings will correspond to values of 10% and 40%, respectively, for the asset correlation parameter $\rho$ in the first three models. For the stochastic intensity model, we will investigate two values for the mean reversion speed $\kappa$. The slow setting will correspond to $\kappa = 0.29$, such that a random shock to the intensity process will decay by 25% over the next year; the fast setting will correspond to $\kappa = 1.39$, such that a random shock to the intensity process will decay by 75% over one year. Calibration results are presented in Table 1.
We present the normalized year two default volatilities for each model in Figure 1. As defined in (5) 
and (6), the marginal and cumulative default volatilities are the standard deviation of the marginal 
and cumulative two year default rates of a large, homogeneous portfolio. As we would expect, the 
default volatilities are greater in the high correlation cases than in the low correlation cases. Of the 
five models tested, the stochastic intensity model with slow mean reversion seems to produce the 
highest levels of default volatility, indicating that correlations in the second period tend to be higher 
for this model than for the others. 
It is interesting to note that of the first three models, all of which are based on the normal distribution 
and default thresholds, the copula approach in all four cases has a relatively low marginal default 
volatility but a relatively high cumulative default volatility. (The slow stochastic intensity model is 
in fact the only other model to show a marginal volatility less than the cumulative volatility.) Note 
10Taken from Exhibit 30 of Keenan et al (2000). 
that the cumulative two year default rate is the sum of the first and second year marginal default 
rates, and thus that the two year cumulative default volatility is composed of three terms: the first 
and second year marginal default volatilities and the covariance between the first and second years. 
Our calibration guarantees that the first year default volatilities are identical across the models. 
Thus, the behavior of the copula model suggests a stronger covariance term (that is, a stronger link 
between year one and year two defaults) than for either of the two CreditMetrics extensions. 
To further investigate the links between default events, we examine conditional probability of a 
default in the second year, given the default of another asset. To be precise, for two distinct assets i 
and j , we will calculate the conditional probability that asset i defaults in year two, given that asset 
j defaults in year one, normalized by the unconditional probability that asset i defaults in year two. 
In terms of quantities we have already defined, this normalized conditional probability is equal to $p_{1,2}/(p_1 p_2)$. We will also calculate the normalized conditional probability that asset $i$ defaults in year two, given that asset $j$ defaults in year two, given by $p_{2,2}/p_2^2$. For both of these quantities, a 
value of one indicates that the first asset defaulting does not affect the chance that the second asset 
defaults; a value of four indicates that the second asset is four times more likely to default if the 
first asset defaults than it is if we have no information about the first asset. Thus, the probability 
conditional on a year two default can be interpreted as an indicator of contemporaneous correlation 
of defaults, and the probability conditional on a year one default as an indicator of lagged default 
correlation. 
The normalized conditional probabilities under the five models are presented in Figure 2. As we 
expect, there is no lagged correlation for the discrete CreditMetrics extension. Interestingly, the 
copula and both stochastic intensity models often show a higher lagged than contemporaneous 
correlation. While it is difficult to establish much intuition for the copula model, this phenomenon 
can be rationalized in the stochastic intensity setting. For this model, any shock to the default 
intensity will tend to persist longer than one year. If one asset defaults in the first year, it is most 
likely due to a positive shock to the intensity process; this shock then persists into the second year, 
where the other asset is more likely to default than normal. Further, shocks are more persistent for the 
slower mean reversion, explaining why the difference in lagged and contemporaneous correlation 
is more pronounced in this case. By contrast, the two CreditMetrics extensions show much higher 
contemporaneous than lagged correlation; this lack of persistence in the correlation structure will 
manifest itself more strongly over longer horizons. 
To this point, we have calibrated the collection of models to have the same means over two periods, 
and the same volatilities over one period. We have then investigated the remaining second order 
statistics – the second period volatility and the correlation between the first and second periods – that 
depend on the particular models. In the next section, we will extend the analysis on two fronts: first, 
we will investigate more horizons in order to examine the effects of lagged and contemporaneous 
correlations over longer times; second, we will investigate the entire distribution of portfolio defaults 
rather than just the second order moments. 
8 Model comparisons – simulation results 
In this section, we perform Monte Carlo simulations for the five models investigated previously. 
In each case, we begin with a homogeneous portfolio of one hundred speculative grade bonds. We 
calibrate the model to the cumulative default probabilities in Table 2 and to the two correlation 
settings from the previous section. Over 1,000 trials, we simulate the number of bonds that default 
within each year, up to a final horizon of six years.11 
The simulation procedures are straightforward for the two CreditMetrics extensions and the copula 
approach. For the stochastic intensity framework, we simulate the evolution of the intensity process 
according to (24). This requires a discretization of (24): 
$$h_{t+\Delta t} - h_t \approx -\kappa(h_t - \bar h_k)\,\Delta t + \sigma\sqrt{h_t}\,\sqrt{\Delta t}\;\epsilon, \tag{46}$$
11As we have pointed out before, it is possible to simulate continuous default times under the copula and stochastic intensity 
frameworks. In order to compare with the two CreditMetrics extensions, we restrict the analysis to annual buckets. 
where $\epsilon$ is a standard normal random variable.12 Given the intensity process path for a particular scenario, we then compute the conditional survival probability for each annual period as in (21). Finally, we generate defaults by drawing independent binomial random variables with the appropriate probability.
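A sketch of this procedure for one scenario: Euler steps for (46) within each year, annual default probabilities conditional on the path via (21), and conditionally independent defaults. The step count is illustrative, the square root is guarded by flooring at zero rather than by the adaptive timestep of footnote 12, and the mean reversion levels beyond year two are placeholders (the paper calibrates $\bar h_3, \ldots, \bar h_6$ to Table 2).

```python
import numpy as np

rng = np.random.default_rng(7)
kappa, sigma = 0.29, 0.28              # slow setting, speculative grade, low corr.
hbar = [0.0344, 0.0606, 0.06, 0.06, 0.06, 0.06]  # years 3-6 are placeholders
years, steps, n_assets = 6, 250, 100

dt = 1.0 / steps
h = hbar[0]
p_default = np.zeros(years)
for k in range(years):
    integral = 0.0
    for _ in range(steps):
        # Euler step for (46); the floor at zero guards the square root
        h = max(h - kappa * (h - hbar[k]) * dt
                + sigma * np.sqrt(h * dt) * rng.standard_normal(), 0.0)
        integral += h * dt
    p_default[k] = 1.0 - np.exp(-integral)   # conditional on the path, per (21)

# defaults are conditionally independent given the intensity path
alive = n_assets
defaults = np.zeros(years, dtype=int)
for k in range(years):
    defaults[k] = rng.binomial(alive, p_default[k])
    alive -= defaults[k]
```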
The simulation time for the five models is a direct result of the number of timesteps needed. The 
copula model simulates the default times directly, and is therefore the fastest. The two CreditMetrics 
models require only annual timesteps, and require roughly 50% more runtime than the copula model. 
For the stochastic intensity model, the need to simulate over many timesteps produces a runtime 
over one hundred times greater than the simpler models. 
We first examine default rate volatilities over the six horizons. As in the previous section, we 
consider the normalized cumulative default rate volatility. For year k, this is the standard deviation 
of the number of defaults that occur in years one through k, divided by the expected number of 
defaults in that period. This is essentially the quantity defined in (6), with the exception that 
here we consider a finite portfolio. The default volatilities from our simulations are presented 
in Figure 3. Our calibration guarantees that the first year default volatilities are essentially the 
same. The second year results are similar to those in Figure 1, with slightly higher volatility for 
the slow stochastic intensity model, and slightly lower volatility for the discrete CreditMetrics 
extension. At longer horizons, these differences are amplified: the slow stochastic intensity and 
discrete CreditMetrics models show high and low volatilities, respectively, while the remaining 
three models are indistinguishable. 
Though default rate volatilities are illustrative, they do not provide us information about the full distribution of defaults through time. At the one year horizon, our calibration guarantees that volatility will be consistent across the five models; the distributional assumptions, however, influence the precise shape of the portfolio distribution.

12 Note that while (24) guarantees a non-negative solution for $h$, the discretized version admits a small probability that $h_{t+\Delta t}$ will be negative. To reduce this possibility, we choose $\Delta t$ for each timestep such that the probability that $h_{t+\Delta t} < 0$ is sufficiently small. The result is that while we only need 50 timesteps per year in some cases, we require as many as one thousand when the value of $\sigma$ is large, as in the high correlation, fast mean reversion case.

We see in Table 3 that there is actually very little difference 
between even the 1st percentiles of the distributions, particularly in the low correlation case. For 
the full six year horizon, Table 4 shows more differences between the percentiles. Consistent with 
the default volatility results, the tail percentiles are most extreme for the slow stochastic intensity 
model, and least extreme for discrete CreditMetrics. Interestingly, though the CreditMetrics diffusion 
model shows similar volatility to the copula and fast stochastic intensity models, it produces 
less extreme percentiles than these other models. Note also that among distributions with similar 
means, the median serves well as an indicator of skewness. The high correlation setting generally, 
and the slow stochastic intensity model in particular, show lower medians. For these cases, the 
distribution places higher probability on the worst default scenarios as well as the scenarios with 
few or no defaults. 
The cumulative probability distributions for the six year horizons are presented in Figures 4 through 
7. As in the other comparisons, the slow stochastic intensity model is notable for placing large probability 
on the very low and high default rate scenarios, while the discrete CreditMetrics extension 
stands out as the most benign of the distributions. Most striking, however, is the similarity between 
the fast stochastic intensity and copula models, which are difficult to differentiate even at the most 
extreme percentile levels. 
As a final comparison of the default distributions, we consider the pricing of a simple structure 
written on our portfolio. Suppose each of the one hundred bonds in the portfolio has a notional 
value of $1 million, and that in the event of a default the recovery rate on each bond is forty percent. 
The structure is composed of three elements: 
(i) First loss protection. As defaults occur, the protection seller reimburses the structure up to a 
total payment of $10 million. Thus, the seller pays $600,000 at the time of the first default, 
$600,000 at the time of each of the subsequent fifteen defaults, and $400,000 at the time of 
the seventeenth default. 
(ii) Second loss protection. The protection seller reimburses the structure for losses in excess of 
$10 million, up to a total payment of $20 million. This amounts to reimbursing the losses on 
the seventeenth through the fiftieth defaults. 
(iii) Senior notes. Notes with a notional value of $100 million maturing after six years. The notes 
suffer a principal loss if the first and second loss protection are fully utilized – that is, if more 
than fifty defaults occur. 
For the first and second loss protection, we will estimate the cost of the protection based on a 
constant discount rate of 7%. In each scenario, we produce the timing and amounts of the protection 
payments, and discount these back to the present time. The price of the protection is then the average 
discounted value across the 1,000 scenarios. For the senior notes, we compute the expected principal 
loss at maturity, which is used by Moody’s along with Table 5 to determine the notes’ rating. 
Additionally, we compute the total amount of protection (capital) required to achieve a rating of A3 
(an expected loss of 0.5%) and Aa3 (an expected loss of 0.101%). 
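For one scenario, the pricing of the first loss protection might look like the following sketch (amounts in $ millions; continuous discounting at the flat 7% rate is our assumption, and the function name and inputs are illustrative):

```python
import numpy as np

def first_loss_pv(default_times, payment=0.6, cap=10.0, r=0.07):
    """Discounted first loss protection payments for one scenario ($M).

    default_times : default times (in years) of the defaulted bonds
    Pays 0.6 per default (60% loss on a $1M bond) until 10.0 is exhausted.
    """
    paid = pv = 0.0
    for t in np.sort(default_times):
        amount = min(payment, cap - paid)
        if amount <= 0.0:
            break
        pv += amount * np.exp(-r * t)
        paid += amount
    return pv

# the protection price is then the average of first_loss_pv over the
# 1,000 simulated scenarios; the second loss piece is handled analogously
```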
We present the first and second loss prices in Table 6, along with the expected loss, current rating, 
and required capital for the senior notes. The slow stochastic intensity model yields the lowest 
pricing for the first loss protection, the worst rating for the senior notes, and the highest required 
capital. The results for the other models are as expected, with the copula and fast mean reversion 
models yielding the most similar results. 
9 Conclusion 
The analysis of Collateralized Debt Obligations, and other structured products written on credit 
portfolios, requires a model of correlated defaults over multiple horizons. For single horizon 
models, the effect of model and distribution choice on the model results is well understood. For 
the multiple horizon models, however, there has been little research. 
We have outlined four approaches to multiple horizon modeling of defaults in a portfolio. We 
have calibrated the four models to the same set of input data (average defaults and a single period 
correlation parameter), and have investigated the resulting default distributions. The differences we 
observe can be attributed to the model structures, and to some extent, to the choice of distributions 
that drive the models. Our results show a significant disparity. The rating on a class of senior 
notes under our low correlation assumption varied from Aaa to A3, and under our high correlation 
assumption from A1 to Baa3. Additionally, the capital required to achieve a target investment grade 
rating varied by as much as a factor of two. 
In the single period case, a number of studies have concluded that when calibrated to the same 
first and second order information, the various models do not produce vastly different conclusions. 
Here, the issue of model choice is much more important, and any analysis of structures over multiple 
horizons should heed this potential model error. 
References 
Cifuentes, A., Choi, E., and Waite, J. (1998). Stability of ratings of CBO/CLO tranches. Moody's 
Investors Service. 
Credit Suisse Financial Products. (1997). CreditRisk+: A credit risk management framework. 
Duffie, D. and Garleanu, N. (1998). Risk and valuation of Collateralized Debt Obligations. Working 
paper. Graduate School of Business, Stanford University. 
http://www.stanford.edu/˜duffie/working.htm 
Duffie, D. and Singleton, K. (1998). Simulating correlated defaults. Working paper. Graduate 
School of Business, Stanford University. 
http://www.stanford.edu/˜duffie/working.htm 
Duffie, D. and Singleton, K. (1999). Modeling term structures of defaultable bonds. Review of 
Financial Studies, 12, 687-720. 
Finger, C. (1998). Sticks and stones. Working paper. RiskMetrics Group. 
http://www.riskmetrics.com/research/working 
Gordy, M. (2000). A comparative anatomy of credit risk models. Journal of Banking & Finance, 
24 (January), 119-149. 
Gupton, G., Finger, C., and Bhatia, M. (1997). CreditMetrics – Technical Document. Morgan 
Guaranty Trust Co. http://www.riskmetrics.com/research/techdoc 
Li, D. (1999). The valuation of basket credit derivatives. CreditMetrics Monitor, April, 34-50. 
http://www.riskmetrics.com/research/journals 
Li, D. (2000). On default correlation: a copula approach. The Journal of Fixed Income, 9 (March), 
43-54. 
Keenan, S., Hamilton, D. and Berthault, A. (2000). Historical default rates of corporate bond 
issuers, 1920-1999. Moody’s Investors Service. 
Kolyoglu, U. and Hickman, A. (1998). Reconcilable differences. Risk, October. 
Nagpal, K. and Bahar, R. (1999). An analytical approach for credit risk analysis under correlated 
defaults. CreditMetrics Monitor, April, 51-74. http://www.riskmetrics.com/research/journals 
Standard & Poor's. (2000). Ratings performance 1999: Stability & Transition. 
Wilson, T. (1997). Portfolio Credit Risk I. Risk, September. 
Wilson, T. (1997). Portfolio Credit Risk II. Risk, October. 
Table 1: Calibration results.

                                Investment grade           Speculative grade
Parameter                       Low corr.   High corr.     Low corr.   High corr.
Inputs
  $p_1$                         0.16%       0.16%          3.35%       3.35%
  $p_2$                         0.33%       0.33%          3.41%       3.41%
  $p_{1,1}$                     0.0007%     0.0059%        0.1776%     0.5190%
Discrete CreditMetrics extension
  $\alpha_1$                    -2.95       -2.95          -1.83       -1.83
  $\alpha_2$                    -2.72       -2.72          -1.81       -1.81
  $\rho$                        10%         40%            10%         40%
Diffusion CreditMetrics extension
  $\alpha_1$                    -2.95       -2.95          -1.83       -1.83
  $\alpha_2$                    -3.78       -3.78          -2.34       -2.34
  $\rho$                        10%         40%            10%         40%
Copula functions
  $\alpha_1$                    -2.95       -2.95          -1.83       -1.83
  $\alpha_2$                    -2.58       -2.58          -1.49       -1.49
  $\rho$                        10%         40%            10%         40%
Stochastic intensity – slow mean reversion
  $\kappa$                      0.29        0.29           0.29        0.29
  $\sigma$                      0.10        0.37           0.28        0.76
  $\bar h_1$                    0.16%       0.16%          3.44%       3.67%
  $\bar h_2$                    1.47%       1.58%          6.06%       12.10%
Stochastic intensity – fast mean reversion
  $\kappa$                      1.39        1.39           1.39        1.39
  $\sigma$                      0.14        0.53           0.40        1.12
  $\bar h_1$                    0.16%       0.16%          3.44%       3.68%
  $\bar h_2$                    0.53%       0.55%          4.00%       5.02%
Table 2: Moody’s speculative grade cumulative default probabilities. From Exhibit 30, Keenan et al (2000). 
Year 1 2 3 4 5 6 
Probability 3.35% 6.76% 9.98% 12.89% 15.57% 17.91% 
Table 3: One year default statistics. Speculative grade. 
Statistic         CM Discrete   CM Diffusion   Copula   Stoch. Int. Slow   Stoch. Int. Fast 
Low correlation 
Mean 3.37 3.36 3.51 3.20 3.20 
St. Dev. 3.15 3.27 3.40 3.03 3.05 
Median 3 2 3 3 2 
5th percentile 10 9 10 9 10 
1st percentile 14 15 15 13 14 
High correlation 
Mean 3.62 3.24 3.72 3.69 3.56 
St. Dev. 7.08 6.32 7.52 6.84 6.73 
Median 1 1 1 1 1 
5th percentile 19 15 19 19 16 
1st percentile 37 32 34 30 35 
Table 4: Six year cumulative default statistics. Speculative grade. 
Statistic         CM Discrete   CM Diffusion   Copula   Stoch. Int. Slow   Stoch. Int. Fast 
Low correlation 
Mean 17.72 16.93 18.04 17.34 18.10 
St. Dev. 6.40 8.68 9.66 16.15 9.73 
Median 17 16 17 12 16 
5th percentile 29 33 37 52 37 
1st percentile 34 42 47 73 49 
High correlation 
Mean 18.41 17.28 18.61 19.81 20.41 
St. Dev. 13.49 17.41 19.27 24.37 19.36 
Median 15 12 12 9 13 
5th percentile 45 54 63 82 62 
1st percentile 59 73 78 98 86 
Table 5: Target expected losses for six year maturity. From Chart 3, Cifuentes et al (2000). 
Rating Expected loss 
Aaa 0.002% 
Aa1 0.023% 
Aa2 0.048% 
Aa3 0.101% 
A1 0.181% 
A2 0.320% 
A3 0.500% 
Baa1 0.753% 
Baa2 1.083% 
Baa3 2.035% 
Table 6: Prices (in $M) for first and second loss protection. Expected loss, rating, and required capital ($M) 
for senior notes. Speculative grade collateral. 
Senior notes 
First loss Second loss Exp. loss Rating Capital (Aa3) Capital (A3) 
Low correlation 
CM Discrete 7.227 1.350 0.000% Aaa 17.3 13.8 
CM Diffusion 6.676 1.533 0.017% Aa1 21.6 15.9 
Copula 6.788 1.936 0.022% Aa1 24.5 18.0 
Stoch. int. – slow 5.533 2.501 0.466% A3 39.8 29.4 
Stoch. int. – fast 6.763 1.911 0.038% Aa2 25.7 18.3 
High correlation 
CM Discrete 6.117 2.698 0.159% A1 32.3 23.6 
CM Diffusion 5.144 2.832 0.514% Baa1 41.1 30.2 
Copula 5.210 3.200 0.821% Baa2 43.7 34.4 
Stoch. int. – slow 4.856 3.307 1.903% Baa3 54.5 46.1 
Stoch. int. – fast 5.685 3.500 0.918% Baa2 45.9 35.2 
Figure 1: Marginal and cumulative year two default volatility. 
[Four bar-chart panels – investment/speculative grade crossed with low/high correlation – each showing the marginal and cumulative year two default volatilities for the five models: CM Discrete, CM Diffusion, Copula, Stochastic Intensity Slow, Stochastic Intensity Fast.]
Figure 2: Year two conditional default probability given default of a second asset. 
[Four panels – investment/speculative grade crossed with low/high correlation – each showing the normalized year two default probability conditional on a first year default and conditional on a second year default, for the five models: CM Discrete, CM Diffusion, Copula, Stochastic Intensity Slow, Stochastic Intensity Fast.]
Figure 3: Normalized cumulative default rate volatilities. Speculative grade. 
[Two panels – low and high correlation – plotting the normalized cumulative default rate volatility against time (years 1–6) for CM Discrete, CM Diffusion, Copula, St.Int. Slow, and St.Int. Fast.]
Figure 4: Distribution of cumulative six year defaults. Speculative grade, low correlation. 
[Cumulative probability of the number of defaults (0–100) for CM Discrete, CM Diffusion, Copula, St.Int. Slow, and St.Int. Fast.]
Figure 5: Distribution of cumulative six year defaults, extreme cases. Speculative grade, low correlation. 
[Upper tail (cumulative probability 80%–100%) of the distribution over 20–100 defaults for CM Discrete, CM Diffusion, Copula, St.Int. Slow, and St.Int. Fast.]
Figure 6: Distribution of cumulative six year defaults. Speculative grade, high correlation. 
[Cumulative probability of the number of defaults (0–100) for CM Discrete, CM Diffusion, Copula, St.Int. Slow, and St.Int. Fast.]
Figure 7: Distribution of cumulative six year defaults, extreme cases. Speculative grade, high correlation. 
[Upper tail (cumulative probability 80%–100%) of the distribution over 20–100 defaults for CM Discrete, CM Diffusion, Copula, St.Int. Slow, and St.Int. Fast.]
The RiskMetrics Group 
Working Paper Number 99-07 
On Default Correlation: A Copula Function Approach 
David X. Li 
This draft: February 2000 
First draft: September 1999 
44 Wall St. 
New York, NY 10005 
david.li@riskmetrics.com 
www.riskmetrics.com
On Default Correlation: A Copula Function Approach 
David X. Li 
February 2000 
Abstract 
This paper studies the problem of default correlation. We first introduce a random variable called 
"time-until-default" to denote the survival time of each defaultable entity or financial instrument, and define the 
default correlation between two credit risks as the correlation coefficient between their survival times. 
Then we argue why a copula function approach should be used to specify the joint distribution of survival 
times after marginal distributions of survival times are derived from market information, such as risky 
bond prices or asset swap spreads. The definition and some basic properties of copula functions are 
given. We show that the current CreditMetrics approach to default correlation through asset correlation 
is equivalent to using a normal copula function. Finally, we give some numerical examples to illustrate 
the use of copula functions in the valuation of some credit derivatives, such as credit default swaps and 
first-to-default contracts.
1 Introduction 
The rapidly growing credit derivative market has created a new set of financial instruments which can be 
used to manage the most important dimension of financial risk - credit risk. In addition to the standard 
credit derivative products, such as credit default swaps and total return swaps based upon a single underlying 
credit risk, many new products are now associated with a portfolio of credit risks. A typical example is the 
product with payment contingent upon the time and identity of the first or second-to-default in a given credit 
risk portfolio. Variations include instruments with payment contingent upon the cumulative loss before a 
given time in the future. The equity tranche of a collateralized bond obligation (CBO) or a collateralized 
loan obligation (CLO) is yet another variation, where the holder of the equity tranche incurs the first loss. 
Deductible and stop-loss in insurance products could also be incorporated into the basket credit derivatives 
structure. As more financial firms try to manage their credit risk at the portfolio level and the CBO/CLO 
market continues to expand, the demand for basket credit derivative products will most likely continue to 
grow. 
Central to the valuation of the credit derivatives written on a credit portfolio is the problem of default 
correlation. The problem of default correlation even arises in the valuation of a simple credit default swap 
with one underlying reference asset if we do not assume the independence of default between the reference 
asset and the default swap seller. Surprising though it may seem, the default correlation has not been well 
defined and understood in finance. Existing literature tends to define default correlation based on discrete 
events which dichotomize according to survival or nonsurvival at a critical period such as one year. For 
example, if we denote

$$q_A = \Pr[E_A], \quad q_B = \Pr[E_B], \quad q_{AB} = \Pr[E_A E_B],$$

where $E_A$, $E_B$ are defined as the default events of two securities $A$ and $B$ over 1 year, then the default correlation $\rho$ between the two default events $E_A$ and $E_B$, based on the standard definition of correlation of two random variables, is defined as follows:

$$\rho = \frac{q_{AB} - q_A \cdot q_B}{\sqrt{q_A(1 - q_A)\,q_B(1 - q_B)}}. \tag{1}$$
This discrete event approach has been taken by Lucas [1995]. Hereafter we simply call this definition of 
default correlation the discrete default correlation. 
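As a quick illustration of equation (1), the short Python sketch below computes the discrete default correlation from hypothetical marginal and joint one-year default probabilities (the numbers are illustrative, not taken from the paper):

import math

# Hypothetical one-year default probabilities (illustrative only).
q_A, q_B = 0.05, 0.07   # marginal probabilities of default within one year
q_AB = 0.006            # joint probability that both default within the year

# Discrete default correlation, equation (1).
rho = (q_AB - q_A * q_B) / math.sqrt(q_A * (1 - q_A) * q_B * (1 - q_B))
print(f"discrete default correlation = {rho:.4f}")   # about 0.0450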
However, the choice of a specific period like one year is more or less arbitrary. It may correspond to the many empirical studies of default rates over a one-year period. But the dependence of default correlation on a specific
time interval has its disadvantages. First, default is a time dependent event, and so is default correlation. Let 
us take the survival time of a human being as an example. The probability of dying within one year for a 
person aged 50 years today is about 0.6%, but the probability of dying for the same person within 50 years is 
almost a sure event. Similarly default correlation is a time dependent quantity. Let us now take the survival 
times of a couple, both aged 50 years today. The correlation between the two discrete events that each dies 
within one year is very small. But the correlation between the two discrete events that each dies within 100 
years is 1. Second, concentration on a single period of one year wastes important information. There are 
empirical studies which show that the default tendency of corporate bonds is linked to their age since issue. 
Also there are strong links between the economic cycle and defaults. Arbitrarily focusing on a one year period 
neglects this important information. Third, in the majority of credit derivative valuations, what we need is 
not the default correlation of two entities over the next year. We may need to have a joint distribution of 
survival times for the next 10 years. Fourth, the calculation of default rates as simple proportions is possible 
only when no samples are censored during the one-year period.¹
This paper introduces a few techniques used in survival analysis. These techniques have been widely applied 
to other areas, such as life contingencies in actuarial science and industry life testing in reliability studies, 
which are similar to the credit problems we encounter here. We first introduce a random variable called 
¹ A company that is observed, default free, by Moody's for 5 years and then withdrawn from the Moody's study must have a survival time exceeding 5 years. Another company may enter the Moody's study in the middle of a year, which implies that Moody's observes the company for only half of the one-year observation period. In the survival analysis literature, such incomplete observation of default time is called censoring. According to Moody's studies, such incomplete observation does occur in Moody's credit default samples.
“time-until-default” to denote the survival time of each defaultable entity or financial instrument. Then, 
we define the default correlation of two entities as the correlation between their survival times. In credit 
derivative valuation we need first to construct a credit curve for each credit risk. A credit curve gives all 
marginal conditional default probabilities over a number of years. This curve is usually derived from the 
risky bond spread curve or asset swap spreads observed currently from the market. Spread curves and asset 
swap spreads contain information on default probabilities, recovery rate and liquidity factors etc. Assuming 
an exogenous recovery rate and a default treatment, we can extract a credit curve from the spread curve or 
asset swap spread curve. For two credit risks, we would obtain two credit curves from market observable 
information. Then, we need to specify a joint distribution for the survival times such that the marginal 
distributions are the credit curves. Obviously, this problem has no unique solution. Copula functions used in 
multivariate statistics provide a convenient way to specify the joint distribution of survival times with given 
marginal distributions. The concept of copula functions, their basic properties, and some commonly used 
copula functions are introduced. Finally, we give a few numerical examples of credit derivative valuation to 
demonstrate the use of copula functions and the impact of default correlation. 
2 Characterization of Default by Time-Until-Default 
In the study of default, interest centers on a group of individual companies for each of which there is defined 
a point event, often called default (or survival), occurring after a length of time. We introduce a random
variable called the time-until-default, or simply survival time, for a security, to denote this length of time. 
This random variable is the basic building block for the valuation of cash flows subject to default. 
To precisely determine time-until-default, we need: an unambiguously defined time origin, a time scale for 
measuring the passage of time, and a clear definition of default. 
We choose the current time as the time origin to allow use of current market information to build credit 
curves. The time scale is defined in terms of years for continuous models, or number of periods for discrete 
models. The meaning of default is defined by some rating agencies, such as Moody’s. 
2.1 Survival Function 
Let us consider an existing security A. This security’s time-until-default, TA, is a continuous random variable 
which measures the length of time from today to the time when default occurs. For simplicity we just use T,
which should be understood as the time-until-default for a specific security A. Let F(t) denote the distribution
function of T , 
$$F(t) = \Pr(T \le t), \quad t \ge 0, \qquad (2)$$
and set
$$S(t) = 1 - F(t) = \Pr(T > t), \quad t \ge 0. \qquad (3)$$
We also assume that F(0) = 0, which implies S(0) = 1. The function S(t) is called the survival function. 
It gives the probability that a security will attain age t . The distribution of TA can be defined by specifying 
either the distribution function F(t) or the survival function S(t). We can also define a probability density 
function as follows:
$$f(t) = F'(t) = -S'(t) = \lim_{\Delta \to 0^+} \frac{\Pr[t \le T < t + \Delta]}{\Delta}.$$
To make probability statements about a security which has survived x years, the future lifetime for this security is $T - x \mid T > x$. We introduce two more notations:
$${}_tq_x = \Pr[T - x \le t \mid T > x], \quad t \ge 0,$$
$${}_tp_x = 1 - {}_tq_x = \Pr[T - x > t \mid T > x], \quad t \ge 0. \qquad (4)$$
The symbol ${}_tq_x$ can be interpreted as the conditional probability that the security A will default within the next t years conditional on its survival for x years. In the special case of x = 0, we have
$${}_tp_0 = S(t), \quad t \ge 0.$$
If t = 1, we use the actuarial convention to omit the prefix 1 in the symbols ${}_tq_x$ and ${}_tp_x$, and we have
$$p_x = \Pr[T - x > 1 \mid T > x],$$
$$q_x = \Pr[T - x \le 1 \mid T > x].$$
The symbol qx is usually called the marginal default probability, which represents the probability of default 
in the next year conditional on the survival until the beginning of the year. A credit curve is then simply 
defined as the sequence of q0, q1, · · · , qn in discrete models. 
2.2 Hazard Rate Function 
The distribution function F(t) and the survival function S(t) provide two mathematically equivalent ways of specifying the distribution of the random variable time-until-default, and there are many other equivalent functions. The one used most frequently by statisticians is the hazard rate function, which gives the instantaneous default probability for a security that has attained age x:
$$\Pr[x < T \le x + \Delta x \mid T > x] = \frac{F(x + \Delta x) - F(x)}{1 - F(x)} \approx \frac{f(x)\,\Delta x}{1 - F(x)}.$$
The function
$$\frac{f(x)}{1 - F(x)}$$
has a conditional probability density interpretation: it gives the value of the conditional probability density function of T at exact age x, given survival to that time. Let's denote it as h(x), which is usually called the hazard rate function. The relationship of the hazard rate function with the distribution function and survival function is as follows:
$$h(x) = \frac{f(x)}{1 - F(x)} = -\frac{S'(x)}{S(x)}. \qquad (5)$$
Then, the survival function can be expressed in terms of the hazard rate function, 
$$S(t) = e^{-\int_0^t h(s)\,ds}.$$
Now, we can express tqx and tpx in terms of the hazard rate function as follows 
$${}_tp_x = e^{-\int_0^t h(s+x)\,ds}, \qquad (6)$$
$${}_tq_x = 1 - e^{-\int_0^t h(s+x)\,ds}.$$
In addition, 
$$F(t) = 1 - S(t) = 1 - e^{-\int_0^t h(s)\,ds},$$
and 
$$f(t) = S(t) \cdot h(t), \qquad (7)$$
which is the density function for T.
A typical assumption is that the hazard rate is a constant, h, over a certain period, such as [x, x + 1]. In this case, the density function is
$$f(t) = h\,e^{-ht},$$
which shows that the survival time follows an exponential distribution with parameter h. Under this assumption, the survival probability over the time interval [x, x + t] for 0 < t ≤ 1 is
$${}_tp_x = 1 - {}_tq_x = e^{-\int_0^t h(s)\,ds} = e^{-ht} = (p_x)^t,$$
where $p_x$ is the probability of survival over a one-year period. This assumption can be used to scale down the
default probability over one year to a default probability over a time interval less than one year. 
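A minimal Python sketch of this scaling, assuming a hypothetical one-year survival probability:

import math

p_x = 0.95                  # hypothetical one-year survival probability
h = -math.log(p_x)          # constant hazard rate implied by p_x

# Survival over a fraction t of the year: tpx = e^{-ht} = (p_x)^t.
t = 0.5
tpx = math.exp(-h * t)
assert abs(tpx - p_x ** t) < 1e-12
print(f"six-month default probability = {1 - tpx:.4%}")   # about 2.53%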
Modelling a default process is equivalent to modelling a hazard function. There are a number of reasons why modelling the hazard rate function may be a good idea. First, it provides us with information on the immediate default risk of each entity known to be alive at exact age t. Second, the comparisons of groups of individuals
are most incisively made via the hazard rate function. Third, the hazard rate function based model can be 
easily adapted to more complicated situations, such as where there is censoring or there are several types 
of default or where we would like to consider stochastic default fluctuations. Fourth, there are a lot of 
similarities between the hazard rate function and the short rate. Many modeling techniques for the short rate 
processes can be readily borrowed to model the hazard rate. 
Finally, we can define the joint survival function for two entities A and B based on their survival times TA 
and TB, 
$$S_{T_A T_B}(s, t) = \Pr[T_A > s,\; T_B > t].$$
The joint distribution function is
$$F(s, t) = \Pr[T_A \le s,\; T_B \le t] = 1 - S_{T_A}(s) - S_{T_B}(t) + S_{T_A T_B}(s, t).$$
The aforementioned concepts and results can be found in survival analysis books, such as Bowers et al. 
[1997], Cox and Oakes [1984]. 
3 Definition of Default Correlations 
The default correlation of two entities A and B can then be defined with respect to their survival times TA 
and TB as follows 
$$\rho_{AB} = \frac{\mathrm{Cov}(T_A, T_B)}{\sqrt{\mathrm{Var}(T_A)\,\mathrm{Var}(T_B)}} = \frac{E(T_A T_B) - E(T_A)E(T_B)}{\sqrt{\mathrm{Var}(T_A)\,\mathrm{Var}(T_B)}}. \qquad (8)$$
Hereafter we simply call this definition of default correlation the survival time correlation. The survival 
time correlation is a much more general concept than that of the discrete default correlation based on a one 
period. If we have the joint distribution f (s, t) of two survival times TA, TB, we can calculate the discrete 
default correlation. For example, if we define 
$$E_1 = [T_A < 1], \qquad E_2 = [T_B < 1],$$
then the discrete default correlation can be calculated using equation (1) with the following calculation:
$$q_{12} = \Pr[E_1 E_2] = \int_0^1\!\!\int_0^1 f(s, t)\,ds\,dt,$$
$$q_1 = \int_0^1 f_A(s)\,ds, \qquad q_2 = \int_0^1 f_B(t)\,dt.$$
However, knowing the discrete default correlation over a one-year period does not allow us to specify the
survival time correlation. 
4 The Construction of the Credit Curve 
The distribution of survival time or time-until-default can be characterized by the distribution function, 
survival function or hazard rate function. It is shown in Section 2 that all default probabilities can be 
calculated once a characterization is given. The hazard rate function used to characterize the distribution of 
survival time can also be called a credit curve due to its similarity to a yield curve. But the basic question is: 
how do we obtain the credit curve or the distribution of survival time for a given credit? 
There exist three methods to obtain the term structure of default rates: 
(i) Obtaining historical default information from rating agencies; 
(ii) Taking the Merton option theoretical approach; 
(iii) Taking the implied approach using market prices of defaultable bonds or asset swap spreads. 
Rating agencies like Moody’s publish historical default rate studies regularly. In addition to the commonly 
cited one-year default rates, they also present multi-year default rates. From these rates we can obtain the 
hazard rate function. For example, Moody’s (see Carty and Lieberman [1997]) publishes weighted average 
cumulative default rates from 1 to 20 years. For the B rating, the first 5 years cumulative default rates in 
percentage are 7.27, 13.87, 19.94, 25.03 and 29.45. From these rates we can obtain the marginal conditional 
default probabilities. The first marginal conditional default probability in year one is simply the one-year 
default probability, 7.27%. The other marginal conditional default probabilities can be obtained using the 
following formula: 
$${}_{n+1}q_x = {}_nq_x + {}_np_x \cdot q_{x+n}, \qquad (9)$$
which simply states that the probability of default over the time interval [0, n + 1] is the sum of the probability of default over the time interval [0, n], plus the probability of survival to the end of the nth year and default in the following year. Using equation (9) we have the marginal conditional default probability:
$$q_{x+n} = \frac{{}_{n+1}q_x - {}_nq_x}{1 - {}_nq_x},$$
which results in the marginal conditional default probabilities in year 2, 3, 4, 5 as 7.12%, 7.05%, 6.36% and 
5.90%. If we assume a piecewise constant hazard rate function over each year, then we can obtain the hazard 
rate function using equation (6). The hazard rate function obtained is given in Figure (1). 
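The computation just described is mechanical; the following Python sketch reproduces the marginal conditional default probabilities from the quoted Moody's B-rating cumulative rates and, under the piecewise-constant assumption, the implied hazard rates (the relation q = 1 − e^{−h} follows from equation (6) with a one-year step):

import math

# Moody's B-rating cumulative default rates for years 1..5, in per cent.
cumulative = [7.27, 13.87, 19.94, 25.03, 29.45]

marginal, hazard = [], []
prev = 0.0
for c in cumulative:
    nq = c / 100.0
    q = (nq - prev) / (1.0 - prev)      # q_{x+n} = (n+1qx - nqx)/(1 - nqx)
    marginal.append(q)
    hazard.append(-math.log(1.0 - q))   # piecewise-constant h: q = 1 - e^{-h}
    prev = nq

print([f"{q:.2%}" for q in marginal])   # 7.27%, 7.12%, 7.05%, 6.36%, 5.90%
print([f"{h:.4f}" for h in hazard])     # declining hazard, as in Figure (1)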
Using diffusion processes to describe changes in the value of the firm, Merton [1974] demonstrated that a 
firm’s default could be modeled with the Black and Scholes methodology. He showed that stock could be 
considered as a call option on the firm with strike price equal to the face value of a single payment debt. 
Using this framework we can obtain the default probability for the firm over one period, from which we 
can translate this default probability into a hazard rate function. Geske [1977] and Delianedis and Geske 
[1998] extended Merton’s analysis to produce a term structure of default probabilities. Using the relationship 
between the hazard rate and the default probabilities we can obtain a credit curve. 
Alternatively, we can take the implicit approach by using market observable information, such as asset swap 
spreads or risky corporate bond prices. This is the approach used by most credit derivative trading desks. The 
extracted default probabilities reflect the market-agreed perception today about the future default tendency of 
the underlying credit. Li [1998] presents one approach to building the credit curve from market information 
based on the Duffie and Singleton [1996] default treatment. In that paper the author assumes that there exists 
a series of bonds with maturities of 1, 2, …, n years, which are issued by the same company and have the same
seniority. All of those bonds have observable market prices. From the market price of these bonds we can 
calculate their yields to maturity. Using the yield to maturity of corresponding treasury bonds we obtain a 
yield spread curve over treasury (or asset swap spreads for a yield spread curve over LIBOR). The credit 
curve construction is based on this yield spread curve and an exogenous assumption about the recovery rate 
based on the seniority and the rating of the bonds, and the industry of the corporation. 
The suggested approach is contrary to the use of historical default experience information provided by rating 
agencies such as Moody’s. We intend to use market information rather than historical information for the 
following reasons: 
• The calculation of profit and loss for a trading desk can only be based on current market information. 
This current market information reflects the market agreed perception about the evolution of the market 
in the future, on which the actual profit and loss depend. The default rate derived from current market 
information may differ substantially from historical default rates.
• Rating agencies use classification variables in the hope that homogeneous risks will be obtained after classification. This technique has been used elsewhere, such as in pricing automobile insurance. Unfortunately, classification techniques often omit some firm-specific information. Constructing a credit curve for each credit allows us to use more firm-specific information.
• Rating agencies react much more slowly than the market in anticipating future credit quality. A typical example is the rating agencies' reaction to the recent Asian crisis.
• Ratings are primarily used to calculate default frequency instead of default severity. However, much 
of credit derivative value depends on both default frequency and severity. 
• The information available from a rating agency is usually the one-year default probability for each rating group and the rating migration matrix. Neither the transition matrices nor the default probabilities are necessarily stable over long periods of time. In addition, many credit derivative products have maturities well beyond one year, which requires the use of long-term marginal default probabilities.
It is shown under the Duffie and Singleton approach that a defaultable instrument can be valued as if it were a default-free instrument by discounting the defaultable cash flow at a credit risk adjusted discount factor. The credit risk adjusted discount factor, or the total discount factor, is the product of the risk-free discount factor and
the pure credit discount factor if the underlying factors affecting default and those affecting the interest rate 
are independent. Under this framework and the assumption of a piecewise constant hazard rate function, we 
can derive a credit curve or specify the distribution of the survival time. 
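As a numerical sketch of this product form (purely hypothetical flat rates, and assuming the independence between default and interest rates stated above):

import math

r = 0.05    # hypothetical flat risk-free rate
h = 0.03    # hypothetical flat credit-risk-adjusted hazard rate
t = 3.0     # maturity in years

df_riskfree = math.exp(-r * t)          # risk-free discount factor
df_credit = math.exp(-h * t)            # pure credit discount factor
df_total = df_riskfree * df_credit      # total (credit risk adjusted) factor
assert abs(df_total - math.exp(-(r + h) * t)) < 1e-12
print(f"defaultable discount factor over {t:.0f}y = {df_total:.4f}")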
5 Dependent Models - Copula Functions 
Let us study some problems of an n-credit portfolio. Using either the historical approach or the market
implicit approach, we can construct the marginal distribution of survival time for each of the credit risks in 
the portfolio. If we assume mutual independence among the credit risks, we can study any problem associated 
with the portfolio. However, the independence assumption of the credit risks is obviously not realistic; in 
reality, the default rate for a group of credits tends to be higher in a recession and lower when the economy 
is booming. This implies that each credit is subject to the same macroeconomic environment, and that
there exists some form of positive dependence among the credits. To introduce a correlation structure into 
the portfolio, we must determine how to specify a joint distribution of survival times, with given marginal 
distributions. 
Obviously, this problem has no unique solution. Generally speaking, knowing the joint distribution of 
random variables allows us to derive the marginal distributions and the correlation structure among the 
random variables, but not vice versa. There are many different techniques in statistics which allow us to 
specify a joint distribution function with given marginal distributions and a correlation structure. Among them, the copula function is a simple and convenient approach. We give a brief introduction to the concept of the copula function in the next section.
5.1 Definition and Basic Properties of Copula Function 
A copula function is a function that links or marries univariate marginals to their full multivariate distribution. 
For m uniform random variables, U1, U2, · · · ,Um, the joint distribution function C, defined as 
C(u1, u2, · · · , um, ρ) = Pr[U1 ≤ u1,U2 ≤ u2, · · · ,Um ≤ um] 
can also be called a copula function. 
Copula functions can be used to link marginal distributions with a joint distribution. For given univariate 
marginal distribution functions F1(x1), F2(x2),· · · , Fm(xm), the function 
C(F1(x1), F2(x2), · · · , Fm(xm)) = F(x1, x2, · · · xm), 
which is defined using a copula function C, results in a multivariate distribution function with univariate 
marginal distributions as specified F1(x1), F2(x2),· · · , Fm(xm). 
This property can be easily shown as follows: 
$$C(F_1(x_1), F_2(x_2), \cdots, F_m(x_m), \rho) = \Pr[U_1 \le F_1(x_1),\, U_2 \le F_2(x_2),\, \cdots,\, U_m \le F_m(x_m)]$$
$$= \Pr[F_1^{-1}(U_1) \le x_1,\, F_2^{-1}(U_2) \le x_2,\, \cdots,\, F_m^{-1}(U_m) \le x_m]$$
$$= \Pr[X_1 \le x_1,\, X_2 \le x_2,\, \cdots,\, X_m \le x_m]$$
$$= F(x_1, x_2, \cdots, x_m).$$
The marginal distribution of Xi is 
$$C(F_1(+\infty), F_2(+\infty), \cdots, F_i(x_i), \cdots, F_m(+\infty), \rho) = \Pr[X_1 \le +\infty,\, \cdots,\, X_i \le x_i,\, \cdots,\, X_m \le +\infty]$$
$$= \Pr[X_i \le x_i] = F_i(x_i).$$
Sklar [1959] established the converse. He showed that any multivariate distribution function F can be 
written in the form of a copula function. He proved the following: If F(x1, x2, · · · xm) is a joint multivariate 
distribution function with univariate marginal distribution functions F1(x1), F2(x2),· · · , Fm(xm), then there 
exists a copula function C(u1, u2, · · · , um) such that 
F(x1, x2, · · · xm) = C(F1(x1), F2(x2), · · · , Fm(xm)). 
If each Fi is continuous then C is unique. Thus, copula functions provide a unifying and flexible way to 
study multivariate distributions. 
For simplicity's sake, we discuss only the properties of bivariate copula functions C(u, v, ρ) for uniform random variables U and V, defined over the area $\{(u, v) \mid 0 < u \le 1,\ 0 < v \le 1\}$, where ρ is a correlation
parameter. We call ρ simply a correlation parameter since it does not necessarily equal the usual correlation 
coefficient defined by Pearson, nor Spearman’s Rho, nor Kendall’s Tau. The bivariate copula function has 
the following properties: 
(i) Since U and V are positive random variables, C(0, v, ρ) = C(u, 0, ρ) = 0. 
(ii) Since U and V are bounded above by 1, the marginal distributions can be obtained by C(1, v, ρ) = v, 
C(u, 1, ρ) = u. 
(iii) For independent random variables U and V , C(u, v, ρ) = uv. 
Frechet [1951] showed there exist upper and lower bounds for a copula function 
max(0, u + v − 1) ≤ C(u, v) ≤ min(u, v). 
The multivariate extension of Frechet bounds is given by Dall’Aglio [1972]. 
5.2 Some Common Copula Functions 
We present a few copula functions commonly used in biostatistics and actuarial science. 
Frank Copula The Frank copula function is defined as
$$C(u, v) = \frac{1}{\alpha} \ln\!\left[1 + \frac{(e^{\alpha u} - 1)(e^{\alpha v} - 1)}{e^{\alpha} - 1}\right], \quad -\infty < \alpha < \infty.$$
Bivariate Normal
$$C(u, v) = \Phi_2(\Phi^{-1}(u), \Phi^{-1}(v), \rho), \quad -1 \le \rho \le 1, \qquad (10)$$
where $\Phi_2$ is the bivariate normal distribution function with correlation coefficient ρ, and $\Phi^{-1}$ is the inverse of a univariate normal distribution function. As we shall see later, this is the copula function used in CreditMetrics.
Bivariate Mixture Copula Function We can form new copula functions using existing copula functions. If the two uniform random variables u and v are independent, we have the copula function C(u, v) = uv. If the two random variables are perfectly correlated, we have the copula function C(u, v) = min(u, v). Mixing the two copula functions by a mixing coefficient ρ > 0, we obtain a new copula function as follows:
$$C(u, v) = (1 - \rho)uv + \rho \min(u, v), \quad \text{if } \rho > 0.$$
If ρ ≤ 0 we have
$$C(u, v) = (1 + \rho)uv - \rho\,(u - 1 + v)\,\Theta(u - 1 + v), \quad \text{if } \rho \le 0,$$
where
$$\Theta(x) = \begin{cases} 1, & \text{if } x \ge 0, \\ 0, & \text{if } x < 0. \end{cases}$$
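To make the three families concrete, here is a small Python sketch (using scipy for the normal quantile and bivariate normal distribution functions) that evaluates each copula at one point and checks it against the Frechet bounds; the parameter values are arbitrary:

import math
from scipy.stats import norm, multivariate_normal

def frank(u, v, alpha=2.0):
    # Frank copula (alpha != 0).
    return math.log1p(math.expm1(alpha * u) * math.expm1(alpha * v)
                      / math.expm1(alpha)) / alpha

def gaussian(u, v, rho=0.5):
    # Bivariate normal copula, equation (10).
    mvn = multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]])
    return mvn.cdf([norm.ppf(u), norm.ppf(v)])

def mixture(u, v, rho=0.5):
    # Bivariate mixture copula for rho >= 0.
    return (1.0 - rho) * u * v + rho * min(u, v)

u, v = 0.3, 0.8
for name, C in [("Frank", frank), ("normal", gaussian), ("mixture", mixture)]:
    c = C(u, v)
    # Every copula must respect the Frechet bounds.
    assert max(0.0, u + v - 1.0) - 1e-9 <= c <= min(u, v) + 1e-9
    print(f"{name:8s} C({u}, {v}) = {c:.4f}")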
5.3 Copula Function and Correlation Measurement 
To compare different copula functions, we need to have a correlation measurement independent of marginal 
distributions. The usual Pearson’s correlation coefficient, however, depends on the marginal distributions 
(See Lehmann [1966]). Both Spearman’s Rho and Kendall’s Tau can be defined using a copula function only 
as follows 
$$\rho_s = 12 \iint [C(u, v) - uv]\,du\,dv,$$
$$\tau = 4 \iint C(u, v)\,dC(u, v) - 1.$$
Comparisons between results using different copula functions should be based on either a common Spearman's Rho or a common Kendall's Tau.
Further examination of copula functions can be found in a survey paper by Frees and Valdez [1998] and a recent book by Nelsen [1999].
5.4 The Calibration of Default Correlation in Copula Function 
Having chosen a copula function, we need to compute the pairwise correlation of survival times. Using the 
CreditMetrics (Gupton et al. [1997]) asset correlation approach, we can obtain the default correlation of two 
discrete events over a one-year period. As it happens, CreditMetrics uses the normal copula function in its
default correlation formula even though it does not use the concept of copula function explicitly. 
First let us summarize how CreditMetrics calculates the joint default probability of two credits A and B. Suppose the one-year default probabilities for A and B are $q_A$ and $q_B$. CreditMetrics would use the following steps:
• Obtain $Z_A$ and $Z_B$ such that
$$q_A = \Pr[Z < Z_A], \qquad q_B = \Pr[Z < Z_B],$$
where Z is a standard normal random variable.
• If ρ is the asset correlation, the joint default probability for credits A and B is calculated as follows:
$$\Pr[Z < Z_A,\, Z < Z_B] = \int_{-\infty}^{Z_A}\!\!\int_{-\infty}^{Z_B} \phi_2(x, y \mid \rho)\,dx\,dy = \Phi_2(Z_A, Z_B, \rho), \qquad (11)$$
where $\phi_2(x, y \mid \rho)$ is the standard bivariate normal density function with a correlation coefficient ρ, and $\Phi_2$ is the bivariate cumulative normal distribution function.
If we use a bivariate normal copula function with a correlation parameter γ, and denote the survival times for A and B as $T_A$ and $T_B$, the joint default probability can be calculated as follows:
$$\Pr[T_A < 1,\, T_B < 1] = \Phi_2(\Phi^{-1}(F_A(1)),\, \Phi^{-1}(F_B(1)),\, \gamma), \qquad (12)$$
where $F_A$ and $F_B$ are the distribution functions for the survival times $T_A$ and $T_B$. If we notice that
$$q_i = \Pr[T_i < 1] = F_i(1) \quad \text{and} \quad Z_i = \Phi^{-1}(q_i) \quad \text{for } i = A, B,$$
then we see that equation (12) and equation (11) give the same joint default probability over a one-year period if ρ = γ.
We can conclude that CreditMetrics uses a bivariate normal copula function with the asset correlation as the 
correlation parameter in the copula function. Thus, to generate survival times of two credit risks, we use 
a bivariate normal copula function with correlation parameter equal to the CreditMetrics asset correlation. 
We note that this correlation parameter is not the correlation coefficient between the two survival times. The 
correlation coefficient between the survival times is much smaller than the asset correlation. Conveniently, 
the marginal distribution of any subset of an n dimensional normal distribution is still a normal distribution. 
Using asset correlations, we can construct high-dimensional normal copula functions to model a credit portfolio of any size.
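The equivalence is easy to check numerically. A small sketch (hypothetical inputs $q_A$, $q_B$ and asset correlation ρ) computes the joint one-year default probability with equation (11) and with the normal copula of equation (12); with γ = ρ the two agree by construction:

from scipy.stats import norm, multivariate_normal

q_A, q_B, rho = 0.05, 0.07, 0.30    # hypothetical default probs, asset corr.

# Equation (11): default thresholds, then the bivariate normal probability.
Z_A, Z_B = norm.ppf(q_A), norm.ppf(q_B)
mvn = multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]])
joint_cm = mvn.cdf([Z_A, Z_B])

# Equation (12): normal copula with gamma = rho and F_A(1) = q_A, F_B(1) = q_B.
joint_copula = mvn.cdf([norm.ppf(q_A), norm.ppf(q_B)])

assert abs(joint_cm - joint_copula) < 1e-12
print(f"joint one-year default probability = {joint_cm:.4%}")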
6 Numerical Illustrations 
This section gives some numerical examples to illustrate many of the points discussed above. Assume that 
we have two credit risks, A and B, which have flat spread curves of 300 bps and 500 bps over LIBOR. These 
spreads are usually given in the market as asset swap spreads. Using these spreads and a constant recovery 
assumption of 50% we build two credit curves for the two credit risks. For details, see Li [1998]. The two 
credit curves are given in Figures (2) and (3). These two curves will be used in the following numerical 
illustrations. 
6.1 Illustration 1. Default Correlation v.s. Length of Time Period 
In this example, we study the relationship between the discrete default correlation (1) and the survival time 
correlation (8). The survival time correlation is a much more general concept than the discrete default 
correlation defined for two discrete default events at an arbitrary period of time, such as one year. Knowing 
the former allows us to calculate the latter over any time interval in the future, but not vice versa. 
Using the two credit curves we can calculate all marginal default probabilities up to any time t in the future, i.e.
$${}_tq_0 = \Pr[\tau < t] = 1 - e^{-\int_0^t h(s)\,ds},$$
where h(s) is the instantaneous default probability given by a credit curve. If we have the marginal default probabilities ${}_tq_0^A$ and ${}_tq_0^B$ for both A and B, we can also obtain the joint probability of default over the time interval [0, t] by a copula function C(u, v),
$$\Pr[T_A < t,\, T_B < t] = C({}_tq_0^A,\, {}_tq_0^B).$$
Of course we need to specify a correlation parameter ρ in the copula function. We emphasize that knowing 
ρ would allow us to calculate the survival time correlation between TA and TB. 
We can now obtain the discrete default correlation coefficient ρt between the two discrete events that A and 
B default over the time interval [0, t] based on the formula (1). Intuitively, the discrete default correlation ρt 
should be an increasing function of t since the two underlying credits should have a higher tendency of joint 
default over longer periods. Using the bivariate normal copula function (10) and ρ = 0.1 as an example we 
obtain Figure (4). 
From this graph we see explicitly that the discrete default correlation over time interval [0, t] is a function 
of t . For example, this default correlation coefficient goes from 0.021 to 0.038 when t goes from six months 
to twelve months. The increase slows down as t becomes large. 
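The shape of this curve is straightforward to reproduce in outline. The sketch below uses flat hazard rates of 6% and 10% as rough stand-ins for credit curves A and B (approximately spread/(1 − recovery) for 300 bps and 500 bps with 50% recovery; an assumption, since the paper's curves are not flat), a normal copula with ρ = 0.1, and equation (1):

import math
from scipy.stats import norm, multivariate_normal

def discrete_default_corr(t, h_A=0.06, h_B=0.10, rho=0.1):
    # Marginal default probabilities over [0, t] under flat hazards.
    q_A = 1.0 - math.exp(-h_A * t)
    q_B = 1.0 - math.exp(-h_B * t)
    # Joint default probability over [0, t] via the normal copula.
    mvn = multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]])
    q_AB = mvn.cdf([norm.ppf(q_A), norm.ppf(q_B)])
    # Discrete default correlation, equation (1).
    return (q_AB - q_A * q_B) / math.sqrt(q_A * (1 - q_A) * q_B * (1 - q_B))

for t in (0.5, 1.0, 2.0, 5.0, 10.0):
    print(f"t = {t:4.1f}y  rho_t = {discrete_default_corr(t):.4f}")  # increasing in t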
6.2 Illustration 2. Default Correlation and Credit Swap Valuation 
The second example shows the impact of default correlation on credit swap pricing. Suppose that credit A 
is the credit swap seller and credit B is the underlying reference asset. If we buy a default swap of 3 years 
with a reference asset of credit B from a risk-free counterparty we should pay 500 bps since holding the 
underlying asset and having a long position on the credit swap would create a riskless portfolio. But if we 
buy the default swap from a risky counterparty how much we should pay depends on the credit quality of the 
counterparty and the default correlation between the underlying reference asset and the counterparty. 
Knowing only the discrete default correlation over one year we cannot value any credit swaps with a maturity 
longer than one year. Figure (5) shows the impact of asset correlation (or implicitly default correlation) on the 
credit swap premium. From the graph we see that the annualized premium decreases as the asset correlation 
between the counterparty and the underlying reference asset increases. Even at zero default correlation the 
credit swap has a value less than 500 bps since the counterparty is risky. 
6.3 Illustration 3. Default Correlation and First-to-Default Valuation 
The third example shows how to value a first-to-default contract. We assume we have a portfolio of n credits. 
Let us assume that for each credit i in the portfolio we have constructed a credit curve or a hazard rate function 
for its survival time Ti . The distribution function of Ti is Fi (t). Using a copula function C we also obtain 
the joint distribution of the survival times as follows 
F(t1, t2, · · · , tn) = C(F1(t1), F2(t2), · · · , Fn(tn)). 
If we use the normal copula function we have
$$F(t_1, t_2, \cdots, t_n) = \Phi_n(\Phi^{-1}(F_1(t_1)),\, \Phi^{-1}(F_2(t_2)),\, \cdots,\, \Phi^{-1}(F_n(t_n))),$$
where $\Phi_n$ is the n-dimensional normal cumulative distribution function with correlation coefficient matrix Σ.
To simulate correlated survival times we introduce another series of random variables Y1, Y2, · · · Yn, such 
that 
$$Y_1 = \Phi^{-1}(F_1(T_1)), \quad Y_2 = \Phi^{-1}(F_2(T_2)), \quad \cdots, \quad Y_n = \Phi^{-1}(F_n(T_n)). \qquad (13)$$
Then there is a one-to-one mapping between Y and T. Simulating $\{T_i \mid i = 1, 2, \ldots, n\}$ is equivalent to simulating $\{Y_i \mid i = 1, 2, \ldots, n\}$. As shown in the previous section, the correlation between the $Y_i$'s is the asset correlation of the underlying credits. Using CreditManager from the RiskMetrics Group we can obtain the asset correlation matrix Σ. We have the following simulation scheme:
• Simulate $Y_1, Y_2, \cdots, Y_n$ from an n-dimensional normal distribution with correlation coefficient matrix Σ.
• Obtain $T_1, T_2, \cdots, T_n$ using $T_i = F_i^{-1}(\Phi(Y_i))$, $i = 1, 2, \cdots, n$.
With each simulation run we generate the survival times for all the credits in the portfolio. With this 
information we can value any credit derivative structure written on the portfolio. We use a simple structure 
for illustration. The contract is a two-year transaction which pays one dollar if the first default occurs during 
the first two years. 
We assume each credit has a constant hazard rate of h = 0.1 for 0 < t < +∞. From equation (7) we know the density function for the survival time is $f(t) = h\,e^{-ht}$. This shows that the survival time is exponentially distributed with mean 1/h. We also assume that every pair of credits in the portfolio has a constant asset correlation σ.²
² To have a positive definite correlation matrix, the constant correlation coefficient has to satisfy the condition $\sigma > -\frac{1}{n-1}$.
Suppose we have a constant interest rate r = 0.1. If all the credits in the portfolio are independent, the hazard rate of the minimum survival time $T = \min(T_1, T_2, \cdots, T_n)$ is easily shown to be
$$h_T = h_1 + h_2 + \cdots + h_n = nh.$$
If T < 2, the present value of the contract is $1 \cdot e^{-r \cdot T}$. The survival time for the first-to-default has a density function $f(t) = h_T \cdot e^{-h_T t}$, so the value of the contract is given by
$$V = \int_0^2 1 \cdot e^{-rt} f(t)\,dt = \int_0^2 1 \cdot e^{-rt}\, h_T\, e^{-h_T t}\,dt = \frac{h_T}{r + h_T}\left(1 - e^{-2.0\,(r + h_T)}\right). \qquad (14)$$
In the general case we use the Monte Carlo simulation approach and the normal copula function to obtain 
the distribution of T . For each simulation run we have one scenario of default times t1, t2, · · · tn, from which 
we have the first-to-default time simply as t = min(t1, t2, · · · tn). 
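A compact Monte Carlo sketch of this valuation, following the two-step simulation scheme above (exponential marginals with h = 0.1, constant pairwise asset correlation σ; with σ = 0 it should land near the closed-form value of equation (14)):

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n, h, r, sigma, n_sims = 5, 0.1, 0.1, 0.0, 50_000

# Constant pairwise correlation matrix and its Cholesky factor.
corr = sigma * np.ones((n, n)) + (1.0 - sigma) * np.eye(n)
L = np.linalg.cholesky(corr)

# Step 1: Y ~ N(0, corr). Step 2: T_i = F_i^{-1}(Phi(Y_i)), equation (13).
Y = rng.standard_normal((n_sims, n)) @ L.T
U = norm.cdf(Y)
T = -np.log(1.0 - U) / h            # inverse CDF of the exponential marginal

first = T.min(axis=1)               # first-to-default time in each scenario
value = np.where(first < 2.0, np.exp(-r * first), 0.0).mean()

h_T = n * h                         # closed form for sigma = 0, equation (14)
print(f"simulated: {value:.4f}  closed form: {h_T/(r+h_T)*(1-np.exp(-2*(r+h_T))):.4f}")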
Let us examine the impact of the asset correlation on the value of the first-to-default contract on 5 assets. If σ = 0, the expected payoff function, based on equation (14), should give a value of 0.5823. Our simulation of 50,000 runs gives a value of 0.5830. If all 5 assets are perfectly correlated, then the first-to-default of 5 assets should be the same as the first-to-default of 1 asset, since any one default induces all others to default. In this case the contract should be worth 0.1648. Our simulation of 50,000 runs produces a result of 0.1638. Figure (6) shows the relationship between the value of the contract and the constant asset correlation coefficient. We see that the value of the contract decreases as the correlation increases. We also examine the impact of correlation on the value of the first-to-default of 20 assets in Figure (6). As expected, the first-to-default of 5 assets has the same value as the first-to-default of 20 assets when the asset correlation approaches 1.
7 Conclusion 
This paper introduces a few standard techniques used in survival analysis to study the problem of default correlation. We first introduce a random variable called “the time-until-default” to characterize the default.
Then the default correlation between two credit risks is defined as the correlation coefficient between their 
survival times. In practice we usually use market spread information to derive the distribution of survival 
times. When it comes to credit portfolio studies we need to specify a joint distribution with given marginal 
distributions. The problem cannot be solved uniquely. The copula function approach provides one way of 
specifying a joint distribution with known marginals. The concept of copula functions, their basic properties 
and some commonly used copula functions are introduced. The calibration of the correlation parameter used 
in copula functions against some popular credit models is also studied. We have shown that CreditMetrics 
essentially uses the normal copula function in its default correlation formula even though CreditMetrics does 
not use the concept of copula functions explicitly. Finally we show some numerical examples to illustrate the 
use of copula functions in the valuation of credit derivatives, such as credit default swaps and first-to-default 
contracts. 
References 
[1] Bowers, N. L., Jr., Gerber, H. U., Hickman, J. C., Jones, D. A., and Nesbitt, C. J., Actuarial Mathematics, 2nd Edition, Schaumburg, Illinois: Society of Actuaries, (1997).
[2] Carty, L. and Lieberman, D., Historical Default Rates of Corporate Bond Issuers, 1920-1996, Moody's Investors Service, January (1997).
[3] Cox, D. R. and Oakes, D. Analysis of Survival Data, Chapman and Hall, (1984). 
[4] Dall’Aglio, G., Frechet Classes and Compatibility of Distribution Functions, Symp. Math., 9, (1972), 
pp. 131-150. 
[5] Delianedis, G. and R. Geske, Credit Risk and Risk Neutral Default Probabilities: Information about Rating Migrations and Defaults, Working paper, The Anderson School at UCLA, (1998).
[6] Duffie, D. and Singleton, K., Modeling Term Structure of Defaultable Bonds, Working paper, Graduate School of Business, Stanford University, (1997).
[7] Frechet, M. Sur les Tableaux de Correlation dont les Marges sont Donnees, Ann. Univ. Lyon, Sect. A 9, 
(1951), pp. 53-77. 
[8] Frees, E. W. and Valdez, E., Understanding Relationships Using Copulas, North American Actuarial Journal, Vol. 2, No. 1, (1998), pp. 1-25.
[9] Gupton, G. M., Finger, C. C., and Bhatia, M., CreditMetrics – Technical Document, New York: Morgan Guaranty Trust Co., (1997).
[10] Lehmann, E. L. Some Concepts of Dependence, Annals of Mathematical Statistics, 37, (1966), pp. 
1137-1153. 
[11] Li, D. X., 1998, Constructing a credit curve, Credit Risk, A RISK Special report, (November 1998), pp. 
40-44. 
[12] Litterman, R. and Iben, T., Corporate Bond Valuation and the Term Structure of Credit Spreads, Financial Analysts Journal, (1991), pp. 52-64.
[13] Lucas, D. Default Correlation and Credit Analysis, Journal of Fixed Income, Vol. 11, (March 1995), 
pp. 76-87. 
[14] Merton, R. C., On the Pricing of Corporate Debt: The Risk Structure of Interest Rates, Journal of Finance, 29, (1974), pp. 449-470.
[15] Nelsen, R., An Introduction to Copulas, Springer-Verlag New York, Inc., (1999).
[16] Sklar, A., Random Variables, Joint Distribution Functions and Copulas, Kybernetika 9, (1973), pp. 
449-460. 
Figure 1: Hazard Rate Function of B Grade Based on Moody's Study (1997). [Figure: hazard rate (0.060 to 0.075) versus years (1 to 6).]
Figure 2: Credit Curve A. [Figure: Credit Curve A: Instantaneous Default Probability (Spread = 300 bps, Recovery Rate = 50%); hazard rate (0.045 to 0.075) versus date (09/10/1998 to 09/10/2010).]
Figure 3: Credit Curve B. [Figure: Credit Curve B: Instantaneous Default Probability (Spread = 500 bps, Recovery Rate = 50%); hazard rate (0.08 to 0.12) versus date (09/10/1998 to 09/10/2010).]
Figure 4: The Discrete Default Correlation vs. the Length of Time Interval. [Figure: discrete default correlation (0.15 to 0.30) versus length of period (1 to 9 years).]
Figure 5: Impact of Asset Correlation on the Value of Credit Swap. [Figure: default swap premium (400 to 500 bps) versus asset correlation (-1.0 to 1.0).]
Figure 6: The Value of First-to-Default vs. Asset Correlation. [Figure: first-to-default premium (0.2 to 1.0) versus asset correlation (0.1 to 0.7), for 5-asset and 20-asset portfolios.]
 

Selection of Research Material relating to RiskMetrics Group CDO Manager

  • 1. Selection of Research Material relating to RiskMetrics Group CDO Manager E: europe@riskmetrics.com W: www.riskmetrics.com T: 020 7842 0260 F: 020 7842 0269
  • 2. CONTENTS 1. Introductory Technical Note on the CDO Manager Software. 2. A comparison of stochastic default rate models, Christopher C. Finger. RiskMetrics Group Working Paper Number 00-02 3. On Default Correlation: A Copula Function Approach, David X. Li. RiskMetrics Group Working Paper Number 99-07 4. The Valuation of the ith-to-Default Basket Credit Derivatives, David X. Li. RiskMetrics Group Working Paper. 5. Worst Loss Analysis of BISTRO Reference Portfolio, Toru Tanaka, Sheikh Pancham, Tamunoye Alazigha, Fuji Bank. RiskMetrics Group CreditMetrics Monitor April 1999. 6. The Valuation of Basket Credit Derivatives, David X. Li. RiskMetrics Group CreditMetrics Monitor April 1999. 7. Conditional Approaches for CreditMetrics Portfolio Distributions, Christopher C. Finger. RiskMetrics Group CreditMetrics Monitor April 1999.
  • 3. Product Technical Note CDO Model - Key Features , ,QWURGXFWLRQ ,, ,VVXHVLQPRGHOOLQJ'2VWUXFWXUHV ,,, /LPLWDWLRQVRI3UHVHQW$SSURDFKHV ,9 (QKDQFHGUHGLW0HWULFVÔEDVHG0HWKRGRORJ 9 '20RGHO)ORZFKDUW 9, 6DPSOH5HVXOWV Page 1 , ,QWURGXFWLRQ 7KH '2 PRGHO DOORZV RX WR DQDOVH FDVK IORZ '2·V $ FRPSUHKHQVLYH 0RQWH DUOR IUDPHZRUN GLIIHUV IURP H[LVWLQJ FDVK IORZ PRGHOV XVHG E PDQ VWUXFWXUHUV DQG UDWLQJ DJHQFLHV LQ WKDW ZH JHQHUDWH D PRUH FRPSOHWH VHW RI VFHQDULRV LQVWHDG RI MXVW OLPLWHG VWUHVV WHVWLQJ VFHQDULRV VHW E XVHUV 7KH PRGHO LV PXOWLSHULRG ZKHUH WKH GQDPLFV RI FROODWHUDOLVDWLRQ DQG DVVHW WHVWV DUH FDSWXUHG LQ WKLV IUDPHZRUN ,QVWHDG RI DSSUR[LPDWLQJ D FRUUHODWHG KHWHURJHQHRXV SRUWIROLR E DQ LQGHSHQGHQW KRPRJHQHRXV SRUWIROLR DV XVHG E VRPHDJHQFLHVWKHPRGHOWDNHLQWRFRQVLGHUDWLRQDOOFROODWHUDODVVHWV ,, ,VVXHVLQPRGHOOLQJ'2VWUXFWXUHV · 6WUXFWXUHVYDUGLIILFXOWWRPRGHODOOYDULDWLRQV · 1ROLTXLGVHFRQGDUPDUNHW · XUUHQWHYDOXDWLRQEDVHGRQDJHQFUDWLQJ o DWRULJLQDWLRQSOXV o UDWLQJGRZQJUDGHVXUSULVHV · 7KHIHZDYDLODEOHWRROVODFNDFRQVLVWHQWSRUWIROLREDVHGFUHGLWPHWKRGRORJ · 1HHG IRU DQ LQGHSHQGHQW ULVN DVVHVVPHQW WRRO ZLWK ZHOOGHILQHG FUHGLW PHWKRGRORJ ZKLFK LQFRUSRUDWHVDVWRFKDVWLFSURFHVV ,,, /LPLWDWLRQVRI3UHVHQW$SSURDFKHV · 0RVWDVVXPHFRQVWDQWJOREDODQQXDOGHIDXOWUDWHIRUDVVHWVLQFROODWHUDOSRRO(JRIDVVHWV GHIDXOWLQVWHDUQGHDUDQGVRRQ o 6LPSOHEDVHFDVHEXWQRWUHDOLVWLFDVWLPLQJRIORVVHVLQ'2LVH[WUHPHOLPSRUWDQW o 7UHDWVFROODWHUDODVDKRPRJHQHRXVSRRORIDVVHWV,QUHDOLWZHPDKDYHLVVXHV YDULQJ IHDWXUHV DQG FRPSOH[ FRUUHODWLRQ VWUXFWXUHV ZKLFK DUH VWURQJO QRQ KRPRJHQHRXV
  • 4. Product Technical Note /LPLWDWLRQVRI3UHVHQW$SSURDFKHVFRQWLQXHG«
at each coupon period and over the life of the deal. Performance is thus path-dependent on the timing and severity of losses.
·  Can model and analyse cash flows with the assumption of no manager intervention. Provides an expected and worst case against which to assess managers' performance.
·  Combining the CreditMetrics approach with copula functions allows us to model default over multiple periods. This extends the one-period framework used in CreditMetrics.
Product Technical Note
Page 3

V CDO Model Flowchart
[Flowchart figure.]

VI Sample Results
Sample results for worst-case scenarios from simulations on the senior tranche of a generic structure:
[Four charts: Yield vs Collateral Loss (yields 4.73%-4.74% over collateral losses of 0-3 thousand); Duration vs Collateral Loss (durations 2.86-2.96); Yield vs Average Life (average lives 3.08-3.20); Average Life vs Collateral Loss.]
The RiskMetrics Group
Working Paper Number 00-02

A comparison of stochastic default rate models

Christopher C. Finger

This draft: August 2000
First draft: July 2000

44 Wall St.
New York, NY 10005
chris.finger@riskmetrics.com
www.riskmetrics.com
A comparison of stochastic default rate models

Christopher C. Finger

August 2000

Abstract

For single horizon models of defaults in a portfolio, the effect of model and distribution choice on the model results is well understood. Collateralized Debt Obligations in particular have sparked interest in default models over multiple horizons. For these, however, there has been little research, and there is little understanding of the impact of various model assumptions. In this article, we investigate four approaches to multiple horizon modeling of defaults in a portfolio. We calibrate the four models to the same set of input data (average defaults and a single period correlation parameter), and examine the resulting default distributions. The differences we observe can be attributed to the model structures, and to some extent, to the choice of distributions that drive the models. Our results show a significant disparity. In the single period case, studies have concluded that when calibrated to the same first and second order information, the various models do not produce vastly different conclusions. Here, the issue of model choice is much more important, and any analysis of structures over multiple horizons should bear this in mind.

Keywords: Credit risk, default rate, collateralized debt obligations
1 Introduction

In recent years, models of defaults in a portfolio context have been well studied. Three separate approaches (CreditMetrics, CreditRisk+, and CreditPortfolioView¹) were made public in 1997. Subsequently, researchers² have examined the mathematical structure of the various models. Each of these studies has revealed that it is possible to calibrate the models to each other and that the differences between the models lie in subtle choices of the driving distributions and in the data sources one would naturally use to feed the models.

Common to all of these models, and to the subsequent examinations thereof, is the fact that the models describe only a single period. In other words, the models describe, for a specific risk horizon, whether each asset of interest defaults within the horizon. The timing of defaults within the risk horizon is not considered, nor is the possibility of defaults beyond the horizon. This is not a flaw of the current models, but rather an indication of their genesis as approaches to risk management and capital allocation for a fixed portfolio.

Not entirely by chance, the development of portfolio models for credit risk management has coincided with an explosion in issuance of Collateralized Debt Obligations (CDO's). The performance of a CDO structure depends on the default behavior of a pool of assets. Significantly, the dependence is not just on whether the assets default over the life of the structure, but also on when the defaults occur. Thus, while an application of the existing models can give a cursory view of the structure (by describing, for instance, the distribution of the number of assets that will default over the structure's life), a more rigorous analysis requires a model of the timing of defaults.

In this paper, we will survey a number of extensions of the standard single-period models that allow for a treatment of default timing over longer horizons. We will examine two extensions of the CreditMetrics approach, one that models only defaults over time and a second that effectively accounts

¹ See Wilson (1997).
² See Finger (1998), Gordy (2000), and Kolyoglu and Hickman (1998).
for rating migrations. In addition, we will examine the copula function approach introduced by Li (1999 and 2000), as well as a simple version of the stochastic intensity model applied by Duffie and Garleanu (1998).

We will seek to investigate the differences in the four approaches that arise from model – rather than data – differences. Thus, we will suppose that we begin with satisfactory estimates of expected default rates over time, and of the correlation of default events over one period. Higher order information, such as the correlation of defaults in subsequent periods or the joint behavior of three or more assets, will be driven by the structure of the models. The analysis of the models will then illuminate the range of results that can arise given the same initial data. Nagpal and Bahar (1999) adopt a similar approach in the single horizon context, investigating the range of possible full distributions that can be calibrated to first and second order default statistics.

In the following section, we present terminology and notation to be used throughout. We proceed to detail the four models. Finally, we present two comparison exercises: in the first, we use closed form results to analyze default rate volatilities and conditional default probabilities, while in the second, we implement Monte Carlo simulations in order to investigate the full distribution of realized default rates.

2 Notation and terminology

In order to compare the properties of the four models, we will consider a large homogeneous pool of assets. By homogeneous, we mean that each asset has the same probability of default (first order statistics) at every time we consider; further, each pair of assets has the same joint probability of default (second order statistics) at every time.

To describe the first order statistics of the pool, we specify the cumulative default probability q_k – the probability that a given asset defaults in the next k years – for k = 1, 2, ..., T, where T is the maximum horizon we consider. Equivalently, we may specify the marginal default probability
p_k – the probability that a given asset defaults in year k. Clearly, cumulative and marginal default probabilities are related through

q_k = q_{k-1} + p_k,   for k = 2, ..., T.   (1)

It is important to distinguish a third equivalent specification, that of conditional default probabilities. The conditional default probability in year k is defined as the conditional probability that an asset defaults in year k, given that the asset has survived (that is, has not defaulted) in the first k−1 years. This probability is given by p_k/(1 − q_{k-1}).

Finally, to describe the second order statistics of the pool, we specify the joint cumulative default probability q_{j,k} – the probability that for a given pair of assets, the first asset defaults sometime in the first j years and the second defaults sometime in the first k years – or equivalently, the joint marginal default probability p_{j,k} – the probability that the first asset defaults in year j and the second defaults in year k. These two notions are related through

q_{j,k} = q_{j-1,k-1} + Σ_{i=1}^{j-1} p_{i,k} + Σ_{i=1}^{k-1} p_{j,i} + p_{j,k},   for j, k = 2, ..., T.   (2)

In practice, it is possible to obtain first order statistics for relatively long horizons, either by observing market prices of risky debt and calibrating cumulative default probabilities as in Duffie and Singleton (1999), or by taking historical cumulative default experience from a study such as Keenan et al (2000) or Standard & Poor's (2000). Less information is available for second order statistics, however, and therefore we will assume that we can obtain the joint default probability for the first year (p_{1,1})³, but not any of the joint default probabilities for subsequent years. Thus, our exercise will be to calibrate each of the four models to fixed values of q_1, q_2, ..., q_T and p_{1,1}, and then to compare the higher order statistics implied by the models. The model comparison can be a simple task of comparing values of p_{1,2}, p_{2,2}, q_{2,2}, and so on.

³ This is a reasonable supposition, since all of the single period models mentioned previously essentially require p_{1,1} as an input.
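These identities are mechanical to implement. As a minimal sketch (our helper names, not the paper's), the following converts a cumulative default curve into marginal and conditional probabilities per (1) and the surrounding definitions, using the speculative grade curve that appears later in Table 2:

```python
import numpy as np

def marginal_from_cumulative(q):
    """p_k = q_k - q_{k-1}, with q_0 = 0, per equation (1)."""
    return np.diff(np.asarray(q, dtype=float), prepend=0.0)

def conditional_from_cumulative(q):
    """Conditional default probability p_k / (1 - q_{k-1})."""
    q = np.asarray(q, dtype=float)
    p = np.diff(q, prepend=0.0)
    q_prev = np.concatenate(([0.0], q[:-1]))
    return p / (1.0 - q_prev)

# Moody's speculative grade cumulative curve (Table 2).
q = [0.0335, 0.0676, 0.0998, 0.1289, 0.1557, 0.1791]
print(marginal_from_cumulative(q).round(4))    # [0.0335 0.0341 0.0322 0.0291 0.0268 0.0234]
print(conditional_from_cumulative(q).round(4))
```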
However, to make the comparisons a bit more tangible, we will consider the distributions of realized default rates. The term default rate is often used loosely in the literature, without a clear notion of whether default rate is synonymous with default probability, or rather is itself a random variable. To be clear, in this article, default rate is a random variable equal to the proportion of assets in a portfolio that default. For instance, if the random variable X_i^(k) is equal to one if the ith asset defaults in year k, then the year k default rate is equal to

(1/n) Σ_{i=1}^{n} X_i^(k).   (3)

For our homogeneous portfolio, the mean year k default rate is simply p_k, the marginal default probability for year k. Furthermore, the standard deviation of the year k default rate (which we will refer to as the year k default rate volatility) is

√( p_{k,k} − p_k² + (p_k − p_{k,k})/n ).   (4)

Of interest to us is the large portfolio limit (that is, n → ∞) of this quantity, normalized by the default probability. We will refer to this as the normalized year k default volatility, which is given by

√( p_{k,k} − p_k² ) / p_k.   (5)

Additionally, we will examine the normalized cumulative year k default volatility, which is defined similarly to the above, with the exception that the default rate is computed over the first k years rather than year k only. The normalized cumulative default volatility is given by

√( q_{k,k} − q_k² ) / q_k.   (6)

Finally, we will use Φ to denote the standard normal cumulative distribution function. In the bivariate setting, we will use Φ_2(z_1, z_2; ρ) to indicate the probability that Z_1 ≤ z_1 and Z_2 ≤ z_2, where Z_1 and Z_2 are standard normal random variables with correlation ρ.

In the following four sections, we describe the models to be considered, and discuss in detail the calibration to our initial data.
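The volatility measures are just as direct. A small helper for (5), exercised on the high correlation speculative grade inputs that appear later in Table 1 (passing cumulative quantities gives the cumulative version (6)):

```python
import numpy as np

def normalized_default_vol(p_k, p_kk):
    """Large-portfolio normalized default volatility sqrt(p_kk - p_k^2)/p_k, eq (5).
    Passing cumulative quantities (q_k, q_kk) instead gives the cumulative version (6)."""
    return np.sqrt(p_kk - p_k**2) / p_k

# One-year inputs p_1 = 3.35%, p_{1,1} = 0.5190% (cf. Table 1): about 1.9.
print(normalized_default_vol(0.0335, 0.005190))
```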
3 Discrete CreditMetrics extension

In its simplest form, the single period CreditMetrics model, calibrated for our homogeneous portfolio, can be stated as follows:

(i) Define a default threshold θ such that Φ(θ) = p_1.

(ii) To each asset i, assign a standard normal random variable Z^(i), where the correlation between distinct Z^(i) and Z^(j) is equal to ρ, such that

Φ_2(θ, θ; ρ) = p_{1,1}.   (7)

(iii) Asset i defaults in year 1 if Z^(i) ≤ θ.

The simplest extension of this model to multiple horizons is to simply repeat the one period model. We then have default thresholds θ_1, θ_2, ..., θ_T corresponding to each period. For the first period, we assign standard normal random variables Z_1^(i) to each asset as above, and asset i defaults in the first period if Z_1^(i) ≤ θ_1. For assets that survive the first period, we assign a second set of standard normal random variables Z_2^(i), such that the correlation between distinct Z_2^(i) and Z_2^(j) is ρ but the variables from one period to the next are independent. Asset i then defaults in the second period if Z_1^(i) > θ_1 (it survives the first period) and Z_2^(i) ≤ θ_2. The extension to subsequent periods should be clear. In the end, the model is specified by the default thresholds θ_1, θ_2, ..., θ_T and the correlation parameter ρ.

To calibrate this model to our cumulative default probabilities q_1, q_2, ..., q_T and joint default probability, we begin by setting the first period default threshold:

θ_1 = Φ^{-1}(q_1).   (8)

For subsequent periods, we set θ_k such that the probability that Z_k^(i) ≤ θ_k is equal to the conditional default probability for period k:

θ_k = Φ^{-1}( (q_k − q_{k-1}) / (1 − q_{k-1}) ).   (9)
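Equations (8) and (9) pin down the thresholds; the remaining parameter ρ is backed out of (7), as the text notes next. A minimal sketch of that procedure (our code, not RiskMetrics'; the scipy-based root search is our choice of method):

```python
import numpy as np
from scipy.stats import norm, multivariate_normal
from scipy.optimize import brentq

def default_thresholds(q):
    """theta_1 = Phi^{-1}(q_1) per (8); theta_k per (9) from conditional probabilities."""
    q = np.asarray(q, dtype=float)
    cond = np.diff(q, prepend=0.0) / (1.0 - np.concatenate(([0.0], q[:-1])))
    return norm.ppf(cond)

def calibrate_rho(p1, p11):
    """Solve Phi_2(theta, theta; rho) = p11 for rho, per equation (7)."""
    theta = norm.ppf(p1)
    def gap(rho):
        mvn = multivariate_normal(mean=[0.0, 0.0], cov=[[1.0, rho], [rho, 1.0]])
        return mvn.cdf([theta, theta]) - p11
    return brentq(gap, -0.99, 0.99)

# Speculative grade inputs from Table 1: recovers rho close to 0.40.
print(calibrate_rho(0.0335, 0.005190))
```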
We complete the calibration by choosing ρ to satisfy (7), with θ replaced by θ_1.

The joint default probabilities and default volatilities are easily obtained in this context. For instance, the marginal year two joint default probability is given by (for distinct i and j):

p_{2,2} = P{ Z_1^(i) > θ_1, Z_1^(j) > θ_1, Z_2^(i) ≤ θ_2, Z_2^(j) ≤ θ_2 }
        = P{ Z_1^(i) > θ_1, Z_1^(j) > θ_1 } · P{ Z_2^(i) ≤ θ_2, Z_2^(j) ≤ θ_2 }
        = (1 − 2p_1 + p_{1,1}) · Φ_2(θ_2, θ_2; ρ).   (10)

Similarly, the probability that asset i defaults in the first period, and asset j in the second period is

p_{1,2} = P{ Z_1^(i) ≤ θ_1, Z_1^(j) > θ_1, Z_2^(j) ≤ θ_2 }
        = (p_1 − p_{1,1}) · (q_2 − p_1)/(1 − p_1).   (11)

It is then possible to obtain q_{2,2} using (2) and the default volatilities using (5) and (6).

4 Diffusion-driven CreditMetrics extension

By construction, the discrete CreditMetrics extension above does not allow for any correlation of default rates through time. For instance, if a high default rate is realized in the first period, this has no bearing on the default rate in the second period, since the default drivers for the second period (the Z_2^(i) above) are independent of the default drivers for the first. Intuitively, we would not expect this behavior from the market. If a high default rate occurs in one period, then it is likely that those obligors that did not default would have generally decreased in credit quality. The impact would then be that the default rate for the second period would also have a tendency to be high.

In order to capture this behavior, we introduce a CreditMetrics extension where defaults in consecutive periods are not driven by independent random variables, but rather by a single diffusion process. Our diffusion-driven CreditMetrics extension is described by:

(i) Define default thresholds θ_1, θ_2, ..., θ_T for each period.
(ii) To each obligor, assign a standard Wiener process W^(i), with W_0^(i) = 0, where the instantaneous correlation between distinct W^(i) and W^(j) is ρ.⁴

(iii) Obligor i defaults in the first year if W_1^(i) ≤ θ_1.

(iv) For k > 1, obligor i defaults in year k if it survives the first k − 1 years (that is, W_1^(i) > θ_1, ..., W_{k-1}^(i) > θ_{k-1}) and W_k^(i) ≤ θ_k.

Note that this approach allows for the behavior mentioned above. If the default rate is high in the first year, this is because many of the Wiener processes have fallen below the threshold θ_1. The Wiener processes for non-defaulting obligors will have generally trended downward as well, since all of the Wiener processes are correlated. This implies a greater likelihood of a high number of defaults in the second year. In effect, then, this approach introduces a notion of credit migration. Cases where the Wiener process trends downward but does not cross the default threshold can be thought of as downgrades, while cases where the process trends upward are essentially upgrades.

To calibrate the first threshold θ_1, we observe that

P{ W_1^(i) ≤ θ_1 } = Φ(θ_1),   (12)

and thus that θ_1 is given by (8). For the second threshold, we require that the probability that an obligor defaults in year two is equal to p_2:

P{ W_1^(i) > θ_1, W_2^(i) ≤ θ_2 } = p_2.   (13)

Since W^(i) is a Wiener process, we know that the standard deviation of W_t^(i) is √t and that for s ≤ t, the correlation between W_s^(i) and W_t^(i) is √(s/t). Thus, given θ_1, we find the value of θ_2 that satisfies

Φ(θ_2/√2) − Φ_2(θ_1, θ_2/√2; √(1/2)) = p_2.   (14)

⁴ Technically, the cross variation process for W^(i) and W^(j) is ρ dt.
For the kth period, given θ_1, ..., θ_{k-1}, we calibrate θ_k by solving

P{ W_1^(i) > θ_1, ..., W_{k-1}^(i) > θ_{k-1}, W_k^(i) ≤ θ_k } = p_k,   (15)

again utilizing the properties of the Wiener process W^(i) to compute the probability on the left hand side. We complete the calibration by finding ρ such that the year one joint default probability is p_{1,1}:

P{ W_1^(i) ≤ θ_1, W_1^(j) ≤ θ_1 } = p_{1,1}.   (16)

Since W_1^(i) and W_1^(j) each follow a standard normal distribution, and have a correlation of ρ, the solution for ρ here is identical to that of the previous section.

With the calibration complete, it is a simple task to compute the joint default probabilities. For instance, the joint year two default probability is given by

p_{2,2} = P{ W_1^(i) > θ_1, W_1^(j) > θ_1, W_2^(i) ≤ θ_2, W_2^(j) ≤ θ_2 },   (17)

where we use the fact that {W_1^(i), W_1^(j), W_2^(i), W_2^(j)} follow a multivariate normal distribution with covariance

Cov{W_1^(i), W_1^(j), W_2^(i), W_2^(j)} =
( 1    ρ    1    ρ  )
( ρ    1    ρ    1  )
( 1    ρ    2    2ρ )
( ρ    1    2ρ   2  ).   (18)
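A minimal simulation sketch of this extension (our code, with placeholder parameters; the thresholds are assumed to be pre-calibrated via (12)-(15), and the equicorrelated Wiener increments are built from a single common factor):

```python
import numpy as np

def simulate_diffusion_cm(theta, rho, n_assets=100, n_trials=1000, seed=0):
    """Default year per asset (0 = survived). Per the diffusion-driven extension,
    asset i defaults in year k if W_1 > theta_1, ..., W_{k-1} > theta_{k-1}
    and W_k <= theta_k; theta holds thresholds in the unnormalized units of W_k."""
    rng = np.random.default_rng(seed)
    T = len(theta)
    default_year = np.zeros((n_trials, n_assets), dtype=int)
    for t in range(n_trials):
        W = np.zeros(n_assets)
        alive = np.ones(n_assets, dtype=bool)
        for k in range(1, T + 1):
            # Annual Wiener increments with pairwise correlation rho.
            common = rng.standard_normal()
            W = W + np.sqrt(rho) * common + np.sqrt(1.0 - rho) * rng.standard_normal(n_assets)
            hit = alive & (W <= theta[k - 1])
            default_year[t, hit] = k
            alive &= ~hit
    return default_year
```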
5 Copula functions

A drawback of both the CreditMetrics extensions above is that in a Monte Carlo setting, they require a stepwise simulation approach. In other words, we must simulate the pool of assets over the first year, tabulate the ones that default, then simulate the remaining assets over the second year, and so on. Li (1999 and 2000) introduces an approach wherein it is possible to simulate the default times directly, thus avoiding the need to simulate each period individually. The normal copula function approach is as follows:

(i) Specify the cumulative default time distribution F, such that F(t) gives the probability that a given asset defaults prior to time t.

(ii) Assign a standard normal random variable Z^(i) to each asset, where the correlation between distinct Z^(i) and Z^(j) is ρ.

(iii) Obtain the default time τ_i for asset i through

τ_i = F^{-1}(Φ(Z^(i))).   (19)

Since we are concerned here only with the year in which an asset defaults, and not the precise timing within the year, we will consider a discrete version of the copula approach:

(i) Specify the cumulative default probabilities q_1, q_2, ..., q_T as in Section 2.

(ii) For k = 1, ..., T compute the thresholds θ_k = Φ^{-1}(q_k). Clearly, θ_1 ≤ θ_2 ≤ ... ≤ θ_T. Define θ_0 = −∞.

(iii) Assign Z^(i) to each asset as above.

(iv) Asset i defaults in year k if θ_{k-1} < Z^(i) ≤ θ_k.

The calibration to the cumulative default probabilities is already given. Further, it is easy to observe⁵ that the correlation parameter ρ is calibrated exactly as in the previous two sections. The joint default probabilities are perhaps simplest to obtain for this approach. For example, the joint cumulative default probability q_{k,l} is given by

q_{k,l} = P{ Z^(i) ≤ θ_k, Z^(j) ≤ θ_l } = Φ_2(θ_k, θ_l; ρ).   (20)

⁵ Details are presented in Li (1999) and Li (2000).
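Because the copula version needs no stepwise simulation, the default year can be read off in one pass. A minimal sketch (our code; placeholder parameters), again using a one-factor construction for the equicorrelated normals:

```python
import numpy as np
from scipy.stats import norm

def simulate_copula(q, rho, n_assets=100, n_trials=1000, seed=0):
    """Default year per asset (0 = survived) under the normal copula:
    asset i defaults in year k if theta_{k-1} < Z_i <= theta_k, theta_k = Phi^{-1}(q_k)."""
    rng = np.random.default_rng(seed)
    theta = norm.ppf(np.asarray(q, dtype=float))
    common = rng.standard_normal((n_trials, 1))
    idio = rng.standard_normal((n_trials, n_assets))
    Z = np.sqrt(rho) * common + np.sqrt(1.0 - rho) * idio
    # searchsorted maps each Z to the first year whose threshold it falls below.
    year = np.searchsorted(theta, Z) + 1
    year[Z > theta[-1]] = 0          # beyond theta_T: survives all T years
    return year
```

The one-shot draw is also why, in the runtime comparison of Section 8, this model is the fastest of the five.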
6 Stochastic default intensity

6.1 Description of the model

The approaches of the three previous sections can all be thought of as extensions of the single period CreditMetrics framework. Each approach relies on standard normal random variables to drive defaults, and calibrates thresholds for these variables. Furthermore, it is easy to see that over the first period, the three approaches are identical; they only differ in their behavior over multiple periods.

Our fourth model takes a different approach to the construction of correlated defaults over time, and can be thought of as an extension of the single period CreditRisk+ framework. In the CreditRisk+ model, correlations between default events are constructed through the assets' dependence on a common default probability, which itself is a random variable.⁶ Importantly, given the realization of the default probability, defaults are conditionally independent. The volatility of the common default probability is in effect the correlation parameter for this model; a higher default volatility induces stronger correlations, while a zero volatility produces independent defaults.⁷

The natural extension of the CreditRisk+ framework to continuous time is the stochastic intensity approach presented in Duffie and Garleanu (1998) and Duffie and Singleton (1999). Intuitively, the stochastic intensity model stipulates that in a given small time interval, assets default independently, with probability proportional to a common default intensity.⁸ In the next time interval, the intensity changes, and defaults are once again independent, but with the default probability proportional to the new intensity level. The evolution of the intensity is described through a stochastic process. In practice, since the intensity must remain positive, it is common to apply similar stochastic processes as are utilized in models of interest rates.

⁶ More precisely, assets may depend on different default probabilities, each of which are correlated.
⁷ See Finger (1998), Gordy (2000), and Kolyoglu and Hickman (1998) for further discussion.
⁸ As with our description of the CreditRisk+ model, this is a simplification. The Duffie-Garleanu framework provides for an intensity process for each asset, with the processes being correlated.
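For intuition, the single period mixing mechanism described above can be sketched in a few lines. The gamma mixing distribution (the usual CreditRisk+ convention) and all parameter values below are our illustrative assumptions, not taken from the paper:

```python
import numpy as np

def mixed_default_counts(p_mean, p_vol, n_assets=100, n_trials=10000, seed=0):
    """Single period: draw a common default probability (gamma-mixed here),
    then conditionally independent defaults. Zero p_vol gives independence."""
    rng = np.random.default_rng(seed)
    if p_vol == 0.0:
        p = np.full(n_trials, p_mean)
    else:
        shape = (p_mean / p_vol) ** 2      # gamma with mean p_mean, sd p_vol
        scale = p_vol ** 2 / p_mean
        p = rng.gamma(shape, scale, size=n_trials)
    return rng.binomial(n_assets, np.clip(p, 0.0, 1.0))

# Higher p_vol -> fatter-tailed default counts, i.e. stronger implied correlation.
print(np.std(mixed_default_counts(0.03, 0.0)), np.std(mixed_default_counts(0.03, 0.02)))
```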
For our purposes, we will model a single intensity process h. Conditional on h, the default time for each asset is then the first arrival of a Poisson process with arrival rate given by h. The Poisson processes driving the defaults for distinct assets are independent, meaning that given a realization of the intensity process h, defaults are independent. The Poisson process framework implies that given h, the probability that a given asset survives until time t is

exp( −∫_0^t h_u du ).   (21)

Further, because defaults are conditionally independent, the conditional probability, given h, that two assets both survive until time t is

exp( −2 ∫_0^t h_u du ).   (22)

The unconditional survival probabilities are given by expectations over the process h, so that in particular, the survival probability for a single asset is given by

1 − q_t = E[ exp( −∫_0^t h_u du ) ].   (23)

For the intensity process, we assume that h evolves according to the stochastic differential equation

dh_t = −κ(h_t − h̄_k) dt + σ √(h_t) dW_t,   (24)

where W is a Wiener process and h̄_k is the level to which the process trends during year k. (That is, the mean reversion is toward h̄_1 for t ≤ 1, toward h̄_2 for 1 < t ≤ 2, etc.) Let h_0 = h̄_1. Note that this is essentially the model for the instantaneous discount rate used in the Cox-Ingersoll-Ross interest rate model. Note also that in Duffie-Garleanu, there is a jump component to the evolution of h, while the level of mean reversion is constant.

In order to express the default probabilities implied by the stochastic intensity model in closed form, we will rely on the following result from Duffie-Garleanu.⁹ For a process h with h_0 = h̄ and evolving according to (24) with h̄_k = h̄ for all k, we have

E_t[ exp( −∫_t^{t+s} h_u du ) exp( x + y h_{t+s} ) ] = exp( x + α_s(y) h̄ + β_s(y) h_t ),   (25)

where E_t denotes conditional expectation given information available at time t. The functions α_s and β_s are given by

α_s(y) = (κ/c) s + ( κ(a(y)c − d(y)) / (b c d(y)) ) log( (c + d(y) e^{bs}) / (c + d(y)) ), and   (26)

β_s(y) = (1 + a(y) e^{bs}) / (c + d(y) e^{bs}),   (27)

where

c = ( −κ + √(κ² + 2σ²) ) / 2,   (28)

d(y) = (1 − cy) ( σ²y − κ + √( (σ²y − κ)² − σ²(σ²y² − 2κy − 2) ) ) / ( σ²y² − 2κy − 2 ),   (29)

a(y) = (d(y) + c) y − 1,   (30)

b = ( −d(y)(κ + 2c) + a(y)(σ² − κc) ) / ( a(y)c − d(y) ).   (31)

6.2 Calibration

Our calibration approach for this model will be to fix the mean reversion speed κ, solve for h̄_1 and σ to match p_1 and p_{1,1}, and then to solve in turn for h̄_2, ..., h̄_T to match p_2, ..., p_T. To begin, we apply (23) and (25) to obtain

p_1 = 1 − exp( α_1(0) h̄_1 + β_1(0) h_0 ) = 1 − exp( [α_1(0) + β_1(0)] h̄_1 ).   (32)

To compute the joint probability that two obligors each survive the first year, we must take the expectation of (22), which is essentially the same computation as above, but with the process h replaced by 2h. We observe that the process 2h also evolves according to (24) with the same mean reversion speed κ, and with h̄_k replaced by 2h̄_k and σ replaced by √2 σ. Thus, we define the functions α̂_s and β̂_s in the same way as α_s and β_s, with σ replaced by √2 σ. We can then compute the joint one year survival probability:

E[ exp( −2 ∫_0^1 h_u du ) ] = exp( 2 [α̂_1(0) + β̂_1(0)] h̄_1 ).   (33)

Finally, since the joint survival probability is equal to 1 − 2p_1 + p_{1,1}, we have

p_{1,1} = 2p_1 − 1 + exp( 2 [α̂_1(0) + β̂_1(0)] h̄_1 ).   (34)

To calibrate σ and h̄_1 to (32) and (34), we first find the value of σ such that

2(α̂_1(0) + β̂_1(0)) / (α_1(0) + β_1(0)) = log[1 − 2p_1 + p_{1,1}] / log[1 − p_1],   (35)

and then set

h̄_1 = log[1 − p_1] / (α_1(0) + β_1(0)).   (36)

Note that though the equations are lengthy, the calibration is actually quite straightforward, in that we only are ever required to fit one parameter at a time.

In order to calibrate h̄_2, we need to obtain an expression for the two year cumulative default probability q_2. To this end, we must compute the two year survival probability

1 − q_2 = E[ exp( −∫_0^2 h_u du ) ].   (37)

Since the process h does not have a constant level of mean reversion over the first two years, we cannot apply (25) directly here. However (25) can be applied once we express the two year survival probability as

1 − q_2 = E[ exp( −∫_0^1 h_u du ) E_1[ exp( −∫_1^2 h_u du ) ] ].   (38)

Now given h_1, the process h evolves according to (24) from t = 1 to t = 2 with a constant mean reversion level h̄_2, meaning we can apply (25) to the conditional expectation in (38), yielding

1 − q_2 = E[ exp( −∫_0^1 h_u du ) exp( α_1(0) h̄_2 + β_1(0) h_1 ) ].   (39)

The same argument allows us to apply (25) again to (39), giving

1 − q_2 = exp( α_1(0) h̄_2 + [α_1(β_1(0)) + β_1(β_1(0))] h̄_1 ).   (40)

Thus, our calibration for the second year requires setting

h̄_2 = ( log[1 − q_2] − [α_1(β_1(0)) + β_1(β_1(0))] h̄_1 ) / α_1(0).   (41)

The remaining mean reversion levels h̄_3, ..., h̄_T are calibrated similarly.

6.3 Joint default probabilities

The computation of joint probabilities for longer horizons is similar to (34). The joint probability that two obligors each survive the first two years is given by

E[ exp( −2 ∫_0^2 h_u du ) ].   (42)

Here, we apply the same arguments as in (38) through (40) to derive

E[ exp( −2 ∫_0^2 h_u du ) ] = exp( 2 α̂_1(0) h̄_2 + 2 [α̂_1(β̂_1(0)) + β̂_1(β̂_1(0))] h̄_1 ).   (43)

For the joint probability that the first obligor survives the first year and the second survives the first two years, we must compute

E[ exp( −∫_0^1 h_u du ) exp( −∫_0^2 h_u du ) ] = E[ exp( −2 ∫_0^1 h_u du ) exp( −∫_1^2 h_u du ) ].   (44)

The same reasoning yields

E[ exp( −∫_0^1 h_u du ) exp( −∫_0^2 h_u du ) ] = exp( α_1(0) h̄_2 + 2 [α̂_1(β_1(0)/2) + β̂_1(β_1(0)/2)] h̄_1 ).   (45)

The joint default probabilities p_{2,2} and p_{1,2} then follow from (43) and (45).

⁹ We have changed the notation slightly from the Duffie-Garleanu result, in order to make more explicit the dependence on h̄.
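Given how lengthy the closed form coefficients are, a brute-force cross-check is useful: simulate paths of h under (24) and average exp(−∫h du) and exp(−2∫h du) to estimate the single-name and joint survival probabilities in (23) and (42). A rough sketch under placeholder parameters in the spirit of Table 1 (speculative grade, slow mean reversion, low correlation):

```python
import numpy as np

def mc_survival(kappa, sigma, h_bar, horizon_years, n_paths=20000,
                steps_per_year=200, seed=0):
    """Monte Carlo estimate of E[exp(-int_0^T h du)] (single-name survival, eq (23))
    and E[exp(-2 int_0^T h du)] (joint two-name survival), with h following (24)."""
    rng = np.random.default_rng(seed)
    dt = 1.0 / steps_per_year
    h = np.full(n_paths, h_bar[0])
    integral = np.zeros(n_paths)
    for year in range(horizon_years):
        level = h_bar[year]                  # mean-reversion target for this year
        for _ in range(steps_per_year):
            h += -kappa * (h - level) * dt \
                 + sigma * np.sqrt(np.maximum(h, 0.0) * dt) * rng.standard_normal(n_paths)
            h = np.maximum(h, 0.0)           # crude positivity guard
            integral += h * dt
    return np.exp(-integral).mean(), np.exp(-2.0 * integral).mean()

# If the calibration is right, the first estimate should land near 1 - q_2 = 0.9324.
print(mc_survival(0.29, 0.28, [0.0344, 0.0606], 2))
```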
7 Model comparisons – closed form results

Our first set of model comparisons will utilize the closed form results described in the previous sections. We will restrict the comparisons here to the two period setting, and to second order results (that is, default volatilities and joint probabilities for two assets); results for multiple periods and actual distributions of default rates will be analyzed through Monte Carlo in the next section.

For our two period comparisons, we will analyze four sets of parameters: investment and speculative grade default probabilities¹⁰, each with two correlation values. The low and high correlation settings will correspond to values of 10% and 40%, respectively, for the asset correlation parameter ρ in the first three models. For the stochastic intensity model, we will investigate two values for the mean reversion speed κ. The slow setting will correspond to κ = 0.29, such that a random shock to the intensity process will decay by 25% over the next year; the fast setting will correspond to κ = 1.39, such that a random shock to the intensity process will decay by 75% over one year. Calibration results are presented in Table 1.

¹⁰ Taken from Exhibit 30 of Keenan et al (2000).
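The decay figures follow from the exponential decay of shocks at rate κ; a quick arithmetic check:

```python
import math

# Fraction of a shock to h that has decayed after one year: 1 - exp(-kappa).
for kappa in (0.29, 1.39):
    print(kappa, round(1.0 - math.exp(-kappa), 2))   # 0.25 and 0.75
```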
We present the normalized year two default volatilities for each model in Figure 1. As defined in (5) and (6), the marginal and cumulative default volatilities are the standard deviation of the marginal and cumulative two year default rates of a large, homogeneous portfolio. As we would expect, the default volatilities are greater in the high correlation cases than in the low correlation cases. Of the five models tested, the stochastic intensity model with slow mean reversion seems to produce the highest levels of default volatility, indicating that correlations in the second period tend to be higher for this model than for the others.

It is interesting to note that of the first three models, all of which are based on the normal distribution and default thresholds, the copula approach in all four cases has a relatively low marginal default volatility but a relatively high cumulative default volatility. (The slow stochastic intensity model is in fact the only other model to show a marginal volatility less than the cumulative volatility.) Note that the cumulative two year default rate is the sum of the first and second year marginal default rates, and thus that the two year cumulative default volatility is composed of three terms: the first and second year marginal default volatilities and the covariance between the first and second years. Our calibration guarantees that the first year default volatilities are identical across the models. Thus, the behavior of the copula model suggests a stronger covariance term (that is, a stronger link between year one and year two defaults) than for either of the two CreditMetrics extensions.

To further investigate the links between default events, we examine the conditional probability of a default in the second year, given the default of another asset. To be precise, for two distinct assets i and j, we will calculate the conditional probability that asset i defaults in year two, given that asset j defaults in year one, normalized by the unconditional probability that asset i defaults in year two. In terms of quantities we have already defined, this normalized conditional probability is equal to p_{1,2}/(p_1 p_2). We will also calculate the normalized conditional probability that asset i defaults in year two, given that asset j defaults in year two, given by p_{2,2}/p_2². For both of these quantities, a value of one indicates that the first asset defaulting does not affect the chance that the second asset defaults; a value of four indicates that the second asset is four times more likely to default if the first asset defaults than it is if we have no information about the first asset. Thus, the probability conditional on a year two default can be interpreted as an indicator of contemporaneous correlation of defaults, and the probability conditional on a year one default as an indicator of lagged default correlation.
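Both indicators reduce to ratios of quantities already defined; a trivial helper (our naming):

```python
def normalized_conditional_probs(p1, p2, p12, p22):
    """Return (lagged, contemporaneous) normalized conditional default probabilities:
    lagged          = P(i in yr 2 | j in yr 1) / P(i in yr 2) = p12 / (p1 * p2)
    contemporaneous = P(i in yr 2 | j in yr 2) / P(i in yr 2) = p22 / p2**2
    """
    return p12 / (p1 * p2), p22 / p2**2
```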
The normalized conditional probabilities under the five models are presented in Figure 2. As we expect, there is no lagged correlation for the discrete CreditMetrics extension. Interestingly, the copula and both stochastic intensity models often show a higher lagged than contemporaneous correlation. While it is difficult to establish much intuition for the copula model, this phenomenon can be rationalized in the stochastic intensity setting. For this model, any shock to the default intensity will tend to persist longer than one year. If one asset defaults in the first year, it is most likely due to a positive shock to the intensity process; this shock then persists into the second year, where the other asset is more likely to default than normal. Further, shocks are more persistent for the slower mean reversion, explaining why the difference in lagged and contemporaneous correlation is more pronounced in this case. By contrast, the two CreditMetrics extensions show much higher contemporaneous than lagged correlation; this lack of persistence in the correlation structure will manifest itself more strongly over longer horizons.

To this point, we have calibrated the collection of models to have the same means over two periods, and the same volatilities over one period. We have then investigated the remaining second order statistics – the second period volatility and the correlation between the first and second periods – that depend on the particular models. In the next section, we will extend the analysis on two fronts: first, we will investigate more horizons in order to examine the effects of lagged and contemporaneous correlations over longer times; second, we will investigate the entire distribution of portfolio defaults rather than just the second order moments.

8 Model comparisons – simulation results

In this section, we perform Monte Carlo simulations for the five models investigated previously. In each case, we begin with a homogeneous portfolio of one hundred speculative grade bonds. We calibrate the model to the cumulative default probabilities in Table 2 and to the two correlation settings from the previous section. Over 1,000 trials, we simulate the number of bonds that default within each year, up to a final horizon of six years.¹¹

The simulation procedures are straightforward for the two CreditMetrics extensions and the copula approach. For the stochastic intensity framework, we simulate the evolution of the intensity process according to (24). This requires a discretization of (24):

h_{t+Δt} ≈ h_t − κ(h_t − h̄_k) Δt + σ √(h_t) √(Δt) ε,   (46)

where ε is a standard normal random variable.¹² Given the intensity process path for a particular scenario, we then compute the conditional survival probability for each annual period as in (21). Finally, we generate defaults by drawing independent binomial random variables with the appropriate probability.

¹¹ As we have pointed out before, it is possible to simulate continuous default times under the copula and stochastic intensity frameworks. In order to compare with the two CreditMetrics extensions, we restrict the analysis to annual buckets.
¹² Note that while (24) guarantees a non-negative solution for h, the discretized version admits a small probability that h_{t+Δt} will be negative. To reduce this possibility, we choose Δt for each timestep such that the probability that h_{t+Δt} < 0 is sufficiently small. The result is that while we only need 50 timesteps per year in some cases, we require as many as one thousand when the value of σ is large, as in the high correlation, fast mean reversion case.
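A sketch of this procedure (our code; the Euler step follows (46), the positivity issue from footnote 12 is handled crudely by clipping, and all parameters are placeholders):

```python
import numpy as np

def simulate_intensity_defaults(kappa, sigma, h_bar, n_assets=100,
                                steps_per_year=50, n_trials=1000, seed=0):
    """Simulate the common intensity h by Euler stepping (46), then draw
    conditionally independent defaults per year with survival probability
    exp(-integral of h), per (21). Returns defaults per year per trial."""
    rng = np.random.default_rng(seed)
    T = len(h_bar)                       # h_bar[k] is the target level in year k+1
    dt = 1.0 / steps_per_year
    defaults = np.zeros((n_trials, T), dtype=int)
    for t in range(n_trials):
        h = h_bar[0]
        alive = n_assets
        for k in range(T):
            integral = 0.0
            for _ in range(steps_per_year):
                h += -kappa * (h - h_bar[k]) * dt \
                     + sigma * np.sqrt(max(h, 0.0) * dt) * rng.standard_normal()
                h = max(h, 0.0)          # crude positivity guard (cf. footnote 12)
                integral += h * dt
            p_default = 1.0 - np.exp(-integral)
            d = rng.binomial(alive, p_default)
            defaults[t, k] = d
            alive -= d
    return defaults
```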
The simulation time for the five models is a direct result of the number of timesteps needed. The copula model simulates the default times directly, and is therefore the fastest. The two CreditMetrics models require only annual timesteps, and require roughly 50% more runtime than the copula model. For the stochastic intensity model, the need to simulate over many timesteps produces a runtime over one hundred times greater than the simpler models.

We first examine default rate volatilities over the six horizons. As in the previous section, we consider the normalized cumulative default rate volatility. For year k, this is the standard deviation of the number of defaults that occur in years one through k, divided by the expected number of defaults in that period. This is essentially the quantity defined in (6), with the exception that here we consider a finite portfolio. The default volatilities from our simulations are presented in Figure 3. Our calibration guarantees that the first year default volatilities are essentially the same. The second year results are similar to those in Figure 1, with slightly higher volatility for the slow stochastic intensity model, and slightly lower volatility for the discrete CreditMetrics extension. At longer horizons, these differences are amplified: the slow stochastic intensity and discrete CreditMetrics models show high and low volatilities, respectively, while the remaining three models are indistinguishable.

Though default rate volatilities are illustrative, they do not provide us information about the full distribution of defaults through time. At the one year horizon, our calibration guarantees that volatility will be consistent across the five models; the distribution assumptions, however, influence the precise shape of the portfolio distribution.
We see in Table 3 that there is actually very little difference between even the 1st percentiles of the distributions, particularly in the low correlation case. For the full six year horizon, Table 4 shows more differences between the percentiles. Consistent with the default volatility results, the tail percentiles are most extreme for the slow stochastic intensity model, and least extreme for discrete CreditMetrics. Interestingly, though the CreditMetrics diffusion model shows similar volatility to the copula and fast stochastic intensity models, it produces less extreme percentiles than these other models. Note also that among distributions with similar means, the median serves well as an indicator of skewness. The high correlation setting generally, and the slow stochastic intensity model in particular, show lower medians. For these cases, the distribution places higher probability on the worst default scenarios as well as the scenarios with few or no defaults.

The cumulative probability distributions for the six year horizons are presented in Figures 4 through 7. As in the other comparisons, the slow stochastic intensity model is notable for placing large probability on the very low and high default rate scenarios, while the discrete CreditMetrics extension stands out as the most benign of the distributions. Most striking, however, is the similarity between the fast stochastic intensity and copula models, which are difficult to differentiate even at the most extreme percentile levels.

As a final comparison of the default distributions, we consider the pricing of a simple structure written on our portfolio; a pricing sketch in code follows the description below. Suppose each of the one hundred bonds in the portfolio has a notional value of $1 million, and that in the event of a default the recovery rate on each bond is forty percent. The structure is composed of three elements:

(i) First loss protection. As defaults occur, the protection seller reimburses the structure up to a total payment of $10 million. Thus, the seller pays $600,000 at the time of the first default, $600,000 at the time of each of the subsequent fifteen defaults, and $400,000 at the time of the seventeenth default.
(ii) Second loss protection. The protection seller reimburses the structure for losses in excess of $10 million, up to a total payment of $20 million. This amounts to reimbursing the losses on the seventeenth through the fiftieth defaults.

(iii) Senior notes. Notes with a notional value of $100 million maturing after six years. The notes suffer a principal loss if the first and second loss protection are fully utilized – that is, if more than fifty defaults occur.

For the first and second loss protection, we will estimate the cost of the protection based on a constant discount rate of 7%. In each scenario, we produce the timing and amounts of the protection payments, and discount these back to the present time. The price of the protection is then the average discounted value across the 1,000 scenarios. For the senior notes, we compute the expected principal loss at maturity, which is used by Moody's along with Table 5 to determine the notes' rating. Additionally, we compute the total amount of protection (capital) required to achieve a rating of A3 (an expected loss of 0.5%) and Aa3 (an expected loss of 0.101%).

We present the first and second loss prices in Table 6, along with the expected loss, current rating, and required capital for the senior notes. The slow stochastic intensity model yields the lowest pricing for the first loss protection, the worst rating for the senior notes, and the highest required capital. The results for the other models are as expected, with the copula and fast mean reversion models yielding the most similar results.
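As an illustration of these mechanics (our code, not the authors'; default times are taken in annual buckets as in the simulations), the first loss leg can be valued from a matrix of simulated default years:

```python
import numpy as np

def price_first_loss(default_year, notional=1.0, recovery=0.4,
                     attachment=10.0, rate=0.07):
    """Average discounted first-loss payments (units of $M) over scenarios.
    default_year: (trials, assets) array, 0 = no default, else year of default.
    Loss per default is notional*(1-recovery); payments stop at `attachment`."""
    loss_per_default = notional * (1.0 - recovery)   # $0.6M per defaulted bond
    prices = []
    for scen in default_year:
        years = np.sort(scen[scen > 0])
        paid, pv = 0.0, 0.0
        for y in years:
            pay = min(loss_per_default, attachment - paid)
            if pay <= 0.0:
                break
            pv += pay / (1.0 + rate) ** y
            paid += pay
        prices.append(pv)
    return float(np.mean(prices))
```

The second loss leg is analogous, with payments beginning once cumulative losses pass $10 million, and the senior note expected loss is the average principal shortfall in scenarios with more than fifty defaults.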
9 Conclusion

The analysis of Collateralized Debt Obligations, and other structured products written on credit portfolios, requires a model of correlated defaults over multiple horizons. For single horizon models, the effect of model and distribution choice on the model results is well understood. For the multiple horizon models, however, there has been little research.

We have outlined four approaches to multiple horizon modeling of defaults in a portfolio. We have calibrated the four models to the same set of input data (average defaults and a single period correlation parameter), and have investigated the resulting default distributions. The differences we observe can be attributed to the model structures, and to some extent, to the choice of distributions that drive the models.

Our results show a significant disparity. The rating on a class of senior notes under our low correlation assumption varied from Aaa to A3, and under our high correlation assumption from A1 to Baa3. Additionally, the capital required to achieve a target investment grade rating varied by as much as a factor of two. In the single period case, a number of studies have concluded that when calibrated to the same first and second order information, the various models do not produce vastly different conclusions. Here, the issue of model choice is much more important, and any analysis of structures over multiple horizons should heed this potential model error.

References

Cifuentes, A., Choi, E., and Waite, J. (1998). Stability of ratings of CBO/CLO tranches. Moody's Investors Service.

Credit Suisse Financial Products. (1997). CreditRisk+: A credit risk management framework.

Duffie, D. and Garleanu, N. (1998). Risk and valuation of Collateralized Debt Obligations. Working paper. Graduate School of Business, Stanford University. http://www.stanford.edu/~duffie/working.htm

Duffie, D. and Singleton, K. (1998). Simulating correlated defaults. Working paper. Graduate School of Business, Stanford University. http://www.stanford.edu/~duffie/working.htm

Duffie, D. and Singleton, K. (1999). Modeling term structures of defaultable bonds. Review of Financial Studies, 12, 687-720.
Finger, C. (1998). Sticks and stones. Working paper. RiskMetrics Group. http://www.riskmetrics.com/research/working

Gordy, M. (2000). A comparative anatomy of credit risk models. Journal of Banking & Finance, 24 (January), 119-149.

Gupton, G., Finger, C., and Bhatia, M. (1997). CreditMetrics – Technical Document. Morgan Guaranty Trust Co. http://www.riskmetrics.com/research/techdoc

Li, D. (1999). The valuation of basket credit derivatives. CreditMetrics Monitor, April, 34-50. http://www.riskmetrics.com/research/journals

Li, D. (2000). On default correlation: a copula approach. The Journal of Fixed Income, 9 (March), 43-54.

Keenan, S., Hamilton, D. and Berthault, A. (2000). Historical default rates of corporate bond issuers, 1920-1999. Moody's Investors Service.

Kolyoglu, U. and Hickman, A. (1998). Reconcilable differences. Risk, October.

Nagpal, K. and Bahar, R. (1999). An analytical approach for credit risk analysis under correlated defaults. CreditMetrics Monitor, April, 51-74. http://www.riskmetrics.com/research/journals

Standard & Poor's. (2000). Ratings performance 1999: Stability & Transition.

Wilson, T. (1997). Portfolio Credit Risk I. Risk, September.

Wilson, T. (1997). Portfolio Credit Risk II. Risk, October.
Table 1: Calibration results.

                                      Investment grade           Speculative grade
Parameter                          Low corr.    High corr.    Low corr.    High corr.
Inputs
  p_1                              0.16%        0.16%         3.35%        3.35%
  p_2                              0.33%        0.33%         3.41%        3.41%
  p_{1,1}                          0.0007%      0.0059%       0.1776%      0.5190%
Discrete CreditMetrics extension
  θ_1                              -2.95        -2.95         -1.83        -1.83
  θ_2                              -2.72        -2.72         -1.81        -1.81
  ρ                                10%          40%           10%          40%
Diffusion CreditMetrics extension
  θ_1                              -2.95        -2.95         -1.83        -1.83
  θ_2                              -3.78        -3.78         -2.34        -2.34
  ρ                                10%          40%           10%          40%
Copula functions
  θ_1                              -2.95        -2.95         -1.83        -1.83
  θ_2                              -2.58        -2.58         -1.49        -1.49
  ρ                                10%          40%           10%          40%
Stochastic intensity – slow mean reversion
  κ                                0.29         0.29          0.29         0.29
  σ                                0.10         0.37          0.28         0.76
  h̄_1                             0.16%        0.16%         3.44%        3.67%
  h̄_2                             1.47%        1.58%         6.06%        12.10%
Stochastic intensity – fast mean reversion
  κ                                1.39         1.39          1.39         1.39
  σ                                0.14         0.53          0.40         1.12
  h̄_1                             0.16%        0.16%         3.44%        3.68%
  h̄_2                             0.53%        0.55%         4.00%        5.02%

Table 2: Moody's speculative grade cumulative default probabilities. From Exhibit 30, Keenan et al (2000).

Year           1        2        3        4         5         6
Probability    3.35%    6.76%    9.98%    12.89%    15.57%    17.91%
Table 3: One year default statistics. Speculative grade.

                     CM Discrete   CM Diffusion   Copula   Stoch. Int. Slow   Stoch. Int. Fast
Low correlation
  Mean               3.37          3.36           3.51     3.20               3.20
  St. Dev.           3.15          3.27           3.40     3.03               3.05
  Median             3             2              3        3                  2
  5th percentile     10            9              10       9                  10
  1st percentile     14            15             15       13                 14
High correlation
  Mean               3.62          3.24           3.72     3.69               3.56
  St. Dev.           7.08          6.32           7.52     6.84               6.73
  Median             1             1              1        1                  1
  5th percentile     19            15             19       19                 16
  1st percentile     37            32             34       30                 35

Table 4: Six year cumulative default statistics. Speculative grade.

                     CM Discrete   CM Diffusion   Copula   Stoch. Int. Slow   Stoch. Int. Fast
Low correlation
  Mean               17.72         16.93          18.04    17.34              18.10
  St. Dev.           6.40          8.68           9.66     16.15              9.73
  Median             17            16             17       12                 16
  5th percentile     29            33             37       52                 37
  1st percentile     34            42             47       73                 49
High correlation
  Mean               18.41         17.28          18.61    19.81              20.41
  St. Dev.           13.49         17.41          19.27    24.37              19.36
  Median             15            12             12       9                  13
  5th percentile     45            54             63       82                 62
  1st percentile     59            73             78       98                 86
Table 5: Target expected losses for six year maturity. From Chart 3, Cifuentes et al (2000).

Rating    Expected loss
Aaa       0.002%
Aa1       0.023%
Aa2       0.048%
Aa3       0.101%
A1        0.181%
A2        0.320%
A3        0.500%
Baa1      0.753%
Baa2      1.083%
Baa3      2.035%

Table 6: Prices (in $M) for first and second loss protection. Expected loss, rating, and required capital ($M) for senior notes. Speculative grade collateral.

                                                     Senior notes
                     First loss   Second loss   Exp. loss   Rating   Capital (Aa3)   Capital (A3)
Low correlation
  CM Discrete        7.227        1.350         0.000%      Aaa      17.3            13.8
  CM Diffusion       6.676        1.533         0.017%      Aa1      21.6            15.9
  Copula             6.788        1.936         0.022%      Aa1      24.5            18.0
  Stoch. int. – slow 5.533        2.501         0.466%      A3       39.8            29.4
  Stoch. int. – fast 6.763        1.911         0.038%      Aa2      25.7            18.3
High correlation
  CM Discrete        6.117        2.698         0.159%      A1       32.3            23.6
  CM Diffusion       5.144        2.832         0.514%      Baa1     41.1            30.2
  Copula             5.210        3.200         0.821%      Baa2     43.7            34.4
  Stoch. int. – slow 4.856        3.307         1.903%      Baa3     54.5            46.1
  Stoch. int. – fast 5.685        3.500         0.918%      Baa2     45.9            35.2
Figure 1: Marginal and cumulative year two default volatility. [Four panels: investment/speculative grade, low/high correlation; bars for CM Discrete, CM Diffusion, Copula, Stoch. Int. Slow, Stoch. Int. Fast.]

Figure 2: Year two conditional default probability given default of a second asset. [Same four panels and models; bars conditional on a first year default and on a second year default.]

Figure 3: Normalized cumulative default rate volatilities. Speculative grade. [Default volatility versus time, years 1-6; low and high correlation panels.]

Figure 4: Distribution of cumulative six year defaults. Speculative grade, low correlation. [Cumulative probability versus number of defaults, by model.]

Figure 5: Distribution of cumulative six year defaults, extreme cases. Speculative grade, low correlation. [Upper tail, 80%-100% cumulative probability.]

Figure 6: Distribution of cumulative six year defaults. Speculative grade, high correlation. [Cumulative probability versus number of defaults, by model.]

Figure 7: Distribution of cumulative six year defaults, extreme cases. Speculative grade, high correlation. [Upper tail, 80%-100% cumulative probability.]
The RiskMetrics Group
Working Paper Number 99-07

On Default Correlation: A Copula Function Approach

David X. Li

This draft: February 2000
First draft: September 1999

44 Wall St.
New York, NY 10005
david.li@riskmetrics.com
www.riskmetrics.com
On Default Correlation: A Copula Function Approach

David X. Li

February 2000

Abstract

This paper studies the problem of default correlation. We first introduce a random variable called "time-until-default" to denote the survival time of each defaultable entity or financial instrument, and define the default correlation between two credit risks as the correlation coefficient between their survival times. Then we argue why a copula function approach should be used to specify the joint distribution of survival times after marginal distributions of survival times are derived from market information, such as risky bond prices or asset swap spreads. The definition and some basic properties of copula functions are given. We show that the current CreditMetrics approach to default correlation through asset correlation is equivalent to using a normal copula function. Finally, we give some numerical examples to illustrate the use of copula functions in the valuation of some credit derivatives, such as credit default swaps and first-to-default contracts.
• 68. 1 Introduction

The rapidly growing credit derivative market has created a new set of financial instruments which can be used to manage the most important dimension of financial risk - credit risk. In addition to the standard credit derivative products, such as credit default swaps and total return swaps based upon a single underlying credit risk, many new products are now associated with a portfolio of credit risks. A typical example is the product with payment contingent upon the time and identity of the first or second-to-default in a given credit risk portfolio. Variations include instruments with payment contingent upon the cumulative loss before a given time in the future. The equity tranche of a collateralized bond obligation (CBO) or a collateralized loan obligation (CLO) is yet another variation, where the holder of the equity tranche incurs the first loss. Deductibles and stop-loss features in insurance products could also be incorporated into the basket credit derivatives structure. As more financial firms try to manage their credit risk at the portfolio level and the CBO/CLO market continues to expand, the demand for basket credit derivative products will most likely continue to grow.

Central to the valuation of credit derivatives written on a credit portfolio is the problem of default correlation. The problem of default correlation arises even in the valuation of a simple credit default swap with one underlying reference asset, if we do not assume independence of default between the reference asset and the default swap seller. Surprising though it may seem, default correlation has not been well defined and understood in finance. Existing literature tends to define default correlation based on discrete events which dichotomize according to survival or nonsurvival at a critical period such as one year. For example, if we denote

$$q_A = \Pr[E_A], \qquad q_B = \Pr[E_B], \qquad q_{AB} = \Pr[E_A E_B],$$

where $E_A$ and $E_B$ are the default events of two securities A and B over one year, then the default correlation $\rho$ between the two default events $E_A$ and $E_B$, based on the standard definition of correlation of two random variables, is defined as follows
• 69. 
$$\rho = \frac{q_{AB} - q_A\,q_B}{\sqrt{q_A(1-q_A)\,q_B(1-q_B)}}. \qquad (1)$$

This discrete event approach has been taken by Lucas [1995]. Hereafter we simply call this definition of default correlation the discrete default correlation.

However, the choice of a specific period like one year is more or less arbitrary. It may correspond with many empirical studies of default rates over a one-year period, but the dependence of default correlation on a specific time interval has its disadvantages.

First, default is a time dependent event, and so is default correlation. Let us take the survival time of a human being as an example. The probability of dying within one year for a person aged 50 years today is about 0.6%, but the probability of the same person dying within 50 years is almost a sure event. Similarly, default correlation is a time dependent quantity. Let us now take the survival times of a couple, both aged 50 years today. The correlation between the two discrete events that each dies within one year is very small. But the correlation between the two discrete events that each dies within 100 years is 1.

Second, concentration on a single period of one year wastes important information. There are empirical studies which show that the default tendency of corporate bonds is linked to their age since issue. There are also strong links between the economic cycle and defaults. Arbitrarily focusing on a one-year period neglects this important information.

Third, in the majority of credit derivative valuations, what we need is not the default correlation of two entities over the next year. We may need a joint distribution of survival times for the next 10 years.

Fourth, the calculation of default rates as simple proportions is possible only when no samples are censored during the one-year period.¹

This paper introduces a few techniques used in survival analysis. These techniques have been widely applied to other areas, such as life contingencies in actuarial science and industrial life testing in reliability studies, which are similar to the credit problems we encounter here. We first introduce a random variable called

¹ A company that is observed, default free, by Moody's for 5 years and then withdrawn from the Moody's study must have a survival time exceeding 5 years. Another company may enter into Moody's study in the middle of a year, which implies that Moody's observes the company for only half of the one-year observation period. In the survival analysis of statistics, such incomplete observation of default time is called censoring. According to Moody's studies, such incomplete observation does occur in Moody's credit default samples.
• 70. "time-until-default" to denote the survival time of each defaultable entity or financial instrument. Then, we define the default correlation of two entities as the correlation between their survival times.

In credit derivative valuation we need first to construct a credit curve for each credit risk. A credit curve gives all marginal conditional default probabilities over a number of years. This curve is usually derived from the risky bond spread curve or from asset swap spreads currently observable in the market. Spread curves and asset swap spreads contain information on default probabilities, recovery rates and liquidity factors. Assuming an exogenous recovery rate and a default treatment, we can extract a credit curve from the spread curve or asset swap spread curve. For two credit risks, we would obtain two credit curves from market observable information. Then, we need to specify a joint distribution for the survival times such that the marginal distributions are the credit curves. Obviously, this problem has no unique solution. Copula functions used in multivariate statistics provide a convenient way to specify the joint distribution of survival times with given marginal distributions. The concept of copula functions, their basic properties, and some commonly used copula functions are introduced. Finally, we give a few numerical examples of credit derivative valuation to demonstrate the use of copula functions and the impact of default correlation.

2 Characterization of Default by Time-Until-Default

In the study of default, interest centers on a group of individual companies for each of which a point event, often called default (or survival), is defined as occurring after a length of time. We introduce a random variable called the time-until-default, or simply survival time, for a security, to denote this length of time. This random variable is the basic building block for the valuation of cash flows subject to default. To determine time-until-default precisely, we need: an unambiguously defined time origin, a time scale for measuring the passage of time, and a clear definition of default. We choose the current time as the time origin, to allow use of current market information to build credit curves. The time scale is defined in terms of years for continuous models, or number of periods for discrete models. The meaning of default is defined by some rating agencies, such as Moody's.
• 71. 2.1 Survival Function

Let us consider an existing security A. This security's time-until-default, $T_A$, is a continuous random variable which measures the length of time from today to the time when default occurs. For simplicity we just write $T$, which should be understood as the time-until-default of the specific security A. Let $F(t)$ denote the distribution function of $T$,

$$F(t) = \Pr(T \le t), \qquad t \ge 0, \qquad (2)$$

and set

$$S(t) = 1 - F(t) = \Pr(T > t), \qquad t \ge 0. \qquad (3)$$

We also assume that $F(0) = 0$, which implies $S(0) = 1$. The function $S(t)$ is called the survival function. It gives the probability that a security will attain age $t$. The distribution of $T_A$ can be defined by specifying either the distribution function $F(t)$ or the survival function $S(t)$. We can also define a probability density function as follows

$$f(t) = F'(t) = -S'(t) = \lim_{\Delta \to 0^+} \frac{\Pr[t \le T < t + \Delta]}{\Delta}.$$

To make probability statements about a security which has survived $x$ years, we consider its future lifetime $T - x$ given $T > x$. We introduce two more notations

$$ {}_tq_x = \Pr[T - x \le t \mid T > x], \qquad {}_tp_x = 1 - {}_tq_x = \Pr[T - x > t \mid T > x], \qquad t \ge 0. \qquad (4)$$

The symbol $_tq_x$ can be interpreted as the conditional probability that security A will default within the next $t$ years, conditional on its survival for $x$ years. In the special case of $x = 0$ we have $_tp_0 = S(t)$, $t \ge 0$.
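For later use it helps to note how these conditional quantities reduce to the survival function alone; the following short derivation is standard and follows directly from equations (2)-(4):

$$ {}_tq_x = \Pr[T - x \le t \mid T > x] = \frac{F(x+t) - F(x)}{1 - F(x)} = 1 - \frac{S(x+t)}{S(x)}, \qquad {}_tp_x = \frac{S(x+t)}{S(x)}. $$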
• 72. If $t = 1$, we use the actuarial convention to omit the prefix 1 in the symbols $_tq_x$ and $_tp_x$, and we have

$$p_x = \Pr[T - x > 1 \mid T > x], \qquad q_x = \Pr[T - x \le 1 \mid T > x].$$

The symbol $q_x$ is usually called the marginal default probability, which represents the probability of default in the next year conditional on survival until the beginning of that year. A credit curve is then simply defined as the sequence $q_0, q_1, \cdots, q_n$ in discrete models.

2.2 Hazard Rate Function

The distribution function $F(t)$ and the survival function $S(t)$ provide two mathematically equivalent ways of specifying the distribution of the random variable time-until-default, and there are many other equivalent functions. The one used most frequently by statisticians is the hazard rate function, which gives the instantaneous default probability for a security that has attained age $x$:

$$\Pr[x < T \le x + \Delta x \mid T > x] = \frac{F(x + \Delta x) - F(x)}{1 - F(x)} \approx \frac{f(x)\,\Delta x}{1 - F(x)}.$$

The function $f(x)/(1 - F(x))$ has a conditional probability density interpretation: it gives the value of the conditional probability density function of $T$ at exact age $x$, given survival to that time. Let us denote it by $h(x)$; it is usually called the hazard rate function. The relationship of the hazard rate function with the distribution function and survival function is as follows
• 73. 
$$h(x) = \frac{f(x)}{1 - F(x)} = -\frac{S'(x)}{S(x)}. \qquad (5)$$

Then, the survival function can be expressed in terms of the hazard rate function,

$$S(t) = e^{-\int_0^t h(s)\,ds}.$$

Now, we can express $_tq_x$ and $_tp_x$ in terms of the hazard rate function as follows

$$ {}_tp_x = e^{-\int_0^t h(s+x)\,ds}, \qquad {}_tq_x = 1 - e^{-\int_0^t h(s+x)\,ds}. \qquad (6)$$

In addition,

$$F(t) = 1 - S(t) = 1 - e^{-\int_0^t h(s)\,ds},$$

and

$$f(t) = S(t) \cdot h(t), \qquad (7)$$

which is the density function of $T$.

A typical assumption is that the hazard rate is a constant, $h$, over a certain period, such as $[x, x+1]$. In this case, the density function is

$$f(t) = h\,e^{-ht},$$
• 74. which shows that the survival time follows an exponential distribution with parameter $h$. Under this assumption, the survival probability over the time interval $[x, x+t]$ for $0 < t \le 1$ is

$$ {}_tp_x = 1 - {}_tq_x = e^{-\int_0^t h(s)\,ds} = e^{-ht} = (p_x)^t,$$

where $p_x$ is the probability of survival over a one-year period. This assumption can be used to scale down the default probability over one year to a default probability over a time interval shorter than one year.

Modelling a default process is equivalent to modelling a hazard rate function. There are a number of reasons why modelling the hazard rate function may be a good idea. First, it provides us with information on the immediate default risk of each entity known to be alive at exact age $t$. Second, comparisons of groups of individuals are most incisively made via the hazard rate function. Third, hazard rate based models can easily be adapted to more complicated situations, such as where there is censoring, where there are several types of default, or where we would like to consider stochastic default fluctuations. Fourth, there are many similarities between the hazard rate function and the short rate, so many modelling techniques for short rate processes can be readily borrowed to model the hazard rate.

Finally, we can define the joint survival function for two entities A and B based on their survival times $T_A$ and $T_B$,

$$S_{T_A T_B}(s, t) = \Pr[T_A > s, T_B > t].$$

The joint distribution function is

$$F(s, t) = \Pr[T_A \le s, T_B \le t] = 1 - S_{T_A}(s) - S_{T_B}(t) + S_{T_A T_B}(s, t).$$

The aforementioned concepts and results can be found in survival analysis books, such as Bowers et al. [1997] and Cox and Oakes [1984].
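These relationships are straightforward to put into code. The sketch below is illustrative only (the function name and yearly grid are choices of mine, not from the paper); it evaluates $S(t)$ under a piecewise-constant hazard and checks the $(p_x)^t$ scaling rule stated above.

```python
import numpy as np

def survival(t, hazard, grid):
    """S(t) = exp(-integral_0^t h(s) ds) for a piecewise-constant hazard;
    hazard[i] applies on the interval [grid[i], grid[i+1])."""
    t = np.atleast_1d(np.asarray(t, dtype=float))
    lo, hi = grid[:-1], grid[1:]
    exposure = np.clip(t[:, None], lo, hi) - lo   # time spent in each interval
    return np.exp(-(exposure * hazard).sum(axis=1))

# With a constant hazard h the survival time is exponential, and the
# one-year survival probability p_x scales as t_p_x = (p_x)**t for 0 < t <= 1.
h = 0.07
grid = np.arange(0.0, 11.0)            # yearly break points 0, 1, ..., 10
hz = np.full(10, h)
p_x = survival(1.0, hz, grid)[0]
assert np.isclose(survival(0.5, hz, grid)[0], p_x ** 0.5)
print(p_x, np.exp(-h))                 # both equal e^{-h}
```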
• 75. 3 Definition of Default Correlations

The default correlation of two entities A and B can then be defined with respect to their survival times $T_A$ and $T_B$ as follows

$$\rho_{AB} = \frac{\mathrm{Cov}(T_A, T_B)}{\sqrt{\mathrm{Var}(T_A)\,\mathrm{Var}(T_B)}} = \frac{E(T_A T_B) - E(T_A)E(T_B)}{\sqrt{\mathrm{Var}(T_A)\,\mathrm{Var}(T_B)}}. \qquad (8)$$

Hereafter we simply call this definition of default correlation the survival time correlation. The survival time correlation is a much more general concept than the discrete default correlation based on a single period. If we have the joint density $f(s, t)$ of the two survival times $T_A$ and $T_B$, we can calculate the discrete default correlation. For example, if we define the events $E_1 = [T_A < 1]$ and $E_2 = [T_B < 1]$, then the discrete default correlation can be calculated using equation (1) with the following quantities

$$q_{12} = \Pr[E_1 E_2] = \int_0^1 \!\! \int_0^1 f(s, t)\,ds\,dt, \qquad q_1 = \int_0^1 f_A(s)\,ds, \qquad q_2 = \int_0^1 f_B(t)\,dt.$$

However, knowing the discrete default correlation over a one-year period does not allow us to specify the survival time correlation.

4 The Construction of the Credit Curve

The distribution of survival time, or time-until-default, can be characterized by the distribution function, survival function or hazard rate function. It is shown in Section 2 that all default probabilities can be
• 76. calculated once one such characterization is given. The hazard rate function used to characterize the distribution of survival time can also be called a credit curve, due to its similarity to a yield curve. But the basic question is: how do we obtain the credit curve, or the distribution of survival time, for a given credit? There exist three methods to obtain the term structure of default rates: (i) obtaining historical default information from rating agencies; (ii) taking the Merton option-theoretic approach; (iii) taking the implied approach using market prices of defaultable bonds or asset swap spreads.

Rating agencies like Moody's publish historical default rate studies regularly. In addition to the commonly cited one-year default rates, they also present multi-year default rates. From these rates we can obtain the hazard rate function. For example, Moody's (see Carty and Lieberman [1997]) publishes weighted average cumulative default rates from 1 to 20 years. For the B rating, the first 5 years' cumulative default rates in percentage are 7.27, 13.87, 19.94, 25.03 and 29.45. From these rates we can obtain the marginal conditional default probabilities. The first marginal conditional default probability in year one is simply the one-year default probability, 7.27%. The other marginal conditional default probabilities can be obtained using the following formula:

$$ {}_{n+1}q_x = {}_nq_x + {}_np_x \cdot q_{x+n}, \qquad (9)$$

which simply states that the probability of default over the time interval $[0, n+1]$ is the sum of the probability of default over the time interval $[0, n]$ plus the probability of survival to the end of the $n$th year and default in the following year. Rearranging equation (9) gives the marginal conditional default probability

$$q_{x+n} = \frac{{}_{n+1}q_x - {}_nq_x}{1 - {}_nq_x},$$

which results in the marginal conditional default probabilities in years 2, 3, 4, 5 of 7.12%, 7.05%, 6.36% and 5.90%. If we assume a piecewise constant hazard rate function over each year, then we can obtain the hazard rate function using equation (6). The hazard rate function obtained is given in Figure (1).
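As a numerical check on the figures just quoted, here is a small sketch (my own illustration, not code from the paper) that applies the rearranged equation (9) to Moody's cumulative B-rating rates and then backs out piecewise-constant hazard rates via equation (6):

```python
import numpy as np

# Moody's weighted-average cumulative default rates for the B rating,
# years 1-5, in percent (Carty and Lieberman [1997], as quoted above).
cum = np.array([7.27, 13.87, 19.94, 25.03, 29.45]) / 100.0

# Equation (9) rearranged: q_{x+n} = (n+1_q_x - n_q_x) / (1 - n_q_x).
prev = np.concatenate(([0.0], cum[:-1]))
marginal = (cum - prev) / (1.0 - prev)
print(np.round(marginal * 100, 2))     # -> [7.27 7.12 7.05 6.36 5.9 ]

# Piecewise-constant hazard per year via equation (6): q_n = 1 - exp(-h_n).
hazard = -np.log(1.0 - marginal)
print(np.round(hazard, 4))
```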
• 77. Using diffusion processes to describe changes in the value of the firm, Merton [1974] demonstrated that a firm's default could be modeled with the Black and Scholes methodology. He showed that stock could be considered as a call option on the firm with strike price equal to the face value of a single payment debt. Using this framework we can obtain the default probability for the firm over one period, and translate this default probability into a hazard rate function. Geske [1977] and Delianedis and Geske [1998] extended Merton's analysis to produce a term structure of default probabilities. Using the relationship between the hazard rate and the default probabilities we can obtain a credit curve.

Alternatively, we can take the implicit approach, using market observable information such as asset swap spreads or risky corporate bond prices. This is the approach used by most credit derivative trading desks. The extracted default probabilities reflect the market-agreed perception today about the future default tendency of the underlying credit. Li [1998] presents one approach to building the credit curve from market information based on the Duffie and Singleton [1996] default treatment. In that paper the author assumes that there exists a series of bonds with maturities of 1, 2, ..., n years, which are issued by the same company and have the same seniority. All of these bonds have observable market prices. From the market prices of these bonds we can calculate their yields to maturity. Using the yields to maturity of corresponding treasury bonds we obtain a yield spread curve over treasury (or asset swap spreads for a yield spread curve over LIBOR). The credit curve construction is based on this yield spread curve and an exogenous assumption about the recovery rate, based on the seniority and rating of the bonds and the industry of the corporation.

The suggested approach stands in contrast to the use of historical default experience provided by rating agencies such as Moody's. We intend to use market information rather than historical information for the following reasons:

• The calculation of profit and loss for a trading desk can only be based on current market information. This current market information reflects the market-agreed perception about the evolution of the market in the future, on which the actual profit and loss depend. The default rate derived from current market information may be much different from historical default rates.

• Rating agencies use classification variables in the hope that homogeneous risks will be obtained
• 78. after classification. This technique has been used elsewhere, for example in pricing automobile insurance. Unfortunately, classification techniques often omit firm-specific information. Constructing a credit curve for each credit allows us to use more firm-specific information.

• Rating agencies react much more slowly than the market in anticipating future credit quality. A typical example is the rating agencies' reaction to the recent Asian crisis.

• Ratings are primarily used to assess default frequency rather than default severity. However, much of a credit derivative's value depends on both default frequency and severity.

• The information available from a rating agency is usually the one-year default probability for each rating group and the rating migration matrix. Neither the transition matrices nor the default probabilities are necessarily stable over long periods of time. In addition, many credit derivative products have maturities well beyond one year, which requires the use of long-term marginal default probabilities.

It is shown under the Duffie and Singleton approach that a defaultable instrument can be valued as if it were a default-free instrument, by discounting the defaultable cash flow at a credit risk adjusted discount factor. The credit risk adjusted discount factor, or total discount factor, is the product of the risk-free discount factor and the pure credit discount factor, if the underlying factors affecting default and those affecting the interest rate are independent. Under this framework, and the assumption of a piecewise constant hazard rate function, we can derive a credit curve, or specify the distribution of the survival time.

5 Dependent Models - Copula Functions

Let us study some problems for a portfolio of n credits. Using either the historical approach or the market implicit approach, we can construct the marginal distribution of survival time for each of the credit risks in the portfolio. If we assume mutual independence among the credit risks, we can study any problem associated with the portfolio. However, the independence assumption is obviously not realistic; in reality, the default rate for a group of credits tends to be higher in a recession and lower when the economy
• 79. is booming. This implies that each credit is subject to the same macroeconomic environment, and that there exists some form of positive dependence among the credits. To introduce a correlation structure into the portfolio, we must determine how to specify a joint distribution of survival times with given marginal distributions. Obviously, this problem has no unique solution. Generally speaking, knowing the joint distribution of random variables allows us to derive the marginal distributions and the correlation structure among the random variables, but not vice versa. There are many different techniques in statistics which allow us to specify a joint distribution function with given marginal distributions and a correlation structure. Among them, the copula function approach is simple and convenient. We give a brief introduction to the concept of a copula function in the next section.

5.1 Definition and Basic Properties of Copula Function

A copula function is a function that links, or marries, univariate marginals to their full multivariate distribution. For $m$ uniform random variables $U_1, U_2, \cdots, U_m$, the joint distribution function $C$, defined as

$$C(u_1, u_2, \cdots, u_m, \rho) = \Pr[U_1 \le u_1, U_2 \le u_2, \cdots, U_m \le u_m],$$

can also be called a copula function. Copula functions can be used to link marginal distributions with a joint distribution. For given univariate marginal distribution functions $F_1(x_1), F_2(x_2), \cdots, F_m(x_m)$, the function

$$C(F_1(x_1), F_2(x_2), \cdots, F_m(x_m)) = F(x_1, x_2, \cdots, x_m),$$

which is defined using a copula function $C$, results in a multivariate distribution function with univariate marginal distributions as specified, $F_1(x_1), F_2(x_2), \cdots, F_m(x_m)$. This property can easily be shown as follows:
• 80. 
$$C(F_1(x_1), F_2(x_2), \cdots, F_m(x_m), \rho) = \Pr[U_1 \le F_1(x_1), U_2 \le F_2(x_2), \cdots, U_m \le F_m(x_m)]$$
$$= \Pr[F_1^{-1}(U_1) \le x_1, F_2^{-1}(U_2) \le x_2, \cdots, F_m^{-1}(U_m) \le x_m]$$
$$= \Pr[X_1 \le x_1, X_2 \le x_2, \cdots, X_m \le x_m] = F(x_1, x_2, \cdots, x_m).$$

The marginal distribution of $X_i$ is

$$C(F_1(+\infty), F_2(+\infty), \cdots, F_i(x_i), \cdots, F_m(+\infty), \rho) = \Pr[X_1 \le +\infty, \cdots, X_i \le x_i, \cdots, X_m \le +\infty] = \Pr[X_i \le x_i] = F_i(x_i).$$

Sklar [1959] established the converse. He showed that any multivariate distribution function $F$ can be written in the form of a copula function. He proved the following: if $F(x_1, x_2, \cdots, x_m)$ is a joint multivariate distribution function with univariate marginal distribution functions $F_1(x_1), F_2(x_2), \cdots, F_m(x_m)$, then there exists a copula function $C(u_1, u_2, \cdots, u_m)$ such that

$$F(x_1, x_2, \cdots, x_m) = C(F_1(x_1), F_2(x_2), \cdots, F_m(x_m)).$$

If each $F_i$ is continuous then $C$ is unique. Thus, copula functions provide a unifying and flexible way to study multivariate distributions.

For simplicity's sake, we discuss only the properties of bivariate copula functions $C(u, v, \rho)$ for uniform random variables $U$ and $V$, defined over the area $\{(u, v) \mid 0 < u \le 1,\ 0 < v \le 1\}$, where $\rho$ is a correlation parameter. We call $\rho$ simply a correlation parameter since it does not necessarily equal the usual correlation coefficient defined by Pearson, nor Spearman's Rho, nor Kendall's Tau. The bivariate copula function has the following properties:

(i) Since $U$ and $V$ are positive random variables, $C(0, v, \rho) = C(u, 0, \rho) = 0$.
• 81. (ii) Since $U$ and $V$ are bounded above by 1, the marginal distributions can be obtained by $C(1, v, \rho) = v$ and $C(u, 1, \rho) = u$.

(iii) For independent random variables $U$ and $V$, $C(u, v, \rho) = uv$.

Frechet [1951] showed that there exist upper and lower bounds for a copula function,

$$\max(0,\ u + v - 1) \le C(u, v) \le \min(u, v).$$

The multivariate extension of the Frechet bounds is given by Dall'Aglio [1972].

5.2 Some Common Copula Functions

We present a few copula functions commonly used in biostatistics and actuarial science.

Frank Copula. The Frank copula function is defined as

$$C(u, v) = \frac{1}{\alpha} \ln\left[1 + \frac{(e^{\alpha u} - 1)(e^{\alpha v} - 1)}{e^{\alpha} - 1}\right], \qquad -\infty < \alpha < \infty.$$

Bivariate Normal.

$$C(u, v) = \Phi_2\left(\Phi^{-1}(u), \Phi^{-1}(v), \rho\right), \qquad -1 \le \rho \le 1, \qquad (10)$$

where $\Phi_2$ is the bivariate normal distribution function with correlation coefficient $\rho$, and $\Phi^{-1}$ is the inverse of a univariate normal distribution function. As we shall see later, this is the copula function used in CreditMetrics.

Bivariate Mixture Copula Function. We can form a new copula function using existing copula functions. If the two uniform random variables $u$ and $v$ are independent, we have the copula function $C(u, v) = uv$. If the two random variables are perfectly correlated, we have the copula function $C(u, v) = \min(u, v)$. Mixing the two copula functions with a mixing coefficient $\rho > 0$, we obtain a new copula function as follows
• 82. 
$$C(u, v) = (1 - \rho)\,uv + \rho \min(u, v), \qquad \text{if } \rho > 0.$$

If $\rho \le 0$ we have

$$C(u, v) = (1 + \rho)\,uv - \rho\,(u - 1 + v)\,\theta(u - 1 + v), \qquad \text{if } \rho \le 0,$$

where

$$\theta(x) = \begin{cases} 1, & \text{if } x \ge 0, \\ 0, & \text{if } x < 0. \end{cases}$$

5.3 Copula Function and Correlation Measurement

To compare different copula functions, we need a correlation measurement independent of the marginal distributions. The usual Pearson's correlation coefficient, however, depends on the marginal distributions (see Lehmann [1966]). Both Spearman's Rho and Kendall's Tau can be defined using a copula function only, as follows

$$\rho_s = 12 \iint [C(u, v) - uv]\,du\,dv, \qquad \tau = 4 \iint C(u, v)\,dC(u, v) - 1.$$

Comparisons between results using different copula functions should be based on either a common Spearman's Rho or a common Kendall's Tau. Further examination of copula functions can be found in a survey paper by Frees and Valdez [1998] and a recent book by Nelsen [1999].
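To make these definitions concrete, the sketch below (illustrative only; the function names are mine, and SciPy's multivariate normal CDF stands in for $\Phi_2$) evaluates the Frank copula, the bivariate normal copula (10), and the positive-$\rho$ mixture copula at a sample point, and spot-checks boundary property (ii):

```python
import numpy as np
from scipy.stats import norm, multivariate_normal

def frank_copula(u, v, alpha):
    """Frank copula; alpha != 0."""
    num = (np.exp(alpha * u) - 1.0) * (np.exp(alpha * v) - 1.0)
    return np.log(1.0 + num / (np.exp(alpha) - 1.0)) / alpha

def gaussian_copula(u, v, rho):
    """Bivariate normal copula, equation (10)."""
    biv = multivariate_normal(mean=[0.0, 0.0], cov=[[1.0, rho], [rho, 1.0]])
    return biv.cdf([norm.ppf(u), norm.ppf(v)])

def mixture_copula(u, v, rho):
    """Bivariate mixture copula for rho > 0."""
    return (1.0 - rho) * u * v + rho * min(u, v)

u, v = 0.3, 0.7
print(frank_copula(u, v, -5.0))
print(gaussian_copula(u, v, 0.5))
print(mixture_copula(u, v, 0.5))

# Boundary property (ii): C(u, 1) = u, up to numerical integration tolerance.
assert abs(gaussian_copula(u, 1.0 - 1e-12, 0.5) - u) < 1e-4
```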
• 83. 5.4 The Calibration of Default Correlation in Copula Function

Having chosen a copula function, we need to compute the pairwise correlation of survival times. Using the CreditMetrics (Gupton et al. [1997]) asset correlation approach, we can obtain the default correlation of two discrete events over a one-year period. As it happens, CreditMetrics uses the normal copula function in its default correlation formula, even though it does not use the concept of copula function explicitly.

First let us summarize how CreditMetrics calculates the joint default probability of two credits A and B. Suppose the one-year default probabilities for A and B are $q_A$ and $q_B$. CreditMetrics uses the following steps:

• Obtain $Z_A$ and $Z_B$ such that

$$q_A = \Pr[Z < Z_A], \qquad q_B = \Pr[Z < Z_B],$$

where $Z$ is a standard normal random variable.

• If $\rho$ is the asset correlation, the joint default probability for credits A and B is calculated as follows,

$$\Pr[Z < Z_A,\ Z < Z_B] = \int_{-\infty}^{Z_A} \!\! \int_{-\infty}^{Z_B} \phi_2(x, y \mid \rho)\,dx\,dy = \Phi_2(Z_A, Z_B, \rho), \qquad (11)$$

where $\phi_2(x, y \mid \rho)$ is the standard bivariate normal density function with correlation coefficient $\rho$, and $\Phi_2$ is the bivariate cumulative normal distribution function.

If we use a bivariate normal copula function with a correlation parameter $\gamma$, and denote the survival times for A and B as $T_A$ and $T_B$, the joint default probability can be calculated as follows

$$\Pr[T_A < 1,\ T_B < 1] = \Phi_2\left(\Phi^{-1}(F_A(1)), \Phi^{-1}(F_B(1)), \gamma\right), \qquad (12)$$

where $F_A$ and $F_B$ are the distribution functions for the survival times $T_A$ and $T_B$. If we notice that
• 84. 
$$q_i = \Pr[T_i < 1] = F_i(1) \quad \text{and} \quad Z_i = \Phi^{-1}(q_i) \quad \text{for } i = A, B,$$

then we see that equation (12) and equation (11) give the same joint default probability over a one-year period if $\rho = \gamma$. We can conclude that CreditMetrics uses a bivariate normal copula function with the asset correlation as the correlation parameter in the copula function. Thus, to generate survival times of two credit risks, we use a bivariate normal copula function with correlation parameter equal to the CreditMetrics asset correlation. We note that this correlation parameter is not the correlation coefficient between the two survival times; the correlation coefficient between the survival times is much smaller than the asset correlation. Conveniently, the marginal distribution of any subset of an $n$-dimensional normal distribution is still a normal distribution. Using asset correlations, we can construct high dimensional normal copula functions to model a credit portfolio of any size.
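The equivalence of equations (11) and (12) is easy to verify numerically. In the sketch below the inputs $q_A$, $q_B$ and the asset correlation are hypothetical values of mine, not taken from the paper:

```python
import numpy as np
from scipy.stats import norm, multivariate_normal

# Hypothetical inputs: one-year default probabilities for credits A and B,
# and their CreditMetrics asset correlation.
qA, qB, rho = 0.0727, 0.0500, 0.30

biv = multivariate_normal(mean=[0.0, 0.0], cov=[[1.0, rho], [rho, 1.0]])

# Equation (11): CreditMetrics thresholds, then the bivariate normal CDF.
ZA, ZB = norm.ppf(qA), norm.ppf(qB)
joint_cm = biv.cdf([ZA, ZB])

# Equation (12): normal copula with gamma = rho applied to F_A(1) = qA, F_B(1) = qB.
joint_copula = biv.cdf([norm.ppf(qA), norm.ppf(qB)])

assert np.isclose(joint_cm, joint_copula)   # the two routes coincide when rho = gamma

# Implied discrete default correlation via equation (1).
rho_discrete = (joint_cm - qA * qB) / np.sqrt(qA * (1 - qA) * qB * (1 - qB))
print(round(float(joint_cm), 5), round(float(rho_discrete), 4))
```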
• 85. 6 Numerical Illustrations

This section gives some numerical examples to illustrate many of the points discussed above. Assume that we have two credit risks, A and B, which have flat spread curves of 300 bps and 500 bps over LIBOR. These spreads are usually given in the market as asset swap spreads. Using these spreads and a constant recovery assumption of 50% we build two credit curves for the two credit risks. For details, see Li [1998]. The two credit curves are given in Figures (2) and (3), and will be used in the following numerical illustrations.

6.1 Illustration 1. Default Correlation v.s. Length of Time Period

In this example, we study the relationship between the discrete default correlation (1) and the survival time correlation (8). The survival time correlation is a much more general concept than the discrete default correlation, which is defined for two discrete default events over an arbitrary period of time, such as one year. Knowing the former allows us to calculate the latter over any time interval in the future, but not vice versa.

Using the two credit curves we can calculate all marginal default probabilities up to any time $t$ in the future, i.e.

$$ {}_tq_0 = \Pr[T \le t] = 1 - e^{-\int_0^t h(s)\,ds},$$

where $h(s)$ is the instantaneous default probability given by a credit curve. If we have the marginal default probabilities $_tq_0^A$ and $_tq_0^B$ for both A and B, we can also obtain the joint probability of default over the time interval $[0, t]$ from a copula function $C(u, v)$,

$$\Pr[T_A < t,\ T_B < t] = C({}_tq_0^A,\ {}_tq_0^B).$$

Of course, we need to specify a correlation parameter $\rho$ in the copula function; knowing $\rho$ allows us to calculate the survival time correlation between $T_A$ and $T_B$. We can then obtain the discrete default correlation coefficient $\rho_t$ between the two discrete events that A and B default over the time interval $[0, t]$, based on formula (1). Intuitively, the discrete default correlation $\rho_t$ should be an increasing function of $t$, since the two underlying credits should have a higher tendency of joint default over longer periods. Using the bivariate normal copula function (10) and $\rho = 0.1$ as an example, we obtain Figure (4). From this graph we see explicitly that the discrete default correlation over the time interval $[0, t]$ is a function of $t$. For example, the default correlation coefficient goes from 0.021 to 0.038 when $t$ goes from six months to twelve months. The increase slows down as $t$ becomes large.
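A sketch of the computation behind Figure (4). The paper's actual credit curves are not flat, so as a stand-in I assume flat hazards of roughly spread/(1 - recovery), i.e. 0.06 and 0.10 for the 300 bps and 500 bps credits at 50% recovery; the resulting numbers will therefore not match the figure exactly, but the qualitative increase of $\rho_t$ with $t$ is the same:

```python
import numpy as np
from scipy.stats import norm, multivariate_normal

# Stand-in flat hazards for the two credit curves (assumption, not from the paper).
hA, hB, rho = 0.06, 0.10, 0.1
biv = multivariate_normal(mean=[0.0, 0.0], cov=[[1.0, rho], [rho, 1.0]])

def discrete_default_corr(t):
    """Discrete default correlation over [0, t]: normal copula (10) into formula (1)."""
    qA, qB = 1.0 - np.exp(-hA * t), 1.0 - np.exp(-hB * t)
    qAB = biv.cdf([norm.ppf(qA), norm.ppf(qB)])
    return (qAB - qA * qB) / np.sqrt(qA * (1 - qA) * qB * (1 - qB))

for t in (0.5, 1.0, 2.0, 5.0, 10.0):
    print(t, round(float(discrete_default_corr(t)), 4))
```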
• 86. 6.2 Illustration 2. Default Correlation and Credit Swap Valuation

The second example shows the impact of default correlation on credit swap pricing. Suppose that credit A is the credit swap seller and credit B is the underlying reference asset. If we buy a 3-year default swap on reference asset B from a risk-free counterparty, we should pay 500 bps, since holding the underlying asset and a long position in the credit swap would create a riskless portfolio. But if we buy the default swap from a risky counterparty, how much we should pay depends on the credit quality of the counterparty and the default correlation between the underlying reference asset and the counterparty. Knowing only the discrete default correlation over one year, we cannot value any credit swap with a maturity longer than one year.

Figure (5) shows the impact of asset correlation (or, implicitly, default correlation) on the credit swap premium. From the graph we see that the annualized premium decreases as the asset correlation between the counterparty and the underlying reference asset increases. Even at zero default correlation the credit swap is worth less than 500 bps, since the counterparty is risky.

6.3 Illustration 3. Default Correlation and First-to-Default Valuation

The third example shows how to value a first-to-default contract. We assume we have a portfolio of $n$ credits. Let us assume that for each credit $i$ in the portfolio we have constructed a credit curve, or a hazard rate function, for its survival time $T_i$. The distribution function of $T_i$ is $F_i(t)$. Using a copula function $C$ we also obtain the joint distribution of the survival times as follows

$$F(t_1, t_2, \cdots, t_n) = C(F_1(t_1), F_2(t_2), \cdots, F_n(t_n)).$$

If we use the normal copula function we have

$$F(t_1, t_2, \cdots, t_n) = \Phi_n\left(\Phi^{-1}(F_1(t_1)), \Phi^{-1}(F_2(t_2)), \cdots, \Phi^{-1}(F_n(t_n))\right),$$

where $\Phi_n$ is the $n$-dimensional normal cumulative distribution function with correlation coefficient matrix $\Sigma$. To simulate correlated survival times we introduce another series of random variables $Y_1, Y_2, \cdots, Y_n$, such that

$$Y_1 = \Phi^{-1}(F_1(T_1)), \quad Y_2 = \Phi^{-1}(F_2(T_2)), \quad \cdots, \quad Y_n = \Phi^{-1}(F_n(T_n)). \qquad (13)$$
• 87. Then there is a one-to-one mapping between $Y$ and $T$. Simulating $\{T_i \mid i = 1, 2, \ldots, n\}$ is equivalent to simulating $\{Y_i \mid i = 1, 2, \ldots, n\}$. As shown in the previous section, the correlation between the $Y$s is the asset correlation of the underlying credits. Using CreditManager from the RiskMetrics Group we can obtain the asset correlation matrix $\Sigma$. We have the following simulation scheme:

• Simulate $Y_1, Y_2, \cdots, Y_n$ from an $n$-dimensional normal distribution with correlation coefficient matrix $\Sigma$.

• Obtain $T_1, T_2, \cdots, T_n$ using $T_i = F_i^{-1}(\Phi(Y_i))$, $i = 1, 2, \cdots, n$.

With each simulation run we generate the survival times for all the credits in the portfolio. With this information we can value any credit derivative structure written on the portfolio.

We use a simple structure for illustration. The contract is a two-year transaction which pays one dollar if the first default occurs during the first two years. We assume each credit has a constant hazard rate of $h = 0.1$ for $0 < t < +\infty$. From equation (7) we know that the density function of each survival time is $f(t) = h\,e^{-ht}$, which shows that the survival time is exponentially distributed with mean $1/h$. We also assume that every pair of credits in the portfolio has a constant asset correlation $\sigma$.² Suppose we have a constant interest rate $r = 0.1$. If all the credits in the portfolio are independent, the hazard rate of the minimum survival time $T = \min(T_1, T_2, \cdots, T_n)$ is easily shown to be $h_T = h_1 + h_2 + \cdots + h_n = nh$. If $T \le 2$, the present value of the contract is $1 \cdot e^{-rT}$. The survival time of the first-to-default has density function $f(t) = h_T \cdot e^{-h_T t}$, so the value of the contract is given by

² To have a positive definite correlation matrix, the constant correlation coefficient has to satisfy the condition $\sigma > -\frac{1}{n-1}$.
• 88. 
$$V = \int_0^2 1 \cdot e^{-rt} f(t)\,dt = \int_0^2 1 \cdot e^{-rt}\,h_T\,e^{-h_T t}\,dt = \frac{h_T}{r + h_T}\left(1 - e^{-2.0\,(r + h_T)}\right). \qquad (14)$$

In the general case we use the Monte Carlo simulation approach and the normal copula function to obtain the distribution of $T$. For each simulation run we have one scenario of default times $t_1, t_2, \cdots, t_n$, from which the first-to-default time is simply $t = \min(t_1, t_2, \cdots, t_n)$.

Let us examine the impact of the asset correlation on the value of a first-to-default contract on 5 assets. If $\sigma = 0$, equation (14) gives a value of 0.5823; our simulation of 50,000 runs gives a value of 0.5830. If all 5 assets are perfectly correlated, then the first-to-default of 5 assets should be the same as the first-to-default of 1 asset, since any one default induces all others to default. In this case the contract should be worth 0.1648; our simulation of 50,000 runs produces a result of 0.1638. Figure (6) shows the relationship between the value of the contract and the constant asset correlation coefficient. We see that the value of the contract decreases as the correlation increases. We also examine the impact of correlation on the value of the first-to-default of 20 assets in Figure (6). As expected, the first-to-default of 5 assets has the same value as the first-to-default of 20 assets as the asset correlation approaches 1.
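Below is a sketch of this simulation scheme (my own code, with an arbitrary seed and an illustrative function name). With $\sigma = 0$ the Monte Carlo estimate should land near the closed-form value 0.5823 implied by equation (14); raising the common asset correlation lowers the value, consistent with Figure (6):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def first_to_default_value(n, h, sigma, r=0.1, horizon=2.0, paths=50_000):
    """Monte Carlo value of a contract paying 1 at the first default,
    if it occurs before `horizon`, via the normal copula scheme above."""
    corr = np.full((n, n), sigma)
    np.fill_diagonal(corr, 1.0)
    Y = rng.multivariate_normal(np.zeros(n), corr, size=paths)
    T = -np.log(1.0 - norm.cdf(Y)) / h    # T_i = F_i^{-1}(Phi(Y_i)), exponential marginals
    t_first = T.min(axis=1)
    return np.where(t_first <= horizon, np.exp(-r * t_first), 0.0).mean()

n, h, r = 5, 0.1, 0.1
hT = n * h
print(hT / (r + hT) * (1.0 - np.exp(-2.0 * (r + hT))))   # closed form (14): ~0.5823
print(first_to_default_value(n, h, sigma=0.0))            # MC, independent case
print(first_to_default_value(n, h, sigma=0.95))           # value drops as correlation rises
```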
• 89. 7 Conclusion

This paper introduces a few standard techniques used in survival analysis to study the problem of default correlation. We first introduce a random variable called "time-until-default" to characterize default. Then the default correlation between two credit risks is defined as the correlation coefficient between their survival times. In practice we usually use market spread information to derive the distribution of survival times. When it comes to credit portfolio studies we need to specify a joint distribution with given marginal distributions. This problem cannot be solved uniquely. The copula function approach provides one way of specifying a joint distribution with known marginals. The concept of copula functions, their basic properties and some commonly used copula functions are introduced. The calibration of the correlation parameter used in copula functions against some popular credit models is also studied. We have shown that CreditMetrics essentially uses the normal copula function in its default correlation formula, even though CreditMetrics does not use the concept of copula functions explicitly. Finally, we show some numerical examples to illustrate the use of copula functions in the valuation of credit derivatives, such as credit default swaps and first-to-default contracts.

References

[1] Bowers, N. L., Jr., Gerber, H. U., Hickman, J. C., Jones, D. A., and Nesbitt, C. J., Actuarial Mathematics, 2nd Edition, Schaumburg, Illinois: Society of Actuaries, (1997).

[2] Carty, L. and Lieberman, D., Historical Default Rates of Corporate Bond Issuers, 1920-1996, Moody's Investors Service, January (1997).

[3] Cox, D. R. and Oakes, D., Analysis of Survival Data, Chapman and Hall, (1984).

[4] Dall'Aglio, G., Frechet Classes and Compatibility of Distribution Functions, Symp. Math., 9, (1972), pp. 131-150.

[5] Delianedis, G. and Geske, R., Credit Risk and Risk Neutral Default Probabilities: Information about Rating Migrations and Defaults, Working paper, The Anderson School at UCLA, (1998).

[6] Duffie, D. and Singleton, K., Modeling Term Structure of Defaultable Bonds, Working paper, Graduate School of Business, Stanford University, (1997).

[7] Frechet, M., Sur les Tableaux de Correlation dont les Marges sont Donnees, Ann. Univ. Lyon, Sect. A 9, (1951), pp. 53-77.

[8] Frees, E. W. and Valdez, E., Understanding Relationships Using Copulas, North American Actuarial Journal, Vol. 2, No. 1, (1998), pp. 1-25.
• 90. [9] Gupton, G. M., Finger, C. C., and Bhatia, M., CreditMetrics – Technical Document, New York: Morgan Guaranty Trust Co., (1997).

[10] Lehmann, E. L., Some Concepts of Dependence, Annals of Mathematical Statistics, 37, (1966), pp. 1137-1153.

[11] Li, D. X., Constructing a Credit Curve, Credit Risk: A RISK Special Report, (November 1998), pp. 40-44.

[12] Litterman, R. and Iben, T., Corporate Bond Valuation and the Term Structure of Credit Spreads, Financial Analysts Journal, (1991), pp. 52-64.

[13] Lucas, D., Default Correlation and Credit Analysis, Journal of Fixed Income, Vol. 11, (March 1995), pp. 76-87.

[14] Merton, R. C., On the Pricing of Corporate Debt: The Risk Structure of Interest Rates, Journal of Finance, 29, (1974), pp. 449-470.

[15] Nelsen, R., An Introduction to Copulas, Springer-Verlag New York, Inc., (1999).

[16] Sklar, A., Random Variables, Joint Distribution Functions and Copulas, Kybernetika 9, (1973), pp. 449-460.
• 91. [Figure 1: Hazard Rate Function of B Grade Based on Moody's Study (1997). Hazard rate (roughly 0.060-0.075) plotted against years 1-6.]
• 92. [Figure 2: Credit Curve A: Instantaneous Default Probability. Hazard rate (0.045-0.075) plotted against dates 09/10/1998-09/10/2010; spread = 300 bps, recovery rate = 50%.]
• 93. [Figure 3: Credit Curve B: Instantaneous Default Probability. Hazard rate (0.08-0.12) plotted against dates 09/10/1998-09/10/2010; spread = 500 bps, recovery rate = 50%.]
• 94. [Figure 4: The Discrete Default Correlation v.s. the Length of Time Interval. Discrete default correlation (0.15-0.30) plotted against the length of the period (1-9 years).]
• 95. [Figure 5: Impact of Asset Correlation on the Value of Credit Swap. Default swap premium (400-500 bps) plotted against asset correlation (-1.0 to 1.0).]
• 96. [Figure 6: The Value of First-to-Default v.s. Asset Correlation. First-to-default premium (0.2-0.8) plotted against asset correlation (0.1-1.0), for 5-asset and 20-asset contracts.]