Pricing CDOs using the Intensity Gamma Approach

Christelle Ho Hio Hen, Aaron Ipsa, Aloke Mukherjee, Dharmanshu Shah

Computing Methods
December 19, 2006
Table of Contents

Introduction
    Advantages of Intensity Gamma Approach
    Default Correlation in Intensity Gamma
    Implementation Overview
Survival Curve
Construction of the Business Time Path
Calculating IG forward intensities (ci)
Getting the Default Times from the Business Time Paths
CDO pricer
Validation
    Roundtrip Test
    A Fast Analytic Price Approximation in the Intensity-Gamma Framework
Calibration
Future work
References
Introduction

It has become fashionable to attack David Li's copula model [Li 2000] as an inadequate tool for pricing CDO tranches and, more generally, any security exposed to default correlation. This criticism belies the importance of the model: despite its inadequacies, it has framed and motivated much of the current research.

Mark Joshi and Alan Stacey's Intensity Gamma model [Joshi 2005] attempts to address some of the issues of the standard Gaussian copula model by combining two important concepts from past models in both option pricing and credit modeling: conditional independence and business time. Conditional independence refers to the fact that events are independent conditional on some random variable; in copula models, default times are independent conditional on a systematic factor. Business time refers to the idea that changes in economic variables are driven by the arrival of information and not simply by the passage of time. Both of these ideas underlie the Variance Gamma model [Madan 1998], to which the authors pay homage in the naming of their model.

In a nutshell: "Conditional on the information arrival or business time process, (It), defaults of different names are independent."

Advantages of Intensity Gamma Approach

1) The market does not believe in the Gaussian copula: Different tranches of a CDO have different base correlations; we observe base correlation increasing with the detachment point. The market therefore implies that there is no single constant correlation among the names in the CDO, since this number varies from tranche to tranche. The Intensity Gamma approach does not need a base correlation to price a CDO.

2) Pricing non-standard tranches in CDOs: If the Gaussian copula is used to price a non-standard tranche such as 0-35%, we must extrapolate from the market-observed base correlation curve of standard tranches. This can price a 0-34% tranche with a lower spread than the 0-35% tranche.
The Intensity Gamma approach does not suffer from this.

3) Pricing exotic credit derivatives: Portfolio credit derivatives with complicated payoffs can be priced easily with Intensity Gamma. The Gaussian copula needs a base correlation to generate default times, and hence is ill-suited to exotic credit derivatives.

4) Time homogeneity: The Intensity Gamma approach is not tied to a particular time frame. Once the method is calibrated, it can be used to price portfolio credit derivatives of any maturity. In contrast, the Gaussian copula approach depends on the time frame in question: the base correlation for pricing a 3-year CDO differs from that for a 5-year CDO, making it more awkward to work with than the Intensity Gamma approach. There is a caveat, however: although the Intensity Gamma approach implies a lower base correlation with
increasing maturities, which is desirable, the decrease is more pronounced than that observed in the market. Hence it is not advisable to price products across different maturities using the Intensity Gamma approach.

Default Correlation in Intensity Gamma

How does default correlation arise in such a model? Imagine a string marked at uniformly distributed intervals, each mark representing a default. On average each segment has about the same number of defaults. This represents the concept of conditional independence: for a given business time path, events occur independently. Now imagine the string is kinked at various points. Although the marks remain equally spaced along the string, when we look at the string edgewise we see that the marks (defaults) have moved closer together. This is illustrated in the diagram below.

[Figure: a straight string of uniformly spaced defaults in business time versus the same string kinked in calendar time, showing defaults clustering.]

The economic intuition is that information arrives in bursts. In periods of high information arrival we can expect more events to occur than when no information has arrived. The amount of business time which arrives between defaults is uniform but the amount of calendar time is not: we clearly see periods of clustered defaults.

Implementation Overview

Implementation of the IG model consists of the following steps:

1. Survival curve construction
2. Calculating IG forward default intensities
3. Simulation of business time paths
4. Calculation of default times
5. Pricing CDO tranches
6. Calibration

Calibrating the model requires iterating steps 2 through 5 with different model parameters in order to match model prices to market prices. The relationship between the blocks is depicted schematically below:

[Schematic: CDS spreads (6mo, 1y, 2y, ..., 5y for names 1-125) feed survival curve construction and then the IG default intensities; a parameter guess drives the business time path generator, default time calculator and tranche pricer; the objective function compares model tranche spreads (0-3%, 3-6%, 6-9%, ...) against market tranche quotes, and the parameter guess is updated until the error is below tolerance, yielding the final parameters.]

The implementation was validated using the following tests:

1. Round trip test: verify CDS spreads are recovered
2. Homogeneous portfolio approximation

Survival Curve

a) Intuition
A CDO is composed of a collection of CDSs, and each CDS is a contract based on the chance that a firm survives to the maturity of the CDS. A survival curve is essentially a mapping from time to the probability that an entity survives to that time. It is essential to the pricing of a CDO since it describes how likely a firm is to default.

Default is commonly modeled as a Cox process, which is a doubly stochastic Poisson process: λ, the forward intensity of the Poisson process, is not constant in time. While for a Cox process λ is stochastic, in our case we treat λ as deterministic and piecewise constant in time. The probability that a firm survives to time T is given by

Pr(τ > T) = exp(−∫_0^T λ(u) du)

Thus the forward intensity curve λ(t) carries the same information as the survival probability X(0,T): both describe the chances of a firm surviving through time. Hence, in practice, the λ(t) curve is referred to as the survival curve, and it is integral to the pricing of CDOs, CDSs and other credit products.

[Figure: survival curve in terms of λ, a piecewise-constant intensity between roughly 0.001 and 0.006 plotted against time from 0 to 6 years.]

b) CDS Pricing

A CDS is a contract in which the "protection buyer" pays a running coupon c to the "protection seller" in return for protection against default of some firm. If default of the specified firm occurs before the final CDS maturity, the protection seller pays the protection buyer 1−R at the time of default, where R is the observed recovery rate. At the time of
default, coupon payments cease. For a CDS to be fairly priced, the present values of the fixed coupon leg and the floating contingent payment at default must be equal.

The present value of the protection buyer's coupon payment stream, on a notional of $1, is

PVcoup(0) = c · Σ_{i=1}^N δ_i B(0,T_i)

where B(0,T_i) is the price of a risky bond. A risky bond can be priced by multiplying a risk-free bond by the probability that the bond survives to maturity:

B(0,T_i) = P(0,T_i) · X(0,T_i)

(using notation and terminology as in Prof. Leif Andersen's lecture notes at NYU). The present value of the protection seller's payment is

PVprot(0) = C(0; 1−R)

where C(0; 1−R) represents the present value of the contingent payment of 1−R at the time of default. Hence the present value of the CDS can be expressed as the difference of the two legs:

PVCDS(0) = C(0; 1−R) − c · Σ_{i=1}^N δ_i B(0,T_i)

Setting the present value to zero, we obtain the fair (par) spread on the CDS:

Cpar = C(0; 1−R) / Σ_{i=1}^N δ_i B(0,T_i)

Calculation of the Present Value of the Contingent Claim C(0; 1−R)

At the time of default, the protection seller pays the protection buyer 1−R. Our goal is to find the present value of this contingent payment. The present value is the discounted value of 1−R weighted by the probability of default; since we do not know the exact time of default, we work with expectations:

C(0; 1−R) = E[(1−R) · exp(−∫_0^τ r(u) du) · 1_{τ<T}]

where 1_{τ<T} is 1 if τ < T and 0 otherwise. First consider the case where default occurs in an infinitesimal interval [z, z+dz]:

C(0; 1−R) = E[(1−R) · exp(−∫_0^z r(u) du) · 1_{τ∈[z,z+dz]}]
C(0; 1−R) = E[(1−R) · λ(z) · exp(−∫_0^z (r(u) + λ(u)) du) · dz]
Adding up over the entire interval [0,T], we obtain the general form

C(0; 1−R) = E[∫_0^T (1−R) · λ(z) · exp(−∫_0^z (r(u) + λ(u)) du) dz]

But X(0,t) = exp(−∫_0^t λ(u) du), hence

dX/dt = −λ(t) · exp(−∫_0^t λ(u) du)

Substituting into the equation for C(0; 1−R), we obtain

C(0; 1−R) = −∫_0^T (1−R) · P(0,z) · (dX(0,z)/dz) dz

where P(0,z) is the price of a risk-free bond of maturity z. In practice, we cannot integrate continuously, so we discretize the equation above into time intervals matching the CDS payment dates. We also make the practical assumption that, regardless of exactly when default occurs within a period, it may be treated as occurring in the middle of that period. With these ground rules, the equation can be restated as

C(0; 1−R) = (1−R) · Σ_{t_j ≤ T} P(0, (t_{j−1}+t_j)/2) · [X(0,t_{j−1}) − X(0,t_j)]

c) Bootstrapping the Survival Curve

A bootstrapping procedure is commonly used to build a survival curve in practice. Assume we observe CDSs of maturities T1, T2, ..., TN in the market. The survival curve is essentially a chart of λ versus time. It is customary to treat it as piecewise constant, although in some cases piecewise linear may make more sense. Bootstrapping works as follows:

1) Guess λ(1), the intensity of the survival curve from 0 to T1, and use it to calculate the fair spread of a CDS of maturity T1.
2) If the calculated fair spread matches the spread observed in the market, accept the guess of λ(1); otherwise revise the guess until the fair spread matches the market. In MATLAB we can simply use the fzero function to find this root.
3) Having found λ(1), guess λ(2), the intensity from T1 to T2. Using the known λ(1) and the guessed λ(2), compute the fair spread for the CDS with maturity T2.
If the calculated fair spread matches that seen in the market, we accept our guess of λ(2); otherwise we use a root-finding method to find the value of λ(2) that matches the observed CDS spread.
4) We repeat this procedure for each subsequent λ.
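The bootstrapping steps above can be sketched in a few lines. This is an illustrative Python version (the report's implementation is MATLAB and uses fzero); the helper names, the flat 5% rate, the 40% recovery and the annual payment grid are assumptions for the sketch, with bisection standing in for fzero.

```python
# Sketch of the bootstrapping procedure of section c). Defaults are treated as
# occurring mid-period, per the discretized contingent-claim formula above.
import math

R, r = 0.40, 0.05          # assumed recovery rate and flat risk-free rate

def survival(t, knots, lambdas):
    """X(0,t) = exp(-integral of piecewise-constant lambda); knots = [T1,T2,...]."""
    h, prev = 0.0, 0.0
    for T, lam in zip(knots, lambdas):
        h += lam * (min(t, T) - prev)
        if t <= T:                 # t must not exceed the last knot
            break
        prev = T
    return math.exp(-h)

def fair_spread(T, knots, lambdas, dt=1.0):
    """Par spread = contingent leg / risky annuity, both per $1 notional."""
    times = [dt * i for i in range(1, int(T / dt) + 1)]
    annuity = sum(dt * math.exp(-r * t) * survival(t, knots, lambdas) for t in times)
    prot = sum((1 - R) * math.exp(-r * (t - dt / 2))
               * (survival(t - dt, knots, lambdas) - survival(t, knots, lambdas))
               for t in times)
    return prot / annuity

def bootstrap(maturities, spreads):
    """Solve for each lambda in turn by bisection so model spread = market spread."""
    knots, lambdas = [], []
    for T, s in zip(maturities, spreads):
        lo, hi = 1e-8, 5.0
        for _ in range(80):
            mid = 0.5 * (lo + hi)
            if fair_spread(T, knots + [T], lambdas + [mid]) < s:
                lo = mid           # spread increases with lambda
            else:
                hi = mid
        knots.append(T); lambdas.append(0.5 * (lo + hi))
    return lambdas

lams = bootstrap([1, 3, 5], [0.0040, 0.0050, 0.0060])   # 40, 50, 60 bps quotes
```

Repricing the calibrating CDSs with the bootstrapped curve recovers the input spreads, which is exactly the roundtrip idea used for validation later in the report.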
Construction of the Business Time Path

The business time process is modeled as two gamma processes (Γ) plus a drift:

I_t = a·t + Γ1(t; γ1, λ1) + Γ2(t; γ2, λ2)

A gamma process is a positive, pure-jump process whose increments over intervals Δt are independently gamma distributed as f(x; γΔt, λ). The parameter γ controls the rate of jump arrivals and λ inversely controls the size of the jumps. The naïve way to generate a gamma process is to break an interval into small subintervals and draw a gamma random variable ~ f(x; γΔt, λ) for each period Δt. However, this is computationally intensive, since one must generate gamma random variables, which are typically obtained via acceptance-rejection methods.

The gamma probability density function is defined as

f(x; γ, λ) = λ (λx)^{γ−1} e^{−λx} / Γ(γ)

where Γ(γ) is the gamma function, which can be thought of as a function that "fills in" the factorial function: if z is an integer, then Γ(z+1) = z!. Note that if γ = 1, the gamma density reduces to the exponential density with rate λ:

f(x; 1, λ) = λ e^{−λx}

Furthermore, a gamma-distributed random variable can be thought of as the sum of γ exponential random variables, even though γ is not restricted to being a whole number.

From Cont and Tankov [Financial Modelling with Jump Processes, Chapman and Hall, 2003] we have the following, more efficient, expression for the gamma process as an infinite series:

X_t = Σ_{i=1}^∞ (1/λ) e^{−Γ_i/γ} V_i · 1_{U_i ≤ t},   where Γ_i = Σ_{j=1}^i T_j
T_i, V_i are standard exponential random variables (contributing to the jump sizes) and U_i is standard uniform (determining the jump times).

Cont and Tankov suggest truncating the series once Γ_i exceeds a threshold τ. Joshi suggests that the remaining small jumps can then be approximated by a drift term. The algorithm we employed, following Cont and Tankov, is as follows, where Tenor is the length in real time of our gamma process (5 years for a typical CDO, for example):

Initialize k = 0.
REPEAT WHILE Σ_{i=1}^k T_i < τ:
    Set k = k+1.
    Simulate T_k, V_k: standard exponentials.
    Simulate U_k: uniform on [0, Tenor].
END
Set G = cumulative sum of T.
Sort U in ascending order; these can be thought of as the jump times of a compound Poisson process.
Sort G in the same order as U.

The gamma process increments are then

X_i = (V_i/λ) · e^{−G_i/(γ·Tenor)},   i = 1...k

Noticing that the maximum G_i will be approximately equal to τ, the terms truncated from this series are of order exp(−τ/(γ·Tenor)) or less. What is the total magnitude of this truncation error? We can express the residual series R as

R ≡ Σ_{i=k+1}^∞ X_i = Σ_{i=k+1}^∞ (1/λ) e^{−Γ_i/γ} V_i,   where k = inf{ i : Γ_i > τ }

The expected value of the residual can be computed; this is the magnitude of the drift term. Given that Γ_k is the first point at which the sum of exponential variables exceeds
the threshold τ, we can approximate it as τ + δ. What is the expected value of δ? In fact δ is also exponentially distributed with mean 1. This is related to the memorylessness property of exponential random variables: given that we have waited to a certain point in time, the expected time until the next arrival is unchanged. We therefore approximate Γ_k as τ + 1.

E[R] = Σ_{i=k+1}^∞ (1/λ) E[e^{−Γ_i/γ}] E[V_i]
     = (1/λ) Σ_{i=k+1}^∞ E[e^{−Γ_k/γ}] · (E[e^{−T/γ}])^{i−k}
     = (e^{−(τ+1)/γ}/λ) Σ_{m=1}^∞ (γ/(γ+1))^m
     = (γ/λ) e^{−(τ+1)/γ}

Recall that all of the random variables are independent, which allows us to factorize the expectations in the first and second lines. The third line follows by approximating Γ_k as τ + 1 and substituting t = 1/γ into the moment generating function of a standard exponential, giving E[e^{−T/γ}] = (1 + 1/γ)^{−1} = γ/(γ+1). The last line is a standard result for geometric series.

What τ should we choose if we would like to use a constant compensating drift b for all paths? Equating E[R] to b gives

τ = −γ log(bλ/γ) − 1

For generating gamma paths to an arbitrary time T, substitute γ with γT.

Business Time Sample Results

The function for generating business time paths was tested by checking that the means and variances were correct. Sample results are shown below for the parameters γ1 = .5, γ2 = 1, λ1 = .3, λ2 = .1, drift a = 1, Tenor = 5, number of paths = 100,000.

Sample Mean   Std Err of the Mean   Expected Mean
63.2672       0.0723                63.3333

Sample Var    Expected Var
522.3126      527.7778
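The series construction and the mean check above can be sketched as follows. This is an illustrative Python version (the report's code is MATLAB) that simulates only the terminal value of a single gamma component, so the jump times U_i are not needed; the parameter values echo the first component of the sample-results table, and the threshold τ is an arbitrary choice for the sketch.

```python
# Cont-Tankov series sketch: simulate the terminal value of one gamma process,
# truncating the series at tau and adding the compensating drift b = E[R].
import math, random

def gamma_terminal(gamma_, lam, tenor, tau, rng):
    """One sample of the gamma process value at `tenor`."""
    G, total = 0.0, 0.0
    while G < tau:                       # stop once the exponential sum exceeds tau
        G += rng.expovariate(1.0)        # T_k: standard exponential
        V = rng.expovariate(1.0)         # V_k: standard exponential
        total += (V / lam) * math.exp(-G / (gamma_ * tenor))
    # E[R] = (gamma/lambda) e^{-(tau+1)/gamma}, with gamma replaced by gamma*tenor
    b = (gamma_ * tenor / lam) * math.exp(-(tau + 1.0) / (gamma_ * tenor))
    return total + b

rng = random.Random(42)
n = 20000
vals = [gamma_terminal(0.5, 0.3, 5.0, 15.0, rng) for _ in range(n)]
mean = sum(vals) / n                     # should approach gamma*Tenor/lambda = 8.33
```

The sample mean should sit close to the theoretical mean γ·Tenor/λ, mirroring the mean/variance check reported in the table.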
Sample business time plots are shown in Figure 1 and Figure 2. Notice that the higher γ in the second plot results in an increased number of recorded jumps (the actual number of jumps is infinite, of course).

Figure 1: Sample Business Time Path.
Figure 2: Sample Business Time Path.

The truncation error was added back to the process in order to ensure the final business time distribution had the correct mean. How much computation time was saved by this implementation? As noted above, the truncation error was controlled by setting τ appropriately. Table 1 shows that the speed improvement from this method was approximately 20%.

                                                    Time to Generate All Paths
No Truncation Error Adjustment, Error = 0.001       42 seconds
With Truncation Error Adjustment, Error = 0.05      34 seconds

Table 1: Speed improvement achieved by truncating the gamma process earlier and adding in the truncation error correction term. Parameters: γ1 = .5, γ2 = 1, λ1 = .3, λ2 = .1, drift a = 1, Tenor = 5, number of paths = 100,000. Note that the truncation error of 0.05 is relative to the timescale of the CDO tenor, which is 5 in this case, so 0.05 represents a 1% error.
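As a quick numeric sanity check on the threshold formula: inverting E[R] = (γ/λ) e^{−(τ+1)/γ} = b reproduces τ = −γ log(bλ/γ) − 1. The values below are illustrative, not taken from the report.

```python
# Round-trip check: choose tau from a desired drift b, then confirm that the
# expected truncated residual E[R] at that tau equals b.
import math

def tau_for_drift(b, gamma_, lam):
    """Invert E[R] = (gamma/lambda) * exp(-(tau+1)/gamma) = b for tau."""
    return -gamma_ * math.log(b * lam / gamma_) - 1.0

def expected_residual(tau, gamma_, lam):
    """E[R], the expected value of the truncated tail of the series."""
    return (gamma_ / lam) * math.exp(-(tau + 1.0) / gamma_)

g, lam = 0.5 * 5.0, 0.3       # gamma substituted by gamma*Tenor, per the text
tau = tau_for_drift(0.05, g, lam)
```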
Calculating IG forward intensities (ci)

As discussed above, the survival curve construction process produces a set of piecewise-constant λ values for each name in the portfolio. These represent forward default intensities, or hazard rates, for the company. With the standard assumption of Poisson arrivals, the survival probability for each name i is constructed using the expression

X_i(0,T) = exp(−∫_0^T λ(t) dt)

One way of looking at this is that the survival probability of each company decays at a rate proportional to time; that proportion is λ. In the IG model, survival probabilities decay at a rate proportional to business time; the proportion is c. In IG the survival probability for each name i is

X_i(0,T) = E[exp(−∫_0^T c_i(t) dI_t)]

Here dI_t represents the incremental arrival of information, or business time, and the expectation is taken over business time paths. Calculating the appropriate value for c_i relies on the fact that these two quantities must be equal. Put another way, the survival probability from the IG model must be consistent with the survival probability implied by the market. If it were not, the calibrating instruments, single-name CDSs, would not be priced consistently with the market: a fatal flaw for any model.

Although we could calculate c_i from the survival probability itself, it is much more convenient to use the λ values already calculated. Since they are piecewise constant, the first integral above can be replaced by a summation, and the survival probability becomes a product of conditional survival probabilities determined by the λ on each interval. The IG c_i values are assumed piecewise constant on the same intervals. This leads to the following identity for c in terms of λ on an arbitrary interval from T1 to T2. To minimize confusion, λ̃ refers to the hazard rate and λ_i refers to the parameter of the i-th gamma process.
Let τ ≡ T2 − T1, so that the business time increment is

I_{T2} − I_{T1} = aτ + Γ1(τ; γ1τ, λ1) + Γ2(τ; γ2τ, λ2)

Using the moment generating function of the gamma distribution, E[e^{−cΓ(γτ,λ)}] = (λ/(λ+c))^{γτ}, consistency with the market-implied survival probability requires

e^{−λ̃τ} = E[e^{−c(I_{T2} − I_{T1})}] = e^{−caτ} · ∏_{i=1}^2 (λ_i/(λ_i+c))^{γ_i τ}

Taking logarithms and dividing by −τ gives

λ̃ = ca + Σ_{i=1}^2 γ_i log(1 + c/λ_i)

A subtle point noted by Joshi is that one of the parameters is in fact redundant. If the parameter a is multiplied by m and each λ_i is divided by m, this is equivalent to multiplying each gamma path by m. Looking at the identity above, we see that the resulting c values will simply be divided through by m. This means that defaults will occur at the same times and prices will be the same for the parameterization (a, γ1, λ1, γ2, λ2) as for (ma, γ1, λ1/m, γ2, λ2/m). Thus a can be set to 1 without losing any flexibility in the model.

The identity above has no simple closed form for c and must be solved using a zero-finding algorithm. The calculation must be done for each name and for each piecewise-constant interval. Since the c_i values are a function of the IG parameters, this inner calibration must be performed on every iteration of the outer calibration, which searches for IG parameters that consistently price the different CDO tranches. It is therefore important that it be done efficiently.

MATLAB's fzero routine offers a simple solution for zero-finding, but it has the limitation that it is not vectorized. Instead we implemented a parallel bisection method which finds the zero for all names and intervals simultaneously. This is possible because each of the calculations is completely independent. The algorithm is identical to standard bisection except that each operation is performed on the entire array of values; its running time is therefore that of the longest individual iteration, compared with the serial method, whose execution time is the sum of all individual iterations. This method allows us to exploit MATLAB's powerful matrix routines.
The main loop from ig_intensity2.m is shown below:

for k = 1:maxiter
    cmid = (cmin + cmax)/2;
    vmid = f(cmid);
    rootfound = (vmid == 0);
    cmin(rootfound) = cmid(rootfound);
    cmax(rootfound) = cmid(rootfound);
    % root is below midpoint
    rootbelow = (vmax.*vmid > 0);
    cmax(rootbelow) = cmid(rootbelow);
    vmax(rootbelow) = vmid(rootbelow);
    % root is above midpoint
    rootabove = ~rootfound & ~rootbelow;
    cmin(rootabove) = cmid(rootabove);
    vmin(rootabove) = vmid(rootabove);
    % global exit condition
    if (max(max(cmax-cmin)) < tol)
        % disp(sprintf('error < tol k = %g\n', k));
        break
    end
end
c = (cmax + cmin)/2;

A similar technique is applied to speed up the calculation of the λ values. In that case all values cannot be calculated simultaneously, since the value of λ at a given time depends on all previous λ, and the parallel bisection relies on the values being independent. Instead a bootstrapping mechanism is employed in which we step forward through time, calculating the hazard rate for all names simultaneously. This code is implemented in intensity_bisection.m and intensity_curve_test.m. It also serves to validate the results of the survival curve generator discussed above, since both should produce identical results.

Getting the Default Times from the Business Time Paths

In Joshi's Intensity-Gamma model, c_i(t) is the default rate for name i per unit of information arrival. The probability of survival for name i, conditional on a business time path, is therefore

X_i(t) = exp(−∫_0^t c_i(s) dI_s)

Since c_i(t) is piecewise constant with respect to t (real time), we can calculate default times using the customary method of drawing a uniform random variable U ~ [0,1] and setting it equal to the survival probability. The default time τ_i can then be found from the formula

τ_i = min{ T : −log(U_i) < ∫_0^T c_i(t) dI_t }

Since the c_i(t) are piecewise constant, the integration must be performed in parts, which can take time; the process was sped up by noting that most names will not default on any given path. We first assume each c_i(t) is constant and assign it its maximum value. The earliest business time at which a default could then have occurred is

I_{min,i} = −log(U_i) / c_{i,max}

If this is greater than the maximum business time, I_Tenor, then no default could have occurred within the timeframe of the CDO.
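The default-time search above can be sketched as follows for a single name. This is an illustrative Python version (the report's code is MATLAB); it assumes the business time path is supplied as increasing (t, I_t) samples and c(t) as piecewise-constant values between knots, and the helper names are invented for the sketch.

```python
# Find the calendar default time tau = min{ T : -log(u) < integral c(t) dI_t },
# with the c_max quick-reject test described in the text.
import math, bisect

def default_time(u, knots, cs, path_t, path_I):
    """Calendar default time for one name, or None if no default before tenor.

    knots/cs describe piecewise-constant c(t) (c = cs[j] up to knots[j]);
    path_t/path_I are increasing samples of one business time path I_t.
    """
    budget = -math.log(u)
    if budget >= max(cs) * path_I[-1]:         # quick reject: I_min > I_Tenor
        return None
    acc, prev_I = 0.0, 0.0
    for t, I in zip(path_t, path_I):
        c = cs[bisect.bisect_left(knots, t)]   # which interval t falls in
        acc += c * (I - prev_I)                # accumulate integral of c dI
        if acc >= budget:
            return t
        prev_I = I
    return None
```

With a pure-drift path (I_t = t) and constant c the integral reduces to c·T, so the default time can be checked against −log(u)/c by hand.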
CDO pricer

The CDO pricer was adapted from C code written by Aloke Mukherjee for the class Interest Rate and Credit Models, implementing a Monte Carlo simulator of the one-factor Gaussian copula model. The MATLAB port of this code is cdo.m. The default time generation was removed and only the pricing portion of the code was retained, in igcdo.m. The name is a misnomer, since the code is not specific to IG: it takes the model-dependent simulated default time scenarios as an argument.

Given this (sorted) set of times, we simply simulate the effect of each firm's default on the fixed and floating leg cashflows. A default requires a payment on the fixed leg of the notional associated with the affected fraction of the tranche; this payment must be discounted from the time of default to the present. On the floating leg, a default reduces the amount of tranche notional on which the coupon is paid. The PV of the fixed leg is calculated by applying the appropriate discount factor from default time to the present, weighted by the proportion of affected notional. For the floating leg we keep track of the remaining notional in each discretized time interval and make the simplifying assumption that the coupon is paid on the average notional over each period. The break-even spread for the tranche is then calculated as the fixed spread which would make the expected values of the two legs equal: Cbe = E[Vfix] / (E[Vflt] / c), where c is the coupon assumed in the valuation of the fixed leg.

Validation

Two important validations were performed. First, we verify that we can reprice CDSs consistently with the input spreads. Second, we use an analytic approximation suggested in the paper to verify the Monte Carlo simulation.

Roundtrip Test

The model must reproduce the input CDS spreads. The roundtrip test (roundtriptest.m) verifies this by assuming a CDS with a flat spread, recovering the associated (flat) intensities and then using these to price the CDS with the intensity gamma model.
A CDS can be thought of as a CDO with only one firm, with attachment and detachment points of 0% and (1 − recovery rate)% respectively. Additionally, coupons are paid on the entire notional until default, at which point the coupons cease; this logic is triggered in the pricer when the number of firms is one.

We verify that the output spread matches the input spread. In addition, the IG pricer outputs a default time for each path. We can use these default times to construct a survival curve, simply by counting the number of defaults before a given time and dividing by the total number of paths. This survival curve should be consistent with the survival curve created when bootstrapping the default intensities. A sample run is shown
below, as well as a graph of the survival curve. For comparison, the results for the one-factor Gaussian copula model simulator are also shown.

EDU>> help roundtriptest
 function pass = roundtriptest(spreadbps, paths);
 Test pricer and intermediate steps by seeing whether we
 recover the input spread.
 spread -> lambda -> igprice -> spread
 input:
  spreadbps - a flat spread in bps - defaults to 40
  paths - number of monte carlo paths to simulate - defaults to 100000
 output:
  pass - 1 if successful
 Also produces a graph comparing the implied survival probabilities
 and survival probabilities calculated from the generated default times.
 2006 aloke mukherjee

EDU>> roundtriptest(100,100000);
closed form vfix = 0.0421812, vflt = 0.0421812
Gaussian vfix = 0.0422499, vflt = 0.0428865
IG vfix = 0.0429348, vflt = 0.0422907
input spread = 100, gaussian spread = 101.507, IG spread = 98.4998

[Figure: implied survival curve over 0-5 years (survival probability 0.91-1.00) together with the curves recovered from simulated default times under the Gaussian copula and Intensity Gamma models.]
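The survival-curve check described above (count the defaults before each time t and divide by the number of paths) can be sketched as follows. Illustrative Python, with exponential default times at a known flat intensity standing in for the IG pricer's output.

```python
# Empirical survival curve from simulated default times, compared against the
# implied curve exp(-lambda*t) for a flat intensity.
import math, random

def empirical_survival(default_times, t):
    """Fraction of simulated paths still alive at t (None = never defaulted)."""
    alive = sum(1 for d in default_times if d is None or d > t)
    return alive / len(default_times)

rng = random.Random(7)
lam, n_paths, tenor = 0.02, 100000, 5.0
taus = [rng.expovariate(lam) for _ in range(n_paths)]
taus = [d if d <= tenor else None for d in taus]    # no default within the tenor
```

For a flat intensity of 0.02 the empirical curve should track exp(−0.02 t) to within Monte Carlo noise, which is the consistency the roundtrip test checks.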
A Fast Analytic Price Approximation in the Intensity-Gamma Framework

Joshi and Stacey suggest a clever method for obtaining a fast, analytic price for a CDO by making some approximations. The default intensities λ_i are assumed constant over the life of the CDO and are calculated to match the survival probabilities at maturity; for example, λ_i = (5-year CDS spread for name i) / (1 − recovery). The business time default intensities c_i are then derived in the usual way. The probability that a specific name survives until the CDO's maturity, given a business time I_T, is e^{−c_i I_T}.

The probability that there are exactly k defaults, given a business time I_T, can then be calculated quickly. The method employed to accomplish this task was borrowed from the large pool model of CDO pricing. Let X_i be the probability that firm i survives to business time I_T, and P(k,n) the probability that k firms have defaulted by business time I_T, given that n firms existed at time zero.

• Start with 1 firm:
  P(k=0, n=1) = X_1
  P(k=1, n=1) = 1 − X_1
• Add 1 firm:
  P(k=0, n=2) = P(k=0, n=1) · X_2
  P(k=1, n=2) = P(k=0, n=1) · (1 − X_2) + P(k=1, n=1) · X_2
  P(k=2, n=2) = P(k=1, n=1) · (1 − X_2)
• And so on.

We assume that all defaults occur exactly at the midpoint of the CDO's lifetime. We can then price the floating leg and the fixed leg of the CDO by integrating over all possible business times:

Vfloat = ∫_0^∞ Σ_{k=1}^N vflt(k) · P(K=k | I_T) · p(I_T) dI_T

For a business time process consisting of one gamma process (and possibly a drift term), the probability density of business times is simply a gamma distribution. (What is the PDF for two gamma processes?) We tested this method and compared the results to our main Intensity Gamma CDO pricer in Table 2, Table 3 and Table 4. The results show that the approximate prices are not terribly accurate.
However, Joshi and Stacey suggest using it as a control variate for performance improvement.

Table 3 is particularly informative, because it compares the two pricing methods using the same constant default intensities. The pricing differences there are thus due solely to the assumption that all defaults occur exactly in the middle of the CDO's lifetime. It seems that only the equity tranche is substantially affected by this.
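The firm-by-firm recursion in the bullets above can be written directly. Illustrative Python; the function name is invented.

```python
def defaults_distribution(X):
    """P(K = k | I_T) for k = 0..n, given conditional survival probs X[0..n-1],
    built up one firm at a time exactly as in the bullets above."""
    p = [1.0]                           # with zero firms, surely zero defaults
    for x in X:
        q = [0.0] * (len(p) + 1)
        for k, pk in enumerate(p):
            q[k] += pk * x              # firm survives: default count unchanged
            q[k + 1] += pk * (1.0 - x)  # firm defaults: count k -> k+1
        p = q
    return p
```

For identical survival probabilities the recursion reduces to the binomial distribution, which gives a convenient sanity check.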
Table 4 shows the results when, in the approximate Intensity Gamma pricer, defaults are instead assumed to be evenly distributed over the CDO's lifetime. The prices then match almost exactly, as we would expect.

TRANCHE   Approximate Intensity-Gamma Price (bps)   Complete Intensity Gamma Price (bps)
0-3%      1429                                      1778
3-7%      135                                       187
7-10%     14                                        29
10-15%    1                                         5
15-30%    0                                         0

Table 2: CDO tranche prices from the Intensity Gamma method compared to the approximate method. Actual default intensity curves were used for the complete Intensity Gamma pricing. Parameters: γ = .1, λ = .1, a = 1.0, rfr = .05.

TRANCHE   Approximate Intensity-Gamma Price (bps)   Complete Intensity Gamma Price (bps)
0-3%      1429                                      1573
3-7%      135                                       133
7-10%     14                                        13
10-15%    1                                         1
15-30%    0                                         0

Table 3: CDO tranche prices from the Intensity Gamma method compared to the approximate method. In this case the same approximate default intensities were used in both runs, with the same parameters: γ = .1, λ = .1, a = 1.0, rfr = .05.

TRANCHE   Approximate Intensity-Gamma Price (bps)   Complete Intensity Gamma Price (bps)
0-3%      1584                                      1573
3-7%      144                                       133
7-10%     14                                        13
10-15%    1                                         1
15-30%    0                                         0

Table 4: CDO tranche prices from the Intensity Gamma method compared to the approximate method. In this case the same approximate default intensities were used in both runs, and defaults were assumed to be evenly distributed over the CDO's lifetime. The parameters were the same: γ = .1, λ = .1, a = 1.0, rfr = .05.
Calibration

We chose to test how well a two-gamma-process specification fits the correlation skew, and so attempted to replicate market spreads with a two-gamma-process CDO pricer.

There is a redundancy in the parameters needed to calibrate two gamma processes with drift: multiplying the drift a by some factor m while dividing each λ by the same m is equivalent to multiplying the gamma paths by m, which would also multiply all the individual names' default rates by m and destroy our calibration of the default rates. To avoid this problem we set a = 1. Four parameters then remain to estimate. Our goal is to minimize the squared difference between market prices and simulated prices. Mark Joshi advises using a downhill simplex method; however, due to the variability of results from our simulation, we considered that an optimization method robust to noise would be better suited. As we had no idea of the scale of the parameters to choose, we also needed an optimization method that allows a large search space. Because of long computation times, a genetic algorithm was unsuitable for this optimization. We chose a simulated annealing algorithm, described in Wikipedia as:

"The name and inspiration come from annealing in metallurgy, a technique involving heating and controlled cooling of a material to increase the size of its crystals and reduce their defects. The heat causes the atoms to become unstuck from their initial positions (a local minimum of the internal energy) and wander randomly through states of higher energy; the slow cooling gives them more chances of finding configurations with lower internal energy than the initial one.

By analogy with this physical process, each step of the SA algorithm replaces the current solution by a random "nearby" solution, chosen with a probability that depends on the difference between the corresponding function values and on a global parameter T (called the temperature), that is gradually decreased during the process.
The dependency is such that the current solution changes almost randomly when T is large, but increasingly "downhill" as T goes to zero. The allowance for "uphill" moves saves the method from becoming stuck at local minima—which are the bane of greedier methods."

We first calibrated a one-gamma-process model to fit our CDO spreads, then used those parameters as initial estimates for the two-gamma-process optimization. Because of our pricer's computation time, the calibration took 48 hours to reach a good approximation for a standard 125-name, 5-tranche, 5-year-maturity CDO. Mark Joshi claims that his C++ pricer runs a similar calibration in 5 seconds. Our use of a MATLAB pricer may account for part of the gap, but a difference of this magnitude is hard to explain by the implementation language alone.

After computing implied base correlations from the resulting CDO spreads, we compared the market and simulated base correlation skews.
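The acceptance rule described in the quote above can be written as a short generic minimizer. This is an illustrative Python sketch, not our MATLAB calibration code; the toy objective and all names are hypothetical stand-ins for the squared pricing error over the four parameters (γ1, λ1, γ2, λ2).

```python
import math
import random

def simulated_annealing(objective, x0, step=0.5, t0=1.0, cooling=0.995,
                        n_iter=2000, seed=0):
    """Minimize objective(x) by simulated annealing: propose a nearby point,
    always accept downhill moves, accept uphill moves with probability
    exp(-delta / T), and cool T geometrically."""
    rng = random.Random(seed)
    x, fx = list(x0), objective(x0)
    best, fbest = list(x), fx
    t = t0
    for _ in range(n_iter):
        cand = [xi + rng.gauss(0.0, step * t) for xi in x]   # "nearby" solution
        fc = objective(cand)
        if fc < fx or rng.random() < math.exp(-(fc - fx) / max(t, 1e-12)):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = list(x), fx                     # track best seen
        t *= cooling                                          # slow cooling
    return best, fbest

# Toy usage: recover parameters minimizing a squared-error surface, as a
# stand-in for matching market tranche spreads.
target = [0.1, 0.1, 0.3, 0.2]
obj = lambda p: sum((pi - ti) ** 2 for pi, ti in zip(p, target))
params, err = simulated_annealing(obj, [1.0, 1.0, 1.0, 1.0])
```

Because uphill moves are occasionally accepted while T is large, the search can escape local minima that would trap a pure downhill method such as the simplex.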
Figure: Comparison of market and simulated base correlations across the 0-3%, 3-7%, 7-10%, 10-15% and 15-30% tranches.

The skews do not match perfectly, but we do obtain a correlation skew with this method. Our calibration may have been imperfect, or a third gamma process may be needed; however, given our current computation times, adding a third gamma process is impractical.

Future work

We successfully implemented the Intensity Gamma model and calibrated it to market prices. The implementation was validated by repricing CDSs and by checking against an analytic approximation. There are a few avenues for improvement.

The current implementation could be sped up by using the homogeneous portfolio approximation described above as a control variate. This, along with the application of quasi-random numbers, is identified in the paper as a source of performance improvement. Additionally, it may be possible to vectorize more of the implementation, although it is already quite parallelized.

Two more advanced techniques to extend the model are also outlined in the paper. Observing that using IG to price products with maturities different from those of the calibrating instruments produces a greater decrease in correlation than implied by the market, the authors suggest introducing a random delay between the arrival of enough information to trigger default and the default itself. Another enhancement would be to allow different gamma processes to model sector or geographic effects.
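The control-variate idea suggested for speeding up the pricer can be sketched generically: subtract β times the control's deviation from its known mean, with β estimated from the sample covariance. This is an illustrative Python sketch with hypothetical names, not the paper's implementation; in our setting the control would be the homogeneous portfolio approximation, whose expectation is known analytically.

```python
import random

def control_variate_estimate(draws, payoff, control, control_mean):
    """Estimate E[payoff] as mean(payoff) - beta * (mean(control) - control_mean),
    where beta = Cov(payoff, control) / Var(control) minimizes the variance."""
    ys = [payoff(d) for d in draws]
    cs = [control(d) for d in draws]
    n = len(draws)
    ybar = sum(ys) / n
    cbar = sum(cs) / n
    cov = sum((y - ybar) * (c - cbar) for y, c in zip(ys, cs)) / (n - 1)
    var = sum((c - cbar) ** 2 for c in cs) / (n - 1)
    beta = cov / var if var > 0 else 0.0
    return ybar - beta * (cbar - control_mean)

# Toy usage: estimate E[U^2] for U ~ Uniform(0,1), using U itself as the
# control variate (its mean, 0.5, is known exactly); the true answer is 1/3.
rng = random.Random(1)
draws = [rng.random() for _ in range(20000)]
estimate = control_variate_estimate(draws, lambda u: u * u, lambda u: u, 0.5)
```

The variance reduction comes from the strong correlation between the full-model payoff and the cheap approximation evaluated on the same random draws.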
References

Li, David X. (2000) On Default Correlation: A Copula Function Approach. Journal of Fixed Income, March 2000, pp. 41-50.

Joshi, M. and Stacey, A. (2005) Intensity Gamma: A New Approach to Pricing Portfolio Credit Derivatives.

Madan, D.B., Carr, P.P. and Chang, E.C. (1998) The Variance Gamma Process and Option Pricing. European Finance Review 2(1).