Credit Risk Management through
CreditRisk+
Dr Howard Haughton
Holistic Risk Solutions Limited
What is CreditRisk+?
CreditRisk+ is a method for quantifying:
The probability loss distribution for a portfolio of
loans. For example, it can answer the question of what
probability corresponds to a given level of credit loss in the
portfolio.
A summary risk measure such as Value At Risk (VAR)
or Expected Shortfall (ES). Hence it can indicate the
minimum loss in the portfolio that coincides with a given
confidence level, as well as the average of those losses above
that confidence level.
The contribution to the VAR and/or ES on a borrower,
sector or portfolio basis. Hence it can provide insight as to
which borrowers/loans, sectors and portfolios are the
most/least risky.
CreditRisk+ is not
A method for determining credit ratings
A method for determining the risk-adjusted price of
loans
A method for determining default probabilities/rating
transitions
Theoretical framework
some mathematics
CreditRisk+ generates probability loss distributions based
on the theory of probability generating functions (PGF).
$$F_X(s) = E\!\left[s^{X}\right] = \sum_{k=0}^{\infty} p_k\, s^{k} = p_0 s^{0} + p_1 s^{1} + \cdots, \qquad F_X(0) = p_0, \quad F_X(1) = 1$$
In the above, the powers of s denote the various states
that the random variable X might attain, and the
coefficient p_k denotes the probability of the random
variable being in state k.
Default Events
Assume that the event of default for a single
borrower can be viewed as a discrete event (i.e. it
either occurs or does not occur over a finite time
frame such as 1-year).
Assume that the probability of default is p and no
default is 1-p=q. There are therefore 2 states of
the world for the random variable denoting the
event of default: either default occurs (call this
state 1) or it does not occur (call this state 0).
Default Event continued
The probability generating function for this type of
default event is:
$$F_X(s) = E\!\left[s^{X}\right] = (1-p)\,s^{0} + p\,s^{1} = q + ps$$
In CreditRisk+ notation:
$$F_A(s) = 1 - p_A + p_A s = 1 + p_A(s-1)$$
In the above, the subscript A denotes that the random
default is associated with borrower A.
Some more mathematics
Independence of default events implies that:
$$F(s) = \prod_{A} F_A(s) = \prod_{A} \bigl(1 + p_A(s-1)\bigr)$$
For convenience, take logarithms and use the approximation
$\log(1+x) \approx x$, valid for small default probabilities $p_A$:
$$\log F(s) = \sum_{A} \log\bigl(1 + p_A(s-1)\bigr) \approx \sum_{A} p_A(s-1)$$
$$\Rightarrow\; F(s) \approx e^{\sum_A p_A (s-1)} = e^{\mu(s-1)}, \qquad \text{where } \mu = \sum_{A} p_A$$
Taylor series representation
Expanding F(s) as a Taylor series gives the probability
distribution for the number of defaults:
$$F(s) = e^{\mu(s-1)} = e^{-\mu}\, e^{\mu s} = e^{-\mu} \sum_{n=0}^{\infty} \frac{(\mu s)^{n}}{n!}$$
$$\Rightarrow\; p(n \text{ defaults}) = \frac{e^{-\mu}\,\mu^{n}}{n!}$$
This follows from reading off the coefficient of the $s^{n}$
term in the expansion, i.e. the number of defaults is Poisson
distributed with mean $\mu$.
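A minimal Python sketch of this Poisson approximation, using a small hypothetical set of default probabilities (not the slide's example portfolio):
```python
# Poisson approximation to the number of defaults in a portfolio.
# p_A below is a hypothetical list of one-year default probabilities.
import math

p_A = [0.30, 0.15, 0.10, 0.05, 0.03]    # hypothetical default probabilities
mu = sum(p_A)                            # expected number of defaults

def p_n_defaults(n: int, mu: float) -> float:
    """P(n defaults) = exp(-mu) * mu**n / n!  (Poisson approximation)."""
    return math.exp(-mu) * mu**n / math.factorial(n)

for n in range(6):
    print(f"P({n} defaults) = {p_n_defaults(n, mu):.4f}")
```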
Common size losses
In CreditRisk+ losses are calculated with respect to
integer multiples of a common size loss.
For each obligor A:
Exposure: $L_A$
Default probability: $p_A$
Expected loss: $\lambda_A = p_A \cdot L_A$
Common loss unit: $L$
$$L_A = \nu_A \cdot L, \qquad \lambda_A = \varepsilon_A \cdot L$$
Common size continued
All exposures are scaled so that they can be expressed
as an integer multiple of the common loss L. The
common loss can be determined as follows:
$$EL_P = \sum_{A} \lambda_A$$
$$REL = \mathrm{Round}\!\left(\frac{EL_P}{1000},\,0\right), \qquad L = \mathrm{Round}\!\left(\frac{\text{Biggest }L_A}{100},\,0\right)$$
$$\text{if } REL > L \text{ then } L = REL$$
$$\nu_A = \left\lceil \frac{L_A}{L} \right\rceil \quad \forall A$$
Common loss example
A  p_A  L_A  lambda_A  nu_A  Exposure band
(The final column lists the 18 distinct exposure bands, i.e. the distinct values of nu_A.)
1 30.00% 358,475 107,542.50 2 2
2 30.00% 1,089,819 326,945.70 6 6
3 10.00% 1,799,710 179,971.00 9 9
4 15.00% 1,933,116 289,967.40 10 10
5 15.00% 2,317,327 347,599.05 12 12
6 15.00% 2,410,929 361,639.35 12 14
7 30.00% 2,652,184 795,655.20 14 15
8 15.00% 2,957,685 443,652.75 15 16
9 5.00% 3,137,989 156,899.45 16 24
10 5.00% 3,204,044 160,202.20 16 25
11 1.50% 4,727,724 70,915.86 24 27
12 5.00% 4,830,517 241,525.85 24 28
13 5.00% 4,912,097 245,604.85 25 29
14 30.00% 4,928,989 1,478,696.70 25 32
15 10.00% 5,042,312 504,231.20 25 33
16 7.50% 5,320,364 399,027.30 27 39
17 5.00% 5,435,457 271,772.85 27 77
18 3.00% 5,517,586 165,527.58 28 100
19 7.50% 5,764,596 432,344.70 29
20 3.00% 5,847,845 175,435.35 29
21 30.00% 6,466,533 1,939,959.90 32
22 30.00% 6,480,322 1,944,096.60 33
23 1.60% 7,727,651 123,642.42 39
24 10.00% 15,410,906 1,541,090.60 77
25 7.50% 20,238,895 1,517,917.13 100
Example continued
$$EL_P = \sum_{A} \lambda_A = 14{,}221{,}863.48$$
$$REL = \mathrm{Round}\!\left(\frac{EL_P}{1000},\,0\right) = 14{,}222$$
$$L = \mathrm{Round}\!\left(\frac{\text{Biggest }L_A}{100},\,0\right) = \mathrm{Round}\!\left(\frac{20{,}238{,}895}{100},\,0\right) = 202{,}389$$
Since $REL < L$, the common loss unit is $L = 202{,}389$.
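A short Python sketch of the scaling steps above, applied to a hypothetical subset of the example exposures; rounding exposures up to integer multiples of L reflects the interpretation of the $\nu_A$ formula used here:
```python
# Derive the common loss unit L and the integer exposure multiples nu_A
# following the rounding rule above (exposures/PDs here are hypothetical).
import math

exposures = [358_475, 1_089_819, 1_799_710]            # L_A (hypothetical subset)
pds       = [0.30, 0.30, 0.10]                          # p_A

lambdas = [p * L_A for p, L_A in zip(pds, exposures)]   # expected losses lambda_A
EL_P = sum(lambdas)

REL = round(EL_P / 1000)                                # Round(EL_P / 1000, 0)
L   = round(max(exposures) / 100)                       # Round(Biggest / 100, 0)
if REL > L:                                             # if REL > L then L = REL
    L = REL

nu  = [math.ceil(L_A / L) for L_A in exposures]         # exposures as multiples of L (rounded up)
eps = [lam / L for lam in lambdas]                      # expected losses in units of L
print(L, nu, [round(e, 3) for e in eps])
```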
Exposure bands
For exposure band $j$ with common exposure $\nu_j$ (in units of $L$), expected loss $\varepsilon_j$ and expected number of defaults $\mu_j$:
$$\varepsilon_j = \sum_{A:\,\nu_A = \nu_j} \varepsilon_A, \qquad \varepsilon_j = \nu_j\,\mu_j \;\Rightarrow\; \mu_j = \frac{\varepsilon_j}{\nu_j}$$
$$\mu = \sum_{j=1}^{m} \mu_j = 3.134$$
Exposure bands continued
band j  nu_j  epsilon_j  mu_j
1 2 0.531 0.266
2 6 1.615 0.269
3 9 0.889 0.099
4 10 1.433 0.143
5 12 3.504 0.292
6 14 3.931 0.281
7 15 2.192 0.146
8 16 1.567 0.098
9 24 1.544 0.064
10 25 11.011 0.440
11 27 3.314 0.123
12 28 0.818 0.029
13 29 3.003 0.104
14 32 9.585 0.300
15 33 9.606 0.291
16 39 0.611 0.016
17 77 7.614 0.099
18 100 7.500 0.075
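A brief Python sketch of this banding step, with hypothetical $\nu_A$ and $\varepsilon_A$ values standing in for the full 25-obligor portfolio:
```python
# Group obligors into exposure bands and compute epsilon_j and mu_j.
# nu and eps are per-obligor multiples and expected losses in units of L (hypothetical).
from collections import defaultdict

nu  = [2, 6, 9, 10, 12, 12]                  # nu_A, hypothetical
eps = [0.53, 1.62, 0.89, 1.43, 1.72, 1.79]   # epsilon_A = lambda_A / L, hypothetical

bands = defaultdict(float)
for v, e in zip(nu, eps):
    bands[v] += e                             # epsilon_j = sum of epsilon_A within the band

mu = 0.0
for v_j in sorted(bands):
    eps_j = bands[v_j]
    mu_j = eps_j / v_j                        # expected number of defaults in band j
    mu += mu_j
    print(f"band nu_j={v_j:3d}  eps_j={eps_j:.3f}  mu_j={mu_j:.3f}")

print("mu =", round(mu, 3))                   # total expected number of defaults
```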
Default loss distribution
$$G(s) = \sum_{n=0}^{\infty} p(\text{aggregate losses} = n \cdot L)\, s^{n}$$
For a single exposure band $j$, with the number of defaults Poisson distributed with mean $\mu_j$ and each default producing a loss of $\nu_j$ units of $L$:
$$G_j(s) = \sum_{n=0}^{\infty} p(n \text{ defaults})\, s^{n\nu_j} = \sum_{n=0}^{\infty} \frac{e^{-\mu_j}\,\mu_j^{n}}{n!}\, s^{n\nu_j} = e^{-\mu_j + \mu_j s^{\nu_j}}$$
Independence across bands then gives:
$$G(s) = \prod_{j=1}^{m} G_j(s) = e^{-\sum_{j}\mu_j + \sum_{j}\mu_j s^{\nu_j}} = e^{\mu\left(P(s)-1\right)} = F\!\left(P(s)\right),
\qquad P(s) = \frac{\sum_{j=1}^{m} \frac{\varepsilon_j}{\nu_j}\, s^{\nu_j}}{\sum_{j=1}^{m} \frac{\varepsilon_j}{\nu_j}}$$
Loss distribution expression
Differentiating the Taylor series expansion of G gives the
loss probabilities (see Wilson [W]):
$$A_n = p(\text{loss of } n \cdot L) = \frac{1}{n!}\left.\frac{d^{n}G(s)}{ds^{n}}\right|_{s=0}$$
$$\Rightarrow\; A_n = \sum_{j:\,\nu_j \le n} \frac{\varepsilon_j}{n}\, A_{n-\nu_j}, \qquad A_0 = G(0) = e^{-\mu} = F\!\left(P(0)\right)$$
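A compact Python sketch of this recursion for a single sector with Poisson default counts; the band inputs are hypothetical rather than those of the band table above:
```python
# CreditRisk+ recursion for P(loss = n*L), single sector, Poisson defaults.
import math

nu_j  = [2, 6, 9]           # band exposures in units of L (hypothetical)
eps_j = [0.53, 1.62, 0.89]  # band expected losses in units of L (hypothetical)

mu = sum(e / v for e, v in zip(eps_j, nu_j))   # expected number of defaults

n_max = 50                                     # compute A_n for n = 0..n_max
A = [0.0] * (n_max + 1)
A[0] = math.exp(-mu)                           # A_0 = G(0) = exp(-mu)
for n in range(1, n_max + 1):
    # A_n = sum over bands with nu_j <= n of (eps_j / n) * A_{n - nu_j}
    A[n] = sum(e / n * A[n - v] for v, e in zip(nu_j, eps_j) if v <= n)

print(f"P(loss = 0) = {A[0]:.4f}, cumulative probability up to {n_max}*L = {sum(A):.4f}")
```
Summing the A_n over a sufficiently large range of n should approach 1, which is a useful sanity check on an implementation.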
Alternative representation
$$A_{v+1} = \sum_{j:\,\nu_j \le v+1} \left(\frac{\sigma_S^{2}-\mu}{\sigma_S^{2}} + \frac{\mu^{2}+\mu-\sigma_S^{2}}{\sigma_S^{2}}\cdot\frac{\nu_j}{v+1}\right)\frac{\mu_j}{\mu}\, A_{v+1-\nu_j}$$
$$\text{where } A_0 = p_0 \text{ and } \sigma_S^{2} = \sigma^{2} + \mu$$
Note the above representation (based on Panjer [P]) is useful as it
explicitly shows the relationship between loss probabilities and the
mean and variance of portfolio losses. Other means of
implementing the PGF are due to Melchiori [M] using FFT methods.
It is assumed in the above that there is only one sector. However
as shown by Kurth [K], Tasche [T] and others the formalism can be
used for a multi-sector case with the corresponding portfolio
expected loss and volatility of losses being calculated and
substituted.
Assumptions revisited
The exposures are expressed net of collateral value (i.e. recovery)
The recovery values are constant
The exposures can be approximated as integer multiples of a fixed unit
of loss. This is a necessary assumption for the discrete probability
model used
The distribution of the number of defaults can be approximated via the
Poisson distribution (valid for small default probabilities)
In the case of multi-sectors it is assumed that sectors are independent.
It is shown by Kurth et al [K] how the above assumptions can be
relaxed to incorporate correlation between sectors and non-constant
severity of loss.
Average correlation
Default correlations can be modeled in a number of
ways:
Equity
Asset
Credit spreads
Implied based on model
The use of an average correlation (i.e. all pair-wise
credits have the same default correlation) makes
calibration/modeling easier.
Correlation continued
Kurth et al., [K] show how an average correlation can
be derived in their extension of CreditRisk+
incorporating default correlations and severity
variations. Whilst useful these estimates suffer from
the assumption that correlations are constant.
Giese [G] also shows how dependent risk factors can
be modeled in the CreditRisk+ framework.
Some research findings about correlation
1. Default correlations are an increasing function of time
(see Standard & Poor’s [S&P], Zhou [Z])
2. Default correlations increase in recessionary periods
and decrease in boom times (see Gersbach and
Lipponer [GL])
3. Default correlations are, on the whole, positive (see
Standard & Poor’s [S&P])
4. The higher the default probabilities the higher the
default correlations (above references and others)
5. Default correlations are less relevant for higher-grade
credits than for lower-grade ones
Conditional correlation
Average default correlation should be conditional on the state of the
economy:
$$\rho_t^{Ave} = F\!\left(\beta X_{t-1}\right) + \varepsilon_t, \qquad E_{t-1}\!\left(\varepsilon_t\right) = 0$$
In the above, F is any suitable function for mapping the product of the
lagged macro-economic variables and the estimated regression
coefficients, $\beta X_{t-1}$, into a forecasted correlation value.
Conditional correlation
continued
Note that if a Probit model is used (i.e. F is the standard
Normal distribution function) then:
$$E\!\left(\rho_t^{Ave}\right) = \Phi\!\left(X_{t-1}\hat{\beta}\right)$$
Note that, more generally, rather than just assuming an
expected value, any percentile for the forecasted correlation
could be derived by sampling the distribution associated
with the error term.
Given the above it would be easy to incorporate scenario
analysis/stress testing into the credit modeling framework.
For example, a 99% “worst case” correlation could be
obtained on the basis of the generated values.
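A small Python sketch of the Probit mapping and the percentile idea above; the macro inputs, coefficients and error volatility are illustrative assumptions only:
```python
# Probit-style mapping from lagged macro variables to a forecast average
# default correlation, plus a "worst case" percentile via sampling.
import numpy as np
from scipy.stats import norm

X_lag    = np.array([1.0, 0.02, -0.01])   # constant + two macro variables at t-1 (hypothetical)
beta_hat = np.array([-2.0, 5.0, 3.0])     # estimated regression coefficients (hypothetical)
sigma_e  = 0.25                           # error volatility on the latent scale (hypothetical)

# Expected forecast: E(rho_t) = Phi(X_{t-1} beta_hat)
rho_expected = norm.cdf(X_lag @ beta_hat)

# Percentile forecast by sampling the error term on the latent scale
draws = norm.cdf(X_lag @ beta_hat + sigma_e * np.random.standard_normal(100_000))
rho_99 = np.percentile(draws, 99)

print(f"expected correlation {rho_expected:.3f}, 99% worst case {rho_99:.3f}")
```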
Estimation of default probabilities
1. Actuarial approach via default history for each credit
rating category (based on in-house data)
2. Inferred from delinquency probabilities for rating
classes in the absence of sufficient history of defaults.
Note some conjectured relationship between
delinquency and default (possibly for each rating
class) must be used here.
3. Implied from an in-house rating model, e.g.
Logit/Probit modeling approaches (a small sketch follows this list).
4. Note default probabilities from (3) can be made
conditional on the state of the economy.
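A minimal sketch of point 3, fitting a Logit PD model to simulated borrower features; all data and feature choices here are hypothetical:
```python
# Logit model for default probabilities fitted to hypothetical borrower features.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))      # e.g. leverage, coverage, liquidity ratios (simulated)
# Simulated default indicator with an assumed "true" logit relationship
y = (rng.random(500) < 1 / (1 + np.exp(-(X @ [1.5, -1.0, -0.5] - 2.0)))).astype(int)

model = LogisticRegression().fit(X, y)         # Logit PD model
pd_estimates = model.predict_proba(X)[:, 1]    # implied default probabilities per borrower
print(pd_estimates[:5].round(3))
```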
Loan aggregation
The original CreditRisk+ formalism does not specify
how to deal with cases where a borrower has more
than 1 loan in a portfolio. A simple aggregation rule
can however be applied to determine a single
aggregate loan from a collection of loans (possibly
multi-currency).
Aggregation example
Exposure | Sector split: Sector 1, 2, 3, N (Total) | Collateral split: Sector 1, 2, 3, N (Total)
358,475  | 50.0%, 30.0%, 10.0%, 10.0% (100.0%)     | 10.0%, 20.0%, 30.0%, 40.0% (100.0%)
358,475  | 50.0%, 30.0%, 10.0%, 10.0% (100.0%)     | 20.0%, 10.0%, 5.0%, 65.0% (100.0%)
The above shows 2 partial loan details for a borrower.  To aggregate these loans we 
assume that:
1. Sector contributions will remain the same for the aggregate loan
2. The total exposure is the sum of the exposure for both loans
3. PD and STD estimates are the same for both loans and aggregate loan
4. Multiply collateral splits by respective exposure amount, sum the result for each 
sector and divide individual sector amount by total aggregate exposure.
Collateral amounts by sector (collateral split x exposure, per loan):
Loan 1: 35,847.50  | 71,695.00  | 107,542.50 | 143,390.00
Loan 2: 71,695.00  | 35,847.50  | 17,923.75  | 233,008.75
Total:  107,542.50 | 107,542.50 | 125,466.25 | 376,398.75
Aggregate collateral split (totals / 716,950): 15.00% | 15.00% | 17.50% | 52.50%
Aggregate loan:
Exposure | Sector split: Sector 1, 2, 3, N (Total) | Collateral split: Sector 1, 2, 3, N (Total)
716,950  | 50.0%, 30.0%, 10.0%, 10.0% (100.0%)     | 15.0%, 15.0%, 17.5%, 52.5% (100.0%)
Note that for cases where a borrower has multiple currency loans then all loan 
amounts must be converted to a chosen base currency.
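A short Python sketch of the aggregation rules above, reproducing the example's aggregate collateral split; the field names are my own:
```python
# Aggregate two loans of one borrower into a single loan (exposure-weighted collateral split).
loans = [
    {"exposure": 358_475, "collateral_split": [0.10, 0.20, 0.30, 0.40]},
    {"exposure": 358_475, "collateral_split": [0.20, 0.10, 0.05, 0.65]},
]
sector_split = [0.50, 0.30, 0.10, 0.10]     # assumed identical across the borrower's loans

total_exposure = sum(l["exposure"] for l in loans)                      # rule 2

# Rule 4: multiply collateral splits by exposures, sum per sector, divide by total exposure
collateral_amounts = [sum(l["exposure"] * l["collateral_split"][k] for l in loans)
                      for k in range(4)]
aggregate_collateral_split = [amt / total_exposure for amt in collateral_amounts]

print(total_exposure)                                        # 716950
print([round(x, 4) for x in aggregate_collateral_split])     # [0.15, 0.15, 0.175, 0.525]
```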
Risk Measures
Value At Risk (VAR)
The VAR can be defined as the maximum credit loss, at a given
confidence level, that a portfolio might incur over the year. One
can calculate the VAR by simply reading off the loss probability
distribution. For example, once loss probabilities are calculated,
the VAR at a 99% confidence level is found by observing the loss
(with losses ordered by size) at which the cumulative probability
first reaches 99%. Another way of stating this is to say that the
VAR (given a confidence level of 99%, say) is the smallest loss
such that the probability of exceeding this loss is less than or
equal to 1-99% = 1%.
Expected Shortfall (ES)
The ES can be viewed as the average of the losses conditional
on those losses being greater than or equal to the VAR.
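A small Python sketch that reads VAR and ES off a discrete loss distribution {n*L : A_n}, following the definitions above; the toy distribution is illustrative only:
```python
# VAR and ES from a discrete loss distribution A, where A[n] = P(loss = n*L).
def var_es(A, L, confidence=0.99):
    """Return (VAR, ES) at the given confidence level."""
    cum = 0.0
    var_n = len(A) - 1
    for n, p in enumerate(A):
        cum += p
        if cum >= confidence:          # smallest loss with cumulative probability >= confidence
            var_n = n
            break
    tail_prob = sum(A[var_n:])
    # ES = average loss conditional on losses >= VAR
    es = sum(n * L * p for n, p in enumerate(A) if n >= var_n) / tail_prob
    return var_n * L, es

A = [0.90, 0.05, 0.03, 0.015, 0.005]   # toy loss distribution (not the slide's portfolio)
print(var_es(A, L=202_389, confidence=0.99))
```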
Risk contributions
It would be advantageous to be able to ascertain the
contributions that each borrower makes to the VAR
and ES.
Knowing the contributions provides a measure as to
the relative riskiness of one borrower over another.
Conclusions
The actuarial approach as popularized via the CreditRisk+
method is widely used for the modeling of credit risk
This type of model is more readily applied to developing
economies than many other portfolio methods, as
significantly fewer parameters need to be
calibrated/estimated, many of which cannot be observed in
any event
The method easily lends itself to be combined with factor
models allowing for incorporation of credit-cycle factor
considerations into the risk process.
Stress testing is easily accommodated in the framework.
The resulting values provide checks on the adequacy of provisions
and capital, and can be compared against regulatory standards.
References
[S&P] -Arnaud de Servigny & Olivier Renault (2002): Standard & Poor’s
presentation of Default Correlation: Empirical Evidence, November.
[K] - Bürgisser, P., Kurth, A. and Wagner, A. (2001): “Incorporating severity variations
into credit risk”, Journal of Risk, 3(4), pp 5-31
[C1] -Crouhy, M, D Galai and R Mark (2001): “Prototype Risk Rating
System”, Journal of Banking and Finance, January, pp 47-95
[C2] - Crouhy, M, D Galai and R Mark (2000): “A Comparative Analysis of
Current Credit Risk Models”, Journal of Banking and Finance, January, pp
57-117
[G]- Giese, G. 2004. Dependent Risk Factors. In: CreditRisk+ in the Banking
Industry. Berlin, Heidelberg: Springer Verlag. 153–165.
[GL]- Gersbach, H & Lipponer, A (2000): “The correlation effect”. University
of Heidelberg working paper, October
[M]- Mario Melchiori (2004): CreditRisk+ by Fast Fourier Transform,
Universidad Nacional del Litoral
References continued
[T]- Tasche, D. Expected shortfall and beyond. Journal of Banking
& Finance, 26(7):1519-1533, 2002.
[P]- Panjer, H. Recursive evaluation of a family of compound
distributions. ASTIN Bulletin, 12:22-26, 1981.
[W]- Wilson, T. CreditRisk+: A credit risk management framework.
London 1997, Credit Suisse Financial Products.
[Z]- Zhou, C (2001): “An Analysis of Default Correlations and
Multiple Defaults”, The Review of Financial Studies, Summer, pp
555-576.