Clearly, model specification (the log-likelihood), definition of parameters and quantities of interest, specification of the parameter space, and identification of relevant data all require judgment, are subject to criticism, and require justification. E.g., PD in (0,1) defines the parameter space, and for a particular rating or segment we may have an idea of the location of the rate. A simple example in the case of estimating default rates is sketched in Kiefer (2007). MCMC and related methods are widely discussed in the econometrics literature and have been applied in the default estimation setting, but applications typically specify a convenient prior that adds minimal information and so misses the true power of the Bayesian approach. That power requires thought and effort, rather than mere computational power, and exploiting it is therefore not commonly done.
Data information typically overwhelms a nondogmatic prior (the prior is irrelevant asymptotically), and economists often justify ignoring priors on this basis. But data may not be available in quantity for low-default assets or new products, and structural economic changes may cast doubt on the relevance of historical data (BCBS, 2009, in response to the credit crisis).
The first models are consistent with the ASRM underlying Basel II (B2), and the third adds temporal correlation in asset values by accounting for autocorrelation. The latter two models represent simple examples of GLMMs, now prevalent in the credit risk literature. They generalize B2 and are perhaps in line with the validation expectations of BCBS (2009b) and industry best practice.
The prior distribution will be combined with the data likelihood through Bayes' Theorem to derive the posterior distribution of the risk measure. Other applications include any situation with no or very limited data (models to describe, understand, and predict complex behavior). Model developers working in conjunction with experts typically propose sensible model parameters to obtain output where there is uncertainty regarding the inputs' true values. As in our application, this highlights the importance of having a coherent approach to representing that uncertainty.
The quality of an elicitation is the accuracy with which expert knowledge is translated into probabilistic form. Elicitation is done well if the distribution derived is an accurate representation of the expert's knowledge, no matter what the quality of that knowledge; but a good facilitator asking probability questions may also be able to determine whether the expert really has valuable information. Consider, e.g., assessing the probability of default for a set of obligors, especially if highly rated and we lack historical performance data on a portfolio of similar credits. Note the symmetry: we characterize the uncertainty regarding the unknown probability governing defaults — the distribution of the PD itself — in terms of probabilities. Why is it worth the trouble? The use of elicitation as part of business decision making earns automatic buy-in and supports the use test.
E.g., in forming a prior for the distribution of the default rate, a general sense of where it is centered (5 bps, 1%, or 10%?) and the degree to which the tail is elongated may be enough. On specifying the likelihood, a point often made in the normal case: the whole set of probabilities is specified as a function of a mean and variance — hardly credible as an exact description of a real data distribution, but its usefulness has been proven in countless applications. When statisticians write down a likelihood function, it is just an informed opinion regarding a data-generating process conditional on a parameter set. An alternative metric is the implied regulatory capital or the expected utility of the supervisor, which may be quite robust to the details of expert opinion. Closer to our application, one could instead focus on a set of plausible observed default rates over a set horizon with respect to obligors of a particular credit quality. A meaningful posterior produces a PD estimate not only as a compliance exercise but as a complete predictive distribution of default rates for other risk management purposes (credit decisions, account management, portfolio stress testing). Preparation involves identifying the expert, training the expert, and identifying what aspects of the problem to elicit. In practice there may be overlap between preparation and summarization: the choice of data to elicit often follows the choice of distributional form (e.g., for a simple parametric distribution like the beta as a prior for PD, a few quantiles may suffice, whereas a nonparametric kernel density may require more information). Elicitation is almost always iterative, so assessment of adequacy may return the process to further summarization.
E.g., a qualified expert has experience in related risk-management situations and suitable education; a natural choice of expert is one responsible for the relevant portfolio in a successful financial institution. Quality arguments: would the assessments convince other experts? Is the reasoning from partially similar portfolios or economic conditions? The basis of the expert's knowledge may be the state of the credit cycle, industry conditions of the obligors, or average features of the portfolio that are drivers of default risk. Conflicts of interest can arise, e.g., when a credit executive's bonus is a function of the regulatory capital charge on the portfolio. Documentation should include, e.g., code for any moment matching or smoothing performed.
B2 requires an annual default probability estimate over a sample long enough to cover a full economic cycle. Many discussions of the inference issue focus on the binomial model and the associated frequentist estimator.
This is true because, for fixed n, the data depend on defaults only through r, together with the sufficiency principle. This is the model underlying rating agency estimation of default rates, and the one used by banks with expert rating systems for wholesale exposures under B2.
Get the distribution of v_it from the normality of x; the conditional default probability follows from standardizing v_it. We are interested in estimating the marginal default probability, so we need the distribution of the year-t default rate. Note that this theta_t is different from the function for the conditional default probability above — the B2 formula for the stressed PD that goes into the regulatory capital expression.
Now theta_t is the conditional PD function, NOT the random variable of year-t defaults. The likelihood contribution in any year depends on the previous year — there is no intertemporal independence.
We choose the distribution that adds the minimum information (or maximum entropy, i.e., disorder) subject to K constraints (including that p is a proper probability distribution). The entropy H is a widely used measure of the information in an observation or experiment.
The joint distribution of data and parameter follows from the product rule, the marginal distribution of R from the definition of a marginal distribution, and the posterior distribution of the parameter from the definition of a conditional distribution. Taken together this is known as "Bayes' Rule", but it is simply a set of basic rules of probability. The posterior expectation has nice theoretical properties (under very general conditions — normality is not needed!) with respect to predicting a likely value of the parameter. For some models numerical integration of the likelihood is not feasible — it is in our cases, but then problems with the accuracy of the integration lead to problems in making inferences on the parameters; simulation techniques like MCMC are the alternative.
Experience with these methods is useful to gain understanding, and software providing valuable guidance and warnings is available online. Generally an acceptance rate of about 25% is good (see Roberts, Gelman, and Gilks (1997)), tuned by adjusting the variances of the proposal noise epsilon; scaling the proposal distribution allowed us acceptance rates of 22-25%. There is no way to prove convergence, but nonconvergence is often obvious from time-series plots. Long runs are better than short: M = 10-40K samples across Models 1-3, after a 5K burn-in.
Examples of uses of the posterior distribution: use the entire distribution of the PD to price credit and set in-house capital levels (economic capital); also useful for stress testing of IRB models (plugging a high quantile of the PD into the regulatory capital formula as a model-based adverse scenario).
Transcript
The Bayesian Approach to Default Risk: A Guide Michael Jacobs, Jr. Credit Risk Analysis Division Office of the Comptroller of the Currency Nicholas M. Kiefer Cornell University Departments of Economics and Statistical Science March 2010 Forthcoming: “Rethinking Risk Measurement & Reporting”, Risk Books, Ed. Klaus Böcker The views expressed herein are those of the authors and do not necessarily represent the views of the Office of the Comptroller of the Currency or the Department of the Treasury.
While it may not be important in "large" samples, expert information is of value if data are scarce, costly, or unreliable
Herein we illustrate the practical steps in a Bayesian analysis of PD estimation for a group of homogeneous assets
This is required for determining minimum regulatory capital under the Basel II (B2) framework (BCBS, 2006)
This also has implications for BCBS (2009a), which stresses the continuing importance of quantitative risk management
Focus first on elicitation & representation of expert information, then on Bayesian inference in nested simple models of default
As we do not know in advance whether default occurs or not, we model this uncertain event with a probability distribution
We assert that uncertainty about the default probability should be modeled the same way as uncertainty about defaults - i.e., represented in a probability distribution
There is information available to model the PD distribution - the fact that loans are made shows that risk assessment occurs!
The information from this should be organized & used in the analysis in a sensible, transparent and structured way
First we discuss the process for elicitation of expert information, and later show a particular example using the maximum entropy approach
Present a sequence of simple models of default generating likelihood functions (within the class of generalized linear mixed models):
The binomial model, 2-factor ASRM of B2, and an extension with an autocorrelated systematic risk factor
We sketch the Markov Chain Monte Carlo approach to calculating the posterior distribution from combining data and expert information coherently using the rules of probability
Illustrate all of these steps using annual Moody's corporate Ba default rates for the period 1999-2009
The simplest probability model for defaults for a homogeneous portfolio segment is Binomial, which assumes independence across assets & time, with common probability
As in Basel 2 IRB and the rating agencies, this is marginal with respect to conditions (e.g., through taking long-term averages)
Suppose the value of the i-th asset in time t is v_it, with v_it = e_it ~ i.i.d. N(0,1) across assets and time in the simplest case
Default occurs if the asset value falls below a common predetermined threshold gamma, so that the common default probability is theta = P(v_it < gamma) = Phi(gamma)
It follows that default on asset i is distributed Bernoulli(theta)
Denote the defaults in the data d_i (1 if asset i defaults, 0 otherwise) and the total count of defaults r = d_1 + ... + d_n
The distribution of the data is theta^r (1 - theta)^(n - r), and the default count r is Binomial(n, theta)
This is Model 1, which underlies rating agency estimation of default rates, where the maximum likelihood estimator is simply r/n
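A minimal numerical sketch of Model 1; the counts below are illustrative, not the Moody's data:

```python
from math import comb, log

# Model 1: defaults are i.i.d. Bernoulli(theta), so the count r of defaults
# in n obligor-years is Binomial(n, theta).  Counts are illustrative.
n, r = 1000, 12
theta_mle = r / n  # the MLE is simply r/n

def loglik(theta):
    """Binomial log-likelihood of r defaults in n trials."""
    return log(comb(n, r)) + r * log(theta) + (n - r) * log(1 - theta)

# sanity check: r/n maximizes the log-likelihood over a grid of candidate PDs
grid = [i / 1000 for i in range(1, 100)]
best = max(grid, key=loglik)
print(theta_mle, best)  # 0.012 0.012
```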
Basel II guidance suggests there may be heterogeneity due to systematic temporal changes in asset characteristics or to changing macroeconomic conditions, giving rise to our Model 2: v_it = sqrt(rho) x_t + sqrt(1 - rho) e_it
Where x_t ~ N(0,1) is a common time-specific shock (the systematic factor), e_it ~ N(0,1) is the idiosyncratic shock, and rho is the asset value correlation
The conditional distribution of the number of defaults in each period is Binomial given the period's conditional default rate, as in equation (6)
From this we obtain the distribution of defaults conditional only on the underlying parameters by integration over the default rate distribution, as in equation (6.1)
By intertemporal independence we obtain the data likelihood across all years, equation (7)
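The integration behind (6.1) can be sketched numerically. The values of the marginal PD, correlation, and counts below are illustrative assumptions, not the fitted parameters:

```python
import numpy as np
from scipy.stats import norm, binom

# Conditional on the systematic factor x_t, defaults are Binomial with the
# conditional PD; integrating x_t out against the standard normal gives the
# unconditional pmf of the year-t default count.
theta_bar, rho = 0.012, 0.20       # illustrative marginal PD and correlation
gamma = norm.ppf(theta_bar)        # default threshold

def cond_pd(x):
    return norm.cdf((gamma - np.sqrt(rho) * x) / np.sqrt(1.0 - rho))

def pmf_defaults(r, n, nodes=2001):
    # simple Riemann sum over the factor; quadrature would be more efficient
    x = np.linspace(-8.0, 8.0, nodes)
    integrand = binom.pmf(r, n, cond_pd(x)) * norm.pdf(x)
    return float(np.sum(integrand) * (x[1] - x[0]))

# with intertemporal independence, the likelihood is the product of these
# pmfs over years; here we just check the pmf is a proper distribution
probs = [pmf_defaults(r, 1000) for r in range(200)]
print(sum(probs))  # close to 1: nearly all mass is on 0-199 defaults
```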
Model 2 allows clumping of defaults within time periods, but not correlation across time periods, so the next natural extension lets the systematic risk factor x_t follow an AR(1) process:
The formula for the conditional PD (4) still holds, but we no longer obtain the Vasicek distribution of the default rate (5), and (6)-(6.1) must be rewritten directly in terms of the conditional PD, without the Vasicek-distributed default rate
Now the unconditional distribution is given by a T-dimensional integration, since the likelihood can no longer be factored period by period — equation (8)
Where f(x_1, ..., x_T) is the joint density of a zero-mean random variable following an AR(1) process
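A Monte Carlo sketch of the T-dimensional integral: simulate paths of the AR(1) systematic factor and average the product of conditional binomial pmfs. The AR(1) coefficient, PD, correlation, and yearly counts below are illustrative assumptions:

```python
import numpy as np
from scipy.stats import norm, binom

rng = np.random.default_rng(0)

phi = 0.3                                  # illustrative AR(1) coefficient
theta_bar, rho = 0.012, 0.20               # illustrative PD and correlation
gamma = norm.ppf(theta_bar)
defaults = np.array([5, 9, 14, 30, 11])    # hypothetical yearly default counts
n_oblig = np.array([900, 950, 1000, 1000, 980])

def simulate_paths(T, m):
    """Simulate m stationary AR(1) paths with unit unconditional variance."""
    x = np.empty((m, T))
    x[:, 0] = rng.standard_normal(m)
    for t in range(1, T):
        x[:, t] = phi * x[:, t - 1] + np.sqrt(1 - phi**2) * rng.standard_normal(m)
    return x

def mc_likelihood(m=20000):
    """Average the product of conditional binomial pmfs over simulated paths."""
    x = simulate_paths(len(defaults), m)
    pd = norm.cdf((gamma - np.sqrt(rho) * x) / np.sqrt(1 - rho))
    return float(binom.pmf(defaults, n_oblig, pd).prod(axis=1).mean())

print(mc_likelihood())
```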
While Model 1 is a very simple example of a Generalized Linear Model (GLM; McCullagh and Nelder, 1989), Models 2 & 3 are Generalized Linear Mixed Models (GLMMs), parametric mixture models (McNeil and Wendin, 2007; Kiefer, 2009)
We asked an expert to consider a portfolio of middle market loans in a bank's portfolio, typically commercial loans to un-rated companies (if rated, these would be about Moody's Ba-Baa)
This is an experienced banking professional in credit portfolio risk management and business analytics, having seen many portfolios of this type in different institutions
The expert thought the median value was 0.01 and the minimum 0.0001, that a value above 0.035 would occur with probability less than 10%, and that the absolute upper bound was 0.3
Quantiles were assessed by asking the expert to consider the value at which larger or smaller values would be equiprobable given the value was less or greater than the median
The 0.25 (0.75) quantile was assessed at 0.0075 (0.0125), and he added a 0.99 quantile at 0.2, splitting up the long upper tail from 0.035 to 0.3
How should we mathematically express the expert information?
Commonly we specify a parametric distribution, assuming standard identification properties (i.e., K conditions can determine a K-parameter distribution; see Kiefer, 2010a)
Disadvantage: there is rarely good guidance other than convenience of functional form, and this can insert extraneous information
We prefer the nonparametric maximum entropy (ME) approach (Cover & Thomas, 1991), where we choose a probability distribution p that maximizes the entropy H(p) = -∫ p log p subject to K constraints:
Our constraints are the values of the quantiles, and we can express the solution in terms of Lagrange multipliers chosen such that the constraints are satisfied; from the first-order conditions, the density is constant between adjacent quantiles:
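Using the quantiles elicited from the expert, the maximum-entropy solution can be sketched directly. Treating 0.035 as the 0.90 quantile is an assumption that interprets the "less than 10% above 0.035" statement as an equality:

```python
import numpy as np

# knots are the elicited quantile values; cum are cumulative probabilities
knots = np.array([0.0001, 0.0075, 0.01, 0.0125, 0.035, 0.2, 0.3])
cum = np.array([0.0, 0.25, 0.5, 0.75, 0.90, 0.99, 1.0])
height = np.diff(cum) / np.diff(knots)  # constant density on each interval

def p_me(theta):
    """Piecewise-uniform maximum-entropy prior density for the PD."""
    theta = np.atleast_1d(theta)
    out = np.zeros_like(theta, dtype=float)
    inside = (theta >= knots[0]) & (theta < knots[-1])
    idx = np.searchsorted(knots, theta[inside], side="right") - 1
    out[inside] = height[idx]
    return out

# sanity check: the density integrates to one over the expert's support
grid = np.linspace(knots[0], knots[-1], 100001)
print(np.sum(p_me(grid)) * (grid[1] - grid[0]))  # approximately 1
```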
This is a piecewise uniform distribution, which we decide to smooth with an Epanechnikov kernel, under the assumption that discontinuities are unlikely to reflect the expert’s view:
Where h is the bandwidth, chosen such that the expert was satisfied with the final product
We address the boundary problem — the kernel-smoothed estimate has larger support than p_ME — using the reflection technique (Schuster, 1985)
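A sketch of kernel smoothing with boundary reflection; the density, bandwidth, and support below are illustrative stand-ins for the elicited prior:

```python
import numpy as np

def epanechnikov(u):
    """Epanechnikov kernel, zero outside |u| <= 1."""
    return np.where(np.abs(u) <= 1.0, 0.75 * (1.0 - u**2), 0.0)

def smooth_with_reflection(theta, grid, dens, h, lo, hi):
    """Kernel-smooth a density tabulated on `grid`, adding the estimates at
    points reflected about lo and hi so that mass pushed outside the support
    is folded back in (the reflection technique)."""
    dx = grid[1] - grid[0]

    def kernel_estimate(pts):
        return np.sum(epanechnikov((pts[:, None] - grid[None, :]) / h) * dens,
                      axis=1) * dx / h

    vals = (kernel_estimate(theta)
            + kernel_estimate(2.0 * lo - theta)
            + kernel_estimate(2.0 * hi - theta))
    return np.where((theta >= lo) & (theta <= hi), vals, 0.0)

# illustrative discontinuous density on [0, 0.3] that integrates to one
lo, hi = 0.0, 0.3
grid = np.linspace(lo, hi, 1201)
dens = np.where(grid < 0.05, 12.0, 1.6)
theta = np.linspace(lo, hi, 1201)
smoothed = smooth_with_reflection(theta, grid, dens, h=0.01, lo=lo, hi=hi)
print(np.sum(smoothed) * (theta[1] - theta[0]))  # still approximately 1
```

Without the reflection terms, mass near the support boundaries would leak outside and the smoothed density would integrate to less than one.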
For the asset correlation in Models 2 & 3, B2 recommends a value of about 0.20 for this segment, so given little expert information on this parameter, we choose a Beta(12.6, 50.4) prior centered at 0.20
With even less guidance on the autocorrelation in Model 3, other than from the asset pricing literature that it is likely to be positive, we chose a uniform prior on [-1,1], with the B2 value of 0 as its mean
Let us write the likelihood function of the data generically as L(R | theta)
The joint distribution of the data R and the parameter theta with prior p is p(theta) L(R | theta)
The marginal (predictive) distribution of R is m(R) = ∫ p(theta) L(R | theta) d theta
Finally, we obtain the posterior (conditional) distribution of the parameter as p(theta | R) = p(theta) L(R | theta) / m(R)
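These steps can be carried out by direct numerical integration for Model 1. The flat prior and the counts below are illustrative stand-ins for the elicited prior and the Moody's data:

```python
import numpy as np
from math import comb

# grid over the parameter space, prior p(theta), likelihood L(R | theta)
n, r = 1000, 12                                   # illustrative counts
theta = np.linspace(1e-4, 0.3, 5000)
dtheta = theta[1] - theta[0]
prior = np.full_like(theta, 1.0 / (0.3 - 1e-4))   # uniform stand-in prior
lik = comb(n, r) * theta**r * (1.0 - theta)**(n - r)

marginal = np.sum(prior * lik) * dtheta           # m(R), the predictive
posterior = prior * lik / marginal                # p(theta | R)
post_mean = np.sum(theta * posterior) * dtheta    # E[theta | R]
print(post_mean)  # close to (r + 1) / (n + 2) under the flat prior
```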
Perhaps take a summary statistic like E[theta | R], the posterior expectation, for B2 or other purposes, which is (asymptotically) optimal under (bowl-shaped) quadratic loss
Computationally, high-dimensional numerical integration may be hard and inference a problem; therefore we turn to simulation techniques
Inference: Computation by Markov Chain Monte Carlo
MCMC methods are a wide class of procedures for sampling from a distribution when the normalizing constant is unknown
In the simple case, the Metropolis method, we sample from our posterior distribution, which is only known up to a constant
We construct a Markov Chain which has this as its stationary distribution by starting with a proposal distribution
The new parameter depends on the old one stochastically, and the diagonal covariance matrix of the normal proposal error is chosen to make the algorithm work well
We draw from this distribution and accept the new draw with probability given by the ratio of the joint densities of the data and the parameter; the fraction of draws accepted is the acceptance rate
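A one-dimensional random-walk Metropolis sketch. The target below is a stand-in shaped like the Model 1 posterior under a flat prior (r = 12, n = 1000); the proposal scale is a tunable assumption:

```python
import numpy as np

rng = np.random.default_rng(42)

def metropolis(log_target, start, scale, m=20000):
    """Random-walk Metropolis: normal proposals around the current value,
    accepted with probability min(1, target_new / target_old)."""
    draws = np.empty(m)
    cur, cur_lp = start, log_target(start)
    accepted = 0
    for i in range(m):
        prop = cur + scale * rng.standard_normal()
        prop_lp = log_target(prop)
        if np.log(rng.uniform()) < prop_lp - cur_lp:  # accept/reject step
            cur, cur_lp = prop, prop_lp
            accepted += 1
        draws[i] = cur
    return draws, accepted / m  # second value is the acceptance rate

def log_target(t):
    """Un-normalized log posterior: flat prior times binomial likelihood."""
    if t <= 0.0 or t >= 1.0:
        return -np.inf
    return 12 * np.log(t) + 988 * np.log(1.0 - t)

draws, rate = metropolis(log_target, start=0.012, scale=0.01)
print(draws[5000:].mean(), rate)  # posterior mean near 0.013 after burn-in
```

In practice the proposal scale would be tuned until the acceptance rate lands near the 22-25% range cited in the notes above.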
Modeling the data distribution & expert information statistically increases the range of applicability of econometric methods
We have gone through the steps of a formal Bayesian analysis for PD, required under B2 for many institutions worldwide
We concluded with posterior distributions for the parameters of a nested sequence of models with summary statistics
The mean PD is a natural estimator for minimum regulatory capital requirements, but such distributions have many uses
E.g., stressing IRB models, economic capital or credit pricing
More general models provide insight into the extent to which default rates over time are predictable & the extent to which risk calculations should look ahead over a number of years
Analysis of LGD or economic capital using Bayesian methods (jointly with PD?) would be useful (substantial experience here)
Many other possible analyses could build on these methods