Validation of Economic Capital Models: Risk Conference (Jacobs, 1/10, V1)



Slide notes:
  • VaR is increasing in size. Comparing the different risk aggregation methods across banks, we observe that the VCA consistently produces the lowest VaR and either the ECS or AGCS the highest, followed by the TCS on the conservative side; the GCS serves as the “benchmark”, the FCS is usually toward the middle, and the CCS is toward the low side but above the VCA. The FCS tends to come in close to the GCS but usually just a little lower, and the TCS just a little higher than the GCS (in some cases not by much). E.g., for T200: GCS 764B vs. ECS 859B vs. AGCS 930B (the largest) vs. VCA 688B (the smallest), with CCS 728B on the small side and FCS 752B toward the middle.
  • NCVs for the PDB are greater than for VaR: excluding the VCA & ECS, 17-76% vs. 7-12%. The ECS is uniformly the lowest at 12.2-17.6% and the VCA much higher at 83.4-158.2%. Excluding those two, the AGCS is notably higher than the standard copulas at 44.1-75.5%; the GCS is on the low side at 17.3-34.3% vs. the TCS slightly higher at 22.5-39.5%, with the CCS at 22.7-30.0% close to the GCS on the lower end.
  • As a % of the BVA, VaR looks to be somewhat increasing with size: 6-8% for AT200, the lower-to-mid teens for T45, and the mid-to-high teens for PNC.
  • PDBs vary widely across banks and models, but we cannot discern differences by business mix. The ECS yields very high values, implying risk would roughly double to triple if risks were simply added. Excluding the ECS, the range is 10-50%: the GCS sits toward the middle (41-58%), the VCA at the lower end (31-41%), and the AGCS lowest (10-21%).
  • The GOF tests are highly mixed: we reject the null that the model fits the data (relative to the EC) in just under half the cases (14/30), and the results do not lend themselves to a clear pattern. But in general the rejections of fit are not at very high significance levels, so perhaps the models do a decent job: only 3 rejections are beyond the 1% level (AGCS for AT200 & JPMC, CCS for JPMC), only 1 at 5% (FCS for AT200), and the remaining 9 only at 10%. AT200 is rejected in the most cases (but 1 & 2 of these at 10% & 5%), followed by JPMC (2 rejections, TCS & AGCS), then CITI and WELLS (2 rejections each at 10%), with BofA & PNC the least (1 each at 10%). The GCS is maybe OK in that it is rejected only for AT200 at 5%. Across banks the TCS & FCS are rejected the most often at 4 times each (4/3 at 10%, FCS 1 at 5%); the AGCS has only 2 rejections, but both at the 1% level and with the lowest p-values, and the CCS has the other 1% rejection plus one 10% rejection.
  • The VCA is much higher than anything else at 27.8-45.3%. The GCS at 7.1-9% is slightly below the ECS at 8.1-13.6%; the TCS is close to the ECS at 7.5-16.4%, the AGCS in between at 7.5-10.8%, and the CCS on the low side at 5.9-7.0%. It is hard to see a business-mix pattern (JPMC/CITI vs. the others).
  • The marginals' contribution to bootstrap variation is an order of magnitude greater: excluding the ECS/VCA, 6-14% for the correlations vs. 34-70% for the marginals. The TCS is highest at 43.6-62.2% and the GCS at 35.4-44.8%; the AGCS is close to the GCS but with more range at 33.5-52.7%, and the CCS has the most range at 25.2-69.6%. It is hard to see a business-mix pattern (JPMC/CITI vs. the others).
  • NCVs for the PDB are much greater than for VaR: excluding the VCA & ECS, 17-76% vs. 7-12%. The ECS is uniformly the lowest at 12.2-17.6% and the VCA much higher at 83.4-158.2%. Excluding those two: the TCS is now highest at 39.5-69.5%, the GCS is at the high end at 38.6-56.7% and close to the AGCS on the high side at 36.8-53.1%, with the CCS lowest at 19.9-43.7%.

    1. 1. Validation of Economic Capital Models: State of the Practice, Supervisory Expectations and Results from a Bank Study Michael Jacobs, Ph.D., CFA Senior Economist / Credit Risk Analysis Division U.S. Office of the Comptroller of the Currency Risk Conference on Economic Capital, February 2010 The views expressed herein are those of the author and do not necessarily represent the views of the Office of the Comptroller of the Currency or the Department of the Treasury.
    2. 2. Outline <ul><li>Introduction, Background and Motivation </li></ul><ul><li>Fitness for Use of Economic Capital (EC) Models </li></ul><ul><li>Providing Confidence Regarding EC Model Assumptions </li></ul><ul><li>Assessing the Value of Validation Methodologies </li></ul><ul><ul><li>Qualitative Approaches </li></ul></ul><ul><ul><ul><li>Use Testing </li></ul></ul></ul><ul><ul><ul><li>Data Quality Analysis </li></ul></ul></ul><ul><ul><li>Quantitative Approaches </li></ul></ul><ul><ul><ul><li>Validating of Inputs and Parameters </li></ul></ul></ul><ul><ul><ul><li>Model Replication and Benchmarking </li></ul></ul></ul><ul><ul><ul><li>Stress Testing </li></ul></ul></ul><ul><li>Technical Challenges in Testing the Accuracy of EC Models </li></ul><ul><ul><li>The Tails of the Loss Distribution </li></ul></ul><ul><ul><li>Example: Alternative Models for Risk Aggregation </li></ul></ul><ul><li>Effective Reporting of EC Model Outputs </li></ul><ul><ul><li>Avoidance of Misuse and Misunderstanding of the EC Model </li></ul></ul>
    3. 3. Introduction, Background and Motivation <ul><li>The validation of EC models is at a very preliminary stage </li></ul><ul><li>EC models can be complex, having many components, and it may not be immediately obvious that such a model works satisfactorily </li></ul><ul><li>Models may embody assumptions about relationships amongst or behavior of variables that may not always hold (e.g., stress) </li></ul><ul><li>Validation can provide a degree of confidence that assumptions are appropriate, increasing the confidence of users in the model outputs </li></ul><ul><li>Additionally, validation can also be useful in identifying the limitations of EC models (i.e., where embedded assumptions do not fit reality) </li></ul><ul><li>There exists a wide range of validation techniques, each providing evidence regarding only some of the desirable properties of a model </li></ul><ul><li>Such techniques are powerful in some areas (risk sensitivity) but not in others (accuracy, overall/absolute or in the tail of the distribution) </li></ul>
    4. 4. Introduction, Background and Motivation (continued) <ul><li>Used in combination, particularly with good controls and governance, a range of validation techniques can provide more substantial evidence for or against the performance of the model </li></ul><ul><li>There appears to be scope for the industry to improve the validation practices that shed light on the overall calibration of models, particularly in cases where assessment of overall capital is an important application of the model </li></ul>
    5. 5. Fitness for Purpose of Economic Capital Models <ul><li>In some cases the term validation is used exclusively to refer to statistical ex post validation (e.g., backtesting of a VaR) </li></ul><ul><li>In other cases it is seen as a broader but still quantitative process that also incorporates evidence from the model development stage </li></ul><ul><li>Herein, “validation” is meant broadly, meaning all the processes that provide evidence-based assessment of a model's fitness for purpose </li></ul><ul><li>This assessment might extend to the management and systems environment within which the model is operated </li></ul><ul><li>It is advisable that validation processes are designed alongside development of the models, rather than sequentially after the fact </li></ul><ul><li>This interpretation of validation is consistent with the Basel Committee (2005) in relation to the Basel II Framework </li></ul><ul><ul><li>However, that was phrased in terms of the IRB parameters & developed in the context of assessment of risk estimates for use in minimum capital requirements </li></ul></ul><ul><ul><li>Validation of EC differs from that of an IRB model, as the output is a distribution rather than a single predicted forecast against which actual outcomes may be compared </li></ul></ul>
    6. 6. Fitness for Purpose of EC Models (continued) <ul><li>EC models are conceptually similar to VaR models, but several differences force validation methods to differ in practice from those used in VaR </li></ul><ul><ul><li>Long time horizon, high confidence levels, and the scarcity of data </li></ul></ul><ul><li>Full internal EC models are not used for Pillar 1 minimum capital requirements, so fitness for purpose needs to cover a range of uses </li></ul><ul><ul><li>Most, and perhaps all, of these uses are internal to the firm in question </li></ul></ul><ul><li>Note that EC models and regulatory capital serve different objectives & may reasonably differ in some details of implementation </li></ul><ul><li>BCBS’s validation principle 1 refers to the predictive ability of credit rating systems, an emphasis on performance of model forecasts </li></ul><ul><li>The natural evolution of this principle for EC is that validation is concerned with the predictive properties of those models </li></ul><ul><ul><li>I.e., they embody forward-looking estimates of risk & their validation involves assessing those estimates, so this related principle remains appropriate. </li></ul></ul><ul><li>The broadly interpreted validation processes set out herein all provide insight, in different ways, into the predictive ability of EC models </li></ul>
    7. 7. Providing Confidence Regarding EC Model Assumptions <ul><li>Properties of an EC model that can be assessed using powerful tools, and hence that are capable of robust assessment, include: </li></ul><ul><ul><li>Integrity of model implementation </li></ul></ul><ul><ul><li>Grounded in historical experience </li></ul></ul><ul><ul><li>Sensitivity to risk & to external environment </li></ul></ul><ul><ul><li>Good marginal properties </li></ul></ul><ul><ul><li>Rank ordering & relative quantification. </li></ul></ul><ul><li>Properties for which only weaker processes are available include: </li></ul><ul><ul><li>Conceptual soundness </li></ul></ul><ul><ul><li>Degree to which forward-looking </li></ul></ul><ul><ul><li>Absolute risk quantification </li></ul></ul><ul><li>It is important to assess the power of individual tests & to acknowledge that views as to strength and weakness are likely to differ. </li></ul>
    8. 8. Providing Confidence Regarding EC Model Assumptions (cont.) <ul><li>There is great difficulty in validating conceptual soundness of an EC model due to many untestable or hard-to-test assumptions made: </li></ul><ul><ul><li>Family of statistical distributions for risk factors </li></ul></ul><ul><ul><li>Economic processes driving default or loss </li></ul></ul><ul><ul><li>Dependency structure among defaults or losses </li></ul></ul><ul><ul><li>Likely behavior of management or economic agents, and how these vary over time. </li></ul></ul><ul><li>Some EC models are risk aggregation models, where estimates for individual categories are combined to generate a single risk figure </li></ul><ul><ul><li>There may be no best or unique way to do this aggregation </li></ul></ul><ul><li>Since many of these assumptions may be untestable, it may be impossible to be certain that a model is conceptually sound </li></ul><ul><li>While the conceptual underpinnings may appear coherent and plausible, they may in practice be no more than untested hypotheses </li></ul><ul><li>Opinions may reasonably differ about the strength or weakness of any particular process in respect of any given property </li></ul>
    9. 9. Validation of EC Models: Introduction to Range of Practice <ul><li>While we will describe the types of validation processes that are in use or could be used, note that the list is not comprehensive </li></ul><ul><li>We do not suggest that all techniques should or could be used by banks </li></ul><ul><li>We wish to demonstrate that there is a wide range of techniques potentially covered by our broad definition of validation </li></ul><ul><li>This creates a layered approach: the more (fewer) layers that can be provided, the more (less) comfort validation is able to provide for or against the performance of the model </li></ul><ul><li>Each validation process provides evidence for (or against) only some of the desirable properties of a model </li></ul><ul><li>The list presented below moves from the more qualitative to the more quantitative validation processes, and the extent of use is briefly discussed </li></ul>
    10. 10. Validation of EC Models: Range of Practice in Qualitative Approaches <ul><li>The philosophy of the use test as incorporated into the Basel II Framework: if a bank is actually using its risk measurement systems for internal purposes, then we can place more reliance on them </li></ul><ul><ul><li>Applying the use test successfully will entail gaining a careful understanding of which model properties are being used and which are not </li></ul></ul><ul><li>Banks tend to subject their models to some form of qualitative review process, which could entail: </li></ul><ul><ul><li>Review of documentation or development work </li></ul></ul><ul><ul><li>Dialogue with model developers or model managers </li></ul></ul><ul><ul><li>Review and derivation of any formulae or algorithms </li></ul></ul><ul><ul><li>Comparison to other firms or with publicly available information </li></ul></ul><ul><li>Qualitative review is best able to answer questions such as: </li></ul><ul><ul><li>Does the model work in theory? </li></ul></ul><ul><ul><li>Does the model incorporate the right risk drivers? </li></ul></ul><ul><ul><li>Is any theory underpinning it conceptually well-founded? </li></ul></ul><ul><ul><li>Is the mathematics of the model right? </li></ul></ul>
    11. 11. Range of Practice in Qualitative Approaches to Validation (continued) <ul><li>Extensive systems implementation testing is standard for production-level risk measurement systems prior to implementation </li></ul><ul><ul><li>Such as user acceptance testing, checking of model code etc. </li></ul></ul><ul><ul><li>These processes could be viewed as part of the overall validation effort, since they would assist in evaluating whether the model is implemented with integrity </li></ul></ul><ul><li>Management oversight is the involvement of senior management in the validation process </li></ul><ul><ul><li>E.g., reviewing output from the model & using the results in business decisions. </li></ul></ul><ul><ul><li>Senior management knowing how the model is used & how outputs are interpreted. </li></ul></ul><ul><ul><li>This should take account of the specific implementation framework adopted and the assumptions underlying the model and its parameterization. </li></ul></ul><ul><li>Data quality checks refer to the processes designed to provide assurance of the completeness, accuracy and appropriateness of data used to develop, validate and operate the model. </li></ul><ul><ul><li>E.g., review of: data collection and storage, data cleaning of errors, extent of proxy data, processes that need to be followed to convert raw data into suitable model inputs, and verification of transaction data such as exposure levels </li></ul></ul><ul><ul><li>While not traditionally viewed by the industry as a form of validation, this is increasingly forming a major part of regulatory thinking </li></ul></ul>
    12. 12. Range of Practice in Qualitative Approaches to Validation (concluded) <ul><li>As all models rest on premises of various kinds, varying in the degree to which they are obvious, we have examination of assumptions </li></ul><ul><li>Certain aspects of an EC model are 'built-in' and cannot be altered without fundamentally changing the model. </li></ul><ul><li>To illustrate, these assumptions could be about: </li></ul><ul><ul><li>Fixed model parameters (PDs, correlations or recovery rates) </li></ul></ul><ul><ul><li>Distributional assumptions (shape of tail distributions) </li></ul></ul><ul><ul><li>Behavior of senior management or of customers </li></ul></ul><ul><li>Some banks go through a deliberate process of detailing the assumptions underpinning their models, including examination of: </li></ul><ul><ul><li>Impact on model outputs </li></ul></ul><ul><ul><li>Limitations that the assumptions place on model usage and applicability. </li></ul></ul>
    13. 13. Range of Practice in Quantitative Approaches to Validation: Inputs <ul><li>A complete validation of an EC model would involve the inputs and parameters , both those that may be statistically estimated and those that are assumed </li></ul><ul><ul><li>Examples of estimated (assumed) parameters are the main IRB parameters such as PD or LGD (e.g., PD in a low-default portfolio) </li></ul></ul><ul><li>Techniques could include assessing parameters against: </li></ul><ul><ul><li>Historical data through replication of estimators </li></ul></ul><ul><ul><li>Outcomes over time through backtesting </li></ul></ul><ul><ul><li>Market-implied parameters such as implied volatility or implied correlation </li></ul></ul><ul><ul><li>Materiality of model output to input and parameters through sensitivity testing </li></ul></ul><ul><li>This testing of input parameters could complement examination of assumptions & sensitivity testing described previously </li></ul><ul><ul><li>However, checking of model inputs is unlikely to be fully satisfactory since every model is based on underlying assumptions </li></ul></ul><ul><li>The more sophisticated the model, the more susceptible to model error, so checking input parameters will not help here </li></ul><ul><ul><li>However, model accuracy and appropriateness can be assessed, at least to some degree, using the processes described in this section </li></ul></ul>
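To make the sensitivity-testing idea above concrete, the sketch below perturbs the inputs of a stylized asymptotic single-risk-factor (Vasicek) capital formula one at a time and compares the perturbed capital to a base case. The formula is a generic stand-in for an EC model, not one used in any bank study here, and the base parameter values are hypothetical.

```python
import math
from statistics import NormalDist

N = NormalDist()

def vasicek_capital(pd, lgd, rho, q=0.999):
    """Stylized single-factor (Vasicek) capital: conditional expected loss
    at the q-quantile of the systematic factor, less expected loss."""
    cond_pd = N.cdf((N.inv_cdf(pd) + math.sqrt(rho) * N.inv_cdf(q))
                    / math.sqrt(1.0 - rho))
    return lgd * (cond_pd - pd)

def sensitivity(param_grid, base):
    """One-at-a-time sensitivity: recompute capital with each input perturbed."""
    base_k = vasicek_capital(**base)
    table = {name: [(v, vasicek_capital(**{**base, name: v})) for v in values]
             for name, values in param_grid.items()}
    return base_k, table

# hypothetical base case and perturbations
base = dict(pd=0.01, lgd=0.45, rho=0.15)
base_k, table = sensitivity({"pd": [0.005, 0.02], "rho": [0.10, 0.20]}, base)
```

Capital should move monotonically with PD and with the asset correlation rho, and the relative size of the swings gives a materiality ranking of the inputs, which is the question sensitivity testing is meant to answer.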
    14. 14. Range of Practice in Quantitative Validation: Model Replication <ul><li>Model replication is a useful technique that attempts to replicate EC model results obtained by the bank </li></ul><ul><li>This could use independently developed algorithms or data sources, but in practice replication might leverage a bank’s existing processes </li></ul><ul><ul><li>E.g., run the bank's algorithms on a different data set, or vice versa, once either of these has been validated and is reliable </li></ul></ul><ul><li>This technique and the questions that often arise in implementing replication can help identify if: </li></ul><ul><ul><li>Definitions & algorithms the bank claims to use are correctly understood by the staff who develop, maintain, operate and validate the model </li></ul></ul><ul><ul><li>The bank is using in practice the modeling framework that it purports to </li></ul></ul><ul><ul><li>Computer code is correct, efficient and well-documented </li></ul></ul><ul><ul><li>Data used in this validation are those used by the bank to obtain its results </li></ul></ul><ul><li>However, this technique is rarely sufficient to validate models, and in practice there is little evidence of it being used by banks for either validation or to explore the degree of accuracy of their models </li></ul><ul><li>Note that replication simply by re-running a set of algorithms to produce an identical set of results would not be sufficient model validation due diligence </li></ul>
    15. 15. Range of Practice in Quantitative Validation: Benchmarking <ul><li>Benchmarking and hypothetical portfolio testing is the examination of whether the model produces results comparable to a standard reference model, or comparing models on a set of reference portfolios </li></ul><ul><ul><li>E.g., benchmarking could be a comparison of an in-house EC model to other well-known or vendor models (after standardization of parameters) </li></ul></ul><ul><li>Benchmarking is among the most commonly used forms of quantitative validation </li></ul><ul><li>From a supervisory perspective, this permits comparison of several banks' models against the same reference model and identification of models that produce outliers </li></ul><ul><li>Hypothetical portfolio testing means comparison of models against the same reference portfolio, internal or external to the bank </li></ul><ul><ul><li>Capable of addressing similar questions to benchmarking by different means. </li></ul></ul><ul><ul><li>The technique is a powerful one and can be adapted to analyse many of the preferred model properties such as rank-ordering and relative risk quantification </li></ul></ul><ul><li>A limitation of benchmarking is it only provides relative assessments and provides little assurance that any model accurately reflects reality or about the absolute levels of capital </li></ul>
    16. 16. Range of Practice in Quantitative Validation: Benchmarking (continued) <ul><li>Therefore, as a validation technique, benchmarking is limited to providing comparison of one model against another or one calibration to others, but not testing against ‘reality’. </li></ul><ul><li>It is therefore difficult to assess the degree of comfort provided by such benchmarking methods, as they may only be capable of providing broad comparisons confirming that input parameters or model outputs are broadly comparable </li></ul><ul><li>There may be good reasons why models produce outliers in benchmarking, all of which complicate interpretation of the results: </li></ul><ul><ul><li>May be designed to perform well under differing circumstances </li></ul></ul><ul><ul><li>May be more or less conservatively parameterized </li></ul></ul><ul><ul><li>May differ in their economic foundations </li></ul></ul><ul><li>Comparisons of internal EC are made with varied alternatives: </li></ul><ul><ul><li>Industry survey results </li></ul></ul><ul><ul><li>Rating agency or industry-wide models </li></ul></ul><ul><ul><li>Consultancy marketed models </li></ul></ul><ul><ul><li>Academic papers </li></ul></ul><ul><ul><li>Regulatory capital models </li></ul></ul>
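As a toy illustration of hypothetical portfolio testing, the sketch below runs two deliberately simple "models" (a normal approximation and an empirical quantile) over the same hypothetical reference loss history and compares their 99% VaRs; the data and both models are invented for illustration, not drawn from the study.

```python
import random
import statistics
from statistics import NormalDist

def var_normal(losses, q=0.99):
    """Benchmark model A: normal approximation from the sample mean and stdev."""
    return statistics.mean(losses) + NormalDist().inv_cdf(q) * statistics.stdev(losses)

def var_empirical(losses, q=0.99):
    """Benchmark model B: empirical q-quantile of the same reference sample."""
    s = sorted(losses)
    return s[min(len(s) - 1, int(q * len(s)))]

# common hypothetical reference portfolio: a skewed (lognormal) loss history
rng = random.Random(7)
losses = [rng.lognormvariate(0.0, 1.0) for _ in range(500)]
ratio = var_empirical(losses) / var_normal(losses)
```

A ratio far from 1 flags one model as an outlier relative to the other, which is the kind of relative comparison described above; as the slides note, neither number says anything about absolute accuracy.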
    17. 17. Range of Practice in Quantitative Validation: Backtesting <ul><li>Backtesting addresses the question of how well the model forecasts the distribution of outcomes. </li></ul><ul><li>There are many forms of this that entail some degree of comparison of outcomes to forecasts, and there is a wide literature on the subject. </li></ul><ul><li>However, weak power of backtesting tests for models of risk that quantify high quantiles has been noted </li></ul><ul><ul><li>E.g., for portfolio credit models see BCBS (1999) </li></ul></ul><ul><li>Variations to the basic backtesting approach which can increase the power of the tests have been suggested in the literature: </li></ul><ul><ul><li>Backtesting more frequently over shorter holding periods (e.g., in market risk using a one-day standard versus the 10-day regulatory capital standard) </li></ul></ul><ul><ul><li>Using cross-sectional data on a range of reference portfolios </li></ul></ul><ul><ul><li>Using information in forecasts of the full distribution </li></ul></ul><ul><ul><li>Testing expected values of distributions as opposed to high quantiles </li></ul></ul><ul><li>Backtesting is useful principally for models whose outputs are a quantifiable metric with which to compare an outcome </li></ul><ul><li>However, some risk measurement systems in use have outputs that cannot be interpreted in this way and cannot be backtested </li></ul>
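One standard form of the outcome-versus-forecast comparison described above is an exceedance-count test such as Kupiec's proportion-of-failures (POF) likelihood ratio, sketched below with hypothetical observation counts. Its weak power at high quantiles is easy to see: even six exceedances of a 99% VaR in 250 days falls short of the 5% chi-squared critical value of 3.84.

```python
import math

def kupiec_pof(n_obs, n_exceed, q):
    """Kupiec proportion-of-failures LR for a VaR at quantile q.
    Tests H0: true exceedance probability = 1 - q; asymptotically
    chi-squared with 1 degree of freedom under H0."""
    def loglik(prob):
        # Bernoulli log-likelihood of n_exceed exceedances in n_obs trials
        if prob in (0.0, 1.0):
            return 0.0 if n_exceed in (0, n_obs) else float("-inf")
        return (n_exceed * math.log(prob)
                + (n_obs - n_exceed) * math.log(1.0 - prob))
    return 2.0 * (loglik(n_exceed / n_obs) - loglik(1.0 - q))

# hypothetical backtest: 250 daily observations, 6 exceedances of a 99% VaR
lr = kupiec_pof(250, 6, 0.99)
```

For the long horizons and high confidence levels of EC models, annual observations make this count-based approach even weaker, which is why the variations listed above matter.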
    18. 18. Range of Practice in Quantitative Validation: Backtesting (continued) <ul><li>Such risk measurement approaches not amenable to outcomes-based validation might nevertheless be valuable tools for banks </li></ul><ul><ul><li>E.g., rating systems, sensitivity tests and aggregated stress losses. </li></ul></ul><ul><ul><li>The role of backtesting for such models, if used, would need elaboration </li></ul></ul><ul><li>In practice, backtesting is not yet a key component of banks' validation practices for economic capital purposes </li></ul>
    19. 19. Range of Practice in Quantitative Validation: Stress Testing <ul><li>Stress testing covers both stressing of the model and comparison of model outputs to stress losses </li></ul><ul><li>The outputs of the model might be examined under conditions where model inputs and model assumptions might be stressed </li></ul><ul><li>This process can reveal model limitations or highlight capital constraints that might only become apparent under stress </li></ul><ul><li>Stress testing of regulatory capital models, particularly IRB models, is undertaken by banks but there is more limited evidence of stress testing of economic capital models </li></ul><ul><li>Through a complementary programme of stress testing, the bank may be able to quantify the likely losses that the firm would confront under a range of stress events </li></ul><ul><li>Comparison of stress losses against model-based capital estimates may provide a modest degree of comfort of the absolute level of capital. </li></ul><ul><li>Banks report some use of this stress testing technique to validate the approximate level of model output. </li></ul>
    20. 20. Range of Practice in Validation: Additional Considerations <ul><li>We have not mentioned internal audit , but validation of the overall implementation framework and process should also be subject to independent and periodic review </li></ul><ul><li>This work should be carried out by parties within the banking organization that are independent of those accountable for the design and implementation of the validation process </li></ul><ul><li>A possibility is that internal audit would be in charge of undertaking this review process, and as such it could be viewed as comprising a part of the management oversight process listed above </li></ul><ul><li>The list of validation tools also does not address the issue of adequate internal standards relevant for validation </li></ul><ul><li>Examples of such standards include: </li></ul><ul><ul><li>A description of the issues that need to be addressed as part of validation </li></ul></ul><ul><ul><li>The standards that capital models are expected to achieve </li></ul></ul><ul><ul><li>A series of quantitative thresholds that models need to meet </li></ul></ul><ul><ul><li>Warning indicators for particular monitoring metrics </li></ul></ul><ul><ul><li>Assessment against model development standards </li></ul></ul>
    21. 21. Technical Challenges in Assessing the Adequacy of an EC Model <ul><li>A fundamental difficulty faced in EC modeling is that the lack of data to estimate high quantiles in the tails of the loss distribution leads to a very high degree of uncertainty </li></ul><ul><li>One approach to dealing with this problem is Bayesian techniques, which combine expert assessments with available data (Kiefer, 2008) </li></ul><ul><ul><li>However, this is computationally demanding, and also requires the elicitation of a prior distribution from an expert, which is very involved </li></ul></ul><ul><ul><li>As we don’t see this used in practice currently, we will not further pursue this approach here </li></ul></ul><ul><li>As noted previously, traditional backtesting procedures as applied in market risk VaR models are impractical in an EC model setting </li></ul><ul><li>An alternative approach is to try to assess the accuracy of the EC output by approximating a statistical measure of uncertainty </li></ul><ul><ul><li>But thin data in the tails implies that such confidence bounds are likely to be very wide </li></ul></ul>
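The confidence-bound idea in the last bullet can be sketched with a percentile bootstrap of an empirical quantile. The loss sample below is simulated, and the 60-observation size is chosen only to mimic a short quarterly history; the width of the resulting interval makes the "very wide bounds" point concrete.

```python
import random

def empirical_quantile(xs, q):
    """Order-statistic estimate of the q-quantile."""
    s = sorted(xs)
    return s[max(0, min(len(s) - 1, int(q * len(s))))]

def bootstrap_var_ci(losses, q=0.95, n_boot=2000, alpha=0.05, seed=42):
    """Percentile-bootstrap confidence interval for the q-quantile (VaR)."""
    rng = random.Random(seed)
    n = len(losses)
    boots = sorted(
        empirical_quantile([losses[rng.randrange(n)] for _ in range(n)], q)
        for _ in range(n_boot)
    )
    return boots[int(alpha / 2 * n_boot)], boots[int((1 - alpha / 2) * n_boot) - 1]

# hypothetical short history: 60 simulated quarterly losses
rng = random.Random(0)
losses = [rng.expovariate(1.0) for _ in range(60)]
lo, hi = bootstrap_var_ci(losses)
```

The same resampling idea, applied to model parameters rather than raw losses, is the non-parametric bootstrap used in the risk aggregation study that follows.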
    22. 22. EC Model Validation Example: Alternative Risk Aggregation Models <ul><li>“ Models for Risk Aggregation and Sensitivity Analysis: An Application to Bank Economic Capital”, by Hulusi Inanoglu and Michael Jacobs, OCC & Federal Reserve BOG, Working Paper </li></ul><ul><li>Develops proxies for 5 risk types (credit, market, operational, trading and interest income) from historical quarterly call report data for the 5 largest banks as of 4Q08 </li></ul><ul><li>Compares EC output of different copula models for combining these according to absolute levels and variability </li></ul><ul><ul><li>Uses a non-parametric bootstrap to assess the sensitivity of the output to estimation error in the inputs (parameters of margins and correlations) </li></ul></ul><ul><li>While not a study of EC model validation per se, this illustrates several quantitative techniques discussed herein: </li></ul><ul><ul><li>Benchmarking / hypothetical portfolio analysis of alternative models </li></ul></ul><ul><ul><li>Sensitivity analysis to inputs </li></ul></ul><ul><ul><li>Testing accuracy of EC (model output) quantile estimates </li></ul></ul><ul><li>Conclusion: a non-parametric model (empirical copula) is more conservative than common copulas (e.g., Gaussian) and also less variable in the resampling experiment (more stable or accurate) </li></ul>
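The copula-aggregation mechanics that the study benchmarks can be sketched generically: draw correlated normals, map them to uniforms through the normal CDF, invert each empirical marginal, and read off a quantile of the summed losses. The two marginal loss histories, the correlation, and the confidence level below are all hypothetical; this is a textbook Gaussian copula simulation, not the paper's calibration.

```python
import numpy as np
from math import erf

def gaussian_copula_var(marginals, corr, q=0.99, n_sim=20_000, seed=0):
    """VaR of summed losses under a Gaussian copula over empirical marginals."""
    rng = np.random.default_rng(seed)
    z = rng.multivariate_normal(np.zeros(len(marginals)), corr, size=n_sim)
    u = 0.5 * (1.0 + np.vectorize(erf)(z / np.sqrt(2.0)))  # standard normal CDF
    agg = np.zeros(n_sim)
    for i, m in enumerate(marginals):
        agg += np.quantile(m, u[:, i])  # invert the empirical marginal
    return float(np.quantile(agg, q))

# two hypothetical risk-type loss histories
rng = np.random.default_rng(1)
credit = rng.lognormal(0.0, 1.0, 200)
market = rng.lognormal(0.0, 0.5, 200)
corr = np.array([[1.0, 0.3], [0.3, 1.0]])
var_agg = gaussian_copula_var([credit, market], corr)
var_sum = float(np.quantile(credit, 0.99) + np.quantile(market, 0.99))
```

Swapping the normal draws for Student-t draws (with the t CDF in place of the normal one) gives the t-copula variant, and comparing the resulting VaRs and diversification benefits across copulas is exactly the benchmarking exercise summarized in the slides that follow.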
    23. 23. Validation Example: Alternative Risk Aggregation Models – Risk Proxy Data Summary (Largest Banks As Of 4Q08)
    24. 24. Validation Example: Alternative Risk Aggregation Models – Distributions of Risk Proxies (Largest Banks As Of 4Q08)
    25. 25. Validation Example: Alternative Risk Aggregation Models – Correlations of Risk Proxies (Largest Banks As Of 4Q08)
    26. 26. Validation Example: Alternative Risk Aggregation Models – Absolute EC Comparison (Largest Banks As of 4Q08)
    27. 27. Validation Example: Alternative Risk Aggregation Models – Absolute EC Comparison (Largest Banks As of 4Q08)
    28. 28. Validation Example: Alternative Risk Aggregation Models – Relative EC Comparison (Largest Banks As of 4Q08)
    29. 29. Validation Example: Alternative Risk Aggregation Models – % Diversification Comparison (Largest Banks As of 4Q08)
    30. 30. Validation Example: Alternative Risk Aggregation Models – Goodness of Fit Comparison (Largest Banks As of 4Q08)
    31. 31. Validation Example: Alternative Risk Aggregation Models – EC Variability Comparison (Largest Banks As of 4Q08)
    32. 32. Validation Example: Alternative Risk Aggregation Models – EC Variability Comparison (Largest Banks As of 4Q08)
    33. 34. Summary of Contributions and Major Findings <ul><li>We have compared alternative risk aggregation methodologies used in practice: the VCA, the well-known GCS & the less-well-known ECS </li></ul><ul><li>The first major exercise involved fitting the models, describing & comparing VaR & PDBs across banks & models, and GOF statistics </li></ul><ul><li>The second part involved measuring the statistical variation in VaR & PDB </li></ul><ul><li>Across models & banks the ECS & AGCS produce the highest absolute magnitudes of VaR vs. either the GCS, STCS or other Archimedean copulas </li></ul><ul><ul><li>ECS – a variant of legacy “historical simulation” in market risk practice – in many cases the most conservative (a surprise according to asymptotic theory) </li></ul></ul><ul><ul><li>VCA consistently produces the lowest VaR number (disturbing in that several practitioners adopt this for the lack of theory or supervisory guidance) </li></ul></ul><ul><li>But the PDB tended to be largest for the ECS vs. the GCS or VCA, while the AGCS produces the lowest, a point of caution if banks choose this route </li></ul><ul><li>We failed to find business mix to exert a directionally consistent impact on risk </li></ul>
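For reference, the VCA discussed above aggregates standalone VaRs v through a correlation matrix R as sqrt(v'Rv), the familiar variance-covariance "square-root" formula; the numbers below are hypothetical and serve only to show why it tends to sit at the low end of the aggregation methods.

```python
import math

def vca_aggregate(v, R):
    """Variance-covariance aggregation: sqrt(v' R v) over standalone VaRs v."""
    d = len(v)
    return math.sqrt(sum(v[i] * v[j] * R[i][j]
                         for i in range(d) for j in range(d)))

# hypothetical standalone VaRs ($B) for three risk types
v = [10.0, 6.0, 4.0]
R = [[1.0, 0.3, 0.2],
     [0.3, 1.0, 0.1],
     [0.2, 0.1, 1.0]]
total = vca_aggregate(v, R)        # well below the simple sum of 20.0
benefit = 1.0 - total / sum(v)     # proportional diversification benefit (PDB)
```

The implicit elliptical-dependence assumption, with no extra tail dependence, is one reason the VCA's consistently low VaR is flagged as a concern above.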
    34. 35. Summary of Contributions and Major Findings (continued) <ul><li>Application of a blanket GOF test (Genest et al., 2009) found mixed results: in about ½ the cases parametric copula models fail to fit the data, but confidence levels tended to be modest, so this clearly needs more study </li></ul><ul><li>The bootstrapping experiment revealed the variability of the VaR or PDB itself to be significantly lowest (highest) for the ECS (VCA) relative to other copulas </li></ul><ul><li>The contribution of the sampling error in the parameters of the marginal distributions is an order of magnitude greater than that of the correlations </li></ul><ul><li>These results constituted a sensitivity analysis that argues for practitioners to err on the side of conservatism and to consider a non-parametric copula alternative in order to quantify integrated risk </li></ul><ul><li>Standard copula formulations produced a wide divergence in measured VaR and diversification benefits, as well as in the sampling variation of both of these, across different measurement frameworks and types of institutions </li></ul>