Internal Ratings Validations


  1. Results of BBA/ISDA/RMA IRB Validation Study
     BBA/ISDA/RMA Advanced IRB Forum
     Monika Mars, London - June 23, 2003
  2. Agenda
     - Survey Approach & Participants
     - Background – Use of Ratings
     - Survey Findings
     - Conclusions and Implications
  3. Survey Approach (project timeline)
     - Survey research and design: 4th Quarter 2002
     - Interviews: Jan – Feb 2003
     - Data collection and analysis: Feb – Mar 2003
     - Report preparation: 1st Draft mid-March 2003; Final Report Draft early May
     - Report presentation: June 19/23
  4. Survey responses covered all asset classes, representing a diverse group of institutions
  5. Agenda
     - Survey Methodology & Participants
     - Background – Use of Ratings
     - Survey Findings
     - Conclusions and Implications
  6. Internal ratings are key to managing the business at most firms
  7. Most banks use “Master Scales” to compare ratings information across portfolios
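A master scale is, in essence, a bank-wide mapping from PD bands to a common set of grades, so that ratings produced by different portfolio models can be compared. As a minimal illustrative sketch (the grade labels and PD cut-offs below are invented assumptions, not figures from the survey):

```python
import bisect

# Hypothetical master scale: upper PD bound per grade (invented cut-offs).
PD_BOUNDS = [0.0005, 0.0015, 0.005, 0.02, 0.08, 1.0]
GRADES = ["1", "2", "3", "4", "5", "6"]

def master_scale_grade(pd: float) -> str:
    """Map a model-estimated PD onto the bank-wide master scale grade."""
    return GRADES[bisect.bisect_left(PD_BOUNDS, pd)]

grade = master_scale_grade(0.012)  # falls in the 0.005-0.02 band -> grade "4"
```

Any portfolio model (corporate, retail scorecard, vendor model) whose output is a PD can then be expressed on the same grade scale, which is what makes cross-portfolio comparison possible.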
  8. Default definitions, time horizons and alignment to external sources vary among institutions
     - The definition of default is not in all cases in line with the Basel II definition; this is particularly the case for retail portfolios
     - Time horizons of one year are most common; however, the estimate of a 1-year PD might be based on a multiyear sample
       - Some banks use a time horizon of more than one year to estimate PD, while a few use less than one year
       - A small number of banks estimate PDs over the life of the loan
     - Most participants align a “majority” of their ratings in the corporate asset class to an external source, while the majority do not do this in the retail asset class
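Where a 1-year PD is estimated from a multiyear sample, one common textbook convention (not necessarily what any surveyed bank does) is to annualise the observed cumulative default rate under an assumption of a constant, independent yearly default probability. The figures below are invented:

```python
# Illustrative only: converting a cumulative default rate observed over
# n years into a 1-year PD, assuming a constant year-on-year hazard.
# Practice varies among institutions, as the survey notes.

def annualised_pd(cumulative_dr: float, years: int) -> float:
    """Solve 1 - (1 - pd_1y) ** years = cumulative_dr for pd_1y."""
    return 1.0 - (1.0 - cumulative_dr) ** (1.0 / years)

pd_1y = annualised_pd(0.06, 3)  # 6% defaults over a 3-year window -> ~2.0% per year
```

The independence assumption is a simplification; default rates are cyclical, which is one reason the choice of sample window matters for the resulting PD estimate.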
  9. Agenda
     - Survey Methodology & Participants
     - Background – Use of Ratings
     - Survey Findings
     - Conclusions and Implications
  10. Key Findings
      - Banks employ a wide range of techniques for internal ratings validation
      - Ratings validation is not an exact science
      - Expert judgment is of critical importance in the process
      - Data issues centre on quantity, not quality
      - Regional differences exist with respect to the validation of internal ratings
      - Defining standards for stress testing requires additional work
  11. Banks employ a wide range of techniques to validate internal ratings; key differences exist between corporate and retail ratings
      - Corporate Asset Class
        - Statistical models where the quantity of default data allows for robust estimation (particularly in the middle market)
        - Expert judgment models for portfolios where default data is limited
        - Hybrid and/or vendor models to complete the picture
      - Retail Asset Class
        - Statistical models are heavily relied upon due to the greater availability of internal data history
  12. A variety of model types are employed within each asset class

      Model Type        | Corporate | Middle Market | Retail
      ------------------|-----------|---------------|-------
      Statistical       |     7     |       4       |   23
      Expert Judgement  |    15     |      11       |    8
      External Vendor   |     7     |       2       |   17
      Hybrid            |    10     |       7       |    5
  13. Models for bank and sovereign exposures extensively use external information and expert judgement
      - Ratings for bank exposures are mostly derived by benchmarking against external ratings, as well as by using expert judgment or hybrid models
      - Ratings for sovereign exposures are similarly derived by benchmarking against external ratings and using expert judgment
      - Published default statistics are used for PD estimation for both bank and sovereign exposures
  14. Most banks surveyed have a rating system for specialised lending in place but face major issues in its validation
      - A common theme is the lack of default data
      - Validation issues specific to specialised lending include:
        - differentiation of borrower and transaction,
        - definition of default (particularly the restructuring clause),
        - inconsistent data history,
        - and the time horizon of the model
  15. Rating validation is not an exact science
      - Even where statistical techniques are used to assess model performance, absolute triggers and thresholds are not applied
      - There is no absolute KS statistic, Gini coefficient, COC or ROC measure that models need to reach to be considered adequate
      - Default statistics published by the major rating agencies are used differently from bank to bank, depending on each bank’s assessment of the most appropriate use of the external data
      - Benchmarking against external ratings raises many issues, including the “unknown” quality of external ratings, methodology differences, and the like
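For readers unfamiliar with the statistics named above: the KS statistic is the maximum gap between the cumulative score distributions of defaulters and non-defaulters, and the Gini coefficient (accuracy ratio) is 2·AUC − 1, where AUC is the probability that a defaulter is ranked worse than a non-defaulter. A minimal sketch on invented data (higher score = worse risk in this toy convention):

```python
# Minimal sketch of two discriminatory-power statistics for a rating model.
# Scores and default flags are invented illustration data; no tie handling.

def ks_and_gini(scores, defaults):
    """Return (KS statistic, Gini coefficient) for scores vs. 0/1 defaults."""
    pairs = sorted(zip(scores, defaults), key=lambda p: -p[0])  # worst risk first
    n_def = sum(defaults)
    n_good = len(defaults) - n_def
    cum_def = cum_good = 0
    ks = 0.0
    ranked_worse = 0  # counts (defaulter, non-defaulter) pairs ordered correctly
    for _, is_default in pairs:
        if is_default:
            cum_def += 1
        else:
            cum_good += 1
            ranked_worse += cum_def  # defaulters already seen outrank this good
        ks = max(ks, abs(cum_def / n_def - cum_good / n_good))
    auc = ranked_worse / (n_def * n_good)
    return ks, 2 * auc - 1

scores = [9, 8, 7, 6, 5, 4, 3, 2, 1]
defaults = [1, 1, 0, 1, 0, 0, 0, 0, 0]
ks, gini = ks_and_gini(scores, defaults)
```

The slide's point stands regardless of the computation: there is no agreed cut-off value of KS or Gini above which a model counts as validated; the numbers are only comparable within a portfolio and over time.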
  16. The performance of statistical rating models is assessed through a number of different techniques
  17. Different triggers are used to evaluate the overall performance of expert judgement rating models
  18. A variety of techniques are employed for evaluating vendor models
  19. Expert judgement is essential in the validation process
      - Data scarcity prevents the use of statistical models for some asset classes: corporate, bank, sovereign, and specialised lending
      - Most respondents use a judgemental overlay by rating experts (account officer, credit analyst) to confirm or modify the risk rating output of their assessment model (statistical, hybrid, vendor)
      - Large proportions of banks’ exposures are covered by expert-judgment-type rating systems
  20. Most data issues centre on the quantity of data available, not its quality
      - Most banks surveyed have initiated projects to collect the necessary data in a consistent manner across the institution to allow for statistical modelling in the future
      - The quantity of default data for the large corporate, bank, sovereign, and specialised lending exposure classes is a real problem for most institutions
      - Institutions have begun data-pooling initiatives for PD and LGD data; however, there is scepticism as to whether these measures will solve the data quantity problem
  21. Clear regional differences exist with regard to internal ratings for corporate assets and their validation
      - Expert judgment models are used for large corporate portfolios; however, the structure of the ratings differs significantly
        - In North America, fixed weightings are not assigned to the factors to be assessed by the experts
        - In Europe, specific weights for each factor are often set
      - Models based on equity market information (KMV) or balance sheet information (Moody’s RiskCalc) are used for corporate and middle market portfolios
        - In North America, these models tend to be an integral part of the rating and are used in conjunction with expert judgment in a hybrid approach
        - In Europe, these models are more likely to be used as a benchmark or as a validation of the internal rating model
  22. Similar differences can be observed for the retail asset class
      - Statistical (scorecard) techniques for retail exposures tend to be product-specific in the US and UK, while in Continental Europe the focus is on customer scores/ratings
      - US and UK scorecards are redeveloped more often than those in Continental Europe, where robustness of ratings and long-term stability are higher priorities
      - This often has direct implications for validation, as longer-term, more stable models tend to show, for example, lower Ginis than models using the latest available data
  23. More work needs to be done in defining standards for stress testing
      - There is currently no uniform approach regarding the type of stress testing undertaken, its frequency, or the actions taken in response to stress testing results
      - At the moment, stress testing is performed at the portfolio level, with risk ratings being a key input in stress testing scenarios for economic capital requirements
      - There is uncertainty around Basel II requirements with respect to stress testing of rating model inputs, and also considerable debate as to its usefulness
  24. Agenda
      - Survey Methodology & Participants
      - Background – Use of Ratings
      - Survey Findings
      - Conclusions and Implications
  25. The industry, regulators and other stakeholders must continue a dialogue to address Basel II implementation issues
      - Recognition of different techniques for validating internal rating systems: there is no one “right” method
      - Increased debate and guidance with respect to the validation of expert-judgement-based rating systems
      - Recognition of regional/cultural differences as they impact internal ratings, and the consequences for validation
      - Guidance on requirements for the use of pooled data
      - Additional discussion and clarification with respect to stress testing requirements