This document provides an overview of a market-based indicators approach to stress testing financial institutions in the United States. It describes using a systemic risk dashboard to monitor risks, a contingent claims analysis model to estimate institutions' default probabilities, and generalized additive models to project default probabilities under stress scenarios. Historical results are also recapped. Key findings on macroeconomic contributions and inter-sector spillovers are presented. Annexes provide details on modeling methodologies.
Pillar III presentation 2 27-15 - redacted version
1. MARKET-BASED INDICATORS APPROACH TO STRESS TESTING: FINAL RESULTS
BENJAMIN HUSTON
DALE GRAY
This presentation and its findings are intended as background for discussions with the U.S. stress testing experts in the context of the FSAP. Some findings have not undergone a full internal review and should not be shared outside the technical team involved in the US FSAP stress testing exercise.
3. WHY MARKET-BASED INDICATORS?
Supervisory data is confidential and often cannot be utilized for FSAP stress testing purposes.
Market prices contain valuable information that can be used to corroborate traditional stress testing methodologies and findings.
Stress tests can be extended to sectors that are not traditionally subject to bank-like supervisory oversight.
5. SYSTEMIC RISK DASHBOARD
The Systemic Risk Dashboard is an integral part of the market-based indicator stress testing regime. It uses established IMF methodologies* to analyze systemic risk along a number of dimensions.
Some of the metrics featured in the dashboard include:
SRISK
SyRin
Equity-Composite Z-scores
Financial Cycles
Other misalignment measures
*For further information see the Systemic Risk Monitoring ("SysMo") Toolkit, IMF Working Paper No. 13/168.
7. SyRin
Derives widely applicable financial stability indicators and systemic loss measures to detect direct and indirect linkages among institutions and sectors within a given financial system.
[REDACTED]
8. [REDACTED: four chart panels]
Source: IMF staff estimates. *APT: Arbitrage Pricing Theory.
Source: IMF staff estimates; equity market under- or overvaluations are based on deviations of various equity market valuation indicators from long-term averages (Z-scores).
Source: IMF staff estimates; financial cycles are computed using the BIS methodology (BIS, 2014) and capture the co-movement between credit growth and residential property prices. Empirically, downward inflections in a financial cycle have been shown to be a good predictor of an impending domestic financial crisis.
Source: IMF staff estimates; defined as the difference between the credit-to-GDP ratio and its long-term trend, calculated using an HP filter with a smoothing parameter of 400,000.
12. CCA APPROACH
CCA was used in the 2010 US FSAP (and in 9 other FSAPs).
The 2015 US FSAP covers more institutions across a wider range of sectors than before.
The analysis is enhanced by integrating macro-factor stress testing with spillover and interconnectedness measures.
13. SAMPLE INSTITUTIONS

Sector | Number | Selection Criteria
Asset Managers | 41 | Market capitalization of USD 10 billion or more
NBFIs | 13 | Market capitalization of USD 10 billion or more
Insurers | 44 | Market capitalization of USD 20 billion or more
Corporates | 32 | Must be one of the largest non-financial DJIA public companies, an auto maker that received government support, or an iconic "new economy" technology company with a large and rapidly growing market cap
Banks | 46 | Market capitalization of USD 20 billion or more
GSEs | 2 | Must have entered government conservatorship
Foreign Insurers and Foreign Banks | 32 | All banks and insurers designated by the FSB as G-SIB/G-SII, plus the largest non-US-domiciled global insurers
Total | 210 |
14. CORE CONCEPT: CONTINGENT CLAIMS ANALYSIS (CCA)
Assets = Equity + Risky Debt
       = Equity + PV of Debt Payments - Expected Loss due to Default
       = Implicit Call Option + PV of Debt Payments - Implicit Put Option
[Diagram: stylized balance sheet with Assets on one side and Equity (or junior claims) plus Risky Debt on the other.]
The value of liabilities is derived from the value of assets.
There is uncertainty in asset value.
15. DEFAULT PROCESS IN THE CCA STRUCTURAL MODEL
[Figure: value of assets/liabilities over time, from t = 0 to T = 1 year. The distribution of the market value of assets has expected value E[A_T] = μ and asset volatility σ. The notional value of liabilities X_T forms the default barrier; the gap between expected asset value and the barrier, measured in units of σ, is the distance to default (DD), and the probability that assets end below the barrier approximates the default probability (≈ EDF).]
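The slide depicts, but does not write out, the distance-to-default expression; in the standard Merton-type setup it describes, the formula is:

```latex
% Distance to default (DD) and default probability in a Merton-type model.
% A_0: implied asset value today; X_T: default barrier (notional liabilities);
% mu_A: asset drift; sigma_A: asset volatility; T: horizon (1 year here);
% Phi: standard normal CDF.
\[
  \mathrm{DD} = \frac{\ln(A_0/X_T) + \left(\mu_A - \tfrac{1}{2}\sigma_A^2\right)T}{\sigma_A\sqrt{T}},
  \qquad
  \mathrm{PD} \approx \Phi(-\mathrm{DD})
\]
```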
16. CALIBRATION AND DERIVED RISK INDICATORS
Market capitalization, equity volatility, and book values of debt are used to calculate the implied value of assets and asset volatility. For each institution, these are used to calculate a "distance-to-default" indicator, which is then mapped to one-year default probabilities using Moody's default database and the CreditEdge 9.0 modeling methodology.
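A minimal sketch of this calibration step, assuming a standard Merton model: the slide's actual CreditEdge 9.0 mapping from DD to EDF is proprietary and empirical, so the final normal-CDF mapping below is only a placeholder, and all input figures are hypothetical.

```python
# Merton-model calibration sketch: recover the implied asset value and asset
# volatility from observed market capitalization and equity volatility, then
# compute a distance-to-default (DD). All inputs are hypothetical, and the
# final DD-to-PD mapping is a normal-CDF placeholder, not CreditEdge 9.0.
import numpy as np
from scipy.optimize import fsolve
from scipy.stats import norm

E, sigma_E = 150.0, 0.35    # market cap (USD bn) and equity volatility (hypothetical)
X, T, r = 900.0, 1.0, 0.02  # default barrier (book debt), horizon (years), risk-free rate

def merton_equations(params):
    A, sigma_A = params
    d1 = (np.log(A / X) + (r + 0.5 * sigma_A**2) * T) / (sigma_A * np.sqrt(T))
    d2 = d1 - sigma_A * np.sqrt(T)
    # Equity is an implicit call option on assets (slide 14); Ito's lemma links
    # equity volatility to asset volatility through the option delta N(d1).
    eq_value = A * norm.cdf(d1) - X * np.exp(-r * T) * norm.cdf(d2) - E
    eq_vol = (A / E) * norm.cdf(d1) * sigma_A - sigma_E
    return [eq_value, eq_vol]

A0, sigma_A = fsolve(merton_equations, x0=[E + X, sigma_E * E / (E + X)])
# Risk-neutral drift r is used below; a real-world drift estimate could replace it.
dd = (np.log(A0 / X) + (r - 0.5 * sigma_A**2) * T) / (sigma_A * np.sqrt(T))
pd = norm.cdf(-dd)  # placeholder mapping; CreditEdge maps DD to EDF empirically
print(f"implied assets = {A0:.1f}, asset vol = {sigma_A:.3f}, DD = {dd:.2f}, PD = {pd:.4%}")
```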
17. STRESS TESTING APPROACH
Construct a set of sector regression models to assess the impact of adverse macroeconomic changes and increased connectivity on median credit/default risk:
Credit risk: ten years of daily CreditEdge default probability data (2004Q3 to 2014Q3)
Macro risk: IMF/DFAST macro variables
Connectivity: network clustering coefficient time series
Conduct stress tests under "baseline" and "stress" scenarios and forecast default probabilities for five domestic and two foreign sectors.
Use the default probability forecasts to assess potential inward cross-border spillovers using a separate model for the total U.S. financial system. (An illustrative regression sketch follows below.)
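The slides do not show the regression machinery itself; as a minimal stand-in, the sketch below fits ordinary least squares to logit-transformed default probabilities against hypothetical macro variables and a connectivity series, then projects PDs under assumed baseline and stress paths. All data, coefficients, and scenario values are synthetic; the actual models are the GAMLSS specifications described on the following slides.

```python
# Sector-regression stand-in: fit logit-transformed default probabilities on
# macro variables and connectivity, then project PDs under two scenario paths.
# All series and scenario values are synthetic; illustrative only.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 400
gdp_growth = rng.normal(2.0, 1.5, n)      # synthetic macro history (% y/y)
unemployment = rng.normal(6.0, 1.0, n)
connectivity = rng.uniform(0.2, 0.6, n)   # global clustering coefficient series
logit_dp = (-6.0 - 0.3 * gdp_growth + 0.4 * unemployment
            + 2.0 * connectivity + rng.normal(0.0, 0.3, n))

X = sm.add_constant(np.column_stack([gdp_growth, unemployment, connectivity]))
fit = sm.OLS(logit_dp, X).fit()

def project_pd(gdp, unemp, conn):
    eta = fit.predict(np.array([[1.0, gdp, unemp, conn]]))[0]
    return 1.0 / (1.0 + np.exp(-eta))     # back-transform from logit to probability

print(f"baseline PD: {project_pd( 2.5,  5.5, 0.35):.3%}")
print(f"stress PD:   {project_pd(-4.0, 10.0, 0.60):.3%}")
```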
23. GENERALIZED ADDITIVE MODELS FOR LOCATION, SCALE AND SHAPE
What is GAMLSS?
Generalized Additive Models for Location, Scale and Shape (GAMLSS) are a flexible class of statistical models which can estimate a quantity of interest under dozens of different distributional assumptions. This model class also allows for explicit estimation of each distributional parameter (i.e., mean, variance, skewness, kurtosis); a generic formulation is given after the table below. See Annex II for details.
Why GAMLSS?
GAMLSS is a practical framework whose functionalities address the following methodological issues and end-user concerns:

Functionality | Methodological Issue | End-User Concern
Semi- and non-parametric/nonlinear additive terms | Violation of normality assumption | Non-normality
High-dimensional model selection algorithms | Contemporaneous correlations | "Excessive interdependence"
Penalty functions to prevent overfitting | Heteroscedasticity | -
Validation/training/testing regime to assess model predictive power | Excess skewness and kurtosis | "Fat tails/tail risk"
Robust White-Hall standard errors | Non-constant (i.e., adaptive) distributional properties | Non-linearity
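For reference, the generic GAMLSS formulation from Rigby and Stasinopoulos (see slide 46 and the references): the response follows a distribution with up to four parameters, and each parameter is modeled through its own link function with linear and smooth additive terms.

```latex
% y follows a distribution D with parameters (mu, sigma, nu, tau):
% location, scale, skewness, and kurtosis. Each parameter theta_k has its
% own link function g_k, design matrix X_k, and smooth functions s_jk.
\[
  y \sim \mathcal{D}(\mu, \sigma, \nu, \tau), \qquad
  g_k(\theta_k) = X_k \beta_k + \sum_{j=1}^{J_k} s_{jk}(x_{jk}), \quad k = 1, \dots, 4
\]
```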
24. GAMLSS FOR STRESS TESTS
Default probability data are bounded on the 0-1 interval, have a skewed distribution, and can change in response to macro factors in a non-linear manner. Econometric modeling of macro variables and default probabilities must account for these characteristics.
Approach
Beta, generalized gamma, inverse gamma, inverse Gaussian, and generalized inverse Gaussian distributions were used to model median sector and aggregate financial system default probabilities (a distribution-selection sketch follows below)
Semi- and fully-nonparametric additive terms were utilized to capture non-linear and/or localized relationships
Variable selection algorithms and a generalized information criterion were used to choose the best models
Penalty functions and training/test sets were used to prevent over-fitting and assess predictive power
Diagnostic tests were used to consistently check for violations of modeling assumptions
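A minimal sketch of the distribution-selection step, assuming a plain maximum-likelihood fit of each candidate family to a synthetic default-probability series and an AIC comparison. The deck's actual selection ran inside GAMLSS, where distribution parameters also depend on covariates and a generalized information criterion is used; this is only the flavor of that step.

```python
# Compare candidate distributions for bounded, skewed default-probability data
# by maximum likelihood and AIC. The data are synthetic; illustrative only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
dp = rng.beta(a=2.0, b=60.0, size=2500)  # synthetic default probabilities in (0, 1)

candidates = {
    "beta": stats.beta,
    "generalized gamma": stats.gengamma,
    "inverse gamma": stats.invgamma,
    "inverse Gaussian": stats.invgauss,
    "generalized inverse Gaussian": stats.geninvgauss,
}

for name, dist in candidates.items():
    params = dist.fit(dp, floc=0)   # MLE fit with the location pinned at zero
    k = len(params) - 1             # free parameters (loc was fixed)
    loglik = dist.logpdf(dp, *params).sum()
    aic = 2 * k - 2 * loglik        # lower AIC indicates a better fit
    print(f"{name:>30s}: AIC = {aic:10.1f}")
```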
25. MEASURING CONNECTIVITY
Three-step process to measure connectivity:
1. Perform Spearman rank correlation tests to identify correlated default probabilities
2. Create "correlation networks" from the test results
3. Calculate a global clustering coefficient score for the entire network
The above process was applied repeatedly to institution-level data using 30-day rolling windows, and the resulting connectivity time series was included as an independent variable in all GAMLSS models. (A sketch of one window's computation follows below.)
* Pruned exact linear time (PELT) tests were performed to identify significant structural changes ("regime changes") in connectivity mean and variance. (See Annex I)
[REDACTED]
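A minimal sketch of the three steps for a single 30-day window, using synthetic data; the 5 percent significance cutoff is an assumption, as the slide does not specify the threshold.

```python
# Connectivity for one 30-day window: Spearman rank correlation tests, a
# "correlation network" of significant links, then the global clustering
# coefficient. Synthetic data; the 5% significance cutoff is an assumption.
import numpy as np
import networkx as nx
from scipy.stats import spearmanr

rng = np.random.default_rng(7)
window = rng.normal(size=(30, 40))   # 30 days x 40 institutions' default probabilities

rho, pval = spearmanr(window)        # pairwise rank correlations and p-values

n = window.shape[1]
G = nx.Graph()
G.add_nodes_from(range(n))
for i in range(n):
    for j in range(i + 1, n):
        if pval[i, j] < 0.05:        # keep only statistically significant links
            G.add_edge(i, j)

connectivity = nx.transitivity(G)    # global clustering coefficient of the network
print(f"global clustering coefficient = {connectivity:.3f}")
```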
32. CONNECTIONS AND SPILLOVERS
So far we have:
Controlled for firm idiosyncratic risk by using the median sector default probability;
Controlled for macro risk by using the macro variables;
Controlled for connectivity at the system level via the inclusion of the connectivity measure.
What remains is the impact of one sector's spillovers on another sector, either positive or negative.
See the next slide for this spillover effect.
33. DOMESTIC AND CROSS-BORDER SPILLOVERS*
* Results represent linear spillover estimates only (domestic system result notwithstanding)
[REDACTED: matrix of spillover estimates, with originating sectors in rows and receiving sectors in columns]
34. THE EFFECT OF CONNECTIVITY ON CREDIT RISK
[REDACTED]
38. DIAGNOSTICS: THE AGGREGATE FINANCIAL SYSTEM MODEL
Orthogonalized additive terms greatly decrease correlation among predictor variables and help to mitigate estimation biases. (Shown right: predictor correlation matrix.)
A worm plot (below) of the aggregate model's residuals shows that the model does not violate any distributional assumptions. (Curved dotted lines are 95% CIs; the fitted central red line should look fairly straight.)
39. The model's normalized quantile residuals appear approximately normal, indicating that the choice of distributional model was appropriate.
46. GAMLSS
* See Stasinopoulos, D. M., and R. A. Rigby, 2007, "Generalized Additive Models for Location, Scale and Shape (GAMLSS) in R," Journal of Statistical Software, Vol. 23, Issue 7.
47. REFERENCES
Blancher, Nicolas, and others, 2013, "Systemic Risk Monitoring ('SysMo') Toolkit: A User Guide," IMF Working Paper 13/168. http://www.imf.org/external/pubs/cat/longres.aspx?sk=40791
Gray, Dale F., R. C. Merton, and Z. Bodie, 2008, "A New Framework for Measuring and Managing Macrofinancial Risk and Financial Stability," Harvard Business School Working Paper No. 09/15 (Cambridge).
Gray, Dale, and Samuel Malone, 2008, Macrofinancial Risk Analysis (London: Wiley Finance).
"US Financial Stability Stress Testing Note," July 2010, International Monetary Fund.
Acharya, V., R. Engle, and M. Richardson, 2012, "Capital Shortfall: A New Approach to Ranking and Regulating Systemic Risks," AEA, January 7, 2012 (SRISK model, NYU V-Lab).
Stasinopoulos, D. M., and R. A. Rigby, 2007, "Generalized Additive Models for Location, Scale and Shape (GAMLSS) in R," Journal of Statistical Software, Vol. 23, Issue 7.