Model risk management programs often began their journey by first creating a definition of a model. Model risk groups would then perform model risk activities on each item that met the definition: classifying risk, assessing current uses, evaluating ongoing monitoring results, validating conceptual soundness, testing model changes, and so forth. This approach was an important beginning for the field of model risk management, as it helped identify existing models, discover fundamental errors in them, and prevent their inappropriate use. However, model risk teams often focused only on processes that already included models and did not identify processes that would be significantly improved by using models, leaving them to overlook modeling capabilities that a process truly needs. Model risk teams can instead go on the offensive and use their model inventory as a source of crucial business intelligence: they can identify processes that do not include models and recommend the use of existing models to improve those processes. Furthermore, model risk teams can reduce a bank's expenses by guarding against the development or purchase of models with redundant capabilities. Model risk management teams can ultimately be a champion for the extensibility and efficient use of models at an institution. The article was written by Jacob Kosoff, Aaron Bridgers, and Henry Lee, and was published in the RMA Journal in September 2020.
Credit Audit's Use of Data Analytics in Examining Consumer Loan Portfolios (Jacob Kosoff)
Written by Jacob Kosoff and published in September 2013 by the RMA Journal. This article describes how banks in 2012 and 2013 were modernizing their Credit Review functions.
This article explores how financial institutions can provide effective risk management for qualitative models. Written by Jacob Kosoff, Ximena Zambrano, and Matthew Grayson.
Understanding and validating the uses of machine learning models (Jacob Kosoff)
While machine learning (ML) can offer the benefit of improved model results, a bank should consider whether it is appropriate to accept the additional complexity, as well as the testing and monitoring, involved. This article discusses best practices in performing validations of machine learning models.
Written by Shannon Kelly of Zions Bank, Jacob Kosoff of Regions Bank, Agus Sudjianto of Wells Fargo, and Aaron Bridgers of Regions Bank.
Moderating the Churn: Retaining employees in the quantitative banking space (Jacob Kosoff)
This article describes strategies for attracting, developing, and retaining data scientists and other individuals with strong quantitative and data skills. Regions Model Risk Management and Validation has benefited from under 10% external turnover for the past five years, and the article discusses how we at Regions have reached that success. Written by Jacob Kosoff and Irina Pritchett.
Regulatory scrutiny has significantly increased and has prompted banks to develop complex models at the lowest level of granularity to capture the impact of economic cycles. Segmentation is one of the first steps in establishing a quantitative basis for the enterprisewide scenario analysis of stress testing.
Against the backdrop of the buzz that Interest Rate Risk in the Banking Book (IRRBB) has generated in the banking industry, Aptivaa is pleased to launch a series of articles providing our perspective on various issues highlighted by our esteemed clients during interactions in recent months. This post gives an overview of the revised guidelines on IRRBB issued by the Basel Committee, the approaches, and the associated challenges in implementing the IRRBB framework for all internationally active banks. We look forward to your valuable feedback on the current article or the challenges you face in IRRBB implementation.
CCAR & DFAST: How to incorporate stress testing into banking operations + str... (Grant Thornton LLP)
Banks are integrating elements of regulatory stress testing into their everyday business processes and strategic planning exercises, and optimizing enterprise risk management in the process. What does enterprise-wide stress testing mean for a financial institution? What are the impacts and implications for a financial institution?
Emerging Differentiators of a Successful Wealth Management Platform (Cognizant)
Changes in the wealth management industry are driving the need for a flexible, scalable platform that enables wealth managers to differentiate their services and profitably serve the mass affluent and mass markets.
Continuing with our updates on the key aspects of IFRS 9 implementation, our current post (attached) talks about the “IFRS 9 Impairment Solution”. The post aims to provide key insights that might assist banks in selecting a strategic solution that will future-proof their investment toward successful IFRS 9 implementation. It enumerates the key desirable features, from both functional and technical viewpoints, that a strategic IFRS 9 solution should possess, helping our readers make an important choice.
Continuing with our updates on the key aspects of IFRS 9 implementation, our current post (attached) talks about “Exposure at Default (EAD)”, highlighting possible uses and business-interpretation nuances of terms linked to EAD. The post covers the computation methods for EAD and the modeling approaches available for each method, with key considerations from the Basel and IFRS 9 perspectives highlighted for the readers.
We look forward to your valuable feedback on the current article or the challenges you face in IFRS 9 implementation.
Aptivaa is pleased to launch a series of blogs to apprise readers of some of the key aspects related mostly to Impairment Modeling, for compliance with the new accounting standards (IFRS 9), as well as to have a conversation with the readers about the challenges that banks are facing in their implementation efforts.
Compliance implications of crossing the $10 billion asset thresholdGrant Thornton LLP
Since the passage of the Dodd-Frank Act, small regional banks have been forced to rethink their growth strategies as they inch closer to the $10 billion assets threshold. Here’s guidance on navigating the new regulatory field.
As the methodologies for IFRS 9 implementation are still evolving, many banks are in the process of developing a roadmap toward implementation and are still evaluating methodologies likely to conform to the principles of proportionality and materiality. To this end, banks are being advised to develop a Target Operating Model (TOM) design, which seeks to identify and document the work program required to meet IFRS 9 requirements on impairment modelling and ECL estimation.
As the race against time to comply with IFRS 9 guidelines begins, several software solutions are being bandied about as quick-fix solutions for automating the entire impairment modelling process. While automation is definitely the way to go in initiatives such as these, the question remains whether the software architecture should be of a strategic, integrated nature or one that is decoupled and modular. At Aptivaa, we believe the answer lies in the 4Rs: Readiness, Reflectiveness, Redundancy, and Regularity.
Keys to extract value from the data analytics life cycle (Grant Thornton LLP)
Regulatory mandates driving transparency, and financial objectives requiring an accurate understanding of customer needs, have heightened the importance of data analytics to unprecedented levels, making it a critical element of doing business.
5 AI Solutions Every Chief Risk Officer Needs (Alisa Karybina)
For the risk manager, AI means greater efficiency, lower costs, and less risk. There are many potential applications of AI when it comes to managing risk in banking, but this report will focus on five key solutions with huge potential ROI that every chief risk officer (CRO) can begin building immediately. Representing foundational capabilities for risk management, these five solutions have the potential to substantially impact a bank’s financial results, and an automated machine learning platform represents the most efficient and effective method of delivering on the promise of these AI use cases.
Rethinking Analytics, Analytical Processes, and Risk Architecture Across the ... (Jacob Kosoff)
Risk analytics infrastructure—and even how the banking industry thinks about analytical model risk—have evolved by leaps and bounds over the last decade. In this article, Steve Maglic and Jacob Kosoff discuss transformations within risk analytics.
Tiered Application Management: Meeting the Need for Speed and Reliability (Cognizant)
Deploying a multitiered approach to application management, guided by analysis of historic performance issues, helps companies respond to digital requirements while cutting costs.
Banks are scrambling to meet IFRS 9 guidelines and are setting out on the path to implement various ECL estimation methodologies and models. But a topic that hasn’t been given enough attention is the need for governance of these models and the attendant model risk management framework that must be set up to lend credibility to the model estimates. This blog touches upon the need for validation of models and how model risk governance has become paramount in view of the new guidelines.
A New Approach to Application Portfolio Assessment for New-Age Business-Techn... (Cognizant)
SMAC technologies are propelling new business models, requiring an application portfolio assessment that considers the necessary capabilities and processes to enable effective digital business transformation.
Cloud allows banks to serve customers in new ways and re-imagine their business models. It can help surface valuable insights from their data and transform how they make decisions. It enables them to tap expertise from across their entire ecosystem. Read the whitepaper to find out more.
Towards enterprise-ready AI deployments: Minimizing the risk of consuming AI ... (alekn)
Abstract: The stochastic nature of artificial intelligence (AI) models introduces risk to business applications that use AI models without careful consideration. This paper offers an approach to use AI techniques to gain insights on the usage of the AI models and control how they are deployed to a production application.
Authors: Aleksander Slominski*, Vinod Muthusamy*, Vatche Ishakian+
*IBM Research, +Bentley University
Presented at Artificial Intelligence for Industries (ai4i 2018) Sept 28, 2018
The US regulatory model governance standard provides a framework for effective model governance, focusing on the separation of model development and use from model validation, the application of company-wide model risk management initiatives, and full model inventory management and documentation.
Web and internet computing is evolving into a combination of social media, mobile, analytics and cloud (SMAC) solutions. There is a need for an integrated approach when developing solutions that address web-scale requirements with technologies that enable SMAC solutions. This paper presents an architecture model for the integrated approach that can form the basis for solutions and result in reuse, integration and agility for the business and IT in an enterprise.
Similar to Adopting a Top-Down Approach to Model Risk Governance to Optimize Digital Transformation (20)
Chatty Kathy - UNC Bootcamp Final Project Presentation - Final Version - 5.23... (John Andrews)
SlideShare Description for "Chatty Kathy - UNC Bootcamp Final Project Presentation"
Title: Chatty Kathy: Enhancing Physical Activity Among Older Adults
Description:
Discover how Chatty Kathy, an innovative project developed at the UNC Bootcamp, aims to tackle the challenge of low physical activity among older adults. Our AI-driven solution uses peer interaction to boost and sustain exercise levels, significantly improving health outcomes. This presentation covers our problem statement, the rationale behind Chatty Kathy, synthetic data and persona creation, model performance metrics, a visual demonstration of the project, and potential future developments. Join us for an insightful Q&A session to explore the potential of this groundbreaking project.
Project Team: Jay Requarth, Jana Avery, John Andrews, Dr. Dick Davis II, Nee Buntoum, Nam Yeongjin & Mat Nicholas
Explore our comprehensive data analysis project presentation on predicting product ad campaign performance. Learn how data-driven insights can optimize your marketing strategies and enhance campaign effectiveness. Perfect for professionals and students looking to understand the power of data analysis in advertising. for more details visit: https://bostoninstituteofanalytics.org/data-science-and-artificial-intelligence/
Levelwise PageRank with Loop-Based Dead End Handling Strategy: SHORT REPORT ... (Subhajit Sahu)
Abstract — Levelwise PageRank is an alternative method of PageRank computation which decomposes the input graph into a directed acyclic block-graph of strongly connected components, and processes them in topological order, one level at a time. This enables calculation of ranks in a distributed fashion without per-iteration communication, unlike the standard method where all vertices are processed in each iteration. It however comes with a precondition of the absence of dead ends in the input graph. Here, the native non-distributed performance of Levelwise PageRank was compared against Monolithic PageRank on a CPU as well as a GPU. To ensure a fair comparison, Monolithic PageRank was also performed on a graph where vertices were split by components. Results indicate that Levelwise PageRank is about as fast as Monolithic PageRank on the CPU, but quite a bit slower on the GPU. Slowdown on the GPU is likely caused by a large submission of small workloads, and is expected to be a non-issue when the computation is performed on massive graphs.
Techniques to optimize the PageRank algorithm usually fall into two categories. One is to try reducing the work per iteration, and the other is to try reducing the number of iterations. These goals are often at odds with one another. Skipping computation on vertices which have already converged has the potential to save iteration time. Skipping in-identical vertices, with the same in-links, helps reduce duplicate computations and thus could help reduce iteration time. Road networks often have chains which can be short-circuited before PageRank computation to improve performance. Final ranks of chain nodes can be easily calculated. This could reduce both the iteration time and the number of iterations. If a graph has no dangling nodes, the PageRank of each strongly connected component can be computed in topological order. This could help reduce the iteration time and the number of iterations, and also enable multi-iteration concurrency in PageRank computation. The combination of all of the above methods is the STICD algorithm [sticd]. For dynamic graphs, unchanged components whose ranks are unaffected can be skipped altogether.
Adopting a Top-Down Approach to Model Risk Governance to Optimize Digital Transformation
The RMA Journal September 2020 | Copyright 2020 by RMA
MODEL RISK MANAGEMENT / TECHNOLOGY
Adopting a Top-Down Approach to Model Risk Governance to Optimize Digital Transformation
Several of the recent trends emerging from the COVID-19 pandemic, such as low-contact consumer commerce, the stay-at-home economy, rapidly evolving credit risk due to the closure of parts of the economy, and virtual integrations with vendors, have prompted banks to accelerate key projects. This article explores how banks are accelerating improvements to the customer experience and automating manual processes by fundamentally transforming their model risk management processes.

The article will specifically address how banks are adopting a top-down approach to model governance that inventories the business capabilities of models, identifies essential business outputs derived from models, and determines which model capabilities are most critical to these processes. This approach identifies unused model capabilities that could enable automation and improve business insights. Additionally, a top-down approach would create transparency around potential gaps in model capabilities needed to support the strategic direction of the bank.
BY AARON BRIDGERS, HENRY LEE, AND JACOB KOSOFF
A focus on model capabilities that support critical processes aligns model governance with recent trends in banking technology architecture being pursued by member banks of the Bank Industry Architecture Network (BIAN). For this approach to be successful, model risk groups would need to keep pace with the evolving landscape of quantitative modeling driven by the software community, as well as learn new technical skills. Assessing the technology attributes of models, such as interoperability and scalability, will be as important as the veracity of the underlying mathematics.

Traditional model risk management programs typically start by creating a policy definition of a model that is based on the mathematical approach, or lack thereof, taken to solve a business problem. The definition of a model is usually broad enough to meet regulatory expectations found in the Federal Reserve’s SR 11-7 Supervisory Guidance on Model Risk Management, but narrow enough to not include every spreadsheet at the bank. Model risk groups take a bottom-up approach by first identifying all potential models as the fundamental unit of observation and determining the use of each model across processes.

Model risk groups then begin performing model risk activities on each item that meets the definition of a model. These activities include classifying risk, validating the current uses, evaluating ongoing monitoring, and evaluating model changes. This approach was an important beginning for the field of model risk management, as it helped identify existing models, discover fundamental errors in existing models, and prevent inappropriate use of models.
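The bottom-up inventory described above can be pictured as a flat catalog keyed on individual models, with the standard model risk activities driven off each record. The sketch below is a minimal illustration of that idea; the schema, field names, and model names are hypothetical examples, not an actual bank's inventory format:

```python
from dataclasses import dataclass, field

@dataclass
class InventoryEntry:
    """One record in a bottom-up model inventory (illustrative schema only)."""
    model_id: str
    risk_tier: str                                      # e.g., "high" / "medium" / "low"
    approved_uses: list[str] = field(default_factory=list)
    monitoring_status: str = "unreviewed"               # result of ongoing monitoring review
    open_change_requests: int = 0                       # pending model changes to evaluate

def due_for_validation(entry: InventoryEntry) -> bool:
    # Queue high-risk models, and any model with pending changes, for validation work.
    return entry.risk_tier == "high" or entry.open_change_requests > 0

inventory = [
    InventoryEntry("ccar_ppnr_01", "high", ["capital planning"]),
    InventoryEntry("branch_staffing_04", "low", ["workforce planning"]),
]
queue = [e.model_id for e in inventory if due_for_validation(e)]  # ["ccar_ppnr_01"]
```

Note that a catalog like this records what each model *is*, but not which business capabilities it offers or which processes depend on it, which is exactly the gap the article's top-down approach addresses.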
However, the traditional bottom-up approach may have shortcomings that could hinder digital transformation at banks. First, it is difficult to apply the definition of a model to complex processes, such as artificial intelligence-driven customer support, which require many predictive calculations. This can result in redundant model risk activities and difficulty discerning the real impact these models have on business decisions. Second, model risk teams often focus only on processes that already include models and do not identify processes that would be significantly improved by using models. This results in model risk teams overlooking modeling capabilities that a process truly needs. Third, the bottom-up approach does not align with the digital transformation occurring at banks through the creation of business capability application programming interfaces (APIs) and microservices, which are primary enablers of process automation.

A top-down approach, also called a business capability approach, would allow banks to better think through the component pieces of their processes, build microservices on top of those component pieces, and allow companies to build flexible, cost-effective IT infrastructures that can quickly adapt to customer needs. In other words, the top-down approach focuses on reimagining bank operations as a set of microservices built on top of defined business capabilities that are accessed through APIs. In recent times, this approach has evolved into services offered through containers within the cloud. Model risk activities that focus on the definition of a model instead of the business capabilities offered by models may slow down digital transformation.

A model risk program that uses a top-down approach would focus on improving processes with models and improving the model risk management processes around models. This type of approach could improve risk management by evaluating the most important business outputs and their associated processes to identify any models currently being used and necessary model capabilities that are not being used effectively. Key business processes may include origination and servicing of bank products, enablement of bank services, creation of financial statements, generation of regulatory reporting, and digital customer journeys. Model development and risk practitioners can connect with peers in the business and IT architecture to understand current processes as well as what processes could be improved through employing model capabilities.

Model risk groups would focus traditional model risk testing activities on the most important processes. These traditional activities should be coupled with modern model risk evaluations to ensure that models are interoperable across processes and that models scale based on the expected volume of each process. These tests would align with the SR 11-7 requirement that “model calculations should be properly coordinated with the capabilities and requirements of information systems.” “Sound model risk management,” the guidelines note, “depends on substantial investment in supporting systems to ensure data and reporting integrity, together with controls and testing to ensure proper implementation of models, effective systems integration, and appropriate use.”1

Model risk groups could modernize their inventories to include discrete business capabilities, indicate which models are accessible through APIs, and facilitate automation by encouraging the use of APIs and creating awareness around the model capabilities within the business and IT architecture. Automation can be further enabled by also identifying model capability gaps or gaps in interoperability. Models that are interoperable using APIs would allow model risk groups to automate and scale many of their testing processes as well.

Finally, a model risk framework that focuses on model-driven business capabilities accessible through APIs would create more efficient and effective processes. The use of microservices has been made popular by technology companies that use an API business model. Microservices enable companies to react quickly to customer needs, integrate with third-party partners, rapidly re-engineer processes, reduce overall IT spend, and scale quickly. The banking industry has formed the not-for-profit BIAN to rethink bank processes, including quantitative models, using the service-driven design through APIs. Models can serve as components of microservices, but this would inherently change the focus from primarily mathematical techniques to include heavy consideration of model architecture.

At the corporate level, outputs meant for external consumption, such as financial statements and regulatory reports, are being fed by a network of models. Furthermore, there are strategic decisions being made internally about risk and risk appetite, which are also supported by models. Identifying these final important outputs provides focus and elucidates the models and data that feed them. This is a perfect opportunity for banks to simplify and automate their operations. This top-down approach will help banks learn where they need models and will provide opportunities to build in elements of API services, which may be expanded over time.

Consider how many models are used in commercial credit. Often, several lines of business have commercial credit exposure, each with their own models. There are often separate models for each portfolio and product; even if they have different inputs, they are producing the same fundamental outputs. Charge-off models depend upon PD, EAD, and LGD models. PPNR models use expected interest and non-interest revenues and expenses. The top-level models in the process should be targeted first.

While the inputs to these models will be calculated differently depending upon the financial product and other characteristics, much of the redundancy may be eliminated through modularization and a service-driven business model. These modules are independent and can be used across multiple portfolios if developed as generalized modules and built with scalability in mind. The upstream models that produce the idiosyncratic inputs to these modules can then be streamlined in successive phases to make the transformation manageable.
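The modularization described above can be sketched in miniature. The standard expected-loss identity, EL = PD × LGD × EAD, becomes a single generalized module that the upstream models for any portfolio can call, rather than each line of business re-implementing the same calculation. The function below is a hypothetical illustration of that design, not the authors' implementation; only the formula itself is standard credit-risk arithmetic:

```python
def expected_loss(pd: float, lgd: float, ead: float) -> float:
    """Generalized expected-loss module: EL = PD * LGD * EAD.

    Portfolio-agnostic by design -- the upstream models for each product
    supply their own idiosyncratic PD, LGD, and EAD estimates, and every
    line of business reuses this one module instead of duplicating it.
    """
    if not (0.0 <= pd <= 1.0 and 0.0 <= lgd <= 1.0 and ead >= 0.0):
        raise ValueError("PD and LGD must be in [0, 1]; EAD must be non-negative")
    return pd * lgd * ead

# The same module serves any portfolio once its inputs are standardized
# (figures below are illustrative, not real portfolio parameters).
commercial_el = expected_loss(pd=0.02, lgd=0.45, ead=1_000_000)  # about 9,000
consumer_el = expected_loss(pd=0.05, lgd=0.80, ead=25_000)       # about 1,000
```

Exposing such a module behind an API is what would let the idiosyncratic upstream PD, LGD, and EAD models be streamlined in later phases without touching the shared calculation.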
In conclusion, managing model risk through this top-down approach creates four main benefits:
• It improves risk management effectiveness by focusing risk management resources on models that impact real business outcomes.
• It cuts costs by greatly reducing model risk activities spent on models and processes that pose little or no risk.
• It draws attention to unused model capabilities.
• It facilitates process automation and enables digital transformation by aligning models to the standards maintained by the Bank Industry Architecture Network.
The opinions expressed in the article are statements of the authors’ opinion, are intended only for informational purposes, and are not formal opinions of, nor binding on, Regions Bank, its parent company, Regions Financial Corporation, and their subsidiaries, and any representation to the contrary is expressly disclaimed.
Notes
1. https://www.federalreserve.gov/supervisionreg/srletters/sr1107a1.pdf, page 7.
JACOB KOSOFF is a senior vice president and head of model risk management and validation at Regions Bank. He can be reached at Jacob.Kosoff@Regions.com.

HENRY LEE, PH.D., is a senior risk and financial intelligence consultant at the SAS Institute. He can be reached at henry.lee@sas.com.

AARON BRIDGERS is a senior vice president and the head of risk testing optimization at Regions Bank. He can be reached at Aaron.Bridgers@Regions.com.