Risk analytics infrastructure—and even how the banking industry thinks about analytical model risk—has evolved by leaps and bounds over the last decade. In this article, Stevan Maglic and Jacob Kosoff discuss these transformations within risk analytics.
Rethinking Analytics, Analytical Processes, and Risk Architecture Across the Enterprise
The RMA Journal, February 2020 | Copyright 2020 by RMA
BY STEVAN MAGLIC AND JACOB KOSOFF
ANALYTICS AND RISK analytics infrastructure—and even how we think about analytical model risk—have evolved by leaps and bounds at banks over the last decade. Following the financial crisis, the Federal Reserve Board played a key role in setting new modeling requirements¹ as well as establishing new model validation standards.² At the same time, the banking system has continued to undergo significant structural changes, where many non-bank participants have entered the market and various types of fund managers have made significant inroads into what has traditionally been banking activity. Collateralized loan obligations (CLOs) and other structures now hold significant exposures that banks historically held on their balance sheets. As if that weren’t enough, fintech companies have recently emerged as a significant, disruptive force, and big technology names such as Google, Apple, Facebook, and Amazon all have their own ideas about how to become more active in financial services. These are all formidable threats. Innovation and technology are true strengths of these new competitors. At the same time, they are not burdened by legacy processes and systems as most banks are. Therefore, given that competition is only expected to increase, traditional banks must rethink innovation and distinguish themselves through their keen ability to understand risk and regulation effectively.
Much of banks’ understanding of risk and regulation comes down to analytics and analytical processes. In this regard, the industry has made enormous investments over the last decade building stress testing and Current Expected Credit Loss (CECL) methodologies and financial crime detection models, in addition to implementing artificial intelligence and machine learning modeling techniques. This all comes on top of an already sizable model infrastructure that banks use to manage themselves. At this point, banks need to think about how they can use analytics much more efficiently: how to more effectively develop and deploy models, how to standardize model development and testing, how to utilize modern software development practices, how to rationalize redundant analytical processes, and how to build the environment needed to support these activities. With every area of banking increasingly reliant on modeling and analytics, model efficiency and effectiveness are going to be of paramount importance. Perhaps a helpful way to frame the opportunity is for banks to think about what they do in the context of how a fintech or a big technology firm would approach the challenge.
In particular, banks have an opportunity to re-engineer the model development cycle and how models can be developed and validated more effectively. This comprehensively includes how models are developed, validated, deployed, and monitored. Taken a step further, one can easily imagine an entire model lifecycle process in which models move seamlessly from development to validation to deployment within flexible multi-purpose environments. Indeed, firms across multiple industries have started to leverage practices that were first developed by software development companies to effectively redesign the model development and validation processes. For example, Uber develops thousands of internal and external facing models in the React.js language. Rather than have each modeling team reinvent the wheel each time, Uber’s model developers leverage Web Base—a suite of pre-built and standardized functions. In the same vein, banks are developing a similar set of model features for reuse in modern libraries such as PySpark. With well over 1,000 model features built on common deposit and loan data sources, many institutions have moved to this framework. In doing so, both wealth management and consumer banking can leverage the same feature repository for their specific business needs. This makes not only model development easier, but also model validation, because the validation team is already familiar with the techniques used in a prior validation of a similar model.

With so much bespoke model development activity at each institution, there really is a need to standardize the process and make this all much more effective. For instance, how can model development be partially automated and perhaps even leverage economies of scale? An example of such a scaling effort could be to develop similar models at once, with all the same standardized tests in one framework. It is common for model development teams to develop a central feature repository and common analytical opportunities. Not only are the same feature sets being used, but the same model frameworks are being leveraged to jump-start the model development process and decrease the time to deployment. Bulk model development and validation could be applied to all time series models or all logistic regression-based models, for example. Alternatively, efficiencies can be gained through standardized components that focus on specific tests such as out-of-sample testing or ongoing monitoring. For example, central feature sets can have built-in automated testing, with unit testing around every single function that generates a feature. If a model risk team validates this feature set and the unit test, the stored output that is written back to the data lake could be validated for other analytical uses. While the system as a whole needs to be validated, the core components could be reviewed by model risk management from a prior validation and periodically reviewed as part of the governance process. This would make model risk management more efficient.
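To make the shared-repository pattern concrete, the following is a minimal sketch of what one entry might look like in PySpark. The function, column names, and test values (add_utilization_feature, balance, credit_limit) are hypothetical illustrations, not drawn from any particular bank's library; the point is one documented function per feature, shipped with the unit test that governs it.

```python
# Hypothetical sketch of a shared feature-repository entry: one documented,
# unit-tested function per feature, so every modeling team consumes
# identical logic. All names and values are illustrative.
from pyspark.sql import DataFrame, SparkSession
import pyspark.sql.functions as F


def add_utilization_feature(accounts: DataFrame) -> DataFrame:
    """Append a null-safe line-utilization feature, capped at 100%."""
    return accounts.withColumn(
        "utilization",
        F.when(
            F.col("credit_limit") > 0,
            F.least(F.col("balance") / F.col("credit_limit"), F.lit(1.0)),
        ).otherwise(F.lit(None)),
    )


def test_add_utilization_feature():
    """The unit test that ships with the feature itself."""
    spark = SparkSession.builder.master("local[1]").getOrCreate()
    df = spark.createDataFrame(
        [(50.0, 100.0), (300.0, 200.0), (10.0, 0.0)],
        ["balance", "credit_limit"],
    )
    out = {r["balance"]: r["utilization"]
           for r in add_utilization_feature(df).collect()}
    assert out[50.0] == 0.5       # ordinary utilization
    assert out[300.0] == 1.0      # capped at 100%
    assert out[10.0] is None      # zero credit limit handled safely
```

Once model risk management has validated the function and its test, every downstream model that reuses the stored utilization feature inherits that assurance rather than triggering a fresh review of the same logic.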
These challenges could be considered more straightforward to implement because the focus is on giving better tools to modeling teams to bring about standardization and efficiency.

A more challenging consideration is the models themselves and how they work—or in some cases, don’t work—together within a firm. In fact, most models were built for good reason with a specific use in mind, but over time that has created overlap, with different models seemingly doing related things. To illustrate the challenge, one may ask: How many cash flow engines does your firm have, and can the processes be rationalized in some way? Continuing along this line of thinking, prepayment models and assumptions are embedded in mortgage servicing rights (MSR) valuation, CCAR/CECL processes, asset liability management (ALM), balance sheet valuation activities, and elsewhere. How can redundancy be reduced or at least consistency improved? Additionally, most banks have a variety of default estimation models in use for different purposes. There is a genuine opportunity for efficiency gains by integrating these models and processes together to improve consistency.
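To illustrate one way redundancy could be reduced, here is a minimal sketch, under assumed names (prepayment, cpr_to_smm), of the prepayment assumption factored into a single module that MSR valuation, ALM, and CCAR/CECL processes would all import. Only the CPR-to-SMM conversion is a standard formula; the rest is invented for illustration.

```python
# prepayment.py -- an illustrative single source of truth for prepayment
# assumptions. MSR valuation, ALM, and CCAR/CECL processes would all
# import this module instead of embedding their own copies.


def cpr_to_smm(cpr: float) -> float:
    """Convert an annualized conditional prepayment rate (CPR) to the
    standard single monthly mortality rate: SMM = 1 - (1 - CPR)**(1/12)."""
    return 1.0 - (1.0 - cpr) ** (1.0 / 12.0)


def expected_prepayment(balance: float, cpr: float) -> float:
    """Expected prepaid principal on a balance over one month."""
    return balance * cpr_to_smm(cpr)


# Downstream processes share the assumption instead of duplicating it:
#   from prepayment import expected_prepayment
# so a recalibration propagates to MSR, ALM, and CCAR/CECL at once.
```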
Perhaps the best example of model integration is how many banks have repurposed stress testing models that were originally developed for CCAR to support the new CECL accounting standard for setting reserves. This was accomplished through only modest incremental efforts, as opposed to building up an entirely separate set of models and processes. Although each firm has a unique analytics infrastructure, additional integration opportunities certainly exist within each firm. In many cases, reducing model redundancy is unlikely to be as straightforward as eliminating one model in favor of another. At some firms, one integration opportunity might be economic capital and stress testing processes that both estimate some type of tail loss, yet rely on differing methodologies and assumptions. At other institutions, trading book and banking book processes may be disconnected and may benefit from integration. As models are used for multiple purposes, the overhead to develop, validate, and maintain the models can be reduced materially. Through this modernization, there will be greater simplicity, transparency, and efficiency. This also means more optimal use of quantitative talent, fewer handoffs, and lower turnaround time to produce results.
Most importantly, model and process integration facilitates a more transparent, consistent, and comprehensive understanding of risk. Through this integration, assumptions become more aligned and the quality of results becomes higher. Using the CCAR and CECL example, one modeling framework intuitively drives both expected as well as stressed losses. With an integrated model and process framework, it becomes more possible to understand how different risks work together in your firm’s exposures. For example, while most banks are able to effectively quantify market, credit, interest rate, and liquidity risk in isolation, most firms are challenged to effectively quantify how all these risks work together, especially during a crisis. At some firms, understanding how credit risk and liquidity risk work together requires coordination and phone calls between treasury and risk. In another example, if independent models quantify counterparty credit risk and credit risk in the banking book separately, it may go unrecognized that these risks can manifest themselves at the same time, leading to increased concentration risk.
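A stylized sketch of that single-framework idea follows, with an invented loss-rate function and scenario paths rather than any real calibration: the same engine produces an expected-loss (CECL-style) figure when fed a baseline forecast and a stressed-loss (CCAR-style) figure when fed an adverse path.

```python
# One loss engine, two uses: expected losses under a baseline scenario
# and stressed losses under an adverse scenario. All numbers illustrative.
from dataclasses import dataclass


@dataclass
class Scenario:
    name: str
    unemployment: list[float]  # projected quarterly unemployment rates, %


def quarterly_loss_rate(unemployment: float) -> float:
    """Toy macro-sensitive loss rate: rises as unemployment exceeds 4%."""
    return 0.001 + 0.0008 * max(unemployment - 4.0, 0.0)


def projected_losses(balance: float, scenario: Scenario) -> float:
    """Cumulative projected losses over the scenario horizon."""
    return sum(balance * quarterly_loss_rate(u) for u in scenario.unemployment)


baseline = Scenario("baseline forecast", [4.0, 4.1, 4.2, 4.3])
severe = Scenario("severely adverse", [6.5, 8.5, 10.0, 9.5])

cecl_style_reserve = projected_losses(1_000_000.0, baseline)  # expected losses
ccar_style_loss = projected_losses(1_000_000.0, severe)       # stressed losses
```

Because both figures come from one engine, their assumptions cannot silently diverge; only the scenario input differs.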
It should be noted that not all models or processes may be appropriate for integration. For example, an analytical process that may be suitable for risk management purposes may not meet accounting standards for allowance purposes. Or perhaps a sophisticated prepayment model may be needed for ALM purposes, whereas a much simpler model may be sufficient for capital planning. In some cases, reconciliation may be favored over full integration. In other cases, it makes the most sense to re-engineer two processes into one more generalized model or process. An example of this would be to subsume an early warning model and a risk rating model into one more comprehensive model framework. Similarly, a credit loss stress testing model can be designed to be a risk rating model when macroeconomic dependence is turned off.
some firms, understanding how credit
risk and liquidity risk work together
requires coordination and phone calls
between treasury and risk. In another
example, if independent models quan-
tify counterparty credit risk and credit
risk in the banking book separately, it
may not be apparent that these risks
can manifest at the same time,
leading to increased concentration risk.
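To make the CCAR/CECL example concrete, the minimal sketch below shows how a single scenario-conditional loss engine could serve both purposes: run the baseline scenario over the asset's life for a CECL-style reserve, and a severe scenario over a shorter horizon for a CCAR-style stressed loss. The loss function and every parameter here are hypothetical, not any bank's actual model.

# Hypothetical sketch: one loss engine serving both reserving and stress testing.
from dataclasses import dataclass

@dataclass
class Scenario:
    name: str
    unemployment: float  # macro driver, annual average, in percent

def annual_loss_rate(base_pd: float, lgd: float, scenario: Scenario) -> float:
    # Toy link: default rates scale with unemployment relative to a 4% baseline.
    macro_multiplier = scenario.unemployment / 4.0
    return min(base_pd * macro_multiplier, 1.0) * lgd

def cumulative_expected_loss(balance: float, base_pd: float, lgd: float,
                             scenario: Scenario, years: int) -> float:
    # Sum expected losses year by year on the surviving balance.
    surviving, total = balance, 0.0
    for _ in range(years):
        loss = surviving * annual_loss_rate(base_pd, lgd, scenario)
        total += loss
        surviving -= loss
    return total

baseline = Scenario("baseline", unemployment=4.0)
severe = Scenario("severely adverse", unemployment=10.0)

# Same engine, two uses: a CECL-style lifetime reserve under the baseline,
# and a CCAR-style loss under a severe scenario over a shorter horizon.
print(cumulative_expected_loss(1_000_000, 0.02, 0.40, baseline, years=5))
print(cumulative_expected_loss(1_000_000, 0.02, 0.40, severe, years=2))

Because both numbers come from one engine, the expected and stressed views share data, assumptions, and code by construction.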
It should be noted that not all mod-
els or processes may be appropriate
for integration. For example, an ana-
lytical process that may be suitable
for risk management purposes may
not meet accounting standards for
allowance purposes. Or, perhaps a
sophisticated prepayment model may
be needed for ALM purposes, whereas
a much simpler model may be suf-
ficient for capital planning. In some
cases, reconciliation may be favored
over full integration. In other cases, it
makes the most sense to re-engineer
two processes into one more general-
ized model or process. An example
of this would be to subsume an early
warning model and risk rating model
into one more comprehensive model
framework. Similarly, a credit loss
stress testing model can be designed
to be a risk rating model when
macroeconomic dependence is turned off.

Model and process integration involves
both organizational considerations and
the definition of a clear operating model.
Although integration suggests some level
of process consolidation, it cannot
compromise the ownership of key processes
or the perspective of critical subject matter
expertise. For this reason, banks need an
operating model that can support some
centralization of analytical processes
without compromising key processes within
the firm. For example, both finance and
risk need to individually own and direct
the processes they uniquely understand,
while staying connected to the rest of the
firm. One possible arrangement is an
open-architecture environment in which
individual groups own specific model
components of a common framework of
models used throughout the firm. For
example, risk may own the credit modules
of the framework, while treasury may own
the interest rate component, but both
divisions get the benefit of the common
platform. Other areas could be responsible
for data structure and design of the
computational environment. Under this
arrangement, ownership is shared by a few
key process owners, and the bank is able
to fully leverage its collective subject
matter expertise. Furthermore, modeling
teams have a common underlying framework
that ensures collaboration. A successful
operating model thus ensures efficiency
through clear ownership, coordinated model
development, and coherent analytical processes.
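To illustrate the open-architecture idea, the toy sketch below has each division register its own component behind a shared interface on a common platform; every class and field name here is invented for illustration, not a description of any actual system.

# Illustrative sketch of an "open architecture" model platform where each
# division owns a component behind a shared interface. Names are hypothetical.
from typing import Protocol

class ModelComponent(Protocol):
    owner: str
    def run(self, position: dict, scenario: dict) -> float: ...

class CreditModule:
    owner = "Risk"
    def run(self, position, scenario):
        # Risk owns and maintains the credit loss logic.
        return position["balance"] * scenario["credit_loss_rate"]

class InterestRateModule:
    owner = "Treasury"
    def run(self, position, scenario):
        # Treasury owns the interest rate / valuation logic.
        return position["balance"] * scenario["rate_shock"] * position["duration"]

class CommonPlatform:
    """Shared framework: registers components and runs them on common data."""
    def __init__(self):
        self.components: dict[str, ModelComponent] = {}
    def register(self, name: str, component: ModelComponent) -> None:
        self.components[name] = component
    def run_all(self, position: dict, scenario: dict) -> dict[str, float]:
        return {name: c.run(position, scenario)
                for name, c in self.components.items()}

platform = CommonPlatform()
platform.register("credit", CreditModule())
platform.register("interest_rate", InterestRateModule())
print(platform.run_all(
    {"balance": 1_000_000, "duration": 3.0},
    {"credit_loss_rate": 0.01, "rate_shock": 0.02},
))

The point of the shared interface is that ownership stays with the subject matter experts while every consumer runs the same code against the same data.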
MODEL AND PROCESS INTEGRATION
THROUGH CULTURE AND GOVERNANCE
A firmwide effort to accomplish model
and process integration is critical. To this
end, model development, model valida-
tion, process owners, and stakeholders
all play a role. For example, one of the
most common issues found by model
validation teams is inconsistent assump-
tions between upstream, downstream, or
related models. Model validation analysts
at banks must ask model developers to
explain the role of their model in terms
of other related models and processes
in the firm. Comparing models is one of
the most important activities within the
scope of validation to truly understand
aggregate model risk. In this regard,
model validation staff—with its firmwide
perspective—is well-positioned to iden-
tify model and process redundancies. Ad-
ditionally, process owners, stakeholders,
and users must regularly challenge the
models with questions such as, “Why are
the prepayment assumptions in the ALM
process different from that of CCAR?” or
“How are the correlation assumptions em-
bedded in CCAR different from correlation
used in the economic capital process?”
Depending on the institution, it may
make sense to formalize the discussion
of models, model use, coordinated model
development, and model integration.
However—whether in a formal setting
or not—banks need to nurture innova-
tion by increasing model efficiency. This
discussion ties back to the culture of
an institution and the need for banks to
foster innovation to match fintech and big
technology competitors.
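The sort of cross-model comparison described here can even be partially automated. Below is a toy illustration that flags shared assumptions that disagree across a model inventory; the inventory records and field names are entirely hypothetical.

# Toy illustration: flag inconsistent shared assumptions across a model
# inventory. The inventory structure and values are hypothetical.
from collections import defaultdict

inventory = [
    {"model": "ALM prepayment", "assumption": "CPR", "value": 0.12},
    {"model": "CCAR mortgage loss", "assumption": "CPR", "value": 0.08},
    {"model": "Economic capital", "assumption": "asset_correlation", "value": 0.15},
    {"model": "CCAR wholesale loss", "assumption": "asset_correlation", "value": 0.15},
]

# Group models by the assumption they share; flag disagreement in values.
by_assumption = defaultdict(list)
for row in inventory:
    by_assumption[row["assumption"]].append((row["model"], row["value"]))

for assumption, users in by_assumption.items():
    values = {v for _, v in users}
    if len(values) > 1:
        print(f"Inconsistent '{assumption}':")
        for model, value in users:
            print(f"  {model}: {value}")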
Since the goal here is efficiency
and effectiveness, all projects must
result in measurable improvements.
One measure is the degree to which
results can be used to support business
decisions. For example, submitting an
informed bid on a portfolio acquisi-
tion requires timely and accurate
estimates of credit quality, allowance
usage, economic capital, stress capital
allocation, economic valuation, profit-
ability metrics, and concentration limit
considerations. Involving many groups
in this process introduces significant
delay and takes more people away
from their primary responsibilities
at the bank. A clear win for a bank
is where line of business partners can
easily access second-line risk models
and results to support timely portfo-
lio and single-name transaction deci-
sions. Other examples of measurable
improvement are reducing the number
of people or the time it takes to pro-
duce CCAR/CECL results or provide
updated customer information to the
line of business.
What is described here is a rede-
sign of the analytical risk architecture,
which is a significant endeavor at any
bank. Integration work of this type
requires a deep understanding of all
the processes involved and the breadth
of experience needed to weigh the
tradeoffs associated with integration.
Fortunately, the ideas presented here
can be pursued with newly available
technology and vendor solutions that
facilitate these transitions.
In this competitive and burgeoning
environment, there is no shortage of
good ideas and products to move the
industry forward.
In closing, it is perhaps worth mak-
ing a comparison between the finan-
cial services sector today and the retail
sector during the late ’90s dotcom
bubble, when Amazon first emerged
as a viable competitor to traditional
brick-and-mortar businesses. It was
very clear back then that the world
had changed, but it was not clear
how it would all turn out. It is clear
that banking is changing very rapidly,
and that banking is not going away.
However, it is unclear which firms will
provide banking services to custom-
ers. In this context, traditional banks
have a unique opportunity to enhance
effectiveness through analytics and
innovation, while at the same time
continuing to leverage their expertise
and competitive advantages.
The opinions expressed in this article
are those of the authors, intended for
informational purposes only, and should
not be attributed to Regions Financial
Corporation or any of its subsidiaries
or affiliates, including Regions Bank.
Any representation to the contrary is
expressly disclaimed.

Notes
1. Comprehensive Capital Analysis and Review (CCAR).
2. Federal Reserve Supervisory Letter SR 11-7: Guidance on Model Risk Management.
SIZING THE OPPORTUNITY AND DEFINING A PLAN
There are a few ways that one can size
up the opportunity and define a path
towards improved model efficiency and
integration. Firms must use their inventory
of models and processes more strategi-
cally. Sorting models and components of
models by model output helps to identify
related or potentially redundant activities.
In addition, it is also helpful to formulate
an end-state vision of the risk analytics
architecture, including the desired func-
tionality and characteristics. From this,
it becomes clearer what must be done
today to realize the end-state vision. For
example, data sources need to be rec-
onciled, integrated, and migrated to a
centrally available environment that can
support the required computational needs.
Models and processes that need to talk
to one another may be written in
different languages and may need to be
modified accordingly. Highly involved or
bespoke models and processes need to be
generalized and modularized, if possible,
to work alongside other models. Given
the number and complexity of models, the
integration can only take place piecemeal
over time. Finally, with so many integrated
models, processes, and users, a robust
governance structure is critical to ensure
all components work as designed and
interdependencies are clearly understood.
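The inventory-sorting step lends itself to a simple first pass. Below is a rough sketch that groups a hypothetical model inventory by output to surface potentially redundant models; all records and field names are invented for illustration.

# Rough sketch: sort a model inventory by output to surface potential
# redundancy. Inventory entries and field names are hypothetical.
from itertools import groupby

inventory = [
    {"name": "CCAR credit loss", "output": "credit_losses"},
    {"name": "CECL allowance", "output": "credit_losses"},
    {"name": "Early warning score", "output": "obligor_risk_rating"},
    {"name": "Risk rating model", "output": "obligor_risk_rating"},
    {"name": "ALM prepayment", "output": "prepayment_speed"},
]

inventory.sort(key=lambda m: m["output"])  # groupby requires sorted input
for output, models in groupby(inventory, key=lambda m: m["output"]):
    names = [m["name"] for m in models]
    if len(names) > 1:
        print(f"{output}: candidates for integration -> {names}")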
A key component to successfully
realizing this goal involves getting man-
agement on board with the plan. It is
especially important that senior execu-
tives understand that this is a multiyear
endeavor, and that their dedicated sup-
port is needed over this period. Further-
more, having a strong executive sponsor
ensures that obstacles can be overcome
and that the firm can accommodate the
needed changes. In exchange for this
support, model development teams must
commit to a steady stream of milestones
and measurable deliverables, so that
management has confidence that prog-
ress continues as promised.
JACOB KOSOFF is Senior Vice President and Head of Model Risk Management and Validation at Regions Bank. He can be reached at Jacob.Kosoff@regions.com.

STEVAN MAGLIC is Senior Vice President and Head of Quantitative Risk Analytics at Regions Bank. He can be reached at Stevan.Maglic@regions.com.