Machine Learning Monitoring,
Compliance and Governance
Andrew Clark, Co-founder and CTO of Monitaur
• Co-founder and CTO of Machine Learning Assurance company Monitaur
• B.S. in Business Administration with a concentration in Accounting, Summa Cum
Laude, from University of Tennessee at Chattanooga
• M.S. in Data Science from Southern Methodist University
• Ph.D. Candidate in Economics at the University of Reading
• American Statistical Association Graduate Statistician (GStat) and INFORMS
Certified Analytics Professional (CAP)
• Experienced in designing, building, and deploying numerous machine learning and
continuous auditing solutions
• Worked in IT Audit for two publicly traded companies, one of them a Fortune 500
financial institution, as well as a researcher at a bespoke AI firm.
About me
• Machine Learning Overview
• The need for Machine Learning Monitoring and Governance
• Regulation Overview
• Frameworks for Machine Learning Governance
• How to customize a Machine Learning Governance program for your environment
Outline
● Understand the need for Machine Learning Governance and Risk Management
● Know at a high level the relevant regulations for Machine Learning
● Describe how to construct a Machine Learning Governance program
● Takeaway key risks and mitigating controls for Machine Learning Risk
Management and Compliance
Learning Objectives
A computer recognizing patterns without having
to be explicitly programmed.
What is Machine Learning?
● Creating new revenue sources and reducing cost. Models can perform or
accelerate work that was once only possible for humans, freeing people
for higher-value work.
● Disrupting business. ML-powered businesses disrupted
Blockbuster, taxis, etc.
● Revolutionizing existing business models. Predictive maintenance in
manufacturing, retailing, credit card fraud detection, loan underwriting. At
times, a significant improvement to existing models.
● One of the key technologies in driving economic growth.
Why is Machine Learning Important?
● Magic
● Not going to take your job (for the majority of professionals)
● Always the best tool for the job
What Machine Learning is not
• Given a labeled dataset (‘fraud’ / ‘not fraud’), the algorithm is ‘trained’ to recognize
which items are fraud and which items are not fraud.
• Examples:
• Transaction fraud detection
• Classifying images: dog/not dog
Supervised
Supervised Cont.
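As a minimal sketch of the supervised setup above: a toy nearest-centroid classifier trained on made-up ‘fraud / not fraud’ transactions. The features (amount, hour of day) and the labeled values are illustrative assumptions, not real data.

```python
# Minimal supervised learning sketch: learn one centroid per class from
# labeled examples, then classify new points by the nearest centroid.
# All feature values and labels below are made up for illustration.

def train(rows, labels):
    """Learn one centroid (mean feature vector) per class label."""
    sums, counts = {}, {}
    for x, y in zip(rows, labels):
        acc = sums.setdefault(y, [0.0] * len(x))
        for i, v in enumerate(x):
            acc[i] += v
        counts[y] = counts.get(y, 0) + 1
    return {y: [v / counts[y] for v in acc] for y, acc in sums.items()}

def predict(centroids, x):
    """Assign x to the class with the closest centroid."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(c, x))
    return min(centroids, key=lambda y: dist(centroids[y]))

# Toy labeled transactions: [amount, hour-of-day]
X = [[900, 3], [850, 2], [20, 14], [35, 12]]
y = ["fraud", "fraud", "not fraud", "not fraud"]

model = train(X, y)
print(predict(model, [880, 1]))   # a large late-night transaction -> fraud
```

The point is the workflow, not the algorithm: labeled history in, a learned decision rule out.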
● Given some cleaned data, the algorithm divides the data into like groups.
● Examples:
○ Pattern recognition - finding ways to compare data to find latent trends
○ Anomaly detection - finding data that is NOT similar to other data points
○ Clustering - identifying data points that are similar to their ‘neighbors’
Unsupervised
Unsupervised Cont.
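A minimal sketch of the unsupervised case above: k-means clustering on toy one-dimensional data with no labels, so the algorithm groups similar points on its own. The data values and k are illustrative assumptions.

```python
# Minimal unsupervised learning sketch: k-means on unlabeled 1-D data.

def kmeans(points, k, iters=20):
    centers = points[:k]                       # naive initialization
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:                       # assign to nearest center
            j = min(range(k), key=lambda i: abs(p - centers[i]))
            groups[j].append(p)
        centers = [sum(g) / len(g) if g else centers[i]
                   for i, g in enumerate(groups)]   # recompute centers
    return centers, groups

data = [1.0, 1.2, 0.8, 9.0, 9.5, 8.7]          # two obvious groups, no labels
centers, groups = kmeans(data, k=2)
print(sorted(round(c, 2) for c in centers))    # [1.0, 9.07]
```

No one told the algorithm which points belong together; the grouping falls out of the distances alone.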
Signal vs Noise
What many businesses get wrong https://hackernoon.com/the-ai-hierarchy-of-needs-18f111fcc007
● With algorithms increasingly dictating our lives, how do we know that they are
operating as intended?
● Apple Card
● Crime prediction algorithms
● Sentencing algorithms
● Source of truth - every auditor/organization needs to capture, and have access to,
the truth of events.
Why do we need monitoring and governance?
● GDPR
● OCC 2011/12 - SR 11-7 - (reviewed in Frameworks section)
● FDA Proposed Regulatory Framework
Regulation Overview
● “As above, the GDPR has specific requirements around the provision of information about, and an
explanation of, an AI-assisted decision where:
○ it is made by a process without any human involvement; and
○ it produces legal or similarly significant effects on an individual (something affecting an individual’s
legal status/ rights, or that has equivalent impact on an individual’s circumstances, behaviour or
opportunities, eg a decision about welfare, or a loan). In these cases, the GDPR requires that you:
■ are proactive in “…[giving individuals] meaningful information about the logic involved, as
well as the significance and envisaged consequences…” (Articles 13 and 14);
■ “… [give individuals] at least the right to obtain human intervention on the part of the
controller, to express his or her point of view and to contest the decision.” (Article 22); and
■ “… [give individuals] the right to obtain… meaningful information about the logic involved,
as well as the significance and envisaged consequences…” (Article 15) “…[including] an
explanation of the decision reached after such assessment…” (Recital
71)”-https://ico.org.uk/media/about-the-ico/consultations/2616434/explaining-ai-decis
ions-part-1.pdf
GDPR
● “GDPR: “Recital 71 provides interpretative guidance. It makes clear that
individuals have the right to obtain an explanation of a solely automated
decision after it has been made.”
● “The GDPR’s recitals are not legally binding, but they do clarify the meaning
and intention of its articles”
-https://ico.org.uk/media/about-the-ico/consultations/2616434/explainin
g-ai-decisions-part-1.pdf
GDPR Cont.
● “Here’s what AB 375 considers “personal information”:
○ Identifiers such as a real name, alias, postal address, unique personal identifier, online identifier IP address,
email address, account name, Social Security number, driver’s license number, passport number, or other
similar identifiers
○ Characteristics of protected classifications under California or federal law
○ Commercial information including records of personal property, products or services purchased, obtained or
considered, or other purchasing or consuming histories or tendencies
○ Biometric information
○ Internet or other electronic network activity information including, but not limited to, browsing history, search
history and information regarding a consumer’s interaction with a website, application or advertisement
○ Geolocation data
○ Audio, electronic, visual, thermal, olfactory or similar information
○ Professional or employment-related information
○ Education information, defined as information that is not publicly available personally identifiable information
(PII) as defined in the Family Educational Rights and Privacy Act (20 U.S.C. section 1232g, 34 C.F.R. Part
99)
○ Inferences drawn from any of the information identified in this subdivision to create a profile about a
consumer reflecting the consumer’s preferences, characteristics, psychological trends, preferences,
predispositions, behavior, attitudes, intelligence, abilities and aptitudes” -
https://www.csoonline.com/article/3292578/california-consumer-privacy-act-what-you-need-to-know-to-be-c
ompliant.html
California Consumer Privacy Act
● “When applied to AI/ML-based SaMD, the above approach would require a premarket submission to the
FDA when the AI/ML software modification significantly affects device performance, or safety and
effectiveness; the modification is to the device’s intended use; or the modification introduces a major
change to the SaMD algorithm...” -
● “It also assures that ongoing algorithm changes are implemented according to pre-specified performance
objectives, follow defined algorithm change protocols, utilize a validation process that is committed to
improving the performance, safety, and effectiveness of AI/ML software, and include real-world monitoring
of performance.”
● Need a version-controlled and auditable way to monitor machine learning.
● “To fully adopt a TPLC approach in the regulation of AI/ML-based SaMD, manufacturers can work to
assure the safety and effectiveness of their software products by implementing appropriate mechanisms
that support transparency and real-world performance monitoring. “
● https://www.fda.gov/files/medical%20devices/published/US-FDA-Artificial-Intelligence-and-Machine-Learn
ing-Discussion-Paper.pdf
FDA Proposed Regulatory Framework for Modifications
to AI/ML
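One hedged sketch of the version-controlled, auditable monitoring the FDA discussion paper points toward: record every prediction with the model version, inputs, output, and a tamper-evident digest. The record fields, version string, and fixed timestamp are illustrative assumptions, not a regulatory schema.

```python
# Sketch of an auditable decision log: each prediction is stored with
# the model version and a content hash so later edits are detectable.
import json, hashlib, datetime

def log_decision(model_version, inputs, output):
    record = {
        "ts": datetime.datetime(2020, 3, 1, 12, 0).isoformat(),  # fixed for the demo
        "model_version": model_version,
        "inputs": inputs,
        "output": output,
    }
    # hash the record contents; recomputing it later verifies integrity
    record["digest"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()).hexdigest()[:12]
    return record

entry = log_decision("v1.4.2", {"amount": 880, "hour": 1}, "fraud")
print(entry["model_version"], entry["output"])   # v1.4.2 fraud
```

Tying every output to a specific model version is what makes "which algorithm made this decision, and when?" answerable after the fact.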
● ISACA AI
● ICO
● OCC 2011/12 - SR 11-7
● CRISP-DM
Frameworks for Machine Learning
Governance
● Based on COBIT 2019 -
https://www.isaca.org/bookstore/bookstore-wht_papers-digital/whpaai
● DSS06
● White paper about the need for AI governance and the challenges for the IT
Auditor
● Not too useful for our needs.
ISACA AI
● Three parts. The public consultation concluded in January 2020.
● https://ico.org.uk/media/about-the-ico/consultations/2616434/explaining-ai-decision
s-part-1.pdf
● Principles:
○ Be transparent
○ Be accountable
○ Consider context
○ Reflect on impacts
ICO Explaining AI Decisions
● Part 2:
https://ico.org.uk/media/about-the-ico/consultations/2616433/explaining-ai-decisions-part-2.pdf
● Detailed steps for data scientists to take:
○ Select priority explanations by considering the domain, use case and impact on the
individual.
○ Collect the information you need for each explanation type
○ Build your rationale explanation to provide meaningful information about the underlying
logic of your AI system
○ Translate the rationale of your system’s results into usable and easily understandable
reasons
○ Prepare implementers to deploy your AI system
○ Consider contextual factors when you deliver your explanation
○ Consider how to present your explanation
ICO Explaining AI Decisions Cont.
● Part 3:
https://ico.org.uk/media/about-the-ico/consultations/2616436/explaining-ai-decision
s-part-3.pdf
● Organizational policy and roles outlined
● Documentation guidelines
● Great for an in-depth review of the thought process of creating an AI Governance
framework
● Cons:
○ Hard to distill into actionable guidance.
○ Very specific, some specifics will be out of date soon.
ICO Explaining AI Decisions Cont.
● Gold standard for Model Risk Management
● Required and in place at regulated financial institutions.
● Well established.
● Great starting point, although slight adjustments need to be made for ML/AI.
● Establishes strong 2nd-line and 3rd-line processes and controls
● Models can be misapplied.
OCC 2011/12 – SR 11-7
● “A guiding principle for managing model risk is “effective challenge” of models, that is, critical analysis by
objective, informed parties who can identify model limitations and assumptions and produce appropriate
changes.” - https://www.occ.gov/news-issuances/bulletins/2011/bulletin-2011-12.html
● Appropriate compensation and incentives
○ Competent, well-skilled model developers with an interdisciplinary bent
○ In-depth testing
○ Documentation, Documentation, Documentation
○ “Generally, validation should be done by people who are not responsible for
development or use and do not have a stake in whether a model is determined to be valid.”
■ Appropriately incentivized and compensated
■ Have authority to explicitly challenge
OCC 2011/12 – SR 11-7 Cont.
● Model Development, Implementation, and Use
● Model Validation
○ Evaluation of conceptual soundness, including developmental evidence
○ Ongoing monitoring, including process verification and benchmarking
○ Outcomes analysis, including back-testing
● Governance, Policies, and Controls
○ Board of Directors and Senior Management
○ Policies and Procedures
○ Roles and Responsibilities
○ Internal Audit
○ External Resources
○ Model Inventory
○ Documentation
OCC 2011/12 – SR 11-7 Cont.
● Framework that extends the industry standard data mining framework, CRISP-DM to auditing machine
learning implementations. -
https://www.isaca.org/resources/isaca-journal/issues/2018/volume-1/the-machine-learning-auditcrisp-dm-f
ramework
● Leverages iterative steps of the CRISP-DM model:
○ Business Understanding
○ Data Understanding
○ Data Preparation
○ Modeling
○ Evaluation
○ Deployment
CRISP-DM
● What is the goal of the algorithm?
● Have models been used in this use case before?
● What attributes, e.g. temperature, humidity, etc., have been identified by the
business as key factors for deriving the desired decision in the given use case?
● Are there any regulatory constraints or considerations of which to be aware?
CRISP-DM - Business Understanding
● What dataset[s] was utilized to train the model?
● What dataset[s] is utilized for production prediction?
● Where did the dataset[s] identified in 1, 2 originate? E.g., web-scraped data, log
files, relational databases.
● Are all of the input variables in the same format? E.g., miles or kilometers.
● Have the correlations and covariances been examined?
CRISP-DM - Data Understanding
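The last question on the slide, examining correlations, can be sketched with nothing but the stdlib. The two toy variables below are illustrative assumptions chosen to be strongly correlated.

```python
# Sketch of one Data Understanding check: Pearson correlation between
# two toy input variables, computed with the stdlib only.
import math

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

temperature = [20, 25, 30, 35]        # toy inputs
ice_cream_sales = [10, 18, 35, 41]
print(round(pearson(temperature, ice_cream_sales), 3))   # 0.983
```

A correlation this high between two candidate features is exactly the kind of finding an auditor should ask the modeling team to explain before trusting the variable set.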
● How was the data cleaned?
● If supervised learning was used, how was the training dataset created?
● Were standard software development techniques used for the ETL process for
production models?
● How was the data scaled?
● How were the variables selected? Was an automated variable selection technique
utilized?
● What process was used to separate the data into train and test sets? Was care
taken to avoid peeking at the test set?
CRISP-DM - Data Preparation
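The train/test question above can be sketched as a reproducible split: shuffle once with a fixed seed, cut, and never touch the test portion until final evaluation. The seed and split fraction are illustrative assumptions.

```python
# Sketch of a reproducible train/test split using only the stdlib.
import random

def train_test_split(rows, test_fraction=0.25, seed=42):
    rows = rows[:]                      # copy so the caller's order is kept
    random.Random(seed).shuffle(rows)   # fixed seed => auditable, repeatable split
    cut = int(len(rows) * (1 - test_fraction))
    return rows[:cut], rows[cut:]

data = list(range(100))
train, test = train_test_split(data)
print(len(train), len(test))            # 75 25
```

Because the seed is fixed, an auditor can re-run the split and confirm the exact same test set was held out, which is what "no peeking" has to mean in practice.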
● What was the thought process behind choosing algorithm[s] for the model?
● What steps were used to guard against overfitting?
● What process was used to optimize the chosen algorithm?
● Was the algorithm coded from scratch or was a standard library used? If so, what
are the license terms of the library?
● What type of version control was utilized?
CRISP-DM - Modeling
● What metrics were used to evaluate the model?
● What process and metrics are in place to monitor the continued accuracy and
stability of the model?
● Create a mock dataset that covers all of the relevant assumptions and run the
results through the algorithm to test that it is operating as intended.
CRISP-DM - Evaluation
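The mock-dataset check on this slide can be sketched directly: hand-build cases with known expected labels, run them through the model, and compare. The stand-in model and the cases below are illustrative assumptions.

```python
# Sketch of the mock-dataset evaluation: known-answer cases are run
# through the model to confirm it operates as intended.

def mock_model(amount):
    # stand-in for the deployed model: flag large transactions
    return "fraud" if amount > 500 else "not fraud"

mock_cases = [            # (input, expected label) pairs covering assumptions
    (900, "fraud"),
    (600, "fraud"),
    (30, "not fraud"),
    (0, "not fraud"),
]

accuracy = sum(mock_model(x) == y for x, y in mock_cases) / len(mock_cases)
print(accuracy)   # 1.0
```

Any case where the model disagrees with the expected label is a concrete, reviewable finding rather than an abstract metric.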
● How was the model moved to production? Was it rewritten by the engineering
team, or does it rely on an API, etc.? (If it was rewritten, a code review for
accuracy should be performed.)
● Is the model accomplishing what the business wanted it to accomplish?
CRISP-DM - Deployment
● Record, log and summarize all transactions
● Ensure there is business access to understand what was decided, why it was
decided and who was involved
● Establish post processing to identify anomalies
○ Unexpected inputs
○ Decision and output drift calculations
● Establish situational repeatability
● Enable counterfactual sensitivity analysis
CRISP-DM - Monitoring
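The drift-calculation bullet above can be sketched as a simple output-drift check: compare the model's recent positive-decision rate against the rate seen at validation time and alert when the gap exceeds a threshold. The baseline rate, recent decisions, and threshold are all illustrative assumptions.

```python
# Sketch of a decision/output drift check for a deployed model.

def drift_alert(baseline_rate, recent_decisions, threshold=0.10):
    """Return (alert?, recent rate) for a stream of 0/1 decisions."""
    recent_rate = sum(recent_decisions) / len(recent_decisions)
    return abs(recent_rate - baseline_rate) > threshold, recent_rate

baseline = 0.05                            # 5% 'fraud' decisions at validation
recent = [1, 0, 0, 1, 1, 0, 1, 0, 1, 0]    # 1 = fraud decision in production
alert, rate = drift_alert(baseline, recent)
print(alert, rate)   # True 0.5
```

A jump from a 5% to a 50% flag rate does not say whether the model or the world changed, but it is exactly the kind of anomaly that post-processing should surface for human review.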
● Take an inventory of all models used in your company and determine their
complexity and risk.
● Determine if a model management program is already in place. If so, you can
expand on existing policy (after reviewing it of course).
● Read through relevant frameworks. Choose the simplest one that meets
your needs.
● Map existing controls.
● Have internal audit validate and periodically test against it.
● Use a trusted third party for design and monitoring assistance.
How to construct a Machine Learning Governance or
Audit plan
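The first step above, a model inventory with complexity and risk, can be sketched as a simple structured record. The field names and example entries are illustrative assumptions, not a standard schema.

```python
# Sketch of a minimal model-inventory record, the starting point for a
# governance program. Fields and entries are illustrative only.
from dataclasses import dataclass, field

@dataclass
class ModelRecord:
    name: str
    owner: str
    use_case: str
    risk_tier: str              # e.g. 'high' for credit decisions
    last_validated: str
    controls: list = field(default_factory=list)

inventory = [
    ModelRecord("loan-underwriting-v3", "credit-risk", "loan approval",
                "high", "2020-01-15", ["SR 11-7 validation", "drift monitor"]),
    ModelRecord("churn-model-v1", "marketing", "retention outreach",
                "low", "2019-11-02"),
]

high_risk = [m.name for m in inventory if m.risk_tier == "high"]
print(high_risk)   # ['loan-underwriting-v3']
```

Even this minimal structure lets governance questions become queries: which high-risk models lack a listed control, and when was each last validated?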
Questions?
Thank you!
Email: andrew@monitaur.ai
LinkedIn: https://www.linkedin.com/in/andrew-clark-b326b767/
Company website: https://monitaur.ai/
Personal website: https://aclarkdata.github.io/

GRC 2020 - IIA - ISACA Machine Learning Monitoring, Compliance and Governance
