
Responsible AI in Industry: Practical Challenges and Lessons Learned

How do we develop machine learning models and systems taking fairness, accuracy, explainability, and transparency into account? How do we protect the privacy of users when building large-scale AI-based systems? Model fairness, explainability, and protection of user privacy are considered prerequisites for building trust and adoption of AI systems in high-stakes domains such as hiring, lending, and healthcare. We will first motivate the need for adopting a “fairness, explainability, and privacy by design” approach when developing AI/ML models and systems for different consumer and enterprise applications from the societal, regulatory, customer, end-user, and model developer perspectives. We will then focus on the application of responsible AI techniques in practice through industry case studies. We will discuss the sociotechnical dimensions and practical challenges, and conclude with the key takeaways and open challenges.


  1. 1. Responsible AI in Industry: Practical Challenges and Lessons Learned CVPR 2021 Responsible Computer Vision Workshop Invited talk Krishnaram Kenthapadi & Nashlie Sephus, Ph.D. Amazon AWS AI
  2. 2. Algorithmic Bias • Ethical challenges posed by AI systems • Inherent biases present in society • Reflected in training data • AI/ML models prone to amplifying such biases
  3. 3. Laws against Discrimination: Immigration Reform and Control Act (citizenship); Rehabilitation Act of 1973 and Americans with Disabilities Act of 1990 (disability status); Civil Rights Act of 1964 (race); Age Discrimination in Employment Act of 1967 (age); Equal Pay Act of 1963 and Civil Rights Act of 1964 (sex); and more...
  4. 4. Fairness Privacy Transparency Explainability
  5. 5. Motivation & Business Opportunities Regulatory. We need to understand why the ML model made a given prediction and also whether the prediction it made was free from bias, both in training and at inference. Business. Providing explanations to internal teams (loan officers, customer service reps, compliance teams) and end users/customers. Data Science. Improving models through better feature engineering and training data generation, understanding failure modes of the model, debugging model predictions, etc.
  6. 6. Scaling Fairness, Explainability & Privacy across the AWS ML Stack: AI services (Amazon Rekognition, Polly, Transcribe + Medical, Lex, Personalize, Forecast, Comprehend + Medical, Textract, Kendra, CodeGuru, Fraud Detector, Translate, HealthLake, Lookout for Vision / Equipment / Metrics, Monitron, AWS Panorama + Appliance, DevOps Guru, Voice ID for Amazon Connect, Contact Lens); ML services (Amazon SageMaker and SageMaker Studio IDE: label data, aggregate & prepare data, store & share features, Auto ML, Spark/R, detect bias, visualize in notebooks, pick algorithm, train models, tune parameters, debug & profile, deploy in production, manage & monitor, CI/CD, human review, model management for edge devices, SageMaker JumpStart); frameworks & infrastructure (Deep Learning AMIs & Containers, GPUs & CPUs, Elastic Inference, Trainium, Inferentia, FPGA, Deep Graph Library)
  7. 7. LinkedIn operates the largest professional network on the Internet. Tell your story: 740M members; 55M+ companies are represented on LinkedIn; 90K schools listed (high school & college); 36K skills listed; 14M+ open jobs on LinkedIn Jobs; 280B feed updates
  8. 8. What ML Is Not: error-free (no system is perfect); 100% confident; intended to replace human judgement. 9
  9. 9. Fairness Techniques in Faces 12
  10. 10. Face Detection: detect the presence of a face in an image or a video. 13
  11. 11. Face Analysis: a system to determine the gender, age, emotion, presence of facial hair, etc. from a detected face.
  12. 12. Face Recognition: a system to determine a detected face’s identity by matching it against a database of faces and their associated identities. 15
  13. 13. Confidence Score: an estimate of the confidence or certainty of any prediction, expressed in the form of a probability or confidence score. 16
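
To make the confidence-score notion concrete, here is a minimal sketch (plain NumPy; the scores, threshold, and function name are hypothetical) of gating face-match decisions on a confidence threshold, which is exactly where the false-positive / false-negative tradeoff discussed later enters:

```python
import numpy as np

# Hypothetical similarity scores in [0, 1] for candidate matches; a match is
# only reported when the score clears the confidence threshold, so raising
# the threshold trades false positives for false negatives.
def report_matches(scores, threshold=0.99):
    scores = np.asarray(scores, dtype=float)
    return np.flatnonzero(scores >= threshold)

candidate_scores = [0.62, 0.991, 0.87, 0.995]
print(report_matches(candidate_scores, threshold=0.99))  # -> [1 3]
```
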
  14. 14. Face Recognition: Common Causes of Errors – illumination variance (lighting, camera controls like exposure, shadows, highlights); pose / viewpoint (face pose, camera angles); aging (natural aging, artificial makeup); expression / style (face expression like laughing, facial hair such as a beard, hair style); occlusion (part of the face hidden, as in group pictures) 17
  15. 15. Where Can Biases Exist? 18
  16. 16. Racial Comparisons of Datasets [FairFace] 19
  17. 17. Launch with Confidence: Testing for Bias • How will you know if users are being harmed? • How will you know if harms are unfairly distributed? • Detailed testing practices are often not covered in academic papers • Discussing testing requirements is a useful focal point for cross-functional teams
  18. 18. Reproducibility - Notebook Experiments 21
  19. 19. PPB2 Data Analytics 22
  20. 20. 23
  21. 21. Gender Classification – PPB2 24
  22. 22. 25
  23. 23. 26
  24. 24. 27
  25. 25. 28
  26. 26. Short Hair 29
  27. 27. Gender Classification w.r.t. Hair Lengths – PPB2 30
  28. 28. Efficient Testing for Bias • Development teams are under multiple constraints • Time • Money • Human resources • Access to data • How can we efficiently test for bias? • Prioritization • Strategic testing
  29. 29. Choose your evaluation metrics in light of acceptable tradeoffs between False Positives and False Negatives
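
A minimal sketch of acting on this: compute false-positive and false-negative rates separately per group so that the choice of metric and operating threshold reflects how the errors are distributed (group labels and toy data below are hypothetical):

```python
import numpy as np

# Compare false-positive and false-negative rates across groups so the
# operating threshold can be chosen with the acceptable tradeoff in mind.
def group_error_rates(y_true, y_pred, groups):
    y_true, y_pred, groups = map(np.asarray, (y_true, y_pred, groups))
    rates = {}
    for g in np.unique(groups):
        mask = groups == g
        t, p = y_true[mask], y_pred[mask]
        fpr = np.mean(p[t == 0] == 1) if np.any(t == 0) else float("nan")
        fnr = np.mean(p[t == 1] == 0) if np.any(t == 1) else float("nan")
        rates[g] = {"FPR": fpr, "FNR": fnr}
    return rates

# Toy labels, predictions, and a hypothetical group attribute.
print(group_error_rates([1, 0, 1, 0, 1, 0],
                        [1, 1, 0, 0, 1, 0],
                        ["a", "a", "a", "b", "b", "b"]))
```
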
  30. 30. Paper Review – FAccT ’21, March 3–10, 2021, Virtual Event, Canada. Nashlie Sephus, Ph.D., Tech Evangelist, AWS AI 33
  31. 31. Motivation - Datasets • How are datasets collected? • Where did the data come from? • When it comes to humans, were they aware? • Had the individuals given consent? • How are dataset owners being held accountable for consequences that may arise? • How to create greater transparency about data? 34
  32. 32. 35
  33. 33. Takeaways • Testing for blind spots across intersectional subgroups is key. • Taking confidence scores/thresholds and error bars into account when measuring bias is necessary. • Representation matters. • Transparency, reproducibility, and education can promote change. • Confidence in your product's fairness requires fairness testing. • Fairness testing has a role throughout the product iteration lifecycle. • Contextual concerns should be used to prioritize fairness testing. 36
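
On the error-bars takeaway above, a minimal sketch (plain NumPy, toy data) of bootstrapping a confidence interval around an accuracy estimate so that differences between small subgroups are not over-interpreted:

```python
import numpy as np

rng = np.random.default_rng(0)

# Bootstrap a confidence interval around an accuracy estimate; run this per
# subgroup to attach error bars to any disaggregated metric.
def bootstrap_accuracy_ci(y_true, y_pred, n_boot=2000, alpha=0.05):
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    n = len(y_true)
    accs = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)          # resample with replacement
        accs.append(np.mean(y_true[idx] == y_pred[idx]))
    lo, hi = np.quantile(accs, [alpha / 2, 1 - alpha / 2])
    return np.mean(y_true == y_pred), (lo, hi)

acc, (lo, hi) = bootstrap_accuracy_ci([1, 0, 1, 1, 0, 1, 0, 1],
                                      [1, 0, 0, 1, 0, 1, 1, 1])
print(f"accuracy={acc:.2f}, 95% CI=({lo:.2f}, {hi:.2f})")
```
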
  34. 34. Amazon SageMaker Clarify: Detect bias in ML models and understand model predictions
  35. 35. Amazon SageMaker Clarify (detect bias in ML models and understand model predictions): Detect bias during data preparation (identify imbalances in data); Check your trained model for bias (evaluate the degree to which various types of bias are present in your model); Explain overall model behavior (understand the relative importance of each feature to your model’s behavior); Explain individual predictions (understand the relative importance of each feature for individual inferences); Detect drift in bias and model behavior over time (provide alerts and detect drift over time due to changing real-world conditions); Generate automated reports (produce reports on bias and explanations to support internal presentations)
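
To illustrate the data-preparation step, here is a minimal sketch in plain pandas (not the Clarify API) of two pre-training measures in the spirit of what such a bias report contains: the class imbalance between a facet group and the rest, and the difference in positive-label proportions. The dataset and facet are hypothetical.

```python
import pandas as pd

# Two simple pre-training bias measures (plain pandas, not the Clarify SDK):
#   class_imbalance - how over/under-represented the facet group is
#   dpl             - difference in proportions of positive labels
def pre_training_bias(df, label_col, facet_col, facet_value, positive_label=1):
    a = df[df[facet_col] == facet_value]      # facet group of interest (assumption)
    d = df[df[facet_col] != facet_value]      # everyone else
    ci = (len(d) - len(a)) / len(df)
    dpl = (d[label_col] == positive_label).mean() - (a[label_col] == positive_label).mean()
    return {"class_imbalance": ci, "dpl": dpl}

# Toy loan dataset with a hypothetical "age_group" facet.
df = pd.DataFrame({"age_group": ["<40", "<40", "40+", "40+", "40+", "40+"],
                   "approved":  [0,      1,     1,     1,     0,     1]})
print(pre_training_bias(df, "approved", "age_group", "<40"))
```
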
  36. 36. Lessons learned • Fairness as a Process • Notions of bias & fairness are highly application dependent • Choice of the attribute(s) for which bias is to be measured & the choice of the bias metrics to be guided by social, legal, and other non-technical considerations • Collaboration/consensus across key stakeholders • Wide spectrum of customers with different levels of technical background • Managed service vs. open source packages • Monitoring of the deployed model • Fairness & explainability considerations across the ML lifecycle
  37. 37. Fairness and Explainability by Design in the ML Lifecycle
  38. 38. Additional Pointers: For more information on Amazon SageMaker Clarify, please refer to: • https://aws.amazon.com/sagemaker/clarify • Amazon Science / AWS Articles • https://aws.amazon.com/blogs/aws/new-amazon-sagemaker-clarify-detects-bias-and-increases-the-transparency-of-machine-learning-models • https://www.amazon.science/latest-news/how-clarify-helps-machine-learning-developers-detect-unintended-bias • Technical paper: Fairness Measures for Machine Learning in Finance • https://github.com/aws/amazon-sagemaker-clarify Acknowledgments: Amazon SageMaker Clarify core team, Amazon AWS AI team, and partners across Amazon
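
For orientation, a hedged sketch of launching a pre-training bias analysis with the SageMaker Python SDK; see the links above for authoritative usage, since argument names can vary across SDK versions, and the S3 paths, column names, and facet below are placeholders:

```python
from sagemaker import Session, get_execution_role, clarify

# Processor that runs the Clarify analysis job (instance type is illustrative).
session = Session()
processor = clarify.SageMakerClarifyProcessor(
    role=get_execution_role(),
    instance_count=1,
    instance_type="ml.m5.xlarge",
    sagemaker_session=session,
)

# Where the data lives and which columns matter (all placeholders).
data_config = clarify.DataConfig(
    s3_data_input_path="s3://my-bucket/loans/train.csv",
    s3_output_path="s3://my-bucket/clarify-output/",
    label="approved",
    headers=["age_group", "income", "approved"],
    dataset_type="text/csv",
)

# Which label value counts as positive, and which facet to check.
bias_config = clarify.BiasConfig(
    label_values_or_threshold=[1],
    facet_name="age_group",
    facet_values_or_threshold=["<40"],
)

processor.run_pre_training_bias(
    data_config=data_config,
    data_bias_config=bias_config,
    methods="all",
)
```
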
  39. 39. Key Takeaways
  40. 40. Good ML Practices Go a Long Way Lots of low hanging fruit in terms of improving fairness simply by using machine learning best practices • Representative data • Introspection tools • Visualization tools • Testing 01 Fairness improvements often lead to overall improvements • It’s a common misconception that it’s always a tradeoff 02
  41. 41. Breadth and Depth Required Looking End-to-End is critical • Need to be aware of bias and potential problems at every stage of product and ML pipelines (from design, data gathering, … to deployment and monitoring) 01 Details Matter • Slight changes in features or labeler criteria can change the outcome • Must have experts who understand the effects of decisions • Many details are not technical such as how labelers are hired 02
  42. 42. Process Best Practices: Identify product goals • Get the right people in the room • Identify stakeholders • Select a fairness approach • Analyze and evaluate your system • Mitigate issues • Monitor continuously and have escalation plans • Auditing and transparency (spanning policy to technology)
  43. 43. Beyond Accuracy Performance and Cost Fairness and Bias Transparency and Explainability Privacy Security Safety Robustness
  44. 44. Fairness, Explainability & Privacy: Opportunities
  45. 45. Fairness in ML. Application-specific challenges: Conversational AI systems (unique bias/fairness/ethics considerations, e.g., hate speech, complex failure modes; beyond protected categories, e.g., accent, dialect; the entire ecosystem, e.g., including apps such as Alexa skills); Two-sided markets (e.g., fairness to buyers and to sellers, or to content consumers and producers); Fairness in advertising (externalities). Tools for ensuring fairness (measuring & mitigating bias) across the AI lifecycle: Pre-processing (representative datasets; modifying features/labels); ML model training with fairness constraints; Post-processing; Experimentation & post-deployment
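
As one concrete instance of the post-processing bucket above, a minimal sketch (hypothetical groups, scores, and target rate) of choosing per-group decision thresholds so that positive-prediction rates line up; whether group-aware thresholds are acceptable is itself a legal and policy question, not just a technical one:

```python
import numpy as np

# Pick a separate decision threshold per group so that positive-prediction
# rates are brought closer together (one simple post-processing mitigation).
def per_group_thresholds(scores, groups, target_positive_rate=0.5):
    scores, groups = np.asarray(scores, dtype=float), np.asarray(groups)
    thresholds = {}
    for g in np.unique(groups):
        s = scores[groups == g]
        # Threshold at the quantile that yields the target positive rate.
        thresholds[g] = np.quantile(s, 1.0 - target_positive_rate)
    return thresholds

def predict(scores, groups, thresholds):
    scores, groups = np.asarray(scores, dtype=float), np.asarray(groups)
    return np.array([int(s >= thresholds[g]) for s, g in zip(scores, groups)])

scores = [0.2, 0.6, 0.7, 0.9, 0.3, 0.4, 0.55, 0.8]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
th = per_group_thresholds(scores, groups, target_positive_rate=0.5)
print(th, predict(scores, groups, th))
```
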
  46. 46. Key Open Problems in Applied Fairness What if you don’t have the sensitive attributes? When should you use what approach? For example, Equal treatment vs equal outcome? How to identify harms? Process for framing AI problems: Will the chosen metrics lead to desired results? How to tell if data generation and collection method is appropriate for a task? (e.g., causal structure analysis?) Processes for mitigating harms and misbehaviors quickly
  47. 47. Explainability in ML Actionable explanations Balance between explanations & model secrecy Robustness of explanations to failure modes (Interaction between ML components) Application-specific challenges Conversational AI systems: contextual explanations Gradation of explanations Tools for explanations across AI lifecycle Pre & post-deployment for ML models Model developer vs. End user focused
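
As an illustration of explanation tooling aimed at model developers, a hedged sketch using the open-source shap package on a scikit-learn model; the deck does not prescribe a specific tool, so treat this as one common choice rather than the method used here:

```python
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

# Train a toy model, then compute per-prediction feature attributions.
X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)            # tree-model-specific explainer
shap_values = explainer.shap_values(X.iloc[:5])  # local attributions, one row per prediction

# Local view: contribution of each feature to the first prediction.
print(dict(zip(X.columns, shap_values[0].round(1))))
# Global view: mean absolute contribution of each feature across the sample.
print(dict(zip(X.columns, abs(shap_values).mean(axis=0).round(1))))
```
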
  48. 48. Privacy in ML Privacy for highly sensitive data: model training & analytics using secure enclaves, homomorphic encryption, federated learning / on- device learning, or a hybrid Privacy-preserving model training, robust against adversarial membership inference attacks (Dynamic settings + Complex data / model pipelines) Privacy-preserving mechanisms for data marketplaces
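
To ground the privacy discussion, a minimal sketch of the most basic differential-privacy building block, the Laplace mechanism for a count query; this illustrates DP for analytics rather than the DP model-training or secure-computation settings listed above, and the data and epsilon are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Answer a count query with Laplace noise calibrated to sensitivity 1 and
# privacy budget epsilon; smaller epsilon means more noise and more privacy.
def dp_count(values, predicate, epsilon=0.5):
    true_count = sum(1 for v in values if predicate(v))
    noise = rng.laplace(loc=0.0, scale=1.0 / epsilon)   # sensitivity / epsilon
    return true_count + noise

salaries = [48_000, 52_000, 61_000, 75_000, 90_000, 120_000]
print(dp_count(salaries, lambda s: s > 60_000, epsilon=0.5))
```
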
  49. 49. Reflections “Fairness, Explainability, and Privacy by Design” when building AI products Collaboration/consensus across key stakeholders NYT / WSJ / ProPublica test :)
  50. 50. Related Tutorials / Resources • ACM Conference on Fairness, Accountability, and Transparency (ACM FAccT) • AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society (AIES) • Sara Hajian, Francesco Bonchi, and Carlos Castillo, Algorithmic bias: From discrimination discovery to fairness-aware data mining, KDD Tutorial, 2016. • Solon Barocas and Moritz Hardt, Fairness in machine learning, NeurIPS Tutorial, 2017. • Kate Crawford, The Trouble with Bias, NeurIPS Keynote, 2017. • Arvind Narayanan, 21 fairness definitions and their politics, FAccT Tutorial, 2018. • Sam Corbett-Davies and Sharad Goel, Defining and Designing Fair Algorithms, Tutorials at EC 2018 and ICML 2018. • Ben Hutchinson and Margaret Mitchell, Translation Tutorial: A History of Quantitative Fairness in Testing, FAccT Tutorial, 2019. • Henriette Cramer, Kenneth Holstein, Jennifer Wortman Vaughan, Hal Daumé III, Miroslav Dudík, Hanna Wallach, Sravana Reddy, and Jean Garcia-Gathright, Translation Tutorial: Challenges of incorporating algorithmic fairness into industry practice, FAccT Tutorial, 2019.
  51. 51. Related Tutorials / Resources • Sarah Bird, Ben Hutchinson, Krishnaram Kenthapadi, Emre Kiciman, Margaret Mitchell, Fairness-Aware Machine Learning: Practical Challenges and Lessons Learned, Tutorials at WSDM 2019, WWW 2019, KDD 2019. • Krishna Gade, Sahin Cem Geyik, Krishnaram Kenthapadi, Varun Mithal, Ankur Taly, Explainable AI in Industry, Tutorials at KDD 2019, FAccT 2020, WWW 2020. • Himabindu Lakkaraju, Julius Adebayo, Sameer Singh, Explaining Machine Learning Predictions: State-of-the-art, Challenges, and Opportunities, NeurIPS 2020 Tutorial. • Kamalika Chaudhuri, Anand D. Sarwate, Differentially Private Machine Learning: Theory, Algorithms, and Applications, NeurIPS 2017 Tutorial. • Krishnaram Kenthapadi, Ilya Mironov, Abhradeep Guha Thakurta, Privacy-preserving Data Mining in Industry, Tutorials at KDD 2018, WSDM 2019, WWW 2019.
  52. 52. Thanks! Questions?

Editor's Notes

  • In this tutorial, we will focus primarily on three dimensions: P/F/E. Prior to this tutorial, we had presented tutorials on privacy, on fairness, and on explainability. In particular, I would like to thank Timnit Gebru and Meg Mitchell for the realization of the fairness in industry tutorial. I had reached out to Timnit Gebru to find potential collaborators from other companies – Timnit connected me with Meg.

    Today, we will present case studies from Amazon, LinkedIn, and Microsoft. Please refer to the longer version of our tutorial for additional case studies, especially case studies from Google (which we would not be able to present today).
  • First, we will motivate the need for bias detection and mitigation in ML systems.

    Challenges that have received a lot of attention in the media, and have really highlighted how important it is to get AI right – to make sure that AI does not discriminate or further disadvantage already disadvantaged groups.

    Many of these stories have focused on high-stakes decisions where machine learning systems are used to allocate opportunities, resources, or information in ways that can have significant negative impacts on people’s lives.
  • Recently, policymakers, regulators, and advocates have raised awareness about the ethical, policy, and legal challenges posed by machine learning and data-driven systems. In particular, they have expressed concerns about the potentially discriminatory impact of such systems, for example, due to inadvertent encoding of bias into automated decisions.

    “Do Google’s ‘unprofessional hair’ results show it’s racist?” by Leigh Alexander
  • There have been several laws in countries such as the United States that prohibit discrimination based on “protected attributes” such as race, gender, age, disability status, and religion. Many of these laws have their origins in the Civil Rights Movement in the United States (e.g., US Civil Rights Act of 1964).

    When legal frameworks prohibit the use of such protected attributes in decision making, there are usually two competing approaches on how this is enforced in practice: Disparate Treatment vs. Disparate Impact. Avoiding disparate treatment requires that such attributes should not be actively used as a criterion for decision making and no group of people should be discriminated against because of their membership in some protected group. Avoiding disparate impact requires that the end result of any decision making should result in equal opportunities for members of all protected groups irrespective of how the decision is made.

    Please see NeurIPS’17 tutorial titled Fairness in machine learning by Solon Barocas and Moritz Hardt for a thorough discussion.
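
A small worked sketch of the disparate-impact side of this distinction, using the four-fifths (80%) rule often cited in US employment contexts; the numbers are illustrative:

```python
# Compare selection (positive-outcome) rates across two groups; the
# four-fifths (80%) rule flags ratios below 0.8. Numbers are illustrative.
def disparate_impact_ratio(selected_a, total_a, selected_b, total_b):
    """Ratio of the lower group's selection rate to the higher group's."""
    rate_a, rate_b = selected_a / total_a, selected_b / total_b
    return min(rate_a, rate_b) / max(rate_a, rate_b)

ratio = disparate_impact_ratio(selected_a=30, total_a=100,   # group A: 30% selected
                               selected_b=60, total_b=120)   # group B: 50% selected
print(f"disparate impact ratio = {ratio:.2f}, flagged = {ratio < 0.8}")
```
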
  • Renewed focus in light of privacy breaches observed over the last several years.

    Example: EU General Data Protection Regulation (GDPR), which came into effect in May, 2018.

    The focus is not only on privacy of users, but also related dimensions such as algorithmic bias (or ensuring fairness), transparency, and explainability of decisions.


    Image credit: https://pixabay.com/en/gdpr-symbol-privacy-icon-security-3499380/
  • Challenges with scaling fairness, explainability, (and privacy) mechanisms to cater to the needs of AWS customers from financial services, healthcare, HR, and other industries:
    Providing functionality that caters to different customer personas
    Providing functionality that caters to the needs in different stages of ML lifecycle
    Starting with SageMaker, and then scaling the functionality to AI services and to other ML products & services across Amazon

  • From https://news.linkedin.com/about-us#Statistics as of 2021-02-01

    Largest professional network in the world. It is a platform for every professional to tell their story. Who they are, where they work, skills, etc. Once you have done that, the platform works infusing intelligence by harnessing the power of this data to help connect talent with opportunity at scale.

    * 740M members in more than 200 countries and territories worldwide | 1.5K fields of study | 600+ degrees | 24K titles ...

    Our vision is to develop a profile for every member of the global workforce, all 3 billion of them, every employer in the world, and every open job at each of these companies, to provide every member of the global workforce with transparency into the skills required to obtain those jobs.  We want to build a profile for every educational institution or training facility that enables people to acquire those skills, and a publishing platform that enables every individual, every company, and every university to share their professionally-relevant knowledge if they’re interested in doing so.
  • TODO split slide into 3
  • TODO split slide into 3
  • Amazon SageMaker helps data scientists and developers to prepare, build, train, and deploy high-quality machine learning (ML) models quickly by bringing together a broad set of capabilities purpose-built for ML.

    Ignore notes below (I won’t go into details)

    Predictive Maintenance
    Predict if a component will fail before failure based on sensor data. Example applications include predicting failure and remaining useful life (RUL) of automotive fleets, manufacturing equipment, and IoT sensors. The key value is increased vehicle and equipment up-time and cost savings. This use case is widely used in automotive and manufacturing industries. Industries: Automotive, Manufacturing

    Georgia Pacific uses SageMaker to detect machine issues early. To learn more, read the case study.
     
    Demand Forecasting
    Use historical data to forecast key demand metrics faster and make more accurate business decisions around production, pricing, inventory management, and purchasing/re-stocking. The key value is meeting customer demand, reducing inventory carrying costs by reducing surplus inventory, and reducing waste. This use case is used mainly in financial services, manufacturing, retail, and consumer packaged goods (CPG) industries. Industries: Financial Services (FSI), Manufacturing, Retail, Consumer Packaged Goods (CPG)

    Advanced Microgrid Solutions has built an ML model with SageMaker to forecast energy prices in near real time. Watch the re:Invent session.

    Fraud Detection
    Automate the detection of potentially fraudulent activity and flag it for review. The key value is reducing costs associated with fraud and maintaining customer trust. This use case is used mainly in financial services and online retail industries. Industries: FSI, Retail

    Euler Hermes uses SageMaker to catch suspicious domains. Learn more from the blog post.

    Credit Risk Prediction
    Explain individual predictions from a credit application to predict whether the credit will be paid back or not (often called a credit default). The key value is identifying bias and satisfying regulatory requirements. This use case is used mainly in financial services and online retail industries. Industries: FSI

    We have a Explaining Credit Decisions customized solution using SageMaker that can be used to explain individual predictions from machine learning models, including applications for credit decisions, churn prediction, medical diagnosis, and fraud detection.

    Extract & Analyze Data from Documents
    Understand text in written and digital documents and forms, extract information, and use it to classify items and make decisions. Industries: Healthcare, FSI, Legal, M&E, Education

    Computer Vision (image analysis)
    Main sub-use cases are: 1) Automatically medical diagnosis from X-ray and other imaging data; 2) Manufacturing quality control automation to detect defective parts; 3) Drug discovery; 4) Social distancing and tracking concentration of people for COVID-19 in public places. Industries: Healthcare/Pharma, Manufacturing, Public Sector

    Autonomous Driving
    Reinforcement learning and object detection algorithms. Industries: Automotive

    Personalized Recommendations
    Make personalized recommendations based on historical trends. Industries: M&E, Retail, Education (most likely classes to ensure graduation)

    Churn Prediction
    Predict customer likelihood to churn. Industries: Retail, Education, Software & Internet (SaaS)
  • Biases are imbalances in the training data or the prediction behavior of the model across different groups, such as age or income bracket. Biases can result from the data or algorithm used to train your model. For instance, if an ML model is trained primarily on data from middle-aged individuals, it may be less accurate when making predictions involving younger and older people.

    -----

    Regulatory Compliance
    Regulations may require companies to be able to explain financial decisions and take steps around model risk management. Amazon SageMaker Clarify can help flag any potential bias present in the initial data or in the model after training and can also help explain which model features contributed the most to an ML model’s prediction.
    Internal Reporting & Compliance
    Data science teams are often required to justify or explain ML models to internal stakeholders, such as internal auditors or executives. Amazon SageMaker Clarify can provide data science teams with a graph of feature importance when requested and can help quantify potential bias in an ML model or the data used to train it in order to provide additional information needed to support internal requirements.
    Customer Service
    Customer-facing employees, such as financial advisors or loan officers, may review a prediction made by an ML model as part of the course of their work. Working with the data science team, these employees can get a visual report via API directly from Amazon SageMaker Clarify with details on which features were most important to a given prediction in order to review it before making decisions that may impact customers.

    -----

    Ignore notes below (I will use description at https://aws.amazon.com/sagemaker/clarify)

    ML models, especially those that make predictions which serve end customers, are at risk of being biased and producing incorrect or harmful outcomes if proper precautions are not taken, making the ability to detect bias across the ML lifecycle critical.

    Let’s look at some of the ways bias may become present in a model:
    The initial data set you use for your model might contain imbalances, such as not having enough examples of members of a certain class which then cause the model to become biased against that class.
    Your model might develop biased behavior during the training process. For example, the model might use bank location as a positive indicator to approve a loan as opposed to actual financial data if one particular bank location approves more loans than others. In this case, the model has bias towards applicants that apply at that location and against applicants that do not, regardless of their financial standing.
    Finally, bias may develop over time if data in the real world begins to diverge from the data used to train your deployed model. For example, if your model has been trained on an outdated set of mortgage rates it may start to become biased against certain home loan applicants.

    But to understand WHY bias is present, we need explainability. And explainability is useful for more than just bias. Let me explain.
    Many regulators need to understand why the ML model made a given prediction and whether the prediction was free from bias, both in training and at inference
    You may need to provide explanations to internal teams (loan officers, customer service reps, compliance officers) in addition to end users / customers. For example a loan officer may need help explaining to a customer what factors caused their application to be denied.
    Finally, data science teams can improve models given a deeper understanding on whether a model is making the right inferences for the right reasons, or if perhaps irrelevant data points are being used that are altering model behavior.


  • Detect bias in your data and model
    Identify imbalances in data
    SageMaker Clarify is integrated with Amazon SageMaker Data Wrangler, making it easier to identify bias during data preparation. You specify attributes of interest, such as gender or age, and SageMaker Clarify runs a set of algorithms to detect any presence of bias in those attributes. After the algorithm runs, SageMaker Clarify provides a visual report with a description of the sources and measurements of possible bias so that you can identify steps to remediate the bias. For example, in a financial dataset that contains only a few examples of business loans to one age group as compared to others, SageMaker will flag the imbalance so that you can avoid a model that disfavors that age group.

    Check your trained model for bias
    You can also check your trained model for bias, such as predictions that produce a negative result more frequently for one group than they do for another. SageMaker Clarify is integrated with SageMaker Experiments so that after a model has been trained, you can identify attributes you would like to check for bias, such as age. SageMaker runs a set of algorithms to check the trained model and provides you with a visual report that identifies the different types of bias for each attribute, such as whether older groups receive more positive predictions compared to younger groups.

    Monitor your model for bias
    Although your initial data or model may not have been biased, changes in the world may introduce bias to a model that has already been trained. For example, a substantial change in home buyer demographics could cause a home loan application model to become biased if certain groups were not present or accurately represented in the original training data. SageMaker Clarify is integrated with SageMaker Model Monitor, enabling you to configure alerting systems like Amazon CloudWatch to notify you if your model exceeds certain bias metric thresholds. 

    Explain model behavior
    Understand your model
    Trained models may consider some model inputs more strongly than others when generating predictions. For example, a loan application model may weigh credit history more heavily than other factors. SageMaker Clarify is integrated with SageMaker Experiments to provide a graph detailing which features contributed most to your model’s overall prediction-making process after the model has been trained. These details may be useful for compliance requirements or can help determine if a particular model input has more influence than it should on overall model behavior.

    Explain individual model predictions
    Customers and internal stakeholders both want transparency into how models make their predictions. SageMaker Clarify integrates with SageMaker Experiments to show you the importance of each model input for a specific prediction. Results can be made available to customer-facing employees so that they have an understanding of the model’s behavior when making decisions based on model predictions.

    Monitor your model for changes in behavior
    Changes in real-world data can cause your model to give different weights to model inputs, changing its behavior over time. For example, a decline in home prices could cause a model to weigh income less heavily when making loan predictions. Amazon SageMaker Clarify is integrated with SageMaker Model Monitor to alert you if the importance of model inputs shift, causing model behavior to change.

  • Amazon SageMaker Clarify works across the entire ML workflow to implement bias detection and explainability.

    - It can look for bias in your initial dataset as part of SageMaker Data Wrangler
    - It can check for bias in your trained model as part of SageMaker Experiments, and also explain the behavior of your model overall
    - It extends SageMaker Model Monitor to check for changes in bias or explainability over time in your deployed model
    - It can provide explanations for individual inferences made by your deployed model
  • So to recap:
    - You can check your data for bias during data prep
    - You can check for bias in your trained model, and explain overall model behavior
    - You can provide explanations for individual predictions made by your deployed model
    - You can monitor and alert on any changes to model bias or behavior over time
  • Let’s take a quick product tour

    SageMaker Clarify is integrated with Amazon SageMaker Data Wrangler, making it easier to identify bias during data preparation. You specify attributes of interest, such as gender or age, and SageMaker Clarify runs a set of algorithms to detect any presence of bias in those attributes. After the algorithm runs, SageMaker Clarify provides a visual report with a description of the sources and measurements of possible bias so that you can identify steps to remediate the bias. For example, in a financial dataset that contains only a few examples of business loans to one age group as compared to others, SageMaker will flag the imbalance so that you can avoid a model that disfavors that age group.

  • You can also check your trained model for bias, such as predictions that produce a negative result more frequently for one group than they do for another. SageMaker Clarify is integrated with SageMaker Experiments so that after a model has been trained, you can identify attributes you would like to check for bias, such as age. SageMaker runs a set of algorithms to check the trained model and provides you with a visual report that identifies the different types of bias for each attribute, such as whether older groups receive more positive predictions compared to younger groups.
  • Although your initial data or model may not have been biased, changes in the world may introduce bias to a model that has already been trained. For example, a substantial change in home buyer demographics could cause a home loan application model to become biased if certain groups were not present or accurately represented in the original training data. SageMaker Clarify is integrated with SageMaker Model Monitor, enabling you to configure alerting systems like Amazon CloudWatch to notify you if your model exceeds certain bias metric thresholds. 
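
As an illustration of this monitoring idea (outside of SageMaker Model Monitor itself), a hypothetical sketch of recomputing a simple bias metric on each batch of live predictions and alerting when it crosses a configured threshold; the group names and threshold are placeholders:

```python
import numpy as np

# Difference in positive-prediction rates between two groups for one batch.
def positive_rate_gap(preds, groups, group_a, group_b):
    preds, groups = np.asarray(preds), np.asarray(groups)
    return preds[groups == group_a].mean() - preds[groups == group_b].mean()

# Recompute the metric per batch of live traffic and alert past a threshold.
def check_bias_drift(preds, groups, threshold=0.1):
    gap = positive_rate_gap(preds, groups, "<40", "40+")
    if abs(gap) > threshold:
        print(f"ALERT: bias metric {gap:.2f} exceeds threshold {threshold}")
    return gap

# One batch of predictions with a hypothetical age_group attribute.
check_bias_drift(preds=[1, 0, 0, 0, 1, 1, 1, 1],
                 groups=["<40", "<40", "<40", "<40", "40+", "40+", "40+", "40+"])
```
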
  • Trained models may consider some model inputs more strongly than others when generating predictions. For example, a loan application model may weigh credit history more heavily than other factors. SageMaker Clarify is integrated with SageMaker Experiments to provide a graph detailing which features contributed most to your model’s overall prediction-making process after the model has been trained. These details may be useful for compliance requirements or can help determine if a particular model input has more influence than it should on overall model behavior.
  • Changes in real-world data can cause your model to give different weights to model inputs, changing its behavior over time. For example, a decline in home prices could cause a model to weigh income less heavily when making loan predictions. Amazon SageMaker Clarify is integrated with SageMaker Model Monitor to alert you if the importance of model inputs shift, causing model behavior to change.
  • This concludes the overview of Amazon SageMaker Clarify, with a focus on the explainability functionality. Please refer to the SageMaker Clarify webpage and the AWS blog post for additional information, including best practices for evaluating fairness and explainability in the ML lifecycle. Next, we will present the demo of SageMaker Clarify.

    Demo: https://youtu.be/cQo2ew0DQw0
  • Regulatory Compliance
    Regulations may require companies to be able to explain financial decisions and take steps around model risk management. Amazon SageMaker Clarify can help flag any potential bias present in the initial data or in the model after training and can also help explain which model features contributed the most to an ML model’s prediction.
    Internal Reporting & Compliance
    Data science teams are often required to justify or explain ML models to internal stakeholders, such as internal auditors or executives. Amazon SageMaker Clarify can provide data science teams with a graph of feature importance when requested and can help quantify potential bias in an ML model or the data used to train it in order to provide additional information needed to support internal requirements.
    Customer Service
    Customer-facing employees, such as financial advisors or loan officers, may review a prediction made by an ML model as part of the course of their work. Working with the data science team, these employees can get a visual report via API directly from Amazon SageMaker Clarify with details on which features were most important to a given prediction in order to review it before making decisions that may impact customers.

    -----

    Ignore below:

    As we mentioned earlier, there are a few use cases where bias detection and explainability are key – this is by no means an exhaustive list:

    Compliance: Regulations often require companies to remain unbiased and to be able to explain financial decisions.

    Internal Reporting: Data science teams are often required to justify or explain ML models to internal stakeholders, such as internal auditors or executives who would like more transparency.

    Operational Excellence: ML is often applied in operational scenarios, such as predictive maintenance and application users may want insight into why a given machine needs to be repaired.

    Customer Service: Customer-facing employees such as healthcare workers, financial advisors, or loan officers often need to field questions around the result of a decision made by an ML model, such as a denied loan.



  • Here are some best practices for evaluating fairness and explainability in the ML lifecycle. Fairness and explainability should be taken into account during each stage of the ML lifecycle, for example, Problem Formation, Dataset Construction, Algorithm Selection, Model Training Process, Testing Process, Deployment, and Monitoring/Feedback. It is important to have the right tools to do this analysis. To encourage engaging with these considerations, here are a few example questions worth asking during each of these stages.

    Fairness as a Process: We recognize that the notions of bias and fairness are highly application dependent and that the choice of the attribute(s) for which bias is to be measured, as well as the choice of the bias metrics, may need to be guided by social, legal, and other non-technical considerations. Building consensus and achieving collaboration across key stakeholders (such as product, policy, legal, engineering, and AI/ML teams, as well as end users and communities) is a prerequisite for the successful adoption of fairness-aware ML approaches in practice.

    Wide spectrum of customers, ranging from highly sophisticated (e.g., data scientists / applied ML researchers who would like granular options) to less technical (e.g., visualize the bias measures)
  • Here are some best practices for evaluating fairness and explainability in the ML lifecycle. Fairness and explainability should be taken into account during each stage of the ML lifecycle, for example, Problem Formation, Dataset Construction, Algorithm Selection, Model Training Process, Testing Process, Deployment, and Monitoring/Feedback. It is important to have the right tools to do this analysis. To encourage engaging with these considerations, here are a few example questions worth asking during each of these stages.

    Fairness as a Process: We recognize that the notions of bias and fairness are highly application dependent and that the choice of the attribute(s) for which bias is to be measured, as well as the choice of the bias metrics, may need to be guided by social, legal, and other non-technical considerations. Building consensus and achieving collaboration across key stakeholders (such as product, policy, legal, engineering, and AI/ML teams, as well as end users and communities) is a prerequisite for the successful adoption of fairness-aware ML approaches in practice.
  • Here are some best practices for evaluating fairness and explainability in the ML lifecycle. Fairness and explainability should be taken into account during each stage of the ML lifecycle, for example, Problem Formation, Dataset Construction, Algorithm Selection, Model Training Process, Testing Process, Deployment, and Monitoring/Feedback. It is important to have the right tools to do this analysis. To encourage engaging with these considerations, here are a few example questions worth asking during each of these stages.

    Fairness as a Process: We recognize that the notions of bias and fairness are highly application dependent and that the choice of the attribute(s) for which bias is to be measured, as well as the choice of the bias metrics, may need to be guided by social, legal, and other non-technical considerations. Building consensus and achieving collaboration across key stakeholders (such as product, policy, legal, engineering, and AI/ML teams, as well as end users and communities) is a prerequisite for the successful adoption of fairness-aware ML approaches in practice.
  • This concludes the overview of Amazon SageMaker Clarify, with a focus on the explainability functionality. Please refer to the SageMaker Clarify webpage and the AWS blog post for additional information, including best practices for evaluating fairness and explainability in the ML lifecycle. Thank you for listening to this talk.
  • Ben
  • Cf. accessibility
    Cf. urban design
  • Microsoft’s AETHER (AI & Ethics in Engineering & Research) Advisory Board

    Cross-industry initiatives such as Partnership on AI

    It would be desirable to create an internal advisory board within each tech company on AI and ethics (in case one doesn’t already exist), consisting of senior leaders from different business lines and different functional roles (AI & Engineering, Product, Legal, etc.) across the company (similar to Microsoft’s AI and Ethics in Engineering and Research (AETHER) advisory board). By forming working groups consisting of computer scientists and engineers, social scientists/ethicists, policy experts, lawyers, and product leaders, this board could be tasked with designing AI ethics guidelines, best practices, and tools for the company as a whole, so that all AI efforts across a company are aligned with the company’s responsible AI principles.


  • Examples of complex failures:
    Failure to deflect/terminate contentious topics
    Refusing to discuss when disapproval would be better
    Polite agreement with unrecognized bias

    Differentiate between bot input and bot output in training data
    Remove offensive text from bot output training
    But don’t remove from bot inputs → allow learning of good responses to bad inputs

    End-to-end system to support fairness
  • - (1) Differentially private model training, meeting practical requirements [model updates/evolution over time; at par on accuracy with vanilla model]; (2) membership inference attacks on ML models, could be used towards quantifying leakage from a specific model
    privacy-preserving mechanisms for different teams within a company to develop joint models on highly sensitive datasets.

  • Lessons from privacy & fairness challenges → need a “Fairness, Explainability, and Privacy by Design” approach when building AI products

    Collaboration/consensus across key stakeholders (product, engineering, AI, PR, legal, social scientists, policy experts, …, end users / customers)

    NYT/WSJ/ProPublica test :)
