AI outperforms humans in many prediction and classification tasks. Until you notice that AI algorithms have denied women access to credit despite their being more creditworthy than their male counterparts. Or that they fail to distinguish between dark-skinned individuals and gorillas. If you read the news, it quickly becomes obvious: AI has a bias problem. And not only predictive AI, but also generative AI like media darling ChatGPT, which - infamously - wrote a piece of code suggesting that only "White" or "Asian" men would be good scientists to hire. But are the machines inherently biased or evil? Spoiler alert: no, they're not! Yet AI fairness remains one of the most pressing AI challenges to solve. Join Alexandra for this session to learn how bias finds its way into AI and what steps you, your organization, and society at large can take to help overcome it.
[DSC DACH 23] (Un)Ethical Machines? Why AI Bias Is a Problem and What to Do About It - Alexandra Ebert
Not yet signed up to the MOSTLY AI Synthetic Data Platform? Then please sign up now; we'll need it in a few minutes:
1. Go to bit.ly/mostlyai-signup
2. Submit the form
3. Click the link in your confirmation email
Any questions? Happy to help via the chat!
Alexandra Ebert
Chief Trust Officer | MOSTLY AI
Host of the Data Democratization Podcast
Chair of the IEEE Synthetic Data IC Expert Group
(Un)Ethical Machines?
Why AI is Biased and What to Do About It
23. (UN)ETHICAL MACHINES? | DSC DACH 2023
10 (out of many) reasons for bias in AI
1. Insufficient training data (e.g. facial recognition systems, UK health app, …)
2. Humans are biased – and so is the data AI is trained on (e.g. Amazon's HR algorithm)
3. De-biasing data is exceptionally hard (removing sensitive attributes is not a good option!)
4. De-biasing AI models is very difficult too
5. Diversity among AI professionals is not as high as it should be
6. Fairness comes at a cost (that companies may not be willing to pay)
7. External AI audits could help – if privacy were not an issue
8. Fairness is hard to define
9. What was fair yesterday could be biased tomorrow (e.g. Microsoft's Tay)
10. The vicious bias cycle: biased AI leads to even more bias in data
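Point 3 above can be illustrated with a minimal sketch. The data, feature names, and numbers below are entirely hypothetical: a model that was never shown the sensitive attribute (here, gender) can still produce unequal outcomes when another feature (here, a zip code) acts as a proxy for it. A common way to quantify the gap is the demographic parity difference, i.e. the absolute difference in positive-decision rates between groups:

```python
# Hypothetical toy data: "fairness through unawareness" fails when a
# proxy feature correlates with the dropped sensitive attribute.
# Each record: (gender, zip_code, approved_by_model).
# The model never saw `gender`, only `zip_code` -- but in this toy
# dataset zip_code "A" is predominantly female and mostly rejected.
records = [
    ("F", "A", 0), ("F", "A", 0), ("F", "A", 1), ("F", "B", 1),
    ("M", "B", 1), ("M", "B", 1), ("M", "A", 0), ("M", "B", 1),
]

def selection_rate(group):
    """Fraction of records in `group` that received a positive decision."""
    rows = [r for r in records if r[0] == group]
    return sum(r[2] for r in rows) / len(rows)

# Demographic parity difference: gap in approval rates between groups.
dpd = abs(selection_rate("M") - selection_rate("F"))
print(f"approval rate F: {selection_rate('F'):.2f}")  # 0.50
print(f"approval rate M: {selection_rate('M'):.2f}")  # 0.75
print(f"demographic parity difference: {dpd:.2f}")    # 0.25
```

Even though gender was removed from the inputs, the 25-point approval gap persists because the proxy carries the same signal – which is why simply deleting sensitive columns is not a viable de-biasing strategy.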
What to do about AI bias?
1. Regulation
2. Education & Training
3. RAI Assurance Ecosystems
4. Diversity in AI Teams
5. Diversity in Data
6. Research
7. Data Democratization