The ethical implications of our work can be staggering, but how do we balance commercial needs, ethical requirements, and productivity? Taking a pragmatic approach that starts with simple checklists and evolves towards automation and structured processes, I look at how we can make our work more robust from an ethical perspective.
Step 0: Get alignment
Step 1: Make others think before you start
Step 2: Work robustly
Step 3: Maintain vigilance
1. @theStephLocke @rencontres_R
Practical AI & data science
ethics
2. Steph Locke
CEO @ Nightingale HQ
Data & AI specialist
Microsoft MVP, 5+ years
Microsoft MCT
T: @theStephLocke
Li: /stephanielocke
steph@nightingalehq.ai
Step 0
Get alignment
It’s important to ensure that the team and stakeholders have a commitment to an ethical approach and have outlined key areas of focus.
Education
• Communicate about ethics internally
• Run workshops to help understand implications
scu.edu/ethics-in-technology-practice
• Share stories about where data scientists failed to be ethical
5 concepts of fairness
• group unaware - the same cutoff points / decision boundary for everyone
• group thresholds - different cutoff points to allow different volumes in
• demographic parity - different cutoff points so acceptances end up distributed like the overall demographics
• equal opportunity - the same true positive rate holds across groups
• equal accuracy - the same overall accuracy rate holds across groups
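Two of these notions can be made concrete on toy data. This is a hedged sketch, not from the talk: all scores, labels, and cutoffs below are invented for illustration. It shows how a group-unaware shared cutoff can yield very different true positive rates per group, while per-group thresholds can equalise them (equal opportunity).

```python
# Illustrative sketch: group-unaware vs. equal-opportunity cutoffs.
# All scores, labels, and thresholds are made-up toy values.
import numpy as np

def tpr(scores, labels, threshold):
    """True positive rate: share of actual positives accepted at the cutoff."""
    positives = labels == 1
    return float(np.mean(scores[positives] >= threshold))

# Toy model scores and true outcomes for two groups.
scores_a = np.array([0.2, 0.4, 0.6, 0.8, 0.9])
labels_a = np.array([0, 0, 1, 1, 1])
scores_b = np.array([0.1, 0.3, 0.5, 0.6, 0.7])
labels_b = np.array([0, 1, 1, 1, 0])

# Group unaware: the same cutoff for everyone.
shared = 0.6
tpr_a = tpr(scores_a, labels_a, shared)  # 1.0: all of group A's positives accepted
tpr_b = tpr(scores_b, labels_b, shared)  # ~0.33: only one of group B's positives accepted

# Equal opportunity: choose a different cutoff per group so TPRs match.
tpr_b_adjusted = tpr(scores_b, labels_b, 0.3)  # 1.0, matching group A
```

The same pattern extends to the other notions: demographic parity compares acceptance volumes rather than TPRs, and equal accuracy compares overall accuracy per group.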
Step 1
Make others think before you start
Spend time working on the process around designing a data science / AI product idea or feature, to ensure ethical considerations are addressed up-front.
Impact assessment
• Who / what will use the solution?
• What are the desired outcomes if things go right?
• What happens if something goes wrong?
• Are there different potential errors with different impacts/risks?
• Are there secondary groups impacted?
• What KPIs could this solution impact?
• What behaviours are we trying to drive?
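One way to keep these questions from evaporating after a kickoff meeting (my suggestion, not from the talk) is to record the answers as structured data that lives and is versioned alongside the project. All field names and example values below are illustrative, not a standard schema.

```python
# Hypothetical sketch: the impact-assessment answers as versionable data.
# Field names and example values are illustrative only.
from dataclasses import dataclass, field

@dataclass
class ImpactAssessment:
    users: str                                         # who / what will use it?
    desired_outcomes: list = field(default_factory=list)
    failure_modes: list = field(default_factory=list)  # what if it goes wrong?
    secondary_groups: list = field(default_factory=list)
    kpis_impacted: list = field(default_factory=list)
    target_behaviours: list = field(default_factory=list)

assessment = ImpactAssessment(
    users="Customer-service triage team",
    desired_outcomes=["Urgent tickets routed faster"],
    failure_modes=["Urgent ticket misclassified as low priority"],
    secondary_groups=["Customers waiting on support"],
    kpis_impacted=["Mean time to resolution"],
    target_behaviours=["Agents review, rather than blindly accept, routing"],
)
```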
Step 2
Work robustly
Start implementing workflow practices and automation to ensure “ethical hygiene” and alignment with the identified outcomes & risks.
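One possible automation, an assumption about what such hygiene could look like rather than the speaker's specific tooling, is a unit-test-style check that fails the build when per-group accuracy diverges beyond a tolerance. The data and the 0.25 tolerance are invented for illustration.

```python
# Hypothetical CI check: fail the build if per-group accuracy diverges.
# Data and tolerance below are illustrative only.
import numpy as np

def accuracy_gap(preds, labels, groups):
    """Largest difference in accuracy across groups."""
    accs = [float(np.mean(preds[groups == g] == labels[groups == g]))
            for g in np.unique(groups)]
    return max(accs) - min(accs)

preds  = np.array([1, 0, 1, 1, 0, 1, 0, 0])
labels = np.array([1, 0, 1, 0, 0, 1, 1, 0])
groups = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])

gap = accuracy_gap(preds, labels, groups)
assert gap <= 0.25, f"Accuracy gap {gap:.2f} exceeds tolerance"
```

Run against a held-out evaluation set on every model change, this turns the "equal accuracy" concept from Step 0 into an automated gate rather than a one-off review.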
Step 3
Maintain vigilance
Ensure continued monitoring, stakeholder engagement, and ongoing compliance. Reflect learnings back into processes and internal education.
Engagement
• Continued feedback and engagement from people affected by AI use
• Mine customer service logs and similar channels for ways AI may have caused problems
• Include AI in risk registers and other internal compliance monitoring solutions
• Improve AI literacy
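As one concrete monitoring hook (a hedged sketch: the Population Stability Index and the thresholds shown are my illustration, not prescribed in the talk), a drift statistic over live model scores can act as a trigger for the human review processes above.

```python
# Illustrative drift monitor: Population Stability Index (PSI) between
# the score distribution at deployment and the live one. The 0.1 alert
# threshold is a common rule of thumb, not a standard.
import numpy as np

def psi(expected, actual, bins=10):
    """PSI between two score samples; 0 means identical binned distributions."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected) + 1e-6
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual) + 1e-6
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(42)
deploy_scores = rng.normal(0.0, 1.0, 2000)  # scores seen at deployment
live_scores = rng.normal(0.5, 1.0, 2000)    # live scores with a mean shift

drift = psi(deploy_scores, live_scores)
if drift > 0.1:
    print(f"PSI {drift:.2f}: investigate before the next release")
```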
Practical AI & data science
ethics
Step 0: Get alignment
Step 1: Make others think before you start
Step 2: Work robustly
Step 3: Maintain vigilance