Introduction and overview of ethics in data science and machine learning, variations and examples of algorithmic bias, and a call to action for self-regulation. Given by Thierry Silbermann as part of the São Paulo Machine Learning Meetup, theme: "Ethics".
https://www.linkedin.com/in/thierrysilbermann
https://twitter.com/silbermannt
https://github.com/thierry-silbermann
4. What are Ethics?
• Ethics tell us about right and wrong
• They are shared values / societal rules
• Ethics are not law
• No philosophical questions here, but the ethical practice of Data Science
6. Informed Consent
• Human subjects must be informed about the experiment
• Must consent to the experiment voluntarily
• Must have the right to withdraw consent at any time
8. Facebook/Cornell experiment
• 689,003 people in the experiment
• One-week period in 2012
• Facebook deliberately changed the content of the news feed
• Group A: negative content removed from the feed
• Group B: positive content removed from the feed
11. Data Ownership
• Most of the time you don’t own the data about you; the data belongs to the company that collected it
• Nevertheless, we might have some control over data that isn’t ours, because it is about us
12. Data Ownership
• We need to create principles to reason about this control, and that’s the main concern of a discussion about the right to privacy
• If the company goes bankrupt, the company buying it should keep the same privacy policy
14. Privacy
• Privacy, of course, is the first concern that comes to mind when we talk about Big Data
• How do we get the value we would like by collecting, linking, and analysing data, while at the same time avoiding the harms that can occur when data about us is collected, linked, analysed, and propagated?
• Can we define reasonable rules that all would agree to?
• Can we make these tradeoffs, knowing that maintaining anonymity is very hard?
15. Privacy
• People have different privacy boundaries
• As society adapts to new technologies, attitudes change
• But different boundaries don’t mean no boundaries
16. No option to exit
• In the past, one could get a fresh start by:
• moving to a new place
• waiting until the past fades (reputation can rebuild over time)
• Big Data is universal and never forgets
• Data Science results in major asymmetries in knowledge
17. Wayback Machine
• Archives pages on the web (https://archive.org/web/ - 300 billion pages saved over time)
• Almost everything that is accessible
• Should be retained forever
• If an unflattering page is written about you, it will survive forever in the archive (even if the original is removed)
18. Right to be forgotten
• Laws are often written to clear a person’s record after some years
• Laws in the EU and Argentina since 2006
• Impacts search engines (results are not removed completely but become hard to find)
19. Collection vs Use
• Privacy is usually harmed only upon use of data
• Collection is a necessary first step before use
• The existence of a collection can quickly lead to use
• But collection without use may sometimes be right
• E.g. surveillance
• By the time you know what you need, it is too late to go back and get it
20. Loss of Privacy
• Due to loss of control over personal data
• I am ok with you having certain data about me that
I have chosen to share with you or that is public,
but I really do not want you to share my data in
ways that I do not approve
21. ‘Waste’ Data Collection
• Your ID is taken at the club (name, address, age)
• How is this data being used? Is it stored?
23. Metadata
• E.g. for a phone call, metadata includes:
• Caller
• Callee
• Date and time of call
• Duration
• Location
24. Underestimating Analysis
• A smart meter at your house can recognise
“signatures” of water use every time you flush the
toilet, take a shower, or wash clothes
25. Privacy is a basic human need
• Even for people who have nothing to hide
26. Sneaky mobile apps
• There was a time when apps didn’t tell you what kind of data they were collecting
• Many apps ask for far more permissions than they need
• Might be used for future functionality
• But most of the time it is just for adware
• E.g. a picture-management app that needs your location
28. On the internet, nobody knows you are a dog
• You can say you are whoever you want
• You can say you are whatever you want
• You can make up a persona
• But today, this is less and less true
29. Many transactions need ID
• You must provide an address to receive goods
• You must give your name for travel booking
• You must reveal your location to get cellular service
• You must disclose intimate details of your health
care and lifestyle to get effective medical care
30. Facebook real-name policy
• Robin Kills The Enemy
• Hiroko Yoda
• Phuc Dat Bich (pronounced: Phoo Da Bi)
https://en.wikipedia.org/wiki/Facebook_real-name_policy_controversy
31. Enough history tells all
• A person’s search patterns can reveal their identity
• If we have a log of all your web searches over some period, we can form a very good idea of who you are, and quite likely identify you
32. De-identification
• Given zip code, birth date, and sex, about 87% of the US population can be identified uniquely
• Those three fields are usually not considered PII (Personally Identifiable Information)
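The re-identification risk above comes from quasi-identifiers: fields that are harmless alone but nearly unique in combination. A minimal sketch of checking how many records in a dataset are unique on (zip code, birth date, sex); the records and values here are invented for illustration.

```python
from collections import Counter

# Toy records: (zip_code, birth_date, sex) are the quasi-identifiers.
# All values are made up for illustration.
records = [
    ("01310", "1985-03-12", "F"),
    ("01310", "1985-03-12", "F"),   # two people share all three attributes
    ("01310", "1990-07-01", "F"),
    ("04538", "1990-07-01", "F"),
    ("04538", "1985-03-12", "M"),
]

counts = Counter(records)
unique = [r for r in records if counts[r] == 1]
print(f"{len(unique)}/{len(records)} records are unique on (zip, birth date, sex)")
```

Any record that is unique on these three fields can be re-identified by anyone holding a second dataset that contains them alongside a name.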
33. Netflix Prize
• User_ID, Movie, Rating, Date
• Merge with data from IMDb
• With only a few ratings, users could be linked across the two systems
• Their movie choices could then be used to determine sexual orientation, even if their IMDb reviews revealed no such information
• Bad enough for mere movie recommendations, so what about medical records?
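A toy sketch of the linkage idea behind the Netflix/IMDb attack: match each anonymous profile to the public profile with the most identically-rated movies. All users, movies, and ratings here are invented.

```python
# Toy linkage attack: both datasets are invented for illustration.
anon_ratings = {           # anonymised service: user_id -> {movie: rating}
    "u1": {"Heat": 5, "Alien": 4, "Amelie": 2},
    "u2": {"Up": 3, "Heat": 2, "Alien": 1},
}
public_ratings = {         # public site: real name -> {movie: rating}
    "alice": {"Heat": 5, "Alien": 4},
    "bob":   {"Up": 3, "Alien": 1},
}

def overlap_score(a, b):
    """Count movies rated identically in both profiles."""
    return sum(1 for movie, rating in a.items() if b.get(movie) == rating)

# Link each anonymous user to the best-matching public profile.
links = {
    uid: max(public_ratings, key=lambda name: overlap_score(prof, public_ratings[name]))
    for uid, prof in anon_ratings.items()
}
print(links)  # u1 -> alice, u2 -> bob
```

The real study used far noisier matching, but the principle is the same: a handful of ratings is already a near-unique fingerprint.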
34. Four Types of Leakage
• Reveal identity
• Reveal the value of a hidden attribute
• Reveal a link between two entities
• Reveal group membership
35. Anonymity is Impossible
• Anonymity is virtually impossible, given enough other data
• Diversity of entity sets can be eliminated by joining external data
• Aggregation works only if there is no known structure among the entities aggregated
• Faces can be recognised in image data
36. Should we prevent sharing data?
• If anonymity is not possible, the simplest way to prevent misuse is not to publish the dataset
• E.g. government agencies should not make potentially sensitive data public
• Yet access to data is crucial for many desirable purposes
• Medical data
• Public watchdogs
37. Little game
• For the next questions, assume the following approximate numbers:
• Population of Brazil: 210,000,000
• Number of states in Brazil: 26
• Number of zip codes in Brazil: 100,000
• Number of days in a year: 350
• Assume also that each person lives for exactly 75 years
• Finally, assume that all distributions are perfectly uniform
38. Little game
• How many people live in any one zip code in Brazil?

Brazil population | Nb states | Nb zip codes | Nb days in a year | Life expectancy
210,000,000 | 26 | 100,000 | 350 | 75

39. Little game
• How many people live in any one zip code in Brazil? Answer: 2,100

40. Little game
• How many people in Brazil share the same gender, zip code, and birthday (but not birth year)?

41. Little game
• How many people in Brazil share the same gender, zip code, and birthday (but not birth year)? Answer: 3

42. Little game
• How many people in Brazil share the same zip code and birth date (including birth year)?

43. Little game
• How many people in Brazil share the same zip code and birth date (including birth year)? Answer: 0
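The answers above follow directly from the slide's approximate numbers and the uniform-distribution assumption:

```python
# Back-of-the-envelope answers to the "little game".
population = 210_000_000
zip_codes = 100_000
days_per_year = 350
life_expectancy = 75
genders = 2

per_zip = population / zip_codes
print(per_zip)                       # 2100 people per zip code

same_gender_zip_birthday = per_zip / (genders * days_per_year)
print(same_gender_zip_birthday)      # 3 people

same_zip_birthdate = per_zip / (days_per_year * life_expectancy)
print(same_zip_birthdate)            # 0.08, i.e. fewer than 1 -> effectively 0
```

The point of the game: each extra attribute divides the candidate pool, so a handful of "harmless" fields already pins a person down to, on average, less than one match.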
45. Validity
• Bad data and bad models lead to bad decisions
• If decision making is opaque, results can be bad in the aggregate, and catastrophic for an individual
• What if someone has a loan denied because of an error in the data analysed?
46. Sources of Error
1. Choice of representative sample
2. Choice of attributes and measures
3. Errors in the Data
4. Errors in Model Design
5. Errors in Data Processing
6. Managing change
47. 1) Choice of representative sample
• Twitterverse (young, tech-savvy, richer than the average population)
• Should make sure that race, gender, and age are well-balanced
48. Example: Google Labelling Error
• Due to a poorly representative sample, Google image recognition technology used to classify pictures of black people as gorillas…
• You can rarely foresee all the ways your model can go wrong
• Most likely the training set contained very few dark-skinned faces
49. 2) Choice of attributes and measures
• Usually limited to what is available
• Additional attributes can sometimes be purchased or collected
• Requires a cost-to-value tradeoff
• Still, need to think about missing attributes
50. 3) Errors in the Data
• In the US in 2012, the FTC found that 26% of consumers had at least one error in their credit report
• 20% of these errors resulted in a substantially lower credit score
• Credit reporting agencies have a correction process
• By 2015, about 20% still had unresolved errors
51. Third-Party Data
• Material decisions can often be made on the basis of public data or data provided by a third party
• There are often errors in this data
• Does the affected subject have a mechanism to correct errors?
• Does the affected subject even know what data were used?
52. Solutions
• Are data sources authoritative, complete, and timely?
• Will subjects have access?
• How are mistakes and unintended consequences detected and corrected?
53. 4) Errors in Model Design
• Ending up with invalid conclusions even with perfect inputs, perfect data going in…
• Many ways a model can be incorrect:
1. Model structure and extrapolation
2. Feature selection
3. Ecological fallacy
4. Simpson’s paradox
54. 4.1) Model Structure and Extrapolation
• Most machine learning models just estimate parameters to fit a pre-determined model structure
• Do you know the model is appropriate?
• Are you trying to fit a linear model to a complex non-linear reality? Or the opposite?
58. 4.1) Model Structure and Extrapolation
• Overfitted model: an earthquake of magnitude 9 or more every 13,000 years
• Good model: every 300 years
https://ml.berkeley.edu/blog/2017/07/13/tutorial-4/
60. 4.2) Feature selection
• Did you know that taller people are more likely to
grow beards?
• Women are generally shorter
• They don’t grow beards
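The beard example is a confounder at work: pooled over everyone, height "predicts" beards, but only because sex drives both. A minimal sketch with invented, deterministic data; the correlation helper is plain Pearson's r.

```python
# Toy illustration of a confounded feature. All data is invented:
# women shorter and beardless, men taller and mostly bearded.
people = [
    # (height_cm, has_beard, sex)
    (160, 0, "F"), (165, 0, "F"), (170, 0, "F"),
    (175, 1, "M"), (180, 1, "M"), (185, 0, "M"),
]

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

heights = [h for h, b, s in people]
beards = [b for h, b, s in people]
print(f"pooled correlation(height, beard) = {pearson(heights, beards):.2f}")

# Conditioning on the confounder (sex) removes or even reverses the effect:
men = [(h, b) for h, b, s in people if s == "M"]
men_r = pearson([h for h, _ in men], [b for _, b in men])
print(f"male-only correlation = {men_r:.2f}")
```

In this toy data the pooled correlation is positive while the male-only correlation is negative: the "taller means beard" feature was really a proxy for sex.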
61. 4.3) Ecological Fallacy
• Analysing results for a group and assigning the results to individuals
• Districts with higher income have lower crime rates
• => concluding that a richer individual is less likely to be a criminal is the fallacy
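A tiny constructed example of how the district-level pattern can fail at the individual level. All numbers are invented.

```python
# Toy example of the ecological fallacy: richer districts have less crime,
# yet within a district the richest person can be the criminal.
districts = {
    "A": [(10, 0), (20, 0), (30, 1)],   # (income, is_criminal)
    "B": [(40, 0), (50, 0), (60, 0)],
}

for name, people in districts.items():
    mean_income = sum(i for i, _ in people) / len(people)
    crime_rate = sum(c for _, c in people) / len(people)
    print(f"district {name}: mean income {mean_income:.0f}, crime rate {crime_rate:.2f}")

# Group level: district B is richer AND has less crime.
# Individual level: inside district A, the richest person is the criminal,
# so "richer individual => less likely criminal" does not follow.
richest_in_A = max(districts["A"])
print(f"richest person in district A: {richest_in_A}")  # (30, 1): the criminal
```

Aggregate correlations describe groups; inferring individual-level relationships from them requires individual-level data.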
66. 5) Errors in Data Processing
• Wrong data entry, for example
• Bugs in code
67. 6) Managing Change
• Systems change continuously
• Is the analysis still valid?
• Most changes may not impact the analysis
• But some do, and we might not know which ones
• Famous case of “Google Flu”
• The predictor worked beautifully for a while
• Then it crashed
68. Campbell’s Law
• “The more any quantitative social indicator (or even some qualitative indicator) is used for social decision-making, the more subject it will be to corruption pressures and the more apt it will be to distort and corrupt the social processes it is intended to monitor” (Donald Campbell, 1979)
69. Equivalent for DS
• If the metrics are known, and they matter, people will work towards the metrics
• If critical analysis inputs can be manipulated, they will be
71. Algorithmic Fairness
• Can algorithms be biased?
• Can we make algorithms unbiased?
• Is the training data set representative of the population?
• Is the past population representative of the future population?
• Are observed correlations due to confounding processes?
72. Example: Algorithmic Vicious Cycle
• Company has only 10% women employees
• Company has a “boys’ club culture” that makes it difficult for women to succeed
• A hiring algorithm trained on current data, based on current employee success, scores women candidates lower
• Company ends up hiring even fewer women
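The cycle above can be sketched in a few lines: a "model" that scores candidates by the historical success rate of similar employees faithfully reproduces the cultural bias in its training data. Everything here is an invented toy, not any real hiring system.

```python
# Invented history: 90 men (78% labelled "successful") and 10 women
# (30% "successful" -- the culture, not the women, drives the gap).
history = [("M", 1)] * 70 + [("M", 0)] * 20 + [("F", 1)] * 3 + [("F", 0)] * 7

def success_rate(gender):
    """Historical success rate of employees of the given gender."""
    group = [s for g, s in history if g == gender]
    return sum(group) / len(group)

def hire(candidate_gender, threshold=0.5):
    """'Model' that hires whoever resembles historically successful staff."""
    return success_rate(candidate_gender) >= threshold

candidates = ["M", "F", "M", "F"]
hired = [g for g in candidates if hire(g)]
print(hired)  # only the male candidates pass the threshold
```

The loop closes because the new hires feed the next round of training data: the bias the model learned becomes the bias it creates.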
73. Bad Analysis from Good Data
• Correlated attributes
• Correct but misleading results
• P-hacking
74. Racial Discrimination
• Universities are prohibited by law from considering race in admissions
• They can find surrogate features that get them close, without violating the law
• Lenders are prohibited by law from redlining on race
• They can find surrogate features
• In general, a proxy can be found
75. Discrimination Intent
• Big Data provides the technology to facilitate such
proxy discrimination
• Whether this technology is used this way becomes
a matter of intent
• It also provides the technology to detect and
address discrimination
91. Correct but Misleading Results: Score Edition
• Hotel A gets an average of 3.2, with a mix of mostly 3s and 4s
• Hotel B gets an average of 3.2, with a mix of mostly 1s and 5s
92. Correct but Misleading Results: Score Edition
• Hotel A gets an average of 4.5, based on 2 reviews
• Hotel B gets an average of 4.4, based on 200 reviews
93. Correct but Misleading Results: Score Edition
• Hotel A gets an average of 4.5, based on 10 reviews
• Hotel B gets an average of 4.4, based on 500 reviews
94. Correct but Misleading Results: Score Edition
• Hotel A gets an average of 4.5, based on 10 reviews
• Hotel B gets an average of 4.4, based on 500 reviews
• Hotel A has 5 rooms, while Hotel B has 500 rooms
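One common way to compare averages backed by very different review counts is a damped (Bayesian-style) average that shrinks small samples toward a global prior. A minimal sketch; the prior mean (3.5) and prior weight (20) are arbitrary choices for illustration, not values from the talk.

```python
def damped_average(mean, n, prior_mean=3.5, prior_weight=20):
    """Blend the observed mean with a prior 'global' mean, weighted by n.

    Few reviews -> the prior dominates; many reviews -> the data dominates.
    """
    return (mean * n + prior_mean * prior_weight) / (n + prior_weight)

hotel_a = damped_average(4.5, n=10)    # great score, but only 10 reviews
hotel_b = damped_average(4.4, n=500)   # slightly lower score, 500 reviews
print(f"A: {hotel_a:.2f}  B: {hotel_b:.2f}")  # B now ranks above A
```

With these numbers Hotel A drops to about 3.83 while Hotel B stays near 4.36: the adjusted ranking reflects how much evidence actually backs each score.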
97. Diversity Suppression (Hiring)
• Use Data Science to find promising prospects
• Criteria are tuned to fit the majority
• The algorithm performs poorly on (some) minorities
• The best minority applicants are not hired
98. Diversity Suppression (Medical)
• Case 1: Group A is in the majority
• The drug is found effective with a suitable significance level
• Patients in group B are also given this drug
• Case 2: Group B is in the majority
• The drug is not found effective with sufficient significance over the whole population
• The drug is not approved, even though minority (group A) patients could have benefitted from it
99. P-Value Hacking
• Please read extensively about it before reporting any result claiming that your experiment has a p-value < 0.05
• Standard p-value mathematics was developed for traditional experimental techniques, where you design the experiment first and then collect the data
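A quick calculation shows why "p < 0.05" stops meaning much once you test many hypotheses on the same data: under the null, each test has a 5% false-positive chance, and those chances compound.

```python
# Family-wise false-positive probability for k independent tests at alpha=0.05.
alpha = 0.05
for k in (1, 5, 20, 100):
    p_any_false_positive = 1 - (1 - alpha) ** k
    print(f"{k:3d} independent tests -> P(at least one p < 0.05) = "
          f"{p_any_false_positive:.0%}")
# Just 20 tests already give a ~64% chance of a spurious "significant" result.
```

This is why exploratory analyses that quietly try many attribute combinations need corrections (e.g. Bonferroni) or a held-out confirmation dataset before any p-value is reported.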
100. Algorithmic Fairness: Conclusion
• Humans have many biases
• No human is perfectly fair, even with the best of intentions, and biases are sometimes hard to detect
• Biases in algorithms are usually easier to measure, even if the outcome is no fairer
103. Code of Ethics
• Doctors have the Hippocratic oath
• Journalists, lawyers, etc.
• Regulation?
• Trade associations
• Companies lose if they annoy customers
• So they self-regulate
104. Code of Ethics
• Can’t just take data and report whatever the algorithm spits out
• Need to understand the outcomes
• Need to own the outcomes
105. (One) Code of Ethics
• Do not surprise
• Who owns the data?
• What can the data be used for?
• What can you hide in exposed data?
• Own the outcome
• Is the data analysis valid?
• Is the data analysis fair?
• What are the societal consequences?
106. Conclusion: Ethics Matter
• Data Science has great power: to harm or to help
• Data Scientists must care about how this power is used
• We cannot hide behind a claim of “neutral technology”
• We are all better off if we voluntarily limit how this power is used
108. The rise of social cooling
• Like oil leads to global warming, data leads to social cooling
• If you feel you are being watched, you change your behaviour
• Your data is turned into thousands of different scores
111. Social cooling
• People are starting to realise that this ‘digital reputation’ could limit their opportunities
• People are changing their behaviour to get better scores