Ethical machines: data mining and fairness – the optimistic view

Introductory remarks to a seminar on algorithms and discrimination arranged by the Academy of Finland Centre of Excellence in the Philosophy of the Social Sciences at the University of Helsinki, 2016-05-02.

  1. Ethical machines: data mining and fairness – the optimistic view
     Anna Ronkainen, chief scientist, TrademarkNow
     it’s complicated, UU of Helsinki & Turku
     @ronkaine, 2016-05-02
  2. My three points
     1. people aren’t exactly perfect, either, and sometimes algorithms can be an improvement
     2. different types of algorithms needed for arriving at decisions and validating/disproving them
     3. data protection law about automated decision-making needs to be taken seriously
  3. Heuristics or biases? (Dhami 2003)
  4. Sometimes people fail in unexpected ways... (Danziger et al. (2011): Extraneous Factors in Judicial Decisions)
  5. Systems 1 and 2 in legal reasoning: interaction
     - System 1: making the decision
     - System 2: validation and justification
     (Ronkainen 2011)
  6. Implications for algorithms (hypothesis)
     - System-1-like processes cannot be captured reliably with GOFAI -> machine learning and other statistical approaches needed
     - the System 2 part (finding supporting arguments and validating/falsifying the decision candidate) can (and should) be implemented with rule-based GOFAI for accountability, maintainability, etc. (a toy sketch of this split follows the transcript)
  7. Taking data protection seriously? (2016 EU General Data Protection Regulation)
  8. Seriously-seriously? (1995 EU Data Protection Directive 95/46/EC)
  9. My three points
     1. people aren’t exactly perfect, either, and sometimes algorithms can be an improvement
     2. different types of algorithms needed for arriving at decisions and validating/disproving them
     3. data protection law about automated decision-making needs to be taken seriously
  10. Thank you!
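
Slide 6 hypothesises a hybrid architecture: a statistical, System-1-like component proposes a decision candidate, and an explicit, rule-based, System-2-like component validates it and produces the justification. The Python sketch below is not from the talk; it is only a minimal illustration of that division of labour. All names (system1_propose, system2_validate), the threshold logic, and the validation rules are made up for illustration, with the System 1 function standing in for any trained machine-learning model.

# Toy sketch (illustrative only) of the hybrid set-up hypothesised on slide 6:
# a statistical "System 1" proposes a decision candidate, and a rule-based,
# auditable "System 2" validates it and records human-readable justifications.

from dataclasses import dataclass, field
from typing import Callable, List, Tuple

@dataclass
class Candidate:
    decision: str    # e.g. "approve" or "reject"
    score: float     # confidence reported by the statistical component
    features: dict   # the input features, kept so that rules can inspect them

@dataclass
class Outcome:
    decision: str
    accepted: bool
    reasons: List[str] = field(default_factory=list)

def system1_propose(features: dict) -> Candidate:
    # Placeholder for any trained statistical model (the System-1-like part);
    # a single hard-coded threshold stands in for the learned function here.
    score = 0.8 if features.get("income", 0) > 30000 else 0.3
    return Candidate("approve" if score >= 0.5 else "reject", score, features)

# Explicit rules (the System-2-like part): each is a named, inspectable check.
RULES: List[Tuple[str, Callable[[Candidate], bool]]] = [
    ("model confidence must exceed 0.6", lambda c: c.score > 0.6),
    ("protected attributes must not appear in the features",
     lambda c: "gender" not in c.features),
]

def system2_validate(candidate: Candidate) -> Outcome:
    failed = [name for name, rule in RULES if not rule(candidate)]
    if failed:
        # A rejected candidate is referred to a human, with the failed rules
        # serving as the accountable justification.
        return Outcome("refer to human review", False, failed)
    return Outcome(candidate.decision, True, ["all validation rules satisfied"])

if __name__ == "__main__":
    candidate = system1_propose({"income": 42000})
    print(system2_validate(candidate))

The point of the split mirrors the slide: the statistical part can be retrained or swapped freely, while the validation rules stay explicit enough to audit, maintain and, where needed, override.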
