4. Good information service = good filtering
Information services, e.g. internet search, news feeds, etc.
free-to-use => no competition on price
lots of results => no competition on quantity
Competition on quality of service
Quality = relevance = appropriate filtering
5. Personalized recommendations
• Content-based – similarity to past results the user liked
• Collaborative – results that similar users liked
(people with statistically similar tastes/interests)
• Community-based – results that people in the same social network liked
(people who are linked on a social network, e.g. ‘friends’)
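To make the first two strategies concrete, here is a minimal sketch (all data, names and numbers are made-up toys, not any real recommender): content-based scoring compares a candidate item's features to a profile built from the items the user liked, while collaborative scoring weights other users' ratings by taste similarity. Community-based filtering would look like the collaborative case, with statistical similarity replaced by explicit social-graph links.

```python
import numpy as np

# Toy rating matrix (hypothetical): rows = users, columns = items; 0 = unrated.
ratings = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [1, 0, 5, 4],
], dtype=float)

# Toy item feature vectors (hypothetical), e.g. topic weights per item.
item_features = np.array([
    [1.0, 0.0],   # item 0: topic A
    [0.9, 0.1],   # item 1: mostly topic A
    [0.1, 0.9],   # item 2: mostly topic B
    [0.0, 1.0],   # item 3: topic B
])

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)

def content_based_score(user, item):
    """Similarity of the candidate item to a profile of the user's liked items."""
    liked = ratings[user] >= 4                   # items this user rated highly
    profile = item_features[liked].mean(axis=0)  # average features of liked items
    return cosine(profile, item_features[item])

def collaborative_score(user, item):
    """Other users' ratings of the item, weighted by taste similarity."""
    others = [u for u in range(len(ratings)) if u != user and ratings[u, item] > 0]
    weights = np.array([cosine(ratings[user], ratings[u]) for u in others])
    votes = np.array([ratings[u, item] for u in others])
    return (weights @ votes) / (weights.sum() + 1e-9)

# User 0 never rated item 2; both methods agree it is a poor match here,
# but for different reasons (item features vs. similar users' votes).
print(f"content-based:  {content_based_score(0, 2):.2f}")
print(f"collaborative:  {collaborative_score(0, 2):.2f}")
```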
6. Search engine manipulation effect could impact elections –
distort competition
Experiments manipulated the search rankings for information about
political candidates shown to 4,556 undecided voters. The findings:
i. biased search rankings can shift the voting
preferences of undecided voters by 20% or
more
ii. the shift can be much higher in some
demographic groups
iii. such rankings can be masked so that people
show no awareness of the manipulation.
R. Epstein & R.E. Robertson “The search engine manipulation effect (SEME) and its possible impact on the outcome
of elections”, PNAS, 112, E4512-21, 2015
9. User understanding of social media algorithms
More than 60% of Facebook users are entirely unaware of any algorithmic
curation on Facebook at all: “They believed every single story from their
friends and followed pages appeared in their news feed”.
Published at: CHI 2015
12. Machine Learning based systems
Supervised Machine Learning
= computationally intensive statistics with
implicitly defined optimisation criteria
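One way to read “implicitly defined optimisation criteria”: a supervised learner does exactly, and only, what its loss function rewards. The sketch below (a least-squares linear fit on synthetic data, chosen purely for illustration) writes that criterion out as code; in deployed systems it is often buried in library defaults and training data rather than stated this explicitly.

```python
import numpy as np

# Synthetic training data (made up): y is roughly 3x + 2 plus noise.
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=100)
y = 3.0 * x + 2.0 + rng.normal(0, 1, size=100)

# THIS is the optimisation criterion. Everything the fitted model does
# follows from minimizing it; change the loss, change the behaviour.
def mse_loss(w, b):
    return np.mean((w * x + b - y) ** 2)

# Plain gradient descent on the mean squared error.
w, b, lr = 0.0, 0.0, 0.01
for _ in range(2000):
    err = w * x + b - y
    w -= lr * np.mean(2 * err * x)  # d(loss)/dw
    b -= lr * np.mean(2 * err)      # d(loss)/db

print(f"fitted w = {w:.2f}, b = {b:.2f}, loss = {mse_loss(w, b):.3f}")
```

Whether mean squared error is an acceptable criterion for a given context of use is exactly the kind of question the bias considerations below are about.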
16. Algorithmic price setting by sellers on Amazon
Algorithmic tuning of prices becomes visible when something goes wrong.
Based on observation-based reverse engineering:
profnath’s algorithm:
1. Find the highest-priced copy of the book.
2. Set price to 0.9983 times that price.
bordeebook’s algorithm:
1. Find the highest-priced copy from other sellers.
2. Set price to 1.270589 times that price.
The oddly precise ratios “0.9983” and “1.270589” suggest they were not
chosen by humans (the resulting loop is simulated below).
http://www.michaeleisen.org/blog/?p=358
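The runaway loop is easy to reproduce. Below is a small simulation of the two reverse-engineered rules (starting prices are invented; the multipliers are the observed ones). Because 0.9983 × 1.270589 ≈ 1.2684, each full repricing cycle inflates both prices by about 27%, so growth is exponential.

```python
# Simulate the two reverse-engineered repricing rules from the slide.
# Starting prices are hypothetical; the multipliers are the observed ones.
profnath, bordeebook = 17.99, 25.00

for day in range(1, 11):
    profnath = 0.9983 * bordeebook    # undercut the highest-priced copy slightly
    bordeebook = 1.270589 * profnath  # mark up the other seller's price
    print(f"day {day:2d}: profnath ${profnath:,.2f}   bordeebook ${bordeebook:,.2f}")

# Each cycle multiplies both prices by 0.9983 * 1.270589 ~= 1.2684, so after
# a couple of months of daily repricing a used biology text can end up at
# the $23,698,655.93 (plus $3.99 shipping) recounted in the linked post.
```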
18. Algorithmic Bias Considerations
All non-trivial* decisions are biased
We seek to minimize bias that is:
Unintended
Unjustified
Unacceptable
as defined by the context where the system is used.
*Non-trivial means the decision has more than one possible outcome and the
choice is not uniformly random.
19. Causes of algorithmic bias
Insufficient understanding of the context of use – who will be affected?
Failure to rigorously identify the decision/optimisation criteria.
Failure to have explicit justifications for the chosen criteria.
Failure to check if the justifications are acceptable in the context where the
system is used.
System not performing as intended – implementation errors; unreliable input data.
23. Biography
• Dr. Koene is a Senior Research Fellow at the Horizon Digital Economy Research
Institute of the University of Nottingham, where he conducts research on the
societal impact of digital technology.
• Chairs the IEEE P7003™ Standard for Algorithmic Bias Considerations working
group, and leads the policy impact activities of the Horizon institute.
• He is co-investigator on the UnBias project to develop regulation-, design- and
education-recommendations for minimizing unintended, unjustified and
inappropriate bias in algorithmic systems.
• Over 15 years of experience researching and publishing on topics ranging from
Robotics, AI and Computational Neuroscience to Human Behaviour studies and
Tech. Policy recommendations.
• He received his M.Eng. and Ph.D., in Electrical Engineering and Neuroscience
respectively, from Delft University of Technology and Utrecht University.
• Trustee of 5Rights, a UK-based charity enabling children and young people to
access the digital world creatively, knowledgeably and fearlessly.
Dr. Ansgar Koene
ansgar.koene@nottingham.ac.uk
https://www.linkedin.com/in/akoene/
https://www.nottingham.ac.uk/computerscience/people/ansgar.koene
http://unbias.wp.horizon.ac.uk/
Editor's Notes
Each method has its strengths and weaknesses. The following examples provide a brief overview of how these methods work, and how they can go wrong.
All non-trivial decisions are biased. For example, good results from a search engine should be biased to match the interests of the user as expressed by the search term, and possibly refined based on personalization data.
When we say we want ‘no Bias’ we mean we want to minimize unintended, unjustified and unacceptable bias, as defined by the context within which the algorithmic system is being used.
In the absence of malicious intent, bias in algorithmic systems is generally caused by:
Insufficient understanding of the context that the system is part of. This includes a lack of understanding of who will be affected by the algorithmic decision outcomes, resulting in a failure to test how the system performs for specific groups, who are often minorities. Diversity in the development team can partially help to address this.
Failure to rigorously map decision criteria. When people think of algorithmic decisions as more ‘objectively trustworthy’ than human decisions, more often than not they are referring to the idea that algorithmic systems follow a clearly defined set of criteria with no ‘hidden agenda’. The complexity of system development, however, can easily introduce ‘hidden decision criteria’, e.g. a quick fix added during debugging, or patterns embedded within Machine Learning training data.
Failure to explicitly define and examine the justifications for the decision criteria. Given the context within which the system is used, are these justifications acceptable? For example, in a given context is it OK to treat high correlation as evidence of causation?
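On the last point, a synthetic example shows why high correlation alone cannot justify a causal decision criterion: two variables that never influence each other, but share a hidden confounder, still correlate strongly (all numbers below are made up for illustration).

```python
import numpy as np

rng = np.random.default_rng(1)

# Hidden confounder: an unobserved common driver of both variables.
z = rng.normal(size=10_000)

# x and y are each driven by z plus independent noise;
# neither has any causal effect on the other.
x = z + 0.3 * rng.normal(size=10_000)
y = z + 0.3 * rng.normal(size=10_000)

print(f"corr(x, y) = {np.corrcoef(x, y)[0, 1]:.2f}")  # ~0.92, with zero causal link
```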
Ansgar Koene is a Senior Research Fellow at the Horizon Digital Economy Research Institute, University of Nottingham, and chairs the IEEE P7003 Standard for Algorithmic Bias Considerations working group. As part of his work at Horizon, Ansgar is the lead researcher in charge of Policy Impact; leads the stakeholder engagement activities of the EPSRC (UK research council) funded UnBias project to develop regulation-, design- and education-recommendations for minimizing unintended, unjustified and inappropriate bias in algorithmic systems; and frequently contributes evidence to UK parliamentary inquiries related to ICT and digital technologies.
Ansgar has a multi-disciplinary research background, having previously worked and published on topics ranging from bio-inspired Robotics, AI and Computational Neuroscience to experimental Human Behaviour/Perception studies.