
Algorithmically Mediated Online Information Access workshop at WebSci'17

This was a half-day UnBias project workshop at the WebSci'17 conference, presenting some of the project's interim results and engaging the audience in debate on issues related to the role of algorithms in mediating access to online information.


  1. Algorithm Mediated Online Information Access: user trust, transparency, control and responsibility. 25 June 2017, WebSci'17
  2. Workshop outline
     9:00 – 9:10 Introduction
     9:10 – 9:30 Overview of the ongoing UnBias project: Youth Juries deliberations, user observation studies, stakeholder engagement workshops
     10:10 – 10:30 “Platforms: Do we trust them”, by Rene Arnold
     10:30 – 10:50 “IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems”, by John Havens
     10:50 – 11:10 Break
     11:10 – 11:50 Case study 1: News recommendation & fake news
     11:50 – 12:30 Case study 2: Business models, CSR/RRI - a role for WebScience
     12:30 – 12:50 Break
     12:50 – 13:30 Case study 3: Algorithmic discrimination
     13:30 – 14:00 Summary of outcomes
  3. UnBias: Emancipating Users Against Algorithmic Biases for a Trusted Digital Economy. User experience of algorithm-driven internet platforms and the processes of algorithm design.
  4. UnBias: Emancipating Users Against Algorithmic Biases for a Trusted Digital Economy
     Mission: Develop co-designed recommendations for design, regulation and education to mitigate unjustified bias in algorithmic systems.
     Aim: A ‘fairness toolkit’ consisting of three co-designed tools:
     ◦ a consciousness-raising tool for young internet users to help them understand online environments;
     ◦ an empowerment tool to help users navigate through online environments;
     ◦ an empathy tool for online providers and other stakeholders to help them understand the concerns and rights of internet users.
  5. Project team and institutions
     University of Nottingham: Derek McAuley, Ansgar Koene, Elvira Perez Vallejos, Virginia Portillo, Monica Cano Gomez, Liz Douthwaite
     University of Edinburgh: Michael Rovatsos, Sofia Ceppi
     University of Oxford: Marina Jirotka, Helena Webb, Menisha Patel
     Giles Lane (Proboscis)
     http://unbias.wp.horizon.ac.uk/
  6. UnBias: work packages
     WP1 [led by Nottingham]: ‘Youth Juries’ with 13-17 year old “digital natives” to co-produce citizen education materials on properties of information filtering/recommendation algorithms.
     WP2 [led by Edinburgh]: co-design workshops/hackathons & double-blind testing to produce user-friendly open-source tools for benchmarking and visualising biases in filtering/recommendation algorithms (a toy sketch of this kind of comparison follows below).
     WP3 [led by Oxford]: user observation & interviews to explore human-algorithm interaction.
     WP4 [co-led by Oxford and Nottingham]: stakeholder engagement involving informed professionals from key groups to discuss issues of algorithmic bias and fairness. Workshop outcomes to inform project recommendations and outputs.
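By way of illustration only, the following minimal Python sketch shows the kind of comparison a WP2-style benchmarking tool might make: run the same stand-in recommender for two simulated user profiles that differ in a single attribute and compare the rankings each receives. The items, profiles and scoring rule are invented and are not the project's actual tools or data.

```python
# Invented stand-in recommender and profiles, purely to illustrate the idea of
# benchmarking a recommendation algorithm for bias across user attributes.
items = ["budget-news", "premium-news", "local-news"]

def recommend(profile):
    """Stand-in recommender that quietly boosts 'premium' content for high-income profiles."""
    def score(item):
        boost = 1.0 if profile["income"] == "high" and "premium" in item else 0.0
        return boost + profile["interests"].get(item, 0.0)
    return sorted(items, key=score, reverse=True)

profile_a = {"income": "high", "interests": {"local-news": 0.2}}
profile_b = {"income": "low",  "interests": {"local-news": 0.2}}

for name, profile in [("A", profile_a), ("B", profile_b)]:
    print(f"profile {name}: {recommend(profile)}")
# Rankings that differ only because of the income attribute are the kind of
# signal a benchmarking/visualisation tool would want to surface.
```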
  7. http://unbias.wp.horizon.ac.uk/
  8. WP1: Youth Juries
     Stimulus – Discussion – Recommendations
     • Personalisation algorithms
     • Search Results & News
     • Transparency & Regulation
     Participants: N = 144, age 13-23 (av. 15), gender 67 F / 77 M
  9. Pre-session survey results
     • No preference between a more personalised and a more neutral internet experience. A.: 53% More personalised, 47% More neutral
     • Lack of awareness about the way search engines rank information, but jurors believe it is important for people to know.
       - How much do you know? A.: 36% Not much, 58% A little, 6% Quite a lot
       - Do you think it's important to know? A.: 62% Yes, 16% Not really, 22% Don't know
     • Regulation role: Who makes sure that the Internet and digital world is safe and neutral? A.: 4% Police, 23% Nobody, 29% Government, 44% The big tech companies
  10. Post-session survey results
      • Q.: Did you learn anything new today about:
        - Algorithm fairness? A.: 68% Yes, a lot; 31% Yes, a little; 1% No
        - How the Internet affects your life? A.: 55% Yes, a lot; 39% Yes, a little; 5% No; 1% Don't know
      • “Social media sites should not influence the information to their users” A.: 49% Agree, 25% Neutral, 20% Disagree, 5% No Response
      • “When I use digital technologies I would like to have more control of what happens to me and my personal data” A.: 82% Agree, 7% Neutral, 8% Disagree, 3% NR
      • “It should be made easier for people to remove digital content about themselves and recreate their online profiles” A.: 74% Agree, 18% Neutral, 4% Disagree, 4% NR
      • “The big tech companies are accountable enough to users of digital technologies” A.: 42% Agree, 35% Neutral, 16% Disagree, 7% NR
  11. Attitudinal change: increase in participants’ confidence as social actors
      • I can influence the way that digital technologies work for young people
        Before: 4% Yes, a lot; 52% Yes, a little; 44% No
        After: 32% Agree, 36% Neutral, 27% No, 5% NR
      • 13-24 year olds should influence how digital technologies and services are run
        Before: 42% Yes, 24% Not bothered, 28% Don’t know, 6% No
        After: 58% Agree, 27% Neutral, 10% No, 5% NR
  12. Summary
      • Nearly all participants reported learning something new about algorithm fairness (99%) and how the Internet affects their lives (94%).
      • No preference regarding personalisation vs neutrality, but 50% agreed social media should not influence the information to their users.
      • Nearly half of the jurors (44%) believe the big tech companies are responsible for keeping the Internet and digital world safe and neutral.
      • Young people would like to have more control over their online lives (82%) and feel they should influence how digital technologies and services are run (58%).
  13. WP2: Algorithm ‘fairness’
  14. Evaluating fairness from outputs only
      [figure: allocation outcomes ranked by participants from most preferred to least preferred]
  15. Evaluating fairness with knowledge about the algorithm decision principles (a toy sketch of these principles follows below)
      • A1: minimise total distance while guaranteeing at least 70% of maximum possible utility
      • A2: maximise the minimum individual student utility while guaranteeing at least 70% of maximum possible total utility
      • A3: maximise total utility
      • A4: maximise the minimum individual student utility
      • A5: minimise total distance
      [figure: the five algorithms ranked by participants from most preferred to least preferred]
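To make the five decision principles concrete, here is a minimal Python sketch on an invented allocation task (three students, two options, made-up utilities and travel distances). The brute-force search simply picks the assignment each principle prefers; it is an illustration under these assumptions, not the workshop's actual exercise.

```python
from itertools import product

# Toy allocation task: each student is assigned to one option; every assignment
# has a made-up utility for the student and a distance to travel.
students = ["s1", "s2", "s3"]
options = ["north", "south"]

utility = {"s1": {"north": 9, "south": 4},
           "s2": {"north": 7, "south": 6},
           "s3": {"north": 3, "south": 8}}
distance = {"s1": {"north": 5, "south": 1},
            "s2": {"north": 2, "south": 6},
            "s3": {"north": 4, "south": 2}}

def stats(assignment):
    """Total utility, minimum individual utility and total distance of an assignment."""
    utils = [utility[s][o] for s, o in zip(students, assignment)]
    dists = [distance[s][o] for s, o in zip(students, assignment)]
    return sum(utils), min(utils), sum(dists)

all_assignments = list(product(options, repeat=len(students)))
max_total_utility = max(stats(a)[0] for a in all_assignments)
floor = 0.7 * max_total_utility  # the "at least 70% of maximum possible utility" guarantee

def score(principle, assignment):
    """Score an assignment under one of the five principles (higher is better)."""
    tot, mn, dist = stats(assignment)
    if principle in ("A1", "A2") and tot < floor:
        return float("-inf")  # assignment violates the 70% guarantee
    return {"A1": -dist, "A2": mn, "A3": tot, "A4": mn, "A5": -dist}[principle]

for principle in ["A1", "A2", "A3", "A4", "A5"]:
    best = max(all_assignments, key=lambda a: score(principle, a))
    tot, mn, dist = stats(best)
    print(f"{principle}: {dict(zip(students, best))}  "
          f"total utility={tot}, min utility={mn}, total distance={dist}")
```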
  16. WP3: User behaviour
  17. WP4: First Multi-Stakeholder Workshop, 3rd Feb 2017
      • 30 participants from multiple stakeholder groups: academia, education, NGOs, enterprises
      • Fairness in relation to algorithmic design and practice
      • Four key case studies: fake news, personalisation, gaming the system, and transparency
      • What constitutes a fair algorithm?
      • What kinds of (legal and ethical) responsibilities do Internet companies have to ensure their algorithms produce results that are fair and without bias?
  18. The Conceptualisation of Fairness: “a context-dependent evaluation of the algorithm processes and/or outcomes against socio-cultural values. Typical examples might include evaluating: the disparity between best and worst outcomes; the sum-total of outcomes; worst-case scenarios; everyone is treated/processed equally without prejudice or advantage due to task-irrelevant factors”.
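As a rough illustration of the example evaluations listed in this definition, the short Python sketch below computes the best/worst disparity, the sum-total of outcomes, the worst case, and a crude group comparison on a task-irrelevant attribute. The names, numbers and group labels are invented.

```python
from statistics import mean

# Made-up per-person outcomes (higher = better) and an invented, task-irrelevant attribute.
outcomes = {"alice": 8.0, "bob": 5.5, "carol": 2.0}
group = {"alice": "A", "bob": "B", "carol": "A"}

values = list(outcomes.values())
print("disparity between best and worst:", max(values) - min(values))
print("sum-total of outcomes:", sum(values))
print("worst-case outcome:", min(values))

# Crude check of "no advantage due to task-irrelevant factors":
# compare mean outcomes across groups defined by the irrelevant attribute.
by_group = {}
for person, g in group.items():
    by_group.setdefault(g, []).append(outcomes[person])
for g, vals in sorted(by_group.items()):
    print(f"group {g}: mean outcome {mean(vals):.2f}")
```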
  19. Criteria relating to social norms and values:
      (i) Sometimes disparate outcomes are acceptable if they are based on individual lifestyle choices over which people have control.
      (ii) Ethical precautions are more important than higher accuracy.
      (iii) There needs to be a balancing of individual values and socio-cultural values.
      Problem: how to weigh the relevant socio-cultural values?
  20. Criteria relating to system reliability:
      (i) Results must be balanced with due regard for trustworthiness.
      (ii) Need for independent system evaluation and monitoring over time.
  21. Criteria relating to (non-)interference with user control:
      (i) The subjective experience of fairness depends on the user’s objectives at the time of use, and therefore requires an ability to tune the data and the algorithm.
      (ii) Users should be able to limit data collection about them and its use. Inferred personal data is still personal data. The meaning assigned to the data must be justified towards the user.
      (iii) The reasoning/behaviour of the algorithm should be demonstrated/explained in a way that can be understood by the data subject.
      (iv) If not vital to the task, there should be an opt-out of the algorithm.
      (v) Users must have freedom to explore algorithm effects, even if this would increase the ability to “game the system”.
      (vi) There need to be clear means of appeal/redress for the impact of the algorithmic system.
  22. Transparency Discussion
      • Difference between transparency and meaningful transparency.
      • Transparency of an algorithmic process does not overcome bias if the data the algorithm works on is biased.
      • Wariness that transparency can lead to ‘gaming the system’.
  23. Transparency Solutions
      1) An intermediary organisation to audit and analyse algorithms, trusted by end users to deliver an opinion about what the operation of an algorithm means in practice.
      2) Certification for algorithms, showing agreement to submit to regular testing, guarantees of good practice, etc.
      3) Voluntary certificates of fairness, etc., e.g. Fairtrade.
  24. WP4: Second Multi-Stakeholder Workshop, June 2017
      • 25 participants: academia, law, enterprise, NGOs
      • Fair resource allocation task
      • Empathy tool for developers of algorithms
  25. Key points
      • Discussion of algorithms inevitably leads to discussion of the context in which the algorithm will be applied.
      • Discussion of the preferred or ‘best’ algorithm for allocation inevitably leads to moral debates over fairness, equality and what individuals ‘deserve’.
      • Empathy can be a positive as it promotes understanding and may be followed by positive action. Empathy from developers of algorithms towards users might entail respect for users’ time, attention, content and contact with platforms.
      • However, empathy cannot be guaranteed to lead to positive action; it might make users more vulnerable to manipulation.
      • Is empathy compatible with a business model based on advertising? (cf. Facebook targeting of vulnerable teenagers.)
  26. Case Study 1: The role of recommender algorithms in online hoaxes and fake news
  27. Questions to consider
      1. How easy is it for a) individual users and b) groups of users to influence the order of responses in a web search?
      2. How could search engines weight their search results towards more authoritative results ahead of more popular ones? Should they? (A toy re-ranking sketch follows below.)
      3. To what extent should web search platforms manually manipulate their own algorithms, and in what instances? NB Google has made a number of adjustments re anti-Semitism etc. and places a suicide helpline at the top of searches about how to kill oneself.
      4. To what extent should public opinion influence the ways in which platforms design and adjust their autocomplete and search algorithms?
      5. What other features should and should not have a role in influencing the design of autocomplete and search algorithms?
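Question 2 can be illustrated with a small, purely hypothetical re-ranking sketch: blend a popularity signal with an authority signal and sort by the blended score. The documents, scores and the 0.7 weight are made up; real search engines use far richer signals than this.

```python
from typing import NamedTuple

class Result(NamedTuple):
    url: str
    popularity: float  # e.g. a normalised click/engagement signal, 0..1 (assumed)
    authority: float   # e.g. a normalised source-credibility signal, 0..1 (assumed)

results = [
    Result("viral-hoax.example", popularity=0.95, authority=0.10),
    Result("tabloid.example",    popularity=0.70, authority=0.40),
    Result("newswire.example",   popularity=0.45, authority=0.90),
]

AUTHORITY_WEIGHT = 0.7  # how strongly authority outranks popularity (an assumption)

def blended_score(r: Result) -> float:
    return AUTHORITY_WEIGHT * r.authority + (1 - AUTHORITY_WEIGHT) * r.popularity

# Re-rank: authoritative results move ahead of merely popular ones.
for r in sorted(results, key=blended_score, reverse=True):
    print(f"{r.url}: blended={blended_score(r):.2f} (pop={r.popularity}, auth={r.authority})")
```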
  28. Case Study 2: Business models - how can WebScience boost CSR / RRI?
      Responsible Research and Innovation (RRI) emerged from concerns about the increasingly potent and transformative potential of research and innovation, and the societal and ethical implications these may engender. A responsible approach to the design, development and appropriation of technologies through the lens of RRI entails multi-stakeholder involvement throughout the processes and outcomes of research and innovation.
      YouTube ad delivery algorithm controversy
  29. Questions to consider:
  30. Case Study 3: Unintended algorithmic discrimination online - towards detection and prevention
      Personalisation can be very helpful. However, there are concerns:
      1. The creation of online echo chambers or filter bubbles (a toy sketch of this follows below).
      2. The results of personalisation algorithms may be inaccurate and even discriminatory.
      3. Personalisation algorithms function to collate and act on information collected about the online user.
      4. Algorithms typically lack transparency.
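Concern 1 (filter bubbles) can be illustrated with a deliberately naive personaliser that only recommends items from topics the user has already clicked on, so the pool of topics the user sees never widens. The catalogue and click model below are invented for the example.

```python
from collections import Counter

catalogue = [
    ("sports", "match report"), ("sports", "transfer rumour"),
    ("politics", "election analysis"), ("science", "climate study"),
]

def recommend(click_history, k=2):
    """Recommend the k items whose topic the user has clicked on most often."""
    topic_counts = Counter(topic for topic, _ in click_history)
    ranked = sorted(catalogue, key=lambda item: topic_counts[item[0]], reverse=True)
    return ranked[:k]

history = [("sports", "match report")]  # one sports click to start with
for _ in range(3):
    recs = recommend(history)
    history.extend(recs)                # assume the user clicks whatever is recommended
    print(recs)
# After a few rounds only "sports" items are ever shown: a minimal filter bubble.
```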
  31. Questions to consider:
      1. What is your response to this comment from Mark Zuckerberg to explain the value of personalisation on the platform? “A squirrel dying in front of your house may be more relevant to your interests right now than people dying in Africa”
      2. What (legal or ethical) responsibilities do internet platforms have to ensure their personalisation algorithms are 1) not inaccurate or discriminatory and 2) transparent?
      3. To what extent should users be able to determine how much or how little personal data internet platforms collect about us?
      4. To what extent would algorithmic transparency help to address concerns raised about the negative impacts of personalisation algorithms?
      5. Is there any point to algorithmic transparency? What might be some useful alternatives?
  32. UnBias workshop report available at http://unbias.wp.horizon.ac.uk/
  33. Summary Points
      1) The ‘problem’ of potential bias and unfairness in algorithmic practice is broad in scope and has the potential to disproportionately affect vulnerable users;
      2) the problem is also nuanced, as the presence of algorithms on online platforms can be of great benefit in assisting users to achieve their goals;
      3) finding solutions to the problem is highly complex.
