Presentation at the AoIR2017 conference at Tartu, Estonia summarizing preliminary results from workshops by the EPSRC funded UnBias project (http://unbias.wp.horizon.ac.uk/)
Presentation at European Big Data Values forum on Fairness, Bias and the role of Ethics Standards in Algorithmic Decision Making. Part of the Data & Society session.
AI and us: communicating for algorithmic bias awareness - Ansgar Koene
Presentation on awareness raising and effective communication around algorithmic bias at the Humboldt Institute for Internet and Society workshop on "AI & us", June 2018
Algorithmically Mediated Online Information Access workshop at WebSci17 - Ansgar Koene
This was a half-day UnBias project workshop at the WebSci'17 conference presenting some of the interim UnBias project results and engaging the audience in debate on issues related to the role of algorithms in mediated access to online information.
Keynote presentation on policy approaches to socio-technical causes of algorithmic bias at the Bias in Information, Algorithms and Systems workshop at the iConference on 25 March 2018.
Presentation on the IEEE P7000 series standards, and the P7003 standard for Algorithmic Bias Considerations, at the EC JRC HUman behaviour and MAchine INTelligence (HUMAINT) project kick-off workshop, March 2018
Presentation outlining the UnBias project, an EPSRC funded project on transparency of biases in algorithm behaviour, which often arise from unavoidable implicit design choices.
This presentation was given at the DASTS16 conference in Aarhus Denmark on June 3rd 2016.
Editorial responsibilities arising from personalisation algorithms - Ansgar Koene
Presentation at the 2017 Ethicomp conference discussing issues of editorial responsibility of online platforms arising from personalization algorithms that mediate the information that people see online.
TRILcon'17 conference workshop presentation on UnBias stakeholder engagement - Ansgar Koene
Presentation outlining the stakeholder engagement activities of the UnBias project, including case study driven debate with participants at the Winchester TRILcon conference on May 3rd 2017
Briefing to eCommerce negotiators on algorithmic decision making and associated issues of algorithmic bias. The presentation uses examples to highlight the types and causes of algorithmic decision making bias and to summarize the current state of regulatory responses.
Overview of public concerns and regulatory interest in issues of algorithm transparency, accountability and fairness, with background information about the technical/design origins of these issues.
Taming AI: Engineering, Ethics and Policy - Ansgar Koene
Presentation on the IEEE Global Initiative for Ethics of Autonomous and Intelligent Systems presented at the KAIST workshop on Taming AI: Engineering, Ethics and Policy, June 2018
Young people's policy recommendations on algorithm fairness, WebSci17 - Ansgar Koene
Conference presentation at the WebScience 2017 conference, June 26-28th 2017, Troy, NY, USA. This presentation summarizes the methodology and some preliminary results of the UnBias project Youth Juries activities, exploring young people's experiences, concerns and recommendations related to algorithms that mediate access to online information.
Studium Generale presentation at TU Eindhoven on 25 October 2017 (http://www.studiumgenerale-eindhoven.nl/nl/agenda/archief/the-age-of-the-algorithm/0/1109/) discussing the impact of algorithmic decision making on modern society and the ethical responsibility of the engineers who build these systems
A koene intersectionality_algorithmic_discrimination_dec2017 - Ansgar Koene
Presentation on policy and standards activities related to algorithmic decision making, presented at the Lorentz centre workshop on Intersectionality and Algorithmic Discrimination, December 2017
Responsible AI: An Example AI Development Process with Focus on Risks and Con... - Patrick Van Renterghem
Organisations need to make sure that they use AI in an appropriate way. Martijn and Hugo explain how to ensure that the developments are ethically sound and comply with regulations, how to have end-to-end governance, and how to address bias and fairness, interpretability and explainability, and robustness and security.
During the conference, we looked at an example AI development process, focusing on the risks to be managed and the controls that can be established.
Algorithmic Bias: Challenges and Opportunities for AI in Healthcare - Gregory Nelson
Gregory S. Nelson, VP, Analytics and Strategy – Vidant Health | Adjunct Faculty Duke University
The promise of AI is quickly becoming a reality for a number of industries including healthcare. For example, we have seen early successes in augmenting clinical intelligence for diagnostic imaging and in the early detection of pneumonia and sepsis. But what happens when the algorithms are biased? In this presentation, we will outline a framework for AI governance and discuss ways in which we can address algorithmic bias in machine learning.
Objective 1: Illustrate the issues of bias in AI through examples specific to healthcare.
Objective 2: Summarize the growing body of work in the legal, regulatory, and ethical oversight of AI models and the implications for healthcare.
Objective 3: Outline steps that we can take to establish an AI governance strategy for our organizations.
CIS 2015 The Ethics of Personal Data - Robin Wilton, CloudIDSummit
We're all more conscious than we were two years ago about how much data is collected about us, and how revealing it can be. The commercial and government direction of travel is clear: more data, more mining, more monetization. And if personal data fuels the information economy, who'd want to stop that? But can we get the economic benefits without selling our digital souls in the process?
- Is there a data equivalent to the "polluter pays" principle? And if not, is there an alternative?
- Ethical data handling sounds great in principle, but can it be practical?
- How can organizations put ethical data handling into practice?
ICT and Environmental Regulation in the Developing World: Inequalities in Ins... - Rónán Kennedy
SCRIPT Centre Workshop on 'ICT in a changing climate: ICT for environmental regulation as a global justice and development issue', University of Edinburgh, June 2015
Artificial intelligence (AI) refers to a constellation of technologies, including machine learning, perception, reasoning, and natural language processing. While the field has been pursuing principles and applications for over 65 years, recent advances, uses, and attendant public excitement have returned it to the spotlight. The impact of early AI systems is already being felt, bringing with it challenges and opportunities, and laying the foundation on which future advances in AI will be integrated into social and economic domains. The potential wide-ranging impact makes it necessary to look carefully at the ways in which these technologies are being applied now, whom they're benefiting, and how they're structuring our social, economic, and interpersonal lives.
The Impact of Computing Systems | Causal inference in practice - Amit Sharma
Computing and machine learning systems are affecting almost all parts of our lives and society at large. How do we formulate and estimate the impact of these systems? This talk introduces causal inference as a methodology to answer such questions and provides examples of applying it to estimate the impact of recommender systems, online social media feeds, search engines and interventions in public health in India.
Algorithmically Mediated Online Information Access at MozFest17 - Ansgar Koene
Mozfest 2017 Decentralization session group discussion about the role of algorithms in mediating the information that people see online, ways in which this information is/can be manipulated and the responsibility of platforms and developers.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2023/09/responsible-ai-tools-and-frameworks-for-developing-ai-solutions-a-presentation-from-intel/
Mrinal Karvir, Senior Cloud Software Engineering Manager at Intel, presents the “Responsible AI: Tools and Frameworks for Developing AI Solutions” tutorial at the May 2023 Embedded Vision Summit.
Over 90% of businesses using AI say trustworthy and explainable AI is critical to business, according to Morning Consult’s IBM Global AI Adoption Index 2021. If not designed with responsible considerations of fairness, transparency, preserving privacy, safety and security, AI systems can cause significant harm to people and society and result in financial and reputational damage for companies.
How can we take a human-centric approach to design AI solutions? How can we identify different types of bias and what tools can we use to mitigate those? What are model cards, and how can we use them to improve transparency? What tools can we use to preserve privacy and improve security? In this talk, Karvir discusses practical approaches to adoption of responsible AI principles. She highlights relevant tools and frameworks and explores industry case studies. She also discusses building a well-defined response plan to help address an AI incident efficiently.
Ethics of personalized information filtering - Ansgar Koene
Presentation given at the 2nd International Conference on Internet Science, Brussels, May 27-29, 2015. In this presentation we discuss the ethical implications of information personalization by internet services (e.g. search engines, news feeds). After outlining the rationale behind information personalization and highlighting some of the socio-psychological, privacy and transparency issues, we conclude with a call for future research.
Ethics and Responsible AI Deployment
Abstract: As Artificial Intelligence (AI) becomes more prevalent, protecting personal privacy is a critical ethical issue that must be addressed. This article explores the need for ethical AI systems that safeguard individual privacy while complying with ethical standards. By taking a multidisciplinary approach, the research examines innovative algorithmic techniques such as differential privacy, homomorphic encryption, federated learning, international regulatory frameworks, and ethical guidelines. The study concludes that these algorithms effectively enhance privacy protection while balancing the utility of AI with the need to protect personal data. The article emphasises the importance of a comprehensive approach that combines technological innovation with ethical and regulatory strategies to harness the power of AI in a way that respects and protects individual privacy.
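As an illustration of one of the techniques named in the abstract, differential privacy is often implemented with the Laplace mechanism. The sketch below is a minimal example; the records, query and epsilon value are invented for illustration and are not taken from the article.

```python
import math
import random

def laplace_noise(scale):
    """Sample Laplace(0, scale) noise via inverse-CDF sampling."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def dp_count(records, predicate, epsilon):
    """Release a count with epsilon-differential privacy.

    A counting query has sensitivity 1 (adding or removing one record
    changes the count by at most 1), so Laplace(1/epsilon) noise is
    calibrated correctly.
    """
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

ages = [23, 35, 41, 29, 52, 60, 19]
noisy = dp_count(ages, lambda a: a >= 40, epsilon=0.5)
print(noisy)  # fluctuates around the true count of 3
```

Smaller epsilon gives stronger privacy but noisier answers; the released value remains useful in aggregate while masking any individual's contribution.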
Artificial intelligence (AI) has the potential to significantly impact employment, social equity, and economic systems in ways that require careful ethical analysis and aggressive legislative measures to mitigate negative consequences. This means that the implications of AI in different industries, such as healthcare, finance, and transportation, must be carefully considered.
Due to the global nature of AI technology, global collaboration must be fostered to establish standards and regulatory frameworks that transcend national boundaries. This includes the establishment of ethical guidelines that AI researchers and developers worldwide should follow.
To address emergent ethical concerns with AI, future research must focus on several recommendations. Firstly, ethical considerations must be integrated into the design phase of AI systems and not treated as an afterthought. This is known as "Ethics by Design" and involves incorporating ethical standards during the development phase of AI systems to ensure that the technology aligns with ethical principles.
Secondly, interdisciplinary research that combines AI, ethics, law, social science, and other relevant domains should be promoted to produce well-rounded solutions to ethical dilemmas. This requires the participation of experts from different fields to identify and address ethical issues.
Thirdly, regulatory frameworks must be dynamic and adaptive to keep pace with the rapid evolution of AI technologies. This means that regulatory frameworks must be flexible enough to accommodate changes in AI technology while ensuring ethical standards are maintained.
Fourthly, empirical research should be conducted to understand the real-world implications of AI systems on individuals and society, which can then inform ethical principles and policies. This means that empirical data must be collected to understand how AI affects people in different contexts.
Finally, risk assessment procedures should be improved to better analyse the ethical hazards associated with AI applications.
What regulation for Artificial Intelligence? - Nozha Boujemaa
Should we regulate Artificial Intelligence? What are the challenges to face bias in data and algorithms? What is trustworthy AI? AI HLEG (European Commission) and AIGO (OECD) feedback experiences and recommendations. Example in precision medicine: AI/ML for medical devices
Measuring effectiveness of machine learning systems - Amit Sharma
Many online systems, such as recommender systems or ad systems, are increasingly being used in societally critical domains such as education, healthcare, finance and governance. A natural question to ask is about their effectiveness, which is often measured using observational metrics. However, these metrics hide cause-and-effect processes between these systems, people's behavior and outcomes. I will present a causal framework that allows us to tackle questions about the effects of algorithmic systems and demonstrate its usage through evaluation of Amazon's recommender system and a major search engine. I will also discuss how such evaluations can lead to metrics for designing better systems.
Shift AI 2020: How to identify and treat biases in ML Models | Navdeep Sharma... - Shift Conference
Shift AI was a success, connecting hundreds of professionals that were eager to propel the progress of AI and discuss the newest technologies in data mining, machine learning and neural networks. More at https://ai.shiftconf.co/.
Talk description:
With all the breakthroughs in the machine learning space, ML models are now being used, more than ever, to make decisions affecting the lives of humans. Hence the quality of a model can no longer be judged by accuracy, precision and recall alone. It is important to ensure that each individual and group of people is treated equitably, without any historical bias present in the data. This talk focuses on some of the many potential ways to establish fairness metrics for ML models in your organization, along with the learnings and challenges I encountered while building a fairness tool for data scientists and business stakeholders.
Demo: Algorithmic Fairness Tool (AFT) was an innovation project, done at Accenture The Dock, which focused on bringing the latest research from academia and building a tool for the industry.
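As a concrete illustration of the kind of fairness metric the talk describes, the sketch below computes the demographic parity difference between two groups. The function, data and group labels are illustrative assumptions, not part of the Accenture tool.

```python
def demographic_parity_difference(predictions, groups):
    """Gap in positive-prediction rates across groups.

    predictions: list of 0/1 model outputs
    groups: parallel list of group labels (e.g. "A"/"B")
    A value of 0 means all groups receive positive predictions
    at the same rate.
    """
    rates = {}
    for g in set(groups):
        preds_g = [p for p, grp in zip(predictions, groups) if grp == g]
        rates[g] = sum(preds_g) / len(preds_g)
    values = sorted(rates.values())
    return values[-1] - values[0]

preds = [1, 0, 1, 1, 0, 0, 1, 0]
grps = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_difference(preds, grps))  # 0.5 (A: 3/4, B: 1/4)
```

Demographic parity is only one of several competing fairness definitions; a production tool would typically report it alongside others (e.g. equalized odds) because they cannot all be satisfied at once.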
AI Governance and Ethics - Industry Standards - Ansgar Koene
Presentation on the potential for ethics-based industry standards to act as a vehicle for addressing socio-technical challenges from AI.
Presentation given at the 1st Austrian IFIP forum on "AI and future society".
Generative AI: Responsible Path Forward, a presentation given during the DataHour webinar series by Analytics Vidhya and attended by more than a hundred data scientists and AI experts from around the world. The presentation addresses the importance of AI ethics and the development of responsible AI governance at tech firms to help mitigate AI risks and ethical issues.
e-SIDES workshop at BDV Meet-Up, Sofia 14/05/2018e-SIDES.eu
The following presentation was given at the workshop "Technology solutions for privacy issues: what is the best way forward?" organized by e-SIDES at the BDVe Meet-up in Sofia on May 14, 2018. The workshop, chaired by Gabriella Cattaneo from IDC, involved stakeholders from ICT-18 projects.
Recommender Systems and Misinformation: The Problem or the Solution? - Alejandro Bellogin
Presentation at Workshop on Online Misinformation- and Harm-Aware Recommender Systems co-located with the 14th ACM Conference on Recommender Systems (RecSys 2020).
Tutorial, Learning Analytics Summer Institute, Ann Arbor, June 2017
As algorithms pervade societal life, they’re moving from an arcane topic reserved for computer scientists and mathematicians, to the object of far wider academic and mainstream media attention (try a web news search on algorithms, and then add ethics). As agencies delegate machines with increasing powers to make judgements about complex human qualities such as ’employability’, ‘credit worthiness’, or ‘likelihood of committing a crime’, we are confronted by the challenge of “governing algorithms”, lest they turn into Weapons of Math Destruction. But in what senses are they opaque, and to whom? And what is meant by “accountable”?
The education sector is clearly not immune from these questions, and it falls to the Learning Analytics community to convene a vigorous debate, and devise good responses. In this tutorial, I’ll set the scene, and then propose a set of lenses that we can bring to bear on a learning analytics infrastructure, to identify some of the meanings that “accountability” might have. It turns out that algorithmic transparency and accountability may be the wrong focus — or rather, just one piece of the jigsaw. Intriguingly, even if you can look inside the algorithmic ‘black box’, which is imagined to lie in the system’s code, there may be little of use there. I propose that a human-centred informatics approach offers a more holistic framing, where the aggregate quality we are after might be termed Analytic System Integrity. I’ll work through a couple of examples as a form of ‘audit’, to show where one can identify weaknesses and opportunities, and consider the implications for how we conceive and design learning analytics that are responsive to the questions that society will rightly be asking.
Similar to Human Agency on Algorithmic Systems (20)
A koene governance_framework_algorithmicaccountabilitytransparency_october2018 - Ansgar Koene
Presentation outlining key finding of European Parliament Science Technology Options Assessment report on "A governance framework for algorithmic accountability and transparency", presented at the European Parliament on October 25th 2018.
A koene Rebooting The Expert Pecha Kucha 2017 - Ansgar Koene
Pecha Kucha presentation as part of the "Rebooting The Expert, Routes to Policy Impact" workshop event, organized by the University of Nottingham Policy Research Impact Network and held in July 2017
Introduction talk at the University of Strathclyde (Scotland) Algorithms Workshop, providing a quick overview of the fundamental and practical reasons why algorithms are/are not technical black boxes. (This talk does not address issues of trade secret or other business reasons for lack of transparency). The presentation was given to an audience of academics and students at the law department.
In order to explore public attitudes towards the use of data from online services (e.g. social media) or digital devices (e.g. mobile phone GPS), we are running a Twitter-based campaign (#AnalyzeMyData) in which we remind people of instances of data usage that have been reported in news stories and ask them to rate whether they consider these data uses to be OK. To build momentum of public participation we designed the experiment as a sustained campaign in which a different news item is presented each day over a period of multiple weeks. Each Tweet includes a link to a mini-survey which asks participants to respond 'yes', 'no' or 'depends'. To further motivate continued participation as the campaign progresses, we provide a running update on our website of the response statistics for the items that were previously Tweeted. The types of data usage included in the campaign range from academic studies of social media use, to data collection for product development, marketing and government studies. Our hope is that this campaign/experiment will 1) help to raise awareness of the various ways in which personal data, acquired through online services or digital devices, is currently being used, and 2) provide a large dataset of case studies with an associated baseline of public acceptance/rejection that can be used for future research ethics guidelines and review training.
APNIC Foundation, presented by Ellisha Heppner at the PNG DNS Forum 2024APNIC
Ellisha Heppner, Grant Management Lead, presented an update on APNIC Foundation to the PNG DNS Forum held from 6 to 10 May, 2024 in Port Moresby, Papua New Guinea.
1. Human Agency on Algorithmic Systems
ANSGAR KOENE & ELVIRA PEREZ VALLEJOS, UNIVERSITY OF NOTTINGHAM
HELENA WEBB & MENISHA PATEL, UNIVERSITY OF OXFORD
AOIR 2017
7. Personalized recommendations
Content based – similarity to past results the user liked
Collaborative – results that similar users liked (people with statistically similar tastes/interests)
Community based – results that people in the same social network liked (people who are linked on a social network, e.g. 'friends')
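The three strategies on the slide can be sketched as follows; the users, items, tags and overlap-based similarity below are invented for illustration and are not from the presentation.

```python
# Toy catalogue: items with descriptive tags, users with liked items,
# and explicit social-network links (all made-up example data).
item_tags = {
    "jazz": {"instrumental", "improv"},
    "blues": {"instrumental", "vocals"},
    "funk": {"instrumental", "improv", "groove"},
    "pop": {"vocals", "melody"},
    "rock": {"vocals", "guitar"},
}
users_likes = {
    "alice": {"jazz", "blues"},
    "bob": {"jazz", "blues", "funk"},
    "carol": {"pop", "rock"},
}
friends = {"alice": {"carol"}}  # who is linked to whom on the social network

def content_based(user):
    """Recommend the unseen item whose tags best overlap items the user liked."""
    liked_tags = set().union(*(item_tags[i] for i in users_likes[user]))
    unseen = set(item_tags) - users_likes[user]
    return max(unseen, key=lambda i: len(item_tags[i] & liked_tags))

def collaborative(user):
    """Recommend what the user with the most similar tastes also liked."""
    others = {u: likes for u, likes in users_likes.items() if u != user}
    peer = max(others, key=lambda u: len(others[u] & users_likes[user]))
    return others[peer] - users_likes[user]

def community_based(user):
    """Recommend what people linked to the user on the social network liked."""
    linked = set().union(*(users_likes[f] for f in friends.get(user, set())))
    return linked - users_likes[user]

print(content_based("alice"))    # funk (shares 'instrumental' and 'improv')
print(collaborative("alice"))    # {'funk'} (bob overlaps on jazz and blues)
print(community_based("alice"))  # {'pop', 'rock'} (carol is a linked friend)
```

Note how the same user gets different recommendations depending on the strategy, which is exactly why the choice of mediation algorithm is an editorial decision.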
10. User understanding of social media algorithms: Facebook News Feed
In a study of 40 interviewed participants, more than 60% of the Facebook users were entirely unaware of any algorithmic curation on Facebook: "They believed every single story from their friends and followed pages appeared in their news feed".
Published at: CHI 2015
11. Pre-workshop survey of 96 teenagers (13-17 years old)
No preference between an internet experience that is more personalised or more 'organic': 53% more personalised, 47% more 'organic'
Lack of awareness about the way search engines rank information, but participants believe it's important for people to know:
• How much do you know? 36% not much, 58% a little, 6% quite a lot
• Do you think it's important to know? 62% yes, 16% not really, 22% don't know
Regulation role: who makes sure that the internet and digital world is safe and neutral? 4% the police, 23% nobody, 29% the government, 44% the big tech companies
12. Multi-Stakeholder Workshop
Multiple stakeholders: academia, education, NGOs, industry
30 participants
Fairness in relation to algorithmic design and practice
Four key case studies: fake news, personalisation, gaming the system, and transparency
What constitutes a fair algorithm?
What kinds of (legal and ethical) responsibilities do Internet companies have to ensure their algorithms produce results that are fair and without bias?
13. Fairness in relation to algorithmic design and practice - participant recommendations
Criteria relating to social norms and values
Criteria relating to system reliability
Criteria relating to (non-)interference with user control
14. Criteria relating to social norms and values:
(i) Sometimes disparate outcomes are acceptable if based on individual lifestyle choices over which people have control.
(ii) Ethical precautions are more important than higher accuracy.
(iii) There needs to be a balancing of individual values and socio-cultural values. Problem: how to weigh the relevant socio-cultural values?
15. Criteria relating to system reliability:
(i) Results must be balanced with due regard for trustworthiness.
(ii) Need for independent system evaluation and monitoring over time.
16. Criteria relating to (non-)interference with user control:
(i) Subjective fairness experience depends on user objectives at time of use, and therefore requires an ability to tune the data and algorithm.
(ii) Users should be able to limit data collection about them and its use. Inferred personal data is still personal data. Meaning assigned to the data must be justified towards the user.
(iii) Functioning of the algorithm should be demonstrated/explained in a way that can be understood by the data subject.
17. Criteria relating to (non-)interference with user control:
(iv) If not vital to the task, there should be an option to opt out of the algorithm.
(v) Users must have freedom to explore algorithm effects, even if this would increase the ability to "game the system".
(vi) Need for clear means of appeal/redress for impact of the algorithmic system.
21. Evaluating fairness with knowledge about the algorithm decision principles
A1: minimise disparity while guaranteeing at least 70% of maximum possible total
A2: maximise the minimum individual outcome while guaranteeing at least 70% of maximum possible total
A3: maximise total
A4: maximise the minimum individual outcome
A5: minimise disparity
[Chart: participant rankings of the algorithms, from most preferred to least preferred]
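The five allocation principles labelled A1-A5 can be expressed directly in code. The sketch below is illustrative: the candidate allocations and the reading of the 70%-of-maximum-total guarantee as a feasibility filter are assumptions, not the workshop's actual task materials.

```python
def total(alloc):
    return sum(alloc)

def disparity(alloc):
    return max(alloc) - min(alloc)

def feasible_70(allocs):
    """Allocations achieving at least 70% of the maximum possible total."""
    best = max(total(a) for a in allocs)
    return [a for a in allocs if total(a) >= 0.7 * best]

def a1(allocs):  # minimise disparity, subject to the 70% guarantee
    return min(feasible_70(allocs), key=disparity)

def a2(allocs):  # maximise the minimum outcome, subject to the 70% guarantee
    return max(feasible_70(allocs), key=min)

def a3(allocs):  # maximise total
    return max(allocs, key=total)

def a4(allocs):  # maximise the minimum individual outcome
    return max(allocs, key=min)

def a5(allocs):  # minimise disparity
    return min(allocs, key=disparity)

# Each candidate lists the outcome for three individuals.
candidates = [[5, 5, 5], [9, 3, 3], [8, 6, 2], [4, 4, 4]]
print(a3(candidates))  # [8, 6, 2]: highest total (16)
print(a5(candidates))  # [5, 5, 5]: zero disparity
```

Running all five criteria over the same candidates makes the workshop's point concrete: equally defensible notions of fairness select different allocations.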
22. Conclusion
Algorithmic mediation can play an important role in improving the usefulness of online services.
Users want more options to understand, adjust or even opt out of algorithmic mediation.
Users do not agree on a single option when choosing a 'best' algorithm for a given task.
30. E. Bakshy, S. Messing & L.A. Adamic, "Exposure to ideologically diverse news and opinion on Facebook", Science, 348, 1130-1132, 2015
Echo-chamber enhancement by the News Feed algorithm
10.1 million active US Facebook users
[Figure: proportion of content that is cross-cutting]
31. E. Bakshy, S. Messing & L.A. Adamic, "Exposure to ideologically diverse news and opinion on Facebook", Science, 348, 1130-1132, 2015
Positioning effect in the News Feed
Editor's Notes
Our first stakeholder workshop was held on February 3rd 2017, at the Digital Catapult in London.
The first workshop brought together participants from academia, education, NGOs and enterprises. We were fortunate to have 30 participants on the day, which was a great turnout.
The workshop itself focused on four case studies each chosen as it concerned a key current debate surrounding the use of algorithms and fairness.
The case studies centred around: fake news, personalisation, gaming the system, and transparency.
This WP aims to develop a methodology and the necessary IT and techniques for revealing the impact of algorithmic biases in personalisation-based platforms to non-experts (e.g. youths), and for co-developing “fairer” algorithms in close collaboration with specialists and non-expert users.
In Year 1, Sofia and Michael have been running a task that asks participants to make task allocation decisions. In a situation in which resources are limited, different algorithms might be used to determine who receives what. Participants are asked to determine which algorithm is best suited to make the allocation, and this inevitably brings up issues of fairness. Discussion reveals different models of fairness. These findings will be put towards further work on the processes of algorithm design and the possibility of developing a fair algorithm.