The document discusses a research project that aims to map values in AI governance by studying how value attributions take form in human and computational ecologies. It proposes moving beyond focusing on ideal norms and values or trying to directly understand legal and computational commands, and instead "encircling" the topic by analyzing mundane practices. The researchers argue this assemblage perspective is needed to understand the interactions that constitute systems' viability and better inform academics, practitioners, regulators and judges.
1. On mapping values in AI governance
2 December 2021, Algorithmic Law and Society symposium, HEC Paris, Paris
Geoff Gordon, Asser Institute, the Hague
Bernhard Rieder, Media Studies, University of Amsterdam
Giovanni Sileno, Informatics Institute, University of Amsterdam
2. context of this work
● RPA Human(e) AI seed grant by UvA
● project: “Mapping Value(s) in AI”
Geoff Gordon, Faculty of Law
Bernhard Rieder, Faculty of Humanities
Giovanni Sileno, Faculty of Science
3-4. focus: algorithmic decision systems
● algorithmic decision systems are increasingly used in all types of human-related activities:
○ predictive systems, recommender systems, decision-support systems…
● these systems are both objects and instruments of regulatory governance
5-6. general research question
● not what values AI should satisfy, but how values manifest in context-sensitive computational and social processes
● this paper focuses in particular on setting the theoretical groundwork underpinning and motivating a “mapping” methodology for AI governance
7. key points of our contribution
1. assemblage as method to look at techno-regulation and regulation of technology
2. material stance on law
3. connection with critical practice of AI
9-11. partial performances
● what an AI system produces can be neither defined nor observed from the AI system alone
● the performativity of law is not defined by the sources of law alone
but where do people focus their analysis the most?
12-14. example: algorithmic fairness
[diagram: an “AI black box” containing an ML algorithm; input data and input samples flow in, an output flows out. Three debiasing interventions are marked: debiasing the training data, purging the input of sensitive elements, and correcting the parameters of the neural network to debias the output]
● all these methods focus on data
● they operate just within or nearby the computational system boundaries
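The three intervention points marked on the diagram can be sketched in a few lines. This is a minimal illustration, not a real fairness toolkit: the applicant records, the linear scorer and the function names are all invented for the example.

```python
# Toy "applicants": records with one sensitive attribute and two features.
applicants = [
    {"group": "A", "income": 50, "tenure": 5},
    {"group": "A", "income": 60, "tenure": 4},
    {"group": "B", "income": 40, "tenure": 5},
    {"group": "B", "income": 45, "tenure": 3},
]
SENSITIVE = {"group"}

def purge_input(record):
    # Intervention: purge the input of sensitive elements before scoring.
    return {k: v for k, v in record.items() if k not in SENSITIVE}

def score(record):
    # Stand-in for the trained model: a fixed linear scorer.
    return 0.8 * record["income"] + 2.0 * record["tenure"]

def debias_output(scores, groups):
    # Intervention: debias the output by shifting each group's scores
    # so that every group ends up with the same mean score.
    mean_all = sum(scores) / len(scores)
    by_group = {}
    for s, g in zip(scores, groups):
        by_group.setdefault(g, []).append(s)
    offsets = {g: mean_all - sum(v) / len(v) for g, v in by_group.items()}
    return [s + offsets[g] for s, g in zip(scores, groups)]

groups = [a["group"] for a in applicants]
raw = [score(purge_input(a)) for a in applicants]  # sensitive field removed
adjusted = debias_output(raw, groups)              # group means equalised
# (the third intervention, debiasing the training data itself, would take
#  place before `score` is ever fitted, and is out of scope for this sketch)
```

Note that every intervention manipulates data at the boundary of the box; nothing here looks at the data-providers, subjects or users around it, which is precisely the limitation the next slides address.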
15. looking at the bigger picture..
[diagram: the AI black box (ML algorithm, input data, input samples, model, output) embedded in a widening network of data-providers, data-subjects, data-processors, data-users, subjects and other people]
● impact can be assessed only beyond the system’s boundaries
● need for “ecological” paradigms
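One way to make the “bigger picture” tangible is to model the ecology as a directed graph of actors and information flows: the actors bearing the impact are reachable from the system's output, yet lie outside the black box. The actor names below come from the diagram; the specific flow structure is our illustrative assumption, not a claim about any real deployment.

```python
# Toy model of the ecology around an AI system: actors and information
# flows as a directed graph. The flow structure is an illustrative
# assumption; actor names follow the slides.

SYSTEM = {"input data", "input samples", "ML algorithm", "model", "output"}

FLOWS = {
    "data-subject": ["data-provider"],
    "data-provider": ["input data"],
    "input data": ["ML algorithm"],
    "input samples": ["ML algorithm"],
    "ML algorithm": ["model"],
    "model": ["output"],
    "output": ["data-processor"],
    "data-processor": ["data-user"],
    "data-user": ["subject", "other people"],
}

def reachable(start: str) -> set[str]:
    """All nodes downstream of `start` (depth-first traversal)."""
    seen, stack = set(), [start]
    while stack:
        for nxt in FLOWS.get(stack.pop(), []):
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

# everyone touched by the output, minus the system itself: the system's
# impact is visible only beyond its boundaries
impacted = reachable("output") - SYSTEM
```

The point of the sketch is only that `impacted` is non-empty and entirely outside `SYSTEM`, which is why “ecological” paradigms are needed for impact assessment.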
21. from totality to assemblage
in doing so, we changed the framing:
● totality (e.g. monolithic computational module, individual): components defined by relations of interiority
● assemblage (e.g. socio-technical distributed system, social context): components defined by relations of exteriority
22. socio-technical assemblage
● law is not defined outside the assemblage, but within it
● similarly, AI is not defined outside the assemblage
law and AI reside within the same assemblage
23. socio-technical assemblage: concept usage
consider a lawyer trying to attack a recommendation produced by AI (e.g. a parole decision, a credit rating, inclusion on a blacklist, a stop at the border, ...).
possible options:
● focusing on the AI output alone
● focusing on the code and training of the AI
● tracing the series of interactions in which technical operations and normative judgments are translated back and forth, from point to point
27. against bifurcation
● in the code-is-law tradition, law is materialized in code
● code must be two things at once:
○ functioning code, and
○ a representative of the law.
this bifurcation opens space for misrecognition:
● the policy-maker may ‘not get’ the code;
● the code may ‘get the law wrong’.
but the law is nowhere completely determined: it stands in tension with the code, a proxy of a normative end.
30. example: the SyRI case
● SyRI has been developed by the Dutch Ministry of Social Affairs and Employment since 2014, and was designed for end use by a variety of national agencies (e.g., the tax authority, the authority responsible for employment benefits) and municipalities.
typical application: producing risk warnings signaling potential fraud in individual applications for social services.
31. SyRI in court (2020)
● the SyRI technology was recently found contrary to Article 8 of the European Convention on Human Rights, which broadly protects the right to respect for private and family life, home and correspondence.
● the final judgment of the court
○ did not definitively weigh the competing legal interests, e.g. between privacy and the administration of social services (problematic, given the indeterminate nature of law)
○ centred on the failure of the government to offer any meaningful explanation of the technology, let alone its limits (problematic, given the complexity of technology).
(adequate) transparency is a crucial requirement, but we plausibly need other modes of knowing and assessing the interplay of law and technology
34. methodological standpoint
● rather than focusing attention on the ideal norms or values applicable to AI generally, we look at the assemblage and the interactions that constitute its viability and operation
● as a communicative practice that is materially situated, law coordinates horizons of material expectations among networked participants (including expectations of and among objects and things).
these coordinating horizons function as material affordances.
36. affordances
● an affordance is an opportunity for action: a behaviour of the agent that an environment (object) can “afford”.
● institutional affordances: e.g. the medieval port of Genoa, flourishing with the introduction of insurance, contract options and other mechanisms of risk management.
the “reality” of law derives precisely from legal affordances defined and operating with other parts of the assemblage!
40. connection with critical practice of AI
● our methodology is in line with the critical practice of AI called for by Philip Agre.
● most AI research and development is centred on a single question: does a proposed alternative solution work better?
● as Agre makes clear, this presupposes a prior question, namely: what does it work for?
42. central issue
● the legal command is indeterminate: most people work on trying to disambiguate it
● the computational command is complex: most people work on trying to make it understandable
(suppose that) for all the reasons argued above, we renounce approaching these commands directly.
44. knowing by “encircling”
● encircling is a research technique recently proposed in security studies (De Goede, Bosma), developed to deal with problems of secrecy.
● the method ‘is less focused on uncovering the kernel of the secret, than it is on analysing the mundane lifeworlds of security practices and practitioners that are powerfully structured through codes and rites of secrecy.’
46. “mapping” values via encircling
● our goal: studying how value attributions denoting relative worth, merit, or importance take form in ecologies of human and computational agents.
we cannot see “values”, but we can see how people/AI deal with them
● possible axes (parallel work):
1. ambient technical knowledge
2. local design conditions
3. materialized values
50. Wrapping up
● we are not arguing against rights-based and rule-of-law programs; rather, we elaborate on their limits
● new research programs can be designed that go beyond such limits, complementing the standard programs.
51. Perspectives
possible uses of the proposed framework:
● academics: a wider array of interactions among incentives, pressures, materialities and routines for analysis
● AI practitioners: reflective standpoints relevant for the design, development and deployment phases
● legal practitioners: a wider number of sites to contest
● regulators and judges: a fuller perspective on the normative conditions and stakes at play in any given outcome.
52. On mapping values
in AI governance
2 December 2021, Algorithmic Law and Society symposium, HEC Paris, Paris
Geoff Gordon, Asser Institute, the Hague
Bernhard Rieder, Media Studies, University of Amsterdam
Giovanni Sileno, Informatics Institute, University of Amsterdam