5. Areas of attention
Domain/Industry Specific
• Military uses
• Fintech
• Autonomous vehicles
• Drones
General
• Privacy
• Bias
• Liability
• Registration
• Ownership
• Artificial personhood
• Taxation
6. Plans, reports, and research
• China
– A Next Generation Artificial Intelligence Development Plan (2017)
• Japan
– Pushed for rules on AI at G7 2016
– Developed Draft AI R&D Guidelines to be discussed with OECD and G7
• South Korea
– Developing a Robot Ethics Charter (2007)
• US
– "it's not even on our radar screen…” - Treasury Secretary Steve Mnuchin
– The Partnership on AI
• Industry consortium focused on establishing best practices for artificial intelligence systems
– The Allen Institute for Artificial Intelligence
• Developed three rules for regulating AI
7. Plans, reports, and research cont.
• Estonia
– Considering giving robots legal recognition, which would place 'robot
agents' somewhere between a legal entity and owned property.
• Germany
– The German Government released a code of ethics for autonomous vehicles in
August 2017.
• UK
– In November 2017, the House of Commons announced an inquiry into the use
of algorithms in public and business decision making.
• EU
– EU Parliament adopted a report with recommendations on Civil Law Rules on
Robotics in February 2017
8. Currently enacted legislation
• EU
– General Data Protection Regulation (GDPR) (25
May 2018)
• UK
– Data Protection Act 2018 (23 May 2018)
• GDPR equivalent
9. GDPR in detail
• Right to know of existence of algorithms and
right to obtain an explanation of decisions
made (Articles 13-15)
• Right to opt out of some algorithmic decisions
altogether (Article 22)
10. GDPR in detail - cont.
• Regulation – similar to a national law, but
applies to all EU countries
• Scope is global – applies to any company
processing EU residents' data, regardless of
where the data is processed
• Penalties – €20 million or 4% of global
revenue, whichever is greater
11. Potential impact of GDPR
• Data availability
• Predictive risk modeling
• Credit and insurance risk assessments
• Recommender systems
• Computational advertising
• Social networks
12. The story for NZ
• GDPR impacts due to EU resident users
• NZ Stats – Principles for Safe and Effective Data
Analysis
• Data.gov.nz – Reviewing algorithms to
increase transparency and accountability
• Centre for Artificial Intelligence and Public
Policy at Otago University
15. Some sources
• AI Forum report
– Artificial Intelligence: Shaping a Future New Zealand
– http://resources.aiforum.org.nz/AI+Shaping+A+Future+New+Zealand+Report+2018.pdf
• Report to EU Parliament
– Report with recommendations to the Commission on Civil Law
Rules on Robotics (2015/2103(INL))
– http://www.europarl.europa.eu/sides/getDoc.do?pubRef=-//EP//TEXT+REPORT+A8-
2017-0005+0+DOC+XML+V0//EN
• GDPR analysis paper
– Goodman, B. and Flaxman, S. (2017). European Union regulations
on algorithmic decision-making and a "right to explanation".
AI Magazine, Vol. 38, No. 3, 2017
– https://arxiv.org/abs/1606.08813
16. Further reading and contacts
• Artificial Intelligence and Law in New Zealand – Otago
University
– http://www.cs.otago.ac.nz/research/ai/AI-Law/
• NZ Stats
– Principles for the safe and effective use of data and analytics
– https://www.privacy.org.nz/news-and-publications/guidance-resources/principles-for-the-safe-and-
effective-use-of-data-and-analytics-guidance/
• Data.gov.nz
– Algorithm review underway to increase transparency and
accountability
– https://data.govt.nz/blog/algorithm-review-underway-to-increase-transparency-and-accountability/
• AI mailing list
– Import AI
– https://jack-clark.net/
17. 3 rules of AI
• 1. An AI system must be subject to the full
gamut of laws that apply to its human
operator.
• 2. An AI system must clearly disclose that it is
not human.
• 3. An AI system cannot retain or disclose
confidential information without explicit
approval from the source of that information.
Editor's Notes
Thank you for that. This topic has been pretty interesting to explore and read about. My focus with this talk is to have a broad overview of the kind of work being done around regulating AI and Machine Learning internationally and here in NZ.
But first, why regulate? Are we really talking about trying to protect ourselves from killer robots bent on humanity's destruction?
Well maybe a bit.
However, these days there are much more mundane areas of our lives where autonomous systems are starting to make an impact: things like insurance claims, loan applications, or even the news we read. But these systems are well designed and built on reliable data, right? And even if not, aren't their designers trying to make them better? How do we know?
Let’s look at an example that may strain these assumptions.
There was a case in the US a couple of years ago where a judge rejected a plea deal and sentenced the defendant to prison time plus extended supervision, citing, in part, the risk score prepared by the COMPAS assessment.
This is a model provided by Northpointe, a private company, to evaluate the risk of a person reoffending.
The thing is, the workings of this model were not provided to the court or the defense team, and the defendant, Loomis, ended up appealing the sentence on that basis.
Now here’s where it gets worrying.
The state of Wisconsin countered that Northpointe required it to keep the algorithms confidential, to protect the firm’s intellectual property.
And the Wisconsin Supreme Court upheld Loomis’s sentence, reasoning that the risk assessment was only one part of the rationale for the sentence.
It wanted to continue to give judges the opportunity to take into account the COMPAS score as one part of their sentencing rationale, even if they had no idea how it was calculated.
So, this is an algorithm developed by a private company that seems to materially impact people, but isn't available to be examined for bias or appropriate design.
That's just one of the types of situations that have prompted governments to look at regulating how AI and machine learning are used.
But bias and fairness are not the only areas of concern when looking at regulating autonomous systems.
Countries are looking at a lot of different topics.
In my research it seems they are generally taking two kinds of approaches. One is focused on domain- or industry-specific regulation.
So for example, limitations on autonomous military systems, how driverless cars fit into road rules, or even the pretty simple things we've seen about limiting the operation of drones. But several countries are also starting to take a more general approach, and this is where there are quite interesting discussions. Overall, the consensus is that liability is the main focus for near-term issues. For example, if an autonomous system takes actions that cause damage, who would be responsible for remediation? Would it be the owner, the designer, the system itself? This is where concepts like artificial personhood start to be used. That is, treating a system as its own entity, somewhere between an organization and a person.
Beyond that, they also discuss the effects of more and more AI and robots being used in business. As more people are replaced by robots, it will likely start to erode the tax base available for social programs, especially as those programs become more needed.
Some of these end up sounding like writing prompts for scifi stories, but they are becoming more realistic topics of discussions for people in government around the world.
So let’s have a look at how some of them are working to address them.
---
The UN has held its fifth conference on Lethal Autonomous Weapons.
I've had a look around and it does seem like most countries are only in the very early stages of discussions, and for some I couldn't find much activity at all. Let's have a look at Asia first.
China is actively focused on developing its AI capability and released its plan for becoming a leader in AI. That plan does mention the need for regulation and guidelines, but it doesn't have much detail on those topics.
Japan presented a proposal for rules governing AI at the 2017 G7 meeting, and I expect that to continue to be discussed, but it doesn't seem to be leading to regulation in the near term.
South Korea started developing an ethics charter for robotics in 2007, but there hasn't been anything new written about it since then.
The US is an interesting case in that the new administration has effectively stepped back from its focus on AI, let alone any regulation of it.
The quote from Steve Mnuchin relates to AI displacing people in the workforce, but seems to show their overall lack of interest in the space.
There is work being done in private organizations like the Partnership on AI and the Allen Institute for Artificial Intelligence, which has developed its own three rules of AI.
But it does seem like most of the activity is centered in Europe.
In Estonia, the government has signalled its intent to be a leader in legislation on AI and is considering creating a legal class for "robot agents" that would sit between a legal entity and property.
Germany released a code of ethics for autonomous vehicles in August 2017, and the government will adopt guidelines for self-driving cars covering things like prioritizing the value and equality of human life over property or animals. The UK has also had a House of Commons inquiry into the use of algorithms in public and business decision making.
The EU parliament actually put out a really interesting report that discussed a lot of the implications and topics I mentioned already.
So, everything I've covered up to now has been position papers, reports, analysis and that kind of thing. But has anything actually been enacted so far?
Well, I've only been able to find one piece of legislation that actually impacts how machine learning and AI are able to be used, and that was the GDPR, which took effect in the EU earlier this year.
It’s the same one that caused all the trouble for the companies relying on subscriptions and mailing lists.
The GDPR contains a couple of articles that specifically set out the rights EU residents have in relation to algorithms and the decisions made about them.
The UK has also mostly adopted the rules set out in the GDPR.
Now, let’s have a look at what the GDPR actually talks about.
There are a couple of articles that directly discuss algorithms.
Articles 13-15, which cover data collected about people, contain clauses requiring a company to notify those people of any algorithms used for automated decision making and how they work.
This has become known as the "right to explanation". There is currently an active debate in the legal literature about what this actually means and what is required of companies, especially those that use more advanced machine learning algorithms.
The other important article is 22. This one allows an EU resident to refuse to be subject to automated decision making, although there are exemptions for explicit consent and where it's required by a government. But even in those cases, the person has the right to request human intervention in the decision and the ability to contest the outcome.
So, what are the implications of these rules?
-----------------
The intrinsic value of explanations tracks a person's need for free will and control, most familiarly expressed in the desire to avoid living out the plot of a Franz Kafka novel.
First off, the GDPR is a regulation, so it behaves at the level of a national law for all member countries. But what's interesting is that it applies to any company processing EU residents' data, regardless of where the data is stored or processed. Because of this, even though it's an EU law, the implications are global. And the penalties are quite steep too: up to €20 million or 4% of global revenue, whichever is greater. You can imagine what kind of impact this would make to a company like Facebook.
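To make the "whichever is greater" rule concrete, here is a minimal arithmetic sketch. The €20 million and 4% figures come from the slides; the revenue numbers are hypothetical, and this is an upper bound on the fine scale, not legal advice.

```python
def max_gdpr_penalty(global_revenue_eur: float) -> float:
    """Upper bound on a GDPR fine: the greater of EUR 20 million
    or 4% of global annual revenue (figures from the slides;
    illustrative only, not legal advice)."""
    return max(20_000_000.0, 0.04 * global_revenue_eur)

# A firm with EUR 100M global revenue: 4% is only EUR 4M,
# so the flat EUR 20M ceiling dominates.
print(max_gdpr_penalty(100_000_000))   # 20000000.0

# A firm with EUR 2bn global revenue: the 4% figure dominates
# (roughly EUR 80 million).
print(max_gdpr_penalty(2_000_000_000))
```

The crossover sits at €500 million in revenue; below that, the flat €20 million figure is the binding ceiling.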
And some people are indeed worried that it will have a major impact on the work being done in companies. The restrictions set out by those articles and other areas of the GDPR could have quite an effect on areas where automated decision systems are being used today.
These can be found in many of the applications and services we use on a daily basis and are becoming mission critical for some companies and governments.
But let's bring it back here to NZ. What kinds of things are going on here? Well, first off we have the impact of the GDPR on companies dealing with data of EU residents, and because NZ companies do tend to operate internationally, there could be info in their datasets that would make them responsible for adhering to those regulations.
There is also work locally to look at how government uses data and algorithms to make decisions.
NZ Stats, along with the Privacy Commissioner and the Government Chief Data Steward, has recently released a document about their recommended principles for data analysis. Data.gov.nz has also begun a review of how algorithms are being used across government, and the first findings should be published this month.
Through my discussions, I haven't found any laws that have been proposed yet, but last year Otago University started a three-year project to look at how to approach the question of regulating AI.
With regulations, there is always the concern of stifling innovation. And that could happen to an extent.
But there are also opportunities that may open up because of the new rules.
Since a large focus is on explainability and bias, it could spur research into the explainability of algorithms as well as bias detection and mitigation.
And it seems this would be a good thing, to make sure that the models we use are as fair and appropriate as we expect them to be.
Compliance with this regulation will depend on progress in Explainable AI research, and on uptake of Explainable AI techniques.