Challenge the algorithm
https://www.newstatesman.com/spotlight/emerging-technologies/2018/06/government-ai-project-has-already-begun
https://www.leagle.com/decision/infdco20170530802
https://casetext.com/case/hou-fed-tchrs-lcl-2415-v-hou-indep-sch-dist
https://arktimes.com/arkansas-blog/2017/01/27/legal-aid-sues-dhs-again-over-algorithm-denial-of-benefits-to-disabled-update-with-dhs-comment
Eppink said the experts they hired found big problems with
what the state Medicaid program was doing:
The data used to create the formula for setting assistance limits was corrupt.
Historical data was used to predict the future.
Two-thirds of the records were thrown away before the rules were created.
WHY?
Data entry errors and data that didn't make sense.
Bad data produces bad results.
https://www.aclu.org/blog/privacy-technology/pitfalls-artificial-intelligence-decisionmaking-highlighted-idaho-aclu-case
https://independentaustralia.net/politics/politics-display/the-centrelink-robo-debt-debacle-has-only-just-begun,9951
https://www.theguardian.com/australia-news/2019/jun/12/centrelink-robodebt-scheme-faces-second-legal-challenge
https://www.theguardian.com/australia-news/2019/feb/06/robodebt-faces-landmark-legal-challenge-over-crude-income-calculations
https://www.businessinsider.com.au/federal-budget-australia-deficit-surplus-2018-5
 
The push for automated systems has made the
vulnerable MORE vulnerable.
The adoption of cost-saving measures is designed to
target populations deemed to be the
"Most Expensive", which includes the most
politically, socially and economically marginalised
people.
https://ainowinstitute.org/litigatingalgorithms.pdf
Litigating Algorithms: Challenging Government Use of
Algorithmic Decision Systems
Insist that Government Agencies have a
“Speak to a Human” option
https://www.ntsb.gov/investigations/AccidentReports/Reports/HWY18MH010-prelim.pdf
Software will eat the?
Machine Learning ate my homework


Editor's Notes

  • #2 You wake up. The room is spinning very gently around your head. Or at least it would be if you could see it, which you can't. You stumble out of bed to find the light switch and realize your homework essay, "History of GNU/Linux", has been deleted.
  • #3 Alexa determined the essay was "radical literature" and quarantined it under "Needs Further Review". The algorithms have informed you that you need to submit a new essay by lunch.
  • #4 What do you do? Accept your fate and spend the next four hours in the library frantically writing a new essay, removing any mention of "GNU" or "RMS"? Or head to the university and lodge a challenge against the system, requesting the validation algorithm and an explanation of why your work is banned?
  • #5 You decide to challenge the algorithm. You proceed to the university administration centre. You wait nervously in line, only to see that the staff have been replaced by touch screens. The computer rejects your complaint and reminds you that you have until 12pm to submit your essay.
  • #6 You find out the administration staff have been replaced with AI-driven assistant touch screens. This decision was made because the new AI evaluation system for university staff pay determined the administration staff to be the lowest performing and highest cost. The only avenue for complaint is the touch screen. You look at the clock as it strikes 12pm. You didn't submit your essay; you just failed your unit. Go back to page 46.
  • #7 The future is now. In the past three years governments have been ramping up their use of AI; the future is here, and so are the legal challenges. When machines make decisions, who is checking the machine? Who is validating the numbers? How do you challenge an algorithm if you know it has made the wrong decision? Let's look at some recent legal challenges.
  • #8 Houston Federation of Teachers v. Houston Independent School District: the public school teachers' union challenged the use of proprietary algorithms in school employment practices.
  • #9 The Medicaid program, which helps subsidize medical costs for people with low incomes, uses software to assess a person's background and decide what they are entitled to. In the worst cases, faulty AI decisions "terminated benefits and services to individuals with intellectual, developmental, and physical disabilities." For example, in Arkansas, algorithmic systems failed to cater for cerebral palsy or diabetes patients looking for home-based health care options.
  • #10 There were a lot of things wrong with it. First of all, the data they used to come up with their formula for setting people's assistance limits was corrupt. They were using historical data to predict what was going to happen in the future. But they had to throw out two-thirds of the records they had before they came up with the formula, because of data entry errors and data that didn't make sense. So they were supposedly predicting what this population was going to need, but the historical data they were using was flawed, and they were only able to use a small subset of it. And bad data produces bad results. (A toy illustration of this failure mode appears after these notes.)
  • #11 The Australian government automated the determination of welfare recipient fraud, relying on historical data, and the scheme has become known as "RoboDebt". So far 29,888 debts have been reduced, 14,621 wiped to zero, and 26,104 "waived or written off permanently". That represents 17% – or about one in six – of the total debts raised so far. (A simplified sketch of the income-averaging calculation behind these debts also appears after these notes.)
  • #12 There have been numerous legal challenges; however, it seems that prior to court action, Centrelink, the department that owns RoboDebt, issues a new assessment stating the debt is no longer owed, voiding the court case. What does this mean? No precedent has yet been set in Australia! The program has not been cancelled, is still in operation, and has had a huge impact on the citizen psyche.
  • #13 Automating simple regulatory outcomes seems harmless? It saves a lot of money! Regulation can be easily broken down into rules? It saves a lot of money! It directly impacts the quality of citizens' lives? It saves a lot of money!
  • #14 When countries use AI technology to make what are seen as basic administrative decisions, these decisions can have huge impacts on citizens' lives and privacy. Litigating Algorithms: Challenging Government Use of Algorithmic Decision Systems, an AI Now Institute report in collaboration with the Center on Race, Inequality, and the Law and the Electronic Frontier Foundation, September 2018.
  • #15 Of the 2,030 people who died after receiving a robo-debt notice, some 663 were officially classified by the system as "vulnerable". 776 of the 2,030 recorded deaths were people aged 45 and under, and a seriously worrying 429 were under the age of 35. The 663* classified as "vulnerable" were people for whom the DHS had recorded a history of issues like mental illness, drug use, or domestic violence. *That figure only covers those the Centrelink system saw fit to classify as vulnerable, a title the system makes very difficult to obtain.
  • #16 Put humans at the centre. We need a "safety button", an "inject a human" button. Start from the most vulnerable: minority groups, people with disability. Demand policies and actions that enable citizens to argue against the maths. Argue for open source licensing of rules-algorithm software: not just the software framework but the inputs that produce the outputs. Make validation criteria accessible to citizens and lawyers. Push for policies that allow citizens access to the rules code that made the decision.
  • #17 How can we validate AI decisions? How can we measure the impacts on citizens? How can we measure the level of social duress caused? Does automating rules make them easier to understand, or does it black-box them further?
  • #18 The Uber self-driving vehicle was travelling at 43 mph. Initially the car identified the pedestrian as an unknown object, then as a vehicle, then as a bicycle. It also displayed varying predictions of the pedestrian's future travel path. 1.3 seconds prior to impact, the system determined that an emergency braking manoeuvre was needed to mitigate a collision.
  • #19 Uber then said that under self-driving control, emergency braking is not meant to happen, as it is supposed to be initiated by the "operator". Uber later admitted that the system currently does not alert the operator. The operator did initiate braking, estimated to be about one second after the collision. When we stop challenging the world around us, when we let authorities take a "we know better" approach, we place ourselves in a situation where we are relying on a safety button that was never designed to work, and we all know where that has led us in the past.
  • #20 Software isn't eating the world; we are drowning in software, most of it mediocre, duplicative, and bad. The code we write is not perfect. It is as susceptible to bias and failure as those of us who write it, and therefore any machine decision is also susceptible to failure and bias.
  • #21 First the algorithm came for those on welfare, but I was not on welfare. Then it came for the teachers' salaries, but I was not a teacher. Then it took away people's health care, but I had private insurance. Soon the algorithm will come for me, and what will I do? Read up on what countries are doing to litigate algorithms. Seek to understand which decisions in your life are being made by an algorithm. Educate yourself on your local laws and how they can be used if a decision made by an algorithm has harmed you. It is your right to understand how the algorithm made that decision; we need an audit trail for how machines make decisions. It's our choice to start ensuring a more transparent future, or to let the computer say "No".
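
On note #10: the sketch below is a toy illustration of how throwing away two-thirds of a flawed historical dataset before fitting a budget formula can skew the resulting assistance limits. It is not Idaho's actual data or formula; every field, validation rule and dollar figure is invented, on the assumption that the records most likely to fail validation are not discarded at random.

```python
import random
import statistics

random.seed(1)

# Simulated historical care records: (annual cost of care, passes_validation flag).
# Assumption for illustration: records for high-needs recipients are messier,
# so they fail data-entry validation more often and get thrown out.
records = []
for _ in range(3000):
    high_needs = random.random() < 0.4
    cost = random.gauss(90_000, 15_000) if high_needs else random.gauss(35_000, 8_000)
    passes_validation = random.random() > (0.85 if high_needs else 0.55)
    records.append((cost, passes_validation))

kept = [cost for cost, ok in records if ok]
print(f"Records kept after cleaning: {len(kept)} of {len(records)}")  # roughly one third survive

average_need_everyone = statistics.mean(cost for cost, _ in records)
fitted_limit = statistics.mean(kept)  # the "formula": limits set from the surviving subset

print(f"Average annual need across all records: ${average_need_everyone:,.0f}")
print(f"Assistance limit fitted on the kept subset: ${fitted_limit:,.0f}")
# The fitted limit undershoots real need because the discarded two-thirds
# were not random noise: the costliest, highest-need cases were lost first.
```

With these invented numbers, roughly a third of the records survive cleaning and the fitted limit lands well below the average need across everyone, which is the "bad data produces bad results" point in the quote.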
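On notes #11 and #12: the Guardian reporting linked above describes the "crude income calculations" behind RoboDebt as a form of income averaging, where an annual income figure from the tax office is spread evenly across 26 fortnights and compared against fortnightly entitlements. The sketch below is a minimal, hypothetical illustration of that style of calculation; the payment rate, income-free area and taper are made up and are not Centrelink's actual parameters.

```python
# All payment rates, thresholds and taper values below are invented for
# illustration; they are not Centrelink's actual parameters.
FORTNIGHTS_PER_YEAR = 26
BASE_PAYMENT = 550.0        # hypothetical fortnightly payment
INCOME_FREE_AREA = 174.0    # hypothetical income a recipient may earn per fortnight
TAPER_RATE = 0.50           # hypothetical reduction per dollar earned over that amount

def fortnightly_entitlement(income_that_fortnight: float) -> float:
    """Payment for one fortnight under a simple income test."""
    excess = max(0.0, income_that_fortnight - INCOME_FREE_AREA)
    return max(0.0, BASE_PAYMENT - TAPER_RATE * excess)

# Scenario: a person is on payments for the first 13 fortnights of the year,
# earning nothing, then leaves payments for a job paying $26,000 over the
# second half of the year.
fortnights_on_payments = 13
income_while_on_payments = 0.0
annual_income_reported_to_tax_office = 26_000.0  # all earned AFTER leaving payments

amount_actually_paid = fortnights_on_payments * fortnightly_entitlement(income_while_on_payments)

# The averaging shortcut: assume the annual income was earned evenly across
# all 26 fortnights, including the fortnights spent on payments.
averaged_fortnightly_income = annual_income_reported_to_tax_office / FORTNIGHTS_PER_YEAR
recalculated_entitlement = fortnights_on_payments * fortnightly_entitlement(averaged_fortnightly_income)

# The gap is raised automatically as a "debt", and the onus falls on the
# recipient to produce years-old payslips to prove the income was not earned
# while they were on payments.
alleged_debt = amount_actually_paid - recalculated_entitlement
print(f"Actually entitled to and paid: ${amount_actually_paid:,.2f}")
print(f"Recalculated from averaged income: ${recalculated_entitlement:,.2f}")
print(f"Debt raised by averaging: ${alleged_debt:,.2f}")
```

In this toy scenario the person was paid exactly what they were entitled to, yet averaging the annual income across the whole year manufactures an alleged overpayment of several thousand dollars.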