
Machine Learning ate my homework


The year is 2050. You’re running late. You stumble out of bed to find the light switch and realize your homework essay “History of GNU/Linux” was deleted; SkyNet determined the essay was “radical literature.” The future is now. Did they forget to add the part where algorithms would be wielded against us, and we would end up proving false positives to equations that respond with “do not reply”?

- Presented at OSCON Ignite, July 2019, Portland, USA


Machine Learning ate my homework

  1. Challenge the algorithm
  2. https://www.newstatesman.com/spotlight/emerging-technologies/2018/06/government-ai-project-has-already-begun
  3. https://www.leagle.com/decision/infdco20170530802 https://casetext.com/case/hou-fed-tchrs-lcl-2415-v-hou-indep-sch-dist
  4. https://arktimes.com/arkansas-blog/2017/01/27/legal-aid-sues-dhs-again-over-algorithm-denial-of-benefits-to-disabled-update-with-dhs-comment
  5. Eppink said the experts they hired found big problems with what the state Medicaid program was doing: the data used to create the formula for setting assistance limits was corrupt; historical data was used to predict the future; and two-thirds of the records were thrown away prior to rules creation. Why? Data entry errors and data that didn’t make sense. Bad data produces bad results. https://www.aclu.org/blog/privacy-technology/pitfalls-artificial-intelligence-decisionmaking-highlighted-idaho-aclu-case
  6. https://independentaustralia.net/politics/politics-display/the-centrelink-robo-debt-debacle-has-only-just-begun,9951
  7. https://www.theguardian.com/australia-news/2019/jun/12/centrelink-robodebt-scheme-faces-second-legal-challenge https://www.theguardian.com/australia-news/2019/feb/06/robodebt-faces-landmark-legal-challenge-over-crude-income-calculations
  8. https://www.businessinsider.com.au/federal-budget-australia-deficit-surplus-2018-5
  9. The push for automated systems has made the vulnerable MORE vulnerable. The adoption of cost-saving measures is designed to target populations deemed to be the “Most Expensive”, which includes the most politically, socially and economically marginalised people. https://ainowinstitute.org/litigatingalgorithms.pdf Litigating Algorithms: Challenging Government use of Algorithmic Decisions
  10. Insist that Government Agencies have a “Speak to a Human” option
  11. https://www.ntsb.gov/investigations/AccidentReports/Reports/HWY18MH010-prelim.pdf
  12. Software will eat the?

Editor's Notes

  • You wake up.
    The room is spinning very gently around your head.
    Or at least it would be if you could see it, which you can’t.
    You stumble out of bed to find the light switch and realize your homework essay “History of GNU/Linux” was deleted.
  • Alexa determined the essay was “radical literature” and it has been quarantined under “Needs Further Review”.
    The algorithms have informed you that you will need to submit a new essay by lunch.
  • What do you do?
    Accept your fate and spend the next four hours in the library frantically writing a new essay, removing any mention of “GNU” or “RMS”?
    Or head to the University and place a challenge to the system, requesting the validation algorithm and an explanation of why your work is banned?
  • You decide to challenge the algorithm
    You proceed to the university administration centre.
    You wait nervously in line, only to see the staff have been replaced by touch screens.
    The computer rejects your complaint and reminds you that you have until 12pm to submit your essay.
  • You find out the administration staff have been replaced with AI-driven assistant touch screens.
    This decision was made because the new AI evaluation system for university staff pay determined the administration staff to be the lowest performing and highest cost.
    The only avenue for complaint is the touch screen.
    You look at the clock as it strikes 12pm.
    You didn’t submit your essay; you just failed your unit.
    Go back to page 46.
  • The future is now
    In the past three years, governments have been ramping up their use of AI; the future is here, and so are the legal challenges.
    When machines make decisions, who is checking the machine?
    Who is validating the numbers?
    How do you challenge an algorithm if you know it has made the wrong decision?
    Let’s look at some recent legal challenges.
  • Houston Federation of Teachers vs Houston Independent School District
     
    Public School Teachers Union challenged the use of proprietary algorithms for school employment practices.
  • The Medicaid program, which helps subsidize medical costs for people with low incomes, uses software to assess a person’s background to decide what they are entitled to.
    In the worst cases, faulty AI decisions “terminated benefits and services to individuals with intellectual, developmental, and physical disabilities”.
    For example, in Arkansas, algorithmic systems failed to cater for cerebral palsy or diabetes patients looking for health care options at home.
  • There were a lot of things wrong with it.
    First of all, the data they used to come up with their formula for setting people’s assistance limits was corrupt.
    They were using historical data to predict what was going to happen in the future.
    But they had to throw out two-thirds of the records they had before they came up with the formula because of data entry errors and data that didn’t make sense.
    So they were supposedly predicting what this population was going to need, but the historical data they were using was flawed, and they were only able to use a small sub-set of it.
    And bad data produces bad results.
  • The Australian Government automated the determination of welfare recipient fraud.
    To do this they relied on historical data and crude income calculations; a rough sketch of the averaging approach appears after these notes.
    The scheme has become known as “RoboDebt”.
    So far:
    29,888 debts reduced,
    14,621 wiped to zero,
    26,104 “waived or written off permanently”.
    That represents 17%, or about one in six, of the total debts raised so far.
  • There have been numerous legal challenges; however, it seems that prior to court action,
    Centrelink, the department that owns RoboDebt, provides a new assessment stating the debt is no longer owed, voiding the court case… What does this mean?
    No precedent has yet been set in Australia!
    The program has not been cancelled; it is still in operation and has had a huge impact on the citizen psyche.
  • Automating simple regulatory outcomes seems harmless?
    It saves a lot of money!
    Regulation can be easily broken down into rules?
    It saves a lot of money!
    It directly impacts the quality of citizens’ lives?
    It saves a lot of money!
  • When countries utilize AI technology to make what are seen to be basic administrative decisions, these can have huge impacts on citizens’ lives and privacy.
    LITIGATING ALGORITHMS: CHALLENGING GOVERNMENT USE OF ALGORITHMIC DECISION SYSTEMS, An AI Now Institute Report, in collaboration with the Center on Race, Inequality, and the Law and the Electronic Frontier Foundation, SEPTEMBER 2018
  • Of the 2,030 people who died after receiving a robo-debt notice, some 663 were officially classified by the system as “vulnerable”.
    776 of the 2,030 recorded deaths were people aged 45 and under. A seriously worrying 429 were under the age of 35.
    663* people were officially classified by the system as “vulnerable”, meaning the DHS had recorded a history of issues like mental illness, drug use, or domestic violence with each individual.
    *That count only covers those that the Centrelink system saw fit to classify as vulnerable; a title the system makes very difficult to obtain.
  • Put humans at the center.
    We need a “safety button”, an “inject a human” button.
    Start from the most vulnerable: minority groups, people with disability.
    Demand policies and actions to enable citizens to argue against the maths.
    Argue for open-source licensing of rules-algorithm software,
    not just the software framework but the inputs that produce the outputs.
    Validation criteria accessible to citizens and lawyers.
    Policies to allow citizens access to the rules code that made the decision.
  • How can we validate AI decisions?
    How can we measure impacts on citizens?
    How can we measure levels of social duress caused?
    Does automating rules make them easier to understand, or does it further black-box them?
  • The Uber self-driving vehicle was travelling at 43 mph.
    Initially the car identified the pedestrian as an unknown object.
    Then as a vehicle.
    Then as a bicycle.
    It also displayed varying expectations of future travel path.
    1.3 seconds prior to impact, the system decided that an emergency braking manoeuvre was needed to mitigate a collision.
  • Uber then said that under self-driving control, emergency braking is not meant to happen, as it is supposed to be initiated by the “operator”.
    Uber later admitted that the system currently does not alert the operator.
    The operator did initiate braking, which was estimated to be 1 second after the collision.
    When we stop challenging the world around us, when we let authorities take a “we know better” approach,
    we place ourselves in a situation where we are relying on a safety button that was never designed to work, and we all know where that has led us in the past.
  • Software isn’t eating the world. We are drowning in software, most of it mediocre, duplicative, and bad.
    The code we write is not perfect; it is as susceptible to bias and failure as those of us who write it, and therefore any machine decision is also susceptible to failure and bias.
  • First the algorithm came for those on welfare, but I was not on welfare; then it came for the teachers’ salaries, but I was not a teacher; then it took away people’s health care, but I had private insurance; soon the algorithm will come for me, and what will I do?
    Read up on what countries are doing to litigate algorithms. Seek to understand what decisions in your life are being made by an algorithm. Educate yourself on your local laws and how they can be used if you find yourself in a situation where a decision made by an algorithm has harmed you. It is your right to understand how the algorithm made that decision; we need an audit trail for how machines make decisions. It’s our choice to start ensuring a more transparent future, or to let the computer say “No”.
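
A note on the RoboDebt “crude income calculations” mentioned above: press coverage (such as the Guardian article linked on slide 7) described the scheme as averaging a person’s annual employer-reported income evenly across 26 fortnights and comparing that average against the income the person actually reported while claiming benefits. The sketch below illustrates only that averaging idea; the payment amount, taper rule, and function names are illustrative assumptions, not Centrelink’s actual rules or code.

    # Illustrative sketch only: the kind of fortnightly income averaging reported
    # in coverage of the RoboDebt scheme. The figures, the taper rule, and the
    # function names are hypothetical assumptions, not Centrelink's actual code.

    FORTNIGHTS = 26

    def entitlement(fortnight_income, max_payment=550.0, free_area=150.0, taper=0.5):
        """Hypothetical benefit for one fortnight: the payment tapers as income rises."""
        reduction = max(0.0, fortnight_income - free_area) * taper
        return max(0.0, max_payment - reduction)

    def averaged_debt(annual_income, reported_incomes_while_claiming):
        """Compare what was paid (based on the income actually reported in each
        fortnight the person claimed) with what averaging the annual figure over
        26 fortnights says should have been paid; the gap becomes an alleged debt."""
        averaged = annual_income / FORTNIGHTS
        paid = sum(entitlement(i) for i in reported_incomes_while_claiming)
        recalculated = entitlement(averaged) * len(reported_incomes_while_claiming)
        return max(0.0, paid - recalculated)

    # Example: a person earns $13,000 while working part of the year, claims the
    # benefit for the other 16 fortnights, and correctly reports $0 income in each
    # of those fortnights. Averaging still manufactures a "debt" greater than zero.
    print(averaged_debt(13000.0, [0.0] * 16))

The point of the example is that averaging hides when the income was actually earned, so a person who reported everything correctly can still be flagged as having been overpaid.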
