AI/Data Analytics (AIDA)
Key concepts, examples & risks
UTS / ACOSS meeting, 24 Sept. 2019
Simon Buckingham Shum & Kirsty Kitto
Connected Intelligence Centre, University of Technology Sydney
cic.uts.edu.au
Overview of key AI capabilities
ai4kids.org
Automation of routine cognitive tasks
But (1) companies may oversimplify a "routine" task, ignoring important human judgements, or (2) go well beyond "routine" into advanced cognition; this is where the problems lie.
“From clicks to constructs”
AIDA tools must map from the low-level data signals that computers log to higher-level categories that are meaningful to humans.
Issues: this mapping always makes assumptions, which may be accidental or intentional, and can be challenged.
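To make the mapping concrete, here is a purely hypothetical sketch (not any particular product's model) that rolls low-level game-log events up into a single "conscientiousness" score. Every event name and weight below is an assumption we invented, which is exactly the point: each one is a design decision that can be challenged.

```python
# Minimal sketch (hypothetical event names and weights): rolling low-level
# log events up into a higher-level "construct" score. Real AIDA tools use
# far richer evidence models, but the mapping step looks broadly like this.
from collections import Counter

# Assumed mapping from logged events to evidence about "conscientiousness".
# Every number here is a modelling choice that can (and should) be challenged.
EVIDENCE_WEIGHTS = {
    "level_retried_after_failure": +2.0,   # persistence
    "optional_task_completed":     +1.5,   # thoroughness
    "level_abandoned_early":       -2.0,   # giving up quickly
    "hint_ignored_then_failed":    -0.5,
}

def conscientiousness_score(event_log):
    """Turn a stream of click-level events into a single construct estimate."""
    counts = Counter(event_log)
    raw = sum(EVIDENCE_WEIGHTS.get(event, 0.0) * n for event, n in counts.items())
    # Squash to 0..1 so it looks like a tidy "score" (another modelling choice).
    return max(0.0, min(1.0, 0.5 + raw / 20.0))

if __name__ == "__main__":
    log = ["level_retried_after_failure"] * 3 + ["level_abandoned_early"]
    print(conscientiousness_score(log))  # 0.7 for this toy log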
"From clicks to constructs": an educational example
What can a computer game tell us about a student's "Conscientiousness"?
Shute, V. J. and Ventura, M. (2013). Stealth Assessment: Measuring and Supporting Learning in Video Games. Cambridge, MA: MIT Press.
Figure 5 from the report to The John D. and Catherine T. MacArthur Foundation Reports on Digital Media and Learning: http://myweb.fsu.edu/vshute/pdf/Stealth_Assessment.pdf
"From clicks to constructs": substitute "student" with "employee"
Making streams of data meaningful: "Employee is competent", "Responsive", "Expert", "Team player"
These are like normal performance KPIs: a set of behaviours used as approximate indicators that you have the qualities the job requires. (We can debate whether they're sensible, of course.)
Flawed metrics of teacher quality
"Value Added Models" (VAMs) are automated assessments of teacher quality, causing huge controversy in the US.
http://vamboozled.com
https://www.edutopia.org/blog/vams-instructor-effectiveness-unpacking-debate-david-stroupe
Natural Language Processing (text analytics)
NLP enables computers to understand human speech and writing (e.g. Siri; language translation).
Issues:
• risk of bias in the data on which it was trained
• how effectively it feeds back to the user
• whether it is clear if you're talking to a human or an AI (e.g. over the phone)
Example (UTS): instant feedback on writing
https://uts.edu.au/acawriter
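AcaWriter gives students instant feedback on their drafts. As a toy illustration only (this is not AcaWriter's actual pipeline), a crude rule-based version of "instant feedback on writing" could look like the sketch below; the cue-word lists are invented for the example, and real systems use far richer NLP.

```python
# Toy illustration only (not AcaWriter's actual pipeline): rule-based feedback
# that flags sentences containing simple "contrast" and "emphasis" cue words.
import re

CUES = {
    "contrast": ["however", "although", "in contrast", "on the other hand"],
    "emphasis": ["importantly", "notably", "crucially"],
}

def feedback(text):
    """Return one feedback message per sentence, based on surface cue words."""
    messages = []
    for sentence in re.split(r"(?<=[.!?])\s+", text.strip()):
        found = [move for move, words in CUES.items()
                 if any(w in sentence.lower() for w in words)]
        if found:
            messages.append(f"Signals {', '.join(found)}: {sentence!r}")
        else:
            messages.append(f"No rhetorical cue detected in: {sentence!r}")
    return messages

if __name__ == "__main__":
    for msg in feedback("Prior work assumed X. However, our data importantly suggests Y."):
        print(msg)
```

Note how crude the rules are: the quality of what is fed back to the user, and how it was derived, are exactly the issues flagged above.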
Shortlisting for interviews: résumé analysis
https://www.weforum.org/agenda/2019/05/ai-assisted-recruitment-is-biased-heres-how-to-beat-it/
Examples are now being documented of systematic gender and racial bias in automated systems, because the training data of "desirable employees" described a white, male world.
Personal assistant example: booking a table
Impressive but controversial Google demo in which an AI system booked a table over the phone
https://youtu.be/-RHG5DFAjp8
Predictive modelling
"Based on the past, statistically speaking, we expect the future to look like this…"
Issues:
• What if there are good reasons to think that the past is not, or should not be, a good guide to the future?
• If data from the past has biases that we now recognise, we should not perpetuate those.
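A minimal sketch of that core move, assuming scikit-learn is available: fit a model to past decisions, then score new cases. The feature names and labels below are invented for illustration; the point is that whatever pattern sits in the historical labels, including bias, becomes the "ground truth" the model reproduces.

```python
# Minimal sketch of predictive modelling with scikit-learn: learn from past
# outcomes, predict future ones. Toy data invented for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Features for past cases, e.g. [years_experience, num_prior_roles]
X_past = np.array([[1, 1], [2, 1], [5, 3], [7, 2], [3, 2], [8, 4]])
# Historical decisions (1 = shortlisted). If these past decisions were biased,
# that bias is now the "ground truth" the model optimises towards.
y_past = np.array([0, 0, 1, 1, 0, 1])

model = LogisticRegression().fit(X_past, y_past)

# "Based on the past, we expect the future to look like this…"
new_applicants = np.array([[4, 2], [6, 1]])
print(model.predict(new_applicants))        # hard decisions
print(model.predict_proba(new_applicants))  # the scores behind them
```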
Predicting future criminality
https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing
A statistical model powering a product used in US courts to judge the likelihood of reoffending has been demonstrated to be systematically biased against Black defendants.
Image and video analysis
Machine learning enables computers to classify
images or videos
Issues: like a child, a machine learning system learns from the examples it is shown. If those examples present a distorted view of the world, it will learn and perpetuate those distortions.
Facial recognition is becoming a commodity service
https://aws.amazon.com/rekognition/
https://medium.com/@Joy.Buolamwini/response-racial-and-gender-bias-in-amazon-rekognition-commercial-ai-system-for-analyzing-faces-a289222eeced
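The arithmetic behind such findings is simple. With invented evaluation counts (not Buolamwini's actual figures), the sketch below shows how an overall accuracy number can look impressive while the error rate for an under-represented group is more than ten times higher.

```python
# Invented numbers, purely to show the arithmetic behind the bias findings:
# a classifier can report high overall accuracy while error rates differ
# sharply between groups, simply because one group dominates the test set.
results = {
    # group: (correct, total) -- hypothetical evaluation counts
    "lighter-skinned men":  (980, 1000),
    "darker-skinned women": ( 66,  100),
}

correct = sum(c for c, _ in results.values())
total = sum(t for _, t in results.values())
print(f"Overall accuracy: {correct / total:.1%}")        # 95.1% overall

for group, (c, t) in results.items():
    print(f"Error rate for {group}: {(t - c) / t:.1%}")  # 2.0% vs 34.0%
```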
So, in the future, we'll be able to track all racial features at equally high definition (fairness); nobody will be left out of the surveillance (ethics).
https://blogs.microsoft.com/ai/gender-skin-tone-facial-recognition-improvement/
Safe Face Pledge initiative
https://www.safefacepledge.org
Profiling & recommendation
A user is guided towards a choice based upon a profile
of similar users and/or past behaviour
Issues: Profiles can be poor and the recommended
service or product undesired. Sometimes the user cannot
overrule the recommendation. Sometimes doing so is
annoying or difficult.
Shopping...
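A minimal sketch of the "people like you also bought…" logic, with invented purchase histories: find the most similar user by cosine similarity and recommend what they bought that you haven't.

```python
# Minimal sketch of "people like you also bought…": recommend items that the
# most similar user (by cosine similarity of purchase history) has bought and
# you haven't. The purchase data below is invented for illustration.
import numpy as np

items = ["kettle", "toaster", "blender", "air fryer"]
purchases = {                      # 1 = bought, 0 = not bought
    "you":   np.array([1, 1, 0, 0]),
    "userA": np.array([1, 1, 1, 0]),
    "userB": np.array([0, 0, 1, 1]),
}

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

you = purchases["you"]
neighbour = max((u for u in purchases if u != "you"),
                key=lambda u: cosine(you, purchases[u]))

recommended = [item for item, theirs, yours
               in zip(items, purchases[neighbour], you) if theirs and not yours]
print(f"Most similar user: {neighbour}; recommend: {recommended}")
# -> Most similar user: userA; recommend: ['blender']
```

If the profile is wrong, everything downstream is wrong too, and the user may have no way to correct or overrule it.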
Trade Unions: both
resisting and engaging
with AI
Issues: We need trust-building conversations for an
informed dialogue, and reskilling pathways
Teaching union concerns about AI-powered learning platforms
https://twitter.com/AGavrielatos/status/1121704316069236739
https://twitter.com/hashtag/TellPearson
https://www.weforum.org/reports/the-future-of-jobs-report-2018
“Insufficient reskilling and upskilling:
Employers indicate that they are set to
prioritize and focus their re- and
upskilling efforts on employees
currently performing high-value
roles as a way of strengthening their
enterprise’s strategic capacity
[...]
In other words, those most in
need of reskilling and upskilling
are least likely to receive such
training.” (p.ix)
Excellent resources at sites such as...
RSA Future Work Centre: https://www.thersa.org/action-and-research/rsa-projects/economy-enterprise-manufacturing-folder/the-future-of-work
AI Now Institute: https://ainowinstitute.org
Data & Society Institute: https://datasociety.net
There are ways to empower employees if they are
given a genuine voice in the AIDA design process
