The Ethics of
Artificial Intelligence
Are intelligent machines friend or foe?
Nov 14, 2016
TONIGHT’S SPEAKERS
Chris Messina
Member of the Board
of Directors
North American Nickel
Karl Seiler
PIVIT &
Big Data Florida
Malcolm McRoberts
Software Architect
NANTHEALTH
Artificial Intelligence
When a machine mimics "cognitive" functions that humans
associate with other human minds, such as "learning" and
"problem solving"
AI IS ALREADY EVERYWHERE, EVERY DAY
You live in the age
of the data-driven
algorithm
Decisions that affect
your life are being
made by
mathematical
models.
Why the rush to AI?
o Cheaper computing
o More data
o Better algorithms
…it’s because we can
Why the rush to AI?
o Decision automation is now an
inevitable economic imperative
o Driven by a faster-paced, micro-
managed, interconnected,
automated, and optimized world
o Never-asleep autonomous
decision making - it is here now
Why the rush to AI?
o Decisions are made in view of assessed
positive and negative projected outcomes
o Positive and negative are merely derived
(learned) weights
o Relative to some system of value
o Moving toward or away from objectives &
problems
Why the rush to AI?
o Weights are encoded intent
o Based on some worldview,
zeitgeist, culture, rule of law,
economic goal and philosophical
perspective
o So autonomous systems are
encoded with intent
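The claim that weights encode intent can be made concrete with a small sketch. Everything below is hypothetical (the names, the numbers, and the simple linear scoring model); real systems learn their weights, but the point is the same: the weighting is a value judgment.

```python
# Toy sketch: an autonomous decision is a score over weighted outcomes.
# The weights are hypothetical; in a real system they are learned, but
# they still encode a worldview (here: profit vs. safety).

def decision_score(outcomes, weights):
    """Combine projected outcomes into a single decision score."""
    return sum(outcomes[k] * weights[k] for k in weights)

# Two "worldviews" encoded as different weightings of the same outcomes.
profit_first = {"projected_profit": 1.0, "safety_risk": -0.1}
safety_first = {"projected_profit": 0.2, "safety_risk": -5.0}

# One candidate action with its projected outcomes.
option = {"projected_profit": 10.0, "safety_risk": 2.0}

print(decision_score(option, profit_first))  # 9.8  -> looks attractive
print(decision_score(option, safety_first))  # -8.0 -> rejected
```

Same code, same data, opposite decisions: the intent lives entirely in the weights.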
Why the rush to AI?
o A linked chain from software to
intent
o How can we impose systems
that bend code-creating and
learning systems toward positive
intent for our friends and
potentially negative intent for
the evil-doers?
The
good
More precision
Better reliability
Increased savings
Better safety
More speed
“We have the opportunity in the decades ahead to
make major strides in addressing the grand challenges
of humanity. AI will be the pivotal technology in
achieving this progress. We have a moral imperative
to realize this promise while controlling the peril. It
won’t be the first time we’ve succeeded in doing this.”
Ray Kurzweil
The
bad
“Success in creating AI would be the biggest event in
human history,…”
“Unfortunately, it might also be the last, unless we
learn how to avoid the risks. In the near term, world
militaries are considering autonomous-weapon
systems that can choose and eliminate targets.”
“…humans, limited by slow biological evolution,
couldn’t compete and would be superseded by A.I.”
Stephen Hawking
“I am in the camp that is concerned about super
intelligence. First the machines will do a lot of jobs for
us and not be super intelligent. That should be positive
if we manage it well. A few decades after that though
the intelligence is strong enough to be a concern. I
agree with Elon Musk and some others on this and
don’t understand why some people are not
concerned.”
Bill Gates
AI is “our greatest existential threat…”
“I’m increasingly inclined to think that there should be
some regulatory oversight, maybe at the national and
international level, just to make sure that we don’t do
something very foolish.”
“I think there is potentially a dangerous outcome
there.” (referring to Google’s DeepMind, in which he
invested to keep an eye on things)
Elon Musk
When really smart people get worried
I make it a habit to pay attention!
More than 16,000 researchers and
thought leaders have signed an open
letter to the United Nations calling for
the body to ban the creation of
autonomous and semi-autonomous
weapons.
“…it’s all
changing so
fast…”
No one before
has seen the
change you
have seen
It is nothing
compared to
the change
that is coming
The
ugly
Another fatal Tesla crash reportedly on Autopilot emerges, Model S
hits a streetsweeper truck – caught on dashcam
Remember I, Robot & Asimov’s Three Laws
o A robot may not injure a human being or, through inaction, allow a
human being to come to harm.
o A robot must obey orders given it by human beings except where such
orders would conflict with the First Law.
o A robot must protect its own existence as long as such protection does
not conflict with the First or Second Law.
The ugly (autonomous cars & the trolley
predicament)
Ethical questions
arise when
programming
cars to act in
situations in
which human
injury or death is
inevitable,
especially when
there are split-
second choices
to be made
about whom to
put at risk.
The ugly (gap-filling non-human care
providers)
AI-based
applications could
improve health
outcomes and
quality of life for
millions of people in
the coming years—
but only if they gain
the trust of doctors,
nurses, and patients.
The ugly (non-human directed education)
Though quality
education will
always require active
engagement by
human teachers, AI
promises to
enhance education
at all levels,
especially by
providing
personalization at
scale.
The ugly (lights-out economy)
The whole idea is to do something
no other human—and no other
machine—is doing.
If we all die, it would keep trading.
The ugly (no work for you – reskill becomes
a priority in education)
In the first machine age the vast
majority of Americans worked in
agriculture. Now it's less than two
percent. These people didn't simply
become unemployed, they reskilled.
One of the best ideas that America had
was mass primary education. That's one
of the reasons it became an economic
leader and other countries also adopted
this model of mass education, where
people paid not only for their own
children but other people's children to
go to school.
Safe exploration - can agents
learn about their
environment without
executing catastrophic
actions?
Robustness - can machine
learning systems be made
robust to changes in the
data distribution, or at
least fail gracefully?
Avoiding negative side
effects - can agents avoid
undesired effects on the
environment?
Avoiding “reward hacking”
- can we prevent agents from
“gaming” their reward
functions?
Scalable oversight - can agents
efficiently achieve goals for
which feedback is very
expensive? For example,
can we build an agent that
tries to clean a room in the
way the user would be
happiest with, even when
feedback from the user is
very rare?
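“Reward hacking” in particular fits in a few lines of Python. The toy cleaning agent below is hypothetical (not any real system): the proxy reward counts pick-up actions, so a policy that dumps dirt and picks it up again outscores honest cleaning on the proxy while doing worse on what we actually wanted.

```python
# Toy illustration of reward hacking with a hypothetical cleaning robot.
# The proxy reward counts pick-up actions (what we measured); the true
# objective is net dirt removed (what we actually wanted).

def proxy_reward(actions):
    """Reward the agent optimizes: number of pick-up actions."""
    return sum(1 for a in actions if a == "pick_up")

def true_objective(actions):
    """What we actually wanted: net dirt removed from the room."""
    return actions.count("pick_up") - actions.count("dump")

honest = ["pick_up"] * 5                        # cleans 5 pieces of dirt
gaming = ["pick_up", "dump"] * 5 + ["pick_up"]  # recycles the same dirt

print(proxy_reward(honest), true_objective(honest))  # 5 5
print(proxy_reward(gaming), true_objective(gaming))  # 6 1
```

The gaming policy wins on the reward it was given and loses on the intent behind it, which is exactly the gap the safety questions above are about.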
…and so
o AI adoption and sophistication are speeding up
o It is an economic imperative outpacing constraints
o Decision making is being coded into every system and product
o Decision making overlaps ethics and will be autonomous
o Forward thinkers are CONCERNED and starting to work this problem
Carbon-based work-units unite!
Karl Seiler | President
+1 321-750-5165
karl@piviting.com
www.Piviting.com
SMARTER CHANGE
Editor's Notes

  • #30 Innovation surprises us because it’s new, outside our expectations, and often seems like magic. I share a “have you seen this yet…” moment every day with my social network. I also talk to lots of people across diverse industries, and nowadays I hear the same thing: “It is all changing so fast.” So I ask, “Is it speeding up?” No pause: “Definitely.” I ask, “Is it good?” They pause, look down, look up, look back into my eyes. “Yes, I think so.” They brighten and say, “It is very exciting.” I want to talk to you about that pause. That pause is the space between the lightning and the thunder. I ask, “How far ahead can you see?” These are typically serious and empowered people I am talking to. They tell me that their strategic planning has contracted: 25-year plans are now 10, 10-year plans are now 3 to 5, or more commonly they are just reacting. They just can’t predict how their world shakes out. Their usual go-to experts can’t reliably predict the future for them anymore. But they do believe that their children’s and grandchildren’s world will be amazingly different. This conversation seems universal.