AI: The Good, the Bad, and the Practical

Artificial intelligence, machine learning, neural networks. We’re in the midst of a wave of excitement around AI such as hasn’t been seen for a few decades. But those previous periods of inflated expectations led to troughs of disappointment. This time is (mostly) different.

Applications of AI such as predictive analytics are already decreasing costs and improving reliability of industrial machinery. Pattern recognition can equal or exceed the ability of human experts in some domains. It’s developing into an increasingly commercially important technology area. (Although it’s also easy to look at wins in specific domains and generalize to an overly optimistic view of AI writ large.)

In this session, Red Hat Technology Evangelist for Emerging Technology Gordon Haff will examine the AI landscape and identify those domains and approaches that have seen genuine advances and why. He’ll also discuss some of the specific ways in which both organizations and individuals are getting up to speed with AI today.

Given at BCBS NC Tech Summit, Raleigh, 2018

  1. ARTIFICIAL INTELLIGENCE: THE GOOD, THE BAD, & THE PRACTICAL GORDON HAFF Technology Evangelist, Red Hat March 2018 @ghaff
  2. WHO AM I? ● Evangelist for emerging technologies and practices at Red Hat ● Co-author of From Pots and Vats to Programs and Apps ● Former IT industry analyst ● Former big system guy ● Website: http://www.bitmasons.com
  3. CONVERGENCE OF PHYSICAL & DIGITAL IoT AI Blockchain THE CONSUMERIZATION OF I.T. THE DIGITIZATION OF O.T. CONSUMER IoT/MOBILITY “Software is eating the world” “Data is the new oil”
  4. AI TODAY: MOORE’S LAW + OPEN SOURCE ● Can collect, store, and process huge quantities of data ● Massive distributed processing capability/GPU/cloud ● Open source platforms, tools, and development model = Complex neural networks
  5. HOW DID WE GET HERE?
  6. IS AI... Artificial General Intelligence (AGI) or “Strong AI”?
  7. IS AI... The stuff we haven’t figured out how to do yet?
  8. PREHISTORY Philosophy Logic, methods of reasoning Mathematics Algorithms, probability Psychology Behaviorism, cognitive psychology Economics Utility/game/decision theory, operations research Linguistics Grammar, knowledge representation Control theory Objective functions, feedback loops
  9. HISTORY 1956 Dartmouth Summer workshop 1952-1969 Early enthusiasm Lisp, formal logic vs. working models Partially based on Russell & Norvig 1966-1973 Reality sets in Lack of real world context Computing power limits 1969-1979 Knowledge-based systems Expert systems, language Early-mid 1980s Becomes an industry Ambitious goals Neural nets return Late 80s AI winter Collapse of Lisp machine market Failures: expert systems, Fifth Generation project, etc. 1995- Intelligent agents Modular approaches Internet cross-pollination 21st century Data, data, data Machine learning Deep learning
  10. Thinking Humanly Cognitive modeling Informed by neurophysiology Thinking Rationally Logicist tradition Intelligence based on logical relationships Acting Humanly Turing Test Computer vision, robotics, language, reasoning Acting Rationally Rational agent approach Achieve best or best expected outcome RUSSELL & NORVIG
  11. Source: Russell and Norvig LEARNING AGENT / REINFORCEMENT LEARNING
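The learning-agent loop on the slide above (act, observe a reward, update, repeat) can be sketched as tabular Q-learning. Everything below is a made-up toy, not anything from the talk: a five-state corridor with a reward at one end, and invented values for the learning rate, discount, and exploration rate.

```python
import random

# Tabular Q-learning on a toy environment: a five-state corridor
# (states 0..4) with a reward of 1.0 for reaching state 4.
N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]                 # step left, step right
alpha, gamma, eps = 0.5, 0.9, 0.1  # learning rate, discount, exploration rate

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Environment response: clamp to the corridor, reward only at the goal."""
    nxt = max(0, min(N_STATES - 1, state + action))
    return nxt, (1.0 if nxt == GOAL else 0.0), nxt == GOAL

random.seed(0)
for _ in range(300):               # episodes
    s, done = 0, False
    while not done:
        # epsilon-greedy: usually exploit the current Q table, sometimes explore
        if random.random() < eps:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2, r, done = step(s, a)
        # Q-learning update: nudge Q(s,a) toward r + gamma * max_a' Q(s',a')
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
        s = s2

# The learned greedy policy walks right toward the reward from every state.
policy = {s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(GOAL)}
print(policy)
```

The agent never sees the environment's rules; the action preferences emerge purely from the rewards it observes, which is the point of the slide.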
  12. Source: MathWorks
  13. Source: MapR UNSUPERVISED CLUSTERING: K-MEANS
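K-means, the unsupervised clustering algorithm named on the slide above, alternates two steps: assign each point to its nearest centroid, then move each centroid to the mean of its cluster. A minimal 1-D sketch (the data values are invented for illustration):

```python
import random

def kmeans_1d(points, k, iters=20, seed=0):
    """Plain k-means on 1-D data: repeat (assign to nearest centroid,
    recompute each centroid as the mean of its cluster)."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # assignment step: nearest centroid by absolute distance
            i = min(range(k), key=lambda i: abs(p - centroids[i]))
            clusters[i].append(p)
        # update step: centroid = mean of assigned points (kept if cluster empty)
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return sorted(centroids)

# Two obvious clumps around 0 and 10; k-means finds a center near each,
# with no labels supplied -- that is what makes it unsupervised.
data = [0.1, 0.2, -0.1, 0.0, 9.9, 10.1, 10.0, 9.8]
print(kmeans_1d(data, k=2))  # roughly [0.05, 9.95]
```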
  14. Source: Uli Drepper, Red Hat SUPERVISED LEARNING
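Supervised learning, by contrast, starts from labeled examples. In miniature: a 1-nearest-neighbor classifier that predicts the label of the closest training point. The features and labels below are hypothetical, chosen only to make the idea concrete:

```python
# Labeled training data: (height_cm, weight_kg) -> class label.
# Values are made up for illustration.
train = [
    ((150, 50), "A"), ((155, 55), "A"),
    ((180, 90), "B"), ((185, 95), "B"),
]

def predict(x):
    """1-nearest-neighbor: copy the label of the closest training example."""
    def dist2(example):
        (x1, x2), _ = example
        return (x1 - x[0]) ** 2 + (x2 - x[1]) ** 2
    return min(train, key=dist2)[1]

print(predict((152, 52)))  # "A": nearest labeled examples are the small ones
print(predict((183, 92)))  # "B": nearest labeled examples are the large ones
```

The contrast with the previous slide is the labels: here the "right answers" are given up front, and the learner only generalizes from them.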
  15. SUPERVISED CLUSTERING
  16. NEURAL NETWORKS
  17. Source: Ian Goodfellow of OpenAI
  18. HOW WE GOT HERE ● 1969 Perceptrons (Minsky/Papert): Simple networks can perform only basic functions ● Backpropagation (Hinton and others) provided a way to train multi-level networks (1986, based on earlier research) ● Became practical computationally and with sufficient data ~2010s
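The backpropagation idea on the slide above can be shown end to end on XOR, a function a single-layer perceptron cannot represent. This is a from-scratch sketch with invented hyperparameters; it simply checks that the squared error drops as a small two-layer network trains:

```python
import math, random

# A minimal multi-layer network trained by backpropagation on XOR.
random.seed(1)
sig = lambda z: 1.0 / (1.0 + math.exp(-z))

H = 4  # hidden units
W1 = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(H)]  # [w_x1, w_x2, bias]
W2 = [random.uniform(-1, 1) for _ in range(H + 1)]                  # last entry is bias

data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def net(x):
    """Forward pass: input -> hidden activations -> output."""
    h = [sig(w[0] * x[0] + w[1] * x[1] + w[2]) for w in W1]
    return sig(sum(W2[i] * h[i] for i in range(H)) + W2[H]), h

def loss():
    return sum((net(x)[0] - t) ** 2 for x, t in data)

loss_before = loss()
lr = 0.5
for _ in range(20000):
    x, t = random.choice(data)
    y, h = net(x)
    # backward pass: the chain rule pushes the output error back one layer at a time
    dy = (y - t) * y * (1 - y)                        # error signal at the output
    dh = [dy * W2[i] * h[i] * (1 - h[i]) for i in range(H)]  # error at each hidden unit
    for i in range(H):
        W2[i] -= lr * dy * h[i]
        for j in range(2):
            W1[i][j] -= lr * dh[i] * x[j]
        W1[i][2] -= lr * dh[i]                         # hidden bias
    W2[H] -= lr * dy                                   # output bias

loss_after = loss()
print(loss_before, "->", loss_after)  # squared error drops as the net trains
```

This is exactly the pattern the slide describes: multi-level networks were of limited use until backpropagation gave a way to assign credit to the hidden weights.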
  19. Source: ml.berkeley.edu AI = DEEP LEARNING = NEURAL NETWORKS Backpropagation
  20. Source: http://www.asimovinstitute.org/neural-network-zoo/ I’M SIMPLIFYING “One problem with drawing them as node maps: it doesn’t really show how they’re used. For example, variational autoencoders (VAE) may look just like autoencoders (AE), but the training process is actually quite different. The use-cases for trained networks differ even more, because VAEs are generators, where you insert noise to get a new sample. AEs, simply map whatever they get as input to the closest training sample they “remember”. I should add that this overview is in no way clarifying how each of the different node types work internally (but that’s a topic for another day).”
  21. AMAZING STUFF SINCE ~2010 ● Voice recognition: Siri, Alexa, Cortana, Google ● IBM Watson wins Jeopardy ● Google DeepMind's AlphaGo defeats Lee Sedol 4–1 ● Libratus wins against four top players at no-limit Texas hold 'em ● Autonomous driving research ● Ubiquitous bots ● Lots of unsexy predictive analytics, trading, optimization, and analysis
  22. “While HIMSS[18] doesn’t look exactly like CES, it’s getting close – and it’s totally unrecognizable from the HIMSS of 10 years ago. So it’s fun to imagine what HIMSS28 will look like.” Mimi Grant, Adaptive Business Leaders (ABL) Organization
  23. HEALTHCARE AREAS OF INVESTIGATION ● Drug discovery ● Treatment options based on current research ● Diagnosis (imaging, correlation) ● E.g. Early Colorectal Cancer Detected by Machine Learning Model Using Gender, Age, and Complete Blood Count Data (Kaiser Permanente NorthWest, 2017)
  24. THE CHALLENGES
  25. Source: James Somers, September 29, 2017, https://www.technologyreview.com/s/608911/is-ai-riding-a-one-trick-pony/
  26. Source: Gartner, July 2017
  27. GREAT FUZZY PATTERN RECOGNIZERS BUT... ● Almost all value in AI today is supervised learning (Andrew Ng) ● Fundamentally a statistical learning technique ● Dependent on huge training sets ● Learning model effectively classical conditioned training ● Sensitive to small changes ● No physical world context ● Difficult to explain results
  28. HOW DID YOU ARRIVE AT THAT ANSWER? http://nautil.us/issue/40/learning/is-artificial-intelligence-permanently-inscrutable
  29. CLINICAL AND PATIENT DECISION SUPPORT SOFTWARE DRAFT GUIDANCE FOR INDUSTRY AND FDA STAFF DECEMBER, 2017
  30. OBLIGATORY DILBERT CARTOON
  31. OTHER HEALTHCARE CHALLENGES ● Long-term ROIs ● “Healthcare data sucks.” (Dr. Mark Weisman) ● “Black and white ‘truth’ is rare in medicine.” (Dr. Lynda Chin) ● Privacy/sharing data ● Resistance to adoption ● Training algorithms require domain expertise
  32. WHAT’S THE COMPUTATIONAL BASIS FOR: ● Learning concepts ● Judging similarity ● Inferring causal connections ● Forming perceptual representations ● Learning word meanings and syntactic principles in natural language ● Predicting the future ● Developing physical world intuitions?
  33. GETTING STARTED
  34. … AND MANY OTHERS
  35. HOW RED HAT IS OPTIMIZING ● Integration and access to specialized hardware resources such as GPUs, FPGAs, and InfiniBand ● Specialized features such as exclusive cores, CPU pinning strategies, hugepages, and NUMA ● Optimizing the access and efficiency of resources with robust scheduling, prioritization, and preemption capabilities ● Performance benchmarking and tuning
  36. Apache Spark, Project Jupyter, TensorFlow, Apache Kafka, AMQP, Ceph, S3, OpenShift/Kubernetes
  37. THANK YOU plus.google.com/+RedHat linkedin.com/company/red-hat youtube.com/user/RedHatVideos facebook.com/redhatinc twitter.com/RedHat
