Introduction to Artificial Intelligence and Machine Learning
Ecosystem and Technology Overview
Raffaele Mauro
Managing Director
Endeavor Italy
Futureland
Talent Garden Calabiana
November 2017
Technology
Finance & Venture Capital
Policy
Innovation
PR wars
Fake news & PR wars
Robophobia!!
Reality: Engineering breakthroughs
+ massive investments … but long road ahead
Self-driving vehicles (drones, submarines, cars)
Speech recognition
Image recognition & search
Intelligence analysis
Manufacturing automation
Gaming
Virtual assistants
Antispam filters
Automatic translation
Anti-fraud systems and credit scoring
Medical diagnosis
Robotics
Recommendation systems
Narrow Vs General AI
From a limited set of specific capabilities to autonomous intelligent
agents with general reasoning and real world autonomy?
?
Perception
Learning
Planning
Reasoning
Mobility
…....
Expanding set of tasks efficiently performed by machine intelligence
+
AI is everywhere, but we are not calling it “AI” anymore
Competing with humans:
IBM Deep Blue (1996-97), IBM Watson (2011), Google AlphaGo (2017)
VC investments in early stage AI companies
Italian VCs investing in AI: August 2017
Google – Investment, Research, Applications
Example: Google Gmail
Source: Google Blog https://gmail.googleblog.com/2007/10/how-our-spam-filter-works.html
“Software is eating the world” (Marc Andreessen)
“Mobile is eating the world” (Benedict Evans)
“AI is eating the world”
Source: Facebook, Business Insider http://www.businessinsider.com/facebook-f8-ten-year-roadmap-2016-4?IR=T
“…when controversies arise, there will be no more need of disputation between two philosophers
than between two accountants. For it will suffice to take pen in hand, sit down at the abacus,
and say to each other (with a friend as witness, if they like): let us calculate! (Calculemus!)” (Gottfried Wilhelm Leibniz)
Computational thinking
Alan Turing
“A computer would deserve to be called intelligent if it could deceive
a human into believing that it was human.”
“Most of our future attempts to build large, growing Artificial Intelligences
will be subject to all sorts of mental disorders.”
Marvin Minsky
History of AI:
Multiple Gartner Cycles & “AI Winters”
5 Paradigms of AI
• Symbolic: inspired by Logic, Philosophy and Linguistics
• Connectionist: inspired by Neuroscience
• Evolutionist: inspired by Evolutionary Biology
• Statistical: inspired by Probability, Statistics and Combinatorics
• Analogical: inspired by Psychology and Mathematics
Source: Pedro Domingos, “The Master Algorithm”, MIT Press, 2015
• Popular from the 50’s to late 80’s
• Focus on Logic (if-then rules, etc.)
• Focus on problem solving
• Limited learning capacity
• Knowledge engineering
Symbolic Approach: Logic and Decision
Source: edu(b)log http://thinkdifferent.typepad.com/edulog/
Symbolic Approach: Expert Systems
Knowledge engineering: ontologies, knowledge representation,
natural language processing, reasoning, decision
Source: Steve Copley, IGCSE ICT https://www.igcseict.info/readme/index.html
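A minimal sketch of the if-then-rule idea behind expert systems, assuming a hand-built knowledge base; the rules and facts below are invented for illustration. Forward chaining keeps firing rules until no new conclusions can be derived.

```python
# Minimal expert-system sketch: forward chaining over if-then rules drawn from
# a hand-built knowledge base. Rules and facts are invented for illustration.
rules = [
    ({"has_fever", "has_cough"}, "flu_suspected"),
    ({"flu_suspected", "short_of_breath"}, "see_doctor"),
]
facts = {"has_fever", "has_cough", "short_of_breath"}

changed = True
while changed:                        # keep applying rules until nothing new is derived
    changed = False
    for conditions, conclusion in rules:
        if conditions <= facts and conclusion not in facts:
            facts.add(conclusion)     # fire the rule, assert its conclusion
            changed = True

print(facts)   # includes 'flu_suspected' and 'see_doctor'
```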
• Intelligent Agents: rational and autonomous, perceive environment, take decisions
based on a specific objective, plan action to achieve it
• Perception: recognition (vision, etc.) of the environment
• Actuation: navigation or manipulation of the environment
One step beyond: Intelligent Agents
Source: Pattie Maes, MIT Media Lab - Artificial Intelligence: A Modern Approach by Stuart Russell and Peter Norvig.
• Decision: Simple reflexes or
complex representation of the
world (internal states) with
reasoning
• Problem representation & state
space
• Actions generate passage from
state A to state B
• Solution -> from existing state to
optimal state
• Exploring state space has
computational cost
Symbolic Approach: Problem Solving
Source: Daniel Valana, Jared Bouchier, Xin Yuan, University of Adelaide Student Wiki.
Blind search: brute force
• Backward search:
starting from the solution
• Backtracking: error / obstacle
-> go to previous step
• Depth-first (LIFO, stack)
vs breadth-first (FIFO, queue)
Heuristic search: based on
knowledge
Symbolic Approach: State Space Search
Source: William H. Wilson, University of New South Wales, http://www.cse.unsw.edu.au/~billw/
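A minimal sketch of blind state-space search on an invented toy graph: the same loop performs depth-first search when the frontier is used as a stack (LIFO) and breadth-first search when it is used as a queue (FIFO); the visited set captures the idea of not re-expanding explored states.

```python
from collections import deque

# Hypothetical toy state space: each state maps to its neighbouring states.
GRAPH = {
    "A": ["B", "C"],
    "B": ["D", "E"],
    "C": ["F"],
    "D": [],
    "E": ["G"],
    "F": ["G"],
    "G": [],
}

def blind_search(start, goal, depth_first=True):
    """Return a path from start to goal, or None if no path exists.
    depth_first=True  -> LIFO frontier (stack, depth-first)
    depth_first=False -> FIFO frontier (queue, breadth-first)."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.pop() if depth_first else frontier.popleft()
        state = path[-1]
        if state == goal:
            return path
        for nxt in GRAPH[state]:
            if nxt not in visited:          # never re-expand explored states
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None

print(blind_search("A", "G", depth_first=True))   # depth-first:   ['A', 'C', 'F', 'G']
print(blind_search("A", "G", depth_first=False))  # breadth-first: ['A', 'B', 'E', 'G']
```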
Natural Language Processing
NLP: Interaction between machines and
human languages, with tasks regarding
syntax, semantics, discourse and speech
Structure: mix of techniques, from
traditional symbolic-linguistic methods to deep
learning
Examples of corporate APIs: Google
Cloud Natural Language API, IBM
Watson, Amazon Lex, Microsoft
Cognitive Services, Facebook's DeepText
Applications: Translation, Chatbots,
Automatic summarization, Antispam,
Information extraction/classification
Source: Natural Language Toolkit http://www.nltk.org/
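A minimal NLP sketch using the NLTK toolkit cited above, assuming NLTK is installed and its tokenizer and tagger models can be downloaded: tokenization and part-of-speech tagging are the kind of syntactic building blocks behind the applications listed on the slide.

```python
# Minimal NLTK sketch (http://www.nltk.org/): tokenisation and part-of-speech tagging.
import nltk

# One-time downloads of the tokeniser and tagger models (assumed available).
nltk.download("punkt", quiet=True)
nltk.download("averaged_perceptron_tagger", quiet=True)

sentence = "AI is eating the world, one spam filter at a time."
tokens = nltk.word_tokenize(sentence)   # syntax: split text into word tokens
tagged = nltk.pos_tag(tokens)           # assign a part-of-speech tag to each token

print(tokens)
print(tagged)  # e.g. [('AI', 'NNP'), ('is', 'VBZ'), ...]
```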
Rules Vs Learning
Learning is the key to intelligence acceleration
?
• Machine learning: a mix of the connectionist,
statistical, genetic and analogical paradigms.
Sits between data science and AI
• Why: it automates automation and accelerates
progress; the human programmer and hand-written
instructions are no longer the bottleneck
• Example: in the game of Go, the number of potential
moves is about 10^170, larger than the number of atoms
in the universe (~10^80)
• Applications:
• Basket analysis in e-commerce: learning associations
with conditional probability
• Credit scoring in finance: learning classifications, e.g. if
income > X AND savings > Y => low risk
• Medical diagnosis: pattern recognition
• Predictions in financial markets
• Bioinformatics
• Games
Machine Learning
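A minimal sketch of “learning classifications” for the credit-scoring example above, using a scikit-learn decision tree; all income and savings values are invented. The point is that the threshold rule is learned from the data rather than hand-coded.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical credit-scoring data: columns are (income, savings),
# label 1 = low risk, 0 = high risk. Values invented for illustration.
X = np.array([[50, 20], [60, 30], [80, 40], [20, 5], [30, 10], [25, 2]])
y = np.array([1, 1, 1, 0, 0, 0])

tree = DecisionTreeClassifier(max_depth=2).fit(X, y)

# The tree learns a threshold rule of the "if income > X AND savings > Y" kind
# directly from the data (on this toy set one feature may already suffice).
print(export_text(tree, feature_names=["income", "savings"]))
print(tree.predict([[70, 35]]))  # -> [1], low risk
```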
Classification Vs Regression Analysis
Classification Regression
Sources: http://www.whatissixsigma.net/ and https://jaxenter.com/machine-learning-an-introduction-for-programmers-122135.html
Classification: separate the data by finding a discrete category / label
Regression: find the coefficients of the line that minimize the distance between
observation points and the prediction line
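A minimal sketch of the two tasks with scikit-learn, on an invented one-feature dataset: linear regression fits the coefficients of a line, while logistic regression separates the data into two discrete classes.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

# Toy data (invented): one feature, e.g. "income".
X = np.array([[1.0], [2.0], [3.0], [4.0], [5.0], [6.0]])

# Regression: predict a continuous value by fitting line coefficients.
y_cont = np.array([1.1, 1.9, 3.2, 3.9, 5.1, 6.0])
reg = LinearRegression().fit(X, y_cont)
print("slope:", reg.coef_[0], "intercept:", reg.intercept_)

# Classification: predict a discrete label (0 = low, 1 = high).
y_label = np.array([0, 0, 0, 1, 1, 1])
clf = LogisticRegression().fit(X, y_label)
print("predicted class for x=2.5:", clf.predict([[2.5]])[0])
```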
Supervised Vs Unsupervised learning
Supervised: learning a class from labeled examples
Unsupervised: finding structure (e.g. clusters) in unlabeled data
Reinforcement learning
Source: Berkeley’s CS 294: Deep Reinforcement Learning by John Schulman & Pieter Abbeel
Example: Gaming, Robotics, Self Driving Cars
Overfitting: a model with low generality, tied too closely to a specific
training set
Solution: fewer variables and a larger training set
Overfitting Vs Underfitting
Source: http://www.turingfinance.com/regression-analysis-using-python-statsmodels-and-quandl/
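A minimal numpy sketch of overfitting, using invented noisy data: a degree-9 polynomial (too many variables) tracks the small training set closely but typically generalises worse to new points than a simple degree-1 fit.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: a noisy straight line (illustrative only).
x_train = np.linspace(0, 1, 12)
y_train = 2 * x_train + 1 + rng.normal(0, 0.2, size=x_train.shape)
x_test = np.linspace(0, 1, 50)
y_test = 2 * x_test + 1 + rng.normal(0, 0.2, size=x_test.shape)

for degree in (1, 9):  # degree 1 = simple model, degree 9 = too many variables
    coeffs = np.polyfit(x_train, y_train, degree)
    train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree}: train MSE {train_err:.3f}, test MSE {test_err:.3f}")

# The degree-9 fit hugs the training points (low train error) but typically
# generalises worse on the test set: overfitting.
```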
Connectionist Approach: Neural Networks
Source: Wikimedia Commons.
• Imitating biological
computation in neurons
• Dealing with complexity
and uncertainty
• Non-symbolic knowledge
representation
• Learning capability
• Parallel computation
Source: Wikibooks
Connectionist Approach: Perceptrons
• Input arrives on the “dendrites”, then the “cell body” computes a weighted sum
• Output: 0 / 1 (yes / no) once a threshold is passed in the “axon”
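A minimal from-scratch perceptron sketch on the (linearly separable) logical AND function: a weighted sum plays the role of the “cell body”, a 0/1 threshold the “axon”, and the weights are adjusted by the perceptron learning rule.

```python
import numpy as np

# Training data: logical AND, a linearly separable toy problem.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])

w = np.zeros(2)   # "dendrite" weights
b = 0.0           # bias
lr = 0.1          # learning rate

for epoch in range(20):
    for xi, target in zip(X, y):
        output = 1 if xi @ w + b > 0 else 0   # weighted sum + threshold ("axon")
        error = target - output
        w += lr * error * xi                  # perceptron learning rule
        b += lr * error

print([1 if xi @ w + b > 0 else 0 for xi in X])  # -> [0, 0, 0, 1]
```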
Connectionist Approach: Classification
Source: Wikimedia Commons
Classification: the perceptron
separates inputs into two classes with
a linear boundary
Boundary: a straight line in 2
dimensions, a plane in 3 dimensions, an
(n-1)-dimensional hyperplane in n dimensions
(n = number of variables)
Learning: including new elements
in the training set increases accuracy
Limits: complex boundaries, e.g. the XOR
function (not linearly separable)
Backpropagation: the actual output is compared
with the expected output and the weights of the
neuron connections are adjusted
Multilayer perceptrons: hidden layers between the
input and output layers
Complexity: classification / learning with
non-linear boundaries
Application examples: speech recognition,
image recognition, machine translation
Gradient descent: follow the slope of the loss function
downhill to find a (local) minimum
Connectionist approach:
Multilayer Perceptrons and Backpropagation
Source: Wikimedia Commons
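A minimal numpy sketch of a multilayer perceptron with one hidden layer, trained by backpropagation and gradient descent on XOR, the function a single perceptron cannot separate. Layer sizes, learning rate and seed are arbitrary choices; convergence may vary with the random initialisation.

```python
import numpy as np

rng = np.random.default_rng(42)

# XOR data: not linearly separable, so a hidden layer is needed.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros((1, 8))   # input  -> hidden
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros((1, 1))   # hidden -> output

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
lr = 1.0

for step in range(10000):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # backward pass: compare actual output with expected output
    d_out = (out - y) * out * (1 - out)       # gradient at the output layer
    d_h = (d_out @ W2.T) * h * (1 - h)        # gradient at the hidden layer
    # gradient descent: step downhill on the loss surface
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

print(out.round(3).ravel())   # typically close to [0, 1, 1, 0]
```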
Deep learning: a multilayer
perceptron with many hidden
layers -> the network is forced
to extract salient
characteristics
Applications: NLP, translation,
vision, speech & audio
recognition, bioinformatics
Pros: learning of abstract
concepts without human
supervision
Cons: non-transparent logic
Deep Learning
Source: http://www.kdnuggets.com/2016/01/seven-steps-deep-learning.html
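A minimal deep-learning sketch, assuming TensorFlow/Keras is installed and using invented random data: stacking several dense layers is the “multiple layers inside” idea; real deep networks add many more layers and convolutional or recurrent structure.

```python
import numpy as np
from tensorflow import keras

# Toy data, invented for illustration: 100 samples with 20 features each.
X = np.random.rand(100, 20)
y = (X.sum(axis=1) > 10).astype(int)   # synthetic binary label

model = keras.Sequential([
    keras.Input(shape=(20,)),
    keras.layers.Dense(32, activation="relu"),
    keras.layers.Dense(16, activation="relu"),    # extra hidden layers = "deep"
    keras.layers.Dense(1, activation="sigmoid"),  # binary output
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=5, batch_size=16, verbose=0)

print(model.evaluate(X, y, verbose=0))  # [loss, accuracy] on the toy data
```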
Human Brain: Operationally
flexible and algorithmically
compact (DNA)
Combination of approaches: in
part learning neural net, in part
specialized regions (visual cortex,
cerebellum)
Hierarchical Order
Reading: “How to Create a Mind”, R. Kurzweil;
“On Intelligence”, J. Hawkins & S. Blakeslee
The Brain Analogy
Source: Wikimedia Commons.
Genetic Algorithms: ideas trace back
to John von Neumann's self-
replicating machines
Structure:
• A population of automata with random variations
• A fitness function to be maximized
• Mutations and / or random
crossover-reproduction
• Selection of a new generation
• Iteration
References: Santa Fe Institute,
Melanie Mitchell
Evolutionist Approach: Genetic Algorithms
Source: Quantdare https://quantdare.com/
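A minimal genetic-algorithm sketch on an invented toy problem (maximise the number of 1s in a bit string), showing the structure listed above: random variation, a fitness function, crossover and mutation, selection of a new generation, iteration.

```python
import random

random.seed(0)
GENES, POP, GENERATIONS, MUTATION = 20, 30, 40, 0.02

def fitness(ind):
    return sum(ind)                      # fitness function to be maximized

# Initial population: random bit strings (automata with random variations).
population = [[random.randint(0, 1) for _ in range(GENES)] for _ in range(POP)]

for gen in range(GENERATIONS):
    # Selection of a new generation: keep the fitter half as parents.
    population.sort(key=fitness, reverse=True)
    parents = population[: POP // 2]
    children = []
    while len(children) < POP - len(parents):
        a, b = random.sample(parents, 2)
        cut = random.randrange(1, GENES)                 # random crossover point
        child = a[:cut] + b[cut:]
        child = [g if random.random() > MUTATION else 1 - g for g in child]  # mutation
        children.append(child)
    population = parents + children     # iterate

print("best fitness:", fitness(max(population, key=fitness)), "of", GENES)
```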
Local Minima problem
Source: Sebastian Raschka Blog, https://sebastianraschka.com/
The algorithm gets stuck in a local optimum and never reaches the global optimum
Dataclism: Explosion in data production, storage and availability
From data to knowledge: New power for statistical techniques:
Statistical Approach: Data Explosion
Source: P Desjardins-Proulx blog, http://phdp.github.io/blog.html
• Bayesian techniques: use probability
theory and Bayes' theorem to
update existing knowledge by
incorporating new data
• P(A) = probability of event A
• P(B) = probability of event B
• P(A|B) = probability of A given that B is true
• P(B|A) = probability of B given that A is true
• Bayes' theorem: P(A|B) = P(B|A) · P(A) / P(B)
• “Degree of belief”: subjective /
theoretical (vs frequentist /
experimental)
• Example: Google’s antispam filter
Statistical Approach: Bayesian Reasoning
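A minimal worked example of Bayes' theorem in the spirit of the antispam filter mentioned above; all probabilities are invented: the prior belief that a message is spam is updated after observing a suspicious word.

```python
# Bayes' theorem on the spam-filter example: update the belief that a message
# is spam after observing the word "free". All probabilities are invented.
p_spam = 0.30                 # P(A): prior probability that a message is spam
p_word_given_spam = 0.60      # P(B|A): "free" appears in spam
p_word_given_ham = 0.05       # "free" appears in legitimate mail

# P(B): total probability of seeing the word at all
p_word = p_word_given_spam * p_spam + p_word_given_ham * (1 - p_spam)

# P(A|B) = P(B|A) * P(A) / P(B)
p_spam_given_word = p_word_given_spam * p_spam / p_word
print(f"P(spam | 'free') = {p_spam_given_word:.2f}")   # ~0.84
```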
Markov chain: a sequence of states with
probabilistic transitions between them
Example: in a sentence, if the word X appears,
there is a probability P that
the next word will be Y
Example: Google's PageRank
Hidden Markov model: hidden states;
operates as a dynamic Bayesian network
Example: Apple's voice recognition
Monte Carlo methods: draw random values from
probability distributions, then compute the
outputs for each set of values
-> explore complex models without complex
closed-form functions
Statistical Approach: Markov Chains
Source: Wikimedia Commons.
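A minimal Markov-chain sketch on an invented toy corpus: estimate the probability of the next word given the current word, then sample a new sequence by following the chain.

```python
import random
from collections import defaultdict

random.seed(1)

# Tiny invented corpus; real models are trained on far larger text collections.
corpus = "ai is eating the world and software is eating the world too".split()

# Empirical transitions: for each word, the list of words observed after it.
transitions = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    transitions[current].append(nxt)

# Walk the chain: repeatedly sample the next state given the current state.
word = "ai"
sentence = [word]
for _ in range(8):
    if word not in transitions:
        break
    word = random.choice(transitions[word])
    sentence.append(word)

print(" ".join(sentence))
```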
Nearest neighbor: a supervised learning
algorithm based on analogy, measured by the
distance from other data points in a
plane / space
Data: a point is classified with the most frequent
label among its k nearest training
samples
Training -> distance measure -> classification
Dimensionality reduction: fundamental for
practical application
Applications: pattern recognition
Analogical Approach:
k-Nearest Neighbor algorithm
Source: Wikimedia Commons.
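A minimal from-scratch k-nearest-neighbour sketch on invented 2-D points: a new point is classified with the most frequent label among its k closest training samples, using Euclidean distance as the distance measure.

```python
import numpy as np
from collections import Counter

# Invented training data: two well-separated groups of 2-D points.
X_train = np.array([[1.0, 1.0], [1.5, 2.0], [2.0, 1.5],   # class 0
                    [6.0, 6.0], [6.5, 7.0], [7.0, 6.5]])  # class 1
y_train = np.array([0, 0, 0, 1, 1, 1])

def knn_predict(x, k=3):
    distances = np.linalg.norm(X_train - x, axis=1)   # distance measure
    nearest = np.argsort(distances)[:k]               # indices of the k nearest samples
    return Counter(y_train[nearest]).most_common(1)[0][0]  # most frequent label

print(knn_predict(np.array([2.0, 2.0])))   # -> 0
print(knn_predict(np.array([6.2, 6.8])))   # -> 1
```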
• Support vector machines:
supervised learning that looks for the
points nearest to the separating boundary
(the support vectors), with several candidate
margins in “competition”
• Objective: maximize the margin, i.e. the
distance from the separating hyperplane
• Kernelization: map the data into a higher-
dimensional space where a wider margin
exists, even if none was present in the
original dimensionality
Analogical Approach: Support Vector Machines
Source: EFDB, http://efavdb.com/ and OpenCV http://docs.opencv.org/2.4/index.html
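A minimal support-vector-machine sketch with scikit-learn on an invented ring-shaped dataset: the two classes are not linearly separable in the original 2-D space, but the RBF kernel (“kernelization”) maps them into a space where a wide-margin separation exists.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Invented ring-shaped data: inner circle = class 0, outer circle = class 1.
angles = rng.uniform(0, 2 * np.pi, 200)
inner = np.c_[np.cos(angles[:100]), np.sin(angles[:100])] * 1.0
outer = np.c_[np.cos(angles[100:]), np.sin(angles[100:])] * 3.0
X = np.vstack([inner, outer]) + rng.normal(0, 0.1, (200, 2))
y = np.array([0] * 100 + [1] * 100)

clf = SVC(kernel="rbf", C=1.0).fit(X, y)   # maximize the margin in kernel space
print("training accuracy:", clf.score(X, y))
print("support vectors per class:", clf.n_support_)
```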
K-Means: clustering of unlabeled / unstructured data
K = number of clusters to be found
Centroid: middle point of a cluster
Set-up: choose initial centroids and assign the data points
Calculation: reposition the centroids and iterate until a threshold is reached
Example: Face recognition
Unsupervised Learning: K-Means
Source: http://iancat.tistory.com/6
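A minimal k-means sketch with scikit-learn on invented 2-D points: K is the number of clusters, and the library handles the centroid initialisation and the reposition-and-iterate loop described above.

```python
import numpy as np
from sklearn.cluster import KMeans

# Invented unlabeled data: three visually obvious groups of points.
X = np.array([[1.0, 1.0], [1.2, 0.8], [0.8, 1.1],    # group around (1, 1)
              [5.0, 5.0], [5.2, 4.8], [4.9, 5.3],    # group around (5, 5)
              [9.0, 1.0], [8.8, 1.2], [9.1, 0.9]])   # group around (9, 1)

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print("labels:   ", kmeans.labels_)                   # cluster assignment per point
print("centroids:", kmeans.cluster_centers_.round(2)) # final centroid positions
```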
Problem: if the training set contains prejudices, the output will be prejudiced
Example: word associations with minorities
Solution: equal opportunity by design?
Amplifying Prejudice ?
Source: https://factordaily.com/dangers-of-artificial-intelligence/
Man-machine integration
Intelligence
Analysis
Policy / Etiquette
Enforcement
Automatic screening followed by human judgement
Extended Moore’s Law
Intelligence Explosion
Intelligence Explosion
• Vernor Vinge
• Hans Moravec
• Nick Bostrom
• Eliezer S. Yudkowsky
• Ben Goertzel
• Ray Kurzweil
Speculation on Super-human AI
“The term “Singularity” in my book is comparable to the use of this term by the physics community. Just as
we find it hard to see beyond the event horizon of a black hole, we also find it difficult to see beyond the
event horizon of the historical Singularity. How can we …. imagine what our future civilization, with its
intelligence multiplied trillions-fold, be capable of thinking and doing?” (Ray Kurzweil)
Maybe the Singularity is not near …...
Or maybe we should fear Roko's Basilisk!
“Roko's basilisk is a thought experiment about the potential risks involved in developing artificial
intelligence. The premise is that an all-powerful artificial intelligence from the future could retroactively
punish those who did not help bring about its existence, including those who merely knew about the
possible development of such a being. It resembles a futurist version of Pascal's wager.”
(Source: RationalWiki)
Ethics and “Friendly” AI
China: Rising innovation performance
China: Rising innovation performance
Quantum machine learning
Google / NASA Quantum AI Lab
Online courses
Machine Learning (Andrew Ng)
Zero to Deep Learning™ with Python and Keras (Francesco Mosconi)
Neural networks for hackers (Sakunthala Panditharatne)
Books
rmauro@post.harvard.edu
raffa.mauro@gmail.com
Thank you !
Raffaele Mauro, Ph.D.
Raffaele Mauro is passionate about technology, policy and global finance.
Now Managing Director at Endeavor Italy, he is focused on high-impact entrepreneurship and venture capital,
providing companies access to smart capital, talent and markets.
Previously he was Head of Finance for Innovation & Entrepreneurship at Intesa Sanpaolo and worked at venture
capital funds such as United Ventures (formerly Annapurna Ventures), P101 and OltreVenture.
Raffaele is a Kauffman Fellow, holds an MPA from Harvard University and a Ph.D. from Bocconi, and is an alumnus of
the Singularity University Graduate Studies Program at NASA Ames.
Raffaele co-authored the book “Hacking Finance”, an essay on Bitcoin, blockchain and cryptocurrencies, and was an
invited speaker at EY EMEIA Accelerate, Wired Money and the Bundesbank. He has invested in and advised several
companies, including Multiply Labs (YC 2016).
Raffaele is also Junior Fellow at the Aspen Institute, member of the Young Leaders group of the US-Italy Council,
member of the “Young European Leaders – 40 under 40” cohort of 2011, member of the scientific committee at
Blockchainlab.it and member of the executive committee at the Global Shapers Hub - Milano, a World Economic
Forum community.
Twitter: @rafr